Don’t Call It A WOC: Infineta Systems

When I was invited along with the other NFD3 delegates to take a look at Infineta, I was prepared for a snooze-fest. After all, WOCs (WAN Optimization Controllers) have pretty much been played out.

Turns out though, Infineta isn’t a WOC. At least not as we know it.

If you’re not familiar with WAN Optimization Controllers, they’re deployed as a pair of endpoints on either side of a WAN link, and between them they do the following to increase effective bandwidth and decrease latency (there’s a quick sketch of the compression piece right after the list):

  • TCP Flow Optimization (TFO): Things like adjusting the TCP windows, SACK, etc.
  • LZ Compression: Compressing data on the fly
  • Deduplication
  • Caching (Usually SMB/CIFS)
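
Here’s a minimal sketch of the second item, “LZ compression on the fly,” using Python’s zlib (DEFLATE is LZ77 plus Huffman coding). A real WOC does this per-flow in optimized code or hardware; the payload and level here are just made up to show the streaming idea.

```python
import zlib

def compress_stream(chunks):
    """Compress an iterable of byte chunks as one continuous stream."""
    co = zlib.compressobj(6)  # moderate compression level
    for chunk in chunks:
        out = co.compress(chunk)
        if out:
            yield out
    yield co.flush()  # emit whatever is still buffered

# WAN traffic is often highly redundant, which is why this works so well.
payload = [b"GET /quarterly-model.xlsx HTTP/1.1\r\n"] * 50
compressed = b"".join(compress_stream(payload))
print(sum(len(c) for c in payload), "bytes in,", len(compressed), "bytes out")
```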

The WAN Optimization market has had its ups and downs. Around 2006/7, it seemed few things were hotter in the networking world. Riverbed, F5, Blue Coat, and even Cisco were pushing the technology heavily.

In 2007 I got to play with a few of the products out there while evaluating solutions for a financial company. F5’s WANJets were stink-out-loud bad, which was strange from a company that (deservedly) dominates the load balancing market. Cisco had an offering that was pretty good (surprised me) and performed extremely well in our tests. Riverbed performed extremely well and was easier than Cisco to manage.

Vendors were furiously trying to get mobile clients out the door as well, so remote users could have the benefit of all four of the optimizations (TFO, compression, deduplication, caching) on their personal VPN connections.

But this was 2007/2008, so the game has changed since then. Cisco doesn’t seem to be as focused on WAAS as they were. Riverbed is the F5 of the WOC world, very specialized and executing exceptionally well. F5 dropped out, and integrated some of the functionality into the LTMs (BIG-IPs) and didn’t keep the WANJet name, which is good, because it had become a four letter word.

The biggest feature the organizations I was involved with were after wasn’t necessarily TFO, compression, or de-dupe. It was what is commonly referred to as CIFS (it’s not CIFS, it’s SMB) caching. The issue is that SMB/CIFS is a chatty protocol. Opening up a single 2 Mbyte file involves a lot of back-and-forths, and it’s all done serially (no parallelism). Opening up a 2 Mbyte Word file over 100 ms of latency and a T1 could take minutes (back, forth, back, forth, back, forth, etc.)
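
The arithmetic behind that “minutes” claim is worth seeing. This back-of-the-envelope sketch assumes (hypothetically) 4 KB returned per serial SMB request; real SMB1 read sizes varied by client, so treat the exact numbers as illustrative:

```python
FILE_BYTES = 2 * 1024 * 1024   # the 2 Mbyte Word file
READ_BYTES = 4 * 1024          # assumed bytes per request/response pair
RTT_SEC    = 0.100             # 100 ms of latency
T1_BPS     = 1_544_000         # T1 line rate in bits per second

round_trips  = FILE_BYTES / READ_BYTES        # 512 serial back-and-forths
latency_cost = round_trips * RTT_SEC          # ~51 s burned waiting on RTTs
wire_cost    = FILE_BYTES * 8 / T1_BPS        # ~11 s of serialization delay
print(f"{round_trips:.0f} round trips, ~{latency_cost + wire_cost:.0f} s total")
# -> 512 round trips, ~62 s total, before the opens, locks, and directory
#    lookups that push a real transfer well into the minutes.
```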

WOCs solved this issue by implementing SMB caching, allowing you to open a file on an SMB share in seconds instead of minutes, even on a congested link. Banks in particular ate this up like candy (a terrifying amount of financial modeling isn’t done on mainframes, super computers, or AI cores, it’s done on Excel spreadsheets).

Oh yeah, no worries. I’m just a 600 teraflop super computer. But hey, if you’re more comfortable putting the fate of capitalism in the hands of Clippy, go for it. Dick.

But this need has waned. Microsoft introduced BranchCache, which is part of the new version of SMB (formerly CIFS) in Windows Server 2008 R2. An even newer version of SMB, called SMB v3 (which has Stephen Foskett excited), adds more BranchCache abilities.

It looks like you’re trying to implode the world economy

Without the need for SMB caching, WOCs (which are comparatively expensive) look a lot less attractive to a lot of people. Which leads me to the data center.

Infineta is a WOC, although not really. At least, it’s not a WOC as we know it. Their bag is purely a data center to data center play. They do no SMB caching. They don’t try to put their box in every branch office.

What’s more, it’s not x86, which I found the most interesting. All the WOCs I know are x86-based or virtual machines running on x86 platforms, with most of the heavy lifting done by x86 CPU cycles (with some hardware offloading for compression, TOE, etc.).

Instead, the box is full of the same components you’d find in a data center switch: some merchant switch silicon, and some FPGAs (Field Programmable Gate Arrays, essentially programmable ASICs) that are programmed with the Infineta secret sauce.

But they had a pretty compelling case, going after data center to data center transfers. It’s a smart play I think, given the unyielding desire for all things multi-data center. It’s a tough nut to crack, and when I was designing infrastructure in the late ’90s and early 2000s, every company wanted to have multiple data centers. Few actually went through with it though, because of the expense and complexity of keeping two sites in sync.

They do dedupe and compression, although they do it through specialized silicon rather than x86 horsepower. We got a PhD-level explanation of the concepts (which you can see here) which almost imploded my brain.

It got a little intense
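
For flavor, here’s a toy version of the dedupe idea: fingerprint each chunk of the stream, and send a short reference whenever the far side has already seen those bytes. To be clear, this is not Infineta’s algorithm (theirs lives in FPGAs and is far more clever); the chunk size and hashing here are just assumptions for illustration.

```python
import hashlib

CHUNK = 256  # toy fixed chunk size; real systems use larger/variable chunks

def dedupe(data, seen):
    """Yield ('ref', digest) for known chunks, ('raw', bytes) for new ones."""
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            yield ("ref", digest)   # send 32 bytes instead of 256
        else:
            seen.add(digest)
            yield ("raw", chunk)    # first sighting goes across in full

seen = set()                        # both ends keep this dictionary in sync
stream = b"A" * 1024 + b"B" * 256   # heavily repetitive traffic
tokens = list(dedupe(stream, seen))
refs = sum(1 for kind, _ in tokens if kind == "ref")
print(f"{len(tokens)} chunks, {refs} replaced by references")
```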

Then I distracted myself (and Mrs. Y) by going over the other thing that Infineta (and other WOCs) do, which is TCP Flow Optimization.

Something that we often forget (myself included) is that a single TCP flow has limits on throughput, based on a number of factors, including window size, latency, segment/packet loss, etc. WOCs in general analyze the latency, tune their TCP parameters accordingly, and then act as TCP proxies for clients on either end. Infineta takes this to an extreme by being able to push tens of gigs.
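
The window/RTT ceiling is the big one: a single flow can never move data faster than its window size divided by the round-trip time, no matter how fat the pipe is. A quick illustration (the RTT and window sizes are just example values):

```python
def max_throughput_bps(window_bytes, rtt_sec):
    """Upper bound on a single TCP flow: window / RTT."""
    return window_bytes * 8 / rtt_sec

for window in (64 * 1024, 1024 * 1024, 16 * 1024 * 1024):
    bps = max_throughput_bps(window, 0.100)   # 100 ms RTT
    print(f"{window // 1024:>6} KB window -> {bps / 1e6:8.1f} Mbps")
# 64 KB (classic, no window scaling) ->    5.2 Mbps
# 1 MB                               ->   83.9 Mbps
# 16 MB                              -> 1342.2 Mbps
# ...which is why pushing tens of gigs through one pair of boxes takes
# aggressive window tuning, proxying, and loss management.
```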

It’s an interesting play, and a good market to go after methinks. I’ve yet to talk to end users that have used it, so don’t take this as a product endorsement. I am, however, optimistically curious.

Disclaimer: I was graciously invited by Stephen Foskett and Greg Ferro to be a delegate at Networking Field Day 3. My time was unpaid. The vendors who presented indirectly paid for my meals and lodging, but otherwise the vendors did not give me any compensation for my time, and there was no expectation (specific or implied) that I write about them, or write positively about them. I’ve even called a presenter’s product shit, because it was. Also, I wrote this blog post mostly in Aruba while drunk.

2 Responses to Don’t Call It A WOC: Infineta Systems

  1. Peter Foppen says:

    Riverbed doesn’t only do CIFS anymore. On performance, Infineta is hard to beat: Riverbed does 1 Gbps of optimized throughput with appliances that use SSDs, and LAN-side performance is much higher. You can use a load balancer (the Interceptor) to bundle appliances up to 40 Gbps of optimized throughput. I don’t know anything about Infineta’s pricing, but I’d guess it can achieve these throughputs at a lower cost than Riverbed.

    Riverbed optimizes iSCSI traffic now too (the Granite product does this). Storage traffic and WAN traffic have different traffic mixes, so they use different boxes to optimize each; the Granite product also has more hard disks. For branch offices they have a single box (the EX series).

    You can run not only physical boxes and VMs; they also have a solution in the Amazon AWS cloud, so you can optimize traffic between branch offices and Amazon AWS.

    They also have a partnership with Akamai. Akamai has thousands of servers doing content delivery for companies like Microsoft and Apple; when you download a song from iTunes, Akamai’s content delivery network ensures you get your download.
    When you use MS Office 365, Google Apps, or Salesforce.com, Riverbed and Akamai provide an optimized path to the edge of Microsoft’s, Google’s, or Salesforce.com’s networks.

    You can use compression, TFO, and dedupe techniques not only in a synchronous WAN optimization solution; Riverbed now uses them in an asynchronous solution too. With Whitewater you have an appliance that you can point your backup solution at.
    Whitewater does encryption, compression, deduplication, and TFO-style TCP optimization, and stores the data in the cloud (e.g. Amazon S3 or the MS Azure service).

    If the Whitewater appliance dies, swap the hardware, install the PKI certificate you have stored in your vault, and you’re up and running again.

    Riverbed is not about WOCs anymore…

  2. Pingback: Networking Field Day 3: The Links
