Fibre Channel: What Is It Good For?

In my last article, I talked about how Fibre Channel, as a technology, has probably peaked. It’s not dead, but I think we’re seeing the beginning of a slow decline. Fibre Channel’s long goodbye is caused by a number of factors (that mostly aren’t related to Fibre Channel itself), including explosive growth in non-block storage, scale-out storage, and interoperability issues.

But rather than diss Fibre Channel, in this article I’m going to talk about the advantages Fibre Channel has over IP/Ethernet storage (and why the often-talked-about advantages aren’t really advantages).

Fibre Channel’s benefits have nothing to do with buffer-to-buffer credits, the larger maximum payload (2048 bytes, versus Ethernet’s default 1500), its speed, or even its lossless nature. Instead, Fibre Channel’s (very legitimate) advantages are mostly non-technical in nature.

It’s Optimized Out of the Box

When you build a Fibre Channel-based SAN, there’s no optimization that needs to be done: Fibre Channel comes out of the box optimized for storage (SCSI) traffic. There are settings you can tweak, but most of the time there’s nothing to do other than set port modes and set up zoning. The same is true for the host HBAs. While there are some knobs you can tweak, for the most part the default settings will get you a highly performant storage network.

It’s possible to build an Ethernet network that performs just as well as a Fibre Channel network; it just typically takes more work. You might need to tune the MTU (jumbo frames), tune TCP settings, tweak flow control, or make several other adjustments. And you need someone who knows what all the little nerd-knobs do on IP/Ethernet networks. With Fibre Channel it’s fire and forget.
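To give a flavor of the kind of checking IP storage tends to require, here’s a minimal sketch that audits a Linux host’s jumbo-frame and TCP buffer settings. The interface names and the 9000-byte MTU target are assumptions for illustration, not a recommendation for every environment.

```python
#!/usr/bin/env python3
"""Sanity-check a Linux host's NIC/TCP settings for IP storage traffic.

A minimal sketch: the interface names and the 9000-byte jumbo MTU target
are hypothetical and will vary per environment.
"""
from pathlib import Path

STORAGE_IFACES = ["eth2", "eth3"]   # hypothetical storage-facing NICs
TARGET_MTU = 9000                   # common jumbo-frame setting for iSCSI/NFS

def read(path):
    return Path(path).read_text().strip()

for iface in STORAGE_IFACES:
    mtu = int(read(f"/sys/class/net/{iface}/mtu"))
    status = "OK" if mtu == TARGET_MTU else f"expected {TARGET_MTU}"
    print(f"{iface}: MTU {mtu} ({status})")

# TCP buffer sizes (min/default/max, in bytes) the kernel will use
print("tcp_rmem:", read("/proc/sys/net/ipv4/tcp_rmem"))
print("tcp_wmem:", read("/proc/sys/net/ipv4/tcp_wmem"))
```

None of this is hard, but it’s the sort of homework Fibre Channel simply never asks you to do.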

It’s an Air-Gapped Network

From host to storage array, Fibre Channel is an air-gapped network, in that storage traffic and non-storage traffic run on completely separate networks. Fibre Channel’s nearly exclusive payload is SCSI, and SCSI as a protocol is far more fragile than other protocols, so running it on a separate network makes sense operationally.

Think about it: if you unplug an Ethernet cable for 5 seconds while you’re watching a YouTube video of cats, then plug it back in, you might see some buffering (and you might not, depending on how much it pre-fetched). If you unplug your hard drive for 5 seconds, well, buffering is going to be the least of your worries.

SCSI is more fragile, so having it on a separate network makes sense.

You’ve Got One Job

Ethernet’s strength is that it is supremely flexible. You can run storage traffic on it, video traffic, voice traffic, animated GIFs of cats, etc. You can run iSCSI, HTTP, SMTP, etc. You can run TCP, UDP, IPv4, IPv6, etc. This flexibility does add a bit of complexity to configuring Ethernet/IP networks, however, in the form of tweaking (QoS, flow control, etc.).

Fibre Channel’s strength is that you’re just doing one type of traffic: SCSI (though there is talk of NVMe over Fibre Channel now). Either way, it’s block storage, and that’s all you’re ever going to run on Fibre Channel. This particular characteristic is one of the reasons that Fibre Channel is optimized out of the box.

Slow To Change

In IT, we’ve usually been pretty terrified of change, both in terms of the technology that we’re familiar with and (more specifically) topological or configuration changes. With DevOps/Agile/whateveryouwanttocallit, the latter is changing. But not with Fibre Channel. Fibre Channel configurations are fairly static, and for traditional IT operations, that means a very stable setup. This goes along with the air-gapped network, in that we tend to be much more careful with SCSI traffic.

Double Your SAN

Fibre Channel has a rather unique solution to network redundancy: build two completely separate networks, SAN A and SAN B. Fibre Channel’s job is to provide two independent data paths from the initiator to the target.

[Image: SAN A / SAN B diagram, from my article Fibre Channel and Ethernet. Also the greatest SAN diagram ever made.]

Most of the redundancy in Fibre Channel is instead provided by the host’s drivers (multi-path driver, or MPIO) and in some cases, the storage array’s controller. Network redundancy, beyond having two separate networks, is not required and often not implemented (though available). While Ethernet/IP networks mesh the hell out of everything, in Fibre Channel it’s strictly forbidden to interconnect the A and B fabrics in any way.

A/B network separation wouldn’t work on a global scale of course, but Fibre Channel wasn’t meant to run a global network: just a local SAN. As a result, it’s a simple (and effective) way to handle redundancy. Plus, it puts the onus on the host and storage arrays, not us SAN administrators. Our responsibility is simple and clear: two independent data paths.

Centralized Management

Another advantage is Fibre Channel’s centralized configuration for zoning and zonesets. You create multiple zones, create a zoneset, and voila, that configuration is automatically pushed out to the other switches in the fabric. That saves a lot of time (and configuration errors) by having one connectivity configuration (zone configurations control which initiators can talk to which targets) shared among the switches in a given fabric.
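Because the whole fabric shares one zoning configuration, it’s also easy to generate. Here’s a minimal sketch that emits Cisco MDS-style zoning commands for single-initiator zones; the WWPNs, zone names, and VSAN number are made up, and the output is meant to be reviewed before it ever touches a switch.

```python
"""Generate single-initiator zones for a Cisco MDS-style fabric.

A minimal sketch: the WWPNs, zone names, and VSAN number are hypothetical.
"""
VSAN = 10
TARGETS = ["50:06:01:60:3e:a0:11:22"]                    # array front-end ports
INITIATORS = {"esx01_hba0": "20:00:00:25:b5:aa:00:01",   # host HBAs
              "esx02_hba0": "20:00:00:25:b5:aa:00:02"}

zones = []
for host, init_wwpn in INITIATORS.items():
    for i, tgt_wwpn in enumerate(TARGETS):
        zone = f"z_{host}_tgt{i}"
        zones.append(zone)
        print(f"zone name {zone} vsan {VSAN}")
        print(f"  member pwwn {init_wwpn}")
        print(f"  member pwwn {tgt_wwpn}")

print(f"zoneset name zs_fabric_a vsan {VSAN}")
for zone in zones:
    print(f"  member {zone}")
print(f"zoneset activate name zs_fabric_a vsan {VSAN}")
```

Paste the result into one switch, activate the zoneset, and the fabric services distribute it to every other switch in that fabric.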

In fact, Fibre Channel provides a whole host of fabric services (name, configuration, etc.) that make management of a SAN easy, even if you’re using the CLI. Both Cisco and Brocade have GUI tools if that’s your thing too (I won’t laugh derisively at you, I promise).

In Ethernet/IP networks, each network device is usually a configuration point itself. As a result, we tend not to use IP access lists for iSCSI or NFS security, instead relying on security mechanisms on the hosts and storage arrays. That’s changing with policy-based Ethernet fabrics (such as Cisco ACI) but for the most part, configuring a storage network based on IP/Ethernet is a bit more of a configuration burden.

What Aren’t Fibre Channel’s Strengths

Having said all that, there are a few things that I see people point to as strengths of Fibre Channel that aren’t really strengths, in that they don’t provide material benefit over other technologies.

Buffer-to-buffer credits are one of those features. Buffer-to-buffer credits allow for a lossless fabric by preventing frame drops on a port-by-port basis. But buffer-to-buffer credits aren’t the only way to provide losslessness. iSCSI provides lossless transport by re-transmitting any lost segments. Converged Ethernet (CE) provides losslessness with PFC (priority flow control), sending PAUSE frames to prevent buffer overruns. Both TCP and CE provide the same effect as buffer-to-buffer credits: lossless transport.

So if losslessness is your goal, then there’s more than one way to handle that.

Whether it’s re-transmitting TCP segments, PAUSE frames, or buffer-to-buffer credits, congestion is congestion. If you try to push 16 Gigabits through an 8 Gigabit link, something has to give.

The only way a buffer can be overfilled is if there’s congestion. Buffer-to-buffer credits don’t eliminate congestion; they’re just a specific way of dealing with it. Congestion is congestion, and the only solution is more bandwidth.
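A toy model makes the point. Offer 16 Gb/s to an 8 Gb/s link for one second: whether the excess is dropped and retransmitted (TCP), paused (PFC), or held back by buffer-to-buffer credits, the same backlog piles up. The numbers are purely illustrative.

```python
"""Toy model: offer 16 Gb/s to an 8 Gb/s link for one second."""
OFFERED_GBPS = 16
LINK_GBPS = 8
DURATION_S = 1.0
TICK_S = 0.001

backlog_gbit = 0.0
for _ in range(int(DURATION_S / TICK_S)):
    backlog_gbit += OFFERED_GBPS * TICK_S   # traffic arriving this tick
    backlog_gbit -= LINK_GBPS * TICK_S      # traffic the link can drain
    backlog_gbit = max(backlog_gbit, 0.0)

print(f"Backlog after {DURATION_S}s: {backlog_gbit:.1f} Gbit "
      f"({backlog_gbit / 8:.1f} GB queued, paused, or waiting to be resent)")
```

That’s a gigabyte of data somewhere that isn’t where it needs to be, no matter which lossless mechanism is holding it.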

[Image: I’ve got congestion, and the only cure is more bandwidth.]

Buffer-to-buffer credits, gigantic buffers, flow control: none of these fix bandwidth issues. If you’re starved of bandwidth, add more bandwidth.

While I think the future of storage will be one without Fibre Channel, for traditional workloads (read VMware vSphere), there is no better storage technology in most cases than Fibre Channel. Its strength is not in its underlying technology or engineering, but in its single-minded purpose and simplicity. Most of Fibre Channel’s benefits aren’t even technological: Instead they’re more of a “Layer 8” benefit. And these are the reasons why Fibre Channel, thus far, has been so successful (and nice to work with).

Peak Fibre Channel

There have been several articles talking about the death of Fibre Channel. This isn’t one of them. However, it is an article about “peak Fibre Channel”. I think, as a technology, Fibre Channel is in the process of (if it hasn’t already) peaking.

There’s a lot of technology in IT that doesn’t simply die. Instead, it grows, peaks, then slowly (or perhaps very slowly) fades. Consider Unix/RISC. The Unix/RISC market right now is a caretaker platform. Very few new projects are built on Unix/RISC. Typically a new Unix server is purchased to replace an existing but no-longer-supported Unix server to run an older application that we can’t or won’t move onto a more modern platform. The Unix market has been shrinking for over a decade (2004 was probably the year of Peak Unix), yet the market is still a multi-billion dollar revenue market. It’s just a (slowly) shrinking one.

I think that is what is happening to Fibre Channel, and it may have already started. It will become (or already is) a caretaker platform. It will run the workloads of yesterday (or rather the workloads that were designed yesterday), while the workloads of today and tomorrow have a vastly different set of requirements, and where Fibre Channel doesn’t make as much sense.

Why Fibre Channel Doesn’t Make Sense in the Cloud World

There are a few trends in storage that are working against Fibre Channel:

  • Public cloud growth outpaces private cloud
  • Private cloud storage endpoints are more ephemeral and storage connectivity is more dynamic
  • Block storage is taking a back seat to object (and file) storage
  • RAIN versus RAID
  • IP storage is as performant as Fibre Channel, and more flexible

Cloudy With A Chance of Obsolescence

The transition to cloud-style operations isn’t great for Fibre Channel. First, we have the public cloud providers: Amazon AWS, Microsoft Azure, Rackspace, Google, etc. They tend not to use much Fibre Channel (if any at all), relying instead on IP-based storage or other solutions. And whatever Fibre Channel they do consume, it still means far fewer ports purchased (HBAs, switches) as workloads migrate to the public cloud instead of private data centers.

The Ephemeral Data Center

In enterprise data centers, most operations are what I would call traditional virtualization, and that is dominated by VMware’s vSphere. However, vSphere isn’t a private cloud. According to NIST, to be a private cloud you need to be self-service, multi-tenant, programmable, dynamic, and show usage. That ain’t vSphere.

For VMware’s vSphere, I believe Fibre Channel is hands down the best storage platform. vSphere likes very static block storage, and Fibre Channel is great at providing that. Everything is configured by IT staff; a few things are automated, but Fibre Channel configurations are still done mostly by hand.

Probably the biggest difference between traditional virtualization (i.e. VMware vSphere) and private cloud is the self-service aspect. Developers, DevOps engineers, and other consumers of IT resources spin up and spin down their own resources, which leads to a very, very dynamic environment.

[Image: Endpoints are far more ephemeral, as demonstrated here by Mr Mittens.]

Where we used to deal with virtual machines as everlasting constructs (pets), we’re moving to a more ephemeral model (cattle). In Netflix’s infrastructure, the average lifespan of a virtual machine is 36 hours. And compared to virtual machines, containers (such as Docker containers) tend to live for even shorter periods of time. All of this means a very dynamic environment, and that requires self-service portals and automation.

And one thing we’re not used to in the Fibre Channel world is a dynamic environment.

[GIF: A SAN administrator at the thought of automated zoning and zonesets.]

Virtual machines will need to attach to block storage on the fly, or they’ll rely on other types of storage, such as container images retrieved from an object store and run on a local file system. For these reasons, Fibre Channel is not usually a consideration for Docker, OpenStack (though there is work on Fibre Channel integration), or other very dynamic, ephemeral workloads.

Objectification

Block storage isn’t growing, at least not at the pace that object storage is. Object storage is becoming the de facto way to store the deluge of unstructured data being generated. Object storage consumption is growing at 25% per year according to IDC, while traditional RAID revenues seem to be contracting.

Making it RAIN


In order to handle the immense scale necessary, storage is moving from RAID to RAIN. RAID is of course Redundant Array of Inexpensive Disks, and RAIN is Redundant Array of Inexpensive Nodes. RAID-based storage typically relies on controllers and shelves. This is a scale-up style approach. RAIN is a scale-out approach.

For these huge-scale storage requirements, RAIN-based systems such as Hadoop’s HDFS, Ceph, Swift, and ScaleIO handle the exponential increase in storage requirements better than traditional scale-up storage arrays. These technologies primarily use IP/Ethernet, not Fibre Channel, for node-to-node and node-to-client communication. Fibre Channel is great for many-to-one communication (many initiators to a few storage arrays) but is not great at many-to-many meshing.

Ethernet and Fibre Channel

It’s been widely held in many circles that Fibre Channel is a higher-performance protocol than, say, iSCSI. That was probably true in the days of 1 Gigabit Ethernet, but these days there’s not much of a difference between IP storage and Fibre Channel in terms of latency and IOPS. Provided you don’t saturate the link (neither eliminates congestion issues when you oversaturate a link), they’re about the same, as shown in several tests such as this one from NetApp and VMware.

Fibre Channel is currently at a 16 Gigabit per second maximum. Ethernet is at 10, 40, and 100 Gigabit, though most server connections are currently 10 Gigabit, with some storage arrays at 40 Gigabit. In 2016, Fibre Channel is coming out with 32 Gigabit Fibre Channel HBAs and switches, and Ethernet is coming out with 25 Gigabit Ethernet interfaces and switches. They both provide nearly identical throughput.

Wait, what?

But isn’t 32 Gigabit Fibre Channel faster than 25 Gigabit Ethernet? Yes, but barely.

  • 25 Gigabit Ethernet raw throughput: 3125 MB/s
  • 32 Gigabit Fibre Channel raw throughput: 3200 MB/s

Do what now?

32 Gigabit Fibre Channel isn’t really 32 Gigabit Fibre Channel. It actually runs at about 28 Gigabits per second. This is a holdover from the 8b/10b encoding in 1/2/4/8 Gigabit FC, where every gigabit of nominal speed brought 100 MB/s of throughput (instead of 125 MB/s, as in 1 Gigabit Ethernet). When FC switched to 64b/66b encoding for 16 Gigabit FC, they kept the 100 MB/s per gigabit and lowered the actual line rate accordingly, so 16 Gigabit FC really runs at about 14 Gigabits per second, and 32 Gigabit FC at about 28. I outlined this concept in a screencast I did a while back.

As a result, 32 Gigabit Fibre Channel is only about 2% faster than 25 Gigabit Ethernet. 128 Gigabit Fibre Channel (12800 MB/s) is only 2% faster than 100 Gigabit Ethernet (12500 MB/s).
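If you want to reproduce that math, here’s a small worked calculation. The Fibre Channel figures follow the “100 MB/s per nominal gigabit” convention described above, and the Ethernet figures are simply line rate divided by 8; both are raw numbers, before protocol overhead on either side.

```python
"""Reproduce the raw-throughput comparison from the text."""

def fc_mbps(nominal_gbit):          # e.g. 16GFC, 32GFC, 128GFC
    return nominal_gbit * 100       # MB/s, by FC convention

def eth_mbps(line_rate_gbit):       # e.g. 25GbE, 100GbE
    return line_rate_gbit * 1000 / 8

for fc, eth in [(32, 25), (128, 100)]:
    f, e = fc_mbps(fc), eth_mbps(eth)
    print(f"{fc}GFC: {f:.0f} MB/s vs {eth}GbE: {e:.0f} MB/s "
          f"-> FC is {100 * (f - e) / e:.1f}% faster")
```

Run it and both comparisons come out to roughly a 2% edge for Fibre Channel, which is the whole point: the headline numbers look different, the throughput barely is.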

Ethernet/IP Is More Flexible

In the world of bare metal server to storage array, and virtualization hosts to storage array, Fibre Channel had a lot of advantages over Ethernet/IP. These advantages included a fairly easy to learn distributed access control system, a purpose-built network designed exclusively to carry storage traffic, and a separately operated fabric.  But those advantages are turning into disadvantages in a more dynamic and scaled-out environment.

In terms of scaling, Fibre Channel has limits on how big a fabric can get. Typically it’s around 50 switches and a couple thousand endpoints. The theoretical maximums are higher (based on the 24-bit FC_ID address space), but both Brocade and Cisco have practical limits that are much lower. For the current (or past) generations of workloads, this wasn’t a big deal; endpoints typically numbered in the dozens, or possibly hundreds for the larger deployments. In a large OpenStack deployment, though, it’s not unusual to have tens of thousands of virtual machines, and if those virtual machines need access to block storage, Fibre Channel probably isn’t the best choice. It’s going to be iSCSI or NFS. Plus, you can run it all on a good Ethernet fabric, so why spend money on extra Fibre Channel switches when you can run it all on IP? And IP/Ethernet fabrics scale far beyond Fibre Channel fabrics.

Another issue is that Fibre Channel doesn’t play well with others. There are only two vendors that make Fibre Channel switches today, Cisco and Brocade (if you have a Fibre Channel switch that says another vendor made it, such as IBM, it’s actually a re-badged Brocade). There are ways around it in some cases (NPIV), though you still can’t reliably mesh fabrics from two different vendors.

[Image: Pictured: Fibre Channel Interoperability Mode.]

And personally, one of my biggest pet peeves with Fibre Channel is the inability to create a LAG to a host. There’s no way to bond several links together to a host. It’s all individual links, which requires special configuration to make a storage array with many interfaces utilize them all (essentially you zone certain hosts to certain array ports).

None of these are issues with Ethernet. Ethernet vendors (for the most part) play well with others. You can build an Ethernet Layer 2 or Layer 3 fabric with multiple vendors, there are plenty of vendors that make a variety of Ethernet switches, and you can easily create a LAG/MCLAG to a host.

[Image: My name is MCLAG and my flows be distributed by a deterministic hash of a header value or combination of header values.]

What About FCoE?

FCoE will share the fate of Fibre Channel. It has the same scaling, multi-node communication, multi-vendor interoperability, and dynamism problems as native Fibre Channel. Multi-hop FCoE never really caught on, as it didn’t end up being less expensive than Fibre Channel, and it tended to complicate operations, not simplify them. Single-hop/End-host FCoE, like the type used in Cisco’s popular UCS server system, will continue to be used in environments where blades need Fibre Channel connectivity. But again, I think that need has peaked, or will peak shortly.

Fibre Channel isn’t going anywhere anytime soon, just like Unix servers can still be found in many datacenters. But I think we’ve just about hit the peak. The workload requirements have shifted. It’s my belief that for the current/older generation of workloads (bare metal, traditional/pet virtualization), Fibre Channel is the best platform. But as we transition to the next generation of platforms and applications, the needs have changed and they don’t align very well with Fibre Channel’s strengths.

It’s an IP world now. We’re just forwarding packets in it.


Hey, Remember vTax?

Hey, remember vTax/vRAM? It’s dead and gone, but with servers now available with 6 terabytes of RAM, imagine what could have been (your insanely high licensing costs).

Set the wayback machine to 2011, when VMware introduced vSphere version 5. It had some really great enhancements over version 4, but no one was talking about the new features. Instead, they talked about the new licensing scheme and how much it sucked.


While some defended VMware’s position, most were critical, and my own opinion… let’s just say I’ve likely ensured I’ll never be employed by VMware. Fortunately, VMware came to their senses, realized what a bone-headed, dumbass move vRAM/vTax was, and repealed the vRAM licensing one year later in 2012. So while I don’t want to beat a dead horse (which, seriously, disturbing idiom), I do think it’s worth looking back for just a moment to see how monumentally stupid that licensing scheme was for customers, to serve as a lesson in the economics of scaling on the x86 platform, and as a reminder about the ramifications of CapEx- versus OpEx-oriented licensing.

Why am I thinking about this almost 2 years after they got rid of vRAM/vTax? I’ve been reading up on Intel’s newly released E7 v2 processors, and among the updates to Intel’s high-end server chip is the ability to have 24 DIMMs per socket (the previous limit was 12) and support for 64 GB DIMMs. This means that a 4-way motherboard (which you can order now from Cisco, HP, and others) can support up to 6 TB of RAM, using 96 DIMM slots and 64 GB DIMMs. And you’d get up to 60 cores/120 threads with that much RAM, too.

And I remembered one (of many) aspects of vRAM that I found horrible: just how quickly costs could spiral out of control as server vendors (which weren’t happy about vRAM either) cram more and more RAM into these servers.

The original vRAM licensing with vSphere 5 was that for every socket license you paid for, you were entitled to (and limited to) 48 GB of vRAM with Enterprise Plus. To be fair, the licensing scheme didn’t care how much physical RAM (pRAM) you had, only how much RAM was consumed by spun-up VMs (vRAM). With vSphere 4 (and the current vSphere licensing, thankfully), RAM had been essentially free: you only paid per socket, and you could use as much RAM as you could cram into a server. But with vRAM licensing, if you had a dual-socket motherboard with 256 GB of RAM, you would have to buy 6 licenses instead of 2. At the time, 256 GB servers weren’t super common, but you could order them from the various server vendors (IBM, Cisco, HP, etc.). So with vSphere 4, you would have paid about $7,000 to license that system. With vSphere 5, assuming you used all the RAM, you’d pay about $21,000 to license the system: triple the licensing cost. And that was day 1.

Now let’s see how much it would cost to license a system with 6 TB of RAM. If you use the original vRAM allotments from 2011, each socket license granted you 48 GB of vRAM with Enterprise Plus (they did raise the allotments after all the backlash, but that amended vRAM licensing model was so convoluted you literally needed an application to tell you how much you owed). That means to use all 6 TB (and after all, why would you buy that much RAM and not use it?), you would need 128 socket licenses, which would have cost $448,000 in licensing. A cluster of 4 such vSphere hosts would cost just shy of $2 million to license. With the current, non-insane licensing, the same 4-way 6 TB server costs a whopping $14,000. That’s 32 times the price.
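If you want to check my arithmetic, here’s a small worked calculation. It assumes roughly $3,500 per Enterprise Plus socket license and the original 48 GB vRAM entitlement, both taken from the discussion above rather than any current price list.

```python
"""Reproduce the licensing math above."""
import math

LICENSE_COST = 3500          # approx. Enterprise Plus, per socket license
VRAM_PER_LICENSE_GB = 48     # original vSphere 5 entitlement

def vram_license_cost(vram_used_gb, sockets):
    # Under vRAM licensing you needed the greater of one license per socket
    # or enough licenses to cover the consumed vRAM.
    licenses = max(sockets, math.ceil(vram_used_gb / VRAM_PER_LICENSE_GB))
    return licenses, licenses * LICENSE_COST

def per_socket_cost(sockets):
    return sockets, sockets * LICENSE_COST

print(vram_license_cost(6 * 1024, sockets=4))   # (128, 448000) -> vRAM model
print(per_socket_cost(4))                       # (4, 14000)    -> per-socket model
```

Same server, same hypervisor: $448,000 under vRAM, $14,000 under per-socket licensing.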

Again, this is all old news. VMware got rid of the awful licensing, so it’s a non-issue now. But still important to remember what almost happened, and how insane licensing costs could have been just a few years later.

[Image: My graph from 2011 was pretty accurate.]

Rumor has it VMware is having trouble getting customers to go for OpEx-oriented licensing for NSX. While VMware hasn’t publicly discussed licensing, it’s a poorly kept secret that VMware is looking to charge for NSX on a per-VM, per-month basis. The number I’d been hearing is $10 per month ($120 per year) per VM. I’ve also heard as high as $40, and as low as $5. But whatever the numbers are, VMware is gunning for OpEx-oriented licensing, and no one seems to be biting. It’s not the technology (everyone agrees it’s pretty nifty); it’s the licensing terms that are a concern. NSX is viewed as network infrastructure, and in that world we’re used to CapEx-oriented licensing. Some of VMware’s products are OpEx-oriented, but their attempt to switch vSphere over to OpEx was disastrous. And it seems to be the same for NSX.

Death To vMotion

There are very few technologies in the data center that have had as significant an impact as VMware’s vMotion. It allowed us to decouple operating system operations from server operations. We could maintain, update, and upgrade the underlying compute layer without disturbing the VMs running on it. We could keep writing web applications in the same model we were used to from when we wrote them for specific physical servers. From an application developer perspective, nothing needed to change. From a system administrator perspective, it made (virtual) server administration easier and more flexible. vMotion helped us move almost seamlessly from the physical world to the virtualization world with nary a hiccup. Combined with HA and DRS, it’s made VMware billions of dollars.

And it’s time for it to go.

From a networking perspective, vMotion has wreaked havoc on our data center designs. Starting in the mid-2000s, we all of a sudden needed to build huge Layer 2 domains instead of beautiful and simple Layer 3 fabrics. East-West traffic went insane. With multi-layer switches (Ethernet switches that could route as fast as they could switch), we had just gotten to the point where we could build really fast Layer 3 fabrics and get rid of spanning tree. vMotion required us to undo all that and go back to Layer 2 everywhere.

But that’s not why it needs to go.

Redundant data centers and/or geographic diversification is another area where vMotion is being applied. Having the ability to shift a workload from one data center to another is one of the holy grails of data centers, but to accomplish this we need Layer 2 data center interconnects (DCI), with technologies like OTV, VPLS, EoVPLS, and others. There’s also a distance limitation, as the latency between two data centers needs to be 10 milliseconds or less. And since light can only travel so far in 10 ms, there is a fairly limited distance over which you can effectively vMotion (200 kilometers, or a bit over 120 miles). That is, unless you have a Stargate.

[Image: You do have a Stargate in your data center, right?]

And that’s just getting a VM from one data center to another, which someone once described to me as a parlor trick. By itself, it serves no purpose to move a VM from one data center to another. You have to get its storage over as well (VMDK files if you’re lucky, raw LUNs if you’re not) and deal with traffic tromboning from one data center to the other.

The IP address is still coupled to the server (identity and location are coupled in normal operations, something LISP is meant to address), so traffic still comes to the server via the original data center, traverses the DCI, and then the server responds through its default gateway, which is likely still in the original data center. All that work to get a VM to a different data center, wasted.


All for one very simple reason: a VM needs to keep its IP address when it moves. This is IP statefulness, and there are various solutions that attempt to address its limitations. Some DCI technologies like OTV will help keep default gateways local to each data center, so when a server responds it at least doesn’t trombone back through the original data center. LISP is (another) overlay protocol meant to decouple location from the identity of a VM, helping with mobility. But as you stack all these stopgap solutions on top of each other, it becomes more and more cumbersome (and expensive) to manage.

All of this because a VM doesn’t want to give its IP address up.

But that isn’t the reason why we need to let go of vMotion.

The real reason why it needs to go is that it’s holding us back.

Do you want to really scale your application? Do you want to have fail-over from one data center to another, especially over distances greater than 200 kilometers? Do you want to be able to “follow the sun” in terms of moving your workload? You can’t rely on vMotion. It’s not going to do it, even with all the band-aids meant to help it.

The sites that are doing this type of scaling are not relying on vMotion, they’re decoupling the application from the VM. It’s the metaphor of pets versus cattle (or as I like to refer to it, bridge crew versus redshirts). Pets is the old way, the traditional virtualization model. We care deeply what happens to a VM, so we put in all sorts of safety nets to keep that VM safe. vMotion, HA, DRS, even Fault Tolerance. With cattle (or redshirts), we don’t really care what happens to the VMs. The application is decoupled from the VM, and state is not solely stored on a single VM. The “shopping cart” problem, familiar to those who work with load balancers, isn’t an issue. So a simple load balancer is all that’s required, and can send traffic to another server without disrupting the user experience. Any VM can go away at any level (database, application, presentation/web layer) and the user experience will be undisturbed. We don’t shed a tear when a redshirt bites it, thus vMotion/HA/DRS are not needed.
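To make the “shopping cart” point concrete, here’s a minimal sketch of keeping session state outside any single web VM. The Redis hostname is hypothetical and error handling, authentication, and expiry are omitted; the point is only that losing the VM that served the last request loses no user state, so the load balancer can send the next request anywhere.

```python
"""Keep the "shopping cart" outside the web VM so any instance can serve it."""
import redis

# Hypothetical shared session store, reachable by every web VM
store = redis.Redis(host="sessions.example.internal", port=6379,
                    decode_responses=True)

def add_to_cart(session_id, item, qty=1):
    store.hincrby(f"cart:{session_id}", item, qty)   # shared state, not local memory

def get_cart(session_id):
    return store.hgetall(f"cart:{session_id}")

# Any web VM behind the load balancer can run either call for any session.
add_to_cart("abc123", "sku-42")
print(get_cart("abc123"))
```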

If you write your applications and build your application stack as if vMotion didn’t exist, scaling, redundancy, and geographic diversification get a lot easier. If your platform requires Layer 2 adjacency, you’re doing it wrong (and you’ll be severely limited in how you can scale).

And don’t take my word for it. Take a look at any of the huge web sites: Netflix, Twitter, Facebook. They all shard their workloads across the globe and across their infrastructure (or Amazon’s). Most of them don’t even use virtualization. Traditional servers sitting behind a load balancer with an active/standby pair of databases on the back end isn’t going to cut it.

When you talk about sharding, make sure people know it’s spelled with a “D”. 

If you write an application on Amazon’s AWS, you’re probably already doing this, since there’s no vMotion in AWS. If an Amazon data center has a problem, as long as the application is architected correctly (again, this is done in the application itself), then I can still watch my episodes of Star Trek: Deep Space Nine. It takes more work to do it this way, but it’s a far more effective way to scale and diversify your geography.

It’s much easier (and quicker) to write a web application for the traditional model of virtualization, and most sites’ first outing will probably be done that way. But if you want to scale, it will be way easier (and more effective) to build and scale the application without relying on vMotion.

VMware’s vMotion (and Live Migration, and other similar technologies by other vendors) had their place, and they helped us move from the physical to the virtual. But now it’s holding us back, and it’s time for it to go.

Link Aggregation Confusion

In a previous article, I discussed the somewhat pedantic question: “What’s the difference between EtherChannel and port channel?” The answer, as it turns out, is none. EtherChannel is mostly an IOS term, and port channel is mostly an NXOS term. But either is correct.

But I did get one thing wrong. I was using the term LAG incorrectly. I had assumed it was short for Link Aggregation (the umbrella term of most of this). But in fact, LAG is short for Link Aggregation Group, which is a particular instance of link aggregation, not the umbrella term. So wait, what do we call the technology that links links together?

saymyname

LAG? Link Aggregation? No wait, LACP. It’s gotta be LACP.

In case you haven’t noticed, the terminology for one of the most critical technologies in networking (especially the data center) is still quite murky.

Before you answer that, let’s throw in some more terms, like LACP, MLAG, MC-LAG, VLAG, 802.3ad, 802.1AX, link bonding, and more.

The term “link aggregation” can mean a number of things. Certainly EtherChannel and port channels are a form of link aggregation. 802.3ad and 802.1AX count as well. Wait, what’s 802.1AX?

802.3ad versus 802.1AX

What is 802.3ad? It’s the old IEEE designation for what is now known as 802.1AX. The standard that we often refer to colloquially as port channel, EtherChannel, and link aggregation was moved from the 802.3 working group to the 802.1 working group sometime in 2008. However, it is sometimes still referred to as 802.3ad. Or LAG. Or link aggregation. Or link group things. Whatever.


What about LACP? LACP is part of the 802.1AX standard, but it is neither the entirety of the 802.1AX standard, nor is it required in order to stand up a LAG. LACP is also not link aggregation itself; it is a protocol for building LAGs dynamically, versus configuring them statically. You can usually build an 802.1AX LAG without using LACP. Many devices support both static and dynamic LAGs. VMware ESXi 5.0 only supported static LAGs, while ESXi 5.1 introduced LACP as a method as well.

Some devices only support dynamic LAGs, while some only support static. For example, Cisco UCS fabric interconnects require LACP in order to set up a LAG (the alternative is to use pinning, which is another type of link aggregation, but not 802.1AX). The discontinued Cisco ACE 4710 doesn’t support LACP at all; only static LAGs are supported.

One way to think of LACP is that it is a control-plane protocol, while 802.1AX is a data-plane standard. 

Is Cisco’s EtherChannel/port channel proprietary?

As far as I can tell, no, they’re not. There’s no (functional at least) difference between 802.3ad/802.1ax and what Cisco calls EtherChannel/port channel, and you can set up LAGs between Cisco and non-Cisco without any issue.  PAgP (Port Aggregation Protocol), the precursor to LACP, was proprietary, but Cisco has mostly moved to LACP for its devices. Cisco Nexus kit won’t even support PAgP.

Even in LACP, there’s no method for negotiating the load distribution method. Each side picks which method it wants to do. In fact, you don’t have to have the same load distribution method configured on both ends of a LAG (though it’s usually a good idea).
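The distribution method itself is conceptually simple: hash some header fields and use the result to pick a member link. Here’s a minimal sketch of the idea; real switches use their own hash functions and let you choose which fields feed the hash (source/destination MAC, IP, L4 ports), which is exactly why the two ends of a LAG can be configured differently.

```python
"""How a LAG member link gets picked: hash header fields, mod by link count.

A sketch of the concept only; the interface names are hypothetical and no
real switch uses this exact hash.
"""
import hashlib

LAG_MEMBERS = ["eth1/1", "eth1/2", "eth1/3", "eth1/4"]

def pick_member(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(LAG_MEMBERS)
    return LAG_MEMBERS[index]

# Every packet of a given flow hashes to the same link, so frames stay in order.
print(pick_member("10.0.0.5", "10.0.1.9", 49152, 3260))
print(pick_member("10.0.0.6", "10.0.1.9", 49153, 3260))
```

Because the choice is deterministic per flow, each direction can happily use its own method without reordering anything; it just distributes its own transmit traffic its own way.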

There are also types of link aggregation that aren’t part of 802.1AX or any other standard. I group these types of link aggregation into two categories: pinning, and fake link aggregation, or FLAG (Fake Link Aggregation).

First, let’s talk about pinning. In Ethernet, we have the rule that there can’t be more than one way to get anywhere. Ethernet can’t handle multi-pathing, which is why we have spanning tree and other tricks to prevent there from being more than one logical way for an Ethernet frame to get from a given source MAC to a given destination MAC. Pinning is a clever way to get around this.

The most common place we tend to see pinning is in VMware. Most ESXi hosts have multiple connections to a switch. But it doesn’t have to be the same switch. And look at that, we can have multiple paths. And no spanning-tree protocol. So how do we not melt down the network?

The answer is pinning. VMware refers to this as load balancing by virtual port ID. Each VM’s vNIC has a virtual port ID, and that ID is pinned to one and only one of the external physical NICs (pNICs). To utilize all your links, you need at least as many virtual ports as you have physical ports, and load distribution can be an issue. But generally, this pinning works great. Cisco UCS also uses pinning, for both Ethernet and Fibre Channel, when 802.1AX-style link aggregation isn’t used.

It’s a fantastic way to get active/active links without running into spanning-tree issues, and it doesn’t require 802.1AX.
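Here’s the idea of pinning in a few lines. This is an illustration only, not VMware’s actual algorithm; the virtual port IDs and uplink names are made up. Because a given vNIC only ever uses one uplink, the upstream switches never see a MAC flapping between ports, so no LAG configuration is needed on the switch side.

```python
"""Pinning, sketched: each virtual port maps to exactly one physical uplink."""
PNICS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]   # hypothetical host uplinks

def pin(virtual_port_id):
    return PNICS[virtual_port_id % len(PNICS)]     # one and only one uplink per vport

for vport in range(6):
    print(f"virtual port {vport} -> {pin(vport)}")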

Then there’s… a type of link aggregation that scares me. This is FLAG.


Some operating systems such as FreeBSD and Linux support a weird kind of link aggregation where packets are sent out various active links, but only received on one link. It requires no special configuration on a switch, but the server is oddly blasting out packets on various switch ports. Transmit is active/active, but receive is active/standby.

What’s the point? I’d prefer active/standby in a more sane configuration.  I think it would make troubleshooting much easier that way.

There’s not much need for this type of fake link aggregation anymore. Most managed switches support 802.1AX, and end hosts either support the aforementioned pinning or they support 802.1AX well (LACP or static). So there are easier ways to do it.

So as you can see, link aggregation is a pretty broad term, too broad to encompass only what would be under the umbrella of 802.1AX, as it also includes pinning and Fake Link Aggregation. LAG isn’t a good term either, since it refers to a specific instance, and isn’t suited as the catch-all term for the methodology of inverse-multiplexing. 802.1AX is probably the best term, but it’s not widely known, and it also includes the optional LACP control plane protocol. Perhaps we need a new term. But if you’ve found the terms confusing, you’re not alone.

VMware Year-Long vTax Disaster is Gone!

The rumors were true, vTax is gone. The announcement, rumored last week, was confirmed today.

Tequila!

Don’t let the door hit you on the ass on your way out. And perusing their pricing white paper you can see the vRAM allotments are all listed as unlimited.

vRAM limits (or vTax, as it’s derisively called) have been a year-long disaster for VMware, and here’s why:

It stole the narrative

vTax stole the narrative. All of it. Yay, you presented a major release of your flagship product with tons of new features and added more awesomesauce. Except no one wanted to talk about any of it. Everyone wanted to talk about vRAM, and how it sucked. In blogs, message boards, and IT discussions, it’s all anyone wanted to talk about. And other than a few brave folks who defended vTax, the reaction was overwhelmingly negative.

It peeved off the enthusiast community

Those key nerds (such as yours truly) who champion a technology felt screwed after VMware limited the free version to 8 GB. They later revised it to 32 GB after the uproar, which right now is fair (and so far it’s still 32 GB). That’ll do for now, but I think by next year they need to kick it up to 48 GB.

It was fucking confusing

Wait, what? How much vRAM do I need to buy? OK, why do I need 10 socket licenses for a 2-way server? I have 512 GB of RAM and 2 CPUs, so I have to buy 512 GB of vRAM? Oh, only if I use it all. So if I’m only using 128 GB, I only need to buy 4? OK, well, wait, what about VMs over 96 GB, they only count towards 96 GB? What?

What?

It was so complicated, there was even a tool to help you figure out how much vRAM you needed. (Hint: if you need a program to figure out your licensing, your licensing sucks.)

It gave the competition a leg up

It’s almost as if VMware said to Microsoft and Red Hat: “Here guys, have some market share.” I imagine executives over at Microsoft and Red Hat were naming their children after VMware for the gift VMware gave them. A year later, I see a lot more Hyper-V (and lots of excitement around Hyper-V 3) and KVM discussions. And while VMware is in virtually (get it?) every data center, from my limited view Hyper-V and KVM seem to be installed in production in far more data centers than they were a year ago, presumably taking away seats from VMware. (What I don’t see, oddly enough, is Citrix Xen for server virtualization. Citrix seems to be concentrating only on VDI.)

It fought the future

One of the defenses from VMware, and from those who sided with VMware on the vTax issue, was that 90+% of current customers wouldn’t need to pay additional licensing fees to upgrade to vSphere 5. I have a hard time swallowing that; I think the number was much lower than they were saying (perhaps some self-delusion there). They cited a customer average of 6:1 in terms of VMs per host, which I think is laughably low. As laughable as when Dr. Evil vastly over-estimated the value of 1 million dollars.

We have achieved server consolidation of 6:1!

And add into that hardware refreshes. The servers and blades that IT organizations are looking to buy aren’t 48 GB systems anymore. The ones catching our wandering eyes are stuffed to the brim with RAM. A 2-way blade with 512 GB of RAM would need 11 socket licenses with the original 48 GB vRAM allotments for Enterprise+, or 6 socket licenses with the updated 96 GB vRAM allotments for Enterprise+. That’s either 550% or 300% of the price under the previous licensing model.

They had the gall to say it was good for customers

So, how is a dramatic price increase good for customers? Mathematically, there was no way for any customer to save money with vRAM: it either cost the same, or it cost more. And while vSphere 5 brought some nice advancements, I don’t think any of them justified the price increase. So while they thought it was good for VMware (I don’t think it did VMware any good at all), it certainly wasn’t “good” for customers.

It stalled adoption of vSphere 5

Because of all these reasons, vSphere 5 uptake seemed to be a lot lower than they’d hoped, at least from what I’ve seen.

So I’m glad VMware got rid of vTax. It was a pretty significant blunder for a company that has done really well navigating an ever-changing IT realm, all things considered.

Some still defend the now-dead licensing model, but I respectfully disagree. It had no upside. I think the only kind-of-maybe semi-positive outcome of vRAM is that it trolled Microsoft and other competitors, because one of their best attacks on VMware (which VMware created themselves out of thin air) is now gone. I’m happy that VMware seems to have acknowledged the blunder, in a rare moment of humility. Hopefully this humility sticks.

It pissed off partners, it pissed off hardware vendors, it pissed off the enthusiast community, and it pissed off even the most loyal customers.

Good riddance.

VMware Getting Rid of vRAM Licensing (vTax)?

(Update 8/21/12: VMware has a comment on the rumors [they say check next week])

A colleague pointed me to this article, which apparently indicates that with vSphere 5.1, VMware is getting rid of vRAM (cough vTax cough). I have found an appropriate animated GIF that both communicates my feelings and demonstrates the sweet dance moves I have just performed.

Nailed it

If this turns out to be true, it’s awesome. Even if most organizations weren’t currently affected by vTax, it’s almost certain they would be soon, as they refreshed onto server and blade models that gleefully include obscene amounts of RAM. With Cisco’s UCS, for example, you can get a half-width blade (the B230 M2) and cram it with 512 GB of RAM, or a full-width blade (B420 M3) and stuff it with 1.5 TB of RAM. The latter is a 4-socket system, and licensing it for the full 1.5 TB of RAM would require buying not 4 licenses, but 16, making high-RAM systems far more expensive to license.

That was the darkest side of vRAM: even if you weren’t affected by it today, it was only a matter of time. One might say that it’s… A TRAP, as I’ve written about before. VMware fanboys/girls tried to apologize for it, but the fact is it was not a popular move, either in the VMware enthusiast community or the business community.

So if it’s really gone, good. Good riddance. The only question now is how much RAM we get to use in the free version, which is critical to the study/home-lab market.