Fibre Channel and Ethernet: The Odd Couple

Fibre Channel? Meet Ethernet. Ethernet? Meet Fibre Channel. Hilarity ensues.

The entire thesis of this blog is that the traditional data center silos are collapsing. We are witnessing the rapid convergence of networking, storage, virtualization, server administration, security, and who knows what else. It’s becoming more and more difficult to be “just a networking/server/storage/etc person”.

One of the byproducts of this is the often hilarious fallout from conflicting interests, philosophies, and mentalities. And perhaps the greatest friction comes from the conflict of storage and network administrators. They are the odd couple of the data center.

Storage and Networking: The Odd Couple

Ethernet is the messy roommate. Ethernet just throws its shit all over the place, dirty clothes never end up in the hamper, and I think you can figure out Ethernet’s policy on dish washing. It’s disorganized and loses stuff all the time. Overflow a receive buffer? No problem. Hey, Ethernet, why’d you drop that frame? Oh, I dunno, because WRED, that’s why.

WRED is the Yosemite Sam of Networking

But Ethernet is also really flexible and, compared to Fibre Channel (and virtually all other networking technologies), inexpensive. Ethernet can afford to be messy, because it either relies on higher-layer protocols to handle dropped frames (TCP) or it just doesn’t care (UDP).

Fibre Channel, on the other hand, is the anal-retentive network: A place for everything, and everything in its place. Fibre Channel never loses anything, and keeps track of it all.

There now, we’re just going to put this frame right here in this reserved buffer space.

The overall philosophies are vastly different between the two. Ethernet (and TCP/IP on top of it) is meant to be flexible, mostly reliable, and lossy. You’ll probably get the Layer 2 frames and Layer 3 packets from source to destination, but there’s no guarantee. Fibre Channel is meant to be inflexible (compared with Ethernet), absolutely reliable, and lossless.

Fibre Channel and Ethernet have very different philosophies when it comes to building out a network. For instance, in Ethernet networks, we cross-connect the hell out of everything. Network administrators haven’t met two switches they didn’t want to cross-connect.


Did I miss a way to cross-connect? Because I totally have more cables

It’s just one big cloud to Ethernet administrators. For Fibre Channel administrators, a single “SAN” is an abomination. There are always two completely separate, air-gapped fabrics.

The greatest SAN diagram ever created

The Fibre Channel host at the bottom is connected to two separate, Gandalf-separated, non-overlapping Fibre Channel fabrics. This gives the host two independent paths to the same storage array for full redundancy. You’ll note that the Fibre Channel switches on both sides have two links from switch to switch within the same fabric. Guess what? They’re both active. Multi-pathing in Fibre Channel is enabled through the FSPF protocol (Fabric Shortest Path First). Switch-to-switch traffic in Fibre Channel is, in Ethernet terms, effectively Layer 3 routed. It’s enough to give one multi-path envy.
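The load-sharing part of FSPF can be sketched in a few lines. This is illustrative only: the link names and costs are invented, and real FSPF runs a full link-state shortest-path computation per fabric. The point is simply that every link tied for the lowest cost stays active and carries traffic.

```python
# Toy sketch of FSPF-style equal-cost multipath selection (names invented).
def active_links(links):
    """links: list of (link_id, cost). Return every link tied for lowest cost."""
    best = min(cost for _, cost in links)
    return [link_id for link_id, cost in links if cost == best]

# Both inter-switch links at cost 10 stay active; the cost-20 link is a backup.
paths = active_links([("ISL-1", 10), ("ISL-2", 10), ("ISL-3", 20)])
```

Contrast that with classic Spanning Tree on the Ethernet side, which would block all but one of those links outright.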

One of the common ways (although by no means the only way) that an Ethernet frame can meet an unfortunate demise is through tail drop or WRED on a receive buffer. As a buffer in Ethernet gets full, WRED or a similar technology will typically start to randomly drop frames; the closer the buffer gets to full, the more aggressively frames are dropped. WRED prevents tail drop, which is bad for TCP, by dropping frames before the buffer fills completely.
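To make that ramp concrete, here’s a toy sketch of WRED’s drop decision. The function name and thresholds are invented for illustration; real implementations work on a weighted average queue depth with per-class drop profiles, not the instantaneous depth used here.

```python
import random

def wred_should_drop(queue_depth, min_threshold, max_threshold, max_drop_prob):
    """Toy WRED: decide whether an arriving frame gets randomly dropped.

    Below min_threshold nothing is dropped. Between the thresholds the
    drop probability ramps up linearly toward max_drop_prob. At or past
    max_threshold everything is dropped (tail drop).
    """
    if queue_depth < min_threshold:
        return False
    if queue_depth >= max_threshold:
        return True
    fill = (queue_depth - min_threshold) / (max_threshold - min_threshold)
    return random.random() < fill * max_drop_prob
```

So a frame arriving at a nearly empty queue always survives, while the same frame arriving at a nearly full queue is living dangerously.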

Essentially, an Ethernet buffer is a bit like Thunderdome: Many frames enter, not all frames leave. With Ethernet, if you tried to do full line rate of two 10 Gbit links through a single 10 Gbit choke point, half the frames would be dropped.

To a Fibre Channel administrator, this is barbaric. Fibre Channel is much more civilized with its use of Buffer-to-Buffer (B2B) credits. Before a Fibre Channel frame is sent from one port to another, the sending port reserves space in the receiving port’s buffer. A Fibre Channel frame won’t get sent unless there’s guaranteed space at the receiving end. This ensures that no matter how much you oversubscribe a port, no frames will get lost. Also, when a Fibre Channel frame meets another Fibre Channel frame in a buffer, it asks for the Grey Poupon.

With Fibre Channel, if you tried to push two 8 Gbit links through a single 8 Gbit choke point, no frames would be lost, and each 8 Gbit port would end up throttled back to roughly 4 Gbit through the use of B2B credits.
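The credit dance can be modeled in a handful of lines. This is a toy sketch, not an FC API: the class and method names are my invention. A sender spends one credit per frame and gets it back when the receiver frees a buffer (signaled by an R_RDY primitive); with zero credits, the frame waits rather than being dropped, which is exactly the throttling described above.

```python
# Toy model of Fibre Channel buffer-to-buffer credit flow control.
class B2BPort:
    def __init__(self, credits):
        self.credits = credits  # receive buffers the far end has advertised

    def can_send(self):
        return self.credits > 0

    def send_frame(self):
        # No credit means the frame waits; it is never dropped.
        assert self.can_send(), "no credit: frame must wait"
        self.credits -= 1  # one receive buffer is now spoken for

    def receive_r_rdy(self):
        self.credits += 1  # receiver freed a buffer and returned the credit
```

Starve a port of returned credits and its throughput drops smoothly toward the receiver’s drain rate, which is how two 8 Gbit senders end up at roughly 4 Gbit each through one 8 Gbit choke point.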

Why is Fibre Channel so anal retentive? Because SCSI, that’s why. SCSI is the protocol that most enterprise servers use to communicate with storage. (I mean, there’s also SATA, but SCSI makes fun of SATA behind SATA’s back.) Fibre Channel runs the Fibre Channel Protocol (FCP), which encapsulates SCSI commands into Fibre Channel frames (as odd as it sounds, Fibre Channel and Fibre Channel Protocol are two distinct technologies). FCP is essentially SCSI over Fibre Channel.

SCSI doesn’t take kindly to dropped commands. It’s a bit of a misconception that SCSI can’t tolerate a lost command. It can, it just takes a long time to recover (relatively speaking). I’ve seen plenty of SCSI errors, and they’ll slow a system down to a crawl. So it’s best not to lose any SCSI commands.

The Converged Clusterfu… Network

We used to have separate storage and networking environments. Now we’re seeing an explosion of convergence: Putting data and storage onto the same (Ethernet) wire.

Ethernet is the obvious choice, because it’s the most popular networking technology. Port for port, Ethernet is the most inexpensive, most flexible, most widely deployed networking technology around. It has slain the FDDI dragon, put down the Token Ring revolution, and now it has its sights set on the Fibre Channel Jabberwocky.

The two technologies currently competing for this convergence are iSCSI and FCoE. SCSI doesn’t tolerate failed delivery of a SCSI command very well, so both iSCSI and FCoE have ways to guarantee delivery. With iSCSI, delivery is guaranteed because iSCSI runs on TCP, the reliable Layer 4 protocol. If a lower-level frame or packet carrying a TCP segment gets lost, no big deal: TCP uses sequence numbers, which are like FedEx tracking numbers, and can re-send a lost segment. So go ahead, WRED, do your worst.
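Here’s a toy illustration of why sequence numbers make the loss invisible to iSCSI. The function and data are invented for the example, and real TCP retransmission (ACKs, timers, windows) is far more involved; the point is just that the receiver reassembles by sequence number, so a dropped segment delays delivery rather than corrupting it.

```python
# Toy TCP-style recovery: lost segments are retransmitted by sequence
# number, so the SCSI payload riding above never sees the loss.
def deliver(segments, lost_seqs):
    """segments: list of (seq, data); lost_seqs: seqs dropped in transit."""
    received = {seq: data for seq, data in segments if seq not in lost_seqs}
    # The receiver's ACKs reveal the gaps; the sender retransmits those.
    for seq, data in segments:
        if seq in lost_seqs:
            received[seq] = data  # retransmission makes it through this time
    # Reassemble in sequence order, exactly as sent.
    return b"".join(received[seq] for seq in sorted(received))
```

WRED can drop the middle segment and the reassembled stream still comes out intact, just a round trip later.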

FCoE provides losslessness through priority flow control (PFC), which serves the same goal as B2B credits in Fibre Channel. Instead of reserving space in the receiving buffer, PFC keeps track of how full a particular buffer is: the one dedicated to FCoE traffic. If that FCoE buffer gets close to full, the receiving Ethernet port sends a PAUSE MAC control frame to the sending port, and the sending port stops. This is done on a port-by-port basis, so end-to-end FCoE traffic is guaranteed to arrive without dropped frames. For this to work, though, the Ethernet switches need to speak PFC, which isn’t part of the base Ethernet standard; it’s part of the DCB (Data Center Bridging) set of standards.
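The watermark logic can be sketched as follows. The thresholds and names are hypothetical; real PFC (IEEE 802.1Qbb) pauses a specific priority class for a number of pause quanta and resumes with a zero-quanta PAUSE, but the high-water/low-water idea is the same.

```python
# Toy per-priority PFC receive buffer (thresholds and names invented).
class PfcReceiver:
    def __init__(self, high_water=80, low_water=40):
        self.high_water = high_water
        self.low_water = low_water
        self.depth = 0        # frames queued in the FCoE priority's buffer
        self.paused = False   # have we told the sender to stop?

    def frame_in(self):
        self.depth += 1
        if self.depth >= self.high_water and not self.paused:
            self.paused = True    # send PAUSE for this priority only

    def frame_out(self):
        self.depth -= 1
        if self.depth <= self.low_water and self.paused:
            self.paused = False   # send a zero-quanta PAUSE: resume sending
```

Note that only the FCoE priority gets paused; the regular best-effort traffic on the same wire keeps flowing (and keeps getting WRED’d) as usual.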

Hilarity Ensues

Like the shields of the Enterprise, converged networking is in a state of flux. Network administrators and storage administrators are not very happy with the result. Network administrators don’t want storage traffic (and its silly demands for losslessness) on their data networks. Storage administrators are appalled by Ethernet and its devil-may-care attitude towards frames. They’re also not terribly fond of iSCSI, and only grudgingly accepting of FCoE. But convergence is happening, whether they like it or not.

Personally, I’m not invested in any particular technology. I’m a bit more pro-iSCSI than pro-FCoE, but I’m warming to the latter (and certainly curious about it).

But given how dyed-in-the-wool some network administrators and storage administrators are, the biggest problems in convergence won’t be the technology; they’ll be the Layer 8 issues it generates. My take is that it’s time to think like a data center administrator, not a storage or network administrator. That will take time, however. Until then, hilarity ensues.

17 Responses to Fibre Channel and Ethernet: The Odd Couple

  1. Stuart Miniman says:

    A CIO I talked to 3 years ago summed up the challenge well:
    Look at the change management practices:
    Ethernet – you can unplug any cable as long as you plug it back in 7 seconds, VLANs are reconfigured about once a quarter.
    FC – wire it once and don’t change unless there is a failure, be sure to zone properly because changing is a nightmare.
    As you said – file under Layer 8 🙂

  2. jtopping says:

    WRED is not an Ethernet technology by any means.

    • tonybourke says:

      Absolutely right, (W)RED is a generic mechanism and not tied to Ethernet, although I don’t believe I insinuated that it was. (W)RED is one of the common ways, however, that an Ethernet frame (and its encapsulated IP packet [and encapsulated TCP segment]) would meet an untimely demise.

      • Bob says:

        WRED can act on DSCP, which has nothing to do with Ethernet and in such cases the frame is not dropped (the IP packet is). It’s a bad analogy.

        Your comparison of cross connected switches to air gapped multi paths also fails. Some admins might cross connect every switch they have but it’s certainly not done or recommended by anyone with a clue.

      • tonybourke says:

        Hi Bob,

        WRED can act on DSCP, which has nothing to do with Ethernet and in such cases the frame is not dropped (the IP packet is). It’s a bad analogy.

        It’s not an analogy, it’s an example, and one of how a frame can meet an unfortunate demise (and not the only way). I can’t find anywhere in the article where I claimed WRED is exclusive to Ethernet. And since this article deals with Ethernet and not IP, we’re dealing with switches and the various hardware queues that the higher-end ones have. Remember, not every Ethernet frame we throw onto the wire will necessarily have an IP packet (v4 or v6) encapsulated inside it (i.e. FCoE). And even with IP, on most Layer 2/3 switches, an IP packet’s DSCP value is examined and a CoS value (from a mapping table) is added.

        Your comparison of cross connected switches to air gapped multi paths also fails. Some admins might cross connect every switch they have but it’s certainly not done or recommended by anyone with a clue.

        I’m not exactly clear on how you’re claiming that it fails. In Fibre Channel, there are always two separate fabrics. The switches in Fabric A and fabric B don’t ever cross connect. This provides two independent paths for a host to get to a storage array. In Ethernet networks, we don’t do that. We make redundant links typically with redundant pairs of switches using MLAG/vPC/VSS or relying on STP, but we typically don’t air-gap separate.

        -Tony

  3. AG says:

    I just found your blog and it’s hilarious. I particularly liked the Gandalf-separated fabric 🙂

  4. Pingback: Technology Short Take #15 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

  5. Jeff says:

    How about ATA over ethernet?

    • tonybourke says:

      Hi Jeff,

      An interesting technology, however the AoE data center penetration is negligible from what I can tell.

  6. Paul Senior says:

    Excellent article. I would like to hear more from you about “thinking like a data center administrator” versus a network or server administrator.

  7. Pingback: A High Fibre Diet | The Data Center Overlords

  8. Pingback: Securing SANS | Senetas Europe

  9. Pingback: Fibre Channel: What Is It Good For? | The Data Center Overlords

  10. David P says:

    OK, convergence of Eth and FC is a joke. iSCSI and FCoE are not scalable in big data situations. You can’t add to the frame of FC and expect it to get better. Sorry, iSCSI is for test environments and is abysmal at best in real-world growing architectures. Keep storage and Ethernet a far distance apart. I’d equate this to having your plumbing in your house also be the conduit for your electricity. Yeah, you could do it, but why risk it.

    • tonybourke says:

      So what exactly isn’t scalable about iSCSI or FCoE? FCoE does have a limit in scale, but that’s mostly because of FC itself (limited domain IDs, etc.), limits it shares with native FC. iSCSI doesn’t have those addressing limitations. iSCSI is a pain in the ass to set up, but if you introduce automation to the mix, most of those concerns go away.

      iSCSI (and other IP-based protocols) have a scalability advantage, and they’re the protocols of choice for dynamic environments (IaaS), while FC is better suited to a more static environment.

      I think right now the biggest growth in storage is object storage, and there Fibre Channel isn’t really suited, as well as scale out block/file/object solutions like Ceph, where you need many-to-many connectivity (Fibre Channel is one to one or many to one).

      FCoE multi-hop didn’t make sense, not because of the frames or other technological issues, but because it just wasn’t cheaper. It’ll be interesting to see if the new generation of Ethernet (inexpensive 25/100 Gigabit ports) competes with 28/112 gigabit FC (32/128 Gbit FC doesn’t actually run at 32/128 Gbit).
