A Tale of Two FCoEs

A favorite topic of discussion among the data center infrastructure crowd is the state of FCoE. Depending on who you ask, FCoE is dead, stillborn, or thriving.

So, which is it? Are we dealing with FUD, or are we dealing with vendor hype? Is FCoE a success, or is it a failure? The quick answer is… yes? FCoE is both thriving and yet-to-launch. So… are we dealing with Schrödinger’s protocol?

Not quite. To understand the answer, it’s important to make the distinction between two very different ways that FCoE is implemented: Edge FCoE and Multi-hop FCoE (a subject I’ve written about before, although I’ve renamed things a bit).

Edge FCoE

Edge FCoE is thriving, and has been for the past few years. Edge FCoE is when you take a server (or sometimes a storage array) and connect it to an FCoE switch, and everything beyond that first switch is either native Fibre Channel or native Ethernet.

Edge FCoE is distinct from multi-hop for one main reason: it’s a helluva lot easier to pull off. With edge FCoE, the only switch that needs to understand FCoE is that edge switch. It plugs into traditional Fibre Channel networks over traditional Fibre Channel links (typically in NPV mode).

Essentially, no other part of your network needs to do anything you haven’t done already. You do traditional Ethernet, and traditional Fibre Channel. FCoE only exists in that first switch, and is invisible to the rest of your LAN and SAN.

Here are the things you (for the most part) don’t have to worry about configuring on your network with Edge FCoE:

  • Data Center Bridging (DCB) technologies
    • Priority Flow Control (PFC) which enables lossless Ethernet
    • Enhanced Transmission Selection (ETS), which allows bandwidth to be dedicated to various traffic classes (not required but recommended, per Ivan Pepelnjak)
    • DCBx: A method to communicate DCB functionality between switches over LLDP (oh, hey, you do PFC? Me too!)
  • Whether or not your aggregation and core switches support FCoE (they probably don’t, or at least the line cards don’t)

There is PFC and DCBx on the server-to-edge FCoE link, but it’s typically inherent: supported by both the CNA and the edge FCoE switch, and turned on by default or auto-detected. In some implementations, there’s nothing to configure; PFC is there, and unalterable. Even if there are some settings to tweak, it’s generally easier to do it on edge ports than on an aggregation/core network.
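
If you want a mental model of what that auto-negotiation is doing, here’s a quick Python sketch of a DCBx-style capability check. The class, the field names, and the CoS 3 default are illustrative stand-ins, not the actual LLDP TLV format.

    # Toy model of a DCBx-style capability exchange between a CNA and an
    # edge FCoE switch. Real DCBx runs over LLDP TLVs; the dataclass below
    # is just an illustration, not the actual wire format.
    from dataclasses import dataclass

    @dataclass
    class DcbCapabilities:
        pfc_priorities: frozenset   # 802.1p priorities treated as lossless (PFC)
        ets_bandwidth: dict         # traffic class -> % of link bandwidth

    def fcoe_link_ready(cna: DcbCapabilities, switch: DcbCapabilities,
                        fcoe_priority: int = 3) -> bool:
        """FCoE can come up on the link only if both ends agree to treat the
        FCoE priority (commonly CoS 3) as lossless."""
        return (fcoe_priority in cna.pfc_priorities
                and fcoe_priority in switch.pfc_priorities)

    cna = DcbCapabilities(pfc_priorities=frozenset({3}), ets_bandwidth={"fcoe": 50, "lan": 50})
    switch = DcbCapabilities(pfc_priorities=frozenset({3}), ets_bandwidth={"fcoe": 50, "lan": 50})
    print(fcoe_link_ready(cna, switch))   # True: both ends do PFC on CoS 3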

Edge FCoE is the vast majority of how FCoE is implemented today. Everything from Cisco’s UCS to HP’s C7000 series can do it, and do it well.

Multi-Hop

The very term multi-hop FCoE is controversial in nature (just check the comments section of my FCoE terminology article), but for the sake of this article, multi-hop FCoE is any topological implementation of FCoE where FCoE frames move around a converged network beyond a single switch.

Multi-hop FCoE requires a few things: a Fibre Channel-aware network, losslessness through priority flow control (PFC), DCBx (Data Center Bridging Exchange), and enhanced transmission selection (ETS). Put those together and you’ve got a recipe for a switch that I’m pretty sure ain’t in your rack right now. For instance, the old man of the data center, the Cisco Catalyst 6500, doesn’t do FCoE now and likely never will.

Switch-wise, there are two ways to do multi-hop FCoE: a switch can either forward FCoE frames based on the Ethernet headers (MAC address source/destination), or forward them based on the Fibre Channel headers (FCID source/destination).
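
A rough way to picture the difference: the same FCoE frame carries both sets of headers, and the two kinds of switches just key their lookup off different fields. Here’s a toy Python sketch (the frame fields, tables, and port names are all made up for illustration):

    # An FCoE frame carries an outer Ethernet header and an inner Fibre Channel
    # header. The two multi-hop approaches differ only in which header the
    # switch uses for its forwarding lookup. Field values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class FCoEFrame:
        dst_mac: str    # outer Ethernet destination (next-hop FCF or ENode)
        src_mac: str
        d_id: str       # inner Fibre Channel destination FCID
        s_id: str       # inner Fibre Channel source FCID

    def ethernet_forward(frame: FCoEFrame, mac_table: dict) -> str:
        # Pass-through/Ethernet-forwarded switch: never looks inside the FC header.
        return mac_table[frame.dst_mac]

    def fc_forward(frame: FCoEFrame, fspf_table: dict) -> str:
        # FC-forwarded (dual-stack) switch: routes on the FCID, like any other
        # Fibre Channel switch would.
        return fspf_table[frame.d_id]

    frame = FCoEFrame(dst_mac="0e:fc:00:01:00:01", src_mac="0e:fc:00:01:00:02",
                      d_id="0x010001", s_id="0x010002")
    print(ethernet_forward(frame, {"0e:fc:00:01:00:01": "eth1/5"}))  # -> eth1/5
    print(fc_forward(frame, {"0x010001": "vfc7"}))                   # -> vfc7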

Ethernet-forwarded/Pass-through Multi-hop

If you build a multi-hop network with switches that forward based on Ethernet headers (as Juniper and Brocade do), then you’ll want something other than spanning-tree to do loop prevention and enable multi-pathing. Brocade uses a method based on TRILL, and Juniper uses their proprietary QFabric (based on unicorn tears).

Ethernet-forwarded FCoE switches don’t have a full Fibre Channel stack, so they’re unaware of what goes on in the Fibre Channel world, such as zoning. The exception is FIP (FCoE Initialization Protocol), which handles discovery of attached Fibre Channel devices (connecting virtual N_Ports to virtual F_Ports).

FC-Forwarded/Dual-stack Multi-hop

If you build a multi-hop network with switches that forward based on Fibre Channel headers, your FCoE switch needs to have both a full DCB-enabled Ethernet stack, and a full Fibre Channel stack. This is the way Cisco does it on their Nexus 5000s, Nexus 7000s, and MDS 9000 (with FCoE line cards), although the Nexus 4000 blade switch is the Ethernet-forwarded kind of switch.

The benefit of using an FC-forwarded switch is that you don’t need a network that does TRILL or anything fancier than spanning-tree (spanning-tree isn’t enabled on any VLAN that passes FCoE). It’s pretty much a Fibre Channel network, with the ports being Ethernet instead of Fibre Channel. In fact, in Cisco’s FCoE reference design, storage and networking traffic are still port-gapped (a subject of a future blog post): FCoE frames and regular networking frames don’t run over the same links; there are dedicated FCoE links.

It’s like running a Fibre Channel SAN that just happens to sit on top of your Ethernet network. As Victor Moreno, the LISP project manager at Cisco, says: “The only way is to overlay”.

State of FCoE

It’s not accurate to say that FCoE is dead, or that FCoE is a success, or anything in between really, because the answer is very different once you separate multi-hop and edge-FCoE.

Currently, multi-hop has yet to launch in a significant way. In the past two months, I have heard rumors of a customer here or there implementing it, but I’ve yet to hear any confirmed reports or firsthand tales. I haven’t even configured it personally. I’m not sure I’m quite as wary as Greg Ferro is, but I do agree with his wariness. It’s new, it’s not widely deployed, and that makes it riskier. There are interoperability issues, which in some ways are obviated by the fact that no one is doing Ethernet fabrics in a multi-vendor way, and NPV/NPIV can help keep things “native”. But historically, Fibre Channel vendors haven’t played well together. Stephen Foskett lists interoperability among his reasonable concerns with multi-hop FCoE. (Greg, Stephen, and everyone else I know are totally fine with edge FCoE.)

Edge FCoE is of course vibrant and thriving. I’ve configured it personally, and it fits easily and seamlessly into an existing FC/Ethernet network. I have no qualms about deploying it, and anyone doing convergence should at least consider it.

Crystal Ball

In terms of networking and storage, it’s impossible to tell what the future will hold. There are a number of different directions FCoE, iSCSI, NFS, DCB, Ethernet fabrics, et al. could go. FCoE could end up replacing Fibre Channel entirely, or it could be relegated to the edge and never move from there. Another possibility, as suggested to me by Stephen Foskett, is that Ethernet will become the connection standard for Fibre Channel devices. They would still be called Fibre Channel switches, and SANs would be set up just like they always have been, but instead of having 8/16/32 Gbit FC ports, they’d have 10/40/100 Gbit Ethernet ports. To paraphrase Bob Metcalfe: “I don’t know what will come after Fibre Channel, but it will be called Ethernet.”

The Problem

One recurring theme from virtually every one of the Network Field Day 2 vendor presentations last week (as well as the OpenFlow symposium) was affectionately referred to as “The Problem”.

It was a theme because, as vendor after vendor gave a presentation, they essentially said the same thing when describing the problem they were going to solve. For us delegates/bloggers, it quickly went from “the problem” to “The Problem”. We’d heard it so often that during the (5th?) iteration of the same problem, we all started laughing like a group of Beavis and Butt-Heads during a vendor’s presentation, and we had to apologize profusely (it wasn’t their fault, after all).

Huh huhuhuhuhuh… he said “scalability issues”

In fact, I created a simple diagram with some crayons brought by another delegate to save everyone some time.

Hello my name is Simon, and I like to do draw-wrings

But with The Problem on repeat it became very clear that the majority of networking companies are all tackling the very same Problem. And imagine the VC funding that’s chasing the solution as well.

So what is “The Problem”? It’s a multi-faceted and interrelated set of issues:

Virtualization Has Messed Things Up, Big Time

The biggest problem of them all was caused by the rise of virtualization. Virtualization has disrupted much of the server world, but the impact that it’s had on the network is arguably orders of magnitude greater. Virtualization wants big, flat networks, just when we got to the point where we could route Layer 3 as fast as we could switch Layer 2. We’d just gotten to the point where we could get our networks small.

And it’s not just virtualization in general; much of its impact comes from the very simple act of vMotion. VMs want to keep their IPs the same when they move, so now we have to bend over backwards to get it done. Add to that the vSwitch sitting inside the hypervisor, and the limited functionality of that switch (and who the hell manages it, anyway? Server team? Network team?).

4000 VLANs Ain’t Enough

If you’re a single enterprise running your own network, chances are 4000+ VLANs are sufficient (or perhaps not). In multi-tenant environments with thousands of customers, though, 4000+ VLANs quickly become a problem. There is a need for some type of VLAN multiplier, something like QinQ or VXLAN, which gives us 4096 times 4096 VLANs (16 million or so).
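
For the curious, here’s the back-of-the-envelope math behind that “16 million or so”, in Python: 12-bit VLAN IDs, two of them stacked QinQ-style, versus VXLAN’s 24-bit segment ID.

    # Where the "16 million or so" comes from: an 802.1Q VLAN ID is 12 bits,
    # QinQ stacks two of them, and a VXLAN VNI is a 24-bit field.
    vlan_ids   = 2 ** 12              # 4096 VLANs on a plain 802.1Q trunk
    qinq_ids   = vlan_ids * vlan_ids  # outer tag * inner tag
    vxlan_vnis = 2 ** 24              # 24-bit VXLAN Network Identifier

    print(vlan_ids)     # 4096
    print(qinq_ids)     # 16777216
    print(vxlan_vnis)   # 16777216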

Spanning Tree Sucks

One of my first introductions to networking was accidentally causing a bridging loop on a 10 megabit Ethernet switch (with a 100 Mbit uplink) as a green Solaris admin. I’d accidentally double-connected a hub, and I noticed the utilization LED on the switch went from 0% to 100% when I plugged a certain cable in. I entertained myself by plugging in and unplugging the port to watch the utilization LED fluctuate (that is, until the network admin stormed in and asked what the hell was going on with his network).

And thus began my love affair with bridging loops. After the Brocade presentation where we built a TRILL-based Fabric very quickly, with active-active uplinks and nary a port in blocking mode, Ethan Banks became a convert to my anti-spanning tree cause.

OpenFlow offers an even more comprehensive (and potentially more impressive) solution as well. More on that later.

Layer 2 Switching Isn’t Scaling

The current method by which MAC addresses are learned in modern switches causes two problems: only one viable path can be active at a time (the only way to prevent loops is to prevent multiple paths by blocking ports), and large Layer 2 networks involve so many MAC addresses that the tables don’t scale.

From QFabric, to TRILL, to OpenFlow (to half a dozen other solutions), Layer 2 transforms into something Layer 3-like. MAC addresses are routed just like IP addresses, and the MAC address becomes just another tuple (another recurring word) for a frame/packet/segment traveling from one end of your datacenter to another. In the simplest solution (probably TRILL?) MAC learning is done at the edge.
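
To make the scaling complaint concrete, here’s a toy Python model of classic flood-and-learn switching. It’s a deliberate simplification (real hardware tables, TRILL, and the rest are far more involved), but both pain points fall right out of it: unknown destinations get flooded everywhere, and every switch ends up holding every MAC in the domain.

    # Toy flood-and-learn switch: the two classic Layer 2 problems fall out of it.
    # 1) Unknown destinations are flooded everywhere (hence loops must be blocked).
    # 2) The MAC table grows with every host in the whole Layer 2 domain.
    class FloodAndLearnSwitch:
        def __init__(self, ports):
            self.ports = ports
            self.mac_table = {}          # MAC address -> port it was learned on

        def receive(self, src_mac, dst_mac, in_port):
            self.mac_table[src_mac] = in_port               # learn the source
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]            # known: forward one way
            return [p for p in self.ports if p != in_port]  # unknown: flood

    sw = FloodAndLearnSwitch(ports=["eth1", "eth2", "eth3"])
    print(sw.receive("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", "eth1"))  # flood: ['eth2', 'eth3']
    print(sw.receive("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", "eth2"))  # known: ['eth1']
    print(len(sw.mac_table))   # grows with every host in the domain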

There’s A Lot of Shit To Configure

Automation is coming, and in a big way. Whether it’s a centralized controller environment, or magical software powered by unicorn tears, vendors are chomping at the bit to provide some sort of automation for all the shit we need to do in the network and server world. While certainly welcomed, it’s a tough nut to crack (as I’ve mentioned before in Automation Conundrum).

Data center automation is a little bit like the Gom Jabbar. They tried and failed you ask? They tried and died.

“What’s in the box?”

“Pain. And an EULA that you must agree to. Also, man-years of customization. So yeah, pain.”

Ethernet Rules Everything Around Me

It’s quite clear that Ethernet has won the networking wars. Not that this is any news to anyone who’s worked in a data center for the past ten years, but it has struck me that no other technology has been so much as even mentioned as one for the future. Bob Metcalfe had the prophetic quote that Stephen Foskett likes to use: “I don’t know what will come after Ethernet, but it will be called Ethernet.”

But there are limitations (Layer 2 MAC learning, virtualization, VLANs, storage) that need to be addressed for it to become what comes after Ethernet. Fibre Channel is holding ground, but isn’t exactly expanding, and some crazy bastards are trying to merge the two.

Oof. Storage.

Most people agree that storage is going to end up on our network (converged networking), but there are as many opinions on how to achieve this network/storage convergence as there are nerd and pop culture references in my blog posts. Some companies are pro-iSCSI, others pro-FC/NFS, and some, like Greg Ferro, have the purest of all hate: he hates SCSI.

“Yo iSCSI, I’m really happy for you and imma let you finish, but Fibre Channel is the best storage protocol of all time”

So that’s “The Problem”. And for the most part, the articles on Networking Field Day, and the solutions the vendors propose will be framed around The Problem.

Your Momma Is So Proprietary

Let’s talk about a very sensitive subject for both networking admins and networking vendors: The subject of proprietary technologies.

The word proprietary in most cases has a very negative connotation. Most network designers would prefer that everything be based on open standards, like OSPF and (shudder) Spanning Tree. After all, IP and Ethernet are open standards, and those, along with many other open standard technologies, make the Internet and the industry what they are today. But at the same time, we can be a bit hypocritical, in that we also tend to want awesome features that are often on the proprietary side.

Conversely, most network vendors would love to come up with the Colonel’s Secret Recipe that makes their stuff so awesome that no sane engineer would dare use anything else. But they also like to say they’re open, in order to allay fears that a customer might have of being “locked in”. So when vendors go after customers, you’ll hear “open” a lot. When vendors go after each other, you hear “proprietary” thrown about as an epithet. And when a vendor is accused of being proprietary, they often lash out into an epic battle of “your momma is so proprietary”.

Proprietary Bad!

So last week there was a discussion on Twitter between former Cisco employee and new Dell Force10 employee Brad Hedlund (@bradhedlund), and former Cisco employee and new Juniper employee Christopher Hoff (@beaker). (By the way, they are both people I admire and respect.)

I believe they were talking about the different approaches their respective companies were taking to solve the evolving needs of modern data centers. Juniper’s solution is QFabric, while Dell Force10 is going the NVGRE/VXLAN/OpenFlow route. Brad cited QFabric as proprietary, and Christopher Hoff countered that Cisco’s FEX is also proprietary. And while that’s true, something about it bothered me a bit.

QFabric and FEX are both proprietary, but the effect of that proprietariness is very different. With QFabric, you can build a huge network fabric, without worrying about spanning-tree, and have one control plane for a whole mesh of switches. With FEX, you can plug what looks like a switch into a Nexus 5000 or Nexus 7000, and that switch looks like a line card on the 5000/7000. FEX affects the next hop. QFabric can affect your entire data center.

FEX is pretty limited, and honestly I think it’s fairly inconsequential in terms of its proprietariness. You can use FEX, or just hang another Cisco switch off a 5K/7K, like a Nexus 3000 (with its merchant silicon), or even an Arista or Juniper box. Even if you use FEX, the effect is limited to one switch hop away. How concerned should a designer be about the effect of proprietary FEX? Not very: it has little effect beyond that one hop.

The effect of QFabric, however, is potentially far more wide ranging.

That’s no moon, that’s a data center fabric

From the Packet Pushers episode (episode 51) on QFabric, Abner Germanow talks about 500 10 Gigabit Ethernet ports as the point where QFabric makes sense, which is a pretty large investment. If you figure roughly $2,000 a port, that would make it a $1,000,000 decision. If you order enough FEX/Nexus switches, you can spend that much too, but you can go step by step and back out if you want.

With the proprietary-versus-open debate going on, it’s quite understandable that Juniper is very sensitive to the word “proprietary”. However, it’s tough to classify QFabric as anything but, as Ivan Pepelnjak says, “completely proprietary”.

Right now there are several open standards, such as TRILL, SPB, OpenFlow, VXLAN, FCoE, NVGRE, and others, looking to solve many of the same data center problems that QFabric looks to solve. And from the looks of it, Juniper has been rather dismissive of some of the open fabric standards, such as in the much-discussed “Why TRILL Won’t Work For The Data Center” argument (requires registration, fuck you TechTarget). Juniper is also taking a wait-and-see approach to VXLAN.

Even so, I don’t think Juniper should care if people call it proprietary. Yes, it’s proprietary. And yes, the effect of this proprietary-ness is huge compared to Cisco’s FEX because it affects more of the data center. But that’s a good thing.

Right now, these open standards are mostly brand-spanking new, and no one is bat-shit crazy enough to build a multi-vendor fabric based on them.

OK, maybe someone is crazy enough to build a multi-vendor Ethernet fabric

So QFabric has the advantage there, since even open standards are likely to be vendor-locked for now. And QFabric is a bit more mature than most of the new standards, in that it’s at least implemented and released. (Despite the terrible, and I mean just awful, PR move from Cisco bashing Juniper. Seriously, Cisco, that shit reeks of sophomoric desperation. I feel cheap even linking it.)

What we do have to consider, however, is that in time the interoperability and maturity situation will be different, as it is for mature open standards today. It’s very common to have multi-vendor 802.1Q, OSPF, IS-IS, BGP, and spanning-tree deployments, without thinking twice about it. There will likely be a day when whatever new standards we’re dealing with now succeed and evolve to the point where we wouldn’t think twice about building say a TRILL fabric with multiple vendors like we do now with spanning-tree.

So QFabric is proprietary, and it’s not going to play well with others. That doesn’t discount it as a solution, but it is a serious consideration, more so than something like proprietary FEX. Proprietary has its advantages and disadvantages, and the effect can be substantial or inconsequential; those are all factors to consider. I won’t even hazard a guess at this point as to how it’s going to play out, but like a good Twitter battle, I’m going to enjoy watching.

FCoE: I’m not Dead! Arista: You’ll Be Stone Dead in a Moment!

I was at Arista on Friday for Tech Field Day 8, and when FCoE was brought up (always a good way to get a lively discussion going), Andre Pech from Arista (who did a fantastic job as a presenter) brought up an article written by Douglas Gourlay, another Arista employee, entitled “Why FCoE is Dead, But Not Buried Yet“.

FCoE: “I feel happy!”

It’s an interesting article, because much of the player-hating seems to be directed at TRILL, not FCoE, and as J Metz has said time and time again, you don’t need TRILL to do FCoE if you do FCoE the way Cisco does (by using Fibre Channel Forwarders in each FCoE switch). Arista, not having any Fibre Channel skills, can’t do it this way. If they were to do FCoE, Arista (like Juniper) would need to do it the sparse-mode/FIP-snooping way, which would need a non-STP way of handling multi-pathing such as TRILL or SPB.

Jayshree Ullal, the CEO of Arista, hated on TRILL and spoke highly of VXLAN and NVGRE (Arista is on the standards body for both). I think part of that is that, like Cisco, not all of their switches will be able to support TRILL, since TRILL requires new Ethernet silicon.

Even the CEO of Arista acknowledged that FCoE works great at the edge, where you plug a server with an FCoE CNA into an FCoE switch, and the traffic is sent along to native Ethernet and native Fibre Channel networks from there (what I call single-hop or no-hop FCoE). This doesn’t require any additional FCoE infrastructure in your environment, just the edge switch. The Cisco UCS Fabric Interconnects are a great example of this no-hop architecture.

I don’t think FCoE is quite dead, but I have to imagine that it’s not going as well as vendors like Cisco have hoped. At least, it’s not been the success that some vendors have imagined. And I think there are two major contributors to FCoE’s failure to launch, and both of those reasons are more Layer 8 than Layer 2.

Old Man of the Data Center

Reason number one is also the reason why we won’t see TRILL/Fabric Path deployed widely: It’s this guy:

Don’t let him trap you into hearing him tell stories about being an FDDI bridge, whatever FDDI is

The Catalyst 6500 series switch. This is “The Old Man of the Data Center”. And he’s everywhere. The switch is a bit long in the tooth, and although capacity is much higher on the Nexus 7000s (and even the 5000s in some cases), the Catalyst 6500 still has a huge install base.

And it won’t ever do FCoE.

And it (probably) won’t ever do TRILL/Fabric Path (spanning-tree fo-evah!)

The 6500s aren’t getting replaced in significant numbers from what I can see. Especially with the release of the Sup 2T supervisor, the 6500s aren’t going anywhere anytime soon. I can only speculate as to why Cisco is still investing in the 6500 so heavily, but its customers clearly aren’t in a hurry to give it up.

One reason customers haven’t replaced their 6500s is that the Nexus 7000 isn’t a full-on replacement: it has no service modules, limited routing capability (it only recently gained the ability to do MPLS), and a form factor that’s much larger than the 6500’s (although the 7009 just hit the streets with a form factor very similar to the 6500’s, which begs the question: why didn’t Cisco release the 7009 first?).

Premature FCoE

So reason number two? I think Cisco jumped the gun. They’ve been pushing FCoE for a while, but they weren’t quite ready. It wasn’t until July 2011 that Cisco released NX-OS 5.2, which is what’s required to do multi-hop FCoE on the Nexus 7000s and MDS 9000s. They’ve had the ability to do multi-hop FCoE on the Nexus 5000s for a bit longer, but not much. Yet they’ve been talking about multi-hop for longer than it was possible to actually implement. Cisco has had a multi-hop FCoE reference architecture posted on their website since March 2011, showing a beautifully designed multi-hop FCoE network with 5000s, 7000s, and MDS 9000s, that for months wasn’t possible to implement. Even today, if you wanted to implement multi-hop FCoE with Cisco gear (or anyone else’s), you’d be a very, very early adopter.

So no, I don’t think FCoE is dead. No-hop FCoE is certainly successful (even Arista’s CEO acknowledged as much), and I don’t think even multi-hop FCoE is dead, but it certainly hasn’t caught on (yet). Will multi-hop FCoE catch on? I’m not sure. We’ll have to see.

Fibre Channel and Ethernet: The Odd Couple

Fibre Channel? Meet Ethernet. Ethernet? Meet Fibre Channel. Hilarity ensues.

The entire thesis of this blog is that the traditional data center silos are collapsing. We are witnessing the rapid convergence of networking, storage, virtualization, server administration, security, and who knows what else. It’s becoming more and more difficult to be “just a networking/server/storage/etc person”.

One of the byproducts of this is the often hilarious fallout from conflicting interests, philosophies, and mentalities. And perhaps the greatest friction comes from the conflict of storage and network administrators. They are the odd couple of the data center.

Storage and Networking: The Odd Couple

Ethernet is the messy roommate. Ethernet just throws its shit all over the place, dirty clothes never end up in the hamper, and I think you can figure out Ethernet’s policy on dish washing. It’s disorganized and loses stuff all the time. Overflow a receive buffer? No problem. Hey, Ethernet, why’d you drop that frame? Oh, I dunno, because WRED, that’s why.

WRED is the Yosemite Sam of Networking

But Ethernet is also really flexible, and compared to Fibre Channel (and virtually all other networking technologies) inexpensive. Ethernet can be messy, because it either relies on higher protocols to handle dropped frames (TCP) or it just doesn’t care (UDP).

Fibre Channel, on the other hand, is the anal-retentive network: A place for everything, and everything in its place. Fibre Channel never loses anything, and keeps track of it all.

There now, we’re just going to put this frame right here in this reserved buffer space.

The overall philosophies are vastly different between the two. Ethernet (and TCP/IP on top of it) is meant to be flexible, mostly reliable, and lossy. You’ll probably get the Layer 2 frames and Layer 3 packets from one destination to another, but there’s no guarantee. Fibre Channel is meant to be inflexible (compared with Ethernet), absolutely reliable, and lossless.

Fibre Channel and Ethernet have very different philosophies in terms of building out a network. For instance, in Ethernet networks, we cross-connect the hell out of everything. Network administrators haven’t met two switches they didn’t want to cross-connect.


Did I miss a way to cross-connect? Because I totally have more cables

It’s just one big cloud to Ethernet administrators. For Fibre Channel administrators, one “SAN” is an abomination. There are always two air-gapped, completely separate fabrics.

The greatest SAN diagram ever created

The Fibre Channel host at the bottom is connected into two separate, Gandalf-separated, non-overlapping Fibre Channel fabrics. This allows the host two independent paths to get to the same storage array for full redundancy. You’ll note that the Fibre Channel switches on both sides have two links from switch to switch in the same fabric. Guess what? They’re both active. Multi-pathing in Fibre Channel is allowed through use of the FSPF protocol (Fabric Shortest Path First). Switch-to-switch Fibre Channel traffic is, in what we’d consider Ethernet-world terms, Layer 3 routed. It’s enough to give one multi-path envy.

One of the common ways (although by no means the only way) that an Ethernet frame can meet an unfortunate demise is through tail drop or WRED on a receive buffer. As a buffer fills, WRED or a similar technology will typically start randomly dropping frames, and the closer the buffer gets to full, the more aggressively frames are dropped. WRED prevents tail drop, which is bad for TCP, by dropping frames before the buffer fills completely.

Essentially, an Ethernet buffer is a bit like Thunderdome: Many frames enter, not all frames leave. With Ethernet, if you tried to do full line rate of two 10 Gbit links through a single 10 Gbit choke point, half the frames would be dropped.
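
Here’s a rough Python sketch of the WRED idea: below a minimum threshold nothing gets dropped, above a maximum everything does, and in between the drop probability ramps up. The thresholds and the drop curve are made up, not any vendor’s defaults.

    import random

    # Toy WRED: below min_th nothing is dropped, above max_th everything is,
    # and in between the drop probability ramps up linearly.
    def wred_drop(queue_depth, min_th=40, max_th=80, max_drop_prob=0.5):
        if queue_depth < min_th:
            return False
        if queue_depth >= max_th:
            return True                    # tail-drop territory
        ramp = (queue_depth - min_th) / (max_th - min_th)
        return random.random() < ramp * max_drop_prob

    for depth in (10, 50, 70, 90):
        drops = sum(wred_drop(depth) for _ in range(10_000))
        print(f"queue depth {depth}: ~{drops / 100:.0f}% of frames dropped")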

To a Fibre Channel administrator, this is barbaric. Fibre Channel is much more civilized, thanks to Buffer-to-Buffer (B2B) credits. Before a Fibre Channel frame is sent from one port to another, the sending port reserves space in the receiving port’s buffer. A Fibre Channel frame won’t get sent unless there’s guaranteed space at the receiving end. This ensures that no matter how much you oversubscribe a port, no frames will get lost. Also, when a Fibre Channel frame meets another Fibre Channel frame in a buffer, it asks for the Grey Poupon.

With Fibre Channel, if you tried to push two 8 Gbit links through a single 8 Gbit choke point, no frames would be lost, and each 8 Gbit port would end up throttled back to roughly 4 Gbit through the use of B2B credits.
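
A minimal Python sketch of the B2B credit idea: the sender starts with one credit per receive buffer, burns a credit per frame, and stops cold (rather than dropping) when it runs out, until the receiver hands credits back. The credit counts are arbitrary.

    from collections import deque

    # Toy buffer-to-buffer flow control: a sender may only transmit while it
    # holds credits; the receiver returns a credit (roughly an R_RDY) each
    # time it drains a frame from its buffer. Credit counts are arbitrary.
    class FCLink:
        def __init__(self, credits):
            self.credits = credits        # receive buffers the sender may use
            self.rx_buffer = deque()

        def send(self, frame):
            if self.credits == 0:
                return False              # throttled: wait, don't drop
            self.credits -= 1
            self.rx_buffer.append(frame)
            return True

        def drain_one(self):
            self.rx_buffer.popleft()
            self.credits += 1             # credit returned to the sender

    link = FCLink(credits=2)
    print([link.send(f"frame{i}") for i in range(4)])  # [True, True, False, False]
    link.drain_one()
    print(link.send("frame4"))                         # True again: a credit came back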

Why is Fibre Channel so anal retentive? Because SCSI, that’s why. SCSI is the protocol that most enterprise servers use to communicate with storage. (I mean, there’s also SATA, but SCSI makes fun of SATA behind SATA’s back.) Fibre Channel runs the Fibre Channel Protocol, which encapsulates SCSI commands into Fibre Channel frames (as odd as it sounds, Fibre Channel and Fibre Channel Protocol are two distinct technologies). FCP is essentially SCSI over Fibre Channel.
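
The layering, in deliberately simplified Python terms: a SCSI command rides in an FCP information unit, which rides in a Fibre Channel frame, which FCoE would in turn wrap in an Ethernet frame. None of these dicts reflect real header layouts; they just show who wraps whom.

    # Simplified layering: SCSI command -> FCP information unit ->
    # Fibre Channel frame -> (for FCoE) Ethernet frame.
    scsi_read = {"opcode": "READ(10)", "lba": 2048, "blocks": 8}

    fcp_cmnd = {"type": "FCP_CMND", "cdb": scsi_read}                         # SCSI over FCP
    fc_frame = {"s_id": "0x010002", "d_id": "0x010001", "payload": fcp_cmnd}  # FCP over FC
    fcoe_frame = {"ethertype": 0x8906, "payload": fc_frame}                   # FC over Ethernet

    print(fcoe_frame["payload"]["payload"]["cdb"]["opcode"])  # READ(10), four layers down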

SCSI doesn’t take kindly to dropped commands. It’s a bit of a misconception that SCSI can’t tolerate a lost command. It can, it just takes a long time to recover (relatively speaking). I’ve seen plenty of SCSI errors, and they’ll slow a system down to a crawl. So it’s best not to lose any SCSI commands.

The Converged Clusterfu… Network

We used to have separate storage and networking environments. Now we’re seeing an explosion of convergence: Putting data and storage onto the same (Ethernet) wire.

Ethernet is the obvious choice, because it’s the most popular networking technology. Port for port, Ethernet is the most inexpensive, most flexible, most widely deployed networking technology around. It has slain the FDDI dragon, put down the Token Ring revolution, and now it has its sights on the Fibre Channel Jabberwocky.

The current two competing technologies for this convergence are iSCSI and FCoE. SCSI doesn’t tolerate failure to deliver a SCSI command very well, so both iSCSI and FCoE have ways to guarantee delivery. With iSCSI, delivery is guaranteed because iSCSI runs on TCP, the reliable Layer 4 protocol. If a lower-level frame or packet carrying a TCP segment gets lost, no big deal: TCP uses sequence numbers, which are like FedEx tracking numbers, and can re-send a lost segment. So go ahead, WRED, do your worst.
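
A toy Python version of the FedEx-tracking-number idea: keep every unacknowledged segment around, and resend whatever the receiver never acknowledged. Grossly simplified, of course; real TCP adds windows, timers, SACK, and congestion control.

    # Grossly simplified retransmission logic: keep unacked segments keyed by
    # sequence number, and resend whatever the receiver never acknowledged.
    def deliver(segments, drop_seqs):
        unacked = dict(segments)                       # seq -> data, awaiting ACK
        received = {s: d for s, d in segments.items() if s not in drop_seqs}
        for seq in received:
            unacked.pop(seq)                           # ACKed, forget it
        for seq in sorted(unacked):                    # retransmit the losses
            received[seq] = unacked[seq]
        return [received[s] for s in sorted(received)]

    segments = {1: "iSCSI PDU part 1", 2: "iSCSI PDU part 2", 3: "iSCSI PDU part 3"}
    print(deliver(segments, drop_seqs={2}))   # all three arrive, despite the drop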

FCoE provides losslessness through priority flow control, which is similar to B2B credits in Fibre Channel. Instead of reserving space in the receiving buffer, PFC keeps track of how full a particular buffer is: the one dedicated to FCoE traffic. If that FCoE buffer gets close to full, the receiving Ethernet port sends a PAUSE MAC control frame to the sending port, and the sending port stops. This is done on a port-by-port basis, so end-to-end FCoE traffic is guaranteed to arrive without dropped frames. For this to work, though, the Ethernet switches need to speak PFC, which isn’t part of the regular Ethernet standard and is instead part of the DCB (Data Center Bridging) set of standards.
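
And here’s the PFC counterpart to the B2B sketch above: the receiver watches the queue for the lossless class (the one carrying FCoE, commonly CoS 3), and when it crosses a threshold it pauses that priority alone, while everything else keeps flowing. The thresholds and queue depths are made up for illustration.

    # Toy per-priority flow control: only priorities configured as lossless get
    # a per-priority PAUSE when their queue crosses a threshold; everything else
    # behaves like ordinary drop-when-full Ethernet. Numbers are made up.
    LOSSLESS_PRIORITIES = {3}       # the class carrying FCoE traffic
    PAUSE_THRESHOLD = 80            # queued frames before we tell the sender to stop

    def priorities_to_pause(queue_depths: dict) -> list:
        return [prio for prio, depth in queue_depths.items()
                if prio in LOSSLESS_PRIORITIES and depth >= PAUSE_THRESHOLD]

    queues = {0: 200, 3: 85, 5: 10}     # priority 0 may drop; that's allowed
    print(priorities_to_pause(queues))  # [3]: pause only the FCoE class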

Hilarity Ensues

Like the shields of the Enterprise, converged networking is in a state of flux. Network administrators and storage administrators are not very happy with the result. Network administrators don’t want storage traffic (and its silly demands for losslessness) on their data networks. Storage administrators are appalled by Ethernet and its devil-may-care attitude towards frames. They’re also not terribly fond of iSCSI, and only grudgingly accepting of FCoE. But convergence is happening, whether they like it or not.

Personally, I’m not invested in any particular technology. I’m a bit more pro-iSCSI than pro-FCoE, but I’m warming to the latter (and certainly curious about it).

But given how dyed-in-the-wool some network administrators and server administrators are, the biggest problems in convergence won’t be the technology, but the Layer 8 issues it generates. My take is that it’s time to think like a data center administrator, not a storage or network administrator. However, that will take time. Until then, hilarity ensues.

The Case for FCoE Terminology

A previous post of mine (Jinkies! It’s an FCoE Mystery) talked about the need for some additional terminology in the FCoE world, specifically three different types of FCoE deployments. It’s generated a lot of comments, some of which seem even longer than the actual post. I wanted to do a follow-up, specifically regarding my reasoning for having the topology definitions.

FCoE, as a term, is very broad: It means that you’re taking a Fibre Channel frame and encapsulating it into an Ethernet frame. That’s it. There’s only one “FCoE” method in terms of this encapsulation. However, my point is that there are a number of very different ways you can go about moving those FCoE frames onto your Ethernet network.

Take this scenario: You’re presented with a switch. It has a nice sticker on it that says “FCoE switch”. Now what does that tell you about how you can fit it in your network?

Attention Cisco: You’re welcome

Almost nothing.

If you said it was a data center bridging (DCB) switch, you would then know that it’s a transit switch: no FCoE frames will be encapsulated or de-encapsulated on that switch, but it supports at least PFC (priority flow control) so FCoE frames can be guaranteed to be lossless.

Now, if you were told the FCoE switch is an FCF switch (it has a full Fibre Channel stack), what does that tell you about how you can deploy it?

Still, almost nothing.

Take the example of a Cisco 6X00 Fabric Interconnect, the brains behind Cisco’s UCS server system. They are FCoE devices, and they are Fibre Channel Forwarders (FCFs). However, you can’t do what I would consider multi-hop FCoE with them: you can connect to a native Fibre Channel fabric, but not to an FCoE fabric. That is, you can’t set up an FCoE ISL (Inter-Switch Link; not the old Cisco pre-802.1Q VLAN tagging, as the term means something different in Fibre Channel) to another FCoE-capable switch. This is why I added a third method to Ivan Pepelnjak’s sparse-mode (SMFCoE) and dense-mode (DMFCoE) definitions. (Note: that’s an embarrassing number of acronyms.)

So by having those three distinctions (dense-mode/FCF, sparse-mode/DCB, one-hop/zero-hop), you can tell immediately how you can deploy an FCoE switch in your network. Some switches will likely support multiple modes, but most right now are limited to one way of being deployed on your network.

I understand the concerns that both J Metz from Cisco and Erik Smith from EMC have about adding complexity, but I think having these three topology definitions can go a long way toward simplifying discussions of FCoE topology, and in fact removes a lot of complexity (and mystery).

This morning I attended a webinar held by the Ethernet Alliance (based near me in Beaverton, Oregon) and I was happy to hear they also make a distinction between FCF FCoE switches and non-FCF FCoE switches. It really helps simplify things in terms of deployment.

Jinkies! It’s an FCoE Mystery!

Preamble: Chances are I’m going to get something wrong in this article. Please feel free to point anything out, so long as you state the correction. You can’t just say “that’s wrong” and not say why.

One of the great mysteries of the data center right now is FCoE.

Ah, Fibre Channel over Ethernet. It promises to do away with separate data and storage networks, and run everything on a single unified fabric. The problem though is that FCoE is a bit of a mystery. It involves two very different protocols (Ethernet and Fibre Channel), it involves the interaction between the protocols, and vendors can bicker over requirements, make polar opposite statements, and both can be technically correct.

So that makes it kind of a mess. I’ve been teaching basics of FCoE (mostly single-hop) for a bit now, and I think I’ve come across a way to simplify perception of FCoE: Realize FCoE is implemented in three different ways.

  • Single-hop FCoE (SHFCoE)
  • Dense-mode FCoE (DMFCoE) [multi-hop]
  • Sparse-mode FCoE (SMFCoE) [multi-hop]

When we talk about FCoE in general, we should be specific about which method is being referenced. That came to me when I read Ivan Pepelnjak’s article on the two ways to implement multi-hop FCoE, although I’m also adding single-hop as a separate way to implement FCoE.

While all three ways are technically “FCoE”, they are implemented in very different manners, have very different hardware and topology requirements, and are supported differently by different vendors. They’re almost three completely different beasts. So let’s talk about them separately, and be specific when we do.

So let’s talk about FCoE.

Single Hop FCoE (SHFCoE)

This is the simplest way to implement FCoE, as it doesn’t really require any of the new data center standards on the rest of your network devices. Typically, a pair of switches is enabled for FCoE, as well as some server network/storage adapters known as CNAs (Converged Network Adapters).

In the Cisco realm, this is either a Nexus 5000 series switch or the Fabric Interconnects that are part of the Cisco UCS server system. In the HP realm, this might be part of Virtual Connect. A CNA is an Ethernet/Fibre Channel combo networking card. The server’s operating system is presented with separate native Ethernet and native Fibre Channel devices, so the OS doesn’t even know that FCoE is going on. It just thinks there’s native Ethernet and native Fibre Channel.

Oh hey, look! An actual diagram. Not just proof you were alive in the 80’s.

Ethernet frames containing FC frames are isolated onto their own FCoE VLANs. When the Ethernet frames reach the FCoE switch, they are de-encapsulated and forwarded via regular Fibre Channel methods to their final destination as native Fibre Channel.

This method has been in place for a few years now, and it works (and works well). It’s pretty well understood, and there’s plenty of stick time for it. You also don’t need to do anything special on your Ethernet networks, and most of the time nothing special needs to be done on your Fibre Channel SAN (although NPV/NPIV may be needed to get the FCoE switch connected to the Fibre Channel switch). You don’t have to worry about any of the new DCB standards, such as DCBX, PFC, ETS, etc., because they only need to be on the FCoE single-hop switch, and are already there. No tweaking of those standards is typically necessary.

The Multi-Hops

There are two types of multi-hop FCoE, where the FCoE goes beyond just the initial switch. J Metz from Cisco elaborated on the various definitions (and types) of multi-hop in a great blog article, but I think we can make it even simpler by saying that multi-hop means more than one FCoE switch.

Dense-Mode FCoE (DMFCoE)

With DMFCoE, an FCoE frame is received at the DMFCoE switch and de-encapsulated into a regular FC frame. The FCF (Fibre Channel Forwarder) portion of the DMFCoE switch makes the forwarding decision and sends the frame to the egress port. At that port, the FC frame is re-encapsulated into an FCoE Ethernet frame and sent out to the next hop.

With DMFCoE, each of your Ethernet switches is also a full-stack Fibre Channel switch. You’re running essentially a Fibre Channel SAN overlay on top of your Ethernet switches. Zoning, name services, FSPF, etc., are all the same as on your regular Fibre Channel network. Also, FCoE frames are forwarded not by Ethernet, but by Fibre Channel routing (FSPF), which is multi-path (so no bridging loops).
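
In sketch form, the per-hop life of a frame in a DMFCoE switch looks something like this (a Python illustration of the sequence of steps, not anyone’s actual forwarding pipeline; the tables and addresses are made up):

    # Per-hop sketch for a dense-mode (FC-forwarded) switch: decapsulate, make a
    # Fibre Channel forwarding decision on the FCID, then re-encapsulate for the
    # next Ethernet hop.
    def dense_mode_hop(fcoe_frame, fspf_table, next_hop_macs, my_mac):
        fc_frame = fcoe_frame["payload"]               # strip the Ethernet wrapper
        egress_port = fspf_table[fc_frame["d_id"]]     # FCF lookup on the FC D_ID
        return egress_port, {                          # new Ethernet header, same FC frame
            "src_mac": my_mac,
            "dst_mac": next_hop_macs[egress_port],
            "ethertype": 0x8906,
            "payload": fc_frame,
        }

    frame = {"ethertype": 0x8906, "payload": {"d_id": "0x010001", "s_id": "0x020001"}}
    port, out = dense_mode_hop(frame,
                               fspf_table={"0x010001": "eth1/3"},
                               next_hop_macs={"eth1/3": "0e:fc:00:01:00:01"},
                               my_mac="00:de:ad:be:ef:01")
    print(port, out["dst_mac"])   # eth1/3 0e:fc:00:01:00:01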

The drawback is that it requires a pretty advanced switch. In fact, it wasn’t until July of 2011 that Cisco had more than one switch that could even do DMFCoE (the MDS and Nexus 7000 needed NX-OS 5.2 for DMFCoE, and 5.2 wasn’t released until July).

Alternative names for dense-mode FCoE:

  • FC-Forwarded FCoE
  • DMFCoE
  • Full FCoE
  • Heavy FCoE
  • Overlay Mode

Sparse Mode FCoE (SMFCoE)

Sparse Mode FCoE (SMFCoE) is when an Ethernet network forwards FCoE frames via regular Ethernet forwarding mechanisms. Unlike DMFCoE, the Fibre Channel frame is not de-encapsulated (although it might be snooped with FIP snooping if the switch supports it). For the most part, the Ethernet switches have little to no awareness of the Fibre Channel layers.
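
As a sketch, an SMFCoE switch is just an Ethernet switch that forwards FCoE frames on the outer MAC header, and optionally peeks at FIP (a separate EtherType) to learn which ports lead to a real FCF. The port-tracking below is a big simplification of what FIP snooping bridges actually install (they program ACLs), but it shows the idea; all the addresses are made up.

    # Sketch of a sparse-mode (Ethernet-forwarded) switch: FCoE frames are
    # forwarded purely on the outer Ethernet header; FIP frames may be snooped
    # to learn which ports face an FCF.
    FIP_ETHERTYPE = 0x8914
    FCOE_ETHERTYPE = 0x8906

    class SparseModeSwitch:
        def __init__(self):
            self.mac_table = {}       # ordinary Ethernet learning
            self.fcf_ports = set()    # ports where FIP advertisements were seen

        def receive(self, frame, in_port):
            self.mac_table[frame["src_mac"]] = in_port
            if frame["ethertype"] == FIP_ETHERTYPE:
                self.fcf_ports.add(in_port)            # snoop: an FCF lives this way
                return None
            return self.mac_table.get(frame["dst_mac"])  # plain MAC lookup, FC header untouched

    sw = SparseModeSwitch()
    sw.receive({"ethertype": FIP_ETHERTYPE, "src_mac": "0e:fc:00:ff:ff:01",
                "dst_mac": "ff:ff:ff:ff:ff:ff"}, "eth1")
    print(sw.fcf_ports)   # {'eth1'}: the switch now knows where an FCF is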

The benefit of SMFCoE is that it doesn’t require quite the beefiness that DMFCoE needs, as you don’t need silicon that can understand and forward FCP (Fibre Channel Protocol) traffic. You still need priority flow control and other DCB standards, and probably DCBx (to set up the FCoE lossless CoS and so forth).

The drawback is that you’ll usually need some sort of multi-path Ethernet protocol, such as TRILL, SPB, or Fabric Path, as spanning-tree would likely be a disaster for a storage protocol. Since none of the potential multi-path Ethernet protocols are in wide use across the various vendors, SMFCoE is somewhat dead right now.

Alternative names for SMFCoE might be:

  • Ethernet-forwarded FCoE
  • FCoE light
  • Diet-FCoE

Why Differentiate?

Because it gets damn confusing otherwise. Recently, Juniper and Cisco had a dustup about the requirement of TRILL for FCoE. Juniper posted an article on why TRILL won’t scale for data centers, and mentioned that TRILL is required for FCoE. J Metz from Cisco responded with, essentially, “no, FCoE doesn’t need TRILL”. Who’s right? Well, they both are.

Cisco has gone the DMFCoE route, so no, you don’t need TRILL (or other multi-path Ethernet). Since Juniper is going SMFCoE, it will need some sort of multi-pathing (and the Juniper article is calling for QFabric to be that solution).

Whither FCoE?

So can you do FCoE multi-hop right now, either DMFCoE or SMFCoE? It probably would be wise to wait. In the Cisco realm, the code that supports DMFCoE was just released in July for their Nexus 7K and MDS lines, and the 5Ks have been able to do DMFCoE since December, I think (although I don’t know anyone that did).

Right now, I don’t know of any customers actually doing multi-hop FCoE (and I don’t know anyone who’s all that interested). SMFCoE is a moot point until more switches get multi-path Ethernet, whether that be QFabric, TRILL, SPB, or another method.