Requiem for FCoE

FCoE is dead. We’re beyond the point of even asking if FCoE is dead; we all know it just is. It was never widely adopted, and it’s likely never going to be. It enjoyed a few awkward deployments here and there, and a few isolated islands in the overall data center market, but it never caught on the way it was intended to.

So What Killed FCoE?

Here I’m going to share a few thoughts on why FCoE is dead, and really never was A Thing(tm).

It Was Never Cheaper

Ethernet is the champion of connectivity. It’s as ubiquitous as water in an ocean and air in the.. well, air. All the other mediums (ATM, Frame Relay, FDDI, Token Ring) have long ago fallen by the wayside. Even mighty Infiniband has fallen. Only Fibre Channel still stands as the alternative for a very narrow use case.

The thought was that the sheer volume of Ethernet ports would make them cheaper (and that still might happen), but right now there is no real price benefit from using FCoE versus FC.

In the beginning, especially, FCoE was quite a bit more expensive than running separate FC and Ethernet options.

Even if it comes out as a draw, the extra management overhead and clumsy integration between the two management styles make FCoE more expensive from a practical perspective. Which brings me to the next point:

Fibre Channel and Ethernet/IP Networks are Just Managed Differently

The joke is that you can unplug any Ethernet cable for up to 7 seconds, plug it back in, and you don’t have to tell anyone. If you unplug any Fibre Channel cable for even 2 seconds, find a new job.

Fibre Channel is really SCSI over Fibre Channel (and now NVMe over Fibre Channel, though that’s uncommon). And SCSI is a high-maintenance payload. IP-based protocols have recovery mechanisms at various levels if payloads are lost, or the protocols simply don’t care. SCSI does care if a message is lost; it cares a lot. Its recovery mechanisms are time-consuming, and even then you can still end up with data corruption.

As a result, Fibre Channel networks are handled with a lot more care than traditional Ethernet/IP networks. The environment is a lot more static, with changes made infrequently, whereas Ethernet/IP networks, especially with EVPN/VXLAN implementations, are only getting more dynamic. Dynamic and Fibre Channel don’t go well together. FCoE doesn’t change that.

Trying to impose the same rules of Fibre Channel management onto an Ethernet/IP switch generally doesn’t go over well.

Fibre Channel Interconnectivity Has Always Sucked

Fibre Channel switches are designed around open standards (such as ANSI T11). They’re well documented and well understood. And yet few people build fabrics that include both Cisco and Brocade (now part of Broadcom) switches.

They implemented the standards slightly differently, and there’s lots of orchestration and control plane stuff going on (yes, I know, super technical here).

There are a few ways around this, such as interoperability mode, but it’s clumsy and awkward and seldom used (except perhaps when migrating from one vendor to another).

There’s also NPIV in combination with NPV/Access Gateway mode (Cisco’s and Brocade’s “proxy” modes, respectively), but that makes the NPV/Access Gateway switches “invisible” to the fabric, sidestepping the fabric services integration rather than solving it.
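As a rough illustration of how this pairing looks on Cisco gear (a hedged sketch; your platform, licensing, and topology will vary), the core switch enables NPIV so one F-port can accept many logins, while the edge switch runs NPV and simply proxies its hosts’ logins upstream instead of participating in fabric services:

```
! Core switch: NPIV lets a single F-port accept multiple FC logins,
! one per host proxied through the NPV edge switch.
feature npiv

! Edge switch: NPV mode forwards host logins to the core and runs no
! fabric services of its own -- it's invisible to the fabric.
! (On Cisco switches, enabling NPV erases the config and reloads.)
feature npv
```

This is exactly why it “gets around” rather than solves interoperability: the edge switch stops being a fabric switch at all, so there’s nothing left to interoperate.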

Ethernet itself is way more interoperable. You wouldn’t think twice about connecting a Cisco switch to an Arista switch via Ethernet/IP, or a Juniper switch to an Extreme Networks switch. The protocols are simpler and far more interoperable, and that’s an advantage of those technologies. FCoE forces you down the single-vendor route, since FC is generally single-vendor.

(One exception we’re seeing is VXLAN/EVPN: right now you would not build a VXLAN/EVPN network with two vendors, and it may never be a good idea to. That might be a future blog post.)

Fibre Channel Generally is in Decline

While not a direct reason why FCoE is dead, it certainly didn’t help. When FCoE was developed, Fibre Channel was in its heyday. It was, for a while, the very best way to do storage. Now there are a lot of options out there, and many of them are better suited for most environments than Fibre Channel. And there’s not much innovation in declining tech.

Fibre Channel in general is dying off, but like a lot of technology in IT, it’s dying very, very slowly. Unix servers peaked around 2004, and have been in decline since. Still though, both IBM and Oracle (Sun) continue to do respectable business in the Unix market.

Probably a better way to describe Fibre Channel is to call it a legacy technology. Enterprise IT especially is very sedimentary, full of layers of legacy tech: technology that isn’t growing or expanding, but that we keep around because modernization is either not possible or too costly (or management makes poor choices…).

Fibre Channel is likely to be around for a while, and while there will be new deployments here and there (I was involved in one recently) it will mostly be deployed and refreshed to “keep the lights on”, so to speak. Fibre Channel is mostly a “scale up” technology, and storage has moved to “scale out” where Fibre Channel is not as well suited.

Since Fibre Channel is in decline, the need to put it on Ethernet is, via the transitive property, also in decline.

One Place FCoE Will Continue (and Thrive)

Cisco UCS uses FCoE for its B-series blades. It works, and it works well. It’s its own little island of FCoE and doesn’t require any special configuration: the fabric and the hosts see native Fibre Channel, so operationally it’s no different than regular Fibre Channel connectivity to a SAN. It works because it’s mostly hidden from everyone involved. It just looks like regular FC.

I think FCoE will continue in that environment as long as B-series blades support Fibre Channel.

One Way FCoE Might Come Back

There’s one scenario I think is possible (though not likely) where FCoE makes a resurgence, and even becomes the dominant way Fibre Channel is deployed: when native Fibre Channel switches no longer make sense.

Right now, development in Fibre Channel is not… much of a thing. 64 GFC has been a standard for a while, and only recently has Brocade shipped a product. Cisco has announced future support for 64 GFC but hasn’t released any switches or line cards that support it. There are also 128 GFC and 256 GFC standards (using four lanes, much like 40, 100, and 400 Gigabit Ethernet), but as far as I know those interfaces have never been produced, even though the 128 GFC standard has been around for about 5 years and the 256 GFC standard for about 2. I don’t foresee either being implemented. Ever.

So it’s certainly possible that 64 GFC is the last interface speed Fibre Channel will see. There doesn’t seem to be much demand for faster, and the vendors (Cisco and Brocade/Broadcom) seem to be in wait-and-see mode. Ethernet is getting all the speed increases, with 400 Gigabit interfaces shipping, 100 Gigabit commonplace and relatively cheap, and plans for 800 Gigabit already being finalized.

So if there’s demand for Fibre Channel faster than 64 GFC (such as for ISLs), getting to those speeds might require Ethernet. I think it would take the form of a switch that we treat like a Fibre Channel switch: we’d build a single-vendor SAN, use zones and zonesets, and carry only storage traffic. There would be A/B fabrics, etc. Hosts would have separate FCoE and Ethernet interfaces, and wouldn’t try to combine the two. But instead of native Fibre Channel interfaces, the interfaces would be FCoE. You can do this today: you can build a Fibre Channel fabric comprised entirely of FCoE interfaces from the host to the storage array. It’s just not currently practical from a cost and switch-availability standpoint.
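To make that concrete, here’s a hedged sketch of what a single switch in such an FCoE-only fabric might look like on Cisco NX-OS (the VLAN/VSAN numbers, interface names, and WWPNs are hypothetical, and FCoE licensing and QoS prerequisites vary by platform). Note that everything below the interface binding is just ordinary Fibre Channel zoning:

```
feature fcoe

! Dedicate a VLAN to FCoE and map it to a VSAN (one per A/B fabric)
vlan 100
  fcoe vsan 100
vsan database
  vsan 100

! Bind a virtual Fibre Channel interface to the host-facing Ethernet port
interface vfc101
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 100 interface vfc101

! Zoning is identical to native FC: same zones, zonesets, pwwn members
zone name host1-array1 vsan 100
  member pwwn 10:00:00:00:c9:aa:bb:cc
  member pwwn 50:06:01:60:11:22:33:44
zoneset name fabricA vsan 100
  member host1-array1
zoneset activate name fabricA vsan 100
```

Operationally this is a Fibre Channel fabric in every way that matters; only the physical interfaces and framing change.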

Final Thoughts

So Fibre Channel over Ethernet is pretty much dead. It never really became A Thing, whereas Fibre Channel most certainly was A Thing. But now Fibre Channel is a legacy technology, so while we’ll continue to see it for years to come, it’s not an area that’s likely to see a lot of innovation or investment.

Responses to Requiem for FCoE

  1. Howard Marks says:

    FCoE was the solution to blade servers (especially but not exclusively Cisco UCS) not having enough slots for all the connectivity users want. Combine the FC HBA and NIC into a CNA and voilà: FC for storage and Ethernet for everything else, in just one mezzanine slot.

    Niche problem, niche solution.

