Ethernet over Fibre Channel

Since the ’80s, Ethernet has dominated the networking world. The LAN, the WAN, and the MAN are all now dominated by Ethernet links. FDDI, HIPPI, ATM, Frame Relay: they’ve all gone by the wayside. But there is one protocol that has stuck around to run alongside Ethernet, and that’s Fibre Channel. While Fibre Channel has mostly sat in the shadow of Ethernet, relegated to carrying only storage traffic, it’s now poised to overtake Ethernet in the battle for the LAN. And the way Fibre Channel is taking on Ethernet is with Ethernet over Fibre Channel.


Suck it, Metcalfe

While Ethernet has enjoyed tremendous popularity, it has several (debilitating) limitations. For one, forwarding is haunted by the possibility of a loop, and Spanning Tree Protocol is required to keep a watchful eye. Unfortunately, STP is almost as bad as a loop, with ample opportunity for misconfigurations (rogue root bridges) and other shenanigans. TRILL, a Layer 2 multi-pathing overlay for Ethernet, hasn’t seen much adoption in its standard form, and its derivatives (FabricPath from Cisco and VCS from Brocade) haven’t fared much better.

Rather than pile fix upon fix on Ethernet, SAN administrators (known for being the loose cannons of the data center) are making a bold push to take over LAN networks as well… and they’re winning.

The T17 committee was established by INCITS, the standards body responsible for Fibre Channel, FCoE, and now EoFC. T17 is responsible for all the specifications around EoFC, and in particular the interface between Ethernet and Fibre Channel.

“We really have a lot of advantages over Ethernet in terms of topology and forwarding. For one, we’re a lossless network, providing a lot more reliability than a traditional Ethernet network. We also have multi-pathing built in with FSPF routing, while still providing the Layer 2 adjacencies required by the old, crusty crapplications that are somehow still on people’s networks.” -John Etherman, T17 committee chair.

They’ve made a lot of progress in a relatively short time, from ironing out the specifications to getting ASICs spun, and their work is bearing fruit. Products are starting to ship, and several marquee clients have announced fabrics built entirely with EoFC.

A Day in the Life of an EoFC Frame

To keep compatibility with older Ethernet/TCP/IP stacks, CNHs (Converged Network HBAs) provide Ethernet interfaces to the host operating system. The frame is formed by the host, and the CNH encapsulates the Ethernet frame into a Fibre Channel frame. Since the standard Ethernet MTU is only 1500 bytes, Ethernet frames fit quite nicely into the maximum 2048-byte Fibre Channel frame. The T17 working group also provides specifications for Jumbo Ethernet frames up to 9216 bytes, by fragmenting the frame into multiple 2048-byte Fibre Channel frames.
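
If you want a back-of-the-napkin feel for that fragmentation math, here’s a quick Python sketch. It’s purely illustrative: it ignores Fibre Channel header and CRC overhead, and the names are mine, not anything out of the T17 documents.

# Rough sketch of how many 2048-byte Fibre Channel frames it takes to carry
# one Ethernet frame under the scheme described above (illustrative only).
import math

FC_MAX_FRAME = 2048  # max Fibre Channel frame size used in this example

def fc_frames_needed(ethernet_frame_len: int) -> int:
    """Number of Fibre Channel frames needed to carry one Ethernet frame."""
    return math.ceil(ethernet_frame_len / FC_MAX_FRAME)

print(fc_frames_needed(1500))   # 1 -- a standard frame fits in a single FC frame
print(fc_frames_needed(9216))   # 5 -- a jumbo frame is fragmented across five FC frames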

WWPNs are derived from the MAC addresses that the host sees. Since MAC addresses aren’t a full 64 bits, the T17 working group has allocated the 80:08 prefix to EoFC. So if your MAC address were 00:25:B6:01:23:45, the WWPN would be 80:08:00:25:B6:01:23:45. This keeps the EoFC WWPNs out of the range of the initiators (starting with 1 or 2) and targets (starting with 5).
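
The mapping is simple enough to sketch in a couple of lines of Python (again, just an illustration of the prefixing described above; the function name is mine):

EOFC_PREFIX = "80:08"  # prefix allocated to EoFC, per the text above

def mac_to_wwpn(mac: str) -> str:
    """Prepend the EoFC prefix to a colon-separated MAC address to form the WWPN."""
    return f"{EOFC_PREFIX}:{mac}"

print(mac_to_wwpn("00:25:B6:01:23:45"))  # 80:08:00:25:B6:01:23:45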


FC_IDs are assigned to the WWPNs on a transitory basis, and are what the Fibre Channel headers use as source/destination addresses. When the Fibre Channel frame reaches its destination NX_Port (Node LAN port), the Ethernet frame is de-encapsulated from the Fibre Channel frame, and the host’s networking stack takes care of the rest. From the host’s perspective, it has no idea the transport is Fibre Channel.

Reliability

The biggest benefit of EoFC is the lossless network that Fibre Channel provides. Since the majority of traffic in modern data center workloads is East/West, busy hosts can suffer from an incast problem, where buffers get overloaded as a single 10 Gigabit link receives packets from multiple sources, all operating at 10 Gigabit. The Fibre Channel transport provides port-to-port flow control and can ensure that nothing gets dropped.
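
Fibre Channel gets that losslessness from buffer-to-buffer credits: a sender only transmits when the receiver has advertised a free buffer, so frames wait instead of getting dropped. Here’s a toy Python sketch of the idea, nothing like what an actual ASIC does:

# Conceptual illustration of credit-based flow control: transmit only when the
# receiver has a free buffer; otherwise hold the frame (back-pressure, no drops).

class Receiver:
    def __init__(self, buffers: int):
        self.credits = buffers   # free buffers advertised to the sender
        self.queue = []

    def accept(self, frame):
        self.queue.append(frame)

    def drain(self):
        """Process one buffered frame and return a credit to the sender."""
        if self.queue:
            self.queue.pop(0)
            self.credits += 1

def send(rx: Receiver, frame) -> bool:
    """Send a frame only if the receiver has a credit; never drop, just wait."""
    if rx.credits > 0:
        rx.credits -= 1
        rx.accept(frame)
        return True
    return False  # sender pauses until a credit comes back

rx = Receiver(buffers=2)
print([send(rx, f"frame{i}") for i in range(3)])  # [True, True, False]
rx.drain()                                        # receiver frees a buffer
print(send(rx, "frame2"))                         # True -- transmission resumes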

Configuration

Configuration of EoFC is fairly straightforward. I’ve got access to a new Nexus 8008, with a 32 Gbit EoFC line card that I’ve connected to a Cisco C-series server with a CNH.

nexus1# feature eofc
EoFC feature checked out
Loading Ethernet module...
Loading Spanning Tree module...
Loading LLDP...
Grace period license remaining: 110 days

nexus1# vlan 10
nexus1(vlandb)# vsan 10
nexus1(vsandb)# vsan 10 name Storage-A
nexus1(vsandb)# vsan 1010
nexus1(vsandb)# vsan 1010 name Ethernet transport
nexus1(vsandb)# eofc vlan 10
nexus1(vsandb)# interface veth1
nexus1(vif)# switchport
nexus1(vif)# switchport mode access
nexus1(vif)# switchport access vlan 10
nexus1(vif)# bind interface fc1/1
nexus1(vif)# no shut
nexus1(vif)# int fc1/1
nexus1(if)# switchport mode F
nexus1(if)# switchport allowed vsan 10,1010
nexus1(if)# no shut 

Doing a show interface shows me that my connection is live.

 nexus1# show interface ethernet veth1 
 vEthernet1 is up
 Hardware: 1000/10000 Ethernet, address: 000d.ece7.df48 (bia 000d.ece7.df48)
 Attached to: fc1/1 (pWWN: 80:08:00:0D:EC:E7:DF:48)
 MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
 reliability 255/255, txload 1/255, rxload 1/255
 Encapsulation EoFC/ARPA
 Port mode is EoFC
 full-duplex, 32 Gb/s, media type is 1/2/4/8/16/32g
 Beacon is turned off
 Input flow-control is off, output flow-control is off
 Rate mode is dedicated
 Switchport monitor is off
 Last link flapped 09:03:57
 Last clearing of "show interface" counters never
 30 seconds input rate 2376 bits/sec, 0 packets/sec
 30 seconds output rate 1584 bits/sec, 0 packets/sec
 Load-Interval #2: 5 minute (300 seconds)
 input rate 1.58 Kbps, 0 pps; output rate 792 bps, 0 pps
 RX
 0 unicast packets 10440 multicast packets 0 broadcast packets
 10440 input packets 11108120 bytes
 0 jumbo packets 0 storm suppression packets
 0 runts 0 giants 0 CRC 0 no buffer
 0 input error 0 short frame 0 overrun 0 underrun 0 ignored
 0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
 0 input with dribble 0 input discard
 0 Rx pause
 TX
 0 unicast packets 20241 multicast packets 105 broadcast packets
 20346 output packets 7633280 bytes
 0 jumbo packets
 0 output errors 0 collision 0 deferred 0 late collision
 0 lost carrier 0 no carrier 0 babble
 0 Tx pause
 1 interface resets
nexus1#

Speeds and Feeds

EoFC is backwards compatible with 1/2/4/8 and 16 Gigabit Fibre Channel, but it’s really expected to take off with the newest 32/128 Gbit interfaces being released from vendors like Cisco, Juniper, and Brocade. Brocade, QLogic, Intel, and Emulex are all expected to provide CNHs operating at 32 Gbit speeds, with 32 and 128 Gbit interfaces on line cards and fixed switches to operate as ISLs.


Nexus 8008: 384 ports of 32 Gbit EoFC

Switches are already shipping from Cisco and Brocade, with Juniper to release their newest QFC line before the end of Q2.

Top 5 Reasons The Evaluator Group Screwed Up

It’s been a while since the trainwreck of a “study” commissioned by Brocade and performed by The Evaluator Group, but it’s still being discussed in various storage circles (and that’s not good news for Brocade). Some people pretty much parroted the results, seemingly without reading the actual test, then got all pissy when confronted about it. I did a piece on my interpretations of the results, as did Dave Alexander of WWT and J Metz of Cisco. Our mutual conclusion can be best summed up with a single animated GIF.

 

[Animated GIF: bullshit]

But now that a bit of time has passed and I’ve had time to absorb Dave and J’s opinions, as well as others’, I’ve come up with a list of the Top 5 Reasons The Evaluator Group Screwed Up. This isn’t the complete list, of course, just some of the more glaring problems. Let’s start with #1:

Reason #1: I Have No Idea What I’m Doing

Their hilariously bad conclusion was that the higher variance in response times and higher CPU usage were caused by the software initiators. Except they didn’t use software initiators. They had actually configured hardware initiators, and didn’t know it. Let that sink in: they were charged with performing an evaluation without knowing what they were doing.

From the Evaluator Group’s response: “The Cisco UCS VIC 1240 hardware CNA’s were utilized. Referring to them as software initiators caused some confusion. The Cisco VIC is a hardware initiator and we configured them with virtual HBAs. Evaluator Group has no knowledge of the internal architecture of the VIC or its driver. Our commentary of the possible cause for higher CPU utilization is our opinion and further analysis would be required to pinpoint the specific root cause.”

Of course, it wasn’t the software initiator. They didn’t use a software initiator, but they were so clueless they didn’t know they’d actually used a hardware initiator. Without knowing how they performed their tests (since they didn’t publish their methodology), it’s purely speculation, but it looks like the problem was caused by congestion (from them architecting the UCS solution incorrectly).

Reason #2: They’re Hilariously Bad At Math.

They claimed FCoE required 50% more cables, based on the fact that there were 50% more cables in the FCoE solution than the FC solution. Which makes sense… except that the FC system had zero Ethernet.

That’s right: in the HP/Fibre Channel solution, each blade had absolutely zero Ethernet connectivity. None. Zilch. In the Cisco UCS solution, every blade had full Ethernet and Fibre Channel connectivity. Why did they do that? Probably because had they included any network connectivity in the HP system, the cable count would have shifted in FCoE’s favor. Let me state this again, because it’s astonishingly stupid: they claimed FCoE (which included Ethernet and FC connectivity) required more cables, without including any network connectivity for the HP/FC system.

[Image: not even mad]

Also, they made some power/cooling claims, despite the fact that the UCS solution didn’t require a separate FC switch (it’s capable of being a full-fledged Fibre Channel switch by itself), while the HP solution would have required a separate pair of Ethernet switches (which weren’t included). So yeah, their math is a bit off. Had they done things, you know, correctly, the power, cooling, and cable count would have flipped in favor of FCoE.

Reason #3: UCS is Hard, You Guys!

They whinged about UCS being more difficult to set up. Anytime you’re dealing with unfamiliar technology, it’s natural that it’s going to be more difficult. However, they claimed that they had zero experience with HP as well (seriously, who at Brocade hired these guys?). How easy is UCS? Here is a video done from Amsterdam where a couple of Cisco techs added a new chassis and blade and had it booted up and running ESXi in less than 30 minutes, from in the box to booted. Cisco UCS is different from other blade systems, but it’s also very easy (and very quick) to stand up. And keep in mind, the video I linked was done in Amsterdam, so they were probably baked.

Reason #4: It Contradicts Everyone Else’s Results (Especially Those Who Know What They’re Doing)

For the past couple of years, VMware and NetApp have been doing performance tests on various storage protocols. Here’s one from a few years ago, which includes (native) 4 and 8 Gbit Fibre Channel, 10 Gbit FCoE, 10 Gbit iSCSI, and 10 Gbit NFS. The conclusion? The protocol doesn’t much matter. They all came out about the same when normalized for bandwidth. The big difference is in the storage backend. At least they published their methodology (I’m looking at you, Evaluator Group). Here’s one from Demartek that shows a mixture of storage protocols saturating 10 Gbit Ethernet. Again, the limitation is only the link speed itself, not the protocol. And again, Demartek published their methodology.

Reason #5: How Did They Set Everything Up? Magic!

Most of the time with these commissioned reports, the details of how it’s configured are given so that the results can be reproduced and audited. How did the Evaluator Group set up their environment?

[Animated GIF: GOB doing magic]

As far as I can tell, magic. There are several things they could easily have gotten wrong with the UCS setup, and given their mistake about software/hardware initiators, they quite likely did. They didn’t even mention which storage vendor they used.

So there you have it. A bit of a re-hash, but hey, it was a dumb report. The upside though is that it did provide me with some entertainment.

Fibre Channel: The Heart of New SDN Solutions

From Juniper to Cisco to VMware, companies are rolling out new SDN solutions. Juniper’s Contrail, Cisco’s ACI, VMware’s NSX, and more are all vying to be the next generation of data center networking. What is surprising, however, is what’s at the heart of these new technologies.

Is it VXLAN, NVGRE, or OpenFlow? Nope. It’s Fibre Channel.

Seriously.

If you think about it, it makes sense. Fibre Channel has been doing fabrics since before we ever called Ethernet fabrics, well, fabrics. And this isn’t the first time that Fibre Channel has shown up in unusual places. There’s a version of Fibre Channel that runs inside certain airplanes, including jet fighters like the F-22.


Keep the skies safe from FCoE (sponsored by the Evaluator Group)

The new generation of switches is capable of Data Center Bridging (DCB), which enables Fibre Channel over Ethernet. These chips are also capable of doing native Fibre Channel. So rather than build complicated VPLS fabrics or routed networks, various data center switching companies are leveraging the inherent Fibre Channel capabilities of the merchant silicon and building Fibre Channel-based underlay networks to support an IP-based overlay.

The buffer-to-buffer (B2B) credit system and losslessness of Fibre Channel, plus the new 32/128 Gigabit interfaces in the newest Fibre Channel standard, are all being leveraged for these underlays. I find it surprising that so many companies are adopting this; you’d think it’d be just Brocade. But Cisco, Arista (who notoriously shunned FCoE), and Juniper are all on board with new or announced SDN offerings that are based mostly or in part on Fibre Channel.

However, most of the switches from various vendors are primarily Ethernet today, so the 10/40 Gigabit interfaces can run FCoE until more switches are available with native FC interfaces. Of course, these switches will still be required to have a number of native Ethernet ports in order to connect to border networks that aren’t part of the overlay network, so there will still be a need for Ethernet. But it seems the market has spoken, and it wants Fibre Channel.

 

CCIE DC Attempt #1: Did Not Pass

Earlier this month, I drove my rental car up to Cisco’s infamous 150 Tasman Drive after being stuck on the 101 for about an hour. I checked in, sat down, and dug into my very first CCIE lab attempt. A bit over 8 hours later, I knew I didn’t pass, but I got a good feel for what the lab is like.

My preparation for the exam had been very unbalanced: I had worked extensively with some parts of the blueprint, while other aspects of the blueprint I hadn’t really touched in over a year. So I was not surprised at all to see the “FAIL” notice when I got my score.

The good news is that I think with the right preparation on my weak parts, I can pass on the next attempt (which I haven’t yet scheduled, but will soon).

The following animated GIF is what it’s like to do parts of a CCIE lab exam that you haven’t prepared for.

[Animated GIF: Beavis]

 

 

 


How It Feels Studying for my CCIE DC Lab

[Image: always be learning]

First Call I Made When I First Heard About “Gen5 Fibre Channel”

[Image: calling bullshit]
