Top 5 Reasons The Evaluator Group Screwed Up

It’s been a while since the trainwreck of a “study” commissioned by Brocade and performed by The Evaluator Group,  but it’s still being discussed in various storage circles (and that’s not good news for Brocade). Some pretty much parroted the results, seemingly without reading the actual test. Then got all pissy when confronted about it.  I did a piece on my interpretations of the results, as did Dave Alexander of WWT and J Metz of Cisco. Our mutual conclusion can be best summed up with a single animated GIF.

 


But now that a bit of time has passed and I’ve had a chance to absorb Dave and J’s opinions, as well as others, I’ve come up with a list of the Top 5 Reasons The Evaluator Group Screwed Up. This isn’t the complete list, of course, just some of the more glaring problems. Let’s start with #1:

Reason #1: I Have No Idea What I’m Doing

Their hilariously bad conclusion about the higher variance in response times and higher CPU usage was that it was the fault of the software initiators. Except they didn’t use software initiators. They had actually configured hardware initiators, and didn’t know it. Let that sink in: they were charged with performing an evaluation without knowing what they were doing.

Here’s their response:

“The Cisco UCS VIC 1240 hardware CNA’s were utilized. Referring to them as software initiators caused some confusion. The Cisco VIC is a hardware initiator and we configured them with virtual HBAs. Evaluator Group has no knowledge of the internal architecture of the VIC or its driver. Our commentary of the possible cause for higher CPU utilization is our opinion and further analysis would be required to pinpoint the specific root cause.”

Of course, it wasn’t the software initiator. They didn’t use a software initiator, but they were so clueless, they didn’t know they’d actually used a hardware initiator. Without knowing how they performed their tests (since they didn’t publish their methodology) it’s purely speculation, but it looks like the problem was caused by congestion (from them architecting the UCS solution incorrectly).

Reason #2: They’re Hilariously Bad At Math.

They claimed FCoE required 50% more cables, based on the fact that there were 50% more cables in the FCoE solution than the FC solution. Which makes sense… except that the FC system had zero Ethernet.

That’s right: in the HP/Fibre Channel solution, each blade had absolutely zero Ethernet connectivity. None. Zilch. In the Cisco UCS solution, every blade had full Ethernet and Fibre Channel connectivity. Why did they do that? Probably because had they included any network connectivity for the HP system, the cable count would have shifted in FCoE’s favor. Let me state this again, because it’s astonishingly stupid: they claimed FCoE (which included Ethernet and FC connectivity) required more cables, without including any network connectivity for the HP/FC system.


Also, they made some power/cooling claims, despite the fact that the UCS solution didn’t require a separate FC switch (it’s capable of being a full-fledged Fibre Channel switch by itself), while the HP solution would have required a separate pair of Ethernet switches (which weren’t included). So yeah, their math is a bit off. Had they done things, you know, correctly, the power, cooling, and cable count would have flipped in favor of FCoE.

Reason #3: UCS is Hard, You Guys!

They whinged about UCS being more difficult to set up. Anytime you’re dealing with unfamiliar technology, it’s natural that it’s going to be more difficult. However, they claimed that they had zero experience with HP as well (seriously, who at Brocade hired these guys?). How easy is UCS? Here is a video done in Amsterdam where a couple of Cisco techs added a new chassis and blade and had it running ESXi in less than 30 minutes, from in the box to booted. Cisco UCS is different from other blade systems, but it’s also very easy (and very quick) to stand up. And keep in mind, the video I linked was done in Amsterdam, so they were probably baked.

Reason #4: It Contradicts Everyone Else’s Results (Especially those that know what they’re doing)

For the past couple of years, VMware and NetApp have been doing performance tests on various storage protocols. Here’s one from a few years ago, which includes (native) 4 and 8 Gbit Fibre Channel, 10 Gbit FCoE, 10 Gbit iSCSI, and 10 Gbit NFS. The conclusion? The protocol doesn’t much matter. They all came out about the same when normalized for bandwidth. The big difference is in the storage backend. At least they published their methodology (I’m looking at you, Evaluator Group). Here’s one from Demartek that shows a mixture of storage protocols saturating 10 Gbit Ethernet. Again, the limitation is only the link speed itself, not the protocol. And again, Demartek published their methodology.

Reason #5: How Did They Set Everything Up? Magic!

Most of the time with these commissioned reports, the details of how it’s configured are given so that the results can be reproduced and audited. How did the Evaluator Group set up their environment?


As far as I can tell, magic. There are several things they could have easily gotten wrong with the UCS setup, and given their mistake about software/hardware initiators, they quite likely did. They didn’t even mention which storage vendor they used.

So there you have it. A bit of a re-hash, but hey, it was a dumb report. The upside though is that it did provide me with some entertainment.

Fibre Channel: The Heart of New SDN Solutions

From Juniper to Cisco to VMware, companies are rolling out new SDN solutions. Juniper’s Contrail, Cisco’s ACI, VMware’s NSX, and more are all vying to be the next generation of data center networking. What is surprising, however, is what’s at the heart of these new technologies.

Is it VXLAN, NVGRE, Openflow? Nope. It’s Fibre Channel.

Seriously.

If you think about it, it makes sense. Fibre Channel has been doing fabrics since before we ever called Ethernet fabrics, well, fabrics. And this isn’t the first time that Fibre Channel has shown up in unusual places. There’s a version of Fibre Channel that runs inside certain airplanes, including jet fighters like the F-22.

Keep the skies safe from FCoE (sponsored by the Evaluator Group)

The new generation of switches has been capable of Data Center Bridging (DCB), which enables Fibre Channel over Ethernet. These chips are also capable of doing native Fibre Channel. So rather than build complicated VPLS fabrics or routed networks, various data center switching companies are leveraging the inherent Fibre Channel capabilities of the merchant silicon and building Fibre Channel-based underlay networks to support an IP-based overlay.

The buffer-to-buffer (B2B) credit system and losslessness of Fibre Channel, plus the new 32/128 Gigabit interfaces in the newest Fibre Channel standard, are all being leveraged for these underlays. I find it surprising that so many companies are adopting this; you’d think it’d be just Brocade. But Cisco, Arista (who notoriously shunned FCoE), and Juniper are all on board with new or announced SDN offerings that are based mostly or in part on Fibre Channel.

However, most of the switches from various vendors are primarily Ethernet today, so the 10/40 Gigabit interfaces can run FCoE until more switches are available with native FC interfaces. Of course, these switches will still be required to have a number of native Ethernet ports in order to connect to border networks that aren’t part of the overlay network, so there will still be a need for Ethernet. But it seems the market has spoken, and they want Fibre Channel.

 

Changing Data Center Workloads

Networking-wise, I’ve spent my career in the data center. I’m pursuing the CCIE Data Center. I study virtualization, storage, and DC networking. Right now, the landscape in the network is constantly changing, as it has been for the past 15 years. However, with SDN, merchant silicon, overlay networks, and more, the rate of change in a data center network seems to be accelerating.

Things are changing fast in data center networking. You get the picture.

Whenever you have a high rate of change, you’ll end up with a lot of questions such as:

  • Where does this leave the current equipment I’ve got now?
  • Would SDN solve any of the issues I’m having?
  • What the hell is SDN, anyway?
  • I’m buying vendor X, should I look into vendor Y?
  • What features should I be looking for in a data center networking device?

I’m not actually going to answer any of these questions in this article. I am, however, going to profile some of the common workloads that you find in data centers currently. Your data center may have one, a few, or all of these workloads. It may not have any of them. Your data center may have one of the workloads listed, but my description and/or requirements may be way off. All certainly possible. These are generalizations, and with all generalizations your mileage may vary. With that disclaimer out of the way, strap in. Let’s go for a ride.

Traditional Virtualization

It’s interesting to describe something that only exploded into the data center in a big way around 2008 as “traditional” rather than “new-fangled”, but that’s the situation we have here. The traditional virtualization workload is centered primarily around VMware vSphere. There are other traditional virtualization products of course, such as Red Hat’s RHEV, Xen, and Microsoft Hyper-V, but VMware has the largest market share here by far.

  • Latency is not a huge concern (30 usecs not a big deal)
  • Layer 2 adjacencies are mandatory (required for vMotion)
  • Large Layer 2 domains (thousands of hosts layer 2 adjacent)
  • Converged infrastructures (storage and data running on the same wires, FCoE, iSCSI, NFS, SMB3, etc.)
  • Buffer requirements aren’t typically super high. Bursting isn’t much of an issue for most workloads of this type.
  • Fibre Channel is often the storage protocol of choice, along with NFS and some iSCSI as well

Cisco has been especially successful in this realm with the Nexus line because of vPC, FabricPath, OTV and (to a much lesser extent) LISP, as they address some of the challenges with workload mobility (though not all of them, such as the speed of light). Arista, Juniper, and many others also compete in this particular realm, but Cisco is the market leader.

With multi-pathing Layer 2 technologies such as SPB, TRILL, Cisco FabricPath, and Brocade VCS (the latter two are based on TRILL), you can build multi-spine leaf/spine (Clos) networks that you can’t build with spanning-tree based networks, even with MLAG.

This type of network is what I typically see in data centers today. However, there is a shift towards Layer 3 networks and cloud workloads over traditional virtualization, so it will be interesting to see how long traditional virtualization lasts.

VDI

VDI (Virtual Desktop Infrastructure) is a workload with the exact same requirements as traditional virtualization, with one main difference: the storage requirements are much, much higher.

  • Latency is not as important (most DC-grade switches would qualify), especially since latency is measured in milliseconds for remote desktop users
  • Layer 2 adjacencies are mandatory (required for vMotion)
  • Large Layer 2 domains
  • Converged infrastructures
  • Buffer requirements aren’t typically very high
  • High-end storage backends. All about the IOPs, y’all

For storage here, IOPs are the biggest concern. VDI eats IOPs like candy.

Legacy Workloads

This is the old, old school. And by old school, I mean late 90s, early 2000s. Before virtualization changed the landscape. There are still quite a few crusty old servers, with uptimes measured in years, running long-abandoned applications. The problem is, these types of applications are usually mission critical and/or generating significant revenue. Organizations just haven’t found a way out of them yet. And hey, they’re working right now. Often running on proprietary Unix systems, they couldn’t or wouldn’t be migrated to a virtualized environment (where they would be much easier to deal with).

The hardware still works, so why change something that works? Because it would be tough to find more of it when it finally dies. It’s also probably out of vendor support.

  • Latency? Who cares. Is it less than 1 second? Good enough.
  • Layer 2 adjacencies, if even required, are typically very small, typically just needed for the local clustering application (which is usually just stink-out-loud awful)
  • 100 megabit and gigabit Ethernet typically. 10 Gigabit? That’s science-fiction talk!
  • Buffers? You mean like, what shines the floor?


My own personal opinion is that this is the only place where Cisco Catalyst switches belong in a data center, and even then only because they’re already there. If you’re going with Cisco, I think everything else (and everything new) in the DC should be Nexus.

Cloud Workloads (Private Cloud)

If you look at a cloud workload, it looks very similar to the previous traditional virtualization workload. They both use VMs sitting on top of hypervisors. They both have underlying infrastructure of compute, network, and storage to support these VMs. The difference is primarily in the operational model.

It’s often described as the difference between pets and cattle. With traditional virtualization, you have pets. You care what happens to these VMs. They have HA and DRS and other technologies to care for them. They’re given clever names, like Bart and Lisa, or Happy and Sleepy. With cloud VMs, they’re not given fun names. We don’t do vMotion/Live Migration with them. When we need them, they’re spun up. When they’re not, they’re destroyed. We don’t back them up, we don’t care if the host they reside on dies so long as there are other hosts carrying the workload. The workload is automatically sharded across the available hosts using logic in the application. Instead of backups, templates are used to create new VMs when the workload increases. And when the workload decreases, some of the VMs get destroyed. State is not kept on any single VM, instead the state of the application (and underlying database) is sharded to the available systems.

This is very different from traditional virtualization. Because workload distribution is handled by the application, we don’t need vMotion, and thus don’t need Layer 2 adjacencies. This gives network architects much more flexibility in putting together a network to support this type of workload. Storage for this type of workload also tends to be IP-based (NFS, iSCSI) rather than FC-based (native Fibre Channel or FCoE).

With cloud-based workloads, there’s also a huge self-service component. VMs are spun up and managed by developers or end-users, rather than by the IT staff. There’s typically some type of portal that end-users can use to spin up/down resources. Chargebacks are also a component, so that even in a private cloud setting, there’s a resource cost associated with usage, and it can be tracked.

OpenStack is a popular choice for these cloud workloads, as are Amazon and Windows Azure. The former is a private cloud, with the latter two being public clouds.

  • Latency requirements are mostly the same as traditional virtualization
  • Because vMotion isn’t required, it’s all Layer 3, all the time
  • Storage is mostly IP-based, running on the same network infrastructure (not as much Fibre Channel)
  • Buffer requirements are typically the same as traditional virtualization
  • VXLAN/NVGRE burned into the chips for SDN/Overlays

You can use much cheaper switches for this type of network, since the advanced Layer 2 features (OTV, FabricPath, SPB/TRILL, VCS) aren’t needed. You can build a very simple Layer 3 mesh using inexpensive and lower power 10/40/100 Gbit ports.

However, features such as VXLAN/NVGRE encap/decap are increasingly important. The new Trident2 chips from Broadcom support this now, and several vendors, including Cisco, Juniper, and Arista, all have switches based on this new SoC (switch-on-a-chip) from Broadcom.

High Frequency Trading

This is a very specialized market, and one that has very specialized requirements.

  • Latency is of the utmost concern. To the point of making sure ports are on the same ASIC. Latency is measured in nano-seconds, microseconds are an eternity
  • 10 Gbit at the very least
  • Money is typically not a concern
  • Over-subscription is non-existent (again, money no concern)
  • Buffers are a trade off, they can increase latency but also prevent packet loss

This is a very niche market, one that Arista dominates. Cisco and a few other vendors have made small inroads here, using the same merchant silicon that Arista uses; however, Arista has far deeper experience in this market. Every tick of the clock can mean hundreds of thousands of dollars in a single trade, so companies have no problem throwing huge amounts of money at this issue to shave every last nanosecond off of latency.

Hadoop/Big Data

  • Latency is of high concern
  • Large buffers are critical
  • Over-subscription is low
  • Layer 2 adjacency is neither required nor desired
  • Layer 3 Leaf/spine networks
  • Storage is distributed, sharded over IP

Arista has also been extremely successful in this market. They glue PC RAM onto their switch boards to provide huge buffers (around 760 MB) to each port, so a port can absorb quite a bit of bursty traffic, which occurs a lot in these types of setups. That’s about 0.6 seconds of buffering on a 10 Gbit link. Huge buffers will not prevent congestion, but they do help absorb situations where you might be overwhelmed for a short period of time.
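If you want to sanity-check that math, it’s just buffer size (in bits) divided by line rate. A quick back-of-the-envelope sketch in Python (the ~760 MB per-port figure is the rough number from above, not an official spec):

```python
# Rough buffering time for a 10 Gbit/s port with ~760 MB of packet buffer.
buffer_bytes = 760 * 10**6      # ~760 MB per port (approximate figure)
line_rate_bps = 10 * 10**9      # 10 Gbit/s

seconds_of_buffering = buffer_bytes * 8 / line_rate_bps
print(f"{seconds_of_buffering:.2f} seconds of buffering")  # ~0.61 s
```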

Since nodes don’t need to be Layer 2 adjacent, simple Layer 3 ECMP networks can be created using inexpensive and basic switches. You don’t need features like FabricPath, TRILL, SPB, OTV. Just fast, inexpensive, low power ports. 10 Gigabit is the bare minimum for these networks, with 40 and 100 Gbit used for connectivity to the spines. Arista (especially with their 7500E platform) does very well in this area. Cisco is moving into this area with the Nexus 9000 line, which was announced late last year.

 

Conclusions

Understanding the requirements for the various workloads may help you determine the right switches for you. It’s interesting to see how quickly the market is changing. Perhaps 2 years ago, the large-Layer 2 networks seemed like the immediate future. Then all of a sudden Layer 3 mesh networks became popular again. Then you’ve got SDN like VMware’s NSX and Cisco’s ACI on top of that. Interesting times, man. Interesting times.

FCoE versus FC Farce (I’m Tellin’ All Y’All It’s Sabotage!)

Updates 2/6/2014:

  • @JohnKohler noticed that the UCS Manager screenshot used (see below) is from a UCS Emulator, not any system they used for testing.
  • Evaluator Group promises answers to questions that both I and Dave Alexander (@ucs_dave) have brought up.

On my way back from South America/Antarctica, I was pointed to a bake-off/performance test commissioned by Brocade and performed by a company called Evaluator Group. It compared the performance of edge FCoE (non-multi-hop FCoE) to native 16 Gbit FC. The FCoE test was done on a Cisco UCS blade system connecting to a Brocade switch, and the FC was done on an HP C7000 chassis system connecting to the same switch. At first glance, it would seem to show that FC is superior to FCoE for a number of reasons.

I’m not a Cisco fanboy, but I am a Cisco UCS fanboy, so I took great interest in the report. (I also work for a Cisco Learning Partner as an instructor and courseware developer.) But I also like Brocade, and have a huge amount of respect for many of the Brocade employees I’ve met over the years. These are great and smart people, and they serve their customers well.

First, a little bit about these types of reports. They’re pretty standard in the industry, and they’re commissioned by one company to showcase superiority of a product or solution against one or more of their competitors. They can produce some interesting information, but most of the time it’s a case of: “Here’s our product in a best-case scenario versus other products in a mediocre-to-worst case scenario.” No company would release a test showing other products superior to theirs of course, so they’re only released when a particular company comes out on top, or (most likely) the parameters are changed until they do. As such, they’re typically taken with a grain of salt. In certain markets, such as the load balancer market, vendors will make it rain with these reports on a regular basis. 

But for this particular report, I found several substantial issues with it which I’d like to share. It’s kind of a trainwreck. Let’s start with the biggest issue, one that is rather embarrassing.

What The Frak?

On page 17 check out the Evaluator Group comments:

“…This indicates the primary factor for higher CPU utilization within the FCoE test was due to using a software initiator, rather than a dedicated HBA. In general, software initiators require more server CPU cycles than do hardware initiators, often negating any cost advantages.”

For one, no shit. Hardware initiators will perform better than software initiators. However, the Cisco VIC 1240 card (which according to page 21 was included in the UCS blades) is a hardware initiator card. Being a CNA (converged network adaptor) the OS would see a native FC interface. With ESXi you don’t even need to install extra drivers, the FC interfaces just show up. Setting up a software FCoE initiator would actually be quite a bit more difficult to get going, which might account for why it took so long to configure UCS. Configuring a hardware vHBA in UCS is quite easy (it can be done in literally less than a minute).

Using software initiators against hardware FC interfaces is beyond a nit-pick in a performance test. It would be downright sabotage.


I asked the @Evalutor_Group account if they really did software initiators:

And they responded in the affirmative.

Wow. First of all, software FCoE initiators are absolutely not a standard configuration for UCS. In the three years I’ve been configuring and teaching UCS, I’ve never seen or even heard of FCoE software initiators being used, either in production or in a testing environment. The only reason you *might* want to do FCoE software initiators is when you’ve got the Intel mezzanine card (which is not a CNA, just an Ethernet card), and want to test FCoE. However, page 21 shows the UCS blades as having the VIC 1240 cards, not the Intel card.

So where were they getting that it was standard configuration?

They countered with a reference to a UCS document regarding the VIC:

and then here:

Which is this document here.

The part they referenced is as follows:

The adapter offerings are:

■ Cisco Virtual Interface Cards (VICs)

Cisco developed Virtual Interface Cards (VICs) to provide flexibility to create multiple NIC and HBA devices. The VICs also support adapter Fabric Extender and Virtual Machine Fabric Extender technologies.

■ Converged Network Adapters (CNAs)

Emulex and QLogic Converged Network Adapters (CNAs) consolidate Ethernet and Storage (FC) traffic on the Unified Fabric by supporting FCoE.

■ Cisco UCS Storage Accelerator Adapters

Cisco UCS Storage Accelerator adapters are designed specifically for the Cisco UCS B-series M3 blade servers and integrate seamlessly to allow improvement in performance and relief of I/O bottlenecks.

Wait… I think they think that the VIC card is a software-only FCoE card. It appears they came to that conclusion because this particular document doesn’t specifically call the VIC a CNA (other UCS documents clearly and correctly indicate that the VIC card is a CNA). Because it lists the VICs separately from the traditional CNAs from Emulex and QLogic, it seems they believe it not to be a CNA, and thus a software card.

So it may be they did use hardware initiators, and mistakenly called them software initiators. Or they actually did configure software initiators, and did a very unfair test.

No matter how you slice it, it’s troubling. On one hand, if they did configure software initiators, they either ignorantly or willfully sabotaged the FCoE results. On the other hand, if they just didn’t understand the basic VIC concept, it means they set up a test without understanding the most basic aspects of the Cisco UCS system. We’re talking 101-level stuff, too. I suspect it’s the latter, but since the only configuration of any of the devices they shared was a worthless screenshot of UCS Manager, I can’t be sure.

This lack of understanding could have a significant impact on the results. For instance, on page 14 the response time starts to get worse for FCoE at about the 1200 MB/s mark. That’s roughly the max for a single 10 Gbit Ethernet FCoE link (1250 MB/s). While not definitive, it could mean that the traffic was going over only one of the links from the chassis to the Fabric Interconnect, or that the traffic distribution was way off. My guess is they didn’t check the link utilization, didn’t know how, or didn’t know how to fix it if it was off.
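For reference, the arithmetic behind that suspicion is simple; here’s a rough sketch (ignoring FCoE and Ethernet framing overhead, so the real ceiling is a bit lower):

```python
# Why ~1200 MB/s looks suspiciously like one 10 GbE link:
# 10 Gbit/s of raw line rate is 1250 MB/s before any framing overhead.
line_rate_bps = 10 * 10**9
raw_mb_per_s = line_rate_bps / 8 / 10**6
print(raw_mb_per_s)  # 1250.0 MB/s -- a single FCoE link tops out a bit below this
```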

Conclusions First?

One of the odder aspects of this report is where some of the conclusions show up, such as this one:

“Evaluator Group believes that Fibre Channel connectivity is required in order to achieve the full benefits that solid-state storage is able to provide.”

That was page 1. The first page of the report is a weird place to state a conclusion like that. It makes you sound a little… biased. These reports usually try to at least appear impartial (despite being commissioned by a particular vendor). This one starts right out with an opinion.

Lack of Configuration

The amount of configuration they provide for the setup is very sparse. For the UCS side, all they provide is one pretty worthless screenshot. Same for the HP system. For the UCS part, it would be important to know how they configured the vHBAs, and how they configured the two 10 Gbit links from the IOMs to the Fabric Interconnects. Were they configured for the preferred Fabric Port Channel, or static pinning? So there’s no way to duplicate this test. That’s not very transparent.

Speaking of configuration, one of the issues they had with the FCoE side was how long it took them to stand up a UCS system. On page 11, second paragraph, they mention they needed the help of a VAR to get everything configured. They even made a comment on page 10:

“…this approach was less intuitive during installation than other enterprise systems Evalutor Group has tested.”


So wow, you’ve got an environment that you’re unfamiliar with (we’ll see just how unfamiliar in a minute), and it took you *gasp* longer to configure? I’m a bit of an expert on Cisco UCS. I’m kind of a big deal. I have many paper-bound UCS books and my apartment smells of rich mahogany. I teach it regularly. And I’m not nearly as familiar with the C7000 system from HP, so I’d be willing to bet that it would take me longer to stand up an HP system than it would a Cisco UCS system. Anyone want in on that action?

Older Version? Update 2-6-2014: Screenshot Is A Fake


@JohnKohler noticed that in the screenshot on page 23, the serial number is “1”, which means the screenshot is not from any physical instance of UCS Manager, but from the UCS Emulator. The date (if accurate on the host machine) shows June of 2010, so it’s a very, very old screenshot (probably of 1.3 or 1.4). So we have no idea what version of UCS they used for these tests (and more importantly, how it was configured).

Based on the screenshot on page 23, the UCS version is 2.0. How can you tell? Take a look where it says “Unconfigured Ports”. As of 2.1, Cisco changed the way ports are shown in the GUI. 2.1 and later do not have a sub-menu for unconfigured ports. Only 2.0 and prior.

In version 2.1, there’s no “Unconfigured Ports” sub-menu.

From Page 21, you can see “Unconfigured Ports”, indicating it’s UCS 2.0 or earlier.

If the test was done in November 2013, that would put it one major revision behind, as 2.1 was released in November of 2012 (2.2 was released in Dec 2013). We don’t know where they got the equipment, but if it was acquired anytime in the past year, it likely came with 2.1 already installed (though that’s not definite). To get to 2.0, they’d have had to downgrade. I’m not sure why they would have done that, and if they brought in a VAR with certified UCS people, the VAR would likely have recommended 2.1. With 2.1, they could have directly connected the storage array to the UCS fabric interconnects. UCS 2.1 can do Fibre Channel zoning and can function as a standard Fibre Channel switch. The Brocade switch wouldn’t have been needed, though the links to the storage array would be 8 Gbit instead of 16 Gbit.

They also don’t mention the solid state array vendor, so we don’t know if there was capability to do FCoE directly to the storage array. Though not terribly common yet, FCoE connection to a storage array is done in production environments and would have benefited the UCS configuration if it were possible (and would be the preferred way if competing with 16 Gbit FC). There would have been the ability to do an LACP port channel between the Fabric Interconnects, providing better load distribution and redundancy.

Power Mad

The claim that the UCS system uses more power is laughable, especially since they specifically mentioned this setup is not how it would be deployed in production. Nothing about this setup was production worthy, and it wasn’t supposed to be. It’s fine for this type of test, but not for production. It’s non-HA, and using only 2 blades would be a waste of a blade system (and power, for either HP or Cisco). If you were only using a handful of servers, buying pizza boxes would be far more economical. Blades from either HP or Cisco only make sense past a certain number to justify the enclosures, networking, etc. If you want a good comparison, do 16 blades, or 40 blades, or 80 blades. Also include the Ethernet network connectivity. The UCS configuration has full Ethernet connectivity, the HP configuration as shown has squat.

Even by competitive report standards, this one is utter bunk. If I was Brocade, I would pull the report. With all the technology mistakes and ill-found conclusions, it’s embarrassing for them. It’s quite easy for Cisco to rip it to shreds.

(Just so there’s no confusion, no one paid me or asked me to write this. In fact, I’m still on PTO. I wrote this because someone is wrong on the Internet…)

Software Defined Forwarding: What’s In A Name?

I’ve got about four or five articles on SDN/ACI and networking in my drafts folder, and there’s something that’s been bothering me. There’s a concept, and it doesn’t have a name (at least one that I’m aware of, though it’s possible there is). In networking, we’re constantly flooded with a barrage of new names. Sometimes we even give things that already have a name a new name. But naming things is important. It’s like a pointer in programming: the name of something points to a specific part of our brain that contains the understanding of a given concept.

Software Defined Forwarding

I decided this concept needed a name because I was having trouble describing it, and it’s a very important one in reference to SDN. Here’s the concept, with four key aspects:

  1. Forwarding traffic in a way that is different than traditional MAC learning/IP forwarding.
  2. Forwarding traffic based on more than the usual Layer 2 and Layer 3 headers (such as VXLAN headers, TCP and/or UDP headers).
  3. Programming these forwarding rules via a centralized controller (which would be necessary if you’re going to throw out all the traditional forwarding rules and configure this in any reasonable amount of time).
  4. In the case of Layer 2 adjacencies, multipathing is inherent (either through overlays or something like TRILL).

Throwing out traditional rules of forwarding is what OpenFlow does, and OpenFlow got a lot of the SDN ball rolling. However, SDN has gone far beyond just OpenFlow, so it’s not fair to call it just OpenFlow anymore. And SDN is now far too broad a term to be specific to forwarding rules, as SDN encompasses a wide range of topics, from forwarding to automation/orchestration, APIs, network function virtualization (NFV), stack integration (compute, storage), etc. If you take a look at the shipping SDN product listing, many of the products don’t have anything to do with actually forwarding traffic. Also, FabricPath, TRILL, VCS, and SPB all do Ethernet forwarding in a non-standard way, but this is different. So herein lies, I think, the need to define a new term.

And make no mistake, SDF (or whatever we end up calling it) is going to be an important factor in what pushes SDN from hype to implementation.

Forwarding is simply how we get a packet from here to there. In traditional networking, forwarding rules are separated by layer. Layer 2 Ethernet forwarding works in one particular way (and not a particularly intelligent way); IP routing works in another. Layer 4 (TCP/UDP) gets a little tricky, as switches and routers typically don’t forward traffic based on Layer 4 headers. You can do something like policy-based routing, but it’s a bit cumbersome to set up. You also have load balancers handling some Layer 4-7 features, but that’s handled in a separate manner.

So traditional forwarding for Layer 2 and Layer 3 hasn’t changed much. For instance, take the example of a server plugging into a switch and powering up. Its IP address and MAC address haven’t been seen on the network, so the network is unaware of both the Layer 2 and Layer 3 addresses. The server is plugged into a switch with a port set for the right VLAN. The server wants to send a packet out to the Internet, so it ARPs (WHO HAS 1.1.1.1, for example). The router (or more likely, SVI) responds with its MAC address, or the floating MAC (VMAC) of a first-hop redundancy protocol such as VRRP or HSRP. Every switch attached to that VLAN sees this ARP and ARP response, and every switch’s Layer 2 forwarding table learns which port that particular MAC address can be found on.
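As a rough illustration of that flood-and-learn behavior (a toy model, not any vendor’s actual implementation), a switch’s Layer 2 table is essentially a dictionary of MAC address to port, populated from the source addresses of the frames it sees:

```python
# Toy flood-and-learn switch: learn source MACs, flood unknown destinations.
class ToySwitch:
    def __init__(self, ports):
        self.ports = ports
        self.cam = {}                    # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.cam[src_mac] = in_port      # learn where the source lives
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]   # known unicast: send out one port
        return [p for p in self.ports if p != in_port]  # unknown/broadcast: flood

sw = ToySwitch(ports=[1, 2, 3])
print(sw.receive(1, "aa:aa", "ff:ff"))   # broadcast ARP: flooded to [2, 3]
print(sw.receive(2, "bb:bb", "aa:aa"))   # reply: delivered only to [1]
```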


In several of the SDF implementations that I’m familiar with, such as Cisco’s ACI, NEC’s OpenFlow controller, and to a certain degree Juniper’s QFabric, things get a little weird.

Forwarding in SDF/SDN, Gets A Little Weird

In SDF (or in many implementations of it; they differ in how they accomplish this), the local ToR/leaf switch is what answers the server’s ARP. Another server, on the same subnet and L2 segment (or, increasingly, the same tenant), ARPs on a different leaf switch.

[Diagram: two Layer 2-adjacent servers on different leaf switches, both with a default gateway of 1.1.1.1]

In the diagram above, note both servers have a default gateway of 1.1.1.1. The two servers are Layer 2 adjacent, existing on the same network segment (same VLAN or VXLAN). Both ARP for their default gateways, and both receive a response from their local leaf switch with an identical MAC address. Neither server would actually see the other’s ARP, because there’s no need to flood that ARP traffic (or most other BUM traffic) beyond the port that the server is connected to. BUM traffic goes to the local leaf, and the leaf can learn and forward in a more intelligent manner. The other leaf nodes don’t need to see the flooded traffic, and a centralized controller can program the forwarding tables of the leafs and spines accordingly.

In most cases the packet’s forwarding decision, based on a combination of Layer 2, 3, 4 and possibly more (such as VXLAN headers), is made at the leaf. If it’s crossing a Layer 3 boundary, the TTL gets decremented. The forwarding is typically determined at the leaf, and some sort of label or header is applied that contains the destination port.
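To make that a little more concrete, here’s a purely hypothetical sketch of what controller-programmed leaf rules might look like, matching on more than just a destination MAC. The field names and structure are mine for illustration, not any vendor’s actual API or schema:

```python
# Hypothetical controller-pushed forwarding rules: match on L2/L3/L4 (and VNI),
# then forward toward a specific leaf/port. Not any vendor's actual schema.
rules = [
    {"match": {"vni": 5001, "dst_ip": "10.1.1.20", "dst_port": 443},
     "action": {"leaf": "leaf-2", "port": 17, "decrement_ttl": True}},
    {"match": {"vni": 5001, "dst_mac": "00:00:0c:9f:f0:01"},
     "action": {"leaf": "local", "port": "cpu"}},   # gateway MAC answered locally
]

def lookup(packet):
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return {"punt": "to controller"}                 # no rule matched

print(lookup({"vni": 5001, "dst_ip": "10.1.1.20", "dst_port": 443}))
```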

You get this somewhat with a full Layer 3 leaf/spine mesh, in that the local leaf is what answers the ARP. However, in a Layer 3 mesh, hosts connected to different leaf switches are on different network segments, and the leaf is the default gateway (without getting weird). In some applications, such as Hadoop, that’s fine. But for virtualization (unfortunately) there’s a huge need for Layer 2 adjacencies. For now, at least.

Another benefit of SDF is the ability to intelligently steer traffic through various network devices, known as service chaining. This is done without changing default gateways of the servers, bridging VLANs and proxy arp, or other current methodologies. Since SDF throws out the rulebook in terms of forwarding, it becomes a much simpler matter to perform traffic steering. Cisco’s ACI does this, as does Cisco vPath and VMware’s NSX.


A policy, programmed on a central controller, can be put in place to ensure that traffic forwards through a load balancer and firewall. This also has a lot of potential in the realm of multi-tenancy and network function virtualization. In short, combined with other aspects of SDN, it can change the game in terms of how networks are architected in the near future.

SDF is only a part of SDN. By itself, it’s compelling, but as there have been some solutions on the market for a little while, it doesn’t seem to be a “must have” to the point where customers are upending their data center infrastructures to replace everything. I think for it to be a must have, it needs to be part of a larger overall SDN solution.

Link Aggregation Confusion

In a previous article, I discussed the somewhat pedantic question: “What’s the difference between EtherChannel and port channel?” The answer, as it turns out, is none. EtherChannel is mostly an IOS term, and port channel is mostly an NXOS term. But either is correct.

But I did get one thing wrong. I was using the term LAG incorrectly. I had assumed it was short for Link Aggregation (the umbrella term of most of this). But in fact, LAG is short for Link Aggregation Group, which is a particular instance of link aggregation, not the umbrella term. So wait, what do we call the technology that links links together?

LAG? Link Aggregation? No wait, LACP. It’s gotta be LACP.

In case you haven’t noticed, the terminology for one of the most critical technologies in networking (especially the data center) is still quite murky.

Before you answer that, let’s throw in some more terms, like LACP, MLAG, MC-LAG, VLAG, 802.3ad, 802.1AX, link bonding, and more.

The term “link aggregation” can mean a number of things. Certainly EtherChannel and port channels are a form of link aggregation. 802.3ad and 802.1AX count as well. Wait, what’s 802.1AX?

802.3ad versus 802.1AX

What is 802.3ad? It’s the old IEEE working group for what is now known as 802.1AX. The standard that we often refer to colloquially as port channel, EtherChannels, and link aggregation was moved from the 802.3 working group to the 802.1 working group sometime in 2008. However, it is sometimes still referred to as 802.3ad. Or LAG. Or link aggregation. Or link group things. Whatever.


What about LACP? LACP is part of the 802.1AX standard, but it is neither the entirety of the 802.1AX standard, nor is it required in order to stand up a LAG. LACP is also not link aggregation. It is a protocol for building LAGs automatically, versus statically. You can usually build an 802.1AX LAG without using LACP. Many devices support both static and dynamic LAGs. VMware ESXi 5.0 only supported static LAGs, while ESXi 5.1 introduced LACP as a method as well.

Some devices only support dynamic LAGs, while some only support static. For example, Cisco UCS fabric interconnects require LACP in order to setup a LAG (the alternative is to use pinning, which is another type of link aggregation, but not 802.1AX). The discontinued Cisco ACE 4710 doesn’t support LACP at all, instead only static LAGs are supported.

One way to think of LACP is that it is a control-plane protocol, while 802.1AX is a data-plane standard. 

Is Cisco’s EtherChannel/port channel proprietary?

As far as I can tell, no, they’re not. There’s no (functional, at least) difference between 802.3ad/802.1AX and what Cisco calls EtherChannel/port channel, and you can set up LAGs between Cisco and non-Cisco gear without any issue. PAgP (Port Aggregation Protocol), the precursor to LACP, was proprietary, but Cisco has mostly moved to LACP for its devices. Cisco Nexus kit won’t even support PAgP.

Even in LACP, there’s no method for negotiating the load distribution method. Each side picks which method it wants to do. In fact, you don’t have to have the same load distribution method configured on both ends of a LAG (though it’s usually a good idea).
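The load distribution method itself is usually just a hash over some combination of headers, modulo the number of member links. A toy sketch of the idea (real switches use their own hardware hash functions, and which fields go into the hash is configurable):

```python
# Toy LAG load distribution: hash selected headers, modulo the member count.
# One flow always lands on one member link; different flows spread out.
import zlib

members = ["Eth1/1", "Eth1/2", "Eth1/3", "Eth1/4"]

def pick_member(src_mac, dst_mac, src_ip="", dst_ip=""):
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()   # "src-dst-mac"/"src-dst-ip" style
    return members[zlib.crc32(key) % len(members)]

print(pick_member("aa:aa", "bb:bb"))
print(pick_member("aa:aa", "cc:cc", "10.0.0.1", "10.0.0.9"))
```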

There are also types of link aggregation that aren’t part of 802.1AX or any other standard. I group these into two types: pinning, and fake link aggregation. Or FLAG (Fake Link Aggregation).

First, let’s talk about pinning. In Ethernet, we have the rule that there can’t be more than one way to get anywhere. Ethernet can’t handle multi-pathing, which is why we have spanning-tree and other tricks to prevent there from being more than one logical way for an Ethernet frame to get from one source MAC to a given destination MAC. Pinning is a clever way to get around this.

The most common place we tend to see pinning is in VMware. Most ESXi hosts have multiple connections to a switch. But it doesn’t have to be the same switch. And look at that, we can have multiple paths. And no spanning-tree protocol. So how do we not melt down the network?

The answer is pinning. VMware refers to this as load balancing by virtual port ID. Each VM’s vNIC has a virtual port ID, and that ID is pinned to one and only one of the external physical NICs (pNICs). To utilize all your links, you need at least as many virtual ports as you have physical ports. And load distribution can be an issue. But generally, this pinning works great. Cisco UCS also uses pinning, for both Ethernet and Fibre Channel, when 802.1AX-style link aggregation isn’t used.
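A toy version of that pinning logic might look like this (my own simplification of the “route based on originating virtual port ID” behavior, not VMware’s actual code):

```python
# Toy pinning: each virtual port is bound to exactly one pNIC, so no single MAC
# ever appears on more than one uplink and the switch needs no LAG config.
pnics = ["vmnic0", "vmnic1"]

def pin(virtual_port_id):
    return pnics[virtual_port_id % len(pnics)]   # simple modulo-style pinning

for vport in range(5):
    print(vport, "->", pin(vport))
# With more virtual ports than pNICs, traffic spreads out; with only one busy
# VM, you only ever use one uplink.
```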

It works great, it’s a fantastic way to get active/active links without running into spanning-tree issues, and it doesn’t require 802.1AX.

Then there’s… a type of link aggregation that scares me. This is FLAG.


Some operating systems such as FreeBSD and Linux support a weird kind of link aggregation where packets are sent out various active links, but only received on one link. It requires no special configuration on a switch, but the server is oddly blasting out packets on various switch ports. Transmit is active/active, but receive is active/standby.

What’s the point? I’d prefer active/standby in a more sane configuration.  I think it would make troubleshooting much easier that way.

There’s not much need for this type of fake link aggregation anymore. Most managed switches support 802.1AX, and end hosts either support the aforementioned pinning or they support 802.1AX well (LACP or static). So there are easier ways to do it.

So as you can see, link aggregation is a pretty broad term, too broad to encompass only what would be under the umbrella of 802.1AX, as it also includes pinning and Fake Link Aggregation. LAG isn’t a good term either, since it refers to a specific instance, and isn’t suited as the catch-all term for the methodology of inverse-multiplexing. 802.1AX is probably the best term, but it’s not widely known, and it also includes the optional LACP control plane protocol. Perhaps we need a new term. But if you’ve found the terms confusing, you’re not alone.

EtherChannel and Port Channel

In the networking world, you’ve no doubt heard the terms EtherChannel, port channel, LAG, MLAG, etc. These of course refer to taking multiple Ethernet connections and treating them as a single link. But one of the more confusing aspects I’ve run into is what’s the difference, if any, between the term EtherChannel and port channel? Well, I’m here to break it down for you.


First, let’s talk about what is vendor-neutral and what is a Cisco trademark. EtherChannel is a Cisco trademarked term (I’m not sure if port channel is), while the vendor-neutral term is LAG (Link Aggregation). Colloquially, however, I’ve seen both Cisco terms used with non-Cisco gear. For instance: “Let’s set up an EtherChannel between the Arista switch and the Juniper switch.” It’s kind of like in the UK using the term “hoovering” when the vacuum cleaner says Dyson on the side.

So what’s the difference between EtherChannel and port channel? That’s a good question. I used to think that EtherChannel was the name of the technology, and port channel was a single instance of that technology. But in researching the terms, it’s a bit more complicated than that.

Both Etherchannel and port channel appear in early Cisco documentation, such as this CatOS configuration guide. (Remember configuring switches with the “set” command?) In that document, it seems that port channel was used as the name of the individual instance of Etherchannel, just as I had assumed.

I love it when I’m right

And that seems to hold true in this fairly recent document on Catalyst IOS 15, where EtherChannel is the technology and port channel is the individual instance.

But wait… in this older CatOS configuration guide, it explicitly states:

This document uses the term “EtherChannel” to refer to GEC (Gigabit EtherChannel), FEC (Fast EtherChannel), port channel, channel, and port group.

So it’s a bit murkier than I thought. And that’s just the IOS world. In the Nexus world, EtherChannel as a term seems to be falling out of favor.

Take a look at this Nexus 5000 CLI configuration guide for NXOS 4.0, and you see they use the term EtherChannel. By NX-OS 5.2, the term seems to have changed to just port channel. In the great book NX-OS and Cisco Nexus Switching, port-channel is used as the term almost exclusively. EtherChannel is mentioned once that I can see.

So in the IOS world, it seems that EtherChannel is the technology, and port channel is the interface. In the Nexus world, port channel is used as the term for the technology and the individual interface, though sometimes EtherChannel is referenced.

It’s likely that port channel is preferred in the Nexus world because NX-OS is an offspring of SAN-OS, which Cisco initially developed for the MDS line of Fibre Channel switches. Bundling Fibre Channel ports on Cisco switches isn’t called EtherChannel, since those interfaces aren’t, well, Ethernet. The Fibre Channel bundling technology is instead called a SAN port channel. The command on a Nexus switch to look at a port channel is “show port-channel”, while on IOS switches it’s “show etherchannel”.

When a dual-homed technology was developed on the Nexus platform, it was called vPC (Virtual Port Channel) instead of VEC (Virtual EtherChannel).

Style Guide

Another interesting aspect to this discussion is that EtherChannel is capitalized as a proper noun, while port channel is not. In the IOS world, it’s EtherChannel, though when it’s even mentioned in the Nexus world, it’s sometimes Etherchannel, without the capital “C”. Port channel is often written as either port channel or port-channel (the latter is used almost exclusively in the NX-OS book).

So where does that leave the discussion? Well, I think in very general terms, if you’re talking about Cisco technology, Etherchannel, EtherChannel, port channel, port-channel, and LAG are all acceptable terms for the same concept. When discussing IOS, it’s probably more correct to use the term EtherChannel. When discussing NX-OS, port channel. But again, either way works.

VXLAN: Millions or Billions?

I was putting slides together for my upcoming talk, and there’s some confusion about VXLAN, in particular about how many VLANs it provides.

The VXLAN header provides a 24-bit address space called the VNI (VXLAN Network Identifier) to separate out tenant segments, which works out to about 16 million segments. And that’s the number I see quoted with regards to VXLAN (and NVGRE, which also has a 24-bit identifier). However, take a look at the entire VXLAN packet (from the VXLAN standard… whoa, how did I get so sleepy?):

[Diagram: the full VXLAN packet format. Caption: Tony’s amazing Technicolor Packet Nightmare]

The full 802.1Q Ethernet frame can be encapsulated, providing the full 12-bit VLAN ID (4096 VLANs) per VXLAN. 16 million multiplied by 4096 is about 68 billion (with a “B”). However, most material discussing VXLAN refers to 16 million.
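A quick sketch of both the arithmetic and the header layout, assuming the 8-byte VXLAN header with its 24-bit VNI as described in the standard (the packing below is just illustrative):

```python
import struct

vni_bits, vlan_bits = 24, 12
print(2**vni_bits)                     # 16,777,216 VXLAN segments ("16 million")
print(2**vni_bits * 2**vlan_bits)      # 68,719,476,736 combinations ("68 billion")

def vxlan_header(vni):
    # 8-byte VXLAN header: flags (I bit set), 3 reserved bytes, 24-bit VNI, 1 reserved byte
    flags = 0x08
    return struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

print(vxlan_header(5001).hex())        # '0800000000138900'
```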

So is it 16 million or 68 billion?


The answer is: Yes?

So according to the standard, each VXLAN segment is capable of carrying an 802.1Q encoded Ethernet frame, so each VXLAN can have a total of 4096(ish) VLANs. The question is whether or not this is actually feasible. Can we run multiple VLANs over VXLAN? Or is each VXLAN only going to realistically carry a single (presumably non-tagged) VLAN.

I think much of this depends on how smart the VTEP is. The VTEP is the termination point, the encap/decap point for the VXLANs. Regular frames enter a VTEP, get encapsulated, and are sent over the VXLAN overlay (a regular Layer 3 fabric) to another VTEP, the terminating endpoint, where they’re decap’d.

The trick is the MAC learning process of the VTEPs. Each VTEP is responsible for learning the local MAC addresses as well as the destination MAC addresses, just like a traditional switch’s CAM table. Otherwise, each VTEP would act kind of like a hub, and send every single unicast frame to every other VTEP associated with that VXLAN.

What I’m wondering is, do VTEPs keep separate MAC tables per VLAN?

I’m thinking it must keep a per-VLAN table, because what happens if we have the same MAC address in two different VLANs? A rare occurrence, to be sure, but I don’t think it violates any standards (I could be wrong on that). If it only keeps a single MAC table for all VLANs, then we really can’t run multiple VLANs per VXLAN. But I imagine it has to keep separate tables per VLAN. Or at least, it should.
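Here’s a toy illustration of why the table design matters: if the VTEP keys its table on (VNI, VLAN, MAC) rather than just MAC, a duplicate MAC in two different VLANs stops being a problem. This is my own sketch, not any VTEP’s actual data structure:

```python
# Two MAC table designs for a VTEP. Keying per (VNI, VLAN) keeps a duplicate
# MAC in two different VLANs from clobbering itself.
flat_table = {}                         # mac -> remote VTEP
per_vlan_table = {}                     # (vni, vlan, mac) -> remote VTEP

def learn(vni, vlan, mac, vtep_ip):
    flat_table[mac] = vtep_ip
    per_vlan_table[(vni, vlan, mac)] = vtep_ip

learn(5000, 10, "aa:bb:cc:00:00:01", "192.0.2.1")
learn(5000, 20, "aa:bb:cc:00:00:01", "192.0.2.2")   # same MAC, different VLAN

print(flat_table["aa:bb:cc:00:00:01"])                   # 192.0.2.2 -- first entry overwritten
print(per_vlan_table[(5000, 10, "aa:bb:cc:00:00:01")])   # 192.0.2.1 -- both survive
```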

I can’t imagine there would be any situation where different tenants would get VLANs in the same VXLAN/VNI, so there are still 16 million multi-tenant segments, so it’s not exactly 68 billion VLANs.  But each tenant might be able to have multiple VLANs.

Letting tenants have multiple VLAN segments may prove to be useful, though I doubt any tenant would have more than a handful of VLANs (perhaps DMZ, internal, etc.). I haven’t played enough with VXLAN software yet to figure this one out, and discussions on Twitter (many thanks to @dkalintsev for great discussions), while educational, haven’t seemed to solidify the answer.

Ethernet Congestion: Drop It or Pause It

Congestion happens. You try to put a 10 pound (soy-based vegan) ham in a 5 pound bag, it just ain’t gonna work. And in the topsy-turvy world of data center switches, what do we do to mitigate congestion? Most of the time, the answer can be found in the wisdom of Snoop Dogg/Lion.


Of course, when things are fine, the world of Ethernet is live and let live.

We’re fine. We’re all fine here now, thank you. How are you?

But when push comes to shove, frames get dropped. Either the buffer fills up and tail drop occurs, or QoS is configured and something like WRED (Weighted Random Early Detection) kicks in to proactively drop frames before tail drop can occur (mostly to keep TCP’s backoff behavior from causing spiky traffic).

The Bit Grim Reaper is way better than leaky buckets

Most congestion remediation methods involve some form of dropping frames. The various protocols running on top of Ethernet, such as IP and TCP/UDP, as well as higher level protocols, were written with this lossful nature in mind. Protocols like TCP have retransmission and flow control, and higher level protocols that employ UDP (such as voice) have other ways of dealing with it when the plumbing gets stopped up. But dropping it like it’s hot isn’t the only way to handle congestion in Ethernet:

Please Hammer, Don’t PAUSE ‘Em

Ethernet has the ability to employ flow control on physical interfaces, so that when congestion is about to occur, the receiving port can signal to the sending port to stop sending for a period of time. This is referred to simply as 802.3x Ethernet flow control, or as I like to call it, old-timey flow control, as it’s been in Ethernet since about 1997. When a receive buffer is close to being full, the receiving side will send a PAUSE frame to the sending side.
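For the curious, a PAUSE frame is a tiny MAC control frame: destination 01-80-C2-00-00-01, EtherType 0x8808, opcode 0x0001, and a pause time measured in 512-bit-time quanta. A rough sketch of building one (illustrative only; don’t go injecting these into your network):

```python
import struct

def pause_frame(src_mac: bytes, quanta: int) -> bytes:
    # 802.3x PAUSE: MAC control frame to 01-80-C2-00-00-01, EtherType 0x8808,
    # opcode 0x0001, pause time in units of 512 bit times. Padded to 60 bytes.
    dst = bytes.fromhex("0180c2000001")
    frame = dst + src_mac + struct.pack("!HHH", 0x8808, 0x0001, quanta)
    return frame.ljust(60, b"\x00")

frame = pause_frame(bytes.fromhex("0025b501a2b3"), quanta=0xFFFF)  # "stop for a while"
print(frame.hex())
```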

Too legit to drop

A wide variety of Ethernet devices support old-timey flow control, everything from data center switches to the USB dongle for my MacBook Air.


One of the drawbacks of old-timey flow control is that it pauses all traffic, regardless of any QoS considerations. This creates a condition referred to as HoL (Head of Line) blocking, and can cause higher priority (and latency sensitive) traffic to get delayed on account of lower priority traffic. To address this, a new type of flow control was created called 802.1Qbb PFC (Priority Flow Control).

PFC allows a receiving port to send PAUSE frames that only affect specific CoS lanes (0 through 7). Part of the 802.1Q standard is a 3-bit field that represents the Class of Service, giving us a total of 8 classes of service, though two are traditionally reserved for control plane traffic, so we have six to play with (which, by the way, is a lot simpler than the 6-bit DSCP field in IP). Utilizing PFC, some CoS values can be made lossless, while others remain lossful.
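Conceptually, the per-priority decision looks something like this toy sketch (using CoS 3 as the lossless lane is just a common convention for FCoE, not a requirement):

```python
# Toy PFC behavior: only the CoS values marked lossless get paused when their
# queue fills; everything else just drops.
lossless_cos = {3}

def on_congestion(cos, queue_full):
    if not queue_full:
        return "forward"
    return "send per-priority PAUSE" if cos in lossless_cos else "drop"

for cos in range(8):
    print(cos, on_congestion(cos, queue_full=True))
```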

Why would you want to pause traffic instead of drop traffic when congestion occurs?

Much of the IP traffic that traverses our data centers is OK with a bit of loss. It’s expected. Any protocol will have its performance degraded if packet loss is severe, but most traffic can take a bit of loss. And it’s not like pausing traffic will magically make congestion go away.

But there is some traffic that can benefit from losslessness, and some that just flat out requires it. FCoE (Fibre Channel over Ethernet), a favorite topic of mine, requires losslessness to operate. Fibre Channel is inherently a lossless protocol (through the use of B2B, or buffer-to-buffer, credits), since the primary payload of an FC frame is SCSI. SCSI does not handle loss very well, so FC was engineered to be lossless. As such, priority flow control is one of the (several) requirements for a switch to be able to forward FCoE frames.
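A toy model of how B2B credits enforce that losslessness (a simplification of the real mechanism, but the gist is right):

```python
# Toy buffer-to-buffer credit flow: the sender only transmits while it holds
# credits; each R_RDY from the receiver returns one credit. No credits, no
# frames -- which is how Fibre Channel avoids dropping SCSI on the floor.
class B2BLink:
    def __init__(self, credits):
        self.credits = credits

    def send_frame(self):
        if self.credits == 0:
            return "wait"          # back-pressure instead of loss
        self.credits -= 1
        return "frame sent"

    def receive_r_rdy(self):
        self.credits += 1          # receiver freed a buffer

link = B2BLink(credits=2)
print(link.send_frame(), link.send_frame(), link.send_frame())  # third call waits
link.receive_r_rdy()
print(link.send_frame())
```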

iSCSI is also a protocol that can benefit from pause congestion handling rather than dropping. Instead of encapsulating SCSI into FC frames, iSCSI encapsulates SCSI into TCP segments. This means that if a TCP segment is lost, it will be retransmitted. So at first glance it would seem that iSCSI can handle loss fine.

From a performance perspective, TCP suffers mightily when a segment is lost, because of TCP congestion management techniques. When a segment is lost, TCP backs off on its transmission rate (specifically, the number of segments in flight without acknowledgement), and then ramps back up again. By making the iSCSI traffic lossless, packets are slowed down during congestion, but the TCP congestion algorithm isn’t triggered. As a result, many iSCSI vendors recommend turning on old-timey flow control to keep packet loss to a minimum.
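Here’s a very rough caricature of why a single drop hurts so much: a toy congestion-window simulation, not a faithful model of any particular TCP variant:

```python
# Very rough caricature of TCP's reaction to loss: the congestion window is
# halved on a drop, then claws its way back up. Lossless Ethernet avoids
# triggering this at all (the traffic is paused instead of dropped).
cwnd, history = 64, []
for rtt in range(12):
    lost = (rtt == 5)              # a single drop mid-stream
    cwnd = max(cwnd // 2, 1) if lost else cwnd + 1
    history.append(cwnd)
print(history)                     # [65, 66, ..., 34, 35, ...] -- the dip is the drop
```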

However, many switches today can’t actually do full losslessness. Take the venerable Catalyst 6500. It’s a switch that would be very common in data centers, and it is a frame murdering machine.

The problem is that while the Catalyst 6500 supports old-timey flow control (it doesn’t support PFC) on physical ports, there’s no mechanism that I’m aware of to prevent buffer overruns from one port to another inside the switch. Take the example of two ingress Gigabit Ethernet ports sending traffic to a single egress Gigabit Ethernet port. Both ingress ports are running at line rate. There’s no signaling (at least that I’m aware of; I could be wrong) that would prevent the ingress ports from overwhelming the transmit buffer of the egress port.

Many frames enter, not all leave

This is like flying to Hawaii and not reserving a hotel room before you get on the plane. You could land and have no place to stay. Because there’s no way to ensure losslessness on a Catalyst 6500 (or many other types of switches from various vendors), the Catalyst 6500 is like Thunderdome. Many frames enter, not all leave.


The new generation of DCB (Data Center Bridging) switches, however, uses a concept known as VoQ (Virtual Output Queues). With VoQs, the ingress port will not send a frame to the egress port unless there’s room. If there isn’t room, the frame stays in the ingress buffer until there is. If the ingress buffer fills up, the switch can signal the sending port it’s connected to to PAUSE (either old-timey pause or PFC).
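A toy sketch of the VoQ idea (my own simplification, not any vendor’s scheduler): frames only move from ingress to egress when the egress has room; otherwise they wait at ingress, where back-pressure can be applied:

```python
# Toy virtual output queue: each ingress port keeps a queue per egress port and
# only hands a frame to the egress when there's room, instead of blindly
# overrunning the egress buffer.
from collections import defaultdict, deque

EGRESS_DEPTH = 4
egress_buf = defaultdict(deque)                 # egress port -> frames waiting to serialize
voq = defaultdict(deque)                        # (ingress, egress) -> frames held at ingress

def ingress_frame(in_port, out_port, frame):
    if len(egress_buf[out_port]) < EGRESS_DEPTH:
        egress_buf[out_port].append(frame)      # room at egress: send it across the fabric
    else:
        voq[(in_port, out_port)].append(frame)  # no room: hold it at ingress (and maybe PAUSE)

for i in range(6):
    ingress_frame(1, 3, f"frame-{i}")
print(len(egress_buf[3]), len(voq[(1, 3)]))     # 4 made it, 2 are held at ingress
```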

This is a technique that’s been used in Fibre Channel switches from both Brocade and Cisco (as well as others) for a while now, and it’s now making its way into DCB Ethernet switches from various vendors. Cisco’s Nexus line, for example, makes use of VoQs, and so do Brocade’s VCS switches. Some type of losslessness between internal ports is required in order to be a DCB switch, since FCoE requires losslessness.

DCB switches require lossless backplanes/internal fabrics, support for PFC, ETS (Enhanced Transmission Selection, a way to reserve bandwidth on various CoS lanes), and DCBx (a way to communicate these capabilities to adjacent switches). This makes them capable of a lot of cool stuff that non-DCB switches can’t do, such as losslessness.

One thing to keep in mind, however, is when Layer 3 comes into play. My guess is that even in a DCB switch that can do Layer 3, losslessness can’t be extended beyond a Layer 2 boundary. That’s not an issue with FCoE, since it’s only Layer 2, but iSCSI can be routed.

Po-tay-to, Po-ta-to: Analogies and NPIV/NPV

In a recent post, I took a look at the Fibre Channel subjects of NPIV and NPV, both topics covered in the CCIE Data Center written exam (currently in beta, take yours now, $50!). The post generated a lot of comments. I mean, a lot. Over 50 so far (and still going).  An epic battle (although very unInternet-like in that it was very civil and respectful) brewed over how Fibre Channel compares to Ethernet/IP. The comments look like the aftermath of the battle of Wolf 359.

Captain, the analogy regarding squirrels and time travel didn’t survive

One camp, led by Erik Smith from EMC (who co-wrote the best Fibre Channel book I’ve seen so far, and it’s free), compares the WWPNs to IP addresses, and FCIDs to MAC addresses. Some others, such as Ivan Pepelnjak and myself, compare WWPNs to MAC addresses, and FCIDs to IP addresses. There were many points and counter-points. Valid arguments were made supporting each position. Eventually, people agreed to disagree. So which one is right? They both are.

Wait, what? Two sides can’t be right, not on the Internet!

When comparing Fibre Channel to Ethernet/IP, it’s important to remember that they are different. In fact, significantly different. The only reason to relate Fibre Channel to Ethernet/IP is to introduce those who are familiar with Ethernet/IP to the world of Fibre Channel. Many (most? all?) people learn by building associations from known subjects (in our case Ethernet/IP) to lesser-known subjects (in this case Fibre Channel).

Of course, any association includes its inherent inaccuracies. We purposefully sacrifice some accuracy in order to attain relatability. Specific details and inaccuracies get glossed over. To some, introducing any inaccuracy is sacrilege. To me, that’s being overly pedantic. Pedantic details are for the expert level. Using pedantic facts as an admonishment of an analogy misses the point entirely. With any analogy, there will always be inaccuracies, and there will always be many analogies to be made.

Personally, I still prefer the WWPN ~= MAC/FC_ID ~= IP approach, and will continue to use it when I teach. But the other approach I believe is completely valid as well. At that point, it’s just a matter of preference. Both roads lead to the same destination, and that is what’s really important.

Learning always happens in layers. Coat after coat is applied, increasing in accuracy and pedantic detail as you go along. Analogies are a very useful and effective tool for learning any subject.
