Top 5 Reasons The Evaluator Group Screwed Up

It’s been a while since the trainwreck of a “study” commissioned by Brocade and performed by The Evaluator Group,  but it’s still being discussed in various storage circles (and that’s not good news for Brocade). Some pretty much parroted the results, seemingly without reading the actual test. Then got all pissy when confronted about it.  I did a piece on my interpretations of the results, as did Dave Alexander of WWT and J Metz of Cisco. Our mutual conclusion can be best summed up with a single animated GIF.

 

[Animated GIF: bullshit]

But since a bit of time has passed and I've had time to absorb Dave and J's opinions (as well as others'), I've come up with a list of the Top 5 Reasons The Evaluator Group Screwed Up. This isn't the complete list, of course, but it covers some of the more glaring problems. Let's start with #1:

Reason #1: I Have No Idea What I’m Doing

Their hilariously bad conclusion for the higher variance in response times and higher CPU usage was that it was caused by the software initiators. Except they didn't use software initiators. They had actually configured hardware initiators, and didn't know it. Let that sink in: they're charged with performing an evaluation, without knowing what they're doing.

Here's their response:

The Cisco UCS VIC 1240 hardware CNAs were utilized. Referring to them as software initiators caused some confusion. The Cisco VIC is a hardware initiator and we configured them with virtual HBAs. Evaluator Group has no knowledge of the internal architecture of the VIC or its driver. Our commentary of the possible cause for higher CPU utilization is our opinion and further analysis would be required to pinpoint the specific root cause.

Of course, it wasn’t the software initiator. They didn’t use a software initiator, but they were so clueless, they didn’t know they’d actually used a hardware initiator. Without knowing how they performed their tests (since they didn’t publish their methodology) it’s purely speculation, but it looks like the problem was caused by congestion (from them architecting the UCS solution incorrectly).

Reason #2: They’re Hilariously Bad At Math.

They claimed FCoE required 50% more cables, based on the fact that there were 50% more cables in the FCoE solution than the FC solution. Which makes sense… except that the FC system had zero Ethernet.

That's right: in the HP/Fibre Channel solution, each blade had absolutely zero Ethernet connectivity. None. Zilch. In the Cisco UCS solution, every blade had full Ethernet and Fibre Channel connectivity. Why did they do that? Probably because had they included any network connectivity for the HP system, the cable count would have shifted in FCoE's favor. Let me state this again, because it's astonishingly stupid: they claimed FCoE (which included Ethernet and FC connectivity) required more cables, while not including any network connectivity for the HP/FC system.

[Image: not even mad]

Also, they made some power/cooling claims, despite the fact that the UCS solution didn't require a separate FC switch (it's capable of being a full-fledged Fibre Channel switch by itself), while the HP solution would have required a separate pair of Ethernet switches (which weren't included). So yeah, their math is a bit off. Had they done things, you know, correctly, the power, cooling, and cable count would have flipped in favor of FCoE.

Reason #3: UCS is Hard, You Guys!

They whinged about UCS being more difficult to set up. Anytime you're dealing with unfamiliar technology, it's natural that it's going to be more difficult. However, they claimed that they had zero experience with HP as well (seriously, who at Brocade hired these guys?). How easy is UCS? Here is a video, done in Amsterdam, where a couple of Cisco techs added a new chassis and blade and had it up and running ESXi in less than 30 minutes, from in the box to booted. Cisco UCS is different than other blade systems, but it's also very easy (and very quick) to stand up. And keep in mind, the video I linked was done in Amsterdam, so they were probably baked.

Reason #4: It Contradicts Everyone Else’s Results (Especially those that know what they’re doing)

For the past couple of years, VMware and NetApp have been doing performance tests on various storage protocols. Here’s one from a few years ago, which includes (native) 4 and 8 Gbit Fibre Channel, 10 Gbit FCoE, 10 Gbit iSCSI, and 10 Gbit NFS. The conclusion? The protocol doesn’t much matter. They all came out about the same when normalized for bandwidth. The big difference is in the storage backend. At least they published their methodology (I’m looking at you, Evaluator Group). Here’s one from Demartek that shows a mixture of storage protocols saturating 10 Gbit Ethernet. Again, the limitation is only the link speed itself, not the protocol. And again, again, Demartek published their methodology.

Reason #5: How Did They Set Everything Up? Magic!

Most of the time with these commissioned reports, the details of how it’s configured are given so that the results can be reproduced and audited. How did the Evaluator Group set up their environment?

[Animated GIF: GOB doing magic]

As far as I can tell, magic. There are several things they could have easily gotten wrong with the UCS setup, and given their mistake about software/hardware initiators, quite likely did. They didn't even mention which storage vendor they used.

So there you have it. A bit of a re-hash, but hey, it was a dumb report. The upside though is that it did provide me with some entertainment.

Fibre Channel: The Heart of New SDN Solutions

From Juniper to Cisco to VMware, companies are rolling out new SDN solutions. Juniper's Contrail, Cisco's ACI, VMware's NSX, and more are all vying to be the next generation of data center networking. What is surprising, however, is what's at the heart of these new technologies.

Is it VXLAN, NVGRE, OpenFlow? Nope. It's Fibre Channel.

Seriously.

If you think about it, it makes sense. Fibre Channel has been doing fabrics since before we ever called Ethernet fabrics, well, fabrics. And this isn’t the first time that Fibre Channel has shown up in unusual places. There’s a version of Fibre Channel that runs inside certain airplanes, including jet fighters like the F-22.

[Image: F-22 Raptor participating in Red Flag]

Keep the skies safe from FCoE (sponsored by the Evaluator Group)

The new generation of switches is capable of Data Center Bridging (DCB), which enables Fibre Channel over Ethernet. These chips are also capable of doing native Fibre Channel. So rather than build complicated VPLS fabrics or routed networks, various data center switching companies are leveraging the inherent Fibre Channel capabilities of the merchant silicon and building Fibre Channel-based underlay networks to support an IP-based overlay.

The buffer-to-buffer (B2B) credit system and losslessness of Fibre Channel, plus the new 32/128 Gigabit interfaces in the newest Fibre Channel standard, are all being leveraged for these underlays. I find it surprising that so many companies are adopting this; you'd think it'd be just Brocade. But Cisco, Arista (who notoriously shunned FCoE), and Juniper are all on board with new or announced SDN offerings that are based mostly or in part on Fibre Channel.

However, most of the switches from various vendors are primarily Ethernet today, so the 10/40 Gigabit interfaces can run FCoE until more switches are available with native FC interfaces. Of course, these switches will still be required to have a number of native Ethernet ports in order to connect to border networks that aren't part of the overlay network, so there will still be a need for Ethernet. But it seems the market has spoken, and it wants Fibre Channel.

 

Changing Data Center Workloads

Networking-wise, I’ve spent my career in the data center. I’m pursuing the CCIE Data Center. I study virtualization, storage, and DC networking. Right now, the landscape in the network is constantly changing, as it has been for the past 15 years. However, with SDN, merchant silicon, overlay networks, and more, the rate of change in a data center network seems to be accelerating.


Things are changing fast in data center networking. You get the picture

Whenever you have a high rate of change, you’ll end up with a lot of questions such as:

  • Where does this leave the current equipment I’ve got now?
  • Would SDN solve any of the issues I’m having?
  • What the hell is SDN, anyway?
  • I’m buying vendor X, should I look into vendor Y?
  • What features should I be looking for in a data center networking device?

I'm not actually going to answer any of these questions in this article. I am, however, going to profile some of the common workloads that you find in data centers currently. Your data center may have one, a few, or all of these workloads. It may not have any of them. Your data center may have one of the workloads listed, but my description and/or requirements are way off. All certainly possible. These are generalizations, and with all generalizations your mileage may vary. With that disclaimer out of the way, strap in. Let's go for a ride.

Traditional Virtualization

It's interesting to call something that only exploded into the data center in a big way around 2008 "traditional" rather than "new-fangled", but that's the situation we have here. The traditional virtualization workload is centered primarily around VMware vSphere. There are other traditional virtualization products of course, such as Red Hat's RHEV, Xen, and Microsoft Hyper-V, but VMware has the largest market share for this by far.

  • Latency is not a huge concern (30 usecs not a big deal)
  • Layer 2 adjacencies are mandatory (required for vMotion)
  • Large Layer 2 domains (thousands of hosts layer 2 adjacent)
  • Converged infrastructures (storage and data running on the same wires, FCoE, iSCSI, NFS, SMB3, etc.)
  • Buffer requirements aren’t typically super high. Bursting isn’t much of an issue for most workloads of this type.
  • Fibre Channel is often the storage protocol of choice, along with NFS and some iSCSI as well

Cisco has been especially successful in this realm with the Nexus line because of vPC, FabricPath, OTV and (to a much lesser extent) LISP, as they address some of the challenges with workload mobility (though not all of them, such as the speed of light). Arista, Juniper, and many others also compete in this particular realm, but Cisco is the market leader.

With the multi-pathing Layer 2 technologies such as SPB, TRILL, Cisco FabricPath, and Brocade VCS (the latter two are based on TRILL), you can build multi-spine leaf/spine (Clos) networks that you can't build with spanning-tree based networks, even with MLAG.
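
For a flavor of what that looks like, here's a minimal FabricPath sketch on NX-OS (a Nexus 5500/7000-style config; the switch ID, VLAN, and interfaces are made-up examples, and you'd obviously need the feature set licensed):

install feature-set fabricpath
feature-set fabricpath
! each switch in the FabricPath domain gets a switch ID
fabricpath switch-id 11
! VLANs that ride the fabric are set to fabricpath mode
vlan 10
  mode fabricpath
! core-facing links carry FabricPath instead of classic Ethernet/STP
interface ethernet 1/1-2
  switchport mode fabricpath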

This type of network is what I typically see in data centers today. However, there is a shift towards Layer 3 networks and cloud workloads over traditional virtualization, so it will be interesting to see how long traditional virtualization lasts.

VDI

VDI (virtual desktops) is a workload with the exact same requirements as traditional virtualization, with one main difference: the storage requirements are much, much higher.

  • Latency is not as important (most DC-grade switches would qualify), especially since latency is measured in milliseconds for remote desktop users
  • Layer 2 adjacencies are mandatory (required for vMotion)
  • Large Layer 2 domains
  • Converged infrastructures
  • Buffer requirements aren’t typically very high
  • High-end storage backends. All about the IOPs, y’all

For storage here, IOPs are the biggest concern. VDI eats IOPs like candy.

Legacy Workloads

This is the old, old school. And by old school, I mean late 90s, early 2000s, before virtualization changed the landscape. There are still quite a few crusty old servers, with uptimes measured in years, running long-abandoned applications. The problem is, these types of applications are usually mission critical and/or generating significant revenue. Organizations just haven't found a way out of it yet. And hey, they're working right now. Often running on proprietary Unix systems, they couldn't or wouldn't be migrated to a virtualized environment (where they would be much easier to deal with).

The hardware still works, so why change something that works? Because it would be tough to find replacements. It's also probably out of vendor support.

  • Latency? Who cares. Is it less than 1 second? Good enough.
  • Layer 2 adjacencies, if even required, are typically very small, typically just needed for the local clustering application (which is usually just stink-out-loud awful)
  • 100 megabit and gigabit Ethernet typically. 10 Gigabit? That’s science-fiction talk!
  • Buffers? You mean like, what shines the floor?


My own personal opinion is that this is the only place where Cisco Catalyst switches belong in a data center, and even then only because they’re already there. If you’re going with Cisco, I think everything else (and everything new) in the DC should be Nexus.

Cloud Workloads (Private Cloud)

If you look at a cloud workload, it looks very similar to the previous traditional virtualization workload. They both use VMs sitting on top of hypervisors. They both have an underlying infrastructure of compute, network, and storage to support these VMs. The difference is primarily in the operational model.

It’s often described as the difference between pets and cattle. With traditional virtualization, you have pets. You care what happens to these VMs. They have HA and DRS and other technologies to care for them. They’re given clever names, like Bart and Lisa, or Happy and Sleepy. With cloud VMs, they’re not given fun names. We don’t do vMotion/Live Migration with them. When we need them, they’re spun up. When they’re not, they’re destroyed. We don’t back them up, we don’t care if the host they reside on dies so long as there are other hosts carrying the workload. The workload is automatically sharded across the available hosts using logic in the application. Instead of backups, templates are used to create new VMs when the workload increases. And when the workload decreases, some of the VMs get destroyed. State is not kept on any single VM, instead the state of the application (and underlying database) is sharded to the available systems.

This is very different from traditional virtualization. Because the workload distribution is handled by the application, we don't need to do vMotion, and thus don't need Layer 2 adjacencies. This makes it much more flexible for the network architects to put together a network to support this type of workload. Storage for this type of workload also tends to be IP-based (NFS, iSCSI) rather than FC-based (native Fibre Channel or FCoE).

With cloud-based workloads, there's also a huge self-service component. VMs are spun up and managed by developers or end users, rather than the IT staff. There's typically some type of portal that end users can use to spin up/down resources. Chargebacks are also a component, so that even in a private cloud setting, there's a resource cost associated with usage that can be tracked.

OpenStack is a popular choice for these cloud workloads, as are Amazon and Windows Azure. The former is a private cloud, with the latter two being public clouds.

  • Latency requirements are mostly the same as traditional virtualization
  • Because vMotion isn’t required, it’s all Layer 3, all the time
  • Storage is mostly IP-based, running on the same network infrastructure (not as much Fibre Channel)
  • Buffer requirements are typically the same as traditional virtualization
  • VXLAN/NVGRE burned into the chips for SDN/Overlays

You can use much cheaper switches for this type of network, since the advanced Layer 2 features (OTV, FabricPath, SPB/TRILL, VCS) aren’t needed. You can build a very simple Layer 3 mesh using inexpensive and lower power 10/40/100 Gbit ports.

However, features such as VXLAN/NVGRE encap/decap are increasingly important. The new Trident2 chips from Broadcom support this now, and several vendors, including Cisco, Juniper, and Arista, all have switches based on this new SoC (switch-on-chip) from Broadcom.
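
To give you an idea, here's a minimal flood-and-learn VXLAN sketch on NX-OS for a Trident2-based switch (the VLAN, VNI, and multicast group are made up, and it assumes a multicast-enabled Layer 3 underlay is already in place):

feature nv overlay
feature vn-segment-vlan-based
! map a local VLAN to a VXLAN network identifier (VNI)
vlan 10
  vn-segment 10010
! the NVE interface does the encap/decap, sourced from a loopback in the underlay
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10010 mcast-group 239.1.1.10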

High Frequency Trading

This is a very specialized market, and one that has very specialized requirements.

  • Latency is of the utmost concern, to the point of making sure ports are on the same ASIC. Latency is measured in nanoseconds; microseconds are an eternity
  • 10 Gbit at the very least
  • Money is typically not a concern
  • Over-subscription is non-existent (again, money no concern)
  • Buffers are a trade off, they can increase latency but also prevent packet loss

This is a very niche market, one that Arista dominates. Cisco and a few other vendors have made small inroads here, using the same merchant silicon that Arista uses; however, Arista has had huge experience in this market. Every tick of the clock can mean hundreds of thousands of dollars in a single trade, so companies have no problem throwing huge amounts of money at this issue to shave every last nanosecond off of latency.

Hadoop/Big Data

  • Latency is of high concern
  • Large buffers are critical
  • Over-subscription is low
  • Layer 2 adjacency is neither required nor desired
  • Layer 3 Leaf/spine networks
  • Storage is distributed, sharded over IP

Arista has also been extremely successful in this market. They glue PC RAM onto their switch boards to provide huge buffers (around 760 MB) to each port, so the switch can absorb quite a bit of bursty traffic, which occurs a lot in these types of setups. That's about 0.6 seconds of buffering on a 10 Gbit link. Huge buffers will not prevent congestion, but they do help absorb situations where you might be overwhelmed for a short period of time.

Since nodes don’t need to be Layer 2 adjacent, simple Layer 3 ECMP networks can be created using inexpensive and basic switches. You don’t need features like FabricPath, TRILL, SPB, OTV. Just fast, inexpensive, low power ports. 10 Gigabit is the bare minimum for these networks, with 40 and 100 Gbit used for connectivity to the spines. Arista (especially with their 7500E platform) does very well in this area. Cisco is moving into this area with the Nexus 9000 line, which was announced late last year.
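
To show just how simple these fabrics are, here's a rough sketch of one leaf's spine-facing uplinks using OSPF and routed ports on NX-OS (interfaces and addresses are made up; BGP works just as well, and ECMP across the equal-cost spine paths comes along for free):

feature ospf
router ospf 1
  router-id 10.0.0.11
! each uplink is a routed point-to-point link: no spanning-tree, no VLANs
interface ethernet 1/49
  no switchport
  ip address 10.1.1.1/30
  ip router ospf 1 area 0.0.0.0
interface ethernet 1/50
  no switchport
  ip address 10.1.1.5/30
  ip router ospf 1 area 0.0.0.0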

 

Conclusions

Understanding the requirements for the various workloads may help you determine the right switches for you. It's interesting to see how quickly the market is changing. Perhaps 2 years ago, large Layer 2 networks seemed like the immediate future. Then all of a sudden Layer 3 mesh networks became popular again. Then you've got SDN like VMware's NSX and Cisco's ACI on top of that. Interesting times, man. Interesting times.

FCoE versus FC Farce (I’m Tellin’ All Y’All It’s Sabotage!)

Updates 2/6/2014:

  • @JohnKohler noticed that the UCS Manager screenshot used (see below) is from a UCS Emulator, not any system they used for testing.
  • Evaluator Group promises answers to questions that both Dave Alexander (@ucs_dave) and I have brought up.

On my way back from South America/Antarctica, I was pointed to a bake-off/performance test commissioned by Brocade and performed by a company called Evaluator Group. It compared the performance of edge FCoE (non-multi-hop FCoE) to native 16 Gbit FC. The FCoE test was done on a Cisco UCS blade system connecting to a Brocade switch, and the FC was done on an HP C7000 chassis system connecting to the same switch. At first glance, it would seem to show that FC is superior to FCoE for a number of reasons.

I'm not a Cisco fanboy, but I am a Cisco UCS fanboy, so I took great interest in the report. (I also work for a Cisco Learning Partner as an instructor and courseware developer.) But I also like Brocade, and have a huge amount of respect for many of the Brocade employees I have met over the years. These are great and smart people, and they serve their customers well.

First, a little bit about these types of reports. They’re pretty standard in the industry, and they’re commissioned by one company to showcase superiority of a product or solution against one or more of their competitors. They can produce some interesting information, but most of the time it’s a case of: “Here’s our product in a best-case scenario versus other products in a mediocre-to-worst case scenario.” No company would release a test showing other products superior to theirs of course, so they’re only released when a particular company comes out on top, or (most likely) the parameters are changed until they do. As such, they’re typically taken with a grain of salt. In certain markets, such as the load balancer market, vendors will make it rain with these reports on a regular basis. 

But for this particular report, I found several substantial issues with it which I’d like to share. It’s kind of a trainwreck. Let’s start with the biggest issue, one that is rather embarrassing.

What The Frak?

On page 17 check out the Evaluator Group comments:

“…This indicates the primary factor for higher CPU utilization within the FCoE test was due to using a software initiator, rather than a dedicated HBA. In general, software initiators require more server CPU cycles than do hardware initiators, often negating any cost advantages.”

For one, no shit. Hardware initiators will perform better than software initiators. However, the Cisco VIC 1240 card (which according to page 21 was included in the UCS blades) is a hardware initiator card. Being a CNA (converged network adaptor) the OS would see a native FC interface. With ESXi you don’t even need to install extra drivers, the FC interfaces just show up. Setting up a software FCoE initiator would actually be quite a bit more difficult to get going, which might account for why it took so long to configure UCS. Configuring a hardware vHBA in UCS is quite easy (it can be done in literally less than a minute).

Using software initiators against hardware FC interfaces is beyond a nit-pick in a performance test. It would be downright sabotage.


I asked the @Evalutor_Group account if they really did use software initiators, and they responded in the affirmative.

Wow. First of all, software FCoE initiators are absolutely not a standard configuration for UCS. In the three years I've been configuring and teaching UCS, I've never seen or even heard of FCoE software initiators being used, either in production or in a testing environment. The only reason you *might* want to do FCoE software initiators is when you've got the Intel mezzanine card (which is not a CNA, just an Ethernet card), and want to test FCoE. However, page 21 shows the UCS blades as having the VIC 1240 cards, not the Intel card.

So where were they getting that it was standard configuration?

They countered with a reference to a UCS document regarding the VIC.

The part they referenced is as follows:

The adapter offerings are:

■ Cisco Virtual Interface Cards (VICs)

Cisco developed Virtual Interface Cards (VICs) to provide flexibility to create multiple NIC and HBA devices. The VICs also support adapter Fabric Extender and Virtual Machine Fabric Extender technologies.

■ Converged Network Adapters (CNAs)

Emulex and QLogic Converged Network Adapters (CNAs) consolidate Ethernet and Storage (FC) traffic on the Unified Fabric by supporting FCoE.

■ Cisco UCS Storage Accelerator Adapters

Cisco UCS Storage Accelerator adapters are designed specifically for the Cisco UCS B-series M3 blade servers and integrate seamlessly to allow improvement in performance and relief of I/O bottlenecks.

Wait… I think they think that the VIC card is a software-only FCoE card. It appears they came to that conclusion because this particular document doesn't specifically call the VIC a CNA (other UCS documents clearly and correctly indicate that the VIC card is a CNA). Because it mentions the VICs separately from the traditional CNAs from Emulex and Qlogic, it seems they believe it not to be a CNA, and thus a software card.

So it may be they did use hardware initiators, and mistakenly called them software initiators. Or they actually did configure software initiators, and did a very unfair test.

No matter how you slice it, it's troubling. On one hand, if they did configure software initiators, they either ignorantly or willfully sabotaged the FCoE results. If they just didn't understand the basic VIC concept, it means they set up a test without understanding the most basic aspects of the Cisco UCS system. We're talking 101-level stuff, too. I suspect it's the latter, but since the only configuration of any of the devices they shared was a worthless screenshot of UCS Manager, I can't be sure.

This lack of understanding could have a significant impact on the results. For instance, on page 14 the response time starts to get worse for FCoE at about the 1200 MB/s mark. That’s roughly the max for a single 10 Gbit Ethernet FCoE link (1250 MB/s). While not definitive, it could mean that the traffic was going over only one of the links from the chassis to the Fabric Interconnect, or the traffic distribution was way off. My guess is they didn’t check the link utilization, or even know how, or how to fix it if it were off.

Conclusions First?

One of the odder aspects of this report is where you find some of the conclusions, such as this one:

“Evaluator Group believes that Fibre Channel connectivity is required in order to achieve the full benefits that solid-state storage is able to provide.”

That was page 1. The first page of a report is a weird place for a conclusion like that. Makes you sound a little… biased. These reports usually try to be impartial (despite being commissioned by a particular vendor). This one starts right out with an opinion.

Lack of Configuration

The amount of configuration they provide for the setup is very sparse. For the UCS side, all they provide is one pretty worthless screenshot. Same for the HP system. For the UCS part, it would be important to know how they configured the vHBAs, and how they configured the two 10 Gbit links from the IOMs to the Fabric Interconnects. Were they configured to use a fabric port channel, or static pinning? So there's no way to duplicate this test. That's not very transparent.

Speaking of configuration, one of the issues they had with the FCoE side was how long it took them to stand up a UCS system. On page 11, second paragraph, they mention they needed the help of a VAR to get everything configured. They even made a comment on page 10:

“…this approach was less intuitive during installation than other enterprise systems Evalutor Group has tested.”

[Image: "You don't say?"]

So wow, you’ve got an environment that you’re unfamiliar with (we’ll see just how unfamiliar in a minute), and it took you *gasp* longer to configure? I’m a bit of an expert on Cisco UCS. I’m kind of a big deal. I have many paper-bound UCS books and my apartment smells of rich mahogany. I teach it regularly. And I’m not nearly as familiar with the C7000 system from HP, so I’d be willing to bet that it would take me longer to stand up an HP system than it would a Cisco UCS system. Anyone want in on that action?

Older Version? Update 2-6-2014: Screenshot Is A Fake

[Image: "It's a fake!"]

@JohnKohler noticed that the serial number in the screenshot on page 23 is "1", which means the screenshot is not from any physical instance of a UCS Manager, but from the UCS Emulator. The date (if accurate on the host machine) shows June of 2010, so it's a very, very old screenshot (probably of 1.3 or 1.4). So we have no idea what version of UCS they used for these tests (and more importantly, how they were configured).

Based on the screenshot on page 23, the UCS version is 2.0. How can you tell? Take a look where it says “Unconfigured Ports”. As of 2.1, Cisco changed the way ports are shown in the GUI. 2.1 and later do not have a sub-menu for unconfigured ports. Only 2.0 and prior.

nounconfiguredports

In version 2.1, there’s no “Unconfigured Ports” sub-menu. 

evalgroupucs

From Page 21, you can see “Unconfigured Ports”, indicating it’s UCS 2.0 or earlier.

If the test was done in November 2013, that would put it one major revision behind, as 2.1 was released in November of 2012 (2.2 was released in December 2013). We don't know where they got the equipment, but if it was acquired anytime in the past year, it likely came with 2.1 already installed (though that's not definite). To get to 2.0, they'd have to downgrade. I'm not sure why they would have done that, and if they brought in a VAR with certified UCS people, they would have likely recommended 2.1. With 2.1, they could have directly connected the storage array to the UCS Fabric Interconnects. UCS 2.1 can do Fibre Channel zoning, and can function as a standard Fibre Channel switch. The Brocade switch wouldn't have been needed, and the links to the storage array would be 8 Gbit instead of 16 Gbit.

They also don’t mention the solid state array vendor, so we don’t know if there was capability to do FCoE directly to the storage array. Though not terribly common yet, FCoE connection to a storage array is done in production environments and would have benefited the UCS configuration if it were possible (and would be the preferred way if competing with 16 Gbit FC). There would have been the ability to do an LACP port channel between the Fabric Interconnects, providing better load distribution and redundancy.

Power Mad

The claim that the UCS system uses more power is laughable, especially since they specifically mentioned this setup is not how it would be deployed in production. Nothing about this setup was production worthy, and it wasn’t supposed to be. It’s fine for this type of test, but not for production. It’s non-HA, and using only 2 blades would be a waste of a blade system (and power, for either HP or Cisco). If you were only using a handful of servers, buying pizza boxes would be far more economical. Blades from either HP or Cisco only make sense past a certain number to justify the enclosures, networking, etc. If you want a good comparison, do 16 blades, or 40 blades, or 80 blades. Also include the Ethernet network connectivity. The UCS configuration has full Ethernet connectivity, the HP configuration as shown has squat.

Even by competitive report standards, this one is utter bunk. If I was Brocade, I would pull the report. With all the technology mistakes and ill-found conclusions, it’s embarrassing for them. It’s quite easy for Cisco to rip it to shreds.

(Just so there’s no confusion, no one paid me or asked me to write this. In fact, I’m still on PTO. I wrote this because someone is wrong on the Internet…)

EtherChannel and Port Channel

In the networking world, you’ve no doubt heard the terms EtherChannel, port channel, LAG, MLAG, etc. These of course refer to taking multiple Ethernet connections and treating them as a single link. But one of the more confusing aspects I’ve run into is what’s the difference, if any, between the term EtherChannel and port channel? Well, I’m here to break it down for you.


OK, not that kind of break-it-down

First, let’s talk about what is vendor-neutral and what is Cisco trademark. EtherChannel is a Cisco trademarked term (I’m not sure if port channel is), while the vendor neutral term is LAG (Link Aggregation). Colloquially, however, I’ve seen both Cisco terms used with non-Cisco gear. For instance: “Let’s setup an Etherchannel between the Arista switch and the Juniper switch”. It’s kind of like in the UK using the term “hoovering” when the vacuum cleaner says Dyson on the side.

So what’s the difference between EtherChannel and port channel? That’s a good question. I used to think that EtherChannel was the name of the technology, and port channel was a single instance of that technology. But in researching the terms, it’s a bit more complicated than that.

Both Etherchannel and port channel appear in early Cisco documentation, such as this CatOS configuration guide. (Remember configuring switches with the “set” command?) In that document, it seems that port channel was used as the name of the individual instance of Etherchannel, just as I had assumed.


I love it when I’m right

And that seems to hold true in this fairly recent document on Catalyst IOS 15, where EtherChannel is the technology and port channel is the individual instance.

But wait… in this older CatOS configuration guide, it explicitly states:

This document uses the term “EtherChannel” to refer to GEC (Gigabit EtherChannel), FEC (Fast EtherChannel), port channel, channel, and port group.

So it’s a bit murkier than I thought. And that’s just the IOS world. In the Nexus world, EtherChannel as a term seems to be falling out of favor.

Take a look at this Nexus 5000 CLI configuration guide for NXOS 4.0, and you see they use the term EtherChannel. By NX-OS 5.2, the term seems to have changed to just port channel. In the great book NX-OS and Cisco Nexus Switching, port-channel is used as the term almost exclusively. EtherChannel is mentioned once that I can see.

So in the IOS world, it seems that EtherChannel is the technology, and port channel is the interface. In the Nexus world, port channel is used as the term for the technology and the individual interface, though sometimes EtherChannel is referenced.

It's likely that port channel is preferred in the Nexus world because NX-OS is an offspring of SAN-OS, which Cisco initially developed for the MDS line of Fibre Channel switches. Bundling Fibre Channel ports on Cisco switches isn't called EtherChannel, since those interfaces aren't, well, Ethernet. The Fibre Channel bundling technology is instead called a SAN port channel. The command on a Nexus switch to look at a port channel is "show port-channel", while on IOS switches it's "show etherchannel".
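
Here's a quick side-by-side sketch of what that looks like on each platform (interface numbers are made up, and I'm using LACP in both cases):

! Catalyst IOS: the technology is EtherChannel, the interface is a port channel
interface range gigabitethernet 1/0/1 - 2
 channel-group 1 mode active
! verify with:
show etherchannel summary

! Nexus NX-OS: port channel all the way down
feature lacp
interface ethernet 1/1-2
  channel-group 1 mode active
! verify with:
show port-channel summary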

When a dual-homed technology was developed on the Nexus platform, it was called vPC (Virtual Port Channel) instead of VEC (Virtual EtherChannel).

Style Guide

Another interesting aspect to this discussion is that EtherChannel is capitalized as a proper noun, while port channel is not. In the IOS world, it's EtherChannel, though when it's even mentioned in the Nexus world, it's sometimes Etherchannel, without the capital "C". Port channel is often written as either port channel or port-channel (the latter is used almost exclusively in the NX-OS book).

So where does that leave the discussion? Well, I think in very general terms, if you're talking about Cisco technology, Etherchannel, EtherChannel, port channel, port-channel, and LAG are all acceptable terms for the same concept. When discussing IOS, it's probably more correct to use the term EtherChannel. When discussing NX-OS, port channel. But again, either way would work.

First Call I Made When I First Heard About “Gen5 Fibre Channel”

[Image: calling bullshit]

Fibre Channel: Generations

In case you haven’t heard, Brocade is rebranding their 16 Gbit Fibre Channel offerings as Generation 5 Fibre Channel. Upcoming 32 Gbit Fibre Channel will also be called “Gen 6 Fibre Channel”. Seriously.


Cisco’s J Metz responded, and then Brocade responded to that. And a full-on storage smack-down started.

And you thought storage was boring.

It’s exciting!

Brocade is trying to de-emphasize speed as the primary differentiator to a specific Fibre Channel technology, which is weird, since that’s by far the primary differentiator between the generations. This strategy has two major flaws as I see it:

Flaw #1: They're trying to make it look like 16 Gbit FC solves a problem that it really doesn't. Whether you emphasize speed or other technological aspects of 16 Gbit Fibre Channel, 16 Gbit/Gen 5 isn't going to solve any of the major problems that currently exist in the data center, or storage for that matter, at least for the vast majority of Fibre Channel installations. Virtualization workloads, databases, and especially VDI are thrashing our storage systems. However, generally speaking (there are always exceptions) we're not saturating the physical links. Not on the storage array links, not on the ISLs, and definitely not on the server FC links. Primarily, the issue we face in the data center is an IOPS limitation.


The latency differences between Fibre Channel speeds are insignificant compared to the latencies introduced by overwhelmed storage arrays

Or, no wait, guys. Guys. Guys. Check out the… choke point

16 Gbit can give us more throughput, but so can aggregating more 8 Gbit links, especially since single flows/transactions/file operations aren’t likely to eat up more than 8 Gbit (or even a fraction of that). There’s a lower serialization delay and lower latency associated with 16 Gbit, but that’s minuscule compared to the latencies introduced by storage systems. The vast majority of workloads aren’t likely to see significant benefit moving to 16 Gbit. So for right now, those in the storage world are concentrating on the arrays, and not the fabric. And that’s where they should be concentrating.

From one of Brocade’s posts, they mention this of Gen5 Fibre Channel:

 “It’s about the innovative technology and unique capabilities that solve customer challenges.”

Fibre Channel is great. And Brocade has a great Fibre Channel offering. For the most part, better than Cisco. But there isn’t any innovation in this generation of Fibre Channel other than the speed increase. I’m kind of surprised Brocade didn’t call it something like “CloudFC”. This reeks of cloud washing, without the use of the word cloud. I mean, it’s Fibre Channel. It’s reliable, it’s simple to implement, best practices are easily understood, and it’s not terribly sexy, and calling it Gen5 isn’t going to change any of that.

Flaw #2: It creates market confusion.

Cisco doesn't have any 16 Gbit Fibre Channel offerings (they're pushing for FCoE, which is another issue). And when they do get 16 Gbit, they're probably not going to call it Gen 5. Nor are most of the other Fibre Channel vendors, such as Emulex, Qlogic, NetApp, EMC, etc. HP and Dell have somewhat gone with it, but they kind of have to since they sell re-branded Brocade kit (it's worth noting that even HP's material is peppered with the words "16 Gbit"). So having another term is going to cause a lot of unnecessary conversations.

Here’s how I suspect a lot of Brocade conversations with new and existing customers will go:

“We recommend our Gen 5 products”

“What’s Gen 5?”

“It’s 16 Gbit Fibre Channel”

“OK, why didn’t you just say that?”

This is what's happened in the load balancing world. A little over six years ago, Gartner and marketing departments tried to rename load balancers to "Application Delivery Controllers", or ADCs for short. No one outside of marketing knows what the hell an ADC is. But anyone who's worked in a data center knows what a load balancer is. They're the same thing, and I've had to have a lot of unnecessary conversations since. Because of this, I'm particularly sensitive to changing the name of something that everyone already knows for no good frickin' reason.

Where does that leave Fibre Channel? For the challenges that most organizations are facing in the data center, an upgrade to 16 Gbit FC would be a waste of money. Sure, if given the choice between 8 and 16 Gbit FC, I’d pick 16. But there’s no compelling reason for the vast majority of existing workloads to convert to 16 Gbit FC. It just doesn’t solve any of the problems that we’re having. If you’re building a new fabric, then yes, absolutely look at 16 Gbit. It’s better to have more than to have less of course, but the benefits of 16 Gbit probably won’t be felt for a few years in terms of throughput. It’s just not a pain point right now, but it will be in the future.

In fact, looking at most of the offerings from the various storage vendors (EMC, NetApp, etc.), they're mostly content to continue to offer 8 Gbit as their maximum speed. The same goes for server vendors (though there are 16 Gbit HBAs now available). I teach Cisco UCS, and most Cisco UCS installations plug into Brocade fabrics. Cisco UCS Fibre Channel ports only operate at a maximum of 8 Gbit, and I've never heard a complaint regarding the lack of 16 Gbit, especially since you can use multiple 8 Gbit uplinks to scale connectivity.

Jumbo Fibre Channel Frames

In the world of Ethernet, jumbo frames (technically any Ethernet frame larger than 1,500 bytes) are often a recommendation for certain workloads, such as iSCSI, vMotion, and backups, basically anything that doesn't communicate with the Internet because of MTU issues. And in fact, MTU mismatch is one of the biggest hurdles with Ethernet jumbo frames, since every device on a given LAN must have the correct MTU size set if you want them to successfully communicate with each other.

What’s less known is that you can also enable jumbo frames for Fibre Channel as well, and that doing so can have dramatic benefits with certain workloads. Best of all, since SANs are typically smaller in scope, ensuring every device has jumbo frames enabled is much easier.

Fibre Channel Frames

Fibre Channel frames typically have a maximum payload size of 2112 bytes, which with headers makes the MTU 2148 bytes. However, you can increase the payload up to 9000 bytes, which with headers is 9036 bytes. You can play with the payload size if you like, but just to make it easier I either do regular MTU (2148 bytes) or the max for most switches (9036). My perfunctory tests don't seem to indicate much benefit with anything in between.

Supported

Of course, not all Fibre Channel devices can handle jumbo frames. It's been part of the T10 standard since 2005 though, so most modern devices can (anything capable of 4 Gbit or more, for the most part). This includes switches from Cisco and Brocade, as well as HBAs from Qlogic and Emulex.

Configuration

The way you configure jumbo frames varies, of course, and I don't have lots of different types of equipment, so I'm just going to demonstrate on an MDS that I have access to.

Note: Do not do anything in a production environment until you’ve tested it in a dev environment. If you do otherwise, you’re an idiot.

If you go into an interface configuration, you’ll probably notice there’s no MTU option. That’s because we’re dealing with a fabric here, and the MTU is set per VSAN, not per port. Multi-VSAN ISLs (TE_Ports) will do multiple MTUs without any problem. Keep in mind, changing MTUs on an MDS/Nexus requires a reboot (I think mostly because the N_Ports need to do a new FLOGI). I’m not sure about Brocade, however.

Switch#1# config t 
Enter configuration commands, one per line.  End with CNTL/Z. 
Switch#1(config)# mtu size vsan 10 9036
VSAN 10 MTU changed. Reboot required.
Switch#1(config)# mtu size vsan 20 9036
VSAN 20 MTU changed. Reboot required.
Switch#1(config)# exit 
Switch#1# reload

Right now the maximum MTU size per the standard is 9036 bytes, though it depends on the vendor. Brocade, with its Gen 6 FC (32 Gbit), is proposing a maximum MTU size of around 15,000 bytes, though it's not set in stone yet. It'll be interesting to see how this all plays out, that's for sure.

Testing

How well does it work? I haven’t tested it for VDI or otherwise high-IOPs environments, but my suspicion is that it’s not going to help. Given the longer serialization, it would actually probably hurt performance. For other workloads, such as backups, they can make a dramatic difference. As a test, I did a couple of vSphere Storage vMotions, and I was surprised to see how fast it went.

Hosts were B200 M2 blades with 96 GB of RAM running ESXi 5.1. Fibre Channel cards were Cisco M81KRs, configured for 9036-byte frames (9,100 bytes on the FCoE interface since VICs are CNAs, so I had to add several dozen bytes for the FCoE headers/VNTAG, etc.). The FC switch was actually a Cisco 6248 Fabric Interconnect set for Fibre Channel switching mode. All tests were done on the A fabric. I initiated Storage vMotions of a 2 TB VMDK file. The VMDK wasn't receiving an active workload, so it was mostly idle. I ran the test 20 times for each size, and averaged the results.

[Chart: Storage vMotion throughput with standard vs. jumbo FC frames]

It’s just a quick and dirty test on a barely-active VMDK, but you can see that with 9000 Byte FC frames, it’s almost twice as fast.

Po-tay-to, Po-ta-to: Analogies and NPIV/NPV

In a recent post, I took a look at the Fibre Channel subjects of NPIV and NPV, both topics covered in the CCIE Data Center written exam (currently in beta, take yours now, $50!). The post generated a lot of comments. I mean, a lot. Over 50 so far (and still going).  An epic battle (although very unInternet-like in that it was very civil and respectful) brewed over how Fibre Channel compares to Ethernet/IP. The comments look like the aftermath of the battle of Wolf 359.

Captain, the analogy regarding squirrels and time travel didn’t survive

One camp, led by Erik Smith from EMC (who co-wrote the best Fibre Channel book I've seen so far, and it's free), compares WWPNs to IP addresses, and FCIDs to MAC addresses. Others, such as Ivan Pepelnjak and myself, compare WWPNs to MAC addresses, and FCIDs to IP addresses. There were many points and counter-points. Valid arguments were made supporting each position. Eventually, people agreed to disagree. So which one is right? They both are.

Wait, what? Two sides can’t be right, not on the Internet!

When comparing Fibre Channel to Ethernet/IP, it's important to remember that they are different. In fact, significantly different. The only purpose of relating Fibre Channel to Ethernet/IP is to introduce those who are familiar with Ethernet/IP to the world of Fibre Channel. Many (most? all?) people learn by building associations between known subjects (in our case Ethernet/IP) and lesser-known subjects (in this case Fibre Channel).

Of course, any association includes its inherent inaccuracies. We purposefully sacrifice some accuracy in order to attain relatability. Specific details and inaccuracies are glossed over. To some, introducing any inaccuracy is sacrilege. To me, it's being overly pedantic. Pedantic details are for the expert level. Using pedantic facts as an admonishment of an analogy misses the point entirely. With any analogy, there will always be inaccuracies, and there will always be many analogies to be made.

Personally, I still prefer the WWPN ~= MAC/FC_ID ~= IP approach, and will continue to use it when I teach. But the other approach I believe is completely valid as well. At that point, it’s just a matter of preference. Both roads lead to the same destination, and that is what’s really important.

Learning always happens in layers. Coat after coat is applied, increasing in accuracy and pedantic detail as you go along. Analogies are a very useful and effective tool for learning any subject.

NPV and NPIV

The CCIE Data Center blueprint makes mention of NPV and NPIV, and Cisco UCS also makes heavy use of both, topics that many may be unfamiliar with. This post (part of my CCIE Data Center prep series) will explain what they do, and how they're different.

(Another great post on NPV/NPIV is from Scott Lowe, and can be found here. This is a slightly different approach to the same information.)

NPIV and NPV are two of the most ill-named acronyms I've come across in IT, especially since they sound very similar yet do two fairly different things. NPIV is an industry-wide term and is short for N_Port ID Virtualization, while NPV is a Cisco-specific term and is short for N_Port Virtualization. Huh? Yeah, not only do they sound similar, but the names give very little indication as to what they do.

NPIV

First, let’s talk about NPIV. To understand NPIV, we need to look at what happens in a traditional Fibre Channel environment.

When a host plugs into a Fibre Channel switch, the host end is called an N_Port (Node Port), and the FC switch’s interface is called an F_Port (Fabric Port). The host has what’s known as a WWPN (World Wide Port Name, or pWWN), which is a 64-bit globally unique label very similar to a MAC address.

However, when a Fibre Channel host sends a Fibre Channel frame, that WWPN is nowhere in the header. Instead, the host does a fabric login and obtains an FCID (somewhat analogous to an IP address). The FCID is a 24-bit number, and when FC frames are sent in Fibre Channel, the FCID is what goes into the source and destination fields of the header.

Note that the first byte of the FCID is the domain ID of the FC switch that serviced the host's FLOGI.

In regular Fibre Channel operations, only one FCID is given per physical port. That’s it. It’s a 1 to 1 relationship.

But what if you have an ESXi host, for example, with virtual Fibre Channel interfaces? For those virtual Fibre Channel interfaces to complete a fabric login (FLOGI), they'll need their own FCIDs. Or, what if you don't want to have a Fibre Channel switch (such as an edge or blade FC switch) go full Fibre Channel switch?

NPIV lets a FC switch give out multiple FCIDs on a single port. Simple as that.

The magic of NPIV: one F_Port gives out multiple FCIDs (0x070100 to the ESXi host and 0x070200 and 0x070300 to the virtual machines)
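
On a Cisco MDS or Nexus, enabling this is about as simple as it gets (a sketch; the show command lists the fabric logins, so you can see multiple FCIDs hanging off one interface):

feature npiv
! each login on an NPIV-enabled F_Port gets its own FCID
show flogi database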

NPV: Engage Cloak!

NPV is typically used on edge devices, such as a ToR Fibre Channel switch or an FC switch installed in a blade chassis/infrastructure. What does it do? I'm gonna lay some Star Trek on you.

NPV is a cloaking device for a Fibre Channel switch.

Wait, did you just compare Fibre Channel to a Sci-Fi technology? 

How is NPV like a cloaking device? Essentially, an NPV enabled FC switch is hidden from the Fibre Channel fabric.

When a Fibre Channel switch joins a fabric, it’s assigned a Domain_ID, and participates in a number of fabric services. With this comes a bit of baggage, however. See, Fibre Channel isn’t just like Ethernet. A more accurate analogue to Fibre Channel would be Ethernet plus TCP/IP, plus DHCP, distributed 802.1X, etc. Lots of stuff is going on.

And partly because of that, switches from various vendors tend not to get along, at least without enabling some sort of interoperability mode. Without interoperability mode, you can't plug a Cisco MDS FC switch into, say, a Brocade FC switch. And if you do use interoperability mode and two different vendors in the same fabric, there are usually limitations imposed on both switches. Because of that, not many people build multi-vendor fabrics.

Easy enough. But what if you have a Cisco UCS deployment, or some other blade system, and your Fibre Channel switches are from Brocade? As much as your Cisco rep would love to sell you a brand new MDS-based fabric, there's a much easier way.

Engage the cloaking device.

(Note: NPV is a Cisco-specific term; other vendors have NPV functionality but call it something else, like Brocade's Access Gateway.) A switch in NPV mode is invisible to the Fibre Channel fabric. It doesn't participate in fabric services, doesn't get a domain ID, doesn't do fabric logins or assign FCIDs. For all intents and purposes, it's invisible to the fabric, i.e. cloaked. This simplifies deployments, saves on domain IDs, and lets you plug switches from one vendor into a switch from another vendor. Plug a Cisco UCS Fabric Interconnect into a Brocade FC switch? No problem. NPV. Got a Qlogic blade FC switch plugging into a Cisco MDS? No problem, run NPV on the Qlogic blade FC switch (and NPIV on the MDS).

The hosts perform fabric logins just like they normally would, but the NPV/cloaked switch passes the FLOGIs up to the NPIV-enabled port on the upstream switch. The FCID of each device bears the domain ID of the upstream switch (and the device appears directly attached to the upstream switch).

The NPV enabled switch just proxies the FLOGIs up to the upstream switch. No fuss, no muss. Different vendors can interoperate, we save domain IDs, and it’s typically easier to administer.
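
Here's a minimal sketch of both sides on Cisco gear (keep in mind that flipping a Nexus 5000 into NPV mode erases the configuration and reboots the switch, so plan accordingly):

! upstream ("core") switch: accept multiple fabric logins per port
feature npiv

! edge switch being cloaked (config erase and reload on a Nexus 5000)
feature npv

! on the NPV edge, verify the proxied logins and uplink status
show npv flogi-table
show npv status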

TL;DR: NPIV allows multiple FLOGIs (and multiple FCIDs issued) from a single port. NPV hides the FC switch from the fabric.
