UCS Manager 2.1: Case of the Missing Management IP Pools

My colleague Barry Gursky was playing with the new UCS Manager Emulator for the 2.1 release (which you can find on Cisco's web site; you'll need a CCO account, but no special contracts from what I can tell), and he noticed something: the Management IP Pool menu was missing.


UCS Manager 2.0 and prior had the Management IP Pool in the Admin Tab

It’s normally in the Admin tab, as shown above. But sure enough, it was gone.

You need pools of IP addresses to assign to service profiles and blades for out-of-band management (the Cisco Integrated Management Controller, which gives you KVM, IPMI, etc.). You could assign the IPs manually, but that would be rather annoying. Pools are way easier.

We searched and finally found that it had moved to the LAN tab.


Mystery solved.


Ethernet Congestion: Drop It or Pause It

Congestion happens. You try to put a 10 pound (soy-based vegan) ham in a 5 pound bag, and it just ain't gonna work. And in the topsy-turvy world of data center switches, what do we do to mitigate congestion? Most of the time, the answer can be found in the wisdom of Snoop Dogg/Lion.


Of course, when things are fine, the world of Ethernet is live and let live.


We’re fine. We’re all fine here now, thank you. How are you?

But when push comes to shove, frames get dropped. Either the buffer fills up and tail drop occurs, or QoS is configured and something like WRED (Weighted Random Early Detection) kicks in to proactively drop frames before tail drop can occur (mostly to keep TCP flows from synchronizing and causing spiky behavior).
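The WRED ramp is easy to sketch. This is a rough illustration in Python, not any switch's actual implementation; the threshold and max_p parameter names are my own:

```python
import random

def wred_drop(avg_queue: float, min_th: float, max_th: float, max_p: float) -> bool:
    """Rough sketch of a WRED drop decision (illustrative only).

    Below min_th nothing is dropped; between min_th and max_th the drop
    probability ramps linearly up to max_p; at or past max_th everything
    is dropped (the tail-drop region).
    """
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    drop_p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < drop_p
```

The randomized early drops are the whole point: by nudging a few TCP flows to back off before the queue is completely full, the switch avoids dropping everyone's traffic at once.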


The Bit Grim Reaper is way better than leaky buckets

Most congestion remediation methods involve one or more ways of dropping frames. The various protocols that run on top of Ethernet, such as IP and TCP/UDP, as well as higher-level protocols, were written with this lossy nature in mind. Protocols like TCP have retransmission and flow control, and higher-level protocols that employ UDP (such as voice) have other ways of dealing with stopped-up plumbing. But dropping it like it's hot isn't the only way to handle congestion in Ethernet:


Please Hammer, Don’t PAUSE ‘Em

Ethernet has the ability to employ flow control on physical interfaces, so that when congestion is about to occur, the receiving port can signal the sending port to stop sending for a period of time. This is referred to simply as 802.3x Ethernet flow control, or, as I like to call it, old-timey flow control, since it's been in Ethernet since about 1997. When a receive buffer is close to being full, the receiving side sends a PAUSE frame to the sending side.
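On the wire, a PAUSE frame is just a MAC Control frame sent to a well-known multicast address, carrying an opcode and a pause time measured in quanta of 512 bit times. A rough sketch of the layout in Python (the source MAC is a made-up example; padding and FCS are omitted):

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")   # MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PAUSE_OPCODE = 0x0001

def pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Build the header + payload of an 802.3x PAUSE frame (sketch).

    quanta is the pause time in units of 512 bit times; 0 means "resume".
    """
    return (PAUSE_DST + src_mac
            + struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, PAUSE_OPCODE, quanta))

# Ask the neighbor to stop sending for the maximum time:
frame = pause_frame(bytes.fromhex("00005e005301"), 0xFFFF)
```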


Too legit to drop

A wide variety of Ethernet devices support old-timey flow control, everything from data center switches to the USB dongle for my MacBook Air.


One of the drawbacks of old-timey flow control is that it pauses all traffic, regardless of any QoS considerations. This creates a condition referred to as HoL (Head of Line) blocking, and can cause higher priority (and latency sensitive) traffic to get delayed on account of lower priority traffic. To address this, a new type of flow control was created called 802.1Qbb PFC (Priority Flow Control).

PFC allows a receiving port to send PAUSE frames that affect only specific CoS lanes (0 through 7). Part of the 802.1Q standard is a 3-bit field that represents the Class of Service, giving us a total of 8 classes of service, though two are traditionally reserved for control plane traffic, leaving six to play with (which, by the way, is a lot simpler than the 6-bit DSCP field in IP). Utilizing PFC, some CoS values can be made lossless while others remain lossy.
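The PFC frame is a close cousin of the 802.3x PAUSE frame: instead of a single pause time, it carries a priority-enable vector plus eight per-CoS pause times. A rough sketch of the MAC Control payload (function and variable names are mine; MAC header and FCS omitted):

```python
import struct

PFC_OPCODE = 0x0101  # 802.1Qbb Priority Flow Control

def pfc_payload(pause_quanta: dict) -> bytes:
    """MAC Control payload for a PFC frame (illustrative sketch).

    pause_quanta maps CoS values 0-7 to pause times; only the listed
    classes get their bit set in the priority-enable vector, so the
    other CoS lanes keep flowing.
    """
    enable_vector = 0
    times = [0] * 8
    for cos, quanta in pause_quanta.items():
        enable_vector |= 1 << cos
        times[cos] = quanta
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)

# Pause only CoS 3 (e.g. the FCoE lane) for the maximum time:
payload = pfc_payload({3: 0xFFFF})
```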

Why would you want to pause traffic instead of drop traffic when congestion occurs?

Much of the IP traffic that traverses our data centers is OK with a bit of loss. It’s expected. Any protocol will have its performance degraded if packet loss is severe, but most traffic can take a bit of loss. And it’s not like pausing traffic will magically make congestion go away.

But there is some traffic that can benefit from losslessness, and some that just flat out requires it. FCoE (Fibre Channel over Ethernet), a favorite topic of mine, requires losslessness to operate. Fibre Channel is inherently a lossless protocol (by use of B2B, or Buffer-to-Buffer, credits), since the primary payload of an FC frame is SCSI. SCSI does not handle loss very well, so FC was engineered to be lossless. As such, priority flow control is one of the (several) requirements for a switch to be able to forward FCoE frames.

iSCSI is also a protocol that can benefit from pause congestion handling rather than dropping. Instead of encapsulating SCSI into FC frames, iSCSI encapsulates SCSI into TCP segments. This means that if a TCP segment is lost, it will be retransmitted. So at first glance it would seem that iSCSI can handle loss fine.

From a performance perspective, however, TCP suffers mightily when a segment is lost because of TCP's congestion management techniques. When a segment is lost, TCP backs off on its transmission rate (specifically, the number of segments in flight without acknowledgement), and then ramps back up again. By making the iSCSI traffic lossless, packets are slowed down during congestion, but the TCP congestion algorithm is never invoked. As a result, many iSCSI vendors recommend turning on old-timey flow control to keep packet loss to a minimum.
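That back-off-and-ramp-up sawtooth can be sketched with a toy AIMD model. This is a deliberately simplified illustration, not real TCP:

```python
def aimd(events, cwnd=1, ssthresh=64):
    """Toy model of TCP congestion control (AIMD, illustrative only).

    events is a list of "ack" / "loss" strings; on each ACK the window
    grows (doubling in slow start below ssthresh, +1 otherwise), and on
    each loss the window is cut in half. This sawtooth is what
    PAUSE-based lossless transport avoids for iSCSI.
    """
    for ev in events:
        if ev == "loss":
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh       # multiplicative decrease on loss
        elif cwnd < ssthresh:
            cwnd *= 2             # slow start
        else:
            cwnd += 1             # additive increase (congestion avoidance)
    return cwnd
```

A single loss halves the window, and the sender then has to claw its way back up; with flow control pausing the link instead, the window never collapses in the first place.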

However, many switches today can't actually do full losslessness. Take the venerable Catalyst 6500. It's a switch that's very common in data centers, and it is a frame-murdering machine.

The problem is that while the Catalyst 6500 supports old-timey flow control on physical ports (it doesn't support PFC), there's no mechanism that I'm aware of to prevent buffer overruns from one port to another inside the switch. Take the example of two ingress Gigabit Ethernet ports sending traffic to a single egress Gigabit Ethernet port, with both ingress ports running at line rate. There's no signaling (at least that I'm aware of; could be wrong) that would prevent the ingress ports from overwhelming the transmit buffer of the egress port.


Many frames enter, not all leave

This is like flying to Hawaii and not reserving a hotel room before you get on the plane. You could land and have no place to stay. Because there’s no way to ensure losslessness on a Catalyst 6500 (or many other types of switches from various vendors), the Catalyst 6500 is like Thunderdome. Many frames enter, not all leave.


Catalyst 6500 shown with a Sup2T

The new generation of DCB (Data Center Bridging) switches, however, uses a concept known as VoQs (Virtual Output Queues). With VoQs, the ingress port will not send a frame to the egress port unless there's room. If there isn't room, the frame stays in the ingress buffer until there is. If the ingress buffer fills up, the switch can signal the sending port it's connected to to PAUSE (either old-timey pause or PFC).
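As a sketch of the VoQ idea (all names and the queue depth are my own, illustrative choices):

```python
from collections import deque

class VoqIngressPort:
    """Toy model of Virtual Output Queues on one ingress port.

    Frames are queued per egress port, and a frame is only handed to an
    egress port when that port's buffer has room. A full ingress queue
    is the point where a real switch would send PAUSE/PFC upstream.
    """
    def __init__(self, egress_ports, voq_depth=4):
        self.voqs = {p: deque() for p in egress_ports}
        self.voq_depth = voq_depth

    def enqueue(self, egress, frame):
        if len(self.voqs[egress]) >= self.voq_depth:
            return "send PAUSE upstream"   # backpressure instead of a drop
        self.voqs[egress].append(frame)
        return "queued"

    def service(self, egress, egress_buffer_free):
        # Only forward when the egress side has room: no internal drops.
        if egress_buffer_free and self.voqs[egress]:
            return self.voqs[egress].popleft()
        return None
```

The key property is that congestion propagates backwards as backpressure rather than forwards as dropped frames.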

This is a technique that's been in use in Fibre Channel switches from both Brocade and Cisco (as well as others) for a while now, and it's now making its way into DCB Ethernet switches from various vendors. Cisco's Nexus line, for example, makes use of VoQs, and so do Brocade's VCS switches. Some type of lossless capability between internal ports is required in order to be a DCB switch, since FCoE requires losslessness.

DCB switches require lossless backplanes/internal fabrics, support for PFC, ETS (Enhanced Transmission Selection, a way to reserve bandwidth on various CoS lanes), and DCBx (a way to communicate these capabilities to adjacent switches). This makes them capable of a lot of cool stuff that non-DCB switches can’t do, such as losslessness.

One thing to keep in mind, however, is when Layer 3 comes into play. My guess is that even in a DCB switch that can do Layer 3, losslessness can’t be extended beyond a Layer 2 boundary. That’s not an issue with FCoE, since it’s only Layer 2, but iSCSI can be routed.

#CCNADC CCNA Data Center (my short journey)

On Monday, I think it was, Cisco announced the completion of the Data Center track: the CCNA Data Center and CCNP Data Center certifications, with tests available immediately. And you know me, I live in Pearson VUE test centers, and I'm a data center nut, so I signed that shit right up.

I’m now CCNA Data Center certified.

CCNA Data Center in less than a week of it coming out

Took the first test (640-911) on Wednesday 11/21/12 (first day I could schedule) and passed with an 830. I booked the next available date (today 11/24/12) for the 640-916 test and passed, squeaking by with a 798 (797 required).

How I felt when I saw that I passed by one point

I found 640-911 tougher, and thought I got more answers wrong. 640-916 seemed easier, since it’s more of the topics I teach on a regular basis (UCS, ACE, Fibre Channel). But for some reason I scored higher on the 640-911. Go figure.

I took them both blind, without studying or reading up (and no, no “study guides”). I didn’t even look at the exam topics for 640-911, and I barely glanced at them for 640-916. Generally, the questions were all data center specific, and covered topics you’d find in the various non-track (specialization) data center certs from Cisco. Also, I’ve gotten the question “Is there WAAS on the CCNA Data Center?” It’s not in the exam topics, and I don’t think I’m violating the confidentiality agreement by confirming the exam topics list by saying no, there’s no WAAS. Thankfully, because ugh WAAS.

So why take the trouble for a CCNA Data Center when I’m working on the CCIE Data Center? The reason is the CCNP Data Center. To get the CCNP Data Center, I need the CCNA Data Center. My goal is CCIE Data Center, but I’m impatient. There are very limited seats for the CCIE Data Center because right now, I think there’s only a single pod for the entire world (I think CCIE Wireless is like that too, or at least it was when it started out). Thus it’ll be a while before I get it (I’m guessing Summer 2013), even assuming I make it on the first try (which, odds are, I won’t). My highest Cisco certification is a CCSI, which is the teaching certification. I don’t have an NP-level at all, having dropped pursuit of my CCNP R&S a while ago in pursuit of other certs.

So by January I hope to have the CCNP Data Center hammered out. I’ve already got one of the tests done (DCUCI from like, ages ago), and I can’t recall if I did DCUCD or not. I need DCUFI and DCUFD, both of which I need to get anyway. Plus one of the troubleshooting (DCUFTS/DCUCTS) and I’ll be a CCNP Data Center.

Edit (11/25/12): Turns out my DCUCI pass won’t cut it. It’s an older version of the test, and they need either the V4 or the V5. So I’m back to square zero. Also, I got the required tests wrong:

You need to pass only four exams.

You have to pass DCUCI and DCUFI (V4 or V5), and you can either do the two design exams (DCUCD and DCUFD) or do the two troubleshooting exams (DCUCT and DCUFT). In all likelihood, I’ll end up doing all 6 tests because I’m a Cisco instructor and I need certs like woah, but I think I’ll go design first.

Overall, I'm very pleased that Cisco now has a full data center track. They've had several specializations, but unless you're an instructor like me or have a partner-level requirement, those certs are pretty much worthless career-wise. They have zero brand recognition. For example, if I told you I'm a Cisco Data Center Application Services Support Specialist, would you care? Probably not. You've never heard of it, so you have no idea how difficult/easy it is. That's the benefit of a CCIE, since it has probably the best brand recognition of any certification in any genre of IT. Whether you're a Linux admin, Microsoft developer, or Juniper router jockey, you're likely aware of the CCIE (and the difficulty associated with it). CCNP is not too far down that list either.

So, onward to the CCNP Data Center.

Latest Rumors: Cisco to license/purchase NetScaler?

I feel like I’ve become the TMZ of Cisco load balancer gossip, and as much as I’d like to stop, I’ve got some more rumors for y’all.

Cisco! Cisco! Cisco! Is it true you’re having a love child with Citrix?

I’ve heard from a number of unofficial non-Cisco sources that Cisco is in talks to do something with NetScaler, and something will be announced soon. Some of the stock analysis sites (which first reported the impending death of ACE) have picked up the rumors, and so has Network World.

The rumors have been anything from Cisco buying NetScaler from Citrix to an OEM agreement, to a sales agreement where Cisco sales sells Citrix as part of their data center offerings. So we’ll see what happens.


As The Datacenter Turns…

This whole ACE thing has had more twists and turns than a daytime soap opera, or perhaps a vampire franchise on the CW aimed at teens and young adults. And things keep getting more interesting. Greg Ferro recently talked to a Cisco official about the ACE, a discussion I believe was started by a comment over at Brad Casemore's blog by a Cisco representative insisting that no, ACE is not dead. Meanwhile, the folks over at Cisco WAAS are very eager to let you know that they have a pulse and aren't going anywhere. This seemed necessary because WAAS has long been associated with ACE (I think they shared a business unit at one point) and has been eyed as another potential Cisco market exit. Plus, it didn't help that the WAAS group has recently been rumored to have had massive layoffs.

Calculon, on learning that the ACE, his fiancée, is on life support and is actually his sister. Also, double-amnesia.

With the ACE in abandoned-but-not-discontinued limbo, speculation is rampant about Cisco's next move. I think they're still working out what to do next, and I think the ACE discontinuation got outed quicker than they expected (again, no inside knowledge here). They could do what Juniper did: drop out entirely and partner with vendors that have a better product. The obvious partnership would be F5, assuming Cisco could swallow its pride. A10 is another, and also a purchase target since they're privately held, though I think neither is likely. There are a lot of Riverbed Stingray fans showing up in the comments sections of my articles and others', but since Cisco is still actively competing with Riverbed in the WOC space, that seems especially unlikely. They could end up buying a part of another company, such as Citrix's NetScaler business. Radware could also be purchased, but they have near-zero footprint in the US, and not a great reputation. We'll have to wait and see; I'm sure there will be more twists and turns.

Requiem for the ACE

Ah, the Cisco ACE. As we mourn our fallen product, I'll take a moment to reflect on this development, as well as what the future holds for Cisco and load balancing/ADC. First off, let me state that I have no inside knowledge of Cisco's plans in this regard. While I teach Cisco ACE courses for Firefly and develop Firefly's courseware for both ACE products and the bootcamp material for the CCIE Data Center, I'm not an employee of Cisco, so this is pure speculation.

Also, it should be made clear that Cisco has not EOL'd (End of Life) or even EOS'd (End of Sale) the ACE product, and in a post on the CCIE Data Center group, Walid Issa, the project manager for CCIE Data Center, made a statement reiterating this. And just as I was about to publish this post, there's a great post by Brad Casemore also reflecting on the ACE, with an interesting comment from Steven Schuchart of Cisco (analyst relations?) claiming that ACE is, in fact, not dead.

However, there was a statement Cisco sent to CRN confirming the rumor, and my conversations with people inside Cisco have confirmed that yes, the ACE is dead. Or at least, that's the understanding of Cisco employees in several areas. The word I'm getting is that the ACE will be bug-fixed and security-fixed, but further development will halt. The ACE may not officially be EOL/EOS, but for all intents and purposes, and until I hear otherwise, it's a dead-end product.

The news of ACE’s probable demise was kind of like a red-shirt getting killed. We all knew it was coming, and you’re not going to see a Spock-like funeral, either. 

We do know one thing: For now at least, the ACE 4710 appliance is staying inside the CCIE Data Center exam. Presumably in the written (I’ve yet to sit the non-beta written) as well as in the lab. Though it seems certain now that the next iteration (2.0) of the CCIE Data Center will be ACE-less.

Now let's take a look down memory lane, at the Ghosts of Load Balancers Past…

Ghosts of Load Balancers Past

As many are aware, Cisco has had a long yet… imperfect relationship with load balancing. This is somewhat ironic, considering that Cisco was, in fact, the very first vendor to bring a load balancer to market. In 1996, Cisco released the LocalDirector, the world's first load balancer. The product itself sprang from Cisco's purchase of Network Translation Incorporated in 1996, which also brought about the PIX firewall platform.

The LocalDirector did relatively well in the market, at least at first. It addressed a growing need for scaling out websites (rather than the more expensive, less resilient method of scaling up). The LocalDirector had a bit of a cult following, especially among the routing and switching crowd, which I suspect had a lot to do with its relatively simple functionality: for most of its product life, the LocalDirector was just a simple Layer 4 device, and it only moved up the stack in the last few years of its life. While other vendors went higher up the stack with Layer 7 functionality, the LocalDirector stayed Layer 4 (until near the end, when it got cookie-based persistence). In terms of functionality and performance, however, other vendors were able to surpass the LocalDirector pretty quickly.

The most important feature that the other vendors developed in the late 90s was arguably cookie persistence. (The LocalDirector didn't get this feature until about 2001, if I recall correctly.) This allowed the load balancer to treat multiple people coming from the same IP address as separate users. Without cookie-based persistence, load balancers could only do persistence based on an IP address, and were thus susceptible to the AOL megaproxy problem (you could have thousands of individual users coming from a single IP address). There was more than one client in the 1999-2000 time period where I had to yank out a LocalDirector and put in a Layer 7-capable device because of AOL.
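The difference is easy to sketch. Here's a toy Python illustration of cookie-based persistence (the round-robin and all names are my own, not any vendor's implementation); note that two distinct sessions behind one megaproxy IP still get balanced:

```python
import itertools

class CookiePersistence:
    """Toy cookie-based persistence table (illustrative names only)."""
    def __init__(self, servers):
        self.rotation = itertools.cycle(servers)  # trivial round-robin
        self.sessions = {}                        # cookie -> server

    def pick_server(self, cookie):
        # New session cookies get the next server; known cookies stick.
        if cookie not in self.sessions:
            self.sessions[cookie] = next(self.rotation)
        return self.sessions[cookie]

lb = CookiePersistence(["web1", "web2"])
a = lb.pick_server("session-aaa")  # new session -> web1
b = lb.pick_server("session-bbb")  # new session -> web2, even from the same IP
```

With source-IP persistence, both lookups would have keyed on the same proxy address and landed on the same server.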

Cookie persistence is a tough habit to break

At some point, Cisco came to terms with the fact that the LocalDirector was pretty far behind and must have concluded it was an evolutionary dead end, so it paid $6.7 billion (with a B) to buy ArrowPoint, a load balancing company that had a much better product than the LocalDirector. That product became the Cisco CSS, and for a short time Cisco was on par with the offerings from other vendors. Unfortunately, as with the LocalDirector, development and innovation seemed to stop after the purchase, and the CSS was forever a product frozen in the year 2000. Other vendors innovated (especially F5), and as time went on the CSS won fewer and fewer deals. By 2007, the CSS was largely a joke in load balancing circles. Many sites were happily running the CSS, of course (and some still are today), but feature-wise it was getting its ass handed to it by the competition.

The next load balancer Cisco came up with had a very short lifecycle. The Cisco CSM (Content Switching Module), a load balancing module for the Catalyst 6500 series, didn't last very long and, as far as I can remember, never had a significant install base. I don't recall ever using one, and know it only through legend (as being not very good). It was quickly replaced by Cisco's next load balancing product.

And that brings us to the Cisco ACE. Available in two iterations, the Service Module and the ACE 4710 Appliance, it looked like Cisco might have learned from its mistakes when it released the ACE. Out of the gate, it was a bit more of a modern load balancer, offering features and capabilities that the CSS lacked, such as a three-tiered VIP configuration mechanism (real servers, server farms, and VIPs, which made URL rules much easier) and the ability to insert the client's true source IP address in an HTTP header in SNAT situations. The latter was a critical function that the CSS never had.

But the ACE certainly had its downsides. The biggest issue was that the ACE could never go toe-to-toe with the other big names in load balancing in terms of features. F5 and NetScaler, as well as A10, Radware, and others, always had a far richer feature set than the ACE. It was, as Greg Ferro said, a moderately competent load balancer, in that it did what it was supposed to do, but it lacked the features the other guys had.

The number one feature that kept the ACE from eating at the big-boy table was the lack of an answer to F5's iRules. iRules give a huge amount of control over how to load balance and manipulate traffic. You can use them to create a login page on the F5 that authenticates against AD (without ever touching a web server), rewrite http:// URLs to https:// (very useful in certain SSL termination setups), and even calculate Pi every time someone hits a web page. Many of the other high-end vendors have something similar, but F5's iRules is the king of the hill.
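iRules themselves are written in Tcl, but the http://-to-https:// rewrite mentioned above is easy to illustrate. A rough Python sketch of the idea (names are mine, not F5's API):

```python
def rewrite_redirect(headers: dict) -> dict:
    """Rewrite an http:// Location header to https:// (illustrative sketch).

    Mimics the SSL-termination fixup a traffic-rules engine would do when
    a back-end server redirects the client to its own plain-http URL.
    """
    loc = headers.get("Location", "")
    if loc.startswith("http://"):
        headers["Location"] = "https://" + loc[len("http://"):]
    return headers

out = rewrite_redirect({"Location": "http://example.com/login"})
```

The point isn't this one trick; it's that a programmable rules engine can do arbitrary header and content manipulation like this inline, which the ACE could not.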

In contrast, the ACE can evaluate existing HTTP headers, and can manipulate headers to a certain extent, but the ACE cannot do anything with HTTP content. There’s more than one installation where I had to replace the ACE with another load balancer because of that issue.

The ACE never had a FIPS-compliant SSL implementation either, which prevented the ACE from being in a lot of deals, especially with government and financial institutions. ACE was very late to the game with OCSP support and IPv6 (both were part of the 5.0 release in 2011), and the ACE10 and ACE20 Service Modules will never, ever be able to do IPv6. You’d have to upgrade to the ACE30 Module to do IPv6, though right now you’d be better off with another vendor.

For some reason, Cisco decided to use the MQC (Modular QoS CLI) as the configuration framework for the ACE. This meant that configuring a VIP required setting up class-maps, policy-maps, and service-policies in addition to real servers and server farms. This was far more complicated than the configuration on most of the competition, despite the fact that the ACE had less functionality. If you weren't at a CCNP level or higher, the MQC could be maddening. (On the upside, if you mastered it on the ACE, QoS was a lot easier to learn, as was the case for me.)

If the CLI was too daunting, there was always the GUI on the ACE 4710 Appliance and/or the ACE Network Manager (ANM), which was a separate user interface that ran on Red Hat and later became its own OVA-based virtual appliance. The GUI in the beginning wasn't very good, and the ACE Service Modules (ACE10, ACE20, and now the ACE30) lacked a built-in GUI. Also, when it hits the fan, the CLI is the best way to quickly diagnose an issue. If you weren't fluent in the MQC and the ACE's rather esoteric use of it, it was tough to troubleshoot.

There was also a brief period of time when Cisco was selling the ACE XML Gateway, a product obtained through the purchase of Reactivity in 2007, which provided some (but not nearly all) of the features the ACE lacked. It still couldn’t do something like iRules, but it did have Web Application Firewall abilities, FIPS compliance, and could do some interesting XML validation and other security. Of course, that product was short lived as well, and Cisco pulled the plug in 2010.

Despite these shortcomings, the ACE was a decent load balancer. The ACE Service Module was popular for the Catalyst 6500 series, and could push up to 16 Gbps of traffic, making it suitable for just about any site. The ACE 4710 Appliance was also a popular option at a lower price point, and could push 4 Gbps (although it only had four 1 Gbit ports, never 10 Gbit). Those who were comfortable with the ACE enjoyed it, and there are thousands of happy ACE customers with deployments.

But "decent" isn't good enough in the highly competitive load balancing/ADC market. Industry juggernauts like F5 and scrappy startups like A10 smoke the ACE in terms of features, and unless a shop is going all-Cisco, the ACE almost never wins in a bake-off. I even know of more than one occasion where Cisco had to essentially invite itself to a bake-off (and in those cases, it never won). The ACE's market share has continued to drop since its release, and from what I've heard it's in the low teens in terms of percentage, while F5 has about 50%.

In short, the ACE was the knife that Cisco brought to the gunfight. And F5 had a machine gun.

I’d thought for years that Cisco might just up and decide to drop the ACE. Even with the marketing might and sales channels of Cisco, the ACE could never hope to usurp F5 with the feature set it had. Cisco didn’t seem committed to developing new features, and it fell further behind.

Then Cisco included ACE in the CCIE Data Center blueprint, so I figured they were sticking with it for the long haul. Then the CRN article came out, and surprised everybody (including many in Cisco from what I understand).

So now the big question is whether or not Cisco is bowing out of load balancing entirely, or coming out with something new. We’re certainly getting conflicting information out of Cisco.

I think both are possible. Cisco has made a commitment (which they seem to be living up to) to drop businesses and products in which they aren't successful. While Cisco has shipped tens of thousands of load balancing units since the first LocalDirector was unboxed, except for the beginning they've never led the market. Somewhere in the early 2000s, that title came to belong almost exclusively to F5.

For a company as broad as Cisco is, load balancing as a technology is especially tough to sell and support. It takes a particular skill set that doesn’t relate fully to Cisco’s traditional routing and switching strengths, as load balancing sits in two distinct worlds: Server/app development, and networking. With companies like F5, A10, Citrix, and Radware, it’s all they do, and every SE they have knows their products forwards and backwards.

The hardware platform that the ACE is based on (Cavium Octeon network processors) is, I think, one of the reasons the ACE hasn't caught up in terms of features. To do things like iRules, you need fast, generalized processors, and most of the vendors have gone with x86 cores, and lots of them. Vendors can use pure x86 power to do both Layer 4 and Layer 7 load balancing, or, like F5 and A10, incorporate FPGAs to hardware-assist the Layer 4 load balancing and distribute flows to x86 cores for the more advanced Layer 7 processing.

The Cavium network processors don’t have the horsepower to handle the advanced Layer 7 functionality, and the ACE Modules don’t have x86 at all. The ACE 4710 Appliance has an x86 core, but it’s several generations back (it’s seriously a single Pentium 4 with one core). As Greg Ferro mentioned, they could be transitioning completely away from that dead-end hardware platform, and going all virtualized x86. That would make a lot more sense, and would allow Cisco to add features that it desperately needs.

But for now, I’m treating the ACE as dead.

Cisco ACE 101: Tony’s 5 Steps to a Happy VIP

I've been teaching Cisco ACE for over four years now, and I've developed a quick trick/checklist to teach students the minimum configuration needed to get a virtual service (VIP) up and running. And since the CCIE Data Center lab will soon be upon us, I'm sharing this little trick with you. I call it "Tony's 5 Steps to a Happy VIP". And here it is:

Step #1: ACL
Step #2: class-map: Defines the VIP address and port
Step #3: policy-map: Which server farm(s) do we send traffic to
Step #4: policy-map: Multi-match, will pair every class-map to its policy-map
Step #5: service-policy: Apply step #4 to the VLAN interface

Using that checklist, you can quickly troubleshoot/understand most ACE configurations. So what does that list mean?

First off, let's define what a VIP even is: in load balancing terms, it refers to an IP and TCP or UDP port combination. In that regard, it's a bit of a misnomer, since VIP is an acronym for "Virtual IP", which implies only an IP address. Depending on the vendor, a VIP may be called a "Virtual Server" or "Virtual Service", though it's commonly referred to simply as a "VIP". It's whatever you point the firehose of network traffic at.

I’m not anti-GUI (in fact, I think the GUI is increasingly necessary in the network world), but in the case of the ACE (and CCIE DC) you’re going to want to use the CLI. It’s just faster, and you’re going to feel the need for speed in that 8 hour window. Also, when things go wrong, the CLI (and config file) is going to allow you to troubleshoot much more quickly than the GUI in the case of the ACE.

The CLI for the Cisco ACE can be a little overwhelming. For some reason, Cisco decided to use the Modular QoS CLI (MQC) configuration framework. To me, it seems overly complicated. Other vendors have CLIs that tend to make a lot more sense, or are at least a lot easier to parse with your eyes. If you're familiar with class-maps, policy-maps, and service-policies, the transition to the ACE CLI won't be all that difficult; it works very similarly to setting up QoS. However, if you're new to MQC, it's going to be a bit of a bumpy ride.

How I felt learning MQC for the first time

The Configuration

Here is a very basic configuration for an ACE:

access-list ANYANY line 10 extended permit ip any any 

rserver host SERVER1 ip address 192.168.10.100
  inservice 
rserver host SERVER2 ip address 192.168.10.101 
  inservice 
rserver host SERVER3 ip address 192.168.10.102 
  inservice

serverfarm host SERVERFARM1
  rserver SERVER1
    inservice
  rserver SERVER2
    inservice
  rserver SERVER3
    inservice 

class-map match-all VIP1-80 
  2 match virtual-address 192.168.1.200 tcp eq http

class-map match-all VIP1-443
  2 match virtual-address 192.168.1.200 tcp eq https

policy-map type loadbalance first-match VIP1-POLICY
  class class-default 
    serverfarm SERVERFARM1 

policy-map multi-match CLIENT-VIPS 
  class VIP1-80
    loadbalance vip inservice 
    loadbalance policy VIP1-POLICY
  class VIP1-443
    loadbalance vip inservice
    loadbalance policy VIP1-POLICY

interface vlan 200 
  description Client-facing interface 
  ip address 192.168.1.10 255.255.255.0 
  access-group input ANYANY
  service-policy input CLIENT-VIPS 
  no shutdown
interface vlan 100
  description Server VLAN
  ip address 192.168.10.1 255.255.255.0
  no shutdown

Step #1: ACL

It's not necessarily part of the VIP setup, but you do need to have an ACL rule in place before a VIP will work. The reason is that the ACE, unlike most load balancers, is deny-all by default. Without an ACL, you can't pass any traffic through the ACE. (However, ACLs have no effect on traffic to the ACE itself for management.)

Many an ACE configuration problem has been caused by forgetting to put an ACL rule in. My recommendation? Even if you plan on using specific ACLs, start out with an “any/any” rule.

access-list ANYANY line 10 extended permit ip any any

And don't forget to apply it to the interface facing the client (the outside VLAN).

interface vlan 200 
  description Client-facing interface 
  ip address 192.168.1.10 255.255.255.0 
  access-group input ANYANY 
  service-policy input CLIENT-VIPS 
  no shutdown

Once you get everything working, you can create a more locked-down ACL if required, although most deployments don’t bother, since there’s likely a firewall in place anyway (even Cisco’s example configurations typically have only an any-any rule).

If you do use a more specific ACL, it’s often a good idea to switch back to any-any for troubleshooting. Put the more specific rule in place only when you’re sure your config works.

Step #2: class-map (VIP declaration)

The next step is to create a class-map that will catch traffic destined for the VIP. You should always match an IP address as well as a single TCP or UDP port. I’ve seen configurations that match any TCP/UDP port on a specific IP address, and that’s usually a really, really bad idea.

class-map match-all VIP1-80
  2 match virtual-address 192.168.1.200 tcp eq http

This defines a VIP with an address of 192.168.1.200 on port http (port 80). Even if you’re setting up multiple ports on the same IP address, such as 80 and 443, use a separate class-map for each and configure them separately.

Step #3: policy-map (what do we do with traffic hitting the VIP)

Here is where the VIP is defined as either a Layer 4 VIP or a Layer 7 VIP. The example below is a simple Layer 4 VIP (the ACE isn’t aware of anything that happens above Layer 4). You can get a lot fancier in this section, such as sending certain matched traffic to one server farm and the rest to another, or setting up persistence. Again, this is the most basic configuration.

policy-map type loadbalance first-match VIP1-POLICY
  class class-default <-- This matches everything
    serverfarm SERVERFARM1 <-- And sends it all right here

Step #4: policy-map (the round-up: pairs each VIP with a decision process, joining all the pairs into a single statement)

You will typically have multiple Step 2’s and Step 3’s, but they exist as independent declarations, so you need something to round them all up and join them in a single place. Most configurations have only one multi-match policy-map. This multi-match is where you marry a Step 2 class-map to a Step 3 policy-map. In this example, two separate class-maps use the same policy-map (which is fine).

policy-map multi-match CLIENT-VIPS 
  class VIP1-80 <-- This VIP...
    loadbalance vip inservice 
    loadbalance policy VIP1-POLICY <-- ...sends traffic to this policy
  class VIP1-443 <-- This VIP...
    loadbalance vip inservice
    loadbalance policy VIP1-POLICY <-- ...sends traffic to this policy

Step #5: service-policy (apply the round-up to the client-facing interface)

Finally, for any of this to work, you’ll need to apply the Step 4 multi-match policy-map to a VLAN interface, the one that faces the client.
interface vlan 200
  description Client-facing interface
  ip address 192.168.1.10 255.255.255.0
  access-group input ANYANY <-- Step 1's ACL is applied
  service-policy input CLIENT-VIPS <-- Step 5's multi-match policy-map is applied
  no shutdown <-- Don't forget the no shut!

Hope this helps demystify ACE configuration. A short checklist can really save time, especially in a time-constrained environment like a CCIE lab.

Is The OS Relevant Anymore?

I started out my career as a condescending Unix administrator, and while I’m not a Unix administrator anymore, I’m still quite condescending. In the past, I’ve run data centers based on Linux, FreeBSD, Solaris, as well as administered Windows boxes, OpenBSD and NetBSD, and even NeXTSTEP (best desktop in the 90s).

Dilbert.com

In my role as a network administrator (and network instructor), this experience has become invaluable. Why? One reason is that most networking devices these days run an open-source-based operating system under the hood.

And recently, I got into a discussion on Twitter (OK, kind of a twitter fight, but it’s all good with the other party) about the underlying operating systems for these network devices, and their relevance. My position? The underlying OS is mostly irrelevant.

First of all, the term OS can mean a great many things. In the context of this post, I’m referring only to the underlying OS: the kernel, libraries, command line, drivers, networking stack, and file system. I’m not referring to the GUI stack (GNOME, KDE, or Unity for the Unixes, Mac OS X’s GUI stack, Win32 for Windows) or other stacks such as a web application stack like LAMP (Linux, Apache, MySQL, and PHP).

Most routers and MLSes (multi-layer switches, switches that can route as fast as they can switch) run an open source operating system as their control plane. The biggest exception is of course Cisco’s IOS, which is proprietary as hell. But IOS has reached its limits, and Cisco’s NX-OS, which runs on Cisco’s next-gen Nexus switches, is based on Linux. Arista famously runs Linux (Fedora Core) and doesn’t hide it from users (which allows it to do some really cool things). Juniper’s Junos is based on FreeBSD.

In almost every router and multi-layer switch, however, the operating system doesn’t forward any packets. That is all handled in specialized silicon. The operating system is only responsible for the control plane, running processes like OSPF, spanning-tree, BGP, and other services to decide on a set of rules for forwarding incoming packets and frames. These rules, known as the FIB (Forwarding Information Base), are programmed into hardware forwarding engines (such as the much-used Broadcom Trident chipset). Those forwarding engines do the actual switching/routing. Packets don’t hit the general x86 CPU; they’re all handled in hardware. The control plane (running as various coordinated processes on top of one of these open source operating systems) tells the hardware how to handle packets.
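To make that division of labor concrete, here’s a toy sketch in Python (purely illustrative, not any vendor’s actual code): the control plane computes prefix-to-next-hop entries, and the “hardware” — here a plain dict standing in for a TCAM — just performs longest-prefix-match lookups against them.

```python
import ipaddress

# Toy FIB: the control plane's routing processes would compute these
# prefix -> next-hop entries and program them into the forwarding silicon.
fib = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",  # default route
}

def lookup(dst):
    """Longest-prefix match, as the forwarding engine would do it."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return fib[best]

print(lookup("10.1.2.3"))    # eth2 (the more-specific /16 wins)
print(lookup("10.200.1.1"))  # eth1
print(lookup("8.8.8.8"))     # eth0 (default route)
```

In a real switch the lookup happens in TCAM/ASIC at line rate; the general CPU only ever runs the code that fills the table.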

So the only thing the operating system does (other than handle the occasional punted packet) is tell the hardware how to deal with traffic the general CPU will never see. It has to be this way, because x86 hardware can’t scale nearly as well as special-purpose silicon can, especially when you factor in power and cooling. Latency is far lower as well.

In fact, hardware-wise, most vendors (Juniper, Arista, Huawei, Alcatel-Lucent, etc.) have been using the exact same chip in their latest switches. So the differentiation isn’t the silicon. Is it the underlying operating system? No; it makes little difference to the end user. The OS is instead a (mostly) invisible platform upon which the services (CLI, APIs, routing protocols, SDN hooks, etc.) are built. Networking vendors are in the middle of a transition into software developers (and motherboard gluers).

All you need to create a 10 Gigabit Switch

The biggest holdout on the open source front is, again, Cisco’s IOS, but even for Cisco the future appears to be NX-OS running on all of the Nexus switches, and that’s based on Linux.

Let’s also take a look at networking devices where the underlying OS may actually touch the data plane, a genre with which I’m very well acquainted: load balancers (and no, I’m not calling them Application Delivery Controllers).

F5’s venerable BIG-IPs were initially based on BSDI (a years-dead BSD) and then switched to Linux. CoyotePoint was based on FreeBSD and is now based on NetBSD. Cisco’s ACE is based on Linux (while Cisco’s shitty CSS runs the proprietary VxWorks, though it’s not shitty because of VxWorks). Most of the other vendors are based on Linux. However, the baseline operating system makes very little difference these days.

Most load balancers have SSL offload (to push the CPU-intensive asymmetric encryption onto a specialized processor), which is especially important as we move to 2048-bit SSL certificates. Some load balancers also have Layer 2/3/4 silicon (either ASICs or FPGAs, which are essentially flexible ASICs) to help forward traffic, and hit general CPUs (usually x86) only for the Layer 7 parsing. So does the operating system touch the traffic going through a load balancer? Sometimes, not always, and, well, it depends.

So with Cisco on Linux and Juniper on FreeBSD, would either company benefit from switching to a different OS? Does either enjoy a competitive advantage from having chosen its respective platform? No. In fact, switching platforms would likely be a colossal waste of time and resources. The underlying operating systems just provide common services on which to run the networking services that program the line cards and silicon.

When I brought up Arista and its Fedora Core-based control plane, which Arista opens up to customers, here’s how someone (a BSD fan) described Fedora: “inconsistent and convoluted”, “building/testing/development as painful”, and “hasn’t a stable file system after 10 years”.

Reading that, you’d think dealing with Fedora is a nightmare. That’s not remotely true. Some of the statement is exaggeration (you could find specific examples to support it for any operating system) and some of it is fantasy. No stable file system? Linux has had several solid file systems, including ext2, ext3, ext4, and XFS, for quite a while.

In a general sense, I think the operating system is less relevant than it used to be. Take OpenBSD for example. Its well-deserved reputation for security is legendary. Still, would there be any advantage today to running your web application stack on OpenBSD? Would your site be any more secure? Probably not. Not because OpenBSD is any less secure today than it used to be, quite the opposite. It’s because the attack vectors have changed. The attacks hit the web stack and other pieces rather than the underlying operating system. Local exploits aren’t a big deal because few systems let anyone but a handful of users log in anyway. The biggest attacks lately have come either from SQL injection or from attacks on desktop operating systems (mostly Windows, but recently Apple as well).

If you’re going to expose a server directly to the Internet on a DMZ or (gasp) without any firewall at all, OpenBSD is an attractive choice. But that doesn’t happen much anymore. Servers are typically protected by layers of firewalls, IPS/IDS, and load balancers.

Would Android be more or less successful if Google switched its underpinnings from Linux to one of the BSDs? Would it be more secure if they switched to OpenBSD? No, and it would be an entirely wasted effort. It’s not likely any of OpenBSD’s security benefits would translate into the Dalvik stack that is the heart of Android.

As much as fanboys/girls don’t want to admit it, the number one reason people choose an OS is likely familiarity. I tend to go with Linux (although I have FreeBSD- and OpenBSD-based VMs running in my infrastructure) because I’m more familiar with it. For my day-to-day uses, Linux or FreeBSD would both work; there’s no competitive advantage either has over the other in that regard. Linux outright wins in some cases, such as virtualization (the BSDs have lagged badly in that technology, though they run fine as guests), but for most things it doesn’t matter. I use FreeNAS, which is FreeBSD-based, but I don’t care what it runs; I’d use FreeNAS if it were based on Linux, OpenBSD, or whatever. (Because it’s based on FreeBSD, FreeNAS does get ZFS, which for some uses is better than any of the Linux file systems, although I don’t run FreeNAS’s ZFS since it’s missing encryption.)

So fanboy/girlism aside, for the most part today, the choice of operating system isn’t the huge deal it may once have been. People succeed using Linux, FreeBSD, OpenBSD, NetBSD, Windows, and more as the basis for their platforms (web stacks, mobile stacks, network device OSes, etc.).

Po-tay-to, Po-ta-to: Analogies and NPIV/NPV

In a recent post, I took a look at the Fibre Channel subjects of NPIV and NPV, both topics covered in the CCIE Data Center written exam (currently in beta, take yours now, $50!). The post generated a lot of comments. I mean, a lot. Over 50 so far (and still going).  An epic battle (although very unInternet-like in that it was very civil and respectful) brewed over how Fibre Channel compares to Ethernet/IP. The comments look like the aftermath of the battle of Wolf 359.

Captain, the analogy regarding squirrels and time travel didn’t survive

One camp, led by Erik Smith from EMC (who co-wrote the best Fibre Channel book I’ve seen so far, and it’s free), compares WWPNs to IP addresses and FCIDs to MAC addresses. Others, such as Ivan Pepelnjak and myself, compare WWPNs to MAC addresses and FCIDs to IP addresses. There were many points and counter-points, and valid arguments were made for each position. Eventually, people agreed to disagree. So which one is right? They both are.

Wait, what? Two sides can’t be right, not on the Internet!

When comparing Fibre Channel to Ethernet/IP, it’s important to remember that they are different. Significantly different, in fact. The only purpose of relating Fibre Channel to Ethernet/IP is to introduce those familiar with Ethernet/IP to the world of Fibre Channel. Many (most? all?) people learn by building associations between known subjects (in our case Ethernet/IP) and lesser-known subjects (in this case Fibre Channel).

Of course, any association includes its inherent inaccuracies. We purposefully sacrifice some accuracy to attain relatability; specific details get glossed over. To some, introducing any inaccuracy is sacrilege. To me, that’s being overly pedantic. Pedantic details are for the expert level, and using pedantic facts to admonish an analogy misses the point entirely. With any analogy there will always be inaccuracies, and there will always be many analogies to be made.

Personally, I still prefer the WWPN ~= MAC / FCID ~= IP approach, and will continue to use it when I teach. But I believe the other approach is completely valid as well. At that point, it’s just a matter of preference. Both roads lead to the same destination, and that’s what really matters.

Learning always happens in layers: coat after coat is applied, increasing in accuracy and pedantic detail as you go. Analogies are a very useful and effective tool for learning any subject.

Cisco ACE: Insert Client IP Address

Source NAT (also referred to as one-armed mode) is a common way of implementing load balancers into a network. It has several advantages over routed mode (where the load balancer is the default gateway of the servers), most importantly that the load balancer doesn’t need to be Layer 2 adjacent to (on the same subnet as) the servers. As long as the SNAT IP address of the load balancer has bi-directional communication with the IP addresses of the servers, the load balancer can be anywhere: a different subnet, a different data center, even a different continent.

However, one drawback of Source NAT is that the client’s IP address is obscured: the server’s logs will show only the load balancer’s SNAT address(es).

There is a way to remedy that if the traffic is HTTP/HTTPS: have the load balancer insert the true source IP address into the client’s HTTP request header. On the ACE, you do it in the load-balance policy-map.

policy-map type loadbalance http first-match VIP1_L7_POLICY
  class class-default
     serverfarm FARM1
     insert-http x-forwarded-for header-value "%is"

But that alone is not enough. There are two extra steps you need to take.

The first step is to tell the web server to log the X-Forwarded-For header. For Apache, it’s a configuration file change; for IIS, you need to run an ISAPI filter.
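For Apache, a minimal sketch of that change using the standard mod_log_config syntax (the “proxied” nickname and log path here are just examples):

```apache
# Log the X-Forwarded-For header in place of the connection's source IP,
# which would otherwise be the load balancer's SNAT address.
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog logs/access_log proxied
```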

The other thing you need to do is fix the ACE’s attention span. You see, by default the ACE has a short attention span. The HTTP protocol allows you to make multiple HTTP requests on a single TCP connection. By default, the ACE will only evaluate/manipulate the first HTTP request in a TCP connection.

So your log files will look like this:

1.1.1.1 "GET /lb/archive/10-2002/index.htm"
- "GET /lb/archive/10-2003/index.html"
- "GET /lb/archive/05-2004/0100.html HTTP/1.1"
2.2.2.2 "GET /lb/archive/10-2007/0010.html"
- "GET /lb/archive/index.php"
- "GET /lb/archive/09-2002/0001.html"

The “-” indicates Apache couldn’t find the header, because the ACE didn’t insert it. The ACE did add the first source IP address, but every request after it in the same TCP connection was ignored.

Why does the ACE do this? For one, it’s less work, only evaluating/manipulating the first request in a connection. Since browsers will make dozens or even hundreds of requests over a single connection, this is a significant savings of resources. After all, most of the time when Layer 7 configurations are used, it’s for cookie-based persistence, and in that case all the requests in the same TCP connection are going to contain the same cookies anyway.

How do you fix it? With a rather ill-named feature called persistence-rebalance. This gives the ACE a longer attention span, telling it to look at every HTTP request in the TCP connection.

First, create an HTTP parameter-map.

parameter-map type http HTTP_LONG_ATTENTION_SPAN
  persistence-rebalance

Then apply the parameter-map to the VIP in the multi-match policy map.

policy-map multi-match VIPsOnInterface
  class VIP1
    loadbalance vip inservice
    loadbalance policy VIP1_L7_POLICY
    appl-parameter http advanced-options HTTP_LONG_ATTENTION_SPAN

With that in place, the IP address will show up in all of the log entries.

1.1.1.1 "GET /lb/archive/10-2002/index.htm"
2.2.2.2 "GET /lb/archive/10-2003/index.html"
1.1.1.1 "GET /lb/archive/05-2004/0100.html HTTP/1.1"
2.2.2.2 "GET /lb/archive/10-2007/0010.html"
1.1.1.1 "GET /lb/archive/index.php"
2.2.2.2 "GET /lb/archive/09-2002/0001.html"

But remember, configuring the ACE (or any load balancer) isn’t the only step. You also need to tell the web server (Apache, Nginx, IIS) to use the header; none of them use X-Forwarded-For automatically.
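One more wrinkle worth knowing on the application side: if a request passes through more than one proxy, X-Forwarded-For can accumulate a comma-separated list of addresses, with the original client leftmost. A small illustrative parser (the sample header values are made up):

```python
def client_ip(xff_header):
    """Return the original client IP from an X-Forwarded-For value.

    Each proxy in the path appends the address of its upstream peer,
    so the leftmost entry is the original client.
    """
    if not xff_header:
        return None
    return xff_header.split(",")[0].strip()

# A request that traversed a forward proxy and then the load balancer:
print(client_ip("1.1.1.1, 192.168.1.10"))  # 1.1.1.1
print(client_ip(None))                     # None (header wasn't inserted)
```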

I don’t know if they’ll try to trick you with this in the CCIE Lab, but it’s something to keep in mind for the CCIE and for implementations.