Top 5 Reasons The Evaluator Group Screwed Up

It's been a while since the trainwreck of a "study" commissioned by Brocade and performed by The Evaluator Group, but it's still being discussed in various storage circles (and that's not good news for Brocade). Some pretty much parroted the results, seemingly without reading the actual test, then got all pissy when confronted about it. I did a piece on my interpretations of the results, as did Dave Alexander of WWT and J Metz of Cisco. Our mutual conclusion can be best summed up with a single animated GIF.

 

bullshit

But since a bit of time has passed and I've had time to absorb Dave and J's opinions, as well as others', I've come up with a list of the Top 5 Reasons The Evaluator Group Screwed Up. This isn't the complete list, of course, but it covers some of the more glaring problems. Let's start with #1:

Reason #1: I Have No Idea What I’m Doing

Their hilariously bad conclusion was that the higher variance in response times and higher CPU usage were caused by the software initiators. Except they didn't use software initiators. They had actually configured hardware initiators, and didn't know it. Let that sink in: they were charged with performing an evaluation without knowing what they were doing.

Evaluator Group's response: "The Cisco UCS VIC 1240 hardware CNA's were utilized. Referring to them as software initiators caused some confusion. The Cisco VIC is a hardware initiator and we configured them with virtual HBAs. Evaluator Group has no knowledge of the internal architecture of the VIC or its driver. Our commentary of the possible cause for higher CPU utilization is our opinion and further analysis would be required to pinpoint the specific root cause."

Of course, it wasn’t the software initiator. They didn’t use a software initiator, but they were so clueless, they didn’t know they’d actually used a hardware initiator. Without knowing how they performed their tests (since they didn’t publish their methodology) it’s purely speculation, but it looks like the problem was caused by congestion (from them architecting the UCS solution incorrectly).

Reason #2: They’re Hilariously Bad At Math.

They claimed FCoE required 50% more cables, based on the fact that there were 50% more cables in the FCoE solution than the FC solution. Which makes sense… except that the FC system had zero Ethernet.

That's right: in the HP/Fibre Channel solution, each blade had absolutely zero Ethernet connectivity. None. Zilch. In the Cisco UCS solution, every blade had full Ethernet and Fibre Channel connectivity. Why did they do that? Probably because, had they included any network connectivity in the HP system, the cable count would have shifted in FCoE's favor. Let me state this again, because it's astonishingly stupid: they claimed FCoE (which included both Ethernet and FC connectivity) required more cables, while not including any network connectivity for the HP/FC system.

not_even_mad

Also, they made some power/cooling claims, despite the fact that the UCS solution didn't require a separate FC switch (it's capable of being a full-fledged Fibre Channel switch by itself), while the HP solution would have required a separate pair of Ethernet switches (which weren't included). So yeah, their math is a bit off. Had they done things, you know, correctly, the power, cooling, and cable count would have flipped in favor of FCoE.

Reason #3: UCS is Hard, You Guys!

They whinged about UCS being more difficult to set up. Anytime you're dealing with unfamiliar technology, it's natural that it's going to be more difficult. However, they claimed that they had zero experience with HP as well (seriously, who at Brocade hired these guys?). How easy is UCS? Here is a video from Amsterdam where a couple of Cisco techs added a new chassis and blade and had it up and running ESXi in less than 30 minutes, from in the box to booted. Cisco UCS is different from other blade systems, but it's also very easy (and very quick) to stand up. And keep in mind, the video I linked was done in Amsterdam, so they were probably baked.

Reason #4: It Contradicts Everyone Else’s Results (Especially those that know what they’re doing)

For the past couple of years, VMware and NetApp have been doing performance tests on various storage protocols. Here's one from a few years ago, which includes (native) 4 and 8 Gbit Fibre Channel, 10 Gbit FCoE, 10 Gbit iSCSI, and 10 Gbit NFS. The conclusion? The protocol doesn't much matter. They all came out about the same when normalized for bandwidth. The big difference is in the storage backend. At least they published their methodology (I'm looking at you, Evaluator Group). Here's one from Demartek that shows a mixture of storage protocols saturating 10 Gbit Ethernet. Again, the limitation is only the link speed itself, not the protocol. And, again, Demartek published their methodology.

Reason #5: How Did They Set Everything Up? Magic!

Most of the time with these commissioned reports, the details of how it’s configured are given so that the results can be reproduced and audited. How did the Evaluator Group set up their environment?

GOB.MAGIC.GIF

As far as I can tell, magic. There are several things they could have easily gotten wrong with the UCS setup, and given their mistake about software/hardware initiators, they quite likely did. They didn't even mention which storage vendor they used.

So there you have it. A bit of a re-hash, but hey, it was a dumb report. The upside though is that it did provide me with some entertainment.

Hey, Remember vTax?

Hey, remember vTax/vRAM? It's dead and gone, but with servers now available with 6 terabytes of RAM, imagine what could have been (your insanely high licensing costs).

Set the wayback machine to 2011, when VMware introduced vSphere version 5. It had some really great enhancements over version 4, but no one was talking about the new features. Instead, they talked about the new licensing scheme and how much it sucked.

wayback2

While some defended VMware's position, most were critical, and my own opinion… let's just say I've likely ensured I'll never be employed by VMware. Fortunately, VMware came to their senses, realized what a bone-headed, dumbass move vRAM/vTax was, and repealed the vRAM licensing one year later in 2012. So while I don't want to beat a dead horse (which, seriously, disturbing idiom), I do think it's worth looking back for just a moment to see how monumentally stupid that licensing scheme was for customers, to serve as a lesson in the economies of scaling for the x86 platform, and to serve as a reminder about the ramifications of CapEx- versus OpEx-oriented licensing.

Why am I thinking about this almost 2 years after they got rid of vRAM/vTax? I've been reading up on Intel's newly released E7 v2 processors, and among the updates to Intel's high-end server chip is the ability to have 24 DIMMs per socket (the previous limit was 12) and support for 64 GB DIMMs. This means that a 4-way motherboard (which you can order now from Cisco, HP, and others) can support up to 6 TB of RAM, using 96 DIMM slots and 64 GB DIMMs. And you'd get up to 60 cores/120 threads with that much RAM, too.

And I remembered one (of many) aspects about vRAM that I found horrible, which was just how quickly costs could spiral out of control, because server vendors (which weren’t happy about vRAM either) are cramming more and more RAM into these servers.

The original vRAM licensing with vSphere 5 was that for every socket you paid for, you were entitled to (and limited to) 48 GB of vRAM with Enterprise Plus. To be fair, the licensing scheme didn't care how much physical RAM (pRAM) you had, only how much of the RAM was consumed by spun-up VMs (vRAM). With vSphere 4 (and the current vSphere licensing, thankfully), RAM had been essentially free: you only paid per socket, and you could use as much RAM as you could cram into a server. But with the vRAM licensing, if you had a dual-socket motherboard with 256 GB of RAM, you would have to buy 6 licenses instead of 2. At the time, 256 GB servers weren't super common, but you could order them from the various server vendors (IBM, Cisco, HP, etc.). So with vSphere 4, you would have paid about $7,000 to license that system. With vSphere 5, assuming you used all the RAM, you'd pay about $21,000 to license the system, triple the licensing cost. And that was day 1.

Now let's see how much it would cost to license a system with 6 TB of RAM. If you use the original vRAM allotment amounts from 2011, each socket granted you 48 GB of vRAM with Enterprise Plus (they did up the allotments after all of the backlash, but that amended vRAM licensing model was so convoluted you literally needed an application to tell you how much you owed). That means to use all 6 TB (and after all, why would you buy that much RAM and not use it?), you would need 128 socket licenses, which would have cost $448,000 in licensing. A cluster of 4 such vSphere hosts would cost just shy of $2 million to license. With current, non-insane licensing, the same 4-way 6 TB server costs a whopping $14,000. That's 32 times the price.
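If you want to check my arithmetic, here's a quick back-of-the-napkin sketch in Python. The roughly $3,500 per-socket price is just backed out of the numbers above ($7,000 for 2 sockets, $14,000 for 4), so treat it as an approximation rather than an official price list:

```python
import math

ENT_PLUS_PER_SOCKET = 3500   # approximate Enterprise Plus price per socket (backed out of the figures above)
VRAM_PER_LICENSE_GB = 48     # original vSphere 5 Enterprise Plus vRAM entitlement

def vtax_licenses(vram_in_use_gb):
    """Socket licenses needed under the original vRAM scheme (buy enough to cover consumed vRAM)."""
    return math.ceil(vram_in_use_gb / VRAM_PER_LICENSE_GB)

# Dual-socket host with 256 GB of RAM, all of it consumed by VMs:
print(vtax_licenses(256), vtax_licenses(256) * ENT_PLUS_PER_SOCKET)    # 6 licenses, $21,000 (vs. $7,000 for 2 sockets)

# 4-way host with 6 TB (6,144 GB) of RAM, all of it consumed:
print(vtax_licenses(6144), vtax_licenses(6144) * ENT_PLUS_PER_SOCKET)  # 128 licenses, $448,000 (vs. $14,000 for 4 sockets)
```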

Again, this is all old news. VMware got rid of the awful licensing, so it's a non-issue now. But it's still important to remember what almost happened, and how insane licensing costs could have been just a few years later.

saved

My graph from 2011 was pretty accurate.

Rumor has it VMware is having trouble getting customers to go for OpEx-oriented licensing for NSX. While VMware hasn't publicly discussed licensing, it's a poorly kept secret that VMware is looking to charge for NSX on a per-VM, per-month basis. The number I'd been hearing is $10 per month ($120 per year), per VM. I've also heard as high as $40, and as low as $5. But whatever the numbers are, VMware is gunning for OpEx-oriented licensing, and no one seems to be biting. And it's not the technology (everyone agrees it's pretty nifty); it's the licensing terms that are the concern. NSX is viewed as network infrastructure, and in that world we're used to CapEx-oriented licensing. Some of VMware's products are OpEx-oriented, but their attempt to switch vSphere over to OpEx was disastrous. And it seems to be the same for NSX.

FCoE versus FC Farce (I’m Tellin’ All Y’All It’s Sabotage!)

Updates 2/6/2014:

  • @JohnKohler noticed that the UCS Manager screenshot used (see below) is from a UCS Emulator, not any system they used for testing.
  • Evaluator Group promises answers to questions that both I and Dave Alexander (@ucs_dave) have brought up.

On my way back from South America/Antarctica, I was pointed to a bake-off/performance test commissioned by Brocade and performed by a company called Evaluator Group. It compared the performance of edge FCoE (non-multi-hop FCoE) to native 16 Gbit FC. The FCoE test was done on a Cisco UCS blade system connecting to a Brocade switch, and the FC was done on an HP C7000 chassis system connecting to the same switch. At first glance, it would seem to show that FC is superior to FCoE for a number of reasons.

I'm not a Cisco fanboy, but I am a Cisco UCS fanboy, so I took great interest in the report. (I also work for a Cisco Learning Partner as an instructor and courseware developer.) But I also like Brocade, and have a huge amount of respect for many of the Brocade employees I have met over the years. They are great and smart people, and they serve their customers well.

First, a little bit about these types of reports. They’re pretty standard in the industry, and they’re commissioned by one company to showcase superiority of a product or solution against one or more of their competitors. They can produce some interesting information, but most of the time it’s a case of: “Here’s our product in a best-case scenario versus other products in a mediocre-to-worst case scenario.” No company would release a test showing other products superior to theirs of course, so they’re only released when a particular company comes out on top, or (most likely) the parameters are changed until they do. As such, they’re typically taken with a grain of salt. In certain markets, such as the load balancer market, vendors will make it rain with these reports on a regular basis. 

But for this particular report, I found several substantial issues with it which I’d like to share. It’s kind of a trainwreck. Let’s start with the biggest issue, one that is rather embarrassing.

What The Frak?

On page 17, check out the Evaluator Group's comments:

“…This indicates the primary factor for higher CPU utilization within the FCoE test was due to using a software initiator, rather than a dedicated HBA. In general, software initiators require more server CPU cycles than do hardware initiators, often negating any cost advantages.”

For one, no shit. Hardware initiators will perform better than software initiators. However, the Cisco VIC 1240 card (which, according to page 21, was included in the UCS blades) is a hardware initiator card. Because it's a CNA (converged network adapter), the OS sees a native FC interface. With ESXi you don't even need to install extra drivers; the FC interfaces just show up. Setting up a software FCoE initiator would actually be quite a bit more difficult to get going, which might account for why it took them so long to configure UCS. Configuring a hardware vHBA in UCS is quite easy (it can be done in literally less than a minute).

Using software initiators against hardware FC interfaces is beyond a nit-pick in a performance test. It would be downright sabotage.

sabotage

I asked the @Evaluator_Group account if they really did use software initiators:

And they responded in the affirmative.

Wow. First of all, software FCoE initiators are absolutely not a standard configuration for UCS. In the three years I've been configuring and teaching UCS, I've never seen or even heard of FCoE software initiators being used, either in production or in a testing environment. The only reason you *might* want to do FCoE software initiators is when you've got the Intel mezzanine card (which is not a CNA, just an Ethernet card) and want to test FCoE. However, page 21 shows the UCS blades as having the VIC 1240 cards, not the Intel card.

So where were they getting that it was standard configuration?

They countered with a reference to a UCS document regarding the VIC:

and then here:

Which is this document here.

The part they referenced is as follows:

The adapter offerings are:

■ Cisco Virtual Interface Cards (VICs)

Cisco developed Virtual Interface Cards (VICs) to provide flexibility to create multiple NIC and HBA devices. The VICs also support adapter Fabric Extender and Virtual Machine Fabric Extender technologies.

■ Converged Network Adapters (CNAs)

Emulex and QLogic Converged Network Adapters (CNAs) consolidate Ethernet and Storage (FC) traffic on the Unified Fabric by supporting FCoE.

■ Cisco UCS Storage Accelerator Adapters

Cisco UCS Storage Accelerator adapters are designed specifically for the Cisco UCS B-series M3 blade servers and integrate seamlessly to allow improvement in performance and relief of I/O bottlenecks.

Wait… I think they think that the VIC card is a software-only FCoE card. It appears they came to that conclusion because this particular document doesn't specifically mention that the VIC is a CNA (other UCS documents clearly and correctly indicate that the VIC card is a CNA). Because it lists the VICs separately from the traditional CNAs from Emulex and QLogic, it seems they believe it not to be a CNA, and thus a software card.

So it may be they did use hardware initiators, and mistakenly called them software initiators. Or they actually did configure software initiators, and did a very unfair test.

No matter how you slice it, it's troubling. On one hand, if they did configure software initiators, they either ignorantly or willfully sabotaged the FCoE results. If they just didn't understand the basic VIC concept, it means they set up a test without understanding the most basic aspects of the Cisco UCS system. We're talking 101-level stuff, too. I suspect it's the latter, but since the only configuration of any of the devices they shared was a worthless screenshot of UCS Manager, I can't be sure.

This lack of understanding could have a significant impact on the results. For instance, on page 14 the response time starts to get worse for FCoE at about the 1200 MB/s mark. That’s roughly the max for a single 10 Gbit Ethernet FCoE link (1250 MB/s). While not definitive, it could mean that the traffic was going over only one of the links from the chassis to the Fabric Interconnect, or the traffic distribution was way off. My guess is they didn’t check the link utilization, or even know how, or how to fix it if it were off.
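The back-of-the-envelope math on that ceiling (a rough sketch: 1250 MB/s is just the 10 Gbit line rate divided by 8, ignoring Ethernet/FCoE framing overhead, and the two-link figure assumes the pair of chassis-to-Fabric-Interconnect links discussed a bit later in this post):

```python
# Rough ceiling of a 10 Gbit Ethernet link carrying FCoE, ignoring framing overhead.
per_link_mbps = 10 * 1000 / 8       # 1250 MB/s per 10 GbE link
two_link_mbps = 2 * per_link_mbps   # 2500 MB/s if traffic were spread across both chassis uplinks

knee = 1200  # MB/s, where the report's FCoE response times start degrading (page 14)
print(knee / per_link_mbps)  # ~0.96 -> right at one link's worth of bandwidth
print(knee / two_link_mbps)  # ~0.48 -> nowhere near the two-link ceiling
```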

Conclusions First?

One of the odder aspects of this report is where some of the conclusions show up, such as this one:

“Evaluator Group believes that Fibre Channel connectivity is required in order to achieve the full benefits that solid-state storage is able to provide.”

That was page 1. Making a conclusion like that on the very first page of the report is a little weird. It makes you sound a little… biased. These reports usually try to be impartial (despite being commissioned by a particular vendor). This one starts right out with an opinion.

Lack of Configuration

The amount of configuration detail they provide for the setup is very sparse. For the UCS side, all they provide is one pretty worthless screenshot. Same for the HP system. For the UCS part, it would be important to know how they configured the vHBAs, and how they configured the two 10 Gbit links from the chassis IOMs to the Fabric Interconnects. Were they configured for the preferred Fabric Port Channel, or for static pinning? So there's no way to duplicate this test. That's not very transparent.

Speaking of configuration, one of the issues they had with the FCoE side was how long it took them to stand up a UCS system. On page 11, second paragraph, they mention they needed the help of a VAR to get everything configured. They even made a comment on page 10:

“…this approach was less intuitive during installation than other enterprise systems Evalutor Group has tested.”

youdontsay

So wow, you’ve got an environment that you’re unfamiliar with (we’ll see just how unfamiliar in a minute), and it took you *gasp* longer to configure? I’m a bit of an expert on Cisco UCS. I’m kind of a big deal. I have many paper-bound UCS books and my apartment smells of rich mahogany. I teach it regularly. And I’m not nearly as familiar with the C7000 system from HP, so I’d be willing to bet that it would take me longer to stand up an HP system than it would a Cisco UCS system. Anyone want in on that action?

Older Version? Update 2-6-2014: Screenshot Is A Fake

itsafake

@JohnKohler noticed that the serial number in the screenshot on page 23 is "1", which means the screenshot is not from any physical instance of UCS Manager, but from the UCS Emulator. The date (if accurate on the host machine) shows June of 2010, so it's a very, very old screenshot (probably of version 1.3 or 1.4). So we have no idea what version of UCS they used for these tests (and, more importantly, how it was configured).

Based on the screenshot on page 23, the UCS version is 2.0. How can you tell? Take a look where it says “Unconfigured Ports”. As of 2.1, Cisco changed the way ports are shown in the GUI. 2.1 and later do not have a sub-menu for unconfigured ports. Only 2.0 and prior.

nounconfiguredports

In version 2.1, there’s no “Unconfigured Ports” sub-menu. 

evalgroupucs

From Page 21, you can see “Unconfigured Ports”, indicating it’s UCS 2.0 or earlier.

If the test was done in November 2013, that would put it one major revision behind, as 2.1 was released in November of 2012 (2.2 was released in December 2013). We don't know where they got the equipment, but if it was acquired anytime in the past year, it likely came with 2.1 already installed (though that's not definite). To get to 2.0, they'd have had to downgrade. I'm not sure why they would have done that, and if they'd brought in a VAR with certified UCS people, the VAR would likely have recommended 2.1. With 2.1, they could have directly connected the storage array to the UCS Fabric Interconnects. UCS 2.1 can do Fibre Channel zoning and can function as a standard Fibre Channel switch, so the Brocade switch wouldn't have been needed, and the links to the storage array would be 8 Gbit instead of 16 Gbit.

They also don't mention the solid-state array vendor, so we don't know if there was the capability to do FCoE directly to the storage array. Though not terribly common yet, FCoE connection to a storage array is done in production environments and would have benefited the UCS configuration if it were possible (and would be the preferred way if competing with 16 Gbit FC). There would also have been the ability to do an LACP port channel between the Fabric Interconnects and the storage array, providing better load distribution and redundancy.

Power Mad

The claim that the UCS system uses more power is laughable, especially since they specifically mentioned this setup is not how it would be deployed in production. Nothing about this setup was production worthy, and it wasn't supposed to be. It's fine for this type of test, but not for production. It's non-HA, and using only 2 blades would be a waste of a blade system (and power, for either HP or Cisco). If you were only using a handful of servers, buying pizza boxes would be far more economical. Blades from either HP or Cisco only make sense past a certain number to justify the enclosures, networking, etc. If you want a good comparison, do 16 blades, or 40 blades, or 80 blades. Also include the Ethernet network connectivity. The UCS configuration has full Ethernet connectivity; the HP configuration as shown has squat.

Even by competitive report standards, this one is utter bunk. If I were Brocade, I would pull the report. With all the technology mistakes and ill-founded conclusions, it's embarrassing for them. It's quite easy for Cisco to rip it to shreds.

(Just so there’s no confusion, no one paid me or asked me to write this. In fact, I’m still on PTO. I wrote this because someone is wrong on the Internet…)

VMware Year-Long vTax Disaster is Gone!

The rumors were true, vTax is gone. The announcement, rumored last week, was confirmed today.

Tequila!

Don’t let the door hit you on the ass on your way out. And perusing their pricing white paper you can see the vRAM allotments are all listed as unlimited.

vRAM limits (or vTax, as it's derisively called) have been a year-long disaster for VMware, and here's why:

It stole the narrative

vTax stole the narrative. All of it. Yay, you presented a major release of your flagship product with tons of new features and added more awesomesauce. Except no one wanted to talk about any of it. Everyone wanted to talk about vRAM, and how it sucked. In blogs, message boards, and IT discussions, it’s all anyone wanted to talk about. And other than a few brave folks who defended vTax, the reaction was overwhelmingly negative.

It peeved off the enthusiast community

Those key nerds (such as yours truly) who champion a technology felt screwed after they limited the free version to 8 GB. They later revised it to 32 GB after the uproar, which right now is fair (and so far it's still 32 GB). That'll do for now, but I think by next year they need to kick it up to 48 GB.

It was fucking confusing

Wait, what? How much vRAM do I need to buy? OK, why do I need 10 socket licenses for a 2-way server? I have 512 GB of RAM and 2 CPUs, so I have to buy 512 GB of vRAM? Oh, only if I use it all. So if I'm only using 128 GB, I only need to buy 4? OK, well, wait, what about VMs over 96 GB, they only count towards 96 GB? What?

What?

It was so complicated, there was even a tool to help you figure out how much vRAM you needed. (Hint: if you need a program to figure out your licensing, your licensing sucks.)
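For the morbidly curious, here's roughly what that calculator had to do, as I understood the amended rules (96 GB of vRAM per Enterprise Plus socket, each powered-on VM counting its configured RAM capped at 96 GB, pooled across all licensed sockets). This is an illustrative sketch, not VMware's official tool, and edge cases varied by edition:

```python
import math

def licenses_needed(vm_ram_gb, sockets, vram_per_license_gb=96, per_vm_cap_gb=96):
    """Sketch of the amended vSphere 5 Enterprise Plus vRAM math (illustrative, not official).

    vm_ram_gb: configured RAM (GB) of each powered-on VM in the pool
    sockets:   physical CPU sockets, since every socket needs a license regardless
    """
    # Each powered-on VM counts its configured RAM, capped at 96 GB.
    consumed_vram = sum(min(ram, per_vm_cap_gb) for ram in vm_ram_gb)
    # Buy enough licenses to cover the pooled vRAM, but never fewer than one per socket.
    return max(sockets, math.ceil(consumed_vram / vram_per_license_gb))

# A 2-socket server running VMs totaling 512 GB of configured RAM:
print(licenses_needed([64] * 8, sockets=2))  # 6 licenses for a 2-socket box
```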

It gave the competition a leg up

It's almost as if VMware said to Microsoft and Red Hat: "Here guys, have some market share." I imagined that executives over at Microsoft and Red Hat were naming their children after VMware for the gift they gave them. A year later, I see a lot more Hyper-V (and lots of excitement towards Hyper-V 3) and KVM discussions. And while VMware is in virtually (get it?) every data center, from my limited view Hyper-V and KVM seem to be installed in production in far more data centers than they were a year ago, presumably taking away seats from VMware. (What I don't see, oddly enough, is Citrix Xen for server virtualization. Citrix seems to be concentrating only on VDI.)

It fought the future

One of the defenses from VMware, and from those who sided with VMware on the vTax issue, was that 90+% of current customers wouldn't need to pay additional licensing fees to upgrade to vSphere 5. I have a hard time swallowing that; I think the number was much lower than they were saying (perhaps some self-delusion there). They saw a customer average of 6:1 in terms of VMs per host, which I think is laughably low. As laughable as when Dr. Evil vastly over-estimated the value of 1 million dollars.

We have achieved server consolidation of 6:1!

And add into that hardware refreshes. The servers and blades that IT organizations are looking to buy aren't 48 GB systems anymore. The ones that are catching our wandering eyes are stuffed to the brim with RAM. A 2-way blade with 512 GB of RAM would need 11 socket licenses with the original 48 GB vRAM allotments for Enterprise+, or 6 socket licenses with the updated 96 GB vRAM allotments for Enterprise+. That's either 5.5 times or 3 times the price of the previous licensing model.

They had the gall to say it was good for customers

So, how is a dramatic price increase good for customers? Mathematically, there was no way for any customer to save money with vRAM. It either cost the same, or it cost more. And while vSphere 5 brought some nice advancements, I don’t think any of them justified the price increase. So while they thought it was good for VMware (I don’t think it did VMware any good at all), it certainly wasn’t “good” for customers.

It stalled adoption of vSphere 5

Because of all these reasons, vSphere 5 uptake seemed to be a lot lower than they’d hoped, at least from what I’ve seen.

So I’m glad VMware got rid of vTax. It was a pretty significant blunder for a company that has done really well navigating an ever-changing IT realm, all things considered.

Some still defend the new-old licensing model, but I respectfully disagree. It had no upside. I think the only kind-of-maybe semi-positive outcome of the whole vRAM episode is that it ended up trolling Microsoft and other competitors, because one of their best attacks on VMware (which VMware created for them out of thin air) is now gone. I'm happy that VMware seems to have acknowledged the blunder, in a rare moment of humility. Hopefully this humility sticks.

It pissed off partners, it pissed off hardware vendors, it pissed off the enthusiast community, and it pissed off even the most loyal customers.

Good riddance.

VMware Getting Rid of vRAM Licensing (vTax)?

(Update 8/21/12: VMware has a comment on the rumors [they say check next week])

A colleague pointed me to this article, which apparently indicates that with vSphere 5.1, VMware is getting rid of vRAM (cough vTax cough). I have found an appropriate animated GIF that communicates both my feelings and the sweet dance moves I have just performed.

Nailed it

If this turns out to be true, it's awesome. Even if most organizations weren't currently affected by vTax, it's almost certain they would be soon, as they refreshed their servers and blades with models that gleefully include obscene amounts of RAM. With Cisco's UCS, for example, you can get a half-width blade (the B230 M2) and cram it with 512 GB of RAM, or a full-width blade (the B420 M3) and stuff it with 1.5 TB of RAM. The latter is a 4-socket system, and to license it for the full 1.5 TB of RAM would require buying not 4 licenses, but 16, making high-RAM systems far more expensive to license.

That was the darkest side of vRAM: even if you weren't affected by it today, it was only a matter of time. One might say that it's… A TRAP, as I've written about before. VMware fanboys/girls tried to apologize for it, but the fact is it was not a popular move, either in the VMware enthusiast community or the business community.

So if it's really gone, good. Good riddance. The only question now is how much RAM we get to use in the free version, which is critical to the study/home lab market.

BYOD: A Tale of Two Bringings of Devices

BYOD is certainly a hot topic lately. A bit ago I raised some concerns over Juniper's Junos Pulse product, which allows a company to not only protect employees' BYOD devices, but also to view their photos, messages, and other potentially private information. My argument wasn't that it shouldn't be used on employer-owned devices (that's fine), but that Juniper was marketing it to be installed on employee-owned devices, potentially greatly intruding on their privacy.

I got into a similar Twitter discussion recently too, and it dawned on me that there are really two distinct types of BYOD: BYODe and BYODr. BYODe is Bring Your Own Device, Employee-controlled, and BYODr is Bring Your Own Device, EmployeR-controlled.

What's the difference? Most BYOD I see is BYODe, where an employee is given an account on a mail server running IMAP/POP/Exchange, a VPN client to connect to the local intranet and internal file servers, and maybe some other services (such as a salesforce.com login). The employee might be required to use antivirus software and encrypt their hard drive, but there's a clear delineation.

BYODr is when the employer requires a much greater level of control over the employee's personal property. It might come in the form of a standard software load from IT, the ability to remotely access the employee's device, and the ability to remote-wipe the device.

If the company has the ability to look at the device itself, that's going to limit what I do with it. Many types of personal communications, certain, uh, websites, etc., are going to be off limits, because my privacy on that device has been explicitly signed away.

This approach chafes at me a little, but I'm coming around to it a bit, so long as the employees have been explicitly told about all of the potentially privacy-invading functionality of the software. In one case, students weren't informed about such capabilities in school-supplied laptops (so not BYOD, but still a similar BYODr scenario): a school district was caught viewing students through the webcams of their laptops.

So while forking over cash for a device that you don't get to control sounds like a raw deal, it doesn't always need to be. I've become accustomed to a certain level of hardware. You know what I'm not down with? A 6-year-old Dell craptop (Dell seems to have gotten better, but man, they made some crap laptops, and IT departments ate them up like candy).

You know those shitty Dells that are universally despised? Order one for every person on staff. Except the executives. Get us the good stuff.

If my primary job is technology related, I would rather bring my own device than deal with the ancient PoS laptop they’d likely give me.

BYODe, or employee-controlled BYOD, is likely not appropriate for certain industries (such as defense and healthcare), but for the most part this is what I've seen (and what I think about when discussing BYOD). Many high-technology companies follow this approach, and it works great with a tech-savvy staff who chafe at the snail's pace at which corporate IT can sometimes move.

From Dropbox to app stores to Gmail, corporate IT organizations can't keep up. Sometimes it's just a matter of the breakneck pace of the industry, and sometimes it's just a matter of corporate IT sucking. I saw a few employees of a huge networking vendor lamenting their 200 MB mailbox limit. It's 2012, and people still have 200 MB as a limit? I've got like 7 GB on my Gmail account. That would go into the corporate-suckage column. Dropbox? It's hard to compete with a Silicon Valley startup when it comes to providing a service. Yet Dropbox is something that organizations (for some legit reasons, for some paranoid delusional reasons) fear.

So when talking about BYOD, I think it's important to know which kind of BYOD we're talking about. The requirements placed on the employee's device (from a simple, non-intrusive VPN client all the way to Big Brother looking into my stuff) very much change the dynamics of BYOD.

Adobe’s eBook Platform Is A Piece of Shit

Adobe’s eBook platform is utter shit. To those of you that have dealt with ACSM files, that statement is as controversial as saying “the sky is blue”. To those of you that haven’t, and are wondering what makes it such shit, read on.

It all started with a deal that Cisco Press had on Cyber Monday this year, offering 50% off if you buy three books. As a certified Cisco course instructor (I do not work for Cisco, I just teach Cisco courses) who is also working on my CCIE Storage, I can always do with a few more books, especially if they're on the recommended reading list for CCIE Storage.

Also, since I travel quite a bit (150,000 miles this year), eBooks are the preferred knowledge delivery vector, since books are, well, frickin’ heavy. I took a nearly 800 page CCNP route book with me all over Europe last year, and it almost killed me. eBooks it is.  I’ve got an iPad, and I absolutely love the Kindle reader app. If I’ve got a long flight ahead of me (such as to say, India) then I make sure I’ve got plenty of books loaded up into my first generation iPad and iPhone 4 (which is also a surprisingly good e-reader). I also have a half decent PDF viewer for non-eBook format documents to read on the road.

I found three eBooks from Cisco Press that fit the bill, loaded them up in my shopping cart, and pulled the trigger. $150 worth of books for $75, not too bad. Two of the books were in an unprotected PDF format (watermarked with my name to discourage rampant sharing, which is fine), the other book downloaded as a tiny little file, with an .acsm extension.

I'd never heard of a .acsm file, but I would soon come to loathe those four letters with the burning hatred of a thousand suns. My Canadian friend Jaymie Koroluk (@jaymiek) had this to say about it:

FFUUUUUU indeed. And thus began my Zeldian quest to get a friggin’ eBook on a friggin’ eBook reader. How hard could it be?

Well, of course my Mac didn't recognize the .acsm file type. I tried loading it into a couple of readers, such as Kindle (it laughed at it) and the PDF viewer that I use. It turns out the .acsm file didn't actually contain the eBook, just a reference to it (and, I believe, the DRM rights to open the book). I had no idea what to do with it. The Cisco Press site didn't have any specific instructions that I could find, so I Googled .acsm and eBook.

What I found was link after link that all said essentially “How the fuck do I get an .acsm book onto my reader???” Searching for acsm on Google reveals a world of woe, frustration, and hopelessness.

Google searches for “.acsm”  should just show this

After sifting through a few links, I found out that I needed to download something called Adobe Digital Editions. So I go to Adobe's site, and this is the message I get when I try to download it:

What? I’ve got a new MacBook Air with MacOS Lion. There’s no “here’s what you need to do”, just that obnoxious error. With a bit of digging, I’m able to download it anyway.

I install Adobe Digital Editions, which is not intuitive and bizarrely laid out, and I’m finally able to load up the acsm file, and download a copy of the eBook. And the eBook is… a protected PDF. All that shit for a protected PDF.

But hey, at least I got it, right? Hooray! But wait, I can only read it on my laptop. I need to get it on my iPad for this book to be of any use.

Yes, I’ve just experienced the eBook version of “The Princess is in another castle”.

But I told her to meet me here like five… fine. You know what? Tell her she's on her own. I'm gonna go find a girl who can manage to stay un-kidnapped for, say, 30 minutes at a time.

Laptops are generally not great eBook readers, because among other issues, the batteries don’t last as long. The iPad’s battery lasts 10 hours of active use, and the various Kindle readers have their active battery life measured in days.  If I can’t find a way to get this onto my iPad, then there’s not much point in me having spent the money for this book.

I tried to find an iPad app in the App Store that could read the format and open the protected PDF, but I came up blank. Or at least, none of them would obviously work. And most of them cost money, so I wasn't about to do trial and error on which ones might work.

Jaymie mentioned she had found an app called txtr, which I downloaded and installed. Txtr apparently was a failed eBook reader that moved to a purely software play. They also have the ability to read Adobe eBooks (and, as far as I can tell, txtr is the only iPad app that can). So finally, I'm able to read the eBook on my iPad.

All told, it takes me over an hour and lots of tinkering, installing, and Googling to get an Adobe eBook onto my iPad.

So how does the Adobe eBook platform compare to other eBook platforms when you finally get the fucking book loaded up on your fucking eBook reader (which again, should not be nearly as difficult as it was)? Let’s compare.

First, ease of getting a book. How long does it take me to get an eBook on the Kindle, iBook, or Nook platforms? About 10 fucking seconds with a decent Internet connection. On Adobe’s platform? About an hour. By my math, Adobe’s platform is 360 times worse than the competition.

So how about usability? The book is a PDF, and PDFs are not ideal as a book format, even the non-DRMd ones that can be opened up on any reader. They’re just not optimized for eReaders and it shows. When you turn a page, the page is blurry for a split second before coming into focus. You can’t zoom in on individual photos like you can with the other readers. And there are about a dozen other nit-picky yet important UI niceties that Kindle and the others have that a PDF eBook lacks. Adobe’s platform seems like they took their existing PDF format, and slapped an eBook layer onto it in a half-assed manner.

In studying for my CCIE Storage, I came across a fantastic free Fibre Channel eBook from EMC (the storage vendor). It's in an unprotected PDF format, but I'd happily pay $10 to get it in the Kindle format, which is much better suited to eBook readers.

Final Thoughts

I have a simple plea to anyone thinking of publishing an eBook: For the love of all that is sacred and good in the world, do not use the Adobe book format. It will annoy your readers, and severely limit your eBook sales.

Adobe either has no clue about the eBook market, or they’re trying to sabotage it with a platform so shitty, so mind-bogglingly difficult for even tech-savvy consumers, that no one will ever want to read an eBook ever again.

That’s right, sometimes you have a product so bad, that it doesn’t just leave a bad taste in your mouth, it actually does harm to the industry. And that’s what we have with Adobe.

So Adobe, what did eBooks ever do to you?

VMware? Surprised. Oracle? Not Surprised.

There's a rumor that VMware is backing away a bit from their vRAM (vTax) licensing. Nothing is confirmed yet, but according to gabesvirtualworld.com, they're not getting rid of vRAM licensing entirely, but are making some adjustments. They're upping the vRAM allotments, and 96 GB is the maximum a single VM can count towards vRAM allocation: if you had a VM with 256 GB of RAM, only 96 GB of it would count towards vRAM usage. I'd still like to see vRAM go away entirely, but it seems the new limits are less catastrophic in terms of future growth.

On a related note, it seems Oracle is one-upping VMware in the "dick move" department. A blog post on virtualizationpractice.com points out Oracle's new Java 7 licensing: in virtualized environments, Java 7 is only officially supported on an Oracle hypervisor. Knowing Larry Ellison, it's not all that surprising. He's a lock-'em-in kinda guy. I avoid Oracle at all costs.

VMware’s dick move was a surprise. That didn’t seem like the VMware we’d been working with for years. Oracle on the other hand, you’re only surprised they didn’t do it sooner.