BYOD: A Tale of Two Bringings of Devices

BYOD is certainly a hot topic lately. A while ago I raised some concerns over Juniper’s Junos Pulse product, which allows a company not only to protect employees’ BYOD devices, but also to view their photos, messages, and other potentially private information. My argument wasn’t that it shouldn’t be used on employer-owned devices (that’s fine), but that Juniper was marketing it to be installed on employee-owned devices, potentially greatly intruding on their privacy.

I got into a similar Twitter discussion recently, and it dawned on me that there are really two distinct types of BYOD: BYODe and BYODr. BYODe is Bring Your Own Device, Employee controlled, and BYODr is Bring Your Own Device, EmployeR controlled.

What’s the difference? Most BYOD I see is BYODe, where an employee is given an account on a mail server running IMAP/POP/Exchange, a VPN client to connect to the local intranet and internal file servers, and maybe some other services (such as a salesforce.com login). The employee might be required to run antivirus software and encrypt their hard drive, but there’s a clear delineation.

BYODr is when the employer requires a much greater level of control over the employee’s personal property. It might come in the form of a standard software load from IT, the ability to remotely access the employee’s device, and the ability to remotely wipe the device.

If the company has the ability to look on the device itself, it’s going to limit what I do with it. Many types of personal communications, certain, uh, websites, etc., are going to be off limits for that device because my privacy is explicitly signed away for that device.

This approach chafes at me a little, but I’m coming around to it a bit, so long as the employees have been explicitly told about all of the potentially privacy-invading functionality of the software. In one case (school-supplied laptops, so not BYOD, but a similar BYODr scenario), students weren’t informed about such capabilities, and the school district was caught viewing students through their laptops’ webcams.

So while forking over cash for a device that you don’t get to control sounds like a raw deal, it doesn’t always need to be. I’ve become accustomed to a certain level of hardware. You know what I’m not down with? A 6-year-old Dell craptop (Dell seems to have gotten better, but man, they made some crap laptops, and IT departments ate them up like candy).

You know those shitty Dells that are universally despised? Order one for every person on staff. Except the executives. Get us the good stuff.

If my primary job is technology related, I would rather bring my own device than deal with the ancient PoS laptop they’d likely give me.

BYODe, or employee-controlled BYOD, is likely not appropriate for certain industries (such as defense and healthcare), but for the most part this is what I’ve seen (and what I think about when discussing BYOD). Many high-technology companies follow this approach, and it works great with a tech-savvy staff who chafe at the snail’s pace that corporate IT can sometimes work at.

From Dropbox to app stores to Gmail, corporate IT organizations can’t keep up. Sometimes it’s just a matter of the breakneck pace of the industry. And sometimes it’s just a matter of corporate IT sucking. I saw a few employees of a huge networking vendor lamenting their 200 MB mailbox limit. It’s 2012, and people still have 200 MB as a limit? I’ve got something like 7 GB on my Gmail account. That would go into the corporate suckage column. Dropbox? It’s hard to compete with a Silicon Valley startup when it comes to providing a service. Yet Dropbox is something that organizations fear (some for legitimate reasons, some for paranoid, delusional ones).

So when talking about BYOD, I think it’s important to know which kind of BYOD we’re talking about. The requirements placed on the employee (from a simple, non-intrusive VPN client to Big Brother looking into my stuff) very much change the dynamics of BYOD.

Creating Your Own SSL Certificate Authority (and Dumping Self Signed Certs)

Jan 11th, 2016: New Year! Also, there was a comment below about adding -sha256 to the signing (both self-signed and CSR signing) since browsers are starting to reject SHA1. Added (I ran through a test, it worked out for me at least).

November 18th, 2015: Oops! A few have mentioned additional errors that I missed. Fixed.

July 11th, 2015: There were a few bugs in this article that went unfixed for a while. They’ve been fixed.

SSL (or TLS if you want to be super totally correct) gives us many things (despite many of the recent shortcomings).

  • Privacy (stop looking at my password)
  • Integrity (data has not been altered in flight)
  • Trust (you are who you say you are)

All three of those are needed when you’re buying stuff from, say, Amazon (damn you, Amazon Prime!). But we also use SSL for web user interfaces and other GUIs when administering devices in our control. When a website gets an SSL certificate, it typically purchases one from a major certificate authority such as DigiCert, Symantec (they bought Verisign’s registrar business), or, if you like the murder of elephants and freedom, GoDaddy. They range from around $12 USD a year to several hundred, depending on the company and the level of trust. The benefit these certificate authorities provide is a chain of trust: your browser trusts them, they trust a website, therefore your browser trusts the website (check my article on SSL trust, which contains the best SSL diagram ever conceived).

Your devices, on the other hand, the ones you configure and only your organization accesses, don’t need that trust chain built upon the public infrastructure. For one, it could get really expensive buying an SSL certificate for each device you control. And secondly, you set the devices up, so you don’t really need that level of trust. So web user interfaces (and other SSL-based interfaces) are almost always protected with self-signed certificates. They’re easy to create, and they’re free. They also provide you with the privacy that comes with encryption, although they don’t do anything about trust. Which is why when you connect to a device with a self-signed certificate, you get one of those scary browser warnings. So you have a choice: buy an overpriced SSL certificate from a CA (certificate authority), or live with those errors. Well, there’s a third option, one where you can create a private certificate authority, and setting it up is absolutely free.

OpenSSL

OpenSSL is a free utility that comes with most installations of Mac OS X, Linux, the *BSDs, and Unixes. You can also download a binary copy to run on your Windows installation. And OpenSSL is all you need to create your own private certificate authority. The process for creating your own certificate authority is pretty straightforward:

  1. Create a private key
  2. Self-sign
  3. Install root CA on your various workstations
Once you do that, every device that you manage via HTTPS just needs to have its own certificate created with the following steps:
  1. Create CSR for device
  2. Sign CSR with root CA key
You can have your own private CA set up in less than an hour. And here’s how to do it.

Create the Root Certificate (Done Once)

Creating the root certificate is easy and can be done quickly. Once you do these steps, you’ll end up with a root SSL certificate that you’ll install on all of your desktops, and a private key you’ll use to sign the certificates that get installed on your various devices.

Create the Root Key

The first step is to create the private root key, which only takes one command. In the example below, I’m creating a 2048-bit key:

openssl genrsa -out rootCA.key 2048

The standard key sizes today are 1024, 2048, and, to a much lesser extent, 4096. I go with 2048, which is what most people use now. 4096 is usually overkill (a 4096-bit key is roughly 5 times more computationally intensive than a 2048-bit key), and people are transitioning away from 1024. Important note: keep this private key very private. It is the basis of all trust for your certificates, and if someone gets hold of it, they can generate certificates that your browser will accept. You can also create a key that is password protected by adding -des3:

openssl genrsa -des3 -out rootCA.key 2048

You’ll be prompted for a password, and from then on you’ll be challenged for it every time you use the key. Of course, if you forget the password, you’ll have to do all of this over again.
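If you change your mind later, OpenSSL can add or strip the passphrase on an existing key. A quick sketch (the file names here are just examples):

openssl rsa -des3 -in rootCA.key -out rootCA-protected.key

openssl rsa -in rootCA-protected.key -out rootCA.key

The first command writes a DES3-encrypted copy of the key; the second prompts for the passphrase and writes an unencrypted copy back out.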

The next step is to create a self-signed certificate with that key.

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem

This will start an interactive script which will ask you for various bits of information. Fill it out as you see fit.

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Oregon
Locality Name (eg, city) []:Portland
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Overlords
Organizational Unit Name (eg, section) []:IT
Common Name (eg, YOUR name) []:Data Center Overlords
Email Address []:none@none.com

Once done, this will create an SSL certificate called rootCA.pem, signed by itself, valid for 1024 days, and it will act as our root certificate. The interesting thing about traditional certificate authorities is that their root certificates are also self-signed. The difference is that before you could start your own public certificate authority, the trick would be getting that root certificate into every browser in the entire world.
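If you want to double-check what you just created, you can dump the certificate in human-readable form (since it’s self-signed, the issuer and subject should be identical):

openssl x509 -in rootCA.pem -text -noout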

Install Root Certificate Into Workstations

For your laptops/desktops/workstations, you’ll need to install the root certificate into your trusted certificate repositories. This can get a little tricky. Some browsers use the default operating system repository. For instance, in Windows both IE and Chrome use the default certificate management: in IE, go to Internet Options, the Content tab, then hit the Certificates button; in Chrome, go to Options, Under the Hood, and Manage certificates. They both take you to the same place, the Windows certificate repository. You’ll want to install the root CA certificate (not the key) under the Trusted Root Certification Authorities tab. However, on Windows, Firefox has its own certificate repository, so if you use IE or Chrome as well as Firefox, you’ll have to install the root certificate into both the Windows repository and the Firefox repository. On a Mac, Safari, Firefox, and Chrome all use the Mac OS X certificate management system, so you just have to install it once. With Linux, I believe it’s on a browser-per-browser basis.
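If you’d rather script it, each OS has a command-line way to trust the root certificate. A rough sketch (paths and store names can vary by version):

Mac OS X:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rootCA.pem

Windows (administrator prompt):
certutil -addstore -f Root rootCA.pem

Debian/Ubuntu-style Linux:
sudo cp rootCA.pem /usr/local/share/ca-certificates/rootCA.crt
sudo update-ca-certificates

Note that the Linux commands cover the system store used by OpenSSL-based tools; browsers with their own certificate stores (like Firefox) still need the certificate imported separately.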

Create A Certificate (Done Once Per Device)

Every device that you wish to install a trusted certificate on will need to go through this process. First, just like with the root CA step, you’ll need to create a private key (different from the root CA’s key).

openssl genrsa -out device.key 2048

Once the key is created, you’ll generate the certificate signing request.

openssl req -new -key device.key -out device.csr

You’ll be asked various questions (Country, State/Province, etc.). Answer them how you see fit. The important question to answer, though, is the Common Name.

Common Name (eg, YOUR name) []: 10.0.0.1

Whatever you see in the address field in your browser when you go to your device must be what you put under Common Name, even if it’s an IP address. Yes, even an IP (IPv4 or IPv6) address works under Common Name. If it doesn’t match, even a properly signed certificate will not validate correctly and you’ll get the “cannot verify authenticity” error.
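Before signing, it doesn’t hurt to confirm the CSR actually carries the Common Name you meant to type:

openssl req -in device.csr -noout -subject

Once that looks right, you’ll sign the CSR, which requires the CA root key.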

openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device.crt -days 500 -sha256

This creates a signed certificate called device.crt, which is valid for 500 days (you can adjust the number of days, of course, although it doesn’t make sense to have a certificate that lasts longer than the root certificate). The next step is to take the key and the certificate and install them on your device. Most network devices that are controlled via HTTPS have some mechanism for you to install them. For example, I’m running F5’s LTM VE (virtual edition) as a VM on my ESXi 4 host. Log into F5’s web GUI (which should be the last time you’re greeted by the warning), and go to System, Device Certificates, and Device Certificate. In the drop-down select Certificate and Key, and either paste the contents of the key and certificate files, or upload them from your workstation.

After that, all you need to do is close your browser and hit the GUI site again. If you did it right, you’ll see no warning and a nice greenness in your address bar.
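If you still get a warning, a quick way to check whether the certificate itself is the problem (as opposed to the browser’s trust store) is to verify it against your root locally:

openssl verify -CAfile rootCA.pem device.crt

It should simply print “device.crt: OK”.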

And speaking of VMware, you know that annoying message you always get when connecting to an ESXi host?

You can get rid of that by creating a key and certificate for your ESXi server and installing them as /etc/vmware/ssl/rui.crt and /etc/vmware/ssl/rui.key.
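The process is the same as for any other device. A rough sketch, assuming SSH is enabled on the host and using a placeholder hostname (the management agent restart command can vary between ESXi versions):

openssl genrsa -out rui.key 2048
openssl req -new -key rui.key -out rui.csr          # use the ESXi host's name or IP as the Common Name
openssl x509 -req -in rui.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out rui.crt -days 500 -sha256
scp rui.key rui.crt root@esxi-host:/etc/vmware/ssl/
ssh root@esxi-host /etc/init.d/hostd restart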

Gigamon Side Story

The modern data center is a lot like modern air transportation. It’s not nearly as sexy as it used to be, the food isn’t nearly as good as it used to be, and there are more choke points than there used to be.

With 10 Gigabit Ethernet fabrics available from vendors like Cisco, Juniper, Brocade, et al., we can conceive of these great, non-blocking, lossless networks that let us zip VMs and data to and fro.

And then reality sets in. The security team needs inspection points. That means firewalls, IPS, and IDS devices. And one thing they’re not terribly good at? Gigs and gigs of traffic. Also scaling. And not pissing me off.

Pictured: Firewall Choke Points

This battle between scalability and security has data center administrators and security groups rumbling like some sort of West Side Data Center Story.

Dun dun da dun! Scalability!

Dun doo doo ta doo! Inspection!

So what to do? Enter Gigamon, the makers of the orangiest network devices you’ll find in a data center. They were part of Networking Field Day 2, which I participated in back in October.

Essentially what Gigamon allows you to do is scale out your SPAN/Mirror ports. On most Cisco switches, only two ports at a time can be spitting mirrored traffic. For something like a Nexus 7000 with up to 256 10 Gigabit Interfaces, that’s usually not sufficient for monitoring anything but a small smattering of your traffic.

A product like Gigamon can tap fibre and copper links, or take in the output of a span port, classify the traffic, and send it out an appropriate port. This would allow a data center to effectively scale traffic monitoring in a way that’s not possible with mere mirrored ports alone. It would effectively remove all choke points that we normally associate with security. You’d just need to scale up with the appropriate number of IDS/IPS devices.

But with great power, comes the ability to do some unsavory things. During the presentation Gigamon mentioned they’d just done a huge install with Russia (note: I wouldn’t bring that up in your next presentation), allowing the government to monitor data of its citizens. That made me less than comfortable (and it’s also why it scares the shit out of Jeremy Gaddis). But “hey, that’s how Russia rolls” you might say. We do it here in the US, as well, through the concept of “lawful interception“. Yeah, I did feel a little dirty after that discussion.

Still, it could be used for good by removing the standard security choke points. Even if you didn’t need to IPS every packet in your data center, I would still consider architecting a design with Gigamon or another vendor like them in mind. It wouldn’t be difficult to work out where to put the devices, and it could save loads of time in the long run. If a security edict came down from on high, the appropriate devices could be put in place with Gigamon providing the piping without choking your traffic.

In the mean time, I’m going to make sure everything I do is SSL’d.

Note: As a delegate/blogger, my travel and accommodations were covered by Gestalt IT, which vendors paid to have spots during Networking Field Day. Vendors pay Gestalt IT to present, so while my travel (hotel, airfare, meals) was covered indirectly by the vendors, no other remuneration (save for the occasional tchotchke) was received from any of the vendors, directly or indirectly, or from Gestalt IT. Vendors were not promised, nor did they ask, that any of us write about them, or write about them positively. In fact, we sometimes say their products are shit (when, to be honest, sometimes they are, although this one wasn’t). My time was unpaid.

BYOD And Juniper’s Big Brother

Twitter fight!

I’ve been involved in a few Twitter fights discussions recently, which are typically passionate conversations with people who hold passionate beliefs. However, the problem with arguing on Twitter is that it’s very easy to accidentally be on the same side while thinking you’re on opposite sides. Such is the limit of 140 characters.

The whole brouhaha started with a tweet I made about Junos Pulse from Juniper, which can do the following (from the Pulse PDF brochure): “SMS, MMS, email, and message content monitoring, phone log, address book, and stored photo viewing and control.”

Junos Pulse is Juniper’s mobile security client, which includes VPN as well as anti-malware capabilities. It also has the ability to peer into the text messages that a phone has sent and received, as well as view all photographs taken by the smartphone or tablet’s camera. Juniper is not just marketing it towards corporate-issued phones and tablets (which I have no problem with), but also (as shown in the fear-mongering blog post with a misleading title that I wrote about in my last post) advocating that employee-owned devices, part of the BYOD (bring your own device) trend in IT, also be loaded with Juniper’s spy-capable software. From the fear-mongering article (emphasis mine):

Get mobile security and device management for your personal or corporate-issued mobile device, and mandate that all of your employees – or anyone for that matter who accesses your corporate network from a mobile device – load mobile security and device management on their mobile devices!

If the phone or tablet is issued by the company, I don’t have any problem with this (so long as employees know that there is that capability). This could even be quite handy, depending on the scenario. But employee owned equipment being susceptible to spying by corporate IT? No way. I can’t imagine anyone would allow that on their personal devices. Even Juniper employees.

(Related: Check out Tom Hollingsworth’s post on BYOD)

Hence my tweet, wondering if Juniper eats its own dog food, and requires employees who bring their personal, non-Juniper-owned smartphones into the office to run Pulse with the ability to view photos, texts, and other personal correspondence. I got responses like this:

I don’t think he realized that I was talking about Junos Pulse having the ability not just to spy on VPN traffic (which any VPN software could), but also on the text messages and photos on the mobile device/tablet, and that Juniper is marketing it towards employee-owned devices. (Also, privacy concerns are not a legitimate reason to spy on someone.) In the end, though, I think Virtual_Desktop and I were on the same page.

So it’s not just a company that I worry about violating an employee’s privacy, but also a rogue IT employee. I worked at a place once where a Unix admin stalked another employee by reading her email. Having the power to peer into someone’s personal texts, emails, and photos would be very tempting, and difficult for the unscrupulous to resist.

Ah, I see Tony is getting more saucy texts from his super model girlfriends

I get that if I’m at the office, and I’m using their network, my traffic could be monitored. I get that data on company property, such as a company-issued laptop, phone, or tablet, is fair game for viewing by the company. But to require an employee to install something on their personal (BYOD) devices that has the ability to peer into their personal texts and images? That’s downright scary. And stupid. No knowledgeable employee would let that happen. If an employer required that I install it on a device I brought into the office, even if it didn’t connect to the corporate network, I’d leave the device at home. And I’d probably look for another job, because bone-headed decisions like that don’t exactly evoke confidence in management.

Junos Pulse certainly has some appropriate use cases. The ability to wipe a phone, view emails, texts, and images, and perform other fairly intrusive activities on a company-owned device makes sense in some cases. In others, it’s probably overly intrusive and overly controlling, but within an employer’s rights. But on an employee’s personal device? No way.

I like Juniper, I really do. But I think they’ve got the strategy wrong for Pulse, and I think they’ll figure it out. It’s a much larger issue as well: with the consumerization of IT and employees bringing their own devices, the demarcation point between employee and employer is becoming hazy. That’s probably an offshoot of the line between when an employee is on the clock and off the clock becoming hazy as well. We’ll have to see where this goes, but I don’t think people are going to put up with the “it’s going to spy on your personal device” route.

It May Already Be Too Late!

I’m very enthusiastic about anything that makes corporate IT suck less (such as BYOD, Bring Your Own Device), and despite not working for any company other than myself, I’m still quite sensitive to things that increase IT suckitude. And I’ve found the latter recently in a blog post over at Juniper called “BYOD Isn’t As Scary As You Think, Mr. or Ms. CIO“.

The title of the article seems to say that BYOD isn’t scary for corporate environments. But the article reads as if the author intended to induce a panic attack.

The article is frustrating for a couple of reasons. One, CIOs might take that shit seriously, and while huffing on a paper bag because of panic-induced hyperventilation, might fire off a new bone-headed security policy. One would hope that someone at the CIO level would know better, but I’ve known CIOs that don’t.

Two, one of the great things about smart phones is the lack of shitty security products on them. And you want to go ruin that? If I’m bringing my own device, with saucy texts from my supermodel girlfriends, I’m not likely to let any company put anything on my phone.

Why Ensign Ro, those are not bridge-duty appropriate texts you’re sending to Commander Data

Three, of the possible security implications with smart phones, only a couple of edge cases would even be solved by the software that Juniper offers as a solution. For instance, the threat of a rogue employee. You used to be able to tell you’d been let go because your passwords didn’t work; now you could find out when your phone reboots and wipes. But how do you know they’ve gone rogue? Why, monitor the photos and texts on that employee’s phone, of course.

Wait, what?

You can monitor emails, texts, and camphone images? With Junos Pulse mobile security, you can.

Hi there Brett Favre, Big Brother here. We, uhh, couldn’t help but notice that photo you texted from your personal phone that we are always monitoring…

This is just making corporate security, which already sucks, even worse. It’s a mentality that is lose-lose. The IT organization would get additional complexity for very little gain, and the users would get more hindrance, little security, and a huge invasion of privacy. Maybe I’m alone in this, but if any company offered me a job and required my personal device be subjected to this, the compensation package would need to include a mega-yacht to make it worthwhile.

I’ve been self-employed since 2007, and having been free of corporate laptop builds, moldy email systems, and maniacal IT managers, I can say this: being independent is 30% about calling the shots on my own schedule and 70% about calling the shots on my own equipment.

“That’s a very attractive offer; however, judging from that crusty-ass laptop you have and the bizarre no-Mac policy from your brain-dead IT head/security officer, working for your company would eat away at my soul and cause me to activate the Genesis Device out of frustration.”

I really like Juniper, I do. But one of the things you do with friends is call them on their shit. I do it with Cisco all the time, now it’s Juniper’s turn.

TLS 1.2: The New Hotness for Load Balancers

Alright, implementors of services that utilize TLS/SSL, shit just got real. TLS 1.0/SSL 3.0? Old and busted. TLS 1.2? New hotness.

We config together, we die together. Bad admins for life.

There’s an exploit for SSL and TLS, and it’s called BEAST. It takes advantage of a previously known (but thought to be too impractical to exploit) weakness in CBC. Only BEAST was able to exploit that weakness in a previously unconsidered way, making it much more than a theoretical problem. (If you’re keeping track, that’s precisely the moment that shit got real.)

The cure is an update to the TLS/SSL standard called TLS 1.2, and it’s been around since 2008 (TLS 1.1 also fixes it, and has been available since 2006, but we’re talking about new hotness here).

So no problem, right? Just TLS 1.2 all the things.

Well, virtually no one uses it. It’s a chicken and egg problem. Clients haven’t supported it, so servers haven’t. Servers didn’t support it, so why would clients put the effort in? Plus, there wasn’t any reason to. The CBC weakness had been known, but it was thought to be too impractical to exploit.

But now we’re in a state of shit-is-real, so it’s time to TLS up.

So every browser and server platform running SSL is going to need to be updated to support TLS 1.2. On the client side, Google Chrome, Apple Safari, Firefox, and IE will all need updates (IE 9 supports TLS 1.1, but previous versions will need it backported).

On the server side, it might be a bit simpler than we think. Most of the time when we connect to a website that utilizes SSL (HTTPS), the client isn’t actually talking SSL to the server; instead, it’s talking to a load balancer that terminates the SSL connection.

Since most of the world’s websites have a load balancer terminate the SSL, we can update the load balancers with TLS 1.2 and take care of a major portion of the servers on the Internet.

Right now, most of the load balancing vendors don’t support TLS 1.2. If asked, they’ll likely say that there’s been no demand for it since clients don’t support it, which was fine until now. Now is the time for the various vendors to upgrade to 1.2, and if you’re a vendor and you’re not sure if it’s worth the effort, listen to Yoda:

Right now the only vendor I know of that supports TLS 1.2 is the market leader, F5 Networks, with version 11 of their LTM, for which they should be commended. However, that’s not good enough; they need to backport it to version 10 (which has a huge install base). Vendors like Cisco, A10 Networks, Radware, KEMP Technologies, etc., need to update their software to TLS 1.2 as well. We can no longer use the excuse “because browsers don’t support it”. Because of BEAST, they soon will, and the load balancers need to be ready.

In the meantime, if you’re running a load balancer that terminates SSL, you may want to change your cipher settings to prefer RC4-SHA instead of AES (which uses CBC). It’s cryptographically weaker, but is immune to the CBC issue. In the next few days, I’ll be putting together a page on how to prefer RC4 for the various vendors.
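As one example of what that looks like in the meantime, here’s a sketch for a server terminating SSL with Apache and mod_ssl (the exact knobs differ per load balancer, and the vendor-specific versions will be in that follow-up page):

SSLHonorCipherOrder on
SSLCipherSuite RC4-SHA:HIGH:!aNULL:!MD5

The first directive tells the server to prefer its own cipher order over the client’s, and the cipher string puts RC4-SHA at the front of the list.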

Remember, TLS 1.0/SSL 3.0: Old and busted. TLS 1.2? New hotness.

SSL’s No Good, Very Bad Couple of Months

The world of SSL/TLS security has had, well, a bad couple of months.

‘Tis but a limited exploit!

First, we’ve had a rash of very serious certificate authority security breaches. An Iranian hacker was able to hack Comodo, a certificate authority, and create valid, signed certificates for sites like microsoft.com and gmail.com. Then another SSL certificate authority in the Netherlands got p0wned so bad the government stepped in and took them over, probably by the same group of hackers from Iran.

Iran and China have both been accused of spying on dissidents through government or paragovernment forces. The Comodo and DigiNotar hacks may have led to spying on Iranian dissidents, and a suspected attack by China a while ago prompted Google to SSL all Gmail connections at all times (not just the username/password exchange) by default, for everyone.

Between OCSP and CRLs, browser updates and rogue certificates, the very fabric of trust that we’ve taken for granted has been called into question. Some even claim that PKI is totally broken (and there’s a reasonable argument for this).

This is what happens when there’s no trust. Also, this is how someone loses an eye. Do you want to lose an eye? Because this will totally do it.

Then someone found a way to straight up decrypt an SSL connection without any of the keys.

Wait, what?

It’s gettin’ all “hack the planet” up in here.

The exploit is called BEAST, and it’s one that can decrypt SSL communications without having the secret. Thai Duong, one of the authors (the other is Juliano Rizzo) of the tool saw my post on BEAST, and invited me to watch the live demonstration from the hacker conference. Sure enough, they could decrypt SSL. Here’s video from the presentation:

Let me say that again. They could straight up decrypt that shit. 

Granted, there were some caveats, and the exploit can only be used in a somewhat limited fashion. It was a man-in-the-middle attack, but one that didn’t terminate the SSL connection anywhere but at the server. They found a new way to attack a known (but thought to be too impractical to exploit) vulnerability in the CBC mode used by some encryption algorithms.

The security community has known this was a potential problem for years, and it’s been fixed in TLS 1.1 and 1.2.

Wait, TLS? I thought this was about SSL?

Quick sidebar on SSL/TLS. SSL is the old term (the one everyone still uses, however). TLS replaced SSL, and we’re currently on TLS 1.2, although most people use TLS 1.0.

And that’s the problem. Everyone, and I do mean the entire planet, uses SSL 3.0 and TLS 1.0. TLS 1.1 has been around since 2006, and TLS 1.2 has been around since 2008. But most web servers and browsers, as well as all manner of other SSL-enabled devices, don’t use anything beyond TLS 1.0.
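If you’re curious what your own servers negotiate, a quick (if rough) check is to connect with openssl and look at the Protocol line in the session summary (the hostname here is just a placeholder):

openssl s_client -connect www.example.com:443

Bear in mind this only shows what the server negotiated with your copy of OpenSSL; a client that supports a newer TLS version might negotiate something different.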

And, here’s something that’s going to shock you:

Microsoft IIS and Windows 7 support TLS 1.1. OpenSSL, the project responsible for the vast majority of SSL infrastructure used by open source products (and much of the infrastructure for closed-source projects), doesn’t. As of this writing, TLS 1.1 or above hasn’t yet made it into the OpenSSL libraries, which means Apache, OpenSSH, and other tools that use the OpenSSL libraries can’t speak anything above TLS 1.0. Look at how much of a pain in the ass it is right now to enable TLS 1.2 in Apache.

We’re into double-facepalm territory now

No good, very bad month indeed.

So now we’re going to have to update all of the web servers out there, as well as all the clients. That’s going to take a bit of doing. OpenSSL runs the SSL portion of a lot of sites, and they’ve yet to bake TLS 1.1/1.2 into the versions that everyone uses (0.9.8 and 1.0.x). Load balancers are going to play a central role in all of this, so we’ll have to wait for F5, Cisco, A10 Networks, Radware, and others to support TLS 1.2. As far as I can tell, only F5’s LTM version 11 supports anything above TLS 1.0.

The tougher part will be all of the browsers out there. There are a lot of systems that run non-supported and abandoned browsers. At least SSL/TLS is smart enough to be able to downgrade to the highest common denominator, but that would mean potentially vulnerable clients.

In the meantime something that web servers and load balancers can do is take Google’s lead and prefer the RC4 symmetric algorithm. While cryptographically weaker, it’s immune to the CBC attack.

This highlights the importance of keeping your software, both clients and servers, up to date. Out of convenience we got stale with SSL, and even the security-obsessed OpenSSL project got caught with their pants down. This is really going to shake a lot of systems down, and hopefully be a kick in the pants to those who think it’s OK to not run current software.

I worked at an organization once where the developers forked off Apache (1.3.9 I believe) to interact with an application they developed. This meant that we couldn’t update to the latest version of Apache, as they refused to put the work in to port their application module to the updated versions of Apache. I bet you can guess how this ended. Give up? The Apache servers got p0wned.

So between BEAST and PKI problems, SSL/TLS has had a rough go at it lately. It’s not time to resort to smoke signals just yet, but it’s going to take some time before we get back to the level of confidence we once had. It’s a scary-ass world out there. Stay secure, my friends.

SSL: Who Do You Trust?

Note: This is a post that appeared on the site lbdigest.com about a year or so ago, but given that SSL is back in the news lately, I figured it’s worth updating and re-posting. Also, it features the greatest SSL diagram ever created. Seriously, if you fire up Visio/Omnigraffle, know the best you can hope for is second place. 

One of the most important technologies used on the Internet is the TLS/SSL protocol (typically called just SSL, but that’s a whole different article). The two benefits that TLS/SSL gives us are privacy and trust.

Privacy comes through the use of digital encryption (RSA, AES, etc.) to keep your various Internet interactions, such as credit card numbers, emails, passwords, saucy instant messages, confidential documents, etc., safe from prying eyes. That part is pretty well understood by most in the IT industry.

But having private communications with another party is all for naught if you’re talking to the wrong party. You also need trust: trust that someone is who they say they are. For Internet commerce to work on a practical level, you need to be able to trust that when you’re typing your username and password into your bank’s website, you’re actually connecting to your bank, and not someone pretending to be your bank.

Trust is accomplished through the use of SSL certificates, CAs (certificate authorities), intermediate certificates, and certificate chains, which combined are known as PKI (Public Key Infrastructure). To elaborate on the use of these technologies to provide trust, I’m going to forgo the traditional Bob and Alice encryption examples, and go for something a little closer to your heart. I’m going to drop some Star Trek on you.

Let’s say you’re in the market for a starship.  You’re looking for a sporty model with warp drive, heated seats, and most importantly, a holodeck. You go to your local Starfleet dealer, and you find this guy.

Ensign Tony.

 Seriously Tony, have you even talked to a girl?

The problem is, you don’t trust this guy. It’s nothing personal, but you just don’t know him. He says he’s Ensign Tony, but you have no idea if it’s really him or not. But there is one Starfleet officer you do know and trust implicitly, even though you’ve never met him. You trust Captain Jean-Luc Picard.

Picard’s Law: Set up a peace conference first, ask questions later

Captain Picard is the kind of guy you start out automatically trusting. His reputation precedes him. Your browser is the same way, in that right out of the gate there are several sources (such as Verisign) that your browser trusts implicitly.

If you want to check out who your browser trusts, you can typically find it somewhere in the preferences section. For example, in Google Chrome, go into Preferences, and then Under the Hood. On a Mac this opens up the system-wide keystore for all the trusted certificates and keys. In other operating systems and/or browsers, you may have different certificate stores for different browsers, or, as on a Mac, all the programs may share a single centralized certificate store.

Back to the Star Trek analogy.

But you’re not dealing with Picard directly.  Instead, you’re dealing with Ensign Tony.  So Picard vouches for Ensign Tony, and thus a trust chain is built.   You trust Picard, and Picard trusts Ensign Tony, so by the transitive property, you can now trust Ensign Tony.

That is the essence of trust in SSL.

Intermediate Certificates

One of the lesser-understood concepts in the use of SSL certificates is intermediate certificates. These are certificates that sit between the CA (Picard) and the site certificate (Ensign Tony).

You see, Picard is an important man.  The Enterprise has over a thousand crew members and he can’t possibly personally know and trust all of them.  (In Ensign Tony’s case, there’s also the little matter of a restraining order.)  So he farms the trust out to his subordinates. And one crew member he does implicitly trust is Chief Engineer Geordi La Forge.

Ensign Tony works for Geordi, and Geordi trusts Ensign Tony. Thus Geordi becomes the intermediate certificate in this chain of trust. You can’t trust Ensign Tony directly through Picard because Picard can’t vouch for Tony, but Geordi can vouch for Tony, and Picard can vouch for Geordi, so we have built a chain of trust. This is why load balancers and web servers often require you to install an intermediate certificate.

This is the greatest SSL diagram ever made.

Here’s what happens when you don’t install an intermediate certificate onto your load balancer/ADC/web server:

You’re 34 years old Tony, you’d think you would have made Lieutenant by now

One of the practical issues that comes up with intermediate certificates is which one do you use? The various SSL certificate vendors such as Thawte, DigiCert, and Verisign have several intermediate certificates depending on the type of certificate you purchase, and it’s not always obvious which one applies. If you have any doubts, use one of the SSL certificate validation tools from the various vendors, including this one by DigiCert. It will tell you if the certificate chain works or not. Do not let a test from your own browser determine whether your certificate works. Browsers handle certs differently, and a validation tool will tell you if it will work with all browsers.
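If you’d rather check from the command line, openssl can show you exactly which certificates your server is handing out (the hostname is just a placeholder):

openssl s_client -connect www.example.com:443 -showcerts

A missing intermediate usually shows up here as a short chain and an “unable to get local issuer certificate” verification error, which is your cue that the load balancer or web server needs the intermediate installed.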

The BEAST That Slayed SSL?

We’re screwed.

Well, maybe. As you’ve probably seen all over the Twitters and in coverage from The Register, security researchers Juliano Rizzo and Thai Duong may have found a way to exploit a previously known (but thought to be far too impractical) flaw in SSL/TLS, one that would allow an observer to break encryption with relative ease. And they claim to have working code that would automate the process, taking it from the realm of the mathematical researcher or cryptographic developer with a PhD to the realm of script kiddies.

Wait, what?

Are we talking skeleton key here?

They’re not presenting their paper and exploit until September 23rd, 2011 (tomorrow), so we don’t know the full scale of the issue. So it’s not time to panic. Just yet, at least. When it’s time to panic, I have prepared a Homeland Security-inspired color-coded scale.

What, too subtle? 

There’s some speculation about the attack, but so far here is the best description I’ve found of a likely scenario. Essentially, you become a man in the middle (pretty easy if you’re a rogue government, have a three-letter acronym in your agency’s name (NSA, FSB, CIA, etc.), or if you sit at a coffee shop with a rogue access point). You could then start guessing (and testing) a session cookie, one character at a time, until you had the full cookie.

Wait, plugging away, one character at a time, until you get the code? Where have I seen this before?

War Games: Prophetic Movie

That’s right, War Games. The WOPR (or Joshua) breaks the nuclear launch codes one character at a time. Let’s hope the Department of Defense doesn’t have a web interface for launching the nukes. (I can see it now, as the President gets ready to hit the button, he’s asked to fill out a short survey.)

If someone gets your session cookie, they can imitate you. They can read your emails, send email as you, transfer money, etc., all without your password. This is all part of the HTTP standard.

Oh, you don’t know HTTP? You should.

With the HTTP protocol, we have the data unit: the HTTP message. And there are two types of HTTP messages, a request and a response. In HTTP, every object on a page requires a request. Every GIF image, CSS style sheet, and hilariously captioned JPEG requires its own request. In HTTP, there is no way to make one request and ask for multiple objects. And don’t forget the HTML page itself; that’s also a separate request and response. One object, one request, one response.

HTTP by itself has no way to tie these requests together. The web server doesn’t know that the JPEG and the GIF it’s been asked for are even part of the same page, let alone the same user. That’s where the session ID cookie comes in. The session ID cookie is a specific kind of HTTP cookie, and it’s the glue that joins all the requests you make together, which allows the web site to build a unique relationship with you.

HTTP by itself has no way to differentiate the IMG2.JPG request from one user versus another

When you log into a website, you’re asked for a username and password (and perhaps a token ID or client certificate if you’re doing two-factor). Once you supply it, you don’t supply it for every object you request. Instead, you’re granted a session cookie. They often have familiar names, like JSESSIONID, ASPSESSIONID, or PHPSESSIONID. But the cookie name could be anything. They’ll typically have some random value, a long string of alphanumeric characters.

With HTTP session ID cookies, the web server knows which requests came from whom, and can deliver customized responses

For every web site you log into, whether it’s Twitter (auth_token), Facebook (there are a lot), or Gmail (also a lot), your browser is assigned a session ID cookie (or, depending on how they do authentication, several session ID cookies). Session ID cookies are the glue that holds it all together.
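To make that concrete, here’s a simplified (and entirely made-up) example of the exchange. After a successful login, the server hands out the session ID, and the browser presents it on every subsequent request:

HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=a1b2c3d4e5f67890; Path=/; Secure

GET /inbox HTTP/1.1
Host: www.example.com
Cookie: JSESSIONID=a1b2c3d4e5f67890

That session ID value is the only thing tying those later requests back to your login.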

These values don’t last forever. You know how Gmail requires you to re-login every two weeks? You’re issued new session ID cookies at that point. That keeps an old session ID from working past a certain point.

If someone gets this cookie value, they can pretend to be you. If you work at a company that uses key cards for access into various areas, what happens if someone steals your keycard? They can walk into all the areas you can. Same with getting the session ID cookie.

This attack, the one outlined, is based on getting this cookie. If they get it, they can pretend to be you. For any site they can grab the cookie for.

“Comrade General, we haf cracked ze cookie of the capitalists! Permishun to ‘Buy Eet Now’?”

“Permission granted, comrade. Soon, ze secrets of ze easy bake ofen vill be ours!”

As the caption above illustrates, this is indeed serious.

Since this is fixed in TLS 1.1 or 1.2, which is used exactly nowhere, it’ll be interesting to see how much of our infrastructure we need to replace to remediate this potential hack. It could be as simple as a software update, or we could have to replace every load balancer on the planet (since they do SSL acceleration). That is a frightening prospect, indeed.

AES-NI: Hardware Encryption in your Processor

Not long ago, I had to sit through an Intel marketing presentation. Now, I like Intel. I’ve got at least 4 running Intel processors in my apartment. However, I dislike marketing presentations. And this one was a doozy. Had I known any missile launch codes or the location of the secret rebel base, I would have given them up 10 minutes into it. It was PowerPoint waterboarding.

You have my undivided attention. Not by choice.

There was one part of the presentation that did pique my interest: AES-NI. The marketing person giving the presentation didn’t know much about it, so I did some exploring and found an Intel engineer to get some more info. It’s actually quite awesome.

AES-NI is an instruction set added to newer Intel processors that accelerates certain symmetric cryptographic functions, particularly those related to AES. It’s been making its way incrementally into Intel’s processors (desktop, server, mobile). Intel’s Xeon server processors got it in the 5600 series; however, it was not in the 7500 series. The new E7 processors have it, including their new 10-core monster.

One of the earliest lines of processors to get AES-NI was Intel’s laptop processors, which is great for those who encrypt their hard drives. Mac OS X Lion uses the AES-NI extensions automatically, if available, for FileVault 2 (FileVault 1 in Snow Leopard doesn’t use it), as does Windows 7’s BitLocker. You can have Linux use AES-NI for file system encryption as well. The desktop processors are now starting to get AES-NI too. It removes most of the performance penalty that comes with file system/disk encryption. My i5-based MacBook Air has its entire hard drive encrypted, and between AES-NI and the fact that it’s an SSD, the performance is amazing.

To see how much faster AES-NI operations are, I ran a quick and dirty test using OpenSSL on Ubuntu Server 11.04 with an i5-2300 processor. The built-in version had AES-NI support compiled into it, and I compiled a version that didn’t include the hooks. The command I ran was openssl speed -evp aes-128-cbc.

The trick is that the software must be told to use the AES-NI instruction set. You can check to see if OpenSSL has AES-NI support built-in by running the command openssl engine. It should list aesni.

openssl engine
(aesni) Intel AES-NI engine
(dynamic) Dynamic engine loading support

Note: That doesn’t mean that AES-NI is available in your processor. That’s a bit harder to determine. Check your CPU model number to see if it’s supported.
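On Linux, a rough sketch of how you can check for the CPU flag and compare the two code paths yourself (the plain speed test uses OpenSSL’s built-in software AES, while the -evp form goes through the EVP/engine interface and picks up AES-NI when it’s present):

grep -m1 -o aes /proc/cpuinfo          # prints "aes" if the CPU advertises AES-NI
openssl speed aes-128-cbc              # software-only implementation
openssl speed -evp aes-128-cbc         # EVP/engine path, uses AES-NI if available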

An important note: if you’re running VMware or another hypervisor, AES-NI does get passed down to the guest virtual machines. The tests above were done on an Ubuntu VM running inside of ESXi 5.0 (it should work with other hypervisors too).

Another benefit is that since it’s in hardware, the algorithms can’t be tampered with. One possible attack vector would be to have a software library (such as OpenSSL) replaced with a rogue library that compromises your encryption in some way.

Right now, no AMD processor has this feature (some old VIA chips had something similar called PadLock), although the upcoming Bulldozer processors should get it; I don’t know yet whether it will be a compatible instruction set.

AES-NI is a relatively unknown enhancement, and it should be getting more attention.