BYOD: A Tale of Two Bringings of Devices

BYOD is certainly a hot topic lately. A bit ago I raised some concerns over Juniper’s Junos Pulse product, which allows a company to not only protect employees’ BYOD devices, but also to view their photos, messages, and other potentially private information. My argument wasn’t that it shouldn’t be used on employer-owned devices (that’s fine), but that Juniper was marketing it to be installed on employee-owned devices, potentially greatly intruding on their privacy.

I got into a similar Twitter discussion recently too, and it dawned on me that there are really two distinct types of BYOD: BYODe and BYODr. BYODe is Bring Your Own Device, Employee controlled, and BYODr is Bring Your Own Device, EmployeR controlled.

What’s the difference? Most BYOD I see is BYODe, where an employee is given an account on a mail server running IMAP/POP/Exchange, a VPN client to connect to the local intranet and internal file servers, and maybe some other services (such as a login). The employee might be required to run antivirus software and encrypt their hard drive, but there’s a clear delineation.

BYODr is when the employer requires a much greater level of control over the employee’s personal property. It might come in the form of a standard software load from IT, the ability to remotely access the employee’s device, and the ability to remote-wipe the device.

If the company has the ability to look on the device itself, it’s going to limit what I do with it. Many types of personal communications, certain, uh, websites, etc., are going to be off limits for that device because my privacy is explicitly signed away for that device.

This approach chafes at me a little, but I’m coming around to it a bit, so long as the employees have been explicitly told about all of the potentially privacy-invading functionality of the software. In one case, students weren’t informed about such capabilities in school-supplied laptops (so not BYOD, but still a similar BYODr scenario), and the school district was caught viewing the webcams of students’ laptops.

So while forking over cash for a device that you don’t get to control sounds like a raw deal, it doesn’t always need to be. I’ve become accustomed to a certain level of hardware. You know what I’m not down with? A six-year-old Dell craptop (Dell seems to have gotten better, but man, they made some crap laptops, and IT departments ate them up like candy).

You know those shitty Dells that are universally despised? Order one for every person on staff. Except the executives. Get us the good stuff.

If my primary job is technology related, I would rather bring my own device than deal with the ancient PoS laptop they’d likely give me.

BYODe, or employee-controlled BYOD, is likely not appropriate for certain industries (such as defense and healthcare), but for the most part this is what I’ve seen (and what I think about when discussing BYOD). Many high-technology companies follow this approach, and it works great with a tech-savvy staff who chafe at the snail’s pace that corporate IT can sometimes work at.

From Dropbox to app stores to Gmail, corporate IT organizations can’t keep up. Sometimes it’s just a matter of the breakneck pace of the industry. And sometimes it’s just a matter of corporate IT sucking. I saw a few employees of a huge networking vendor lamenting their 200 MB mailbox limit. It’s 2012, and people still have 200 MB as a limit? I’ve got something like 7 GB on my Gmail account. That one goes in the corporate-suckage column. Dropbox? It’s hard to compete with a Silicon Valley startup when it comes to providing a service. Yet Dropbox is something that organizations fear (some for legit reasons, some for paranoid delusional ones).

So when talking about BYOD, I think it’s important to know which kind of BYOD we’re talking about. The employer’s requirements (from a simple, non-intrusive VPN client to Big Brother looking into my stuff) very much change the dynamics of BYOD.

Creating Your Own SSL Certificate Authority (and Dumping Self Signed Certs)

SSL (or TLS if you want to be super totally correct) gives us many things (despite many of the recent shortcomings).

  • Privacy (stop looking at my password)
  • Integrity (data has not been altered in flight)
  • Trust (you are who you say you are)

All three of those are needed when you’re buying stuff from, say, Amazon (damn you, Amazon Prime!). But we also use SSL for web user interfaces and other GUIs when administering devices in our control.

When a website gets an SSL certificate, they typically purchase one from a major certificate authority such as DigiCert, Symantec (they bought VeriSign’s certificate business), or, if you like the murder of elephants and freedom, GoDaddy. Certificates range from around $12 USD a year to several hundred, depending on the company and the level of trust.

The benefit that these certificate authorities provide is a chain of trust. Your browser trusts them, they trust a website, therefore your browser trusts the website (check my article on SSL trust, which contains the best SSL diagram ever conceived).

Your devices, on the other hand, the ones you configure and only your organization accesses, don’t need a trust chain built upon the public infrastructure. For one, it could get really expensive buying an SSL certificate for each device you control. And secondly, you set the devices up, so you don’t really need that level of trust.

So web user interfaces (and other SSL-based interfaces) are almost always protected with self-signed certificates. They’re easy to create, and they’re free. They also provide you with the privacy that comes with encryption, although they don’t do anything about trust. Which is why when you connect to a device with a self-signed certificate, you get one of these:

So you have a choice: buy an overpriced SSL certificate from a CA (certificate authority), or live with those errors. Well, there’s a third option: create a private certificate authority, and setting it up is absolutely free.


OpenSSL is a free utility that comes with most installations of Mac OS X, Linux, the *BSDs, and other Unixes. You can also download a binary copy to run on your Windows installation. And OpenSSL is all you need to create your own private certificate authority.

The process for creating your own certificate authority is pretty straightforward:

  1. Create a private key
  2. Self-sign a root certificate
  3. Install the root CA certificate on your various workstations

Once you do that, every device that you manage via HTTPS just needs to have its own certificate created with the following steps:

  1. Create a CSR for the device
  2. Sign the CSR with the root CA key

You can have your own private CA set up in less than an hour. And here’s how to do it.

Create the Root Certificate (Done Once)

Creating the root certificate is easy and can be done quickly. Once you do these steps, you’ll end up with a root SSL certificate that you’ll install on all of your desktops, and a private key you’ll use to sign the certificates that get installed on your various devices.

Create the Root Key

The first step is to create the private root key, which only takes one command. In the example below, I’m creating a 2048-bit key:

openssl genrsa -out rootCA.key 2048

The standard key sizes today are 1024, 2048, and to a much lesser extent, 4096. I go with 2048, which is what most people use now. 4096 is usually overkill (a 4096-bit key is roughly five times more computationally intensive than a 2048-bit key), and people are transitioning away from 1024.
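If you’re curious how those key sizes actually compare on your own hardware, OpenSSL ships with a benchmark mode. A quick sketch (the -seconds flag just shortens the run; drop it for more accurate numbers):

```shell
# Benchmark RSA sign/verify throughput at 1024, 2048, and 4096 bits.
# -seconds 1 keeps the run short at the cost of some accuracy.
openssl speed -seconds 1 rsa1024 rsa2048 rsa4096
```

The summary table at the end shows signs-per-second dropping sharply as the key size grows.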

Important note: Keep this private key very private. This is the basis of all trust for your certificates, and if someone gets a hold of it, they can generate certificates that your browser will accept. You can also create a key that is password protected by adding -des3:

openssl genrsa -des3 -out rootCA.key 2048

You’ll be prompted to give a password, and from then on you’ll be challenged for the password every time you use the key. Of course, if you forget the password, you’ll have to do all of this all over again.

The next step is to self-sign this certificate.

openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem

This will start an interactive script which will ask you for various bits of information. Fill it out as you see fit.

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Oregon
Locality Name (eg, city) []:Portland
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Overlords
Organizational Unit Name (eg, section) []:IT
Common Name (eg, YOUR name) []:Data Center Overlords
Email Address []

Once done, this will create an SSL certificate called rootCA.pem, signed by itself, valid for 1024 days, and it will act as our root certificate. The interesting thing about traditional certificate authorities is that their root certificates are also self-signed. The trick to starting your own public certificate authority, though, is getting those certificates into every browser in the entire world.
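If you’d rather skip the interactive prompts (handy for scripting), the same distinguished-name fields can be passed on the command line with -subj. A sketch using the values from the transcript above; substitute your own organization’s details:

```shell
# Create the root key and self-signed root certificate non-interactively.
# Every field in -subj is a placeholder; fill in your own details.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem \
  -subj "/C=US/ST=Oregon/L=Portland/O=Overlords/OU=IT/CN=Data Center Overlords"
```

You can confirm what ended up in the certificate with `openssl x509 -in rootCA.pem -noout -subject`.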

Install Root Certificate Into Workstations

For your laptops/desktops/workstations, you’ll need to install the root certificate into your trusted certificate repositories. This can get a little tricky.

Some browsers use the default operating system repository. For instance, in Windows both IE and Chrome use the default certificate management. In IE, go to Internet Options, the Content tab, and hit the Certificates button. In Chrome, go to Options, Under the Hood, and Manage Certificates. They both take you to the same place, the Windows certificate repository. You’ll want to install the root CA certificate (not the key) under the Trusted Root Certification Authorities tab.

However, in Windows Firefox has its own certificate repository, so if you use IE or Chrome as well as Firefox, you’ll have to install the root certificate into both the Windows repository and the Firefox repository.

On a Mac, Safari and Chrome use the Mac OS X certificate management system, so installing it once in the Keychain covers both; Firefox keeps its own certificate repository on every platform. With Linux, I believe it’s on a browser-per-browser basis.

Create A Certificate (Done Once Per Device)

Every device on which you wish to install a trusted certificate will need to go through this process. First, just like with the root CA step, you’ll need to create a private key (separate from the root CA’s key).

openssl genrsa -out device.key 2048

Once the key is created, you’ll generate the certificate signing request.

openssl req -new -key device.key -out device.csr

You’ll be asked various questions (Country, State/Province, etc.). Answer them how you see fit. The important field to answer, though, is the Common Name.

Common Name (eg, YOUR name) []:

Whatever you see in the address field in your browser when you go to your device must be what you put under Common Name, even if it’s an IP address. Yes, even an IP (IPv4 or IPv6) address works as a common name.

If it doesn’t match, even a properly signed certificate will not validate correctly and you’ll get the “cannot verify authenticity” error.

Once that’s done, you’ll sign the CSR, which requires the CA root key.

openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device.crt -days 500

This creates a signed certificate called device.crt which is valid for 500 days (you can adjust the number of days of course, although it doesn’t make sense to have a certificate that lasts longer than the root certificate).
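At this point it’s worth sanity-checking your work with openssl verify. Here’s the whole flow end to end as one sketch; the file names, -subj fields, and the 192.168.1.10 common name are all placeholders for your own values:

```shell
# End-to-end sketch: root CA, device key and CSR, sign the CSR, verify the chain.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem \
  -subj "/CN=Example Root CA"
openssl genrsa -out device.key 2048
openssl req -new -key device.key -out device.csr -subj "/CN=192.168.1.10"
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -out device.crt -days 500

# Confirm what landed in the subject, then verify the chain of trust.
openssl x509 -in device.crt -noout -subject
openssl verify -CAfile rootCA.pem device.crt
```

If everything lines up, the last command prints `device.crt: OK`; a common-name mismatch or a certificate signed by the wrong key will fail here, the same way it would fail in the browser.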

The next step is to take the key and the certificate and install them on your device. Most network devices that are managed via HTTPS have some mechanism for installing them. For example, I’m running F5’s LTM VE (Virtual Edition) as a VM on my ESXi 4 host.

Log into F5’s web GUI (this should be the last time you’re greeted by the warning), and go to System, Device Certificates, and Device Certificate.

In the drop-down, select Certificate and Key, and either paste the contents of the key and certificate files, or upload them from your workstation.

After that, all you need to do is close your browser and hit the GUI site again. If you did it right, you’ll see no warning and a nice greenness in your address bar.

And speaking of VMware, you know that annoying message you always get when connecting to an ESXi host?

You can get rid of that by creating a key and certificate for your ESXi server and installing them as /etc/vmware/ssl/rui.crt and /etc/vmware/ssl/rui.key.

Gigamon Side Story

The modern data center is a lot like modern air transportation. Not nearly as sexy as it used to be, the food isn’t nearly as good as it used to be, and more choke points than we used to deal with.

With 10 Gigabit Ethernet fabrics available from vendors like Cisco, Juniper, Brocade, et al., we can conceive of these great, non-blocking, lossless networks that let us zip VMs and data to and fro.

And then reality sets in. The security team needs inspection points. That means firewalls, IPS, and IDS devices. And one thing they’re not terribly good at? Gigs and gigs of traffic. Also scaling. And not pissing me off.

Pictured: Firewall Choke Points

This battle between scalability and security has data center administrators and security groups rumbling like some sort of West Side Data Center Story.

Dun dun da dun! Scalability!

Dun doo doo ta doo! Inspection!

So what to do? Enter Gigamon, the makers of the orangiest network devices you’ll find in a data center. They were part of Networking Field Day 2, which I participated in back in October.

Essentially what Gigamon allows you to do is scale out your SPAN/mirror ports. On most Cisco switches, only two ports at a time can be spitting out mirrored traffic. For something like a Nexus 7000 with up to 256 10 Gigabit interfaces, that’s usually not sufficient for monitoring anything but a small smattering of your traffic.

A product like Gigamon can tap fibre and copper links, or take in the output of a span port, classify the traffic, and send it out an appropriate port. This would allow a data center to effectively scale traffic monitoring in a way that’s not possible with mere mirrored ports alone. It would effectively remove all choke points that we normally associate with security. You’d just need to scale up with the appropriate number of IDS/IPS devices.

But with great power comes the ability to do some unsavory things. During the presentation, Gigamon mentioned they’d just done a huge install in Russia (note: I wouldn’t bring that up in your next presentation), allowing the government to monitor the data of its citizens. That made me less than comfortable (and it’s also why it scares the shit out of Jeremy Gaddis). But “hey, that’s how Russia rolls,” you might say. We do it here in the US as well, through the concept of “lawful interception”. Yeah, I did feel a little dirty after that discussion.

Still, it could be used for good by removing the standard security choke points. Even if you didn’t need to IPS every packet in your data center, I would still consider architecting a design with Gigamon or another vendor like them in mind. It wouldn’t be difficult to consider where to put the devices, and it could save loads of time in the long run. If a security edict came down from on high, the appropriate devices would be put in place with Gigamon providing the piping without choking your traffic.

In the meantime, I’m going to make sure everything I do is SSL’d.

Note: As a delegate/blogger, my travel and accommodations were covered by Gestalt IT, whom vendors paid to have spots during Networking Field Day. Vendors pay Gestalt IT to present, so while my travel (hotel, airfare, meals) was covered indirectly by the vendors, no other remuneration (save for the occasional tchotchke) was received from any of the vendors, directly or indirectly, or from Gestalt IT. Vendors were not promised, nor did they ask, that any of us write about them, or write about them positively. In fact, we sometimes say their products are shit (when, to be honest, sometimes they are, although this one wasn’t). My time was unpaid.

BYOD And Juniper’s Big Brother

Twitter fight!

I’ve been involved in a few Twitter fights discussions recently, which are typically passionate conversations with people that hold passionate beliefs. However, the problem with arguing on Twitter is that it’s very easy to accidentally be on the same side, while thinking you’re on opposite sides. Such is the limit of 140 characters.

The whole brouhaha started with a tweet I made about Junos Pulse from Juniper, which can do the following (from the Pulse PDF brochure): “SMS, MMS, email, and message content monitoring, phone log, address book, and stored photo viewing and control.”

Junos Pulse is Juniper’s mobile security client, which includes VPN as well as anti-malware capabilities. It also has the ability to peer into the text messages that a phone has sent and received, as well as view all photographs taken by the smartphone or tablet’s camera. Juniper is not just marketing it towards corporate-issued phones and tablets (which I have no problem with), but also (as shown in the fear-mongering blog post with a misleading title that I wrote about in my last post) is advocating that employee-owned devices, part of the BYOD (bring your own device) trend in IT, also be loaded with Juniper’s spy-capable software. From the fear-mongering article (emphasis mine):

Get mobile security and device management for your personal or corporate-issued mobile device, and mandate that all of your employees – or anyone for that matter who accesses your corporate network from a mobile device – load mobile security and device management on their mobile devices!

If the phone or tablet is issued by the company, I don’t have any problem with this (so long as employees know that there is that capability). This could even be quite handy, depending on the scenario. But employee owned equipment being susceptible to spying by corporate IT? No way. I can’t imagine anyone would allow that on their personal devices. Even Juniper employees.

(Related: Check out Tom Hollingsworth’s post on BYOD)

Hence my tweet, wondering if Juniper eats its own dog food, and requires employees who bring their personal, non-Juniper-owned smartphones into the office to run Pulse with the ability to view photos, texts, and other personal correspondence. I got responses like this:

I don’t think he realized that I was talking about Junos Pulse having the ability to spy not just on VPN traffic (which any VPN software could), but also on the text messages and photos on the phone or tablet, and that Juniper is marketing it towards employee-owned devices. (Also, privacy concerns are not a legitimate reason to spy on someone.) In the end, though, I think Virtual_Desktop and I were on the same page.

So it’s not just a company violating an employee’s privacy that I worry about, but also a rogue IT employee. I worked at a place once where a Unix admin stalked another employee by reading her email. Having the power to peer into someone’s personal texts, emails, and photos would be very tempting, and difficult to resist for the unscrupulous.

Ah, I see Tony is getting more saucy texts from his super model girlfriends

I get that if I’m at the office, and I’m using their network, my traffic could be monitored. I get that data on company property, such as a company-issued laptop, phone, or tablet, is fair game for viewing by the company. But to require an employee to install something on their personal (BYOD) devices that has the ability to peer into an employee’s personal texts and images? That’s downright scary. And stupid. No knowledgeable employee would let that happen. If an employer required that I install it on a device I brought into the office, even if it didn’t connect to the corporate network, I’d leave the device at home. And I’d probably look for another job, because bone-headed decisions like that don’t exactly evoke confidence in management.

Junos Pulse certainly has some appropriate use cases. The ability to wipe a phone, view emails, texts, and images, and other fairly intrusive activities on a company-owned device make sense in some cases. In others, it’s probably overly intrusive and overly controlling, but within an employer’s rights. But on an employee’s personal device? No way.

I like Juniper, I really do. But I think they’ve got the strategy wrong for Pulse, and I think they’ll figure it out. It’s a much larger issue as well: with the consumerization of IT and employees bringing their own devices, the demarcation point between employee and employer is becoming hazy. That’s probably an offshoot of the line between an employee being on the clock and off the clock becoming hazy as well. We’ll have to see where this goes, but I don’t think people are going to put up with the “it’s going to spy on your personal device” route.

It May Already Be Too Late!

I’m very enthusiastic about anything that makes corporate IT suck less (such as BYOD, Bring Your Own Device), and despite not working for any company other than myself, I’m still quite sensitive to things that increase IT suckitude. And I’ve found the latter recently in a blog post over at Juniper called “BYOD Isn’t As Scary As You Think, Mr. or Ms. CIO”.

The title of the article seems to say that BYOD isn’t scary for corporate environments. But the article reads as if the author intended to induce a panic attack.

The article is frustrating for a couple of reasons. One, CIOs might take that shit seriously and, while huffing on a paper bag from panic-induced hyperventilation, fire off a new bone-headed security policy. One would hope that someone at the CIO level would know better, but I’ve known CIOs that don’t.

Two, one of the great things about smart phones is the lack of shitty security products on them. And you want to go ruin that? If I’m bringing my own device, with saucy texts from my supermodel girlfriends, I’m not likely to let any company put anything on my phone.

Why Ensign Ro, those are not bridge-duty appropriate texts you’re sending to Commander Data

Three, of the possible security implications of smartphones, only a couple of edge cases would even be solved by the software Juniper offers as a solution. For instance, the threat of a rogue employee. You used to be able to tell you’d been let go because your passwords stopped working; now you’ll know when your phone reboots and wipes itself. But how does the company know an employee has gone rogue? Why, by monitoring the photos and texts on that employee’s phone, of course.

Wait, what?

You can monitor emails, texts, and camphone images? With Junos Pulse mobile security, you can.

Hi there Brett Favre, Big Brother here. We, uhh, couldn’t help but notice that photo you texted from your personal phone that we are always monitoring…

This is just making corporate security, which already sucks, even worse. It’s a mentality that is lose-lose. The IT organization would get additional complexity for very little gain, and the users would get more hindrance, little security, and a huge invasion of privacy. Maybe I’m alone in this, but if any company offered me a job and required my personal device be subjected to this, the compensation package would need to include a mega-yacht to make it worthwhile.

I’ve been self-employed since 2007, and having been free of corporate laptop builds, moldy email systems, and maniacal IT managers, I can say this: being independent is 30% calling the shots on my own schedule, and 70% calling the shots on my own equipment.

“That’s a very attractive offer, however judging from that crusty-ass laptop you have and the bizarre no-Mac policy from your brain-dead IT head/security officer, working for your company would eat away at my soul and cause me to activate the genesis device out of frustration.”

I really like Juniper, I do. But one of the things you do with friends is call them on their shit. I do it with Cisco all the time, now it’s Juniper’s turn.

TLS 1.2: The New Hotness for Load Balancers

Alright, implementors of services that utilize TLS/SSL: shit just got real. TLS 1.0/SSL 3.0? Old and busted. TLS 1.2? New hotness.

We config together, we die together. Bad admins for life.

There’s an exploit for SSL and TLS, and it’s called BEAST. It takes advantage of a previously known (but thought to be too impractical to exploit) weakness in CBC. Only BEAST was able to exploit that weakness in a previously unconsidered way, making it much more than a theoretical problem. (If you’re keeping track, that’s precisely the moment that shit got real.)

The cure is an update to the TLS/SSL standard called TLS 1.2, and it’s been around since 2008 (TLS 1.1 also fixes it, and has been available since 2006, but we’re talking about new hotness here).

So no problem, right? Just TLS 1.2 all the things.

Well, virtually no one uses it. It’s a chicken and egg problem. Clients haven’t supported it, so servers haven’t. Servers didn’t support it, so why would clients put the effort in? Plus, there wasn’t any reason to. The CBC weakness had been known, but it was thought to be too impractical to exploit.

But now we’re in a state of shit-is-real, so it’s time to TLS up.

So every browser and server platform running SSL is going to need to be updated to support TLS 1.2. On the client side, Google Chrome, Apple Safari, Firefox, and IE will all need to be updated (IE 9 supports TLS 1.1, but previous versions will need it backported).

On the server side, it might be a bit simpler than we think. Most of the time when we connect to a website that utilizes SSL (HTTPS), the client isn’t actually talking SSL to the web server itself; it’s talking to a load balancer that terminates the SSL connection.

Since most of the world’s websites have a load balancer terminate the SSL, we can update the load balancers with TLS 1.2 and take care of a major portion of the servers on the Internet.

Right now, most of the load balancing vendors don’t support TLS 1.2. If asked, they’ll likely say that there’s been no demand for it since clients don’t support it, which was fine until now. Now is the time for the various vendors to upgrade to 1.2, and if you’re a vendor and you’re not sure if it’s worth the effort, listen to Yoda:

Right now the only vendor I know of that supports TLS 1.2 is the market leader, F5 Networks, with version 11 of their LTM, for which they should be commended. However, that’s not good enough; they need to backport it to version 10 (which has a huge install base). Vendors like Cisco, A10 Networks, Radware, KEMP Technologies, etc., need to update their software to TLS 1.2 as well. We can no longer use the excuse “because browsers don’t support it”. Because of BEAST, the browsers soon will, and the load balancers need to be ready.
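If you want to see a TLS 1.2 handshake for yourself (or find out whether your OpenSSL build supports it at all), you can stand up a throwaway local server and connect to it. A sketch, assuming an OpenSSL new enough to have the -tls1_2 flags; port 4433 and the file names are arbitrary:

```shell
# Throwaway self-signed cert for a local TLS 1.2-only test server.
openssl req -x509 -newkey rsa:2048 -nodes -keyout test.key -out test.crt \
  -days 1 -subj "/CN=localhost"

# Start the server in the background, then connect with a TLS 1.2 client.
# The session summary from s_client should show "Protocol : TLSv1.2".
openssl s_server -accept 4433 -cert test.crt -key test.key -tls1_2 -quiet &
SERVER_PID=$!
sleep 1
echo | openssl s_client -connect 127.0.0.1:4433 -tls1_2 2>/dev/null \
  | grep "Protocol"
kill $SERVER_PID
```

The same s_client command pointed at a real load balancer’s VIP is a quick way to check what your vendor actually negotiates.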

In the meantime, if you’re running a load balancer that terminates SSL, you may want to change your cipher settings to prefer RC4-SHA instead of AES (which uses CBC). It’s cryptographically weaker, but is immune to the CBC issue. In the next few days, I’ll be putting together a page on how to prefer RC4 for the various vendors.

Remember: TLS 1.0/SSL 3.0? Old and busted. TLS 1.2? New hotness.

SSL’s No Good, Very Bad Couple of Months

The world of SSL/TLS security has had, well, a bad couple of months.

‘Tis but a limited exploit!

First, we’ve had a rash of very serious certificate authority security breaches. An Iranian hacker was able to hack Comodo, a certificate authority, and create valid, signed certificates for several high-profile sites. Then another SSL certificate authority, DigiNotar in the Netherlands, got p0wned so badly the government stepped in and took them over, probably by the same group of hackers from Iran.

Iran and China have both been accused of spying on dissidents through government or paragovernment forces. The Comodo and DigiNotar hacks may have led to spying on Iranian dissidents, and a suspected attack by China a while ago prompted Google to SSL all Gmail connections at all times (not just the username/password exchange) by default, for everyone.

Between OCSP and CRLs, browser updates and rogue certificates, it’s called into question the very fabric of trust that we’ve taken for granted. Some even claim that PKI is totally broken (and there’s a reasonable argument for this).

This is what happens when there’s no trust. Also, this is how someone loses an eye. Do you want to lose an eye? Because this will totally do it.

Then someone found a way to straight up decrypt an SSL connection without any of the keys.

Wait, what?

It’s gettin’ all “hack the planet” up in here.

The exploit is called BEAST, and it can decrypt SSL communications without having the secret key. Thai Duong, one of the authors of the tool (the other is Juliano Rizzo), saw my post on BEAST and invited me to watch the live demonstration from the hacker conference. Sure enough, they could decrypt SSL. Here’s video from the presentation:

Let me say that again. They could straight up decrypt that shit. 

Granted, there were some caveats, and the exploit can only be used in a somewhat limited fashion. It was a man-in-the-middle attack, but one that didn’t terminate the SSL connection anywhere but at the server. They found a new way to attack a known (but thought to be too impractical to exploit) vulnerability in the CBC mode used by some encryption algorithms.

The security community has known this was a potential problem for years, and it’s been fixed in TLS 1.1 and 1.2.

Wait, TLS? I thought this was about SSL?

Quick sidebar on SSL/TLS. SSL is the old term (the one everyone still uses, however). TLS replaced SSL, and we’re currently on TLS 1.2, although most people use TLS 1.0.

And that’s the problem. Everyone, and I do mean the entire planet, uses SSL 3.0 and TLS 1.0.  TLS 1.1 has been around since 2006, and TLS 1.2 has been around since 2008. But most web servers and browsers, as well as all manner of other types of SSL-enabled devices, don’t use anything beyond TLS 1.0.

And, here’s something that’s going to shock you:

Microsoft IIS and Windows 7 support TLS 1.1. OpenSSL, the project responsible for the vast majority of SSL infrastructure used by open source products (and much of the infrastructure for closed-source projects), doesn’t. As of writing, TLS 1.1 or above hasn’t made it yet into OpenSSL libraries, which means Apache, OpenSSH, and other tools that make use of the OpenSSL libraries can’t use anything above TLS 1.0. Look at how much of a pain in the ass it is right now to enable TLS 1.2 in Apache.

We’re into double-facepalm territory now

No good, very bad month indeed.

So now we’re going to have to update all of the web servers out there, as well as all the clients. That’s going to take a bit of doing. OpenSSL runs the SSL portion of a lot of sites, and they’ve yet to bake TLS 1.1/1.2 into the versions that everyone uses (0.9.8 and 1.0.x). Load balancers are going to play a central role in all of this, so we’ll have to wait for F5, Cisco, A10 Networks, Radware, and others to support TLS 1.2. As far as I can tell, only F5’s LTM version 11 supports anything above TLS 1.0.

The tougher part will be all of the browsers out there. There are a lot of systems running unsupported and abandoned browsers. At least SSL/TLS is smart enough to negotiate down to the highest version both sides support, but that leaves potentially vulnerable clients.

In the meantime something that web servers and load balancers can do is take Google’s lead and prefer the RC4 symmetric algorithm. While cryptographically weaker, it’s immune to the CBC attack.

This highlights the importance of keeping your software, both clients and servers, up to date. Out of convenience we got stale with SSL, and even the security-obsessed OpenSSL project got caught with their pants down. This is really going to shake a lot of systems down, and hopefully be a kick in the pants to those who think it’s OK not to run current software.

I worked at an organization once where the developers forked off Apache (1.3.9 I believe) to interact with an application they developed. This meant that we couldn’t update to the latest version of Apache, as they refused to put the work in to port their application module to the updated versions of Apache. I bet you can guess how this ended. Give up? The Apache servers got p0wned.

So between BEAST and PKI problems, SSL/TLS has had a rough go at it lately. It’s not time to resort to smoke signals just yet, but it’s going to take some time before we get back to the level of confidence we once had. It’s a scary-ass world out there. Stay secure, my friends.

