Monday, January 16, 2017

What do security and USB-C have in common?

I've decided to create yet another security analogy! You can't tell, but I'm very excited to do this. One of my long-standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security, or even pollution. There's always some existing real-world scenario we try to warp and twist into a way to tell a security story that makes sense. So far they've all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it's also really hard to understand. It's just not something humans are naturally good at.

The other day this article was sent to me by @kurtseifried
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is essentially that the world of USB-C cables is sort of a modern-day wild west. There's no way to really tell which ones are good and which ones are bad, so some people test the cables. It's nothing official; they're basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That's sort of crazy if you think about it.

This really got me thinking though: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don't have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is some sort of official testing lab: if something doesn't pass testing, it can't be sold. This makes sense; there are plenty of things on the market today that go through similar testing, and if a product fails, it doesn't get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations responsible for ensuring the quality and safety of certain products. The cables would be no different.

The alternative is to make sure every device assumes bad cables exist and deals with them in hardware. This would mean devices being smart enough to not draw too much power, or not provide too much power, and to recognize an impending failure and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It's still a remote possibility though, and for the sake of the analogy, we're going to go with it.

The first example, twisted to cybersecurity, would mean we need a nice way to measure security. There would be a lab or organization capable of doing the testing and then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past; the few attempts at it have failed. I suspect it's possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I'm not really aware of anything of substance. It's a really hard problem to solve, but if anyone can do it right, it's probably Mudge.

This then leads us to the second possibility, which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can't really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though, and in the context of the cables, it's basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.
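
To make that concrete, here's a tiny sketch of what "all data transferred must be verified" can look like in practice. It assumes the publisher ships a SHA-256 digest alongside the download; the function name and setup are mine, not anything from a real product.

import hashlib

def is_trustworthy(path, expected_sha256):
    # Hash the downloaded file and compare it to the published digest.
    # Anything that doesn't match gets treated as hostile.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected_sha256

It's nowhere near a full code signing scheme, but it's the basic shape of "don't trust the data until you've checked it".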

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Monday, January 9, 2017

Security Advice: Bad, Terrible, or Awful

As an industry, we suck at giving advice. I don’t mean this in some negative hateful way, it’s just the way it is. It’s human nature really. As a species most of us aren’t very good at giving or receiving advice. There’s always that vision of the wise old person dropping wisdom on the youth like it’s candy. But in reality they don’t like the young people much more than the young people like them. Ever notice the contempt the young and old have for each other? It’s just sort of how things work. If you find someone older and wiser than you who is willing to hand out good advice, stick close to that person. You won’t find many more like that.

Today I'm going to pick on security though. Specifically, security advice directed at people who aren't security geeks. Heck, some of this will probably apply to security geeks too, so let's just stick to humans as the target audience. Of all our opportunities around advice, I think our favorite is blaming the users for screwing up. It's never our fault; it's something they did, or something that wasn't configured correctly, but still probably something they did. How many times have you dealt with someone who clicked a link because they were stupid? Or opened an attachment because they're an idiot? Or typed a password into that web page because they can't read? The list is long and impressive. Not once did we do anything wrong. Why would we though? It's not like we made anyone do those things! This is true, but we also didn't not make them do those things!

Some of the advice we expect people to listen to is good advice. A great example is telling someone to "log out" of their banking site when they're done. That makes sense, it's easy enough to understand, and nothing catches fire if they forget to do it. We also like to tell people things like "check the URL bar". Why would a normal person do this? They don't even know what a URL is. They know what a bar is; it's where they go to calm down after talking to us. What about when we tell people not to open attachments? Even attachments from their Aunt Millie? She promised that cookie recipe months ago, it's about time cookies.exe showed up!

The real challenge is understanding what good advice looks like as a supplement to a properly functioning system. Advice and instructions do not replace a proper solution. A lot of the advice we give out really exists to mask something that's already broken. The fact that we expect users to care about a URL or an attachment is basically nuts. These are failures in the system, not failures of users. We should be investing our resources into solving the root of the problem, not yelling at people for clicking on links. Instead of telling users not to click on attachments, just don't allow attachments. Expecting people to change their behavior rarely works. At best it creates an environment of shame; more likely it creates an environment of contempt. They don't like you, you don't like them.

As a security practitioner, look for ways to eliminate problems without asking users to intervene. A best-case outcome is around 80% user compliance, and dealing with the remaining 20% would take more effort than anyone could handle. If your solution depends on people listening, you need 100% compliance all the time, which is impossible for humans but not for computers.

It's like the old saying, an ounce of prevention is worth a pound of cure. Or if you're a fan of the metric system, 28.35 grams of prevention are worth 453.59 grams of cure!

Do you have some bad advice? Lay it on me! @joshbressers on Twitter.

Tuesday, January 3, 2017

Looks like you have a bad case of embedded libraries

A long time ago pretty much every application and library carried around its own copy of zlib. zlib is a library that does really fast and really good compression and decompression. If you're storing data or transmitting data, it's very likely this library is in use. It's easy to use and comes with a very permissive license. It's no surprise it became the industry standard.
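
If you've never touched it directly, this is roughly all most programs want from zlib, shown here through Python's built-in bindings purely as an illustration:

import zlib

data = b"the same boring bytes " * 100
compressed = zlib.compress(data, 9)      # smaller blob for storage or transfer
restored = zlib.decompress(compressed)   # get the original bytes back
assert restored == data
print(len(data), "->", len(compressed), "bytes")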

Then one day, CVE-2002-0059 happened. CVE-2002-0059 was a security flaw that was easy to trigger and easy to exploit. It affected network-listening applications that used zlib (which was most of them). If it came out today, it would make Heartbleed look like a joke. This was long, long ago though; most people didn't know anything about security (or didn't care, in many instances). If you look at the updates that came out because of this flaw, they were huge, because literally hundreds of applications and libraries had to be patched. This affected Windows and Linux, which was most everything back then. Today it would affect every device on the planet. This isn't an exaggeration. Every. Single. Device.

A lot of people learned a valuable lesson from CVE-2002-0059. That lesson was to stop embedding copies of libraries in your applications. Use the libraries already available on the system. zlib is pretty standard now, you can find it most anywhere, there is basically no reason to carry around your own version of this library in your project anymore. Anyone who does this would be seen as a bit nuts. Except this is how containers work.

Containing Containers

If you pay attention at all, you know the future of most everything is moving back in the direction of applications shipping with all the bits they need to run. Linux containers have essentially a full Linux distribution inside them (a very small one, of course). Now, there's a good reason for needing containers today. A long time ago, things moved very slowly. It wouldn't have been crazy to run the same operating system for ten years. There weren't many updates to anything; even security updates were pretty rare. You knew that if you built an application on top of a certain version of Windows, Solaris, or Linux, it would be around for a long time. Those days are long gone. Things move very, very quickly today.

I'm not foolish enough to tell anyone they shouldn't be including embedded copies of things in their containers. This is basically how containers work. Besides, everything is fast now, including the operating system. You can't count on the level of stability that once existed. This is a good thing because it gives us the ability to create faster than ever before, and container technology is how we solve the problem of a fast-changing operating system.

The problem we have today is that our tools aren't quite ready to deal with a security nightmare like CVE-2002-0059. If we found a serious problem like this (we sort of did with CVE-2015-7547, which affected glibc), how long would it take you to update all your containers? How would you update them? How would you even know if the flaw affected you?

The answer is that most people wouldn't update their containers quickly, and some wouldn't update them ever. This sort of goes against the whole DevOps concept. The way this should work is that when some horrible flaw is found in a library you're shipping, your CI/CD infrastructure just magically deals with it. You shouldn't have to really know or care. Humans are slow and make a lot of mistakes. They're also hard to predict. All of these traits go against DevOps. The less we have humans do, the better. This has to be the future of security updates. There's no secret option C where we stop embedding libraries this time. We need tools that can deal with security updates in a totally automated manner. We're getting there, but we have a long way to go.

If you’re using containers today, and you can’t rebuild everything with the push of a button, you’re not really using containers. You’re running a custom Linux distribution. Don’t roll your own crypto, don’t roll your own distro.
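
For what it's worth, "push of a button" doesn't have to mean anything fancy. Here's a hedged sketch of the idea in Python, assuming a made-up repo layout where each service lives in its own directory with a Dockerfile and images go to an imaginary registry; a real setup would hang this off CI rather than someone's laptop.

import subprocess
from pathlib import Path

REGISTRY = "registry.example.com/myorg"   # imaginary registry name
SERVICES = Path("services")               # one directory per service, each with a Dockerfile

def rebuild_everything():
    for dockerfile in sorted(SERVICES.glob("*/Dockerfile")):
        name = dockerfile.parent.name
        image = REGISTRY + "/" + name + ":latest"
        # --pull grabs fresh base layers, so a patched base image actually lands in the rebuild
        subprocess.run(["docker", "build", "--pull", "-t", image, str(dockerfile.parent)], check=True)
        subprocess.run(["docker", "push", image], check=True)

if __name__ == "__main__":
    rebuild_everything()

The point isn't this particular script, it's that rebuilding should be boring enough that a machine can do it the moment a fixed library ships.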

Do you roll your own distro? Tell me, @joshbressers on Twitter.

Monday, January 2, 2017

Future Proof Security

If you've ever written code, even a few lines of it, you know there is always some sort of tradeoff between doing it "right" and doing it "now". This is basically the reality of any industry: there is always the right way, and then there's the way it's going to get done. If you've ever done any sort of home remodeling project, you're well aware of uncovering the sins of the past as soon as that wall gets opened up.


When you're writing software there are some places where you should never try to make this tradeoff. In the industry we like to call some of these decisions "technical debt". It's not called that to be clever; it's called that because, like all debt, someday you have to pay it back, plus interest. Sometimes those loans come with huge interest rates. How many of us have seen entire projects thrown out because of terrible design decisions made way back at the beginning? It's sadly not uncommon.


Are there times we should never make a tradeoff between "right" and "now"? Yes, yes there are. The single most important is verifying data correctness, especially when you think it's trusted input. Today's trusted input is tomorrow's SQL injection. Let's use a few examples (these are actual examples I saw in the past, with the names of the innocent changed).


Beware the SQL
Once Bob wrote some SQL to return all the names in one of the 'Users' tables. It's a simple enough query; the code looks something like this:

def get_clients():
    table_name = "clients"
    # the table name gets concatenated straight into the query string
    query = "SELECT * from Users_" + table_name


That's easy enough to understand: for every other 'get_' function, you change the table name variable. Someday in the future they let the intern write some code, and he decides it would be way easier if the table_name variable were passed into the function and set from the URL. Now you have a SQL injection, since any remote user can set table_name to anything, including dangerous SQL. If you're ever doing SQL queries, use prepared statements, even if you don't think you need them. It'll save a lot of trouble later.
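
For the record, here's roughly what the safer version looks like, sketched with Python's sqlite3 module; the table names, the allowlist, and the min_id parameter are all invented for the example. One wrinkle worth knowing: placeholders protect values, not identifiers, so a table name coming from outside still has to be checked against a known-good list.

import sqlite3

ALLOWED_TABLES = {"clients", "vendors"}   # made-up allowlist for the example

def get_users(conn, table_name, min_id):
    # Identifiers can't go through placeholders, so validate them explicitly.
    if table_name not in ALLOWED_TABLES:
        raise ValueError("unexpected table name: %r" % table_name)
    query = "SELECT * FROM Users_%s WHERE id >= ?" % table_name
    # The value travels separately; the driver handles quoting and escaping.
    return conn.execute(query, (min_id,)).fetchall()

Even when the intern wires table_name up to the URL later, the worst they can do is hit one of the tables you already expected.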


Images as far as the eye can see!
There is an application that has some internal icons; they're used for the buttons that get displayed for users to click on, no big deal. The developer took an existing image library they found under the rug. It has some security flaws, but who cares? All the images it displays are shipped by the app, they're trusted, no big deal.


In a few years the intern (that guy again!) decides that it would be awesome to show images off the Internet. There just happens to be an image library already included in the application, which is a huge win. There’s even some example code that can be copied from where the buttons are drawn!


This one is pretty easy to see. You have a known-bad library that used to parse only trusted input. Now it's parsing untrusted input, and that's a pretty big problem. There isn't an easy fix for this one unfortunately. It's rarely wise to ship embedded libraries in your projects, but everyone does it. I won't tell you to stop doing this, but I also understand this is one of the great problems we have to solve now that open source is everywhere.

These two examples have been grossly simplified, but this stuff has happened and will continue to happen. If you're a software developer, be careful with your shortcuts. Always ask yourself the question "what happens if this suddenly starts parsing untrusted input?" It'll save you a lot of trouble down the road. Never forget that the technical debt bill will show up someday. Make sure you can afford it.

Do you have a clever technical debt story? Tell me, @joshbressers on Twitter.

Sunday, December 25, 2016

The art of cutting edge, Doom 2 vs the modern Security Industry

During the holiday, I started playing Doom 2. I bet I've not touched this game in more than ten years; I can't even remember the last time I played it. My home directory was full of garbage, and while cleaning it up I came across doom2.wad. I've been carrying this file around in my home directory for nearly twenty years now. It's always there, like an old friend you know you can call at any time, day or night. I decided it was time to install one of the Doom engines and give it a go. I picked PrBoom; it's something I used a long time ago, and it doesn't have any fancy features like mouselook or jumping. Part of the appeal is keeping the experience close to the original. Plus, if you could jump, a lot of these levels would be substantially easier. The game depends on not having those features.

This game is a work of art. You don't see games redefining the industry like this anymore. The original Doom is good, but Doom 2 is like adding color to a black and white picture; it adds a certain quality to it. The game has a story, it's pretty bad, but that's not why we play it. The appeal is the mix of puzzles, action, monsters, and just plain cleverness. I love those areas where you have two crazy huge monsters fighting, you wonder which will win, then start running like crazy when you realize the winner is now coming after you. The games today are good, but it's not exactly the same. The graphics are great, the stories are great, the gameplay is great, but it's not something new and exciting. Doom was new and exciting. It created a whole new genre of gaming and became the bar every game after it reaches for. There are plenty of old games that are terrible when played today, even with the glasses of nostalgia on. Doom has terrible graphics, but that doesn't matter; the game is still fantastic.

This all got me thinking about how industries mature. Crazy new things stop happening, the existing players find a rhythm that works for them and they settle into it. When was the last time we saw a game that redefined the gaming industry? There aren’t many of these events. This brings us to the security industry. We’re at a point where everyone is waiting for an industry defining event. We know it has to happen but nobody knows what it will be.

I bet this is similar to gaming back in the days of Doom. The 486 had just come out, and it had a ton of horsepower compared to anything that had come before it. Anyone paying attention knew there were going to be awesome advancements. We gave smart people awesome new tools. They delivered.

Back to security now. We have tons of awesome new tools. Cloud, DevOps, Artificial Intelligence, Open Source, microservices, containers. The list is huge and we’re ready for the next big thing. We all know the way we do security today doesn’t really work, a lot of our ideas and practices are based on the best 2004 had to offer. What should we be doing in 2017 and beyond? Are there some big ideas we’re not paying attention to but should be?

Do you have thoughts on the next big thing? Or maybe which Doom 2 level is the best (Industrial Zone). Let me know.

Monday, December 19, 2016

Does "real" security matter?

As the dumpster fire that is 2016 crawls to the finish line, we had another story about a massive Yahoo breach. 1 billion user accounts had data stolen. Just to give some context here, that has to be hundreds of gigabytes at an absolute minimum. That's a crazy amount of data.

And nobody really cares.

Sure, there is some noise about all this, but in a week or two nobody will even remember. There has been a similar story about every month all year long. Can you even remember any of them? The stock market doesn't; basically nobody who has had a crazy breach has seen a long-term problem with their stock. Sure, there will be a blip where everyone panics for a few days, then things go back to normal.

So this brings us to the title of this post.

Does anyone care about real security? What I mean here is I'm going to lump things into three buckets: no security, real security, and compliance security.

No Security
This one is pretty simple. You don't do anything. You just assume things will be OK, someday they aren't, then you clean up whatever mess you find. You could call this "reactive security" if you wanted. I'm feeling grumpy though.

Real Security
This is when you have a real security team, and you spend real money on features and technology. You have proper logging, and threat models, and attack surfaces, and hardened operating systems. Your applications go through a security development process and run in a sandbox. This stuff is expensive. And hard.

Compliance Security
This is where you do whatever you have to because some regulation from somewhere says you have to. Password lengths, enabling TLS 1.2, encrypting data, the list is long. Just look at PCI if you want an example. I have no problem with this, and I think it's the future. Here is a picture of how things look today.

I don't think anyone would disagree that if you're doing only the minimum compliance suggests, you will still have plenty of insecurity. The problem with real security is that you're probably not getting any ROI; it's likely a black hole you dump money into and get minimal value back (remember the bit about long-term stock prices not mattering here).

However, when we look at the sorry state of nearly all infrastructure, and especially the IoT universe, it's clear that No Security is winning this race. Expecting anyone to make great leaps in security isn't realistic; most won't move unless they absolutely have to. This is why compliance is the future. We have to keep nudging compliance to the right on this graph, but we have to move it slowly.

It's all about the Benjamins
As I mentioned above, security problems don't seem to cause a lot of negative financial impact. Compliance problems do. Right now there are very few instances where compliance is required, and even when it is, it's not always as strong as it could be. Good security will have to first show value (actual measurable value, not made-up statistics); once we see the value, it should be mandated by regulation. Not everything should be regulated, but we need clear rules as to what should need compliance, why, and especially how. I used to despise the idea of mandatory compliance around security, but at this point I think it's the only plausible solution. This problem isn't going to fix itself. If you want to make a prediction, ask yourself: is there a reason 2017 will be more secure than 2016?

Do you have thoughts on compliance? Let me know.

Monday, December 12, 2016

A security lifetime every five years

A long time ago, it wouldn’t be uncommon to have the same job at the same company for ten or twenty years. People loved their seniority, they loved their company, they loved everything staying the same. Stability was the name of the game. Why learn something new when you can retire in a few years?

Well, a long time ago was a long time ago. Things are quite a bit different now. If you've been doing the same thing at the same company for more than five years, there's probably something wrong. Of course there are always exceptions to every rule, but I bet more than 80% of the people in their jobs for more than five years aren't exceptions. It's easy to get too comfortable, and it's also dangerous.

Rather than spending too much time expanding on this idea, I’m going to take it and move into the security universe as that’s where I spend all my time. It’s a silly place, but it’s all I know, so it’s home. While all of IT moves fast, the last few years have been out of control for security. Most of the rules from even two years ago are different now. Things are moving at such a fast pace I’m comfortable claiming that every five years is a lifetime in the security universe.

I’m not saying you can’t work for the same company this whole time. I’m saying that if you’re doing the same thing for five years, you’re not growing. And if you’re not growing, what’s the point?

Now here's the thing about security. If we think about the people we consider the "leaders" (using the term loosely; there aren't even many of those), we will notice something about the whole "five years" thing I mentioned. How many of them have done anything in the last five years on the level that got them where they are today? Not many.

Again, there are exceptions. I'll point to Mudge and the CITL work. That's great stuff. But for every Mudge, I can think of more than ten who just aren't doing interesting things. There's nothing wrong with this; I'm not pointing it out to diminish any past contributions to the world. I point it out because sometimes we spend more time looking at the past than we do looking at where we are today, much less where we're heading in the future.

What's the point of all this (other than making a bunch of people really mad)? It's to point out that the people and ideas that are going to move things forward aren't the leaders from the past; they're new and interesting people you've never heard of. Look for new people with fresh ideas. Sure, it's fun to talk to the geezers, but it's even more fun to find the people who will be the next geezers.