Sunday, January 29, 2017

Everything you know about security is wrong, stop protecting your empire!

Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore these sorts of statements, but I'm seeing them often enough that it's time to write something up. We're in the middle of disruptive change. That means the way security used to work doesn't work anymore (some people think it does), and in the near future it won't work at all. In some instances it will actually be harmful, if it isn't already.


The real reason I'm writing this up is that there are really two types of leaders: those who lead to inspire change, and those who build empires. For empire builders, change is the enemy; they don't welcome the new disrupted future. Here's a list of the four things I ran into this week that gave me heartburn.


  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything


Let's start with AV. A long time ago everyone installed an antivirus application. It's just what you did, sort of like taking your vitamins. Most people can't say why; they just know that if they didn't do it everyone would think they're weird. Here's the question to think about though: how many times did your AV actually catch something? I bet the answer is very, very low, like number-of-times-you've-seen-Bigfoot low. And how many times have you seen AV fail to stop malware? Probably more times than you've seen Bigfoot. Today malware is big business, and its authors likely outspend the AV companies on R&D. You probably have some control in that phone-book-sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.


Usability vs security is one of my favorite topics these days. Security lost. It’s not that usability won, it’s that there was never really a battle. Many of us security types don’t realize that though. We believe that there is some eternal struggle between security and usability where we will make reasonable and sound tradeoffs between improving the security of a system and adding a text field here and an extra button there. What really happened was the designers asked to use the bathroom and snuck out through the window. We’re waiting for them to come back and discuss where to add in all our great ideas on security.


Another fan favorite is the idea that the best way to improve network security is to lock everything down, then slowly open things up as devices try to get out. See the above conversation about usability. If you do this, people just work around you. They'll use their own devices with network access, or just work from home. I've seen employees using the open wifi of the coffee shop downstairs. Don't lock things down; solve problems that matter. If you think this is a neat idea, you're probably the single biggest security threat your organization has today, so at least identifying the problem won't take long.


And lastly let's talk about the old trusty firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won't say they have no value, they're just not great security features anymore. Most network traffic is encrypted (or should be), and users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, and you can't trust your network. Do keep them at the edge though. Zero trust networking doesn't mean you should purposely build a hostile network.

We’ll leave it there for now. I would encourage you to leave a comment below or tell me how wrong I am on Twitter. I’d love to keep this conversation going. We’re in the middle of a lot of change. I won’t say I’m totally right, but I am trying really hard to understand where things are going, or need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices, best to do that away from Twitter. I suspect this will be a future podcast topic at some point, these are indeed interesting times.

How wrong am I? Let me know: @joshbressers on Twitter.



Monday, January 23, 2017

Return on Risk Investment

I found myself in a discussion earlier this week that worked its way into return on investment topics. Of course nobody could really agree on what the return was, which is how these conversations usually work out. It's really hard to decide what the return on investment is for security features and products. It can be hard to even determine the cost sometimes, and that should be the easy number to figure out.

All this talk got me thinking about something I'm going to call risk investment. The idea is that you have a risk, which we'll think of as the cost. You have an investment of some sort: it could be a product, training, maybe staff. This investment in theory reduces your risk in some measurable way. The reduction of the risk is the return on risk investment. We like to think about these things in the context of money, but risk doesn't exactly work that way. Risk isn't something that can often be measured easily. Even incredibly risky behaviors can work out fine, and playing it safe can end horribly. Rather than try to equate everything to money, what if we ignored that for the moment and just worried about risk?

First, how do you measure your risk? There isn't a nice answer for this. There are plenty of security frameworks you can use, and plenty of methodologies: threat modeling, attack surface analysis, pen test reports, architecture reviews, automated scanning of products and infrastructure. There's no single good answer to this question. I can't tell you what your risk profile is; you have to decide how you're going to measure it. What are you protecting? If it's some sort of regulated data, there will be substantial cost in losing it, so this risk measurement is easy. It's less obvious if you're not operating in an environment where an incident has a direct cost. It's even possible you have systems and applications that pose zero risk (yeah, I said it).

Assuming we have a way to determine risk, how do you measure the return on controlling it? This is possibly trickier than deciding how to measure the risk itself. You can't prove a negative in many instances; there's no way to say your investment is preventing something from happening. Rather than measure how many times you didn't get hacked, the right way to think about this is: if you were doing nothing, what would your level of risk be? We can refer back to our risk measurement method for that. Then think about where you do have protections in place: what will an incident look like? How much less trouble will there be? If you can't answer this you're probably in trouble. This is the important data point though. When there is an incident, how do you think your countermeasures will help mitigate the damage? What was your investment in the risk?

And now this brings us to our Return on Risk Investment, or RORI as I'll call it, because I can and who doesn't like acronyms? Here's the thing to think about if you're a security leader. If you have risk, which we all do, you must find some way to measure it. If you can't measure something you don't understand it, and if you can't measure your risk, you don't understand your risk. Once you have your method to understand what's happening, make note of your risk measurement without any security measures in place, your risk with ideal (not perfect, perfect doesn't exist) measures in place, and your risk with existing measures in place. That will give you an idea of how effective what you're doing is. Here's the thing to watch for: if your existing measures leave you close to the risk level of having no measures at all, that's not a positive return. Those are things you should either fix or stop doing. Sometimes it's OK to stop doing something that doesn't really work. Security theater is real, it doesn't work, and it wastes money. The trick is to find a balance that can show measurable risk reduction without breaking the bank.
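
If it helps to see the arithmetic, here's a minimal sketch of that comparison in Python. The scores are made-up placeholders; plug in whatever your own measurement method produces, and treat the zero-to-one "return" as nothing more than the fraction of your do-nothing risk that a set of measures removes.

def return_on_risk_investment(risk_no_measures, risk_with_measures):
    # fraction of the do-nothing risk that these measures remove
    if risk_no_measures == 0:
        return 0.0
    return (risk_no_measures - risk_with_measures) / risk_no_measures

# hypothetical scores from your own risk measurement exercise
risk_doing_nothing = 80   # no security measures at all
risk_ideal = 15           # ideal (not perfect) measures in place
risk_today = 72           # the measures you actually have deployed

print("best you could do:  ", return_on_risk_investment(risk_doing_nothing, risk_ideal))
print("what you're getting:", return_on_risk_investment(risk_doing_nothing, risk_today))
# if "what you're getting" sits near zero, that control is security theater:
# fix it or stop paying for it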


How do you measure risk? Let me know: @joshbressers on Twitter.


Monday, January 16, 2017

What do security and USB-C have in common?

I've decided to create yet another security analogy! You can't tell, but I'm very excited to do this. One of my long standing complaints about security is that there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security. How about pollution? There's always some sort of existing real world scenario we try to warp and twist so we can tell a security story that makes sense. So far they've all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it's also really hard to understand. It's just not something humans are good at.

The other day this article was sent to me by @kurtseifried:
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is essentially the world of USB-C cables is sort of a modern day wild west. There’s no way to really tell which ones are good and which ones are bad, so there are some people who test the cables. It’s nothing official, they’re basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That’s sort of crazy if you think about it.

This really got me thinking though: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don't have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is some sort of official testing lab: if something doesn't pass testing, it can't be sold. This makes sense; there are plenty of things on the market today that go through similar testing, and if the product fails, it doesn't get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations that are responsible for ensuring the quality and safety of certain products. The cables would be no different.

A possible alternative is to make sure every device assumes bad cables are possible and deals with that situation in hardware. This would mean devices being smart enough not to draw too much power, or not to provide too much power, and to know when some sort of failure is happening and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly though, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It's still a remote possibility though, and for the sake of the analogy, we're going to go with it.

The first example twisted to cybersecurity would mean you need a nice way to measure security. There would be a lab or organization that is capable of doing the testing, then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past. The few attempts to do this have failed. I suspect it’s possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I’m not really aware of anything of substance. It’s a really hard problem to solve, but if anyone can do it right, it’s probably Mudge.

This then leads us to the second possibility which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can’t really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though and in the context of the cables, it’s basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.
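
To make the "endpoint verifies everything" idea a little more concrete, here's a minimal sketch in Python. It only checks a SHA-256 digest published out of band; real distribution relies on proper signatures (GPG, platform code signing), but the principle is the same. The file name and digest below are placeholders.

import hashlib
import hmac

def digest_matches(path, expected_sha256_hex):
    # hash the downloaded file and compare it to the digest the publisher shipped
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return hmac.compare_digest(h.hexdigest(), expected_sha256_hex)

expected = "0" * 64  # placeholder; use the digest actually published for the file
if not digest_matches("installer.bin", expected):
    raise SystemExit("digest mismatch, refusing to run this")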

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Monday, January 9, 2017

Security Advice: Bad, Terrible, or Awful

As an industry, we suck at giving advice. I don’t mean this in some negative hateful way, it’s just the way it is. It’s human nature really. As a species most of us aren’t very good at giving or receiving advice. There’s always that vision of the wise old person dropping wisdom on the youth like it’s candy. But in reality they don’t like the young people much more than the young people like them. Ever notice the contempt the young and old have for each other? It’s just sort of how things work. If you find someone older and wiser than you who is willing to hand out good advice, stick close to that person. You won’t find many more like that.

Today I'm going to pick on security though. Specifically, security advice directed at people who aren't security geeks. Heck, some of this will probably apply to security geeks too, so let's just stick with humans as the target audience. Of all our opportunities around advice, I think the favorite is blaming the users for screwing up. It's never our fault; it's something they did, or something wasn't configured correctly, but still probably something they did. How many times have you dealt with someone who clicked a link because they were stupid? Or opened an attachment because they're an idiot? Or typed a password into that web page because they can't read? The list is long and impressive. Not once did we do anything wrong. Why would we though? It's not like we made anyone do those things! This is true, but we also didn't not make them do those things!

Some of the advice we expect people to listen to is good advice. A great example is telling someone to “log out” of their banking site when they’re done. That makes sense, it's easy enough to understand, and nothing catches fire if they forget to do it. We also like to tell people things like “check the URL bar”. Why would a normal person do this? They don't even know what a URL is. They know what a bar is; it's where they go to calm down after talking to us. What about when we tell people not to open attachments? Even attachments from their Aunt Millie? She promised that cookie recipe months ago, it's about time cookies.exe showed up!

The real challenge is understanding what good advice would supplement a properly functioning system. Advice and instructions do not replace a proper solution. A lot of the advice we give out is really there to mask something that's already broken. The fact that we expect users to care about a URL or an attachment is basically nuts. These are failures in the system, not failures of the users. We should be investing our resources into solving the root of the problem, not yelling at people for clicking on links. Instead of telling users not to click on attachments, just don't allow attachments. Expecting people to change their behavior rarely works. At best it creates an environment of shame, and more likely it creates an environment of contempt. They don't like you, you don't like them.

As a security practitioner, look for ways to eliminate problems without asking users for intervention. A best case situation is 80% user compliance, and the remaining 20% would take more effort to deal with than anyone could handle. If your solution depends on getting people to listen, you need 100% compliance all the time, which is impossible for humans but not for computers.

It's like the old saying, an ounce of prevention is worth a pound of cure. Or if you're a fan of the metric system, 28.35 grams of prevention is worth 453.59 grams of cure!

Do you have some bad advice? Lay it on me! @joshbressers on Twitter.

Tuesday, January 3, 2017

Looks like you have a bad case of embedded libraries

A long time ago pretty much every application and library carried around its own copy of zlib. zlib is a library that does really fast and really good compression and decompression. If you're storing data or transmitting data, it's very likely this library is in use. It's easy to use and freely licensed. It's no surprise it became the industry standard.

Then one day, CVE-2002-0059 happened. CVE-2002-0059 was a security flaw that was easy to trigger and easy to exploit. It affected network listening applications that used zlib (which was most of them). Today if this came out, it would make heartbleed look like a joke. This was long long ago though, most people didn’t know anything about security (or care in many instances). If you look at the updates that came out because of this flaw, they were huge because literally hundreds of software applications and libraries had to be patched. This affected Windows and Linux, which was most everything back then. Today it would affect every device on the planet. This isn’t an exaggeration. Every. Single. Device.

A lot of people learned a valuable lesson from CVE-2002-0059. That lesson was to stop embedding copies of libraries in your applications. Use the libraries already available on the system. zlib is pretty standard now, you can find it most anywhere, there is basically no reason to carry around your own version of this library in your project anymore. Anyone who does this would be seen as a bit nuts. Except this is how containers work.

Containing Containers

If you pay attention at all, you know the future of most everything is moving back in the direction of applications shipping with all the bits they need to run. Linux containers have essentially a full Linux distribution inside them (a very small one of course). Now there's a good reason for needing containers today. A long time ago, things moved very slowly. It wouldn't have been crazy to run the same operating system for ten years. There weren't many updates to anything. Even security updates were pretty rare. You knew that if you built an application on top of a certain version of Windows, Solaris, or Linux, it would be around for a long time. Those days are long gone. Things move very very quickly today.

I'm not foolish enough to tell anyone they shouldn't be including embedded copies of things in their containers. This is basically how containers work. Besides, everything is fast now, including the operating system. You can't count on the level of stability that once existed. This is a good thing because it gives us the ability to create faster than ever before; container technology is how we solve the problem of a fast-changing operating system.

The problem we have today is our tools aren't quite ready to deal with a security nightmare like CVE-2002-0059. If we found a serious problem like this (we sort of did with CVE-2015-7547, which affected glibc), how long would it take you to update all your containers? How would you update them? How would you even know if the flaw affected you?

The answer is most people wouldn’t update their containers quickly, some wouldn’t update them ever. This sort of goes against the whole DevOps concept. The right way this should work is if some horrible flaw is found in a library you’re shipping, your CI/CD infrastructure just magically deals with it. You shouldn’t have to really know or care. Humans are slow and make a lot of mistakes. They’re also hard to predict. All of these traits go against DevOps. The less we have humans do, the better. This has to be the future of security updates. There’s no secret option C where we stop embedding libraries this time. We need tools that can deal with security updates in a totally automated manner. We’re getting there, but we have a long way to go.
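
As a sketch of what "magically deals with it" has to start from, here's the sort of check a pipeline can run, assuming you already generate a package inventory for every image you build. The image names, packages, and versions below are all made up.

# image name -> {package: version}, pulled from your build pipeline's inventory
image_manifests = {
    "web-frontend:42": {"zlib": "1.1.3", "openssl": "1.1.1"},
    "billing-api:17":  {"zlib": "1.2.11"},
    "report-runner:3": {"glibc": "2.22"},
}

# package/version pairs your security team has flagged (hypothetical values)
vulnerable = {
    ("zlib", "1.1.3"),
    ("glibc", "2.22"),
}

def images_needing_rebuild(manifests, flagged):
    # return the images that contain a flagged package version
    return sorted(
        image
        for image, packages in manifests.items()
        if any((name, version) in flagged for name, version in packages.items())
    )

for image in images_needing_rebuild(image_manifests, vulnerable):
    # in a real pipeline this would trigger the automated rebuild, not print a name
    print("rebuild:", image)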

If you’re using containers today, and you can’t rebuild everything with the push of a button, you’re not really using containers. You’re running a custom Linux distribution. Don’t roll your own crypto, don’t roll your own distro.

Do you roll your own distro? Tell me, @joshbressers on Twitter.

Monday, January 2, 2017

Future Proof Security

If you've ever written code, even a few lines of it, you know there is always some sort of tradeoff between doing it “right” and doing it “now”. This is basically the reality of any industry: there is always the right way, and then there's the way it's going to get done. If you've ever done any sort of home remodeling project, you're well aware of uncovering the sins of the past as soon as that wall gets opened up.


When you’re writing software there are some places you should never try to make this tradeoff though. In the industry we like to call some of these decisions “technical debt”. It’s not called that to be clever, it’s called that because like all debt, someday you have to pay it back, plus interest. Sometimes those loans come with huge interest rates. How many of us have seen entire projects that were thrown out because of the terrible design decisions made way back at the beginning? It’s sadly not uncommon.


Are there times we should never make a tradeoff between “right” and “now”? Yes, yes there are. The single most important one is verifying data correctness, especially if you think it's trusted input. Today's trusted input is tomorrow's SQL injection. Let's use a few examples (these are actual examples I saw in the past, with the names of the innocent changed).


Beware the SQL
Once Bob wrote some SQL to return all the names in one of the ‘Users’ tables. It's a simple enough query; the code looks something like this:

def get_clients():
    table_name = "clients"
    # the table name is glued straight onto the SQL string
    query = "SELECT * FROM Users_" + table_name
    return query


That's easy enough to understand: for every other ‘get_’ function, you change the table name variable. Someday in the future they let the intern write some code, and he decides it would be way easier if the table_name variable were passed to the function and set from the URL. Now you have a SQL injection, as any remote user can set the table_name variable to anything, including dangerous SQL. If you're ever doing SQL queries, use prepared statements, even if you don't think you need them. It'll save a lot of trouble later.
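
Here's a minimal sketch of that using Python's built-in sqlite3 module. One nuance: placeholders bind values, not identifiers, so the table name still has to come from a fixed allow-list rather than from the URL. The table and column names are invented to match the example above.

import sqlite3

ALLOWED_TABLES = {"clients", "vendors"}  # the only table suffixes we accept

def get_users(conn, table_suffix, min_id):
    if table_suffix not in ALLOWED_TABLES:
        raise ValueError("unknown table: %r" % table_suffix)
    # the value is bound with a placeholder, never concatenated into the SQL
    query = "SELECT * FROM Users_" + table_suffix + " WHERE id >= ?"
    return conn.execute(query, (min_id,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users_clients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO Users_clients VALUES (1, 'alice')")
print(get_users(conn, "clients", 0))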


Images as far as the eye can see!
There is an application that has some internal icons, they’re used for the buttons that get displayed for users to click on, no big deal. The developer took an existing image library they found under the rug. It has some security flaws but who cares, all the images it displays are shipped by the app, they’re trusted, no big deal.


In a few years the intern (that guy again!) decides that it would be awesome to show images off the Internet. There just happens to be an image library already included in the application, which is a huge win. There’s even some example code that can be copied from where the buttons are drawn!


This one is pretty easy to see. You have a known bad library that used to parse only trusted input. Now it's parsing untrusted input, and that's a pretty big problem. There isn't an easy fix for this one unfortunately. It's rarely wise to ship embedded libraries in your projects, but everyone does it. I won't tell you to stop doing this, but I also understand this is one of the great problems we have to solve now that open source is everywhere.
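
If you're stuck with the embedded library, at least put a gate in front of it once the input stops being trusted. Here's a minimal sketch; the size cap is arbitrary, the magic bytes for PNG and JPEG are real, and none of this fixes the flawed parser, it just narrows what can reach it.

MAX_BYTES = 5 * 1024 * 1024  # arbitrary cap so the parser never sees something enormous

SIGNATURES = (b"\x89PNG\r\n\x1a\n", b"\xff\xd8\xff")  # PNG and JPEG magic bytes

def looks_like_supported_image(data):
    # cheap checks before untrusted bytes ever reach the embedded image library
    return len(data) <= MAX_BYTES and data.startswith(SIGNATURES)

# quick demo with made-up payloads
print(looks_like_supported_image(b"\x89PNG\r\n\x1a\n" + b"\x00" * 100))  # True
print(looks_like_supported_image(b"<html>not an image</html>"))          # False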

These two examples have been grossly simplified, but this stuff has happened and will continue to happen. If you're a software developer, be careful with your shortcuts. Always ask yourself the question “what happens if this suddenly starts parsing untrusted input?” It'll save you a lot of trouble down the road. Never forget that the technical debt bill will show up someday. Make sure you can afford it.

Do you have a clever technical debt story? Tell me, @joshbressers on Twitter.