Sunday, December 4, 2016

Airports, Goats, Computers, and Users

Last week I had the joy of traveling through airports right after the United States Thanksgiving holiday. Now I don't know how many of you have ever tried to travel the week after Thanksgiving, but it's kind of crazy: there are a lot of people, way more than usual, and a significant number of them have probably never been on an airplane, or if they do travel by air, they don't do it very often. The joke I like to tell people is that there are folks at the airport wondering why they can't bring their goat onto the airplane. I’m not going to use this post to discuss the merits of airport security (that’s a whole different conversation), it’s really about coexisting with existing security systems.


Now on this trip I didn't see any goats; I was hoping to see something I could classify as truly bizarre, so this was a disappointment to me. There were two dogs, but they were surprisingly well behaved. However, all the madness I witnessed got me thinking about security in an environment where a substantial number of the users are woefully unaware of the security all around them. The frequent travelers know how things work, they keep it moving smoothly, they’re aware of the security and make sure they stay out of trouble. It’s not about whether something makes you more or less secure, it’s about the goal of getting from the door to the plane as quickly and painlessly as possible. Many of the infrequent travelers aren’t worried about moving through the airport quickly, they’re worried about getting their stuff onto the plane. Some of that stuff shouldn’t be brought through an airport.


Now let’s think about how computer security works for most organizations. You’re not dealing with the frequent travelers, you’re dealing with the holiday horde trying to smuggle a jug of motor oil through security. It’s not that these people are bad or stupid, it’s really that they don’t worry about how things work; they’re not going to be back in the airport until next Thanksgiving. In a lot of organizations the users aren’t trying to be stupid, they just don’t understand security. Browsing Facebook on the work computer isn’t seen as a bad idea; it’s their version of smuggling contraband through airport security. They don’t see what it hurts, they’re not worried about the general flow of things. If their computer gets ransomware it’s not really their problem. We’ve pushed security off to another group nobody really likes.


What does this all mean? I’m not looking to solve this problem; it’s well known that you can’t fix problems until you understand them. I just happened to notice this trend while making my way through the airport, looking for a goat. It’s not that users are stupid, and they’re not as clueless as we think either, they’re just not invested in the process. It’s not something they want to care about, it’s something preventing them from doing what they want to do. Can we get them invested in the airport process?


If I had to guess, we’re never going to fix users; we have to fix the tools and the environment.

Sunday, November 27, 2016

The Economics of stealing a Tesla with a phone

A few days ago there was a story about how to steal a Tesla by installing malware on the owner's phone. If you look at the big picture view of this problem it's not all that bad, but our security brains want to make a huge deal out of this. Now I'm not saying that Tesla shouldn't fix this problem, especially since it's going to be a trivial fix. What we want to think about is how all these working parts have to fit together. This is something we're not very good at in the security universe; there can be one single horrible problem, but when we paint the full picture, it's not what it seems.

Firstly, the idea of being able to take full control over a car from a phone sounds terrible. It is terrible, and when a problem like this is found, it should always be fixed. But this also isn't something that's going to affect millions (it probably won't even affect hundreds). This is the sort of problem where you have an attacker targeting you specifically. If someone wants to target you, there are a lot of things they can do; putting a rootkit on your phone to steal your car is one of the least scary. The reality is that if you're the target of a well funded adversary, you're going to lose, period. So we can ignore that scenario.

Let's move to the car itself. A Tesla, or most any stolen car today, doesn't have a lot of value; the reward relative to the risk is very low. I suspect a Tesla has so many serial numbers embedded in the equipment you couldn't resell any of the parts. I also bet it has enough gear on board that they can tell you where your car is with a margin of error around three inches. Stealing such a vehicle and then trying to do something with it probably carries far more risk than any possible reward.

Now if you keep anything of value in your car, and many of us do, that could be a great opportunity for an adversary. But of course now we're back to the point: if you have control over someone's phone, is your goal to steal something out of their car? Probably not. Additionally, if we think as an adversary, once we break into the car, even if we leave no trace, the record of unlocking the doors is probably logged somewhere. An adversary on this level will want to remain very anonymous, and again, if your target has something of value it would be far less risky to just mug them.

Here is where the security world tends to fall apart from an economics perspective. We like to consider a particular problem or attack in a very narrow context. Gaining total control over a car does sound terrible, and if we only look at it in that context, it's a huge deal. If we look at the big picture though, it's not all that bad in reality. How many security bugs and misconfigurations have we spent millions dealing with as quickly as possible, when in the big picture, they weren't all that big of a deal? Security is one of those things that more often than not is dealt with on an emotional level rather than one of pure logic and economics. Science and reason lead to good decisions; emotion does not.

Leave your comments on Twitter

Sunday, November 20, 2016

Fast security is the best security

DevOps security is a bit like developing without a safety net. This is a reference to a trapeze act at the circus, for those of you who have never had the joy of witnessing the heart-stopping excitement of the circus trapeze. The idea is that when you watch a trapeze act with a net, you know that if something goes wrong, they just land in the net. The really exciting and scary trapeze acts have no net. If these folks fall, that's pretty much it for them. Someone pointed out to me that current DevOps security is a bit like taking away the net.

This got me thinking about how we used to develop and do security, how we do it now, and is the net really gone?

First, some history


If you're a geezer, you remember the days when the developers built something, and operations had to deploy it. It never worked, and both groups called the other names. Eventually they put aside their mutual hatred, worked together, and got something that mostly worked. This did provide some level of checks and balances though. Operations could ensure development wasn't doing anything too silly, and development could keep an eye on operations. Things mostly made sense. Somehow projects still got deployed by banging rocks together.

That said though, things did move slowly, and it's not a secret that some projects failed due to structural issues after having huge sums of money spent on them. I'll never say things were better back then, anyone who claims the world was a better place isn't someone you should listen to.

The present


In the new and exciting world of DevOps, who is responsible for checking on whom? Development can't really blame operations anymore, they're all on the same team; sometimes it's even the same person. This would be like that time the Austrian army attacked itself. This is where the idea of the safety net being removed comes in. Who is responsible for ensuring things are mostly secure? The new answer isn't "nobody", it's "everybody".

The real power of DevOps is that the software and systems are grown, not built. This is true of security, it's now grown instead of built. Now you have ample opportunity to make good security decisions along the way. Even if you make some sort of mistake, and you will, it's trivial to fix the problem quickly without much fanfare. The way the world works today is not the way the world worked even ten years ago. If you can't move fast, you're going to fail, especially when security is involved. Fast security is the best security.

And this is really how security has to work. Security has to move fast. The days of having months to fix security problems are long gone. You have to stay on top of what's going on and get things dealt with quickly. DevOps didn't remove the security safety net, it removed the security parachute. Now you can go as fast as you want, but that also means if nobody is driving, you're going to crash into a wall.

Leave your comments on Twitter

Monday, November 14, 2016

Who cares if someone hacks my driveway camera?

I keep hearing something from people about IoT that reminds me of the old saying, if you’ve done nothing wrong, you have nothing to fear. This attitude is incredibly dangerous in the context of IoT devices (it’s dangerous in all circumstances honestly). The way I keep hearing this in the context of IoT is something like this: “I don’t care if someone hacks my video camera, it’s just showing pictures of my driveway”. The problem here isn’t what video the camera is capturing, it’s the fact that if your camera gets hacked, the attacker can do nearly anything with the device on the Internet. Remember, at this point these things are fairly powerful general purpose computers that happen to have a camera.

Let’s stick with the idea of an IoT camera being hacked, as it’s easy to believe the result of a hack will be harmless. Let’s think about a few possible problem scenarios. There are literally an infinite number of these possibilities, which is part of what makes the problem so hard to understand.

  1. The attacker can see the camera video
  2. The attacker can use the camera in a botnet
  3. The attacker can host illegal content
  4. The attacker can send spam
  5. The attacker can mine bitcoins
  6. The attacker can crack passwords
  7. The attacker can use the camera as a jump host

You get the idea. The possibilities are nearly endless, and as Crime Inc. continues to innovate, they will find new uses for these resources. Unprotected IoT devices are going to be currency in this new digital resource gold rush. The challenge the defenders face is we can’t defend against a threat that hasn’t been invented yet. It’s a tricky business really.

What happens if it’s doing something illegal?


Just because you don’t care about your camera being spied on doesn’t mean you’re safe. The privacy angle isn’t what’s important anymore in the context of IoT. People whose cameras were part of the botnet probably didn’t care about the privacy either. I bet a lot of them don’t even know their cameras were used as part of a massive illegal activity. I don’t expect everyone to suddenly start to watch their IoT traffic for strange happenings. The whole point of this discussion is to stress that there are always many possible layers of problems when you have a device that’s not protected. It’s not just about what the device is supposed to do. At this point nearly everything that can attach to the Internet is more powerful than the biggest computers of 20 years ago. By definition these things can do literally anything.

Things are going to happen we can’t yet imagine, those are the use cases we have to worry about. We need to be mindful about what we’re doing because our actions (or inactions) can have unforeseen consequences. When we talk about hacking an IoT device, most people are only worried about whatever job the device has, not the ability of the device to create other harm, such as a huge DDoS botnet. Claiming you have nothing to hide isn't an excuse for ignoring your IoT security.

Comment on Twitter

Sunday, November 6, 2016

Free security is the only security that really works

There are certain things people want and will pay for. There are things they want and won’t. If we look at security, it’s pretty clear now that security is one of those things people want, but most won’t pay for. The insane success of Let’s Encrypt is where this thought came from. Certificates aren’t new; they used to even be really cheap (there were free providers, but there was a time cost of jumping through hoops). Let’s Encrypt makes the time and actual cost basically zero, and now it’s deployed all over. Depending on who you ask, they’re one of the biggest CAs around now, and that took them a year? That’s crazy.

Nobody is going to say “I don’t want security”. Only a monster would say such a thing. Now if you ask them to pay for their security, they’ll probably sneak out the back door while you’re not looking. We all have causes we think are great, but we’re not willing to pay for them. Do I believe in helping disadvantaged youth in Albania? I TOTALLY DO! Can I donate to the cause? I just remembered I left the kettle on the stove.

Currently most people and groups don’t have to do things securely. There is some incentive in certain industries, but fundamentally they don’t want to pay for anything. And let's face it, the difference between doing something and not doing something (let’s say http vs https) is going to be minimal. There are some search engine rules now that give preference to https, so there’s incentive. With a free CA, now there’s no excuse. A great way forward will be small incentives for being more secure and having free or low cost ways to get them (email is probably next).

How can we make more security free?

Better built-in technologies work; look at things like stack canaries: everyone has them, almost everyone uses them. If you look at Wikipedia, it was around 2000 that major compilers started to add this technology. It took quite a bit of time. Phrack 49, which brought stack smashing into the conversation, was published in 1996, and we didn’t see massive uptake of stack protections until after 2000. Can you imagine what four years is like in today’s Internet?

If we think about what seems to be the hip technologies today, a few spring to mind.

  • Code scanning is currently expensive, and not well used.
  • Endpoint security gets plenty of news.
  • What do you mean you don’t have an SDLC! I am shocked! SHOCKED!
  • Software Defined EVERYTHING!
  • There are also plenty of authentication and identity and twelve factor something or other.

The list can go on nearly forever. Ask yourself this: what is the ROI on this stuff? Apart from not being able to answer, I bet some of it is negative. Why should we do something that costs more than it saves? Just having free security isn’t enough, it has to also be useful. Part of the appeal of Let’s Encrypt is that it’s really easy to use, it solves a problem, it’s very low cost, and it has a high ROI. How many security technologies can we say this about? We can’t even agree what problems some of this stuff solves.

Here’s an easy rule of thumb for things like this: if you can’t show a return of at least 10x, don’t do it. We get caught in the trap of “I have to do something” without any regard for whether it makes sense. A huge advantage of demanding measured returns is that it makes us focus on two questions that rarely get asked. The first and most important is “how much will this cost?” We’ve all seen runaway projects. The second is “what’s my real benefit?” The second is really hard sometimes and will end up creating a lot of new questions and ideas. If you can’t measure or decide what the benefit of what you’re doing is, you probably don’t need to be doing it. A big part of being a modern agile organization is only doing what’s needed. Security ROI can help us focus on that.
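The 10x rule is simple enough to put in code. Here's a minimal sketch in Python; the function names and the dollar figures are made up for illustration, not taken from any real project.

```python
def security_roi(cost, expected_loss_avoided):
    """Ratio of the loss a project avoids to what the project costs."""
    return expected_loss_avoided / cost

def worth_doing(cost, expected_loss_avoided, threshold=10):
    """Apply the rule of thumb: skip anything below a 10x return."""
    return security_roi(cost, expected_loss_avoided) >= threshold

# A $50,000 scanner that avoids maybe $75,000 in losses: 1.5x, skip it.
print(worth_doing(50_000, 75_000))    # False
# $5,000 automating cert renewal that avoids a $200,000 outage: 40x, do it.
print(worth_doing(5_000, 200_000))    # True
```

The arithmetic is the trivial part; the hard part is the two questions above, since both inputs are estimates that are easy to get badly wrong.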

At the end of the day stop complaining everything is terrible (we already know it is), figure out how you can make a difference without huge cost. Shaking your fist while screaming “you’ll be sorry” isn’t a strategy.

Monday, October 31, 2016

Stop being the monkey's paw

Tonight, while I was handing out candy on Halloween as the children came to the door trick-or-treating, getting whatever candy I've not yet eaten, I started thinking about scary stories in the security universe. Some of the things we do in security could be compared to the old fable of the cursed monkey's paw, which is one of my favorite stories.

For those who don't know this story, the quick version is essentially that there is a monkey's paw, an actual severed appendage of a monkey (it's not some sort of figurative item). It has some fingers on it that may or may not signify the number of wishes used. The paw is indestructible, and the previous owner doesn’t want it, but can’t get rid of it until some unsuspecting sucker shows up. The idea is you get three wishes, or five, or whatever, depending upon the version of the story that's told (these old folk tales can differ greatly depending on what part of the world is telling them), and then the monkey's paw gives you exactly what you asked for. The problem is that what you asked for comes with horrifying consequences. For example, there was an old man who had the paw and he asked for $200; the next day he got his $200 because his son was killed at work and they brought him $200 of his last paycheck. Of course there are different variants of this, but the basic idea is the paw seems clever, it grants wishes, but every wish comes with terrible consequences.

This story got me thinking about security, how we ask questions and how we answer questions. Let's think about this in the context of application security specifically. If someone were to ask the security team the question “does this code have a buffer overflow in it?”, the person asked for help is going to look for buffer overflows, and they may or may not notice that the code has a SQL injection problem. Or maybe it has an integer overflow or some other problem. The point is that's not what they were looking for, so we didn't ask the right question. You can even take this a little further: occasionally someone might ask the question “is my system secure?” The answer is definitively no. You don't even have to look at the system to answer that question, so they don't even know what to ask in reality. They are asking the monkey's paw to bring them their money; it's going to do it, but they're not going to like the consequences.
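To make the wrong-question problem concrete, here's a hypothetical sketch in Python (the function and table names are invented for illustration). A reviewer asked only "does this have a buffer overflow?" could truthfully answer no, since Python manages memory for you, and still miss the actual bug: a SQL injection.

```python
import sqlite3

def find_user(conn, name):
    # BAD: user input is pasted straight into the query text.
    query = "SELECT id FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safely(conn, name):
    # GOOD: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(find_user(conn, payload))         # every row comes back: [(1,), (2,)]
print(find_user_safely(conn, payload))  # no such user: []
```

Ask the paw about buffer overflows and it will dutifully report there are none; the injection ships anyway.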

So this really makes me think about how we frame the question, since the questions we ask are super important; getting the details right is a big deal. However, there's also another side to asking questions, and that's being the human receiving the question. You have to be rational and sane in the way you deal with the person asking those questions. If we are the monkey's paw, only giving people the technical answer to the technical question, odds are good we aren't actually helping them.

As I sit here on this cold windy Halloween waiting for the kids to come and take all the candy that I keep eating, it really makes me think: as security practitioners we need to be very mindful of the fact that the questions people are asking us might not really be the answers they want. It's up to us as humans, rather than monkey paws, to interpret the intent behind the person, what is the question they really want to ask, then give them answers they can use, answers they need, and answers that are actually helpful.

Sunday, October 30, 2016

Security is in the same leaky boat as the sysadmins

Sysadmins used to rule the world. Anyone who's been around for more than a few years remembers the days when whatever the system administrator wanted, the system administrator got. They were the center of the business. Without them nothing would work. They were generally super smart and could quite often work magic with what they had. It was certainly a different time back then.

Now developers are king, the days of the sysadmin have waned. The systems we run workloads on are becoming a commodity, you either buy a relatively complete solution, or you just run it in the cloud. These days most anyone using technology for their business relies on developers instead of sysadmins.

But wait, what about servers in the cloud, or containers which are like special mini servers, or ... other things that sysadmins have to take care of! If you really think about it, containers and cloud are just vehicles for developers. All this new technology, all the new disruption, all the interesting things happening are all about enabling developers. Containers and cloud aren't ends in themselves, they are the boats by which developers deliver their cargo. Cloud didn't win, developers won; cloud just happens to be their weapon of choice right now.

If we think about all this, the question I keep wondering is "where does security fit in?"

I think the answer is that it doesn't, it probably should, but we have to change the rules since what we call security today is an antiquated and broken idea. A substantial amount of our security ideas and methods are from the old sysadmin world. Even our application security revolves around finding individual bugs, then releasing updates for them. This new world changes all the rules.

Much of our security ideas and concepts are based on the days when sysadmins ruled the world. They were like a massive T-Rex ruling their domain, instilling fear into those beneath them. Today in security we are trying to build Jurassic Park, except there are no dinosaurs, they all went extinct. Maybe we can use horses instead, nobody will notice ... probably. Most security leaders and security conferences have been the same people saying the same things for the last ten years. If any of it worked even a little, I think we'd have noticed by now.

If you pay attention to the new hip ideas around development and security you've probably heard of DevSecOps, Rugged DevOps, SecDevOps, and a few more. They may be pitched as different things, but really it should all just be called "DevOps". We're in the middle of disruptive change, and a lot of the old ideas and ways don't make sense anymore. Security is pretty firmly entrenched in 2004. Security isn't a special snowflake, it's not magic, and it shouldn't be treated like it's somehow outside the business. Security should just exist the same way electricity or internet access does. If you write software, having a special security step makes as much sense as having a special testing step. You used to have testing as a step; you don't anymore because it's just a part of the workflow.

I've asked the question in the past "where are all the young security people?" I think I'm starting to figure this out. There are very few because nobody wants to join an industry that is being disrupted (at least nobody smart) and let's face it, security is seen as a problem, not a solution. The only real reason it's getting attention lately is because we've done a bad job in the past so everything is on fire now. If you want to really scare someone to death, pull out the line "I'm from security and I'm here to help". You aren't really, you might think you are, but they know better.

Comment on Twitter