Sunday, April 16, 2017

Crawl, Walk, Drive

It's that time of year again. I don't mean the time when all the government secrets are leaked onto the Internet by some unknown organization. I mean the time of year when it's unsafe to cross streets or ride your bike, at least in the United States. It's possible more civilized countries don't have this problem. I enjoy getting around without a car, but I feel like the number of near misses has gone up a fair bit, and it's always a person much younger than me with someone much older than them in the passenger seat. At first I didn't think much about this and just dreamed of how self-driving cars will rid us of the horror that is human drivers. After the last near fatality while crossing the street, it dawned on me that now is the time of year when all the kids have their driving learner's permits. I do think I preferred not knowing this, since now I know my adversary. It has a name, and that name is "youth".

For those of you who aren't familiar with how this works in the US: essentially, after less training than is given to a typical volunteer, a young person, generally around the age of 16, is given the ability to drive a car, on real streets, as long as there is a "responsible adult" in the car with them. We know this is impossible, as all humans are terribly irresponsible drivers. They then spend a few months almost getting in accidents, take a proper test administered by someone who has one of the few jobs worse than IT security, and generally end up with a real driver's license, ensuring we never run out of terrible human drivers.

There are no doubt a ton of stories that could be told here about mentorship, learning, encouraging, leadership, or teaching. I'm not going to talk about any of that today. I think often about how we raise up the next generation of security goons, but I'm tired of talking about how we're all terrible people and nobody likes us, at least for this week.

I want to discuss the challenges of dealing with someone who is very new, very ambitious, and very dangerous. There are always going to be "new" people in any group or organization. Eventually they learn the rules they need to know, generally because they screw something up and someone yells at them about it. Goodness knows I learned most everything I know like this. But the point is, as security people, we not only have to do some yelling, we have to keep things in order while the new person is busy making a mess of everything. The yelling can help make us feel better, but we still have to ensure things can't go too far off the rails.

In many instances the new person will have some sort of mentor, who will of course try to keep them on task and learning useful things. But just like the parent of our student driver, the mentor probably spends more time gaping in terror than teaching anything useful. If things really go crazy you can blame them someday, but at the beginning they're just hanging on, trying not to soil themselves in an attempt to stay composed.

This brings us back to the security group. If you're in a large organization, every day is "new person screwing something up" day. I can't even begin to imagine what it must be like at a public cloud provider, where you not only have new employees but customers whose behavior is basically ongoing risk. The solution to this problem is the same as our student driver problem: stop letting humans operate the machines. I'm not talking about the new people, I'm talking about the security people. If you don't make heavy use of automation, if you're not aggregating logs and having algorithms look for problems, for example, you've already lost the battle.
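
To make that concrete, here's a minimal sketch of the kind of automation I mean. The log format, regex, and threshold are all invented for illustration; a real detector would sit on top of whatever aggregation pipeline you already run.

```python
# Toy anomaly check: count failed logins per source address and flag
# outliers. Everything here (log format, threshold) is illustrative.
import re
from collections import Counter
from statistics import mean, stdev

LOG_LINES = [
    "Apr 16 03:12:01 host sshd[412]: Failed password for root from 203.0.113.9",
    "Apr 16 03:12:02 host sshd[412]: Failed password for root from 203.0.113.9",
    "Apr 16 03:15:44 host sshd[518]: Failed password for alice from 198.51.100.7",
    # in real life these come from your log aggregator, not a list
]

failures = Counter()
for line in LOG_LINES:
    match = re.search(r"Failed password for \S+ from (\S+)", line)
    if match:
        failures[match.group(1)] += 1

counts = list(failures.values())
# Flag anything more than two standard deviations above the mean,
# with a fixed floor so tiny samples don't produce nonsense.
threshold = mean(counts) + 2 * stdev(counts) if len(counts) > 1 else 10

for ip, n in failures.items():
    if n > threshold:
        print(f"possible brute force from {ip}: {n} failures")
```

The algorithm doesn't have to be clever; it just has to be awake at 3 AM when the humans aren't.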

Humans in general are bad at repetitive, boring tasks. Driving falls under this category, and a lot of security work does too. I touched on the idea of measuring what you do in my last post; I'm going to tie these together in the next post. We do a lot of things that don't make sense if we measure them, but we struggle to measure security. I suspect part of the reason is that for a long time we were the passenger riding with the student driver. If we emerged at the end of the ride alive, we were mostly happy.

It's time to become the group building the future of cars, not the one waiting for a horrible crash to happen. The only way we can do that is if we start to understand and measure what works and what doesn't: everything from ROI to how effective our policies and procedures are. Make sure you come back next week, assuming I'm not run down by a student driver before then.

Monday, April 10, 2017

The obvious answer is never the secure answer

One of the few themes that comes up time and time again when we talk about security is how bad people tend to be at understanding what's actually going on. This isn't really anyone's fault; we're expecting people to go against millions of years of evolution that shaped our behaviors. Most security problems revolve around the human being the weak link and doing something that is completely expected and completely wrong.

This brings us to a news story I ran across that reminded me of how bad humans can be at dealing with actual risk. It seems that peanut-free schools don't work. Most people would expect a school that bans peanuts to have fewer peanut-related incidents than a school that doesn't. This seems like a no-brainer, but if there's anything I've learned from doing security work for as long as I have, it's that the obvious answer is always wrong.

The report does have a useful nugget in it: a peanut-free table at lunch does seem to work. I suspect this is different from a full-on ban; in this case the kids who are sensitive to peanuts sit at a table where everyone knows peanuts are bad. There is of course a certain amount of social stigma that comes with having to sit at a special table, but I suspect anyone reading this often sat alone at school lunch for a very different reason ;)

This is similar to Portugal decriminalizing all drugs and ending up with one of the lowest overdose rates in Europe. It seems logical that if you want fewer drugs, you make them illegal. It doesn't make sense to our brains that if you want fewer drug problems, you stop treating users as criminals. There are countless other examples of reality being totally backwards from what we think should be true.

So that brings us to security. There are lessons in stories like these, and the lesson isn't to do the opposite of what makes sense. The lesson is to use real data to make decisions. If you think something is true and you can't prove it either way, you could be making decisions that actually hurt instead of help. It's a bit like the scientific method: you have a hypothesis, you test it, then you either update your hypothesis and try again or you end up with evidence.
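
As a toy illustration of the "test it" step, with completely invented numbers: suppose you ran a phishing simulation before and after a training program and want to know whether the click rate actually changed, rather than whether it feels like it changed. A simple chi-squared test (here via scipy, an assumption of the sketch) is enough to keep you honest:

```python
# Did the training change anything, or does it just feel like it did?
# All numbers are made up for illustration.
from scipy.stats import chi2_contingency

before = [45, 155]  # 45 of 200 clicked the phish before training
after = [30, 170]   # 30 of 200 clicked after training

chi2, p_value, dof, expected = chi2_contingency([before, after])
print(f"p = {p_value:.3f}")
if p_value < 0.05:
    print("The change is probably real; keep the training.")
else:
    print("No evidence the training helped; keep testing.")
```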

In the near future we'll talk about measuring things: how to do it, what's important, and why it will matter for solving your problems.

Sunday, April 2, 2017

The expectation of security

If you listen to my podcast (which you should be doing already), you heard the bit of a rant I had at the start of this week's episode about an assignment my son had over the weekend. He wasn't supposed to use any "screens" as part of a drug addiction lesson. I get where this lesson is going, but it's really had me thinking about the bigger idea of expectations and reality. This assignment is a great example of someone failing to understand that the world has changed around them.

What I mean is that expecting anyone to go without a "screen" for a weekend doesn't make sense. A substantial number of activities we do today rely on some sort of screen, because we've replaced less efficient ways of accomplishing tasks with these screens. Need to look something up? That's a screen. What's the weather? Screen. News? Screen. Reading a book? Screen!

You get the idea. We've replaced a large number of books and papers with a screen. But this is a security blog, so what's the point? The point is I see a lot of similarities with a lot of security people. The world has changed quite a bit over the last few years, and I feel like a number of our rules make about as much sense as thinking a weekend without a screen is some sort of learning experience. I bet we can all think of security people who think it's still 1995; if you don't know any, you might be that person (time for some self-reflection).

Let's look at some examples.

You need to change your password every 90 days.
Few people think this is a good idea anymore; even the NIST guidance now says forced rotation does more harm than good. I still hear it come up on a regular basis though. Password guidance has changed a lot over the last few years, but most people seem to be stuck somewhere between five and ten years ago.
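
The newer guidance (NIST SP 800-63B) is to stop forcing rotation and instead screen passwords against lists of known-compromised ones. As a hedged sketch of what that can look like, something like the Pwned Passwords range API works well, and thanks to its k-anonymity design only the first five characters of the hash ever leave your machine (the sketch assumes the requests library and network access):

```python
# Sketch: reject passwords that appear in a known-breach corpus,
# instead of forcing users to rotate them every 90 days.
import hashlib
import requests

def is_compromised(password: str) -> bool:
    """Query the Pwned Passwords range API. Only the first five hex
    characters of the SHA-1 digest are sent over the wire."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if __name__ == "__main__":
    print(is_compromised("password123"))  # almost certainly True
```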

If we put it behind the firewall we don't have to worry about securing it.
Remember when firewalls were magic? Me neither. There was a time, from roughly 1995 to 2007 or so, when a lot of people thought firewalls were magic. More recently the concept of zero trust networking has become a real thing: you shouldn't trust your network, because it's probably already compromised.

Telling someone they can't do something because it's insecure.
Remember when we used to talk about how security is the industry of "no"? That's not true anymore, because now when you tell someone "no" they just go to Amazon, buy $2.38 worth of computing, and do whatever it is they need to get done. Shadow IT isn't the problem; it's the solution to the problem that was the security people. It's fairly well accepted by the new trailblazers that "no" isn't an option; the only option is to work together to minimize risk.

I could probably build an enormous list of examples like this. The point is that everything changes, and we should always be asking ourselves whether something still makes sense. It's very easy for us to decide change is dangerous and scary. I would argue that not understanding the new security norms is actually more dangerous than having no security knowledge at all. This is probably one of the few industries where old knowledge may be worse than no knowledge. Imagine if your doctor was using the best ideas and tools from 1875. You'd almost certainly find a new doctor. Stale password policies and magic firewalls are our version of bloodletting and leeches. We have a long way to go, and I have no doubt we all have something to contribute.

Monday, March 27, 2017

Remember kids, if you're going to disclose, disclose responsibly!

If you pay any attention to the security universe, you're aware that Tavis Ormandy is basically on fire right now with his security research. He found the Cloudflare data leak issue a few weeks back and is currently going to town on LastPass. The LastPass crew seems to be dealing with this pretty well; I'm not seeing a lot of complaining, mostly just info and fixes, which is the right way to handle these things.

There are, however, a bunch of people complaining about how Tavis, and Google Project Zero in general, tend to disclose issues. These people are wrong. I've been on the receiving end, and it's not fun, but as crazy as it may seem from the outside, the Project Zero crew knows what they're doing.

First, let's get two things out of the way.

1) If nobody is complaining about what you're doing, you're not doing anything interesting (Tavis is clearly doing very interesting things).

2) Disclosure is hard and there isn't a perfect solution. What Project Zero does may seem heartless to some, but it's currently the best way we have. The alternative is an abusive relationship.

A long time ago I was a vendor receiving security reports from Tavis, and I won't lie, it wasn't fun. I remember complaining and trying to slow things down to a pace I thought was more reasonable. Few of us have any extra time, and a new vulnerability report means there's extra work to do. Sometimes a report isn't very detailed or lacks important information. The proposed disclosure date may not line up with product schedules. You could already be working on another, more important issue. There are lots of reasons to dread dealing with these issues as a vendor.

All that said, it's still OK to complain, and every now and then the criticism is good. We should always be thinking about how we do things; what makes sense today won't make sense tomorrow. The way Google Project Zero does disclosure today would have seemed pretty crazy even five years ago. Now it's how things have to work. The world moves very fast now, and as we've seen from various document dumps over the last few years, there are no secrets. If you think you can keep a security issue quiet for a year, you are sadly mistaken. It's possible that was once true (I suspect it never was, but that's another conversation); either way, it's not true anymore. If you know about a security flaw, it's quite likely someone else does too, and once you start talking to another group about it, the odds of a leak grow at an alarming rate.

The way things used to work is changing rapidly. Anytime there is change, there are always the trailblazers and laggards. We know we can't develop secure software, but we can respond quickly. Spend time where you can make a difference, not chasing the mythical perfect solution.

If your main contribution to society is complaining, you should probably rethink your purpose.

Thursday, March 23, 2017

Inverse Law of CVEs

I've started a project to put the CVE data into Elasticsearch and see if there is anything clever we can learn from it. Even if there isn't anything overly clever, it's fun to do. And I get to make pretty graphs, which everyone likes to look at.
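
The core of the project is not much more than the sketch below. The file and field names are illustrative rather than any particular feed's schema, and the exact client calls vary by Elasticsearch version, so treat this as the shape of the thing, not gospel:

```python
# Sketch: bulk-load CVE records into Elasticsearch, then ask for a
# per-year histogram. Assumes a local node and a JSON file shaped like
# [{"id": "CVE-2017-0001", "published": "2017-01-05", ...}, ...]
import json
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

with open("cve_records.json") as f:
    records = json.load(f)

helpers.bulk(es, (
    {"_index": "cve", "_id": rec["id"], "_source": rec}
    for rec in records
))
es.indices.refresh(index="cve")

# One date histogram aggregation produces the "CVEs per year" graphs.
resp = es.search(index="cve", size=0, aggs={
    "per_year": {"date_histogram": {"field": "published", "calendar_interval": "year"}}
})
for bucket in resp["aggregations"]["per_year"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```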

I stuck a few of my early results on Twitter because it seemed like a fun thing to do. One of the graphs I put up was comparing the 3 BSDs.

[Figure: per-year CVE counts for FreeBSD, OpenBSD, and NetBSD]
You can see that none of these graphs has enough data to really draw any conclusions from; again, I did this for fun. I did get one response claiming NetBSD is the best because its graph is the smallest. I've actually heard this argument a few times over the past month, so I decided it's time to write about it, especially since I'm sure I'll find many more examples like this while I'm weeding through this mountain of CVE data.

Let's make up a new law; I'll call it the "Inverse Law of CVEs". It goes like this: "The fewer CVE IDs something has, the less secure it is."

That doesn't make sense to most people. If you have something that is bad, fewer bad things is certainly better than more bad things. This is generally true for physical concepts our brains can understand: less crime is good, fewer accidents are good. When it comes to how many CVE IDs your project or product has, this idea gets turned on its head. Fewer is probably bad when we think about CVE IDs. There's probably a line somewhere where, once you cross it, things flip back to bad (wait until I get to PHP). We'll call that the security Maginot Line, because bad security decided to sneak in through the north.

If you have something with very, very few CVE IDs, it doesn't mean it's secure; it means nobody is looking for security issues. If something is used by a large, diverse set of users, it will get more bug reports (some of which will be security bugs) and more security attention from both good guys and bad guys, because it's a bigger target. If something has very few users, it's quite likely not much security attention has been paid to it. I suspect what the above graphs really mean is that FreeBSD is more popular than OpenBSD, which is more popular than NetBSD. Random internet searches seem to back this up.

I'm not entirely sure what to do with all this data yet; part of the fun is figuring out how to classify it all. I'm not a data scientist, so there will be much learning. If you have any ideas, by all means let me know, I'm quite open to suggestions. Once I have better data I may try to find the point at which a project has enough CVE IDs to be considered on the right path, and the point at which it has so many it has crossed over to the bad place.

Sunday, March 12, 2017

Security, Consumer Reports, and Failure

Last week there was a story about Consumer Reports doing security testing of products.


As one can imagine, there were a fair number of “they’ll get it wrong” comments. They will get it wrong at first, but that’s no reason to pick on them. They’re quite brave to take this task on; it’s a nearly impossible job if you think about the state of security (especially consumer security). But this is how things start. No industry has gone from broken to perfect in one step. It’s a long, hard road when you have to deal with systemic problems in an industry. Consumer product security problems may be larger and more complex than anything any other industry has had to solve, thanks to things such as globalization and how inexpensive tiny computers have become.

If you think about the auto industry, you’re talking about a product that costs thousands of dollars. Safety is easy to justify, since its cost will be small relative to the overall cost of the vehicle. Now think about tiny computing devices: you could be talking about chips that cost less than one dollar. If the cost of security and safety exceeds the initial cost of the computing hardware, it can be impossible to justify. If adding security doubles the cost of something, manufacturers will try very hard to find ways around including such features. There are always bizarre technicalities that can help avoid regulation; groups like Consumer Reports help with accountability.

Here is where Consumer Reports and other testing labs will be incredibly important. Even if a manufacturer chooses to ignore regulation, a group like Consumer Reports can still review the product. Consumer Reports will get things very wrong at first; sometimes it will be hilariously wrong. But that’s OK, it’s how everything starts. If you look back at safety and security in any consumer space, it took a long time, sometimes decades, to get right. Cybersecurity will be no different; it’s going to take a long time just to understand the problem.

Our default reaction to mistakes is often ridicule, and this is one of those times we have to be mindful of how dangerous that attitude is. If we see a group trying to do the right thing but getting it wrong, we need to offer advice, not mockery. If we don’t engage in a useful and serious way, nobody will take us seriously. There are a lot of smart security folks out there; we can help make the world a better place this time. Things can look hopeless and horrible sometimes, but they will get better. It’ll take time, and it won’t be easy, but things will get better thanks to efforts such as this one.

Thursday, March 2, 2017

What the Oscars can teach us about security

If you watched the 89th Academy Awards, you saw a pretty big mistake at the end of the show. The short story: Warren Beatty was handed the wrong envelope, he opened it, looked at it, then gave it to Faye Dunaway to read, which she did. The wrong people came on stage and started giving speeches, confused scrambling happened, and the correct winner was brought on stage. No doubt this will be talked about for years to come as one of the most interesting and exciting moments in the history of the awards ceremony.

People make mistakes, and we won’t dwell on how the wrong envelope made it into the presenter’s hands; the details of how this error came to be aren’t what’s important for this discussion. The important lesson is to watch Warren Beatty’s behavior. He clearly knew something was wrong; if you watch the video, you can tell things aren’t right. But he just kept going, gave the card to Faye Dunaway, and she read the name of the movie on the card. These people aren’t young amateurs, they’re seasoned actors. It’s not their first rodeo. So why did this happen?

The lesson for us all is to understand that when things start to break down, people will fall back to their instincts. The presenters knew their job was to open the card and read the name. Their job wasn’t to think about it or question what they were handed. As soon as they knew something was wrong, they went on autopilot and did what was expected. This happens with computer security all the time. If people get a scary phishing email, they will often go into autopilot and do things they wouldn’t do if they kept a level head. Most attackers know how this works and they prey on this behavior. It’s really easy to claim you’d never be so stupid as to download that attachment or click on that link, but you’re not under stress. Once you’re under stress, everything changes.

This is why police, firefighters, and soldiers get a lot of training. You want these people to do the right thing when they enter autopilot mode. As soon as a situation starts to get out of hand, training kicks in and these people will do whatever they were trained to do without thinking about it. Training works, there’s a reason they train so much. Most people aren’t trained like this so they generally make poor decisions when under stress.

So what should we take away from all this? The thing we as security professionals need to keep in mind is how this behavior works. If you have a system that isn’t essentially “secure by default”, anytime someone finds themselves under mental stress they’re going to take the path of least resistance. If that path of least resistance is also something dangerous, you’re not designing for security. Even security experts have this problem; we don’t have superpowers that let us make good choices in times of high stress. It doesn’t matter how smart you think you are: when you’re under a lot of stress you will go into autopilot, and you will make bad choices if bad choices are the defaults.