Sunday, April 30, 2017

Security fail is people

The other day I ran across someone trying to keep their locker secured with a combination lock. As you can see in the picture, the lock is on the handle of the locker, not on the loop that actually locks the door. When I saw this I had a good chuckle, took a picture, and put out a snarky tweet. Then I started to think about it quite a bit. Is this the user's fault, or is this bad design? I'm going to blame bad design on this one. It's easy to blame users, and we do it often, but I think in most instances the problem is the design, not the user. If nothing is ever our fault, we will never improve anything. I suspect this is part of the problem we see across the cybersecurity universe.

On Humans

One of the great truths I'm starting to understand as I deal with humans more and more is that the one thing we all have in common is waves of unpredictability. Sometimes we pay very close attention to our surroundings and situations; sometimes we don't. We can be distracted by someone calling our name, by something that happened earlier in the day, or even by something that happened years ago. If you think you pay very close attention to everything at all times, you're fooling yourself. We are squishy bags of confusing emotions that don't always make sense.

In the above picture, I can see a number of ways this happens. Maybe the person was very old and couldn't see; I have bad eyesight and could see this happening. Maybe they were talking to a friend and didn't notice where they put the lock. What if they dropped their phone moments before putting the lock on the door? Maybe they're just a clueless idiot who can't use locks! Well, not that last one.

This example is bad design. Why is there a handle that can hold a lock directly above the loop that is supposed to hold the lock? I can think of a few ways to solve this. The handle could be something other than a loop; a pull knob would be a lot harder to screw up. The handle could be farther up, or farther down. The loop could be larger or in a different place. No matter how you solve it, this is just a bad design. But we blame the user. We get a good laugh at a person making a simple mistake. Someday we'll make a simple mistake of our own and blame the bad design; it's human nature to find someone or something else to blame.

The question I keep wondering is: did whoever designed this door think about security in any way? Do you think they wondered how the system could and would fail? How it would be misused? How it could be broken? In this case I doubt anyone was thinking about security failures for the door to a locker; it's just a locker. They probably told the intern to go draw a rectangle and put a handle on it. If I could find the manufacturer and tell them about this, would they listen? I'd probably get pushed into the "crazy old kook" queue. You can even wonder if anyone really cares about locker security.

Wrapping up a post like this is always tricky. I could give advice about secure design, or tell everyone they should consult with a security expert. Maybe the answer is better user education (haha no). I think I'll target this at the security people who see something like this, take a picture, then write a tweet about how stupid someone is. We can use examples like this to learn and shape our own way of thinking. It's easy to use snark when we see something like this. The best thing we can do is make note of what we see, think about how this could have happened, and someday use it as an example to make something we're building better. We can't fix the world, but we can at least teach ourselves.

Monday, April 24, 2017

I have seen the future, and it is bug bounties


Every now and then I see something on a blog or Twitter about how you can't replace a pen test with a bug bounty. For a long time I agreed with this, but I've recently changed my mind. I know this isn't a super popular opinion (yet), and I don't think either side of the argument is exactly right. Fundamentally, the future of looking for issues will not be the pen test. It won't really be the bug bounty either, but I'm going to predict that pen testing will evolve into what we currently call bug bounties.

First let's talk about a pen test. There's nothing wrong with getting a pen test; I'd suggest everyone go through a few just to see what it's like. I want to be clear that I'm not saying pen testing is bad. I'm going to be making the argument for why it's not the future. It is the present, and many organizations require them for a variety of reasons. They will continue to be a thing for a very long time. If you can only pick one thing, you should probably choose a pen test today, as it's at least a known known. Bug bounties are still known unknowns for most of us.

I also want to clarify that internal pen testing teams don't fall under this post. Internal teams are far more focused and have special knowledge that an outside company never will. It's my opinion that an internal team is and will always be superior to an outside pen test or bug bounty. Of course a lot of organizations can't afford to keep a dedicated internal team, so they turn to the outside.

So anyhow, it's time for a pen test. You find a company to conduct it, and you scope what will be tested (it can't be everything). You agree on various timelines, then things get underway. After perhaps a week of testing, you have a very, very long and detailed report of what was found. Here's the thing about a pen test: you're paying someone to look for problems. You will get what you pay for; you'll get a list of problems, usually a huge list. Everyone knows that the bigger the list, the better the pen test! But here's the dirty secret. Most of the results won't ever be fixed. Most results will fall below your internal bug bar. You paid for a ton of issues, you got a ton of issues, then you threw most of them out. Of course it's quite likely there will be high priority problems found, which is great. Those are what you really care about, not all the unexciting problems that make up 95% of the report. What's your cost per issue fixed from that pen test?

Now let's look at how a bug bounty works. You find a company to run the bounty (it's probably not worth doing this yourself; there are a lot of logistics). You scope what will be tested. You can agree on certain timelines and/or payout limits. Then things get underway. Here's where it's very different though. You're paying for the scope of the bounty, and you will get what you pay for, so there is an aspect of control. If you're only paying for critical bugs, by definition, you'll only get critical bugs. Of course there will be a certain number of false positives. If I had to guess, it's similar to a pen test today, but it's going to decrease as these organizations figure out how to cut down on noise. I know HackerOne is doing some clever things to prevent noise.

My point with this whole post revolves around getting what you pay for: essentially a cost per issue fixed model instead of the current cost per issue found model. The real difference is that in the case of a bug bounty, you can control the scope of what comes in. In no way am I suggesting a pen test is a bad idea; I'm simply suggesting that a 200 page report isn't very useful. Of course if a pen test returned three issues, you'd probably be pretty upset when paying the bill. We all have finite resources, so naturally we can't and won't fix minor bugs. It's just how things work. Today, at best, you'll get about the same results from a bug bounty and a pen test, but I see a bug bounty as having room to improve. I don't think the pen test model is full of exciting innovation.
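To make the cost per issue fixed idea concrete, here's a minimal sketch in Python. Every number in it is a made-up illustrative assumption (the fees, payouts, finding counts, and fix rates), not real pricing data.

```python
# Toy comparison of "cost per issue found" vs "cost per issue fixed".
# Every number here is an illustrative assumption, not real pricing data.

def cost_per_issue(total_cost, issue_count):
    # Avoid dividing by zero if nothing was found or fixed.
    return total_cost / issue_count if issue_count else float("inf")

# Pen test: flat fee, a big report, most findings fall below the internal bug bar.
pen_test_cost = 40_000   # assumed flat engagement fee
pen_test_found = 200     # assumed findings in the report
pen_test_fixed = 5       # assumed findings that actually get fixed

# Bug bounty scoped to criticals: you only pay for bugs you intend to fix.
bounty_payout = 5_000    # assumed payout per accepted critical report
bounty_accepted = 10     # assumed accepted critical reports
bounty_cost = bounty_payout * bounty_accepted

print(f"Pen test:   ${cost_per_issue(pen_test_cost, pen_test_found):,.0f} per issue found, "
      f"${cost_per_issue(pen_test_cost, pen_test_fixed):,.0f} per issue fixed")
print(f"Bug bounty: ${cost_per_issue(bounty_cost, bounty_accepted):,.0f} per issue found, "
      f"${cost_per_issue(bounty_cost, bounty_accepted):,.0f} per issue fixed")
```

With numbers like these the pen test looks cheap per issue found and expensive per issue fixed, while the bounty's two numbers are the same because the scope only pays for what you actually care about. Swap in your own figures; the point is which denominator you use.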

All this said, not every product and company will be able to attract enough interest in a bug bounty. Let's face it, the real purpose behind all this is to raise the security profiles of everyone involved. Some organizations will have to use a pen-test-like model to get their products and services investigated. This is why the bug bounty program won't be a long-term viable option. There are too many bugs and not enough researchers.

Now for the bit about the future. In the near future we will see the pendulum swing from pen testing to bug bounties. The next swing of the pendulum, after bug bounties, will be automation. Humans aren't very good at digging through huge amounts of data, but computers are. What we're really good at, and computers are (currently) really bad at, is finding new and exciting ways to break systems. We once thought double free bugs couldn't be exploited. We didn't see a problem with NULL pointer dereferences. Someone once thought deserializing objects was a neat idea. I would rather see humans working on the future of security instead of exploiting the past. The future of the bug bounty can be new attack methods instead of finding bugs. We have some work to do; I've not seen an automated scanner that I'd even call "almost not terrible". It will happen though; tools always start terrible and get better through the natural march of progress. The road to this unicorn future passes through bug bounties. However, if we don't have automation ready on the other side, it's nothing but dragons.

Sunday, April 16, 2017

Crawl, Walk, Drive

It's that time of year again. I don't mean the time when all the government secrets are leaked onto the Internet by some unknown organization. I mean the time of year when it's unsafe to cross streets or ride your bike, at least in the United States. It's possible more civilized countries don't have this problem. I enjoy getting around without a car, but I feel like the number of near misses has gone up a fair bit, and it's always a person much younger than me with someone much older than them in the passenger seat. At first I didn't think much about this and just dreamed of how self-driving cars will rid us of the horror that is human drivers. After the last near fatality while crossing the street, it dawned on me that this is the time of year when all the kids have their learner's permits. I do think I preferred not knowing this, since now I know my adversary. It has a name, and that name is "youth".

For those of you who aren't familiar with how this works in the US: essentially, after less training than is given to a typical volunteer, a young person, generally around the age of 16, is given the ability to drive a car, on real streets, as long as there is a "responsible adult" in the car with them. We know this is impossible, as all humans are terribly irresponsible drivers. They then spend a few months almost getting into accidents, take a proper test administered by someone who has one of the few jobs worse than IT security, and generally end up with a real driver's license, ensuring we never run out of terrible human drivers.

There are no doubt a ton of stories that could be told here about mentorship, learning, encouraging, leadership, or teaching. I'm not going to talk about any of that today. I think often about how we raise up the next generation of security goons, but I'm tired of talking about how we're all terrible people and nobody likes us, at least for this week.

I want to discuss the challenges of dealing with someone who is very new, very ambitious, and very dangerous. There are always going to be "new" people in any group or organization. Eventually they learn the rules they need to know, generally because they screw something up and someone yells at them about it. Goodness knows I learned most everything I know like this. But the point is, as security people, we not only have to do some yelling, we have to keep things in order while the new person is busy making a mess of everything. The yelling can help make us feel better, but we still have to ensure things can't go too far off the rails.

In many instances the new person will have some sort of mentor. The mentor will of course try to keep them on task and learning useful things, but just like the parent of our student driver, they probably spend more time gaping in terror than they do teaching anything useful. If things really go crazy you can blame them someday, but at the beginning they're just hanging on, trying not to soil themselves in an attempt to stay composed.

This brings us back to the security group. If you're in a large organization, every day is "new person screwing something up" day. I can't even begin to imagine what it must be like at a public cloud provider, where you not only have new employees but all of your customers are basically engaged in ongoing risky behavior. The solution to this problem is the same as our student driver problem: stop letting humans operate the machines. I'm not talking about the new people, I'm talking about the security people. If you don't make heavy use of automation, if you're not, for example, aggregating logs and having algorithms look for problems, you've already lost the battle.
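As one small illustration of what "algorithms looking for problems" can mean, here's a minimal sketch that scans an aggregated authentication log and flags sources with a suspicious number of failed logins. The log format, field layout, and threshold are assumptions made purely for the example; a real pipeline would sit on top of whatever log aggregation you already have.

```python
# Minimal sketch: flag source IPs with an unusual number of failed logins.
# The log format ("<timestamp> <result> <user> <source_ip>") and the threshold
# are assumptions for this example, not a real schema.
from collections import Counter

def flag_suspicious_sources(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed login attempts."""
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 4 and parts[1] == "FAIL":
            failures[parts[3]] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

logs = [
    "2017-04-16T10:01:02 FAIL alice 198.51.100.7",
    "2017-04-16T10:01:05 FAIL alice 198.51.100.7",
    "2017-04-16T10:01:09 FAIL alice 198.51.100.7",
    "2017-04-16T10:01:14 FAIL alice 198.51.100.7",
    "2017-04-16T10:01:20 FAIL alice 198.51.100.7",
    "2017-04-16T10:02:00 OK bob 203.0.113.9",
    "2017-04-16T10:03:00 FAIL carol 203.0.113.12",
]
print(flag_suspicious_sources(logs))  # ['198.51.100.7']
```

The point isn't this particular rule; it's that a boring, repetitive check like this runs every minute without getting distracted, which is exactly what the humans in the passenger seat can't do.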

Humans in general are bad at repetitive, boring tasks. Driving falls under this category, and a lot of security work does too. I touched on the idea of measuring what you do in my last post; I'm going to tie these together in the next post. We do a lot of things that don't make sense if we measure them, but we struggle to measure security. I suspect part of the reason is that for a long time we were the passengers riding along with the student drivers. If we emerged at the end of the ride alive, we were mostly happy.

It's time to become the group building the future of cars, not the one waiting for a horrible crash to happen. The only way we can do that is if we start to understand and measure what works and what doesn't, everything from ROI to how effective our policies and procedures are. Make sure you come back next week. Assuming I'm not run down by a student driver before then.

Monday, April 10, 2017

The obvious answer is never the secure answer

One of the few themes that comes up time and time again when we talk about security is how bad people tend to be at understanding what's actually going on. This isn't really anyone's fault; we're expecting people to go against what is essentially millions of years of evolution that created our behaviors. Most security problems revolve around the human being the weak link and doing something that is completely expected and completely wrong.

This brings us to a news story I ran across that reminded me of how bad humans can be at dealing with actual risk. It seems that peanut-free schools don't work. I think most people would expect a school that bans peanuts to have fewer peanut-related incidents than a school that doesn't. This seems like a no-brainer, but if there's anything I've learned from doing security work for as long as I have, it's that the obvious answer is always wrong.

The report does have a nugget of info in it where they point out that having a peanut-free table at lunch seems to work. I suspect this is different from a full-on ban; in this case you have the kids who are sensitive to peanuts sit at a table where everyone knows peanuts are bad. There is of course a certain amount of social stigma that comes with having to sit at a special table, but I suspect anyone reading this often sat alone at school lunch for a very different reason ;)

This is similar to Portugal decriminalizing all drugs and ending up with one of the lowest overdose rates in Europe. It seems logical that if you want fewer drug problems you make drugs illegal. It doesn't make sense to our brains that if you want fewer drug problems you stop punishing drug use. There are countless other examples of reality being totally backwards from what we think should be true.

So that brings us to security. There are lessons in stories like these, and the lesson isn't to do the opposite of whatever makes sense. The lesson is to use real data to make decisions. If you think something is true and you can't prove it either way, you could be making decisions that are actually hurting instead of helping. It's a bit like the scientific method: you have a hypothesis, you test it, and then you either update your hypothesis and try again or you end up with evidence that it holds.
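To make that loop concrete, here's a toy sketch of testing a hypothesis about a security control with data instead of intuition. The control, the groups, and every count below are made up purely for illustration.

```python
# Toy example: does a control actually reduce incidents, or do we just assume it does?
# All counts below are made up for illustration.

def incident_rate(incidents, population):
    return incidents / population

# Hypothesis: the control (a policy, a ban, a tool) lowers the incident rate.
with_control = {"incidents": 12, "population": 1000}     # assumed observations
without_control = {"incidents": 9, "population": 1000}   # assumed observations

rate_with = incident_rate(**with_control)
rate_without = incident_rate(**without_control)

print(f"With control:    {rate_with:.1%}")
print(f"Without control: {rate_without:.1%}")

if rate_with < rate_without:
    print("Data is consistent with the hypothesis; keep testing.")
else:
    print("Data does not support the hypothesis; update it and try again.")
```

The interesting part isn't the arithmetic; it's being willing to take the second branch when the numbers land there, the way the peanut study did.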

In the near future we'll talk about measuring things: how to do it, what's important, and why it will matter for solving your problems.

Sunday, April 2, 2017

The expectation of security

If you listen to my podcast (which you should be doing already), I had a bit of a rant at the start this week about an assignment my son had over the weekend. He wasn't supposed to use any "screens", which was part of a drug addiction lesson. I get where the lesson is going, but it really got me thinking about the bigger idea of expectations and reality. This assignment is a great example of someone failing to understand that the world has changed around them.

What I mean is that expecting anyone to go without a "screen" for a weekend doesn't make sense. A substantial number of the activities we do today rely on some sort of screen, because we've replaced less efficient ways of accomplishing tasks with these screens. Need to look something up? That's a screen. What's the weather? Screen. News? Screen. Reading a book? Screen!

You get the idea. We've replaced a large number of books and papers with a screen. But this is a security blog, so what's the point? The point is that I see a lot of similarities with a lot of security people. The world has changed quite a bit over the last few years, and I feel like a number of our rules come from the same place as thinking that a weekend without a screen is some sort of learning experience. I bet we can all think of security people we know who think it's still 1995; if you don't know any, you might be that person (time for some self-reflection).

Let's look at some examples.

You need to change your password every 90 days.
Few people think this is a good idea anymore; even the NIST guidance now says not to do it. I still hear it come up on a regular basis though. Password guidance has changed a lot over the last few years, but most people seem to be stuck somewhere between five and ten years ago.

If we put it behind the firewall we don't have to worry about securing it.
Remember when firewalls were magic? Me neither. There was a time, from probably 1995 to 2007 or so, when a lot of people thought firewalls were magic. Very recently the concept of zero trust networking has become a real thing: you shouldn't trust your network, because it's probably compromised.

Telling someone they can't do something because it's insecure.
Remember when we used to talk about how security is the industry of "no"? That's not true anymore, because now when you tell someone "no" they just go to Amazon, buy $2.38 worth of computing, and do whatever it is they need to get done. Shadow IT isn't the problem; it's the solution to a problem, and that problem was the security people. It's fairly well accepted by the new trailblazers that "no" isn't an option; the only option is to work together to minimize risk.

I could probably build an enormous list of examples like this. The whole point is that everything changes, and we should always be asking ourselves whether something still makes sense. It's very easy for us to decide change is dangerous and scary. I would argue that not understanding the new security norms is actually more dangerous than having no security knowledge at all. This is probably one of the few industries where old knowledge can be worse than no knowledge. Imagine if your doctor were using the best ideas and tools from 1875. You'd almost certainly find a new doctor. Password policies and firewalls are our version of bloodletting and leeches. We have a long way to go, and I have no doubt we all have something to contribute.