Saturday, June 17, 2017

Thought leaders aren't leaders

For the last few weeks I've seen news stories and much lamenting on Twitter about the security skills shortage. Some say there is no shortage, some say it's horrible beyond belief. Basically there's someone arguing every possible side of this. I'm not going to debate whether there is or isn't a worker shortage; that's not really the point. A lot of the complaining was done by people who would call themselves leaders in the security universe. I then read the article below and changed my thinking a bit.


Our problem isn't a staff shortage. Our problem is that we don't have any actual leaders. I mean people who aren't just "in charge". Real leaders aren't just in charge; they help their people grow in a way that accomplishes a vision. Virtually everyone in the security space has spent their entire career working alone to learn new things. We are not an industry known for working together, and the thing I'd never really thought about before is that if we never work together, we never really care about anyone or anything (except ourselves). The security people who are in charge of other security people aren't motivating anyone, which by definition means they're not accomplishing any sort of vision. This holds true for most organizations, since barely keeping the train on the track is pretty much the best case scenario.

If I had to guess, the existing HR people look at most security groups and see the same dumpster fire we see when we look at IoT.

In the industry today virtually everyone who is seen as some sort of security leader is what a marketing person would call a "thought leader". Thought leaders aren't leaders. Some do have talent. Some had talent. And some just own a really nice suit. It doesn't matter though. What we end up with is a situation where the only thing anyone worries about is how many Twitter followers they have instead of making a real difference. You make a real difference when you coach and motivate someone else to do great things.

Being a leader with loyal employees would be a monumental step for most organizations. We have no idea whom to hire or how to teach them, because the leaders don't know how to do those things either. Those are skills real leaders have, and skills real leaders develop in their people. I suspect the HR department knows what's wrong with the security groups. They also know we won't listen to them.

There is a security talent shortage, but it's a shortage of leadership talent.

Sunday, June 11, 2017

Humanity isn't proactive

I ran across this article about IoT security the other day

The US Needs to Get Serious About Securing the Internet of Hackable Things

I find articles like this frustrating for the simple fact that everyone keeps talking about security, but nobody is going to do anything. If you look at the history of humanity, we've never been proactive when dealing with problems. We wait until things can't get worse and the only actual option is to fix the problem. For every problem there are at least two options. Option #1 is always "fix it". Option #2 is to ignore it. There could be more options, but generally we pick #2 because it's the least amount of work in the short term. Humanity rarely cares about the long term implications of anything.

I know this isn't popular, but I'm going to say it: we aren't going to fix IoT security for a very long time.

I really wish this wasn't true, but it just is. If a senator wants to pretend they're doing something while really just ignoring the problem, they hold a hearing and talk about how horrible something is. If they actually want to fix it, they propose legislation. I'm not blaming anyone in charge, mind you. They're really just doing what they think the people want. If we want the government to fix IoT, we have to tell them to do it. Most people don't really care because they don't have a reason to care.

Here's the second point that I suspect many security people won't want to hear. The reason nobody cares about IoT security isn't because they're stupid. That's the narrative we've been telling ourselves for years. They don't care because the cost of doing nothing is substantially less than the cost of fixing IoT security. We love telling scary campfire stories about how the botnet was coming from inside the house and how a pacemaker will kill grandpa, but the reality is there hasn't been enough real damage done yet by insecure IoT. I'm not saying there won't ever be; there just hasn't been enough expensive, widespread damage done yet to make anyone really care.

In a world filled with insecurity, adding security to your product isn't a feature anyone really cares about. I've been doing research on topics such as pollution, mine safety, auto safety, airline safety, and a number of other problems from our past. There are no good examples of humans deciding to be proactive and solve a problem before it became absolutely horrible. People need a reason to care, and there isn't a reason for IoT security.

Yet.

Someday something might happen that makes people start to care. As we add compute power to literally everything my security brain says there is some sort of horrible doom coming without security. But I've also been saying this for years and it's never really happened. There is a very real possibility that IoT security will just never happen if things never get bad enough.

Sunday, June 4, 2017

Free Market Security

I've been thinking about the concept of free market forces this weekend. The basic idea here is that the price of a good is decided by the supply and demand of the market. If the market demands something that's in short supply, the price will go up. This is basically why the Nintendo Switch is still selling on eBay for more than it would cost in the store. There is a demand but there isn't a supply. But back to security. Let's think about something I'm going to call "free market security". What if demand and supply were driving security? Or we can flip the question around: what if the market will never drive security?

Of course security isn't really a thing in the way we think of goods and services in this context. At best we could call it a feature of another product. You can't buy security and add it to your products; it's just sort of something that happens as part of a larger system.

I'm leaning in the direction of secure products. Let's pick on mobile phones because that environment is really interesting. Is the market driving security into phones? I'd say the answer today is a giant "no". Most people buy phones that will never see a security update. They don't even ask about updates or security in most instances. You could argue they don't know this is even a problem.

Apple is the leader here by a wide margin. They have invested substantially into security, but why did they do this? If we want to think about market forces and security, what's the driver? If Apple phones were less secure would the market stop buying them? I suspect the sales wouldn't change at all. I know very few people who buy an iPhone for the security. I know zero people outside of some security professionals who would ever think about this question. Why Apple decided to take these actions is a topic for another day I suspect.

Switching gears, the Android ecosystem is pretty rough in this regard. The vast majority of phones sold today are Android phones: competitively priced, all with similar hardware, and almost all of them completely insecure. People still buy them though. Security is clearly not a feature that's driving anything in this market. I bought a Nexus phone because of security, this one single feature. I am clearly not the norm here though.

The whole point we should be thinking about is the idea of a free market for security. It doesn't exist, and it probably won't exist. I see it like pollution. There isn't a very large market for products that either don't pollute or are made without polluting in some way. I know there are some people who worry about sustainability, but the vast majority of consumers don't really care. In fact nobody really cared about pollution until a river actually lit on fire. There are still some who don't, even after a river lit on fire.

I think there are many of us in security who keep waiting for demand to appear for more security. We keep watching and waiting, any day now everyone will see why this matters! It's not going to happen though. We do need security more and more each day. The way everything is heading, things aren't looking great. I'd like to think we won't have to wait for the security equivalent of a river catching on fire, but I'm pretty sure that's what it will take.

Monday, May 29, 2017

Stealing from customers

I was having some security conversations last week and cybersecurity insurance came up as a topic. This isn't overly unusual as it's a pretty popular topic, but someone said something that really got me thinking.
What if the insurance covered the customers instead of the companies?
Now I understand that many cybersecurity insurance policies can cover some amount of customer damage and loss, but fundamentally the coverage is for the company that is attacked; customers who have data stolen will maybe get a year of free credit monitoring or some other token service. That's all well and good, but I couldn't help thinking about this problem from another angle. Let's think about insurance in the context of shoplifting. For this thought exercise we're going to use a real store in our example. It won't be exactly correct, but the point is to think about the problem, not get all the minor details right.

If you're in a busy store shopping and someone steals your wallet, it's generally accepted that the store is not at fault for this theft. Most would put some effort into helping you, but at the end of the day you're probably out of luck if you expect the store to repay you for anything you lost. They almost certainly won't have insurance to cover the theft of customer property in their store.

Now let's also imagine there are things taken from the store, actual merchandise gets stolen. This is called shoplifting. It has a special name and many stores even have special groups to help minimize this damage. They also have insurance to cover some of these losses. Most businesses see some shoplifting as a part of doing business. They account for some volume of this theft when doing their planning and profit calculations.

In the real world, I suspect customers being robbed while in a store isn't very common. If there is a store that gains a reputation for customers having wallets stolen, nobody will shop there. If you visit a store in a rough part of town they might even have a security guard at the door to help keep the riffraff out. This is because no shop wants to be known as a dangerous place. You can't exist as a store with that sort of reputation. Customers need to feel safe.

In the virtual world, all that can be stolen is basically information. Sometimes that information can be equated to actual money, sometimes it's just details about a person. Some of it will have little to no value, like a very well known email address. Sometimes it can have a huge value, like a tax identifier that can be used to commit identity theft. It can be very, very difficult to know when information is stolen, and the value of the information taken can vary widely. We also seem to place very little value on our own information. Many people will trade it away for a trinket online worth a fraction of the information they just supplied.

Now let's think about insurance. Just like loss prevention insurance, cybersecurity insurance isn't there to protect customers. It exists to help protect the company from the losses of an attack. If customer data is stolen the customers are not really covered; in many instances there's nothing a customer can do. It can be impossible to prove your information was stolen, and even if it gets used somewhere else, can you prove it came from the business in question?

After spending some time on the question of what would happen if insurance covered the customers, I realize how hard this problem is to deal with. If real world customer theft isn't very common and it's basically not covered, there's probably no hope for information. It's very hard to prove things beyond a reasonable doubt, and many of our laws require actual harm to happen before any action can be taken. Proving this harm is very, very difficult. We're almost certainly going to need new laws to deal with these situations.

Sunday, May 21, 2017

You know how to fix enterprise patching? Please tell me more!!!

If you pay attention to Twitter at all, you've probably seen people arguing about patching your enterprise after the WannaCry malware. The short story is that Microsoft fixed a very serious security flaw a few months before the malware hit. That means there are quite a few machines on the Internet that haven't applied a critical security update. Of course, as you can imagine, there is plenty of back and forth about updates. There are two basic arguments I keep seeing.

Patching is hard and if you think I can just turn on Windows Update for all these computers running Windows 3.11 on token ring you've never had to deal with a real enterprise before! You out of touch hipsters don't know what it's really like here. We've seen things, like, real things. We party like it's 1995. GET OFF MY LAWN.

The other side sounds a bit like this.

How can you be running anything that's less than a few hours old? Don't you know what the Internet looks like! If everyone just applied all updates immediately and ran their business in the cloud using agile scrum based SecDevSecOps serverless development practices everything would be fine!

Of course both of these groups are wrong for basically the same reason. The world isn't simple, and whatever works for you won't work for anyone else. The tie that binds us all together is that everything is broken, all the time. All the things we use are broken, how we use them is broken, and how we manage them is broken. We can't fix them even though we try and sometimes we pretend we can fix things.

However ...

Just because everything is broken, that's no excuse to do nothing. It's easy to declare something too hard and give up. A lot of enterprises do this, and a lot of enterprise security people use it as the defense for why they can't update their infrastructure. On the other side though, sometimes moving too fast is more dangerous than moving too slow. Reckless updates are no better than no updates. Sometimes there is nothing we can do. Security as an industry is basically one big giant Kobayashi Maru test.

I have no advice to give on how to fix this problem. I think both groups are silly and wrong, but why I think this is unimportant. The right way is for everyone to have civil conversations where we put ourselves in the other person's shoes. That won't happen though; it never happens, even though basically every leader ever has said that sort of behavior is a good idea. I suggest you double down on whatever bad practices you've hitched your horse to. In the next few months we'll all have an opportunity to show why our way of doing things is the worst way ever, and we'll also find an opportunity to mock someone else for not doing things the way we do.

In this game there are no winners and losers, just you. And you've already lost.

Wednesday, May 3, 2017

Security like it's 2005!

I was reading the newspaper the other day (the real dead tree newspaper) and I came across an op-ed from my congressperson.

Gallagher: Cybersecurity for small business

It's about what you'd expect but comes with some actionable advice! Well, not really. Here it is so you don't have to read the whole thing.

Businesses can start by taking some simple and relatively inexpensive steps to protect themselves, such as:
» Installing antivirus, threat detection and firewall software and systems.
» Encrypting company data and installing security patches to make sure computers and servers are up to date.
» Strengthening password practices, including requiring the use of strong passwords and two-factor authentication.
» Educating employees on how to recognize an attempted attack, including preparing rapid response measures to mitigate the damage of an attack in progress or recently completed.
I read that and my first thought was "how on earth would a small business have a clue about any of this", but then it got me thinking about the bigger problem. This advice isn't even useful in 2017. It sort of made sense a long time ago when this was the accepted way of thinking, but it's not valid anymore.

Let's pick them apart one by one.

Installing antivirus, threat detection and firewall software and systems.
It's no secret that antivirus doesn't really work anymore. It's expensive in terms of cost and resources. In most settings I've seen, it probably causes more trouble than it solves. "Threat detection" doesn't really mean anything. Virtually all systems come with a firewall enabled and some level of software protections that make existing antivirus obsolete. Honestly, this is about as solved as it's going to get. There's no positive value you can add here.

Encrypting company data and installing security patches to make sure computers and servers are up to date
These are two unrelated things. Encrypting data is probably overkill for most settings. Any encryption that's usable doesn't really protect you, and encryption that actually protects you needs a dedicated security team to manage. Let's not get into an argument about offline vs online data.

Keeping systems updated is a fantastic idea. Nobody does it because it's too hard to do. If you're a small business you'll either have zero updates, or you'll automatically install them all. The right answer is to use something as a service so you don't have to think about updates. Make sure automatic updates are working on your desktops.

Strengthening password practices, including requiring the use of strong passwords and two-factor authentication

Just use two-factor auth from your as a service provider. If you're managing your own accounts and you lack a dedicated identity team failure is the only option. Every major cloud provider can help you solve this.

Educating employees on how to recognize an attempted attack, including preparing rapid response measures to mitigate the damage of an attack in progress or recently completed

Just no. There is value in helping them understand the risks and threats, but this won't work. Social engineering attacks go after the fundamental nature of humanity. You can't stop this with training. The only hope is we create cold calculating artificial intelligence that can figure this out before it reaches humans. A number of service providers can even stop some of this today because they have ways to detect anomalies. A small business doesn't and probably never will.


As you can see, this list isn't really practical for anyone to worry about. Why should you have to worry about this today? These sorts of problems have been plaguing small business and home users for years. These points are all what I would call "mid 200X" advice. These were suggestions everyone was giving out around 2005; they didn't really work then, but they made everyone feel better. Most of these bullets aren't actionable unless you have a security person on staff. Would a non security person have any idea where to start or what any of these items mean?

The 2017 world has a solution to these problems: use the cloud. Stuff as a Service is without question the way to solve these problems because it makes them go away. There are plenty who will naysay public cloud, citing various breaches, companies leaking data, companies selling data, and plenty of other problems. The cloud isn't magic, but it lets you trade a lot of horrible problems for "slightly bad". I guarantee the problems with the cloud are substantially smaller than the problems of letting most people try to run their own infrastructure. I see this a bit like airplane vs automobile crashes. There are magnitudes more deaths by automobile every year, but it's the airplane crashes that really get the attention. It's much, much safer to fly than to drive, just as it's much, much safer to use services than to manage your own infrastructure.

Sunday, April 30, 2017

Security fail is people

The other day I ran across someone trying to keep their locker secured by using a combination lock. As you can see in the picture, the lock is on the handle of the locker, not on the loop that actually locks the door. When I saw this I had a good chuckle, took a picture, and put out a snarky tweet. I then started to think about this quite a bit. Is this the user's fault or is this bad design? I'm going to blame bad design on this one. It's easy to blame users, we do it often, but I think in most instances, the problem is the design, not the user. If nothing is ever our fault, we will never improve anything. I suspect this is part of the problem we see across the cybersecurity universe.

On Humans

One of the great truths I'm starting to understand as I deal with humans more and more is that the one thing we all have in common is waves of unpredictability. Sometimes we pay very close attention to our surroundings and situations, sometimes we don't. We can be distracted by someone calling our name, by something that happened earlier in the day, or even by something that happened years ago. If you think you pay very close attention to everything at all times you're fooling yourself. We are squishy bags of confusing emotions that don't always make sense.

In the above picture, I can see a number of ways this happens. Maybe the person was very old and couldn't see; I have bad eyesight and could see this happening. Maybe they were talking to a friend and didn't notice where they put the lock. What if they dropped their phone moments before putting the lock on the door? Maybe they're just a clueless idiot who can't use locks! Well, not that last one.

This example is bad design. Why is there a handle that can hold a lock directly above the loop that is supposed to hold the lock? I can think of a few ways to solve this. The handle could be something other than a loop. A pull knob would be a lot harder to screw up. The handle could be farther up, or down. The loop could be larger or in a different place. No matter how you solve this, this is just a bad design. But we blame the user. We get a good laugh at a person making a simple mistake. Someday we'll make a simple mistake then blame bad design. It is also human nature to find someone or something else to blame.

The question I keep wondering about: did whoever designed this door think about security in any way? Do you think they wondered how the system can and would fail? How it would be misused? How it could be broken? In this case I doubt there was anyone thinking about security failures for the door to a locker; it's just a locker. They probably told the intern to go draw a rectangle and put a handle on it. If I could find the manufacturer and tell them about this, would they listen? I'd probably get pushed into the "crazy old kook" queue. You can even wonder if anyone really cares about locker security.

Wrapping up a post like this is always tricky. I could give advice about secure design, or tell everyone they should consult with a security expert. Maybe the answer is better user education (haha no). I think I'll target this at the security people who see something like this, take a picture, then write a tweet about how stupid someone is. We can use examples like this to learn and shape our own way of thinking. It's easy to use snark when we see something like this. The best thing we can do is make note of what we see, think about how this could have happened, and someday use it as an example to make something we're building better. We can't fix the world, but we can at least teach ourselves.

Monday, April 24, 2017

I have seen the future, and it is bug bounties


Every now and then I see something on a blog or Twitter about how you can't replace a pen test with a bug bounty. For a long time I agreed with this, but I've recently changed my mind. I know this isn't a super popular opinion (yet), and I don't think either side of this argument is exactly right. Fundamentally the future of looking for issues will not be a pen test. They won't really be bug bounties either, but I'm going to predict pen testing will evolve into what we currently call bug bounties.

First let's talk about a pen test. There's nothing wrong with getting a pen test; I'd suggest everyone go through a few just to see what it's like. I want to be clear that I'm not saying pen testing is bad. I'm going to be making the argument why it's not the future. It is the present; many organizations require them for a variety of reasons. They will continue to be a thing for a very long time. If you can only pick one thing, you should probably choose a pen test today, as it's at least a known known. Bug bounties are still known unknowns for most of us.

I also want to clarify that internal pen testing teams don't fall under this post. Internal teams are far more focused and have special knowledge that an outside company never will. It's my opinion that an internal team is and will always be superior to an outside pen test or bug bounty. Of course a lot of organizations can't afford to keep a dedicated internal team, so they turn to the outside.

So anyhow, it's time for a pen test. You find a company to conduct it, you scope what will be tested (it can't be everything). You agree on various timelines, then things get underway. After perhaps a week of testing, you have a very very long and detailed report of what was found. Here's the thing about a pen test; you're paying someone to look for problems. You will get what you pay for, you'll get a list of problems, usually a huge list. Everyone knows that the bigger the list, the better the pen test! But here's the dirty secret. Most of the results won't ever be fixed. Most results will fall below your internal bug bar. You paid for a ton of issues, you got a ton of issues, then you threw most of them out. Of course it's quite likely there will be high priority problems found, which is great. Those are what you really care about, not all the unexciting problems that are 95% of the report. What's your cost per issue fixed from that pen test?

Now let's look at how a bug bounty works. You find a company to run the bounty (it's probably not worth doing this yourself; there are many logistics). You scope what will be tested. You can agree on certain timelines and/or payout limits. Then things get underway. Here's where it's very different though. You're paying for the scope of the bounty, and you will get what you pay for, so there is an aspect of control. If you're only paying for critical bugs then, by definition, you'll only get critical bugs. Of course there will be a certain amount of false positives. If I had to guess, the rate is similar to a pen test today, but it's going to decrease as these organizations figure out how to cut down on noise. I know HackerOne is doing some clever things to prevent noise.

My point in this whole post revolves around getting what you pay for: essentially a cost-per-issue-fixed model instead of the current cost-per-issue-found model. The real difference is that in the case of a bug bounty, you can control the scope of what comes in. In no way am I suggesting a pen test is a bad idea; I'm simply suggesting that the 200 page report isn't very useful. Of course if a pen test returned three issues, you'd probably be pretty upset when paying the bill. We all have finite resources, so naturally we can't and won't fix minor bugs. It's just how things work. Today, at best, you'll get about the same results from a bug bounty and a pen test, but I see a bug bounty as having room to improve. I think the pen test model isn't full of exciting innovation.
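To make the two metrics concrete, here's a back-of-the-envelope sketch. Every number in it is invented purely for illustration; the only point is that dividing by issues fixed rather than issues found tells you something very different.

```python
# Toy comparison of cost-per-issue-found vs cost-per-issue-fixed.
# All of these numbers are made up for illustration only.
pen_test_cost = 20_000      # hypothetical flat fee for a week of testing
issues_found = 200          # size of the report
issues_fixed = 10           # findings that actually cleared the internal bug bar

bounty_payouts = 20_000     # hypothetical total payouts, scope limited to criticals
bounty_fixed = 10           # every paid report was something worth fixing

print(f"pen test, per issue found: ${pen_test_cost / issues_found:,.0f}")   # $100
print(f"pen test, per issue fixed: ${pen_test_cost / issues_fixed:,.0f}")   # $2,000
print(f"bounty,   per issue fixed: ${bounty_payouts / bounty_fixed:,.0f}")  # $2,000
```

With these made-up numbers the per-fix cost comes out the same, which matches the point above that today the results are roughly equal; the difference is that the bounty spend was scoped to issues you intended to fix in the first place.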

All this said, not every product and company will be able to attract enough interest in a bug bounty. Let's face it, the real purpose behind all this is to raise the security profile of everyone involved. Some organizations will have to use a pen test like model to get their products and services investigated. This is also why the bug bounty program on its own won't be a long term viable option: there are too many bugs and not enough researchers.

Now for the bit about the future. In the near future we will see the pendulum swing from pen testing to bug bounties. The next swing of the pendulum after bug bounties will be automation. Humans aren't very good at digging through huge amounts of data, but computers are. What we're really good at, and computers are (currently) really bad at, is finding new and exciting ways to break systems. We once thought double free bugs couldn't be exploited. We didn't see a problem with NULL pointer dereferences. Someone once thought deserializing objects was a neat idea. I would rather see humans working on the future of security instead of exploiting the past. The future of the bug bounty can be new attack methods instead of finding bugs. We have some work to do; I've not seen an automated scanner that I'd even call "almost not terrible". It will happen though; tools always start terrible and get better through the natural march of progress. The road to this unicorn future will pass through bug bounties. However, if we don't have automation ready on the other side, it's nothing but dragons.

Sunday, April 16, 2017

Crawl, Walk, Drive

It's that time of year again. I don't mean the time when all the government secrets are leaked onto the Internet by some unknown organization. I mean the time of year when it's unsafe to cross streets or ride your bike, at least in the United States. It's possible more civilized countries don't have this problem. I enjoy getting around without a car, but I feel like the number of near misses has gone up a fair bit, and it's always a person much younger than me with someone much older than them in the passenger seat. At first I didn't think much about this and just dreamed of how self driving cars will rid us of the horror that is human drivers. After the last near fatality while crossing the street, it dawned on me that now is the time all the kids have their driving learner's permits. I do think I preferred not knowing this, since now I know my adversary. It has a name, and that name is "youth".

For those of you who aren't familiar with how this works in the US: essentially, after less training than is given to a typical volunteer, a young person generally around the age of 16 is given the ability to drive a car, on real streets, as long as there is a "responsible adult" in the car with them. We know this is impossible, as all humans are terribly irresponsible drivers. They then spend a few months almost getting in accidents, take a proper test administered by someone who has one of the few jobs worse than IT security, and generally end up with a real driver's license, ensuring we never run out of terrible human drivers.

There are no doubt a ton of stories that could be told here about mentorship, learning, encouraging, leadership, or teaching. I'm not going to talk about any of that today. I think often about how we raise up the next generation of security goons, but I'm tired of talking about how we're all terrible people and nobody likes us, at least for this week.

I want to discuss the challenges of dealing with someone who is very new, very ambitious, and very dangerous. There are always going to be "new" people in any group or organization. Eventually they learn the rules they need to know, generally because they screw something up and someone yells at them about it. Goodness knows I learned most everything I know like this. But the point is, as security people, we have to not only do some yelling but we have to keep things in order while the new person is busy making a mess of everything. The yelling can help make us feel better, but we still have to ensure things can't go too far off the rails.

In many instances the new person will have some sort of mentor. They will of course try to keep them on task and learning useful things, but just like the parent of our student driver, they probably spend more time gaping in terror than they do teaching anything useful. If things really go crazy you can blame them someday, but at the beginning they're just busy hanging on trying not to soil themselves in an attempt to stay composed.

This brings us back to the security group. If you're in a large organization, every day is new-person-screwing-something-up day. I can't even begin to imagine what it must be like at a public cloud provider, where you not only have new employees but all your customers are basically engaged in ongoing risky behavior. The solution to this problem is the same as our student driver problem: stop letting humans operate the machines. I'm not talking about the new people, I'm talking about the security people. If you don't have heavy use of automation, if you're not aggregating logs and having algorithms look for problems for example, you've already lost the battle.
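As a toy illustration of what "having algorithms look for problems" can mean at its simplest, the sketch below flags hosts whose daily log volume wanders far from their own recent baseline. The host names and counts are invented, and real tooling does far more than a three-sigma check, but it shows the shape of the idea.

```python
# Minimal sketch of "algorithms look for problems": flag hosts whose daily
# event counts sit far outside their own recent history. Real pipelines use
# far better models; this only illustrates the idea of automated review.
from statistics import mean, stdev

# hypothetical per-host daily log-event counts for the last week
history = {
    "web01": [1020, 980, 1010, 995, 1005, 990, 4800],   # last day spikes
    "db01":  [310, 305, 290, 300, 320, 295, 310],
}

for host, counts in history.items():
    baseline, today = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma and abs(today - mu) > 3 * sigma:
        print(f"{host}: {today} events today vs ~{mu:.0f} normally -- take a look")
```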

Humans in general are bad at repetitive boring tasks. Driving falls under this category, and a lot of security work does too. I touched on the idea of measuring what you do in my last post. I'm going to tie these together in the next post. We do a lot of things that don't make sense if we measure them, but we struggle to measure security. I suspect part of that reason is because for a long time we were the passenger with the student drivers. If we emerged at the end of the ride alive, we were mostly happy.

It's time to become the groups building the future of cars, not waiting for a horrible crash to happen. The only way we can do that is if we start to understand and measure what works and what doesn't work. Everything from ROI to how effective is our policy and procedure. Make sure you come back next week. Assuming I'm not run down by a student driver before then.

Monday, April 10, 2017

The obvious answer is never the secure answer

One of the few themes that comes up time and time again when we talk about security is how bad people tend to be at understanding what's actually going on. This isn't really anyone's fault, we're expecting people to go against what is essentially millions of years of evolution that created our behaviors. Most security problems revolve around the human being the weak link and doing something that is completely expected and completely wrong.

This brings us to a news story I ran across that reminded me of how bad humans can be at dealing with actual risk. It seems that peanut free schools don't work. I think most people would expect a school that bans peanuts to have fewer peanut related incidents than a school that doesn't. This seems like a no brainer, but if there's anything I've learned from doing security work for as long as I have, it's that the obvious answer is always wrong.

The report does have a nugget of info in it where they point out that having a peanut free table at lunch seems to work. I suspect this is different than a full on ban, in this case you have the kids who are sensitive to peanuts sit at a table where everyone knows peanuts are bad. There is of course a certain amount of social stigma that comes with having to sit at a special table, but I suspect anyone reading this often sat alone during schooltime lunch for a very different reason ;)

This is similar to Portugal decriminalizing all drugs and having one of the lowest overdose rates in Europe. It seems logical that if you want fewer drugs you make them illegal. It doesn't make sense to our brains that if you want fewer drugs and fewer problems you decriminalize them. There are countless other examples of reality being totally backwards from what we think should be true.

So that brings us to security. There are lessons in stories like these. It's not to do the opposite of what makes sense though. The lesson is to use real data to make decisions. If you think something is true and you can't prove it either way, you could be making decisions that are actually hurting instead of helping. It's a bit like the scientific method. You have a hypothesis, you test it, then you either update your hypothesis and try again or you end up with proof.

In the near future we'll talk about measuring things; how to do it, what's important, and why it will matter for solving your problems.

Sunday, April 2, 2017

The expectation of security

If you listen to my podcast (which you should be doing already), you heard me go on a bit of a rant at the start of this week's episode about an assignment my son had over the weekend. He wasn't supposed to use any "screens", which is part of a drug addiction lesson. I get where this lesson is going, but I've really been thinking about the bigger idea of expectations and reality. This assignment is a great example of someone failing to understand the world has changed around them.

What I mean is that expecting anyone to go without a "screen" for a weekend doesn't make sense. A substantial number of activities we do today rely on some sort of screen, because we've replaced more inefficient ways of accomplishing tasks with these screens. Need to look something up? That's a screen. What's the weather? Screen. News? Screen. Reading a book? Screen!

You get the idea. We've replaced a large number of books and papers with a screen. But this is a security blog, so what's the point? The point is I see a lot of similarities with a lot of security people. The world has changed quite a bit over the last few years, and I feel like a number of our rules are similar to thinking that spending time without a screen is some sort of learning experience. I bet we can all think of security people we know who think it's still 1995; if you don't know any, you might be that person (time for some self reflection).

Let's look at some examples.

You need to change your password every 90 days.
Few people think this is a good idea anymore; even the NIST guidance says it isn't. I hear it come up on a regular basis though. Password concepts have changed a lot over the last few years, but most people seem to be stuck somewhere between five and ten years ago.

If we put it behind the firewall we don't have to worry about securing it.
Remember when firewalls were magic? Me neither. There was a time from probably 1995 to 2007 or so that a lot of people thought firewalls were magic. Very recently the concept of zero trust networking has come to be a real thing. You shouldn't trust your network, it's probably compromised.

Telling someone they can't do something because it's insecure.
Remember when we used to talk about how security is the industry of "no"? That's not true anymore because now when you tell someone "no" they just go to Amazon and buy $2.38 worth of computing and do whatever it is they need to get done. Shadow IT isn't the problem, it's the solution to the problem that was the security people. It's fairly well accepted by the new trailblazers that "no" isn't an option, the only option is to work together to minimize risk.

I could probably build an enormous list of examples like this. The whole point is that everything changes, and we should always be asking ourselves if something still makes sense. It's very easy for us to decide change is dangerous and scary. I would argue that not understanding the new security norms is actually more dangerous than having no security knowledge at all. This is probably one of the few industries where old knowledge may be worse than no knowledge. Imagine if your doctor was using the best ideas and tools from 1875. You'd almost certainly find a new doctor. Password policies and firewalls are our version of bloodletting and leeches. We have a long way to go, and I have no doubt we all have something to contribute.

Monday, March 27, 2017

Remember kids, if you're going to disclose, disclose responsibly!

If you pay any attention to the security universe, you're aware that Tavis Ormandy is basically on fire right now with his security research. He found the Cloudflare data leak issue a few weeks back and is currently going to town on LastPass. The LastPass crew seems to be dealing with this pretty well; I'm not seeing a lot of complaining, mostly just info and fixes, which is the right way to do these things.

There are however a bunch of people complaining about how Tavis, and Google Project Zero in general, tend to disclose issues. These people are wrong. I've been there, it's not fun, but as crazy as it may seem from the outside, the Project Zero crew knows what they're doing.

Firstly let's get two things out of the way.

1) If nobody is complaining about what you're doing, you're not doing anything interesting (Tavis is clearly doing very interesting things).

2) Disclosure is hard, there isn't a perfect solution, what Project Zero does may seem heartless to some, but it's currently the best way. The alternative is an abusive relationship.

A long time ago I was a vendor receiving security reports from Tavis, and I won't lie, it wasn't fun. I remember complaining and trying to slow things down to a pace I thought was more reasonable. Few of us have any extra time and a new vulnerability disclosure means there's extra work to do. Sometimes a disclosure isn't very detailed or lacks important information. The disclosure date proposed may not line up with product schedules. You could have another more important issue you're working on already. There are lots of reasons to dread dealing with these issues as a vendor.

All that said, it's still OK to complain, and every now and then the criticism is good. We should always be thinking about how we do things; what makes sense today won't make sense tomorrow. The way Google Project Zero does disclosure today would have seemed pretty crazy even five years ago. Now it's how things have to work. The world moves very fast now, and as we've seen from various document dumps over the last few years, there are no secrets. If you think you can keep a security issue quiet for a year you are sadly mistaken. It's possible that was once true (I suspect it never was, but that's another conversation). Either way, it's not true anymore. If you know about a security flaw it's quite likely someone else does too, and once you start talking to another group about it, the odds of leaking grow at an alarming rate.

The way things used to work is changing rapidly. Anytime there is change, there are always the trailblazers and laggards. We know we can't develop secure software, but we can respond quickly. Spend time where you can make a difference, not chasing the mythical perfect solution.

If your main contribution to society is complaining, you should probably rethink your purpose.

Thursday, March 23, 2017

Inverse Law of CVEs

I've started a project to put the CVE data into Elasticsearch and see if there is anything clever we can learn from it. Even if there isn't anything overly clever, it's fun to do. And I get to make pretty graphs, which everyone likes to look at.
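For anyone who wants to play along at home, a minimal sketch of the indexing step might look something like the one below. It assumes the Python elasticsearch client, a node on localhost, and an NVD-style JSON feed on disk; the file name and field paths are examples based on the NVD JSON feeds and may need adjusting for whatever data you actually download.

```python
# Minimal sketch: load an NVD-style CVE JSON feed and index each entry
# into Elasticsearch so it can be searched and graphed (e.g. in Kibana).
# File name and field paths are assumptions based on the NVD JSON feeds.
import json

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

with open("nvdcve-1.1-2017.json") as f:   # hypothetical local copy of a feed
    feed = json.load(f)

for item in feed.get("CVE_Items", []):
    cve_id = item["cve"]["CVE_data_meta"]["ID"]
    doc = {
        "cve_id": cve_id,
        "published": item.get("publishedDate"),
        "description": item["cve"]["description"]["description_data"][0]["value"],
    }
    # Newer client versions prefer document=doc instead of body=doc.
    es.index(index="cve", id=cve_id, body=doc)
```

Once the data is in an index, a keyword filter on the description field plus a date histogram is roughly all it takes to produce per-project graphs like the ones below.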

I stuck a few of my early results on Twitter because it seemed like a fun thing to do. One of the graphs I put up was comparing the 3 BSDs. The image is below.


You can see that none of these graphs has enough data to really draw any conclusions from; again, I did this for fun. I did get one response claiming NetBSD is the best because its graph is the smallest. I've actually heard this argument a few times over the past month, so I decided it's time to write about it, especially since I'm sure I'll find many more examples like this while I'm weeding through this mountain of CVE data.

Let's make up a new law, I'll call it the "Inverse Law of CVEs". It goes like this: "The fewer CVE IDs something has, the less secure it is".

That doesn't make sense to most people. If you have something that is bad, fewer bad things is certainly better than more bad things. This is generally true for physical concepts brains can understand. Less crime is good. Fewer accidents is good. When it comes to how many CVE IDs your project or product has, this idea gets turned on its head. Fewer is probably bad when we think about CVE IDs. There's probably some sort of line somewhere where, if you cross it, things flip back to bad (wait until I get to PHP). We'll call that the security Maginot Line, because bad security decided to sneak in through the north.

If you have something with very very few CVE IDs it doesn't mean it's secure, it means nobody is looking for security issues. It's easy to understand that if something is used by a large diverse set of users, it will get more bug reports (some of which will be security bugs) and more security attention from both good guys and bad guys, because it's a bigger target. If something has very few users, it's quite likely there hasn't been a lot of security attention paid to it. I suspect what the above graphs really mean is that FreeBSD is more popular than OpenBSD, which is more popular than NetBSD. Random internet searches seem to back this up.

I'm not entirely sure what to do with all this data. Part of the fun is understanding how to classify it all. I'm not a data scientist so there will be much learning. If you have any ideas by all means let me know, I'm quite open to suggestions. Once I have better data I may consider trying to find at what point a project has enough CVE IDs to be considered on the right path, and which have so many they've crossed over to the bad place.

Sunday, March 12, 2017

Security, Consumer Reports, and Failure

Last week there was a story about Consumer Reports doing security testing of products.


As one can imagine there were a fair number of “they’ll get it wrong” sort of comments. They will get it wrong, at first, but that’s not a reason to pick on these guys. They’re quite brave to take this task on, it’s nearly impossible if you think about the state of security (especially consumer security). But this is how things start. There is no industry that has gone from broken to perfect in one step. It’s a long hard road when you have to deal with systemic problems in an industry. Consumer product security problems may be larger and more complex than any other industry has ever had to solve thanks to things such as globalization and how inexpensive tiny computers have become.

If you think about the auto industry, you’re talking about something that costs thousands of dollars. The cost of safety is easy to justify, as it’s small relative to the overall cost of the vehicle. Now if we think about tiny computing devices, you could be talking about chips that cost less than one dollar. If the cost of security and safety will be more than the initial cost of the computing hardware, it can be impossible to justify that cost. If adding security doubles the cost of something, the manufacturers will try very hard to find ways around having to include such features. There are always bizarre technicalities that can help avoid regulation, and groups like Consumer Reports help with accountability.

Here is where Consumer Reports and other testing labs will be incredibly important to this story. Even if there is regulation a manufacturer chooses to ignore, a group like Consumer Reports can still review the product. Consumer Reports will get things very wrong at first, sometimes it will be hilariously wrong. But that’s OK, it’s how everything starts. If you look back at any sort of safety and security in the consumer space, it took a long time, sometimes decades, to get it right. Cybersecurity will be no different, it’s going to take a long time to even understand the problem.

Our default reaction to mistakes is often one of ridicule, this is one of those times we have to be mindful of how dangerous this attitude is. If we see a group trying to do the right thing but getting it wrong, we need to offer advice, not mockery. If we don’t engage in a useful and serious way nobody will take us seriously. There are a lot of smart security folks out there, we can help make the world a better place this time. Sometimes things can look hopeless and horrible, but things will get better. It’ll take time, it won’t be easy, but things will get better thanks to efforts such as this one.

Thursday, March 2, 2017

What the Oscars can teach us about security

If you watched the 89th Academy Awards you saw a pretty big mistake at the end of the show. The short story is Warren Beatty was handed the wrong envelope; he opened it, looked at it, then gave it to Faye Dunaway to read, which she did. The wrong people came on stage and started giving speeches, confused scrambling happened, and the correct winner was brought on stage. No doubt this will be talked about for many years to come as one of the most interesting and exciting events in the history of the awards ceremony.

People make mistakes, and we won’t dwell on how the wrong envelope made it into the announcer’s hands. The details of how this error came to be aren’t what’s important for this discussion. The important lesson for us is to watch Warren Beatty’s behavior. He clearly knew something was wrong; if you watch the video of him, you can tell things aren’t right. But he just kept going, gave the card to Faye Dunaway, and she read the name of the movie on the card. These people aren’t some young amateurs here, these are seasoned actors. It’s not their first rodeo. So why did this happen?

The lesson for us all is to understand that when things start to break down, people will fall back to their instincts. The presenters knew their job was to open the card and read the name. Their job wasn’t to think about it or question what they were handed. As soon as they knew something was wrong, they went on autopilot and did what was expected. This happens with computer security all the time. If people get a scary phishing email, they will often go into autopilot and do things they wouldn’t do if they kept a level head. Most attackers know how this works and they prey on this behavior. It’s really easy to claim you’d never be so stupid as to download that attachment or click on that link, but you’re not under stress. Once you’re under stress, everything changes.

This is why police, firefighters, and soldiers get a lot of training. You want these people to do the right thing when they enter autopilot mode. As soon as a situation starts to get out of hand, training kicks in and these people will do whatever they were trained to do without thinking about it. Training works, there’s a reason they train so much. Most people aren’t trained like this so they generally make poor decisions when under stress.

So what should we take away from all this? The thing we as security professionals need to keep in mind is how this behavior works. If you have a system that isn’t essentially “secure by default”, anytime someone finds themselves under mental stress, they’re going to take the path of least resistance. If that path of least resistance is also something dangerous, you’re not designing for security. Even security experts will have this problem; we don’t have superpowers that let us make good choices in times of high stress. It doesn’t matter how smart you think you are, when you’re under a lot of stress, you will go into autopilot, and you will make bad choices if bad choices are the defaults.

Thursday, February 23, 2017

SHA-1 is dead, long live SHA-1!

Unless you’ve been living under a rock, you heard that some researchers managed to create a SHA-1 collision. The short story as to why this matters is that the whole purpose of a hashing algorithm is to make it impossible to generate collisions on purpose. Unfortunately, making something truly impossible is usually itself impossible, so in reality we just make sure it’s really, really hard to generate a collision. Thanks to Moore’s Law, hard things don’t stay hard forever. This is why MD5 had to go live on a farm out in the country, and we’re not allowed to see it anymore … because it’s having too much fun. SHA-1 will get to join it soon.
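As a trivial illustration of what's at stake (not the actual attack, which took enormous computation and two carefully crafted PDFs), here is how digests get used as fingerprints; the two "documents" are obviously made up.

```python
# A hash digest is treated as a fingerprint: different inputs are supposed
# to produce different digests, and lots of things (signatures, certificates,
# git objects) lean on that. A collision is two different inputs sharing a
# digest, which is exactly what the SHAttered researchers produced for SHA-1.
import hashlib

doc_a = b"contract: pay Alice $100"
doc_b = b"contract: pay Alice $1,000,000"

print(hashlib.sha1(doc_a).hexdigest())     # two inputs, two different digests...
print(hashlib.sha1(doc_b).hexdigest())     # ...is the property everything relies on

# The practical fix is to move to a stronger family such as SHA-256:
print(hashlib.sha256(doc_a).hexdigest())
```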

The details about this attack are widely published at this point, but that’s not what I want to discuss, I want to bring things up a level and discuss the problem of algorithm deprecation. SHA-1 was basically on the way out. We knew this day was coming, we just didn’t know when. The attack isn’t super practical yet, but give it a few years and I’m sure there will be some interesting breakthroughs against SHA-1. SHA-2 will be next, which is why SHA-3 is a thing now. At the end of the day though this is why we can’t have nice things.

A long time ago there weren’t a bunch of expired standards. There were mostly just current standards and what we would call “old” standards. We kept them around because it was less work than telling them we didn’t want to be friends anymore. Sure they might show up and eat a few chips now and then, but nobody really cared. Then researchers started to look at these old algorithms and protocols as a way to attack modern systems. That’s when things got crazy.

It’s a bit like someone bribing one of your old annoying friends to sneak the attacker through your back door during a party. The friend knows you don’t really like him anymore, so it won’t really matter if he gets caught. Thus began the long and horrible journey to start marking things as unsafe. Remember how long it took before MD5 wasn’t used anymore? How about SSL 2 or SSHv1? It’s not easy to get rid of widely used standards even if they’re unsafe. Anytime something works it won't be replaced without a good reason. Good reasons are easier to find these days than they were even a few years ago.

This brings us to the recent SHA-1 news. I think it's going better this time, a lot better. The browsers already have plans to deprecate it. There are plenty of good replacements ready to go. Did we ever discuss killing off MD5 before it was clearly dead? Not really. It wasn't until a zero day MD5 attack was made public that it was decided maybe we should stop using it. Everyone knew it was bad for them, but they figured it wasn’t that big of a deal. I feel like everyone understands SHA-1 isn’t a huge deal yet, but it’s time to get rid of it now while there’s still time.

This is the world we live in now. If you can't move quickly you will fail. It's not a competitive advantage, it's a requirement for survival. Old standards no longer ride into the sunset quietly, they get their lunch money stolen, jacket ripped, then hung by a belt loop on the fence.

Sunday, February 12, 2017

Reality Based Security

If I demand you jump off the roof and fly, and you say no, can I call you a defeatist? What would you think? To a reasonable person it would be insane to associate this attitude with being a defeatist. There are certain expectations that fall within the confines of reality. Expecting things to happen outside of those rules is reckless and can often be dangerous.

Yet in the universe of cybersecurity we do this constantly. Anyone who doesn’t pretend we can fix problems is a defeatist and part of the problem. We just have to work harder and not claim something can’t be done, that’s how we’ll fix everything! After being called a defeatist during a discussion, I decided to write some things down. We spend a lot of time trying to fly off of roofs instead of looking for practical realistic solutions for our security problems.

The way cybersecurity works today, someone will say “this is a problem”. Maybe it’s IoT, or ransomware, or antivirus, secure coding, security vulnerabilities; whatever, pick something, there’s plenty to choose from. It’s rarely in a general context though; it will be somewhat specific, for example “we have to teach developers how to stop adding security flaws to software”. Someone else will say “we can’t fix that”, then they get called a defeatist for being negative, and it’s assumed the defeatists are the problem. The real problem is they’re not wrong. It can’t be fixed. We will never see humans write error free code; there is no amount of training we can give them. Pretending it can be fixed is what’s dangerous. Pretending we can fix problems we can’t is lying.

The world isn’t fairy dust and rainbows. We can’t wish for more security and get it. We can’t claim to be working on a problem if we have no clue what it is or how to fix it. I’ll pick on IoT for a moment. How many security IoT “experts” exist now? The number is non trivial. Does anyone have any idea how to understand the IoT security problems? Talking about how to fix IoT doesn’t make sense today; we don’t even really understand what’s wrong. Is the problem devices that never get updates? What about poor authentication? Maybe managing the devices is the problem? It’s not one thing, it’s a lot of things put together in a martini shaker, shaken up, then dumped out in a heap. We can’t fix IoT because we don’t know what it even is in many instances. I’m not a defeatist, I’m trying to live in reality and think about the actual problems. It’s a lot easier to focus on solutions for problems you don’t understand; you will find a solution, it just won’t make sense.

So what do we do now? There isn’t a quick answer, there isn’t an easy answer. The first step is to admit you have a problem though. Defeatists are a real thing, there’s no question about it. The trick is to look at the people who might be claiming something can’t be fixed. Are they giving up, or are they trying to reframe the conversation? If you declare them a defeatist, the conversation is now over, you killed it. On the other side of the coin, pretending things are fine is more dangerous than giving up, you’re living in a fantasy. The only correct solution is reality based security. Have honest and real conversations, don’t be afraid to ask hard questions, don’t be afraid to declare something unfixable. An unfixable problem is really just one that needs new ideas.

You can't fly off the roof, but trampolines are pretty awesome.

I'm @joshbressers on Twitter, talk to me.

Monday, February 6, 2017

There are no militant moderates in security

There are no militant moderates. Moderates never stand out for having a crazy opinion or idea, moderates don’t pick fights with anyone they can. Moderates get the work done. Look at the current political climate: how many moderate, reasonable views get attention? Exactly. I’m not going to talk about politics, that dumpster fire doesn’t need any more attention than it’s already getting. I am however going to discuss a topic I’m calling “security moderates”, or the people who are doing the real security work. They are sane, reasonable, smart, and actually doing things that matter. You might be one, you might know one or two. If I was going to guess, they’re a pretty big group. And they get ignored quite a lot because they're too busy getting work done to put on a show.

I’m going to split existing security talent into some sort of spectrum. There’s nothing more fun than grouping people together in overly generalized ways. I’m going to use three groups. You have the old guard on one side (I dare not mention left or right lest the political types have a fit). This is the crowd I wrote about last week: the people who want to protect their existing empires. On the other side you have a lot of crazy untested ideas, and nobody knows whether most of them work. Most of them won’t; at best they're a distraction, at worst they are dangerous.

Then in the middle we have our moderates. This group is the vast majority of security practitioners. The old guard think these people are a bunch of idiots who can’t possibly know as much as they do. After all, 1999 was the high point of security! The new crazy ideas group thinks these people are wasting their time on old ideas, their new hip ideas are the future. Have you actually seen homomorphic end point at rest encryption antivirus? It’s totally the future!

Now here’s the real challenge. How many conferences and journals have papers about reasonable practices that work? None. They want sensational talks about the new and exciting future, or maybe just new and exciting. In a way I don’t blame them, new and exciting is, well, new and exciting. I also think this is doing a disservice to the people getting work done in many ways. Security has never been an industry that has made huge leaps driven by new technology. It’s been an industry that has slowly marched forward (not fast enough, but that’s another topic). Some industries see huge breakthroughs every now and then. Think about how relativity changed physics overnight. I won’t say security will never see such a breakthrough, but I think we would be foolish to hope for one. The reality is our progress is made slowly and methodically. This is why putting a huge focus on crazy new ideas isn’t helping, it’s distracting. How many of those new and crazy ideas from a year ago are even still ideas anymore? Not many.

What do we do about this sad state of affairs? We have to give the silent majority a voice. Anyone reading this has done something interesting and useful. In some way you’ve moved the industry forward, you may not realize it in all cases because it’s not sensational. You may not want to talk about it because you don’t think it’s important, or you don’t like talking, or you’re sick of the fringe players criticizing everything you do. The first thing you should do is think about what you’re doing that works. We all have little tricks we like to use that really make a difference.

Next, write it down. This is harder than it sounds, but it’s important. Most of these ideas aren’t going to be full papers, but that’s OK. Industry-changing ideas don’t really exist; small incremental change is what we need. It could be something simple like adding an extra step during application deployment, or even adding a banned function to your banned.h file. The important part is explaining what you did, why you did it, and what the outcome was (even if it was a failure, sharing things that don’t work has value). Some ideas could be conference talks, but you still need to write things down to get talks accepted. Just writing it down isn’t enough though. If nobody ever sees your writing, you’re not really writing. Publish it somewhere; it’s never been easier to publish your work. Blogs are free, and there are plenty of groups to find and interact with (Reddit, forums, Twitter, Facebook). There is literally a security conference every day of the year. Find a venue, tell your story.
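
To show the sort of small trick I mean, here's one written down: a check that fails the build when someone calls a function your team has banned, the CI cousin of a banned.h file. This is a hypothetical sketch; the banned list, the file pattern, and the script itself are illustration only, not anyone's real tooling:

    import re
    import sys
    from pathlib import Path

    # Example banned list; use whatever your team has actually agreed to ban.
    BANNED = ["strcpy", "strcat", "sprintf", "gets"]
    PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

    def scan(root="."):
        hits = 0
        for path in Path(root).rglob("*.c"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: banned function: {line.strip()}")
                    hits += 1
        return hits

    if __name__ == "__main__":
        sys.exit(1 if scan() else 0)

It's not glamorous, and that's the point: a paragraph explaining why those functions are banned, plus ten lines of script, is exactly the kind of thing worth publishing.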

There are no militant moderates, and this is a good thing. We have enough militants with agendas. What we need more than ever are reasonable and sane moderates with great ideas, making a difference every day. If the sane middle starts to work together, things will get better, and we will see the change we need.

Have an idea how to do this? Let me know. @joshbressers on Twitter

Sunday, January 29, 2017

Everything you know about security is wrong, stop protecting your empire!

Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore this sort of statement, but I’m seeing it often enough that it’s time to write something up. We’re in the middle of disruptive change. That means the way security used to work doesn’t work anymore (some people think it does), and in the near future it won’t work at all. In some instances it will actually be harmful, if it isn’t already.


The real reason I’m writing this up is that there are really two types of leaders: those who lead to inspire change, and those who build empires. For empire builders, change is their enemy; they don’t welcome the new disrupted future. Here’s a list of the four things I ran into this week that gave me heartburn.


  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything


Let’s start with AV. A long time ago everyone installed an antivirus application. It’s just what you did, sort of like taking your vitamins. Most people can’t say why; they just know that if they didn't do it, everyone would think they're weird. Here’s the question for you to think about though: how many times did your AV actually catch something? I bet the answer is very, very low, like number-of-times-you’ve-seen-Bigfoot low. And how many times have you seen AV fail to stop malware? Probably more times than you’ve seen Bigfoot. Today malware is big business, and its authors likely outspend the AV companies on R&D. You probably have some control in that phone-book-sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.


Usability vs security is one of my favorite topics these days. Security lost. It’s not that usability won, it’s that there was never really a battle. Many of us security types don’t realize that though. We believe that there is some eternal struggle between security and usability where we will make reasonable and sound tradeoffs between improving the security of a system and adding a text field here and an extra button there. What really happened was the designers asked to use the bathroom and snuck out through the window. We’re waiting for them to come back and discuss where to add in all our great ideas on security.


Another fan favorite is the idea that the best way to improve network security is to lock everything down, then slowly open things up as devices try to get out. See the above conversation about usability. If you do this, people just work around you. They’ll use their own devices with network access, or just work from home. I’ve seen employees using the open wifi of the coffee shop downstairs. Don’t lock things down; solve problems that matter. If you think this is a neat idea, you’re probably the single biggest security threat your organization has today, so at least identifying the problem won’t take long.


And lastly let’s talk about the old trusty firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won’t say they have no value, they’re just not great security features anymore. Most network traffic is encrypted (or should be), and the users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, you can’t trust your network. Do keep them at the edge though. Zero trust networking doesn’t mean you should purposely build a hostile network.

We’ll leave it there for now. I would encourage you to leave a comment below or tell me how wrong I am on Twitter. I’d love to keep this conversation going. We’re in the middle of a lot of change. I won’t say I’m totally right, but I am trying really hard to understand where things are going, or need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices, best to do that away from Twitter. I suspect this will be a future podcast topic at some point, these are indeed interesting times.

How wrong am I? Let me know: @joshbressers on Twitter.



Monday, January 23, 2017

Return on Risk Investment

I found myself in a discussion earlier this week that worked its way into return on investment topics. Of course nobody could really agree on what the return was, which is sort of how these conversations often work out. It’s really hard to decide what the return on investment is for security features and products. It can be hard to even determine cost sometimes, which should be the easy number to figure out.

All this talk got me thinking about something I’m going to call risk investment. The idea here is that you have a risk, which we’ll think about as the cost. You have an investment of some sort; it could be a product, training, maybe staff. This investment in theory reduces your risk in some measurable way. The reduction of the risk is the return on risk investment. We like to think about these things in the context of money, but risk doesn’t exactly work that way. Risk isn’t something that can often be measured easily. Even incredibly risky behaviors can work out fine, and playing it safe can end horribly. Rather than try to equate everything to money, what if we ignored that for the moment and just worried about risk?

First, how do you measure your risk? There isn’t a nice answer for this. There are plenty of security frameworks you can use. There are plenty of methodologies: threat modeling, attack surface analysis, pen test reports, architecture reviews, automated scanning of products and infrastructure. There’s no single good answer to this question. I can’t tell you what your risk profile is; you have to decide how you’re going to measure this. What are you protecting? If it’s some sort of regulated data, there will be substantial cost in losing it, so this risk measurement is easy. It’s less obvious if you’re not operating in an environment where an incident has a direct cost. It’s even possible you have systems and applications that pose zero risk (yeah, I said it).

Assuming we have a way to determine risk, how do you measure the return on controlling it? This is possibly trickier than deciding how to measure your risk. You can’t prove a negative in many instances; there’s no way to say your investment is preventing something from happening. Rather than measure how many times you didn’t get hacked, the right way to think about this is: if you were doing nothing, how would you measure your level of risk? We can refer back to our risk measurement method for that. Now think about where you do have certain protections in place: what will an incident look like? How much less trouble will there be? If you can’t answer this you’re probably in trouble. This is the important data point though. When there is an incident, how do you think your countermeasures will help mitigate the damage? What was your investment in the risk?

And now this brings us to our Return on Risk Investment, or RORI as I’ll call it, because I can and who doesn’t like acronyms? Here’s the thing to think about if you’re a security leader. If you have risk, which we all do, you must find some way to measure it. If you can’t measure something you don’t understand it. If you can’t measure your risk, you don’t understand your risk. Once you have your method to understand what’s happening, make note of your risk measurement without any sort of security measures in place, your risk with ideal (not perfect, perfect doesn't exist) measures in place, and your risk with existing measures in place. That will give you an idea of how effective what you’re doing is. Here’s the thing to watch for: if your existing measures are close to the risk level for no measures at all, that’s not a positive return. Those are things you should either fix or stop doing. Sometimes it’s OK to stop doing something that doesn’t really work. Security theater is real, it doesn’t work, and it wastes money. The trick is to find a balance that can show measurable risk reduction without breaking the bank.
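
To make the bookkeeping concrete, here's a rough sketch of the comparison. The risk scale, the numbers, and the 10% threshold are all assumptions I made up for illustration; the only point is the "existing measure barely beats doing nothing" check:

    # Hypothetical risk scores on a 0-100 scale (higher is worse).
    # Numbers and the 10% threshold are made up for illustration only.
    BASELINE = 80  # measured risk with no security measures in place

    controls = {
        # name: measured risk with this control in place
        "antivirus": 78,
        "patching cadence": 45,
        "mfa everywhere": 30,
    }

    for name, with_control in controls.items():
        reduction = BASELINE - with_control
        verdict = "keep" if reduction > BASELINE * 0.10 else "fix it or stop doing it"
        print(f"{name:18} reduces risk by {reduction:2} -> {verdict}")

However you actually score risk, the shape of the exercise is the same: a baseline with nothing in place, a number for each thing you do, and an honest look at which of those numbers barely moved.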


How do you measure risk? Let me know: @joshbressers on Twitter.


Monday, January 16, 2017

What does security and USB-C have in common?

I've decided to create yet another security analogy! You can’t tell, but I’m very excited to do this. One of my long-standing complaints about security is there are basically no good analogies that make sense. We always try to talk about auto safety, or food safety, or maybe building security, how about pollution. There’s always some sort of existing real world scenario we try to warp and twist so we can tell a security story that makes sense. So far they’ve all failed. The analogy always starts out strong, then something happens that makes everything fall apart. I imagine a big part of this is because security is really new, but it’s also really hard to understand. It’s just not something humans are good at understanding.

The other day this article was sent to me by @kurtseifried
How Volunteer Reviewers Are Saving The World From Crummy—Even Dangerous—USB-C Cables

The TL;DR is essentially the world of USB-C cables is sort of a modern day wild west. There’s no way to really tell which ones are good and which ones are bad, so there are some people who test the cables. It’s nothing official, they’re basically volunteers doing this in their free time. Their feedback is literally the only real way to decide which cables are good and which are bad. That’s sort of crazy if you think about it.

This really got me thinking though: it has a lot in common with our current security problems. We have a bunch of products and technologies. We don’t have a good way to tell if something is good or bad. There are some people who try to help with good information. But fundamentally most of our decisions are made with bad or incomplete data.

In the case of the cables, I see two practical ways out of this. The first is some sort of official testing lab: if something doesn’t pass testing, it can’t be sold. This makes sense; there are plenty of things on the market today that go through similar testing. If the product fails, it doesn’t get sold. In this case the comparable analogies hold up. Auto safety, electrical safety, HDMI; there are plenty of organizations that are responsible for ensuring the quality and safety of certain products. The cables would be no different.

A possible alternative is to make sure every device assumes bad cables are possible and deals with that situation in hardware. This would mean devices being smart enough to not draw too much power, or not provide too much power, and to know when some sort of failure mode is coming and disconnect. There are a lot of possibilities here, and to be perfectly honest, no device will be able to do this with 100% accuracy. More importantly though, no manufacturer will be willing to add this functionality because it would add cost, probably a lot of cost. It’s still a remote possibility though, and for the sake of the analogy, we’re going to go with it.

The first example twisted to cybersecurity would mean you need a nice way to measure security. There would be a lab or organization that is capable of doing the testing, then giving some sort of stamp of approval. This has proven to be a really hard thing to do in the past. The few attempts to do this have failed. I suspect it’s possible, just very difficult to do right. Today Mudge is doing some of this with the CITL, but other than that I’m not really aware of anything of substance. It’s a really hard problem to solve, but if anyone can do it right, it’s probably Mudge.

This then leads us to the second possibility which is sort of how things work today. There is a certain expectation that an endpoint will handle certain situations correctly. Each endpoint has to basically assume anything talking to it is broken in some way. All data transferred must be verified. Executables must be signed and safely distributed. The networks the data flows across can’t really be trusted. Any connection to the machine could be an attacker and must be treated as such. This is proving to be very hard though and in the context of the cables, it’s basically the crazy solution. Our current model of security is the crazy solution. I doubt anyone will argue with that.
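
As one tiny instance of "all data transferred must be verified": checking a downloaded artifact against a digest published somewhere other than the download channel itself. A digest check isn't a signature, but it's the simplest possible example of verify-before-trusting. The file name and expected digest below are placeholders, not anything real:

    import hashlib

    # Placeholder digest; in practice it should come from a source you trust
    # more than the place you downloaded the file from.
    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("installer.bin") != EXPECTED_SHA256:
        raise SystemExit("digest mismatch: do not run this")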

This analogy certainly isn’t perfect, but the more I think about it the more I like it. I’m sure there are problems thinking about this in such a way, but for the moment, it’s something to think about at least. The goal is to tell a story that normal people can understand so we can justify what we want to do and why. Normal people don’t understand security, but they do understand USB cables.


Do you have a better analogy? Let me know @joshbressers on Twitter.

Monday, January 9, 2017

Security Advice: Bad, Terrible, or Awful

As an industry, we suck at giving advice. I don’t mean this in some negative hateful way, it’s just the way it is. It’s human nature really. As a species most of us aren’t very good at giving or receiving advice. There’s always that vision of the wise old person dropping wisdom on the youth like it’s candy. But in reality they don’t like the young people much more than the young people like them. Ever notice the contempt the young and old have for each other? It’s just sort of how things work. If you find someone older and wiser than you who is willing to hand out good advice, stick close to that person. You won’t find many more like that.

Today I’m going to pick on security though. Specifically security advice directed at people who aren’t security geeks. Heck, some of this will probably apply to security geeks too, so let’s just stick to humans as the target audience. Of all our opportunities around advice, I think the favorite is blaming the users for screwing up. It’s never our fault; it’s something they did, or something wasn’t configured correctly, but still probably something they did. How many times have you dealt with someone who clicked a link because they were stupid? Or opened an attachment because they’re an idiot? Or typed a password into that web page because they can’t read? The list is long and impressive. Not once did we do anything wrong. Why would we though? It’s not like we made anyone do those things! This is true, but we also didn’t not make them do those things!

Some of the advice we expect people to listen to is good advice. A great example is telling someone to “log out” of their banking site when they’re done. That makes sense, it’s easy enough to understand, and nothing lights on fire if they forget to do this. We also like to tell people things like “check the URL bar”. Why would a normal person do this? They don’t even know what a URL is. They know what a bar is, it’s where they go to calm down after talking to us. What about when we tell people not to open attachments? Even attachments from their Aunt Millie? She promised that cookie recipe months ago, it’s about time cookies.exe showed up!

The real challenge is understanding what counts as good advice that supplements a properly functioning system. Advice and instructions do not replace a proper solution. A lot of advice we give out is really masking something that’s already broken. The fact that we expect users to care about a URL or attachment is basically nuts. These are failures in the system, not failures with users. We should be investing our resources into solving the root of the problem, not yelling at people for clicking on links. Instead of telling users not to click on attachments, just don’t allow attachments. Expecting certain behavior from people rarely changes their behavior. At best it creates an environment of shame, but it’s more likely to create an environment of contempt. They don’t like you, you don’t like them.

As a security practitioner, look for ways to eliminate problems without asking users for intervention. A best case situation is 80% user compliance. The remaining 20% would require more effort to deal with than anyone could handle, and if your solution depends on getting people to listen, you need 100% compliance all the time. That’s impossible for humans, but not for computers.

It’s like the old saying, an ounce of prevention is worth a pound of cure. Or if you’re a fan of the metric system, 28.34 grams of prevention is worth 453.59 grams of cure!

Do you have some bad advice? Lay it on me! @joshbressers on Twitter.

Tuesday, January 3, 2017

Looks like you have a bad case of embedded libraries

A long time ago pretty much every application and library carried around its own copy of zlib. zlib is a library that does really fast and really good compression and decompression. If you’re storing data or transmitting data, it’s very likely this library is in use. It’s easy to use and is public domain. It’s no surprise it became the industry standard.

Then one day, CVE-2002-0059 happened. CVE-2002-0059 was a security flaw that was easy to trigger and easy to exploit. It affected network-listening applications that used zlib (which was most of them). If it came out today, it would make Heartbleed look like a joke. This was long, long ago though; most people didn’t know anything about security (or care, in many instances). If you look at the updates that came out because of this flaw, they were huge, because literally hundreds of software applications and libraries had to be patched. This affected Windows and Linux, which was most everything back then. Today it would affect every device on the planet. This isn’t an exaggeration. Every. Single. Device.

A lot of people learned a valuable lesson from CVE-2002-0059. That lesson was to stop embedding copies of libraries in your applications. Use the libraries already available on the system. zlib is pretty standard now, you can find it most anywhere, there is basically no reason to carry around your own version of this library in your project anymore. Anyone who does this would be seen as a bit nuts. Except this is how containers work.

Containing Containers

If you pay attention at all, you know the future of most everything is moving back in the direction of applications shipping with all the bits they need to run. Linux containers have essentially a full Linux distribution inside them (a very small one of course). Now there’s a good reason for needing containers today. A long time ago, things moved very slowly. It wouldn’t have been crazy to run the same operating system for ten years. There weren’t many updates to anything. Even security updates were pretty rare. You knew that if you built an application on top of a certain version of Windows, Solaris, or Linux, it would be around for a long time. Those days are long gone. Things move very very quickly today.

I’m not foolish enough to tell anyone they shouldn’t be including embedded copies of things in their containers. This is basically how containers work. Besides, everything is fast now, including the operating system. You can’t count on the level of stability that once existed. This is a good thing because it gives us the ability to create faster than ever before; container technology is how we solve the problem of a fast-changing operating system.

The problem we have today is our tools aren’t quite ready to deal with a security nightmare like CVE-2002-0059. If we found a serious problem like this (we sort of did with CVE-2015-7547, which affected glibc), how long would it take you to update all your containers? How would you update them? How would you even know if the flaw affected you?

The answer is most people wouldn’t update their containers quickly; some would never update them at all. This sort of goes against the whole DevOps concept. The right way this should work is if some horrible flaw is found in a library you’re shipping, your CI/CD infrastructure just magically deals with it. You shouldn’t have to really know or care. Humans are slow and make a lot of mistakes. They’re also hard to predict. All of these traits go against DevOps. The less we have humans do, the better. This has to be the future of security updates. There’s no secret option C where we stop embedding libraries this time. We need tools that can deal with security updates in a totally automated manner. We’re getting there, but we have a long way to go.
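
For the "how would you even know if the flaw affected you" part, the crudest possible start is to ask each image what version of the library it's carrying. A rough sketch, assuming Debian-based images and made-up image names; adjust the package name and package manager to whatever your images actually use:

    import subprocess

    # Hypothetical image list and package; both are placeholders.
    IMAGES = [
        "registry.example.com/app-frontend:latest",
        "registry.example.com/app-worker:latest",
    ]
    PACKAGE = "zlib1g"  # works for Debian-based images; adjust for others

    for image in IMAGES:
        # Ask the image's own package manager what it has installed.
        result = subprocess.run(
            ["docker", "run", "--rm", image,
             "dpkg-query", "-W", "--showformat=${Version}", PACKAGE],
            capture_output=True, text=True)
        print(image, "->", result.stdout.strip() or "package not found")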

If you’re using containers today, and you can’t rebuild everything with the push of a button, you’re not really using containers. You’re running a custom Linux distribution. Don’t roll your own crypto, don’t roll your own distro.
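
And "push of a button" can be embarrassingly simple. A minimal sketch, assuming a directory layout and registry name I made up, where every subdirectory of images/ holds a Dockerfile:

    import subprocess
    from pathlib import Path

    # Hypothetical layout: images/<name>/Dockerfile, pushed to a made-up registry.
    REGISTRY = "registry.example.com/myteam"

    for dockerfile in sorted(Path("images").glob("*/Dockerfile")):
        name = dockerfile.parent.name
        tag = f"{REGISTRY}/{name}:latest"
        # --pull grabs fresh base layers so the fixed library actually lands.
        subprocess.run(["docker", "build", "--pull", "-t", tag, str(dockerfile.parent)], check=True)
        subprocess.run(["docker", "push", tag], check=True)

Your real pipeline will be fancier, but if you can't at least do the equivalent of this loop on demand, the next zlib-style flaw is going to be a very long week.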

Do you roll your own distro? Tell me, @joshbressers on Twitter.