Sunday, August 13, 2017

But that's not my job!

This week I've been thinking about how security people and non-security people interact. Various conversations I have often end up with someone suggesting that everyone needs some sort of security responsibility. My suspicion is that this will never work.

First some background to think about. In any organization there are certain responsibilities everyone has. Without using security as our specific example just yet, let's consider how a typical building functions. You have people who are tasked with keeping the electricity working, the plumbing, the heating and cooling. Some people keep the building clean, some take care of the elevators. Some work in the building to accomplish some other task. If the company that inhabits the building is a bank you can imagine the huge number of tasks that take place inside.

Now here's where I want our analogy to start. If I work in a building and I see a leaking faucet, I would probably report it. If I didn't, it's likely someone else would. But suppose I'm one of the electricians, and while accessing some hard-to-reach place I notice a leaking pipe. It's not my job to fix it. I could tell the plumbers, but they're not very nice to me, so who cares? The last time I told them about a leaking pipe they blamed me for breaking it, so I don't really have an incentive here. If I do nothing, it really won't affect me. If I tell someone, at best it doesn't affect me, but in reality I'll probably get some level of blame or scrutiny.

This almost certainly makes sense to most of us. I wonder if there are organizations where reporting things like this comes with an incentive. A leaking water pipe could end up causing millions in damage before it's found. Nowhere I've ever worked really had an incentive to report things like this. If it's not your job, you don't really have to care, so nobody ever really cared.

Now let's think about phishing in a modern enterprise. You see everything from blaming the user who clicked the link, to laughing at them for being stupid, to maybe even firing someone for losing the company a ton of money. If a user clicks a phishing link and suspects a problem, they have very little incentive to be proactive. It's not their job. I bet the number of clicked phishing links we find out about is much, much lower than the total number clicked.

I also hear security folks talking about educating the users on how all this works. Users should know how to spot phishing links! Setting aside that this won't work for a variety of reasons, at the end of the day it's not their job, so why do we think they should know how to do this? Even more importantly, why do we think they should care?

The thing I keep wondering is: should this be the job of everyone, or just the job of the security people? I think the quick reaction is "everyone", but my suspicion is it's not. Electricity is a great example. How many stories have you heard of office workers being electrocuted in the office? The number is really low because we've made electricity extremely safe. If we put this in the context of modern security, we have a system where the office is covered in bare wires. Imagine wires hanging from the ceiling, some draped on the floor. The bathroom has sparking wires next to the sink. We lost three interns last week, those stupid interns! They should have known which wires weren't safe to accidentally touch. It's up to everyone in the office to know which wires are safe and which are dangerous!

This is of course madness, but it's modern day security. Instead of fixing the wires, we just imagine we can train everyone up on how to spot the dangerous ones.

Friday, July 28, 2017

For a security conference where everyone claims not to trust the wifi, there sure was a lot of wifi

I attended Black Hat USA 2017. Elastic had a booth on the floor that I spent a fair bit of time at, along with meetings scattered about the conference center. It was a great time as always, but this year I had a secret with me: a Raspberry Pi that was passively collecting wifi statistics. Just certain metadata; no actual wifi data packets were captured or harmed in the making of this. I logged everything into Elasticsearch so I could build pretty visualizations in Kibana. I only captured 2.4 GHz data with one radio, so I had it jumping around the channels. Obviously I missed plenty of data, but this was really just about looking for interesting patterns.
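For the curious, the collection side doesn't take much code. Here's a minimal sketch of the idea using scapy and the Elasticsearch Python client; the interface name, index name, and field names are placeholders I made up for this post, and it assumes an interface already in monitor mode plus a local Elasticsearch. The real code in the GitHub repo is messier.

    import itertools
    import subprocess
    import threading
    import time
    from datetime import datetime, timezone

    from elasticsearch import Elasticsearch   # pip install elasticsearch
    from scapy.all import Dot11, Dot11Elt, RadioTap, sniff   # pip install scapy

    IFACE = "wlan0mon"   # placeholder: an interface already in monitor mode
    es = Elasticsearch("http://localhost:9200")   # assumes a local Elasticsearch

    def hop_channels():
        # One 2.4 GHz radio hears one channel at a time, so jump around
        # and accept that plenty of packets will be missed. Needs root.
        for channel in itertools.cycle(range(1, 12)):
            subprocess.run(["iw", "dev", IFACE, "set", "channel", str(channel)],
                           check=False)
            time.sleep(0.5)

    def handle(pkt):
        if not pkt.haslayer(Dot11):
            return
        doc = {
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "mac": pkt[Dot11].addr2,       # transmitter address, may be None
            "type": pkt[Dot11].type,       # management/control/data
            "subtype": pkt[Dot11].subtype,
        }
        if pkt.haslayer(RadioTap):         # radiotap carries the frequency
            doc["freq"] = getattr(pkt[RadioTap], "ChannelFrequency", None)
        elt = pkt.getlayer(Dot11Elt)       # management frames may carry an SSID
        if elt is not None and elt.ID == 0 and elt.info:
            doc["ssid"] = elt.info.decode(errors="ignore")
        es.index(index="wifi-packets", document=doc)   # metadata only, no payloads

    threading.Thread(target=hop_channels, daemon=True).start()
    sniff(iface=IFACE, prn=handle, store=0)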

I put everything I used to make this project go up on GitHub. It's really rough though, you've been warned.

I have a ton of data to mine, I'll no doubt spend a great deal of time in the future doing that, but here's the basic TL;DR picture.

[Image: Kibana dashboard overview of the captured wifi data]

I captured 12.6 million wifi packets. The blue bars show when I captured what, the table shows the SSIDs I saw (not all packets have SSID data), and the colored graph shows which wifi channels were seen (not all packets have channel data either). I also have packet frequencies logged, so all of that can be put together later. The two humps in the wifi data were when I was around the conference. I admit I was surprised by the volume of wifi I saw basically everywhere, even in the middle of the night from my hotel room.
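As an aside, putting frequencies and channels together is just arithmetic in the 2.4 GHz band, which is handy when a packet has a frequency but no channel element. A quick sketch:

    def freq_to_channel(freq_mhz):
        # 2.4 GHz channels 1-13 sit 5 MHz apart starting at 2412 MHz;
        # channel 14 (2484 MHz) is the odd one out.
        if freq_mhz == 2484:
            return 14
        if 2412 <= freq_mhz <= 2472:
            return (freq_mhz - 2407) // 5
        return None   # not a 2.4 GHz wifi channel

    assert freq_to_channel(2412) == 1
    assert freq_to_channel(2437) == 6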

Below is a graph showing the various frequencies I saw; every packet has to come in on some wireless frequency even if it doesn't carry a wifi channel.



The devices seen data was also really interesting.

This chart represents every packet seen, so it's clearly going to be a long tail. It's no surprise that an access point sends out a lot of packets, but I didn't expect Apple to be #1 here; I expected the top few to be access point manufacturers. It would seem Apple gear is more popular and noisy than I expected.
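The manufacturer attribution, by the way, comes from the first three bytes of each MAC address, the OUI, which the IEEE assigns to vendors. A rough sketch of the idea, with a made-up toy table standing in for the full IEEE registry:

    from collections import Counter

    # Toy stand-in for the IEEE OUI registry; the real list has tens of
    # thousands of entries. Treat these prefixes as examples, not gospel.
    OUI = {
        "f0:18:98": "Apple",
        "00:1b:54": "Cisco",
        "a0:63:91": "Netgear",
    }

    def vendor(mac):
        return OUI.get(mac[:8].lower(), "Unknown")

    # Hypothetical transmitter MACs pulled out of a capture.
    seen_macs = [
        "f0:18:98:01:02:03",
        "f0:18:98:0a:0b:0c",
        "00:1b:54:11:22:33",
    ]

    packets_by_vendor = Counter(vendor(mac) for mac in seen_macs)
    unique_devices_by_vendor = Counter(vendor(mac) for mac in set(seen_macs))
    print(packets_by_vendor.most_common())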

A more interesting graph is unique devices seen by manufacturer (as a side note, I saw 77,904 devices in total over my 3 days).


This table is far more useful, since it's totally expected that a single access point will be very noisy. I didn't expect Cisco to make the top 3, I admit. Apple accounted for basically 10% of the unique wifi devices, and the counts drop off pretty quickly from there.
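For what it's worth, pulling a unique-devices-per-vendor view out of Elasticsearch is a single query: a terms aggregation on the vendor with a cardinality sub-aggregation on the MAC. A sketch below, assuming the made-up field names from my earlier snippets (with a vendor field stored at index time via the OUI lookup) and the elasticsearch-py 8.x client:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    resp = es.search(index="wifi-packets", size=0, aggs={
        "by_vendor": {
            # .keyword assumes the default dynamic mapping of string fields
            "terms": {"field": "vendor.keyword", "size": 20},
            "aggs": {
                # cardinality is approximate, fine for eyeballing trends
                "unique_devices": {"cardinality": {"field": "mac.keyword"}},
            },
        },
    })

    for bucket in resp["aggregations"]["by_vendor"]["buckets"]:
        print(bucket["key"], bucket["unique_devices"]["value"])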

There's a lot more interesting data in this set, I just have to spend some time finding it all. I'll also make a point to single out the data specific to business hours. Stay tuned for a far more detailed writeup.

Saturday, July 22, 2017

Security and privacy are the same thing

Earlier today I ran across this post on Reddit
Security but not Privacy (Am I doing this right?)

The poster basically said "I care about security but not privacy".

It got me thinking about security and privacy. There's not really a difference between the two. They are two faces of the same coin, though that isn't always obvious in today's information universe. If a site like Facebook or Google knows everything about you, it doesn't mean you don't care about privacy; it means you're putting your trust in those sites. It's the same sort of trust that makes passwords private.

The first thing we need to grasp is what I'm going to call a trust boundary. I trust you understand trust already (har har har). But a trust boundary is less obvious sometimes. A security (or privacy) incident happens when there is a breach of the trust boundary. Let's just dive into some examples to better understand this.

A web site is defaced
In this example the expectation is the website owner is the only person or group that can update the website content. The attacker crossed a trust boundary that allowed them to make unwanted changes to the website.

Your credit card is used fraudulently
It's expected that only you will be using your credit card. If someone gets your number somehow and starts making purchases with your card, a trust boundary has been crossed somewhere along the way. You could easily put this example in the "privacy" bucket if you wanted to keep the two separate; it's likely your card was stolen due to lax security at one of the businesses you visited.

Your wallet is stolen
This one is tricky. The trust boundary is probably your pocket or purse. Maybe you dropped it or forgot it on a counter. Whatever happened the trust boundary is broken when you lose control of your wallet. An event like this can trickle down though. It could result in identity theft, your credit card could be used. Maybe it's just about the cash. The scary thing is you don't really know because you lost a lot of information. Some things we'd call privacy problems, some we'd call security problems.

I used a confusing last example on purpose to help prove my point. The issue is all about who you trust with what. You can trust Facebook and give them tons of information; many of us do. You can trust Google for the same basic reasons. That doesn't mean you don't care about privacy, it just means you have put them inside a certain trust boundary. There are limits to that trust though.

What if Facebook decided to use your personal information to access your bank records? That would be a pretty substantial trust boundary abuse. What if your phone company decided to use the information they have to log into your Facebook account?

A good password isn't all that different from your credit card number. It's a bit of private information that you share with one or more other organizations. You are expecting them not to cross a trust boundary with the information you gave them.

The real challenge is to understand what trust boundaries you're comfortable with. What do you share with whom? Nobody is an island; we must exist in an ecosystem of trust. We all have different boundaries of what we will share, and that's quite all right. If you understand your trust boundaries, making good security/privacy decisions becomes a lot easier.

They say information is the new oil. If that's true then trust must be the currency.

Thursday, July 20, 2017

Summer is coming

I'm getting ready to attend Black Hat. I will unfortunately miss BSides and Defcon this year due to some personal commitments. As I'm packing up my gear, I started thinking about what these conferences have really changed. We've been doing this every summer for longer than many of us can remember now. We make our way to the desert, we attend talks by what we consider the brightest minds in our industry. We meet lots of people. Everyone has a great time. But what are the actionable outcomes that come from these events?

The answer is nothing. They've changed nothing.

But I'm going to put an asterisk next to that.

I do think things are getting better, for some definition of better. Technology is marching forward, security is getting dragged along with a lot of it. Some things, like IoT, have some learning to do, but the real change won't come from the security universe.

Firstly we should understand that the world today has changed drastically. The skillset that mattered ten years ago doesn't have a lot of value anymore. Things like buffer overflows are far less important than they used to be. Coding in C isn't quite what it once was. There are many protections built into frameworks and languages. The cloud has taken over a great deal of infrastructure. The list can go on.

The point of such a list is to ask the question, how much of the important change that's made a real difference came from our security leaders? I'd argue not very much. The real change comes from people we've never heard of. There are people in the trenches making small changes every single day. Those small changes eventually pile up until we notice they're something big and real.

Rather than trying to fix the big problems, our time is better spent ignoring the thought leaders and just doing something small. Conferences are important, but not to listen to the leaders. Go find the vendors and attendees who are doing new and interesting things. They are the ones that will make a difference, they are literally the future. Even the smallest bug bounty, feature, or pull request can make a difference. The end goal isn't to be a noisy gasbag, instead it should be all about being useful.



Saturday, July 8, 2017

Who's got your hack back?

The topic of hacking back keeps coming up these days. There's an attempt to pass a bill in the US that would legalize hacking back. There are many opinions on this topic, I'm generally not one to take a hard stand against what someone else thinks. In this case though, if you think hacking back is a good idea, you're wrong. Painfully wrong.

Everything I've seen up to this point tells me the people who think hacking back is a good idea are either mistaken about the issue or they're misleading others on purpose. Hacking back isn't self defense, it's not about being attacked, it's not about protection. It's a terrible idea that has no place in a modern society. Hacking back is some sort of stone age retribution tribal law. It has no place in our world.

Rather than break the various arguments apart, let's think about two examples that exist in the real world.

Firstly, why don't we give the people doing mall security guns? There is one really good reason I can think of here. The insurance company that holds the policy on the mall would never allow the guards to carry guns. If you let security carry guns, they will use them someday. They'll probably use them in an inappropriate manner, the mall will be sued, and it will almost certainly lose. That doesn't mean the mall has to pay a massive settlement; it means the insurance company has to pay a massive settlement. They don't want to do that. Even if some crazy law claims it's not illegal to hack back, no sane insurance company will allow it. I'm not talking about cyber insurance, I'm just talking about general policies here.

The second example revolves around shoplifting. If someone is caught stealing from a store, does someone go to their house and take some of their stuff in retribution? They don't of course. Why not? Because we're not cave people anymore. That's why. Retribution style justice has no place in a modern civilization. This is how a feud starts, nobody has ever won a feud, at best it's a draw when they all kill each other.

So this has me really thinking: why would anyone want to hack back? There aren't many reasons that don't revolve around revenge. The way most attacks work, you can't reliably know who is doing what with any sort of confidence. Hacking back isn't going to make anything better. It would make things a lot worse. Nobody wants to be stuck in the middle of a senseless feud. Well, nobody sane.

Sunday, June 25, 2017

When in doubt, blame open source

If you've not read my previous post on thought leadership, go do that now, this one builds on it. The thing that really kicked off my thinking on these matters was this article:

Security liability is coming for software: Is your engineering team ready?

The whole article is pretty silly, but the bit about liability and open source is the real treat. Apparently there's some sort of special consideration when you use open source; we'll get back to that. Right now there is basically no liability of any sort when you use software, and I doubt there will be anytime soon. Liability laws are tricky, but the lawyers I've spoken with have been clear that software isn't currently covered in most instances. The whole article is basically nonsense in that respect. The people they interview set the stage for liability and responsibility, then seem to discuss how open source should be treated specially in this context.

Nothing is special, open source is no better or worse than closed source software. If you build something why would open source need more responsibility than closed source? It doesn't of course, it's just an easy target to pick on. The real story is we don't know how to deal with this problem. Open source is an easy boogeyman. It's getting picked on because we don't know where else to point the finger.

The real problem is we don't know how to secure our software in an acceptable manner. Trying to talk about liability and responsibility is fine, nobody is going to worry about security until they have to. Using open source as a discussion point in this conversation clouds it though. We now get to shift the conversation from how do we improve security, to blaming something else for our problems. Open source is one of the tools we use to build our software. It might be the most powerful tool we've ever had. Tools are never the problem in a broken system even though they get blamed on a regular basis.

The conversation we must have revolves around incentives. There is no incentive to build secure software. Blaming open source or talking about responsibility are just attempts to skirt the real issue. We have to fix our incentives. Liability could be an incentive, regulation can be an incentive. User demand can be an incentive as well. Today the security quality of software doesn't seem to matter.

I'd like to end this saying we should make an effort to have more honest discussions about security incentives, but I don't think that will happen. As I mention in my previous blog post, our problem is a lack of leadership. Even if we fix security incentives, I don't see things getting much better under current leadership.

Saturday, June 17, 2017

Thought leaders aren't leaders

For the last few weeks I've seen news stories and much lamenting on Twitter about the security skills shortage. Some say there is no shortage, some say it's horrible beyond belief. Basically there's someone arguing every possible side of this. I'm not going to debate whether there is or isn't a worker shortage; that's not really the point. A lot of the complaining was done by people who would call themselves leaders in the security universe. I then read the below article and changed my thinking a bit.


Our problem isn't a staff shortage. Our problem is we don't have any actual leaders. I mean people who aren't just "in charge". Real leaders aren't just in charge; they help their people grow in a way that accomplishes their vision. Virtually everyone in the security space has spent their entire career working alone to learn new things. We are not an industry known for working together, and the thing I'd never really thought about before is that if we never work together, we never really care about anyone or anything (except ourselves). The security people who are in charge of other security people aren't motivating anyone, which by definition means they're not accomplishing any sort of vision. This holds true for most organizations, since barely keeping the train on the track is pretty much the best case scenario.

If I had to guess, the HR people look at most security groups and see the same dumpster fire we see when we look at IoT.

In the industry today, virtually everyone who is seen as some sort of security leader is what a marketing person would call a "thought leader". Thought leaders aren't leaders. Some do have talent. Some had talent. And some just own a really nice suit. It doesn't matter though. What we end up with is a situation where the only thing anyone worries about is how many Twitter followers they have instead of making a real difference. You make a real difference when you coach and motivate someone else to do great things.

Being a leader with loyal employees would be a monumental step for most organizations. We have no idea who to hire and how to teach them because the leaders don't know how to do those things. Those are skills real leaders have and real leaders develop in their people. I suspect the HR department knows what's wrong with the security groups. They also know we won't listen to them.

There is a security talent shortage, but it's a shortage of leadership talent.

Sunday, June 11, 2017

Humanity isn't proactive

I ran across this article about IoT security the other day

The US Needs to Get Serious About Securing the Internet of Hackable Things

I find articles like this frustrating for the simple fact that everyone keeps talking about security, but nobody is going to do anything. If you look at the history of humanity, we've never been proactive when dealing with problems. We wait until things can't get worse and the only actual option is to fix the problem. For every problem there are at least two options. Option #1 is always "fix it". Option #2 is to ignore it. There could be more options, but generally we pick #2 because it's the least amount of work in the short term. Humanity rarely cares about the long term implications of anything.

I know this isn't popular, but I'm going to say it: We aren't going to fix IoT security for a very long time

I really wish this wasn't true, but it just is. If a senator wants to pretend they're doing something but they're really just ignoring the problem, they hold a hearing and talk about how horrible something is. If they actually want to fix it they propose legislation. I'm not blaming anyone in charge mind you. They're really just doing what they think the people want. If we want the government to fix IoT we have to tell them to do it. Most people don't really care because they don't have a reason to care.

Here's the second point that I suspect many security people won't want to hear. The reason nobody cares about IoT security isn't because they're stupid. This is the narrative we've been telling ourselves for years. They don't care because the cost of doing nothing is substantially less than fixing IoT security. We love telling scary campfire stories about how the botnet was coming from inside the house and how a pacemaker will kill grandpa, but the reality is there hasn't been enough real damage done yet from insecure IoT. I'm not saying there won't ever be, there just hasn't been enough expensive widespread damage done yet to make anyone really care.

In a world filled with insecurity, adding security to your product isn't a feature anyone really cares about. I've been doing research about topics such as pollution, mine safety, auto safety, airline safety, and a number of other problems from our past. There are no good examples of humans deciding to be proactive and solve a problem before it became absolutely horrible. People need a reason to care, and there isn't a reason for IoT security.

Yet.

Someday something might happen that makes people start to care. As we add compute power to literally everything my security brain says there is some sort of horrible doom coming without security. But I've also been saying this for years and it's never really happened. There is a very real possibility that IoT security will just never happen if things never get bad enough.

Sunday, June 4, 2017

Free Market Security

I've been thinking about the concept of free market forces this weekend. The basic idea here is that the price of a good is decided by the supply and demand of the market. If the market demands something that's in short supply, the price goes up. This is basically why the Nintendo Switch is still selling on eBay for more than it would cost in the store. There is demand but there isn't supply. But back to security. Let's think about something I'm going to call "free market security". What if demand and supply were driving security? Or we can flip the question around: what if the market will never drive security?

Of course security isn't really a thing like we think of goods and services in this context. At best we could call it a feature of another product. You can't buy security to add it to your products, it's just sort of something that happens as part of a larger system.

I'm leaning in the direction of secure products. Let's pick on mobile phones because that environment is really interesting. Is the market driving security into phones? I'd say the answer today is a giant "no". Most people buy phones that will never see a security update. They don't even ask about updates or security in most instances. You could argue they don't know this is even a problem.

Apple is the leader here by a wide margin. They have invested substantially into security, but why did they do this? If we want to think about market forces and security, what's the driver? If Apple phones were less secure would the market stop buying them? I suspect the sales wouldn't change at all. I know very few people who buy an iPhone for the security. I know zero people outside of some security professionals who would ever think about this question. Why Apple decided to take these actions is a topic for another day I suspect.

Switching gears, the Android ecosystem is pretty rough in this regard. The vast majority of phones sold today are Android phones: competitively priced, all with similar hardware, and almost all of them completely insecure. People still buy them though. Security is clearly not a feature that's driving anything in this market. I bought a Nexus phone because of security, that one single feature. I am clearly not the norm here though.

The whole point we should be thinking about is the idea of a free market for security. It doesn't exist, and it probably won't. I see it like pollution. There isn't a very large market for products that either don't pollute or are made without polluting in some way. I know there are some people who worry about sustainability, but the vast majority of consumers don't really care. In fact nobody really cared about pollution until a river actually lit on fire. There are still some who don't, even after a river lit on fire.

I think there are many of us in security who keep waiting for demand for more security to appear. We keep watching and waiting; any day now everyone will see why this matters! It's not going to happen though. We do need security more and more each day. The way everything is heading, things aren't looking great. I'd like to think we won't have to wait for the security equivalent of a river catching on fire, but I'm pretty sure that's what it will take.

Monday, May 29, 2017

Stealing from customers

I was having some security conversations last week and cybersecurity insurance came up as a topic. This isn't overly unusual as it's a pretty popular topic, but someone said something that really got me thinking.
What if the insurance covered the customers instead of the companies?
Now I understand that many cybersecurity insurance policies can cover some amount of customer damage and loss, but fundamentally the coverage is for the company that is attacked; customers who have data stolen will maybe get a year of free credit monitoring or some other token service. That's all well and good, but I couldn't help thinking about this problem from another angle. Let's think about insurance in the context of shoplifting. For this thought exercise we're going to use a real store in our example, which won't be exactly right, but the point is to think about the problem, not to get all the minor details correct.

If you're in a busy store shopping and someone steals your wallet, it's generally accepted that the store is not at fault for this theft. Most would put some effort into helping you, but at the end of the day you're probably out of luck if you expect the store to repay you for anything you lost. They almost certainly won't have insurance to cover the theft of customer property in their store.

Now let's also imagine there are things taken from the store, actual merchandise gets stolen. This is called shoplifting. It has a special name and many stores even have special groups to help minimize this damage. They also have insurance to cover some of these losses. Most businesses see some shoplifting as a part of doing business. They account for some volume of this theft when doing their planning and profit calculations.

In the real world, I suspect customers being robbed while in a store isn't very common. If there is a store that gains a reputation for customers having wallets stolen, nobody will shop there. If you visit a store in a rough part of town they might even have a security guard at the door to help keep the riffraff out. This is because no shop wants to be known as a dangerous place. You can't exist as a store with that sort of reputation. Customers need to feel safe.

In the virtual world, all that can be stolen is basically information. Sometimes that information can be equated to actual money, sometimes it's just details about a person. Some will have little to no value like a very well known email address. Sometimes it can have a huge value like a tax identifier that can be used to commit identity theft. It can be very very difficult to know when information is stolen, but also the value of that information taken can vary widely. We also seem to place very little value on our information. Many people will trade it away for a trinket online worth a fraction of the information they just supplied.

Now let's think about insurance. Just like loss prevention insurance, cybersecurity insurance isn't there to protect customers. It exists to help protect the company from the losses of an attack. If customer data is stolen, the customers are not really covered; in many instances there's nothing a customer can do. It could be impossible to prove your information was stolen. Even if it gets used somewhere else, can you prove it came from the business in question?

After spending some time on the question of what if insurance covered the customers, I realize how hard this problem is to deal with. While real world customer theft isn't very common and it's basically not covered, there's probably no hope for information. It's so hard to prove things beyond a reasonable doubt and many of our laws require actual harm to happen before any action can be taken. Proving this harm is very very difficult. We're almost certainly going to need new laws to deal with these situations.

Sunday, May 21, 2017

You know how to fix enterprise patching? Please tell me more!!!

If you pay attention to Twitter at all, you've probably seen people arguing about patching your enterprise after the WannaCry malware. The short story is that Microsoft fixed a very serious security flaw a few months before the malware hit. That means there are quite a few machines on the Internet that haven't applied a critical security update. Of course as you imagine there is plenty of back and forth about updates. There are two basic arguments I keep seeing.

Patching is hard and if you think I can just turn on Windows Update for all these computers running Windows 3.11 on token ring, you've never had to deal with a real enterprise before! You out of touch hipsters don't know what it's really like here. We've seen things, like, real things. We party like it's 1995. GET OFF MY LAWN.

The other side sounds a bit like this.

How can you be running anything that's less than a few hours old? Don't you know what the Internet looks like! If everyone just applied all updates immediately and ran their business in the cloud using agile scrum based SecDevSecOps serverless development practices everything would be fine!

Of course both of these groups are wrong, for basically the same reason. The world isn't simple, and whatever works for you won't work for anyone else. The tie that binds us all together is that everything is broken, all the time. All the things we use are broken, how we use them is broken, and how we manage them is broken. We can't fix them, even though we try, and sometimes we pretend we can.

However ...

Just because everything is broken, that's no excuse to do nothing. It's easy to declare something too hard and give up. A lot of enterprises do this; a lot of enterprise security people use it as the excuse for why they can't update their infrastructure. On the other side though, sometimes moving too fast is more dangerous than moving too slow. Reckless updates are no better than no updates. Sometimes there is nothing we can do. Security as an industry is basically one big giant Kobayashi Maru test.

I have no advice to give on how to fix this problem. I think both groups are silly and wrong, but why I think that is unimportant. The right way is for everyone to have civil conversations where we put ourselves in the other person's shoes. That won't happen though; it never happens, even though basically every leader ever has said that sort of behavior is a good idea. I suggest you double down on whatever bad practices you've hitched your horse to. In the next few months we'll all have an opportunity to show why our way of doing things is the worst way ever, and we'll also find an opportunity to mock someone else for not doing things the way we do.

In this game there are no winners and losers, just you. And you've already lost.

Wednesday, May 3, 2017

Security like it's 2005!

I was reading the newspaper the other day (the real dead tree newspaper) and I came across an op-ed from my congressperson.

Gallagher: Cybersecurity for small business

It's about what you'd expect but comes with some actionable advice! Well, not really. Here it is so you don't have to read the whole thing.

Businesses can start by taking some simple and relatively inexpensive steps to protect themselves, such as:
» Installing antivirus, threat detection and firewall software and systems.
» Encrypting company data and installing security patches to make sure computers and servers are up to date.
» Strengthening password practices, including requiring the use of strong passwords and two-factor authentication.
» Educating employees on how to recognize an attempted attack, including preparing rapid response measures to mitigate the damage of an attack in progress or recently completed.
I read that and my first thought was "how on earth would a small business have a clue about any of this", but then it got me thinking about the bigger problem: this advice isn't even useful in 2017. It sort of made sense a long time ago when this was the way of thinking; it's not valid anymore though.

Let's pick them apart one by one.

Installing antivirus, threat detection and firewall software and systems.
It's no secret that antivirus doesn't really work anymore. It's expensive in terms of cost and resources, and in most settings I've seen it probably causes more trouble than it solves. "Threat detection" doesn't really mean anything. Virtually all systems come with a firewall enabled and some level of software protections that make existing antivirus obsolete. Honestly, this is about as solved as it's going to get. There's no positive value you can add here.

Encrypting company data and installing security patches to make sure computers and servers are up to date
These are two unrelated things. Encrypting data is probably overkill for most settings. Any encryption that's usable doesn't really protect you, and encryption that actually protects you needs a dedicated security team to manage it. Let's not get into an argument about offline vs online data.

Keeping systems updated is a fantastic idea; nobody does it because it's too hard to do. If you're a small business you'll either have zero updates or automatically install them all. The right answer is to use something as a service so you don't have to think about updates. Make sure automatic updates are working on your desktops.

Strengthening password practices, including requiring the use of strong passwords and two-factor authentication

Just use the two-factor auth from your as-a-service provider. If you're managing your own accounts and you lack a dedicated identity team, failure is the only option. Every major cloud provider can help you solve this.
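If you're curious what the second factor usually is under the hood, it's typically TOTP: a time-based one-time code derived from a shared secret. A minimal sketch with the pyotp library, purely to show the mechanism; the whole point above is that you should let your provider run this, not build it yourself:

    import pyotp   # pip install pyotp

    secret = pyotp.random_base32()   # provisioned once, held by both sides
    totp = pyotp.TOTP(secret)        # codes rotate every 30 seconds by default

    code = totp.now()                # what the phone app would display
    print("current code:", code)
    print("valid?", totp.verify(code))   # what the server checks at login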

Educating employees on how to recognize an attempted attack, including preparing rapid response measures to mitigate the damage of an attack in progress or recently completed

Just no. There is value in helping them understand the risks and threats, but this won't work. Social engineering attacks go after the fundamental nature of humanity. You can't stop this with training. The only hope is that we create cold, calculating artificial intelligence that can figure this out before it reaches humans. A number of service providers can even stop some of this today because they have ways to detect anomalies. A small business doesn't and probably never will.


As you can see, this list isn't really practical for anyone to worry about. Why should you have to worry about this today? These sorts of problems have been plaguing small businesses and home users for years. These points are all what I would call "mid 200X" advice. These were the suggestions everyone was giving out around 2005; they didn't really work then, but they made everyone feel better. Most of these bullets aren't actionable unless you have a security person on staff. Would a non-security person have any idea where to start, or what any of these items mean?

The 2017 world has a solution to these problems: use the cloud. Stuff as a Service is without question the way to solve these problems because it makes them go away. There are plenty who will naysay public cloud, citing various breaches, companies leaking data, companies selling data, and plenty of other problems. The cloud isn't magic, but it lets you trade a lot of horrible problems for "slightly bad" ones. I guarantee the problems with the cloud are substantially better than letting most people try to run their own infrastructure. I see this a bit like airplane vs automobile crashes. There are magnitudes more deaths by automobile every year, but it's the airplane crashes that really get the attention. It's much much safer to fly than to drive, just as it's much much safer to use services than to manage your own infrastructure.

Sunday, April 30, 2017

Security fail is people

The other day I ran across someone trying to keep their locker secured by using a combination lock. As you can see in the picture, the lock is on the handle of the locker, not on the loop that actually locks the door. When I saw this I had a good chuckle, took a picture, and put out a snarky tweet. I then started to think about this quite a bit. Is this the user's fault or is this bad design? I'm going to blame bad design on this one. It's easy to blame users, we do it often, but I think in most instances, the problem is the design, not the user. If nothing is ever our fault, we will never improve anything. I suspect this is part of the problem we see across the cybersecurity universe.

On Humans

One of the great truths I'm starting to understand as I deal with humans more and more is that the one thing we all have in common is waves of unpredictability. Sometimes we pay very close attention to our surroundings and situations, sometimes we don't. We can be distracted by someone calling our name, by something that happened earlier in the day, or even by something that happened years ago. If you think you pay very close attention to everything at all times, you're fooling yourself. We are squishy bags of confusing emotions that don't always make sense.

In the above picture, I can see a number of ways this happens. Maybe the person was very old and couldn't see; I have bad eyesight and could see this happening. Maybe they were talking to a friend and didn't notice where they put the lock. What if they dropped their phone moments before putting the lock on the door? Maybe they're just a clueless idiot who can't use locks! Well, not that last one.

This example is bad design. Why is there a handle that can hold a lock directly above the loop that is supposed to hold the lock? I can think of a few ways to solve this. The handle could be something other than a loop. A pull knob would be a lot harder to screw up. The handle could be farther up, or down. The loop could be larger or in a different place. No matter how you solve this, this is just a bad design. But we blame the user. We get a good laugh at a person making a simple mistake. Someday we'll make a simple mistake then blame bad design. It is also human nature to find someone or something else to blame.

The question I keep wondering about: did whoever designed this door think about security in any way? Do you think they wondered how the system could and would fail? How it would be misused? How it could be broken? In this case I doubt anyone was thinking about security failures for the door of a locker; it's just a locker. They probably told the intern to go draw a rectangle and put a handle on it. If I could find the manufacturer and tell them about this, would they listen? I'd probably get pushed into the "crazy old kook" queue. You can even wonder if anyone really cares about locker security.

Wrapping up a post like this is always tricky. I could give advice about secure design, or tell everyone they should consult with a security expert. Maybe the answer is better user education (haha no). I think I'll target this at the security people who see something like this, take a picture, then write a tweet about how stupid someone is. We can use examples like this to learn and shape our own way of thinking. It's easy to use snark when we see something like this. The best thing we can do is make note of what we see, think about how this could have happened, and someday use it as an example to make something we're building better. We can't fix the world, but we can at least teach ourselves.

Monday, April 24, 2017

I have seen the future, and it is bug bounties


Every now and then I see something on a blog or Twitter about how you can't replace a pen test with a bug bounty. For a long time I agreed with this, but I've recently changed my mind. I know this isn't a super popular opinion (yet), and I don't think either side of this argument is exactly right. Fundamentally, the future of looking for issues will not be the pen test. It won't really be bug bounties either, but I'm going to predict pen testing will evolve into what we currently call bug bounties.

First let's talk about a pen test. There's nothing wrong with getting a pen test; I'd suggest everyone go through a few just to see what it's like. I want to be clear that I'm not saying pen testing is bad. I'm going to be making the argument for why it's not the future. It is the present; many organizations require them for a variety of reasons. They will continue to be a thing for a very long time. If you can only pick one, you should probably choose a pen test today, as it's at least a known known. Bug bounties are still known unknowns for most of us.

I also want to clarify that internal pen testing teams don't fall under this post. Internal teams are far more focused and have special knowledge that an outside company never will. It's my opinion that an internal team is and will always be superior to an outside pen test or bug bounty. Of course a lot of organizations can't afford to keep a dedicated internal team, so they turn to the outside.

So anyhow, it's time for a pen test. You find a company to conduct it, you scope what will be tested (it can't be everything), you agree on various timelines, then things get underway. After perhaps a week of testing, you have a very, very long and detailed report of what was found. Here's the thing about a pen test: you're paying someone to look for problems. You get what you pay for; you'll get a list of problems, usually a huge list. Everyone knows that the bigger the list, the better the pen test! But here's the dirty secret. Most of the results won't ever be fixed. Most results will fall below your internal bug bar. You paid for a ton of issues, you got a ton of issues, then you threw most of them out. Of course it's quite likely there will be high priority problems found, which is great. Those are what you really care about, not the unexciting problems that make up 95% of the report. What's your cost per issue fixed from that pen test?

Now let's look at how a bug bounty works. You find a company to run the bounty (it's probably not worth doing this yourself, there are many logistics). You scope what will be tested. You can agree on certain timelines and/or payout limits. Then things get underway. Here's where it's very different though: you're paying for the scope of the bounty. You get what you pay for, so there is an aspect of control. If you're only paying for critical bugs then, by definition, you'll only get critical bugs. Of course there will be a certain number of false positives. If I had to guess, it's similar to a pen test today, but it's going to decrease as these organizations figure out how to cut down on noise. I know HackerOne is doing some clever things to prevent noise.

My point to this whole post revolves around getting what you pay for: essentially a cost-per-issue-fixed model instead of the current cost-per-issue-found model. The real difference is that in the case of a bug bounty, you can control the scope of what comes in. In no way am I suggesting a pen test is a bad idea, I'm simply suggesting that a 200 page report isn't very useful. Of course if a pen test returned three issues, you'd probably be pretty upset when paying the bill. We all have finite resources, so naturally we can't and won't fix minor bugs. It's just how things work. Today at best you'll get about the same results from a bug bounty and a pen test, but I see a bug bounty as having room to improve. The pen test model isn't exactly full of exciting innovation.
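To make the cost-per-issue-fixed point concrete, here's the back-of-the-envelope version. Every number below is invented purely for illustration:

    # Hypothetical pen test: flat fee, huge report, most findings
    # land below the internal bug bar and never get fixed.
    pen_test_cost = 30_000
    issues_found = 200
    issues_fixed = 12
    print(pen_test_cost / issues_found)   # 150.0 per issue found
    print(pen_test_cost / issues_fixed)   # 2500.0 per issue fixed

    # Hypothetical bounty scoped to criticals only: by construction,
    # you only pay for issues you intend to fix.
    payout_per_critical = 2_000
    criticals_reported = 10
    bounty_cost = payout_per_critical * criticals_reported
    print(bounty_cost / criticals_reported)   # 2000.0 per issue fixed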

All this said, not every product and company will be able to attract enough interest in a bug bounty. Let's face it, the real purpose behind all this is to raise the security profiles of everyone involved. Some organizations will have to use a pen-test-like model to get their products and services investigated. This is why the bug bounty alone won't be a long-term viable option for everyone. There are too many bugs and not enough researchers.

Now for the bit about the future. In the near future we will see the pendulum swing from pen testing to bug bounties. The next swing of the pendulum after bug bounties will be automation. Humans aren't very good at digging through huge amounts of data, but computers are. What we're really good at, and computers are (currently) really bad at, is finding new and exciting ways to break systems. We once thought double free bugs couldn't be exploited. We didn't see a problem with NULL pointer dereferences. Someone once thought deserializing objects was a neat idea. I would rather see humans working on the future of security instead of exploiting the past. The future of the bug bounty can be new attack methods instead of finding bugs. We have some work to do; I've not seen an automated scanner that I'd even call "almost not terrible". It will happen though, tools always start terrible and get better through the natural march of progress. The road to this unicorn future will pass through bug bounties. However, if we don't have automation ready on the other side, it's nothing but dragons.

Sunday, April 16, 2017

Crawl, Walk, Drive

It's that time of year again. I don't mean when all the government secrets are leaked onto the Internet by some unknown organization. I mean the time of year when it's unsafe to cross streets or ride your bike, at least in the United States. It's possible more civilized countries don't have this problem. I enjoy getting around without a car, but I feel like the number of near misses has gone up a fair bit, and it's always a person much younger than me with someone much older than them in the passenger seat. At first I didn't think much about this and just dreamed of how self-driving cars will rid us of the horror that is human drivers. After the last near fatality while crossing the street, it dawned on me that this is the time of year when all the kids have their driving learner's permits. I do think I preferred not knowing this, since now I know my adversary. It has a name, and that name is "youth".

For those of you who aren't familiar with how this works in the US. Essentially after less training than is given to a typical volunteer, a young person generally around the age of 16 is given the ability to drive a car, on real streets, as long as there is a "responsible adult" in the car with them. We know this is impossible as all humans are terribly irresponsible drivers. They then spend a few months almost getting in accidents, take a proper test administered by someone who has one of the few jobs worse than IT security, and generally they end up with a real driver's license, ensuring we never run out of terrible human drivers.

There are no doubt a ton of stories that could be told here about mentorship, learning, encouraging, leadership, or teaching. I'm not going to talk about any of that today. I think often about how we raise up the next generation of security goons, but I'm tired of talking about how we're all terrible people and nobody likes us, at least for this week.

I want to discuss the challenges of dealing with someone who is very new, very ambitious, and very dangerous. There are always going to be "new" people in any group or organization. Eventually they learn the rules they need to know, generally because they screw something up and someone yells at them about it. Goodness knows I learned most everything I know like this. But the point is, as security people, we have to not only do some yelling but we have to keep things in order while the new person is busy making a mess of everything. The yelling can help make us feel better, but we still have to ensure things can't go too far off the rails.

In many instances the new person will have some sort of mentor. They will of course try to keep them on task and learning useful things, but just like the parent of our student driver, they probably spend more time gaping in terror than they do teaching anything useful. If things really go crazy you can blame them someday, but at the beginning they're just busy hanging on trying not to soil themselves in an attempt to stay composed.

This brings us back to the security group. If you're in a large organization, every day is new-person-screwing-something-up day. I can't even begin to imagine what it must be like at a public cloud provider, where you not only have new employees but all of your customers are basically engaged in ongoing risky behavior. The solution to this problem is the same as our student driver problem: stop letting humans operate the machines. I'm not talking about the new people, I'm talking about the security people. If you don't have heavy use of automation, if you're not aggregating logs and having algorithms look for problems for example, you've already lost the battle.

Humans in general are bad at repetitive boring tasks. Driving falls under this category, and a lot of security work does too. I touched on the idea of measuring what you do in my last post, and I'm going to tie these together in the next post. We do a lot of things that don't make sense if we measure them, but we struggle to measure security. I suspect part of the reason is that for a long time we were the passenger with the student drivers. If we emerged at the end of the ride alive, we were mostly happy.

It's time to become the group building the future of cars, not the one waiting for a horrible crash to happen. The only way we can do that is if we start to understand and measure what works and what doesn't. Everything from ROI to how effective our policies and procedures are. Make sure you come back next week, assuming I'm not run down by a student driver before then.

Monday, April 10, 2017

The obvious answer is never the secure answer

One of the few themes that comes up time and time again when we talk about security is how bad people tend to be at understanding what's actually going on. This isn't really anyone's fault, we're expecting people to go against what is essentially millions of years of evolution that created our behaviors. Most security problems revolve around the human being the weak link and doing something that is completely expected and completely wrong.

This brings us to a news story I ran across that reminded me of how bad humans can be at dealing with actual risk. It seems that peanut-free schools don't work. I think most people would expect a school that bans peanuts to have fewer peanut related incidents than a school that doesn't. This seems like a no brainer, but if there's anything I've learned from doing security work for as long as I have, it's that the obvious answer is always wrong.

The report does have a nugget of info where it points out that having a peanut-free table at lunch seems to work. I suspect this is different from a full-on ban: in this case you have the kids who are sensitive to peanuts sit at a table where everyone knows peanuts are bad. There is of course a certain amount of social stigma that comes with having to sit at a special table, but I suspect anyone reading this often sat alone during school lunch for a very different reason ;)

This is similar to Portugal decriminalizing all drugs and ending up with one of the lowest overdose rates in Europe. It seems logical that if you want fewer drugs you make them illegal. It doesn't make sense to our brains that if you want fewer drugs and fewer problems you loosen the rules. There are countless other examples of reality seeming to be totally backwards from what we think should be true.

So that brings us to security. There are lessons in stories like these. It's not to do the opposite of what makes sense though. The lesson is to use real data to make decisions. If you think something is true and you can't prove it either way, you could be making decisions that are actually hurting instead of helping. It's a bit like the scientific method. You have a hypothesis, you test it, then you either update your hypothesis and try again or you end up with proof.
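In practice, "use real data" can be as small as a significance test on incident counts before and after a change. A sketch with invented numbers, using scipy:

    from scipy.stats import chi2_contingency   # pip install scipy

    # Invented example: phishing clicks per 1,000 users,
    # before and after a training program.
    #           clicked  didn't click
    before = [48, 952]
    after = [44, 956]

    chi2, p, dof, expected = chi2_contingency([before, after])
    print(f"p-value: {p:.3f}")
    if p > 0.05:
        print("No evidence the training changed anything; revise the hypothesis.")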

In the near future we'll talk about measuring things; how to do it, what's important, and why it will matter for solving your problems.

Sunday, April 2, 2017

The expectation of security

If you listen to my podcast (which you should be doing already), I had a bit of a rant at the start this week about an assignment my son had over the weekend. He wasn't supposed to use any "screens", which was part of a drug addiction lesson. I get where this lesson is going, but I've really been thinking about the bigger idea of expectations and reality. This assignment is a great example of someone failing to understand that the world has changed around them.

What I mean is that expecting anyone to go without a "screen" for a weekend doesn't make sense. A substantial number of activities we do today rely on some sort of screen, because we've replaced less efficient ways of accomplishing tasks with these screens. Need to look something up? That's a screen. What's the weather? Screen. News? Screen. Reading a book? Screen!

You get the idea. We've replaced a large number of books and papers with a screen. But this is a security blog, so what's the point? The point is that I see a lot of similarities with a lot of security people. The world has changed quite a bit over the last few years, and I feel like a number of our rules come from the same mindset as thinking that spending time without a screen is some sort of learning experience. I bet we can all think of security people we know who think it's still 1995. If you don't know any, you might be that person (time for some self reflection).

Let's look at some examples.

You need to change your password every 90 days.
Few people think this is a good idea anymore; even the NIST guidance says it isn't. I hear it come up on a regular basis though. Password concepts have changed a lot over the last few years, but most people seem to be stuck somewhere between five and ten years ago.

If we put it behind the firewall we don't have to worry about securing it.
Remember when firewalls were magic? Me neither. There was a time from probably 1995 to 2007 or so that a lot of people thought firewalls were magic. Very recently the concept of zero trust networking has come to be a real thing. You shouldn't trust your network, it's probably compromised.

Telling someone they can't do something because it's insecure.
Remember when we used to talk about how security is the industry of "no"? That's not true anymore because now when you tell someone "no" they just go to Amazon and buy $2.38 worth of computing and do whatever it is they need to get done. Shadow IT isn't the problem, it's the solution to the problem that was the security people. It's fairly well accepted by the new trailblazers that "no" isn't an option, the only option is to work together to minimize risk.

I could build an enormous list of examples like this. The whole point is that everything changes, and we should always be asking ourselves if something still makes sense. It's very easy for us to decide change is dangerous and scary. I would argue that not understanding the new security norms is actually more dangerous than having no security knowledge at all. This is probably one of the few industries where old knowledge may be worse than no knowledge. Imagine if your doctor was using the best ideas and tools from 1875. You'd almost certainly find a new doctor. Password policies and firewalls are our version of bloodletting and leeches. We have a long way to go, and I have no doubt we all have something to contribute.

Monday, March 27, 2017

Remember kids, if you're going to disclose, disclose responsibly!

If you pay any attention to the security universe, you're aware that Tavis Ormandy is basically on fire right now with his security research. He found the Cloudflare data leak issue a few weeks back, and is currently going to town on LastPass. The LastPass crew seems to be dealing with this pretty well; I'm not seeing a lot of complaining, mostly just information and fixes, which is the right way to handle these things.

There are however a bunch of people complaining about how Tavis, and Google Project Zero in general, tend to disclose issues. These people are wrong. I've been on the receiving end, and it's not fun, but as crazy as it may seem from the outside, the Project Zero crew knows what they're doing.

Firstly let's get two things out of the way.

1) If nobody is complaining about what you're doing, you're not doing anything interesting (Tavis is clearly doing very interesting things).

2) Disclosure is hard and there isn't a perfect solution. What Project Zero does may seem heartless to some, but it's currently the best way. The alternative is an abusive relationship.

A long time ago I was a vendor receiving security reports from Tavis, and I won't lie, it wasn't fun. I remember complaining and trying to slow things down to a pace I thought was more reasonable. Few of us have any extra time and a new vulnerability disclosure means there's extra work to do. Sometimes a disclosure isn't very detailed or lacks important information. The disclosure date proposed may not line up with product schedules. You could have another more important issue you're working on already. There are lots of reasons to dread dealing with these issues as a vendor.

All that said, it's still OK to complain, and every now and then the criticism is good. We should always be thinking about how we do things; what makes sense today won't make sense tomorrow. The way Google Project Zero does disclosure today would have seemed pretty crazy even five years ago. Now it's how things have to work. The world moves very fast now, and as we've seen from various document dumps over the last few years, there are no secrets. If you think you can keep a security issue quiet for a year you are sadly mistaken. It's possible that was once true (I suspect it never was, but that's another conversation). Either way it's not true anymore. If you know about a security flaw it's quite likely someone else does too, and once you start talking to another group about it, the odds of a leak grow at an alarming rate.

The way things used to work is changing rapidly. Anytime there is change, there are always trailblazers and laggards. We know we can't develop perfectly secure software, but we can respond quickly. Spend time where you can make a difference, not chasing the mythical perfect solution.

If your main contribution to society is complaining, you should probably rethink your purpose.

Thursday, March 23, 2017

Inverse Law of CVEs

I've started a project to put the CVE data into Elasticsearch and see if there is anything clever we can learn from it. Even if there isn't anything overly clever, it's fun to do. And I get to make pretty graphs, which everyone likes to look at.
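
For anyone who wants to play along, here's a rough sketch of the indexing step. It assumes an Elasticsearch instance on localhost:9200 and a hypothetical local file "cve-entries.json" holding a list of CVE records; the field names are illustrative assumptions, not a real feed format.

    import json
    import requests

    ES = "http://localhost:9200"

    with open("cve-entries.json") as f:
        entries = json.load(f)

    for entry in entries:
        doc = {
            "id": entry["id"],                       # e.g. "CVE-2017-0001"
            "product": entry["product"],
            "year": int(entry["id"].split("-")[1]),  # year from the CVE ID
            "summary": entry["summary"],
        }
        # One document per CVE; Elasticsearch builds the index as we go.
        requests.post(ES + "/cve/_doc", json=doc)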

I stuck a few of my early results on Twitter because it seemed like a fun thing to do. One of the graphs I put up compared the 3 BSDs; the image is below.

[Image: CVE graphs for FreeBSD, OpenBSD, and NetBSD]

None of these graphs has enough data to really draw any conclusions from; again, I did this for fun. I did get one response claiming NetBSD is the best because its graph is the smallest. I've actually heard this argument a few times over the past month, so I decided it's time to write about it, especially since I'm sure I'll find many more examples like this while I'm weeding through this mountain of CVE data.
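
Counting CVE entries per product is a one-aggregation query once the data is indexed. A sketch, assuming the same hypothetical index and field names as above:

    import requests

    # Count CVE documents per product with a terms aggregation.
    query = {
        "size": 0,
        "aggs": {
            "per_product": {"terms": {"field": "product.keyword"}},
        },
    }
    resp = requests.post("http://localhost:9200/cve/_search", json=query)
    for bucket in resp.json()["aggregations"]["per_product"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])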

Let's make up a new law; I'll call it the "Inverse Law of CVEs". It goes like this: "The fewer CVE IDs something has, the less secure it is."

That doesn't make sense to most people. If you have something that is bad, fewer bad things is certainly better than more bad things. This is generally true for physical concepts our brains can understand. Less crime is good. Fewer accidents are good. When it comes to how many CVE IDs your project or product has, this idea gets turned on its head. Fewer is probably bad when we think about CVE IDs. There's probably a line somewhere where, once you cross it, things flip back to bad (wait until I get to PHP). We'll call that the security Maginot Line, because bad security decided to sneak in through the north.

If you have something with very very few CVE IDs, it doesn't mean it's secure; it means nobody is looking for security issues. It's easy to understand that if something is used by a large, diverse set of users, it will get more bug reports (some of which will be security bugs) and more security attention from both good guys and bad guys, because it's a bigger target. If something has very few users, it's quite likely there hasn't been much security attention paid to it. I suspect what the above graphs really mean is FreeBSD is more popular than OpenBSD, which is more popular than NetBSD. Random internet searches seem to back this up.

I'm not entirely sure what to do with all this data. Part of the fun is understanding how to classify it all. I'm not a data scientist, so there will be much learning. If you have any ideas, by all means let me know; I'm quite open to suggestions. Once I have better data I may try to find the point at which a project has enough CVE IDs to be considered on the right path, and which projects have so many they've crossed over to the bad place.

Sunday, March 12, 2017

Security, Consumer Reports, and Failure

Last week there was a story about Consumer Reports doing security testing of products.

As one can imagine there were a fair number of "they'll get it wrong" sort of comments. They will get it wrong, at first, but that's not a reason to pick on these guys. They're quite brave to take this task on; it's nearly impossible if you think about the state of security (especially consumer security). But this is how things start. There is no industry that has gone from broken to perfect in one step. It's a long hard road when you have to deal with systemic problems in an industry. Consumer product security problems may be larger and more complex than any other industry has ever had to solve, thanks to things such as globalization and how inexpensive tiny computers have become.

If you think about the auto industry, you're talking about something that costs thousands of dollars. Safety is easy to justify when its cost is a small fraction of the price of the vehicle. Now think about tiny computing devices: you could be talking about chips that cost less than one dollar. If the cost of security and safety exceeds the initial cost of the computing hardware, it can be impossible to justify. If adding security doubles the cost of something, manufacturers will try very hard to find ways around including such features. There are always bizarre technicalities that can help avoid regulation; groups like Consumer Reports help with accountability.

Here is where Consumer Reports and other testing labs will be incredibly important to this story. Even if there is regulation a manufacturer chooses to ignore, a group like Consumer Reports can still review the product. Consumer Reports will get things very wrong at first, sometimes it will be hilariously wrong. But that’s OK, it’s how everything starts. If you look back at any sort of safety and security in the consumer space, it took a long time, sometimes decades, to get it right. Cybersecurity will be no different, it’s going to take a long time to even understand the problem.

Our default reaction to mistakes is often one of ridicule; this is one of those times we have to be mindful of how dangerous that attitude is. If we see a group trying to do the right thing but getting it wrong, we need to offer advice, not mockery. If we don't engage in a useful and serious way, nobody will take us seriously. There are a lot of smart security folks out there; we can help make the world a better place this time. Sometimes things can look hopeless and horrible, but things will get better. It'll take time, it won't be easy, but things will get better thanks to efforts such as this one.

Thursday, March 2, 2017

What the Oscars can teach us about security

If you watched the 89th Academy Awards you saw a pretty big mistake at the end of the show, the short story is Warren Beatty was handed the wrong envelope, he opened it, looked at it, then gave it to Faye Dunaway to read, which she did. The wrong people came on stage and started giving speeches, confused scrambling happened, and the correct winner was brought on stage. No doubt this will be talked about for many years to come as one of the most interesting and exciting events in the history of the awards ceremony.

People make mistakes; we won't dwell on how the wrong envelope made it into the presenters' hands. The details of how the error came to be aren't what's important for this discussion. The important lesson is to watch Warren Beatty's behavior. He clearly knew something was wrong; if you watch the video, you can tell things aren't right. But he kept going, gave the card to Faye Dunaway, and she read the name of the movie on it. These aren't young amateurs, they're seasoned actors. It's not their first rodeo. So why did this happen?

The lesson for us all is to understand that when things start to break down, people will fall back to their instincts. The presenters knew their job was to open the card and read the name. Their job wasn’t to think about it or question what they were handed. As soon as they knew something was wrong, they went on autopilot and did what was expected. This happens with computer security all the time. If people get a scary phishing email, they will often go into autopilot and do things they wouldn’t do if they kept a level head. Most attackers know how this works and they prey on this behavior. It’s really easy to claim you’d never be so stupid as to download that attachment or click on that link, but you’re not under stress. Once you’re under stress, everything changes.

This is why police, firefighters, and soldiers get a lot of training. You want these people to do the right thing when they enter autopilot mode. As soon as a situation starts to get out of hand, training kicks in and these people will do whatever they were trained to do without thinking about it. Training works, there’s a reason they train so much. Most people aren’t trained like this so they generally make poor decisions when under stress.

So what should we take away from all this? The thing we as security professionals need to keep in mind is how this behavior works. If you have a system that isn't essentially "secure by default", anytime someone finds themselves under mental stress they're going to take the path of least resistance. If the path of least resistance is also the dangerous path, you're not designing for security. Even security experts have this problem; we don't have superpowers that let us make good choices in times of high stress. It doesn't matter how smart you think you are: under enough stress you will go into autopilot, and you will make bad choices if bad choices are the defaults.
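
"Secure by default" is a design property, not a training problem. Here's a tiny, hypothetical illustration of the idea in code: the safe behavior is what you get on autopilot, and the dangerous path requires a deliberate, visible choice.

    import requests

    def fetch(url, *, insecure_skip_verify=False):
        """Fetch a URL; TLS verification is on unless explicitly disabled."""
        if insecure_skip_verify:
            # The dangerous path still exists, but nobody lands
            # here by accident while under stress.
            return requests.get(url, verify=False)
        return requests.get(url, verify=True)

Someone in a hurry who just calls fetch(url) gets the safe behavior for free, which is exactly the point.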

Thursday, February 23, 2017

SHA-1 is dead, long live SHA-1!

Unless you've been living under a rock, you heard that some researchers managed to create a SHA-1 collision. The short story of why this matters: the whole purpose of a hashing algorithm is to make it infeasible to generate collisions on purpose. Truly impossible is, unfortunately, impossible, so in reality we just make sure it's really, really hard to generate a collision. Thanks to Moore's Law, hard things don't stay hard forever. This is why MD5 had to go live on a farm out in the country, and we're not allowed to see it anymore ... because it's having too much fun. SHA-1 will get to join it soon.
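
If you've never poked at a hash function, a couple of lines of Python show the property we care about (hashlib is in the standard library):

    import hashlib

    # Different inputs should always produce different digests.
    a = hashlib.sha1(b"first message").hexdigest()
    b = hashlib.sha1(b"second message").hexdigest()
    print(a, b)
    assert a != b  # a "collision" is exactly this assertion failing

The researchers produced two different PDF files with identical SHA-1 digests, which is that property failing on purpose.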

The details about this attack are widely published at this point, but that’s not what I want to discuss, I want to bring things up a level and discuss the problem of algorithm deprecation. SHA-1 was basically on the way out. We knew this day was coming, we just didn’t know when. The attack isn’t super practical yet, but give it a few years and I’m sure there will be some interesting breakthroughs against SHA-1. SHA-2 will be next, which is why SHA-3 is a thing now. At the end of the day though this is why we can’t have nice things.

A long time ago there weren’t a bunch of expired standards. There were mostly just current standards and what we would call “old” standards. We kept them around because it was less work than telling them we didn’t want to be friends anymore. Sure they might show up and eat a few chips now and then, but nobody really cared. Then researchers started to look at these old algorithms and protocols as a way to attack modern systems. That’s when things got crazy.

It’s a bit like someone bribing one of your old annoying friends to sneak the attacker through your back door during a party. The friend knows you don’t really like him anymore, so it won’t really matter if he gets caught. Thus began the long and horrible journey to start marking things as unsafe. Remember how long it took before MD5 wasn’t used anymore? How about SSL 2 or SSHv1? It’s not easy to get rid of widely used standards even if they’re unsafe. Anytime something works it won't be replaced without a good reason. Good reasons are easier to find these days than they were even a few years ago.

This brings us to the recent SHA-1 news. I think it's going better this time, a lot better. The browsers already have plans to deprecate it. There are plenty of good replacements ready to go. Did we ever discuss killing off MD5 before it was clearly dead? Not really. It wasn't until a zero day MD5 attack was made public that it was decided maybe we should stop using it. Everyone knew it was bad for them, but they figured it wasn't that big of a deal. I feel like everyone understands SHA-1 isn't a huge deal yet, but we should get rid of it now while there's still time.
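
If you're wondering whether your own services are part of the problem, checking which hash a certificate's signature uses is straightforward. A sketch using the standard library plus the third-party "cryptography" package; the host is just an example:

    import ssl
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    # Grab the server's certificate and report its signature hash.
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
    print(cert.signature_hash_algorithm.name)  # "sha256", or the dreaded "sha1"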

This is the world we live in now. If you can't move quickly you will fail. It's not a competitive advantage, it's a requirement for survival. Old standards no longer ride into the sunset quietly, they get their lunch money stolen, jacket ripped, then hung by a belt loop on the fence.

Sunday, February 12, 2017

Reality Based Security

If I demand you jump off the roof and fly, and you say no, can I call you a defeatist? What would you think? To a reasonable person it would be insane to associate this attitude with being a defeatist. There are certain expectations that fall within the confines of reality. Expecting things to happen outside of those rules is reckless and can often be dangerous.

Yet in the universe of cybersecurity we do this constantly. Anyone who doesn’t pretend we can fix problems is a defeatist and part of the problem. We just have to work harder and not claim something can’t be done, that’s how we’ll fix everything! After being called a defeatist during a discussion, I decided to write some things down. We spend a lot of time trying to fly off of roofs instead of looking for practical realistic solutions for our security problems.

The way cybersecurity works today, someone will say "this is a problem". Maybe it's IoT, or ransomware, or antivirus, secure coding, security vulnerabilities; whatever, pick something, there's plenty to choose from. It's rarely in a general context though; it will be somewhat specific, for example "we have to teach developers how to stop adding security flaws to software". Someone else will say "we can't fix that", then they get called a defeatist for being negative, and it's assumed the defeatists are the problem. The real problem is they're not wrong. It can't be fixed. We will never see humans write error-free code; there is no amount of training we can give them. Pretending it can be fixed is what's dangerous. Pretending we can fix problems we can't is lying.

The world isn't fairy dust and rainbows. We can't wish for more security and get it. We can't claim to be working on a problem if we have no clue what it is or how to fix it. I'll pick on IoT for a moment. How many IoT security "experts" exist now? The number is nontrivial. Does anyone have any idea how to understand the IoT security problems? Talking about how to fix IoT doesn't make sense today; we don't even really understand what's wrong. Is the problem devices that never get updates? Poor authentication? Maybe managing the devices is the problem? It's not one thing, it's a lot of things put together in a martini shaker, shaken up, then dumped out in a heap. We can't fix IoT because in many instances we don't even know what it is. I'm not a defeatist, I'm trying to live in reality and think about the actual problems. It's a lot easier to focus on solutions for problems you don't understand: you will find a solution, it just won't make sense.

So what do we do now? There isn't a quick answer, there isn't an easy answer. The first step is to admit you have a problem. Defeatists are a real thing; there's no question about it. The trick is to look at the people who are claiming something can't be fixed. Are they giving up, or are they trying to reframe the conversation? If you declare them defeatists, the conversation is over; you killed it. On the other side of the coin, pretending things are fine is more dangerous than giving up; you're living in a fantasy. The only correct solution is reality based security. Have honest and real conversations, don't be afraid to ask hard questions, don't be afraid to declare something unfixable. An unfixable problem is really just one that needs new ideas.

You can't fly off the roof, but trampolines are pretty awesome.

I'm @joshbressers on Twitter, talk to me.

Monday, February 6, 2017

There are no militant moderates in security

There are no militant moderates. Moderates never stand out for having a crazy opinion or idea, and moderates don't pick fights with anyone they can. Moderates get the work done. Look at the current political climate: how many moderate, reasonable views get attention? Exactly. I'm not going to talk about politics; that dumpster fire doesn't need any more attention than it's already getting. I am however going to discuss a topic I'm calling "security moderates", the people who are doing the real security work. They are sane, reasonable, smart, and actually doing things that matter. You might be one; you might know one or two. If I had to guess, they're a pretty big group. And they get ignored quite a lot, because they're too busy getting work done to put on a show.

I'm going to split existing security talent into a sort of spectrum. There's nothing more fun than grouping people together in overly generalized ways. I'm going to use three groups. You have the old guard on one side (I dare not mention left or right lest the political types have a fit). This is the crowd I wrote about last week: the people who want to protect their existing empires. On the other side you have a lot of crazy untested ideas, and nobody knows whether most of them work. Most of them won't; at best they're a distraction, at worst they're dangerous.

Then in the middle we have our moderates. This group is the vast majority of security practitioners. The old guard think these people are a bunch of idiots who can’t possibly know as much as they do. After all, 1999 was the high point of security! The new crazy ideas group thinks these people are wasting their time on old ideas, their new hip ideas are the future. Have you actually seen homomorphic end point at rest encryption antivirus? It’s totally the future!

Now here’s the real challenge. How many conferences and journals have papers about reasonable practices that work? None. They want sensational talks about the new and exciting future, or maybe just new and exciting. In a way I don’t blame them, new and exciting is, well, new and exciting. I also think this is doing a disservice to the people getting work done in many ways. Security has never been an industry that has made huge leaps driven by new technology. It’s been an industry that has slowly marched forward (not fast enough, but that’s another topic). Some industries see huge breakthroughs every now and then. Think about how relativity changed physics overnight. I won’t say security will never see such a breakthrough, but I think we would be foolish to hope for one. The reality is our progress is made slowly and methodically. This is why putting a huge focus on crazy new ideas isn’t helping, it’s distracting. How many of those new and crazy ideas from a year ago are even still ideas anymore? Not many.

What do we do about this sad state of affairs? We have to give the silent majority a voice. Anyone reading this has done something interesting and useful. In some way you've moved the industry forward; you may not realize it because it's not sensational. You may not want to talk about it because you don't think it's important, or you don't like talking, or you're sick of the fringe players criticizing everything you do. The first thing you should do is think about what you're doing that works. We all have little tricks we like to use that really make a difference.

Next, write it down. This is harder than it sounds, but it's important. Most of these ideas aren't going to be full papers, and that's OK. Industry-changing ideas don't really exist; small incremental change is what we need. It could be something simple like adding an extra step during application deployment, or even adding a banned function to your banned.h file (see the sketch after this paragraph). The important part is explaining what you did, why you did it, and what the outcome was (even if it was a failure, sharing things that don't work has value). Some ideas could be conference talks, but you still need to write things down to get talks accepted. Just writing it down isn't enough though. If nobody ever sees your writing, you're not really writing, so publish it somewhere; it's never been easier. Blogs are free, and there are plenty of groups to find and interact with (Reddit, forums, Twitter, Facebook). There is literally a security conference every day of the year. Find a venue, tell your story.
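
To make the banned.h idea concrete, here's a small, hypothetical version of that kind of incremental check: a script that fails the build if any C file calls a function on a banned list. The list is illustrative, not exhaustive.

    import os
    import re
    import sys

    # Fail if any C source calls one of the classic unsafe functions.
    BANNED = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

    def scan(root="."):
        hits = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                if not name.endswith((".c", ".h")):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if BANNED.search(line):
                            hits.append("%s:%d: %s" % (path, lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        problems = scan()
        print("\n".join(problems))
        sys.exit(1 if problems else 0)

Wire something like this into CI and it runs on every change; that's exactly the kind of small, boring, effective practice worth writing up.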

There are no militant moderates, and this is a good thing. We have enough militants with agendas. What we need more than ever are reasonable and sane moderates with great ideas, making a difference every day. If the sane middle starts to work together, things will get better, and we will see the change we need.

Have an idea how to do this? Let me know: @joshbressers on Twitter

Sunday, January 29, 2017

Everything you know about security is wrong, stop protecting your empire!

Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore these sorts of statements, but I'm seeing them often enough that it's time to write something up. We're in the middle of disruptive change. That means the way security used to work doesn't work anymore (some people think it does), and in the near future it won't work at all. In some instances it will actually be harmful, if it isn't already.

The real reason I'm writing this up is that there are really two types of leaders: those who lead to inspire change, and those who build empires. For empire builders, change is the enemy; they don't welcome the new disrupted future. Here's a list of the four things I ran into this week that gave me heartburn.

  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything

Let’s start with AV. A long time ago everyone installed an antivirus application. It’s just what you did, sort of like taking your vitamins. Most people can’t say why, they just know if they didn't do this everyone would think they're weird. Here’s the question for you to think about though: How many times did your AV actually catch something? I bet the answer is very very low, like number of times you’ve seen bigfoot low. And how many times have you seen AV not stop malware? Probably more times than you’ve seen bigfoot. Today malware is big business, they likely outspend the AV companies on R&D. You probably have some control in that phone book sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.

Usability vs security is one of my favorite topics these days. Security lost. It’s not that usability won, it’s that there was never really a battle. Many of us security types don’t realize that though. We believe that there is some eternal struggle between security and usability where we will make reasonable and sound tradeoffs between improving the security of a system and adding a text field here and an extra button there. What really happened was the designers asked to use the bathroom and snuck out through the window. We’re waiting for them to come back and discuss where to add in all our great ideas on security.

Another fan favorite: the best way to improve network security is to lock everything down, then slowly open things up as devices try to get out. See the above conversation about usability. If you do this, people just work around you. They'll use their own devices with network access, or just work from home. I've seen employees using the open wifi of the coffee shop downstairs. Don't lock things down; solve problems that matter. If you think this is a neat idea, you're probably the single biggest security threat your organization has today, so at least identifying the problem won't take long.

And lastly, let's talk about the trusty old firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won't say they have no value; they're just not great security features anymore. Most network traffic is encrypted (or should be), and users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, and you can't trust your network. Do keep them at the edge though. Zero trust networking doesn't mean you should purposely build a hostile network.
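
What does "don't trust the network" look like in practice? Authenticate both ends of every connection instead of trusting network location. A minimal sketch; the URL and file names are placeholders for your own service and PKI material:

    import requests

    # Verify the server against our own CA and present a client
    # certificate, so neither side trusts the network in between.
    resp = requests.get(
        "https://internal-service.example.com/api/status",
        verify="internal-ca.pem",           # server must prove its identity
        cert=("client.crt", "client.key"),  # and so must we
    )
    print(resp.status_code)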

We’ll leave it there for now. I would encourage you to leave a comment below or tell me how wrong I am on Twitter. I’d love to keep this conversation going. We’re in the middle of a lot of change. I won’t say I’m totally right, but I am trying really hard to understand where things are going, or need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices, best to do that away from Twitter. I suspect this will be a future podcast topic at some point, these are indeed interesting times.

How wrong am I? Let me know: @joshbressers on Twitter.