Wednesday, August 30, 2017

Security ROI isn't impossible, we just suck at measuring it

Lately I've been seeing a lot of grumbling that measuring security return on investment (ROI) is impossible. This is of course nonsense. Understanding your ROI is one of the most important things you can do as a business leader. You have to understand whether what you're doing makes sense. By the very nature of business, some of the things we do have more value than others. Some things even have negative value. If we don't know which things matter most, we're just doing voodoo security.

H. James Harrington once said:
Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.
Anyone paying attention to the current state of security will probably shed a tear over that statement. It points to an uncomfortable truth: we can't control or improve the state of security today. As much as we all like to talk about what's wrong with security and how to fix it, the reality is we don't really know what's broken, which of course means we have no idea how to fix anything.

Measuring security isn't impossible, it's just really hard today. It's really hard because in most instances we don't really understand what security is. Security isn't one thing; it's a lot of little things that don't have much to do with each other, but we clump them together for some reason. We like to build teams of specialized people and call them the security team. We pretend we're responsible for a lot of little unrelated activities, but we often don't have any real accountability. This isn't a great way to do something that actually works; it's a great way to have a lot of smart people fail to live up to their true potential. The best security teams in the world today aren't good at security, they're just really good at marketing themselves so everyone thinks they're good at security.

Security needs to be a part of everything, not a special team that doesn't understand what's happening outside their walls. Think for a minute what an organization would look like if we split groups up by what programming language they knew. Now you have all the Python people in one corner and all the C guys in the other corner. They'll of course have a long list of reasons why they're smarter and better than the other group (we'll ignore the Perl guys down in the basement). Now if there is a project that needs some C and some Python, they would have to go to each group and get help. Bless the soul of anyone who needs C and Python working together in their project. You know this would just be a massive turf war with no winner. It's quite likely the project would never work because the groups wouldn't have any real incentive to cooperate. I imagine you can see the problem here: two groups that need to work together without a proper incentive to actually do it.

Security is a lot like this. Does having a special secure development group outside of the development group make sense? Why does it make sense to have a security operations group that isn't just part of IT? If you're not part of a group, do you have an incentive for that group to succeed? If you can make development's life so difficult they can't possibly succeed, that's development's problem, not yours. You have no incentive to be a reasonable member of the team. The reality is you're not a member of the team at all. Your incentive is to protect your own turf, not to help anyone else.

I'm going to pick on Google's Project Zero for a minute here. Not because they're broken, but because they're really, really good at what they do. Project Zero does research into how to break things, then works with the project they broke to make it better. If this were part of a more traditional security-thinking group, Project Zero would do research, build patches, then demand everyone use whatever it is they built and throw a tantrum if they didn't. That would of course be crazy, unwelcome, and a waste of time. Project Zero has a laser focus on research. More importantly, they work with other groups when it's time to get the final work done. That focus and ability to work with others gives them pretty clear metrics. How many flaws did they find? How many got fixed? How many new attack vectors did they uncover? This is easy to measure. Of course some groups won't work with them, but in that case they can publish their advisories and move on. There's no value in picking long, horrible fights.
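As a minimal sketch of how measurable that kind of focus is, imagine tracking each advisory with a handful of fields and summarizing them. Everything below is hypothetical and invented purely for illustration; it's not how Project Zero actually tracks or reports its work:

# Hypothetical advisory records; the fields and numbers are invented for illustration.
advisories = [
    {"id": 1501, "fixed": True,  "days_to_fix": 42},
    {"id": 1502, "fixed": True,  "days_to_fix": 88},
    {"id": 1503, "fixed": False, "days_to_fix": None},  # vendor never shipped a patch
]

found = len(advisories)
fixed = sum(1 for a in advisories if a["fixed"])
avg_days = sum(a["days_to_fix"] for a in advisories if a["fixed"]) / fixed

print(f"found={found} fixed={fixed} fix rate={fixed / found:.0%} avg days to fix={avg_days:.0f}")

A few lines like this answer the questions above directly: how much was found, how much got fixed, and how long fixes took.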

So here's the question you have to ask yourself: how much of what you do directly affects the group you're a part of? I don't mean things like enforcing compliance; compliance is a cost, like paying for electricity. Think bigger here, about things that generate revenue. If you're doing a project with development, do your decisions affect them or do they affect you? If your decisions affect development, you probably can't measure what you do. You can really only measure things that affect you directly. Even if you think you can measure someone else, you'll never be as good at it as they are. And honestly, who cares what someone else is doing? Measure yourself first.

It's pretty clear we don't actually understand what we like to call "security" because we have no idea how to measure it. If we did understand it, we could measure it. According to H. James Harrington, we can't fix what we can't measure. Given everything we've seen over the past few years, I think that's quite accurate. We will never fix our security problems without first measuring our security ROI.

I'll spend some time in the next few posts discussing how to measure what we do with actual examples. It's not as hard as it sounds.

Monday, August 28, 2017

Helicopter security

After my last post about security spending, I was thinking about how most security teams integrate into the overall business (hint: they don't). As part of this thought experiment I decided to compare traditional security to something that in modern times has come to be called helicopter parenting.

A helicopter parent is someone who won't let their kids do anything on their own. These are the people you hear about who follow their child to college and to sports practice. They yell at teachers and coaches for not respecting how special the child is. The kids are never allowed to take any risks because risk is dangerous and bad. Climbing a tree could be a life-altering experience, but they could also fall and get hurt. Skateboarding is possibly the most dangerous thing anyone could ever do! We'd better make sure nothing bad can ever happen.

It's pretty well understood now that this sort of attitude is terrible for the children. They must learn to do things on their own; it's part of the development process. Taking risks and failing is an extremely useful exercise. It's not something we think about often, but you have to learn to fail, and failure is hard to learn. The children of helicopter parents do manage to learn one lesson they can use in their lives: they learn to hide what they do from their parents. They get extremely good at finding ways to get around all the rules and restrictions. To a degree we all had this problem growing up. At some point we all wanted to do something our parents didn't approve of, which generally meant we did it anyway, we just didn't tell our parents. Now imagine a universe where your parents let you do NOTHING; you're going to hide literally everything. Nobody throughout history has ever accepted being able to do nothing, they just make sure the authoritarian doesn't know what they're up to. Much of the time, getting caught is still better than doing nothing.

This brings us to traditional security. Most security teams don't try to work with their business counterparts. Security teams often think they can just tell everyone else what to do. Have you ever heard the security team ask "what are you trying to do?" Of course not. They always just say "don't do that" or maybe "do it this way", then move on to tell the next group how to do their job. They don't try to understand what you're doing or why you're doing it. It's quite literally not their job to care what you're doing, which is part of the problem. Things like phishing tests are used to belittle, not to teach (they have no value as teaching tools, but we won't discuss that today). Many old school security teams see their job as risk aversion, not risk management. They are helicopter security teams.

Now, as we know from children, if you prevent someone from doing anything, they don't become your obedient servant; they go out of their way to make sure the authority has no idea what's going on. This is basically how shadow IT became a thing. It was far easier to go around the rules than to work with the existing machine. Helicopter security is worse than nothing. At least with nothing you can figure out what's going on by asking questions and getting honest answers. In a helicopter security environment, information is actively hidden, because the truth will only get you in trouble.

Can we fix this?
I don't know the answer to this question. A lot of tech people I see (not just in security) are still fighting the last war. With the cloud transforming everything, there are a lot of people who are stuck in the past. We often hear it's hard to learn new things, but it's more than that. Technology, especially security, never stands still. It used to move slowly enough that you could get by for a few years on old skills, but we're in the middle of disruptive change right now. If you're not constantly questioning your existing skills and way of thinking, you're already behind. Some people are so far behind they will never catch up. It's human nature to double down on the status quo when you're not part of the change. Helicopter security is that doubling down.

It's far easier to fight change and hope your old skills will remain useful than it is to learn a new skill. Everything we see in IT today is basically a new skill. Today the most useful thing you can know is how to learn quickly: what you learned a few months ago could be useless today, and it will probably be useless in the near future. We are actively fighting change like this in security today. We try to lump everything together and pretend we have some sort of control over it. We never really had any control; it's just a lot more obvious now than it was before. Helicopter security doesn't work, no matter how badly you want it to.

The Next Step
The single biggest thing we need to start doing is measuring ourselves. Even if you don't want to learn anything new, you can at least try to understand which of the things we do today actually work, which sort of work, and of course which don't work at all. In the next few posts I'm going to discuss how to measure security as well as how to avoid voodoo security. It's a lot harder to justify helicopter security behavior once we understand which of our actions work and which don't.

Tuesday, August 22, 2017

Spend until you're secure

I was watching a few Twitter conversations about purchasing security last week and had yet another conversation about security ROI. This got me thinking about what we spend money on. In many industries we can spend our way out of problems; not all problems, but a lot of them. With security, if I gave you a blank check and said "fix it", you couldn't. Our problem isn't money; it's more fundamental than that.

Spend it like you got it
First let's think about how some problems can be solved with money. If you need more electrical capacity, or more help during a busy time, or more computing power, it's really easy to add it. Need more compute power? You can either buy more computers or just spend $2.15 in the cloud. If you need to dig a big hole for a publicity stunt on Black Friday, you just pay someone to dig a big hole. It's not that hard.

This doesn't always work though. If you're building a new website, you probably can't buy your way to success. If a project like this falls behind, it can be very difficult to catch back up. You can, however, track progress, which I would say is at least a reasonable alternative. You can move development to another group or hire a new consultant if the old one isn't living up to expectations.

More Security
What if we need "more" security? How can we buy our way into more security for our organization? I'd start by asking: can we show any actual value for our current security investment? If you stopped spending money on security tomorrow, do you know what the results would be? If you stopped buying toilet paper for your company tomorrow, you can probably predict what would happen (if you have a good facilities department, I bet they already know the answer).

This is a huge problem in many organizations. If you don't know what would happen if you lowered or increased your security spending, you're basically doing voodoo security. You can imagine many projects and processes as having a series of inputs that can be adjusted: money, time, people, computers, the list could go on. You can control these variables and directly affect the project's outcome. More people could mean spending less money on contractors; more computers could mean less time spent on rendering or compiling. Ideally you have a way to find the optimal level for each of these variables, resulting not only in a high return on investment but also in happier workers, because they can see the results of their efforts.
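To make that concrete, here's a rough sketch of what treating a project as a set of adjustable inputs could look like. The input names, the rates, and the toy model itself are all assumptions made up for illustration; real numbers would come from your own organization:

def project_outcome(people, contractors, machines):
    """Toy model: monthly cost and output for a given mix of inputs (all rates are made up)."""
    cost = people * 9_000 + contractors * 15_000 + machines * 400
    output = people * 1.0 + contractors * 0.8 + machines * 0.1  # arbitrary productivity units
    return cost, output

# Compare two ways of staffing the same hypothetical project.
scenarios = {
    "baseline": project_outcome(people=5, contractors=3, machines=20),
    "more staff, fewer contractors": project_outcome(people=7, contractors=1, machines=20),
}

for label, (cost, output) in scenarios.items():
    print(f"{label}: ${cost:,}/month for {output:.1f} units of output")

The model is deliberately silly, but the habit it shows is the point: once inputs and outputs are written down, you can ask what happens when you turn each knob.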

We can't do this with security today because security is too broad. We often don't know what would happen if we added more staff or more technology.

Fundamental fundamentals
So this brings us to why we can't spend our way to security. I would argue there are two real problems here. The first is that "security" isn't a single thing. We pretend security is an industry that means something, but it's really a lot of smaller things we've clumped together in a way that ensures we can only fail. I see security teams claim to own anything that has the word security attached to it. They claim ownership of projects and ideas, but then they don't actually take any action because they're too busy or lack the skills to do the work. Knowing how to do secure development doesn't automatically make you an expert at network security. Being great at network security doesn't mean you know anything about physical security. Security is a lot of little things, and we have to start understanding what those are and how to push responsibility to the respective groups. Having a special application security team that's not part of development doesn't work. You need all development teams doing things securely.

The second problem is we don't measure what we do. How many security teams tell IT to follow a giant list of security rules, but have no idea what would happen if one or more of those rules were rolled back? Remember when everyone insisted we needed complex passwords? Now that's considered bad advice: we shouldn't force people to change their passwords often, and insisting on a variety of special characters is discouraged too. How many millions have been wasted on stupid password rules? The fact that we changed the rules without any fanfare means there was no actual science behind them in the first place. If we had even tried to measure this, I suspect we would have known YEARS ago that it was a terrible idea. Instead we just kept doing voodoo security. How many more of our rules do you think will end up being rolled back in the near future because they don't actually make sense?
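As a back-of-the-envelope example of what measuring just one of those rules might look like, here's a tiny calculation of what forced password rotation could cost in lockouts and help desk time alone. Every number below is an assumption pulled out of thin air; the point is the exercise of plugging in your own:

# All inputs are hypothetical; substitute your own organization's numbers.
employees = 10_000
rotations_per_year = 4        # forced password changes per employee per year
lockout_rate = 0.15           # fraction of rotations that end in a help desk ticket
minutes_per_ticket = 20       # combined user and help desk time per lockout
loaded_cost_per_hour = 60     # dollars per hour of lost time

tickets = employees * rotations_per_year * lockout_rate
annual_cost = tickets * (minutes_per_ticket / 60) * loaded_cost_per_hour
print(f"{tickets:,.0f} lockout tickets per year, roughly ${annual_cost:,.0f} in lost time")

Even a crude estimate like this gives you something to compare against whatever benefit the rule is supposed to provide, which is more than most password policies ever got.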

If you're in charge of a security program, the first bit of advice I'd give is to look at everything you own and get rid of whatever you can. Your job isn't to do everything; figure out what you have to do, then do it well. One project done well is far better than twelve half finished. The next thing you need to do is figure out how much each thing you do costs, and how much benefit it creates. If you can't figure out the benefit, you can probably stop doing it today. If it costs more than it saves, you can stop that too. We must have a laser focus if we're to understand what our real problems are. Once we understand the problems, we can start to solve them.
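A sketch of that triage might be as simple as this: list what you own, attach your best estimate of annual cost and benefit to each item, and drop anything you can't justify. The activities and dollar figures below are invented purely for the example:

# Hypothetical activity list; the costs and benefits are invented estimates in dollars per year.
activities = [
    {"name": "patch automation",       "cost": 50_000,  "benefit": 400_000},
    {"name": "annual phishing tests",  "cost": 80_000,  "benefit": None},     # benefit unknown
    {"name": "legacy IDS maintenance", "cost": 120_000, "benefit": 60_000},
]

for a in activities:
    if a["benefit"] is None:
        verdict = "stop: can't show any benefit"
    elif a["benefit"] < a["cost"]:
        verdict = "stop: costs more than it saves"
    else:
        verdict = f"keep: ROI = {(a['benefit'] - a['cost']) / a['cost']:.0%}"
    print(f"{a['name']}: {verdict}")

The hard part isn't the arithmetic, it's being honest about the benefit column, which is exactly the measurement problem this post is about.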

Sunday, August 13, 2017

But that's not my job!

This week I've been thinking about how security people and non-security people interact. Conversations I have often end up with someone suggesting everyone needs some sort of security responsibility. My suspicion is this will never work.

First, some background to think about. In any organization there are certain responsibilities everyone has. Without using security as our specific example just yet, let's consider how a typical building functions. You have people tasked with keeping the electricity, the plumbing, and the heating and cooling working. Some people keep the building clean, some take care of the elevators. Others work in the building to accomplish some other task. If the company that inhabits the building is a bank, you can imagine the huge number of tasks that take place inside.

Now here's where I want our analogy to start. If I work in a building and I see a leaking faucet, I would probably report it. If I didn't, it's likely someone else would see it. But suppose I'm one of the electricians, and while accessing some hard-to-reach place I notice a leaking pipe. It's not my job to fix it. I could tell the plumbers, but they're not very nice to me, so who cares? The last time I told them about a leaking pipe they blamed me for breaking it, so I don't really have an incentive here. If I do nothing, it really won't affect me. If I tell someone, at best it doesn't affect me, but in reality I'll probably get some level of blame or scrutiny.

This almost certainly makes sense to most of us. I wonder if there are organizations where reporting things like this comes with an incentive. A leaking water pipe could end up causing millions in damage before it's found. Nowhere I've ever worked really had an incentive to report things like this. If it's not your job, you don't really have to care, so nobody ever really did.

Now let's think about phishing in a modern enterprise. You see everything from blaming the user who clicked the link, to laughing at them for being stupid, to maybe even firing someone for losing the company a ton of money. If a user clicks a phishing link and suspects a problem, they have very little incentive to be proactive. It's not their job. I bet the number of clicked phishing links we find out about is much, much lower than the total number clicked.

I also hear security folks talking about educating the users on how all this works. Users should know how to spot phishing links! This won't work for a variety of reasons, but at the end of the day, it's not their job, so why do we think they should know how to do this? Even more important, why do we think they should care?

The thing I keep wondering is: should this be the job of everyone, or just the job of the security people? I think the quick reaction is "everyone", but my suspicion is it's not. Electricity is a great example. How many stories have you heard of office workers being electrocuted at work? The number is really low because we've made electricity extremely safe. Put in the context of modern security, we have a system where the office is covered in bare wires. Imagine wires hanging from the ceiling, some draped on the floor. The bathroom has sparking wires next to the sink. We lost three interns last week, those stupid interns! They should have known which wires weren't safe to touch. It's up to everyone in the office to know which wires are safe and which are dangerous!

This is of course madness, but it's modern-day security. Instead of fixing the wires, we just imagine we can train everyone to spot the dangerous ones.