Sunday, September 24, 2017

Measuring security: Part 2 - The cost of doing business

If you've not read my last post on measuring security you probably should. It talks about how to measure the security of things that make money. That post is mostly focused on things like products that directly generate revenue. This time we're going to talk about a category I'm calling the cost of doing business.

The term "cost of doing business" is something I made up so I could group these ideas in some sensible way. At least sensible to me. You probably can't use this with other humans in a discussion, they won't know what you're talking about. If I had a line graph of spending I would put revenue generating on one side, the purse cost centers on the other side. The cost of doing business is somewhere in the middle. These are activities that directly support whatever it is the organization does to make new money. Projects and solutions that don't directly make money themselves but do directly support things being built that make money.

The cost of doing business includes things like compliance, sending staff to meetings, maybe regulatory requirements. Things that don't directly generate revenue, but you can't move forward without doing them. There aren't a lot of options in many cases. If you don't have PCI compliance, you can't process payments, you can't make any money, and the company won't last long. If you don't attend certain meetings, nobody can get any work done. Regulated industries must follow their requirements or the company can often just be shut down. Sometimes there are things we have to do, even if we don't want to do them.

In the next post we'll talk about what I call "infrastructure": the things that are seen as cost centers and often a commodity service (like electricity or internet access). I just want to clarify the difference here. Infrastructure is something where you have a choice, or can decide not to do it, with a possible negative (or positive) consequence. Infrastructure is what keeps the lights on at a bare minimum. The cost of doing business must be paid to get yourself to the next step in a project. There is no choice, which changes what we measure and how we measure it.

The Example

Let's pick on PCI compliance, as it's a pretty easy example to understand. If you don't do this it's quite likely your company won't survive, assuming you need to process card payments. If you're building a new web site that will process payments, you have to get through PCI compliance. There is no choice, and the project cannot move forward until this is complete. The goal now isn't so much measuring the return on an investment as it is being a good steward of the resources given to us. PCI requirements and audits are not cheap. If you are seen as making poor decisions and squandering your resources, it's quite likely the business will get grumpy with you.

Compliance and security aren't the same thing. There is some overlap, but it must be understood that you can be compliant and still get hacked. That overlap is a great place to focus when measuring what we do. Did your compliance program make you more secure? Can you show how another group used a compliance requirement to make something better? What if something compliance required saved some money on how the network was architected? There are a lot of side benefits to pay attention to. Make sure you note the things that are improvements, even if they aren't necessarily security improvements.

I've seen examples where compliance was used to justify two-factor authentication (2FA) in an organization. There are few things more powerful than 2FA that you can deploy. Showing compliance helped move an initiative like this forward, and also showing how the number of malicious logins drops substantially, is a powerful message. Just turning on 2FA isn't enough. Make sure you show why it's better and how the attacks are slowed or stopped. Make sure you can show there were few issues for users (the people who struggle will complain loudly). If there is massive disruption for your users, figure out why you didn't know this would happen; that means someone screwed something up. It's important to measure the good and the bad. We rarely measure failure, which is a problem. Nobody has a 100% success rate, so learn from your failures.
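To make the 2FA point concrete, here's a tiny sketch of the before/after measurement I'm describing. Every number in it is made up for illustration; the real counts would come from your own authentication logs and helpdesk system.

```python
# Hypothetical before/after metrics for a 2FA rollout. The numbers are
# invented; pull the real ones from your auth logs and ticket system.
def percent_drop(before: int, after: int) -> float:
    """Percentage drop from before to after (negative means it went up)."""
    return (before - after) / before * 100

malicious_logins_before = 4200   # monthly fraudulent logins pre-2FA
malicious_logins_after = 30      # same metric, first month after 2FA

login_tickets_before = 110       # the "bad" side: user login tickets
login_tickets_after = 145        # expect a rollout bump; measure it anyway

print(f"malicious logins: down {percent_drop(malicious_logins_before, malicious_logins_after):.1f}%")
print(f"login tickets:    up {-percent_drop(login_tickets_before, login_tickets_after):.1f}%")
```

Showing both numbers side by side is the point: the win and the cost, measured.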

What about attending a meeting or industry conference? Do you just go, file the expense report, and do nothing? That sounds like a waste of time and money. Make sure you have concrete actions. Write down what happened, why it was important you were there, how you made the situation better, and what you're going to do next. How did the meeting move your project forward? Did you learn something new, or make some plans that will help in the future? Make sure the person paying your bills sees this. Make them happy to be providing you the means to keep the business moving forward.

The Cost

The very first step in measuring what we're doing is to do your homework and understand cost. Not just upfront cost, but the cost of machines, disk, people, services, anything you need to keep the business moving forward. If there are certain requirements needed for a solution, make sure you understand and document them. If a certain piece of software or service has to be used, show why. Show what part of the business can function because of the spending you're asking for. Remember, these are going to be specific requirements you can't escape. These are not commodity services and solutions. And of course the goal is to move forward.
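As a sketch of what that homework might look like, here's a trivial yearly tally. Every line item and dollar figure is hypothetical; the point is that nothing gets left out of the total.

```python
# Hypothetical yearly cost tally for a compliance solution. All line items
# and amounts are invented; the exercise is counting everything, not just
# the upfront price.
annual_costs = {
    "audit fees": 40_000,
    "dedicated servers": 12_000,
    "storage and backups": 6_000,
    "staff time (0.5 FTE)": 55_000,
    "vendor service contract": 18_000,
}

total = sum(annual_costs.values())
for item, cost in sorted(annual_costs.items(), key=lambda kv: -kv[1]):
    print(f"{item:25s} ${cost:>8,}  ({cost / total:5.1%})")
print(f"{'total':25s} ${total:>8,}")
```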

If you inherit an existing solution, take a good look at everything and make sure you know exactly what the resource cost of the solution is. The goal here isn't always to show a return on investment, but to show that the current solution makes sense. Just because something costs less money doesn't mean it's cheaper. If your cut-rate services will put the project in jeopardy, you're going to be in trouble someday. Be able to show this is a real threat. It's possible a decision will be made to take on this threat, but that's not always your choice. Always be able to answer the questions "if we do this what happens" and "if we don't do this what happens".

Conclusion
This topic is tricky. I keep thinking about it, and even as I wrote this post it changed quite a lot from what I started to write. If you have something that makes money, it's easy to justify investment. If you have something that's a pure cost center, it's easy to minimize cost. This middle ground is tricky. How do you show value for something you have to do but that isn't directly generating revenue? If you work for a forward-looking business you probably won't have to spend a ton of time getting these projects funded. Growing companies understand the cost of doing business.

I have seen some companies that aren't growing as quickly fail to see value in the cost of doing business. There's nothing wrong with this sometimes, but as a security leader your job is to make your leadership understand what isn't happening because of this lack of investment. Sometimes if you keep a project limping along, barely alive, you end up causing a great deal of damage to the project and your staff. If leadership won't fund something, it means they don't view it as important and neither should you. If you think it is important, you need to sell it to your leadership. Sometimes you can't and won't win though, and then you have to be willing to let it go.

Monday, September 11, 2017

Measuring security: Part 1 - Things that make money

If you read my previous post on measuring security, you know I broke measuring into three categories. I have no good reason to do this other than it's something that made sense to me. There are without question better ways to split these apart, I'm sure there is even overlap, but that's not important. What actually matters is to start a discussion on measuring what we do. The first topic is about measuring security that directly adds to revenue such as a product or service.

Revenue
The concept of making money is simple enough. You take a resource such as raw materials, money, even people in some instances. Usually it's all three. You take these resources and transform them into something new and better. The new creation is then turned into money, or revenue, for your business. If you have a business that doesn't make more money than it spends, you have a problem. If you have a business that doesn't make any money, you have a disaster.

This is easy enough to understand, but let's use a grossly simplified example to make sure we're all on the same page. Let's say you're making widgets. I suppose since this is a security topic we should call them BlockWidgetChain. In our fictional universe you spend $10 on materials and people. Make sure you can track how much something costs; you should be able to determine how much of that $10 is materials and how much is people. You then sell the BlockWidgetChain for $20. That means you spent $10 to make $20. This should make sense to anyone who understands math (or maths for you English speakers).
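If it helps, here's the example as a few lines of Python. The materials/people split is invented; the arithmetic is the whole point.

```python
# The BlockWidgetChain example in code. The 60/40 materials/people split
# is made up; knowing your own split is the exercise.
materials = 6.00
people = 4.00
unit_cost = materials + people       # the $10 from the example
price = 20.00

margin = price - unit_cost
print(f"cost ${unit_cost:.2f} (materials ${materials:.2f}, people ${people:.2f})")
print(f"sell ${price:.2f} -> margin ${margin:.2f} ({margin / price:.0%} of price)")
```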

Now let's say you have a competitor who makes BlockChainWidgets. They're the same thing basically, but they have no idea how much it costs them to make BlockChainWidgets. They know if they charge more than $20 they can't compete because BlockWidgetChains cost $20. Their solution is to charge $20 and hope the books work out.

I've not only described the business plan for most startups but also a company that's almost certainly in trouble. You have to know how much you spend on resources. If you spend more than you're charging for the product, that's a horrible business model. Most of security works like this, unfortunately. We have no idea how much a lot of what we do costs, and we certainly don't know how much value it adds to the bottom line. In many instances we cannot track spending in a meaningful way.

Measuring security
So now we're on to the idea of measuring security in an environment where security is responsible for making money. Something like security features in a product. Maybe even a security product in some instances. This is the work that pays my bills; I've been working on product security for a very long time. If you're part of your product team (which you should be, product security doesn't belong anywhere else, more on that another day) then you understand the importance of having features that make a product profitable and useful. For example, I would say SSO is a must-have in today's environment. If you don't have this feature you can't be as effective in the market. But adding and maintaining features isn't free. If you spend $30 and sell it for $20, you'd make more money just by staying in bed. Sometimes the most profitable decision is to not do something.

Go big or go home
The biggest mistake we like to make is doing too much. It's easy to scope a feature too big. At worst you end up failing completely, at best you end up with what you should have scoped in the first place. But you spend a lot more on failure before you end up where you should have been from the start.

Let's use SSO as our example here. If you were going to scope the best SSO solution in the world, your product would be using SAML, OAuth, PKI, Kerberos, Active Directory, LDAP, and whatever else you manage to think of on planning day. This example is pretty clearly over the top, but I bet a lot of new SSO systems scope SAML and OAuth at the same time. The reality is you only need one to start. You can add more later. First, having a small scope is important. It shows you want to do one thing and do it well instead of doing three things badly. There are few features that are useful in a half-finished state. Your sales team has no desire to show off a half-finished product.

How to decide
But how do we decide which feature to add? The first thing I do is look at customer feedback. Do the customers clearly prefer one over the other? Set up calls with them, go on visits. Learn what they do and how they do it. If this doesn't give you a clear answer, the next question is always "which feature would sell more product?" In the case of something like SAML vs OAuth there might not be a good answer. If you're some sort of cloud service, OAuth means you can let customers authenticate against Google and Facebook. That would probably result in more users.

If you're focused on a lot of on-prem solutions, SAML might see more use. It's even possible SSO isn't what customers are after once you start to dig. I find it's best to make a mental plan of how things should look, then make sure that's not what gets built, because whatever I think of first is always wrong ;)

But how much does it cost?
Lastly, if there's no good way to show revenue for a feature, you can look at investment cost. The amount of time and money something will take to implement can really help when deciding what to do. If a feature will take years to develop, that's probably not a feature you want or need. Most industries will be very different in a few years. The expectations of today won't be the expectations of tomorrow.

For example, if SAML will take three times as long as OAuth to implement, and both features will result in the same number of sales, OAuth will have a substantially larger return on investment because it's much cheaper to implement. A feature doesn't count for anything until it's on the market. Half done or in development are the same as "doesn't exist". Make sure you track time as part of your costs. Money is easy to measure, but people and time are often just as important.
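Here's that comparison as a sketch. The engineer-month cost and the revenue figure are invented; the three-to-one effort ratio is the only part taken from the example above.

```python
# ROI sketch for the SAML vs OAuth decision. Only the 3x effort ratio comes
# from the example; the dollar figures are placeholders for your own data.
ENGINEER_MONTH = 15_000              # assumed fully-loaded monthly cost

features = {
    "OAuth": {"months": 4, "expected_revenue": 500_000},
    "SAML": {"months": 12, "expected_revenue": 500_000},
}

for name, f in features.items():
    cost = f["months"] * ENGINEER_MONTH
    roi = (f["expected_revenue"] - cost) / cost
    print(f"{name}: cost ${cost:,}, ROI {roi:.1f}x")
```

Same expected revenue at a third of the cost means OAuth wins on this math, and the months not spent in development are months the feature is actually on the market.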

I really do think this is the easiest security category to measure and justify. That could be because I do it every day, but I think if you can tie actual sales back to security features you'll find yourself in a good place. Your senior leadership will think you're magic if you can show them if they invest resources in X they will get Y. Make sure you track the metrics though. It's not enough to meet expectations, make an effort to exceed your expectations. There's nothing leadership likes better than someone who can over-deliver on a regular basis.

I see a lot of groups that don't do any of this. They wander in circles sometimes adding security features that don't matter, often engineering solutions that customers only need or want 10% of. I'll never forget when I first looked at actual metrics on new features and realized something we wanted to add was going to have a massive cost and generate zero additional revenue (it may have actually detracted in future product sales). On this day I saw the power in metrics. Overnight my group became heroes for saving everyone a lot of work and headaches. Sometimes doing nothing is the most valuable action you can take.

Monday, September 4, 2017

The father of modern security: B. F. Skinner

A lot of what we call security is voodoo. Most of it actually.

What I mean by that statement is that our security process is often based on ideas that don't really work. As an industry we have built up a lot of ideas and processes that aren't actually grounded in facts and science. We don't understand why we do certain things, but we know that if we don't do those things something bad will happen! Will it really happen? I heard something will happen. I suspect the answer is no, but it's very difficult to explain this concept sometimes.

I'm going to start with some research B. F. Skinner did as my example here. The very short version is that Skinner did research on pigeons. He had a box that delivered food at random intervals. The birds developed rituals that they would do in order to have their food delivered. If a pigeon decided that spinning around would cause food to be delivered, it would continue to spin around; eventually the food would appear, reinforcing the nonsensical behavior. The pigeon believed its ritual was affecting how often the food was delivered. The reality is nothing the pigeon did affected how often food was delivered. The pigeon of course didn't know this; it only knew what it experienced.

My favorite example to use next to this pigeon experiment is the password policies of old. A long time ago someone made up some rules about what a good password should look like. A good password has letters, and numbers, and special characters, and the name of a tree in it. How often we should change a password was also part of this. Everyone knows you should change passwords as often as possible. Two or three times a day is best. The more you change it the more secure it is!

Today we've decided that all this advice was terrible. The old advice was based on voodoo. It was our ritual that kept us safe. The advice seemed like a fair idea to some people, but there were no facts backing it up. Lots of random characters seemed like a good idea, but we didn't know why. Changing your password often seemed like a good idea, but we didn't know why. This wasn't much different than the pigeon spinning around to get more food. We couldn't prove it didn't work, so we kept doing it because we had to do something.

Do you know why we changed all of our password advice? We changed it because someone did the research around passwords. We found out that very long passwords using real words are substantially better than a nonsense short password. We found out that people aren't good at changing their passwords every 90 days. They end up using horrible passwords and adding a 1 to the end. We measured the effectiveness of these processes and understood they were actually doing the opposite of what we wanted them to do. Without question there are other security ideas we do today that fall into this category.
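The math behind this finding is easy to sketch. The character-set and word-list sizes below are rough assumptions, not the actual research data, but they show the shape of it:

```python
# Back-of-envelope entropy comparison. Charset and wordlist sizes are
# rough assumptions, not figures from the password research.
import math

charset = 80      # letters, digits, and a handful of specials
wordlist = 7776   # a diceware-style word list

complex_8 = 8 * math.log2(charset)   # 8 truly random characters: ~51 bits
words_5 = 5 * math.log2(wordlist)    # 5 truly random words: ~65 bits

print(f"8 random characters: ~{complex_8:.0f} bits")
print(f"5 random words:      ~{words_5:.0f} bits")
# The real-world gap is bigger: nobody picks characters at random, they
# pick "Password1!" -- but people can actually memorize five random words.
```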

Even though we have research showing this password advice was terrible we still see a lot of organizations and people who believe the old rituals are the right way to keep passwords safe. Sometimes even when you prove something to someone they can't believe it. They are so invested in their rituals that they are unable to imagine any other way of existing. A lot of security happens this way. How many of our rules and processes are based on bad ideas?

How to measure
Here's where it gets real. It's easy to pick on the password example because it's in the past. We need to focus on the present and the future. You have an organization that's full of policy, ideas, and stuff. How can we try to make a dent in what we have today? What matters? What doesn't work, and what's actually harmful?

I'm going to split everything into 3 possible categories. We'll dive deeper into each in future posts, but we'll talk about them briefly right now.

Things that make money
Number one is things that make money. This is something like a product you sell, or a website that customers use to interact with your company. Every company does something that generates revenue. Measuring things that fit into this category is really easy. You just ask "Will this make more, less, or the same amount of money?" If the answer is less, you're wasting your time. I wrote about this a bit a long time ago. The post isn't great, but the graphic I made is useful; print it out and plot your features on it. You can probably start asking this question today without much excitement.

Cost of doing business
The next category is what I call cost of doing business. This would be things like compliance or being a part of a professional organization. Sending staff to conferences and meetings. Things that don't directly generate revenue but can have a real impact on the revenue. If you don't have PCI compliance, you can't process payments, you have no revenue, and the company won't last long. Measuring some of these is really hard. Does sending someone to Black Hat directly generate revenue? No. But it will create valuable connections and they will likely learn new things that will be a benefit down the road. I guess you could think of these as investments in future revenue.

My thoughts on how to measure this one are less mature. I think about these often. I'll elaborate more in a future post.

Infrastructure
The last category I'm going to call "infrastructure". This one is a bit harder to pin down. It's not unlike the previous question though. In this case we ask ourselves "If I stopped doing this what bad thing would happen?" Now I don't mean movie-plot bad things. Yeah, if you stopped using your super expensive keycard entry system a spy from a competitor could break in and steal all your secrets using a super encrypted tor enabled flash drive, but they probably won't. This is the category where you have to consider the cost of an action vs the cost of not doing an action. Not doing things will often have a cost, but doing things also has a cost.

Return on investment is the name of the game here. Nobody likes to spend money they don't have to. This is why cloud is disrupting everything. Why pay for servers you don't need when you can rent only what you do need?

I have some great stories for this category, be sure to come back when I publish this followup article.

The homework for everyone now is to just start thinking about what you do and why you do it. If you don't have a good reason, you need to change your thinking. Changing your thinking is really hard to do as a human though. Many of us like to double down on our old beliefs when presented with facts. Don't be that person, keep an open mind.

Wednesday, August 30, 2017

Security ROI isn't impossible, we suck at measuring

As of late I've been seeing a lot of grumbling that security return on investment (ROI) is impossible. This is of course nonsense. Understanding your ROI is one of the most important things you can do as a business leader. You have to understand if what you're doing makes sense. By the very nature of business, some of the things we do have more value than other things. Some things even have negative value. If we don't know which things are the most important, we're just doing voodoo security.

H. James Harrington once said
Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.
Anyone paying attention to the current state of security will probably shed a tear over that statement. The foundation of the statement leads to this truth: we can't control or improve the state of security today. As much as we all like to talk about what's wrong with security and how to fix it, the reality is we don't really know what's broken, which of course means we have no idea how to fix anything.

Measuring security isn't impossible, it's just really hard today. It's really hard because we don't really understand what security is in most instances. Security isn't one thing; it's a lot of little things that don't really have anything to do with each other, but we clump them together for some reason. We like to build teams of specialized people and call them the security team. We pretend we're responsible for a lot of little unrelated activities but we often don't have any real accountability. The reality is this isn't a great way to do something that actually works; it's a great way to have a lot of smart people fail to live up to their true potential. The best security teams in the world today aren't good at security, they're just really good at marketing themselves so everyone thinks they're good at security.

Security needs to be a part of everything, not a special team that doesn't understand what's happening outside their walls. Think for a minute what an organization would look like if we split groups up by what programming language they knew. Now you have all the python people in one corner and all the C guys in the other corner. They'll of course have a long list of reasons why they're smarter and better than the other group (we'll ignore the perl guys down in the basement). Now if there is a project that needs some C and some python they would have to go to each group and get help. Bless the soul of anyone who needs C and python working together in their project. You know this would just be a massive insane turf war with no winner. It's quite likely the project would never work because the groups wouldn't have a huge incentive to work together. I imagine you can see the problem here. You have two groups that need to work together without proper incentive to actually work together.

Security is a lot like this. Does having a special secure development group outside of the development group make sense? Why does it make sense to have a security operations group that isn't just part of IT? If you're not part of a group do you have an incentive for the group to succeed? If I can make development's life so difficult they can't possibly succeed that's development's problem, not yours. You have no incentive to be a reasonable member of the team. The reality is you're not a member of the team at all. Your incentive is to protect your own turf, not help anyone else.

I'm going to pick on Google's Project Zero for a minute here. Not because they're broken, but because they're really really good at what they do. Project Zero does research into how to break things, then they work with the project they broke to make it better. If this was part of a more traditional security-thinking group, Project Zero would do research, build patches, then demand everyone use whatever it is they built and throw a tantrum if they don't. This would of course be crazy, unwelcome, and a waste of time. Project Zero has a razor focus on research. More importantly though, they work with other groups when it's time to get the final work done. Their razor focus and ability to work with others gives them a pretty clear set of metrics. How many flaws did they find? How many got fixed? How many new attack vectors did they create? This is easy to measure. Of course some groups won't work with them, but in that case they can publish their advisories and move on. There's no value in picking long horrible fights.

So here's the question you have to ask yourself: how much of what you do directly affects the group you're a part of? I don't mean things like enforcing compliance; compliance is a cost, like paying for electricity. Think bigger here, about things that generate revenue. If you're doing a project with development, do your decisions affect them or do they affect you? If your decisions affect development, you probably can't measure what you do. You can really only measure things that affect you directly. Even if you think you can measure someone else, you'll never be as good as they are. And honestly, who cares what someone else is doing, measure yourself first.

It's pretty clear we don't actually understand what we like to call "security" because we have no idea how to measure it. If we did understand it, we could measure it. According to H. James Harrington, we can't fix what we can't measure. I think given everything we've seen over the past few years, this is quite accurate. We will never fix our security problems without first measuring our security ROI.

I'll spend some time in the next few posts discussing how to measure what we do with actual examples. It's not as hard as it sounds.

Monday, August 28, 2017

Helicopter security

After my last post about security spending, I was thinking about how most security teams integrate into the overall business (hint: they don't). As part of this thought experiment I decided to compare traditional security to something that in modern times has come to be called helicopter parenting.

A helicopter parent is someone who won't let their kids do anything on their own. These are the people you hear about who follow their child to college, to sports practice. They yell at teachers and coaches for not respecting how special the child is. The kids are never allowed to take any risks because risk is dangerous and bad. If they climb a tree, while it could be a life-altering experience, they could also fall and get hurt. Skateboarding is possibly the most dangerous thing anyone could ever do! We better make sure nothing bad can ever happen.

It's pretty well understood now that this sort of attitude is terrible for the children. They must learn to do things on their own; it's part of the development process. Taking risks and failing is an extremely useful exercise. It's not something we think about often, but you have to learn to fail. Failure is hard to learn. The children of helicopter parents do manage to learn one lesson they can use in their life: they learn to hide what they do from their parents. They get extremely good at finding ways to get around all their rules and restrictions. To a degree we all had this problem growing up. At some point we all wanted to do something our parents didn't approve of, which generally meant we did it anyway, we just didn't tell our parents. Now imagine a universe where your parents let you do NOTHING; you're going to be hiding literally everything. Nobody throughout history has ever accepted the fact that they can do nothing, they just make sure the authoritarian doesn't know about it. Getting caught is still better than doing nothing much of the time.

This brings us to traditional security. Most security teams don't try to work with their business counterparts. Security teams often think they can just tell everyone else what to do. Have you ever heard the security team ask "what are you trying to do?" Of course not. They always just say "don't do that" or maybe "do it this way", then move on to tell the next group how to do their job. They don't try to understand what you're doing and why you are doing it. It's quite literally not their job to care what you're doing, which is part of the problem. Things like phishing tests are used to belittle, not teach (they have no value as teaching tools, but we won't discuss that today). Many of the old school security teams see their job as risk aversion, not risk management. They are helicopter security teams.

Now as we know from children, if you prevent someone from doing anything they don't become your obedient servant, they go out of their way to make sure the authority has no idea what's going on. This is basically how shadow IT became a thing. It was far easier to go around the rules than work with the existing machine. Helicopter security is worse than nothing. At least with nothing you can figure out what's going on by asking questions and getting honest answers. In a helicopter security environment information is actively hidden because truth will only get you in trouble.

Can we fix this?
I don't know the answer to this question. A lot of tech people I see (not just security) are soldiers from the last war. With the way we see cloud transforming the universe there are a lot of people who are still stuck in the past. We often hear it's hard to learn new things but it's more than that. Technology, especially security, never stands still. It used to move slow enough you could get by for a few years on old skills, but we're in the middle of disruptive change right now. If you're not constantly questioning your existing skills and way of thinking you're already behind. Some people are so far behind they will never catch up. It's human nature to double down on the status quo when you're not part of the change. Helicopter security is that doubling down.

It's far easier to fight change and hope your old skills will remain useful than it is to learn a new skill. Everything we see in IT today is basically a new skill. Today the most useful thing you can know is how to learn quickly, what you learned a few months ago could be useless today, it will probably be useless in the near future. We are actively fighting change like this in security today. We try to lump everything together and pretend we have some sort of control over it. We never really had any control, it's just a lot more obvious now than it was before. Helicopter security doesn't work, no matter how bad you want it to.

The Next Step
The single biggest thing we need to start doing is measure ourselves. Even if you don't want to learn anything new you can at least try to understand what we're doing today that actually works, which things sort of work, and of course the things that don't work at all. In the next few posts I'm going to discuss how to measure security as well as how to avoid voodoo security. It's a lot harder to justify helicopter security behavior once we understand which of our actions work and which don't.

Tuesday, August 22, 2017

Spend until you're secure

I was watching a few Twitter conversations about purchasing security last week and had yet another conversation about security ROI. This has me thinking about what we spend money on. In many industries we can spend our way out of problems, not all problems, but a lot of problems. With security if I gave you a blank check and said "fix it", you couldn't. Our problem isn't money, it's more fundamental than that.

Spend it like you got it
First let's think about how some problems can be solved with money. If you need more electricity capacity, or more help during a busy time, or more computing power, it's really easy to add capacity. If you need more compute power, you can either buy more computers or just spend $2.15 in the cloud. If you need to dig a big hole, for a publicity stunt on Black Friday, you just pay someone to dig a big hole. It's not that hard.

This doesn't always work though, if you're building a new website, you probably can't buy your way to success. If a project like this falls behind it can be very difficult to catch back up. You can however track progress which I would say is at least a reasonable alternative. You can move development to another group or hire a new consultant if the old one isn't living up to expectations.

More Security
What if we need "more" security. How can we buy our way into more security for our organization? I'd start by asking the question can we show any actual value for our current security investment? If you stopped spending money on security tomorrow do you know what the results would be? If you stopped buying toilet paper for your company tomorrow you can probably understand what will happen (if you have a good facilities department I bet they already know the answer to this).

This is a huge problem in many organizations. If you don't know what would happen if you lowered or increased your security spending you're basically doing voodoo security. You can imagine many projects and processes as having a series of inputs that can be adjusted. Things like money, time, people, computers, the list could go on. You can control these variables and have direct outcomes on the project. More people could mean you can spend less money on contractors, more computers could mean less time spent on rendering or compiling. Ideally you have a way to find the optimal levels for each of these variables resulting in not only a high return on investment, but also happier workers as they can see the results of their efforts.

We can't do this with security today because security is too broad. We often don't know what would happen if we add more staff, or more technology.

Fundamental fundamentals
So this brings us to why we can't spend our way to security. I would argue there are two real problems here. The first is that "security" isn't a thing. We pretend security is an industry that means something, but it's really a lot of smaller things we've clumped together in such a way that ensures we can only fail. I see security teams claim to own anything that has the word security attached to it. They claim ownership of projects and ideas, but then they don't actually take any actions because they're too busy or lack the skills to do the work. Just because you know how to do secure development doesn't automatically make you an expert at network security. If you're great at network security it doesn't mean you know anything about physical security. Security is a lot of little things; we have to start to understand what those are and how to push responsibility to the respective groups. Having a special application security team that's not part of development doesn't work. You need all development teams doing things securely.

The second problem is we don't measure what we do. How many security teams tell IT they have to follow a giant list of security rules, but they have no idea what would happen if one or more of those rules were rolled back? Remember when everyone insisted we needed to use complex passwords? Now that's considered bad advice and we shouldn't make people change their passwords often. It's also a bad idea to insist they use a variety of special characters now. How many millions have been wasted on stupid password rules? The fact that we changed the rules without any fanfare means there was no actual science behind the rules in the first place. If we even tried to measure this I suspect we would have known YEARS ago that it was a terrible idea. Instead we just kept doing voodoo security. How many more of our rules do you think will end up being rolled back in the near future because they don't actually make sense?

If you're in charge of a security program, the first bit of advice I'd give is to look at everything you own and get rid of whatever you can. Your job isn't to do everything; figure out what you have to do, then do it well. One project well done is far better than 12 half-finished. The next thing you need to do is figure out how much whatever you do costs, and how much benefit it creates. If you can't figure out the benefit, you can probably stop doing it today. If it costs more than it saves, you can stop that too. We must have a razor focus if we're to understand what our real problems are. Once we understand the problems we can start to solve them.

Sunday, August 13, 2017

But that's not my job!

This week I've been thinking about how security people and non security people interact. Various conversations I have often end up with someone suggesting everyone needs some sort of security responsibility. My suspicion is this will never work.

First some background to think about. In any organization there are certain responsibilities everyone has. Without using security as our specific example just yet, let's consider how a typical building functions. You have people who are tasked with keeping the electricity working, the plumbing, the heating and cooling. Some people keep the building clean, some take care of the elevators. Some work in the building to accomplish some other task. If the company that inhabits the building is a bank you can imagine the huge number of tasks that take place inside.

Now here's where I want our analogy to start. If I work in a building and I see a leaking faucet, I probably would report it. If I didn't, it's likely someone else would see it. But it's quite possible that I'm one of the electricians, and while accessing some hard to reach place I notice a leaking pipe. It's not my job to fix it. I could tell the plumbers, but they're not very nice to me, so who cares? The last time I told them about a leaking pipe they blamed me for breaking it, so I don't really have an incentive here. If I do nothing, it really won't affect me. If I tell someone, at best it doesn't affect me, but in reality I'll probably get some level of blame or scrutiny.

This almost certainly makes sense to most of us. I wonder if there are organizations where reporting things like this comes with an incentive. A leaking water pipe could end up causing millions in damage before it's found. Nowhere I've worked has really had an incentive to report things like this. If it's not your job, you don't really have to care, so nobody ever really cared.

Now let's think about phishing in a modern enterprise. You see everything from blaming the user who clicked the link, to laughing at them for being stupid, to even maybe firing someone for losing the company a ton of money. If a user clicks a phishing link, and suspects a problem, they have very little incentive to be proactive. It's not their job. I bet the number of clicked phish links we find out about is much much lower than the total number clicked.

I also hear security folks talking about educating the users on how all this works. Users should know how to spot phishing links! While this won't work for a variety of reasons, at the end of the day, it's not their job so why do we think they should know how to do this? Even more important, why do we think they should care?

The thing I keep wondering is: should this be the job of everyone, or just the job of the security people? I think the quick reaction is "everyone" but my suspicion is it's not. Electricity is a great example. How many stories have you heard of office workers being electrocuted in the office? The number is really low because we've made electricity extremely safe. If we put this in the context of modern security, we have a system where the office is covered in bare wires. Imagine wires hanging from the ceiling, some draped on the floor. The bathroom has sparking wires next to the sink. We lost three interns last week, those stupid interns! They should have known which wires weren't safe to accidentally touch. It's up to everyone in the office to know which wires are safe and which are dangerous!

This is of course madness, but it's modern day security. Instead of fixing the wires, we just imagine we can train everyone up on how to spot the dangerous ones.

Friday, July 28, 2017

For a security conference that everyone claims not to trust the wifi, there sure was a lot of wifi

I attended Black Hat USA 2017. Elastic had a booth on the floor where I spent a fair bit of time, as well as meetings scattered about the conference center. It was a great time as always, but this year I had a secret with me. I put together a Raspberry Pi that was passively collecting wifi statistics. Just certain metadata; no actual wifi data packets were captured or harmed in the making of this. I then logged everything into Elasticsearch so I could build pretty visualizations in Kibana. I only captured 2.4 GHz data with one radio, so I had it jumping around between channels. Obviously I missed plenty of data, but this was really just about looking for interesting patterns.

I put everything I used to make this project go on GitHub. It's really rough though, you've been warned.
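For flavor, here's a minimal sketch of this kind of passive capture; it is not the code from the repo. It assumes the scapy and elasticsearch-py (8.x) libraries and a wireless interface already in monitor mode, and it leaves out the channel hopping, which was handled separately.

```python
# Minimal passive wifi metadata capture into Elasticsearch. NOT the repo
# code; assumes scapy, elasticsearch-py 8.x, and an interface already in
# monitor mode. Only headers are logged, never payloads.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch
from scapy.all import Dot11, Dot11Elt, RadioTap, sniff

es = Elasticsearch("http://localhost:9200")

def handle(pkt):
    if not pkt.haslayer(Dot11):
        return
    doc = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "src_mac": pkt[Dot11].addr2,   # OUI prefix maps to a manufacturer
        "type": pkt[Dot11].type,
        "subtype": pkt[Dot11].subtype,
    }
    if pkt.haslayer(RadioTap):
        freq = getattr(pkt[RadioTap], "ChannelFrequency", None)
        if freq:
            doc["freq_mhz"] = freq     # every packet arrives on some frequency
    elt = pkt.getlayer(Dot11Elt)
    if elt is not None and elt.ID == 0 and elt.info:   # SSID element, if any
        doc["ssid"] = elt.info.decode(errors="ignore")
    es.index(index="wifi-metadata", document=doc)

sniff(iface="wlan0mon", prn=handle, store=False)
```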

I have a ton of data to mine, I'll no doubt spend a great deal of time in the future doing that, but here's the basic TL;DR picture.

pretty picture

I captured 12.6 million wifi packets. The blue bars show when I captured what, the table shows the SSIDs I saw (not all packets have SSID data), and the colored graph shows which wifi channels were seen (not all packets have channel data either). I also have packet frequencies logged, so all that can be put together later. The two humps in the wifi data were when I was around the conference. I admit I was surprised by the volume of wifi I saw basically everywhere, even in the middle of the night from my hotel room.

Below is a graph showing the various frequencies I saw, every packet has to come in on some wireless frequency even if it doesn't have a wifi channel.



The devices seen data was also really interesting.

This chart represents every packet seen, so it's clearly going to be a long tail. It's no surprise an access point sends out a lot of packets, but I didn't expect Apple to be #1 here; I expected the top few to be access point manufacturers. It would seem Apple gear is more popular and noisy than I expected.

A more interesting graph is unique devices seen by manufacturer (as a side note, I saw 77,904 devices in total over my 3 days).


This table is far more useful, as it's totally expected that a single access point will be very noisy. I didn't expect Cisco to make the top 3, I admit. But this means Apple was basically 10% of unique wifi devices, and then the numbers drop pretty quickly.

There's a lot more interesting data in this set, I just have to spend some time finding it all. I'll also make a point to single out the data specific to business hours. Stay tuned for a far more detailed writeup.

Saturday, July 22, 2017

Security and privacy are the same thing

Earlier today I ran across this post on Reddit
Security but not Privacy (Am I doing this right?)

The poster basically said "I care about security but not privacy".

It got me thinking about security and privacy. There's not really a difference between the two. They are two faces of the same coin, but the connection isn't always obvious in today's information universe. If a site like Facebook or Google knows everything about you, it doesn't mean you don't care about privacy; it means you're putting your trust in those sites. The same sort of trust that makes passwords private.

The first thing we need to grasp is what I'm going to call a trust boundary. I trust you understand trust already (har har har). But a trust boundary is less obvious sometimes. A security (or privacy) incident happens when there is a breach of the trust boundary. Let's just dive into some examples to better understand this.

A web site is defaced
In this example the expectation is the website owner is the only person or group that can update the website content. The attacker crossed a trust boundary that allowed them to make unwanted changes to the website.

Your credit card is used fraudulently
It's expected that only you will be using your credit card. If someone gets your number somehow and starts to make purchases with your card, however they got it, a trust boundary was crossed. You could easily put this example in the "privacy" bucket if you wanted to keep them separate; it's likely your card was stolen due to lax security at one of the businesses you visited.

Your wallet is stolen
This one is tricky. The trust boundary is probably your pocket or purse. Maybe you dropped it or forgot it on a counter. Whatever happened the trust boundary is broken when you lose control of your wallet. An event like this can trickle down though. It could result in identity theft, your credit card could be used. Maybe it's just about the cash. The scary thing is you don't really know because you lost a lot of information. Some things we'd call privacy problems, some we'd call security problems.

I used a confusing last example on purpose to help prove my point. The issue is all about who you trust with what. You can trust Facebook and give them tons of information; many of us do. You can trust Google for the same basic reasons. That doesn't mean you don't care about privacy, it just means you have put them inside a certain trust boundary. There are limits to that trust though.

What if Facebook decided to use your personal information to access your bank records? That would be a pretty substantial trust boundary abuse. What if your phone company decided to use the information they have to log into your Facebook account?

A good password isn't all that different from your credit card number. It's a bit of private information that you share with one or more other organizations. You are expecting them not to cross a trust boundary with the information you gave them.

The real challenge is to understand what trust boundaries you're comfortable with. What do you share with whom? Nobody is an island; we must exist in an ecosystem of trust. We all have different boundaries of what we will share. That's quite all right. If you understand your trust boundaries, making good security/privacy decisions becomes a lot easier.

They say information is the new oil. If that's true then trust must be the currency.

Thursday, July 20, 2017

Summer is coming

I'm getting ready to attend Black Hat. I will miss BSides and Defcon this year unfortunately, due to some personal commitments. As I'm packing up my gear, I started thinking about what these conferences have really changed. We've been doing this every summer for longer than many of us can remember now. We make our way to the desert, we attend talks by what we consider the brightest minds in our industry. We meet lots of people. Everyone has a great time. But what are the actionable events that come from these things?

The answer is nothing. They've changed nothing.

But I'm going to put an asterisk next to that.

I do think things are getting better, for some definition of better. Technology is marching forward, security is getting dragged along with a lot of it. Some things, like IoT, have some learning to do, but the real change won't come from the security universe.

Firstly we should understand that the world today has changed drastically. The skillset that mattered ten years ago doesn't have a lot of value anymore. Things like buffer overflows are far less important than they used to be. Coding in C isn't quite what it once was. There are many protections built into frameworks and languages. The cloud has taken over a great deal of infrastructure. The list can go on.

The point of such a list is to ask the question, how much of the important change that's made a real difference came from our security leaders? I'd argue not very much. The real change comes from people we've never heard of. There are people in the trenches making small changes every single day. Those small changes eventually pile up until we notice they're something big and real.

Rather than trying to fix the big problems, our time is better spent ignoring the thought leaders and just doing something small. Conferences are important, but not to listen to the leaders. Go find the vendors and attendees who are doing new and interesting things. They are the ones that will make a difference; they are literally the future. Even the smallest bug bounty, feature, or pull request can make a difference. The end goal isn't to be a noisy gasbag; instead it should be all about being useful.



Saturday, July 8, 2017

Who's got your hack back?

The topic of hacking back keeps coming up these days. There's an attempt to pass a bill in the US that would legalize hacking back. There are many opinions on this topic, and I'm generally not one to take a hard stand against what someone else thinks. In this case though, if you think hacking back is a good idea, you're wrong. Painfully wrong.

Everything I've seen up to this point tells me the people who think hacking back is a good idea are either mistaken about the issue or they're misleading others on purpose. Hacking back isn't self defense, it's not about being attacked, it's not about protection. It's a terrible idea that has no place in a modern society. Hacking back is some sort of stone age retribution tribal law. It has no place in our world.

Rather than break the various arguments apart, let's think about two examples that exist in the real world.

Firstly, why don't we give the people doing mall security guns? There is one really good reason I can think of here. The insurance company that holds the policy on the mall would never allow the security staff to carry guns. If you let security carry guns, they will use them someday. They'll probably use them in an inappropriate manner, the mall will be sued, and they will almost certainly lose. That doesn't mean the mall has to pay a massive settlement; it means the insurance company has to pay a massive settlement. They don't want to do that. Even if some crazy law claims it's not illegal to hack back, no sane insurance company will allow it. I'm not talking about cyber insurance, I'm just talking about general policies here.

The second example revolves around shoplifting. If someone is caught stealing from a store, does someone go to their house and take some of their stuff in retribution? They don't of course. Why not? Because we're not cave people anymore. That's why. Retribution style justice has no place in a modern civilization. This is how a feud starts, nobody has ever won a feud, at best it's a draw when they all kill each other.

So this has me really thinking. Why would anyone want to hack back? There aren't many reasons that don't revolve around revenge. The way most attacks work you can't reliably know who is doing what with any sort of confidence. Hacking back isn't going to make anything better. It would make things a lot worse. Nobody wants to be stuck in the middle of a senseless feud. Well, nobody sane.

Sunday, June 25, 2017

When in doubt, blame open source

If you've not read my previous post on thought leadership, go do that now, this one builds on it. The thing that really kicked off my thinking on these matters was this article:

Security liability is coming for software: Is your engineering team ready?

The whole article is pretty silly, but the bit about liability and open source is the real treat. There's some sort of special consideration when you use open source apparently; we'll get back to that. Right now there is basically no liability of any sort when you use software, and I doubt there will be anytime soon. Liability laws are tricky, but the lawyers I've spoken with have been clear that software isn't currently covered in most instances. The whole article is basically nonsense in that respect. The people they interview set the stage for liability and responsibility, then seem to discuss how open source should be treated as special in this context.

Nothing is special, open source is no better or worse than closed source software. If you build something why would open source need more responsibility than closed source? It doesn't of course, it's just an easy target to pick on. The real story is we don't know how to deal with this problem. Open source is an easy boogeyman. It's getting picked on because we don't know where else to point the finger.

The real problem is we don't know how to secure our software in an acceptable manner. Trying to talk about liability and responsibility is fine, nobody is going to worry about security until they have to. Using open source as a discussion point in this conversation clouds it though. We now get to shift the conversation from how do we improve security, to blaming something else for our problems. Open source is one of the tools we use to build our software. It might be the most powerful tool we've ever had. Tools are never the problem in a broken system even though they get blamed on a regular basis.

The conversation we must have revolves around incentives. There is no incentive to build secure software. Blaming open source or talking about responsibility are just attempts to skirt the real issue. We have to fix our incentives. Liability could be an incentive, regulation can be an incentive. User demand can be an incentive as well. Today the security quality of software doesn't seem to matter.

I'd like to end this saying we should make an effort to have more honest discussions about security incentives, but I don't think that will happen. As I mention in my previous blog post, our problem is a lack of leadership. Even if we fix security incentives, I don't see things getting much better under current leadership.

Saturday, June 17, 2017

Thought leaders aren't leaders

For the last few weeks I've seen news stories and much lamenting on Twitter about the security skills shortage. Some say there is no shortage, some say it's horrible beyond belief. Basically there's someone arguing every possible side of this. I'm not going to debate if there is or isn't a worker shortage; that's not really the point. A lot of the complaining was done by people who would call themselves leaders in the security universe. I then read the below article and changed my thinking a bit.


Our problem isn't a staff shortage. Our problem is we don't have any actual leaders. I mean people who aren't just "in charge". Real leaders aren't just in charge; they help their people grow in a way that accomplishes their vision. Virtually everyone in the security space has spent their entire careers working alone to learn new things. We are not an industry known for working together, and the thing I'd never really thought about before was that if we never work together, we never really care about anyone or anything (except ourselves). The security people who are in charge of other security people aren't motivating anyone, which by definition means they're not accomplishing any sort of vision. This holds true for most organizations, since barely keeping the train on the track is pretty much the best case scenario.

If I was going to guess, the existing HR people look at most security groups and see the same dumpster fire we see when we look at IoT.

In the industry today, virtually everyone who is seen as some sort of security leader is what a marketing person would call a "thought leader". Thought leaders aren't leaders. Some do have talent. Some had talent. And some just own a really nice suit. It doesn't matter though. What we end up with is a situation where the only thing anyone worries about is how many Twitter followers they have instead of making a real difference. You make a real difference when you coach and motivate someone else to do great things.

Being a leader with loyal employees would be a monumental step for most organizations. We have no idea who to hire or how to teach them, because the leaders don't know how to do those things. Those are skills real leaders have, and skills real leaders develop in their people. I suspect the HR department knows what's wrong with the security groups. They also know we won't listen to them.

There is a security talent shortage, but it's a shortage of leadership talent.

Sunday, June 11, 2017

Humanity isn't proactive

I ran across this article about IoT security the other day

The US Needs to Get Serious About Securing the Internet of Hackable Things

I find articles like this frustrating for the simple fact that everyone keeps talking about security, but nobody is going to do anything. If you look at the history of humanity, we've never been proactive when dealing with problems. We wait until things can't get worse and the only actual option is to fix the problem. Every problem has at least two options. Option #1 is always "fix it". Option #2 is ignore it. There could be more options, but generally we pick #2 because it's the least amount of work in the short term. Humanity rarely cares about the long term implications of anything.

I know this isn't popular, but I'm going to say it: we aren't going to fix IoT security for a very long time.

I really wish this wasn't true, but it is. If a senator wants to pretend they're doing something while really just ignoring the problem, they hold a hearing and talk about how horrible something is. If they actually want to fix it, they propose legislation. I'm not blaming anyone in charge, mind you; they're really just doing what they think the people want. If we want the government to fix IoT, we have to tell them to do it. Most people don't really care because they don't have a reason to care.

Here's the second point, one I suspect many security people won't want to hear. The reason nobody cares about IoT security isn't that they're stupid. That's the narrative we've been telling ourselves for years. They don't care because the cost of doing nothing is substantially less than the cost of fixing IoT security. We love telling scary campfire stories about how the botnet was coming from inside the house and how a pacemaker will kill grandpa, but the reality is there hasn't been enough real damage from insecure IoT yet. I'm not saying there won't ever be; there just hasn't been enough expensive, widespread damage to make anyone really care.

In a world filled with insecurity, adding security to your product isn't a feature anyone really cares about. I've been doing research on topics such as pollution, mine safety, auto safety, airline safety, and a number of other problems from our past. There are no good examples of humans deciding to be proactive and solve a problem before it became absolutely horrible. People need a reason to care, and there isn't a reason for IoT security.

Yet.

Someday something might happen that makes people start to care. As we add compute power to literally everything, my security brain says some sort of horrible doom is coming without security. But I've been saying this for years and it's never really happened. There is a very real possibility that IoT security will just never happen if things never get bad enough.

Sunday, June 4, 2017

Free Market Security

I've been thinking about the concept of free market forces this weekend. The basic idea is that the price of a good is decided by the supply and demand of the market: if the market demands something that's in short supply, the price goes up. This is basically why the Nintendo Switch is still selling on eBay for more than it costs in the store; there is demand but there isn't supply. But back to security. Let's think about something I'm going to call "free market security". What if supply and demand were driving security? Or, flipping the question around, what if the market will never drive security?

Of course security isn't really a thing like we think of goods and services in this context. At best we could call it a feature of another product. You can't buy security to add it to your products, it's just sort of something that happens as part of a larger system.

I'm leaning in the direction of secure products. Let's pick on mobile phones because that environment is really interesting. Is the market driving security into phones? I'd say the answer today is a giant "no". Most people buy phones that will never see a security update. They don't even ask about updates or security in most instances. You could argue they don't know this is even a problem.

Apple is the leader here by a wide margin. They have invested substantially into security, but why did they do this? If we want to think about market forces and security, what's the driver? If Apple phones were less secure would the market stop buying them? I suspect the sales wouldn't change at all. I know very few people who buy an iPhone for the security. I know zero people outside of some security professionals who would ever think about this question. Why Apple decided to take these actions is a topic for another day I suspect.

Switching gears, the Android ecosystem is pretty rough in this regard. The vast majority of phones sold today are Android phones. They're competitively priced, have similar hardware, and almost all of them are completely insecure. People still buy them though. Security is clearly not a feature driving anything in this market. I bought a Nexus phone because of security, this one single feature, but I am clearly not the norm here.

The whole point we should be thinking about is the idea of a free market for security. It doesn't exist, and it probably won't exist. I see it like pollution. There isn't a very large market for products that either don't pollute or are made without polluting in some way. I know there are some people who worry about sustainability, but the vast majority of consumers don't really care. In fact nobody really cared about pollution until a river actually lit on fire. Some still don't, even after a river lit on fire.

I think many of us in security keep waiting for demand for more security to appear. We keep watching and waiting; any day now everyone will see why this matters! It's not going to happen though. We do need security more and more each day, and the way everything is heading, things aren't looking great. I'd like to think we won't have to wait for the security equivalent of a river catching fire, but I'm pretty sure that's what it will take.

Monday, May 29, 2017

Stealing from customers

I was having some security conversations last week and cybersecurity insurance came up as a topic. This isn't overly unusual as it's a pretty popular topic, but someone said something that really got me thinking.
What if the insurance covered the customers instead of the companies?
Now, I understand that many cybersecurity insurance policies can cover some amount of customer damage and loss, but fundamentally the coverage is for the company that was attacked; customers who have data stolen will maybe get a year of free credit monitoring or some other token service. That's all well and good, but I couldn't help thinking about this problem from another angle. Let's think about insurance in the context of shoplifting. For this thought exercise we're going to use a real store in our example, which won't be exactly correct, but the point is to think about the problem, not get all the minor details right.

If you're in a busy store shopping and someone steals your wallet, it's generally accepted that the store is not at fault for this theft. Most would put some effort into helping you, but at the end of the day you're probably out of luck if you expect the store to repay you for anything you lost. They almost certainly won't have insurance to cover the theft of customer property in their store.

Now let's also imagine there are things taken from the store, actual merchandise gets stolen. This is called shoplifting. It has a special name and many stores even have special groups to help minimize this damage. They also have insurance to cover some of these losses. Most businesses see some shoplifting as a part of doing business. They account for some volume of this theft when doing their planning and profit calculations.

In the real world, I suspect customers being robbed while in a store isn't very common. If there is a store that gains a reputation for customers having wallets stolen, nobody will shop there. If you visit a store in a rough part of town they might even have a security guard at the door to help keep the riffraff out. This is because no shop wants to be known as a dangerous place. You can't exist as a store with that sort of reputation. Customers need to feel safe.

In the virtual world, all that can be stolen is basically information. Sometimes that information can be equated to actual money; sometimes it's just details about a person. Some of it has little to no value, like a very well known email address. Some of it has huge value, like a tax identifier that can be used to commit identity theft. It can be very difficult to know when information is stolen, and the value of what's taken can vary widely. We also seem to place very little value on our own information. Many people will trade it away for an online trinket worth a fraction of the information they just supplied.

Now let's think about insurance. Just like loss prevention insurance, cybersecurity insurance isn't there to protect customers. It exists to help protect the company from the losses of an attack. If customer data is stolen the customers are not really covered; in many instances there's nothing a customer can do. It could be impossible to prove your information was stolen, and even if it gets used somewhere else, can you prove it came from the business in question?

After spending some time on the question of what if insurance covered the customers, I realize how hard this problem is. If real world customer theft is uncommon and still basically not covered, there's probably no hope for information. It's very hard to prove things beyond a reasonable doubt, and many of our laws require actual harm before any action can be taken. Proving that harm is very difficult. We're almost certainly going to need new laws to deal with these situations.

Sunday, May 21, 2017

You know how to fix enterprise patching? Please tell me more!!!

If you pay attention to Twitter at all, you've probably seen people arguing about patching your enterprise after the WannaCry malware. The short story is that Microsoft fixed a very serious security flaw a few months before the malware hit. That means there are quite a few machines on the Internet that haven't applied a critical security update. Of course, as you can imagine, there is plenty of back and forth about updates. There are two basic arguments I keep seeing.

Patching is hard, and if you think I can just turn on Windows Update for all these computers running Windows 3.11 on token ring you've never had to deal with a real enterprise before! You out of touch hipsters don't know what it's really like here. We've seen things, like, real things. We party like it's 1995. GET OFF MY LAWN.

The other side sounds a bit like this.

How can you be running anything that's less than a few hours old? Don't you know what the Internet looks like! If everyone just applied all updates immediately and ran their business in the cloud using agile scrum based SecDevSecOps serverless development practices everything would be fine!

Of course both of these groups are wrong, for basically the same reason. The world isn't simple, and whatever works for you won't work for anyone else. The tie that binds us all together is that everything is broken, all the time. All the things we use are broken, how we use them is broken, and how we manage them is broken. We can't fix them, though we try, and sometimes we pretend we can.

However ...

Just because everything is broken, that's no excuse to do nothing. It's easy to declare something too hard and give up, and a lot of enterprises do; a lot of enterprise security people use this defense to explain why they can't update their infrastructure. On the other side, sometimes moving too fast is more dangerous than moving too slow. Reckless updates are no better than no updates, and sometimes there is nothing we can do. Security as an industry is basically one big Kobayashi Maru test.

I have no advice to give on how to fix this problem. I think both groups are silly and wrong, but why I think that is unimportant. The right way forward is for everyone to have civil conversations where we put ourselves in the other person's shoes. That won't happen though; it never happens, even though basically every leader ever has said that sort of behavior is a good idea. So I suggest you double down on whatever bad practices you've hitched your horse to. In the next few months we'll all have an opportunity to show why our way of doing things is the worst way ever, and we'll also find an opportunity to mock someone else for not doing things the way we do.

In this game there are no winners and losers, just you. And you've already lost.

Wednesday, May 3, 2017

Security like it's 2005!

I was reading the newspaper the other day (the real dead tree newspaper) and I came across an op-ed from my congressperson.

Gallagher: Cybersecurity for small business

It's about what you'd expect but comes with some actionable advice! Well, not really. Here it is so you don't have to read the whole thing.

Businesses can start by taking some simple and relatively inexpensive steps to protect themselves, such as:
» Installing antivirus, threat detection and firewall software and systems.
» Encrypting company data and installing security patches to make sure computers and servers are up to date.
» Strengthening password practices, including requiring the use of strong passwords and two-factor authentication.
» Educating employees on how to recognize an attempted attack, including preparing rapid response measures to mitigate the damage of an attack in progress or recently completed.
I read that and my first thought was "how on earth would a small business have a clue about any of this", but then it got me thinking about the bigger problem. This advice isn't even useful in 2017. It sort of made sense a long time ago when this was the prevailing way of thinking, but it's not valid anymore.

Let's pick them apart one by one.

Installing antivirus, threat detection and firewall software and systems.
It's no secret that antivirus doesn't really work anymore. It's expensive in terms of cost and resources, and in most settings I've seen it probably causes more trouble than it solves. "Threat detection" doesn't really mean anything. Virtually all systems now come with a firewall enabled and some level of software protection that makes traditional antivirus obsolete. Honestly, this is about as solved as it's going to get; there's no positive value you can add here.

Encrypting company data and installing security patches to make sure computers and servers are up to date
These are two unrelated things. Encrypting data is probably overkill for most settings: any encryption that's usable doesn't really protect you, and encryption that actually protects you needs a dedicated security team to manage it. Let's not get into an argument about offline vs online data.

Keeping systems updated is a fantastic idea. Nobody does it because it's too hard. If you're a small business you'll either install zero updates or automatically install them all. The right answer is to use something as a service so you don't have to think about updates, and make sure automatic updates are working on your desktops.

Strengthening password practices, including requiring the use of strong passwords and two-factor authentication

Just use two-factor auth from your as-a-service provider. If you're managing your own accounts and you lack a dedicated identity team, failure is the only option. Every major cloud provider can help you solve this.
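If you're curious what's actually happening when a provider does TOTP-style two-factor auth, here's a minimal sketch using Python's pyotp library. It's purely illustrative; a real provider handles the enrollment, secret storage, and rate limiting for you, which is exactly why you should let them.

```python
# A minimal sketch of TOTP two-factor auth using the pyotp library.
# Purely illustrative; in practice your service provider runs all of this.
import pyotp

# At enrollment the provider generates a shared secret and the user
# loads it into an authenticator app (usually by scanning a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login the user types in the current six digit code from their app.
code = totp.now()  # faked here; normally this comes from the user's phone

# The provider recomputes the expected code from the same shared secret.
# valid_window=1 tolerates one time step of clock drift.
if totp.verify(code, valid_window=1):
    print("second factor accepted")
else:
    print("second factor rejected")
```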

Educating employees on how to recognize an attempted attack, including preparing rapid response measures to mitigate the damage of an attack in progress or recently completed

Just no. There is value in helping employees understand the risks and threats, but this won't work. Social engineering attacks go after the fundamental nature of humanity; you can't stop them with training. The only hope is that we create cold, calculating artificial intelligence that can catch these attacks before they reach humans. A number of service providers can stop some of this today because they have ways to detect anomalies. A small business doesn't and probably never will.


As you can see, this list isn't really practical for anyone. Why should you have to worry about this today? These sorts of problems have plagued small businesses and home users for years. These points are all what I would call "mid 200X" advice: suggestions everyone was giving out around 2005 that didn't really work then, but made everyone feel better. Most of these bullets aren't actionable unless you have a security person on staff. Would a non-security person have any idea where to start, or what any of these items mean?

The 2017 world has a solution to these problems: use the cloud. Stuff as a Service is without question the way to solve these problems because it makes them go away. There are plenty who will naysay the public cloud, citing various breaches, companies leaking data, companies selling data, and plenty of other problems. The cloud isn't magic, but it lets you trade a lot of horrible problems for "slightly bad" ones. I guarantee the problems with the cloud are substantially smaller than those of letting most people run their own infrastructure. I see this a bit like airplane vs automobile crashes: there are magnitudes more deaths by automobile every year, but it's the airplane crashes that get the attention. It's much, much safer to fly than to drive, just as it's much, much safer to use services than to manage your own infrastructure.

Sunday, April 30, 2017

Security fail is people

The other day I ran across someone trying to keep their locker secured with a combination lock. As you can see in the picture, the lock is on the handle of the locker, not on the loop that actually locks the door. When I saw this I had a good chuckle, took a picture, and put out a snarky tweet. Then I started to think about it quite a bit. Is this the user's fault, or is this bad design? I'm going to blame bad design on this one. It's easy to blame users, and we do it often, but in most instances the problem is the design, not the user. If nothing is ever our fault, we will never improve anything. I suspect this is part of the problem we see across the cybersecurity universe.

On Humans

One of the great truths I'm starting to understand as I deal with humans more and more is that the one thing we all have in common is that we have waves of unpredictability. Sometimes we pay very close attention to our surroundings and situations, sometimes we don't. We can be distracted by someone calling our name, by something that happened earlier in the day, or even something that happened years ago. If you think you pay very close attention to everything at all times you're fooling yourself. We are squishy bags of confusing emotions that don't always make sense.

In the above picture, I can see a number of ways this happens. Maybe the person was very old and couldn't see; I have bad eyesight and could see this happening. Maybe they were talking to a friend and didn't notice where they put the lock. What if they dropped their phone moments before putting the lock on the door? Maybe they're just a clueless idiot who can't use locks! Well, probably not that last one.

This example is bad design. Why is there a handle that can hold a lock directly above the loop that is supposed to hold the lock? I can think of a few ways to solve this. The handle could be something other than a loop. A pull knob would be a lot harder to screw up. The handle could be farther up, or down. The loop could be larger or in a different place. No matter how you solve this, this is just a bad design. But we blame the user. We get a good laugh at a person making a simple mistake. Someday we'll make a simple mistake then blame bad design. It is also human nature to find someone or something else to blame.

The question I keep wondering about: did whoever designed this door think about security in any way? Were they wondering how the system could and would fail? How it would be misused? How it could be broken? In this case I doubt anyone was thinking about security failures for a locker door; it's just a locker. They probably told the intern to go draw a rectangle and put a handle on it. If I could find the manufacturer and tell them about this, would they listen? I'd probably get pushed into the "crazy old kook" queue. You can even wonder whether anyone really cares about locker security.

Wrapping up a post like this is always tricky. I could give advice about secure design, or tell everyone they should consult with a security expert. Maybe the answer is better user education (haha no). I think I'll target this at the security people who see something like this, take a picture, then write a tweet about how stupid someone is. We can use examples like this to learn and shape our own way of thinking. It's easy to use snark when we see something like this. The best thing we can do is make note of what we see, think about how this could have happened, and someday use it as an example to make something we're building better. We can't fix the world, but we can at least teach ourselves.

Monday, April 24, 2017

I have seen the future, and it is bug bounties


Every now and then I see something on a blog or Twitter about how you can't replace a pen test with a bug bounty. For a long time I agreed, but I've recently changed my mind. I know this isn't a super popular opinion (yet), and I don't think either side of this argument is exactly right. Fundamentally, the future of looking for issues will not be the pen test. It won't really be bug bounties either, but I'm going to predict that pen testing will evolve into what we currently call bug bounties.

First let's talk about pen tests. There's nothing wrong with getting a pen test; I'd suggest everyone go through a few just to see what it's like. I want to be clear that I'm not saying pen testing is bad, I'm making the argument that it's not the future. It is the present: many organizations require pen tests for a variety of reasons, and they will continue to be a thing for a very long time. If you can only pick one, you should probably choose a pen test today, as it's at least a known known. Bug bounties are still known unknowns for most of us.

I also want to clarify that internal pen testing teams don't fall under this post. Internal teams are far more focused and have special knowledge that an outside company never will. It's my opinion that an internal team is and will always be superior to an outside pen test or bug bounty. Of course a lot of organizations can't afford to keep a dedicated internal team, so they turn to the outside.

So anyhow, it's time for a pen test. You find a company to conduct it, and you scope what will be tested (it can't be everything). You agree on various timelines, then things get underway. After perhaps a week of testing, you have a very, very long and detailed report of what was found. Here's the thing about a pen test: you're paying someone to look for problems, and you get what you pay for, a list of problems, usually a huge list. Everyone knows that the bigger the list, the better the pen test! But here's the dirty secret: most of the results will never be fixed. Most will fall below your internal bug bar. You paid for a ton of issues, you got a ton of issues, then you threw most of them out. Of course it's quite likely high priority problems will be found, which is great; those are what you really care about, not the unexciting problems that make up 95% of the report. What's your cost per issue fixed from that pen test?

Now let's look at how a bug bounty works. You find a company to run the bounty (it's probably not worth doing this yourself; there are many logistics). You scope what will be tested. You can agree on certain timelines and/or payout limits. Then things get underway. Here's where it's very different though: you're paying for the scope of the bounty, and you get what you pay for, so there is an aspect of control. If you're only paying for critical bugs then, by definition, you'll only get critical bugs. Of course there will be a certain number of false positives; if I had to guess, it's similar to a pen test today, but it will decrease as these organizations figure out how to cut down on noise. I know HackerOne is doing some clever things to prevent noise.

My point in this whole post revolves around getting what you pay for: essentially a cost per issue fixed model instead of the current cost per issue found model. The real difference is that with a bug bounty you can control the scope of what comes in. In no way am I suggesting a pen test is a bad idea; I'm simply suggesting that the 200 page report isn't very useful. Of course, if a pen test returned three issues you'd probably be pretty upset when paying the bill. We all have finite resources, so naturally we can't and won't fix minor bugs; it's just how things work. Today you'll get about the same results from a bug bounty as from a pen test, but I see the bug bounty as having room to improve. The pen test model isn't full of exciting innovation.
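To make the cost per issue fixed idea concrete, here's some back of the napkin math. Every number below is invented purely for illustration; plug in your own.

```python
# Back of the napkin comparison of the two models. All numbers are invented.

# Pen test: a flat fee buys you a big list of findings.
pen_test_fee = 40_000         # hypothetical engagement cost
issues_found = 120            # the very long report
issues_fixed = 6              # what actually clears your internal bug bar
print(f"pen test: ${pen_test_fee / issues_fixed:,.0f} per issue fixed")

# Bug bounty: you pay per in-scope report, so scope is the control knob.
platform_fee = 10_000         # hypothetical cost of running the program
payout_per_critical = 3_000   # only criticals are in scope
criticals_reported = 6        # by definition, all worth fixing
bounty_total = platform_fee + payout_per_critical * criticals_reported
print(f"bug bounty: ${bounty_total / criticals_reported:,.0f} per issue fixed")
```

With these made-up numbers the pen test runs about $6,700 per fixed issue and the bounty about $4,700, and the bounty number improves as the scope gets tighter. The point isn't the exact figures; it's that only one of the two models gives you a knob to turn.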

All that said, not every product and company will be able to attract enough interest in a bug bounty. Let's face it, the real purpose behind all this is to raise the security profile of everyone involved. Some organizations will have to use a pen test like model to get their products and services investigated. This is also why the bug bounty won't be a viable long term option: there are too many bugs and not enough researchers.

Now for the bit about the future. In the near future we'll see the pendulum swing from pen testing to bug bounties. The swing after bug bounties will be automation. Humans aren't very good at digging through huge amounts of data, but computers are. What we're really good at, and computers are (currently) really bad at, is finding new and exciting ways to break systems. We once thought double free bugs couldn't be exploited. We didn't see a problem with NULL pointer dereferences. Someone once thought deserializing objects was a neat idea. I would rather see humans working on the future of security instead of exploiting the past. The future of the bug bounty can be new attack methods instead of finding bugs. We have some work to do; I've not seen an automated scanner I'd even call "almost not terrible". It will happen though, tools always start terrible and get better through the natural march of progress. The road to this unicorn future passes through bug bounties. But if we don't have automation ready on the other side, it's nothing but dragons.
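For a tiny taste of what that automation looks like, here's a toy fuzzer. The buggy parse_record function is made up for the example, and real fuzzers are enormously more sophisticated, but the division of labor is the same: a human spots a new bug class, and the tireless machine grinds out instances of it.

```python
# A toy fuzzer: throw random inputs at a parser and report what crashes.
# parse_record() is an invented, deliberately buggy target for illustration.
import random

def parse_record(data: bytes) -> int:
    # Pretend this is legacy parsing code that never considered
    # that the second byte (a "divisor" field) might be zero.
    if len(data) >= 2:
        return data[0] // data[1]
    return 0

random.seed(7)
for i in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        parse_record(blob)
    except ZeroDivisionError as exc:
        print(f"iteration {i}: {blob!r} crashed the parser: {exc}")
        break
```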

Sunday, April 16, 2017

Crawl, Walk, Drive

It's that time of year again. I don't mean the time when all the government secrets get leaked onto the Internet by some unknown organization. I mean the time of year when it's unsafe to cross streets or ride your bike, at least in the United States. It's possible more civilized countries don't have this problem. I enjoy getting around without a car, but I feel like the number of near misses has gone up a fair bit, and it's always a person much younger than me with someone much older than them in the passenger seat. At first I didn't think much about this and just dreamed of how self driving cars will rid us of the horror that is human drivers. After the last near fatality while crossing the street, it dawned on me that this is the time of year when all the kids have their learner's permits. I think I preferred not knowing, since now I know my adversary. It has a name, and that name is "youth".

For those of you who aren't familiar with how this works in the US: essentially, after less training than is given to a typical volunteer, a young person generally around the age of 16 is given the ability to drive a car, on real streets, as long as there is a "responsible adult" in the car with them. We know this is impossible, as all humans are terribly irresponsible drivers. They then spend a few months almost getting into accidents, take a proper test administered by someone who has one of the few jobs worse than IT security, and generally end up with a real driver's license, ensuring we never run out of terrible human drivers.

There are no doubt a ton of stories that could be told here about mentorship, learning, encouraging, leadership, or teaching. I'm not going to talk about any of that today. I think often about how we raise up the next generation of security goons, but I'm tired of talking about how we're all terrible people and nobody likes us, at least for this week.

I want to discuss the challenges of dealing with someone who is very new, very ambitious, and very dangerous. There are always going to be "new" people in any group or organization. Eventually they learn the rules they need to know, generally because they screw something up and someone yells at them about it. Goodness knows I learned most of what I know like this. But the point is, as security people, we not only have to do some yelling, we have to keep things in order while the new person is busy making a mess of everything. The yelling can help us feel better, but we still have to make sure things can't go too far off the rails.

In many instances the new person will have some sort of mentor. They will of course try to keep them on task and learning useful things, but just like the parent of our student driver, they probably spend more time gaping in terror than they do teaching anything useful. If things really go crazy you can blame them someday, but at the beginning they're just busy hanging on trying not to soil themselves in an attempt to stay composed.

This brings us back to the security group. If you're in a large organization, every day is "new person screwing something up" day. I can't even begin to imagine what it must be like at a public cloud provider, where you not only have new employees but customers whose behavior is basically ongoing risk. The solution to this problem is the same as our student driver problem: stop letting humans operate the machines. I'm not talking about the new people, I'm talking about the security people. If you don't make heavy use of automation, if you're not aggregating logs and having algorithms look for problems, for example, you've already lost the battle.
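To make "algorithms looking for problems" concrete, here's a minimal sketch: count failed logins per account in an aggregated log and flag the outliers. The log format, account names, and threshold are all invented; a real pipeline is far messier, but this is the shape of the idea.

```python
# A toy log watcher: flag accounts with a suspicious number of failed logins.
# The log format and the threshold are invented for illustration.
from collections import Counter

log_lines = [
    "2017-04-16T03:01:12 FAILED_LOGIN alice",
    "2017-04-16T03:01:15 FAILED_LOGIN bob",
    "2017-04-16T03:02:09 FAILED_LOGIN mallory",
    "2017-04-16T03:02:10 FAILED_LOGIN mallory",
    "2017-04-16T03:02:11 FAILED_LOGIN mallory",
    "2017-04-16T03:02:12 FAILED_LOGIN mallory",
    "2017-04-16T03:02:13 FAILED_LOGIN mallory",
    "2017-04-16T03:02:14 FAILED_LOGIN mallory",
]

# Count failures per account across the aggregated logs.
failures = Counter(
    line.split()[-1] for line in log_lines if "FAILED_LOGIN" in line
)

THRESHOLD = 5  # invented; a real system baselines per user and time of day
for user, count in failures.items():
    if count > THRESHOLD:
        print(f"ALERT: {user} has {count} failed logins in the window")
```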

Humans in general are bad at repetitive, boring tasks. Driving falls under this category, and a lot of security work does too. I touched on the idea of measuring what you do in my last post, and I'm going to tie these together in the next one. We do a lot of things that don't make sense when we measure them, but we struggle to measure security. I suspect part of the reason is that for a long time we were the passenger riding with the student drivers: if we emerged at the end of the ride alive, we were mostly happy.

It's time to become the groups building the future of cars, not the ones waiting for a horrible crash to happen. The only way we can do that is if we start to understand and measure what works and what doesn't, everything from ROI to how effective our policies and procedures are. Make sure you come back next week, assuming I'm not run down by a student driver before then.