Monday, October 31, 2016
Tonight while I was handing out candy on Halloween, as the children came to the door trick-or-treating for whatever candy I hadn't yet eaten, I started thinking about scary stories in the security universe. Some of the things we do in security could be compared to the old fable of the cursed monkey's paw, which is one of my favorite stories.
For those who don't know the story, the quick version is essentially this: there is a monkey's paw, an actual severed appendage of a monkey (it's not some sort of figurative item). It has some fingers on it that may or may not signify the number of wishes used. The paw is indestructible; the previous owner doesn't want it, but can't get rid of it until some unsuspecting sucker shows up. You get three wishes, or five, or however many depending upon the version of the story that's told (these old folk tales can differ greatly depending on what part of the world is telling them), and the monkey's paw gives you exactly what you asked for. The problem is that what you asked for comes with horrifying consequences. For example, an old man who had the paw asked for $200; the next day he got his $200 because his son was killed at work and they brought him the $200 from his last paycheck. There are different variants of this, of course, but the basic idea is the paw seems clever, it grants wishes, but every wish comes with terrible consequences.
This story got me thinking about security: how we ask questions and how we answer questions. What if we think about this in the context of application security specifically for this example? If someone were to ask the security team "does this code have a buffer overflow in it?", the person asked for help is going to look for buffer overflows, and they may or may not notice that the code has a SQL injection problem. Or maybe it has an integer overflow or some other problem. The point is that's not what they were looking for, so we didn't ask the right question. You can take this a little further: occasionally someone will ask "is my system secure?" The answer is definitively no. You don't even have to look at the system to answer that question, so in reality they don't even know what to ask. They are asking the monkey's paw to bring them their money; it's going to do it, but they're not going to like the consequences.
So this really makes me think about how we frame the question, since the questions we ask are super important; getting the details right is a big deal. However, there's also another side to asking questions, and that's being the human receiving the question. You have to be rational and sane in the way you deal with the person asking those questions. If we are the monkey's paw, only giving people the technical answer to the technical question, odds are good we aren't actually helping them.
As I sit here on this cold windy Halloween, waiting for the kids to come and take all the candy that I keep eating, it really makes me think: as security practitioners we need to be very mindful of the fact that the questions people ask us might not match the answers they actually want. It's up to us as humans, rather than monkey's paws, to interpret the intent behind the question, figure out what the person really wants to ask, then give them answers they can use, answers they need, and answers that are actually helpful.
Sunday, October 30, 2016
Security is in the same leaky boat as the sysadmins
Sysadmins used to rule the world. Anyone who's been around for more than a few years remembers the days when whatever the system administrator wanted, the system administrator got. They were the center of the business. Without them nothing would work. They were generally super smart and could quite often work magic with what they had. It was certainly a different time back then.
Now developers are king, the days of the sysadmin have waned. The systems we run workloads on are becoming a commodity, you either buy a relatively complete solution, or you just run it in the cloud. These days most anyone using technology for their business relies on developers instead of sysadmins.
But wait, what about servers in the cloud, or containers, which are like special mini servers, or ... other things that sysadmins have to take care of? If you really think about it, containers and cloud are just vehicles for developers. All this new technology, all the new disruption, all the interesting things happening are about enabling developers. Containers and cloud aren't ends in themselves; they are the boats by which developers deliver their cargo. Cloud didn't win, developers won; cloud just happens to be their weapon of choice right now.
If we think about all this, the question I keep wondering is "where does security fit in?"
I think the answer is that it doesn't. It probably should, but we have to change the rules, since what we call security today is an antiquated and broken idea. A substantial amount of our security ideas and methods are from the old sysadmin world. Even our application security revolves around finding individual bugs, then releasing updates for them. This new world changes all the rules.
Much of our security ideas and concepts are based on the days when sysadmins ruled the world. They were like a massive T-Rex ruling their domain, instilling fear into those beneath them. Today in security we are trying to build Jurassic Park, except there are no dinosaurs; they all went extinct. Maybe we can use horses instead, nobody will notice ... probably. Most security leaders and security conferences have been the same people saying the same things for the last ten years. If any of it worked even a little, I think we'd have noticed by now.
If you pay attention to the new hip ideas around development and security, you've probably heard of DevSecOps, Rugged DevOps, SecDevOps, and a few more. They may be different things, but really, it should all just be called "DevOps". We're in the middle of disruptive change; a lot of the old ideas and ways don't make sense anymore. Security is pretty firmly entrenched in 2004. Security isn't a special snowflake, it's not magic, and it shouldn't be treated like it's somehow outside the business. Security should just exist the same way electricity or the internet does. If you write software, having a special security step makes as much sense as having a special testing step. You used to have testing as a step; you don't anymore, because it's just a part of the workflow.
I've asked the question in the past "where are all the young security people?" I think I'm starting to figure this out. There are very few because nobody wants to join an industry that is being disrupted (at least nobody smart) and let's face it, security is seen as a problem, not a solution. The only real reason it's getting attention lately is because we've done a bad job in the past so everything is on fire now. If you want to really scare someone to death, pull out the line "I'm from security and I'm here to help". You aren't really, you might think you are, but they know better.
Comment on Twitter
Sunday, October 23, 2016
Everything you know about security is wrong
If I asked everyone to tell me what security is, what they do about it, and why they do it, I wouldn't get two answers that were the same. I probably wouldn't even get two that are similar. Why is this? After recording Episode 9 of the Open Source Security Podcast I co-host, I started thinking a lot about measuring. It came up in the podcast in the context of bug bounties, which get exactly what they measure. But do they measure the right things? I don't know the answer, nor does it really matter. It's just important to keep in mind that in any system, you will get exactly what you measure.
Why do we do the things we do?
I've asked this question before, and I often get answers from people. Some are well-thought-out, reasonable answers. Some are overly simplistic. Some are just plain silly. All of them are wrong. I'm going to go so far as to say we don't know why we do what we do in most instances. Sure, there might be compliance, with its bunch of rules that everyone knows don't really increase security. Some of us fix security bugs so the bad guys don't exploit them (even though very few actually get exploited). Some of us harden systems using rules that probably don't stop a motivated attacker.
Are we protecting data? Are we protecting the systems? Are we protecting people? Maybe we're protecting the business. Sure, that one sounds good.
Measuring a negative
There's a reason this is so hard and weird though. It's only sort of our fault; it's what we try to measure. We are trying to measure something not happening. You cannot measure how many times an event didn't happen. It's also impossible to prove a negative.
Do you know how many car accidents you didn't get in last week? How about how many times you weren't horribly maimed in an industrial accident? How many times did you not get mugged? These questions don't even make sense, no sane person would even try to measure those things. This is basically our current security metrics.
The way we look at security today is all about the negatives. The goal is to not be hacked. The goal is to not have security bugs. Those aren't goals, those are outcomes.
What's our positive?
In order to measure something, it has to be true. We can't prove a negative; we have to prove something to measure it. So what's the "positive" we need to look for and measure? This isn't easy. I've been in this industry for a long time and I've done a lot of thinking about this. I'm not sure I'm right in my list below, but getting others to think about this is more important than being right.
As security people, we need to think about risk. Our job isn't to stop bad things, it's to understand and control risk. We cannot stop bad things from happening, the best we can hope for is to minimize damage from bad things. Right about now is where many would start talking about the NIST framework. I'm not going to. NIST is neat, but it's too big for my liking, we need something simple. I'm going to suggest you build a security score card and track it over time. The historical trends will be very important.
Security Score Card
I'm not saying this is totally correct, it's just an idea I have floating in my mind, you're welcome to declare it insane. Here's what I'm suggesting you track.
1) Number of staff
2) Number of "systems"
3) Lines of code
4) Number of security people
That's it.
Here's why though. Let's think about measuring positives. We can't measure what isn't happening, but we can measure what we have and what is happening. If you work for a healthy company, 1-3 will be increasing. What does your #4 look like? I bet in many organizations it's flat and grossly understaffed. Good staff will help deal with security problems. If you have a good leader and solid staff, a lot of security problems get dealt with. Things like the NIST framework are what happen when you have competent staff who aren't horribly overworked; you can't force a framework on a broken organization, it just breaks it worse. Every organization is different; there is no one framework or policy that will work. The only way we tackle this stuff is by having competent, motivated staff.
The other really important thing this does is that it makes you answer the questions. I bet a lot of organizations can't answer 2 and 3. #1 is usually pretty easy (just ask LDAP), #2 is much harder, and #3 may be impossible for some. These look like easy things to measure, and just like quantum physics, by measuring them we will change them, probably for the better.
If you have 2000 employees, 200 systems, 4 million lines of code, and 2 security people, that's clearly a disaster waiting to happen. If you have 20 security people, there may be hope. I have no idea what the proper ratios should be; if you're willing to share your ratios with me, I'd love to start collecting data. As I said, I don't have scientific proof behind this, it's just something I suspect is true.
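If you want to make this concrete, here's a minimal sketch of what tracking the scorecard could look like; the field names and example figures are just assumptions for illustration, nothing more.

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    """One quarterly snapshot of the four scorecard numbers."""
    quarter: str
    staff: int            # 1) number of staff
    systems: int          # 2) number of "systems"
    lines_of_code: int    # 3) lines of code
    security_people: int  # 4) number of security people

    def ratios(self):
        """How much each security person is responsible for."""
        s = self.security_people
        return {
            "staff_per_security": self.staff / s,
            "systems_per_security": self.systems / s,
            "kloc_per_security": self.lines_of_code / 1000 / s,
        }

# Track snapshots over time; the historical trend matters more than
# any single number.
history = [
    Scorecard("2016-Q3", staff=2000, systems=200,
              lines_of_code=4_000_000, security_people=2),
    Scorecard("2016-Q4", staff=2100, systems=220,
              lines_of_code=4_300_000, security_people=4),
]

for card in history:
    print(card.quarter, card.ratios())
```

The point isn't the exact numbers; it's that in most organizations the first three keep growing while the fourth stays flat, and this makes that visible.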
I should probably add one more thing. What we measure not only needs to be true, it needs to be simple.
Send me your scorecard via Twitter
Friday, October 21, 2016
IoT Can Never Be Fixed
This title is a bit clickbaity, but it's true, just not for the reason you think. Keep reading to see why.
If you've ever been involved in keeping a software product updated, I mean from the development side of things, you know it's not a simple task. It's nearly impossible really. The biggest problem is that even after you've tested it to death and gone out of your way to ensure the update is as small as possible, things break. Something always breaks.
If you're using a typical computer, when something breaks, you sit down in front of it, type away on the keyboard, and you fix the problem. More often than not you just roll back the update and things go back to the way they used to be.
IoT is a totally different story. If you install an update and something goes wrong, you now have a very expensive paperweight. It's usually very difficult to fix IoT devices when something goes wrong; many of them are installed in less than ideal places, and some may even be dangerous to get near.
This is why very few things do automatic updates. If you have automatic updates configured, things can just stop working one day. You'll probably have no idea it's coming, one day you wake up and your camera is bricked. Of course it's just as likely things won't break until it's something super important, we all know how Murphy's Law works out.
This doesn't even take into account the problems of secured updates, vendors going out of business, hardware going end of life, and devices that fail to update for some reason or other.
The law of truly large numbers
Let's assume there are 2 million of a given device out there, and that automatic updates are enabled. If we guess that 10% won't get updates for some reason or other, that means there will be around 200,000 vulnerable devices that miss the first round of updates. That's one product. With IoT the law of truly large numbers kicks in. Crazy things will happen because of this.
The law of truly large numbers tells us that if you have a large enough sample set, every crazy thing that can happen, will happen. Because of this law, the IoT can never be secured.
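Here's a quick back-of-the-envelope sketch of both numbers; the rates in it are assumptions for illustration only, not measurements.

```python
# Law of truly large numbers, back of the envelope.

fleet = 2_000_000   # one product's install base
miss_rate = 0.10    # assumed: 10% of devices miss an update
print(f"{int(fleet * miss_rate):,} devices left vulnerable")  # 200,000

# Even a "one in a billion" freak event per device per day becomes
# near certain once enough devices run for long enough.
p = 1e-9                  # assumed daily probability of a freak event
n = fleet * 10 * 365      # ten such fleets, one year of device-days
p_at_least_one = 1 - (1 - p) ** n
print(f"chance of at least one occurrence: {p_at_least_one:.3f}")  # ~0.999
```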
Now, all this considered, that's no reason to lose hope. It just means we have to take this into consideration. Today we don't build systems that can handle a large number of crazy events. Once we take this into account, we can start to design systems that are robust against these problems. The way we develop these systems and products will need a fundamental change. The way we do things today doesn't work in a large-numbers situation. It's not a matter of maybe fixing this; it has to be fixed, someone will fix it, and the rewards will be substantial.
Comment on Twitter
Monday, October 17, 2016
Can I interest you in talking about Security?
I had a discussion last week with some fellow security folks about how we can discuss security with normal people. If you pay attention to what's going on, you know the security people and the non-security people don't really communicate well. We eventually made our way to comparing what we do to the door-to-door religious groups. They're rarely seen in a positive light, are usually annoying, and only seem to show up when it's most inconvenient. This got me thinking: we probably have more in common there than we want to admit, but there are also some lessons for us.
Firstly, nobody wants to talk to either group. The reasons are basically the same. People are already mostly happy with whatever choices they've made and don't need someone showing up to mess with their plans. Do you enjoy being told you're wrong? Even if you are wrong, you don't want someone telling you this. At best you want to figure it out yourself but in reality you don't care and will keep doing whatever you want. It's part of being an irrational human. I'm right, you're wrong, everything else is just pointless details.
Let's assume you are certain that the message you have is really important. If you're not telling people something useful, you're wasting their time. It doesn't matter how important a message is; the audience has to want to hear it. Nobody likes having their time wasted. In this crazy election season, how often are you willing to stay on the line when a pollster calls, rather than just hanging up? You know it's just a big waste of time.
Most importantly though, you can't act pretentious. If you think you're better than whoever you're talking to, even if you're trying hard not to show it, they'll know. Humans are amazing at understanding what another person is thinking by how they act; it's how we managed to survive this long. Our monkey brains are really good at handling social interactions without us even knowing. How often do you talk to someone who is acting superior to you, and all you want to do is stop talking to them?
Now what?
It's really easy to point all this stuff out; most of us probably know this already. So what can we start doing differently? In the same context of door-to-door selling, it's far more powerful if someone comes to you. If they come to you, they want to learn and understand. So while there isn't anything overly new and exciting here, the thing that's best for us to remember today is to just be available. If you're approachable, you will be approached, and when that happens, make sure you don't drive your audience away. If someone wants to talk to you about security, let them. And be kind, understanding, and sympathetic.
Monday, October 10, 2016
Only trust food delivered by a zebra
If you're a security person, you're probably used to normal people not listening to you. Sometimes we know why they don't listen, but often the users just get blamed for being stupid or stubborn or something else that explains away their behavior. In a conversation the other day, it was noted that some of our advice could be compared to telling someone they should only trust food that has been delivered to them by a zebra.
It's meant to sound silly, because it is silly.
If you tell someone they should only trust food delivered by a zebra, they might nod and agree, some will tell you that's silly, but fundamentally nobody is going to listen. They won't listen because that advice is completely impractical. If you give impractical advice, your advice gets ignored. This gets tricky, though, because what I call impractical advice, you may not. Things can get complicated, especially when a difficult topic is being discussed. It's even harder when you have a lot of people who are self-proclaimed experts but in reality don't know very much.
This is basically the story of security though. We give advice that we think is practical, normal people hear advice that makes no sense, makes their life worse, and is probably something they don't even want to do. They have two choices. Tell you they think your advice is bad, or just nod and agree while doing whatever they want. The latter is much less work. If someone tells you the advice you just gave them is bad, you're not going to think about how to give better advice, you're going to spend a lot of time convincing them why you're right and they're wrong. Smart people don't argue, they just nod and agree.
The solution to this problem is very simple to explain, but will be very hard to do. It's not uncommon for me to talk about listening as a very important thing we should be doing more of. If listening were easy, or solved as many things as I claim it does, we wouldn't have any more problems. While it is super important that we listen to those we must help, it's only a small part of what we have to do. We must learn to be tactful first. You can't listen to people who won't talk to you. And if you show up demanding zebra food, nobody will ever tell you anything useful. You get branded as a kook, and that pretty much ends everything.
Stop demanding zebra food.
Comment on Twitter
Monday, October 3, 2016
Impossible is impossible!
Sometimes when you plan a security project, the expectation is that the thing you're doing will make some outcome (probably something bad) impossible. The goal of the security group is to keep the bad guys out, or keep the data in, or keep the servers patched, or find all the security bugs in the code. One way to look at this is that security is often in the business of preventing things from happening, such as making data exfiltration impossible. I'm here to tell you it's impossible to make something impossible.
As you think about that statement for a bit, let me explain what's happening here, and how we're going to tie this back to security, business needs, and some common sense. We've all heard of the 80/20 rule; one form of it is that the last 20% of the features are 80% of the cost. It's a bit more nuanced than that if you really think about it. If your goal is impossible, it would be more accurate to say 1% of the features are 2000% of the cost. What's really being described here is a curve that looks like this:
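As an illustrative sketch of such a curve (the exact graph isn't reproduced here, and this formula is my assumption, not the author's figure), one simple function with this shape is a cost that diverges as completeness approaches 100%:

$$\mathrm{cost}(p) = C \cdot \frac{p}{1 - p}, \qquad 0 \le p < 1$$

At 80% complete the cost is $4C$; at 99% it's $99C$; as $p \to 1$ the cost grows without bound.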
You can't make it to 100%, no matter how much you spend. This of course means there's no point in trying to get there, but more importantly you have to realize you can't get to 100%. If you're smart you'll put your feature set somewhere around 80%; anything above that is probably a waste of money. If you're really clever, there is some sort of best place to be investing resources, and that's where you really want to be. 80% is probably a solid first pass though, and it's an easy number to remember.
The important thing to remember is that 100% is impossible. The curve never reaches 100%. Ever.
The thinking behind this came about while I was discussing DRM with someone. No matter what sort of DRM gets built, someone will break it. DRM is built by a person which means, by definition, a smarter person can break it. It can't be 100%, in some cases it's not even 80%. But when a lot of people or groups think about DRM, the goal is to make acquiring the movie or music or whatever 100% impossible. They even go so far as to play the cat and mouse game constantly. Every time a researcher manages to break the DRM, they fix it, the researcher breaks it, they fix it, continue this forever.
Here's the question about the above graph though. Where is the break even point? Every project has a point of diminishing returns. A lot of security projects forget that if the cost of what you're doing is greater than the cost of the thing you're trying to protect, you're wasting resources. Never forget that there is such a thing as negative value. Doing things that don't matter often create negative value.
This is easiest to explain in the context of ransomware. If you're spending $2000 to protect yourself from a ransomware infection that will cost you $300, that's a bad investment. As crime inc. continues to evolve, I imagine they will keep a lot of this in mind; if they can keep their damage low, there won't be a ton of incentive for security spending, which helps them grow their business. That's a topic for another day though.
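If you want to put numbers on the break-even idea, here's a minimal expected-value sketch; every figure in it is a made-up assumption for illustration.

```python
# Break-even check: a control is only worth it if it reduces expected
# loss by more than it costs. All numbers are illustrative assumptions.

incident_cost = 300     # assumed: the ransom plus cleanup
p_without = 0.5         # assumed yearly probability without the control
p_with = 0.05           # assumed yearly probability with the control
control_cost = 2000     # yearly cost of the protection

savings = (p_without - p_with) * incident_cost  # reduction in expected loss
print(f"expected savings: ${savings:.0f} vs. control cost: ${control_cost}")
print("worth it" if savings > control_cost else "bad investment")
```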
The summary of all this is that perfect security doesn't exist. It might never exist (never say never though). You have to accept good enough security. And more often than not, good enough is close enough to perfect that it gets the job done.
Comment on Twitter