Sunday, December 25, 2016

During the holiday I started playing Doom 2. I bet I haven't touched this game in more than ten years; I can't even remember the last time I played it. My home directory was full of garbage, and while cleaning it up I came across doom2.wad. I've been carrying this file around in my home directory for nearly twenty years now. It's always there, like an old friend you know you can call at any time, day or night. I decided it was time to install one of the Doom engines and give it a go. I picked prboom; it's something I used a long time ago, and it doesn't have any fancy features like mouselook or jumping. Part of the appeal is keeping the experience close to the original. Plus, if you could jump, a lot of these levels would be substantially easier. The game depends on not having those features.
This game is a work of art. You don't see games redefining the industry like this anymore. The original Doom is good, but Doom 2 is like adding color to a black and white picture; it adds a certain quality. The game has a story. It's pretty bad, but that's not why we play it. The appeal is the mix of puzzles, action, monsters, and just plain cleverness. I love those areas where two crazy huge monsters are fighting, you wonder which will win, then you start running like mad when you realize the winner is now coming after you. The games today are good, but it's not exactly the same. The graphics are great, the stories are great, the gameplay is great, but none of it is new and exciting. Doom was new and exciting. It created a whole new genre of gaming and became the bar every game after it reaches for. There are plenty of old games that are terrible when played today, even with the glasses of nostalgia on. Doom has terrible graphics, but that doesn't matter; the game is still fantastic.
This all got me thinking about how industries mature. Crazy new things stop happening, the existing players find a rhythm that works for them and they settle into it. When was the last time we saw a game that redefined the gaming industry? There aren’t many of these events. This brings us to the security industry. We’re at a point where everyone is waiting for an industry defining event. We know it has to happen but nobody knows what it will be.
I bet this is similar to gaming back in the days of Doom. The 486 had just come out, and it had a ton of horsepower compared to anything that came before it. Anyone paying attention knew there were going to be awesome advancements. We gave smart people awesome new tools. They delivered.
Back to security now. We have tons of awesome new tools. Cloud, DevOps, Artificial Intelligence, Open Source, microservices, containers. The list is huge and we’re ready for the next big thing. We all know the way we do security today doesn’t really work, a lot of our ideas and practices are based on the best 2004 had to offer. What should we be doing in 2017 and beyond? Are there some big ideas we’re not paying attention to but should be?
Do you have thoughts on the next big thing? Or maybe which Doom 2 level is the best (Industrial Zone). Let me know.
Monday, December 19, 2016
Does "real" security matter?
As the dumpster fire that is 2016 crawls to the finish line, we had another story about a massive Yahoo breach. 1 billion user accounts had data stolen. Just to give some context here, that has to be hundreds of gigabytes at an absolute minimum. That's a crazy amount of data.
And nobody really cares.
Sure, there is some noise about all this, but in a week or two nobody will even remember. There has been a similar story about every month all year long. Can you even remember any of them? The stock market doesn't; basically nobody who has had a crazy breach has seen a long-term problem with their stock. Sure, there will be a blip where everyone panics for a few days, then things go back to normal.
So this brings us to the title of this post.
Does anyone care about real security? What I mean here is I'm going to lump things into three buckets: no security, real security, and compliance security.
No Security
This one is pretty simple. You don't do anything. You just assume things will be OK, someday they aren't, then you clean up whatever mess you find. You could call this "reactive security" if you wanted. I'm feeling grumpy though.
Real Security
This is when you have a real security team, and you spend real money on features and technology. You have proper logging, and threat models, and attack surfaces, and hardened operating systems. Your applications go through a security development process and run in a sandbox. This stuff is expensive. And hard.
Compliance Security
This is where you do whatever you have to because some regulation from somewhere says you have to. Password lengths, enabling TLS 1.2, encrypted data, the list is long. Just look at PCI if you want an example. I have no problem with this, and I think it's the future. Here is a picture of how things look today.
I don't think anyone would disagree that if you're doing the minimum compliance suggests, you will still have plenty of insecurity. The problem with real security is that you're probably not getting any ROI; it's likely a black hole you dump money into and get minimal value back (remember the bit about long-term stock prices not mattering here).
However, when we look at the sorry state of nearly all infrastructure and especially the IoT universe, it's clear that No Security is winning this race. Expecting anyone to make great leaps in security isn't going to happen. Most won't follow unless they absolutely have to. This is why compliance is the future. We have to keep nudging compliance to the right on this graph, but we have to move it slowly.
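Part of what makes compliance workable is that its requirements are checkable by a machine. Here's a purely illustrative sketch (Python; the host name is a placeholder) of the kind of test a rule like "enable TLS 1.2" turns into:

```python
import socket
import ssl

def tls_version(host, port=443):
    """Handshake with a host, refusing anything older than TLS 1.2."""
    ctx = ssl.create_default_context()
    # The compliance line in the sand: the handshake simply fails
    # against a server that can only speak TLS 1.1 or older.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print(tls_version("example.com"))  # e.g. "TLSv1.3"
```

That's the appeal: nobody has to argue about intent or effort, the check either passes or it doesn't.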
It's all about the Benjamins
As I mentioned above, security problems don't seem to cause a lot of negative financial impact. Compliance problems do. Right now there are very few instances where compliance is required, and even when it is, it's not always as strong as it could be. Good security will first have to show value (actual measurable value, not made up statistics); once we see the value, it should be mandated by regulation. Not everything should be regulated, but we need clear rules about what should need compliance, why, and especially how. I used to despise the idea of mandatory compliance around security, but at this point I think it's the only plausible solution. This problem isn't going to fix itself. If you want to make a prediction, ask yourself: is there a reason 2017 will be more secure than 2016?
Do you have thoughts on compliance? Let me know.
Monday, December 12, 2016
A security lifetime every five years
A long time ago, it wouldn’t be uncommon to have the same job at the same company for ten or twenty years. People loved their seniority, they loved their company, they loved everything staying the same. Stability was the name of the game. Why learn something new when you can retire in a few years?
Well, a long time ago, was a long time ago. Things are quite a bit different now. If you’ve been doing the same thing at the same company for more than five years, there’s probably something wrong. Of course there are always exceptions to every rule, but I bet more than 80% of the people in their jobs for more than five years aren’t exceptions. It’s easy to get too comfortable, it’s also dangerous.
Rather than spending too much time expanding on this idea, I’m going to take it and move into the security universe as that’s where I spend all my time. It’s a silly place, but it’s all I know, so it’s home. While all of IT moves fast, the last few years have been out of control for security. Most of the rules from even two years ago are different now. Things are moving at such a fast pace I’m comfortable claiming that every five years is a lifetime in the security universe.
I’m not saying you can’t work for the same company this whole time. I’m saying that if you’re doing the same thing for five years, you’re not growing. And if you’re not growing, what’s the point?
Now here’s the thing about security. If we think about the people we would consider the “leaders” (using the term loosely; there aren’t even many of those) we’ll notice something about the whole “five years” idea I mentioned. How many of them have done anything in the last five years on the level that got them where they are today? Not many.
Again, there are exceptions. I’ll point to Mudge and the CITL work. That’s great stuff. But for every Mudge I can think of more than ten who just aren’t doing interesting things. There’s nothing wrong with this, and I’m not pointing it out to diminish anyone’s past contributions to the world. I point it out because sometimes we spend more time looking at the past than we do looking at where we are today, much less where we’re heading in the future.
Sunday, December 4, 2016
Airports, Goats, Computers, and Users
Last week I had the joy of traveling through airports right after the United States Thanksgiving holiday. I don't know how many of you have ever tried to travel the week after Thanksgiving, but it's kind of crazy: there are a lot of people, way more than usual, and a significant number of them have probably never been on an airplane, or if they travel by air they don't do it very often. The joke I like to tell people is that there are folks at the airport wondering why they can't bring their goat onto the airplane. I’m not going to use this post to discuss the merits of airport security (that’s a whole different conversation); it’s really about coexisting with existing security systems.
Now on this trip I didn't see any goats. I was hoping to see something I could classify as truly bizarre, so this was a disappointment to me. There were two dogs, but they were surprisingly well behaved. However, all the madness I witnessed got me thinking about security in an environment where a substantial number of the users are woefully unaware of the security all around them. The frequent travelers know how things work; they keep everything moving smoothly, they’re aware of the security, and they make sure they stay out of trouble. It’s not about whether something makes you more or less secure, it’s about the goal of getting from the door to the plane as quickly and painlessly as possible. Many of the infrequent travelers aren’t worried about moving through the airport quickly; they’re worried about getting their stuff onto the plane. Some of this stuff shouldn’t be brought through an airport.
Now let’s think about how computer security works for most organizations. You’re not dealing with the frequent travelers, you’re dealing with the holiday horde trying to smuggle a jug of motor oil through security. It’s not that these people are bad or stupid, it’s that they don’t worry about how things work; they’re not going to be back in the airport until next Thanksgiving. In a lot of organizations the users aren’t trying to be stupid, they just don’t understand security in a lot of instances. Browsing Facebook on the work computer isn’t seen as a bad idea; it’s their version of smuggling contraband through airport security. They don’t see what it hurts, and they’re not worried about the general flow of things. If their computer gets ransomware, it’s not really their problem. We’ve pushed security off to another group nobody really likes.
What does this all mean? I’m not looking to solve this problem, it’s well known that you can’t fix problems until you understand them. I just happened to notice this trend while making my way through the airport, looking for a goat. It’s not that users are stupid, they’re not as clueless as we think either, they’re just not invested in the process. It’s not something they want to care about, it’s something preventing them from doing what they want to. Can we get them invested in the airport process?
If I had to guess, we’re never going to fix users, we have to fix the tools and environment.
Sunday, November 27, 2016
The Economics of stealing a Tesla with a phone
A few days ago there was a story about how to steal a Tesla by installing malware on the owner's phone. If you look at the big picture view of this problem it's not all that bad, but our security brains want to make a huge deal out of this. Now I'm not saying that Tesla shouldn't fix this problem, especially since it's going to be a trivial fix. What we want to think about is how all these working parts have to fit together. This is something we're not very good at in the security universe; there can be one single horrible problem, but when we paint the full picture, it's not what it seems.
Firstly, the idea of being able to take full control over a car from a phone sounds terrible. It is terrible, and when a problem like this is found, it should always be fixed. But this also isn't something that's going to affect millions (it probably won't even affect hundreds). This is the sort of problem where you have an attacker targeting you specifically. If someone wants to target you, there are a lot of things they can do; putting a rootkit on your phone to steal your car is one of the least scary things. The reality is that if you're the target of a well funded adversary, you're going to lose, period. So we can ignore that scenario.
Let's move to the car itself. A Tesla, or almost any stolen car today, doesn't have a lot of value; the reward is small compared to the risk. I suspect a Tesla has so many serial numbers embedded in the equipment that you couldn't resell any of the parts. I also bet it has enough gear on board that they can tell you where your car is with a margin of error of around three inches. Stealing such a vehicle and then trying to do something with it probably carries far more risk than any possible reward.
Now if you keep anything of value in your car, and many of us do, that could be a great opportunity for an adversary. But of course now we're back to the same point: if you have control over someone's phone, is your goal to steal something out of their car? Probably not. Additionally, if we think as an adversary, once we break into the car, even if we leave no trace, the unlocking of the doors is probably logged somewhere. An adversary on this level will want to remain very anonymous, and again, if your target has something of value it would be far less risky to just mug them.
Here is where the security world tends to fall apart from an economics perspective. We like to consider a particular problem or attack in a very narrow context. Gaining total control over a car does sound terrible, and if we only look at it in that context, it's a huge deal. If we look at the big picture, though, it's not all that bad in reality. How many security bugs and misconfigurations have we spent millions dealing with as quickly as possible when, in the big picture, they weren't that big of a deal? Security is one of those things that more often than not is dealt with on an emotional level rather than one of pure logic and economics. Science and reason lead to good decisions; emotion does not.
Leave your comments on Twitter
Sunday, November 20, 2016
Fast security is the best security
DevOps security is a bit like developing without a safety net. This is meant to be a reference to a trapeze act at the circus for those of you who have never had the joy of witnessing the heart stopping excitement of the circus trapeze. The idea is that when you watch a trapeze act with a net, you know that if something goes wrong, they just land in a net. The really exciting and scary trapeze acts have no net. If these folks fall, that's pretty much it for them. Someone pointed out to me that the current DevOps security is a bit like taking away the net.
This got me thinking about how we used to develop and do security, how we do it now, and whether the net is really gone.
First, some history
If you're a geezer, you remember the days when the developers built something and operations had to deploy it. It never worked, and both groups called each other names. Eventually they put aside their mutual hatred, worked together, and got something that mostly worked. This did provide some level of checks and balances though. Operations could ensure development wasn't doing anything too silly, just as development could check on operations. Things mostly made sense. Somehow projects still got deployed by banging rocks together.
That said though, things did move slowly, and it's not a secret that some projects failed due to structural issues after having huge sums of money spent on them. I'll never say things were better back then, anyone who claims the world was a better place isn't someone you should listen to.
The present
In the new and exciting world of DevOps who is responsible for checking on who? Development can't really blame operations anymore, they're all on the same team, sometimes it's even the same person. This would be like that time the Austrian army attacked itself. This is where the idea of the safety net being removed comes in. Who is responsible for ensuring things are mostly secure? The new answer isn't "nobody", it's "everybody".
The real power of DevOps is that the software and systems are grown, not built. This is true of security, it's now grown instead of built. Now you have ample opportunity to make good security decisions along the way. Even if you make some sort of mistake, and you will, it's trivial to fix the problem quickly without much fanfare. The way the world works today is not the way the world worked even ten years ago. If you can't move fast, you're going to fail, especially when security is involved. Fast security is the best security.
And this is really how security has to work. Security has to move fast. The days of having months to fix security problems are long gone. You have to stay on top of what's going on and get things dealt with quickly. DevOps didn't remove the security safety net, it removed the security parachute. Now you can go as fast as you want, but that also means if nobody is driving, you're going to crash into a wall.
Leave your comments on Twitter
Monday, November 14, 2016
Who cares if someone hacks my driveway camera?
I keep hearing something from people about IoT that reminds me of the old saying, if you’ve done nothing wrong, you have nothing to fear. This attitude is incredibly dangerous in the context of IoT devices (it’s dangerous in all circumstances honestly). The way I keep hearing this in the context of IoT is something like this: “I don’t care if someone hacks my video camera, it’s just showing pictures of my driveway”. The problem here isn’t what video the camera is capturing, it’s the fact that if your camera gets hacked, the attacker can do nearly anything with the device on the Internet. Remember, at this point these things are fairly powerful general purpose computers that happen to have a camera.
Let’s stick with the idea of an IoT camera being hacked, since it’s easy to believe the result of a hack will be harmless. Let’s think about a few possible problem scenarios. There are effectively infinite possibilities, which is part of what makes the problem hard to understand.
- The attacker can see the camera video
- The attacker can use the camera in a botnet
- The attacker can host illegal content
- The attacker can send spam
- The attacker can mine bitcoins
- The attacker can crack passwords
- The attacker can use it as a jump host
You get the idea. The possibilities are nearly endless, and as Crime Inc. continues to innovate, they will find new uses for these resources. Unprotected IoT devices are going to be currency in this new digital resource gold rush. The challenge the defenders face is we can’t defend against a threat that hasn’t been invented yet. It’s a tricky business really.
What happens if it’s doing something illegal?
Just because you don’t care whether your camera is being spied on doesn’t really matter. The privacy angle isn’t what’s important anymore in the context of IoT. People whose cameras were part of the botnet probably didn’t care about the privacy either; I bet a lot of them don’t even know their cameras were used as part of a massive illegal activity. I don’t expect everyone to suddenly start watching their IoT traffic for strange happenings. The whole point of this discussion is to stress that there are always many possible layers of problems when you have a device that’s not protected. It’s not just about what the device is supposed to do. At this point nearly everything that can attach to the Internet is more powerful than the biggest computers of 20 years ago. By definition these things can do literally anything.
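That said, spotting a misbehaving device isn’t deep magic; remember, these are general purpose computers, so if you can get a shell on one you can interrogate it. Here’s a toy sketch (Python with the psutil library; the allow-list addresses are made-up placeholders) of the kind of check that would catch a camera moonlighting in a botnet:

```python
import psutil  # third-party: pip install psutil

# Hypothetical allow-list: the only endpoints this camera has any
# business talking to (placeholder addresses, not real ones).
EXPECTED = {"192.0.2.10", "192.0.2.11"}

def unexpected_connections():
    """Return remote endpoints that aren't on the allow-list."""
    odd = []
    # May need root to see every process's sockets on some platforms.
    for conn in psutil.net_connections(kind="inet"):
        # conn.raddr is empty for listening sockets; skip those.
        if conn.raddr and conn.raddr.ip not in EXPECTED:
            odd.append((conn.raddr.ip, conn.raddr.port, conn.status))
    return odd

for ip, port, status in unexpected_connections():
    print(f"unexpected: {ip}:{port} ({status})")
```

A camera that’s suddenly chatting with thousands of strangers on port 23 isn’t doing camera things anymore.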
Comment on Twitter
Sunday, November 6, 2016
Free security is the only security that really works
There are certain things people want and will pay for. There are things they want and won’t. If we look at security, it’s pretty clear now that security is one of those things people want, but most won’t pay for. The insane success of Let’s Encrypt is where this thought came from. Certificates aren’t new, and they used to be really cheap (there were even free providers, but there was a time cost of jumping through hoops). Let’s Encrypt makes the time and actual cost basically zero, and now it’s deployed all over. Depending on who you ask, they’re one of the biggest CAs around now, and that took them about a year? That’s crazy.
Nobody is going to say “I don’t want security”. Only a monster would say such a thing. Now if you ask them to pay for their security, they’ll probably sneak out the back door while you’re not looking. We all have causes we think are great, but we’re not willing to pay for them. Do I believe in helping disadvantaged youth in Albania? I TOTALLY DO! Can I donate to the cause? I just remembered I left the kettle on the stove.
Currently most people and groups don’t have to do things securely. There is some incentive in certain industries, but fundamentally they don’t want to pay for anything. And let's face it, the difference between doing something and not doing something (let’s say http vs https) is going to be minimal. There are some search engine rules now that give preference to https, so there’s incentive. With a free CA, now there’s no excuse. A great way forward will be small incentives for being more secure, along with free or low cost ways to get there (email is probably next).
How can we make more security free?
Better built-in technologies work; look at stack canaries: everyone has them, almost everyone uses them. If you look at Wikipedia, it was around 2000 that major compilers started to add this technology. It took quite a fair bit of time. Phrack 49, which brought stack smashing into the conversation, was published in 1996, yet we didn’t see massive uptake of stack protections until after 2000. Can you imagine what four years is like on today’s Internet?
If we think about what seem to be the hip technologies today, a few spring to mind:
- Code scanning is currently expensive, and not well used.
- Endpoint security gets plenty of news.
- What do you mean you don’t have an SDLC! I am shocked! SHOCKED!
- Software Defined EVERYTHING!
- There are also plenty of authentication and identity and twelve factor something or other.
The list can go on nearly forever. Ask yourself this: what is the ROI on this stuff? Apart from not being able to answer, I bet some of it is negative. Why should we do something that costs more than it saves? Just having free security isn’t enough; it also has to be useful. Part of the appeal of Let’s Encrypt is that it’s really easy to use, it solves a problem, it’s very low cost, and it has high ROI. How many security technologies can we say this about? We can’t even agree on what problems some of this stuff solves.
Here’s an easy rule of thumb for things like this: if you can’t show a return of at least 10x, don’t do it. We get caught in the trap of “I have to do something” without any regard for whether it makes sense. A huge advantage of demanding measured returns is that it makes us focus on two questions that rarely get asked. The first and most important is “how much will this cost?”; we’ve all seen runaway projects. The second is “what’s my real benefit?” The second is really hard sometimes and will end up creating a lot of new questions and ideas. If you can’t measure or decide what the benefit of what you’re doing is, you probably don’t need to be doing it. A big part of being a modern agile organization is only doing what’s needed. Security ROI can help us focus on that.
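The 10x rule is just arithmetic, which is exactly what makes it useful. A quick sketch with entirely made-up numbers:

```python
def security_roi(annual_cost, loss_prevented_per_year):
    """Multiple of value returned per dollar spent (made-up inputs)."""
    return loss_prevented_per_year / annual_cost

# Hypothetical: a $50k/year tool that plausibly prevents $100k/year
# in cleanup returns 2x -- by the 10x rule, skip it.
print(security_roi(50_000, 100_000))   # 2.0
# A $5k/year control preventing $120k/year clears the bar easily.
print(security_roi(5_000, 120_000))    # 24.0
```

The division is the easy part; the honest work is estimating those two inputs, which is exactly the work the rule forces you to do.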
At the end of the day stop complaining everything is terrible (we already know it is), figure out how you can make a difference without huge cost. Shaking your fist while screaming “you’ll be sorry” isn’t a strategy.
Monday, October 31, 2016
Stop being the monkey's paw
Tonight I was handing out candy on Halloween as the children came to the door trick-or-treating, getting whatever candy I hadn't yet eaten, and I started thinking about scary stories in the security universe. Some of the things we do in security could be compared to the old fable of the cursed monkey's paw, which is one of my favorite stories.
For those who don't know this story, the quick version is essentially that there is a monkey's paw, an actual severed appendage of a monkey (it's not some sort of figurative item). It has some fingers on it that may or may not signify the number of wishes used. The paw is indestructible; the previous owner doesn’t want it, but can’t get rid of it until some unsuspecting sucker shows up. You get three wishes, or five, or whatever, depending on the version of the story being told (these old folk tales can differ greatly depending on what part of the world is telling them), and the monkey's paw gives you exactly what you asked for. The problem is that what you asked for comes with horrifying consequences. For example, an old man who had the paw asked for $200; the next day he got his $200 because his son was killed at work and the company brought him the $200 from his last paycheck. There are different variants, but the basic idea is the paw seems clever, it grants wishes, but every wish comes with terrible consequences.
This story got me thinking about security, how we ask questions, and how we answer questions. Think about this in the context of application security specifically. If someone asks the security team “does this code have a buffer overflow in it?”, the person asked for help is going to look for buffer overflows, and they may or may not notice that the code has a SQL injection problem. Or maybe it has an integer overflow or some other problem. The point is that's not what they were looking for, so the right question never got asked. You can take this a little farther: occasionally someone will ask “is my system secure?”, and the answer is definitively no. You don't even have to look at the system to answer that, so they don't even know what to ask. They are asking the monkey's paw to bring them their money; it's going to do it, but they're not going to like the consequences.
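A contrived sketch of how this plays out (Python; the functions are hypothetical): asked only about buffer overflows, a reviewer can truthfully answer “no” and walk away, monkey's-paw style, even though the code has a textbook SQL injection.

```python
def get_user(cursor, username):
    # No buffer overflow here, so the narrow question gets a happy answer.
    # But building the query with string formatting means a username like
    # "x' OR '1'='1" rewrites the query: classic SQL injection.
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cursor.fetchall()

def get_user_parameterized(cursor, username):
    # What the broader question ("is this code handling input safely?")
    # would have produced: the driver quotes the value itself.
    # (Placeholder syntax varies by driver: %s here, ? for sqlite3.)
    cursor.execute("SELECT * FROM users WHERE name = %s", (username,))
    return cursor.fetchall()
```

The code answered exactly the question it was asked. That's the curse.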
So this really makes me think about how we frame questions, since the questions we ask are super important and getting the details right is a big deal. There's also another side to asking questions, though: being the human receiving the question. You have to be rational and sane in the way you deal with the person asking. If we are the monkey's paw, only giving people the technical answer to the technical question, odds are good we aren't actually helping them.
As I sit here on this cold windy Halloween, waiting for the kids to come and take all the candy that I keep eating, it really makes me think: as security practitioners we need to be very mindful of the fact that the questions people ask us might not match the answers they actually need. It's up to us as humans, rather than monkey's paws, to interpret the intent behind the person, figure out the question they really want to ask, and then give them answers they can use, answers they need, and answers that are actually helpful.
Sunday, October 30, 2016
Security is in the same leaky boat as the sysadmins
Sysadmins used to rule the world. Anyone who's been around for more than a few years remembers the days when whatever the system administrator wanted, the system administrator got. They were the center of the business. Without them nothing would work. They were generally super smart and could quite often work magic with what they had. It was certainly a different time back then.
Now developers are king, the days of the sysadmin have waned. The systems we run workloads on are becoming a commodity, you either buy a relatively complete solution, or you just run it in the cloud. These days most anyone using technology for their business relies on developers instead of sysadmins.
But wait, what about servers in the cloud, or containers which are like special mini servers, or ... other things that sysadmins have to take care of! If you really think about it, containers and cloud are just vehicles for developers. All this new technology, all the new disruption, all the interesting things happening are about enabling developers. Containers and cloud aren't ends in themselves; they are the boats by which developers deliver their cargo. Cloud didn't win, developers won; cloud just happens to be their weapon of choice right now.
If we think about all this, the question I keep wondering is "where does security fit in?"
I think the answer is that it doesn't, it probably should, but we have to change the rules since what we call security today is an antiquated and broken idea. A substantial amount of our security ideas and methods are from the old sysadmin world. Even our application security revolves around finding individual bugs, then releasing updates for them. This new world changes all the rules.
Much of our security ideas and concepts are based on the days when sysadmins ruled the world. They were like a massive T-Rex ruling their domain, instilling fear into those beneath them. Today in security we are trying to build Jurassic Park, except there are no dinosaurs; they all went extinct. Maybe we can use horses instead, nobody will notice ... probably. Most security leaders and security conferences have been the same people saying the same things for the last ten years. If any of it worked even a little, I think we'd notice by now.
If you pay attention to the new hip ideas around development and security, you've probably heard of DevSecOps, Rugged DevOps, SecDevOps, and a few more. They may be marketed as different things, but really it should all just be called "DevOps". We're in the middle of disruptive change, and a lot of the old ideas and ways don't make sense anymore. Security is pretty firmly entrenched in 2004. Security isn't a special snowflake, it's not magic, and it shouldn't be treated like it's somehow outside the business. Security should just exist the same way electricity or internet does. If you write software, having a special security step makes as much sense as having a special testing step. You used to have testing as a step; you don't anymore because it's just part of the workflow.
I've asked the question in the past "where are all the young security people?" I think I'm starting to figure this out. There are very few because nobody wants to join an industry that is being disrupted (at least nobody smart) and let's face it, security is seen as a problem, not a solution. The only real reason it's getting attention lately is because we've done a bad job in the past so everything is on fire now. If you want to really scare someone to death, pull out the line "I'm from security and I'm here to help". You aren't really, you might think you are, but they know better.
Comment on Twitter
Sunday, October 23, 2016
Everything you know about security is wrong
If I asked everyone to tell me what security is, what we do about it, and why we do it, I wouldn't get two answers that were the same. I probably wouldn't even get two that are similar. Why is this? After recording Episode 9 of the Open Source Security Podcast I co-host, I started thinking a lot about measuring. It came up in the podcast in the context of bug bounties, which get exactly what they measure. But do they measure the right things? I don't know the answer, nor does it really matter. It's just important to keep in mind that in any system, you will get exactly what you measure.
Why do we do the things we do?
I've asked this question before, and I often get answers from people. Some are well thought out reasonable answers. Some are overly simplistic. Some are just plain silly. All of them are wrong. I'm going to go so far as to say we don't know why we do what we do in most instances. Sure, there might be compliance, with a bunch of rules, that everyone knows don't really increase security. Some of us fix security bugs so the bad guys don't exploit them (even though very few actually get exploited). Some of us harden systems using rules that probably don't stop a motivated attacker.
Are we protecting data? Are we protecting the systems? Are we protecting people? Maybe we're protecting the business. Sure, that one sounds good.
Measuring a negative
There's a reason this is so hard and weird though. It's only sort of our fault, it's what we try to measure. We are trying to measure something not happening. You cannot measure how many times an event didn't happen. It's also impossible to prove a negative.
Do you know how many car accidents you didn't get in last week? How about how many times you weren't horribly maimed in an industrial accident? How many times did you not get mugged? These questions don't even make sense, no sane person would even try to measure those things. This is basically our current security metrics.
The way we look at security today is all about the negatives. The goal is to not be hacked. The goal is to not have security bugs. Those aren't goals, those are outcomes.
What's our positive?
In order to measure something, it has to be true. We can't prove a negative; we have to prove something to measure it, so what's the "positive" we need to look for and measure? This isn't easy. I've been in this industry for a long time and I've done a lot of thinking about this. I'm not sure I'm right in my list below, but getting others to think about this is more important than being right.
As security people, we need to think about risk. Our job isn't to stop bad things, it's to understand and control risk. We cannot stop bad things from happening, the best we can hope for is to minimize damage from bad things. Right about now is where many would start talking about the NIST framework. I'm not going to. NIST is neat, but it's too big for my liking, we need something simple. I'm going to suggest you build a security score card and track it over time. The historical trends will be very important.
Security Score Card
I'm not saying this is totally correct, it's just an idea I have floating in my mind, you're welcome to declare it insane. Here's what I'm suggesting you track.
1) Number of staff
2) Number of "systems"
3) Lines of code
4) Number of security people
That's it.
Here's why though. Let's think about measuring positives. We can't measure what isn't happening, but we can measure what we have and what is happening. If you work for a healthy company, 1-3 will be increasing. What does your #4 look like? I bet in many organizations it's flat and grossly understaffed. Good staff will help deal with security problems. If you have a good leader and solid staff, a lot of security problems get dealt with. Things like the NIST framework are what happens when you have competent staff who aren't horribly overworked; you can't force a framework on a broken organization, it just breaks it worse. Every organization is different, and there is no one framework or policy that will work. The only way we tackle this stuff is by having competent, motivated staff.
The other really important thing this does is make you answer the questions. I bet a lot of organizations can't answer 2 and 3. #1 is usually pretty easy (just ask LDAP), #2 is much harder, and #3 may be impossible for some. These look like easy things to measure, and just like quantum physics, by measuring them we will change them, probably for the better.
If you have 2000 employees, 200 systems, 4 million lines of code, and 2 security people, that's clearly a disaster waiting to happen. If you have 20 security people, there may be hope. I have no idea what the proper ratios should be; if you're willing to share your ratios with me, I'd love to start collecting data. As I said, I don't have scientific proof behind this, it's just something I suspect is true.
I should probably add one more thing. What we measure not only needs to be true, it needs to be simple.
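Simple enough, in fact, that the whole scorecard fits in a few lines. A sketch (Python; the numbers are the hypothetical ones from above, and the ratios are my guess at what's worth watching):

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    staff: int
    systems: int
    lines_of_code: int
    security_people: int

    def ratios(self):
        # Track these over time; the trend matters more than any snapshot.
        per = self.security_people
        return {
            "staff per security person": self.staff / per,
            "systems per security person": self.systems / per,
            "lines of code per security person": self.lines_of_code / per,
        }

# The "disaster waiting to happen" example from above.
q4_2016 = Scorecard(staff=2000, systems=200,
                    lines_of_code=4_000_000, security_people=2)
print(q4_2016.ratios())
```

Record one of these every quarter and the historical trend I mentioned falls out for free.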
Send me your scorecard via Twitter
Why do we do the things we do?
I've asked this question before, and I often get answers from people. Some are well thought out reasonable answers. Some are overly simplistic. Some are just plain silly. All of them are wrong. I'm going to go so far as to say we don't know why we do what we do in most instances. Sure, there might be compliance, with a bunch of rules, that everyone knows don't really increase security. Some of us fix security bugs so the bad guys don't exploit them (even though very few actually get exploited). Some of us harden systems using rules that probably don't stop a motivated attacker.
Are we protecting data? Are we protecting the systems? Are we protecting people? Maybe we're protecting the business. Sure, that one sounds good.
Measuring a negative
There's a reason this is so hard and weird though. It's only sort of our fault, it's what we try to measure. We are trying to measure something not happening. You cannot measure how many times an event didn't happen. It's also impossible to prove a negative.
Do you know how many car accidents you didn't get in last week? How about how many times you weren't horribly maimed in an industrial accident? How many times did you not get mugged? These questions don't even make sense; no sane person would try to measure those things. Yet this is basically the state of our security metrics today.
The way we look at security today is all about the negatives. The goal is to not be hacked. The goal is to not have security bugs. Those aren't goals, those are outcomes.
What's our positive?
In order to measure something, it has to be true. We can't prove a negative; we have to prove something in order to measure it. So what's the "positive" we need to look for and measure? This isn't easy. I've been in this industry for a long time and I've done a lot of thinking about this. I'm not sure the list below is right, but getting others to think about this is more important than being right.
As security people, we need to think about risk. Our job isn't to stop bad things, it's to understand and control risk. We cannot stop bad things from happening, the best we can hope for is to minimize damage from bad things. Right about now is where many would start talking about the NIST framework. I'm not going to. NIST is neat, but it's too big for my liking, we need something simple. I'm going to suggest you build a security score card and track it over time. The historical trends will be very important.
Security Score Card
I'm not saying this is totally correct, it's just an idea I have floating in my mind, you're welcome to declare it insane. Here's what I'm suggesting you track.
1) Number of staff
2) Number of "systems"
3) Lines of code
4) Number of security people
That's it.
Here's why though. Let's think about measuring positives. We can't measure what isn't happening, but we can measure what we have and what is happening. If you work for a healthy company, 1-3 will be increasing. What does your #4 look like? I bet in many organizations it's flat and grossly understaffed. Good staff will help deal with security problems. If you have a good leader and solid staff, a lot of security problems get dealt with. Things like the NIST framework are what happens when you have competent staff who aren't horribly overworked; you can't force a framework on a broken organization, it just breaks it worse. Every organization is different, and there is no one framework or policy that will work everywhere. The only way we tackle this stuff is by having competent, motivated staff.
The other really important thing this does is make you answer the questions. I bet a lot of organizations can't answer #2 and #3. #1 is usually pretty easy (just ask LDAP), #2 is much harder, and #3 may be impossible for some. These look like easy things to measure, and just like quantum physics, by measuring them we will change them, probably for the better.
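If counting lines of code sounds daunting, a crude first pass needs nothing special. Here's a minimal sketch in Python, assuming a naive definition of "lines of code" (non-blank lines in source files) and a hypothetical source path; for a scorecard, a consistent measurement matters more than a precise one.

    import os

    # Naive line counter: walk a source tree and count non-blank
    # lines in files with the extensions we care about. The
    # extension list and the path are illustrative.
    EXTENSIONS = {".py", ".c", ".h", ".java", ".go", ".js"}

    def count_lines(root):
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1] not in EXTENSIONS:
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="replace") as f:
                        total += sum(1 for line in f if line.strip())
                except OSError:
                    pass  # unreadable file, skip it
        return total

    print(count_lines("/path/to/your/source"))  # hypothetical path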
If you have 2000 employees, 200 systems, 4 million lines of code, and 2 security people, that's clearly a disaster waiting to happen. If you have 20, there may be hope. I have no idea what the proper ratios should be; if you're willing to share yours with me, I'd love to start collecting data. As I said, I don't have scientific proof behind this, it's just something I suspect is true.
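To make the trend tracking concrete, here's a minimal sketch of what recording the scorecard could look like in Python. The field names and quarterly numbers are made up (Q1 is the disaster scenario above); the point is that the ratios, tracked over time, are the signal.

    from dataclasses import dataclass

    @dataclass
    class Scorecard:
        staff: int
        systems: int
        lines_of_code: int
        security_people: int

        def ratios(self):
            # How much each security person is responsible for.
            return {
                "staff_per_security": self.staff / self.security_people,
                "systems_per_security": self.systems / self.security_people,
                "loc_per_security": self.lines_of_code / self.security_people,
            }

    q1 = Scorecard(staff=2000, systems=200, lines_of_code=4_000_000, security_people=2)
    q2 = Scorecard(staff=2100, systems=220, lines_of_code=4_300_000, security_people=2)
    for quarter, card in (("Q1", q1), ("Q2", q2)):
        print(quarter, card.ratios())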
I should probably add one more thing. What we measure not only needs to be true, it needs to be simple.
Send me your scorecard via Twitter
Friday, October 21, 2016
IoT Can Never Be Fixed
This title is a bit click-baity, but it's also true, just not for the reason you think. Keep reading to see why.
If you've ever been involved in keeping a software product updated, I mean from the development side of things, you know it's not a simple task. It's nearly impossible really. The biggest problem is that even after you've tested it to death and gone out of your way to ensure the update is as small as possible, things break. Something always breaks.
If you're using a typical computer, when something breaks, you sit down in front of it, type away on the keyboard, and you fix the problem. More often than not you just roll back the update and things go back to the way they used to be.
IoT is a totally different story. If you install an update and something goes wrong, you now have a very expensive paperweight. It's usually very difficult to fix an IoT device when something goes wrong; many are installed in less than ideal places, and some may even be dangerous to get near.
This is why very few things do automatic updates. If you have automatic updates configured, things can just stop working one day. You'll probably have no idea it's coming, one day you wake up and your camera is bricked. Of course it's just as likely things won't break until it's something super important, we all know how Murphy's Law works out.
This doesn't even take into account the problems of securing updates, vendors going out of business, hardware going end-of-life, and devices that fail to update for one reason or another.
The law of truly large numbers
Let's assume there are 2 million of a given device out there, and that automatic updates are enabled. If we guess that 10% won't get updates for some reason or other, that means around 200,000 vulnerable devices miss the first round of updates. That's one product. With IoT the law of truly large numbers kicks in. Crazy things will happen because of this.
The law of truly large numbers tells us that if you have a large enough sample set, every crazy thing that can happen, will happen. Because of this law, the IoT can never be secured.
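A back-of-the-envelope way to see the law in action, assuming (unrealistically) that devices fail independently: even events with tiny per-device probabilities become near certainties at fleet scale.

    # If each device independently hits some rare failure with
    # probability p, the chance that at least one device in a fleet
    # of n hits it is 1 - (1 - p)**n.
    def p_at_least_one(n, p):
        return 1 - (1 - p) ** n

    print(p_at_least_one(100, 1e-4))        # ~1%: rare for a small fleet
    print(p_at_least_one(2_000_000, 1e-4))  # ~1.0: a certainty at scale
    print(int(2_000_000 * 0.10))            # the 200,000 stragglers from above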
Now, all this considered, that's no reason to lose hope. It just means we have to take this into consideration. Today we don't build systems that can handle a large number of crazy events. Once we take this into account we can start to design systems that are robust against these problems. The way we develop these systems and products will need a fundamental change; the way we do things today doesn't work in a large-numbers situation. It's not a matter of maybe fixing this. It has to be fixed, someone will fix it, and the rewards will be substantial.
Comment on Twitter
Monday, October 17, 2016
Can I interest you in talking about Security?
I had a discussion last week with some fellow security folks about how we can discuss security with normal people. If you pay attention to what's going on, you know the security people and the non-security people don't really communicate well. We eventually made our way to comparing what we do to door-to-door religious groups: they're rarely seen in a positive light, are usually annoying, and only seem to show up at the most inconvenient times. This got me thinking that we probably have more in common with them than we want to admit, but there are also some lessons for us.
Firstly, nobody wants to talk to either group. The reasons are basically the same. People are already mostly happy with whatever choices they've made and don't need someone showing up to mess with their plans. Do you enjoy being told you're wrong? Even if you are wrong, you don't want someone telling you this. At best you want to figure it out yourself but in reality you don't care and will keep doing whatever you want. It's part of being an irrational human. I'm right, you're wrong, everything else is just pointless details.
Let's assume you are certain that the message you have is really important. If you're not telling people something useful, you're wasting their time, and it doesn't matter how important a message is, the audience has to want to hear it. Nobody likes having their time wasted. In this crazy election season, how often are you willing to stay on the line when a pollster calls? You know it's just a big waste of time.
Most importantly though, you can't act pretentious. If you think you're better than whoever you're talking to, even if you're trying hard not to show it, they'll know. Humans are amazing at understanding what another person is thinking by how they act; it's how we managed to survive this long. Our monkey brains are really good at handling social interactions without us even knowing. How often do you talk to someone who is acting superior to you, and all you want to do is stop talking to them?
Now what?
It's really easy to point all this stuff out; most of us probably know it already. So what can we start doing differently? In the context of door-to-door selling, it's far more powerful when someone comes to you. If they come to you, they want to learn and understand. So while there isn't anything overly new and exciting here, the thing that's best for us to remember today is to just be available. If you're approachable, you will be approached, and when that happens, make sure you don't drive your audience away. If someone wants to talk to you about security, let them. And be kind, understanding, and sympathetic.
Monday, October 10, 2016
Only trust food delivered by a zebra
If you're a security person you're probably used to normal people not listening to you. Sometimes we know why they don't listen, but often the users get blamed for being stupid or stubborn or something else that justifies their behavior. During a conversation the other day, someone noted that some of our advice could be compared to telling someone they should only trust food that has been delivered to them by a zebra.
It's meant to sound silly, because it is silly.
If you tell someone they should only trust food delivered by a zebra, they might nod and agree, and some will tell you it's silly, but fundamentally nobody is going to listen. They won't listen because the advice is completely impractical. If you give impractical advice, your advice gets ignored. This gets tricky, though, because what I call impractical advice you may not. Things get complicated, especially when a difficult topic is being discussed. It's even harder when you have a lot of people who are self-proclaimed experts but in reality don't know very much.
This is basically the story of security though. We give advice that we think is practical; normal people hear advice that makes no sense, makes their life worse, and is probably something they don't even want to do. They have two choices: tell you they think your advice is bad, or just nod and agree while doing whatever they want. The latter is much less work. If someone tells you the advice you just gave them is bad, you're not going to think about how to give better advice; you're going to spend a lot of time convincing them why you're right and they're wrong. Smart people don't argue, they just nod and agree.
The solution to this problem is very simple to explain, but will be very hard to do. It's not uncommon for me to talk about listening as a very important thing we should be doing more of. If listening was easy, or solved as many things as I claim it does, we wouldn't have any problems left. While it is super important that we listen to those we must help, it's only a small part of what we have to do. We must learn to be tactful first. You can't listen to people who won't talk to you, and if you show up demanding zebra food, nobody will ever tell you anything useful. You get branded as a kook and that pretty much ends everything.
Stop demanding zebra food.
Comment on Twitter
Monday, October 3, 2016
Impossible is impossible!
Sometimes when you plan security work, the expectation is that whatever you're doing will make some outcome (probably something bad) impossible. The goal of the security group is to keep the bad guys out, or keep the data in, or keep the servers patched, or find all the security bugs in the code. One way to look at this is that security is often in the business of preventing things from happening, such as making data exfiltration impossible. I'm here to tell you it's impossible to make something impossible.
As you think about that statement for a bit, let me explain what's happening here, and how we're going to tie this back to security, business needs, and some common sense. We've all heard of the 80/20 rule; one form of it is that the last 20% of the features are 80% of the cost. It's a bit more nuanced than that if you really think about it. If your goal is impossible, it would be more accurate to say the last 1% of the features are 2000% of the cost. What's really being described is a cost curve that starts out gentle and shoots upward as you approach completeness.
You can't make it to 100%, no matter how much you spend. This of course means there's no point in trying to reach it, but more importantly you have to realize you can't get there. If you're smart you'll put your feature set somewhere around 80%; anything above that is probably a waste of money. If you're really clever there is some optimal place to be investing resources, and that's where you really want to be. 80% is probably a solid first pass though, and it's an easy number to remember.
The important thing to remember is that 100% is impossible. The curve never reaches 100%. Ever.
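The original graph is gone, but any function with a vertical asymptote at 100% tells the same story. Here's one hypothetical curve, cost proportional to p / (1 - p), sketched in Python; the exact shape is an assumption, the asymptote is the point.

    # Relative cost of reaching a coverage level p. Gentle at first,
    # unbounded as p approaches 1.0 (i.e., 100% is infinitely expensive).
    def relative_cost(p):
        return p / (1.0 - p)

    for p in (0.50, 0.80, 0.99, 0.999):
        print(f"{p:.1%} coverage -> relative cost {relative_cost(p):,.0f}")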
The thinking behind this came about while I was discussing DRM with someone. No matter what sort of DRM gets built, someone will break it. DRM is built by a person which means, by definition, a smarter person can break it. It can't be 100%, in some cases it's not even 80%. But when a lot of people or groups think about DRM, the goal is to make acquiring the movie or music or whatever 100% impossible. They even go so far as to play the cat and mouse game constantly. Every time a researcher manages to break the DRM, they fix it, the researcher breaks it, they fix it, continue this forever.
Here's the question about that curve, though: where is the break-even point? Every project has a point of diminishing returns. A lot of security projects forget that if the cost of what you're doing is greater than the cost of the thing you're trying to protect, you're wasting resources. Never forget that there is such a thing as negative value. Doing things that don't matter often creates negative value.
This is easiest to explain in the context of ransomware. If you're spending $2000 to protect yourself from a ransomware incident that will cost $300, that's a bad investment. As crime inc. continues to evolve I imagine they will keep a lot of this in mind; if they can keep their damage low, there won't be a ton of incentive for security spending, which helps them grow their business. That's a topic for another day though.
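The arithmetic behind that judgment is just expected loss versus spend. A minimal sketch with hypothetical numbers; real risk math has to estimate incident frequency, which is the hard part.

    # A control is worth buying only if it costs less than the loss
    # it prevents over the same period.
    def worth_it(control_cost, incident_cost, incidents_per_year):
        expected_loss = incident_cost * incidents_per_year
        return control_cost < expected_loss

    print(worth_it(2000, 300, 1))   # False: $2000 to dodge $300 of loss
    print(worth_it(2000, 300, 10))  # True: ten incidents a year changes the math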
The summary of all this is that perfect security doesn't exist. It might never exist (never say never though). You have to accept good enough security. And more often than not, good enough is close enough to perfect that it gets the job done.
Comment on Twitter
Monday, September 26, 2016
Who left all this fire everywhere?
If you're paying attention, you saw the news about Yahoo's breach. Five hundred million accounts. That's a whole lot of data if you think about it. But here's the thing. If you're a security person, are you surprised by this? If you are, you've not been paying attention.
It's pretty well accepted that there are two types of large infrastructures. Those who know they've been hacked, and those who don't yet know they've been hacked. Any group as large as Yahoo probably has more attackers inside their infrastructure than anyone really wants to think about. This is certainly true of every single large infrastructure and cloud provider and consumer out there. Think about that for a little bit. If you're part of a large infrastructure, you have threat actors inside your network right now, probably more than you think.
There are two really important things to think about.
Firstly, if you have any sort of important data, and it's not well protected, odds are very high that it's already left your network. Remember that not every hack gets leaked in public; sometimes you'll never find out. On that note, if anyone has data on what percentage of compromises get publicly leaked, I'd love to see it.
The most important thing is how we need to build infrastructure with a security mindset. This is a place public cloud actually has an advantage. If you have a deployment in a public cloud, you're naturally going to be less trusting of the machines than you would be if they were in racks you can see. Neither is really any safer; you just trust one of them less, which results in a more secure infrastructure. Gone are the days where having a nice firewall is all the security you need.
Now every architect should assume whatever they're building has bad actors on the network and in the machines. If you keep this in mind, it really changes how you do things. Storing lots of sensitive data in the same place isn't wise; break things apart when you can. Make sure data is encrypted as much as possible. Plan for failure: have you run an exercise where you assume the worst, then decide what you do next? This is the new reality we have to exist in. It'll take time to catch up of course, but there's not really a choice. This is one of those change-or-die situations. Nobody can afford to ignore the problems around leaking sensitive data for much longer. The times, they are a-changin'.
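On the "encrypted as much as possible" point, here's a minimal sketch of encrypting data at rest using the Python cryptography library's Fernet recipe. This is an illustration, not a design: in practice the key would live in a KMS or HSM, not in a variable next to the data.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in real life: fetched from a KMS, never stored with the data
    f = Fernet(key)

    token = f.encrypt(b"customer record goes here")  # ciphertext safe to store
    print(token)
    print(f.decrypt(token))       # the original bytes come back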
Leave your comments on Twitter: @joshbressers
Tuesday, September 20, 2016
Is dialup still an option?
TL;DR - No.
Here's why.
I was talking with my Open Source Security Podcast co-host Kurt Seifried about what it would be like to access the modern Internet using dialup. So I decided to give this a try. My first thought was to find a modem, but after looking into this, it isn't really an option anymore.
The setup
- No Modem
- Fedora 24 VM
- Firefox as packaged with Fedora 24
- Use the firewall via wondershaper to control the network speed
- "App Telemetry" firefox plugin to time the site load time
I know it's not perfect, but it's probably close enough to get a feel for what's going on. I understand this doesn't exactly recreate a modem experience with details like compression, latency, and someone picking up the phone during a download. There was nothing worse than having that 1 megabyte download at 95% when someone decided they needed to make a phone call. Call waiting was also a terrible plague.
If you're too young to understand any of this, be thankful. Anyone who looks at this time with nostalgia is pretty clearly delusional.
I started testing at a 1024 Kb connection and halved my way down to 56 (instead of 64). This seemed like a nice way to get a feel for how these sites react as your speed shifts down.
Baseline
I picked the most popular English-language sites listed on the Alexa top 100. I added lwn.net because I like them, and my kids had me add Twitch. My home Internet connection is 50 Mb down, 5 Mb up. In general all these sites load in less than 5 seconds. The numbers represent the site being fully loaded; most web browsers show something pretty quickly, even if the page is still loading, but for the purpose of this test the numbers are how long it takes a site to fully load. I also show only 4 samples because, as you'll see later on, some of these sites took a really, really long time to load, so four was as much suffering as I could endure. You're going to want to start paying attention to Amazon, something really clever is going to happen, and it's already noticeable in the baseline numbers. Also of note is how consistent bing.com is: while not the fastest site, it remained extremely consistent through the entire test. Perhaps someday I'll do this again with extra automation so I don't have to be so involved.
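For anyone wanting to repeat this with less suffering, a sketch of that automation might look like the following, assuming Python with selenium and geckodriver installed; it drives Firefox and reads the browser's Navigation Timing numbers instead of a plugin. The site list is just an example.

    from selenium import webdriver

    SITES = ["https://www.google.com", "https://lwn.net"]  # extend as needed

    driver = webdriver.Firefox()
    for url in SITES:
        driver.get(url)  # blocks until the page's load event fires
        millis = driver.execute_script(
            "const t = window.performance.timing;"
            "return t.loadEventEnd - t.navigationStart;"
        )
        print(f"{url}: {millis / 1000:.1f}s")
    driver.quit()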
1024 Kb/s
Things really started to go downhill at this point. Anyone who claims a 1 megabit connection is broadband has probably never tried to use such a connection. In general, though, most of the sites were still usable, by a very narrow definition of the word.
512 Kb/s
256 Kb/s
Here is where you can really see what Amazon is doing. They clearly have some sort of client-side magic happening to ensure an acceptable response. For the rest of my testing I saw this behavior: a slow first load, then things were much, much faster. Waiting for sites to load at this speed was really painful, and it's only going to get worse from here. 15 seconds doesn't sound horrible, but it really is a long time to wait.
128 Kb/s
Things are not good at 128 Kb/s. Wikipedia looks empty, but it was still loading at the same speed as our first test. I imagine my lack of an ad-enhanced experience with them helps keep it so speedy.
56 Kb/s
Here is the real data you're waiting for. This is where I set the speed to 56K down, 48K up, which is the ideal speed of a 56K modem. I doubt most of us got that speed very often.
As you can probably see, Twitch takes an extremely long time to load. This should surprise nobody as it's a site that streams video, by definition it's expected you have a fast connection. Here is the graph again with Twitch removed.
The Yahoo column is empty because I couldn't get Yahoo to load. It timed out every single time I tried. Wikipedia looks empty, but it still loaded at 0.3 seconds. After thinking about this it does make sense. There are Wikipedia users who are on dialup in some countries. They have to keep it lean. Amazon still has a slow first load, then nice and speedy (for some definition of speedy) after that. I tried to load a youtube video to see if it would work. After about 10 minutes of nothing happening I gave up.
Typical tasks
I also tried to perform a few tasks I would consider "expected" by someone using the Internet. For example, from the time I typed in gmail.com until I could read a mail message took about 600 seconds, and I did let every page load completely before clicking or typing on it. Once I had it loaded, and the AJAX interface timed out and told me to switch to HTML mode, it was mostly usable. It was only about 30 seconds to load a message (including images) and 0.2 seconds to return to the inbox.
Logging into Facebook took about 200 seconds. It was basically unusable once it loaded though. Nothing new loaded, it loads quite a few images though, so this makes sense. These things aren't exactly "web optimized" anymore. If you know someone on dialup, don't expect them to be using Facebook.
cnn.com took 800 seconds. Reddit's front page was 750 seconds. Google News was only 33 seconds. The newspaper is probably a better choice if you have dialup.
I finally tried to run a "yum update" in Fedora to see if updating the system was something you could leave running overnight. It's not. After about 4 hours of just downloading repo metadata I gave up. There's no way you can plausibly update a system over dialup. If you're on dialup, the timeouts will probably keep you from getting pwnt better than updates will.
Another problem you hit with a modern system like this is it tries to download things automatically in the background. More than once I had to kill some background tasks that basically ruined my connection. Most system designers today assume everyone has a nice Internet connection so they can do whatever they want in the background. That's clearly a problem when you're running at a speed this slow.
Conclusion
Is the Internet usable on dialup in 2016? No. You can't even pretend it's maybe usable. It would pretty much suck rocks to use the Internet on dialup today. I'm sure there are some people doing it; I feel bad for them. It's clear we've hit a place where broadband is expected, and honestly, you need fast broadband; even 1 megabit isn't enough anymore if you want a decent experience. The definition of broadband in the US is now 25 Mb down, 3 Mb up. Anyone who disagrees with that should spend a day at 56K.
I know this wasn't the most scientific study ever done, I would welcome something more rigorous. If you have any questions or ideas hit me up on Twitter: @joshbressers
Sunday, September 18, 2016
Why do we do security?
I had a discussion last week that ended with this question: "Why do we do security?" There wasn't a great answer. I guess I sort of knew this already, but it seems too obvious a question not to have an answer. Even as I think about it I can't come up with a simple one. It's probably part of the problems you see in infosec.
The purpose of security isn't just to be "secure", it's to manage risk in some meaningful way. In the real world this is usually pretty easy for us to understand. You have physical things, you want to keep them from getting broken, stolen, lost, pick something. It usually makes some sort of sense.
It would be really easy to use banks as my example here, after all they have a lot of something everyone wants, so instead let's use cattle, that will be more fun. Cows are worth quite a lot of money actually. Anyone who owns cows knows you need to protect them in some way. In some environments you want to keep your cows inside a pen, in others you let them roam free. If they roam free the people living near the cows need to protect themselves actually (barbed wire wasn't invented to keep cows in, it was used to keep them out). This is something we can understand. Some environments are very low risk, you can let your cattle roam where they want. Some are high risk, so you keep them in a pen. I eagerly await the cow related mails this will produce because of my gross over-simplification of what is actually a very complex and nuanced problem.
So now we come to the question: what are you protecting? If you're a security person, what are you really trying to protect? You can't protect everything; there's no point in protecting everything. If you try to protect everything you end up protecting nothing. You need to protect the things you have that are not only high value, but also at high risk of being attacked or stolen. That priceless statue in the pond outside that weighs four tons is high value, but nobody is stealing it.
Maybe this is why it's hard to get security taken seriously sometimes. If you don't know what you're protecting, you can't explain why you're important. The result is generally the security guy storming out screaming "you'll be sorry". They probably won't. If we can't easily identify what our risk is and why we care about it, we can't possibly justify what we do.
There are a lot of frameworks that can help us understand how we should be protecting our security assets, but they don't do a great job of helping identify what those assets really are. I don't think this is a bad thing; I think it's just part of maturing the industry. We all have finite budgets, and if we protect things that don't need protecting we are literally throwing money away. So this raises the question: what should we be protecting?
I'm not sure we can easily answer this today. It's harder than it sounds. We could say we need to protect the things that, if lost tomorrow, would prevent the business from functioning. That's not wrong, but electricity and water fall into that category, and if you tried to have an "electricity security program" at most organizations you'd be looking for a new job by the end of the day. We could say that customer data is the most important asset, which it might be, but what are you protecting it from? Is it enough to have a good backup? Do you need a fail-over data center? Will an IDS help protect the data? Do we want to protect integrity, or is our primary fear exfiltration? Things can get out of hand pretty quickly.
I suspect there may be some value in looking at these questions through the lens of accounting. Accountants spend a lot of time determining assets and values. I've not yet looked into this, but I think my next project will be understanding how the business deals with assets, everything from determining value to understanding loss. There is science here already; it would be silly for us to invent our own.
Leave your comments on Twitter: @joshbressers
Monday, September 12, 2016
On Experts
Are you an expert? Do you know an expert? Do you want to be an expert?
This came up for me the other day while having a discussion with a self-proclaimed expert. I'm not going to claim I'm an expert at anything, but if you tell me all about how good you are, I'm not going to take it at face value. I'm going to demand some proof. "Trust me" isn't proof.
There are a rather large number of people who think they are experts; some think they're experts at everything. Nobody is an expert at everything. People who claim to have done everything should be viewed with great suspicion. Everyone can be an expert at something, though.
One of the challenges we always face is trying to figure out who is actually an expert, and who only thinks they are. There are plenty of people who sound very impressive, but when they have to deal with an actual expert, things fall apart pretty quickly. They can get you into trouble if you're expecting expert advice. Especially in areas like security, bad advice can be worse than no advice.
The simple answer is to look at their public contributions. Someone who has ZERO public contributions is not an expert in anything. Even if you're working for a secretive organization, you're going to leave a footprint somewhere. No footprint means you should seriously question a person's expertise. Becoming an expert leaves a long, crazy trail behind whoever gets there. In the new and exciting world of open source and social media there is no excuse for not being able to show off your work (unless, of course, you don't have anything to show off).
If you think you're an expert, or you want to be an expert, start doing things in the open. Write code (if you don't have a GitHub account, go get one). Write blog posts, answer questions, go to meetups. There are so many opportunities it's not even funny. Just because you think you're smart doesn't mean you are; go out and prove it.
Tuesday, September 6, 2016
You can't weigh risk if you don't know what you don't know
There is an old saying we've all heard at some point, often attributed to Donald Rumsfeld:
"There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."
If any of us have ever been in a planning meeting, a variant of this has no doubt come up at some point. It came up for me last week, and every time I hear it I think about all the things we don't know we don't know. If you're not familiar with the concept, it works a bit like this: I know I don't know how to drive a boat, but because I know I don't know this, I could learn. If you know you lack certain knowledge, you can find a way to acquire it. If you don't know what you don't know, there is nothing you can do about it. The future is often an unknown unknown; there is nothing we can do about the future in many instances, you just have to wait until it becomes a known, and hope it won't be anything too horrible. There can also be blindness when you think you know something, but you really don't. This is when people tend to stop listening to the actual experts, because they think they are an expert.
This ties back into conversations about risk and how we deal with it.
If there is something you don't know you don't know, by definition you can't weigh the possible risk against whatever it is you are (or aren't) doing. A great example here is trying to understand your infrastructure. If you don't know what you have, you don't know which machines are patched, and you're not sure who is running what software, you have a lot of unknowns. It's probably safe to say that at some future date there will be a grand explosion when everything starts to fall apart. It's also probably safe to say that if you have infrastructure like this, you don't understand the pile of dynamite you're sitting on.
Measuring risk can be like trying to take a picture of an invisible man. Do you know where your risk is? Do you know what it should look like? How big is it? Is it wearing a hat? There are so many things to keep track of when we try to understand risk. More people are afraid of planes than cars, but flying is orders of magnitude safer. Humans are really bad at risk. We think we understand something (or think it's a known, or a known unknown). Often we're actually mistaken.
How do we deal with the unknown unknowns in a context like this? We could talk about being agile or quick or adaptive, whatever you want. But at the end of the day what's going to save you is your experts. Understand them; know where you are strong and weak. Someday the unknowns become knowns, usually with a violent explosion. To some of your experts these risks are already known; you may just have to listen.
It's also important to have multiple experts. If you only have one, they could believe they're smarter than they are. This is where things can get tricky. How can we decide who is actually an expert and who thinks they're an expert? This is a whole long complex topic by itself which I'll write about someday.
Anyway, on the topic of risk and unknowns. There will always be unknown unknowns. Even if you have the smartest experts in the world, it's going to happen. Just make sure your unknown unknowns are worth it. There's nothing worse than not knowing something you should.
Monday, August 29, 2016
How do we explain email to an "expert"?
This has been a pretty wild week, more wild than usual I think we can all agree. The topic I found the most interesting wasn't about one of the countless 0day flaws, it was a story from Slate titled: In Praise of the Private Email Server
The TL;DR says running your own email server is a great idea. Almost everyone came out proclaiming it a terrible idea. I agree it's a terrible idea, but this also got me thinking. How do you explain this to someone who doesn't really understand what's going on?
There are three primary groups of people.
1) People who know they know nothing
2) People who think they're experts
3) People who are actually experts
If I had to guess, most of group #3 know running your own email server is pretty dangerous. Group #1 is probably happy to let someone else do it. Group #2 is the dangerous one, probably the largest, and the group that most needs to understand what's going on.
These ideas apply to a lot of areas; feel free to substitute the term "security", "cloud", "doughnuts", or "farming" for email. You'll figure it out with a little work.
So anyway.
A long time ago, if you wanted email you basically had to belong to an organization that ran an email server. Something like a university or maybe a huge company. Getting a machine on the Internet was a pretty big deal. Hosting email was even bigger. I could say "by definition this meant if you were running a machine on the Internet you were an expert", but I suspect that wasn't true, we just like to remember the past as being more awesome than it was.
Today anyone can spin up a machine in a few seconds. It's pretty cool but it also means literally anyone can run an email server. If you run a server for you and a few other people, it's unlikely anything terrible will happen. You'll probably get pwnt someday, you might notice, but the world won't end. How do we convince this group that just because you can, doesn't mean you should? The short answer is you can't. I actually wrote about this a little bit last year.
So if we can't convince them, what do we do? We get them to learn. If you've ever heard of the Dunning-Kruger effect (I talk about it constantly), you understand the problem is generally a lack of knowledge.
You can't convince experts of anything, especially experts that aren't really experts. What we can do though is encourage them to learn. If we have someone we know is on the peak of that curve, if they learn just a little bit more, they're going to fall back to earth.
So I can say running your own email server is a terrible idea. I can say it all day, and most people don't care what I think. So here's my challenge. If you run your own email server, start reading email-related RFCs; learn about things like spam, blacklisting, greylisting, and SPF. Read about SMTPS and learn how certificates work. Learn how to manage keys, and learn about securing your clients with multi-factor auth. Read about how to keep mail secure while on disk. There are literally more topics than one could read in a lifetime. If you're an expert and you don't know what one of those things is, go learn it. Learn them all. Then you'll understand there are no experts.
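If SPF is on that reading list, it costs nothing to look at real records while you read. A tiny sketch using the dnspython library (pip install dnspython); the domain is just an example, and SPF lives in ordinary TXT records.

    import dns.resolver

    # SPF is published as a TXT record starting with "v=spf1".
    answers = dns.resolver.resolve("example.com", "TXT")
    for rdata in answers:
        text = b"".join(rdata.strings).decode()
        if text.startswith("v=spf1"):
            print(text)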
Let me know how wrong I am: @joshbressers
Sunday, August 21, 2016
The cost of mentoring, or why we need heroes
Earlier this week I had a chat with David A. Wheeler about mentoring. The conversation was fascinating and covered many things, but the topic of mentoring really got me thinking. David pointed out that nobody will mentor if they're not getting paid. My first thought was that it couldn't be true! But upon reflection, I'm pretty sure it is.
I can't think of anyone I mentored where a paycheck wasn't involved. There are people in the community I've given advice to, sometimes for an extended period of time, but I would hesitate to claim I was a mentor. Now, I think just equating this to a paycheck would be inaccurate. There are plenty of mentors in other organizations who aren't necessarily getting a paycheck, but I would say they're getting paid in some sense of the word. If you're working with at-risk youth, for example, you may not get paid money, but you do get the satisfaction of knowing you're making a difference in someone's life. If you mentor kids as part of a sports team, you're doing it because you're getting value out of the relationship. If you're not getting value, you're going to quit.
So this brings me to the idea of mentoring in the community.
The whole conversation started because of some talk of mentoring on Twitter, but now I suspect this isn't something that would work quite like we think. The basic idea would be that you have new young people who are looking for someone to help them cut their teeth. Some of these relationships could work out, but probably only when you're talking about a really gifted new person and a very patient mentor. If you've ever helped a new person, you know how terribly annoying they can become, especially when they start to peak on the Dunning-Kruger graph. If I don't have a great reason to stick around, I'm almost certainly going to bail out. So the real question is: can a mentoring program like this work? Will it ever be possible to have a collection of community mentors helping a collection of new people?
Let's assume the answer is no. I think the current evidence somewhat backs this up. There aren't a lot of young people getting into things like security and open source in general. We all like to think we got where we are through brilliance and hard work, but we all probably had someone who helped us out. I can't speak for everyone, but I also had some security heroes back in the day: groups like the l0pht, Cult of the Dead Cow, Legion of Doom, and 2600, people like Mitnick, as well as a handful of local folks. Who are the new heroes?
Do it for the heroes!
We may never have security heroes like we did. It's become a proper industry, and I don't think many mature industries have new and exciting heroes. We know who Chuck Yeager is, but I bet nobody could name five test pilots working today. That's OK though. You know what happens when there is a solid body of knowledge that needs to be moved from the old to the young? You go to a university. That's right, our future rests with the universities.
Of course it's really easy to say this is the future; making it happen will be a whole different story. I don't have any idea where we start, but I imagine people like David Wheeler have ideas. All I do know is that if nothing changes, we're not going to like what happens.
Also, if you're part of an open source project, get your badge!
If you have thoughts or ideas, let me know: @joshbressers
Monday, August 15, 2016
Can't Trust This!
Last week saw a really interesting bug in TCP come to light. CVE-2016-5696 describes an issue in the way Linux implements the challenge ACKs defined in RFC 5961. The issue itself is really clever and interesting. It's not exactly new, but since the research was presented at USENIX, it suddenly got more attention from the press.
The researchers showed themselves injecting data into a standard HTTP connection, which is easy to understand and terrifying to most people. Generally speaking, we operate in a world where TCP connections are mostly trustworthy. That stops being true if there's a man in the middle, but with this bug you don't need a MITM if you're using a public network, which is horrifying.
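For what it's worth, the stopgap advice floating around at the time was to crank the net.ipv4.tcp_challenge_ack_limit sysctl so high that the shared counter becomes useless as a side channel. Here's a minimal sketch of checking that knob, assuming a Linux machine of that era (patched kernels changed this behavior, so the file may not even exist on yours):

    from pathlib import Path

    KNOB = Path("/proc/sys/net/ipv4/tcp_challenge_ack_limit")
    RECOMMENDED = 999999999  # effectively unlimited, per the published advice

    def challenge_ack_limit():
        """Return the current challenge ACK limit, or None if the knob is gone."""
        try:
            return int(KNOB.read_text())
        except FileNotFoundError:
            return None  # newer kernels removed or randomized the counter

    if __name__ == "__main__":
        limit = challenge_ack_limit()
        if limit is None:
            print("sysctl not present; this kernel likely handles it differently")
        elif limit < RECOMMENDED:
            print("limit is %d; consider raising it with sysctl -w" % limit)
        else:
            print("limit already raised")

That's a band-aid, not a fix. The real fix is a patched kernel, and the real lesson is what comes next.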
The real story isn't the flaw, though. The flaw is great research and quite clever, but it just highlights something many of us have known for a very long time: you shouldn't trust the network.
Not so long ago the general thinking was that the public Internet wasn't very trustworthy, but it all worked well enough. TLS (SSL back then) was created to provide some level of trust between two endpoints, and that seemed good enough. Most traffic still passed over the network unencrypted though. There were always grumblings about coffee shop attacks or nation-state-style man-in-the-middle attacks, but practically speaking nobody really took them seriously.
The world is different now. There is no more network perimeter. It's well accepted that you can't trust the things inside your network any more than you can trust the things outside it. Attacks like this are going to keep happening. The network keeps getting more complex, which means the number of security problems keeps growing. IPv6 will solve the problem of running out of IP addresses while adding a ton of new security problems in the process. Just wait for researchers to start taking a hard look at IPv6.
The joke is "there is no cloud, just someone else's computer." Well, there's also no network; it's someone else's network, and it's a network you can't trust. You know you can't trust your own network because it's grown to the point it's probably self-aware. Now you expect to trust the network of a cloud provider that is doing things a few thousand times more complex than you are? All the cloud infrastructures are held together with tape and string too; their networks aren't magic, they just have really, really good paint.
So what's the point of all this rambling about how we can't trust any networks? The point is you can't trust the network, no matter what you're told and no matter what's going on. You need to worry about what's happening on the network. You also need to think about the machines, but that's a story for another day. The right way to deal with your data is to ask yourself, "what happens if someone can see this data on the wire?" Not all data is super important; some of it you don't have to protect. But some of your data must be protected at all times, and that's the data that needs something like endpoint network encryption. I suspect that if everyone asked this question at least once during development and deployment, it would solve a lot of problems.
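To make that concrete, here's roughly what the cheapest version of protecting data on the wire looks like: a minimal TLS client sketch using only Python's standard library, with certificate and hostname validation left on. (example.com stands in for whatever endpoint you actually care about.)

    import socket
    import ssl

    def fetch_over_tls(host, port=443):
        """Open a TCP connection, upgrade it to TLS, and send a trivial request."""
        context = ssl.create_default_context()  # validates certs and hostnames
        with socket.create_connection((host, port), timeout=10) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                print("negotiated:", tls.version(), tls.cipher()[0])
                tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
                return tls.recv(4096)

    if __name__ == "__main__":
        print(fetch_over_tls("example.com").decode(errors="replace"))

The point isn't the dozen lines of code; it's that any data an attacker shouldn't see on the wire deserves at least this much effort.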