Monday, November 30, 2015

Where is the physical trust boundary?

There's a story of a toothbrush security advisory making the rounds.

This advisory is pretty funny, but it matters. The actual issue with the toothbrush isn't a huge deal; an attacker isn't going to do anything exciting with it. What's interesting is that we're at the start of a wave of problems just like this one.

Today some engineers built a clever toothbrush. Tomorrow they're going to build new things, different things. Security will matter for some of them. It won't matter for most of them.

Boundaries of trust


Today when we try to decide if something is a security issue we like to ask the question "Does this cross a trust boundary?" If it does, it's probably a security issue. If no trust boundary is crossed, it's probably not an issue. There are of course lots of corner cases and nuance, but we can generally apply the rule.

Think of it this way. If a user can delete their own files, that's not crossing a trust boundary, that's just doing something silly. If a user can delete someone else's files, that's not good.
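To make that concrete, here's a minimal sketch of the check that keeps the operation inside the boundary. It's hypothetical code, not from any particular product, and it assumes a Unix-style system where file ownership is tracked by uid:

    import os
    import sys

    def delete_file(path):
        # Deleting a file you own stays inside the trust boundary.
        # Deleting a file someone else owns crosses it, so refuse.
        owner = os.stat(path).st_uid
        if owner != os.getuid():
            raise PermissionError("%s belongs to uid %d, not to you" % (path, owner))
        os.remove(path)

    if __name__ == "__main__":
        delete_file(sys.argv[1])

In real life the operating system does this check for you; the point is only that the ownership comparison is the boundary, and a bug that skips it is a security issue rather than a silly mistake.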

This starts to get weird when we think about real things though.

Boundaries of physical trust?


What happens in the physical world? What counts as a trust boundary? In the toothbrush example above, an attacker could learn how someone is using their toothbrush. That's technically a trust boundary being crossed (an attacker can gain data they're not supposed to have), but let's face it, it's not a big deal. If your credit card number were included in that data, sure, no question there.

As it stands, though, we're talking about data that isn't exciting. You could make an argument about tracking a user over time and across devices, but let's not go there right now. Let's keep the thinking small and contained.

Where do we draw the line?


If we think about physical devices, where are our lines? The concept of a trust boundary alone doesn't really work here. I can think of three lines, all of which are important, but not equally important.
  1. Safety
  2. Harm
  3. Annoyance

Safety

When I say safety I'm thinking about a device that could literally kill a person: disabling the brakes on a car, making a toaster start a fire. Catastrophic events. I don't think anyone would ever claim this class of issues isn't a problem. They are serious, and I would expect any vendor to treat them that way.

Harm

Harm is where someone or something can be hurt, but nothing catastrophic. Think a small burn or a scrape: someone falling off a scooter, or burning themselves on a device. We could argue about this category for a while, and the line between harm and catastrophe will get fuzzy. Some vendors will be less willing to deal with these, but I bet most get fixed quickly.

Annoyance

Annoyance is where things are going to get out of hand. This is where the toothbrush advisory lives. In the case of a toothbrush it's not going to be a huge deal. Should the vendor fix it? Probably. Should you get a new toothbrush over it? Probably not.

The nuance is deciding which annoying problems deserve fixes and which ones don't. Some of these problems could cost you money. What if an attacker can turn up your thermostat so your furnace runs constantly? Now we have an issue that costs real money. What if your 3D printer ruins a spool of filament? What if the oven burns the Christmas goose?

Where is our trust boundary in the world of annoying problems? You can't just draw the line at money and goods. What happens if an attacker can ring a person's doorbell so they have to keep getting up to check the door? Things start to get really weird.

Do you think a consumer will be willing to spend an extra $10 for "better security"? I doubt it. In the event a device can harm or kill a person, there are government agencies that step in and stop such products. There are no agencies for leaked data, and even if there were, they would have limited resources. Compare "annoyance security" to all the products sold today that don't actually work: who is policing those?

As of right now our future is going to be one where everything is connected to the Internet, none of it is secure, and nobody cares.

Join the conversation, hit me up on twitter, I'm @joshbressers

Friday, November 20, 2015

If your outcome is perfect or nothing, nothing always wins

This tweet
https://twitter.com/RichFelker/status/666325066838339584

Led to this thread
http://marc.info/?t=144778171800001&r=1&w=2

The short version is that some developers from Red Hat are working on gcc, attempting to prevent ROP-style attacks. More than one person has accused this work of being pointless and a waste of time. It's not; the waste of time is arguing about why trying new things is dumb.

Here's the important thing security people always screw up.

The only waste of time is if you do nothing and complain about the people who are doing something.

It is possible the ROP work being done won't end up preventing anything. If that's true, the absolute worst outcome is a lesson learned. It's all too easy in the security space to act this way: if something isn't perfect, you can argue it's bad. It's a common trait of a dysfunctional group.

There is one place where that advice does hold, and it's crypto: never invent your own crypto algorithm.

But in the broader context of humanity, this is how progress happens. First someone has an idea. It might be a terrible idea, but they work on it, then they get help, and the people helping expand and change the idea. Eventually, after people work together, the result is greater than the sum of its parts. Or, if it's a bad idea, it goes nowhere. Failure only exists if you learn nothing.

This isn't how security has worked, and it's probably why everything seems so broken. The problem isn't the normal people, it's the security people. Here's how a typical security idea plays out:
  1. Idea
  2. YOUR IDEA IS STUPID YOU'RE WASTING YOUR TIME AND YOU'RE STUPID!!!
  3. Give up
That's madness.

From now on, if someone has an idea and you think it's silly, say nothing. Just sit and watch. If you're right, it will go up in flames and you can run around giving high fives. It probably won't though. If someone starts something and others come to help, it will grow into something, or they'll fail and learn something. This is how humans learn and get better. It's how open source works, and it's why open source won. It's also why security is losing.

The current happy ending to the ROP thread is that the work is going to continue; the naysayers seem to have calmed down for now. I was a bit worried for a while, I'll admit. I have no doubt they'll be back though.

Help or shut up. That is all.

Join the conversation, hit me up on twitter, I'm @joshbressers

Monday, November 16, 2015

Your containers were built in some guy's barn!

Today containers are a bit like cars were a long, long, long time ago. You couldn't really buy a car; you had to build it yourself or find someone who could build one for you in their barn. The parts were terrible and things broke all the time. It probably ran on steam or was pulled by a horse.

Containers aren't magic. Well, they are for most people. Almost all technology is basically magic for almost everyone. There are some who understand it, but generally speaking, it's complicated. People know enough to get by, which is fine, but it also means you have to trust your supplier. Your car is probably magic to you. You put gas in a hole in the back, then you press buttons, push pedals, and turn wheels to get places. I'm sure a lot of people at this point are running through the basics of how cars work in their heads to reassure themselves it's not magic and they know what's going on!

They're magic, unless you own an engine hoist (and know how to use it).

Now let's think about containers in this context. For the vast majority of container users, the process goes like this: they get a file from somewhere, full of stuff that doesn't make a lot of sense. They run some commands they found on the internet, some magic happens, and they repeat this, twiddling things here and there, until on try 47 they have a working container.

It's easy to say it doesn't matter where the container content came from, or who wrote the Dockerfile, or what happens at build time. It's easy because we're still very early in the life of this technology. Most things are still fresh enough that security can squeak by. Most technology is fresh enough that you don't have to worry about API or ABI issues. Most technology is new enough that it mostly works.

Yet even as new as this technology is, we are starting to see reports of how many security flaws exist in Docker images. This will only get worse, not better, if nothing changes. Almost nobody is paying attention; containers mean we don't have to care about this stuff, right!? We're at the point where we have guys building cars in their barns. Would you trust your family in a car built in some guy's barn? No, you want a car built with good parts that has been safety tested. Your containers are being built in some guy's barn.

If nothing changes, imagine what the future will look like. What if we'd had containers in 1995? There would still be people deploying Windows 95 in a container and putting it on the Internet. In 20 years, containers we use today will still be getting deployed. Imagine still seeing Heartbleed in 20 years if nothing changes; the thought is horrifying.

Of course I'm being a bit overdramatic about all this, but the basic premise is sound. You have to understand what your container bits are. Make sure your supplier can support them. Make sure your supplier knows what they're shipping. Demand containers built with high quality parts, not pieces of old tractors found in some barn. We need secure software supply chains, and only a few places are building them today, so start asking questions and paying attention.
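One way to start asking questions is to look at what your images claim about themselves. Here's a minimal sketch, assuming the standard Docker CLI is installed; the image_provenance helper is a name I made up, and the fields come from the JSON that docker inspect already prints:

    import json
    import subprocess

    def image_provenance(image):
        # Ask the Docker CLI what it knows about this image.
        raw = subprocess.check_output(["docker", "inspect", image])
        info = json.loads(raw.decode("utf-8"))[0]
        return {
            "id": info.get("Id"),                    # the image's identity
            "created": info.get("Created"),          # when it was built
            "digests": info.get("RepoDigests", []),  # registry content digests
        }

    if __name__ == "__main__":
        print(json.dumps(image_provenance("debian:latest"), indent=2))

It won't tell you who built the image or how, but an image with no registry digest and a build date from ages ago is a good reason to start asking your supplier harder questions.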

Join the conversation, hit me up on twitter, I'm @joshbressers

Tuesday, November 10, 2015

Is the Linux ransomware the first of many?

If you pay any attention to the news, you've no doubt seen the story of the Linux ransomware that's making the rounds. Much has been said about its technical merits, but there are two things I keep wondering.

Is this a singular incident, or the first of many?

You could argue this either way. It might be a one-off blip, or it might be the first of more to come. We shouldn't get worked up just yet. If there's another one of these before the year ends, I'm going to stock up on coffee for the impending long nights.

Why now?

Why are we seeing this now? Linux and Apache have been running a lot of web servers for a very long time. Is there something different now that wasn't there before? Unpatched software isn't new. Ransomware is sort of new. Drive-by attacks aren't new. What is new is the amount of attention this thing is getting.

It does help that the author made a mistake, so the technical analysis is more interesting than it would be otherwise. I suspect this wouldn't have been nearly as exciting without that.

If this is the first of many, 2016 could be a long year. Let's hope it's an anomaly.

Join the conversation, hit me up on twitter, I'm @joshbressers

You don't have Nixon to kick around any more!

There has been a bit of noise lately around some groups not taking security as seriously as they should. Or maybe it's that the security folks don't think those groups take it as seriously as they should. Someday there is going to be a security mushroom cloud! When there is, you won't have Nixon Security to kick around anymore!

Does it matter?

I keep thinking about people who predict the end of the world; there hasn't been one of those predictions in a while now. The joke is always "someday they'll be right".

We're a bit like this when it comes to computer security. The security guys have been saying for a very long time, "someday you'll wish you listened to us!" I'm not sure that day will ever come though. There will be localized events of course, but I doubt there will be one singular thing; it'll likely be a long slow burn.

The future won't be packetized.

The world is different now. I don't think there will be some huge game-changing event, and it's for the exact reason we expect one: open source won. But open source winning doesn't mean security wins next; it means security never wins in one decisive moment.

Will there be a major security event that makes everyone start paying attention? I don't think so. If you look at history, a singular major event can make a group quickly change direction and unite them. This happened to Microsoft: things like Nimda and Code Red gave them purpose and direction, and the SDL program got created. But Microsoft was a single entity; one person could demand they change direction and everyone had to listen. If you didn't listen, you got a new job.

Imagine what would happen if anyone inside an open source project tried this, even someone viewed as the "leader". It would be a circus. You would have one group claiming it's great (that's us), one claiming it's dumb (those are the armchair security goofs), and a large group who wouldn't care or change their behavior because there's no incentive.

You can't "hack" open source. A single project can be attacked or have a terrible security record. Individual projects may change how they work, but fundamentally the whole ecosystem won't drastically change. Nobody can attack everything, they can only attack small bits. Now don't think this is necessarily bad. It's how open source works and it is what it is. Open source won I dare not question the methodology.

At the end of the day, the way we start to get security where we want it will be with a few important ideas. Once we have containers that can be secured, for example, some bugs just go away. I always say there is no security silver bullet. There won't be one; there will be many. It's the only way any of this will work out. Expecting everyone to be a security expert doesn't work, and expecting volunteers to care about security doesn't work.

The future of open source security lies with the integrators, the people who take lots of random projects and put them together. That's where the accountability lives, and it's where it belongs. I don't know what that means yet, but I suspect we'll find out in the near future as security continues to be a hot topic.

It's a shame I'm not musical. Security Mushroom Cloud would be a great band name.

Join the conversation, hit me up on twitter, I'm @joshbressers

Tuesday, November 3, 2015

Hack your meetings

I don't think I've ever sat down to a discussion about security that didn't end with a plan to fix every problem ever, which of course means we have a rather impressive plan where failure is the only possible outcome.

Security people are terrible at scoping
I'm not entirely sure why this is, but almost every security discussion spirals out of control, and totally unrelated topics always seem to come up and sometimes dominate the conversation. Part of me suspects it's because there is so much to do that it's hard to know where to start.

I've recently dealt with a few meetings that had drastically different outcomes. The first got stuck on details; oceans would need to be boiled. The second meeting was fast and insanely productive. It took me a while to figure out why that meeting was so fantastic: we were all socially engineered, and it was glorious.

Meeting #1
The first meeting was a pretty typical security meeting. We had a bunch of problems and no idea where to even start, so we kept getting deeper and deeper, never solving anything. It wasn't a bad group, and I don't think less of anyone; I was without a doubt acting just like everyone else. In fact I had more than one of these this week. I'm sure I'll have more next week.

Meeting #2
The meeting I'm calling meeting 2 was a crazy event unlike any I've ever had. We ended with a ton of actions and everyone happy with the results. It took me an hour of reflection to figure out what happened: one of the people on the call managed to socially engineer everyone else. I have no idea if he knows this; it doesn't matter, because it was awesome and I'm totally stealing the technique.

A topic would come up, it would get some discussion, we'd figure out basically what had to be done, and then we would hear "We should do X, I'll own the task". After the first ten minutes one person owned almost everything. After a while the other attendees started taking tasks away because that one person had too many.

This was brilliant.

Of course I could see this backfiring if you have a meeting full of people happy to let you take all the actions, but most groups don't work like that. In almost every setting, everyone wants to be an important, contributing member.

I'm now eager to try this technique out. I'm sure there is nuance I'm not aware of yet, but that's half the fun in making any new idea your own.

Give it a try, let me know how it goes.

Join the conversation, hit me up on twitter, I'm @joshbressers