Sunday, April 24, 2016

Can we train our way out of security flaws?

I had a discussion with some people I work with, all smarter than myself, about training developers. The usual training suggestions came up, but at the end of the day, and this will no doubt enrage some of you, we can't train developers to write secure code.

It's OK, my Twitter handle is @joshbressers, go tell me how dumb I am, I can handle it.

So anyhow, training. It's a great idea in theory. It works in many instances, but security isn't one of them. If you look at where training is really successful, it's for things like how to use a new device, or how to work with a bit of software. Those are really single-purpose tasks; that's the trick. If you have a device that really only does one thing, you can train a person how to use it; it has a finite scope. Writing software has no such scope. To quote myself from this discussion:

You have a Turing complete creature, using a Turing complete machine, writing in a Turing complete language, you're going to end up with Turing complete bugs.

The problem with training in this situation is that you can't train for infinite permutations. By its very definition, training can only cover a finite amount of content. Programming, by its nature, requires you to draw on an unbounded amount of it. The first can never cover the second.
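To make that concrete, here's a hypothetical C snippet (every name in it is invented for illustration). The developer did exactly what training teaches: allocate a buffer, check the allocation, fill the buffer. The bug hides in arithmetic the lesson never mentioned.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t id;
    uint32_t value;
} record;

/* A diligent-looking routine: read 'count' records using the
 * caller-supplied 'next' function. */
record *read_records(uint32_t count, record (*next)(void))
{
    /* On a 32-bit platform, count * sizeof(record) is computed in
     * 32 bits, so a huge count wraps around to a small number and
     * malloc cheerfully returns a tiny buffer... */
    record *out = malloc(count * sizeof(record));
    if (out == NULL)
        return NULL;

    /* ...while this loop writes all 'count' records, straight past
     * the end of the allocation. */
    for (uint32_t i = 0; i < count; i++)
        out[i] = next();

    return out;
}
```

No class can enumerate every place where two numbers multiply into trouble. That's the infinite-permutations problem, in one short function.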

Since you've made it this far, let's come to an understanding. Firstly, training, even training in how to write software, is not a waste of time. Just because you can't train someone to write secure software doesn't mean you can't teach them to understand the problem (or a subset of it). The tech industry is notorious for seeing everything as all or nothing. It's a sliding scale.

So what's the point?

My thoughts on this matter are about how we can think about the challenge in a different way. Sometimes you have to understand the problem, and the tools you have, before you can find better solutions. We love to worry about how to teach everyone to be more secure, when in reality it's all about many layers, with small bits of security in each one.

I hate car analogies, but this time it sort of makes sense.

We don't proclaim that the way to stop people getting killed in road accidents is to train them to be better drivers. In fact I've never heard anyone claim this is the solution. We have rules that dictate how the road is to be used (which humans ignore). We have cars with lots of safety features (which humans love to disable). We have humans on the road to ensure the rules are being followed. We have safety built into lots of roads, like guard rails and rumble strips. At the end of the day, even with layers of safety built in, there are accidents, lots of accidents, and almost no calls for more training.

You know what the current talk about making things safer is? Self-driving cars. It's ironic that software may be the solution to human safety. The point, though, is that every system reaches a stage where the best you can ever do is marginal improvement. Cars are there, and software is there. If we want to see substantial change, we need new technology that changes everything.

In the meantime, we can continue to add layers of safety to software; this is where most effort seems to be today. We can leverage our existing knowledge and understanding of the problems to make things marginally better. Some of this could be training, some of it will be technology. What we really need to do, though, is figure out what's next.
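To show what I mean by layers, here's a small, hypothetical C sketch (names invented). No single check is trusted to be the guard rail; each one assumes the others might fail:

```c
#include <string.h>

#define NAME_MAX_LEN 64

/* Copy an untrusted name into a fixed-size buffer, defensively. */
int store_name(char dest[NAME_MAX_LEN], const char *src)
{
    /* Layer 1: reject obviously bad input at the boundary. */
    if (src == NULL)
        return -1;

    /* Layer 2: enforce the length rule here rather than trusting
     * whoever called us to have done it. strnlen never reads more
     * than NAME_MAX_LEN bytes. */
    size_t len = strnlen(src, NAME_MAX_LEN);
    if (len >= NAME_MAX_LEN)
        return -1;

    /* Layer 3: copy exactly the bytes we measured and terminate,
     * so a mistake above degrades into an error, not corruption. */
    memcpy(dest, src, len);
    dest[len] = '\0';
    return 0;
}
```

The compiler and the platform add more layers underneath: stack protectors, ASLR, sandboxes. Those are the rumble strips and guard rails, and none of them requires the driver to be better trained.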

Just as humans are terrible drivers, we are terrible developers. We won't fix auto safety with training any more than we will fix software security with training. Of course there are basic rules everyone needs to understand, which is why some training is useful. But we're not going to see significant security improvements without some sort of new technology breakthrough. I don't know what that is; nobody does yet. What is self-driving software development going to look like?

Let me know what you think. I'm @joshbressers on Twitter.