Transformational Change for Secure Software Development: An Attempt to Provoke Thought About How We Approach Application Security

I recently read a report that organizations are only patching about 10% of the vulnerabilities that roll in every day. The main takeaway was that this approach was sufficient because there is no need to patch everything. After all, only 5–6% of the vulnerabilities are likely to be exploited. The rest is noise. The catch is… you now have to buy another product that tells you which 10% to patch.

There you have it…

To deal with this constant tsunami of software fixes that shows up every day, all we need is more software to manage bad software.

In this article, I’ll try to take a really fundamental step back and truly examine this problem. After all, Albert Einstein reportedly said that if he had an hour to solve a problem, he would spend 55 minutes thinking about the problem and provide the solution in the remaining time.

If I revert to my high school math, the number of software vulnerabilities is a function of the number of lines of code:

v = f(LOC), where LOC = number of lines of code.

For each line of code, there is an x% chance that it introduces a vulnerability. This may not be a linear function, but the correlation is definitely positive. Some curves are steep and some are flat. Flattening the relationship by reducing the number of vulnerabilities per line of code is the ongoing objective and mantra of the secure coding movement.
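To make the "steep versus flat curve" idea concrete, here is a minimal sketch of that relationship. The linear model and the per-KLOC defect rates are illustrative assumptions, not empirical figures; real defect density varies widely by team and domain.

```python
def expected_vulnerabilities(lines_of_code: int, vulns_per_kloc: float) -> float:
    """Naive linear model of v = f(LOC): expected vulnerability count
    given a codebase size and an assumed defect density.
    The defect-density numbers below are hypothetical."""
    return lines_of_code * vulns_per_kloc / 1000

# Same codebase size, two hypothetical defect densities:
steep = expected_vulnerabilities(500_000, 2.0)   # "steep" curve: 2 vulns/KLOC
flat = expected_vulnerabilities(500_000, 0.2)    # "flat" curve: 0.2 vulns/KLOC
```

Under these assumed rates, the same half-million-line product carries a tenfold difference in expected vulnerabilities, which is exactly the gap secure coding practices try to close.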

But what is the true consequence of producing bad code?

Some vendors produce ‘fixes’ to broken code and vulnerabilities every couple of weeks. They have made this a regular event and are on a set schedule to clean up the mistakes that they shipped to their customers.

Imagine for a second… if your car got recalled every month and you had to drive it to the dealership to get it fixed. The dealership hooks up your car with new features (let’s say, mud flaps) and fixes some high-severity vulnerabilities at the same time. This is done over and over like clockwork, and you have become numb to this peculiar activity.

If aliens landed on this planet and saw how we manage software… what would they say?

…You pay for software that has to be rewritten? The organization that sold you that software expects you to hire people, create change windows, and manage a whole process to fix the defects they sent you?…

Extending the analogy above: you are hiring the mechanic and running the auto repair shop yourself, even though you already paid the dealership. Even worse, the software provider pays bug bounties for discovering these problems because it would rather crowdsource the identification than commit internal staff to finding such issues. Imagine if car recalls relied on random car enthusiasts poking around in their garages to find defects in every make and model.

We blindly accept this reality because certain products are seen as unimpeachable parts of our infrastructure where no other alternatives exist.

Now imagine a different perspective.

Many organizations outsource functions that are not core to their business. In doing so they write contracts that penalize the service provider when it doesn’t deliver on its commitment. For example, if your help desk is outsourced, metrics are collected on the service and if they don’t meet an SLA (Service Level Agreement), then there is a financial penalty for not meeting the target.

Imagine if software came with a caveat where the vendor pays you back for exceeding an agreed-upon number of vulnerabilities. You would think that when secure coding practices are tied to the bottom line, people would start caring.

I suspect that we have issues with vulnerabilities because developers are not growing up with an understanding of how a person can make their software do what it’s not supposed to do. They don’t develop a hacker mentality, and asking them to do so mid-career is not always going to work. Putting them through secure coding seminars, making them security champions, and appealing to their future marketability as secure coders does not seem to work. There is no doubt that the incentive is to churn out working software over secure software.

We must therefore look for a fundamental shift embedded in market and economic principles to bring about change. Specifically, these three changes could upend the status quo:

  1. Innovators could overcome existing barriers to entry by producing software that shifts the incentives in their favor when they produce secure code. You pay more for the software, but it comes with rebates if your vulnerability overhead exceeds an agreed threshold. After all, a car with 10 recalls a year would be a disaster.
  2. Market participants must be willing to jump ship to such products. There will eventually be a product that disrupts the stuff we patch every two weeks. It will take courage to jump ship, but it is plausible that it could happen.
  3. Bug bounty programs could pivot from reporting bugs to the software providers to supplying that information to consortiums of software users. The consortiums could then pressure the software provider to pay up on the calculated business cost of patching. Security researcher A can go to Microsoft for a $10,000 bounty, or pool that finding into the consortium, where the collected pool of findings can be split between the impacted organizations and the researchers.
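The rebate idea in point 1 is essentially an SLA clause, like the outsourced help desk example earlier. A minimal sketch of how such a clause might compute the payout, with every figure (threshold, per-vulnerability rebate, fee cap) being a hypothetical contract term, not an industry standard:

```python
def vulnerability_rebate(vulns_shipped: int,
                         sla_threshold: int,
                         rebate_per_vuln: float,
                         annual_license_fee: float) -> float:
    """Rebate owed to the customer when shipped vulnerabilities exceed
    the contractual SLA threshold, capped at the annual license fee.
    All parameter values in the examples below are hypothetical."""
    excess = max(0, vulns_shipped - sla_threshold)
    return min(annual_license_fee, excess * rebate_per_vuln)

# Vendor shipped 25 vulns against an SLA of 10, at $500 per excess vuln:
owed = vulnerability_rebate(25, 10, 500.0, 100_000.0)
# Staying under the threshold owes nothing:
clean = vulnerability_rebate(5, 10, 500.0, 100_000.0)
```

The cap keeps the clause symmetric with how outsourcing SLAs usually work: the penalty scales with the miss but never exceeds what the customer paid.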

The main idea of the above is not to propose an actual solution but to provoke thought. Does the rinse-and-repeat status quo make sense? The answer to that is no. Are we resigned to our fate? We should not be.

Let’s hope someone reads this and passes it along.

First blog in the bag!