While analogies can help us understand cybersecurity, analogizing it to car safety oversimplifies the challenge. The comparison creates the impression that software security is as easy as buckling a seatbelt, and it suggests that cyber incidents result from a lack of care or competence on the part of software developers.
In fact, the US National Cybersecurity Strategy notes that “the most advanced software security programs cannot prevent all vulnerabilities,” and the multilateral document “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default” acknowledges that even software that is secure-by-design “will continue to suffer vulnerabilities.” Unlike car accidents, which are just that – accidents – cyber incidents are the work of malicious actors, too often funded by adversarial nation-states, operating in a complex digital ecosystem.
If we want to improve cybersecurity – a shared goal – it’s time to put the car safety analogy back in the garage.
1. Malicious Actors Are Not Like Drivers and the Internet Is Not Like a Highway
When drivers pull onto the road, they can generally be confident that other drivers will not intentionally cause a crash. Indeed, if a driver were determined to crash into another vehicle, it would be virtually impossible to stop them.
This is why cars and seatbelts are a poor analogy for cybersecurity. Other drivers aren’t seeking to disable fellow motorists’ brakes or unbuckle their seatbelts the way nation-state actors and hundreds or thousands of criminal syndicates attempt to disable software security systems millions of times per day.
Further, car accidents take place at a specific location, making it easier to assess and react to the situation. By contrast, cyber incidents cross borders and involve malicious actors who are difficult to identify and bring to justice.
2. Competition Encourages Excellence
Some in government have argued that software companies should provide all available security features for no charge – just as consumers might assume that seatbelts are included with the purchase of a new car. Yes, there are baseline safety requirements for cars, just as reasonable security standards are appropriate for software. But car manufacturers compete on safety – and charge for extra or better features. Numerous safety innovations – such as blind-spot monitors, lane departure warnings, and electronic stability control – cost extra, and for good reason.
Just as carmakers have myriad private and public incentives to provide new safety features, policymakers should maintain the incentive for software companies to develop new and better security functions.
If, instead, policymakers apply the car safety analogy and dictate which security features are required, the result would be less innovation to confront malicious actors who will continue to improve their tactics, techniques, and procedures.
3. Compliance Checklists Don’t Work for Cybersecurity
The US government, industry, and partners around the world have spent decades advancing cybersecurity as a risk management practice, championed within organizations by their top leaders. Applying the car safety analogy further risks undoing that progress by reverting to stale, prescriptive, compliance-based checklists.
Risk-based approaches are particularly important for cybersecurity where both offensive and defensive capabilities develop much faster than legislation, regulation, or policy.
Additionally, risk-based approaches recognize that not every organization or system needs the same level of security. Not even the US government takes a uniform approach to its security needs; it focuses its most stringent security practices on its most sensitive information. Put another way: even the US government tailors the security it procures to its mission needs – a risk-based approach.
A better analogy for cybersecurity is a complex system, such as an ecosystem. Complex systems feature dynamic, interacting actors that produce outcomes that are difficult to predict. They model cybersecurity more accurately without oversimplifying the challenge.
We disagree about the most effective analogy for helping policymakers understand and appreciate the challenge and complexity of cybersecurity, and develop and implement policies that improve outcomes. But we agree on many of the steps that government and industry should prioritize to accomplish these shared goals.
We agree that the security of software should be driven by executive-level commitment and engagement.
We agree that software producers should design and build software in a way that reasonably protects against malicious cyber actors, even if all vulnerabilities cannot be prevented.
We agree that software producers should invest in building security in rather than bolting it on. We also agree that software producers should leverage internationally recognized standards and best practices like the BSA Framework for Secure Software, which is cited by the National Institute of Standards and Technology’s Secure Software Development Framework, to achieve outcomes like those identified in the US National Cybersecurity Strategy.
We agree that the burden of security should not disproportionately fall on consumers, even if the burden of safely operating a car ultimately falls on drivers.
Most importantly, we agree that despite the complexity of the growing challenge, the time to start doing better is now.
Building prescriptive requirements for software based on an overly simplistic analogy to car safety won’t encourage companies to develop more secure software. And in the complex digital ecosystem, in which malicious actors are constantly improving, we need to reward software vendors that continuously improve their security tools.
Once we’ve embraced a more accurate analogy, we’ll be on the road to a more secure future.