In Matters of Cyber Security, To Err is Human
Updated: Aug 8
Just as the safety of self-driving vehicles hinges on human drivers following the rules of the road, the effectiveness of many corporate cyber security measures depends on employees exercising due care when carrying them out.
Driverless vehicles will soon be able to take us anywhere we want, near or far, with little effort; we need only enter our destination, sit back, and enjoy the ride—or so we are told.
This exciting vision promises not only a more pleasant and easier way to travel, but also, potentially, a safer one. One study shows that more than 90 percent of motor vehicle accidents happen because of human error. Self-driving cars, by contrast, can be programmed to avoid crashes.
Equipped with sensors, radar, GPS, input from other vehicles, and more data, autonomous cars won’t engage in the most common automotive infractions, which, according to the study, include driving drunk, getting distracted, cutting off other vehicles during lane changes, and failing to yield to oncoming cars.
With more than 32,000 traffic-related deaths in 2014, self-driving cars could save lives and property on a significant scale. Recently, though, the safety of autonomous vehicles has come into question. Driverless cars on the roads today are getting into accidents at twice the rate of cars with drivers. Human-driven cars are hitting them because, it seems, driverless vehicles follow the rules too closely.
To put it another way, developers of driverless cars may not have considered the human element in their design equation. Now, it seems, they are second-guessing their algorithms, wondering whether, for example, they should program the cars to exceed the speed limit when that is what most drivers would do in a given situation, or to cross a double yellow line to drive around a bicyclist.
We see a similar situation in cyber security today. In spite of efforts to improve encryption, authentication, firewalls, and the like, data breaches continue on massive scales.
With more than 95 percent of breaches blamed on user error—clicking on phony links, opening fake websites, and using unsecured passwords, for instance—we’re seeing that, as with those driverless cars, the technology is only as good as the human interacting with it.
To shore up defenses, many cyber security practitioners work to educate employees about cyber threats. But what to make of recent reports showing that IT professionals, who arguably know more about cyber security than the average employee, are more likely than others to engage in risky online behaviors? These findings indicate that although security awareness training is important for reducing risk, it is clearly not sufficient.
Some financial institutions are trying the “stick” approach: They’re sending out fake phishing emails to workers and penalizing those who open them. They’re also monitoring employees’ social media accounts for sensitive information and prohibiting out-of-office email replies and phone messages. The result may be enhanced security, but at what cost? Do we really need to create a workplace culture in which people distrust every email and phone call they receive? Do we risk making them afraid to participate in social media at all?
Given the increasing sophistication of cyber criminals’ schemes, expecting even the most knowledgeable among us to spot every phishing email, and to carefully check every Web address before clicking, may be not only unreasonable but also hazardous to our organizational health. To err is indeed human, and it often takes only one mistake to let intruders into company databases.
There must be a better way.
Rather than place the security onus on employees or executives, perhaps we ought to work around the “weak link” of human error. Instead of instilling fear into workers, maybe we ought to give them infrastructure they can trust.
These days, there’s a lot of talk about “baking in” security during the design process when developing new technologies. How about “mixing in” human foibles—forgetfulness, inattention, denial, and even rebelliousness—as well? Why not embed cyber security features that protect people from themselves rather than expect perfection from imperfect users? Imagine that your email program consistently recognized and flagged phishing emails, for instance, or that you could depend on your Web browser to prevent unsafe links from opening.
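Even simple heuristics, applied automatically by a mail client, could catch the classic tells of a phishing link before a user ever has to judge it. Here is a minimal sketch of that idea; the function name, the handful of rules, and the example addresses are illustrative assumptions, not any real product's filter, which would combine many more signals:

```python
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link when the text shown to the user doesn't match
    the domain it actually points to, or other red flags appear.
    (Illustrative heuristics only -- not a complete phishing filter.)"""
    host = (urlparse(href).hostname or "").lower()

    # Red flag 1: a raw IP address instead of a domain name.
    if host.count(".") == 3 and host.replace(".", "").isdigit():
        return True

    # Red flag 2: the visible text names a domain, but the link
    # actually points somewhere else (the classic phishing mismatch).
    text = display_text.strip().lower()
    if text.startswith(("http://", "https://")):
        text = urlparse(text).hostname or ""
    shown = text[4:] if text.startswith("www.") else text
    actual = host[4:] if host.startswith("www.") else host
    if "." in shown and " " not in shown and shown != actual:
        return True

    return False

# Text says one site, link goes to a bare IP -- flagged.
print(looks_suspicious("www.mybank.com", "http://203.0.113.7/login"))   # True
# Text and destination agree -- not flagged.
print(looks_suspicious("Quarterly report", "https://intranet.example.com/q3"))  # False
```

The point is not that any one rule is decisive, but that a machine can apply such checks to every message, every time, without the fatigue or inattention a human reader inevitably brings.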
In general, people aren’t told to distrust the airbags in their cars, or to check the safety of the highways they drive before embarking on a journey. Why burden busy workers with the task of keeping their enterprises safe from criminals whose sole job is to infiltrate their organizations? Our role, as cyber security professionals, is to provide a security infrastructure users can trust.
New research into “self-healing” networks shows potential for a future in which computers take on the vigilance we currently ask of people. In the meantime, perhaps we should consider adding more than a touch of human unpredictability to our cyber security recipes. Instead of developing digital strategies focused on technology, how can we create cyber security strategies for humans living and working in a digital world?