What if I told you that the greatest threat related to technology isn’t a flaw in software, poor security planning, or a person’s susceptibility to manipulation? What if I said the biggest cyber-threat is that too many people don’t believe there really is a threat at all? And that without that belief, no other security recommendation is very effective. Bear with me for a few moments while I make my case.
I had a meeting recently with the Chief Information Officer and the Chief Information Security Officer of a major U.S. utility responsible for supplying electricity to a vast portion of the country's population. The purpose of the meeting was to discuss ways we could partner so that we could both benefit from improved security programs. Partnerships like this are critical in the cybersecurity industry. We learn from each other. We build on each other’s successes.
Here’s the thing about this particular meeting, though. I don’t think I can help them, and I know they can’t help me. Why? Neither the CIO nor the CISO believes in the value of cybersecurity. Sure, they have a well-funded cybersecurity program, but they’ve focused their efforts on responding to security incidents and all but abandoned the idea of prevention. Neither of them believes they are truly at risk, and their organization’s employees don’t either.
In a general sense, all organizations face the same threats. Vulnerabilities in technology exist in software flaws, in security and access-control mechanisms, in policies and processes, or in the people entrusted to use and manage the technology.
The same is true for individuals and the technology they have at home. Software flaws provide avenues for malicious individuals to exploit. Poor hygiene online allows fraudsters to gain access to information they can use to manipulate people. Individuals’ failure to recognize and respond to threats puts their personal devices and information at risk.
A bigger problem
Though academics, think tanks, and government researchers have all come together to identify and categorize threats and then prescribe solutions, there is a constant, looming threat to cybersecurity that nobody seems to have a solution for. The problem is an over-arching, society-wide belief that cybersecurity experts are alarmists. There is a common idea that cybersecurity professionals just want to scare people into spending money for problems that don’t really exist or that aren’t as bad as they’d have you believe.
During my meeting with those two executives, they explained that implementing changes in the name of security is difficult because their field workers, contractors, and even administrative staff only see the changes as a hindrance to their ability to be productive. People are resistant to changes and tighter control because they don’t see how anyone could exploit people or their equipment in the first place. Even these two “technically inclined” executives see threats to their organization as mere hypothetical situations that are unlikely to actually materialize.
Before anyone can take measures that will actually do what’s necessary to protect them, they must first believe that the risk is real. So far, the security industry has failed to truly convince anyone other than itself. Evidence of this can be seen in the data about the most common vulnerabilities and in how we continually approach cyber-risk.
New year, same old issue
The problem of nobody truly believing that there are serious security issues with IT isn’t a new one. Year after year, the same issues remain. Even with the field of cybersecurity rising up out of necessity in the past 20 years, cybersecurity professionals largely remain the only group of people thoroughly convinced there is a problem worthy of being taken seriously.
Application developers don’t buy it
SANS and OWASP, two of the most trusted organizations tracking and publishing statistics on the most common flaws in software and web applications, have consistently reported the same handful of flaws, year after year. Chief among these is the ability to inject instructions through user input fields, such as the username and password fields on a login screen or the comments section of a website, tricking the program into giving the attacker access to information, spreading malware, or taking control of the computer that hosts it.
Though these attacks carry technical-sounding names like SQL injection, cross-site scripting (XSS), and buffer overflow, performing them can be easily automated using simple, and often free, tools obtained from the Internet. They are simple enough that someone who knows how can teach them to anyone with basic computer skills in a matter of hours.
What’s worse is that SANS, OWASP, and other organizations have prescribed very simple solutions (in terms of software development) to prevent these common threats for more than a decade. In spite of that, developers continue to produce software with these same weaknesses.
One of these easily remediated flaws, SQL injection, was first identified as a problem in 1998. It has remained one of the top 10 most common software and web application flaws every single year since. Yet software developers continue to write code without taking the well-documented measures to mitigate it, just as with many other issues.
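The prescribed fix really is as simple as advertised: never build SQL by pasting user input into the query string; pass it as a parameter so the database driver treats it strictly as data. A minimal sketch in Python, using an in-memory SQLite database purely for illustration (the table, account, and injection payload are hypothetical):

```python
import sqlite3

# In-memory demo database with a single user account (illustrative only).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Anti-pattern: user input is concatenated into the SQL itself,
    # so the input can rewrite the query.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: placeholders keep input strictly as data.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

# The classic payload ' OR '1'='1 turns the WHERE clause into a tautology.
payload = "' OR '1'='1"
print(login_vulnerable("anyone", payload))  # True  -- authentication bypassed
print(login_safe("anyone", payload))        # False -- payload treated as data
```

The defense is a one-line difference in how the query is written, which is exactly why its continued absence from production software is so telling.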
Hardware makers don’t buy it
Each year, new technology is introduced. Many of the innovative products provide new and fun capabilities we didn’t have before. The problem is that all this new hardware uses the same operating systems, same basic design, and same development processes as products from years past. More often than not, security is an afterthought.
Most people don’t realize that embedded systems, such as the thermostat that controls your home’s heating and cooling, the electronics that regulate the temperature in your refrigerator, and the timer on your oven are computers too. These devices are all around us. Almost none of them were built with security in mind because nobody ever thought they would need it.
Then someone had the grand idea of connecting embedded systems to the Internet. This spawned a family of technology we now call the Internet of Things, or IoT. These devices often can’t be updated or patched, and the software they run was never designed with security in mind. Yet hardware makers continue to build new products on this same old technology.
Business executives don’t buy it
Forbes recently ran an article explaining the high levels of stress among chief information security officers. The article discusses the results of a recent study finding that 1 in 6 security executives either self-medicates or has a problem with excessive alcohol consumption as a means of dealing with job stress. The study revealed that 91% of CISOs surveyed reported moderate or high levels of stress. The cause? Low support from senior leadership and a lack of executives who value the security programs CISOs are tasked with running.
Organizations hiring cybersecurity workers don’t even bother to understand how to gauge the knowledge and skills of potential employees. Many simply require that their prospects hold a CISSP, a certification that does not test for technical competence but for the general knowledge required to supervise and manage cybersecurity workers. Holding a CISSP shows only that you understand enough basic cybersecurity principles to supervise people who are experts in them. Yet this certification is the standard by which organizations in the United States judge whether someone has the knowledge and skills necessary to do many technical cybersecurity jobs.
IT personnel don’t buy it
One of the most preventable causes of compromised IT is that software updates and patches are not applied. Millions of flaws exist across all the software used on all the devices in circulation today, and most of them already have fixes that have been produced and distributed. Nearly every software update, from nearly every software company, for nearly every tech product on the market, is accompanied by an explanation of the flaws it is intended to fix.
Larger companies often have multiple technologies to identify unpatched systems and let administrators know which updates need to be applied to what systems. Yet, patches aren’t applied and systems remain vulnerable to known and well-documented flaws.
As Ars Technica reported, WannaCry, the massive ransomware outbreak that swept the world in 2017, affected millions of systems. Microsoft published a patch months before WannaCry spread that would have fixed the vulnerability it exploited on every system. In 2017, more than 200,000 systems around the world were still vulnerable to Heartbleed, a vulnerability discovered in 2014 and fixed with a patch within a matter of weeks. In both cases, a vulnerability was identified and the solution was distributed by the product maker, but far too many people never bothered to apply it. In fact, it’s reported that somewhere near 90% of firms that fell victim to hackers had failed to keep systems patched.
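The patch-audit tooling that larger companies deploy is proprietary and varies widely, but the core idea it automates is simple: compare each installed version against the earliest version known to contain the fix. A minimal, hypothetical sketch of that comparison (the package names and version numbers below are invented for illustration):

```python
# Hypothetical sketch of the core of a patch-audit check: flag any
# installed package older than the minimum version known to be fixed.

def parse_version(v):
    """Turn a dotted version string like '1.0.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_unpatched(installed, minimum_fixed):
    """Return packages whose installed version predates the fixed version."""
    return [pkg for pkg, ver in installed.items()
            if pkg in minimum_fixed
            and parse_version(ver) < parse_version(minimum_fixed[pkg])]

# Invented inventory and advisory data for the sake of the example.
installed = {"openssl": "1.0.1", "webapp": "2.3.0"}
minimum_fixed = {"openssl": "1.0.2", "webapp": "2.2.0"}

print(find_unpatched(installed, minimum_fixed))  # ['openssl']
```

The point of the sketch is how little is involved: the vendors publish the fixed versions, the scanners surface the gaps, and the remaining step, actually applying the patch, is the one that keeps not happening.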
Consumers don’t buy it
People like shiny new toys. They delight in technology that can automate their world, bring them entertainment, and let them geek out with cool new capabilities. Today, nearly every person with a job (and many not old enough to work) has a smartphone. Between 2017 and 2019, over 15 million smart speakers were sold worldwide. As a society, we welcome new technology into our lives with open arms.
There is a meme floating around that says during the Cold War, the public was afraid of government spying, but today we invite it into our homes in the form of smart speakers, internet-connected cameras, and smart home security systems. The truth in this meme is that in the mid-to-late 1900s people were suspicious of technology, but now we rarely stop to question whether we should trust it.
As consumers, we rarely stop to ask if software or hardware is secure before we buy new products. We don’t look for security certifications on products. We don’t consider the risks of signing up for the latest social media site. We certainly don’t put much effort into making sure our home devices are kept up to date, especially when they aren’t nagging us to download and install those updates.
Trust the experts
The greatest problem that I have with the energy executives I met with is that they do not value the professional knowledge and ability of cybersecurity experts to manage cybersecurity in their organization. Rather than hire experts trained and educated in cyber-risk management and network security testing, they prefer to employ project managers, who are not well-versed in technology risk, and forensic investigators, who specialize in post-incident information gathering.
While cybersecurity involves risk management, properly managing technology risk requires a deep understanding of the technology itself. Likewise, a forensic investigator is equipped and trained to figure out what someone did with a computer after the fact, not to break into systems and exploit weaknesses as a preventive measure.
If we are going to have an impact on improving security around the use of technology, we have to start actually listening to cybersecurity experts. We must collectively stop writing them off as alarmist fear-mongers and start considering that there might actually be merit to what they are so passionate about. We must stop trusting technology and start listening to those who understand the threats posed by it. This is the greatest obstacle standing between where we are now and finally making progress towards safer, more secure technology.