Tech security today is bad, and as people bring more and more connected gadgets into their homes, the risks are growing dramatically. That’s why it is time for the tech industry to step up and take responsibility for protecting the devices it makes and the people who use them. This was the message delivered by ARM CEO Simon Segars to developers attending the annual ARM TechCon in Santa Clara, Calif. The theme of security permeated the event, with ARM announcing its Platform Security Architecture, a set of architecture specifications and open-source firmware aimed at the IoT, along with a programmable security core.
But Segars and other speakers made it clear that this concern about security wasn’t just about what ARM is doing. Segars distributed what ARM is calling a Security Manifesto, urging the tech industry to accept the fact that it has a social contract with users, that security is a collective industry responsibility and that security systems have to allow for human error.
The manifesto also states that security must be a primary design consideration from the beginning, an approach cybersecurity experts have long been beating the drum about. “It’s not just a software problem,” Segars said. “If we can do more in hardware, if we design things with the assumption that a compromise is going to happen, we can make the [security] software simpler.”
Segars also said the makers of connected products have to take ownership of security for the lifetime of the product. In the auto industry, he pointed out, “you read all the time about recalls, so you know that the car company is taking some responsibility for your safety after you’ve driven off the lot. As imperfect as that is, at least there is an onus on the company that sold me the car to look after that.” Today, he continued, “the tech industry is not like that. In very few cases does anybody who sold you an electronic product take responsibility for it. That has to change.” And it has to change very soon, because, he said, “as we go into the world of IoT, the threats explode tremendously.” “When everything is connected, it is an opportunity for a bad hacker,” he said. Already, he noted, hackers have even attacked a connected fish tank. “I’d like all of you to join us in committing to making electronic products safer than they are today,” he urged.
Segars wants the tech industry to think of protecting against security hacks not so much as building a wall that locks hackers out, but as building an immune system that fights infections. “These devices are alive,” he said. And “electronic threats are alive in the same way biological threats are alive”: they are constantly adapting to new conditions.
The kind of cyber immune system he described would use intelligence in the devices themselves, in the network, and in the cloud to spot when something is wrong, to quarantine it so it doesn’t damage other systems, and to heal it by triggering a firmware update or other patch, in most cases, without the user of the device having to do anything. Companies would have to work together to spot and counteract threats, he said.
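The detect-quarantine-heal cycle Segars describes can be sketched in a few lines. This is a purely illustrative toy, not any real ARM API: the `Device` class, the traffic-rate anomaly check, and the threshold are all assumptions introduced here to make the three steps concrete.

```python
# Hypothetical sketch of the "immune system" loop: detect something wrong,
# quarantine the device, heal it with a firmware update. All names and
# thresholds are illustrative assumptions, not a real product interface.
from dataclasses import dataclass


@dataclass
class Device:
    device_id: str
    firmware: str = "1.0"
    quarantined: bool = False
    traffic_rate: float = 0.0  # messages/sec observed by the network


def detect(device, baseline=10.0):
    """Spot when something is wrong: traffic far above a normal baseline."""
    return device.traffic_rate > 5 * baseline


def quarantine(device):
    """Isolate the device so it cannot damage other systems."""
    device.quarantined = True


def heal(device, patched_firmware):
    """Trigger a firmware update, then readmit the device."""
    device.firmware = patched_firmware
    device.quarantined = False


def immune_cycle(fleet, patched_firmware):
    """One detect -> quarantine -> heal pass over a fleet of devices,
    requiring no action from the devices' users."""
    for dev in fleet:
        if detect(dev):
            quarantine(dev)
            heal(dev, patched_firmware)


fleet = [
    Device("thermostat", traffic_rate=4.0),
    Device("fish-tank-sensor", traffic_rate=900.0),  # behaving abnormally
]
immune_cycle(fleet, patched_firmware="1.1")
```

After the pass, only the anomalous device has been patched; the healthy one is untouched, which mirrors the idea that the response happens in the network and cloud, invisible to the user.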
While Segars focused on what the industry can do against cybercrime, he clearly didn’t want to oversell the power of technology to fix the problem. Users, he indicated in a white paper distributed with the manifesto, should also take more responsibility for their own security. And cybercrime experts took the stage to explicitly deliver the message that humans, not just machines, can and will be hacked. People, they indicated, need to do better at holding up their end of the social contract, that is, spotting when they are about to be manipulated by hackers and not falling for it, or at least not falling for it quite as often as they do today.
Mary Aiken, a cyberpsychologist who works in forensics (and who was the inspiration for the TV show CSI: Cyber), said that even smart people do stupid things online—like loading apps on their phones that they haven’t fully checked out, even when they know they have credit card data loaded on those phones as well. In a larger context, she pointed out that storing financial data for half the U.S. population on one company’s servers was probably not a good idea. “Just because we can, doesn’t mean we should,” she said.
Jessica Butler, a cybersecurity consultant, made it clear that just about any human can be hacked. Her most stunning real-world example featured an email sent to a high-level employee, purportedly from the company CEO, requesting the immediate transfer of a vast amount of money to close an important, but stealthy, deal. The employee made the transfer.
“So many organizations are falling victim to these attacks,” Butler said. “The FBI estimates that these are costing $2.5 billion a year.” “It’s about the people, the process, and the technology,” she said. “I would love us to get to the stage where technology can solve our security problems, but we are very far from that point right now.” She warned about the exploding threat of ransomware, and its potential move from computers into the Internet of Things. “Would you ransom your smart TV when your favorite show is coming on in half an hour?” she asked. “If I compromise your sex toy, can I get access to information that can be ransomed?”
Individuals, she said, need to step up their use of password managers and two-factor authentication. And they also need to be suspicious. “Beware,” she said, “of links and attachments. It’s terrible advice, I know, because half of what we do at our desks is click on links and download attachments. But you have to be aware. If you get an attachment that you weren’t expecting, pick up the phone and call the person who sent it to you.” “We need to get people to be our strongest link,” she concluded.
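For readers curious what the two-factor codes Butler recommends actually are: most authenticator apps generate time-based one-time passwords (TOTP, RFC 6238) from a shared secret and the clock. Here is a minimal sketch using only the Python standard library; it is for illustration, and real deployments should rely on a vetted authentication library rather than hand-rolled crypto.

```python
# Minimal TOTP (RFC 6238) sketch: the time-based codes behind common
# two-factor authentication apps. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a one-time code from a base32 shared secret and the clock."""
    key = base64.b32decode(secret_b32)
    now = time.time() if for_time is None else for_time
    counter = int(now // step)                 # 30-second time window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# The phone and the server compute the same code from the same secret and
# the same time window, so a stolen password alone is not enough to log in.
demo_secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # RFC 6238 test secret
print(totp(demo_secret))
```

Because the code changes every 30 seconds and never travels with the password, a phished password alone won’t get an attacker into the account, which is exactly why Butler urges individuals to turn two-factor authentication on.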