NEW YORK, Oct 17 ― The vision of the so-called internet of things — giving all sorts of physical things a digital makeover — has been years ahead of reality. But that gap is closing fast.

Today, the range of things being computerised and connected to networks is stunning, from watches, appliances and clothing to cars, jet engines and factory equipment. Even roadways and farm fields are being upgraded with digital sensors. In the last two years, the number of internet-of-things devices in the world has surged nearly 70 per cent to 6.4 billion, according to Gartner, a research firm. By 2020, the firm forecasts, the internet-of-things population will reach 20.8 billion.

The optimistic outlook is that the internet of things will be an enabling technology that will help make the people and physical systems of the world — health care, food production, transportation, energy consumption — smarter and more efficient.

The pessimistic outlook? Hackers will have something else to hack. And consumers accustomed to adding security tools to their computers and phones should expect to adopt similar precautions with internet-connected home appliances.


“If we want to put networked technologies into more and more things, we also have to find a way to make them safer,” said Michael Walker, a programme manager and computer security expert at the Pentagon’s advanced research arm. “It’s a challenge for civilisation.”

To help address that challenge, Walker and the Defence Advanced Research Projects Agency, or DARPA, created a contest with millions of dollars in prize money, called the Cyber Grand Challenge. To win, contestants would have to create automated digital defence systems that could identify and fix software vulnerabilities on their own — essentially smart software robots as sentinels for digital security.

A reminder of the need for stepped-up security came a few weeks after the DARPA-sponsored competition, which was held in August. Researchers for Level 3 Communications, a telecommunications company, said they had detected several strains of malware that launched attacks on websites from compromised internet-of-things devices.


The Level 3 researchers, working with Flashpoint, an internet risk-management firm, found that as many as one million devices, mainly security cameras and video recorders, had been harnessed for so-called botnet attacks. They called it “a drastic shift” toward using internet-of-things devices as hosts for attacks instead of traditional hosts, such as hijacked data centre computers and computer routers in homes.

And last week, researchers at Akamai Technologies, a web content delivery company, reported another security breach. They detected hackers commandeering as many as two million devices, including Wi-Fi hot spots and satellite antennas, to test whether stolen user names and passwords could be deployed to gain access to websites.

The Cyber Grand Challenge was announced in 2013, and qualifying rounds began in 2014. At the outset, more than 100 teams were in the contest. Through a series of elimination rounds, the competitors were winnowed to seven teams that participated in the finals in August in Las Vegas. The three winning teams collected a total of US$3.75 million (RM15.8 million) in prize money.

With the computer security contest, DARPA took a page from a playbook that worked in the past. In 2005, the agency staged a similar contest that jump-started the development of self-driving cars. It took the winning team’s autonomous vehicle nearly seven hours to complete the 132-mile course, a dawdling pace of less than 20mph.

Still, the 2005 contest proved that autonomous vehicles were possible, brushing aside long-standing doubts and spurring investment and research that led to the commercialisation of self-driving car technology.

“We’re at that same moment with autonomous cyberdefence,” Walker said.

The contest, according to the leaders of the three winning teams, was a technical milestone, but it also shed light on how machine automation and human expertise might be most efficiently combined in computer security.

In the security industry, the scientists say, there is a lot of talk of “self-healing systems.” But the current state of automation, they add, typically applies to one element of security, such as finding software vulnerabilities, monitoring networks or deploying software patches. And automated malware detection, for instance, is often based on large databases of known varieties of malicious code.

For the DARPA test, the attack code was new, created for the event. In the capture-the-flag-style contest, the teams played both offence and defence. For the humans, it was hands-off during the competition. The software was on its own to find and exploit flaws in opponents’ software, scan networks for incoming assaults and write code to tighten its defences.

The winners succeeded in integrating different software techniques, in ways not done before, into automated “cybersecurity systems.” The contest was conducted in a walled-off computing environment rather than the open internet.

The scientists agree that further development work needs to be done for the technology to be used broadly on commercial networks and the open internet.

“But this was a demonstration that automated cyberdefence is mature enough, and it’s coming,” said David Melski, captain of the second-place team, whose members came from the University of Virginia and GrammaTech, a startup spun off from Cornell University, where Melski heads research.

The first-place team, which won US$2 million, was a group from ForAllSecure, a spinoff from Carnegie Mellon University. Hours after the DARPA contest, its cyberreasoning software, called Mayhem, went up against the best human teams at Defcon, an annual hacking competition.

In that three-day contest, Mayhem held its own for two days and proved itself to be extremely strong on defence. But by the third day, the human experts had come up with more innovative exploits than Mayhem, said David Brumley, a professor at Carnegie Mellon and chief executive of ForAllSecure.

Still, the automated system displayed its power, especially in keeping up with the scale of security challenges in the internet-of-things era. “The number of things automated systems can look at is so vast that it changes the game,” Brumley said.

Yan Shoshitaishvili, a PhD candidate who led the third-place team, a group from the University of California, Santa Barbara, is focusing his research on designing “centaur” systems that effectively combine machine firepower with human expertise.

Humans are still better than computers at understanding context — and security is so often defined by context. For example, you do want to broadcast your GPS location data to friends in a social app like Glympse; you do not want a programme sending out location data if you’re in a battlefield tank.

“In the real world,” Shoshitaishvili said, “humans can assist these automated systems. That’s the path ahead.” — The New York Times