Each year, computer security conferences host a high-tech version of the kids' game "capture the flag," in which teams of hackers and security researchers demonstrate their hacking prowess. The game requires teams to secure a computer system by identifying intentional and unintentional vulnerabilities in various software modules, all while launching attacks against, and defending against threats from, competing teams.

This week, DARPA, the Defense Advanced Research Projects Agency, hosted a version of a capture-the-flag contest in which the teams were autonomous bots. The event, held Thursday in Las Vegas alongside the DEF CON security conference, was the final competition of the agency's Cyber Grand Challenge, a $55 million hacking contest designed to spur innovation in the area of autonomous cyber warfare.

Seven teams of researchers from across the country fielded bot systems that competed to autonomously identify and patch software vulnerabilities that DARPA had planted in their systems, all while deflecting attacks from competing bots and launching their own attacks against the computer systems those bots were protecting. Teams' bots were scored on their ability to secure their own software and services, ensure their continued availability, and take advantage of vulnerabilities in competing teams' systems.
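DARPA hasn't published a simple formula in the materials I looked at, but to make those three scoring dimensions concrete, here's a minimal Python sketch of how availability, security, and offense might be combined into a single round score. The function name, the 2x "undefeated" bonus, and the multiplicative weighting are my own assumptions for illustration, not DARPA's actual scoring rules.

```python
# Hypothetical sketch of a CGC-style round score; the names and weighting
# are illustrative assumptions, not DARPA's actual scoring formula.

def round_score(availability: float, defended: bool, exploits_proven: int,
                total_opponents: int) -> float:
    """Combine the three judged dimensions into a single round score.

    availability     -- fraction of service functionality/performance retained (0..1)
    defended         -- True if no opponent proved a vulnerability against us this round
    exploits_proven  -- number of opposing systems we proved a vulnerability against
    total_opponents  -- number of opposing systems in the round
    """
    security = 2.0 if defended else 1.0          # assumed bonus for staying unexploited
    evaluation = 1.0 + exploits_proven / max(total_opponents, 1)
    return availability * security * evaluation  # multiplicative, so a downed service scores zero


# Example: a bot that kept 90% availability, was never exploited,
# and proved bugs against 3 of its 6 opponents.
print(round_score(0.9, True, 3, 6))  # 2.7
```

The multiplicative structure captures the trade-off the teams faced: a bot that takes its own services offline to patch them scores nothing for that round, no matter how well it attacks.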

From the looks of it, DARPA constructed a pretty elaborate physical environment for the contest, complete with an “air gap” to ensure that each system was acting totally on its own. Announcers followed along with the 96 rounds of action and provided a live play-by-play for onlookers, while referees ensured that each team played by the rules. With each round, DARPA deployed a new set of software for the bots to both defend and attack.

I watched segments of the 4+ hour video from the final competition and found it pretty fascinating, but my brief attempt to find details on how the various bot systems work came up empty.

Cade Metz's coverage of the competition for Wired painted an interesting picture of the different strategies each bot pursued in the contest. One bot, Rubeus, built by federal contractor Raytheon, took an aggressive tack, going after vulnerabilities in the other systems from the get-go. Another bot, Mech.Phish, didn't perform as well overall, but it did have a knack for finding and exploiting complex and subtle bugs in the challenge code.

Mayhem, a bot fielded by a team from Carnegie Mellon spin-out ForAllSecure, and the eventual winner of the $2M first prize, seemed rather focused on patching its own systems and keeping them up and running. The bot reportedly used statistical analyses throughout the game to weigh the costs and benefits of patching vulnerabilities (patching carries inherent risks and demands service downtime), and patched only the holes that this analysis deemed worthwhile.
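To make that cost-benefit idea concrete, here's a toy sketch of the kind of expected-value comparison a bot like Mayhem might run before patching. The function, probabilities, and costs are purely illustrative assumptions on my part, not ForAllSecure's actual analysis.

```python
# Illustrative sketch of the patch-or-not trade-off described above; the
# probabilities, costs, and names are assumptions, not ForAllSecure's code.

def should_patch(p_exploited: float, exploit_penalty: float,
                 downtime_cost: float, perf_overhead_cost: float) -> bool:
    """Patch only when the expected loss from leaving the bug beats the known cost of patching."""
    expected_loss_if_unpatched = p_exploited * exploit_penalty
    patch_cost = downtime_cost + perf_overhead_cost
    return expected_loss_if_unpatched > patch_cost


# A shallow, easily found bug: likely to be exploited, so patching pays off.
print(should_patch(p_exploited=0.8, exploit_penalty=10.0,
                   downtime_cost=2.0, perf_overhead_cost=1.0))   # True

# An obscure bug in rarely exercised code: the downtime isn't worth it.
print(should_patch(p_exploited=0.05, exploit_penalty=10.0,
                   downtime_cost=2.0, perf_overhead_cost=1.0))   # False
```

Under the contest's scoring, that kind of restraint makes sense: an unnecessary patch guarantees a hit to availability, while an unexploited bug may never cost you anything.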

Cybersecurity is an important and rapidly evolving use case for ML & AI, and there’s been quite a bit of commercial activity in the area in addition to innovation and research activities like the CGC.

This week, startup Distil Networks closed a $21 million series C funding round to help enterprise customers separate good bots from bad ones and keep the latter off their websites. Note that we're not talking about chatbots here, but rather the kind of web bots that abuse APIs, scrape websites, and probe them for vulnerabilities. The company uses machine learning techniques to detect when a bot is trying to cloak its activity by spoofing multiple user accounts, browsers, and locations.
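As a toy illustration of the kind of signal such a system might compute, here's a short Python sketch that flags clients rotating through an implausible number of browsers or locations. The field names and thresholds are my own assumptions; Distil's real pipeline presumably feeds features like these into machine learning models rather than a hand-rolled rule like this one.

```python
# Toy illustration of one signal a bot-detection system might compute; the
# thresholds and field names are assumptions, not Distil Networks' actual method.
from collections import defaultdict

def flag_suspicious_clients(requests, max_agents=3, max_locations=3):
    """Flag client fingerprints that rotate through many user agents or locations,
    a common sign of a bot spoofing multiple browsers and origins."""
    agents = defaultdict(set)
    locations = defaultdict(set)
    for req in requests:
        agents[req["client_id"]].add(req["user_agent"])
        locations[req["client_id"]].add(req["geo"])
    return {cid for cid in agents
            if len(agents[cid]) > max_agents or len(locations[cid]) > max_locations}


requests = [
    {"client_id": "a", "user_agent": "Firefox", "geo": "US"},
    {"client_id": "a", "user_agent": "Firefox", "geo": "US"},
    # Client "b" claims to be four different browsers from three countries.
    {"client_id": "b", "user_agent": "Chrome", "geo": "US"},
    {"client_id": "b", "user_agent": "Safari", "geo": "DE"},
    {"client_id": "b", "user_agent": "Edge", "geo": "BR"},
    {"client_id": "b", "user_agent": "Firefox", "geo": "US"},
]
print(flag_suspicious_clients(requests))  # {'b'}
```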

And last month, another cybersecurity startup, Darktrace Ltd., raised a $64 million series C to help enterprises identify and defend against a variety of network threats.
