"Think about the system as a virtual analyst," says Kalyan Veeramachaneni, co-creator of MIT's latest artificial intelligence. He's a research scientist at the university's Computer Science and Artificial Intelligence Laboratory who, along with Ignacio Arnaldo, built an AI that acts as a lookout for the age of cyber warfare. AI2 (short for Artificial Intelligence Squared) is a system designed to spot a hacking attack better than either humans or existing software can alone. The pair claim that the program can detect 85 percent of malicious attacks, a figure that should rise the more it learns. We can already imagine Sony's IT gurus beating a path to Massachusetts with a suitcase stuffed full of unmarked bills.
Existing threat-detection systems broadly fall into two categories: software that automatically detects patterns and analysis done by humans. AI2's gimmick is that it mashes together a handful of different machine learning tools and asks its flesh-and-blood counterparts for help. When it thinks it has found a pattern amongst the noise of the data, it offers it up to a person for a second opinion. Over time, AI2 learns from its errors and from what the human experts tell it. As Arnaldo says, "it continuously generates new models that it can refine in as little as two hours."
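MIT hasn't published AI2's internals in this article, but the loop it describes -- an unsupervised detector surfaces its most suspicious events, an analyst labels them, and those labels refine the next round of flagging -- can be sketched in miniature. Everything below (the `requests_per_min` feature, the scoring and threshold logic) is a hypothetical toy, not AI2's actual method:

```python
def unsupervised_score(event, baseline_mean):
    # Toy outlier score: distance from the historical baseline rate.
    return abs(event["requests_per_min"] - baseline_mean)

def flag_top_k(events, baseline_mean, k):
    # Rank every event by outlier score and surface only the top k
    # for a human analyst's second opinion.
    ranked = sorted(events,
                    key=lambda e: unsupervised_score(e, baseline_mean),
                    reverse=True)
    return ranked[:k]

def refine_threshold(labeled):
    # Learn the lowest score the analyst actually confirmed as an
    # attack; future rounds can skip benign outliers scoring below it.
    malicious = [score for score, is_attack in labeled if is_attack]
    return min(malicious) if malicious else float("inf")

# Day one: the detector flags its three most anomalous events.
events = [{"requests_per_min": v} for v in [10, 12, 500, 11, 300, 8, 13]]
flagged = flag_top_k(events, baseline_mean=11, k=3)

# The analyst confirms two attacks and rejects one false alarm;
# the feedback tightens the threshold for the next day's run.
feedback = [(489, True), (289, True), (3, False)]
threshold = refine_threshold(feedback)  # benign outliers below 289 are skipped
```

The point of the design is the division of labor: the machine does the ranking across millions of events, while the scarce human attention is spent only on the top of the list -- and each judgment shrinks tomorrow's pile.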
On its first day of operation, AI2 flagged 200 events to its masters, each one judged to be a cyberattack. After a handful of days picking up the dos and don'ts from operators, that figure had dropped to just 40. That frees up fleshy operators to concentrate on the rest of their job and to give each flagged incident more of their attention. According to Nitesh Chawla, professor of computer science at the University of Notre Dame, AI2 "has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover." Maybe Mossack Fonseca will be second in the line of clients racing to MIT's door.