![Howard Shrobe](/sites/default/files/styles/headshot/public/images/people/profile/Howard%20Shrobe_%20002.jpg?h=dfcdaab2&itok=WWTgaCCy)
PI
Core/Dual
Howard Shrobe
Howard Shrobe is currently serving as a Program Manager in DARPA's Information Innovation Office (I2O), where he leads programs in AI, computer architecture, and cybersecurity. He is not conducting research on campus while he serves his term at DARPA.
The following is an overview of his research interests:
Imagine a world where computing systems are both trustworthy and as natural to interact with as a colleague. This dream may seem a long way off. After all, today’s computers are woefully insecure, with systems in industrial infrastructure, finance, and the military compromised regularly, and they are also woefully unnatural in the way they interact with us. But thanks to research at the intersection of cybersecurity and artificial intelligence (AI), the main weakness underlying these problems, namely that computers don’t really know what they’re doing, is being addressed.
Dr. Howard Shrobe is a Principal Research Scientist at MIT CSAIL. His goal is to create and deploy high-performance, reliable, and secure computing systems that are easy to interact with, and he and his team pursue this in a number of ways across two major research areas that intersect: systems and security, and AI.
When Dr. Shrobe first came to MIT, he was developing operating systems and was attracted to MIT by the highly innovative Multics project. But that project was nearly over, so he drifted into the Artificial Intelligence Lab (now CSAIL) because there was more systems work available there. As he worked in the lab, he became more and more interested in AI, particularly cognitive-style AI, which focuses on reasoning and representation. In the 1990s, he began to see that these two areas of interest were meeting in the field of cybersecurity.
Some of Dr. Shrobe’s work is in pure systems, but much of his approach to cybersecurity is to bring in AI-style reasoning as part of the solution. For example, he and his group are designing computing systems that actively manage metadata in order to prevent attacks and to help computers understand what they are doing at different levels, including hardware and low-level systems software. One current effort, designing a new processor that tags every word in memory and in registers with extra metadata, is pure systems work; in other areas he uses AI planning and reasoning techniques to provide a second layer of defense and tools for analyzing systems.
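To make the tagging idea concrete, here is a minimal Python sketch. It is not Dr. Shrobe's processor design; the tag names, policy rules, and helper functions below are all invented for illustration. The point is simply that every word carries metadata and a policy check runs before each operation is allowed to proceed.

```python
# A minimal sketch (not the actual processor design) of metadata tagging:
# each word carries a tag, and a hypothetical policy check guards every
# operation. Tags and rules here are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedWord:
    value: int
    tag: str  # e.g. "data", "pointer", "code" -- invented tag vocabulary

class PolicyViolation(Exception):
    pass

def checked_load(memory: dict[int, TaggedWord], addr: TaggedWord) -> TaggedWord:
    """Dereference a word only if its tag says it may be used as a pointer."""
    if addr.tag != "pointer":
        raise PolicyViolation(f"word tagged {addr.tag!r} used as an address")
    return memory[addr.value]

def checked_add(a: TaggedWord, b: TaggedWord) -> TaggedWord:
    """Tag-aware arithmetic: forbid arithmetic that touches code words."""
    if "code" in (a.tag, b.tag):
        raise PolicyViolation("arithmetic on a code-tagged word")
    # Pointer + data stays a pointer; data + data stays data.
    tag = "pointer" if "pointer" in (a.tag, b.tag) else "data"
    return TaggedWord(a.value + b.value, tag)

memory = {100: TaggedWord(42, "data")}
p = TaggedWord(100, "pointer")
print(checked_load(memory, p).value)        # 42 -- the policy allows this
q = checked_add(p, TaggedWord(1, "data"))   # pointer arithmetic stays tagged
try:
    checked_load(memory, TaggedWord(100, "data"))  # data forged into a pointer
except PolicyViolation as e:
    print("blocked:", e)
```

In a real tagged architecture these checks would run in hardware on every instruction rather than in software, which is what makes the approach a systems effort rather than an AI one.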
The Attack Planner is an example of one such system built by Dr. Shrobe and his team; it uses AI to model the way attackers think about attacking systems. Typically, malicious actors form an overall plan known as the Cyber Kill Chain, which involves the high-level steps of the strategy: initial penetration, lateral movement from one machine to another, privilege escalation, and finally exploitation (and in many cases, obfuscation). The reasoning part of the system constructs as many plans for the attacker as it possibly can, as a kind of auditing tool, so that users can be better prepared for all of the ways attackers could penetrate a system and the various consequences of an attack.
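The sketch below is a toy version of that enumeration idea, not the actual Attack Planner. The network topology, host names, and kill-chain step names are all invented; the point is that the reasoner exhaustively generates candidate attack plans from a model of the system rather than waiting to observe a real attack.

```python
# A toy plan enumerator: generate every kill-chain sequence an attacker
# could take through a small, invented network model.

network = {            # host -> hosts reachable from it (lateral movement)
    "workstation": ["fileserver", "mailserver"],
    "fileserver": ["dbserver"],
    "mailserver": ["dbserver"],
    "dbserver": [],
}
entry_points = ["workstation"]   # hosts exposed to initial penetration
target = "dbserver"              # host whose compromise completes the plan

def plans(host, path):
    """Yield every lateral-movement path from an entry point to the target."""
    if host == target:
        yield path + [("escalate_privilege", host), ("exploit", host)]
        return
    for nxt in network[host]:
        if all(step[1] != nxt for step in path):   # avoid revisiting hosts
            yield from plans(nxt, path + [("move_laterally", nxt)])

all_plans = [
    plan
    for entry in entry_points
    for plan in plans(entry, [("penetrate", entry)])
]
for i, plan in enumerate(all_plans, 1):
    print(f"plan {i}: " + " -> ".join(f"{a}({h})" for a, h in plan))
```

Run on this model, the enumerator finds both routes to the database server (via the file server and via the mail server), which is exactly the kind of exhaustive audit output a defender would review.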
Another blend of AI and cybersecurity Dr. Shrobe and his group have worked on involves building monitors that watch a system while it is executing and compare what the system is doing to a symbolic model of what it is supposed to do. If the system’s actual behavior goes outside the envelope of what the model sanctions, then something is wrong. The model can also be used to reason backward and figure out where the violation might have originally occurred.
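Here is a minimal sketch of that monitoring idea under assumed details: the "model" is reduced to a table of sanctioned state transitions, and the event names are invented. A real symbolic model would be far richer, but the envelope check works the same way, and the index of the first unsanctioned event is where backward reasoning about the violation would start.

```python
# A minimal execution monitor: replay an observed event trace against a
# model of sanctioned transitions and report the first step that falls
# outside the envelope. States and events are invented for illustration.

ALLOWED = {                       # state -> {event: next state} the model sanctions
    "idle":    {"open_file": "reading"},
    "reading": {"read": "reading", "close_file": "idle"},
}

def monitor(trace, state="idle"):
    """Return (ok, index) -- index of the first unsanctioned event, if any."""
    for i, event in enumerate(trace):
        next_states = ALLOWED.get(state, {})
        if event not in next_states:
            return False, i       # behavior left the model's envelope here
        state = next_states[event]
    return True, None

ok, where = monitor(["open_file", "read", "write_network", "close_file"])
if not ok:
    print(f"violation at step {where}: execution diverged from the model")
```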
Prior to starting his tour at DARPA, Dr. Shrobe worked with other CSAIL members and DOLL Labs on a DARPA project called ASIST, which is attempting to build agents that help improve the functioning of teams. The ASIST program deals with what is known in psychology as “theory of mind”, which is essential to modeling the Other and the oftentimes subtle ways humans interact with one another. People nod, use hand gestures, and read facial expressions. Computers, on the other hand, don’t do that, so the goal here is to develop a kind of reasoning that lets a system reflect on what the people it’s interacting with are actually trying to do, what they’re thinking, what their plans might be, and how the computing system can interact with them in a constructive way. A key element of the team's approach is to use stories about prior interactions as precedents for what to do in the current situation.
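One simple way to read "stories as precedents" is case-based retrieval, sketched below. This is an assumption for illustration, not ASIST's actual method; the situation features and responses are invented.

```python
# A highly simplified precedent-retrieval sketch: represent each prior story
# by the features of its situation, retrieve the most similar past story,
# and reuse the response that worked then. All cases below are invented.

precedents = [  # (situation features, response that worked)
    ({"teammate_silent", "task_overdue"}, "offer help explicitly"),
    ({"teammate_silent", "task_on_track"}, "wait and observe"),
    ({"conflicting_plans"}, "ask team to restate goals"),
]

def best_precedent(situation: set[str]) -> str:
    """Pick the prior story whose situation overlaps most with the current one."""
    overlap = lambda case: len(case[0] & situation)
    features, response = max(precedents, key=overlap)
    return response

print(best_precedent({"teammate_silent", "task_overdue", "new_member"}))
# -> "offer help explicitly"
```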
Last updated Feb 28 '23