Cognitive assistant for intelligence analysts in development at Mason
By Emily Verbiest, Contributor
Mason’s Learning Agents Center is developing a cognitive assistance system to aid intelligence analysts and other national security professionals with AI problem-solving tools.
The 4.5-year program was launched by the Intelligence Advanced Research Projects Activity (IARPA), an organization within the Office of the Director of National Intelligence, to develop tools and techniques that enhance the reasoning of intelligence analysts. The Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) program began in January 2017, and the primary contractors include Syracuse University, Monash University, the University of Melbourne and Mason.
Mason’s participation in CREATE is led by Gheorghe Tecuci, Mihai Boicu, Nancy Holincheck, and Tom Winston of the Learning Agents Center. Together, these professors are developing a cognitive assistant called Co-Arg, which stands for Cogent Argumentation with Crowd Elicitation.
Co-Arg is a system that seeks to tackle intelligence questions through an evidence-based reasoning methodology that combines analysts’ imagination, the reliability of computers, and the critical reasoning and insight of crowds.
Tecuci, a computer science professor and artificial intelligence researcher, explained, “We developed an intelligent cognitive assistant to help intelligence analysts ‘connect the dots,’ to better answer intelligence questions based on evidence – evidence that is often incomplete, ambiguous, contradictory, and of varying levels of credibility.”
Intelligence analysts address complex national and international security issues under tight deadlines and in high-stakes environments. To minimize the possibility of error and oversight, Co-Arg helps analysts formulate accurate and logical arguments.
For example, an intelligence analyst might face questions such as, “What happened to Flight 67?” or “How do certain foreign insurgencies affect U.S. security?” With Co-Arg, lead analysts and crowd analysts generate hypotheses, assess the probability of each one, and work toward the most probable answers based on the available information.
“We want to help intelligence analysts provide more accurate answers to the questions they are addressing,” said Tecuci.
Co-Arg functions by crowdsourcing from a team of analysts, searching for errors and biases in the arguments, and generating reports that provide clear lines of reasoning to justify conclusions. Co-Arg’s methodical approach to analysis increases the reliability and validity of the answers to intelligence questions.
Co-Arg strives to combine the creative capabilities and wisdom of the human mind with the logic and probabilistic reasoning of computers. A successful union would form a synergistic force that allows analysts to confront intelligence questions with more accuracy, validity and speed.
Co-Arg will be continually developed, evaluated, and revised for the duration of the program. Currently, a group of students at California State University, San Bernardino, is using Co-Arg experimentally in their courses. Their use serves as internal testing of the system, and they provide feedback to Mason’s Learning Agents Center. Similar Co-Arg intelligence analysis courses may eventually become available at Mason as well.
In the age of burgeoning automation, Co-Arg does not seek to replace intelligence analysts, Tecuci explained. Instead, the tool structures analyst reasoning in a visible and clear-cut way, thereby producing more defensible arguments.
Cognitive assistants such as Co-Arg could also serve as valuable tools in industries outside of the national security realm. Lawyers, politicians, economists and doctors could all potentially benefit from a cognitive assistant that enhances their understanding and problem-solving capabilities.
Photo Courtesy of Alexis Glenn/ Mason Creative Services