Viterbi School artificial intelligence expert Milind Tambe and Nathan Schurr (PhD 2007) brought the complex system into being, working directly with the L.A. County Fire Department as it evolved to make sure it remained relevant and useful.
The idea was to speed up a training simulation and make it more useful. The LAFD had previously run simulated training drills by hand, without the sophisticated computer tools that Tambe and Schurr later provided in a system dubbed DEFACTO. In one room, a B-team of veteran firefighters would make up a disaster and send bulletins to another room, where trainees would decide where to send equipment and how to respond. The trainees worked from crude pictures and moved models around a map.
This approach tied up dozens of top personnel for hours. Mobilizing a B-team was such a major organizational effort that as many trainees as possible had to be brought into the response group at once; with so many participants, each individual received little experience. "It's so costly to have large exercises," said LAFD Fire Captain Ron Roemer.
DEFACTO changed the situation entirely. Tambe and Schurr's sophisticated artificial intelligence system replaced the B-team altogether. Instead of a committee of firefighters, DEFACTO used committees of artificial intelligences, or "agents."
The agents created vivid disaster scenarios, complete with images and maps. Individuals or small teams could train on the system.
USC in flames: AI agents create scenarios for conflagrations, then help trainees fight the fictional fires on real city landscapes, such as this view of the University Park campus.
And instead of crude diagrams or pictures, the disaster was visible to trainees in full living color, in a high-fidelity, 3-D "Omnipresent Viewer" system. The advantages are clear, according to the fire department's Roemer in a published report. "It's a lot more controlled. You can see if you're heading toward a mistake much more quickly."
The agents also worked alongside trainees, proposing responses. Trainees could gauge their success by comparing their reactions against the scenarios proposed by the system's teams of artificial agents, and they learned how to delegate some responsibility for strategy to the agents in the system. The agent committees, from their side, "have the flexibility to limit human interaction."
Why? Schurr said that extensive, repeated work with the system, responding to numerous fictional disasters large and small, revealed a curious pattern. Humans did better than small committees of artificial agents, but they were barely better than larger committees, even when the course the agents proposed was not the best, because disagreements between the human and the agents resulted in a compromise plan. "Even wrong decisions can lead to better results," Schurr said.
While CREATE is now focusing exclusively on risk analysis rather than response, a close follow-on project, building on these insights, is alive and under development at the University of Maryland. Los Angeles fire and police personnel are looking forward to the next crop of tools. "It really tests your decision making," Roemer said.