Using Metareasoning to Maintain and Restore Safety for Reliable Autonomy

Abstract

While developers carefully specify the high-level decision-making models of autonomous systems, it is infeasible for these models to ensure safety across every scenario encountered during operation. We therefore propose a safety metareasoning system that mitigates the severity of the system's safety concerns while minimizing interference with its task: the system executes in parallel a task process that completes the task and safety processes that each address a safety concern, with a conflict resolver arbitrating among them. This paper offers a definition of a safety metareasoning system, an evaluation rating generation algorithm for a safety process, a conflict resolution algorithm for a conflict resolver, an application of our approach to planetary rover exploration, and a demonstration that our approach is effective in simulation.
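To make the architecture concrete, below is a minimal, hypothetical sketch of the arbitration idea: each safety process rates the current observation with a requested intervention, and the conflict resolver defers to the most severe request. The intervention levels, observation fields, and resolution rule are illustrative assumptions for this sketch, not the algorithms defined in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Assumed severity-ordered interventions a safety process can request
# (ordering NONE < SLOW_DOWN < STOP is an illustrative assumption).
INTERVENTIONS = ["NONE", "SLOW_DOWN", "STOP"]


@dataclass
class SafetyProcess:
    """A safety process monitors one safety concern and rates the current observation."""
    name: str
    rate: Callable[[Dict[str, float]], str]  # observation -> requested intervention


def resolve_conflicts(ratings: List[str]) -> str:
    """A simple conflict resolver: defer to the most severe requested intervention."""
    return max(ratings, key=INTERVENTIONS.index)


# Usage example with made-up safety concerns for a planetary rover.
def dust_storm_rating(obs: Dict[str, float]) -> str:
    return "STOP" if obs["dust_level"] > 0.8 else "NONE"


def low_battery_rating(obs: Dict[str, float]) -> str:
    return "SLOW_DOWN" if obs["battery"] < 0.2 else "NONE"


safety_processes = [
    SafetyProcess("dust_storm", dust_storm_rating),
    SafetyProcess("low_battery", low_battery_rating),
]

observation = {"dust_level": 0.9, "battery": 0.5}
ratings = [p.rate(observation) for p in safety_processes]
print(resolve_conflicts(ratings))  # -> "STOP"
```

In the full system, the task process would continue proposing actions while the resolved intervention constrains or overrides them, so safety is maintained with limited interference to the task.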

Publication
International Joint Conference on Artificial Intelligence (IJCAI) R2AW Workshop