An increasing number of military applications demand that humans and robots/machines team and work together as peers over long periods to solve complex problems in open worlds. Examples of such tasks include search and rescue, and executive command and guidance of unmanned vehicles. Robots operating in such remote human-robot teams need to engage in goal-directed reasoning with partial models of the world and of their objectives, while responding to state, goal and objective updates that come to them both from a rich, dynamic world and from the human commanders who exercise control over them. Typical reactive robotic architectures are inadequate in such scenarios, since they come with hard-wired implicit goals. Instead, teaming robots require more explicit planning components that can take new requirements and directives into consideration. Most existing work on decision-making for human-robot teams focuses on automating either path-planning decisions or task-assignment decisions, with the human taking an operator role. Effective peer-to-peer teaming requires full-fledged action planning on the robot's part. Similarly, most existing work in automated planning ignores the humans in the loop, and assumes complete knowledge of models and objectives. Finally, pure learning-based approaches that attempt to first learn complete models before using them are not well suited, as the robot does not have the luxury of waiting until the models become complete. The broad aim of the proposed research is to understand the challenges faced by a planner that guides a robot in such teaming scenarios, and to develop effective frameworks for handling those challenges. The challenges stem from the long-term nature of teaming tasks and the open-world nature of the environment, as well as from the need to support effective communication between the human and the robot.
These in turn demand the ability to deal with incompletely specified models, uncertain objectives, and open, dynamically changing worlds, as well as the ability to take continual human instructions (including those that change and/or modify goals and action models) and return meaningful status reports. In this research, we propose to address these challenges. Specifically, we propose to undertake research tasks aimed respectively at handling incomplete models through the generation of robust plans; handling uncertain objectives and partial preferences through diverse plans and open-world conditional goals; and handling continual state, goal and model updates with the help of commitment- and opportunity-sensitive replanning. In all these tasks, we propose to pay particular attention to issues of interfacing with the humans on the team. The proposed work leverages and builds on promising preliminary results from Kambhampati's recent research on planning and Cooke's research on evaluating effective human-machine communication frameworks. It is expected to make fundamental contributions to automated planning as well as to decision-making for human-robot teams. The proposed research is well aligned with the goals of the Science of Autonomy program. In particular, it addresses the following two tough problems: (i) dynamically generating, assessing and refining plans with partially known environments and partially known objectives, and (ii) planning when the beliefs of both self and other agents change contingently with the environment (e.g., when new objects, relations and events are observed, or when new goals emerge or are inferred).
Effective start/end date: 5/1/13 → 4/30/16
- DOD-NAVY: Office of Naval Research (ONR): $450,000.00