RoboHow.Cog will investigate a knowledge-enabled and plan-based approach to robot programming and control in which the knowledge for accomplishing everyday manipulation tasks is semi-automatically acquired from instructions on the World Wide Web, from human instruction and demonstration, and from haptic demonstration.
This approach will be made possible by developing a novel computational model for the knowledge-enabled and plan-based control of everyday manipulation that advances state-of-the-art robot planning approaches in that it (1) allows the specification of concurrent percept-guided robot behavior, (2) enables programmers to write sketchy plans that can express the kinds of incompleteness, vagueness, and ambiguity typically found in human instructions, (3) supports partial movement and manipulation action specifications as first-class objects, and (4) allows the incremental refinement of incomplete specification components by adding heterogeneous pieces of information.
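As a rough illustration of features (2)–(4), the following Python sketch shows what a partial action specification, treated as a first-class object and refined incrementally with heterogeneous information, might look like. All class, slot, and method names (PartialActionSpec, refine, and so on) are invented for illustration and do not denote the project's actual plan-language constructs.

```python
# Hypothetical sketch of a "sketchy plan" step (all names invented for
# illustration; the project's actual plan language may differ).

from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class PartialActionSpec:
    """A partial manipulation specification as a first-class object.

    Slots left as None express the vagueness of a human instruction:
    "pour the pancake mix onto the griddle" says nothing about grasp
    type, tilt angle, or pouring duration.
    """
    action: str
    obj: str
    grasp_type: Optional[str] = None                    # open slot
    motion_constraints: Dict[str, Any] = field(default_factory=dict)

    def refine(self, **knowledge) -> "PartialActionSpec":
        """Incrementally add heterogeneous pieces of information;
        already-bound slots are kept, so refinement only reduces vagueness."""
        for key, value in knowledge.items():
            if hasattr(self, key) and getattr(self, key) is None:
                setattr(self, key, value)               # fill an open slot
            else:
                self.motion_constraints.setdefault(key, value)
        return self

    def is_executable(self) -> bool:
        return self.grasp_type is not None and bool(self.motion_constraints)


# A sketchy plan step extracted from a web instruction: underspecified...
step = PartialActionSpec(action="pour", obj="pancake-mix")

# ...then refined with knowledge from demonstration and perception.
step.refine(grasp_type="cylindrical")
step.refine(tilt_angle_deg=70, keep_upright_until="above-griddle")

assert step.is_executable()
```

The design choice sketched here, keeping the specification executable at every stage of refinement rather than requiring a complete plan up front, is one plausible way to realize incremental refinement as described above.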
Knowledge-enabled control will be made possible through an interface layer that provides access to state-of-the-art perception and control mechanisms. These mechanisms extend constraint- and optimization-based movement specification and execution methods, permitting the force-adaptive control of movements that achieve the desired effects and avoid unwanted ones. In addition, novel perception mechanisms that satisfy the knowledge preconditions of plans and monitor the effects of actions will make the RoboHow.Cog approach feasible. Constraint- and optimization-based movement specification and execution will also build a stable and sustainable bridge between symbolic high-level control and the continuous time and space of the robots' motion and perception: at the high level, the robot reasons about and manipulates symbolic descriptions of movement specifications, while at the low level, the continuous trajectories in time and space must satisfy those specifications.
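To make the symbolic-to-continuous bridge concrete, here is a minimal, self-contained sketch of the general idea: symbolic constraint names are grounded as scalar error functions over the continuous state, and a simple first-order controller drives the state toward satisfying them. The constraint names, error functions, and plain gradient-descent loop are illustrative assumptions only, far simpler than the constraint- and optimization-based solvers the project builds on.

```python
# Sketch: grounding symbolic movement constraints in a continuous
# controller (illustrative only; not the project's actual framework).

import numpy as np

# Symbolic level: the plan names constraints on task-space features.
symbolic_spec = ["height_above(tool, table, 0.05)", "align_vertical(tool)"]

# Continuous level: each constraint becomes a scalar error e(x) that the
# controller must drive to zero along the trajectory, x = (height, tilt).
def height_error(x):
    return min(0.0, x[0] - 0.05)     # negative only below 5 cm

def tilt_error(x):
    return x[1]                      # zero tilt = perfectly vertical

errors = [height_error, tilt_error]

def numeric_gradient(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# First-order controller: descend the summed squared constraint errors.
x = np.array([0.01, 0.4])            # start: too low and tilted
for _ in range(200):
    step = sum(e(x) * numeric_gradient(e, x) for e in errors)
    x = x - step

print(x)  # height >= 0.05, tilt near 0: the trajectory satisfies the symbols
```

The point of the sketch is the division of labor described above: the high level manipulates the entries of symbolic_spec, while the low level only ever sees the numeric error functions they compile to.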
The strategy of the RoboHow.Cog project is to develop a framework through which robots can acquire the knowledge necessary for accomplishing the envisaged everyday manipulation tasks in a combination of complementary ways: (i) semi-automatically from instructions on the World Wide Web; (ii) in a supervised way via human instruction and demonstration; and (iii) on the basis of (both previously captured and “live”) haptic demonstrations.
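Purely for illustration, the toy sketch below shows one way knowledge fragments from the three complementary channels could be merged into a single action specification; all dictionary keys and values are hypothetical.

```python
# Hypothetical fusion of knowledge from the three acquisition channels.

web_instruction = {"action": "flip", "obj": "pancake", "tool": "spatula"}
human_demo      = {"grasp_type": "pinch", "approach_dir": "from-side"}
haptic_demo     = {"contact_force_N": 1.5, "lift_velocity_mps": 0.1}

def merge(*sources):
    """Each source fills in slots that earlier sources left unspecified."""
    spec = {}
    for source in sources:
        for key, value in source.items():
            spec.setdefault(key, value)
    return spec

action_spec = merge(web_instruction, human_demo, haptic_demo)
print(action_spec)
```

In this reading, the web instruction supplies the plan skeleton, while the supervised and haptic demonstrations contribute the movement parameters that the instruction leaves open.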