Sketch-based navigation for mobile robots using qualitative landmark states
In this work, a system for navigating a mobile robot along a sketched route is proposed. The sketch is drawn on a PDA screen by a human operator and contains approximate landmarks and a path, similar to a sketch one person might provide to another to reach a goal. The robot receives the sketch, detects the objects and the route, and extracts spatial relations between itself and surrounding objects at crucial nodes along the sketched route. From these spatial relations, a sequence of Qualitative Landmark States (QLSs) and associated robot commands serves as a guide for navigation in the real world. The robot then executes the sketched route by matching landmark states observed in the real world to the extracted states. The approach is validated and tested using sketches drawn by independent study participants, both with a real robot and in a simulator. Special sketches and robot operating environments are used to illustrate results in extreme cases and to independently test the extraction, identification, and matching of QLSs. We show that QLSs based on spatial relations can serve as a common route representation shared by a sketched route map and a physical environment. The selection of QLSs is crucial to the success of such an approach, and the algorithm provides a method for choosing the correct states for successful navigation. The approach does not depend on the number or type of sensors on the robot and does not assume a particular type of robot; the strategy can work with any sensing method that provides a two-dimensional (top-view) object representation. Nor does the approach depend on the route chosen or on the size, shape, or position of the objects. The algorithm can tolerate a certain degree of uncertainty and inconsistency in sketching (scaling of object size and position, distortions, and incompleteness of object representation).
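To make the idea concrete, the sketch below is a minimal, hypothetical Python illustration of extracting and matching a Qualitative Landmark State: each landmark is classified into a coarse direction relative to the robot, and a state matches when every expected relation holds in the observation. The function names, the four-way direction partition, and the set-based matching rule are illustrative assumptions, not the paper's exact formulation.

```python
import math

def qualitative_relation(robot_xy, robot_heading_deg, landmark_xy):
    """Classify a landmark into a coarse direction in the robot's frame.

    robot_heading_deg is the robot's heading in degrees (0 = +x axis).
    This four-way partition (front/left/right/behind) is an assumed
    simplification of the qualitative spatial relations in the paper.
    """
    dx = landmark_xy[0] - robot_xy[0]
    dy = landmark_xy[1] - robot_xy[1]
    # Bearing of the landmark relative to the heading, wrapped to [-180, 180).
    angle = math.degrees(math.atan2(dy, dx)) - robot_heading_deg
    angle = (angle + 180.0) % 360.0 - 180.0
    if -45.0 <= angle < 45.0:
        return "front"
    if 45.0 <= angle < 135.0:
        return "left"
    if -135.0 <= angle < -45.0:
        return "right"
    return "behind"

def extract_qls(robot_xy, robot_heading_deg, landmarks):
    """A QLS at one route node: the set of (landmark, relation) pairs."""
    return frozenset(
        (name, qualitative_relation(robot_xy, robot_heading_deg, xy))
        for name, xy in landmarks.items()
    )

def matches(observed_qls, expected_qls):
    """The robot is at the expected state when every expected relation
    also holds in the currently observed state."""
    return expected_qls <= observed_qls
```

Because the comparison is purely qualitative, the same `extract_qls` can be run once on the sketch coordinates and again on sensed real-world coordinates; navigation then reduces to advancing through the sketched sequence whenever `matches` succeeds, which is why the scheme is insensitive to scale and moderate sketching distortion.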
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.