Eyes in the Back of Your Head: Robust Visual Teach & Repeat Using Multiple Stereo Cameras

Autonomous path-following robots that use vision-based navigation are appealing for a wide variety of tedious and dangerous applications. However, a reliance on matching point-based visual features often renders vision-based navigation unreliable over extended periods of time in unstructured, outdoor environments. Specifically, scene changes caused by lighting, weather, and seasonal variation alter the visual features and reduce the number of feature associations across time. This paper presents an autonomous, path-following system that uses multiple stereo cameras to increase the system's field of view and to navigate reliably in these feature-limited scenarios.

The addition of a second camera to the localization pipeline greatly increases the probability that a stable feature will be in the robot's field of view at any given moment, extending the period over which the robot can navigate reliably. We experimentally validate our algorithm through a challenging winter field trial, in which the robot autonomously traverses a 250 m path six times with an autonomy rate of 100% despite significant changes in the appearance of the scene due to lighting and melting snow. We show that adding a second stereo camera to the system significantly increases the autonomy window compared with current state-of-the-art path-following methods.
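The intuition behind the second camera can be illustrated with a small simulation. The sketch below is not the authors' implementation; the feature sets, the survival probability, the matching helper, and the inlier threshold are all illustrative assumptions. It only shows how pooling matches from two independent views raises the chance that enough stable features survive appearance change for localization to succeed.

```python
# Toy illustration (not the paper's pipeline): each camera independently
# re-observes a random subset of the mapped (teach-pass) features, and
# localization succeeds when the pooled matches exceed an inlier threshold.
import random

random.seed(0)

MAP_FEATS = set(range(100))   # features stored during the teach pass
SURVIVAL = 0.08               # assumed chance a feature survives scene change
MIN_INLIERS = 10              # assumed threshold for a successful localization


def surviving_features():
    """Each mapped feature remains matchable with probability SURVIVAL."""
    return {f for f in MAP_FEATS if random.random() < SURVIVAL}


def localize(camera_views):
    """Pool matches across all cameras and test against the inlier threshold."""
    matches = set()
    for live_feats in camera_views:
        matches |= live_feats & MAP_FEATS
    return len(matches) >= MIN_INLIERS


trials = 10_000
one_cam = sum(localize([surviving_features()]) for _ in range(trials))
two_cams = sum(localize([surviving_features(), surviving_features()])
               for _ in range(trials))

print(f"single camera : {one_cam / trials:.1%} successful localizations")
print(f"two cameras   : {two_cams / trials:.1%} successful localizations")
```

Under these toy numbers the single camera localizes in roughly a third of trials, while pooling a second independent view pushes success above ninety percent, which mirrors the qualitative claim that a wider effective field of view keeps stable features in sight for longer.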