Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

Authors: Martin Persson, Tom Duckett, Achim J. Lilienthal
Publisher: Elsevier BV

ABSTRACT

This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors, predicting where the mobile robot can expect to find buildings and potentially drivable ground.
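The abstract does not specify the segmentation algorithm used for the aerial image. As a hedged illustration only, the general idea of extending ground-level semantic labels across an image can be sketched with seeded region growing: pixels observed by the robot serve as labeled seeds (e.g. "building" or "ground"), and each label spreads to neighbouring pixels of similar intensity. The function name and parameters below are hypothetical, not from the paper.

```python
from collections import deque

def seeded_region_growing(image, seeds, threshold):
    """Grow labeled regions from seed pixels over a 2D grayscale image.

    image: list of rows of pixel intensities.
    seeds: dict mapping (row, col) -> semantic label, e.g. 'building'.
    threshold: max intensity difference for absorbing a neighbour pixel.
    Returns a label grid of the same shape (None where nothing grew).
    """
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    queue = deque()
    for (r, c), lab in seeds.items():
        labels[r][c] = lab
        queue.append((r, c))
    # Breadth-first growth: each labeled pixel tries to claim
    # 4-connected neighbours whose intensity is close to its own.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] is None:
                if abs(image[nr][nc] - image[r][c]) <= threshold:
                    labels[nr][nc] = labels[r][c]
                    queue.append((nr, nc))
    return labels

# Toy aerial image: a bright roof patch (top-left) on dark ground.
image = [
    [200, 200, 30, 30],
    [200, 200, 30, 30],
    [ 30,  30, 30, 30],
    [ 30,  30, 30, 30],
]
# Seeds as a robot might provide them from ground-level observations.
seeds = {(0, 0): 'building', (3, 3): 'ground'}
labels = seeded_region_growing(image, seeds, threshold=20)
```

In this toy example the 'building' label fills the bright 2x2 block and 'ground' fills the rest, extending the robot's labels over pixels it never directly observed.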
