Project Details
Local Perception for the Autonomous Navigation of Multicopters
Applicant
Professor Dr. Sven Behnke
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Geophysics
Term
from 2011 to 2020
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 166047863
The objective of the proposed project is to generate an environment representation for an autonomous copter that allows for safe 3D navigation and obstacle avoidance (in P3). This representation is based on the pose estimate of the copter (P1) and on measurements from onboard distance sensors and cameras. In addition to the 3D laser range scanner developed in the first project phase and the ultrasonic distance sensors, further modalities shall be integrated: radar sensors and time-of-flight cameras. Obstacles detected by the existing multi-camera system will be incorporated to a greater extent, in particular visual object points, which are generated in P4 by stereo triangulation and bundle adjustment, and semi-dense visual obstacles from P5. Obstacle detection must work reliably even when individual modalities fail, e.g. due to obstacle properties or lighting conditions. The requirements for the environment representation are derived from navigation planning. To increase the copter's level of autonomy, not only egocentric representations with relative precision but also allocentric maps will be created onboard. The egocentric map will be maintained at high frequency by registering the most recent laser scanner and camera measurements. The registration of all measurements will be optimized globally to generate an allocentric environment representation; for this, the GNSS pose from P1 will be incorporated. The calibration of the multimodal sensor system will be continuously refined by minimizing registration errors. Registration will be performed by graph optimization, for which we will develop new methods for the simultaneous registration of multiple modalities. We will also work on the modeling of dynamic obstacles. These will be separated from the static environment and modeled individually, which is needed for motion prediction, the basis for anticipatory navigation planning in P3. Furthermore, in cooperation with P7, we will create a semantic categorization of the environment. Surfaces will be assigned to navigation-relevant categories such as floor, facade, roof, and vegetation, and relevant objects such as persons, vehicles, and windows will be detected. To this end, we will advance methods for the 3D fusion of semantic categorization, object detection, and learning from few annotated examples.
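To illustrate the multimodal redundancy described above, the following minimal sketch fuses obstacle evidence from several range modalities into an egocentric grid, so a cell stays occupied as long as at least one modality still detects the obstacle. The cell size, grid extent, modality names, and confidence weights are illustrative assumptions, not values from the project.

```python
import numpy as np

CELL = 0.25   # assumed grid resolution in metres per cell
HALF = 40     # grid covers roughly +/- 10 m around the copter

def to_cell(xy):
    """Map a metric point (copter-centric) to grid indices."""
    idx = np.floor(np.asarray(xy) / CELL).astype(int) + HALF
    return tuple(np.clip(idx, 0, 2 * HALF - 1))

def fuse(detections):
    """detections: {modality: list of (x, y) obstacle points}.
    Evidence is combined per cell with a noisy-or, so losing one
    modality (e.g. ultrasonic in wind, cameras in darkness) does
    not erase obstacles that another modality still sees."""
    conf = {'laser': 0.9, 'tof': 0.7, 'radar': 0.6, 'ultrasonic': 0.4}
    grid = np.zeros((2 * HALF, 2 * HALF))
    for mod, points in detections.items():
        for p in points:
            i, j = to_cell(p)
            grid[i, j] = 1.0 - (1.0 - grid[i, j]) * (1.0 - conf[mod])
    return grid

grid = fuse({'laser': [(2.0, 1.0)], 'radar': [(2.0, 1.0), (-3.0, 4.0)]})
print(grid.max())   # cell seen by laser and radar: 1 - 0.1*0.4 = 0.96
```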
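The global registration with GNSS priors mentioned above can be illustrated with a toy pose graph. The sketch below restricts poses to 2D positions (a real system would optimize over SE(3)) and fuses relative scan-matching constraints with sparse absolute GNSS fixes by weighted linear least squares; all weights and the example trajectory are assumptions made up for the illustration.

```python
import numpy as np

def optimize_pose_graph(n, odom, gnss, odom_w=1.0, gnss_w=0.1):
    """Least-squares fusion of relative registration constraints
    with absolute GNSS priors over n 2D poses.
    odom: list of (i, j, dxy) with x_j - x_i ~ dxy
    gnss: list of (i, xy) with x_i ~ xy
    Returns the optimized (n, 2) position array."""
    rows, rhs = [], []
    for i, j, dxy in odom:                 # relative constraints
        for k in range(2):
            r = np.zeros(2 * n)
            r[2 * j + k], r[2 * i + k] = odom_w, -odom_w
            rows.append(r)
            rhs.append(odom_w * dxy[k])
    for i, xy in gnss:                     # absolute priors
        for k in range(2):
            r = np.zeros(2 * n)
            r[2 * i + k] = gnss_w
            rows.append(r)
            rhs.append(gnss_w * xy[k])
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x.reshape(n, 2)

# toy example: odometry drifting in y, corrected by GNSS at both ends
odom = [(i, i + 1, np.array([1.0, 0.05])) for i in range(4)]
gnss = [(0, np.array([0.0, 0.0])), (4, np.array([4.0, 0.0]))]
print(optimize_pose_graph(5, odom, gnss))
```

The same structure extends to the simultaneous registration of multiple modalities by adding one constraint type per sensor, each with its own weight.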
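For the motion prediction of separated dynamic obstacles, a common baseline is a constant-velocity prediction step, sketched below; the state layout, process-noise scale, and the walking-person example are illustrative assumptions, not the project's tracking model.

```python
import numpy as np

def predict_track(state, cov, dt, q=0.5):
    """Constant-velocity prediction for one tracked obstacle.
    state: [x, y, z, vx, vy, vz]; cov: 6x6 covariance.
    Returns the predicted state and covariance after dt seconds."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)             # position += velocity * dt
    G = np.vstack([0.5 * dt**2 * np.eye(3),  # white-acceleration
                   dt * np.eye(3)])          # process noise model
    Q = q * G @ G.T
    return F @ state, F @ cov @ F.T + Q

# predict a person walking at 1.2 m/s, 1.5 s ahead, for planning
state = np.array([2.0, 0.0, 0.0, 1.2, 0.0, 0.0])
cov = 0.1 * np.eye(6)
pred, pred_cov = predict_track(state, cov, dt=1.5)
print(pred[:3], np.diag(pred_cov)[:3])   # growing position uncertainty
```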
DFG Programme
Research Units
Subproject of
FOR 1505: Mapping on Demand
Co-Investigator
Professor Dr. Jürgen Gall