Vision-based navigation, or optical navigation, uses computer vision algorithms and optical sensors, including laser-based range finders and photometric cameras using CCD arrays, to extract the visual features required for localization in the surrounding environment. A range of techniques exists for navigation and localization using vision information; the main components of each technique are:
• representations of the environment
• sensing models
• localization algorithms

To give an overview of vision-based navigation, these techniques can be classified under indoor navigation and outdoor navigation.
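The interplay of the three components above can be illustrated with a toy sketch: binary visual feature descriptors extracted from the current camera view (the sensing model) are matched against a stored landmark map (the environment representation), and the robot's position is estimated from the matched landmarks (the localization algorithm). All descriptors and positions below are hypothetical, and real systems use far richer descriptors (e.g. ORB or SIFT) and pose estimation instead of a centroid.

```python
# Toy feature-based localization: match observed binary descriptors
# against a landmark map (descriptor -> known 2-D position) and
# estimate position from the matched landmarks. Data are hypothetical.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def localize(observed, landmark_map, max_dist=3):
    """Match each observed descriptor to its nearest map landmark and
    return the centroid of the matched landmark positions."""
    matched = []
    for desc in observed:
        best = min(landmark_map, key=lambda m: hamming(desc, m))
        if hamming(desc, best) <= max_dist:
            matched.append(landmark_map[best])
    if not matched:
        return None
    xs = [p[0] for p in matched]
    ys = [p[1] for p in matched]
    return (sum(xs) / len(matched), sum(ys) / len(matched))

# Hypothetical 8-bit descriptors with known positions (metres).
landmark_map = {0b10110010: (1.0, 2.0),
                0b01100111: (4.0, 2.0),
                0b11110000: (2.5, 5.0)}

# Descriptors extracted from the current camera frame (slightly noisy).
observed = [0b10110011, 0b01100110]

print(localize(observed, landmark_map))  # estimate: (2.5, 2.0)
```

The centroid stands in for what a real pipeline would do with a geometric solver; the matching step (nearest descriptor under a distance threshold) is the part common to most vision-based localization techniques.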
==Indoor navigation from a moving camera==

The easiest way of making a robot go to a goal location is simply to guide it to that location. This guidance can be done in different ways: burying an inductive loop or magnets in the floor, painting lines on the floor, or placing beacons, markers, bar codes, etc. in the environment. Such Automated Guided Vehicles (AGVs) are used in industrial scenarios for transportation tasks. Indoor navigation of robots is also possible using IMU-based indoor positioning devices. A wide variety of indoor navigation systems exists. The basic reference for indoor and outdoor navigation systems is "Vision for mobile robot navigation: a survey" by Guilherme N. DeSouza and Avinash C. Kak. See also "Vision based positioning" and AVM Navigator.
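Guidance by a painted line can be sketched as a simple proportional controller: the vision system reports the line's lateral offset in the image, and the controller steers to reduce it. The function name, gain, and limits below are illustrative, not values from any real AGV.

```python
# Minimal proportional line-follower sketch for an AGV: steer in
# proportion to the painted line's lateral offset as seen by the
# camera. Gain and limits are illustrative, not tuned values.

def steering_command(offset_m: float, gain: float = 1.5,
                     max_angle: float = 0.5) -> float:
    """Return a steering angle (rad) that turns the vehicle back
    toward the line; positive offset means the line is to the left."""
    angle = gain * offset_m
    return max(-max_angle, min(max_angle, angle))

# Simulate the AGV closing in on the line from 0.4 m away.
offset = 0.4
for step in range(5):
    cmd = steering_command(offset)
    offset -= 0.3 * cmd        # crude vehicle response per time step
    print(f"step {step}: offset={offset:.3f} m, steering={cmd:.3f} rad")
```

Real AGV controllers add a derivative or integral term and handle the line dropping out of view, but the core loop of "measure offset, steer against it" is the same.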
==Autonomous Flight Controllers==

Typical open-source autonomous flight controllers can fly in fully automatic mode and perform the following operations:
• take off from the ground and fly to a defined altitude
• fly to one or more waypoints
• orbit around a designated point
• return to the launch position
• descend at a specified speed and land the aircraft

The onboard flight controller relies on GPS for navigation and stabilized flight, and often employs additional satellite-based augmentation systems (SBAS) and an altitude (barometric pressure) sensor.

==Inertial navigation==