TOF sensor fusion
Most range sensors use the TOF (Time of Flight) of an echo signal to measure the distance to the closest obstacle along the direction of the sensor, although many of them can be rotated, like a sonar in a submarine, to cover a wider detection area. Others, like cameras, provide information on a wider area, but this information is typically more complex to process. In a dynamic, potentially unstructured environment, a quick response time may be the difference between safety and collision, so when video cameras are used, they are often combined with other range sensors to achieve faster responses. Furthermore, visual information is so rich that video processing has only been solved when some constraints can be applied. These restrictions usually imply some knowledge of the operation environment and a heavily problem-specific orientation.

Simpler range sensors also have their own drawbacks. Sonar sensors, for example, have an uncertainty angle that, in models like the Polaroid, may be up to 22.5°. Consequently, when obstacles are at a significant distance, say 4-5 m, we know how far they are, but not exactly where. Infrared sensors are sensitive to natural light, and their output may change sharply under a significant illumination change, as when someone steps in front of a window, switches on a light or opens a door. Lasers cannot detect glass doors and deal poorly with black surfaces. Furthermore, all three of these sensors only detect obstacles in their own plane, so an irregular object like a table might be missed entirely. Recently, though, laser sensors have become smaller, cheaper and more robust, so they have been particularly favored by the wheelchair industry: a classic SICK LMS 291-S05 weighs 4.5 kg and measures 156 x 155 x 210 mm, whereas a modern Hokuyo URG-04LX weighs 160 g and is just 50 x 50 x 70 mm.
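To make the sonar uncertainty concrete, the angular figure above can be converted into a lateral position uncertainty at a given range. The sketch below treats the cited 22.5° as the full beam width (half-angle 11.25°); whether the datasheet value is a full or half angle is an assumption here, not a confirmed specification.

```python
import math

def sonar_arc_width(distance_m: float, half_angle_deg: float = 11.25) -> float:
    """Chord width of the uncertainty arc for a single sonar reading.

    A sonar only tells us the obstacle lies somewhere on an arc at the
    measured distance; the wider the beam, the wider that arc.
    The default half-angle of 11.25 deg (i.e., a 22.5 deg beam) is an
    illustrative assumption based on the Polaroid figure cited above.
    """
    half_angle = math.radians(half_angle_deg)
    # Chord subtended by the beam at the measured range.
    return 2.0 * distance_m * math.sin(half_angle)

# At 4 m, the obstacle could sit anywhere along an arc roughly 1.56 m wide,
# which illustrates "we know how far they are, but not exactly where".
print(round(sonar_arc_width(4.0), 2))  # → 1.56
```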
In any case, since every sensor has advantages and drawbacks, it is usual to work with several sensors or, at least, to statistically combine the readings of a single one over time and space to obtain more reliable knowledge of the environment. If these readings are combined over a short time span within the immediate surroundings of the robot, they can be used to avoid nearby obstacles by steering the robot toward a free direction. Readings can also be combined into a wider model of the environment to predict more efficient and safer trajectories to the goal, to combine several goals, or to coordinate different mobile robots. In order to build global models of the environment, though, it is important to know the position of the robot within the environment with some accuracy; otherwise, it would be like asking for directions from someone who is lost themselves.
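As a minimal illustration of statistically combining readings, two measurements of the same obstacle can be merged by inverse-variance weighting, the standard one-dimensional maximum-likelihood fusion rule. The sensor variances below are illustrative values chosen for the example, not taken from any particular datasheet.

```python
def fuse_readings(z1: float, var1: float, z2: float, var2: float):
    """Inverse-variance fusion of two noisy range readings.

    Each reading is weighted by the reciprocal of its variance, so the
    more trustworthy sensor dominates the combined estimate. The fused
    variance is always smaller than either input variance.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Hypothetical scenario: a noisy sonar (variance 0.04 m^2) reads 2.10 m
# while a more precise laser (variance 0.0025 m^2) reads 2.00 m on the
# same obstacle. The fused estimate leans heavily toward the laser.
distance, variance = fuse_readings(2.10, 0.04, 2.00, 0.0025)
print(round(distance, 3), round(variance, 4))  # → 2.006 0.0024
```

The same rule applied repeatedly over time is the update step of a one-dimensional Kalman filter, which is one common way the "readings combined in a short time span" mentioned above are implemented in practice.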