OpenNI: Kinect and more!

OpenNI has attracted a lot of attention lately because it can be used as a free programming framework for Kinect, the popular Xbox 3D sensor; however, it offers much more than that.



OpenNI (Open Natural Interaction) is a multi-language, cross-platform framework that defines
APIs for writing applications utilizing Natural Interaction. OpenNI APIs are composed of a set of
interfaces for writing NI applications based on:

- Vision and audio sensors (the devices that ‘see’ and ‘hear’ the figures and their surroundings).
- Vision and audio perception middleware (the software components that analyze the audio and visual data recorded from the scene and comprehend it).




In brief, OpenNI provides a HAL (Hardware Abstraction Layer) for a 3D sensor, an RGB camera, an IR camera and an audio device (all of them, not surprisingly, integrated in Kinect), plus middleware that saves us from programming basic functions ourselves: body detection and analysis (joints, orientation, center of mass...), hand point detection and tracking, gesture recognition, and basic scene analysis (e.g. separation between foreground and background, coordinates of the floor plane, individual identification of figures...). This is really convenient if we recall that, before Kinect, the alternatives in this respect involved 9000 EUR TOF (Time of Flight) cameras or less reliable and much more computationally expensive stereo systems, and we had to program all that functionality on our own.

OpenNI follows a philosophy similar to ROS: devices and program modules are modeled as nodes that interact with each other, forming so-called production chains.
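To give a flavour of how this looks in practice, here is a minimal sketch against the OpenNI 1.x C++ API that creates a depth production node and reads a single frame from it. Error handling is reduced to bare status checks, and initialization details (e.g. loading an XML configuration file) are omitted, so take it as an illustration rather than production code:

```cpp
// Minimal OpenNI 1.x sketch: create a depth node and read one frame.
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;         // start up OpenNI

    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK) return 1;  // depth node of the production chain

    context.StartGeneratingAll();       // all nodes begin producing data
    context.WaitOneUpdateAll(depth);    // block until a new depth frame is available

    xn::DepthMetaData md;
    depth.GetMetaData(md);              // resolution, timestamp, and the depth map itself

    // Depth (in mm) of the pixel at the center of the image.
    unsigned center = md(md.XRes() / 2, md.YRes() / 2);
    std::printf("Center pixel depth: %u mm\n", center);

    context.Shutdown();
    return 0;
}
```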

The main advantage of OpenNI is that it is currently being integrated into every major open robotics framework, so we can use Kinect in any application we have developed (and we are!)

Make your own assistive wheelchair in 11 easy steps



All the assembly information you'll need to turn your power wheelchair into a robot and survive the process is gathered in this technical annex. Really! :D

Potential Fields and Potential Risks

The Potential Field Approach (PFA) [1] is a well-known method in reactive robot navigation. The basic idea is simple: detected obstacles are modeled as repulsors and goal(s) as attractor(s), with strengths depending on their proximity to the robot. The (negative) gradient of the resulting field gives the motion vector for our mobile robot. PFA is intuitive, simple, smooth and, up to a point, reliable.
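As a rough illustration, the motion command at each control step can be computed as in the following 2-D sketch, which follows the classic formulation of [1]; the gains and the influence distance are made-up values, not those of any particular wheelchair controller:

```cpp
// 2-D potential field step: attractive pull toward the goal plus
// repulsive pushes from nearby obstacles. Gains are illustrative.
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

Vec2 potentialFieldStep(const Vec2& robot, const Vec2& goal,
                        const std::vector<Vec2>& obstacles)
{
    const double K_ATT = 1.0;   // attractive gain
    const double K_REP = 100.0; // repulsive gain
    const double D0    = 1.5;   // obstacle influence distance (m)

    // Attractive term: pulls the robot straight toward the goal.
    Vec2 f = { K_ATT * (goal.x - robot.x), K_ATT * (goal.y - robot.y) };

    // Repulsive terms: each obstacle closer than D0 pushes the robot away,
    // with a magnitude that blows up as the distance approaches zero.
    for (const Vec2& obs : obstacles) {
        double dx = robot.x - obs.x, dy = robot.y - obs.y;
        double d  = std::sqrt(dx * dx + dy * dy);
        if (d > 0.0 && d < D0) {
            double mag = K_REP * (1.0 / d - 1.0 / D0) / (d * d);
            f.x += mag * dx / d;
            f.y += mag * dy / d;
        }
    }
    return f; // follow this vector (suitably saturated) at each control step
}
```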

Despite being relatively old, this technique is still widely used in navigation. Indeed, many robotic wheelchairs use PFA to avoid collisions by taking control away from the user when danger is imminent, in what we call the "safeguard operation mode". Still, PFA per se cannot be used for much more in its original formulation due to well-reported problems: i) oscillations in corridor-like situations; ii) sensitivity to local minima; and iii) local traps. The following video shows these problems:





These problems can be solved by more powerful, enhanced versions of PFA, mostly based on making them less purely reactive. Algorithms used in assistive navigation instead of PFA include the Dynamic Window Approach (DWA) and the Vector Field Histogram (VFH).
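To give an idea of how VFH differs from plain PFA, here is a toy sketch of its core idea: instead of summing force vectors directly, range readings are first accumulated into a polar obstacle-density histogram, and the robot then steers toward the free sector closest to the goal heading. Sector count, thresholds and weighting below are illustrative choices, not the parameters of the original algorithm:

```cpp
// Toy sketch of the first stage of VFH: build a polar obstacle-density
// histogram from range readings, then pick the freest sector near the goal.
#include <cmath>
#include <vector>

struct Reading { double angle; double range; }; // one beam, robot frame (rad, m)

double vfhSteeringAngle(const std::vector<Reading>& scan, double goalAngle)
{
    const int    SECTORS   = 72;    // 5-degree angular sectors
    const double MAX_RANGE = 4.0;   // readings beyond this contribute nothing
    const double THRESHOLD = 0.5;   // densities below this count as "free"
    const double TWO_PI    = 2.0 * 3.14159265358979323846;

    // 1) Build the polar histogram: closer obstacles add more density.
    std::vector<double> density(SECTORS, 0.0);
    for (const Reading& r : scan) {
        if (r.range >= MAX_RANGE) continue;
        double a = std::fmod(std::fmod(r.angle, TWO_PI) + TWO_PI, TWO_PI); // wrap to [0, 2pi)
        int s = (int)(a / TWO_PI * SECTORS);
        if (s == SECTORS) s = 0;    // guard against rounding at the boundary
        density[s] += MAX_RANGE - r.range;
    }

    // 2) Among the free sectors, pick the one angularly closest to the goal.
    double best = goalAngle, bestDiff = 1e9;
    for (int s = 0; s < SECTORS; ++s) {
        if (density[s] > THRESHOLD) continue;          // sector is blocked
        double center = (s + 0.5) * TWO_PI / SECTORS;
        double diff = std::fabs(std::remainder(center - goalAngle, TWO_PI));
        if (diff < bestDiff) { bestDiff = diff; best = center; }
    }
    return best; // heading command for the next control cycle
}
```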


[1] Khatib, O., "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots", Int. J. of Robotics Research, Vol. 5, No. 1 (1986), pp. 90-98.
