OpenNI: Kinect and more!
OpenNI has come to public attention lately because it can be used as a free programming framework for the Kinect, the popular 3D sensor for the Xbox; however, it offers much more.
OpenNI (Open Natural Interaction) is a multi-language, cross-platform framework that defines
APIs for writing applications utilizing Natural Interaction. OpenNI APIs are composed of a set of
interfaces for writing NI applications based on:
- Vision and audio sensors (the devices that ‘see’ and ‘hear’ the figures and their surroundings).
- Vision and audio perception middleware (the software components that analyze the audio and visual data recorded from the scene and make sense of it).
In brief, OpenNI provides a HAL (Hardware Abstraction Layer) for a 3D sensor, an RGB camera, an IR camera and an audio device (not surprisingly, all integrated in the Kinect), plus middleware that saves us from programming basic functions ourselves: body detection and analysis (joints, orientation, center of mass...), hand point detection and analysis, gesture recognition, and basic scene analysis (e.g. separating foreground from background, finding the coordinates of the floor plane, identifying individual figures...). This is really convenient if we consider that, before the Kinect, the alternatives in this area were TOF (Time of Flight) cameras costing around 9000 EUR, or stereo systems that were less reliable and much more computationally expensive, and in both cases we had to program all of this ourselves.
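To give a feel for that abstraction layer, here is a minimal sketch of reading raw depth with the OpenNI 1.x C++ wrapper, loosely modeled on the NiSimpleRead sample that ships with OpenNI (error handling mostly omitted):

```cpp
// Minimal depth-reading sketch with the OpenNI 1.x C++ wrapper
// (modeled on the NiSimpleRead sample; most error checks omitted).
#include <cstdio>
#include <XnCppWrapper.h>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    // Ask OpenNI for a depth node; the HAL hides whether the data
    // comes from a Kinect, another 3D sensor or a recorded file.
    xn::DepthGenerator depth;
    depth.Create(context);
    context.StartGeneratingAll();

    for (int i = 0; i < 100; ++i)
    {
        context.WaitOneUpdateAll(depth);

        xn::DepthMetaData md;
        depth.GetMetaData(md);
        // Depth arrives in millimeters, one value per pixel.
        printf("frame %u: centre pixel at %u mm\n",
               md.FrameID(),
               (unsigned)md(md.XRes() / 2, md.YRes() / 2));
    }

    context.Shutdown();
    return 0;
}
```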
OpenNI follows a philosophy similar to ROS: devices and software modules are modeled as nodes that interact with each other, forming so-called production chains.
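For example, a user-tracking chain stacks a UserGenerator middleware node on top of a depth node. A rough sketch with the OpenNI 1.x C++ wrapper follows; note that the calibration and pose-detection callbacks are omitted for brevity, so a real application would still need to register them before skeleton tracking actually starts:

```cpp
// Sketch of a production chain: a UserGenerator middleware node sits
// on top of a depth (sensor) node. Calibration callbacks omitted.
#include <cstdio>
#include <XnCppWrapper.h>

int main()
{
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;
    depth.Create(context);      // sensor node: bottom of the chain

    xn::UserGenerator users;
    users.Create(context);      // middleware node: top of the chain
    users.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);

    context.StartGeneratingAll();

    for (int frame = 0; frame < 300; ++frame)
    {
        context.WaitOneUpdateAll(users);

        XnUserID ids[15];
        XnUInt16 n = 15;
        users.GetUsers(ids, n);
        for (XnUInt16 i = 0; i < n; ++i)
        {
            // Tracking only starts after calibration, which requires
            // the callbacks we have left out of this sketch.
            if (!users.GetSkeletonCap().IsTracking(ids[i])) continue;

            XnSkeletonJointPosition head;
            users.GetSkeletonCap().GetSkeletonJointPosition(
                ids[i], XN_SKEL_HEAD, head);
            printf("user %u head at (%.0f, %.0f, %.0f) mm\n",
                   (unsigned)ids[i], head.position.X,
                   head.position.Y, head.position.Z);
        }
    }

    context.Shutdown();
    return 0;
}
```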
The main advantage of OpenNI is that it is currently being integrated into every open robotics framework, so we can use the Kinect in any application we have developed (and we are!)