Nao (pronounced "now") is an autonomous, programmable humanoid robot developed by Aldebaran Robotics, a French robotics company headquartered in Paris. The robot's development began with the launch of Project Nao in 2004. On 15 August 2007, Nao replaced Sony's robot dog Aibo as the robot used in the RoboCup Standard Platform League (SPL), an international robot soccer competition. The Nao was used in RoboCup 2008 and 2009, and the NaoV3R was chosen as the platform for the SPL at RoboCup 2010.
Numerous versions of the robot have been released since 2008. The Nao Academics Edition was developed for universities and laboratories for research and education purposes. It was released to institutions in 2008, and was made publicly available by 2011. The robot has since entered use in numerous academic institutions worldwide, including the University of Tokyo, India's IIT Kanpur and Saudi Arabia's King Fahd University of Petroleum and Minerals. In December 2011, Aldebaran Robotics released the Nao Next Gen, featuring enhanced software, a more powerful CPU and HD cameras.
History
Six prototypes of Nao were designed between 2005 and 2007. In March 2008, the first production version of the robot, the Nao RoboCup Edition, was released to the contestants of that year's RoboCup. The Nao Academics Edition was released to universities, education institutes and research laboratories in late 2008.

In the summer of 2010, Nao made global headlines with a synchronized dance routine at the Shanghai Expo in China. In October 2010, the University of Tokyo purchased 30 Nao robots for their Nakamura Lab, with hopes of developing the robots into active lab assistants. In December 2010, a Nao robot was demonstrated doing a stand-up comedy routine, and a new version of the robot was released, featuring sculpted arms and improved motors. In May 2011, Aldebaran announced that it would release Nao's controlling source code to the public as open source software. In June 2011, Aldebaran raised US$13 million in a round of venture funding led by Intel Capital.
In December 2011, Aldebaran released the Nao Next Gen, featuring hardware and software enhancements such as HD cameras, improved robustness, anti-collision systems and a faster walking speed. Since 2011, over 200 academic institutions worldwide have made use of the robot.[3][4][11] In 2012, donated Nao robots were used to teach autistic children in a UK school; some of the children found the childlike, expressive robots more relatable than human beings.
Design
The various versions of the Nao robotics platform feature either 14, 21 or 25 degrees of freedom (DoF). A specialised model with 21 DoF and no actuated hands was created for the RoboCup competition. All Nao Academics versions feature an inertial measurement unit with accelerometer, gyrometer and four ultrasonic sensors that provide Nao with stability and positioning within space. The legged versions included eight force-sensing resistors and two bumpers.

The Nao robot also features an onboard Linux-powered multimedia system, including four microphones (for voice recognition and sound localization), two speakers (for text-to-speech synthesis) and two HD cameras (for computer vision, including facial and shape recognition). The robot comes with a software suite that includes a graphical programming tool ("Choregraphe"), simulation software and a software developer's kit. Nao is also compatible with Microsoft Robotics Studio, Cyberbotics Webots, and the Gostai Urbi Studio. An upgraded version of the robot, featuring enhanced multilingual speech synthesis, will be released in 2014.
Specifications
| Nao Next Gen (2011) | |
|---|---|
| Height | 58 centimetres (23 in) |
| Weight | 4.3 kilograms (9.5 lb) |
| Autonomy | 60 minutes (active use), 90 minutes (normal use) |
| Degrees of freedom | 21 to 25 |
| CPU | Intel Atom @ 1.6 GHz |
| Built-in OS | Linux |
| Compatible OS | Windows, Mac OS, Linux |
| Programming languages | C++, Python, Java, MATLAB, Urbi, C, .Net |
| Vision | Two HD 1280x960 cameras |
| Connectivity | Ethernet, Wi-Fi |
How does it work?
NAO is a little artificial character with a unique combination of hardware and software: it consists of sensors, motors and software driven by NAOqi, Aldebaran's dedicated operating system.
It gets its magic from its programming and animation.
Movement libraries are available through graphical tools such as Choregraphe and other advanced programming software.
They allow users to create elaborate behaviors, access the data acquired by the sensors, and control the robot... to give it life!
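For a concrete sense of what this programming looks like outside Choregraphe, here is a minimal sketch using the NAOqi Python SDK; the robot's IP address is a placeholder you would replace with your own.

```python
# Minimal NAOqi Python SDK sketch: connect to a NAO on the network and
# drive speech and walking. ROBOT_IP is a placeholder, not a real address.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"
PORT = 9559                            # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
motion = ALProxy("ALMotion", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)

motion.wakeUp()                        # stiffen the joints
posture.goToPosture("StandInit", 0.5)  # move to a stable standing posture
tts.say("Hello, I am NAO.")            # text-to-speech through the speakers
motion.moveTo(0.2, 0.0, 0.0)           # walk 20 cm straight ahead
motion.rest()                          # relax the joints when done
```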
NAO has:
- 25 motors, for movement
- Two cameras, to see its surroundings
- An inertial measurement unit, which lets it know whether it is upright or lying down
- Touch sensors to detect your touch
- Four microphones, so it can hear you
This combination of technologies (and many others) gives NAO the ability to detect its surroundings.
Now it must interpret what it detected. This is where the embedded software in NAO's head comes in. Aldebaran created a dedicated operating system, NAOqi, allowing the small humanoid to understand and interpret the data received by its sensors. Beyond that, it's all programming and imagination.
Hardware Platform
NAO is a programmable, 58 cm tall humanoid robot with the following key components:
- Body with 25 degrees of freedom (DOF) whose key elements are electric motors and actuators
- Sensor network, including 2 cameras, 4 microphones, sonar rangefinder, 2 IR emitters and receivers, 1 inertial board, 9 tactile sensors, and 8 pressure sensors
- Various communication devices, including voice synthesizer, LED lights, and 2 high-fidelity speakers
- Intel Atom 1.6 GHz CPU (located in the head) that runs a Linux kernel and supports Aldebaran's proprietary middleware (NAOqi)
- Second CPU (located in the torso)
- 27.6-watt-hour battery that provides NAO with 1.5 or more hours of autonomy, depending on usage
Motion
Omnidirectional walking
NAO's walking uses a simple dynamic model (linear inverted pendulum) and quadratic programming. It is stabilized using feedback from joint sensors. This makes walking robust and resistant to small disturbances, and torso oscillations in the frontal and lateral planes are absorbed. NAO can walk on a variety of floor surfaces, such as carpeted, tiled, and wooden floors, and can transition between these surfaces while walking.
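To illustrate the omnidirectional part, the ALMotion API accepts independent forward, lateral, and rotational velocity commands. A hedged sketch (placeholder IP; arguments are normalized velocities as in the NAOqi documentation):

```python
# Sketch of omnidirectional walking via ALMotion.moveToward, whose three
# arguments are normalized velocities in [-1, 1]: forward, lateral, rotational.
import time
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # placeholder IP
motion.wakeUp()
motion.moveInit()                  # take a stable double-support stance

motion.moveToward(0.5, 0.0, 0.0)   # walk forward at half speed
time.sleep(3.0)
motion.moveToward(0.0, 0.5, 0.0)   # sidestep to the left
time.sleep(3.0)
motion.moveToward(0.0, 0.0, 0.3)   # turn on the spot
time.sleep(3.0)
motion.stopMove()                  # finish the current step and halt
```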
Whole body motion
NAO's motion module is based on generalized inverse kinematics, which handles Cartesian coordinates, joint control, balance, redundancy, and task priority. This means that when NAO is asked to extend its arm, it bends over because its arm and leg joints are taken into account, and it will stop its movement to maintain balance.
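A hedged sketch of this behavior through the whole-body API (method names follow Aldebaran's ALMotion documentation, but check them against your NAOqi version; the target coordinates and IP are placeholders):

```python
# Whole-body reach: while whole-body control is active, moving one
# effector lets the other joints compensate to keep the robot balanced.
import time
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # placeholder IP
motion.wakeUp()

motion.wbEnable(True)                         # enable whole-body control
motion.wbFootState("Fixed", "Legs")           # keep both feet planted
motion.wbEnableBalanceConstraint(True, "Legs")

motion.wbEnableEffectorControl("RArm", True)  # control the right hand
# Target hand position in the robot frame, in metres (placeholder values).
motion.wbSetEffectorControl("RArm", [0.25, -0.15, 0.30])
time.sleep(3.0)                               # let the motion settle

motion.wbEnableEffectorControl("RArm", False)
motion.wbEnable(False)
```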
Fall Manager
The Fall Manager protects NAO when it falls. Its main function is to detect when NAO's center of mass (CoM) shifts outside the support polygon, which is determined by the position of the foot or feet in contact with the ground. When a fall is detected, all motion tasks are killed and, depending on the direction of the fall, NAO's arms assume a protective position, the CoM is lowered, and robot stiffness is reduced to zero.
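The CoM-versus-support-polygon test can be pictured as a plain point-in-polygon check. The sketch below is purely illustrative, not Aldebaran's implementation:

```python
# Illustrative 2-D check of the Fall Manager's core condition: is the
# ground projection of the centre of mass inside the support polygon?
def com_inside_support_polygon(com, polygon):
    """Ray-casting point-in-polygon test.

    com     -- (x, y) ground projection of the centre of mass
    polygon -- list of (x, y) vertices of the support polygon
    """
    x, y = com
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: a single rectangular footprint (coordinates in metres).
foot = [(-0.05, -0.04), (0.11, -0.04), (0.11, 0.04), (-0.05, 0.04)]
print(com_inside_support_polygon((0.02, 0.00), foot))  # True: balanced
print(com_inside_support_polygon((0.20, 0.00), foot))  # False: falling
```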
Vision
NAO has two cameras and can track, learn, and recognize images and faces.
NAO sees using two 1280x960 HD cameras, which can capture up to 30 images per second.
The first camera, located on NAO’s forehead, scans the horizon, while the second located at mouth level scans the immediate surroundings.
The software lets you recover photos and video streams of what NAO sees. But eyes are only useful if you can interpret what you see.
That's why NAO contains a set of algorithms for detecting and recognizing faces and shapes. NAO can recognize who is talking to it, or find a ball or even more complex objects. These algorithms have been specially developed, with constant attention to using a minimum of processor resources.
Furthermore, NAO's SDK lets you develop your own modules to interface with OpenCV (the Open Source Computer Vision library originally developed by Intel). Since you can execute modules on NAO or transfer them to a PC connected to NAO, you can easily use the OpenCV display functions to develop and test your algorithms with image feedback.
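A hedged sketch of that PC-side pattern: grab a frame from NAO's ALVideoDevice and hand it to OpenCV. The parameter constants follow the documented conventions for camera index, resolution, and colour space, but verify them against your NAOqi version; the IP is a placeholder.

```python
# Fetch one camera frame from NAO over the network and process it with OpenCV.
import numpy as np
import cv2
from naoqi import ALProxy

video = ALProxy("ALVideoDevice", "192.168.1.10", 9559)  # placeholder IP

# Subscribe: client name, camera (0 = top), resolution (2 = 640x480),
# colour space (13 = BGR, OpenCV's native ordering), frame rate in fps.
client = video.subscribeCamera("opencv_demo", 0, 2, 13, 10)
try:
    frame = video.getImageRemote(client)        # blocking remote grab
    width, height, raw = frame[0], frame[1], frame[6]
    image = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("nao_view.png", gray)           # or run any OpenCV pipeline
finally:
    video.unsubscribe(client)                   # always release the feed
```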
Audio
NAO uses four microphones to track sounds, and its voice recognition and text-to-speech capabilities allow it to communicate in 8 languages.
Sound Source Localization
One of the main purposes of humanoid robots is to interact with people. Sound localization allows a robot to identify the direction of sounds. To produce robust and useful outputs while meeting CPU and memory requirements, NAO sound source localization is based on an approach known as "Time Difference of Arrival."
When a nearby source emits a sound, each of NAO’s four microphones receives the sound wave at slightly different times.
For example, if someone talks to NAO on its left side, the corresponding sound wave first hits the left microphones, then the front and rear microphones a few milliseconds later, and finally the right microphone.
These differences, known as the interaural time difference (ITD), can then be mathematically processed to determine the current location of the emitting source.
By solving the corresponding geometric equations every time it hears a sound, NAO can determine the direction of the emitting source (azimuthal and elevation angles) from the ITDs between the four microphones.
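The page does not reproduce the equation itself; under the usual far-field assumption it reduces, for one microphone pair, to tau = (d / c) * sin(theta). A hedged numeric sketch, where the microphone spacing is a placeholder rather than NAO's exact geometry:

```python
# Far-field TDOA for one microphone pair: tau = (d / c) * sin(theta),
# solved here for the azimuth theta given a measured delay tau.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
MIC_SPACING = 0.10      # m between the two microphones (placeholder value)

def azimuth_from_itd(tau):
    """Azimuth in radians implied by an inter-microphone delay of tau seconds."""
    s = (SPEED_OF_SOUND * tau) / MIC_SPACING
    s = max(-1.0, min(1.0, s))  # clamp: measurement noise can push |s| > 1
    return math.asin(s)

# A wavefront arriving 0.2 ms earlier at one microphone than the other:
print(math.degrees(azimuth_from_itd(0.0002)))  # ~43 degrees off-axis
```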
This feature is available as a NAOqi module called ALAudioSourceLocalization; it provides a C++ and Python API that allows precise interactions with a Python script or NAOqi module.
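A hedged sketch of using the module from Python by polling ALMemory; the event key follows Aldebaran's documentation for ALAudioSourceLocalization, but verify it against your NAOqi version.

```python
# Start sound source localization and poll the results from ALMemory.
import time
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder
loc = ALProxy("ALAudioSourceLocalization", ROBOT_IP, 9559)
memory = ALProxy("ALMemory", ROBOT_IP, 9559)

loc.subscribe("demo_app")  # starts the localization engine
try:
    for _ in range(20):    # poll for about ten seconds
        data = memory.getData("ALAudioSourceLocalization/SoundLocated")
        if data:
            azimuth, elevation = data[1][0], data[1][1]  # radians
            print("sound at azimuth %.2f, elevation %.2f" % (azimuth, elevation))
        time.sleep(0.5)
finally:
    loc.unsubscribe("demo_app")
```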
Two Choregraphe boxes that allow easy use of the feature inside a behavior are also available.
Possible applications include:
- Human Detection, Tracking, and Recognition
- Noisy Object Detection, Tracking, and Recognition
- Speech Recognition in a specific direction
- Speaker Recognition in a specific direction
- Remote Monitoring/Security applications
- Entertainment applications
Audio Signal Processing
In robotics, embedded processors have limited computational power, making it useful to perform some calculations remotely on a desktop computer or server.
This is especially true for audio signal processing; for example, speech recognition often takes place more efficiently, faster, and more accurately on a remote processor. Most modern smartphones process voice recognition remotely.
Users may want to use their own signal processing algorithms directly in the robot.
The NAOqi framework uses Simple Object Access Protocol (SOAP) to send and receive audio signals over the Web.
Sound is produced and recorded in NAO using the Advanced Linux Sound Architecture (ALSA) library.
The ALAudioDevice module manages audio inputs and outputs.
Using NAO's audio capabilities, a wide range of experiments and research can take place in the fields of communications and human-robot interaction.
For example, users can employ NAO as a communication device, talking to it and hearing it as if it were a human being.
Signal processing is of course an interesting example. Thanks to the audio module, you can get the raw audio data from the microphones in real time and process it with your own code.
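A hedged sketch of that remote-processing pattern, following the structure of Aldebaran's documented audio examples; the IP is a placeholder, and the setClientPreferences flags should be checked against your NAOqi version.

```python
# Receive NAO's microphone buffers on a remote PC: a NAOqi module
# subscribes to ALAudioDevice and gets raw PCM in processRemote().
import time
from naoqi import ALBroker, ALModule, ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder

class SoundReceiverModule(ALModule):
    def __init__(self, name):
        ALModule.__init__(self, name)
        self.audio = ALProxy("ALAudioDevice")
        # 16 kHz, front microphone only (3), interleaved buffers (0).
        self.audio.setClientPreferences(self.getName(), 16000, 3, 0)
        self.audio.subscribe(self.getName())

    def processRemote(self, nb_channels, nb_samples, timestamp, buffer):
        # 'buffer' holds 16-bit little-endian PCM; plug your own DSP in here.
        print("got %d samples on %d channel(s)" % (nb_samples, nb_channels))

# The broker connects this process to the robot; the global variable name
# must match the module name string for NAOqi's callback routing.
broker = ALBroker("audio_broker", "0.0.0.0", 0, ROBOT_IP, 9559)
SoundReceiver = SoundReceiverModule("SoundReceiver")
time.sleep(10)  # stream audio for ten seconds
SoundReceiver.audio.unsubscribe("SoundReceiver")
broker.shutdown()
```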
Tactile sensors & Sonar Rangefinders
Tactile Sensors
Besides cameras and microphones, NAO is fitted with capacitive sensors positioned on top of its head in three sections and on its hands.
You can therefore give NAO information through touch: pressing once to tell it to shut down, for example, or using the sensors as a series of buttons to trigger an associated action.
The system comes with LED lights that indicate the type of contact. You can also program complex sequences.
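A hedged sketch of reading those sensors by polling ALMemory; the key names (e.g. FrontTactilTouched) follow Aldebaran's documentation, and an event-driven ALModule callback would work as well.

```python
# Poll NAO's three head touch sensors and react to a press.
import time
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder
memory = ALProxy("ALMemory", ROBOT_IP, 9559)
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

HEAD_SENSORS = ["FrontTactilTouched", "MiddleTactilTouched", "RearTactilTouched"]

for _ in range(100):                    # poll for about 20 seconds
    for key in HEAD_SENSORS:
        if memory.getData(key) == 1.0:  # 1.0 while the section is pressed
            tts.say("You touched the %s sensor" % key)
    time.sleep(0.2)
```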
Sonar Rangefinders
NAO is equipped with two sonar channels: two transmitters and two receivers.
They allow NAO to estimate the distances to obstacles in its environment. The detection range is 0–70 cm.
Below 15 cm, there is no distance information; NAO only knows that an object is present.
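A hedged sketch of reading the sonars from Python; the ALMemory key names follow Aldebaran's documentation, and the values are distances in metres.

```python
# Power on the ultrasonic sensors and read the estimated distances.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder
sonar = ALProxy("ALSonar", ROBOT_IP, 9559)
memory = ALProxy("ALMemory", ROBOT_IP, 9559)

sonar.subscribe("sonar_demo")  # starts the emitters and receivers
left = memory.getData("Device/SubDeviceList/US/Left/Sensor/Value")
right = memory.getData("Device/SubDeviceList/US/Right/Sensor/Value")
print("obstacle at %.2f m (left), %.2f m (right)" % (left, right))
sonar.unsubscribe("sonar_demo")
```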
Connectivity
Ethernet and Wi-Fi
NAO currently supports Wi-Fi (b/g/n) and Ethernet, the most widespread network communication protocols. In addition, infrared transceivers in the eyes allow connection to objects in the environment. NAO is compatible with the IEEE 802.11g Wi-Fi standard and can be used on both WPA and WEP networks, making it possible to connect to most home and office networks. NAO's OS supports both Ethernet and Wi-Fi connections and requires no Wi-Fi setup other than entering the password.
NAO's ability to connect to networks offers a wide range of possibilities. You can pilot and program NAO using any computer on the network.
Here are a few examples of applications NAO users have already created:
- Based on NAO's IP address, NAO can figure out its location and give you a personalized weather report.
- Ask NAO about a topic and it connects to Wikipedia and reads you the relevant entry.
- Connect NAO to an audio stream and it plays an Internet radio station for you.
Using XMPP technology (like in the Google Chat system), you can control NAO remotely and stream video from its cameras.
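As a hedged sketch of the Wikipedia idea above, the following fetches a page summary over the network and has NAO read it aloud. It uses Wikipedia's public REST summary endpoint and Python 2's urllib2 (the classic NAOqi SDK targets Python 2); error handling is omitted for brevity.

```python
# "Ask NAO about a topic": fetch a Wikipedia summary and speak it.
import json
import urllib2
from naoqi import ALProxy

def wikipedia_summary(topic):
    """Return the plain-text summary of an English Wikipedia article."""
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + topic
    return json.load(urllib2.urlopen(url))["extract"]

tts = ALProxy("ALTextToSpeech", "192.168.1.10", 9559)  # placeholder IP
tts.say(wikipedia_summary("Humanoid_robot"))
```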
Infrared
Using infrared, NAO can communicate with other NAOs and other devices that support infrared. You can configure NAO to use infrared to control other devices ("NAO, please turn on the TV"). In addition, NAO can also receive instructions from infrared emitters, such as remote controls. And of course, two NAOs can communicate with each other directly.
Infrared is already the most common method of controlling appliances, making NAO easily adaptable to domotics (home automation) applications. NAO can also detect whether a received infrared signal is coming from the left or right.