Friday, November 26, 2010

New Method to Identify People by Their Ears

In a paper entitled A Novel Ray Analogy for Enrolment of Ear Biometrics, just presented at the IEEE Fourth International Conference on Biometrics: Theory, Applications and Systems, scientists from the University of Southampton's School of Electronics and Computer Science (ECS) described how a technique called the image ray transform can highlight tubular structures such as ears, making it possible to identify them.

The research, carried out by Professor Mark Nixon, Dr John Carter and Alastair Cummings at ECS, describes how the transform is capable of highlighting tubular structures such as the helix of the ear and spectacle frames and, by exploiting the elliptical shape of the helix, can be used as the basis of a method for enrolment for ear biometrics.
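The image ray transform traces light rays through the image, treating pixel intensity as a refractive index so that rays become trapped in, and thereby highlight, tubular structures. The sketch below is a heavily simplified illustration of that idea, not the authors' algorithm: it uses a crude reflect-on-index-drop rule in place of proper Snell's-law refraction, and all parameters are assumed.

```python
import math
import random

def image_ray_transform(img, n_rays=500, max_steps=200, n_max=3.0, seed=0):
    """Simplified sketch of an image ray transform: pixel intensity is
    mapped to a refractive index, rays are traced through the image and
    crudely reflected at sharp index drops, and an accumulator counts
    ray visits, so bright curvilinear structures that trap rays light
    up in the output."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    # Map intensity [0, 255] onto a refractive index [1.0, n_max].
    n = [[1.0 + (n_max - 1.0) * p / 255.0 for p in row] for row in img]
    acc = [[0] * w for _ in range(h)]
    for _ in range(n_rays):
        # Random start position and direction for each ray.
        y, x = rng.uniform(0, h - 1e-6), rng.uniform(0, w - 1e-6)
        theta = rng.uniform(0, 2 * math.pi)
        dy, dx = math.sin(theta), math.cos(theta)
        for _ in range(max_steps):
            yi, xi = int(y), int(x)
            acc[yi][xi] += 1
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                break                      # ray left the image
            # Crude stand-in for total internal reflection: reverse the
            # ray whenever it would step into a much lower index.
            if n[int(ny)][int(nx)] < n[yi][xi] - 0.5:
                dy, dx = -dy, -dx
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    break
            y, x = ny, nx
    return acc
```

Run on an intensity or edge-strength map, the accumulator concentrates ray visits inside bright curvilinear features such as the helix of an ear.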

Professor Nixon, one of the UK's earliest researchers in this field, first proved that ears were a viable biometric back in 1999.

At the time he noted that ears have certain advantages over more established biometrics: they have a rich and stable structure that is preserved from birth to old age, and instead of ageing they simply get bigger. The ear also does not change with facial expression, and it is firmly fixed in the middle of the side of the head against a predictable background, unlike the face, which usually has to be captured against a controlled background for recognition.

However, the fact that ears can be concealed by hair led Professor Nixon and his team to research their use as a biometric further and to develop new algorithms that make it possible to identify and isolate the ear from the head.

The technique presented by the scientists achieves 99.6% success at enrolment across 252 images of the XM2VTS database, displaying a resistance to confusion with hair and spectacles. These results show great potential for enhancing the detection of structural features.

"Feature recognition is one of the biggest challenges of computer vision," said Alastair Cummings, the PhD student on the research. "The ray transform technique may also be appropriate for use in gait biometrics, as legs act as tubular features that the transform is adept at extracting. The transform could also be extended to work upon 3D images, both spatial and spatio-temporal, for 3D biometrics or object tracking. It is a general pre-processing technique for feature extraction in computer images, a technology which is now pervading manufacturing, surveillance and medical applications."

A copy of A Novel Ray Analogy for Enrolment of Ear Biometrics can be accessed at: http://eprints.ecs.soton.ac.uk/21546/

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.


Source

Thursday, November 25, 2010

I Want to See What You See: Babies Treat 'Social Robots' as Sentient Beings

Curiosity drives their learning. At 18 months old, babies are intensely curious about what makes humans tick. A team of University of Washington researchers is studying how infants tell which entities are "psychological agents" that can think and feel.

Research published in the October/November issue of Neural Networks provides a clue as to how babies decide whether a new object, such as a robot, is sentient or an inanimate object. Four times as many babies who watched a robot interact socially with people were willing to learn from the robot as babies who did not see the interactions.

"Babies learn best through social interactions, but what makes something 'social' for a baby?" said Andrew Meltzoff, lead author of the paper and co-director of the UW's Institute for Learning and Brain Sciences. "It is not just what something looks like, but how it moves and interacts with others that gives it special meaning to the baby."

The UW researchers hypothesized that babies would be more likely to view the robot as a psychological being if they saw other friendly human beings socially interacting with it. "Babies look to us for guidance in how to interpret things, and if we treat something as a psychological agent, they will, too," Meltzoff said. "Even more remarkably, they will learn from it, because social interaction unlocks the key to early learning."

During the experiment, an 18-month-old baby sat on its parent's lap facing Rechele Brooks, a UW research assistant professor and a co-author of the study. Sixty-four babies participated in the study, and they were tested individually. They played with toys for a few minutes, getting used to the experimental setting. Once the babies were comfortable, Brooks removed a barrier that had hidden a metallic humanoid robot with arms, legs, a torso and a cube-shaped head containing camera lenses for eyes. The robot -- controlled by a researcher hidden from the baby -- waved, and Brooks said, "Oh, hi! That's our robot!"

Following a script, Brooks asked the robot, named Morphy, if it wanted to play, and then led it through a game. She would ask, "Where is your tummy?" and "Where is your head?" and the robot pointed to its torso and its head. Then Brooks demonstrated arm movements and Morphy imitated. The babies looked back and forth as if at a ping pong match, Brooks said.

At the end of the 90-second script, Brooks excused herself from the room. The researchers then measured whether the baby thought the robot was more than its metal parts.

The robot beeped and shifted its head slightly -- enough of a rousing to capture the babies' attention. The robot turned its head to look at a toy next to the table where the baby sat on the parent's lap. Most babies -- 13 out of 16 -- who had watched the robot play with Brooks followed the robot's gaze. In a control group of babies who had been familiarized with the robot but had not seen Morphy engage in games, only three of 16 turned to where the robot was looking.
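The gap between those counts (13 of 16 versus 3 of 16) can be sanity-checked with a standard two-proportion z-statistic. This is an illustrative calculation, not a statistic reported in the paper:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-statistic: how many standard errors
    apart are the rates x1/n1 and x2/n2 under a common-rate null?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 13 of 16 babies who saw the social interaction followed the robot's
# gaze, versus 3 of 16 controls: a difference of about 3.5 standard
# errors, well beyond the conventional 1.96 threshold.
z = two_proportion_z(13, 16, 3, 16)
```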

"We are using modern technology to explore an age-old question about the essence of being human," said Meltzoff, who holds the Job and Gertrud Tamaki Endowed Chair in psychology at the UW. "The babies are telling us that communication with other people is a fundamental feature of being human."

The study has implications for humanoid robots, said co-author Rajesh Rao, UW associate professor of computer science and engineering and head of UW's neural systems laboratory. Rao's team helped design the computer programs that made Morphy appear social. "The study suggests that if you want to build a companion robot, it is not sufficient to make it look human," said Rao. "The robot must also be able to interact socially with humans, an interesting challenge for robotics."

The study was funded by the Office of Naval Research and the National Science Foundation. Aaron Shon, who graduated from UW with a doctorate in computer science and engineering, is also a co-author on the paper.



Source

Wednesday, November 24, 2010

McSleepy Meets DaVinci: Doctors Conduct First-Ever All-Robotic Surgery and Anesthesia

“Collaboration between DaVinci, a surgical robot, and McSleepy, an anesthetic robot, seemed an obvious fit; robots in medicine can provide health care of higher safety and precision, thus ultimately improving outcomes,” said Dr. TM Hemmerling of McGill University and the MUHC’s Department of Anesthesia, who is also a neuroscience researcher at the Research Institute (RI) of the MUHC.

“The DaVinci allows us to work from a workstation operating surgical instruments with delicate movements of our fingers with a precision that cannot be provided by humans alone,” said Dr. A. Aprikian, MUHC urologist in chief and Director of the MUHC Cancer Care Mission, and also a researcher in the Cancer Axis at the RI MUHC. He and his team of surgeons operate the robotic arms from a dedicated workstation via video control with unsurpassed 3D HD image quality.

“Providing anesthesia for robotic prostatectomy can be challenging because of the specific patient positioning and the high degree of muscle relaxation necessary to maintain perfect conditions for the surgical team,” added Dr. Hemmerling. “Automated anesthesia delivery via McSleepy guarantees the same high quality of care every time it is used, independent of the subjective level of expertise. It can be configured exactly to the specific needs of different surgeries, such as robotic surgery.”

“Obviously, there is still some work needed to perfect the all-robotic approach – from technical aspects to space requirements for the robots,” added Dr. Hemmerling. “Whereas robots have been used in surgery for quite some time, anesthesia has finally caught up. Robots will not replace doctors but help them to perform to the highest standards.”

Combining both robots, the specialists at the MUHC can deliver the most modern and accurate patient care. The researchers will use the results of this project to test all-robotic surgery and anesthesia on a larger scale of patients and in various types of surgery. “This should allow for faster, safer and more precise surgery for our patients,” concluded Dr. Aprikian.



Source

Tuesday, November 23, 2010

Underwater Robots on Course to the Deep Sea

Even when equipped with compressed-air bottles and diving regulators, humans reach their limits under water very quickly. In contrast, unmanned submarine vehicles that are connected by cable to the control center permit long and deep dives. Today remote-controlled diving robots are used for research, inspection and maintenance work. The possible applications of this technology are limited, however, by the length of the cable and the skill of the operator. No wonder, then, that researchers are working on autonomous underwater robots which orient themselves under water and carry out jobs without any help from humans.

In the meantime, there are AUVs (autonomous underwater vehicles) which collect data independently or take samples before returning to their starting points. "For the time being, the technology is too expensive to carry out routine work, such as inspections of bulkheads, dams or ships' hulls," explains Dr. Thomas Rauschenbach, Director of the Application Center System Technology AST in Ilmenau, Germany, part of the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB. This may change soon. Together with researchers at four Fraunhofer Institutes, Rauschenbach's team is working on a new generation of autonomous underwater robots that will be smaller, more robust and cheaper than previous models. The AUVs will be able to find their bearings in clear mountain reservoirs as well as in turbid harbor water, and they will be suitable for work on the deep-sea floor as well as for inspections of the shallow concrete foundations that offshore wind power stations are mounted on.

The engineers from the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation in Karlsruhe, Germany are working on the "eyes" of the underwater robots. Optical perception is based on a special exposure and analysis technique that permits orientation even in turbid water. First the system determines the distance to the object; then the camera emits a laser pulse which is reflected by the object, such as a wall. Microseconds before the reflected flash of light arrives, the camera opens its aperture and the sensors capture the incident light pulses.

At the Ilmenau branch of the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation, Rauschenbach's team is developing the "brain" of the robot: a control program that keeps the AUV on course in currents, for example at a fixed distance from the wall that is to be examined. The Fraunhofer Institute for Biomedical Engineering IBMT in St. Ingbert provides the silicone encapsulation for the pressure-tolerant construction of the electronic circuits as well as the "ears" of the new robot: ultrasound sensors that permit the inspection of objects. In contrast to conventional sonar technology, the researchers are using high-frequency sound waves which are reflected by obstacles and registered by the sensor. The powerful but lightweight lithium batteries from the Fraunhofer ISIT in Itzehoe that supply the AUV with energy are also encapsulated in silicone.
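The gated-camera trick works because the reflected pulse's arrival time is fixed by the round-trip distance to the target. A minimal timing sketch (the refractive index of water and the target distance below are assumed values, not figures from the Fraunhofer release):

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # refractive index of water (assumed typical value)

def gate_delay(distance_m):
    """Round-trip travel time of a light pulse to an object and back
    through water, i.e. how long the camera waits before opening its
    aperture so that only light from the target range is captured."""
    return 2 * distance_m * N_WATER / C_VACUUM

# For a wall 10 m away the gate opens roughly 89 nanoseconds after the
# laser pulse fires; light scattered by nearer, turbid water arrives
# earlier and is simply not recorded.
delay = gate_delay(10.0)
```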

A special energy management system that researchers at the Fraunhofer Institute for Environmental, Safety and Energy Technology UMSICHT in Oberhausen, Germany have developed saves power and ensures that, in an emergency, the data are saved before the robot runs out of energy and has to surface.

A torpedo-shaped prototype, two meters long and equipped with eyes, ears, a brain, a motor and batteries, will go on its maiden voyage this year in a new tank in Ilmenau. The tank is only three meters deep, but "that's enough to test the decisive functions," affirms Dr. Rauschenbach. In autumn 2011, the autonomous diving robot will put to sea for the first time from the research vessel POSEIDON: several dives to depths of up to 6,000 meters are planned.
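A quick hydrostatic estimate shows why the pressure-tolerant, silicone-encapsulated electronics matter at the planned dive depth (typical seawater density assumed; not a figure from the release):

```python
RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density (assumed)
G = 9.81                # m/s^2, gravitational acceleration

def hydrostatic_pressure(depth_m):
    """Gauge pressure P = rho * g * h (in pascals) at a given depth."""
    return RHO_SEAWATER * G * depth_m

# At the planned 6,000 m depth the electronics face roughly 60 MPa,
# about 600 times atmospheric pressure.
p = hydrostatic_pressure(6000)
```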


Source

Tiny Brained Bees Solve a Complex Mathematical Problem

Scientists at Royal Holloway, University of London and Queen Mary, University of London have discovered that bees learn to fly the shortest possible route between flowers even if they discover the flowers in a different order. Bees are effectively solving the 'Travelling Salesman Problem', and they are the first animals known to do so.

The Travelling Salesman must find the shortest route that allows him to visit all locations on his route. Computers solve it by comparing the length of all possible routes and choosing the shortest. Bees, however, solve it without computer assistance, using a brain the size of a grass seed.
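The brute-force approach the article describes can be sketched in a few lines of Python; the flower coordinates below are hypothetical:

```python
from itertools import permutations
from math import dist

def shortest_route(nest, flowers):
    """Brute-force Travelling Salesman: compare the length of every
    possible visiting order and keep the shortest. Feasible only for a
    handful of sites, since k flowers mean k! candidate routes."""
    best_order, best_len = None, float("inf")
    for order in permutations(flowers):
        stops = (nest,) + order + (nest,)      # start and end at the nest
        length = sum(dist(a, b) for a, b in zip(stops, stops[1:]))
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

# Hypothetical flower coordinates (metres from the nest).
order, length = shortest_route((0.0, 0.0),
                               [(1.0, 5.0), (4.0, 1.0), (6.0, 6.0), (2.0, 2.0)])
```

With k flowers there are k! orderings to compare, which is exactly why a bee finding near-optimal routes without exhaustive search is remarkable.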

Dr Nigel Raine, from the School of Biological Sciences at Royal Holloway explains: "Foraging bees solve travelling salesman problems every day. They visit flowers at multiple locations and, because bees use lots of energy to fly, they find a route which keeps flying to a minimum."

The team used computer controlled artificial flowers to test whether bees would follow a route defined by the order in which they discovered the flowers or if they would find the shortest route. After exploring the location of the flowers, bees quickly learned to fly the shortest route.

As well as enhancing our understanding of how bees move around the landscape pollinating crops and wild flowers, this research, which is due to be published in The American Naturalist, has other applications. Our lifestyle relies on networks such as traffic on the roads, information flow on the web and business supply chains. By understanding how bees can solve their problem with such a tiny brain, we can improve our management of these everyday networks without needing lots of computer time.

Dr Raine adds: "Despite their tiny brains, bees are capable of extraordinary feats of behaviour. We need to understand how they can solve the Travelling Salesman Problem without a computer. What short-cuts do they use?"



Source