Friday, March 18, 2011

Bomb Disposal Robot Getting Ready for Front-Line Action

The organisations have come together to create a lightweight, remote-operated vehicle, or robot, that can be controlled by a wireless device, not unlike a games console, from a distance of several hundred metres.

The innovative robot, which can climb stairs and even open doors, will be used by soldiers on bomb disposal missions in countries such as Afghanistan.

Experts from the Department of Computer & Communications Engineering, based within the university's School of Engineering, are working on the project alongside NIC Instruments Limited of Folkestone, manufacturers of security search and bomb disposal equipment.

Much lighter and more flexible than traditional bomb disposal units, the robot is easier for soldiers to carry and use when out in the field. It has cameras on board, which relay images back to the operator via the hand-held control, and includes a versatile gripper which can carry and manipulate delicate items.

The robot also includes nuclear, biological and chemical weapons sensors.

Measuring just 72 cm by 35 cm, the robot weighs 48 kilogrammes and can move at speeds of up to eight miles per hour.


Source

Thursday, March 10, 2011

How Do People Respond to Being Touched by a Robotic Nurse?

The research is being presented March 9 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"What we found was that how people perceived the intent of the robot was really important to how they responded. So, even though the robot touched people in the same way, if people thought the robot was doing that to clean them, versus doing that to comfort them, it made a significant difference in the way they responded and whether they found that contact favorable or not," said Charlie Kemp, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

In the study, researchers looked at how people responded when a robotic nurse, known as Cody, touched and wiped a person's forearm. Although Cody touched the subjects in exactly the same way, they reacted more positively when they believed Cody intended to clean their arm than when they believed Cody intended to comfort them.

These results echo similar studies done with nurses.

"There have been studies of nurses and they've looked at how people respond to physical contact with nurses," said Kemp, who is also an adjunct professor in Georgia Tech's College of Computing."And they found that, in general, if people interpreted the touch of the nurse as being instrumental, as being important to the task, then people were OK with it. But if people interpreted the touch as being to provide comfort… people were not so comfortable with that."

In addition, Kemp and his research team tested whether people responded more favorably when the robot verbally indicated that it was about to touch them versus touching them without saying anything.

"The results suggest that people preferred when the robot did not actually give them the warning," said Tiffany Chen, doctoral student at Georgia Tech."We think this might be because they were startled when the robot started speaking, but the results are generally inconclusive."

Since many useful tasks require that a robot touch a person, the team believes that future research should investigate ways to make robot touch more acceptable to people, especially in healthcare. Many important healthcare tasks, such as wound dressing and assisting with hygiene, would require a robotic nurse to touch the patient's body.

"If we want robots to be successful in healthcare, we're going to need to think about how do we make those robots communicate their intention and how do people interpret the intentions of the robot," added Kemp."And I think people haven't been as focused on that until now. Primarily people have been focused on how can we make the robot safe, how can we make it do its task effectively. But that's not going to be enough if we actually want these robots out there helping people in the real world."

In addition to Kemp and Chen, the research group consists of Andrea Thomaz, assistant professor in Georgia Tech's College of Computing, and postdoctoral fellow Chih-Hung Aaron King.


Source

Wednesday, March 9, 2011

How Can Robots Get Our Attention?

The research is being presented March 8 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"The primary focus was trying to give Simon, our robot, the ability to understand when a human being seems to be reacting appropriately, or in some sense is interested now in a response with respect to Simon and to be able to do it using a visual medium, a camera," said Aaron Bobick, professor and chair of the School of Interactive Computing in Georgia Tech's College of Computing.

Using the socially expressive robot Simon, from Assistant Professor Andrea Thomaz's Socially Intelligent Machines lab, researchers wanted to see if they could tell when he had successfully attracted the attention of a human who was busily engaged in a task and when he had not.

"Simon would make some form of a gesture, or some form of an action when the user was present, and the computer vision task was to try to determine whether or not you had captured the attention of the human being," said Bobick.

With close to 80 percent accuracy, Simon was able to tell, using only his cameras as a guide, whether someone was paying attention to him or ignoring him.

"We would like to bring robots into the human world. That means they have to engage with human beings, and human beings have an expectation of being engaged in a way similar to the way other human beings would engage with them," said Bobick.

"Other human beings understand turn-taking. They understand that if I make some indication, they'll turn and face someone when they want to engage with them and they won't when they don't want to engage with them. In order for these robots to work with us effectively, they have to obey these same kinds of social conventions, which means they have to perceive the same thing humans perceive in determining how to abide by those conventions," he added.

Researchers plan to go further with their investigations into how Simon can read communication cues, studying whether he can tell that a person is paying attention from their gaze, from elements of their language, or from other actions.

"Previously people would have pre-defined notions of what the user should do in a particular context and they would look for those," said Bobick."That only works when the person behaves exactly as expected. Our approach, which I think is the most novel element, is to use the user's current behavior as the baseline and observe what changes."

The research team for this study consisted of Bobick, Thomaz, doctoral student Jinhan Lee and undergraduate student Jeffrey Kiser.


Source

Tuesday, March 8, 2011

Teaching Robots to Move Like Humans

The research was presented March 7 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"It's important to build robots that meet people's social expectations because we think that will make it easier for people to understand how to approach them and how to interact with them," said Andrea Thomaz, assistant professor in the School of Interactive Computing at Georgia Tech's College of Computing.

Thomaz, along with Ph.D. student Michael Gielniak, conducted a study in which they asked how easily people can recognize what a robot is doing by watching its movements.

"Robot motion is typically characterized by jerky movements, with a lot of stops and starts, unlike human movement which is more fluid and dynamic," said Gielniak."We want humans to interact with robots just as they might interact with other humans, so that it's intuitive."

Using a series of human movements recorded in a motion-capture lab, they programmed the robot, Simon, to perform them. They also optimized that motion, allowing more joints to move at the same time and letting the movements flow into each other, in an attempt to make it more human-like. They then asked their human subjects to watch Simon and identify the movements he made.
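Gielniak and Thomaz's actual optimization is not described in this article, but the general idea of letting movements overlap and flow into each other can be illustrated with a small, hypothetical sketch: cross-fading the end of one joint-angle trajectory into the start of the next so the robot does not stop and restart between gestures. The function and parameter names below are invented for illustration.

```python
import numpy as np

def blend_motions(traj_a, traj_b, overlap=10):
    # Cross-fade the tail of one joint-angle trajectory into the head of the next,
    # so consecutive movements flow into each other instead of stopping and restarting.
    w = np.linspace(0.0, 1.0, overlap)[:, None]          # blend weights over the overlap window
    blended = (1.0 - w) * traj_a[-overlap:] + w * traj_b[:overlap]
    return np.vstack([traj_a[:-overlap], blended, traj_b[overlap:]])

# Toy usage: two 50-frame trajectories for a 7-joint arm (angles in radians).
t = np.linspace(0.0, 1.0, 50)[:, None]
raise_arm = np.hstack([np.sin(np.pi * t)] * 7)
lower_arm = np.hstack([np.cos(np.pi * t)] * 7)
smooth = blend_motions(raise_arm, lower_arm, overlap=15)
print(smooth.shape)   # (85, 7): one continuous motion instead of two abrupt segments
```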

"When the motion was more human-like, human beings were able to watch the motion and perceive what the robot was doing more easily," said Gielniak.

In addition, they tested the algorithm they used to create the optimized motion by asking humans to perform the movements they saw Simon making. The thinking was that if the movement created by the algorithm was indeed more human-like, then the subjects should have an easier time mimicking it. Turns out they did.

"We found that this optimization we do to create more life-like motion allows people to identify the motion more easily and mimic it more exactly," said Thomaz.

The research that Thomaz and Gielniak are doing is part of a broader effort to get robots to move more like humans do. In future work, the pair plan to look at how to get Simon to perform the same movements in various ways.

"So, instead of having the robot move the exact same way every single time you want the robot to perform a similar action like waving, you always want to see a different wave so that people forget that this is a robot they're interacting with," said Gielniak.


Source

Saturday, March 5, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist."Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is "computer vision." That is, how can a simple webcam work more like the human eye? Can camera-captured data understand a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says."Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says."We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says,"could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Friday, March 4, 2011

Method Developed to Match Police Sketch, Mug Shot: Algorithms and Software Will Match Sketches With Mugshots in Police Databases

A team led by MSU University Distinguished Professor of Computer Science and Engineering Anil Jain and doctoral student Brendan Klare has developed a set of algorithms and created software that will automatically match hand-drawn facial sketches to mug shots that are stored in law enforcement databases.

Once in use, Klare said, the implications are huge.

"We're dealing with the worst of the worst here," he said."Police sketch artists aren't called in because someone stole a pack of gum. A lot of time is spent generating these facial sketches so it only makes sense that they are matched with the available technology to catch these criminals."

Typically, facial sketches are drawn by forensic artists from information obtained from a witness. Unfortunately, Klare said, "often the facial sketch is not an accurate depiction of what the person looks like."

There are also a few commercial software programs available that produce sketches based on a witness' description. Those programs, however, tend to be less accurate than sketches drawn by a trained forensic artist.

The MSU project is being conducted in the Pattern Recognition and Image Processing lab in the Department of Computer Science and Engineering. It is the first large-scale experiment matching operational forensic sketches with photographs and, so far, results have been promising.

"We improved significantly on one of the top commercial face-recognition systems," Klare said."Using a database of more than 10,000 mug shot photos, 45 percent of the time we had the correct person."

All of the sketches used were from real crimes where the criminal was later identified.

"We don't match them pixel by pixel," said Jain, director of the PRIP lab."We match them up by finding high-level features from both the sketch and the photo; features such as the structural distribution and the shape of the eyes, nose and chin."

This project and its results appear in the March 2011 issue of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence.

The MSU team plans to field test the system in about a year.

The sketches used in this research were provided by forensic artists Lois Gibson and Karen Taylor, and forensic sketch artists working for the Michigan State Police.


Source

Wednesday, March 2, 2011

New Technique for Improving Robot Navigation Systems

An autonomous mobile robot is one that can navigate its environment without colliding or getting lost, and can recover from spatial disorientation. Conducted by Sergio Guadarrama, a researcher at the European Centre for Soft Computing, and Antonio Ruiz, assistant professor at the Universidad Politécnica de Madrid's Facultad de Informática, and published in the journal Information Sciences, the research focuses on map building, one of the skills required for autonomous navigation: the robot must explore an unknown environment (an enclosure, a plant, buildings, etc.) and draw up a map of it. Before it can do this, the robot has to use its sensors to perceive obstacles.

The main sensor types used for autonomous navigation are vision and range sensors. Although vision sensors can capture much more information from the environment, this research used range sensors, specifically ultrasonic sensors, which are less accurate, to demonstrate that the model builds accurate maps from sparse and imprecise input data.

Once it has captured the ranges, the robot has to map these distances to obstacles on the map. Point clouds are used to draw the map, as the imprecision of the range data rules out the use of straight lines or even isolated points. Even so, the resulting map is by no means an architectural blueprint of the site, because not even the robot's location is precisely known, and there is no guarantee that each point cloud is correctly positioned. In actual fact, one and the same obstacle can be viewed properly from one robot position, but not from another. This can produce contradictory information (obstacle and no obstacle) about the same area of the map under construction. Which of the two interpretations is correct?

Exploring unknown spaces

The solution is based on linguistic descriptions of the antonyms "vacant" and "occupied" and inspired by computing with words and the computational theory of perceptions, two theories proposed by L.A. Zadeh of the University of California at Berkeley. Whereas other published research views obstacles and empty spaces as complementary concepts, this research assumes that, rather than being complements, obstacles and vacant spaces are a pair of opposites.

For example, we can infer that an occupied space is not vacant, but we cannot infer that a space that is not occupied is vacant. Such a space could be unknown or ambiguous, because the robot has limited information about its environment. Contradictions between "vacant" and "occupied" are also represented explicitly.

This way, the robot is able to make a distinction between two types of unknown spaces: spaces that are unknown because the information about them is contradictory and spaces that are unknown because they are unexplored. This leads the robot to navigate with caution through the contradictory spaces and to explore the unexplored ones. The map is constructed using linguistic rules, such as "If the measured distance is short, then assign a high confidence level to the measurement" or "If an obstacle has been seen several times, then increase the confidence in its presence," where "short," "high" and "several" are fuzzy sets, as defined in fuzzy set theory. Contradictions are resolved by relying more heavily on shorter ranges and by combining multiple measurements.
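As a rough, hypothetical sketch of these ideas (not the authors' computing-with-words formulation), the fragment below keeps separate "occupied" and "vacant" confidence grids rather than treating one as the complement of the other, weights each sonar reading by a short-distance rule, accumulates repeated sightings, and distinguishes contradictory cells from unexplored ones. All thresholds and update constants are invented for illustration.

```python
import numpy as np

GRID = 50
occupied = np.zeros((GRID, GRID))   # confidence that each cell is occupied
vacant = np.zeros((GRID, GRID))     # confidence that each cell is vacant (kept separately, not as 1 - occupied)

def confidence_from_range(distance, max_range=4.0):
    # Rule: "If the measured distance is short, then assign a high confidence level to the measurement."
    return max(0.0, 1.0 - distance / max_range)

def update_cell(cell, distance, hit):
    # Accumulate evidence: the cell at the end of a sonar ray is occupied (hit),
    # cells along the ray are vacant. Repeated sightings raise the confidence.
    c = confidence_from_range(distance)
    grid = occupied if hit else vacant
    grid[cell] = min(1.0, grid[cell] + 0.3 * c)

def cell_state(cell, hi=0.6, lo=0.2):
    # Distinguish contradictory cells (strong evidence for both) from unexplored ones (evidence for neither).
    o, v = occupied[cell], vacant[cell]
    if o >= hi and v >= hi:
        return "contradictory"
    if o < lo and v < lo:
        return "unexplored"
    return "occupied" if o > v else "vacant"

# Toy usage: the same cell is reported occupied twice from 1 m away and vacant once
# from 3.5 m away; the short-range readings carry more weight.
update_cell((10, 10), 1.0, hit=True)
update_cell((10, 10), 1.0, hit=True)
update_cell((10, 10), 3.5, hit=False)
print(cell_state((10, 10)))   # "occupied"
```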

Compared with the results of other methods, the outcomes show that the maps built using this technique better capture the shape of walls and open spaces, and contain fewer errors from incorrect sensor data. This opens opportunities for improving the current autonomous navigation systems for robots.


Source