Thursday, December 30, 2010

New Cognitive Robotics Lab Tests Theories of Human Thought

"The real world has a lot of inconsistency that humans handle almost without noticing -- for example, we walk on uneven terrain, we see in shifting light," said Professor Vladislav Daniel Veksler, who is currently teaching Cognitive Robotics."With robots, we can see the problems humans face when navigating their environment."

Cognitive Robotics marries the study of cognitive science -- how the brain represents and transforms information -- with the challenges of a physical environment. Advances in cognitive robotics transfer to artificial intelligence, which seeks to develop more efficient computer systems patterned on the versatility of human thought.

Professor Bram Van Heuveln, who organized the lab, said cognitive scientists have developed a suite of elements -- perception/action, planning, reasoning, memory, decision-making -- that are believed to constitute human thought. When properly modeled and connected, those elements are capable of solving complex problems without the raw power required by precise mathematical computations.

"Suppose we wanted to build a robot to catch fly balls in an outfield. There are two approaches: one uses a lot of calculations -- Newton's law, mechanics, trigonometry, calculus -- to get the robot to be in the right spot at the right time," said Van Heuveln."But that's not the way humans do it. We just keep moving toward the ball. It's a very simple solution that doesn't involve a lot of computation but it gets the job done."

Robotics is an ideal testing ground for that principle because robots act in the real world, and a correct cognitive solution must withstand the unexpected variables the real world presents.

"The physical world can help us to drive science because it's different from any simulated world we could come up with -- the camera shakes, the motors slip, there's friction, the light changes," Veksler said."This platform -- robotics -- allows us to see that you can't rely on calculations. You have to be adaptive."

The lab is open to all students at Rensselaer. In its first semester, the lab has largely attracted computer science and cognitive science students enrolled in a Cognitive Robotics course taught by Veksler, but Veksler and Van Heuveln hope it will attract more engineering and art students as word of the facility spreads.

"We want different students together in one space -- a place where we can bring the different disciplines and perspectives together," said Van Heuveln."I would like students to use this space for independent research: they come up with the research project, they say 'let's look at this.'"

The lab is equipped with five "Create" robots -- essentially a Roomba robotic vacuum cleaner paired with a laptop; three hand-eye systems; one Chiara (which looks like a large metal crab); and 10 LEGO robots paired with the Sony Handy Board robotic controller.

On a recent day, Jacqui Brunelli and Benno Lee were working on their robot "cat" and "mouse" pair, which try to chase and evade each other, respectively; Shane Reilly was improving the computer "vision" of his robotic arm; and Ben Ball was programming his robot to maintain a fixed distance from a pink object waved in front of its "eye."

"The thing that I've learned is that the sensor data isn't exact -- what it 'sees' constantly changes by a few pixels -- and to try to go by that isn't going to work," said Ball, a junior and student of computer science and physics.

Ball said he is trying to pattern his robot on a more human approach.

"We don't just look at an object and walk toward it. We check our position, adjusting our course," Ball said."I need to devise an iterative approach where the robot looks at something, then moves, then looks again to check its results."

The work of the students, who program their robots with the Tekkotsu open-source software, could be applied in future projects, said Van Heuveln.

"As a cognitive scientist, I want this to be built on elements that are cognitively plausible and that are recyclable -- parts of cognition that I can apply to other solutions as well," said Van Heuveln."To me, that's a heck of a lot more interesting than the computational solution."

Even in generic domains, their early investigations show how a cognitive approach employing limited resources can outpace more powerful computers using a brute-force approach, said Veksler.

"We look to humans not just because we want to simulate what we do, which is an interesting problem in itself, but also because we're smart," said Veksler."Some of the things we have, like limited working memory -- which may seem like a bad thing -- are actually optimal for solving problems in our environment. If you remembered everything, how would you know what's important?"


Source

Wednesday, December 1, 2010

New Psychology Theory Enables Computers to Mimic Human Creativity

Solving this"insight problem" requires creativity, a skill at which humans excel (the coin is a fake --"B.C." and Arabic numerals did not exist at the time) and computers do not. Now, a new explanation of how humans solve problems creatively -- including the mathematical formulations for facilitating the incorporation of the theory in artificial intelligence programs -- provides a roadmap to building systems that perform like humans at the task.

Ron Sun, Rensselaer Polytechnic Institute professor of cognitive science, said the new "Explicit-Implicit Interaction Theory," recently introduced in an article in Psychological Review, could be used for future artificial intelligence.

"As a psychological theory, this theory pushes forward the field of research on creative problem solving and offers an explanation of the human mind and how we solve problems creatively," Sun said."But this model can also be used as the basis for creating future artificial intelligence programs that are good at solving problems creatively."

The paper, titled"Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model," by Sun and Sèbastien Hèlie of University of California, Santa Barbara, appeared in the July edition ofPsychological Review. Discussion of the theory is accompanied by mathematical specifications for the"CLARION" cognitive architecture -- a computer program developed by Sun's research group to act like a cognitive system -- as well as successful computer simulations of the theory.

In the paper, Sun and Hélie compared the performance of the CLARION model using "Explicit-Implicit Interaction" theory with results from previous human trials -- including tests involving the coin question -- and found the results to be nearly identical in several aspects of problem solving.

In the tests involving the coin question, human subjects were given a chance to respond after being interrupted either to discuss their thought process or to work on an unrelated task. In that experiment, 35.6 percent of participants answered correctly after discussing their thinking, while 45.8 percent of participants answered correctly after working on another task.

In 5,000 runs of the CLARION program set for similar interruptions, CLARION answered correctly 35.3 percent of the time in the first instance, and 45.3 percent of the time in the second instance.

"The simulation data matches the human data very well," said Sun.

Explicit-Implicit Interaction theory is the most recent advance on a well-regarded outline of creative problem solving known as "Stage Decomposition," developed by Graham Wallas in his seminal 1926 book "The Art of Thought." According to stage decomposition, humans go through four stages -- preparation, incubation, insight (illumination), and verification -- in solving problems creatively.

Building on Wallas' work, several disparate theories have since been advanced to explain the specific processes the human mind uses during the stages of incubation and insight. Competing theories propose that incubation -- a period away from deliberate work on the problem -- is a time of recovery from mental fatigue, an opportunity for the mind to work on the problem unconsciously, a time during which the mind discards false assumptions, or a time in which solutions to similar problems are retrieved from memory, among other ideas.

Each theory can be represented mathematically in artificial intelligence models. However, most models choose one theory rather than incorporating several, and are therefore fragmentary at best.

Sun and Hélie's Explicit-Implicit Interaction (EII) theory integrates several of the competing theories into a larger equation.

"EII unifies a lot of fragmentary pre-existing theories," Sun said."These pre-existing theories only account for some aspects of creative problem solving, but not in a unified way. EII unifies those fragments and provides a more coherent, more complete theory."

The basic principles of EII propose the coexistence of two different types of knowledge and processing: explicit and implicit. Explicit knowledge is easier to access and verbalize, can be rendered symbolically, and requires more attention to process. Implicit knowledge is relatively inaccessible, harder to verbalize, more vague, and requires less attention to process.

In solving a problem, explicit knowledge could be the knowledge used in reasoning, deliberately thinking through different options, while implicit knowledge is the intuition that gives rise to a solution suddenly. Both types of knowledge are involved simultaneously to solve a problem and reinforce each other in the process. By including this principle in each step, Sun was able to achieve a successful system.
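
As a rough illustration only -- emphatically not CLARION's actual mathematics -- the toy Python sketch below shows the flavor of two coexisting processes: an exact, rule-based explicit step and a noisy, associative implicit step that can surface an answer when deliberate reasoning stalls. All names and knowledge entries are invented for the example.

```python
import random

# Invented toy knowledge base -- not CLARION's representations.
EXPLICIT_RULES = {("coin", "dated B.C."): "fake"}                   # precise rules
IMPLICIT_NET = {"coin": {"money": 0.9, "fake": 0.6, "metal": 0.5}}  # associations

def explicit_step(problem):
    """Deliberate, rule-based processing: an exact match or nothing."""
    return EXPLICIT_RULES.get(problem)

def implicit_step(cue, noise=0.3):
    """Soft associative processing: sample an associate by strength."""
    assoc = IMPLICIT_NET.get(cue)
    if not assoc:
        return None
    words, weights = zip(*assoc.items())
    jittered = [w + random.gauss(0.0, noise) for w in weights]
    return words[jittered.index(max(jittered))]

def solve(problem):
    # Both processes contribute; here the implicit "insight" simply
    # surfaces when explicit reasoning stalls (a crude stand-in for incubation).
    return explicit_step(problem) or implicit_step(problem[0])

print(solve(("coin", "dated B.C.")))  # explicit rule fires: 'fake'
print(solve(("coin", "shiny")))       # falls back to implicit association
```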

"This tells us how creative problem solving may emerge from the interaction of explicit and implicit cognitive processes; why both types of processes are necessary for creative problem solving, as well as in many other psychological domains and functionalities," said Sun.



Source

Friday, November 26, 2010

New Method to Identify People by Their Ears

In a paper entitled "A Novel Ray Analogy for Enrolment of Ear Biometrics," just presented at the IEEE Fourth International Conference on Biometrics: Theory, Applications and Systems, scientists from the University of Southampton's School of Electronics and Computer Science (ECS) described how a technique called the image ray transform can highlight tubular structures such as ears, making it possible to identify them.

The research, carried out by Professor Mark Nixon, Dr John Carter and Alastair Cummings at ECS, describes how the transform is capable of highlighting tubular structures such as the helix of the ear and spectacle frames and, by exploiting the elliptical shape of the helix, can be used as the basis of a method of enrolment for ear biometrics.
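
The paper's algorithm is more involved, but the core idea can be caricatured in a few lines of Python: treat pixel intensity as a refractive index, cast rays through the image, and let bright, curved regions trap rays so that an accumulator highlights them. The reflection rule below is a crude stand-in for the paper's angle-dependent optics, and the synthetic "ring" input is invented for the demo.

```python
import numpy as np

def image_ray_transform(img, n_rays=2000, n_max=2.0, steps=200, seed=0):
    """Caricature of the image ray transform.

    Intensity is mapped to a refractive index, so bright curved
    structures (like the helix of an ear) tend to trap rays; an
    accumulator counts ray visits, highlighting those structures.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    n = 1.0 + (img.astype(float) / max(img.max(), 1)) * (n_max - 1.0)
    acc = np.zeros_like(n)
    for _ in range(n_rays):
        y, x = rng.uniform(0, h), rng.uniform(0, w)
        theta = rng.uniform(0, 2 * np.pi)
        for _ in range(steps):
            y2, x2 = y + np.sin(theta), x + np.cos(theta)
            if not (0 <= y2 < h and 0 <= x2 < w):
                break  # ray left the image
            n_here, n_next = n[int(y), int(x)], n[int(y2), int(x2)]
            # Crude stand-in for total internal reflection: the bigger
            # the index drop ahead, the likelier the ray is deflected.
            if n_next < n_here and rng.random() < 1.0 - n_next / n_here:
                theta += np.pi / 2
            else:
                y, x = y2, x2
            acc[int(y), int(x)] += 1
    return acc

# Toy input: a bright ring (a crude stand-in for an ear's helix).
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32, xx - 32)
ring = ((r > 18) & (r < 24)).astype(float) * 255
heat = image_ray_transform(ring)
print(heat.max(), heat.mean())  # ring pixels accumulate far more visits
```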

Professor Nixon, one of the UK's earliest researchers in this field, first proved that ears were a viable biometric back in 1999.

At that time, he noted that ears have certain advantages over more established biometrics: they have a rich and stable structure that is preserved from birth to old age, and instead of aging they simply get bigger. The ear also does not suffer from changes in facial expression, and it is firmly fixed in the middle of the side of the head against a predictable background, unlike face recognition, which usually requires the face to be captured against a controlled background.

However, the fact that ears can be concealed by hair led Professor Nixon and his team to research their use as a biometric further, and to come up with new algorithms that make it possible to identify and isolate the ear from the head.

The technique presented by the scientists achieves 99.6% success at enrolment across 252 images of the XM2VTS database, displaying a resistance to confusion with hair and spectacles. These results show great potential for enhancing the detection of structural features.

"Feature recognition is one of the biggest challenges of computer vision," said Alastair Cummings, the PhD student for the research."The ray transform technique may also be appropriate for use in gait biometrics, as legs act as tubular features that the transform is adept at extracting. The transform could also be extended to work upon 3D images, both spatial and spatio-temporal, for 3D biometrics or object tracking. It is a general pre-processing technique for feature extraction in computer images, a technology which is now pervading manufacturing, surveillance and medical applications."

A copy of "A Novel Ray Analogy for Enrolment of Ear Biometrics" can be accessed at: http://eprints.ecs.soton.ac.uk/21546/



Source

Thursday, November 25, 2010

I Want to See What You See: Babies Treat 'Social Robots' as Sentient Beings

At 18 months old, babies are intensely curious about what makes humans tick, and that curiosity drives their learning. A team of University of Washington researchers is studying how infants tell which entities are "psychological agents" that can think and feel.

Research published in the October/November issue of Neural Networks provides a clue as to how babies decide whether a new object, such as a robot, is a sentient being or an inanimate thing. Babies who watched a robot interact socially with people were four times as likely to learn from the robot as babies who did not see the interactions.

"Babies learn best through social interactions, but what makes something 'social' for a baby?" said Andrew Meltzoff, lead author of the paper and co-director of the UW's Institute for Learning and Brain Sciences."It is not just what something looks like, but how it moves and interacts with others that gives it special meaning to the baby."

The UW researchers hypothesized that babies would be more likely to view the robot as a psychological being if they saw other friendly human beings socially interacting with it. "Babies look to us for guidance in how to interpret things, and if we treat something as a psychological agent, they will, too," Meltzoff said. "Even more remarkably, they will learn from it, because social interaction unlocks the key to early learning."

During the experiment, an 18-month-old baby sat on its parent's lap facing Rechele Brooks, a UW research assistant professor and a co-author of the study. Sixty-four babies participated in the study, and they were tested individually. They played with toys for a few minutes, getting used to the experimental setting. Once the babies were comfortable, Brooks removed a barrier that had hidden a metallic humanoid robot with arms, legs, a torso and a cube-shaped head containing camera lenses for eyes. The robot -- controlled by a researcher hidden from the baby -- waved, and Brooks said, "Oh, hi! That's our robot!"

Following a script, Brooks asked the robot, named Morphy, if it wanted to play, and then led it through a game. She would ask, "Where is your tummy?" and "Where is your head?" and the robot pointed to its torso and its head. Then Brooks demonstrated arm movements and Morphy imitated them. The babies looked back and forth as if at a ping-pong match, Brooks said.

At the end of the 90-second script, Brooks excused herself from the room. The researchers then measured whether the baby thought the robot was more than its metal parts.

The robot beeped and shifted its head slightly -- enough of a rousing to capture the babies' attention. The robot turned its head to look at a toy next to the table where the baby sat on the parent's lap. Most babies -- 13 out of 16 -- who had watched the robot play with Brooks followed the robot's gaze. In a control group of babies who had been familiarized with the robot but had not seen Morphy engage in games, only three of 16 turned to where the robot was looking.

"We are using modern technology to explore an age-old question about the essence of being human," said Meltzoff, who holds the Job and Gertrud Tamaki Endowed Chair in psychology at the UW."The babies are telling us that communication with other people is a fundamental feature of being human."

The study has implications for humanoid robots, said co-author Rajesh Rao, UW associate professor of computer science and engineering and head of UW's neural systems laboratory. Rao's team helped design the computer programs that made Morphy appear social. "The study suggests that if you want to build a companion robot, it is not sufficient to make it look human," said Rao. "The robot must also be able to interact socially with humans, an interesting challenge for robotics."

The study was funded by the Office of Naval Research and the National Science Foundation. Aaron Shon, who graduated from UW with a doctorate in computer science and engineering, is also a co-author on the paper.



Source

Wednesday, November 24, 2010

McSleepy Meets DaVinci: Doctors Conduct First-Ever All-Robotic Surgery and Anesthesia

“Collaboration between DaVinci, a surgical robot, and the anesthetic robot McSleepy seemed an obvious fit; robots in medicine can provide health care of higher safety and precision, thus ultimately improving outcomes,” said Dr. TM Hemmerling of McGill University and the MUHC’s Department of Anesthesia, who is also a neuroscience researcher at the Research Institute (RI) of the MUHC.

“The DaVinci allows us to work from a workstation operating surgical instruments with delicate movements of our fingers with a precision that cannot be provided by humans alone,” said Dr. A. Aprikian, MUHC urologist in chief and Director of the MUHC Cancer Care Mission, and also a researcher in the Cancer Axis at the RI MUHC. He and his team of surgeons operate the robotic arms from a dedicated workstation via video control with unsurpassed 3D HD image quality.

“Providing anesthesia for robotic prostatectomy can be challenging because of the specific patient positioning and the high degree of muscle relaxation necessary to maintain perfect conditions for the surgical team,” added Dr. Hemmerling. “Automated anesthesia delivery via McSleepy guarantees the same high quality of care every time it is used, independent of the subjective level of expertise. It can be configured exactly to the specific needs of different surgeries, such as robotic surgery.”

“Obviously, there is still some work needed to perfect the all-robotic approach – from technical aspects to space requirements for the robots,” added Dr. Hemmerling. “Whereas robots have been used in surgery for quite some time, anesthesia has finally caught up. Robots will not replace doctors but help them to perform to the highest standards.”

Combining both robots, the specialists at the MUHC can deliver the most modern and accurate patient care. The researchers will use the results of this project to test all-robotic surgery and anesthesia in a larger group of patients and various types of surgery. “This should allow for faster, safer and more precise surgery for our patients,” concluded Dr. Aprikian.



Source

Tuesday, November 23, 2010

Underwater Robots on Course to the Deep Sea

Even when equipped with compressed-air bottles and diving regulators, humans reach their limits very quickly under water. In contrast, unmanned submarine vehicles that are connected by cable to the control center permit long and deep dives. Today remote-controlled diving robots are used for research, inspection and maintenance work. The possible applications of this technology are limited, however, by the length of the cable and the instinct of the navigator. No wonder that researchers are working on autonomous underwater robots which orient themselves under water and carry out jobs without any help from humans.

In the meantime, there are AUVs (autonomous underwater vehicles) which collect data independently or take samples before returning to their starting points. "For the time being, the technology is too expensive to carry out routine work, such as inspections of bulkheads, dams or ships' bellies," explains Dr. Thomas Rauschenbach, Director of the Application Center System Technology AST in Ilmenau, Germany, at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB. This may change soon. Together with researchers at four Fraunhofer Institutes, Rauschenbach's team is presently working on a generation of autonomous underwater robots that will be smaller, more robust and cheaper than previous models. The AUVs will be able to find their bearings just as well in clear mountain reservoirs as in turbid harbor water, and they will be suitable for work on the floor of the deep sea as well as for inspections of the shallow concrete bases that offshore wind power stations are mounted on.

The engineers from the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation in Karlsruhe, Germany are working on the "eyes" for underwater robots. Optical perception is based on a special exposure and analysis technology that permits orientation even in turbid water. First, the system determines the distance to the object; then the camera emits a laser pulse which is reflected by the object, such as a wall. Microseconds before the reflected light flash arrives, the camera opens the aperture and the sensors capture the incident light pulses.

At the Ilmenau branch of the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation, Rauschenbach's team is developing the "brain" of the robot: a control program that keeps the AUV on course in currents, for example at a fixed distance from the wall being examined. The Fraunhofer Institute for Biomedical Engineering IBMT in St. Ingbert provides the silicone encapsulation for the pressure-tolerant construction of electronic circuits as well as the "ears" of the new robot: ultrasound sensors permit the inspection of objects. In contrast to previously conventional sonar technology, the researchers are now using high-frequency sound waves which are reflected by obstacles and registered by the sensor. The powerful but lightweight lithium batteries from the Fraunhofer ISIT in Itzehoe that supply the AUV with energy are also encapsulated in silicone.
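
The shutter timing behind that range-gated camera follows directly from the round-trip travel time of light. Here is a small Python sketch under the assumption of light propagating through water (refractive index about 1.33); the numbers are illustrative, not Fraunhofer's specifications.

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # assumed refractive index of water

def gate_delay(distance_m):
    """Round-trip travel time of the laser pulse to the object and back.

    Range gating: the shutter stays closed until just before this moment,
    so light scattered by murky water close to the camera never reaches
    the sensor -- only the reflection from the distant object does.
    """
    return 2.0 * distance_m / (C_VACUUM / N_WATER)

# Example: a harbor wall 5 meters away.
print(f"open shutter after ~{gate_delay(5.0) * 1e9:.0f} ns")
```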

A special energy management system that researchers at the Fraunhofer Institute for Environmental, Safety and Energy Technology UMSICHT in Oberhausen, Germany have developed saves power and ensures that the data are saved in emergencies before the robot runs out of energy and has to surface.

A torpedo-shaped prototype two meters long, equipped with eyes, ears, a brain, a motor and batteries, will go on its maiden voyage this year in a new tank in Ilmenau. The tank is only three meters deep, but "that's enough to test the decisive functions," affirms Dr. Rauschenbach. In autumn 2011, the autonomous diving robot will put to sea for the first time from the research vessel POSEIDON: several dives to a depth of up to 6,000 meters have been planned.


Source

Tiny-Brained Bees Solve a Complex Mathematical Problem

Scientists at Royal Holloway, University of London and Queen Mary, University of London have discovered that bees learn to fly the shortest possible route between flowers even if they discover the flowers in a different order. Bees are effectively solving the 'Travelling Salesman Problem', and these are the first animals found to do this.

The Travelling Salesman must find the shortest possible route that visits all the locations on his itinerary. Computers solve the problem by comparing the lengths of all possible routes and choosing the shortest. Bees, however, solve it without computer assistance, using a brain the size of a grass seed.
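
To make that contrast concrete, the Python sketch below compares the exhaustive approach with a cheap nearest-neighbor rule on a handful of invented "flowers." Nothing here claims bees use nearest-neighbor; the point is simply that a simple local rule can approach the optimum at a tiny fraction of the cost.

```python
from itertools import permutations
from math import dist

flowers = [(0, 0), (3, 1), (1, 4), (5, 3), (2, 2)]  # invented coordinates

def route_length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def brute_force(points):
    """Exact answer: try every visiting order (cost grows factorially)."""
    return min(permutations(points), key=route_length)

def nearest_neighbor(points):
    """Cheap local rule: always fly to the closest unvisited flower."""
    route, rest = [points[0]], list(points[1:])
    while rest:
        nxt = min(rest, key=lambda p: dist(route[-1], p))
        route.append(nxt)
        rest.remove(nxt)
    return route

print(route_length(brute_force(flowers)))       # optimal route length
print(route_length(nearest_neighbor(flowers)))  # close to it, at tiny cost
```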

Dr Nigel Raine, from the School of Biological Sciences at Royal Holloway, explains: "Foraging bees solve travelling salesman problems every day. They visit flowers at multiple locations and, because bees use lots of energy to fly, they find a route which keeps flying to a minimum."

The team used computer controlled artificial flowers to test whether bees would follow a route defined by the order in which they discovered the flowers or if they would find the shortest route. After exploring the location of the flowers, bees quickly learned to fly the shortest route.

As well as enhancing our understanding of how bees move around the landscape pollinating crops and wild flowers, this research, which is due to be published in The American Naturalist, has other applications. Our lifestyle relies on networks such as traffic on the roads, information flow on the web, and business supply chains. By understanding how bees can solve their problem with such a tiny brain, we can improve our management of these everyday networks without needing lots of computer time.

Dr Raine adds:"Despite their tiny brains bees are capable of extraordinary feats of behaviour. We need to understand how they can solve the Travelling Salesman Problem without a computer. What short-cuts do they use?'



Source