Saturday, May 14, 2011

Controlling Robotic Arms Is Child's Play

"The input device contains various movement sensors, also called inertial sensors," says Bernhard Kleiner of the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, who leads the project. The individual micro-electromechanical systems themselves are not expensive. What the scientists have spent time developing is how these sensors interact."We have developed special algorithms that fuse the data of individual sensors and identify a pattern of movement. That means we can detect movements in free space," summarizes Kleiner.

What may at first appear to be a trade show gimmick, is in fact a technology that offers numerous advantages in industrial production and logistical processes. The system could be used to simplify the programming of industrial robots, for example. To date, this has been done with the aid of laser tracking systems: An employee demonstrates the desired motion with a hand-held baton that features a white marker point. The system records this motion by analyzing the light reflected from a laser beam aimed at the marker. Configuring and calibrating the system takes a lot of time. The new input device should eliminate the need for these steps in the future -- instead, employees need only pick up the device and show the robot what it is supposed to do.

The system has numerous applications in medicine, as well. Take, for example, gait analysis. Until now, cameras have made precise recordings of patients as they walk back and forth along a specified path. The films reveal to the physician such things as how the joints behave while walking, or whether incorrect posture in the knees has been improved by physical therapy. Installing the cameras, however, is complex and costly, and patients are restricted to a predetermined path. The new sensor system can simplify this procedure: Attached to the patient's upper thigh, it measures the sequences and patterns of movement -- without limiting the patient's motion in any way.

"With the inertial sensor system, gait analysis can be performed without a frame of reference and with no need for a complex camera system," explains Kleiner. In another project, scientists are already working on comparisons of patients' gait patterns with those patterns appearing in connection with such diseases as Parkinson's.

Another medical application for the new technology is the control of active prostheses containing numerous small actuators. Whenever the patient moves, the prosthesis in turn also moves; this makes it possible for a leg prosthesis to roll the foot while walking. Here, too, the sensor could be attached to the patient's upper thigh and could analyze the movement, helping to regulate the motors of the prosthesis. Research scientists are currently working on combining the inertial sensor system with an electromyographic (EMG) sensor. Electromyography is based on the principle that when a muscle tenses, it produces an electrical voltage which a sensor can then measure by way of an electrode. If the sensor is placed, for example, on the muscle responsible for lifting the patient's foot, the sensor registers when the patient tenses this muscle -- and the prosthetic foot lifts itself. EMG sensors like this are already available but are difficult to position.

"While standard EMG sensors consist of individual electrodes that have to be positioned precisely on the muscle, our system is made up of many small electrodes that attach to a surface area. This enables us to sense muscle movements much more reliably," says Kleiner.


Source

Tuesday, May 10, 2011

Robotics: A Tiltable Head Could Improve the Ability of Undulating Robots to Navigate Disaster Debris

Researchers at the Georgia Institute of Technology recently built a robot that can penetrate and "swim" through granular material. In a new study, they show that varying the shape or adjusting the inclination of the robot's head affects the robot's movement in complex environments.

"We discovered that by changing the shape of the sand-swimming robot's head or by tilting its head up and down slightly, we could control the robot's vertical motion as it swam forward within a granular medium," said Daniel Goldman, an assistant professor in the Georgia Tech School of Physics.

Results of the study will be presented on May 10 at the 2011 IEEE International Conference on Robotics and Automation in Shanghai. Funding for this research was provided by the Burroughs Wellcome Fund, National Science Foundation and Army Research Laboratory.

The study was conducted by Goldman, bioengineering doctoral graduate Ryan Maladen, physics graduate student Yang Ding and physics undergraduate student Andrew Masse, all from Georgia Tech, and Northwestern University mechanical engineering adjunct professor Paul Umbanhowar.

"The biological inspiration for our sand-swimming robot is the sandfish lizard, which inhabits the Sahara desert in Africa and rapidly buries into and swims within sand," explained Goldman."We were intrigued by the sandfish lizard's wedge-shaped head that forms an angle of 140 degrees with the horizontal plane, and we thought its head might be responsible for or be contributing to the animal's ability to maneuver in complex environments."

For their experiments, the researchers attached a wedge-shaped block of wood to the head of their robot, which was built with seven connected segments, powered by servo motors, packed in a latex sock and wrapped in a spandex swimsuit. The doorstop-shaped head -- which resembled the sandfish's head -- had a fixed lower length of approximately 4 inches, height of 2 inches and a tapered snout. The researchers examined whether the robot's vertical motion could be controlled simply by varying the inclination of the robot's head.

Before each experimental run in a test chamber filled with quarter-inch-diameter plastic spheres, the researchers submerged the robot a couple inches into the granular medium and leveled the surface. Then they tracked the robot's position until it reached the end of the container or swam to the surface.

The researchers investigated the vertical movement of the robot when its head was placed at five different degrees of inclination. They found that when the sandfish-inspired head, whose leading edge formed an angle of 155 degrees with the horizontal plane, was set flat, negative lift force was generated and the robot moved downward into the medium. As the tip of the head was raised from zero to 7 degrees relative to the horizontal, the lift force increased until it became zero. At inclines above 7 degrees, the robot rose out of the medium.

"The ability to control the vertical position of the robot by modulating its head inclination opens up avenues for further research into developing robots more capable of maneuvering in complex environments, like debris-filled areas produced by an earthquake or landslide," noted Goldman.

The robotics results matched the research team's findings from physics experiments and computational models designed to explore how head shape affects lift in granular media.

"While the lift forces of objects in air, such as airplanes, are well understood, our investigations into the lift forces of objects in granular media are some of the first ever," added Goldman.

For the physics experiments, the researchers dragged wedge-shaped blocks through a granular medium. Blocks with leading edges that formed angles with the horizontal plane of less than 90 degrees resembled upside-down doorstops, the block with a leading edge equal to 90 degrees was a square, and blocks with leading edges greater than 90 degrees resembled regular doorstops.

They found that blocks with leading edges that formed angles with the horizontal plane less than 80 degrees generated positive lift forces and wedges with leading edges greater than 120 degrees created negative lift. With leading edges between 80 and 120 degrees, the wedges did not generate vertical forces in the positive or negative direction.
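The angle boundaries reported above can be condensed into a few lines of code. The threshold values come straight from the text; the rest is simply a convenient way to restate the finding, not the team's model.

```python
def lift_direction(leading_edge_deg):
    """Classify the lift on a wedge dragged through a granular medium,
    using the angle boundaries reported in the Georgia Tech drag experiments."""
    if leading_edge_deg < 80:
        return "positive lift (wedge rises)"
    if leading_edge_deg > 120:
        return "negative lift (wedge digs in)"
    return "roughly zero net vertical force"

for angle in (60, 90, 155):
    print(angle, "->", lift_direction(angle))
```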

Using a numerical simulation of object drag and building on the group's previous studies of lift and drag on flat plates in granular media, the researchers were able to describe the mechanism of force generation in detail.

"When the leading edge of the robot head was less than 90 degrees, the robot's head experienced a lift force as it moved forward, which resulted in a torque imbalance that caused the robot to pitch and rise to the surface," explained Goldman.

Since this study, the researchers have attached a wedge-shaped head on the robot that can be dynamically modulated to specific angles. With this improvement, the researchers found that the direction of movement of the robot is sensitive to slight changes in orientation of the head, further validating the results from their physics experiments and computational models.

Being able to precisely control the tilt of the head will allow the researchers to implement different strategies of head movement during burial and determine the best way to wiggle deep into sand. The researchers also plan to test the robot's ability to maneuver through material similar to the debris found after natural disasters and plan to examine whether the sandfish lizard adjusts its head inclination to ensure a straight motion as it dives into the sand.

This material is based on research sponsored by the Burroughs Wellcome Fund, the National Science Foundation (NSF) under Award Number PHY-0749991, and the Army Research Laboratory (ARL) under Cooperative Agreement Number W911NF-08-2-0004.


Source

Saturday, May 7, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells the robots online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond a robot; its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software must be installed beyond what is used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.
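A beginner's first Finch assignment might look something like the Python sketch below. To keep the example self-contained, the Finch class and its method names (say, get_temperature, set_beak_color) are stand-ins invented for this illustration rather than the documented API at www.finchrobot.com; a real program would import the library that ships with the robot.

```python
# Hypothetical first assignment with a Finch-style robot: the class and
# method names below are placeholders, not the library's documented API.
class Finch:
    def set_beak_color(self, r, g, b):
        print(f"beak LED set to ({r}, {g}, {b})")

    def get_temperature(self):
        return 18.0   # degrees Celsius, stubbed for the example

    def say(self, text):
        print(text)

robot = Finch()
robot.say("Hello, World")

# Make the beak glow blue when it is cold, as described in the article
if robot.get_temperature() < 20:
    robot.set_beak_color(0, 0, 255)
```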

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Friday, May 6, 2011

EEG Headset With Flying Harness Lets Users 'Fly' by Controlling Their Thoughts

Creative director and Rensselaer MFA candidate Yehuda Duenyas describes the "Infinity Simulator" as a platform similar to a gaming console -- like the Wii or the Kinect -- writ large.

"Instead of you sitting and controlling gaming content, it's a whole system that can control live elements -- so you can control 3-D rigging, sound, lights, and video," said Duenyas, who works under the moniker"xxxy.""It's a system for creating hybrids of theater, installation, game, and ride."

Duenyas created the "Infinity Simulator" with a team of collaborators, including Michael Todd, a Rensselaer 2010 graduate in computer science. Duenyas will exhibit the new system in the art installation "The Ascent" on May 12 at Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC).

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd.

Within the theater, the rigging -- including the harness -- is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The "Infinity Simulator," a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

"We've built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it," said Duenyas."The 'Infinity Simulator' is the center; everything talks to the 'Infinity Simulator.'"

The May 12 "The Ascent" installation is only one experience made possible by the new platform, Duenyas said.

"'The Ascent' embodies the maiden experience that we'll be presenting," Duenyas said."But we've found that it's a versatile platform to create almost any type of experience that involves rigging, video, sound, and light. The idea is that it's reactive to the users' body; there's a physical interaction."

Duenyas, a Brooklyn-based artist and theater director, specializes in experiential theater performances.

"The thing that I focus on the most is user experience," Duenyas said."All the shows I do with my theater company and on my own involve a lot of set and set design -- you're entering into a whole world. You're having an experience that is more than going to a show, although a show is part of it."

The"Infinity Simulator" stemmed from an idea Duenyas had for such a theatrical experience.

"It started with an idea that I wanted to create a simulator that would give people a feeling of infinity," Duenyas said. His initial vision was that of a room similar to a Cave Automated Virtual Environment -- a room paneled with projection screens -- in which participants would be able to float effortlessly in an environment intended to evoke a glimpse into infinity.

At Rensselaer, Duenyas took advantage of the technology at hand to explore his idea, first with a video game he developed in 2010, then -- working through the Department of the Arts -- with EMPAC's computer-controlled 3-D theatrical flying harness.

"The charge of the arts department is to allow the artists that they bring into the department to use technology to enhance what they've been doing already," Duenyas said."In coming here (EMPAC), and starting to translate our ideas into a physical space, so many different things started opening themselves up to us."

The 2010 video game, also developed with Todd, tracked the movements -- pitch and yaw -- of players suspended in a custom-rigged harness, allowing players to soar through simulated landscapes. Duenyas said that that game (also called the "Infinity Simulator") and the new platform are part of the same vision.

EMPAC Director Johannes Goebel saw the game on display at the 2010 GameFest and discussed the custom-designed 3-D theatrical flying rig in EMPAC with Duenyas. Working through the Arts Department, Duenyas submitted a proposal to work with the rig, and his proposal was accepted.

Duenyas and his team experimented -- first gaining peripheral control over the system, and then linking it to the EEG headset -- and created the Ascent installation as an initial project. In the installation, the Infinity Simulator is programmed to respond to relaxation.

"We're measuring two brain states -- alpha and theta -- waking consciousness and everyday brain computational processing," said Duenyas."If you close your eyes and take a deep breath, that processing power decreases. When it decreases below a certain threshold, that is the trigger for you to elevate."

As a user rises, their ascent triggers a changing display of lights, sound, and video. Duenyas said he wants to hint at transcendental experience, while keeping the door open for a more circumspect interpretation.

"The point is that the user is trying to transcend the everyday and get into this meditative state so they can have this experience. I see it as some sort of iconic spiritual simulator. That's the serious side," he said."There's also a real tongue-in-cheek side of my work: I want clouds, I want Terry Gilliam's animated fist to pop out of a cloud and hit you in the face. It's mixing serious religious symbology, but not taking it seriously."

The humor is prompted, in part, by the limitations of this earliest iteration of Duenyas' vision.

"It started with, 'I want to have a glimpse of infinity,' 'I want to float in space.' Then you get in the harness and you're like 'man, this harness is uncomfortable,'" he said."In order to achieve the original vision, we had to build an infrastructure, and I still see development of the infinity experience is a ways off; but what we can do with the infrastructure in a realistic time frame is create 'The Ascent,' which is going to be really fun, and totally other."

Creating the"Infinity Simulator" has prompted new possibilities.

"The vision now is to play with this fun system that we can use to build any experience," he said."It's sort of overwhelming because you could do so many things -- you could create a flight through cumulus clouds, you could create an augmented physicality parkour course where you set up different features in the room and guide yourself to different heights. It's limitless."


Source

Wednesday, May 4, 2011

Robots Learn to Share: Why We Go out of Our Way to Help One Another

Altruism, the sacrificing of individual gains for the greater good, appears at first glance to go against the notion of "survival of the fittest." But altruistic gene expression is found in nature and is passed on from one generation to the next. Worker ants, for example, are sterile and make the ultimate altruistic sacrifice by not transmitting their genes at all in order to ensure the survival of the queen's genetic makeup. The sacrifice of the individual in order to ensure the survival of a relative's genetic code is known as kin selection. In 1964, biologist W.D. Hamilton proposed a precise set of conditions under which altruistic behavior may evolve, now known as Hamilton's rule of kin selection. Here's the gist: If an individual family member shares food with the rest of the family, it reduces his or her personal likelihood of survival but increases the chances of family members passing on their genes, many of which are common to the entire family. Hamilton's rule simply states that whether or not an organism shares its food with another depends on its genetic closeness (how many genes it shares) with the other organism.
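In its usual formulation, Hamilton's rule says an altruistic act is favored when r x b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient and c the cost to the actor. A tiny worked check (the benefit and cost figures are made up for illustration):

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: altruism can evolve when r * b > c."""
    return relatedness * benefit > cost

# Sharing food with a full sibling (r = 0.5) is favored if the benefit to the
# sibling is more than twice the cost to the sharer; for a cousin (r = 0.125)
# the same benefit is not enough.
print(altruism_favored(relatedness=0.5, benefit=3.0, cost=1.0))    # True
print(altruism_favored(relatedness=0.125, benefit=3.0, cost=1.0))  # False
```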

Testing the evolution of altruism using quantitative studies in live organisms has been largely impossible because experiments need to span hundreds of generations and there are too many variables. However, Floreano's robots evolve rapidly using simulated gene and genome functions and allow scientists to measure the costs and benefits associated with the trait. Additionally, Hamilton's rule has long been a subject of much debate because its equation seems too simple to be true. "This study mirrors Hamilton's rule remarkably well to explain when an altruistic gene is passed on from one generation to the next, and when it is not," says Keller.

Previous experiments by Floreano and Keller showed that foraging robots doing simple tasks, such as pushing seed-like objects across the floor to a destination, evolve over multiple generations. Those robots not able to push the seeds to the correct location are selected out and cannot pass on their code, while robots that perform comparatively better see their code reproduced, mutated, and recombined with that of other robots into the next generation -- a minimal model of natural selection. The new study by EPFL and UNIL researchers adds a novel dimension: once a foraging robot pushes a seed to the proper destination, it can decide whether it wants to share it or not. Evolutionary experiments lasting 500 generations were repeated for several scenarios of altruistic interaction -- how much is shared and to what cost for the individual -- and of genetic relatedness in the population. The researchers created groups of relatedness that, in the robot world, would be the equivalent of complete clones, siblings, cousins and non-relatives. The groups that shared along the lines of Hamilton's rule foraged better and passed their code onto the next generation.
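A heavily compressed sketch of that kind of selection loop is below: each simulated genome encodes a foraging ability and a tendency to share, the payoff of sharing depends on relatedness as in Hamilton's rule, and the better half of each generation seeds the next with small mutations. The population size, mutation rate and fitness function are toy values chosen for illustration, not the EPFL/UNIL experimental setup.

```python
import random

def fitness(genome, relatedness, benefit=3.0, cost=1.0):
    """Toy fitness: foraging skill plus the payoff of sharing.

    genome[0] -- foraging ability in [0, 1]
    genome[1] -- probability of sharing a collected seed
    Sharing pays off, on average, when relatedness * benefit > cost
    (Hamilton's rule); otherwise it is a pure loss to the individual.
    """
    return genome[0] + genome[1] * (relatedness * benefit - cost)

def evolve(relatedness, population=100, generations=500):
    pop = [[random.random(), random.random()] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, relatedness), reverse=True)
        survivors = pop[: population // 2]              # selection
        pop = [[min(1.0, max(0.0, gene + random.gauss(0, 0.05)))  # mutation
                for gene in random.choice(survivors)]
               for _ in range(population)]
    return sum(g[1] for g in pop) / population          # mean sharing tendency

print("clones   :", round(evolve(relatedness=1.0), 2))  # sharing spreads
print("strangers:", round(evolve(relatedness=0.0), 2))  # sharing dies out
```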

The quantitative results matched the predictions of Hamilton's rule surprisingly well, even in the presence of multiple interactions. Hamilton's original theory takes a limited and isolated vision of gene interaction into account, whereas the genetic simulations run in the foraging robots integrate effects of one gene on multiple other genes, with Hamilton's rule still holding true. The findings are already proving useful in swarm robotics. "We have been able to take this experiment and extract an algorithm that we can use to evolve cooperation in any type of robot," explains Floreano. "We are using this altruism algorithm to improve the control system of our flying robots and we see that it allows them to effectively collaborate and fly in swarm formation more successfully."

This research was funded by the Swiss National Science Foundation, the European Commission ECAgents and Swarmanoids projects, and the European Research Council.


Source

Wednesday, April 27, 2011

Caterpillars Inspire New Movements in Soft Robots

Some caterpillars have the extraordinary ability to rapidly curl themselves into a wheel and propel themselves away from predators. This highly dynamic process, called ballistic rolling, is one of the fastest wheeling behaviours in nature.

Researchers from Tufts University, Massachusetts, saw this as an opportunity to design a robot that mimics this behaviour of caterpillars and to develop a better understanding of the mechanics behind ballistic rolling.

The study, published on April 27 in IOP Publishing's journal Bioinspiration & Biomimetics, also includes a video of both the caterpillar and robot in action, which can be found at http://www.youtube.com/watch?v=wZe9qWi-LUo.

To simulate the movement of a caterpillar, the researchers designed a 10cm long soft-bodied robot, called GoQBot, made out of silicone rubber and actuated by embedded shape memory alloy coils. It was named GoQBot as it forms a "Q" shape before rolling away at over half a meter per second.

The GoQBot was designed to specifically replicate the functional morphologies of a caterpillar, and was fitted with 5 infrared emitters along its side to allow motion tracking using one of the latest high speed 3D tracking systems. Simultaneously, a force plate measured the detailed ground forces as the robot pushed off into a ballistic roll.

To change its body conformation so quickly, in less than 100 ms, GoQBot relies on a significant degree of mechanical coordination during ballistic rolling. The researchers believe such coordination is mediated by the nonlinear muscle coupling in the animals.

The researchers were also able to explain why caterpillars don't use the ballistic roll more often as a default mode of transport; despite its impressive performance, ballistic rolling is only effective on smooth surfaces, demands a large amount of power, and often ends unpredictably.

Not only did the study provide an insight into the fascinating escape system of a caterpillar, it also put forward a new locomotor strategy which could be used in future robot development.

Many modern robots are modelled after snakes, worms and caterpillars for their talents in crawling and climbing into difficult spaces. However, their limbless bodies severely limit these robots' speed in the open. On the other hand, there are many robots that employ a rolling motion in order to travel with speed and efficiency, but they struggle to gain access to difficult spaces.

Lead author Huai-Ti Lin from the Department of Biology, Tufts University, said: "GoQBot demonstrates a solution by reconfiguring its body and could therefore enhance several robotic applications such as urban rescue, building inspection, and environmental monitoring."

"Due to the increased speed and range, limbless crawling robots with ballistic rolling capability could be deployed more generally at a disaster site such as a tsunami aftermath. The robot can wheel to a debris field and wiggle into the danger for us."


Source

Tuesday, April 12, 2011

Wii Key to Helping Kids Balance

The Rice engineering students built the new balance training system using components of the popular Nintendo game system.

What the kids may see as a fun video game is really a sophisticated way to help them advance their skills. The Wii Balance Boards lined up between handrails will encourage patients age 6 to 18 to practice their balance skills in an electronic gaming environment. The active handrails, which provide feedback on how heavily patients depend on their arms, are a unique feature.

Many of the children targeted for this project have cerebral palsy, spina bifida or amputations. Using the relatively inexpensive game console components improves the potential of this system to become a cost-effective addition to physical therapy departments in the future.

Steven Irby, an engineer at Shriners' Motion Analysis Laboratory, pitched the idea to Rice's engineering mentors after the success of last year's Trek Tracker project, a computer-controlled camera system for gait analysis that was developed by engineering students at Rice's Oshman Engineering Design Kitchen (OEDK).

The engineering seniors who chose to tackle this year's new project -- Michelle Pyle, Drew Berger and Matt Jones, aka Team Equiliberators -- hope to have the system up and running at Shriners Hospital before they graduate next month.

"He (Irby) wants to get kids to practice certain tasks in their games, such as standing still, then taking a couple of steps and being able to balance, which is pretty difficult for some of them," Pyle said."The last task is being able to take a couple of steps and then turn around."

"This isn't a measurement device as much as it is a game," Irby said."But putting the two systems together is what makes it unique. The Wii system is not well suited to kids with significant balance problems; they can't play it. So we're making something that is more adaptable to them."

The game requires patients to shoot approaching monsters by hitting particular spots with their feet as they step along the Wii array, computer science student Jesus Cortez, one of the game's creators, explained. The game gets harder as the patients improve, he said, and the chance to rack up points gives them an incentive.

A further step, not yet implemented, would be to program feedback from the handrails into the game. Leaning on the rails would subtract points from the users' scores, encouraging them to improve their postures. The game would also present challenges specific to younger and older children to keep them engaged.

The programming team also includes undergraduate Irina Patrikeeva and graduate student Nick Zhu. Studio arts undergraduate Jennifer Humphreys created the artwork.

The system's components include a PC, the Wii boards (aligned in a frame) and two balance beam-like handrails that read how much force patients are putting on their hands. Communications to the PC are handled via the Wii's native Bluetooth protocol.

The students said their prototype cost far less than the $2,000 they'd budgeted. Rice supplied the computer equipment and LabVIEW software they needed to create the diagnostic software that interfaces with Shriners' existing systems, and they purchased the Wii Balance Boards on eBay.

"Small force plates that people commonly use for such measurements cost at least a couple of grand, but Wii boards -- and people have done research on this -- give you a pretty good readout of your center of balance for what they cost," Pyle said.

Jones, who is building the final unit for delivery to Shriners, said he wants patients to see the Wii boards. "We're putting clear acrylic over the boards so there aren't any gaps that could trip up the younger ones," he said. "We wanted to use a device that's familiar to them, but they might not be convinced it's a Wii board unless they can see it."


Source

Monday, April 11, 2011

Artificial Intelligence for Improving Data Processing

Within this framework, five leading scientists presented the latest advances in their research work on different aspects of AI. The speakers tackled issues ranging from the more theoretical, such as algorithms capable of solving combinatorial problems, to robots that can reason about emotions, systems that use vision to monitor activities, and automated players that learn how to win in a given situation. "Inviting speakers from leading groups allows us to offer a panoramic view of the main problems and open techniques in the area, including advances in video and multi-sensor systems, task planning, automated learning, games, and artificial consciousness or reasoning," the experts noted.

The participants from the AVIRES (Artificial Vision and Real Time Systems) research group at the University of Udine gave a seminar on the introduction of data fusion techniques and distributed artificial vision. In particular, they dealt with automated surveillance systems built on visual sensor networks, from basic techniques for image processing and object recognition to Bayesian reasoning for understanding activities, automated learning and data fusion for building high-performance systems. Dr. Simon Lucas, professor at the University of Essex, editor in chief of IEEE Transactions on Computational Intelligence and AI in Games and a researcher focusing on the application of AI techniques to games, presented the latest trends in algorithms for generating game strategies. During his presentation, he pointed out the strength of UC3M in this area, citing its victory in two of the international competitions held during the most recent edition of the Conference on Computational Intelligence and Games.

In addition, Enrico Giunchiglia, professor at the University of Genoa and former president of the Council of the International Conference on Automated Planning and Scheduling (ICAPS), described the most recent work in the area of logical satisfiability, which is growing rapidly due to its applications in circuit design and in task planning.

Artificial Intelligence (AI) is as old as computer science and has generated ideas, techniques and applications that make it possible to solve difficult problems. The field is very active and offers solutions to very diverse sectors. The number of industrial applications that use an AI technique is very high, and from the scientific point of view, there are many specialized journals and conferences. Furthermore, new lines of research are constantly being opened, and there is still great room for improvement in knowledge transfer between researchers and industry. These are some of the main ideas gathered at the 4th International Seminar on New Issues on Artificial Intelligence, organized by the SCALAB group in the UC3M Computer Engineering Department at the Leganés campus of this Madrid university.

The future of Artificial Intelligence

This seminar also included a talk on the promising future of AI. "The tremendous surge in the number of devices capable of capturing and processing information, together with the growth of computing capacity and the advances in algorithms, enormously boosts the possibilities for practical application," the researchers from the SCALAB group pointed out. "Among them we can cite the construction of computer programs that make life easier, which make decisions in complex environments or which allow problems to be solved in environments that are difficult for people to access," they noted. From the point of view of these research trends, more and more emphasis is being placed on developing systems capable of learning and demonstrating intelligent behavior without being tied to replicating a human model.

AI will allow advances in the development of systems capable of automatically understanding a situation and its context using sensor data and information systems, as well as establishing plans of action, from support applications to decision making within dynamic situations. According to the researchers, this is due to the rapid advances in, and availability of, sensor technology, which provides a continuous flow of data about the environment, information that must be handled appropriately in a data and information fusion node. Likewise, the development of sophisticated techniques for task planning allows plans of action to be composed, executed, checked for correct execution and rectified in case of failure, and finally makes it possible to learn from the mistakes made.

This technology has enabled a wide range of applications, such as integrated systems for surveillance, monitoring and anomaly detection, activity recognition, teleassistance systems, transport logistics planning, etc. According to Antonio Chella, Full Professor at the University of Palermo and expert in Artificial Consciousness, the future of AI will imply discovering a new meaning of the word "intelligence." Until now, it has been equated with automated reasoning in software systems, but in the future AI will tackle more daring concepts such as the embodiment of intelligence in robots, as well as emotions, and above all consciousness.


Source

Friday, April 8, 2011

Technique for Letting Brain Talk to Computers Now Tunes in Speech

In a new study, scientists from Washington University demonstrated that humans can control a cursor on a computer screen using words spoken out loud and in their head, a finding with huge implications for patients who may have lost their speech through brain injury or for disabled patients with limited movement.

By directly connecting the patient's brain to a computer, the researchers showed that the computer could be controlled with up to 90% accuracy even when no prior training was given.

Patients with a temporary surgical implant have used regions of the brain that control speech to "talk" to a computer for the first time, manipulating a cursor on a computer screen simply by saying or thinking of a particular sound.

"There are many directions we could take this, including development of technology to restore communication for patients who have lost speech due to brain injury or damage to their vocal cords or airway," says author Eric C. Leuthardt, MD, of Washington University School of Medicine in St. Louis.

Scientists have typically programmed the temporary implants, known as brain-computer interfaces, to detect activity in the brain's motor networks, which control muscle movements.

"That makes sense when you're trying to use these devices to restore lost mobility -- the user can potentially engage the implant to move a robotic arm through the same brain areas he or she once used to move an arm disabled by injury," says Leuthardt, assistant professor of neurosurgery, of biomedical engineering and of neurobiology,"But that has the potential to be inefficient for restoration of a loss of communication."

Patients might be able to learn to think about moving their arms in a particular way to say hello via a computer speaker, Leuthardt explains. But it would be much easier if they could say hello by using the same brain areas they once engaged to use their own voices.

The research appears April 7 in The Journal of Neural Engineering.

The devices under study are temporarily installed directly on the surface of the brain in epilepsy patients. Surgeons like Leuthardt use them to identify the source of persistent, medication-resistant seizures and map those regions for surgical removal. Researchers hope one day to install the implants permanently to restore capabilities lost to injury and disease.

Leuthardt and his colleagues have recently revealed that the implants can be used to analyze the frequency of brain wave activity, allowing them to make finer distinctions about what the brain is doing. For the new study, Leuthardt and others applied this technique to detect when patients say or think of four sounds:

  • oo, as in few
  • e, as in see
  • a, as in say
  • a, as in hat

When scientists identified the brainwave patterns that represented these sounds and programmed the interface to recognize them, patients could quickly learn to control a computer cursor by thinking or saying the appropriate sound.
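In outline, the decoder maps the spectral pattern recorded over speech areas to whichever of the four trained sounds it most resembles, and that label drives the cursor. The sketch below is a schematic nearest-template version of that idea; the feature vectors, templates and cursor mapping are invented for illustration and are not the study's actual decoder.

```python
import numpy as np

# Template band-power patterns "learned" for each trained sound (illustrative numbers)
TEMPLATES = {
    "oo (few)": np.array([0.9, 0.2, 0.1]),
    "e (see)":  np.array([0.2, 0.8, 0.3]),
    "a (say)":  np.array([0.1, 0.3, 0.9]),
    "a (hat)":  np.array([0.5, 0.5, 0.5]),
}
CURSOR_COMMAND = {"oo (few)": "left", "e (see)": "right", "a (say)": "up", "a (hat)": "down"}

def decode(band_powers):
    """Pick the trained sound whose stored pattern is closest to the new
    band-power measurement, then return the associated cursor command."""
    sound = min(TEMPLATES, key=lambda s: np.linalg.norm(TEMPLATES[s] - band_powers))
    return CURSOR_COMMAND[sound]

print(decode(np.array([0.85, 0.25, 0.15])))   # -> "left"
```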

In the future, interfaces could be tuned to listen to just speech networks or both motor and speech networks, Leuthardt says. As an example, he suggests that it might one day be possible to let a disabled patient both use his or her motor regions to control a cursor on a computer screen and imagine saying "click" when he or she wants to click on the screen.

"We can distinguish both spoken sounds and the patient imagining saying a sound, so that means we are truly starting to read the language of thought," he says."This is one of the earliest examples, to a very, very small extent, of what is called 'reading minds' -- detecting what people are saying to themselves in their internal dialogue."

"We want to see if we can not just detect when you're saying dog, tree, tool or some other word, but also learn what the pure idea of that looks like in your mind," he says."It's exciting and a little scary to think of reading minds, but it has incredible potential for people who can't communicate or are suffering from other disabilities."

The next step, which Leuthardt and his colleagues are working on, is to find ways to distinguish what they call "higher levels of conceptual information."

The study also found that speech intentions can be acquired from a site less than a centimetre wide, which would require only a small insertion into the brain and would greatly reduce the risk of the surgical procedure.


Source

Wednesday, April 6, 2011

Device Enables Computer to Identify Whether User Is Male or Female

The Spanish Patent and Trademark Office has awarded the Universidad Politécnica de Madrid and the Universidad Rey Juan Carlos Spanish patent ES 2 339 100 B2 for the device.

Thanks to the new algorithm, devices can be built to measure television or advertising video audiences by gathering demographic information about spectators (dynamic marketing). The new device is also useful for conducting market research at shopping centres, stores, banks or any other business using cameras to count people and extract demographic information. Another application is interactive kiosks with a virtual vendor, as the device automatically extracts information about the user, such as the person's gender, to improve interaction.

A step forward in gender recognition from facial images

This research, the results of which were published in IEEE Transactions on Pattern Analysis and Machine Intelligence, demonstrates that linear techniques are just as good as support vector machines (SVM) for the gender recognition problem. The developed technique is applicable in devices that have low computational resources, like telephones or intelligent cameras.

The study concludes that linear methods are useful for training with databases that contain a small number of images, as well as for outputting gender classifiers that are as fast as boosting-based classifiers. However, boosting- or SVM-based methods will require more training images to get good results. Finally, SVM-based classifiers are the slowest option. Additionally, the experimental evidence suggests that there is a dependency among different demographic variables like gender, age or ethnicity.

Device for demographic face classification

The invention is a device equipped with a camera that captures digital images and is connected to an image processing system. The image processing system crops each detected face to 25x25 pixels. An elliptical mask (designed to eliminate background interference) is then applied, and the image is equalized and classified.

The device advances the state of the art by using a classifier based on the most efficient linear classification methods: principal component analysis (PCA), followed by Fisher's linear discriminant analysis (LDA) using a Bayesian classifier in the low-dimensional space output by the LDA. For PCA+LDA to be competitive, the crucial step is to select the most discriminant PCA features before performing LDA.
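The PCA-then-LDA pipeline maps naturally onto standard library components. The sketch below, using scikit-learn on random stand-in data, shows the overall shape of such a classifier; the elliptical masking, equalization and discriminant feature-selection step described above are omitted, and the component count is an arbitrary choice for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Each row stands in for a 25x25 face crop flattened to 625 grey values;
# labels are 0/1 gender classes. Random data replaces a real training set here.
rng = np.random.default_rng(0)
X_train = rng.random((200, 625))
y_train = rng.integers(0, 2, 200)

# PCA reduces the 625-pixel image to a handful of components, then LDA finds the
# most discriminant direction and classifies in that low-dimensional space.
classifier = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
classifier.fit(X_train, y_train)

new_face = rng.random((1, 625))
print("predicted gender class:", classifier.predict(new_face)[0])
```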

One of the major research areas in informatics is the development of machines that interact with users in the same way as human beings communicate with each other. This research is a step further in this direction.


Source

Friday, March 18, 2011

Bomb Disposal Robot Getting Ready for Front-Line Action

The organisations have come together to create a lightweight, remote-operated vehicle, or robot, that can be controlled by a wireless device, not unlike a games console, from a distance of several hundred metres.

The innovative robot, which can climb stairs and even open doors, will be used by soldiers on bomb disposal missions in countries such as Afghanistan.

Experts from the Department of Computer & Communications Engineering, based within the university's School of Engineering, are working on the project alongside NIC Instruments Limited of Folkestone, manufacturers of security search and bomb disposal equipment.

Much lighter and more flexible than traditional bomb disposal units, the robot is easier for soldiers to carry and use when out in the field. It has cameras on board, which relay images back to the operator via the hand-held control, and includes a versatile gripper which can carry and manipulate delicate items.

The robot also includes nuclear, biological and chemical weapons sensors.

Measuring just 72cm by 35cm, the robot weighs 48 kilogrammes and can move at speeds of up to eight miles per hour.


Source

Thursday, March 10, 2011

How Do People Respond to Being Touched by a Robotic Nurse?

The research is being presented March 9 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"What we found was that how people perceived the intent of the robot was really important to how they responded. So, even though the robot touched people in the same way, if people thought the robot was doing that to clean them, versus doing that to comfort them, it made a significant difference in the way they responded and whether they found that contact favorable or not," said Charlie Kemp, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

In the study, researchers looked at how people responded when a robotic nurse, known as Cody, touched and wiped a person's forearm. Although Cody touched the subjects in exactly the same way, they reacted more positively when they believed Cody intended to clean their arm versus when they believed Cody intended to comfort them.

These results echo similar studies done with nurses.

"There have been studies of nurses and they've looked at how people respond to physical contact with nurses," said Kemp, who is also an adjunct professor in Georgia Tech's College of Computing."And they found that, in general, if people interpreted the touch of the nurse as being instrumental, as being important to the task, then people were OK with it. But if people interpreted the touch as being to provide comfort… people were not so comfortable with that."

In addition, Kemp and his research team tested whether people responded more favorably when the robot verbally indicated that it was about to touch them versus touching them without saying anything.

"The results suggest that people preferred when the robot did not actually give them the warning," said Tiffany Chen, doctoral student at Georgia Tech."We think this might be because they were startled when the robot started speaking, but the results are generally inconclusive."

Since many useful tasks require that a robot touch a person, the team believes that future research should investigate ways to make robot touch more acceptable to people, especially in healthcare. Many important healthcare tasks, such as wound dressing and assisting with hygiene, would require a robotic nurse to touch the patient's body.

"If we want robots to be successful in healthcare, we're going to need to think about how do we make those robots communicate their intention and how do people interpret the intentions of the robot," added Kemp."And I think people haven't been as focused on that until now. Primarily people have been focused on how can we make the robot safe, how can we make it do its task effectively. But that's not going to be enough if we actually want these robots out there helping people in the real world."

In addition to Kemp and Chen, the research group consists of Andrea Thomaz, assistant professor in Georgia Tech's College of Computing, and postdoctoral fellow Chih-Hung Aaron King.


Source

Wednesday, March 9, 2011

How Can Robots Get Our Attention?

The research is being presented March 8 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"The primary focus was trying to give Simon, our robot, the ability to understand when a human being seems to be reacting appropriately, or in some sense is interested now in a response with respect to Simon and to be able to do it using a visual medium, a camera," said Aaron Bobick, professor and chair of the School of Interactive Computing in Georgia Tech's College of Computing.

Using the socially expressive robot Simon, from Assistant Professor Andrea Thomaz's Socially Intelligent Machines lab, researchers wanted to see if they could tell when he had successfully attracted the attention of a human who was busily engaged in a task and when he had not.

"Simon would make some form of a gesture, or some form of an action when the user was present, and the computer vision task was to try to determine whether or not you had captured the attention of the human being," said Bobick.

With close to 80 percent accuracy Simon was able to tell, using only his cameras as a guide, whether someone was paying attention to him or ignoring him.

"We would like to bring robots into the human world. That means they have to engage with human beings, and human beings have an expectation of being engaged in a way similar to the way other human beings would engage with them," said Bobick.

"Other human beings understand turn-taking. They understand that if I make some indication, they'll turn and face someone when they want to engage with them and they won't when they don't want to engage with them. In order for these robots to work with us effectively, they have to obey these same kinds of social conventions, which means they have to perceive the same thing humans perceive in determining how to abide by those conventions," he added.

Researchers plan to go further with their investigations into how Simon can read communication cues by studying whether he can tell by a person's gaze whether they are paying attention or using elements of language or other actions.

"Previously people would have pre-defined notions of what the user should do in a particular context and they would look for those," said Bobick."That only works when the person behaves exactly as expected. Our approach, which I think is the most novel element, is to use the user's current behavior as the baseline and observe what changes."

The research team for this study consisted of Bobick, Thomaz, doctoral student Jinhan Lee and undergraduate student Jeffrey Kiser.


Source

Tuesday, March 8, 2011

Teaching Robots to Move Like Humans

The research was presented March 7 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"It's important to build robots that meet people's social expectations because we think that will make it easier for people to understand how to approach them and how to interact with them," said Andrea Thomaz, assistant professor in the School of Interactive Computing at Georgia Tech's College of Computing.

Thomaz, along with Ph.D. student Michael Gielniak, conducted a study in which they asked how easily people can recognize what a robot is doing by watching its movements.

"Robot motion is typically characterized by jerky movements, with a lot of stops and starts, unlike human movement which is more fluid and dynamic," said Gielniak."We want humans to interact with robots just as they might interact with other humans, so that it's intuitive."

Using a series of human movements taken in a motion-capture lab, they programmed the robot, Simon, to perform the movements. They also optimized that motion to allow for more joints to move at the same time and for the movements to flow into each other in an attempt to be more human-like. They asked their human subjects to watch Simon and identify the movements he made.

"When the motion was more human-like, human beings were able to watch the motion and perceive what the robot was doing more easily," said Gielniak.

In addition, they tested the algorithm they used to create the optimized motion by asking humans to perform the movements they saw Simon making. The thinking was that if the movement created by the algorithm was indeed more human-like, then the subjects should have an easier time mimicking it. Turns out they did.

"We found that this optimization we do to create more life-like motion allows people to identify the motion more easily and mimic it more exactly," said Thomaz.

The research that Thomaz and Gielniak are doing is part of a theme in getting robots to move more like humans move. In future work, the pair plan on looking at how to get Simon to perform the same movements in various ways.

"So, instead of having the robot move the exact same way every single time you want the robot to perform a similar action like waving, you always want to see a different wave so that people forget that this is a robot they're interacting with," said Gielniak.


Source

Saturday, March 5, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist."Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is "computer vision." That is, how can a simple webcam work more like the human eye? Can camera-captured data understand a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says."Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says."We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says,"could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Friday, March 4, 2011

Method Developed to Match Police Sketch, Mug Shot: Algorithms and Software Will Match Sketches With Mugshots in Police Databases

A team led by MSU University Distinguished Professor of Computer Science and Engineering Anil Jain and doctoral student Brendan Klare has developed a set of algorithms and created software that will automatically match hand-drawn facial sketches to mug shots that are stored in law enforcement databases.

Once in use, Klare said, the implications are huge.

"We're dealing with the worst of the worst here," he said."Police sketch artists aren't called in because someone stole a pack of gum. A lot of time is spent generating these facial sketches so it only makes sense that they are matched with the available technology to catch these criminals."

Typically, these sketches are drawn by forensic artists from information obtained from a witness. Unfortunately, Klare said, "often the facial sketch is not an accurate depiction of what the person looks like."

There are also a few commercial software programs available that produce sketches based on a witness's description. Those programs, however, tend to be less accurate than sketches drawn by a trained forensic artist.

The MSU project is being conducted in the Pattern Recognition and Image Processing lab in the Department of Computer Science and Engineering. It is the first large-scale experiment matching operational forensic sketches with photographs and, so far, results have been promising.

"We improved significantly on one of the top commercial face-recognition systems," Klare said."Using a database of more than 10,000 mug shot photos, 45 percent of the time we had the correct person."

All of the sketches used were from real crimes where the criminal was later identified.

"We don't match them pixel by pixel," said Jain, director of the PRIP lab."We match them up by finding high-level features from both the sketch and the photo; features such as the structural distribution and the shape of the eyes, nose and chin."

This project and its results appear in the March 2011 issue of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence.

The MSU team plans to field test the system in about a year.

The sketches used in this research were provided by forensic artists Lois Gibson and Karen Taylor, and forensic sketch artists working for the Michigan State Police.


Source

Wednesday, March 2, 2011

New Technique for Improving Robot Navigation Systems

An autonomous mobile robot is a robot that is able to navigate its environment without colliding with obstacles or getting lost; such unmanned robots are also able to recover from spatial disorientation. The research, conducted by Sergio Guadarrama, a researcher at the European Centre for Soft Computing, and Antonio Ruiz, assistant professor at the Universidad Politécnica de Madrid's Facultad de Informática, and published in the journal Information Sciences, focuses on map building. Map building is one of the skills required for autonomous navigation: the robot must explore an unknown environment (an enclosure, a plant, buildings, etc.) and draw up a map of it. Before it can do this, the robot has to use its sensors to perceive obstacles.

The main sensor types used for autonomous navigation are vision and range sensors. Although vision sensors can capture much more information from the environment, this research used range sensors -- specifically, less accurate ultrasonic sensors -- to demonstrate that the model builds accurate maps from scarce and imprecise input data.

Once it has captured the ranges, the robot has to translate these measured distances into obstacles on the map. Point clouds are used to draw the map, as the imprecision of the range data rules out the use of straight lines or even isolated points. Even so, the resulting map is by no means an architectural blueprint of the site, because not even the robot's location is precisely known, and there is no guarantee that each point cloud is correctly positioned. In actual fact, one and the same obstacle can be viewed properly from one robot position but not from another. This can produce contradictory information -- obstacle and no obstacle -- about the same area of the map under construction. Which of the two interpretations is correct?

Exploring unknown spaces

The solution is based on linguistic descriptions of the antonyms "vacant" and "occupied" and inspired by computing with words and the computational theory of perceptions, two theories proposed by L.A. Zadeh of the University of California at Berkeley. Whereas other published research views obstacles and empty spaces as complementary concepts, this research assumes that, rather than being complements, obstacles and vacant spaces are a pair of opposites.

For example, we can infer that an occupied space is not vacant, but we cannot infer that a space not known to be occupied is vacant: such a space could be unknown or ambiguous, because the robot has limited information about its environment. Contradictions between "vacant" and "occupied" are also represented explicitly.

This way, the robot is able to make a distinction between two types of unknown spaces: spaces that are unknown because the information about them is contradictory, and spaces that are unknown because they are unexplored. This leads the robot to navigate with caution through the contradictory spaces and to explore the unexplored ones. The map is constructed using linguistic rules, such as "If the measured distance is short, then assign a high confidence level to the measurement" or "If an obstacle has been seen several times, then increase the confidence in its presence," where "short," "high" and "several" are fuzzy sets, as defined in fuzzy set theory. Contradictions are resolved by relying more heavily on shorter ranges and by combining multiple measurements.
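As a rough sketch of how such rules might be turned into an update procedure (a toy illustration, not the authors' implementation), the grid below keeps separate degrees of evidence for "occupied" and "vacant" in each cell, weights each ultrasonic reading by a fuzzy notion of a "short" distance, and classifies cells as contradictory or unexplored; all thresholds and increments are arbitrary choices made for the example.

    # Toy occupancy grid in which "occupied" and "vacant" are opposites, not
    # complements: contradiction (both high) is distinct from ignorance (both low).
    import numpy as np

    GRID = 50
    occupied = np.zeros((GRID, GRID))   # evidence that a cell holds an obstacle
    vacant   = np.zeros((GRID, GRID))   # evidence that a cell is free space

    def short_confidence(distance, max_range=4.0):
        """Fuzzy 'short measurement': closer readings earn more trust."""
        return max(0.0, 1.0 - distance / max_range)

    def update(robot_cell, hit_cell, distance):
        """One ultrasonic reading: cells along the beam gain 'vacant' evidence,
        the cell at the echo gains 'occupied' evidence, weighted by trust."""
        w = short_confidence(distance)
        r0, c0 = robot_cell
        r1, c1 = hit_cell
        steps = max(abs(r1 - r0), abs(c1 - c0))
        for t in np.linspace(0.0, 1.0, steps + 1)[:-1]:      # beam interior
            r = int(round(r0 + t * (r1 - r0)))
            c = int(round(c0 + t * (c1 - c0)))
            vacant[r, c] = min(1.0, vacant[r, c] + 0.3 * w)
        # "seen several times" -> occupied evidence accumulates over readings
        occupied[r1, c1] = min(1.0, occupied[r1, c1] + 0.5 * w)

    def classify(r, c):
        if occupied[r, c] > 0.6 and vacant[r, c] > 0.6:
            return "contradictory"    # navigate with caution
        if occupied[r, c] < 0.2 and vacant[r, c] < 0.2:
            return "unexplored"       # worth exploring
        return "occupied" if occupied[r, c] > vacant[r, c] else "vacant"

    update((25, 25), (25, 30), distance=1.0)
    print(classify(25, 28), classify(25, 30), classify(0, 0))   # vacant occupied unexplored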

Compared with other methods, the maps built using this technique better capture the shape of walls and open spaces and contain fewer errors caused by incorrect sensor data. This opens up opportunities for improving current autonomous navigation systems for robots.


Source

Thursday, February 24, 2011

A Semantic Sommelier: Wine Application Highlights the Power of Web 3.0

Web scientist and Rensselaer Polytechnic Institute Tetherless World Research Constellation Professor Deborah McGuinness has been developing a family of applications for the most tech-savvy wine connoisseurs since her days as a graduate student in the 1980s -- before what we now know as the World Wide Web had even been envisioned.

Today, McGuinness is among the world's foremost experts in Web ontology languages, which are used to encode meaning in a form that computers can understand. The most recent version of her wine application serves as an exceptional example of what the future of the World Wide Web, often called Web 3.0, might in fact look like. It is also an exceptional tool for teaching future Web scientists about ontologies.

"The wine agent came about because I had to demonstrate the new technology that I was developing," McGuinness said."I had sophisticated applications that used cutting-edge artificial intelligence technology in domains, such as telecommunications equipment, that were difficult for anyone other than well-trained engineers to understand." McGuinness took the technology into the domain of wines and foods to create a program that she uses as a semantic tutorial, an"Ontologies 101" as she calls it. And students throughout the years have done many things with the wine agent including, most recently, experimentation with social media and mobile phone applications.

Today, the semantic sommelier is set to provide even the most novice of foodies some exciting new tools to expand their wine knowledge and food-pairing abilities on everything from their home PC to their smart phone. Evan Patton, a graduate student in computer science at Rensselaer, is the most recent student to tinker with the wine agent and is working with McGuinness to bring it into the mobile space on both the iPhone and Droid platforms.

The agent uses the Web Ontology Language (OWL), the formal language for the Semantic Web. Like the English language, which uses an agreed-upon alphabet to form words and sentences that all English-speaking people can recognize, OWL uses a formalized set of symbols to create a code or language that a wide variety of applications can "read." This allows your computer to operate more efficiently and more intelligently with your cell phone or your Facebook page, or any other webpage or web-enabled device. These semantics also allow for an entirely new generation of smart search technologies.

Thanks to its semantic technology, the sommelier is supplied with basic background knowledge about wine and food. For wine, that includes body, color (red versus white or blush), sweetness, and flavor. For food, it includes the course (e.g. appetizer versus entrée), ingredient type (e.g. fish versus meat), and heat (mild versus spicy). The semantic technologies beneath the application then encode that knowledge and apply reasoning to search and share the information. This semantic functionality can now be exploited for a variety of culinary purposes, all of which McGuinness, herself a lover of fine wines, and Patton are working on together.
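The "encode properties, then reason to a pairing" idea can be loosely illustrated in plain Python, as below. This is only a hypothetical caricature: the real wine agent reasons over OWL ontologies, and every dish, wine and rule listed here is invented for the example.

    # Hypothetical stand-in for ontology-backed pairing: tiny property tables
    # plus a few hand-written rules.
    DISHES = {
        "clams casino":    {"course": "appetizer", "ingredient": "shellfish", "heat": "mild"},
        "spicy fish stew": {"course": "entree",    "ingredient": "fish",      "heat": "spicy"},
        "steak au poivre": {"course": "entree",    "ingredient": "meat",      "heat": "spicy"},
    }

    WINES = [
        {"name": "Riesling",           "color": "white", "body": "light", "sweetness": "off-dry"},
        {"name": "Chardonnay",         "color": "white", "body": "full",  "sweetness": "dry"},
        {"name": "Cabernet Sauvignon", "color": "red",   "body": "full",  "sweetness": "dry"},
    ]

    def suggest(dish):
        """Very small rule base standing in for the agent's reasoning."""
        props = DISHES[dish]
        def ok(wine):
            if props["ingredient"] in ("fish", "shellfish") and wine["color"] != "white":
                return False                 # seafood -> prefer white wine
            if props["ingredient"] == "meat" and wine["color"] != "red":
                return False                 # red meat -> prefer red wine
            if props["heat"] == "spicy" and wine["sweetness"] == "dry" and wine["body"] == "full":
                return False                 # spicy food -> avoid big dry wines
            return True
        return [w["name"] for w in WINES if ok(w)]

    print(suggest("spicy fish stew"))   # e.g. ['Riesling']

The practical difference with OWL is that the properties and rules live in a shared, machine-readable ontology, so any application that speaks the same language can reuse and reason over them.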

Having a spicy fish dish for dinner? Search within the system and it will arrive at a good wine pairing for the meal. Beyond basic pairings, the application has strong possibilities for use in individual restaurants, according to McGuinness, who envisions teaming up with restaurant owners to input their specific menus and wine lists. Thus, a diner could check menus and wine holdings before going out for dinner or they could enter a restaurant, pull out their smart phone, and instantly know what is in the wine cellar and goes best with that chef's clams casino. Beyond pairings, diners could rate different wines, providing fellow diners with personal reviews and the restaurateur with valuable information on what to stock up on next week. Is it a dry restaurant? The application could also be loaded up with the inventory within the liquor store down the street.

Beyond the table, the application can also be used to make personal wine suggestions and to build virtual wine cellars that you could share with your friends via Facebook or other social media platforms. It could also be used to manage a personal wine cellar, providing information on what is at peak flavor at the moment or what in your cellar would go best with your famous steak au poivre.

"Today we have 10 gadgets with us at any given time," McGuinness said."We live and breathe social media. With semantic technologies, we can offload more of the searching and reasoning required to locate and share information to the computer while still maintaining personal control over our information and how we use it. We also increase the ability of our technologies to interact with each other and decrease the need for as many gadgets or as many interactions with them since the applications do more work for us."


Source

Sunday, February 20, 2011

Scientists Steer Car With the Power of Thought

They then succeeded in developing an interface to connect the sensors to their otherwise purely computer-controlled vehicle, so that it can now be "controlled" via thoughts. Driving by thought control was tested on the site of the former Tempelhof Airport.

The scientists from Freie Universität first used the brain-wave sensors in such a way that a person could move a virtual cube in different directions with the power of his or her thoughts. The test subject thinks of four situations associated with driving, for example, "turn left" or "accelerate." In this way the person trains the computer to interpret the bioelectrical wave patterns emitted from his or her brain and to link them to commands that can later be used to control the car. The computer scientists then connected the measuring device to the steering, accelerator, and brakes of a computer-controlled vehicle, making it possible for the subject to influence the movement of the car using thoughts alone.
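The training loop described above can be caricatured as a nearest-template classifier. The sketch below is a toy stand-in, not the Freie Universität pipeline; the feature dimensions, synthetic data and template method are assumptions made purely for illustration.

    # Toy illustration: EEG feature vectors recorded while the driver imagines
    # each of four commands are averaged into templates; a live sample is
    # mapped to the nearest template and forwarded to the drive-by-wire controls.
    import numpy as np

    COMMANDS = ["turn_left", "turn_right", "accelerate", "brake"]

    def train(recordings):
        """recordings: dict command -> array of shape (n_trials, n_features)."""
        return {cmd: recordings[cmd].mean(axis=0) for cmd in COMMANDS}

    def classify(sample, templates):
        """Pick the command whose template is closest to the live EEG features."""
        return min(COMMANDS, key=lambda c: np.linalg.norm(sample - templates[c]))

    # Synthetic stand-in data: 16 band-power features per trial.
    rng = np.random.default_rng(1)
    recs = {c: rng.normal(loc=i, scale=0.3, size=(20, 16)) for i, c in enumerate(COMMANDS)}
    templates = train(recs)
    live = rng.normal(loc=2, scale=0.3, size=16)    # resembles the "accelerate" class
    print(classify(live, templates))                # most likely: accelerate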

"In our test runs, a driver equipped with EEG sensors was able to control the car with no problem -- there was only a slight delay between the envisaged commands and the response of the car," said Prof. Raúl Rojas, who heads the AutoNOMOS project at Freie Universität Berlin. In a second test version, the car drove largely automatically, but via the EEG sensors the driver was able to determine the direction at intersections.

The AutoNOMOS Project at Freie Universität Berlin is studying the technology for the autonomous vehicles of the future. With the EEG experiments, the researchers investigate hybrid control approaches, i.e., those in which people work together with machines.

The computer scientists have made a short film about their research, which is available at: http://tinyurl.com/BrainDriver


Source

Saturday, February 19, 2011

Augmented Reality System for Learning Chess

An ordinary webcam, a chess board, a set of 32 pieces and custom software are the key elements in the final degree project of the telecommunications engineering students Ivan Paquico and Cristina Palmero, from the UPC-Barcelona Tech's Terrassa School of Engineering (EET). The project, for which the students were awarded a distinction, was directed by the professor Jordi Voltas and completed during an international mobility placement in Finland.

The system created by Ivan Paquico, the 2001 Spanish Internet chess champion, and Cristina Palmero, a keen player and federation member, is a didactic tool that will help chess clubs and associations to teach the game and make it more appealing, particularly to younger players.

The system combines augmented reality, computer vision and artificial intelligence, and the only equipment required is a high-definition home webcam, the Augmented Reality Chess software, a standard board and pieces, and a set of cardboard markers the same size as the squares on the board, each marked with the first letter of the corresponding piece: R for the king (rei in Catalan), D for the queen (dama), T for the rooks (torres), A for the bishops (alfils), C for the knights (cavalls) and P for the pawns (peons).

Learning chess with virtual pieces

To use the system, learners play with an ordinary chess board but move the cardboard markers instead of standard pieces. The table is lit from above and the webcam focuses on the board; every time the player moves one of the markers, the system recognises the piece and reproduces the move in 3D on the computer screen, creating a virtual representation of the game.

For example, if the learner moves the marker P (pawn), the corresponding piece will be displayed on the screen in 3D, with all of the possible moves indicated. This is a simple and attractive way of showing novices the permitted movements of each piece, making the system particularly suitable for children learning the basics of this board game.
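As a rough idea of the screen-side logic (not the students' actual code), the snippet below maps a detected marker letter and square to the moves that could be highlighted for the learner. Only pawn and knight moves on an empty board are shown, and the letter-to-piece mapping follows the Catalan initials listed above; everything else is an assumption made for the example.

    # Toy move-highlighting logic: given the marker letter and its square
    # (as reported by the vision layer), list the squares to highlight.
    FILES = "abcdefgh"

    def square_to_rc(sq):            # "e2" -> (row, col), row 0 = rank 1
        return int(sq[1]) - 1, FILES.index(sq[0])

    def rc_to_square(r, c):
        return f"{FILES[c]}{r + 1}"

    def moves_for(letter, sq):
        """Empty-board moves for a white pawn (P) or knight (C, cavall)."""
        r, c = square_to_rc(sq)
        if letter == "P":
            deltas = [(1, 0)] + ([(2, 0)] if r == 1 else [])
        elif letter == "C":
            deltas = [(dr, dc) for dr in (-2, -1, 1, 2) for dc in (-2, -1, 1, 2)
                      if abs(dr) != abs(dc)]
        else:
            return []                # other pieces omitted in this sketch
        out = []
        for dr, dc in deltas:
            nr, nc = r + dr, c + dc
            if 0 <= nr < 8 and 0 <= nc < 8:
                out.append(rc_to_square(nr, nc))
        return sorted(out)

    print(moves_for("P", "e2"))   # ['e3', 'e4']
    print(moves_for("C", "g1"))   # ['e2', 'f3', 'h3']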

Making chess accessible to all

The learning tool also incorporates a move-tracking program called Chess Recognition: from the images captured by the webcam, the system instantly recognises and analyses every movement of every piece and can act as a referee, identify illegal moves and provide the players with an audible description of the game status. According to Ivan Paquico and Cristina Palmero, this feature could be very useful for players with visual impairment -- who have their own federation and, until now, have had to play with specially adapted boards and pieces -- and for clubs and federations, tournament organisers and enthusiasts of all levels.

The Chess Recognition program saves whole games so that they can be shared, broadcast online and viewed on demand, and can generate a complete user history for analysing the evolution of a player's game. The program also creates an automatic copy of the scoresheet (the official record of each game) for players to view or print.

The technology for playing chess and recording games online has been available for a number of years, but until now players needed sophisticated equipment including pieces with integrated chips and a special electronic board with a USB connection. The standard retail cost of this equipment is between 400 and 500 euros.


Source

Monday, February 14, 2011

Scientists Develop Control System to Allow Spacecraft to Think for Themselves

Professor Sandor Veres and his team of engineers have developed an artificially intelligent control system called 'sysbrain'.

Using natural language programming (NLP), the software agents can read special English language technical documents on control methods. This gives the vehicles advanced guidance, navigation and feedback capabilities to stop them crashing into other objects and the ability to adapt during missions, identify problems, carry out repairs and make their own decisions about how best to carry out a task.

Professor Veres, who is leading the EPSRC-funded project, says: "This is the world's first publishing system of technical knowledge for machines and opens the way for engineers to publish control instructions to machines directly. As well as spacecraft and satellites, this innovative technology is transferable to other types of autonomous vehicles, such as autonomous underwater, ground and aerial vehicles."

To test the control systems that could be applied in a space environment, Professor Veres and his team constructed a unique test facility and a fleet of satellite models, which are controlled by the sysbrain cognitive agent control system.

The 'Autonomous Systems Testbed' consists of a glass-covered, precision-levelled table surrounded by a metal framework, which is used to mount overhead visual markers, observation cameras and isolation curtains that prevent external light sources from interfering with experimentation. Visual navigation is performed using onboard cameras to observe the overhead marker system located above the test area. This replicates how spacecraft would use points in the solar system to determine their orientation.

The perfectly-balanced model satellites, which rotate around a pivot point with mechanical properties similar to real satellites, are placed on the table and glide across it on roller bearings almost without friction to mimic the zero-gravity properties of space. Each model has eight propellers to control movement, a set of inertia sensors and additional cameras to be 'spatially aware' and to 'see' each other. The model's skeletal robot frame also allows various forms of hardware to be fitted and experimented with.

Professor Veres adds:"We have invented sysbrains to control intelligent machines. Sysbrain is a special breed of software agents with unique features such as natural language programming to create them, human-like reasoning, and most importantly they can read special English language documents in 'system English' or 'sEnglish'. Human authors of sEnglish documents can put them on the web as publications and sysbrain can read them to enhance their physical and problem solving skills. This allows engineers to write technical papers directly for sysbrain that control the machines."

Further information is available at http://www.sesnet.soton.ac.uk/people/smv/avs_lab/index.htm.


Source

Friday, February 4, 2011

Future Surgeons May Use Robotic Nurse, 'Gesture Recognition'

Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

The"vision-based hand gesture recognition" technology could have other applications, including the coordination of emergency response activities during disasters.

"It's a concept Tom Cruise demonstrated vividly in the film 'Minority Report,'" Wachs said.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria.

The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot.

At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency, Wachs said.

Findings from the research will be detailed in a paper appearing in the February issue of Communications of the ACM, the flagship publication of the Association for Computing Machinery. The paper was written by researchers at Purdue, the Naval Postgraduate School in Monterey, Calif., and Ben-Gurion University of the Negev, Israel.

Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively.

He is now working to extend the system's capabilities in research with Purdue's School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.

"One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions," Wachs said."You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use."

Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended and unintended gestures.

"Say the surgeon starts talking to another person in the operating room and makes conversational gestures," Wachs said."You don't want the robot handing the surgeon a hemostat."

A scrub nurse assists the surgeon and hands the proper surgical instruments to the doctor when needed.

"While it will be very difficult using a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, often scrub nurses have had very limited experience with a particular surgeon, maximizing the chances for misunderstandings, delays and sometimes mistakes in the operating room," Wachs said."In that case, a robotic scrub nurse could be better."

The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university's School of Veterinary Medicine.

Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, little work has been done in the area of gesture recognition, Wachs said.

"Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed," he said.

Wachs is developing advanced algorithms that isolate the hands and apply "anthropometry," or predicting the position of the hands based on knowledge of where the surgeon's head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.
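A hedged sketch of that anthropometric shortcut: once the head has been roughly located, the hand search can be constrained to a plausible region below and around it. The proportions used here are illustrative guesses, not values from the Purdue system.

    # Illustrative only: derive a hand search window from a head detection so
    # the hand tracker never has to scan the whole frame.
    def hand_search_window(head_x, head_y, head_h, frame_w, frame_h):
        """Return (x0, y0, x1, y1) of a region below and around the detected head."""
        reach = 3.0 * head_h                         # assumed arm reach, in head-heights
        x0 = max(0, int(head_x - reach))
        x1 = min(frame_w, int(head_x + reach))
        y0 = max(0, int(head_y + 0.5 * head_h))      # hands usually sit below the face
        y1 = min(frame_h, int(head_y + reach))
        return x0, y0, x1, y1

    # A 640x480 frame with the head detected around (320, 120), 80 px tall:
    print(hand_search_window(320, 120, 80, 640, 480))   # -> (80, 160, 560, 360)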

"Another contribution is that by tracking a surgical instrument inside the patient's body, we can predict the most likely area that the surgeon may want to inspect using the electronic image medical record, and therefore saving browsing time between the images," Wachs said."This is done using a different sensor mounted over the surgical lights."

The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is found in new consumer electronics games that can track a person's hands without the use of a wand.

"You just step into the operating room, and automatically your body is mapped in 3-D," he said.

Accuracy and gesture-recognition speed depend on advanced software algorithms.

"Even if you have the best camera, you have to know how to program the camera, how to use the images," Wachs said."Otherwise, the system will work very slowly."

The research paper defines a set of requirements, including recommendations that the system should:

  • Use a small vocabulary of simple, easily recognizable gestures.
  • Not require the user to wear special virtual reality gloves or certain types of clothing.
  • Be as low-cost as possible.
  • Be responsive and able to keep up with the speed of a surgeon's hand gestures.
  • Let the user know whether it understands the hand gestures by providing feedback, perhaps just a simple "OK."
  • Use gestures that are easy for surgeons to learn, remember and carry out with little physical exertion.
  • Be highly accurate in recognizing hand gestures.
  • Use intuitive gestures, such as two fingers held apart to mimic a pair of scissors.
  • Be able to disregard unintended gestures by the surgeon, perhaps made in conversation with colleagues in the operating room.
  • Be able to quickly configure itself to work properly in different operating rooms, under various lighting conditions and other criteria.

"Eventually we also want to integrate voice recognition, but the biggest challenges are in gesture recognition," Wachs said."Much is already known about voice recognition."

The work is funded by the U.S. Agency for Healthcare Research and Quality.


Source

Thursday, February 3, 2011

New Mathematical Model of Information Processing in the Brain Accurately Predicts Some of the Peculiarities of Human Vision

At the Society of Photo-Optical Instrumentation Engineers' Human Vision and Electronic Imaging conference on Jan. 27, Ruth Rosenholtz, a principal research scientist in the Department of Brain and Cognitive Sciences, presented a new mathematical model of how the brain does that summarizing. The model accurately predicts the visual system's failure on certain types of image-processing tasks, a good indication that it captures some aspect of human cognition.

Most models of human object recognition assume that the first thing the brain does with a retinal image is identify edges -- boundaries between regions with different light-reflective properties -- and sort them according to alignment: horizontal, vertical and diagonal. Then, the story goes, the brain starts assembling these features into primitive shapes, registering, for instance, that in some part of the visual field, a horizontal feature appears above a vertical feature, or two diagonals cross each other. From these primitive shapes, it builds up more complex shapes -- four L's with different orientations, for instance, would make a square -- and so on, until it's constructed shapes that it can identify as features of known objects.

While this might be a good model of what happens at the center of the visual field, Rosenholtz argues, it's probably less applicable to the periphery, where human object discrimination is notoriously weak. In a series of papers in the last few years, Rosenholtz has proposed that cognitive scientists instead think of the brain as collecting statistics on the features in different patches of the visual field.

Patchy impressions

On Rosenholtz's model, the patches described by the statistics get larger the farther they are from the center. This corresponds with a loss of information, in the same sense that, say, the average income for a city is less informative than the average income for every household in the city. At the center of the visual field, the patches might be so small that the statistics amount to the same thing as descriptions of individual features: A 100-percent concentration of horizontal features could indicate a single horizontal feature. So Rosenholtz's model would converge with the standard model.

But at the edges of the visual field, the models come apart. A large patch whose statistics are, say, 50 percent horizontal features and 50 percent vertical could contain an array of a dozen plus signs, or an assortment of vertical and horizontal lines, or a grid of boxes.

In fact, Rosenholtz's model includes statistics on much more than just orientation of features: There are also measures of things like feature size, brightness and color, and averages of other features -- about 1,000 numbers in all. But in computer simulations, storing even 1,000 statistics for every patch of the visual field requires only one-90th as many virtual neurons as storing visual features themselves, suggesting that statistical summary could be the type of space-saving technique the brain would want to exploit.
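A much-reduced sketch of the pooling idea follows. It is illustrative only: Rosenholtz's model computes roughly 1,000 statistics per patch, whereas this toy computes a single orientation histogram per patch, with the patch size growing with distance from fixation; the base size and growth rate are arbitrary choices for the example.

    # Toy "summary statistics" model: one orientation histogram per pooling
    # region, with regions that grow with eccentricity (distance from fixation).
    import numpy as np

    def orientation_stats(patch, bins=4):
        gy, gx = np.gradient(patch.astype(float))
        hist, _ = np.histogram(np.arctan2(gy, gx), bins=bins, range=(-np.pi, np.pi),
                               weights=np.hypot(gx, gy))
        return hist / (hist.sum() + 1e-9)

    def summarize(image, fovea=(64, 64), base=8, growth=0.5):
        """Pool statistics over patches that get larger away from fixation."""
        h, w = image.shape
        stats = []
        for r in range(0, h, base):
            for c in range(0, w, base):
                ecc = np.hypot(r - fovea[0], c - fovea[1])   # eccentricity in pixels
                size = int(base + growth * ecc)              # bigger patch when peripheral
                patch = image[r:r + size, c:c + size]
                stats.append(((r, c), size, orientation_stats(patch)))
        return stats

    img = np.random.default_rng(2).random((128, 128))
    summary = summarize(img)
    near = min(summary, key=lambda s: np.hypot(s[0][0] - 64, s[0][1] - 64))
    far  = max(summary, key=lambda s: np.hypot(s[0][0] - 64, s[0][1] - 64))
    print(near[1], far[1])   # small patch near fixation, large patch in the periphery

Near the fovea the patches are tiny, so the statistics are nearly as informative as the features themselves; in the periphery one large patch summarizes many features at once, which is exactly where information is lost.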

Rosenholtz's model grew out of her investigation of a phenomenon called visual crowding. If you were to concentrate your gaze on a point at the center of a mostly blank sheet of paper, you might be able to identify a solitary A at the left edge of the page. But you would fail to identify an identical A at the right edge, the same distance from the center, if instead of standing on its own it were in the center of the word "BOARD."

Rosenholtz's approach explains this disparity: The statistics of the lone A are specific enough to A's that the brain can infer the letter's shape; but the statistics of the corresponding patch on the other side of the visual field also factor in the features of the B, O, R and D, resulting in aggregate values that don't identify any of the letters clearly.

Road test

Rosenholtz's group has also conducted a series of experiments with human subjects designed to test the validity of the model. Subjects might, for instance, be asked to search for a target object -- like the letter O -- amid a sea of "distractors" -- say, a jumble of other letters. A patch of the visual field that contains 11 Q's and one O would have very similar statistics to one that contains a dozen Q's. But it would have much different statistics than a patch that contained a dozen plus signs. In experiments, the degree of difference between the statistics of different patches is an extremely good predictor of how quickly subjects can find a target object: It's much easier to find an O among plus signs than it is to find it amid Q's.

Rosenholtz, who has a joint appointment to the Computer Science and Artificial Intelligence Laboratory, is also interested in the implications of her work for data visualization, an active research area in its own right. For instance, designing subway maps with an eye to maximizing the differences between the summary statistics of different regions could make them easier for rushing commuters to take in at a glance.

In vision science,"there's long been this notion that somehow what the periphery is for is texture," says Denis Pelli, a professor of psychology and neural science at New York University. Rosenholtz's work, he says,"is turning it into real calculations rather than just a side comment." Pelli points out that the brain probably doesn't track exactly the 1,000-odd statistics that Rosenholtz has used, and indeed, Rosenholtz says that she simply adopted a group of statistics commonly used to describe visual data in computer vision research. But Pelli also adds that visual experiments like the ones that Rosenholtz is performing are the right way to narrow down the list to"the ones that really matter."


Source

Tuesday, February 1, 2011

Physicists Challenge Classical World With Quantum-Mechanical Implementation of 'Shell Game'

In a paper published in the Jan. 30 issue of the journal Nature Physics, UCSB researchers show the first demonstration of the coherent control of a multi-resonator architecture. This topic has been a holy grail among physicists studying photons at the quantum-mechanical level for more than a decade.

The UCSB researchers are Matteo Mariantoni, postdoctoral fellow in the Department of Physics; Haohua Wang, postdoctoral fellow in physics; John Martinis, professor of physics; and Andrew Cleland, professor of physics.

According to the paper, the"shell man," the researcher, makes use of two superconducting quantum bits (qubits) to move the photons -- particles of light -- between the resonators. The qubits -- the quantum-mechanical equivalent of the classical bits used in a common PC -- are studied at UCSB for the development of a quantum super computer. They constitute one of the key elements for playing the photon shell game.

"This is an important milestone toward the realization of a large-scale quantum register," said Mariantoni."It opens up an entirely new dimension in the realm of on-chip microwave photonics and quantum-optics in general."

The researchers fabricated a chip where three resonators of a few millimeters in length are coupled to two qubits. "The architecture studied in this work resembles a quantum railroad," said Mariantoni. "Two quantum stations -- two of the three resonators -- are interconnected through the third resonator, which acts as a quantum bus. The qubits control the traffic and allow the shuffling of photons among the resonators."

In a related experiment, the researchers played a more complex game inspired by the Towers of Hanoi, an ancient mathematical puzzle that, according to legend, originated in an Indian temple.

The Towers of Hanoi puzzle consists of three posts and a pile of disks of different diameter, which can slide onto any post. The puzzle starts with the disks in a stack in ascending order of size on one post, with the smallest disk at the top. The aim of the puzzle is to move the entire stack to another post, with only one disk being moved at a time, and with no disk being placed on top of a smaller disk.
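For reference, the classical puzzle has a textbook recursive solution, sketched below in Python; moving n disks this way takes 2^n - 1 moves.

    # Classic recursive Towers of Hanoi solution for the puzzle described above.
    def hanoi(n, source, spare, target):
        """Move n disks from source to target, one at a time, never placing a
        larger disk on top of a smaller one."""
        if n == 0:
            return
        hanoi(n - 1, source, target, spare)      # clear the way for the largest disk
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, source, target)      # rebuild the smaller stack on the target

    hanoi(3, "A", "B", "C")    # 2**3 - 1 = 7 moves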

In the quantum-mechanical version of the Towers of Hanoi, the three posts are represented by the resonators and the disks by quanta of light with different energies. "This game demonstrates that a truly bosonic excitation can be shuffled among resonators -- an interesting example of the quantum-mechanical nature of light," said Mariantoni.

Mariantoni was supported in this work by an Elings Prize Fellowship in Experimental Science from UCSB's California NanoSystems Institute.


Source

Friday, January 21, 2011

For Robust Robots, Let Them Be Babies First

Or at least that's not too far off from what University of Vermont roboticist Josh Bongard has discovered, as he reports in the January 10 online edition of the Proceedings of the National Academy of Sciences.

In a first-of-its-kind experiment, Bongard created both simulated and actual robots that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolved, spending less time in "infant" tadpole-like forms and more time in "adult" four-legged forms.

These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, in their final form, the changing robots had developed a more robust gait -- better able to deal with, say, being knocked with a stick -- than the ones that had learned to walk using upright legs from the beginning.

"This paper shows that body change, morphological change, actually helps us design better robots," Bongard says."That's never been attempted before."

Robots are complex

Bongard's research, supported by the National Science Foundation, is part of a wider venture called evolutionary robotics. "We have an engineering goal," he says, "to produce robots as quickly and consistently as possible." In this experimental case: upright four-legged robots that can move themselves to a light source without falling over.

"But we don't know how to program robots very well," Bongard says, because robots are complex systems. In some ways, they are too much like people for people to easily understand them.

"They have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there's neurons and there's sensors and motors and they're all turning on and off in parallel," Bongard says,"and the emergent behavior from the complex system which is a robot, is some useful task like clearing up a construction site or laying pavement for a new road." Or at least that's the goal.

But, so far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments.

Which is why Bongard, an assistant professor in UVM's College of Engineering and Mathematical Sciences, and other robotics experts have turned to computer programs to design robots and develop their behaviors -- rather than trying to program the robots' behavior directly.

His new work may help.

To the light

Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a 3-dimensional space. "It looks like a modern video game," he says. Each creature -- or, rather, each generation of the creatures -- then runs a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait -- based on its body plan -- that can get it to the light source without tipping over.
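In spirit, that evolutionary loop looks something like the toy sketch below. It is not Bongard's simulator: the fitness function is a crude stand-in for the physics simulation, the 12-parameter controller only echoes the robots' 12 moving parts, and the "posture schedule" mimics the idea of evaluating early generations in prone body forms before upright ones. All numbers are arbitrary.

    # Toy genetic algorithm with a morphology schedule (illustrative stand-in).
    import random

    def fitness(controller, posture):
        """Stand-in for a physics simulation: progress toward the light at a
        given body posture (0 = prone, 1 = upright), minus a balance penalty."""
        balance_penalty = posture * sum(abs(g) for g in controller) / len(controller)
        progress = sum(controller) / len(controller)
        return progress - 0.5 * balance_penalty

    def evaluate(controller, generations_total, gen):
        # Earlier generations spend more of their evaluation prone ("training wheels").
        prone_share = max(0.0, 1.0 - gen / generations_total)
        return (prone_share * fitness(controller, posture=0.0)
                + (1 - prone_share) * fitness(controller, posture=1.0))

    def evolve(pop_size=20, genes=12, generations=30):
        pop = [[random.uniform(-1, 1) for _ in range(genes)] for _ in range(pop_size)]
        for gen in range(generations):
            pop.sort(key=lambda c: evaluate(c, generations, gen), reverse=True)
            parents = pop[: pop_size // 2]                       # keep the best half
            pop = parents + [[g + random.gauss(0, 0.1) for g in p] for p in parents]
        return pop[0]

    best = evolve()
    print(round(evaluate(best, 30, 29), 3))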

"The robots have 12 moving parts," Bongard says."They look like the simplified skeleton of a mammal: it's got a jointed spine and then you have four sticks -- the legs -- sticking out."

Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others run the full set of simulations with upright legs, like mammals.

And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs, ultimately perform better, getting to the desired behavior faster?

"The snake and reptilian robots are, in essence, training wheels," says Bongard,"they allow evolution to find motion patterns quicker, because those kinds of robots can't fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it's able to tackle the balance problem after already solving the movement problem."

Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?

"Yes," says Bongard,"We're copying nature, we're copying evolution, we're copying neural science when we're building artificial brains into these robots." But the key point is that his robots don't only evolve their artificial brain -- the neural network controller -- but rather do so in continuous interaction with a changing body plan. A tadpole can't kick its legs, because it doesn't have any yet; it's learning some things legless and others with legs.

And this may help to explain the most surprising -- and useful -- finding in Bongard's study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn't before faced, like efforts to tip them over.

Bongard is not exactly sure why this is, but he thinks it's because controllers that evolved in the robots whose bodies changed over generations learned to maintain the desired behavior over a wider range of sensor-motor arrangements than controllers evolved in robots with fixed body plans. It seems that learning to walk while flat, squat, and then upright gave the evolving robots resilience to stay upright when faced with new disruptions. Perhaps what a tadpole learns before it has legs makes it better able to use its legs once they grow.

"Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies," Bongard writes in his PNAS paper"This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world."

"One thing that has been left out all this time is the obvious fact that in nature it's not that the animal's body stays fixed and its brain gets better over time," he says,"in natural evolution animals bodies and brains are evolving together all the time." A human infant, even if she knew how, couldn't walk: her bones and joints aren't up to the task until she starts to experience stress on the foot and ankle.

That hasn't been done in robotics for an obvious reason: "it's very hard to change a robot's body," Bongard says, "it's much easier to change the programming inside its head."

Lego proof

Still, Bongard gave it a try. After running 5,000 simulations, each taking 30 hours on the parallel processors in UVM's Vermont Advanced Computing Center -- "it would have taken 50 or 100 years on a single machine," Bongard says -- he took the task into the real world.

"We built a relatively simple robot, out of a couple of Lego Mindstorm kits, to demonstrate that you actually could do it," he says. This physical robot is four-legged, like in the simulation, but the Lego creature wears a brace on its front and back legs."The brace gradually tilts the robot," as the controller searches for successful movement patterns, Bongard says,"so that the legs go from horizontal to vertical, from reptile to quadruped.

"While the brace is bending the legs, the controller is causing the robot to move around, so it's able to move its legs, and bend its spine," he says,"it's squirming around like a reptile flat on the ground and then it gradually stands up until, at the end of this movement pattern, it's walking like a coyote."

"It's a very simple prototype," he says,"but it works; it's a proof of concept."


Source