It takes a swarm: These robots talk to each other, make decisions as a group
A distributed neural network emerges over a robotic swarm creating an "artificial group mind." Credit: Michael Otte
As the old saying doesn’t exactly go: hundreds of artificial minds are better than one.
That power in numbers is how Michael Otte, an engineer at the University of Maryland, approaches robot decision making. In his research, he wirelessly connects a large number of robots—artificial "brains"—into a single, complex computational entity that roboticists call a swarm. He trains them to connect, communicate, and share data to form a complete picture of their environment, and then to collectively figure out how to respond to it.
Otte wants his robots to pool their physical and computational resources in order to solve a common problem. He does this by creating a process by which hundreds of individual robots merge their computing power to become a single, albeit distributed, computer. He sees particular value in deploying robot swarms in the face of unknown challenges: a search-and-rescue mission after a catastrophic natural disaster, for example, where assessing risk and response needs is step one.
Otte’s research is inspired in part by the so-called swarm intelligence of self-organized biological systems, such as colonies of ants or bees. Individual ants, for example, are by themselves simple organisms capable of simple tasks. One ant searches for food; another ant lays eggs; another still builds walls made of soil. Many ants together, however, form complex social networks that can perform complex tasks for the collective good. They allocate labor; they coordinate movement; they build elaborate nest structures. They attack and overtake neighboring colonies. They change their cooperative behavior in response to external stimuli, like when their nest is damaged by a predator or inquisitive child.
Otte is driven by his own kind of inquisitiveness: one that combines the seed of inspiration from insect colonies with the challenge of making science fiction a reality.
“The concept of a ‘group mind,’ in which multiple consciousnesses are linked into a single intelligence, has been a plot device in science fiction literature since at least the 1930 novel Last and First Men,” says Otte, an assistant professor of aerospace engineering in UMD’s A. James Clark School of Engineering. “I wanted to see if I could successfully apply that concept of a group mind in robotic swarms. Each robot has just a little bit of computational power, but together, they have a lot more than that.”
In a peer-reviewed paper recently published in The International Journal of Robotics Research, Otte describes just how he trained a legion of Kilobots—a simple robotic platform that clocks in at only 3.3 centimeters tall—to accomplish something bigger than themselves:
While each Kilobot has only one light sensor, capable of reading a single brightness value, a swarm of Kilobots can combine their sensor data to 'see' across their entire environment. Each robot communicates wirelessly with its neighbors, bouncing infrared signals off the ground and up to the others nearby. This connection forms an artificial neural network across the swarm, which the swarm uses to detect and recognize images created by projected visible light. The resulting computational entity is called an artificial group mind.
Given the sensor data that the robotic swarm collectively sees across the environment, the group mind figures out what is happening in the environment and then decides which behavioral response the swarm should perform. For example, the group mind can be trained to recognize a projected peace symbol or biohazard symbol. If the group mind recognizes a peace symbol, then it responds by creating a smiley face. If a biohazard symbol is recognized, the swarm creates a frowny face instead.
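The sense-classify-respond pipeline described above can be sketched in a few lines. This is a minimal illustration, not Otte's published code: the robot count, the single linear unit standing in for the distributed neural network, and all names and thresholds are assumptions for the sake of the example.

```python
# Minimal sketch (not Otte's implementation): each robot contributes one
# light reading, the pooled vector is classified, and the predicted symbol
# selects the swarm's collective response behavior.
NUM_ROBOTS = 25  # e.g., a small grid of Kilobots (illustrative)

def sense(robot_id, projected_image):
    # Each robot reads a single brightness value at its own position.
    return projected_image[robot_id]

def classify(readings, weights, bias):
    # A single linear unit standing in for the distributed network:
    # positive score -> "peace", non-positive -> "biohazard".
    score = sum(w * r for w, r in zip(weights, readings)) + bias
    return "peace" if score > 0 else "biohazard"

RESPONSES = {"peace": "smiley face", "biohazard": "frowny face"}

# Toy demo: a uniformly bright image stands in for a projected symbol.
image = [1.0] * NUM_ROBOTS
readings = [sense(i, image) for i in range(NUM_ROBOTS)]
weights = [1.0 / NUM_ROBOTS] * NUM_ROBOTS
symbol = classify(readings, weights, bias=-0.5)
print(RESPONSES[symbol])  # prints "smiley face"
```

The key point the sketch captures is that no individual robot ever sees the whole image; only the pooled computation can recognize the symbol.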
The algorithm Otte used to train the Kilobot swarm is a tried-and-true set of rules used in artificial neural network research. What’s new and especially interesting about his work is that the algorithm was modified to be successfully applied across a distributed swarm of many robots connected by a wireless network.
“Wireless communication is inherently unreliable; messages can be dropped between robots, and individuals can fall behind the group,” explains Otte. “We accounted for this by programming neurons within the network to wait for neighbors who had fallen behind to catch back up to where they should be. This strengthens every robot’s neural pathways at the same rate over time.”
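The wait-for-laggards idea Otte describes resembles a lockstep barrier: no robot applies its next weight update until every neighbor has confirmed the current training step. The simulation below is an illustrative sketch under that assumption—the class names, drop rate, and rebroadcast loop are hypothetical, not the published algorithm.

```python
# Illustrative lockstep-training sketch: fast robots keep rebroadcasting
# and waiting until laggards (who dropped messages) catch up, so every
# robot's "neural pathways" strengthen at the same rate.
import random

class SwarmNeuron:
    """Toy robot/neuron that must sync with the swarm each training step."""
    def __init__(self, name):
        self.name = name
        self.confirmed_step = -1   # last step this robot acknowledged
        self.updates_applied = 0   # weight updates applied so far

    def try_broadcast(self, step, drop_rate=0.3):
        # Wireless is unreliable: an acknowledgment may simply be lost.
        if random.random() > drop_rate:
            self.confirmed_step = step

def lockstep_train(robots, num_steps):
    for step in range(num_steps):
        # Rebroadcast until every robot confirms this step; robots that
        # already confirmed are effectively waiting for the laggards.
        while any(r.confirmed_step < step for r in robots):
            for r in robots:
                if r.confirmed_step < step:
                    r.try_broadcast(step)
        for r in robots:
            r.updates_applied += 1  # weight update applied in unison

random.seed(0)
swarm = [SwarmNeuron(f"kilobot-{i}") for i in range(10)]
lockstep_train(swarm, num_steps=5)
assert all(r.updates_applied == 5 for r in swarm)
```

The trade-off in this design is latency for consistency: the swarm trains only as fast as its slowest link, but every robot ends each step with the same learned state.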
This capability is especially useful for groups in which different robots are programmed to perform different actions, like Otte’s swarm. Each robot is location-dependent, meaning its role in the collective response behavior—i.e., does an individual move to form part of an eye or part of the smile?—is determined by where it starts in relation to its neighbors. If a robot falls too far behind its neighbor in training, it could compromise the response action, which is defined by the coordinated movement of the whole. In other words, one rogue robot could spoil the swarm.
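Location-dependent roles of the kind described above can be pictured as a simple mapping from a robot's grid position to its part in the formation. The function below is purely hypothetical—the grid size and region thresholds are made up to illustrate the idea, not taken from the paper.

```python
# Hypothetical sketch: a robot's role in the "smiley face" response is
# determined by where it starts relative to the formation grid.
def assign_role(x, y, width, height):
    # Upper corners become eyes; the bottom rows become the smile.
    if y < height // 3 and (x < width // 3 or x > 2 * width // 3):
        return "eye"
    if y > 2 * height // 3:
        return "smile"
    return "face"

# Assign a role to every position in an illustrative 9x9 formation.
roles = [assign_role(x, y, 9, 9) for y in range(9) for x in range(9)]
print(roles.count("eye"), roles.count("smile"))  # prints "15 18"
```

Because the mapping depends on relative position, a single robot that lags behind during training ends up with the wrong behavior for its spot—which is why one rogue robot can spoil the coordinated response.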
“In a sense, the robotic swarm is only as strong as its weakest individual. We need for all of the neurons in the entire brain to learn as a group. By waiting for individual robots who have dropped messages due to the unreliability of wireless networks, the swarm learns more efficiently, making itself stronger in the long run,” says Otte.
November 7, 2018