Robots that can dynamically reconfigure themselves to adapt to their environments offer a promising advantage over their less dynamic cousins. Researchers have been working through the challenges of realizing that potential: hardware, software, and all the interactions in between. On the software end of the spectrum, a team at the University of Pennsylvania’s ModLab has been working on a robot that can autonomously choose the configuration best suited to the task at hand.
We recently did an overview of modular robots, and we noted that coordination and control are persistent challenges in this area. The robot in this particular demonstration is a hybrid: a fixed core module serving as central command, plus six of the lab’s dynamic SMORES-EP modules. The core module has an RGB+depth camera for awareness of its environment, while a separate downward-facing camera tracks the SMORES-EP modules, giving the robot awareness of its own configuration.
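To make that split between central command and the modules a little more concrete, here is a minimal Python sketch of how such a hybrid architecture might be organized. Everything here (the class names, the sensor stand-ins, the drive command) is our own illustration under assumption, not the ModLab team’s actual code.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DepthFrame:
    """Stand-in for what the RGB+depth camera sees of the surroundings."""
    obstacles: List[str] = field(default_factory=list)

@dataclass
class ModulePose:
    """Pose of one SMORES-EP module as seen by the downward-facing camera."""
    module_id: int
    x: float
    y: float
    heading: float

class SmoresModule:
    """Stand-in for a single SMORES-EP module that accepts drive commands."""
    def __init__(self, module_id: int):
        self.module_id = module_id

    def drive(self, left: float, right: float) -> None:
        print(f"module {self.module_id}: wheels L={left:.2f} R={right:.2f}")

class CoreModule:
    """Central command: fuses both camera views and directs every module."""
    def __init__(self, modules: List[SmoresModule]):
        self.modules = {m.module_id: m for m in modules}

    def sense_environment(self) -> DepthFrame:
        # Placeholder: the real robot gets this from its RGB+depth camera.
        return DepthFrame(obstacles=["stairs"])

    def sense_self(self) -> Dict[int, ModulePose]:
        # Placeholder: the real robot tracks modules with its downward camera.
        return {i: ModulePose(i, 0.0, 0.0, 0.0) for i in self.modules}

    def command_all(self, left: float, right: float) -> None:
        for module in self.modules.values():
            module.drive(left, right)

if __name__ == "__main__":
    core = CoreModule([SmoresModule(i) for i in range(6)])
    print(core.sense_environment())
    core.command_all(0.5, 0.5)

The point of the sketch is simply that all decisions flow through the core: the individual modules only execute what they are told, which is exactly the tradeoff that distinguishes this hybrid from a fully decentralized swarm.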
Combining that data using a mix of open robot research software and new machine-specific code, the team’s creation autonomously navigates an unfamiliar test environment. While it can adapt to specific terrain challenges like a wooden staircase, there are still limits to the situations it can handle. Kudos to the researchers for honestly showing and explaining how the robot can get stuck on a seam in the floor, rather than editing that gaffe out to cover it up.
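The core of that autonomy is a perception-driven loop: look at the terrain ahead, pick the configuration that fits it, then reconfigure and drive. Below is a rough sketch of that idea in Python. The terrain labels and configuration names are invented for illustration and the classification logic is a placeholder; the real system relies on far richer perception built on open robotics software.

from typing import List

# Hypothetical library mapping coarse terrain labels to robot configurations.
CONFIG_LIBRARY = {
    "flat": "car",              # fast rolling shape for open floor
    "stairs": "snake",          # articulated shape for climbing steps
    "narrow_gap": "proboscis",  # long, thin shape for reaching through gaps
}

def classify_terrain(obstacles: List[str]) -> str:
    """Reduce the depth camera's view to a coarse terrain label (placeholder logic)."""
    if "stairs" in obstacles:
        return "stairs"
    if "gap" in obstacles:
        return "narrow_gap"
    return "flat"

def choose_configuration(obstacles: List[str]) -> str:
    """Look up the configuration the library recommends for the terrain ahead."""
    return CONFIG_LIBRARY[classify_terrain(obstacles)]

if __name__ == "__main__":
    print(choose_configuration(["stairs"]))  # -> "snake"
    print(choose_configuration([]))          # -> "car"

A lookup like this also hints at why the robot can get stuck: if the perception step fails to flag something like a floor seam as an obstacle, the planner happily keeps the configuration it already has.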
While this robot isn’t the completely decentralized modular robot system some are aiming for, it would be a mistake to dismiss it on that criticism alone. At the very least, it is an instructive step on the journey, offering a tradeoff that’s useful on its own merits. And perhaps this hybrid approach will find application with a modular robot close to our hearts: Dtto, winner of our 2016 Hackaday Prize.
[via Science News]