If robots ever want to make an omelette, they’re gonna have to break a few eggs — or, in this case, perhaps a few Scandinavian particleboard coffee tables. In an inspired attempt at getting ‘bots better at learning how to perform the kind of real-world tasks that might actually make them, y’know, useful around the house, researchers are teaching them how to do a universally dreaded bit of dirty work: assembling Ikea furniture.
If that sounds like the highest and best use of a robot in the history of robots, join the furniture-assembly-cursing club. As it turns out, the deceptively complicated task of putting a table together makes for an effective yet safe training environment where robots are free to fail repeatedly until they learn, through iteration and emulation of human behavior, to succeed.
Wired reports that a team at the University of Southern California has developed the new simulator: a robot-teaching tool that’ll be made available to the wider community of robotics engineers. The goal is to train machines to do, in physical reality, what their algorithm-slinging virtual AI counterparts can already do much faster in simulation models, untethered as they are from the constraints of manipulating actual things in actual space.
Even though assembling a blocky object from smaller blocky parts is a simple (if at times tedious) task, it’s one that stumps robots in countless moment-by-moment ways that humans simply take for granted. Stringing a series of such small actions together in the right sequence, and in a variety of physical environments (like your living room), until a shiny new chair or table emerges remains a tricky learning task for machines.
In other words, while a bot that lives inside a computer can theoretically assemble thousands of Ikea dining sets at mind-warping speed, getting a real robot to do the same thing in your house runs into all kinds of roadblocks. Even the system the team has created can’t yet get robots to account for all the variables that go into putting flat-packed furniture together, although the goal is to train them to perform the task as a person would, emulating the sequential logic humans deploy to follow each step until the job’s done.
“While seemingly very trivial to humans, it's not just that we grab a part — we have to know exactly where to grab it and with how much force,” USC roboticist and system developer Joseph Lim told Wired. “Even this grasping skill is a very big open problem for robotics.”
Even though it sounds like they’ve still got a long way to go, at least this is the kind of robot revolution we can definitely get behind. Teaching droids to think and act like people might be a skill worth passing along to our mechanized servants after all — so long as it means humanity might finally arrive at a future when the hardest part of getting that sleek new TV stand set up and ready to rock is swiping our credit cards at the store.