Lieutenant Commander Data’s grandparents are about to be born, based on self-driving cars
One thing is for certain, we've all had worse co-workers.
The bridge of the Next Generation Enterprise wouldn’t have been the same without Data. In fact, on several occasions the ship and its crew would have been lost entirely if not for the intervention and quick thinking of its android officer. As we move into the future, endeavoring to seek out new worlds, it is likely we’ll continue to expand our reliance on robotic counterparts.
Interactions between people and robots are on the rise, and research at places like Boston Dynamics is producing increasingly advanced and complicated machines. As a result, robust systems for governing those interactions are needed if people are to feel safe living and working in an environment with a growing robotic population.
These interactions manifest most often today in manufacturing, where robots carry out a significant portion of many processes. There have already been at least a few human deaths at the hands of a robot. Those deaths and injuries, of course, aren’t the result of any malicious intent on the part of the machines; instead, they represent gaps in our ability to program them effectively.
Asimov’s laws of robotics guide fictional systems which are far more advanced than what we’re working with today, but developing something similar for the guidance of modern robots can help to ensure our safe and effective cooperation.
New research carried out by scientists at the Advanced Control and Intelligent Systems Laboratory at the University of British Columbia outlines a potential system for defining and developing guidelines for use in human-robotic collaboration (HRC). The work was published in the journal Robotics and Computer-Integrated Manufacturing.
Traditionally, robots have been programmed to carry out their work without regard for their environment. If a human happens to get in the way of that work, the results can be disastrous. Researchers are working on new systems which give machines a greater awareness of their surroundings and the ability to adjust their actions accordingly, sometimes in a fraction of a second. These systems build on existing frameworks for autonomous vehicles, which, from a certain point of view, are robots that operate alongside humans in the real world.
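One common form that split-second adjustment takes is speed scaling: slowing the robot as a person approaches and halting inside a safety zone. Here is a minimal sketch of the idea in Python; the function name, thresholds, and linear ramp are illustrative assumptions, not details from the paper.

```python
# Illustrative environment-aware speed scaling: the robot slows as a
# detected human gets closer, and stops entirely inside a safety radius.
# All thresholds here are invented for the example.

def scaled_speed(nominal_speed: float, human_distance_m: float,
                 stop_radius: float = 0.5, slow_radius: float = 2.0) -> float:
    """Return a commanded speed given the nearest human's distance (meters)."""
    if human_distance_m <= stop_radius:
        return 0.0                    # inside the safety zone: halt
    if human_distance_m >= slow_radius:
        return nominal_speed          # no one nearby: full speed
    # linearly ramp the speed between the stop and slow radii
    fraction = (human_distance_m - stop_radius) / (slow_radius - stop_radius)
    return nominal_speed * fraction
```

A controller would call this every sensing cycle, so the commanded speed tracks the human's movement moment to moment.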
In addition to environmental awareness, future robots will have the ability to predict the actions of humans and other machines and make necessary adjustments in advance. This becomes increasingly important as we take robots out of static environments like factories and let them loose on the outside world.
The conditions of a real-world scenario are fluid, and handling them will require a new class of machine, both so the robots work correctly and so we can work safely alongside them.
Modern robots in an industrial setting have two modes of operation: autonomous and collaborative. When working autonomously, a robot’s actions are coded so that it carries out its objectives irrespective of outside influence. Collaboration, in contrast, requires a robot to take in data from an ever-changing environment and build models by which it can operate under fluid conditions.
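The distinction between the two modes can be sketched in a few lines of code. This is a hypothetical illustration, with invented mode and action names: in autonomous mode the planned step is executed regardless of the surroundings, while collaborative mode re-decides each step from fresh sensor input.

```python
# A toy model of the two operating modes: autonomous robots follow the
# plan regardless of the environment; collaborative robots adapt each
# step to current sensor data. Names here are invented for illustration.

from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"
    COLLABORATIVE = "collaborative"

def next_action(mode: Mode, planned_step: str, human_nearby: bool) -> str:
    if mode is Mode.AUTONOMOUS:
        return planned_step           # outside influence is ignored
    # collaborative mode: adjust to the current state of the workspace
    return "yield_to_human" if human_nearby else planned_step
```

The danger the article describes falls out of the first branch: an autonomous robot returns `planned_step` even when a person is in the way.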
This becomes important when the work being done has dynamic demands. Humans and robots each have a particular set of talents which might make them ideal for certain parts of collaborative work. There are, in fact, already robots working alongside human astronauts in space. The adorably named Robonaut 2, a humanoid robot developed by NASA at Johnson Space Center, was launched to the International Space Station in February of 2011. The environment of space requires the quick thinking and ingenuity of living astronauts, but also harbors many conditions which are dangerous to human life. Those scenarios are a perfect example of instances when collaboration between humans and robots can work.
Robots use machine learning, taking in data sets either from the real world or from simulations to build their models. Simulated scenarios are faster but don’t necessarily include all of the variables a machine might encounter in a real-world setting.
Researchers broke down types of work ranging from the fully programmed robots we’re mostly familiar with through to fully autonomous robots. In the middle there exists a range of work where robots can be assistive, cooperative, or collaborative. These last two are the realms where a robot’s ability to parse its environment becomes the most important, because humans and machines sharing a workspace spend far more time in close contact.
Robots in these shared workspaces could have a number of potentially redundant modes of communication allowing them to gather data from their human collaborators. These range from hand gestures to voice commands and even interpreting facial expressions.
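One way redundant channels can work is by voting: the robot acts only when independent channels agree, and fails safe when they conflict. The sketch below assumes three channels and a two-of-three agreement rule; both are invented for illustration, not taken from the research.

```python
# Illustrative fusion of redundant command channels (gesture, voice,
# facial expression). The robot acts only when at least two channels
# agree, and pauses otherwise. The two-of-three rule is an assumption.

from collections import Counter
from typing import Optional

def fuse_commands(gesture: Optional[str], voice: Optional[str],
                  expression: Optional[str]) -> str:
    """Return the agreed command, or 'pause' when channels conflict."""
    votes = Counter(c for c in (gesture, voice, expression) if c is not None)
    if votes:
        command, count = votes.most_common(1)[0]
        if count >= 2:
            return command
    return "pause"    # no redundant agreement: fail safe
```

Failing safe on disagreement matters here: a misread facial expression alone can never trigger an action.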
Through supervised, unsupervised, and reinforcement learning strategies, future robots will learn to recognize and anticipate the actions of humans and other robots in a workspace, even and especially if that workspace exists in the outside world.
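Of those three strategies, reinforcement learning is the easiest to show in miniature: the robot tries actions, receives rewards, and learns which action fits which situation. The toy below uses a tabular one-step update on an invented two-state workspace task; the states, actions, and rewards are all illustrative assumptions.

```python
# A toy reinforcement-learning sketch: the robot learns from rewards to
# yield when the human reaches into the shared workspace and to proceed
# otherwise. States, actions, and rewards are invented for illustration.

import random

STATES = ("human_reaching", "human_away")
ACTIONS = ("proceed", "yield")

def train(episodes: int = 2000, alpha: float = 0.5, seed: int = 0) -> dict:
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)
        a = random.choice(ACTIONS)        # explore at random
        # reward: yielding is right when the human reaches in,
        # proceeding is right when the human is away
        if s == "human_reaching":
            r = 1.0 if a == "yield" else -1.0
        else:
            r = 1.0 if a == "proceed" else 0.0
        q[(s, a)] += alpha * (r - q[(s, a)])  # one-step value update
    return q

def best_action(q: dict, state: str) -> str:
    return max(ACTIONS, key=lambda a: q[(state, a)])
```

After training, the learned table prefers yielding when the human reaches in, which is the anticipation the article describes, learned from experience rather than coded in advance.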
These enhanced abilities will be critical not only for robots to work safely alongside humans in a variety of environments, but also for establishing a relationship of trust with humans. That trust is a particular challenge, one which will take time and a solid track record to build.
Perhaps we shouldn’t be surprised. Even in the 24th century, there were people who didn’t trust Data or treat him as a full member of the crew. But if robots are going to be a part of our lives, and all indications suggest they are, then we’ll have to learn to live and work together.
Now, if we can just teach one to fold the laundry then we’ll all be happy.