Robots are finding their way into every part of our lives. Your local box store, hospital, and Amazon distribution center likely all feature a robot or two delivering medications, taking inventory, or prepping your next-day order for shipment. There are plenty of articles out there pointing to the eventual takeover of jobs by robots. Some are shaken by the thought of a new “race” that might be an improvement on humanity. Are we really afraid of them? Or is it just that their mechanical vibes are a turnoff?
Our fascination with humanizing robots is frequently on display in pop culture, from the big screen to your TV. The Terminator featured multiple robots that took on the likeness of humans, and Data, a Star Trek: The Next Generation character, debuted as a synthetic life form with strong AI and a fully functional humanoid body.
The humanization of robots goes beyond synthetic skin and realistic limbs. It is almost as if we insist on giving robots other aspects of humanity, like self-awareness and the ability to feel as humans do (sentience).
Therein lie the questions: Is this a good idea? Is it possible? Will it cause bigger problems than we can foresee?
The possibility of creating a robot with the ability to think, act, and look like humans is hotly debated. The reasoning behind the desire for humanoid robots seems simple enough. Dr. Ben Goertzel thinks it is rooted in humans wanting to interact with something that looks more like them. He created Sophia, a robot with a human-like face that can hold limited conversations.
Dr. Goertzel believes that people would much rather divulge a secret to a robot that looks like a person as opposed to one that looks like a self-propelled and self-guided vacuum. He feels that humans are far more likely to interact with a familiar face than with a screen attached to a set of wheels.
Sophia now has peers – all named Sophia. There are 20 of them, and many have been rolled out as advertisements for the possibilities awaiting the field of robotics. Some large companies have shown interest in the technology as a futuristic greeter or concierge, but as things currently stand, a unit would be quite expensive to purchase.
Dr. Goertzel is undeterred by the financial hurdle giving mass producers pause. He has big dreams for the future of Sophia, including a fully humanoid robot.
“My aspiration is that we achieve something like artificial general intelligence [where a machine can perform any intellectual task a human can] of the equivalent of a five-year-old child by 2025 or earlier.”
Prohibitive costs are not the only thing holding this lofty goal back, however.
There are essentially two lines of thought. One, held by Dr. Goertzel, is that if a robot appears more human, humans will be more likely to interact with it. On the flip side, a hypothesis proposed by roboticist Masahiro Mori warns that humans find almost-but-not-quite-human robots off-putting and eerie (known as the Uncanny Valley effect).
For people like Dor Skuler, a leader in the field and CEO of Intuition Robotics, the case against human-like robots goes beyond how people generally perceive them. He believes that attempting to pass robots off as real people verges on unethical. What if a robot is so realistic that you do not realize it is not a person? Would you feel deceived if you discovered the truth after interacting personally with such a robot? Skuler thinks so.
Skuler’s company makes a robot called ElliQ, built specifically to keep homebound, often lonely elderly people company. The robot can speak and respond to questions but leaves no doubt that it is not human.
His research has found that people react positively to what is obviously a robot, without the smoke and mirrors of human appearance. Thus, Skuler argues, there is no reason to force humanistic attributes on a robot for the sake of “likeability”.
Skuler is not the only one who leans toward the less human side of the spectrum. Dr. Reid Simmons, a researcher at Carnegie Mellon’s Robotics Institute, thinks keeping things simple is best. In his view, there is simply no need to make robots so realistic.
If the robot is functional and doing the job it is assigned, why would we need to add human characteristics anyway? Take this Rozum robotic arm. Would it make a better manufacturing asset if it had realistic skin? No. It is great at its job and no humanistic feature would make it perform better. Is adding this layer of realism just causing potential problems unnecessarily?
Dr. Simmons seems to think so. He thinks the “Uncanny Valley” is a dangerous place to approach. If we begin getting some robots to the point that they are indiscernible from humans, will other less humanoid robots (that are just as or more effective) become less trustworthy? Over time, this is certainly a possibility.
Will robots ever be able to fill the other half of the humanoid equation? Is it possible that they will begin to be smarter than us in the realms of conversation and interaction? Well, Dr. Goertzel says the answer is a definitive yes. Taking it a step further, he believes that robots have the potential to achieve sentience and self-awareness.
There is a field called Artificial General Intelligence (AGI), the type of AI that would make such a leap from programming to conscious choice possible. Many in this field believe that if robots can achieve human-level intelligence, they can also attain the ability to choose freely.
If you thought the debate about the benefits of humanoid robots was heated, listen in on scientists discussing the possibility of robots experiencing true emotion.
Dr. Goertzel feels that the level of intelligence, which leads to choice, will eventually lead to the ability to emote. He wants his future robots to be capable of showing compassion while not necessarily exhibiting other traits like despair or anger.
Skuler offers an opposing view: he believes emotion is reserved for living things and cannot possibly be experienced by non-living things like robots. He grants that robots are capable of learning all the typical responses a human might make, but that does not necessarily mean they have emotions.
One of his projects hints at the ability of robots to understand emotions without feeling them. His company is researching and developing a companion that will “ride along” with you in the car. It will detect emotions, interpret them, and find ways to keep drivers safe.
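To make the distinction concrete, a system like this can map recognized emotional states to safety actions without ever "feeling" anything itself. The sketch below is purely illustrative and assumes made-up emotion labels and responses; it is not based on Intuition Robotics' actual software.

```python
# Hypothetical sketch: a rule-based mapping from a detected driver emotion
# to a safety-oriented response. The labels and actions are assumptions
# for illustration, not Intuition Robotics' real system.

def safety_response(emotion: str) -> str:
    """Return a keep-safe action for a detected driver emotion."""
    responses = {
        "anger": "suggest calming music and a slower route",
        "drowsiness": "recommend a rest stop",
        "stress": "reduce notifications and offer breathing prompts",
    }
    # Default: no intervention when the driver seems fine.
    return responses.get(emotion, "no intervention needed")

print(safety_response("drowsiness"))
```

The point of the toy example is that the robot only classifies and reacts; recognizing "anger" in a lookup table requires no inner experience of anger, which is exactly Skuler's argument.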
We seem to be at a crossroads in AI: you can continue down the path of making robots similar to humans while accepting that they will never attain humanity, or you can dream of a future where robots and humans are virtually the same.
The latter belief might seem far-fetched, and for good reason. But that may have been how people around Nikola Tesla felt when he demonstrated his remote-controlled boat in 1898; the idea of robots probably seemed pretty unbelievable back then.
We can leave the debates to the scientists and engineers working on robotics. Are you ready for a robot that looks like you? Or are you going to climb out of that Uncanny Valley and run as far as you can?