When to start panicking about killer robots
Updated: Apr 8, 2019
[Originally published on Charged Magazine, November 7, 2018]
If you’ve seen one too many sci-fi movies (or five minutes of Westworld), you might think artificially intelligent humanoid robots can, and likely will, outsmart their creators.
After all, over the past century, humanoid robotics technology has accelerated from barely-believable Disney animatronics to emotive, joke-cracking robotic citizens. It’s not a stretch to envision what’s next. However, although potentially unsettling, modern artificial intelligence technology shouldn’t have you perusing panic room catalogs just yet. Or should it?
High-Tech One-Hit Wonders
It is true that robotic devices developed in the past few decades can perform some truly remarkable tasks, such as holding interactive conversations, dancing, or doing parkour. But all of these amazing machines fall into one of two categories: they can move well, or they can think well. None can do both – yet.
For example, Boston Dynamics is a robotics company that has built several robots with eerily life-like movement capabilities. Take their creation BigDog. When pushed, this robot looks almost like a scared deer recovering from a fall. These machines, including their newer creation Handle, have both reflexes and sophisticated controls that allow them to move through uncertain environments with extreme precision, carrying heavy loads and performing multiple tasks. However, they aren’t exactly stimulating conversation partners.
On the other hand, there are robots that very convincingly interact with humans, such as the now-famous Sophia from Hanson Robotics. Sophia can converse with humans relatively seamlessly. She can learn from conversations over time, tell jokes, and even throw shade. But her movements are clunky and severely limited.
Robots are designed to specialize in one task – they can either walk or talk but cannot yet do both. As prominent Oregon State computer science professor Thomas Dietterich, Ph.D., explained in an interview with Tech Insider, “No [artificial intelligence] system comes anywhere close to having this immense breadth of capabilities, particularly when it comes to combining vision, language, and physical manipulation” (Del Prado, 2016). What happens when there is a robot that excels in all of these tasks? At that point, it might be time to consider investing in some off-grid property.
Walking the Walk so You Can Talk the Talk
Why does movement matter when talking about artificial intelligence? As Cambridge neuroscientist Daniel Wolpert, Ph.D., puts it, mammals have brains so that they can move. Evolutionarily speaking, mammals developed the ability to perform complex movements like walking long before we developed higher cognitive abilities, such as speech.
Moving and interacting with your environment is the best way to learn, even for robots. Peter Norvig, Ph.D., director of research at Google, agrees: “Reasoning [in artificial intelligence] will be improved as we develop systems that continuously sense and interact with the world, as opposed to learning systems that passively observe information that others have chosen” (Del Prado, 2016).
Take, for example, a baby learning to talk. Although you don’t remember it, you probably learned what a ball was by seeing a ball, grabbing it, maybe even putting it in your mouth a few dozen times, all while someone repeated the word “ball” over and over. You were eventually able to define what the word “ball” meant by recalling your interactions with that object. This is likely why early motor skills are predictive of language development (Bedford et al., 2016). Observations like these have led cognitive neuroscientists to develop the theory of embodied cognition. This theory suggests that cognitive functions – understanding the emotions of others, developing likes and dislikes, and communicating with those around you, to name a few – are rooted in our physical experiences with the environment, and the sensations we feel from those experiences. Essentially, you know what awkwardness is because you accidentally called your first-grade teacher ‘mom’ once or accidentally said “Love you!” before hanging up the phone with your boss.
An entire body of developmental and neuroimaging evidence supports this theory, showing that we use our own sensory and motor experiences for complex cognitive processes such as understanding the actions of others (Grafton, 2009) and the meaning of words (Pulvermüller, 2005). Fortunately for us, or unfortunately if you are a roboticist, man-made machines do not yet possess this ability.
Moving Towards Independent Thought
For a robot to have true, rich artificial intelligence, it has to be able to think and learn without supervision. At the moment, robots have a hard time forming their own opinions about the environment because they can’t fully – or, perhaps more importantly, independently – experience it. If you tell a robot to “push” or to “walk,” it typically understands what you mean because someone went into the code and defined what those words meant, not because the machine defined them for itself by actually pushing or walking. Deb Roy, Ph.D., a professor at MIT working on robots that understand language through physical interaction, describes this as “the machine [getting] caught in circular chains of dictionary-like definitions” (Roy, 2005). His group has managed to create Ripley, a robot that can physically interact with objects to form its own opinions about the environment. However, these types of machines still have a long way to go before engaging in rich social interactions as well as complicated physical movements.
So what’s holding us back from developing robots that can move, sense, and think? It seems that researchers investigating things like movement and sensation in humans and robots are largely separated from roboticists developing cognitive algorithms. Like Dr. Roy, Dr. Francesca Odone and colleagues are working to develop the cognitive and physical skills of iCub, a robot that can independently learn to visually recognize objects and perform physical manipulations. Fanello et al. (2017) explain that “In spite of the complementary challenges, cognitive robotics and computer vision mainly proceeded on independent tracks.” To enable robots to walk and talk, perhaps researchers across fields must first talk to each other.
We’re Safe. For now.
Until robots can form experience-based definitions of the things they interact with through their own movement, we are probably safe from the potential robot apocalypse. When robots develop the ability to learn from their own physical experience – experience they gain themselves by interacting with the environment through coordinated movement – that is when their thinking will become truly independent.
Who knows, maybe if we show them our good side, robots will like humans and decide to live harmoniously with their creators. However, since Sophia recently learned to walk, and seems to have other plans, you might want to keep those panic room catalogs within arm’s reach.
Bedford, R., Pickles, A., & Lord, C. (2016). Early gross motor skills predict the subsequent development of language in children with autism spectrum disorder. Autism Research, 9(9), 993-1001.
Del Prado, G. M. (2016, March 09). Experts explain the biggest obstacles to creating human-like robots. https://www.businessinsider.com/biggest-challenges-human-artificial-intelligence-2016-2
Fanello, S. R., Ciliberto, C., Noceti, N., Metta, G., & Odone, F. (2017). Visual recognition for humanoid robots. Robotics and Autonomous Systems, 91, 151-168.
Grafton, S. T. (2009). Embodied cognition and the simulation of action to understand others. Annals of the New York Academy of Sciences, 1156(1), 97-117.
Pulvermüller, F. (2005). Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6(7), 576-582.
Roy, D. (2005). Semiotic schemas: A framework for grounding language in action and perception. Artificial Intelligence, 167(1-2), 170-205.