Robo Sapiens: The Mutual Future of Machine and Mankind

Written by: Chloe Cheung

Photo by: ThisIsEngineering from Pexels

As far as we know, humanity has been the sole sapient species to inhabit the Earth.

But that could change in the very near future.

Our species name is Homo sapiens, which translates to “man as a thinker”, or, to put it another way, “thinking man”. Our ability to think is what distinguishes and elevates us above our fellow species on this planet. To think is to perceive the world; to store, sort, and categorize it into rules; and not only to follow rules, but to invent new ones. Simply put, thinking is the quality that makes us human.

And it is this same quality that Artificial Intelligence, or AI, intends to emulate.

Machine learning is a subset of AI that pursues this elusive quality. As the name implies, machine learning is how machines learn: rather than following instructions a programmer has written out in advance, a machine-learning system improves its own behaviour from examples and experience, “thinking” independently of human input.
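What “learning” means here is easier to see in code. Below is a minimal, purely illustrative sketch in plain Python (my own example, not drawn from any source in this article): a tiny perceptron that is never told the rule for logical AND. It is only shown examples, and it arrives at the rule by adjusting its own weights whenever it guesses wrong.

```python
# A tiny perceptron that learns the logical AND rule from examples.
# The rule "output 1 only when both inputs are 1" appears nowhere in
# the code; the program infers it by correcting its own weights.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w0, w1, bias = 0.0, 0.0, 0.0  # the model's adjustable "knowledge"
rate = 0.1                    # how strongly each mistake corrects it

def predict(a, b):
    return 1 if (w0 * a + w1 * b + bias) > 0 else 0

for epoch in range(20):                 # show the examples repeatedly
    for (a, b), target in examples:
        error = target - predict(a, b)  # 0 if right, +/-1 if wrong
        w0 += rate * error * a          # nudge toward the correct answer
        w1 += rate * error * b
        bias += rate * error

print([predict(a, b) for (a, b), _ in examples])  # -> [0, 0, 0, 1]
```

The interesting part is what is absent: no line of the program states the AND rule. The behaviour emerges from the data, and that is the essence of machine learning.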

Fiction is populated with examples of AI and machine learning. Blockbusters like Blade Runner, The Terminator, 2001: A Space Odyssey and The Matrix have captured the popular imagination with their portrayals of thinking machines and how those machines might interact with humanity. Each film presents a possibility, often a frightening one, of what a future with intelligent, thinking AI could look like.

Many of these movies arise from our fear that AI will grow beyond its programming and become something more: something smarter, something better, and something we can no longer control. Robots were originally conceived as tools for humanity, and AI as a means of helping robots fulfil their intended, preprogrammed purposes. We use machines, but we fear the day machines use us.

AI has already surpassed humans at certain tasks and will come to do so at many more. In 1997, Deep Blue defeated then-reigning world champion Garry Kasparov in a game of chess. In 2017, AI predicted lung cancer progression about 20% better than conventional clinical guidelines. In 2018, Tesla’s Autopilot, even in its first version, demonstrated a 45% reduction in highway accidents compared to human drivers, and newer models will only continue to improve. In addition, AI is becoming a permanent fixture in our daily lives. Whether it’s bombarding you with targeted ads, finding your next flick on Netflix, or unlocking your phone with facial recognition, AI is everywhere. These growing levels of autonomy and ubiquity mean that the future of AI will become the future of humanity, and it is a future that we will confront in our lifetimes.

In an interview at SXSW 2018, Elon Musk cautioned humanity about the dangers AI could pose should the technology be allowed to develop unchecked:

“AI experts think they are smarter than they are. AI experts don’t like the idea that a machine can be smarter than them…[but] AI progresses faster than they [think]…

“I think the danger of AI [is] much greater than that of nuclear warheads by a lot…so why do we have no regulatory oversight?”

Even as early as 1950, some of the most eminent minds had already recognized the potential dangers of AI and the need for regulation.

Long before Musk, the prominent author Isaac Asimov proposed his own code of ethics to be encoded into all robots, rules which came to be known as the Three Laws of Robotics.

Asimov’s Laws are as follows:

  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These guidelines act as a failsafe—a way to prevent robots from harming humans.

Asimov’s Laws of Robotics have gone on to become a model of behaviour that many modern roboticists seek to emulate. At our current level of knowledge, however, the Laws are very difficult to implement in practice.

Replicating rules like Asimov’s requires translating them into a form that a robot can understand. Asimov’s Laws describe desired behaviours; however, many behaviours are not absolute and mean different things in different contexts. A robot would have to be taught how to navigate the context of its task to arrive at the intended meaning, as the sketch below tries to make concrete.
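To make the difficulty concrete, here is a deliberately naive sketch of what “encoding” the First Law might look like. Everything in it is invented for illustration: estimate_harm, HARM_THRESHOLD, and the context dictionaries are hypothetical stand-ins, not part of any real robot’s software. Each one hides exactly the contextual judgment calls described above.

```python
# A naive, purely illustrative attempt to encode Asimov's First Law.
# Every name and number here is a hypothetical stand-in: no real robot
# ships with a ready-made "estimate_harm" sensor, and that gap is the
# whole problem.

HARM_THRESHOLD = 0.1  # How much risk counts as "harm"? 10%? 1%? Context decides.

def estimate_harm(action: str, context: dict) -> float:
    """Hypothetical placeholder: estimated probability an action injures a human.

    In practice this one function is the unsolved part. Handing over a
    scalpel is the same physical act in an operating room and in a
    street brawl, but the "harm" it implies is completely different.
    """
    return context.get("injury_risk", {}).get(action, 0.0)

def first_law_permits(action: str, context: dict) -> bool:
    # "A robot may not injure a human being..."
    return estimate_harm(action, context) < HARM_THRESHOLD

# Two contexts, the same action, opposite verdicts:
surgery = {"injury_risk": {"hand_over_scalpel": 0.01}}
brawl   = {"injury_risk": {"hand_over_scalpel": 0.90}}
print(first_law_permits("hand_over_scalpel", surgery))  # True
print(first_law_permits("hand_over_scalpel", brawl))    # False
```

The code runs, but it only pushes the question down a level: someone still has to decide what estimate_harm should compute, and that decision is different for every task and every context.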

To further complicate matters, human language itself is multifaceted and situational, full of subtle nuance. We can hardly agree on universal definitions for our own terminology as it is, so how can we begin to codify those definitions for our robots? And that is before we even reach the technical difficulty of programming such definitions and behaviours into a machine.

But establishing these universal definitions and behavioural parameters in AI is a crucial first step. For these guidelines to be effective, all parties, human and robot, must share a common understanding of the language of ethics.

The potential of AI is vast. So far we have only been floating on the surface of a binary sea of possibility, and it is only in recent years that we have begun to plumb the depths of what AI has to offer. As we delve deeper, airtight regulation must be our moral compass, giving us the guidance and direction we need to navigate these unfamiliar waters. I am personally optimistic about the future of AI, but any optimism must be tempered by caution. One small lapse in oversight could be all it takes for our creations to capsize us, crushing humanity beneath the machinations and devices of our own making.
