Asimov’s Laws of Robotics: Creating Safe Robots for the Age of Artificial Intelligence

Posted on June 10, 2017


Science fiction writer and biochemist Isaac Asimov wrote extensively about robotic technology in his short stories and novels, but many believe his Three Laws of Robotics to be his greatest contribution.

I, Robot

To make robots and artificial intelligence that would do no harm to human beings, Asimov devised three simple laws of robotics to govern their behaviour. The laws first appeared in the 1942 short story Runaround, later collected in I, Robot. This marked the beginning of a whole new field, roboethics, and the stories in I, Robot and collections such as The Rest of the Robots explore how the laws work and fail.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

As the stories are often philosophical and ethical investigations of how the laws can be superseded, ignored, or broken by robots, Asimov later added a ‘Zeroth’ law to sit above the three: 0. A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
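The strict precedence among the laws (Zeroth above First, First above Second, and so on) can be sketched as a simple rule hierarchy. The following is purely an illustrative sketch: the `Action` structure, its flags, and the law predicates are assumptions invented for this example, not part of any real robotics framework.

```python
# Illustrative sketch: Asimov's Laws as a priority-ordered rule hierarchy.
# All names here (Action, its flags, first_violated_law) are assumptions
# made up for this example.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_robot: bool = False

# Laws in priority order: violating an earlier law can never be
# justified by complying with a later one.
LAWS = [
    ("Zeroth", lambda a: not a.harms_humanity),
    ("First",  lambda a: not a.harms_human),
    ("Second", lambda a: not a.disobeys_order),
    ("Third",  lambda a: not a.endangers_robot),
]

def first_violated_law(action):
    """Return the highest-priority law the action violates,
    or None if it is permissible under all four."""
    for law_name, permits in LAWS:
        if not permits(action):
            return law_name
    return None

print(first_violated_law(Action("fetch coffee")))  # None: permitted
print(first_violated_law(Action("ignore command", disobeys_order=True)))  # Second
```

The point of the ordering is exactly what Asimov's stories probe: when the predicates conflict (for example, an order that would harm a human), only the highest-priority law matters, and every loophole in the stories exploits how those predicates are judged.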

xkcd’s examination of what happens if Asimov’s Three Laws are not followed explicitly.

He envisioned his laws as the basis for future robot construction to keep humans safe:

“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior. My answer is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.’” Isaac Asimov

But many believe Asimov’s laws will not adequately cover the variety of robots we will build or the advent of superhuman artificial intelligence.

Robots in Asimov’s stories are often semi-human androids, built by human manufacturers (not AI) and subservient to humans. His stories are incredibly entertaining logic puzzles that show why the laws fail through a series of loopholes in various situations. As a starter, check out this Gizmodo post featuring opinions from machine intelligence and AI researchers, titled Why Asimov’s Three Laws of Robotics Can’t Protect Us.

In fact, do we need the three laws for simple robots like vacuum cleaners? And how do those laws apply to DNA robots that operate inside the body? In this post from the MIT Technology Review, two researchers from the University of Koblenz – Ulrike Barthelmess and Ulrich Furbach – argue the laws are rooted in our culture’s fear of technology, found in stories about Jewish Golems and Shelley’s Frankenstein. What we really fear is other human beings creating machines that can harm us.

Asian Robotics

Interestingly, East Asian countries where robotics is more advanced, such as South Korea and Japan, where people interact with robots in everyday life, have more developed robot laws and roboethics charters. Look at Japan’s Ten Principles of Robot Law.

  1. Robots must serve mankind
  2. Robots must never kill or injure humans
  3. Robot manufacturers shall be responsible for their creations
  4. Robots involved in the production of currency, contraband or dangerous goods must hold a current permit.
  5. Robots shall not leave the country without a permit.
  6. A robot’s identity must not be altered, concealed or allowed to be misconstrued.
  7. Robots shall remain identifiable at all times.
  8. Robots created for adult purposes shall not be permitted to work with children.
  9. Robots must not assist in criminal activities, nor aid or abet criminals to escape justice.
  10. Robots must refrain from damaging human homes or tools, including other robots.

For an up-to-date look at ethical issues in areas like robots in the military and so-called Lethal Autonomous Weapons Systems, whether Care-Bots should bring alcoholic drinks to people, or how much social interaction with a robot is acceptable, there are some great posts on the RoboEthics Database.