Back in 1942, Isaac Asimov wrote a short story called “Runaround” that formally introduced his Three Laws of Robotics, a complete set of rules that every robot in his Robot series was beholden to. Now, Google has announced a similar safeguard, partially inspired by the Three Laws, that will inform how its future AI-enabled machines operate.
Let’s look at what Google has dubbed its “Robot Constitution” and how it works.
Asimov’s Three Laws of Robotics Are Pretty Straightforward
As they appear in “Runaround,” the Three Laws are as follows:
- The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Since they were first put to paper, these laws have been referenced and reproduced across all manner of media. Now, they seem to have finally made the jump to practical application (although Asimov would have argued they already made a pretty good set of rules for human behavior).
Introducing Google DeepMind’s “Robot Constitution”
Google DeepMind, the technology giant’s primary artificial intelligence research wing, has been hard at work developing advanced robotics better suited to real-world utility.
If I were to ask you to tidy up your desk space, for instance, you would have a pretty good idea of what I meant. You would put away extra writing utensils, file or properly dispose of documents, and infer the rest of the task with your flexible human mind. A robot, on the other hand, doesn’t have the understanding needed to process these kinds of requests and act on them appropriately. Google’s AutoRT system gives the robot this capability.
In essence, AutoRT allows the robot to assess its environment, determine what it can actually accomplish out of its options, and then attempt to do so.
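DeepMind describes AutoRT as pairing a visual language model, which turns the robot’s camera view into a text description of its surroundings, with a large language model that proposes tasks the robot could attempt in that scene. As a minimal sketch of that loop’s shape, here is a toy version in Python; every function here is a hypothetical stub (describe_scene, propose_tasks, and passes_constitution are our names, not DeepMind’s), since the actual models are not publicly available:

```python
# Toy sketch of an AutoRT-style loop: describe the scene, propose
# candidate tasks, filter them, and pick one to attempt. All names
# and outputs are illustrative stand-ins, not DeepMind's internals.

FORBIDDEN_KEYWORDS = ["human", "animal", "knife", "appliance"]

def describe_scene(image):
    # Stand-in for a visual language model captioning camera input.
    return "a desk with a sponge, a mug, and a stack of papers"

def propose_tasks(scene_description):
    # Stand-in for an LLM prompted with the scene description.
    return [
        "wipe down the desk with the sponge",
        "hand the knife to the human",
    ]

def passes_constitution(task):
    # Reject any task that touches a forbidden category
    # (see the Robot Constitution rules in the next section).
    return not any(word in task for word in FORBIDDEN_KEYWORDS)

def autort_step(image):
    scene = describe_scene(image)
    candidates = propose_tasks(scene)
    feasible = [t for t in candidates if passes_constitution(t)]
    return feasible[0] if feasible else None

print(autort_step(image=None))  # -> "wipe down the desk with the sponge"
```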
This system isn’t without its risks, however. Returning to our “tidy the desk” example, what if the desk had a fishbowl on it? Would a robot try to remove the bowl, potentially killing the fish? What if a robot, tasked with cleaning a kitchen, left a knife in an unsafe place? Alternatively, what if a robot determines it needs to accomplish something that its frame simply is not capable of doing—risking damage to itself in the process?
This is where Google’s Robot Constitution comes into play. Starting with the classic “A robot may not injure a human being,” the Robot Constitution is a series of prompts that dictate what the robot simply cannot do. These rules prevent Google’s experimental robots from attempting anything that involves human beings, animals, sharp objects, or electrical appliances. Google also programmed safeguards that stop any task exerting too much force on the robot’s joints, and a more practical safeguard, a human supervisor with a kill switch, was always present as well.
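The prompt-based rules slot naturally into the filtering step sketched above, while the physical safeguards act as a separate layer. Here is a minimal sketch of that second layer, with an assumed force threshold and class design (neither reflects DeepMind’s actual implementation):

```python
# Toy sketch of the physical safeguards layered over the rule prompts:
# a joint-force limit plus a human-held kill switch. The threshold and
# class design are illustrative assumptions, not DeepMind's values.

MAX_JOINT_FORCE = 50.0  # assumed limit, in newtons

class SafetyMonitor:
    def __init__(self):
        self.kill_switch_engaged = False  # flipped by the human supervisor

    def allow_motion(self, joint_forces):
        # The supervisor's kill switch overrides everything else.
        if self.kill_switch_engaged:
            return False
        # Halt any motion that strains a joint beyond the limit.
        return max(joint_forces) <= MAX_JOINT_FORCE

monitor = SafetyMonitor()
print(monitor.allow_motion([12.5, 30.1]))  # True: within the force limit
monitor.kill_switch_engaged = True
print(monitor.allow_motion([12.5, 30.1]))  # False: the supervisor stopped it
```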
The Researchers Are Striving for Further Advancement
Hoping that these advancements are another step toward a future filled with helpful robots, the team at Google spent just over half a year (as of this writing) evaluating a fleet of 52 unique robots over the course of 77,000 trials. In that time, the robots completed 6,650 unique tasks. While these tasks were relatively simple ones, along the lines of “wipe down the countertop with the sponge,” they point to a potential future where we may never need to wipe down countertops ourselves.
In the meantime, if there are certain IT tasks that your business is currently unable to accomplish, we're here to help. Give us a call at (954) 575-3992 to learn about what our managed services could do for you and your operations.