This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

If you have comments on this blog posting, please email me.

The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.

Click here to see the whole Opinion Blog.

To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").

Why The Laws Of Robotics Cannot Work

Posted on 23rd February 2017

There has been a lot of news about AI (Artificial Intelligence). There have been major advances in the technology, and many products and services that use it are being rolled out: we have AI chat-bots, AI personal assistants, AI in translation tools, AI being used to stamp out fake news, and AI being developed for use on the battlefields of the future, to name but a few. This new breed of AI uses surprisingly few computing resources: it no longer needs a huge computer centre, and simpler AI programs will run even on portable devices such as mobile phones.

My position remains that AI is extremely dangerous. Smarter people than me, such as Professor Stephen Hawking and Elon Musk, have said that AI poses an existential threat to humanity. It is not difficult to imagine scenarios where AI goes out of control and poses a threat to our existence: sci-fi movies and literature are full of examples (see the other posts in this thread for some of these works).

I have argued in the past that researchers should take more seriously the laws of robotics, first proposed by Isaac Asimov in 1942. These laws are fairly simple:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I now realise, however, that this approach cannot work. Modern AI is not programmed in the conventional sense; it learns by being fed data. As a result, the creators of an AI system do not know how it represents the information it has learned, or how it implements the rules and priorities it has been given. It is therefore not possible to program the laws of robotics into the AI at a fundamental level, because the programmers do not know the language in which it is "thinking". It is, of course, possible to include the laws of robotics in the information that the AI is taught, but we can never know what priority the AI will really give those laws, nor even what it understands by words such as "harm" and "human".
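
To make this concrete, below is a minimal sketch (in Python with NumPy, chosen purely for illustration; the network, data and numbers are a toy example of my own, not taken from any real AI product) of what "learning by being fed data" produces. A tiny neural network is trained to reproduce the XOR function, and afterwards everything it "knows" is contained in a few arrays of numbers.

    import numpy as np

    rng = np.random.default_rng(1)

    # Training data: the XOR function (inputs and the outputs we want).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialised weights and biases for a small 2-4-1 network.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Train with plain gradient descent (back-propagation).
    for _ in range(20000):
        hidden = sigmoid(X @ W1 + b1)                # forward pass
        output = sigmoid(hidden @ W2 + b2)
        error = output - y                           # how wrong the network currently is
        grad_out = error * output * (1 - output)     # gradient at the output layer
        grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)  # gradient at the hidden layer
        W2 -= 0.5 * hidden.T @ grad_out              # nudge every weight a little
        b2 -= 0.5 * grad_out.sum(axis=0)
        W1 -= 0.5 * X.T @ grad_hid
        b1 -= 0.5 * grad_hid.sum(axis=0)

    print("Predictions:", sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(3).ravel())
    print("W1:", W1.round(2), "W2:", W2.round(2), sep="\n")  # the network's entire "knowledge"

Run it and you should (usually; a network this small can occasionally get stuck, depending on the random starting weights) see the four XOR cases answered correctly, yet nowhere in W1 or W2 is there anything that can be pointed to as a rule. The learned behaviour is real, but it is smeared across those numbers. A real AI system is the same picture scaled up by many orders of magnitude, which is why building the laws of robotics in "at source" is not an option.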

Realistically, the only way we can protect ourselves from the doomsday scenarios of out-of-control AI is to ensure that AI never has:

  1. The ability to do physical harm; this means no AI on the battlefield, in self-driving vehicles, in medical equipment, or in a whole host of other applications. I seriously doubt that industry and governments are capable of, or can be trusted to, show such restraint.
  2. The ability to learn more, once deployed, which amounts to reprogramming itself. Since such continued learning is already one of the unique selling propositions of some AI products, the ship has already sailed on that piece of restraint.
  3. The ability to create more AI. At least this is not yet happening (as far as I know), but I suspect that it is only a matter of time before AI developers start to use AI tools to create the next generation of AI products.

So, basically, we are screwed!