This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.

If you have comments on this blog posting, please email me.

The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.

Click here to see the whole Opinion Blog.

To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").

AI and Robotics: A Threat to Us All.

Posted on 4th December 2014


There have been many stories about, and a general rise in popular interest in, robotics and AI recently: robotics competitions, a rethinking of the famous Turing Test of Artificial Intelligence, and space projects that involve at least semi-autonomous operation, because remotely controlling devices across such vast distances is impractical.

I have always been concerned about this area of technology, but have been accused of paranoia by my friends and colleagues. Now Prof. Stephen Hawking has shared that he is also worried (as reported in this BBC story), and has said that "efforts to create thinking machines pose a threat to our very existence".

Isaac Asimov first set out his Three Laws of Robotics in the 1942 short story "Runaround". The laws are intended as a safety feature, to ensure that robots do no harm to people, and do what they are told.
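To make the intent of the laws concrete, here is a minimal, purely illustrative sketch in Python of how they could be applied as a priority-ordered check that a proposed action must pass before a robot acts. It is not taken from any real robotics system, and the class and field names are invented for the example.

    # Purely illustrative: Asimov's Three Laws as a priority-ordered veto
    # on a robot's proposed actions. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool       # would carrying this out injure a human?
        endangers_human: bool   # would it leave a human in danger through inaction?
        ordered_by_human: bool  # was it commanded by a human?
        endangers_self: bool    # would it damage or destroy the robot?

    def permitted(action: Action) -> bool:
        # First Law (highest priority): a robot may not injure a human being
        # or, through inaction, allow a human being to come to harm.
        if action.harms_human or action.endangers_human:
            return False
        # Second Law: a robot must obey orders given by humans, except where
        # they conflict with the First Law (already enforced above).
        # Third Law (lowest priority): a robot must protect its own existence,
        # unless doing so conflicts with the First or Second Law.
        if action.endangers_self and not action.ordered_by_human:
            return False
        return True

    if __name__ == "__main__":
        print(permitted(Action("fetch coffee", False, False, True, False)))            # True
        print(permitted(Action("fire weapon at a person", True, False, True, False)))  # False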

I have worked in the field of robotics (robot vehicles for military uses), and I know that Asimov's laws of robotics are generally completely ignored by researchers in robotics and AI. That is like building fast cars without brakes.

If anyone doesn't believe that robots are being developed for the battlefield, check out this article in Popular Science.

If anyone finds the plot line of Terminator too fanciful, check out this BBC article about a project to connect robots to the Internet so that they can learn from public sources and each other. Sounds a lot like Skynet to me.

I think the description that really puts the risks into perspective is Philip K. Dick's short story "Second Variety", which was also made into the movie "Screamers". The message is clear: if you build autonomous (i.e. AI-based) robots, give them the ability to change their own design (already being experimented with in AI machines) and to replicate themselves (already being seriously considered by scientists and engineers), and have no system to ensure that the laws of robotics are always programmed into the machines without alteration, then it is game over for the human race. Of course, it only takes one rogue government, terrorist group or company that does not play by the rules to make those rules useless.

Maybe you also think I am being paranoid, but Stephen Hawking is a very smart guy, and he is worried. You at least owe it to yourselves to read "Second Variety" or watch "Screamers" before you dismiss this.