This blog posting represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute an expert legal or financial opinion.
If you have comments on this blog posting, please email me.
The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.
Click here to see the whole Opinion Blog.
To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").
Posted on 23rd February 2017
Show only this post
Show all posts in this thread.
There has been a lot of news about AI (Artificial Intelligence). There have been major advances in the technology, and many products and services using it are being rolled out: AI chat-bots, AI personal assistants, AI in translation tools, AI used to stamp out fake news, and AI being developed for the battlefields of the future, to name but a few. This new breed of AI uses surprisingly few computing resources: it no longer needs a huge computer centre, and simpler AI programs will run even on portable devices such as mobile phones.
My position remains that AI is extremely dangerous. Smarter people than me, such as Professor Stephen Hawking and Elon Musk, have said that AI poses an existential threat to humanity. It is not difficult to imagine scenarios where AI goes out of control and poses a threat to our existence: sci-fi movies and literature are full of examples (see the other posts in this thread for some of these works).
I have argued in the past for researchers to take more seriously the laws of robotics, first proposed by Isaac Asimov in 1942. These laws are fairly simple:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
I now realise, however, that this approach cannot work. Modern AI is not programmed in the conventional sense; it learns by being fed data. The creators of an AI system therefore do not know how it represents the information it has learned, nor how it implements the rules and priorities it has been given. It is thus not possible to program the laws of robotics into the AI at a basic level, since the programmers do not know the language in which it is thinking. It is, of course, possible to include the laws of robotics in the information that the AI is taught, but we can never know what priority the AI will really give those laws, nor even what it understands by words such as "harm" and "human".
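To illustrate the point about opaque representations, here is a toy sketch (my own illustration, not from any real AI product): a single artificial neuron, trained with the classic perceptron learning rule to compute the logical AND function. After training it answers correctly, but everything it has "learned" is stored as a few floating-point numbers, with no place where a human-readable rule could be inserted or inspected.

```python
import random

random.seed(42)

# Training data: two inputs and the desired output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start with small random weights and a bias.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def predict(x):
    # The neuron fires (outputs 1) if the weighted sum exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward correct answers.
for _ in range(100):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

# The trained neuron now computes AND correctly...
assert all(predict(x) == target for x, target in data)

# ...but its "knowledge" is just three opaque numbers.
print("weights:", w, "bias:", b)
```

Scale this up to billions of weights, and the problem becomes clear: there is no line of code saying "do not harm humans" that anyone could verify, only numbers whose meaning nobody can read off.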
Realistically, the only way we can protect ourselves from the doomsday scenarios of out-of-control AI is by ensuring that AI never has:
So, basically, we are screwed!