This blog post represents the views of the author, David Fosberry. Those opinions may change over time. They do not constitute expert legal or financial opinion.
If you have comments on this blog post, please email me.
The Opinion Blog is organised by threads, so each post is identified by a thread number ("Major" index) and a post number ("Minor" index). If you want to view the index of blogs, click here to download it as an Excel spreadsheet.
Click here to see the whole Opinion Blog.
To view, save, share or refer to a particular blog post, use the link in that post (below/right, where it says "Show only this post").
Posted on 21st October 2017
I am sure that the researchers at Google, described in this report on Business Insider, are very pleased with themselves. I am somewhat less than pleased.
In May this year, they developed an artificial intelligence (AI) designed to help them create other AIs. Now it has demonstrated its abilities by "building machine-learning software that’s more efficient and powerful than the best human-designed systems."
In my last post on this subject (here), I listed three things that AI must not be allowed to do if we are to avoid the AI apocalypse. The first two had already come to pass; this new work by Google completes the list.
As I wrote on 23rd February 2017, basically, we are screwed.
Would it be too much to ask the people working on AI to finally show some moral scruples, and to apply some common sense?