The history of the fight against a possible real-world Terminator scenario, but is it really needed?
Adam Milton-Barker | Jul 19, 2018 | Artificial Intelligence
Normally I get a little shudder when I see the words "Elon Musk says dot dot dot" popping up in my news feed, but today the posts mentioning Musk caught my eye. At first I thought tech blogs were resharing old news about the 2015 Open Letter on Autonomous Weapons, but after checking it out I found that Musk and a number of others, including DeepMind (part of Google), had come together again at the International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden, where the Future of Life Institute organized a pledge agreeing not to get involved in the development of "lethal autonomous weapons."
The hype about the dangers of Artificial Intelligence has started again, but this time I agree: keep A.I. away from warfare! (2015)
Back in 2015, Elon Musk, Stephen Hawking, Demis Hassabis, Steve Wozniak and thousands of others signed an open letter, AUTONOMOUS WEAPONS: AN OPEN LETTER FROM AI & ROBOTICS RESEARCHERS, requesting a ban on autonomous weapons. The letter was presented at the 2015 International Joint Conference on Artificial Intelligence (IJCAI) on 28 July 2015 in Buenos Aires. Also published in 2015, and signed by Musk and Hawking, was RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE, which included a list of possible areas of research on the journey towards sentient AI.
"In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."
Future of Life Institute
In 2017 another letter was published urging a ban. Around the same time, talks began with the United Nations regarding the banning of autonomous weapons, spurred on by the letter published in 2015.
Bringing us back up to date, this year's International Joint Conference on Artificial Intelligence was held in Stockholm, Sweden, at the same venue where I attended the 35th International Conference on Machine Learning last week with Intel. At IJCAI yesterday a pledge was published in which the signatories agree not to be involved in the development of AI that is directly connected to, or in control of, weapons. According to The Verge, however, the agreement doesn't seem to cover the development of non-lethal types of AI.
My thoughts on this? As I mentioned in the previously linked article, I agree that AI must be kept away from war, but I think the pledge should not be restricted to weapons alone. Honestly though, at this point I feel we face a more immediate and pressing danger: poverty driven by the 4th Industrial Revolution and the massive technology shifts we have faced, are facing, and will continue to face.
A.I. is not the danger; misinformation is the real killer!
I think that, really, we cannot protect against rogue AI without actually having rogue AI, any more than we can create antivirus software before the virus exists. So for now we can let Musk continue on his journey towards creating sentient AI, just to keep an eye on it of course ;) I believe our time needs to be spent educating and training businesses and the general public for a not-too-distant world where automation and AI can do most jobs, and the only jobs that remain require technical skill sets.