If technology continues at its current pace, artificial intelligence could enter the battlefield within a few decades, or even a few years.
The idea of using autonomous machinery as weapons is a contentious issue that raises frightening prospects, including the potential for a military revolution on the same scale as gunpowder and nuclear weapons.
An Open Letter From Tech’s Prominent Thinkers
In a collective response to the prospect of AI weapons, over a thousand tech leaders and prominent thinkers have signed an open letter voicing their support for a ban on autonomous weapons that function independently of human control.
Put together by the Future of Life Institute (FLI) and presented at the International Joint Conference on Artificial Intelligence, the letter has been signed by a number of prominent scientists, business leaders, and thinkers, including Stephen Hawking, Elon Musk, Noam Chomsky, and Steve Wozniak.
A Warning Against Potential Dangers
The letter cites key reasons why AI weapons, and the global arms race that would likely result, could lead to major disasters.
These include the relative ease of building AI weapons, their eventual appearance on the black market, and their potential use by terrorists, or for ethnic cleansing, political assassinations, and other nefarious purposes.
The letter also states that artificial intelligence can play a role in strengthening military defenses and mitigating civilian losses during conflicts, but only if AI itself is not developed as a tool for killing.
Limit Developments Or Live Out Potential?
While many are already on board with the statements in the FLI letter, the organization is continuing to reach out to AI and robotics researchers to seek their commitment to excluding the militarization of autonomous technologies from their work.
AI—and how it could become a major threat to humanity—has recently become a more prominent discussion, furthered by rapid advances in interactive robotics and autonomous machinery and systems found in all manner of applications.
Some argue that AI development should be limited to certain points, while others see robots built by humans and interacting with them on more natural, intuitive levels as a natural progression of technology and part of reaching our innovative potential.
Do you think militarized artificial intelligence should be limited or even banned on a global scale? Do you think the disastrous AI scenarios have been overblown? Share your thoughts in the comments.