WASHINGTON — A group of scientists, engineers and celebrities has signed an open letter calling for a ban on the development of artificial intelligence-based offensive weaponry, citing the risk that such technology will proliferate into the hands of terrorists and dictators.

In a July 27 letter posted on the website of the Future of Life Institute, more than 1,170 AI and robotics researchers joined with 1,200 celebrities, scientists and software developers to warn that a military AI race would not be "beneficial for humanity."

Among those joining the call are world-renowned scientist Stephen Hawking, SpaceX and Tesla Motors founder Elon Musk, Apple co-founder Steve Wozniak, and linguist Noam Chomsky.

"Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," the letter reads.

The letter notes that there are "many ways" AI can be used to make life better for individuals around the globe, including on the battlefield, without being weaponized. But if any major military power begins to develop AI-based weapons, the signatories warn, the technology will quickly spread like the "Kalashnikovs of tomorrow."

"Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce," the letter reads. "It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."

Pentagon leadership has openly discussed possible applications for artificial intelligence in the battlespace, but has also acknowledged concerns about creating a system that acts without human analysis.

In a November 2012 directive, the Pentagon put in place rules that any autonomous and semiautonomous weapon systems "shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

Meanwhile, the Defense Advanced Research Projects Agency has experimented with artificial intelligence, including a June Robotics Challenge that showcased semiautonomous technologies that could be used in disaster relief scenarios.

Twitter: @AaronMehta