The “genie is out of the bottle,” according to the vice-president of research at Thales, a French defence giant.
Terrorist groups, he warns, are now “certain” to get their hands on small drones that can be used to attack targets.
According to the BBC, the US and Chinese armed forces are testing swarming drone technology. ISIS and other terrorist groups may make replicas of such drones that can be used for indiscriminate killing.
From the BBC:
Alvin Wilby, vice-president of research at French defence giant Thales, which supplies reconnaissance drones to the British Army, said the “genie is out of the bottle” with smart technology.
And he raised the prospect of attacks by “swarms” of small drones that move around and select targets with only limited input from humans.
“The technological challenge of scaling it up to swarms and things like that doesn’t need any inventive step,” he told the Lords Artificial Intelligence committee.
“It’s just a question of time and scale and I think that’s an absolute certainty that we should worry about.”
The US and Chinese military are testing swarming drones – dozens of cheap unmanned aircraft that can be used to overwhelm enemy targets or defend humans from attack.
Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, said he feared “very bad copies” of such weapons – without built-in safeguards to prevent indiscriminate killing – would fall into the hands of terrorist groups such as so-called Islamic State.
Autonomous technologies, such as self-driving cars and attack drones, could be used to breach security measures, leaving even guarded targets susceptible to attack.
As the field of robotics grows and automated technologies improve, terrorists have every incentive to adopt technology that can help them do the greatest possible harm.
The Blaze reports:
The U.S. and China are already testing technology that includes “dozens of cheap, unmanned aircraft,” that can hunt down targets or be used as a way to protect people from attacks, Wilby said.
“The technological challenge of scaling it up to swarms and things like that doesn’t need any inventive step,” he told the Lords Artificial Intelligence committee. “It’s just a question of time and scale and I think that’s an absolute certainty that we should worry about.”
Bad copies of these autonomous weapons could easily fall into the hands of terrorists, said Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield.
Sharkey warned that, when used for evil, such robots are simply killing machines carrying out orders without conscience. He painted a picture in which drones and other robots could roam around “firing at will.”
And he believes it has a very good chance of quickly becoming a reality.
“I don’t want to live in a world where war can happen in a few seconds accidentally and a lot of people die before anybody stops it,” said Sharkey, who is also a spokesman for the Campaign to Stop Killer Robots.
Because of the threat of AI technology falling into the wrong hands, there’s been a push in some countries to ban weaponized AI.
Should robotics experts continue to develop drones for the battlefield, or is the risk too great?
Tell us what you think in the comments below.