
Self-aware drones: When Drones Decide to Kill on Their Own




Anti Federalist
10-02-2012, 09:28 AM
This is one of those cases where I have to vehemently disagree with the idea that all technology is value neutral.

There is no upside to this.

There is nothing but evil intent behind this.



When Drones Decide to Kill on Their Own

http://thediplomat.com/flashpoints-blog/2012/10/01/why-killing-should-remain-a-human-enterprise/

It’s almost impossible nowadays to attend a law-enforcement or defense show that does not feature unmanned vehicles, from aerial surveillance drones to bomb disposal robots, as the main attraction. This is part of a trend that has developed over the years where tasks that were traditionally handled in situ are now operated remotely, thus minimizing the risks of casualties while extending the length of operations.

While military forces, police/intelligence agencies and interior ministries have set their sights on drones for missions spanning the full spectrum from terrain mapping to targeted killings, today’s unmanned vehicles remain reliant on human controllers who are often based hundreds, and sometimes thousands of kilometers away from the theater of operations. Consequently, although the use of drones substantially increases operational effectiveness — and, in the case of targeted killings, adds to the emotional distance between perpetrator and target — they remain primarily an extension of, and are regulated by, human decisionmaking.

All that could be about to change, with reports that the U.S. military (and presumably others) have been making steady progress developing drones that operate with little, if any, human oversight. For the time being, developers in the U.S. military insist that when it comes to lethal operations, the new generation of drones will remain under human supervision. Nevertheless, unmanned vehicles will no longer be the “dumb” drones in use today; instead, they will have the ability to “reason” and will be far more autonomous, with humans acting more as supervisors than controllers.

Scientists and military officers are already envisaging scenarios in which a manned combat platform is accompanied by a number of “sentient” drones conducting tasks ranging from radar jamming to target acquisition and damage assessment, with humans retaining the prerogative of launching bombs and missiles.

It’s only a matter of time, however, before the defense industry starts arguing that autonomous drones should be given the “right” to use deadly force without human intervention. In fact, Ronald Arkin of Georgia Tech contends that such an evolution is inevitable. In his view, sentient drones could act more ethically and humanely, without their judgment being clouded by human emotion (though he concedes that unmanned systems will never be perfectly ethical). Arkin is not alone in thinking that “automated killing” has a future, if the guidelines established in the U.S. Air Force’s Unmanned Aircraft Systems Flight Plan 2009-2047 are any indication.

In an age where printers and copy machines continue to jam, the idea that drones could start making life-and-death decisions should be cause for concern. Once that door is opened, the risk that we are on a slippery ethical slope with potentially devastating results seems all too real. One need not envision the nightmare scenario of an out-of-control Skynet of Terminator fame to see where things could go wrong.

In this day and age, battlefield scenarios are less and less the meeting of two conventional forces in open terrain, and instead increasingly take the form of combatants engaging in close-quarters firefights in dense urban areas. This is especially true of conflicts pitting modern military forces — the very same forces that are most likely to deploy sentient drones — against a weaker opponent, such as NATO in Afghanistan, the U.S. in Iraq, or Israel in Lebanon, Gaza, and the West Bank.

Israeli counterterrorism probably provides the best examples of the ethical problems that would arise from the use of sentient drones with a license to kill. While it is true that domestic politics and the thirst for vengeance are both factors in the decision to attack a “terrorist” target, in general the Israel Defense Forces (IDF) must continually apply proportionality, weighing the operational benefits of launching an attack in an urban area against the costs of the attendant civilian collateral damage. The IDF has faced severe criticism over the years for what human rights organizations and others have called “disproportionate” attacks against Palestinians and Lebanese. In many instances, such criticism was justified.

That said, what often goes unreported are the occasions when the Israeli government didn’t launch an attack because of the high risks of collateral damage, or because a target’s family was present in the building when the attack was to take place. As Daniel Byman writes in a recent book on Israeli counterterrorism, “Israel spends an average of ten hours planning the operation and twenty seconds on the question of whether to kill or not.”

Those twenty seconds make all the difference, and it’s difficult to imagine how a robot could make such a call. Unarguably, there will be times when hatred will exacerbate pressures to use deadly violence (e.g., the 1982 Sabra and Shatila massacre that was carried out while the IDF looked on). But equally there are times when human compassion, or the ability to think strategically, restrains the use of force. Unless artificial intelligence reaches a point where it can replicate, if not transcend, human cognition and emotion, machines will not be able to act on ethical considerations or to imagine the consequences of action in strategic terms.

How, for example, would a drone decide whether to attack a Hezbollah rocket launch site or depot in Southern Lebanon located near a hospital or with schools in the vicinity? How, without human intelligence, will it be able to determine whether civilians remain in the building, or recognize that schoolchildren are about to leave the classroom and play in the yard? Although humans were ultimately responsible, the downing of Iran Air Flight 655 in 1988 by the U.S. Navy is nevertheless proof that only humans still have the ability to avoid certain types of disaster. The A300 civilian aircraft, with 290 people on board, was shot down by the USS Vincennes after operators mistook it for an Iranian F-14 and warnings to change course went unheeded. Without doubt, today’s more advanced technology would have ensured the Vincennes made visual contact with the airliner, which wasn’t the case back in 1988. Had such contact been made, U.S. naval officers would very likely have called off the attack. Absent human agency, whether a fully independent drone would make a similar call would be contingent on the quality of its software — a not so comforting thought.

And the problems don’t just end there. It’s already become clear that states regard the use of unmanned vehicles as somewhat more acceptable than human intrusions. From Chinese UAVs conducting surveillance near the border with India to U.S. drones launching Hellfire missiles at suspected terrorists in places like Pakistan, Afghanistan or Yemen, states regard such activity as less intrusive than, say, U.S. special forces taking offensive action on their soil. Once drones start acting on their own and become commonplace, the level of acceptability will likely increase, further absolving their users of responsibility.

Finally, by removing human agency altogether from the act of killing, the restraints on the use of force risk being further weakened. Technological advances over the centuries have consistently increased the physical and emotional distance between an attacker and his target, resulting in ever-higher levels of destructiveness. As far back as the Gulf War of 1991, critics were arguing that the “videogame” and “electronic narrative” aspect of fixing a target in the crosshairs of an aircraft flying at 30,000 feet before dropping a precision-guided bomb had made killing easier, at least for the perpetrator and the public. Things were taken to a greater extreme with the introduction of attack drones, with U.S. Air Force pilots not even having to be in Afghanistan to launch attacks against extremist groups there, drawing accusations that the U.S. conducts an “antiseptic” war.

Still, at some point, a human has to make a decision whether to kill or not. It’s hard to imagine that we could ever be confident enough to allow technology to cross that thin red line.

jkr
10-02-2012, 09:53 AM
one went crazy in 2007, I think, and DID kill humans... all by itself

FSP-Rebel
10-02-2012, 09:55 AM
iRobot anyone?

libertygrl
10-02-2012, 10:18 AM
And the hits just keep on coming......:eek: Sort of makes you not even want to bother getting out of bed in the morning.:(

EBounding
10-02-2012, 10:21 AM
Couldn't we just have drones that bake cookies or something first before killing people?

http://zs1.smbc-comics.com/comics/20110114.gif

fisharmor
10-02-2012, 10:48 AM
Yes, giving them automated tasks is a fucking brilliant idea.
http://www.haaretz.com/news/middle-east/iran-official-we-tricked-the-u-s-surveillance-drone-to-land-intact-1.401641

Seriously, why are the military people who say these things allowed to keep their jobs?

Lucille
10-02-2012, 10:57 AM
There's an upside for the bloodthirsty ruling class. It relieves them of any responsibility when it "malfunctions (http://www.ronpaulforums.com/showthread.php?391474-Something-New-to-Worry-About-Murderous-Autonomous-Drones)" and kills a bunch of right wing extremist types.

tangent4ronpaul
10-02-2012, 11:19 AM
HELLO SKYNET!

And it's because of F'n Hollyweird and the Terminator series of movies...

-t

VanBummel
10-02-2012, 11:24 AM
I recently graduated from college. I majored in Computer Science, and this was the major reason I decided to stop with a bachelor's and take a safe IT job (no matter how well or poorly I do, I don't kill anyone). A ridiculous portion of CS research dollars comes from the DOD and goes into military robots, self-driving cars and self-bombing drones. There was no way I was helping out with, or even stamping my name to, any of that bullshit.

michaelwise
10-02-2012, 11:37 AM
Flying Killer Drone Robots

I've been saying the main stream media is a major enemy of the American people for a while.

I don't believe they can brainwash a critical mass of people anymore because virtually nobody tunes into their news channels these days.

It used to be 20 million would watch CNN prime time, but that's down to about 500k now.

The Internet is the new anti-mind control mechanism reaching billions of people today.

As far as being told what to think by the MSM, that it was a "Terrorist Attack", I don't give a rat's ass. I already know terrorism is a guerrilla tactic, and you can't win a war against a tactic because it's just an idea.

I also know why the Libya attack probably happened and why the protests in other Muslim countries happened: they used the anti-Muslim YouTube video as an excuse, but really it was because we're slaughtering thousands of innocent brown people, women and children with our flying killer drone robots.

The only question I have now is: what are the names of the people who carried out this attack on the CIA operatives in the State Department, so I can give the Libyan Freedom Fighters the Congressional Medal of Honor, or was it the Mormon Mafia, so I can hang them for it?

Trying to Secure Libya's most delicious Light Sweet Crude Oil deposits for the Owners of the Planet's oil Conglomerate to Control is not a Virtue.

Anti Federalist
10-02-2012, 07:00 PM
I recently graduated from college. I majored in Computer Science, and this was the major reason I decided to stop with a bachelor's and take a safe IT job (no matter how well or poorly I do, I don't kill anyone). A ridiculous portion of CS research dollars comes from the DOD and goes into military robots, self-driving cars and self-bombing drones. There was no way I was helping out with, or even stamping my name to, any of that bullshit.

Mega +rep!

jmdrake
10-02-2012, 08:18 PM
Yes, giving them automated tasks is a fucking brilliant idea.
http://www.haaretz.com/news/middle-east/iran-official-we-tricked-the-u-s-surveillance-drone-to-land-intact-1.401641

Seriously, why are the military people who say these things allowed to keep their jobs?

I wonder what other GPS guided autonomous vehicles can be "tricked" by the Iranians/Chinese?

HOLLYWOOD
10-02-2012, 08:49 PM
When the machine decides to destroy and murder... no single government killer can be blamed, and all legal ramifications are avoided. Make sure you destroy those video tapes like the CIA... leave no loose ends/evidence.

awake
10-02-2012, 08:59 PM
Can't build killer self-thinking robots when you're broke. The people who are planning stuff like this don't understand that the war machine is going to be falling on hard times soon. The wars and the industries that support them will be fighting off seniors who want their pensions instead of endless war... The army of the old will completely overrun the military industrial complex when it comes time to default.

angelatc
10-02-2012, 09:01 PM
This is one of those cases where I have to vehemently disagree with the idea that all technology is value neutral.

There is no upside to this.

There is nothing but evil intent behind this.



When Drones Decide to Kill on Their Own.

OH AF, there's always hope. Maybe they can decide to kill each other!

FindLiberty
10-02-2012, 09:17 PM
People who write code for those killer drones have nothing to fear, if they haven’t done anything wrong…

osan
10-02-2012, 11:11 PM
..

ClydeCoulter
10-02-2012, 11:40 PM
I recently graduated from college. I majored in Computer Science, and this was the major reason I decided to stop with a bachelor's and take a safe IT job (no matter how well or poorly I do, I don't kill anyone). A ridiculous portion of CS research dollars comes from the DOD and goes into military robots, self-driving cars and self-bombing drones. There was no way I was helping out with, or even stamping my name to, any of that bullshit.

Yep, me too. Only I am not recently graduated.

I had the chance for government funding on some projects in college that I decided not to do, and I have abandoned several ideas for the same reason.

Once, my long time friend, no longer with us, and I argued over whether it was possible for machines to eventually take over. I said yes, because man is stupid enough to give machines everything they need to do it, thinking he will keep control. A virus at the wrong time in the wrong control machine could unleash havoc.

Damn them for continuing down this path.

John F Kennedy III
10-02-2012, 11:51 PM
HELLO SKYNET!

And it's because of F'n Hollyweird and the Terminator series of movies...

-t

No, those movies were based on Pentagon or DARPA documents.

Anti Federalist
10-03-2012, 12:30 AM
No, those movies were based on Pentagon or DARPA documents.

Oh, that's a good point.

He's right, folks.

That's where these ideas come from, at least for that type of movie.

fisharmor
10-03-2012, 09:13 AM
I recently graduated from college. I majored in Computer Science, and this was the major reason I decided to stop with a bachelor's and take a safe IT job (no matter how well or poorly I do, I don't kill anyone). A ridiculous portion of CS research dollars comes from the DOD and goes into military robots, self-driving cars and self-bombing drones. There was no way I was helping out with, or even stamping my name to, any of that bullshit.

Well, I have an ME friend who until recently was working on self-driving cars, and in fairness, most of their research was on teaching the car how NOT to kill people.

jkr
10-03-2012, 09:23 AM
they use the movies to pitch the concepts to acquire funding and then gradually indoctrinate the "citizens" into the new system of technotronic $lavery... all the while getting rich and FAT off of the people they are sworn to protect (until the time comes to shear the sheep)

they are not smart or creative; they are violent
and they use artists to manifest the horrors of their twisted minds!
it's like a bad acid trip

VanBummel
10-03-2012, 10:33 AM
Well, I have an ME friend who until recently was working on self-driving cars, and in fairness, most of their research was on teaching the car how NOT to kill people.

And to me, the killing potential isn't even the scariest thing. I saw my university's self-driving car (built with grants from DARPA, of course) - it was completely packed with cameras, computers, and sensors throughout (because it had to be). The surveillance potential is horrifying:

You have exceeded the posted speed limit of *45mph*, your bank account *Checking* has been debited *$100*. You have *10 seconds* to lower your speed to within legal limits to avoid being penalized again.

You have entered a restricted area. Auto-drive will now engage. Please do not attempt to regain control of the vehicle or exit the vehicle until arriving at the police station for interrogation.

You have *damaged or removed* sensor *105(b)*. Your car will be inoperable until a licensed technician from the Department of Driver Safety can install a new sensor *105(b)*. Your bank account *Checking* has been debited *$1000*. Your bank account *Checking* has been overdrawn, and you must spend *72 hours* in jail. Please report to your local police station within *24 hours* to avoid being penalized again.

Pericles
10-03-2012, 10:36 AM
OH AF, there's always hope. Maybe they can decide to kill each other!

Sounds like a fun project for liberty minded people with IT skills.