Why people think that an AI with a gun is scarier than a soldier with a gun, I will never understand.
Remember the video where a helicopter crew shot reporters because their cameras looked like an RPG? A soldier with a gun, facing an uncertain situation where he or his friends risk death, will err heavily on the side of "caution", shooting at everything that moves. An AI with a gun, however, can accept its own destruction as an acceptable outcome in the same situation.
> Why people think that an AI with a gun is scarier than a soldier with a gun, I will never understand.
In two words: scale and miniaturisation. A rifleman has inherent limitations -- he cannot move unaided beyond his walking speed, cannot be made to weigh on the order of a kilogramme, and he must sleep and eat and shit. He cannot lie in wait indefinitely, and he cannot fly either. He has, in the godawful vernacular of the defence-contracting industry, SWaP (size, weight, and power) issues. His face is as vulnerable to bullets as yours or mine, and a 12.7mm (.50 BMG) round will walk through his body armour anyway. He is human, and the harm that men with guns can do is thus limited.
Stuart Russell uses the example of micro-UAVs with AI-based targeting software, each armed with a single-use shaped charge (for anti-personnel use or breaching doors) -- 10^6 of them would devastate a city, with extremely little human or logistical support needed. A million riflemen could do a great deal of killing, but they would be slower, easier to stop, easier to detect, and would require far more support and infrastructure to remain effective.
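To make the logistics gap concrete, here's a back-of-envelope sketch (every figure is a rough assumption of mine, not Russell's):

```python
# Back-of-envelope logistics comparison: 10^6 micro-UAVs vs 10^6 riflemen.
# Every figure below is an illustrative guess, not a sourced number.

DRONE_MASS_KG = 0.25            # assumed mass of one armed micro-UAV
SUPPLY_KG_PER_SOLDIER_DAY = 10  # assumed food/water/ammo per rifleman per day
CAMPAIGN_DAYS = 30
N = 10**6                       # a million of each

drone_fleet_t = DRONE_MASS_KG * N / 1000
rifle_supply_t = SUPPLY_KG_PER_SOLDIER_DAY * N * CAMPAIGN_DAYS / 1000

print(f"Entire drone fleet, total mass:   {drone_fleet_t:>9,.0f} t")   # ~250 t
print(f"Riflemen, 30 days of supply only: {rifle_supply_t:>9,.0f} t")  # ~300,000 t
```

On those (made-up but not crazy) numbers, the whole swarm fits on about ten lorries, while the riflemen need a supply chain moving three orders of magnitude more mass before anyone fires a shot.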
What do we call weapons that allow very few men to kill millions without placing themselves in any hazard, again? Russell (rightly, in my judgement) classes this sort of use of AI as "scalable WMDs". Lethal autonomous weapons shouldn't be compared to a "soldier with a gun"; appropriate comparisons are more along the lines of "flying landmines with face recognition".
A million riflemen are unstoppable only by Neanderthals with sticks; a million micro-UAVs are unstoppable only by societies that aren't ready for them. The technology to detect UAVs is the same technology we're talking about here, though, so we are definitely no Neanderthals.
If a human being does the shooting, there is a human element involved - someone to surrender to, a possibility of mercy, a chance of accountability in court, someone to write a book about what happened 20 years later.
All of those things are important, but none of them are a priority for the people who have the "AI with a gun" programmed.
> none of them are a priority for the people who have the "AI with a gun" programmed
Aren't they, really? Why do you think so? Soldiers have exactly the same incentives as the designers and engineers of those devices: accepting an enemy's surrender can be a rational tactical choice (so that more of them surrender instead of fighting to the end), they are just as accountable in the eyes of the law (which may or may not matter to them - exactly as with ordinary soldiers), etc.
The only difference is that an AI will make its choices rationally, less influenced by the emotions of the battlefield. Do you really think the net result of the average soldier's emotions brings him closer to "merciful"? As far as I can tell, it's the opposite - the most powerful emotion on the battlefield is usually fear, and fear doesn't make people merciful at all.
Sure, both can happen. I think the real fear is an army of AI bots gone wrong. I don't think we've had an army of humans turn on each other.
Humans will always make mistakes, and it's not that two humans can't make the same mistake, but each mistake is individual. A bunch of bots stamped out with the same code on the same hardware can all make the same mistake at once, because every bot shares the same bug.
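A minimal sketch of that intuition (the numbers are made up purely for illustration): if a thousand humans each independently misjudge 1% of targets, essentially no target is misjudged by all of them, whereas a thousand bots stamped with the same bug all misjudge the same 1%.

```python
import random

random.seed(0)
TARGETS = 10_000
AGENTS = 1_000
ERR_RATE = 0.01  # assumed per-agent error rate, identical for humans and bots

# Humans: each agent makes its own, independent set of mistakes.
human_errors = [set(random.sample(range(TARGETS), int(TARGETS * ERR_RATE)))
                for _ in range(AGENTS)]
misjudged_by_all_humans = set.intersection(*human_errors)

# Bots: one bug, stamped identically into every unit.
shared_bug = set(random.sample(range(TARGETS), int(TARGETS * ERR_RATE)))
misjudged_by_all_bots = shared_bug

print(f"Targets misjudged by ALL humans: {len(misjudged_by_all_humans)}")  # almost certainly 0
print(f"Targets misjudged by ALL bots:   {len(misjudged_by_all_bots)}")    # exactly 100
```

The individual error rate is the same in both cases; only the correlation differs, and the correlation is what turns one bug into a fleet-wide failure.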
> I don't think we've had an army of humans turn on each other.
It happened only last week in Zimbabwe: an army of humans designed to keep the ruling powers safe turned on them and took over control, and it's hardly the first time.