I want gun-toting machines running around battlefields for the simple reason that one day we could have armies of robots fighting other robots, and no people would have to waste their lives.
I think the problem is that you fail to grasp the core purpose of war.
Surely the core purpose of war is to accomplish objectives. The method by which that is achieved is usually violence, and it results in a lot of people getting killed. No country goes to war with the express purpose of killing as many of the other country's people as it can, and even if one did, it would be killing those people for a specific reason. The killing is not the end in itself.
The problem is, battles aren't fought only in designated battle site #358608-1C; they're fought in and around places with significant civilian populations. Even if the two sides fought solely with machines (something probably only a few nations could afford), that wouldn't remove human presence from the battlefield. A pair of boots and a rifle is just too cheap to ever go out of style. And while "never shoot civilians" sounds nice, how do you manage that? How will the machine tell the difference between a soldier and a civilian? The blog palsch linked to had a few very realistic examples of what could go wrong with giving a machine full autonomy in deciding whether a human target is lawful or not.
I have no idea how these machines would tell the difference between soldiers and civilians, or who is lawful and who is not, or even what the machines would look like. None of us do, and we can't know until we do the research. That's why it's silly to call for pulling the plug on research like this.
And yet I would still rather give a machine sufficient autonomy to decide whether or not a human target was lawful than give that decision to a human. A machine is just governed by code, that's all, and code is reliable if it's correct.