Roger, we're talking Fighter Combat Drones here, right? Asimov's first law is broken already, so I strongly doubt that Asimov's laws are relevant here.
It is however an important starting point; most importantly, perhaps, in showing just how difficult it is to give a logic engine even the simplest instructions without it killing someone. I've exaggerated that of course, but at the end of the day, you have to be careful what you tell an automated machine to do, especially in the context of the logic that has been programmed into it.
A new set of rules needs to be defined to address all the unknowns and possibilities of automated unmanned vehicles (robots?) capable of killing humans (with bombs, missiles, guns, etc.).
Which is the fun part =/
But what we are talking about now is a fighter-ready unmanned automated aerial vehicle capable of killing another person without the intervention of a second human.
We should be mindful of the difference between general laws of robotics, general laws of combat robotics, ground-based/ground-attack robots, and air-only robots.
A drone enforcing a no-fly zone is much easier to deal with than one fighting a whole war, because as long as it has working radar, GPS and weapons, the result of giving it a (true) no-fly zone to enforce will always be the same: if it flies, it dies. It doesn't really need to make decisions.
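Just to show how little decision-making is involved, here's a rough Python sketch; all the names in it (Track, inside_zone, enforce) are invented for illustration, not from any real system.

from dataclasses import dataclass

@dataclass
class Track:
    ident: str
    x_km: float   # position relative to the zone centre
    y_km: float

def inside_zone(track, radius_km):
    # Simple circular zone centred on (0, 0).
    return track.x_km ** 2 + track.y_km ** 2 <= radius_km ** 2

def enforce(tracks, radius_km):
    # No judgement calls: anything airborne inside the zone gets engaged.
    return [t.ident for t in tracks if inside_zone(t, radius_km)]

print(enforce([Track("bogey-1", 3, 4), Track("airliner-7", 60, 80)], radius_km=10))
# -> ['bogey-1']

The interesting questions only appear once you try to make inside_zone mean anything more subtle than "is it in the box".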
Problems start arising mostly when you have it making decisions, especially about whether or not to engage and what constitutes a target. There have been enough cases where humans have screwed this up, so a robot can't be expected to fare that much better in all circumstances. In some, the results may be better. Take the case where an airliner's track overlapped with that of a parked military jet and a ship's radar operator marked one as the other: the airliner flew normally, but the operator got so worked up that his mind tricked him into thinking it was on an attack run against the ship, and he reported it to the unfortunate fellow in charge, who had no choice but to blow it out of the sky. Protocol has changed as a result, since a second person checking the screen could have prevented a significant loss of life, but it is exactly this sort of situation where an automated system would have saved lives, and I am sure there are other situations where human judgement can fail in the same way.
Much like setting your highly trained protection dog loose on another human being, and then turning your back to see if the dog can make its own decisions after you have given it an order. Can you expect your dog NOT to harm anyone else while trying to subdue your perp? Can your dog make rational moral decisions?
A dog is trained, and that training defines its decision-making process. A computer does the same, except the decision-making process is programmed directly. As such, it's much easier to predict the outcome of a computer's logic, because you know exactly what it's thinking. With a dog, you're missing that step; you don't know what it's learned from what you taught it.
Its instincts and what life has taught it might be beneficial in some situations, but they are unpredictable.
So now we expect a piece of metal and electronics that cannot love, hate, dislike or reason to make such moral, rational, abstract decisions.
I think it's unfair to say it can't reason. Reasoning is the application of logic to a process, and a machine is nothing more than a logic engine.
As noted above, we don't necessarily expect it to make moral decisions. It all depends on what you program it to do. You might program it with a way to tell friend from foe, shoot at foes, and ask for instructions if it can't tell which it is. Being fired at is a good way to identify foes, and can probably be programmed as a condition.
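As a sketch of what I mean (the function names and the "ask a human" step are hypothetical, not any real IFF system):

def engagement_decision(contact, iff_response, being_fired_at_by, request_human_decision):
    """Return 'engage' or 'hold', or defer to a human when the drone can't tell."""
    if being_fired_at_by(contact):
        return "engage"              # being fired at is a good way to identify a foe
    response = iff_response(contact)
    if response == "friend":
        return "hold"
    if response == "foe":
        return "engage"
    # Can't tell friend from foe: don't decide alone, ask for instructions.
    return request_human_decision(contact)

The moral weight sits entirely in how iff_response is implemented, not in this decision loop.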
I believe unmanned fighters, remotely controlled by humans from a strategic position, are quite possible, but they present their own set of challenges, such as reaction time and the sensory input of the remote pilot. Would the drone get that sixth sense many fighter pilots report before an imminent engagement? But that is a discussion for another thread.
I'd go with something similar to the Mars rovers as far as control goes. Because of the time delay between Earth and Mars, the solution was to give the rover a set of pre-programmed behaviours, which are activated by command, with parameters, from Earth. A fighter jet could work the very same way: give it the programming and the means to destroy a target from the word go, have it look after itself in a non-hostile manner until the word go, and then have a human give the command.
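Something like a command dispatcher, as I understand the rover approach; all of this is a made-up illustration rather than how any real uplink works.

BEHAVIOURS = {}

def behaviour(name):
    # Register a pre-programmed routine under a command name.
    def register(fn):
        BEHAVIOURS[name] = fn
        return fn
    return register

@behaviour("loiter")
def loiter(altitude_m=8000, radius_km=20):
    return f"loitering at {altitude_m} m in a {radius_km} km orbit (non-hostile)"

@behaviour("strike")
def strike(target_grid, weapon="missile"):
    # Only ever runs when a human sends the command and its parameters.
    return f"engaging {target_grid} with {weapon}"

def handle_uplink(command, **params):
    # Anything outside the pre-programmed set is refused outright.
    if command not in BEHAVIOURS:
        return "unknown command: holding current behaviour"
    return BEHAVIOURS[command](**params)

print(handle_uplink("loiter", altitude_m=6000))
print(handle_uplink("strike", target_grid="grid 123-456"))

The drone never improvises; it only ever runs routines it already carries, and a human picks which one and when.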
Some things are fairly black and white; when someone locks onto you and then fires a missile, you're well within your rights to shoot them down. A human would do exactly that unless given explicit orders otherwise (and even then they might do it anyway); a robot can be programmed to do the same, unless given explicit orders, and it will always comply with them.
I don't think we'll see robots firing at unidentifiable targets anytime soon, and I'm not sure it's something we need or want to discuss right now. I don't believe it's truly relevant to the original post (as relevant as it might be to the current discussion), and it could get a thread of its own.
I will touch on it regardless. At the end of the day, to fire on a target, an AI needs to identify it as such, and the more difficult that is, the more awkward the conversation about ethics. That's why it is easier to talk about pure-air drones than about ground-attack or ground-based drones: telling aircraft apart is much easier than identifying what is and is not a threatening humanoid. There are only so many ways of identifying one, and a lot of them are visual. You can use various other sensors to identify a humanoid, such as heat (though technically that is still largely visual identification), but identifying whether it's hostile is another matter.

The easiest way is to know where a person is and is not supposed to be, such as inside a facility. Automated internal gun systems are perfectly feasible in certain types of area; in fact you could build one from a handgun you might already own and an auto-sentry mount that can be bought online, and simply shut it down when you need to pass through the area. It's a very basic way of doing it, and results in a "secure corridor", but it would be vulnerable to a breach while you're in it. Not very effective, but it demonstrates the easiest instruction you can give a drone: kill all, and kill none. The ethics are about as complicated as a CCTV installation or a minefield on private land.
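The corridor sentry's logic really is that crude; a sketch with entirely made-up names, not any real product:

class CorridorSentry:
    """The 'kill all, and kill none' corridor sentry described above."""
    def __init__(self):
        self.armed = True

    def disarm(self):
        # Shut it down when you need to pass through the corridor.
        self.armed = False

    def arm(self):
        self.armed = True

    def on_motion(self, fire):
        # No identification at all: while armed, anything that moves gets shot.
        if self.armed:
            fire()
        # While disarmed, nothing does; the whole decision is the on/off switch.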
Telling friendly and hostile humans apart is more difficult. If CoD is to be believed, an IR strobe can be used to make this task easier for humans (referring to the mission where you get to play gunner on the AC-130). But one thing a human can do that a machine cannot (or at least not easily) is decide who is shooting at whom, something a gunner can do. It's easy enough to determine that a person who is shooting is shooting at you, and thus hostile, and as a robot you can afford to wait long enough for that to happen. But when people are shooting at each other, one cannot at this time expect a robot to intervene on that information alone; a human would struggle just as much. The best call there is not to shoot unless you can positively identify a foe, and never to kill anyone you cannot identify as one.
Automated unmanned fighter-capable vehicles are, IMHO, irresponsible. It's one thing sending an automated drone to take photos of enemy positions, record enemy force counts and even paint targets, but it's a totally different story to tell that same drone to go into "hot" environments with a view to defending itself and, in effect, killing another human being.
Surely an automated system painting a target for another automated system that does the killing is much the same thing as an automated system killing people? I'm not entirely sure how these systems work, so I could be wrong as to how much human input there is in the process, but I think painting the target is usually the bit that's done by humans, while hitting it is largely automated (guided weapons and the like).
Sending a drone into a hot environment and asking it to defend itself without using pre-emptive action is probably the most ethical way of employing an automatic killing machine, because it has a guarantee that anyone it kills was hostile - they fired first.
The Geneva Convention places a premium on the rights of innocents as well as the rights of those combatants in contact situations: when I stop firing at you and "put up my hands" (figuratively speaking) and pose you no more threat you cannot kill me. Would a drone be able to distinguish this scenario?
In air-to-air warfare, how is this done now? A drone would have this much easier, because one could standardise a "ceasefire" frequency or signal, which could then be used to surrender. Probably far more reliable than a crackly radio in a different language against someone you just pissed off by shooting at them.
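A sketch of what I mean; the channel name and message format here are entirely made up:

CEASEFIRE_CHANNEL = "guard-243.0"   # hypothetical agreed surrender channel

def should_cease_fire(received_signals, target_id):
    """Stop engaging a target that has broadcast the agreed surrender signal."""
    for signal in received_signals:
        if signal.get("channel") == CEASEFIRE_CHANNEL and signal.get("sender") == target_id:
            return True
    return False

print(should_cease_fire([{"channel": "guard-243.0", "sender": "bandit-2"}], "bandit-2"))
# -> True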
In a land-based scenario, this is one of many things where drones will fall short when trying to identify targets. I'm repeating myself I think.
I just cannot comprehend a machine making life and death decisions without the intervention of a human.
I think you're putting a little too much significance on that. Well-programmed, cold and calculating is no better or worse than emotional. Machines inherently cannot hate or get upset (as you stated above); humans can. One can program a computer to account for casualties inflicted so far, but at the end of the day, your machine is mirroring the morality of the man who programmed it. Why is that machine so much scarier than the man?
I don't think it's ethical, and I don't think it should even be considered.
You make it sound like men have a right to make life and death decisions. The only thing that sets them apart is that man has a conscience. The only problem with that reasoning is... Some don't, and some listen to other emotions first.
Much like it would be immoral to allow cloning of people ...
And why's that?
More importantly, why is it moral and ethical to clone animals as opposed to humans? Is it? I think (from the way you say it) that you say that because it's something you've always heard, or thought, or been told, but never really thought about.
Sorry if I'm repeating myself a lot, I wrote this over the course of hours so I don't know how it's come out...