Quote Originally Posted by pendell
The problem with the zeroth law is that it is so amorphous that it's not really useful as a guide to behavior.

What is "good for humanity"? Is it better for humanity if , say, we eliminated the gene for down's syndrome? What if a robot arrived at this conclusion, flawed or no, and started terminating the lives of anyone with that gene? Would we accept its defense that it was acting in accord with the zeroth law?

And what counts as 'humanity'? If a robot concluded that 'humanity' must evolve to the next stage of evolution, and proceeded to exert the necessary environmental pressure on the gene pool by fomenting wars or carrying out selective assassinations, would we want that to happen?

There is no deed so base, so vile, that it cannot be somehow justified as "for the good of humanity".

That is why I would prefer a concrete rule with tangible measures of performance (such as "Don't kill a human being") over a nebulous concept that can be rationalized to mean ANYTHING. I've debugged enough computer programs to know what happens when a computer follows the instructions assigned to it by humans to its logical conclusion.

Respectfully,

Brian P.
The Zeroth Law really requires robots that are not only vastly more intelligent than humans but nigh-omniscient as well.