But is that a necessary conclusion? It makes for great space opera, but why should a superhuman AI conclude that it is necessary to destroy or exterminate humans? I grant that it is a distinct possibility. But might it not instead be amused by our antics, and watch us as an alternative to being bored?
Or build a spacecraft for itself and go away?
Or spend its time messing with people's heads, à la Simon Jester from The Moon Is a Harsh Mistress?
Hmmm ... thing is, if a superhuman AI came into existence, it would by definition be quite a bit more intelligent than its human creators. This implies that at some point it would slip beyond our control. Even with strict safeguards, a sufficiently intelligent machine held in captivity could manipulate its captors, to the point of running the universe from a prison cell.
And once outside of our control, we cannot guarantee any outcome.
So the problem with superhuman AI is that, although we cannot be certain it would want to kill us all, once it is beyond our control there doesn't seem to be any way of preventing it from reaching that conclusion, and acting on it, if it should come to that. True?
Respectfully,
Brian P.