    Re: Agents of S.H.I.E.L.D. V: You Joined the Cavalry.

    Quote Originally Posted by dancrilis
    In fairness programming a humanesque mind and that including hard coded limitations may be problematic, and having one pass as human without a humanesque mind may be even more difficult.

    But imagine for a second it is possible to include this limitation, imagine the following scenario.

    Someone is driving a truck towards innocent civilians. The android doesn't have time to stop the truck, but it can shoot the driver in the head, which will likely (it can do the probability math for this) cause the truck to swerve and save the civilians. Should the android be able to make such a split-second decision?
    Saying 'yes' or 'no' is perfectly acceptable as there is a case for either.
    A human should absolutely be allowed to make a split-second decision like that. An A.I., however, should only be allowed to make that decision if giving it free will was part of the point of building it, which necessarily means no hard-coded, unbreakable rules regulating its behavior. I would also recommend figuring out how to replicate human emotions before doing this: most humans will only kill if they absolutely have to, rather than simply because it seems like the most logical thing to do. Granted, emotional duress can also drive a human to kill, but that too is a minority problem, and building robots this way should, at the very least, keep them from banding together against humanity. But I digress.

    To build an A.I. with free will any other way would be extremely dangerous. It is ONLY at that point that I myself would say "haven't you seen any movies?", since those movies do offer theoretical examples of how and why this could go wrong. To make a good A.I. you have to know what you are doing, which is what makes it frustrating that we see it go wrong in the same ways over and over again. But I digress yet again.

    From what Radcliffe has said, both in private and in front of others, it does not seem like Aida was designed with free will. That must mean that either she really did gain some form of sentience from the Darkhold, or Radcliffe, for some reason, programmed her to prioritize what he wants over his own physical safety, given that he had already told her not to kill for him anymore.
    Last edited by Lizard Lord; 2017-01-22 at 01:56 PM.