A human should absolutely be allowed to make a snap decision like that. But an A.I. should only be allowed to make that decision if part of the point in building it was to give it free will, which must mean no hard-coded, unbreakable rules regulating its behavior. (I'd also recommend figuring out how to replicate human emotions before you do this, since most humans will only kill if they absolutely have to, rather than because it simply seems like the most logical thing to do. Granted, emotional duress can also drive a human to kill, but that too is a minority problem, and building robots this way should, at the very least, keep them from banding together against humanity. But I digress.) To build it otherwise would be extremely dangerous. It is ONLY at this point that I myself would say "haven't you seen any movies?", since they do offer theoreticals about how and why this could go wrong if allowed. To make good A.I. you have to make sure you know what you are doing, which is what makes it frustrating that we see it go wrong in the same ways over and over again. But I digress yet again.
From what Radcliffe has said, both in private and in front of others, it does not seem like Aida was designed with free will. That must mean either she really did gain some form of sentience from the Darkhold, or Radcliffe, for some reason, programmed her to prioritize what he wants over his physical safety, given that he had already told her not to kill for him anymore.