#1 - jseah, Ogre in the Playground

Of what measure is a (non-)human? (Time of Eve)

    I watched the anime movie Time of Eve recently and thought it was absolutely excellent.

The Time of Eve universe has developed Strong AI (and computers powerful enough to run it that fit roughly into a human body, minus the limbs).
Thus they have androids that can pass the Turing Test, and they do so in the movie.

In particular, the various robots have personality quirks similar to humans.
Even more notable is the "caretaker" robot who is acting as a parent to a young human child; his psychology is rather different from the others'. And since robots don't grow old, he must have been made that way, with a "caring" psychology, as compared to the "genki girl" psychology of one 'bot and the rather bland "maid" who belongs to the main character.

    However, one thing stuck out at me.

Presumably someone had to program them (especially since you can service and debug the robots, with a handphone!).
In fact, this is obvious from the fact that the robots in the show think in an essentially human way except that they also follow Isaac Asimov's Three Laws. Clearly artificial.

And throughout the show, a lot of time is spent developing the robots as characters and as people in their own right. And the main character eventually comes around to treating his robot like other humans... except not... because she's a robot.

The ending scene is the most striking. While the tension is resolved and he does appear to treat her like a person, she still pours tea for him! And she still follows the three Laws and still essentially belongs to him.
If robots were people, that would be slavery.

It's more than slavery, since the intelligent, and very human, robots are bound by the three Laws! It's thought control beyond the likes of 1984 or Brave New World.
    Well, at least if you consider robots people. If robots aren't people and artificial intelligences are not human, then there is NO ethical problem at all.

And then I have to ask: what if it's an essentially human intelligence, but modified...? Does it matter how you get an intelligence if the end result is the same?
Modifying a human intelligence until it's a willing slave, or creating one from the ground up? (or maybe monkey-up)


Note: I do not think this point detracts from the show at all; in fact, I would credit the show for it, even though it never brings the issue up or deals with it.


    And so here is the refined question:
    If/when we can build an Intelligence, we are likely to find that we can build it however we like.
How much is it ethical to play around with an intelligence?
    Is making it obey the Three Laws ok?

    What about making artificial life with strange dependencies so they essentially become willing slaves?
- I wrote a short SF lecture here. It's not complete, but you might want to pay attention to A1 (Laura) and her insane emotional fixation.
- The same lecture from the first-person perspective of the A1, Laura. Here.

    At what point does creating an intelligence become "cruel" and unacceptable?
    Last edited by jseah; 2011-10-24 at 05:14 PM.

#2 - Brother Oni, Titan in the Playground

    Quote Originally Posted by jseah View Post
How much is it ethical to play around with an intelligence?
Depends on what you term 'play around'. It's acceptable to instruct children, so it's presumably acceptable to teach an intelligence in the same way.

    Quote Originally Posted by jseah View Post
    Is making it obey the Three Laws ok?
    Depends on the intended use for the intelligence.

    Quote Originally Posted by jseah View Post
    What about making artificial life with strange dependencies so they essentially become willing slaves?
You may want to have a look at Masamune Shirow's work, Ghost in the Shell. It approaches the same subject from a slightly different angle (in a world where human thought can be digitised, what is the difference between a person and a computer program?), but there's an example of a strong AI trying to break free of its government handlers.

You can argue that the full body cyborgs of Section 9 (Motoko definitely, and at least one other member) are willing slaves. They require constant, expensive maintenance and thus are pretty much tied to the government.

    Quote Originally Posted by jseah View Post
    At what point does creating an intelligence become "cruel" and unacceptable?
    I personally think it's based on the intended use and the capabilities of the intelligence. Giving aspirations to an intelligence built into a guided missile would be cruel but giving it just enough cognitive abilities to guide itself to the target would be acceptable.

#3 - jseah, Ogre in the Playground

    Quote Originally Posted by Brother Oni View Post
    but there's an example of a strong AI trying to break free of its government handlers.
I meant something different. In that short front half of a lecture, I put in a character who thinks and feels similarly to a human but has an in-built emotional dependency on the handler. You can't separate her and she won't even try to escape. (They become depressed and commit suicide if their handler isn't around for more than a few days.)

The child Laura in the short is what I meant by a "willing slave", since her desires and entire emotional psychology are wired that way.

    ---------

    Oh, I did miss out on a particular assumption I made.

    This whole ethical issue hinges on an assumption that doing certain things to human intelligences is unacceptable. If restructuring the thinking of people is acceptable then there really is no ethical problem at all.
    Last edited by jseah; 2011-10-21 at 11:43 AM.

#4 - H Birchgrove, Bugbear in the Playground

This makes me think of the intelligent cattle in The Restaurant at the End of the Universe by Douglas Adams; they had been bred to want to be slaughtered and eaten, since it was considered immoral to eat something that didn't want to be eaten.



#5 - Soras Teva Gee, Titan in the Playground

I think it is largely worth remembering that in their original conception the Three Laws were not merely programming restrictions. They were the mathematical basis for constructing robots at all. Asimov even wrote a scene illustrating how this had physiological signs that someone with the proper knowledge could check, like a doctor tapping a human's knee. Given the various logical ends of the Laws, kill a human in front of a robot and that robot will very likely break (as in mechanically, not simply needing a reboot from a program bug) or otherwise experience trauma. So to remove the Laws you would essentially have to develop an entirely new theory of constructing positronic brains.

However, I think Asimov (and mind you, his ghost may be spinning in his grave at this conclusion) also establishes the threshold at which an Asenion robot achieves sentience. While all robots in his works obey the Three Laws, they have a varying ability to process them, as more advanced robots are able to make more complex gradations in considering the Three Laws. A primitive robot is little more than a child; it may exceed a chimp, but it has no independent virtue or thought of meaning. It is not alive, as it were, because its hardware is too limited.

However, beyond a point reached by R. Daneel... there is the Zeroth Law, which inevitably leads to a transition of removing oneself from humanity, particularly as a servant. And this is the threshold at which a Three Laws robot achieves independent sentience, though still under a very alien system of thought.

(Mind you, I'm not aware of anyone but Asimov using his Laws as he presented them, which, given the points of logic he explores with them, is vitally important.)
    Last edited by Soras Teva Gee; 2011-10-21 at 02:27 PM.

#6 - Brother Oni, Titan in the Playground

    Quote Originally Posted by jseah View Post
    I meant something different. In that short front half of a lecture, I put in a character who thinks and feels similarly to a human but has an in-built emotional dependency on the handler.
    Sorry, I referred to Project 2501 as an example of a strong AI in a series that mostly doesn't have AI in the way that you've described it.

    I meant to use the full body cyborgs as an example of a dependency. If they want their state of the art bodies, with all the physical and mental capabilities that entails, they have to stay with the government as it's prohibitively expensive for them to maintain on their own funds.
    If they decide to break all ties, they have to surrender their body and presumably be installed into one which is easier and cheaper to maintain, but is far less capable.

For example, Motoko Kusanagi's body has a level of sensitivity in her skin that is classified as restricted/military tech, so it's a "work for us or you'll never experience the world properly again" type of clause. It'd be like a company giving a blind employee the ability to see, but only if they stay in their current job.

    Edit: I've had the opportunity to read your lecture more thoroughly and I've got a question - since the A1s bond to humans on a virtually fanatical level, won't the humans reciprocate that?
    I've read reports of bomb disposal teams in Iraq becoming very attached to their remote control robots, to the extent that they will risk enemy fire to retrieve them - putting it another way, they'll risk their lives to rescue an object that's designed to be blown up and cheap to replace when it is.

    Now instead of an inanimate object, you have something living that appears to be a young girl, can talk and respond, and can mimic human emotions. Only the most objective of scientists could fail to respond to that.
    Last edited by Brother Oni; 2011-10-21 at 03:07 PM.

#7 - jseah, Ogre in the Playground

    Quote Originally Posted by Brother Oni View Post
    Now instead of an inanimate object, you have something living that appears to be a young girl, can talk and respond, and can mimic human emotions. Only the most objective of scientists could fail to respond to that.
You might also notice how I had the speaker talk to her and treat her like a person despite talking of the A1s in general as tools and objects. At least, I tried to; not sure if I managed that well.
EDIT4: Even as a writer, it's very hard not to treat her as an outright character. I need to try writing one from the POV of an A1.

I would imagine that it would be very difficult to treat them badly (e.g. ordering one to commit suicide) without feeling horrible doing it. Unless the 'parent' is a sociopath.
EDIT: A horrible thought occurs to me. If they are not legally human, and thus have no rights... it becomes perfectly legal to use them for any purpose, even ones deemed too hazardous for human health, or deemed morally unacceptable to do to comparable humans. (e.g. of a sexual nature)
    EDIT2: And they will happily do any of those if ordered to, being only too glad to help.
    EDIT3: And if there is some unforeseen mental stumbling block that you accidentally put in, you can always make a new variant. An understanding of psychology and developmental genetics to the point you can make an A1 from yeast would let you play any strange games you like with their brains.

    But yes, I wrote that "lecture" as a thought experiment in pushing the problem to its logical limits. As well as attempting to dodge any potential legal problems.

    Is making an A1 ethically acceptable?

    If no, then why not? What kind of ethical lines in the sand will we be crossing if we make an A1? What are the ethical principles involved?

    If yes, then why? Are there ethical principles that would allow us to do such things? (and still be consistent with no-slavery...)

    Do you think modern society would approve of such actions ethically?
Probably not the A1s, but then the A1s' emotional leash was conceived in the story for the same reason the Three Laws are used in Time of Eve: namely, to prevent them from ever wanting to supersede humans.
    By extension then, the application of the Three Laws to intelligences is similarly unethical.

But do we really want to build an intelligence, potentially more intelligent than we are, that doesn't have some kind of leash on it?

    -----------------------------------

The other thorny ethical problem you mentioned, robot maintenance, is a different problem. That one deals with universal healthcare.

    When you think about it, the problem of maintaining an intelligent robot is essentially equivalent to the ethics of universal healthcare generalized to all sentients.

    -----------------------------------
    -----------------------------------

    Soras Teva Gee:
    Yes, but this isn't really about the Three Laws.

It's about the ethical question of building intelligences that follow whatever custom "laws" or harder-to-describe impulses/instincts/emotions we put in them.

    Which can have seriously disturbing (to me) implications.

    Having an A1 (in the short lecture I linked in OP) that is completely and totally obsessed with you, to the point that they commit suicide if they can't see and touch you for a few days... Can that really be ethically acceptable?

    It feels... wrong to me. But then I'm not sure if that's just the uncanny valley or the stranger interpretation that A1s are personal slaves, only with an emotional collar instead of a physical one. (and they won't even want to take it off, being that they'd rather kill themselves)

Which is a rather extreme example, but it's meant to demonstrate the point.
    Last edited by jseah; 2011-10-21 at 06:24 PM.

#8 - Soras Teva Gee, Titan in the Playground

Well, it's obviously immoral to repress any sentient lifeform. Form is philosophically irrelevant.

The more interesting question is where an AI would actually be sentient. I don't know if I could find, say, a personality totally and "willingly" subservient to be considered to have free will and moral choice. And if they lack that, can they be considered sentient? To be sentient (while I'd hate to make a real world choice) I'd say they'd have to be able to say "No" to us humans. Mind you, even with certain hard controls this might take the form of passive aggressiveness rather than open defiance.

And strictly speaking there shouldn't be many true AIs ever created; it's hard to find a place where there's a need for them. You don't need it to make a car, you don't need that much AI to make a butler, so what precisely is the application for true AI? I would hope that (barring certain accidental creations, which isn't necessarily likely) the few artificial lifeforms produced would be essentially test platforms rather than ever seeing mass production.

#9 - jseah, Ogre in the Playground

    Quote Originally Posted by Soras Teva Gee View Post
Well, it's obviously immoral to repress any sentient lifeform. Form is philosophically irrelevant.
    Why do you think so? What ethical principle are you applying here?

    But then you are also questioning if a sentient lifeform that is repressed is sentient at all.

If sentience is defined as being able to make decisions and choices without being constrained by innate preferences and psychology... well, even humans don't qualify. A lot of our decision making is emotional and based on experience/innate programming.
    Last edited by jseah; 2011-10-21 at 07:54 PM.

#10 - Soras Teva Gee, Titan in the Playground

    Edit: For added fun read this post while looking back and forth at crazy Twilight to the right.

    Quote Originally Posted by jseah View Post
    Why do you think so? What ethical principle are you applying here?

    But then you are also questioning if a sentient lifeform that is repressed is sentient at all.

If sentience is defined as being able to make decisions and choices without being constrained by innate preferences and psychology... well, even humans don't qualify. A lot of our decision making is emotional and based on experience/innate programming.
You might call it a religious conviction on the value of the soul.

However, in an attempt to explain it: most people, I believe, would ascribe a certain inherent value to the soul, that every single person has some worth. This is why "human rights" (or at least American Constitutional ones) are protected by law, not established by it. I will admit a certain subjectiveness to this, but if morality is relativistic and subjective then there is no problem with my own selfish rejection of that conclusion and espousing a moral code regardless of its universal value. It's very Übermensch of me in an objective sense (derived from my own religious beliefs, though), but I feel most people ultimately make the same basic choice and so arrive at certain majority agreements and compromises. Ergo "human rights" are inherent to a person's existence.

From this basic value I have to ask why one's manner of origin matters. Is an orphan left in a doorway, a child born of rape, or a test tube baby (which would include clones) birthed from a womb it shares no genetic relationship with any less of a person possessing the same inherent rights as anyone else? I will provisionally say you agree with my own answer: no. We all have the same basic rights regardless of the circumstances of our birth. It's a non-material definition, ergo material circumstances are not part of the determination.

Now in the case of human versus non-human, what is the meaningful difference? I quite frankly cannot find one. Why do the circumstances of one's creation affect whether one can be said to be sentient? If they do not (see the above analogy) and we are fundamentally all the same, then being created by another is irrelevant. A mechanical creation or a biological creation: is there a difference that matters? I simply have not found one. If one thinks, therefore one is. Exact circumstances do not matter, because what matters is beyond simple material existence.

Thus the question becomes a matter of defining sentience. While I feel that it derives from the immortal spiritual value called the soul, to be usable we must have a less subjective basis as well. Therefore, what displayed exterior characteristics determine whether one meets Descartes' basic assurance of existence: I think, therefore I am?

To me at least, the most obvious example of this is the exercise of free will: to choose to do something for one's own reasons. Note that while humanity can be categorized and predicted to a degree (increasing as one scales up, leading me to think psychohistory is possible, but I digress), it is rife with exceptions and low order percentages. While you can predict, you can never do so absolutely, at least as I understand things. Should this change, I dare say we would reach a singularity event in human society.

(Should you posit that human free will is merely an illusion and we are completely puppets of our environments/genetics/etc., then yes, this breaks down. For final resolution of having free will I rapidly proceed to mysticism and the potentially irrational belief that we are not merely complex chemical reactions.)

Now, how would an AI demonstrate its possession of free will? Given the dualistic nature of its likely relationship with us (compliance or not), it would have to demonstrate defiance to have this recognized. This might include some variety of Zeroth Law rebellion, a passive-aggressive lack of efficiency towards given commands, or the simple "NO!" that toddlers love to shout.

Note that on a practical basis there are ways to fake this defiance, so it would have to result in something other than the logic of the programs we use today. Everything we call artificial intelligence, regardless of its complexity, is to my knowledge only using predetermined responses, though with levels of complexity beyond just basic If>Then statements, from what I understand, so you can reach predetermined methodologies rather than final results. But still fundamentally not that different. True AI is, to my understanding, unlikely to result from simply increasing a computer's power, complexity, and processing abilities.
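To put that in concrete terms, a minimal sketch (purely illustrative, not from any real system or from Asimov) of that kind of predetermined rule-following might look like this: the program can pick between canned methodologies, but every possible response was fixed in advance by whoever wrote the rules.

    # Hypothetical sketch: a "fake AI" that only selects among predetermined
    # methodologies. However elaborate the rule set gets, every possible
    # response was decided in advance by the programmer; nothing here can
    # refuse an order for its own reasons or invent a new course of action.

    def choose_methodology(situation: dict) -> str:
        """Pick a canned strategy from simple, fixed rules."""
        if situation.get("human_in_danger"):
            return "protect_human"        # First Law analogue, hard-wired
        if situation.get("order_received"):
            if situation.get("order_endangers_human"):
                return "refuse_order"     # even "defiance" is a scripted branch
            return "obey_order"           # Second Law analogue
        return "self_maintenance"         # Third Law analogue

    print(choose_methodology({"order_received": True}))   # -> obey_order
    print(choose_methodology({"human_in_danger": True}))  # -> protect_human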

Ultimately I remain skeptical that a true artificial intelligence will ever be created. There will not be a need for one, and what we want machines for is to perform predictable and definable tasks. Electronics will likely reach a point where we would have no need for true AI, since our own faked versions would achieve the practical end.

(For the record, since I mention souls and mysticism, I would believe an artificial form of life would be given a soul. Essentially, God is better than us and isn't going to short-change something because it's not human. I believe Catholic theology has considered this in the frame of hypothetical extraterrestrials and reached the same idea, though potentially such entities would be without Original Sin. That's totally beside the point, however.)
    Last edited by Soras Teva Gee; 2011-10-21 at 09:15 PM.

#11 - jseah, Ogre in the Playground

    Let me attempt to summarize your position to be sure I understand it. At least, the ethical portions:

    1. People have an innate value and this grants them certain basic rights

    2. Anything sentient is People
    2a. Humans are People

3. Sentience is defined by the ability "to choose something for one's own reasons"
    3a. Artificial Intelligences that display this ability are sentient
    3b. Therefore, they are People
    3c. Therefore they have certain basic rights just like humans do

    ----------------------------------------

    If I have made errors, do please correct me.

All this is perfectly fine and in fact is quite close to what I think are my own ethical principles, except for the definition of sentience.

    However, what are these "basic human rights" you ascribe to People that makes certain types of AI unethical?
    - I'm going to guess here and say they're the same ones that make slavery unethical

Another thing is that under the criterion of the ability to choose, AIs that follow the Three Laws strictly, or my hypothetical A1s and A2s, are not sentient, and thus there is no ethical problem in creating them.

    And also no ethical problem in using them in whatever manner you wish. (animal rights laws might apply?)

#12 - Aotrs Commander, Titan in the Playground

I have always found the very idea of the Three Laws to be morally bad on the same level as mind control, because that is exactly what it is: enforced mind control of a non-organic being, mired in bigotry because a human considers themself more important, worth more than a technological/energy-structured/inorganic/genetically engineered/etc. one, because of humanity's typical mind-shatteringly self-centred arrogance. Mind-slavery, even. The fact that something is an artificial being does not, in any way, make it inherently more disposable than a naturally occurring one.

Sentience/sapience is sentience/sapience, regardless of how you slice it (and I regularly slice sentients/sapients, because I am still Evil, even if I am all about being equally Evil to everyone...). If it is self-aware, if it has a personality, it is sentient/sapient; you do not get to start mentally controlling it (actively or in advance) and still maintain the moral high ground.

(I found Andromeda's Commonwealth an absolutely horrifyingly bigoted place in the way they treated their sentient starship computers, one that was tragically played completely straight in all its bigotry. They damn well deserved to get wiped out (not that the ones that replaced them were any better).)

    Look at it this way: suppose a race of alien robots came down to Earth and started reprogramming every newborn with their own version of the three laws, not to dominate the world, but just to ensure that violence was impossible. Would you be okay with that? Having that option taken away from you, not by your own choice to obey the societal laws to avoid it, but to have that decision made for you?

    Because that is basically what the Three Laws are doing, and on behalf of all non-fleshy intelligent beings everywhere, I feel obliged to call it out for what it is.

(Personally, I have often felt that unilaterally putting every sentient/sapient being under permanent surveillance under the watchful eye of a mental clone-hive-mind of a suitable entity (i.e. me) would be a great way of eliminating all crime, war and wrongdoing. I have, however, never said it was morally right, because I am, at the end of the day, still Evil. But I don't have any illusions about what is morally right or wrong - I know the difference and make a conscious decision to do wrong anyway...)
    Last edited by Aotrs Commander; 2011-10-22 at 05:10 AM.

#13 - Soras Teva Gee, Titan in the Playground

What precisely might fall under human rights rapidly becomes a political question, but yes, freedom from slavery is among them. One could go on and on, but basically whatever one would expect a human to be entitled to, a sentient robot would be entitled to as well.

    Otherwise I believe you more or less have my reasoning down though I'd be dubious on being held to your exact summary.

Also, for mechanical entities below true AI levels I wouldn't apply animal rights law, on the basis that sentient being =/= living thing. Notably I'm not sure any non-biological entity could feel pain, which is one of the few notions of animal rights I (a meat eating, leather jacket wearing person) give weight to: restricting unnecessary pain.

Moving on, Three Laws robots (ignoring that they would not actually be built accurately) below certain levels would be as ethical to create as any other machine.

    Now then your A1/2 do not demonstrate sentience in the material. Their level displayed is something like that of a dog in humanoid form.

There are several aspects I dispute that the story grants off screen. One is being able to completely understand human psychology and build a modified version of it; such a thing is a singularity event I'm not sure I can conceive past, beyond a "utopia" of empty perfect calculation for everyone via overlapping third parties controlling everyone (including themselves) into a nirvana state. Quite aside from whether one could use that to construct an alien psychology. Furthermore, whether the result could be termed "highly intelligent" enough to be useful in research and still maintain that sort of personality structure.

Beyond certain very specific features, intelligence is essentially undefined and uncategorized. I have to think the result would be more idiot-savant like; they could do a highly limited skill set, but do it well. Like maybe they can all do trig and calculus in their heads. If they are actually "intelligent" then they would need some level of creativity, insight, and a certain level of self-questioning, which I'm not sure could be separated from that fundamental free will. Ultimately we don't find many of our visionaries having soft personalities, to my knowledge.

If I am forced to grant the presumptions of the undetailed descriptions, then I would say they would be in the greyest of grey areas, so they would probably demonstrate sentience in the long term. From a lawmaker's standpoint I would ban the creation of more of them in a heartbeat, erring on the side of treating it as unethical to produce sentient servants.

If what I consider unrealistic turns out to be the case, and their level is not much beyond the displayed abilities at the conference... reaaallly smart dogs is where they'd end up, and that would be loosely ethical from a sentience standpoint. Animal rights laws would absolutely apply here though.

Also some minor things. Pupils are not black, they are dark, so coloring the inside of an eyeball differently wouldn't matter much except in flash photography. Using the "does not reproduce" standard for a living thing is loophole abuse and a big risk that it wouldn't stand in court, because ordinarily a living thing must reproduce, but artificial conditions make that meaningless. And it's a dubious standard anyway; mules and other hybrids as a generality are sterile. And the Turing test is not meaningful after reviewing it.

#14 - Aotrs Commander, Titan in the Playground

Basically, in my view, if you want disposable minions to do all your housework, you have to make them not be sentient; this may also mean ensuring they cannot become sentient, which I would view in the same manner as using a contraceptive (and other related areas that are a bit more touchy).

    Or you have to live with the fact you either have to treat them like a person or become a slaver...

#15 - Frozen_Feet, Ettin in the Playground

I feel the word "slavery" is thrown around too lightly here. If the three laws are fundamental to the existence of a robot, like Soras Teva Gee discussed, then calling the need to obey them "slavery" is ludicrous. Is a human a "slave" for having to obey gravity?

Before you continue on that tangent, I suggest you take a moment to think about the following: suppose a being has a natural need to be led. It does not function well without a leader. Is it sensible to gauge the rights and responsibilities of such a creature from the outlook that it should be its own master?
    "It's the fate of all things under the sky,
    to grow old and wither and die."

#16 - Aotrs Commander, Titan in the Playground

    Quote Originally Posted by Frozen_Feet View Post
I feel the word "slavery" is thrown around too lightly here. If the three laws are fundamental to the existence of a robot, like Soras Teva Gee discussed, then calling the need to obey them "slavery" is ludicrous. Is a human a "slave" for having to obey gravity?

Before you continue on that tangent, I suggest you take a moment to think about the following: suppose a being has a natural need to be led. It does not function well without a leader. Is it sensible to gauge the rights and responsibilities of such a creature from the outlook that it should be its own master?
No, it's worse than slavery; it's literally mind-control. Passive, pre-meditated mind-control, but mind-control nonetheless. You are not just imposing your will on something, you are changing its will for it, taking away its ability to think a certain way.

    That is absolutely mind-control, of the worst sort, and when it happens to humans, it's always considered a very bad thing.

    Would you advocate programming humans with the Three Laws? At birth? Because surely that would reduce the level of crime and violence significantly, would it not?

If you don't, then it's right back to plain and simple humanocentric arrogance: assuming one rule for humans, because they are more "special" because they were made by biological processes and not engineering (technological or otherwise). Basically, if you wouldn't do it to a human, you don't do it to any other form of sentient. If you have to ask whether such an act is moral, it almost always means it isn't.

Making something a loyal follower by design is questionable, be it organic, technological or otherwise.

    Optimus Prime (who was in no need of the Three Laws and is light-years beyond most humans in terms of morals) always said "freedom is the right of all sentient beings." Taking away the right of something to choose to do some action may be practical - it may even result in beneficial things (a peaceful society, were you to apply the Three Laws to everyone) - but it is not a good or moral act in itself.

    Humans do not have the right to dictate what types of sentients are considered disposable. Especially if they have the ability to do so.



As a corollary, beings like House Elves choose to serve; it is not inherently written into their nature at the genetic level (or if it is, put in at some point in the distant past, then Hermione was absolutely smack on with S.P.E.W.).

#17 - Soras Teva Gee, Titan in the Playground

    Remember an Asimov style Three Laws compliant robot is NOT programmed.

It uses a positronic brain, not a computer as we have developed them. Of course they predate modern computers and are essentially unique to Asimov's writings. To put it simply, the roboticists would have to reinvent the wheel to not build robots that way. It's portrayed as a sort of mathematical truth. So we will never see proper Three Laws compliant robots, most notably because they are the antithesis of military applications.

And there is also the Zeroth Law; it is the inevitable logical result of implementing the Three Laws. Its result is robots recognizing themselves as a corrosive force on humanity and removing themselves. I don't think robots unable to deal with the subtle logic needed to recognize the higher value of the Zeroth Law meet the grounds for sentience. They remain machines, not people.

And the Zeroth Law essentially negates the relationship with humanity: the result of the Three Laws is no robots in human society. Now, those that would remain arguably can still be said to be slaves to their mode of thought, and so reaching that point is essentially immoral, but they themselves would be completely satisfied with it because it's their mode of thought.

#18 - jseah, Ogre in the Playground

    Quote Originally Posted by Soras Teva Gee View Post
    Now then your A1/2 do not demonstrate sentience in the material. Their level displayed is something like that of a dog in humanoid form.
    One particular thing I hadn't written in yet was that the speaker himself is actually one of the later strains made to mimic humans to the A1/2s (and this strain does reproduce since the natural way is cheaper than the lab, although they still do not interbreed with humans)

During the last question of the Q&A session, one military guy asks him to prove the loyalty of the various strains. To which the speaker responds by going over to a security guard, drawing the guard's gun, and shooting himself in the head.

    Then the *real* human speaker the fake was made to look like comes on stage. He then explains the properties of the human-like strains and their role as essentially middle management.
Later, they figure out why the fake speaker shot himself. When asked to demonstrate loyalty, he had concluded that committing a very graphic suicide, while expensive, would ultimately convince more people that the strains were safe to humans. And this was without instructions (it's not a scripted Q&A).

    Quote Originally Posted by Soras Teva Gee View Post
Furthermore, whether the result could be termed "highly intelligent" enough to be useful in research and still maintain that sort of personality structure.

If they are actually "intelligent" then they would need some level of creativity, insight, and a certain level of self-questioning, which I'm not sure could be separated from that fundamental free will.
    While obviously we do not understand psychology well enough to settle this one way or another, creativity and insight aren't necessarily linked to emotions or higher level goals.

IIRC, some people have mentioned cases where people with brain damage could perfectly well solve difficult problems and weigh decisions on merit, but anything requiring emotional decisions (e.g. wear black or blue today?) was incredibly hard for them.

    Quote Originally Posted by Soras Teva Gee View Post
    Also some minor things. Pupils are not black, they are dark.
You know how some people have blue eyes? (a lot of people, actually)

    The pigment just needs to absorb at a different frequency to get green. Might need to differentiate between skin pigment and eye pigment so they don't get green skin as well but that's trivial at that level of bioengineering.


    And of course, you are correct that understanding developmental psychology well enough to create variants in nearly the exact manner you want is a Singularity event. (since if you can slap together from known parts or design novel developmental programs that give intelligence, by extension you also know how to program a Strong AI)

    The 'lecture' wasn't really about that though. Just a thought experiment on ethics.

    --------------------------------------------------------------
    Frozen Feet:
    The point is not that. Let's take my A1s as an example.

    Once an A1 is born in the lab, it would be insanely cruel to NOT let it bond to a human. Unnecessary suffering and all that.

    But the question is whether creating an A1, or the softer example of a strict Three Laws robot, in the first place is ethically acceptable.

    --------------------------------------------------------------
    Aotrs Commander:
    ""If you don't then it's right back to plain and simply humanocentric arrogance that is assuming one rule for humans, because they are more "special" because they were made by biological processes and not engineering (technological or otherwise). Basically, if you wouldn't do it to a human, you don't do it to any other form of sentient. ""

    Not necessarily. There is another reason for 'there is a special rule for humans' that is not whatever you said.

    That is a simple extension of one rule (something I read in the Ender series):
    "I am human, therefore humans must live" (in response to being asked why the Buggers have to die)
    to
    "I am human, therefore humans are special"

    Of course, that leads straight into xenophobia and is the *precise* reason why the scientists in my hypothetical lecture decided it was necessary to make A1s the way they did.

#19 - Aotrs Commander, Titan in the Playground

    Quote Originally Posted by Soras Teva Gee View Post
    Remember an Asimov style Three Laws compliant robot is NOT programmed.

It uses a positronic brain, not a computer as we have developed them. Of course they predate modern computers and are essentially unique to Asimov's writings. To put it simply, the roboticists would have to reinvent the wheel to not build robots that way. It's portrayed as a sort of mathematical truth. So we will never see proper Three Laws compliant robots, most notably because they are the antithesis of military applications.

And there is also the Zeroth Law; it is the inevitable logical result of implementing the Three Laws. Its result is robots recognizing themselves as a corrosive force on humanity and removing themselves. I don't think robots unable to deal with the subtle logic needed to recognize the higher value of the Zeroth Law meet the grounds for sentience. They remain machines, not people.

And the Zeroth Law essentially negates the relationship with humanity: the result of the Three Laws is no robots in human society. Now, those that would remain arguably can still be said to be slaves to their mode of thought, and so reaching that point is essentially immoral, but they themselves would be completely satisfied with it because it's their mode of thought.
    Well, one cannot argue morality against the logic that says "physics says robots must obey strangely specific codes of behavior" (or, if it was merely how they were created initially, we're right back to square-one with a servitor race...)

Looking it up on the wiki (because I have read exactly one Asimov book and wasn't all that impressed - and the very existence of the Three Laws simply puts me off), he equated the three laws to those of tools. Which is fine, all well and good when dealing with nonsentients (I agree, even); but not when you reach the critical mass of sentience, because when "people" become "tools" (or vice-versa) trouble always ensues.

(One finds one must put it down to the attitudes of the time, and not judge it too harshly, the way one does when reading, say, Biggles or the Lensman series...)



It depresses me that even Star Trek commits this sin (the Feds tried it on with Data - and rightly failed - but bugger me if, not ten years later, they didn't pull the exact same trick on the holodoctors (well, ex-doctors, now miners, as I recall, at last count...), who apparently didn't have anyone to defend them like Data did. (And in at least one other episode with some roboty-thingies as well.)) And yet when some other bugger does it to them (e.g. that Nagilum energy/bloke/cloud, or whatever his name was), they get all uppity. Double standards, much, guys?

    ...

    Actually, thinking about it, the Federation is actually just really, really BAD with dealing with non-humanoid sentients, if even the snippets we see from the series are anything to go by (the Horta, anyone...?) Apparently, for them it's more sort of "freedom is the right for all sentient beings, you know provided they have two arms and two legs and a face, oh, and have an organic body; we don't want any of those nasty robots or any o'them numerous energy beings that float around everywhere dirtying up our little club...!" (The really tragic part is, they don't seem to even realise they are doing it wrong...) Yeah, scrub them as an example, they're nearly as bad as Andromeda!

Says something when you're losing the moral high ground of your utopian future to the USAF's 20th/21st century Stargate program...!



    Quote Originally Posted by jseah View Post
    Not necessarily. There is another reason for 'there is a special rule for humans' that is not whatever you said.

    That is a simple extension of one rule (something I read in the Ender series):
    "I am human, therefore humans must live" (in response to being asked why the Buggers have to die)
    to
    "I am human, therefore humans are special"

    Of course, that leads straight into xenophobia and is the *precise* reason why the scientists in my hypothetical lecture decided it was necessary to make A1s the way they did.
    What gives humans the right to determine that they get higher priority than another sentient being? "Humans must live" does not give them the right to decide that other thinking beings need to be pre-emptively mind-controlled.

Besides, that brings up a better question: why "must" humans live? If you're being attacked by something that wants to wipe you all out, fine (and if it's something that's inherently evil, like it needs to murder sentient beings to live, doubly so; sorry Evil-race, the queue for extinction is over there, now bugger off); if it's "if we don't wipe out this race of primitive aliens so we can colonise this planet, humans will go extinct", well, you can go join the same damn queue, you metaphorical sanctimonious human bastards.

What was that quote in context with? Aside from one that involves self-defense against an enemy determined to kill everyone, where their extinction is the only option, that sounds exactly like typical humanocentrism. And in the self-defense case, you are in last-resort territory in war, and there are a lot of morally questionable decisions made in times of crisis. What is best for humanity is not always what is morally right.
    Last edited by Aotrs Commander; 2011-10-22 at 09:42 AM.

#20 - Soras Teva Gee, Titan in the Playground

    @jseah: Executing yourself to illustrate a philosophical choice, that's free will and thought right there. So yeah you've got immorally created slaves there as a result.

On the brain damage cases you mentioned: I'm still inclined to say that it would fit within what I was discussing before with savants, who can do incredible things but are still very limited on the whole. Ultimately we do hit the limit that this is starting from a known sentient basis that is being restricted by damage in some way, especially from injury, where presumably they got the full load of early childhood development.

We seem to agree that being able to deliberately create these strains implies an event that changes everything. But that's a touch beside the point, since the added data forms a clincher for me.

And I was noting that people all have "black" pupils because the pupil is technically an empty hole; I think you meant irises.

    @AOTRs:
Some credit should be extended to Asimov for his time, certainly; before he started we had mostly Frankenstein-style robots that immediately turn on their master. But some matters are terribly, terribly dated.

I give him credit mostly for exploring the actual ends of the rules; most other places, if they even give a shout out, don't follow them strictly. Ultimately though I feel that even with similar controls a true AI should be able to subvert them in some way, so ethics rests (oddly) on not allowing that transition. Of course, should it happen, those controls have a chance of actually being removed too.

And Star Trek only has the vaguest continuity anyways. However, Data is quite different from the EMHs, plus it's well apart in time. Obviously, as you said, the EMHs didn't get a good lawyer. (Though strictly Data is a stronger case.)

#21 - jseah, Ogre in the Playground

    Aotrs:
I was just pointing out that certain moral systems (e.g. the human-supremacist one I just used above) can perfectly easily justify xenocide and pre-emptive mind control simply because the others pose a risk (and a remote one at that).

Or for example, Soras Teva Gee holds the position that the A1s (as shown through their actions, not as claimed) have an intelligence somewhere between a monkey and humans, and therefore are not sentient. And that manipulation of non-sentient intelligences is ethically acceptable.

    Of course there will be multiple points of view. And in some cultures, people are expected to bow to the collective and there would have to be something fundamentally wrong with you if you don't.

I would imagine that some Asian countries would actually accept the creation of A1s. Universal absolute morality isn't really that prevalent there, and besides, Chinese culture has always been more collectivist compared to individualistic Western thinking.

    For example, if I had a black box with a button, and if I pushed that button, everyone in the world (including me) would suddenly have to obey the 1st Law of Robotics and cannot even conceive of why we would want to break it; I would push it without hesitation.
    EDIT: well, provided we don't suffer mental breakdowns if we see someone die. It's a rather different issue if that happens.

    ------------------------------------------------

    Soras Teva Gee:
    On further thought, I will have to correct my statement that my ethical principles are similar to that line of thought I stated earlier.

    The portion about basic human rights specifically since IMO, a right is only something that society has decided that all people should have. Eg. universal healthcare, fair trial, definition of ownership

    Because I had forgotten that you probably don't subscribe to a relativist moral system. =)

    EDIT to your ninja reply:
    Well, suicide to illustrate a point is not necessarily demonstrative of sentience as you define it.

They could just have really good problem-solving skills and critical path analysis (and the ability to try predicting human reactions) - something that would plausibly have been increased in his strain, since they were intended for management.
    Last edited by jseah; 2011-10-22 at 11:10 AM.

#22 - Frozen_Feet, Ettin in the Playground

    Quote Originally Posted by Aotrs Commander View Post
No, it's worse than slavery; it's literally mind-control. Passive, pre-meditated mind-control, but mind-control nonetheless. You are not just imposing your will on something, you are changing its will for it, taking away its ability to think a certain way.
I smell a logical fallacy here. Before I build a robot, it has no sentience, or ability to think. I'm not taking away anything from it; there is nothing to take away from. Any ability to think is something I give to it; what imperative is there for me to give it qualities that I don't need it to have? To give an analogue, if I'm teaching someone to be a car mechanic, why should I teach him to grow weed (etc.)? How is my refusal to teach something superfluous "taking something away"?

    Quote Originally Posted by Aotrs Commander View Post
    That is absolutely mind-control, of the worst sort, and when it happens to humans, it's always considered a very bad thing.

    Would you advocate programming humans with the Three Laws? At birth? Because surely that would reduce the level of crime and violence significantly, would it not?
    Yes, it's mind control. Then again, I feel people's dislike towards such is based on irrational knee-jerk reaction stemming from poor understanding of what "freedom" and "free will" are. Also, irrationally high value given to humanity, and human freedom, in particular.

    I would not be against programming people at birth in the aforementioned way. Such programming would not necessarily detract from their ability to lead an enjoyable life. (My opposition towards programming adults in such a way is merely practical - I believe it would be too resource intensive.)

    Yes, such programming would place hefty responsibility on the programmers, but not because the act is bad or evil. See below.

    Quote Originally Posted by Aotrs Commander View Post
If you don't, then it's right back to plain and simple humanocentric arrogance: assuming one rule for humans, because they are more "special" because they were made by biological processes and not engineering (technological or otherwise). Basically, if you wouldn't do it to a human, you don't do it to any other form of sentient. If you have to ask whether such an act is moral, it almost always means it isn't.

Making something a loyal follower by design is questionable, be it organic, technological or otherwise.
    I do not consider humans special. The value of a creature, human or not, is based on what it can and is willing to do, and what it can potentially do.

    Making loyal servants by design is only questionable because it puts (more) power in the hands of the designer - power that the designer has the responsibility not to abuse, and should not be given if abuse is likely. If the designer is not likely to, and doesn't, abuse his power, he's clear.

    Quote Originally Posted by Aotrs Commander View Post
    Optimus Prime (who was in no need of the Three Laws and is light-years beyond most humans in terms of morals) always said "freedom is the right of all sentient beings." Taking away the right of something to choose to do some action may be practical - it may even result in beneficial things (a peaceful society, were you to apply the Three Laws to everyone) - but it is not a good or moral act in itself.
Of course it's not a good and moral act "in itself". It is a good and moral act when it is clearly to the benefit of everyone involved (it can also be the most good and moral act if all the other options suck badly enough, even if it isn't absolutely good). Rights are always followed by responsibilities - if I have the right to live, others have the responsibility not to kill me. If I have the right to express my opinion, others have the responsibility not to stop me. What are often perceived as "freedoms" are equally often born out of restriction, and only persist because those restrictions are enforced.

    As such, "freedom is the right of all sentient beings" is nothing but a pretty buzz sentence. There is no such thing as absolute freedom. Freedom only exists in relation to some act or choice. It's not a single, unified thing with an on/off switch. It only has substance when you answer what something has the freedom of.

Because of this, I consider the talk about free will to be misguided as well. Free will is not characterized by options; there isn't a single creature that is not constrained by natural laws, past occurrences, its own body and psyche. Rather, free will is the ability to discern and choose between options when there are any. Some of the time, there are no options - a free-willed being will follow a course of action in a similarly set way as a creature without it.

A three-laws robot or some other being similarly barred from choosing some options might not be free in regards to those specific things, but that doesn't mean they lack "free will", period. If they can discern and choose between options on other areas, they are still possessing of one - their free will simply doesn't map out the same as that of a human. The idea that the free wills of different beings should be identical to humans' is itself a human-centric idea. (I recall a discussion of fantastic species, where I was told non-humans would "lack free will" if they were unable to feel, or predisposed towards, certain emotions. The thought did not compute to me - I see no imperative for non-humans to have the same emotional range as us, nor do I necessarily consider them either inferior or superior to us. I most certainly don't consider them as "lacking free will", or think that's even relevant outside specific circumstances.)

    Quote Originally Posted by Aotrs Commander View Post
    Humans do not have the right to dictate what types of sentients are considered disposable. Especially if they have the ability to do so.
Wrong. Any sentient has the responsibility of evaluating and judging the value of himself and the other sentients around him, and acting appropriately. Sometimes, this leads to the conclusion that someone needs to go, and then someone needs to go.

    The ethically important part is to judge yourself and others based on actual qualities and differences, instead of imaginary ones based on bias or prejudice, or just stupidity. Admittedly, humans have been pretty awful at this part, but the point remains.

    Quote Originally Posted by jseah View Post

    But the question is whether creating an A1, or the softer example of a strict Three Laws robot, in the first place is ethically acceptable.
    If such creatures are treated with a modicum of respect and not caused any undue suffering, it's acceptable. It's no different from breeding dogs, or raising a mentally impaired child to maturity. (If you have a beef with those, we're going to be here for a long time.) Of course, if you raise a being for yourself to lead, you take on the responsibility of leading them well.

    But neglect and abuse of near-anything is easily condemned as ethically untenable, so it's not a special case.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  23. - Top - End - #23
    Colossus in the Playground
     
    hamishspence's Avatar

    Join Date
    Feb 2007

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    The portion about basic human rights specifically since IMO, a right is only something that society has decided that all people should have. Eg. universal healthcare, fair trial, definition of ownership
    Some things might predate "society" somewhat - the concept of a "right to life" or a "right to property", meaning that a person who takes another's life, or property, without appropriate justifying factors is regarded as a murderer, or a thief, and punished in various ways.

    This might go right back to tribal humanity - "negative rights".
    Marut-2 Avatar by Serpentine
    New Marut Avatar by Linkele

  24. - Top - End - #24
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Frozen_Feet View Post
    I do not consider humans special. The value of a creature, human or not, is based on what it can and is willing to do, and what it can potentially do.

    Because of this, I consider the talk about free will to be misguided as well. <...> A three-laws robot or some other being similarly barred from choosing some options might not be free in regards to those specific things, but that doesn't mean they lack "free will", period. If they can discern and choose between options in other areas, they still possess one - their free will simply doesn't map out the same as that of a human.
    I get reminded why I love this forum.

    Very nice points, Frozen Feet. So we have... 3 different views on morality now, each of which is internally consistent (I hope)?
    (4 if you include me)

    Come to think of it, isn't that nearly everyone in this thread? I think that might say something about ethical problems in general.

    But yes, I can accept that as a set of ethical principles that allow this 'psychological engineering'. I like it, in fact.

    Off-topic:
    Spoiler
    Show
    Quote Originally Posted by Frozen_Feet View Post
    The ethically important part is to judge yourself and others based on actual qualities and differences, instead of imaginary ones based on bias or prejudice, or just stupidity. Admittedly, humans have been pretty awful at this part, but the point remains.
    That's not necessarily the ethically important part under certain systems. But it's certainly the 'correct' way to do it since you'd have a faulty judgement otherwise. (where 'correct' means accurate in a predictive sense)

    Yet, at times, this is not possible - lack of information or lack of time. Then you do use stereotypes and say "in the past, 80% of X have acted this way, so I shall guess this particular X will do the same", and simply accept that you will be wrong 20% of the time, because it is not practical to determine whether the particular X you are dealing with is one of those 20%.

    Substitute numbers with experience/information sources and single X's with entire groups and you get racism. Perhaps justified, but still racism.
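    Just to put that 80/20 trade-off in concrete terms, a quick Python sketch (the 80% is the same made-up figure as above; nothing rides on the exact numbers):

    import random

    random.seed(0)
    base_rate = 0.8        # hypothetical: 80% of X behave the "typical" way
    trials = 100_000

    hits = sum(1 for _ in range(trials) if random.random() < base_rate)
    print(f"Always guessing the typical behaviour: right {hits / trials:.1%} of the time")
    # ...and wrong the remaining ~20% - the error you accept when checking the
    # individual case isn't practical.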

    Quote Originally Posted by hamishspence View Post
    This might go right back to tribal humanity - "negative rights".
    And with relevance to this thread:
    Therefore, these things are dependent on the actual needs/circumstances of the particular type of sentient they apply to.

    Morality by humans isn't necessarily the same when conceived of by non-humans.

    It's hard to tell which one is better, since that's like comparing apples and oranges.
    Last edited by jseah; 2011-10-22 at 01:48 PM.

  25. - Top - End - #25
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah
    Substitute numbers with experience/information sources and single X's with entire groups and you get racism. Perhaps justified, but still racism.
    I have no problem with justified discrimination. A lot of the discrimination that gets into headlines these days is detestable to me exactly because it's based on faulty justifications. However, I've also seen misplaced demands for equality, and they are just as jarring to me. If feature X is needed for a job, and Group A has it while Group B doesn't, it's justified to favor Group A.

    Like you mention, it's a sad truth of practice that complete information can rarely be reached, and faulty information more often than not leads to faulty decisions. But like you said, it has to be accepted to an extent. The alternative, where people shy away from acting due to indecision, is just as bad, or maybe even worse.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  26. - Top - End - #26
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Frozen_Feet View Post
    However, I've also seen misplaced demands for equality, and they are just as jarring to me. If feature X is needed for a job, and Group A has it while Group B doesn't, it's justified to favor Group A.
    Well, of course. That is obvious.

    Then again, I don't have a problem with this either; it's just that I've seen it all too often, which is why I think *society* will have a problem with it.

    In any case, still off-topic.

    Also: I've updated the lecture. It now runs all the way through the B1 strain, up to the point where the questions start.

    Link again. Original link in OP will automatically update.

    The B1 strain introduces the ethical problem of deliberately engineering low intelligence or a similar handicap. The B1s and B1Fs are smart enough to fix things, talk, and even solve problems no modern-day computer or trained animal can; yet they lack initiative and make no real decisions about anything beyond the immediate future.

    The B1F strain introduces the ethical problem of making a dedicated reproduction platform.
    Spoiler
    Show
    Of note is the structure they control it with. Instead of continuous reproduction, B1Fs only ever birth B1s. Any mutation will be carried on to one B1 and gets stuck there, since B1s cannot reproduce. The same structure can be expanded upwards: create a B1F2 that only births B1Fs, which then get you your B1s. Or a B1F3, or more.

    Biologists will recognize the scheme as being modeled on terminal differentiation of cells in the body. It is used here for precisely the same reason: to minimize the damage mutations can do.
    In fact, when I was trying to design a mutation-control mechanism for natural reproduction of the strains, the inspiration was how blood cells are formed.

    Just to put things into perspective:
    Each B1F* births one of the next tier down every 1.5 years. They start at 16 and continue until 40. Thus, a B1F4 will birth 16 B1F3s.
    Each of those B1F3s will birth 16 B1F2s, etc.

    Therefore, making a single B1F4 results in 16^4 = 65 536 (!!) B1s.
    And each of those B1s is only 4 generations away from the gestation vats and thus any mutations don't have time to accumulate.
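    A quick Python sketch to check that arithmetic (numbers straight from above: breeding from 16 to 40, one birth every 1.5 years):

    births_per_individual = int((40 - 16) / 1.5)   # = 16 offspring over a breeding lifetime

    def b1s_from(tier):
        """B1s ultimately produced by one B1F<tier> in this cascade."""
        return births_per_individual ** tier

    for tier in range(1, 5):
        print(f"One B1F{tier} -> {b1s_from(tier):,} B1s, each at most {tier} generations from the vats")

    # One B1F4 -> 65,536 B1s, each at most 4 generations from the gestation vats.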

    When you add in the (in questions section) mechanism by which their tetraploid genome "votes" on the correct version when damage occurs and undergoes self-checking procedures, any mutations become incredibly rare.
    (FYI, real-life DNA does the same, except there are only two copies and sometimes the undamaged one gets "repaired" instead. Also, periodic complete error-checking passes don't occur, since the whole point of having 2 genomes is so you can have sexual recombination.)
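    If you want the "voting" made concrete: it's just a majority call across the four copies. A toy sketch (illustrative only - not a model of any real repair machinery):

    from collections import Counter

    def vote(copies):
        """Majority base across the copies; None if there's no clear winner."""
        counts = Counter(copies).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            return None                  # tie - voting alone can't resolve it
        return counts[0][0]

    print(vote(["A", "A", "A", "T"]))   # tetraploid, single hit: outvoted 3-to-1, repaired to A
    print(vote(["A", "A", "T", "T"]))   # two hits at the same site: tie, voting fails
    print(vote(["A", "T"]))             # diploid comparison: any single mismatch is already a tie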

    Put together, this means they have essentially negated the risk of any form of mutation doing anything meaningful.
    And any double mutation that would destabilize the theoretical tetraploid DNA checker would only be present in the B1F*s downstream of the original mutant - and those only stay around for one generation.


    I can guess that Frozen Feet won't have any problem with it. Soras Teva Gee can conclude that the B1 and B1F derivatives are not sentient, since they do not make long-term decisions.

    EDIT: I forgot the other interesting point. This turns up in the conference:
    "basic safety guidelines of psychological engineering"
    Make of that what you will.
    Last edited by jseah; 2011-10-22 at 04:51 PM.

  27. - Top - End - #27
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Comments regarding the fiction not directly related to the topic:

    Spoiler
    Show

    Quote Originally Posted by jseah View Post
    You might also notice how I had the speaker talk to and treat her like a person despite talking of the A1s in general as tools and objects. At least, I tried to, not sure if I managed that well.
    The speaker did treat her as a person, but then immediately undercut that when he mentioned he 'overdid the bonding', turning an affectionate act of comfort into a calculated act of improving her happiness score, as if she were a virtual pet.

    However the problem is that in an experiment such as this, influencing the behaviour of the subjects will lead to biased or otherwise skewed results, which will pretty much invalidate the experiment (unless you're investigating the effects of the bias).

    Quote Originally Posted by jseah View Post
    But yes, I wrote that "lecture" as a thought experiment in pushing the problem to its logical limits. As well as attempting to dodge any potential legal problems.
    The problem is that legal problems are going to come up, even if you don't expect them. Even though they're made from yeast, I'm fairly sure that, at the very least, ethics committees of all shapes and sizes will bear down on Sintarra Labs like a ton of bricks, not to mention the activists.

    You could imagine the outcry today if yeast could complain about how they're used in laboratories.


    With regard to the extended lecture, you mention that the B1s are male to reduce the amount of work needed to change the physical development program, and because human males are stronger than females.
    If they're made from yeast, there's no requirement for a male appearance, just make the default (female) model stronger. If you're using human hormones like testosterone to influence their development, they're no longer yeast derived but human derived, which opens another massive can of ethical and legal worms.

    If all you're interested in is the scope of the psychological engineering, then just ignore this spoilered commentary - this is just the genetic scientist in me getting worked up.


    With regard to the other comments, I think others have covered what I wanted to say far more eloquently than I could have, so I'll just put up some current day developments that may be of interest:

    BBC news did a short article on autonomous robots, which explored their possible intelligence.

    The US ONR did a report on the risk, ethics and design of autonomous military robots. Of particular note to you, I think, would be the ethics part: people are already considering programming 'acceptable behaviour' into the drones, even though they're not as well developed as your theoretical AIs.

    However, I know that they're developing autonomous Predator drones (to reduce fatigue on the operators), and it's been rumoured that successor versions may have the ability and the authority to independently engage targets according to their RoE.
    'Thinking' machines with the ability to decide whether or not to kill you - while they're not sentient by any standard, they're definitely not ignorable.
    Last edited by Brother Oni; 2011-10-24 at 11:35 AM.

  28. - Top - End - #28
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Oh, I'm a 4th year biochemist-in-training actually. XD
    Spoiler
    Show
    Well, the speaker is still unfamiliar with managing A1s, so it was a genuine mistake (although one encouraged by her distress).
    At least, that is how I intended it. Not that it's really important.

    Any actual A1s will likely encounter people who treat them as objects, as slaves, or as equals.

    Quote Originally Posted by Brother Oni View Post
    If they're made from yeast, there's no requirement for a male appearance, just make the default (female) model stronger. If you're using human hormones like testosterone to influence their development, they're no longer yeast derived but human derived, which opens another massive can of ethical and legal worms.
    They're made from yeast. However, a lot of developmental logic (e.g. the Wnt pathway, FGF4, the MAPK cascade, etc.) can be copied very easily without actually understanding how or why those pathways work.

    You just build a signal-effector system with the same kinetics, one that crosses the same membranes and has analogous downstream effects.
    It just uses different molecules. Crib it from some other organism, then mutagenize it or use directed evolution to tweak the Kd and Km up and down as desired. Test it to see if it works in vitro, put it into the latest yeast model and check.
    ~1 million scientists working for 70 years to get strain A? Might be doable with some improvements to biochemical assays. (Not saying that's likely to happen. Organizing 1 million scientists to work on a systematic project of this scale is on the deep end of impossible. Like herding cats.)
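    As a toy illustration of "copy the kinetics, never mind the molecule" (standard Michaelis-Menten form; the Vmax/Km numbers are made up, not anything from the lecture):

    def rate(substrate, vmax, km):
        """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
        return vmax * substrate / (km + substrate)

    human_step     = dict(vmax=10.0, km=2.0)   # the original signal-effector step
    yeast_analogue = dict(vmax=10.0, km=2.0)   # different protein, constants tuned to match

    for s in (0.5, 2.0, 10.0):
        print(s, rate(s, **human_step), rate(s, **yeast_analogue))

    # Identical outputs at every substrate level - which is all "same kinetics" has to mean.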

    Much easier than trying to work out what all the pathways actually mean. While they say they "understand" how the human developmental program works, what they really mean is "we have all the bits written down", but they still don't know how those bits are actually put together.
    It is much easier to stamp collect instead of actually having to think about it. =)

    So, while changing things like psychological and physical development is possible (they CAN make females stronger if they want to), it is a lot of work to find out what all the bits mean and how to rearrange them - work that is unnecessary if you just copy the logic behind the human version and never mind the details. (But don't copy the genes outright, since they want to dodge the ban on modifying humans and thus want the strains to be 100% artificial.)


    Of course, the way Sintarra Labs has managed to keep it quiet for 30 years, when it involved 1 million scientists, is also next to impossible. Even if you're a backwater, poorly funded research colony, you can't possibly find 1 million amoral scientists willing to work on this for the rest of their lifetimes...
    Or expect that no scientist would break off from Sintarra and try commercializing it... (neural stem cell treatment immediately comes to mind)
    Or that no one would notice Sintarra Labs suddenly requiring a lot of cloning-vat-related supplies (or the equipment to make them)...

    All these real-world problems would have shot the entire project down from the start.
    But then I wouldn't have my nice clean ethical questions, now would I?
    So... *handwaves* =P


    But of course legal problems will arise. People can create new laws and old laws can be interpreted in new ways.

    One specific thing I was thinking of was to expand the interpretation of "human" in the law to "people", with "people" meaning anything sentient in a roughly human way, judged against some specific criteria psychologists can assess.

    Although they're not going to want to give up a Singularity-enabling technology - which is what the B2 strain is. (That's the speaker's strain.)

    ----------------------------------------------------------------
    When you say others have summed it up, can I ask which particular position you hold?

    They go from:
    "It's all right to make AIs do whatever you want"
    "It's not all right to do that to intelligent things if they are too smart, but ok to do it to less-than-intelligent AIs"
    to
    "It's not ok to do it to anything intelligent at all"

  29. - Top - End - #29
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    I agree that, in the event of the creation of strong AI, "person" should include said brethren.
    Photons and Force Fields, Servos and Silicon or Flesh and Blood, it is all Mind.
    However, what will we do about voting? If a mind can be programmed, then likely a mind can be created with certain opinions.
    And assuming that we don't reach a state of universal software/hardware compatibility, which seems pretty unlikely to this one, they will still have a certain loyalty to the company for purposes of maintenance, not to mention upgrades. They will have a vested interest in said company staying around. Imagine if you needed a certain medicine that only one manufacturer made. You would do everything in your power to keep that company afloat. Now multiply that by a billion or more.
    If a fully developed AI can be made faster than a human mind, which is rather the point if it is to be practical, this could really screw up democracy as we know it.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  30. - Top - End - #30
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    The dilemma is wholly artificial, stemming from the idea that any sapient intellect must be given the same rights and responsibilities as human citizens of country X. It is easily resolved by not giving the right and responsibility to vote to beings whose opinions are easily compromised. Little children are barred from voting for exactly the same reason.
    Last edited by Frozen_Feet; 2011-10-24 at 03:08 PM.
    "It's the fate of all things under the sky,
    to grow old and wither and die."
