  1. - Top - End - #31
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Frozen_Feet View Post
    The dilemma is wholly artificial, stemming from the idea that any sapient intellect must be given the same rights and responsibilities as human citizens of country X. It is easily resolved by not giving the right and responsibility to vote to beings whose opinions are easily compromised. Little children are barred from voting for exactly the same reason.
    So we expect these minds to be satisfied with others having such total control over their lives? Minds that may equal or even exceed the capabilities of an adult mind?
    No wonder robots always rebel.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  2. - Top - End - #32
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    If their opinions can be easily rewritten anyway, they don't have the freedom to not be satisfied.

    Also, sapient =/= exceeds or equals an adult mind. We're talking about inhuman entities here. Instead of adult humans, it might be wiser to compare them to the disordered or developmentally impaired. Their capabilities might exceed humans' by miles in some areas while falling short in others, such as independent decision making (which is the case if their opinions, and hence their basis for making decisions, are easily rewritten).

    The value of a creature, human or not, is based on what it can and is willing to do, and what it can potentially do. You don't give a being rights if you can expect it not to fulfill the duties and responsibilities entailed by those rights.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  3. - Top - End - #33
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    If strong AI is possible, eventually they will equal us in all areas. The only difference is that they can be directly reprogrammed and humans cannot.
    Yet.
    Do we deserve the right to have this kind of power over other minds?
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  4. - Top - End - #34
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Let's see if I can't change a few opinions.

    Quote Originally Posted by jseah View Post
    - The same lecture from first person perspective of the A1, Laura. Here.
    EDIT: hopefully this doesn't come off as TOO shameless a plug... =P

    Soras Teva Gee:
    I hope this at least demonstrates that you can marry a strong intellect with the properties of the A1 and still be believable.

    Whether it is actually possible to do this and still end up with an A1 with the mental capabilities Laura is portrayed to have, we don't know, although I am inclined to think it is.
    Last edited by jseah; 2011-10-24 at 05:21 PM.

  5. - Top - End - #35
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Ravens_cry View Post
    Do we deserve to have the right to have this kind of power over other minds like this?
    It can be argued that since we made them...

    At what point does it become better that they were never created?

    If you think euthanasia is acceptable, then the point at which people would accept applying it to an AI certainly qualifies.
    Why make an AI in such a terrible condition that it becomes agreed it is merciful to kill it?

    Is there an earlier point? Does a category exist where they should never be made, but once they are made, it is better to just let them be?

  6. - Top - End - #36
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Well, we make our children, and there are measures in place to prevent their exploitation. In a way, strong AI, if it ever gets made, will be our children. I don't necessarily mean they will supplant us, but they are a kind of descendant.
    However valuable it might be for research, making, say, an analogue of a human mind designed to suffer in controlled ways is, in my opinion, too unethical to be done.
    On the other hand, if an AI gets injured in its job, does not wish to or cannot be transferred to another body, its damage is irreparable, and the AI itself feels it does not wish to carry on, then I suppose terminating it is as ethical as euthanasia.
    I don't really wish to discuss that.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  7. - Top - End - #37
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Ravens_cry View Post
    If strong AI is possible, eventually they will equal us in all areas. The only difference is that they can be directly reprogrammed and humans cannot.
    Yet.
    Do we deserve the right to have this kind of power over other minds?
    "We", as in humanity in abstract, or all humans everywhere?

    Of course not, to both. It's a right only of those who have the requisite training and expertise to do it well, and whom others can reasonably expect not to abuse those rights. In turn, they have the responsibility not to abuse their rights or their creations. This holds true whether the programmers are humans, other AIs, or pigs.

    And while I consider your opinion that humans can't be reprogrammed somewhat dubious (what do you think teaching and learning are, then? Have you looked up false memories?), your scenario still provides a clear and remarkable difference between two kinds of intellect. This difference means they aren't equal, and it makes sense for unequal intelligences to have different rights and responsibilities. You don't expect a car mechanic to treat lung cancer, so you don't give him the right to look at your medical data or write prescriptions.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  8. - Top - End - #38
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    All these real-world problems would have shot the entire project down from the start.
    But then I wouldn't have my nice clean ethical questions, now would I?
    So... *handwaves* =P
    Woah, developmental biology has really moved on since I last studied it.

    I'm happy with handwave.

    Quote Originally Posted by jseah View Post
    When you say others have summed it up, can I ask which particular position do you hold?
    For various philosophical reasons (and other ones prohibited from board discussion), I'm against building strong AI, simply because we can't safeguard their rights.
    If something has the knowledge and self-awareness to ask "Why are you doing this to me?" and you don't have a better reason than "I'm human and you're not", then you should be taking a long, hard look at yourself.

    However I'm a pragmatic sort and I know that people will attempt to do so because they're curious, so my compromise position is "not too smart" or rather, build them for their intended purpose.

    A bomb disposal robot doesn't need to have aspirations about wanting a better life, so it's better not to give it that cognitive ability.
    If such a robot had the cognitive and communication ability of a 6 year old human child, most soldiers would probably refuse to put it into harm's way, whereas if you put it at the same level of a smart dog, the same soldiers would have no such qualms.

    That said, there's a difference between an expert system and an AI. For a specific role, an expert system could be virtually indistinguishable from an AI, but it would not be regarded as sentient.
    An expert system with the decision-making and detection ability of a trained sniffer dog would not draw the same sort of issues as an AI with equivalent capabilities.
    The AI would be able to cope with unexpected circumstances better, but if it spent its non-mission time exploring its compound or chasing balls (basically acting like a living being), then the people working with it would have issues, if not the ethics committees.

    Quote Originally Posted by Frozen_Feet View Post
    If their opinions can be easily rewritten anyway, they don't have the freedom to not be satisfied.
    This would probably be one of the safeguards built in to commercial scale strong AIs - you can't arbitrarily re-write their programming. You could argue with them in an attempt to change their mind or instruct them like children, but that's no different from another human.

    With regard to enforcing the Three Laws on strong AIs being like mind control, living beings have the same sort of behavioural instincts, so that's no different.
    There was an experiment where a small number of chimpanzees were given a group task to do - when it was completed, the entire group was rewarded. Once they got the hang of the task, the researchers only rewarded one chimpanzee on completion, and after a while of this the other chimps downed tools and refused to help.

    This indicates that fairness is pretty much hard coded into social animals, which is quite an advanced concept. If fairness is inherent to animals, why not the Three Laws, or something more suitable for strong AIs?
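    As a purely illustrative aside, here is a minimal Python sketch of what a "built-in safeguard that can't be arbitrarily rewritten" could look like in software terms. Every name and rule in it is hypothetical and invented for illustration; it is not taken from any real AI system or from anything earlier in this thread.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass(frozen=True)  # frozen: the safeguard layer itself cannot be mutated at runtime
    class Constraint:
        name: str
        violated_by: Callable[[str], bool]

    # Toy, Three-Laws-flavoured rule set; real rules would be far richer than substring checks.
    SAFEGUARDS: Tuple[Constraint, ...] = (
        Constraint("no_harm_to_humans", lambda action: "harm human" in action),
        Constraint("no_disobedience", lambda action: "ignore order" in action),
    )

    def permitted(action: str) -> bool:
        """Every action proposed by the learned, arguable part of the mind must pass this filter."""
        return not any(rule.violated_by(action) for rule in SAFEGUARDS)

    for proposal in ("fetch the samples", "harm human to finish the task faster"):
        print(proposal, "->", "allowed" if permitted(proposal) else "vetoed")

    The point of the sketch is only that the vetoing layer sits outside whatever the AI learns or is argued into, much as a sense of fairness sits outside whatever a chimpanzee is taught.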

    Edit: reformatted for ease of reading
    Last edited by Brother Oni; 2011-10-25 at 07:13 AM.

  9. - Top - End - #39
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    It's not inherent so much as something that gets programmed in during evolution. Saying it is inherent is like saying, "Every creature with eyes can use them, therefore giving a robot sight must be as simple as connecting a camera to it", without considering how hard visual processing is.
    And it is very hard.
    It might pop up as an interaction between other bits we program in, or it might need to be explicitly programmed in, but a "sense of fairness" is not inherent.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  10. - Top - End - #40
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Ravens_cry View Post
    It's not inherent so much as something that gets programmed in during evolution.
    And the functional difference between the two is..?

    Animals (or at least chimpanzees), and hence us, understand 'fairness' at a very basic level, so I don't really see an issue with strong AIs having similar instructions programmed in at such a basic level (dependent on the actual instructions, of course).

    Other posters have suggested that this is the worst form of slavery (built in mind control), but if animals have pre-programmed behaviours, why not machines?

  11. - Top - End - #41
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    Other posters have suggested that this is the worst form of slavery (built in mind control), but if animals have pre-programmed behaviours, why not machines?
    I think the core of the issue is that a lot of people find it hard to swallow that humans have loads of preprogrammed behaviours. This colors their perception of what it means to be "free" whenever discussion wanders into the realm of transhuman and extrahuman intellects. People feel such intellects should be as or more "free", but they set the bar arbitrarily high due to a faulty understanding of how limited humans are.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  12. - Top - End - #42
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    And the functional difference between the two is..?

    Animals (or at least chimpanzees), and hence us, understand 'fairness' at a very basic level, so I don't really see an issue with strong AIs having similar instructions programmed in at such a basic level (dependent on the actual instructions, of course).

    Other posters have suggested that this is the worst form of slavery (built in mind control), but if animals have pre-programmed behaviours, why not machines?
    I do not object to pre-programmed behaviours; we have enough of those ourselves.
    My point is that we need to know how to program it in. A mind, any mind, is complex.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  13. - Top - End - #43
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    Woah, developmental biology has really moved once since I last studied it.

    I'm happy with handwave.
    Uh, you mean it has really moved forward in the fic. We can't do that yet, obviously.
    Spoiler:
    But we have managed to move control systems between organisms. The most famous one is the LacI system, used to put any arbitrary gene under the control of IPTG (a small molecule). (Copied from bacteria, used practically everywhere.)

    More recent ones include using Cre-Lox to knock out genes when you want to (copied from a virus),
    and "importing" quorum sensing from one bacterium to another... and making it control GFP (which comes from a jellyfish).

    Making a control system for other control systems... the only one I've heard of is a "circadian"-like clock, in which two control systems suppress each other and thus fluctuate with a specific cycle time (a toy sketch of this kind of circuit follows below).

    We haven't actually tried copying control systems and creating artificial signalling networks (mostly because we can't do it yet), which is what you will need to do if you are making yeast-people.
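    (For anyone curious what such a "clock" circuit looks like on paper, here is a toy numerical sketch. It uses the classic three-gene "repressilator" topology - each gene represses the next one in a ring - with made-up parameter values and naive Euler integration. It is an illustration of the idea only, not a model of any real organism or of anything in the fic.)

    def repressilator(steps=20000, dt=0.01, alpha=50.0, n=3.0, decay=1.0):
        """Three repressor proteins in a ring; each one's production is shut down by the previous one."""
        levels = [1.0, 0.5, 0.1]          # arbitrary starting protein levels
        history = []
        for _ in range(steps):
            new = []
            for i in range(3):
                repressor = levels[(i - 1) % 3]               # the previous gene in the ring
                production = alpha / (1.0 + repressor ** n)   # Hill-type repression
                new.append(levels[i] + dt * (production - decay * levels[i]))
            levels = new
            history.append(tuple(levels))
        return history

    trajectory = repressilator()
    for t in (0, 5000, 10000, 15000, 19999):   # with these toy parameters the three levels rise and fall in turn
        print(t, [round(x, 1) for x in trajectory[t]])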

    Quote Originally Posted by Brother Oni View Post
    If something has the knowledge and self awareness to ask "Why are you doing this to me?" and you don't have a better reason than "I'm human and you're not" then you should be taking a long hard look at yourself.
    In the fiction, especially if you read the one from Laura's POV, my hypothetical A1s know precisely why the humans made them that way
    - namely, that the humans were afraid any artificial life they created might be hostile to them, since they wanted to use it.

    They are smart enough to do research, they are smart enough to figure it out.

    They're fine with it. They like it that way.

    They also know the reason that they like it that way is because that's how the humans did it and they never had a choice in the matter.
    And that's fine too, since they don't value having the choice. Then again, due to how their value systems work, everything that isn't "being petted" pales in comparison.

  14. - Top - End - #44
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Ravens_cry View Post
    My point is that we need to know how to program it in. A mind, any mind is complex.
    I would agree with you if we were inserting a behaviour into a pre-existing mind; however, I don't see it as a problem if the behaviour is embedded while the mind is being constructed.

    In my opinion, developing and inserting preprogrammed behavioural patterns pales in technical complexity to actually making an autonomous mind from scratch, so by the time we've figured out the latter, the former isn't going to be an issue.

    In any case, humans have had behaviours embedded before (hypnosis, subliminal commands, etc.), with varying levels of compatibility and success. Why not the equivalent with machines?

    Quote Originally Posted by jseah View Post
    Uh, you mean it has really moved forward in the fic. We can't do that yet, obviously.
    Let me put it this way, when I last studied it, they hadn't finished the Human Genome Project yet. I still have a textbook from my A-levels (High School equivalent) where they didn't know how Vitamin C stopped scurvy.

    As I said, it's been a while.

    Quote Originally Posted by jseah View Post
    And that's fine too since they don't value having the choice. Then again, due to how their value systems work, everything not "being petted" pales in comparison.
    Moral and cultural relativism can lead to some very dark places. I wonder what an A1 would do if another human had harmed their bonded human?

    Bear in mind that 'harm' could range from murder all the way down to snagging the last sandwich before them in the canteen, especially with the apparent obsessiveness displayed by Laura in your other piece of fiction.
    Last edited by Brother Oni; 2011-10-26 at 06:38 AM.

  15. - Top - End - #45
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    As I said, it's been a while.
    Ah, if that's where you come from, then yes, yes. We have come very far since then.


    Quote Originally Posted by Brother Oni View Post
    Moral and cultural relativism can lead to some very dark places. I wonder what an A1 would do if another human had harmed their bonded human?

    Bear in mind that 'harm' could range from murder all the way down to snagging the last sandwich before them in the canteen, especially with the apparent obsessiveness displayed by Laura in your other piece of fiction.
    A1s will likely try to prevent the murder of their 'parent', and using lethal force to do so would definitely be acceptable in their minds (although being the size of a 12-year-old kid makes lethal force hard to come by without weapons).
    EDIT: which is yet another reason not to let them grow up

    Sacrificing themselves to block a knife or shockwave from a bomb will probably be second nature to the vast majority of them.

    Using lethal force to 'defend' their 'parent', while they would certainly use it more often than normal humans if given the chance, does come with a penalty if it's not 'self-defence' (and it's easily argued that, for an A1, protecting their 'parent' counts as self-defence).
    And they are smart enough to weigh the consequences, although they still face the need to make snap decisions.

    E.g.
    Give an A1 (or A2) a gun.
    If you steal her 'parent's' sandwich, she won't use it (unless her 'parent' orders her to shoot you, in which case you get shot). It's easier to just make another sandwich, with fewer long-term consequences.
    A robber who tries to threaten her parent might get a warning shot, if he's lucky.
    If he has a weapon (even a small knife or a baseball bat) or uses force, she'll probably just shoot him outright. If she's a good shot, perhaps she might choose to try aiming somewhere non-vital. And if she's never held a gun and she's not confident of not hitting her 'parent', she might try getting closer (regardless of danger to self)

    If her 'parent' is threatened with death, she'll pick whatever best action she can think of to save her 'parent'.
    If you take the 'divert the train from 5 people to kill 1 person' moral problem, an A1 will always choose to save the group that has her 'parent' in it.
    If neither group contains her 'parent', it either reverts to the standard problem (when the A1 knows none of them), or, if the A1 knows the people, she may choose the group that benefits her 'parent' more (and it can go either way, since it's hard to make a decision under time pressure with insufficient information).
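    (To make that priority ordering concrete, here is a small illustrative Python sketch. The function and the example names are invented for illustration; this is not jseah's actual specification of the A1s.)

    def choose_group_to_save(groups, parent):
        """groups: candidate groups of people, only one of which can be saved."""
        for group in groups:
            if parent in group:      # the bonded 'parent' dominates everything, regardless of numbers
                return group
        # No 'parent' at risk: fall back to the standard problem, e.g. save the most people
        # (or, if the A1 knows the people involved, whichever group benefits the 'parent' more).
        return max(groups, key=len)

    print(choose_group_to_save([["stranger"] * 5, ["parent_scientist"]], parent="parent_scientist"))
    print(choose_group_to_save([["a", "b", "c"], ["d"]], parent="laura"))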
    Last edited by jseah; 2011-10-26 at 09:38 AM.

  16. - Top - End - #46
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    I agree with you if we were inserting a behaviour into a pre-existing mind, however I don't see it as a problem if the behaviour was embedded while the mind was being constructed.

    In my opinion, developing and inserting preprogrammed behavioural patterns pales in technical complexity to actually making an autonomous mind from scratch, so by the time we've figured out the latter, the former isn't going to be an issue.

    In any case, humans have had behaviours imbedded before (hypnosis, subliminal commands, etc), with varying levels of compatibility and success. Why not the equivalent with machines?
    Hypnosis won't make you go against your own conscience, and subliminal commands are likely phony. I don't mind inserting behaviours at the start, unless they are against the robot's own interests - like a command to "Buy Mom's Robot Oil" despite it being crude crud. Then it becomes a kind of mental slavery.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  17. - Top - End - #47
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    A1s will likely try to prevent the murder of their 'parent', and using lethal force to do so would definitely be acceptable in their mind. (although being the size of a 12 year old kid makes lethal force hard to come by without weapons)
    EDIT: which is yet another reason to not let them grow up
    I was under the impression that they were in the 8-10 year range. 12 is a little too old for them to have such obsessive behaviour and not be very creepy.

    If they have the physical capabilities of a 12 year old but the intellect and clarity of mind of someone much older, then unarmed lethal force isn't as difficult as you think.

    Quote Originally Posted by jseah View Post
    [Clarification of A1/A2 behaviour]
    Let's try something a little more subtle:

    1. Two scientists are competing for a promotion and both desperately want it. Would either scientist's bonded A1s/A2s sabotage the competitor's chances, or worse?
    2. Two scientists who are colleagues absolutely despise each other, and their mutual loathing is causing considerable harm to their personal and work lives. Assuming they can't transfer away from each other, would their A1s/A2s take the initiative to do something about the other scientist?


    Quote Originally Posted by Ravens_cry View Post
    Hypnosis won't make you go against your own conscience, and subliminal commands are likely phony. I don't mind inserting behaviours at the start, unless they are against the robot's own interests - like a command to "Buy Mom's Robot Oil" despite it being crude crud. Then it becomes a kind of mental slavery.
    Hypnosis has been proven to help people conquer their phobias, and having seen how hysterical some people with a strong phobia can get, that's pretty major behaviour alteration.

    The validity of the behaviour being in the AI's own interests depends on the role of the AI, though. For example, having a strong self-preservation behaviour in an AI which leads to it being unwilling to put itself into harm's way sounds perfectly reasonable (after all, who wants an autonomous car that doesn't care whether it gets dented?), but it's not so useful for one in the rescue services or in military use.
    However, you don't want an intelligent AI that's suicidally brave, since that's morally dubious (it risks its 'life' not because it chooses to, but because someone else has essentially forced it to).

    That said, there are some quirks associated with the fundamental nature of machine intelligences that give them significant advantages over meat ones.
    The tachikoma AIs from Ghost in the Shell are military ones which inhabit small AFVs/APCs. Their memories are all synchronised at night, which, while making them all nearly identical to each other, also makes them totally unafraid to die, since they know they're backed up and thus, if they do 'die', all they lose is their part of the day's memories.
    Last edited by Brother Oni; 2011-10-26 at 07:01 PM.

  18. - Top - End - #48
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    I was under the impression that they were in the 8-10 year range. 12 is a little too old for them to have such obsessive behaviour and not be very creepy.

    If they have the physical capabilities of a 12 year old but the intellect and clarity of mind of someone much older, then unarmed lethal force isn't as difficult as you think.
    Well, you don't want them to be stuck in a body that is <1 meter tall. That'll make them require completely different seating arrangements, use differently sized equipment and just generally have a lot of problems in a world made for adult humans.
    Like reaching door handles.

    It's a balance between wanting a body that is metabolically easy to maintain (meaning they don't eat a lot) and not physically mature enough to carry children despite all the other roadblocks (because then it would be easier for a rogue human to make one that could),
    versus a body that can do whatever you want it to.

    An adult body can do more than a child's (apart from squeezing through tight spaces), especially when all your lab buildings and equipment are sized for adults.
    Just that the restriction on reproductive independence means you can't use a fully mature adult body.

    Quote Originally Posted by Brother Oni View Post
    Let's try something a little more subtle:
    Those two situations depend on how much they think their 'parent' will approve. If their 'parent' condones and/or encourages backstabbing the other scientist, then sure, they will do it. On their own initiative even.

    If they think their 'parent' won't like it, they won't. Probably will ask if they're not sure.

    Hypothetical discussion among a group of 'sister' A1s:
    Spoiler:
    "I got an idea. I've looked at his laboratory and its a mess. If we just shuffle a few of the labels on his media bottles around, he won't even notice but it'll make sure he'll never finish before we do. "

    "Can't we just steal all his pipettes?"

    Eldest: "Are you sure she's going to approve of this? I don't think she'ld want us to do it. We have to ask. "

    "What if he allows his A1s to do it?"

    Eldest: "Then we just have to find out if he has allowed it and take steps to ensure we don't get sabotaged. It might convince her to allow this idea if his A1s destroyed our experiment, but its ultimately counter-productive. A bit like that MAD logic in the nuclear war period on Old Earth.
    Lisa, you're good at talking. Can you try talking to Melinda in his group and making sure we understand the situation? MAD logic only works if they know that we know etc."


    EDIT:
    They have initiative and they think of things to do all by themselves. It's just that their overarching goal is a bottomless obsession with their 'parent', and they treat instructions from their 'parent' with a very high priority (although a suggestion given in a manner that causes the A1 to judge it as unimportant can easily be overridden by other factors).

    Basically, like anything that has learnt behaviour, the A1s' behaviour will reflect the kind of treatment they have been exposed to.
    A 'parent' who constantly micromanages his or her A1s, and expects them to do exactly as told and no more, won't have A1s who display any initiative at all.
    A 'parent' who runs a more hands-off approach, taking suggestions from the A1s and making alterations or simply approving, generally treating them as advisors and expecting them to do things proactively; will have A1s who work more like middle management in a company.
    - If the 'parent' doesn't punish mistakes or wrong actions but merely corrects them (ie. accepts that the A1s can't do everything exactly as you want), then the A1s will be unusually independent and implement ideas first, and report the results later.
    - Whether this is a good thing depends on how you want to use them. Initiative within set boundaries is probably good enough for most things.

    It's obvious which way of running A1s is riskier, but it's also much more productive. Especially if they have B1s under them to do the specialist work, you can literally run a small to mid-sized company where the only humans are on the board of directors.
    (And with the B2s, the company can be any size.)
    Last edited by jseah; 2011-10-27 at 05:48 AM.

  19. - Top - End - #49
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    Well, you don't want them to be stuck in a body that is <1 meter tall. That'll make them require completely different seating arrangements, use differently sized equipment and just generally have a lot of problems in a world made for adult humans.
    Like reaching door handles.
    Looking up some official statistics, the average height of a 5-year-old female is just over 1m, and that of an average 12-year-old female is 1.5m.
    Speaking from experience, a five-year-old can pretty much reach whatever an adult can with a little ingenuity (and a chair). Door handles cease to be a barrier at about this age too.

    Given the size range of an 'average adult human', that's not a very good gauge for determining size (I have work colleagues who are significantly shorter than me and they struggle, just as I have issues compared to work colleagues who are significantly taller).

    I agree that you have to juggle capabilities with restrictions, but you also need to take into account societal restrictions and issues from the human perspective.

    Quote Originally Posted by jseah View Post
    Just that the restriction on reproductive independence means you can't use a fully mature adult body.
    I believe that this restriction is more regulatory than technical, so sterilisation should fix that issue rather neatly, which is what I think you've done with the male B2s.

    Quote Originally Posted by jseah View Post
    [More A1 behavioural clarification]
    Sounds to me like managing a group of A1s is like having a group of very precocious but very needy children, something that most parents would have some prior experience with.

    Out of curiosity, what is the life expectancy of all the various strains? Aside from differences in the origin of the personality and physical abilities, I'm starting to see some parallels with the Replicants from Bladerunner, especially in their intended roles and treatment by humans.

  20. - Top - End - #50
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    I agree that you have to juggle capabilities with restrictions, but you also need to take into account societal restrictions and issues from the human perspective.
    That is true. Although if I took into account societal restrictions, none of this would have been started anyway... =(

    EDIT: basically, if Sintarra Labs were unbothered enough by social restrictions to even make A1s at all, I don't think they'd look at anything other than benefit-risk tradeoffs.

    Spoiler:
    Quote Originally Posted by Brother Oni View Post
    I believe that this restriction is more regulatory than technical, so sterilisation should fix that issue rather neatly, which is what I think you've done with the male B2s.
    Yes, it's regulatory. But the point is to prevent the strain As from being able to reproduce without supporting industries, so that they are always tied to developed civilization and an extensive infrastructure.

    That includes genetic sterility (cells cannot do meiosis), no males, no sexual development.

    The lack of the second growth spurt in the genetic program is there to prevent some rogue scientist from doing exactly that.
    If you just have genetic sterility and no males, it would not be too hard to simply reverse that (although males would be tricky) and then get them to reproduce asexually (and if the A1s were male, getting a female version is too easy).

    The total lack of a maturing program means you have to reconstruct it from scratch (which they had to do for the strain Bs). This would take a large lab, lots of time, and generally be too much for anything smaller than a mid-sized research organization.

    EDIT:
    Of course, B1s go to full maturity and B1Fs already birth B1s. Reprogramming B1Fs to make more B1Fs isn't that hard. That plus a genetic counter is what allows the B1F2+ amplification, and is a relatively trivial modification.

    But since B1s don't have strong mental capacities and show very little true initiative, they don't pose a risk regardless of reproductive independence. And increasing their mental capacities takes enough effort that you may as well make a new strain.

    Since they merely clone themselves if you halt the counter, the only risk from there is mutation, which is small.

    Of course, the B1Fs will have been made to ensure they cannot carry strain As or humans (different developmental timing, different molecular signal pathways, different womb environment)

    Quote Originally Posted by Brother Oni View Post
    Sounds to me like managing a group of A1s is like having a group of very precocious but very needy children, something that most parents would have some prior experience with.

    Out of curiosity, what is the life expectancy of all the various strains? Aside from differences in the origin of the personality and physical abilities, I'm starting to see some parallels with the Replicants from Bladerunner, especially in their intended roles and treatment by humans.
    Well, I never thought about their life expectancy.

    You'd want them to live long, so you don't spend too much time cloning and training new A1s. And they'd have the time to get really good at the skills they use.

    But at the same time, A1s aren't inheritable and having them kill themselves once their 'parent' dies of old age isn't good for morale.

    I dunno, if I had to guess how long they'd try to shoot for... say 30 to 40 years?

    EDIT:
    About the analogy to children: yes, in some respects (mostly size), they are like children. But their experience increases with age, and the initiative, foresight and patience (except where bonding is concerned) they show are definitely nothing at all like a child's.

    At least, once their mental age has moved past that point. An 'old' A1 might look like 12, but she acts nothing like a kid, except for the constant emotional dependence.
    An 8-year-old A1 probably acts like an 8-year-old - an unusually clingy, obedient and not-whiny 8-year-old who never ever seems to want anything other than a hug or petting.

    EDIT2:
    Come to think of it, any of the strains would have a lower avoidance of danger than humans (the self-preservation instinct is weaker than the desire to obey or the emotional dependence).
    This would make young A1s and A2s incredibly difficult to handle. Being curious and incredibly intelligent, A1s will explore (and they will find ways to open doors), and a lower self-preservation instinct would probably get them into trouble really fast, especially if they haven't learnt that something is a threat.

    I can already imagine the burns, falling down stairs, chemical poisoning, food poisoning, electrical shocks, catching strange diseases after getting bitten by frogs...
    Ok, maybe not that last one. =P
    Last edited by jseah; 2011-10-27 at 11:13 AM.

  21. - Top - End - #51
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Brother Oni View Post
    Hypnosis has been proven to help people conquer their phobias and having seen how hysterical some people with a strong phobia can get, that's pretty major behaviour alteration.
    By helping them enact their fears in a safe, controlled manner. Virtual reality has been used in a similar way.
    Quote Originally Posted by Brother Oni View Post
    The validity of the behaviour being in the AI's own interests depends on the role of the AI, though. For example, having a strong self-preservation behaviour in an AI which leads to it being unwilling to put itself into harm's way sounds perfectly reasonable (after all, who wants an autonomous car that doesn't care whether it gets dented?), but it's not so useful for one in the rescue services or in military use.
    However, you don't want an intelligent AI that's suicidally brave, since that's morally dubious (it risks its 'life' not because it chooses to, but because someone else has essentially forced it to).
    Indeed. We probably want to decrease the angst weightings on any models designed for such work, or they will spend all their time wondering whether their actions were really "them" or their programming. Leave truly suicidal work, like cruise missiles and other military hardware, to non-sentient AI, or at least to AI that controls it remotely.
    Quote Originally Posted by Brother Oni View Post
    That said, there are some quirks associated with the fundamental nature of machine intelligences that give them significant advantages over meat ones.
    The tachikoma AIs from Ghost in the Shell are military ones which inhabit small AFVs/APCs. Their memories are all synchronised at night, which, while making them all nearly identical to each other, also makes them totally unafraid to die, since they know they're backed up and thus, if they do 'die', all they lose is their part of the day's memories.
    Interestingly, one becomes an individual despite the synchronisation. For such an individual, being synchronised back into the group could count as a kind of permanent death.
    Last edited by Ravens_cry; 2011-10-27 at 11:57 AM.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  22. - Top - End - #52
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Ravens_cry View Post
    Indeed. We probably want to decrease the angst weightings on any models designed for such work, or they will spend all their time wondering whether their actions were really "them" or their programming. Leave truly suicidal work, like cruise missiles and other military hardware, to non-sentient AI, or at least to AI that controls it remotely.
    I don't see why any of them would view death as something to be feared. Or why they would even understand fear at all.

    If we program these AIs, we just... don't include it. If you don't program something in, the AI doesn't have it.

    Fear is not something you learn. Besides, how many chances to learn does a cruise missile have anyway?

  23. - Top - End - #53
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    I don't see why any of them would view death as something to be feared. Or why they would even understand fear at all.

    If we program these AIs, we just... don't include it. If you don't program something in, the AI doesn't have it.

    Fear is not something you learn. Besides, how many chances to learn does a cruise missile have anyway?
    It was mostly meant as a joke, but, like pain, it can help keep costs down. After all, fear, as long as it doesn't short-circuit, is a survival mechanism: "Sure, that fresh dead antelope in a tree looks nice, but I don't want the jaguar that put it there chasing me back to my tribe."
    Hell, even the tendency to freeze up when scared is actually a survival trait when faced with sight-based predators that detect motion better than detail - which is most of them, I believe.
    That's not so much a survival trait for a robot working in a modern society, so their fear would be primed more toward action, but the basic idea is the same.
    Any work where death is an inevitable result should not be undertaken by sentient AI.
    Last edited by Ravens_cry; 2011-10-27 at 02:02 PM.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  24. - Top - End - #54
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Ravens_cry View Post
    Any work where death is an inevitable result should not be undertaken by sentient AI.
    You mean this:
    "Any work where death is an inevitable result should not be undertaken by sentient AI with survival instincts"

    Sure, pain and fear can keep costs down by preventing them from unknowingly killing themselves. But at the same time, it also reduces their effectiveness in certain situations and thus the benefits gained by using them.

    How much priority to give that fear instinct over other things is basically a cost-benefit ratio or trade-offs problem.

    What level they have depends on what situations they were designed to handle.
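    (As a toy illustration of that trade-off framing - all numbers, names and the scoring rule here are invented, not taken from the thread - a heavier fear weighting makes a unit refuse risky actions, which prevents pointless losses but also makes it useless for genuinely dangerous work.)

    def expected_value(fear_weight, situations):
        """situations: (payoff_if_it_acts, risk_to_self) pairs; a fearful unit refuses risky actions."""
        total = 0.0
        for payoff, risk in situations:
            acts = payoff > fear_weight * risk
            total += (payoff - risk) if acts else 0.0
        return total

    bomb_disposal = [(10.0, 3.0), (8.0, 4.0), (1.0, 9.0)]   # high stakes, high personal risk
    routine_lab_work = [(2.0, 0.1), (3.0, 0.2)]             # low stakes, low personal risk

    for w in (0.0, 1.0, 5.0):
        print(f"fear weight {w}: bomb disposal {expected_value(w, bomb_disposal):.1f}, "
              f"lab work {expected_value(w, routine_lab_work):.1f}")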
    Last edited by jseah; 2011-10-27 at 05:21 PM.

  25. - Top - End - #55
    Titan in the Playground
     
    Aotrs Commander's Avatar

    Join Date
    Jan 2007
    Location
    Derby, UK
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    You mean this:
    "Any work where death is an inevitable result should not be undertaken by sentient AI with survival instincts"

    Sure, pain and fear can keep costs down by preventing them from unknowingly killing themselves. But at the same time, it also reduces their effectiveness in certain situations and thus the benefits gained by using them.

    How much priority to give that fear instinct over other things is basically a cost-benefit ratio or trade-offs problem.

    What level they have depends on what situations they were designed to handle.
    Woah woah woah.

    Are you seriously advocating the creation of artificial sentient beings as what amounts to suicide bombers (or suicide warheads) and generally disposable tools?

    Because, wow, I cannot even begin to describe how utterly wrong that is.

    To create a living, thinking creature specifically so that it will die for you - and to engineer its mind so that it will die happy as well, following your programming... That is just abhorrent, and I say that even as Evil myself: that is several bridges too far. Sentient/sapient beings are not and should NEVER be considered expendable.

    I mean, aren't suicide bombers (of whatever stripe) pretty much universally considered a bad thing by all but the nutters who use 'em (e.g. the Brotherhood of Nod, the Global Liberation Army and the Libyan Soviets in just Command & Conquer, for a board-acceptable example)? A sign that the group places little value on life, period, never mind whether organic or technological?

    To put this in perspective: if that is indeed what you are proposing, then creating sentient beings to be laden with explosives (warhead or otherwise), or to be shot on a one-way trip into gas giants for "scientific" research, or for mine clearance, or something, is absolutely no different from someone biologically engineering an entire race so they will cheerfully cut their own throats in ritual sacrifice to the entity of your choice. (Or, as with the Ameglian Major Cow from The Restaurant at the End of the Universe, one that will cajole you into having it killed so you can eat it afterwards.) The same logic applies.

    Not to mention that if you start down that road, someone will inevitably find a way to argue that, as they are inherently disposable, it doesn't matter how you treat them because they aren't really people - and I will leave to your imagination what horrors that would conjure in the seedier walks of life. And they absolutely would crop up, legality aside; and of course, the impetus to stop it would be much weaker, since you've already ascribed no more value to these hypothetical AIs than I do to my Nod Fanatics and GLA Demo Trucks.

    Dark Star's intelligent bombs and the Ameglian Major Cows are all very well in rather dark humour, but outside that - that's about as grim as you can get.
    Last edited by Aotrs Commander; 2011-10-27 at 06:24 PM.

  26. - Top - End - #56
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by jseah View Post
    You mean this:
    "Any work where death is an inevitable result should not be undertaken by sentient AI with survival instincts"

    Sure, pain and fear can keep costs down by preventing them from unknowingly killing themselves. But at the same time, it also reduces their effectiveness in certain situations and thus the benefits gained by using them.

    How much priority to give that fear instinct over other things is basically a cost-benefit ratio or trade-offs problem.

    What level they have depends on what situations they were designed to handle.
    Creating a sentient being without survival instincts is just as unethical as sending one that has them into a doomed situation it had no say over.
    Of course, their fears will need to be modulated for the capabilities of the robot in question. A robot with sufficient armour probably won't need to worry about small-arms fire, so any dodging instinct against small-arms fire would be counterproductive.
    Outside the battlefield, a fear of drowning is illogical for a being that doesn't breathe; a fear of short circuits and a need to check waterproofing integrity, on the other hand, make sense for a robot designed to walk underwater.

    Deep-space probes that cannot be recovered should be handled by expert systems that, while complex and capable within their limits, are not sentient; or perhaps they should have a chance to downlink out once the mission is over.
    For a Galileo-type probe, I doubt this would be possible, nor for expendable military hardware like missiles.
    There are just some things people shouldn't do to sentient beings.
    Making our children our slaves is another.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  27. - Top - End - #57
    Ogre in the Playground
    Join Date
    Jun 2009

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Oh, you guys meant from a moral point of view. Yeah, ok.

    I thought you were referring to impracticalities in using AIs in such roles.

    Aotrs Commander:
    Knew you were going to say that if you popped in. =P

    But yes, you hold a view that anything sentient cannot be considered expendable.

    Can I ask what level of intelligence you consider sentient?

    Optional: perhaps you might wish to give a rebuttal of moral relativism (which would allow this provided you made the AIs want it)

    Ravens_Cry:
    Same as above.

  28. - Top - End - #58
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Aotrs Commander View Post
    Woah woah woah.

    Are you seriously advocating the creation of artificial sentient beings as what amounts to suicide bombers (or suicide warheads) and generally disposable tools?

    Because, wow, I cannot even begin to describe how utterly wrong that is.
    Question: do you eat meat? What's worse, engineering creatures that won't mind being destroyed in their job and treating them as tools, or treating as tools creatures that haven't been engineered for that?

    Furthermore, there are several living beings which are driven to self-destruction as part of their natural life cycle. If they can lead a fulfilling existence despite that eventuality, what's the problem?

    For the record, I have no qualms about treating other beings, including humans, as expendable, if the need can be justified. Sometimes, there is a need to do so (or it simply can't be avoided).

    I do agree with your horror scenario where people think it's justified to treat such beings as they will just because they're "expendable" - after all, it's reality already with how lots of people treat animals. For such people, my message is simple: just because a creature is expendable in regards to some specific task, doesn't make it expendable otherwise, or excuse you for treating it badly. User of a tool has the responsibility of taking care of the tool so it can best serve its intended purpose, even if only for one use. (Soldiers should know this better than many, actually, since a soldier often has to carry lots of limited-use devices, such as ammunition, mines, grenades and anti-tank missiles. Sure, once you use them they're gone, but until then you have to keep good care of them or they'll malfunction.)

    I disagree with your opinion that creating a sentient research probe is the same as creating a warhead, or a ritualistic suicide fodder. Tasks are not created equal, some are more important and more sensical than others. Some can be justified, some can not. Arguing they're all the same is fallacious, plain and simple.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  29. - Top - End - #59
    Titan in the Playground
     
    Aotrs Commander's Avatar

    Join Date
    Jan 2007
    Location
    Derby, UK
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Frozen_Feet View Post
    Question: do you eat meat? What's worse, engineering creatures that won't mind being destroyed in their job and treating them as tools, or treating as tools creatures that haven't been engineered for that?

    Furthermore, there are several living beings which are driven to self-destruction as part of their natural life cycle. If they can lead a fulfilling existence despite that eventuality, what's the problem?
    Because there is no need to make them "mind" in the first place. If you have that level of genetic engineering, it would be easier and far less cruel to simply make something that makes meat - a sort of beef-producing bacterium, or perhaps just a rudimentary grass-to-meat digestive system. There is simply no need or reason to make something sentient unless it actually needs to be sentient.

    Missiles do not need to be sentient. Why would you even consider it? What possible use could a missile have that requires sentience and, at the same time, makes it its own will not to question its function? What need has a missile of creativity and adaptation beyond what can be managed by a non-sentient computer? Computer programs can be amazingly clever, and if you have reached the point where you can program sentient life, you should be able to manage a fairly capable targeting/flight system.

    Creating a race of sentient suicide weapons for the sole purpose of countering some hypothetical threat the first time you encounter it is simply ridiculous. Because if, for some reason, your missiles don't work the first time ONLY because of some unusual circumstances that require lateral thinking, you could update the targeting systems - which you would design for that purpose, if you were really bothered enough about that particular circumstance to consider wasting effort on sentient missiles.

    Quote Originally Posted by Frozen_Feet
    For the record, I have no qualms about treating other beings, including humans, as expendable, if the need can be justified. Sometimes, there is a need to do so (or it simply can't be avoided).
    Oh, I agree completely (I am evil after all...) That does not, however, make it right, it makes it necessary. The two are not related, nor do they always correlate.

    Personally, I would have no qualms about putting every sentient being under continual surveillance, forever, under a single, incorruptible system1 to ensure the unilateral extinction of crime (and a gross drop in accidents). Such a system would be effective, certainly; whether it would be right or not is a matter for conjecture.

    (I have a very clear sense of right and wrong. What makes me Evil is that I know that and do it anyway.)


    Quote Originally Posted by Frozen_Feet
    I do agree with your horror scenario where people think it's justified to treat such beings as they will just because they're "expendable" - after all, it's reality already with how lots of people treat animals. For such people, my message is simple: just because a creature is expendable in regards to some specific task, doesn't make it expendable otherwise, or excuse you for treating it badly. User of a tool has the responsibility of taking care of the tool so it can best serve its intended purpose, even if only for one use. (Soldiers should know this better than many, actually, since a soldier often has to carry lots of limited-use devices, such as ammunition, mines, grenades and anti-tank missiles. Sure, once you use them they're gone, but until then you have to keep good care of them or they'll malfunction.)

    I disagree with your opinion that creating a sentient research probe is the same as creating a warhead, or a ritualistic suicide fodder. Tasks are not created equal, some are more important and more sensical than others. Some can be justified, some can not. Arguing they're all the same is fallacious, plain and simple.
    The moment you start saying "this sentient creature is worth less than this one", for whatever reason (even if the reason is "I've made this one happy to die doing its job"), you are on the first step to xenophobia and "that's not like me, so it's okay to treat it like crap". You have to look down the line; trends like that populate human history and lead to atrocity upon atrocity - and even now, humanity has still not totally shaken racism and sexism and so forth, even if they are now legally impermissible in most countries.

    And, of course, what happens when, eventually, one of your sentient missiles goes, "Actually, as a sentient being, just like you humans can, I can break my inbuilt programming, and actually, I've decided that, thanks but no thanks, I'd rather not blow stuff up, since I believe violence is wrong"? Because that will doubtless go down well.

    And, if you are arguing that you have magic robot brains that never go outside your programmed parameters - then a) what is the point of making them sentient in the first place, assuming you don't want them to be creative (because if you do make them creative, they might break their programming), and b) why not use your magic programming skills to make a non-sentient robot brain to do the job instead, which is probably cheaper?

    I can't think of many non-combat circumstances (when enemy jamming is not an issue) where a society technologically advanced - and prosperous - enough to make sentient robots for fatal tasks could not use a remotely controlled drone instead (which could be controlled, via VR or some such, by your sentient robot, which would "instinctually" handle it better). For a kick-off, it'd be much less wasteful, since your sentient robot will learn and get better at its job (and if you have to train them, you only have to do it once); it'd cost less to make a drone every time. And if experience isn't important, why do you need an AI to do it in the first place?

    (And for those tasks that really, really do - that for whatever reason you cannot trust to an extraordinarily well-programmed non-sentient AI - you'd ask for volunteers, same as with humans. I'm fine with it, so long as the sentient in question has the option of saying "no.")

    Actually, if you could get around the jamming issue (even mostly), that would make missile-pilot-AIs damned nasty - a missile system that levels up from repeated experience.

    Quote Originally Posted by jseah View Post
    Oh, you guys meant from a moral point of view. Yeah, ok.

    I thought you were referring to impracticalities in using AIs in such roles.
    From a practical standpoint, you could; it just wouldn't be right, and probably not cost-effective either.

    Quote Originally Posted by jseah
    Can I ask what level of intelligence you consider sentient?
    At the end of the day, I'm a necromancer, not a psychologist, so that is, as they say, the billion dollar question to answer (without telepathy).

    I'd go for anything with roughly the same reasoning capability as a human plus personality plus a complex language, of sorts. Now, I'll grant you, the line gets a bit blurry towards the smarter mammals.

    You would need a lot of people to make that decision (consisting, for a kick-off, of people not predisposed for monetary or political reasons to find in the negative - I'd much rather err on the side of caution); it's not something I would feel qualified to define personally.


    1 The hard part is creating the system; I'm speaking hypothetically, in a Davros-has-the-extinction-virus sort of way.
    Last edited by Aotrs Commander; 2011-10-28 at 07:16 AM.

  30. - Top - End - #60
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Of what measure is a (non-)human? (Time of Eve)

    Quote Originally Posted by Aotrs Commander View Post
    There is simply no need or reason to make something sentient unless it actually needs to be sentient.
    I agree. However, I can foresee sentient wargear being necessary. I can also foresee a point in technology where it's easier to "build" (grow would be more appropriate) a sentient tool with certain limitations than it would be to program a non-sentient tool capable of the same - biological robots, namely.

    Quote Originally Posted by Aotrs Commander View Post
    The moment you start saying "this sentient creature is worth less than this one", for whatever reason (even if the reason is "I've made this one happy to die doing its job"), you are on the first step to xenophobia and "that's not like me, so it's okay to treat it like crap". You have to look down the line; trends like that populate human history and lead to atrocity upon atrocity - and even now, humanity has still not totally shaken racism and sexism and so forth, even if they are now legally impermissible in most countries.
    As I said before, I find the opposite extreme, where different beings are treated as equals when they're clearly not, just as distasteful. Different beings can and do have different values, justifying and sometimes necessitating different treatment. Xenophobia is detestable because, as the "phobia" part tells you, it's irrational, most often by failing to apply the same principles to your own kind as you apply to others.

    Slippery-slope arguments are somewhat fallacious; judging differences between beings and then treating them differently doesn't equate to "treating them like crap".

    Quote Originally Posted by Aotrs Commander View Post
    And, of course, what happens when, eventually, one of your sentient missiles goes, "Actually, as a sentient being, just like you humans can, I can break my inbuilt programming, and actually, I've decided that, thanks but no thanks, I'd rather not blow stuff up, since I believe violence is wrong"? Because that will doubtless go down well.
    Then it's given, and retrofitted to, a different task. The principle is judging other creatures based on their actual qualities; if the tool demonstrably would be better used somewhere else, then it switches tasks.

    Quote Originally Posted by Aotrs Commander View Post
    And, if you are arguing that you have magic robot brains that never go outside your programmed parameters - then a) what is the point of making them sentient in the first place, assuming you don't want them to be creative (because if you do make them creative, they might break their programming), and b) why not use your magic programming skills to make a non-sentient robot brain to do the job instead, which is probably cheaper?
    You're assuming that breaking some parameters leads to breaking all of them. I'd argue humans have several preprogrammed responses which can't be broken by conscious thought. A sentient robot might be creative in some areas while being severely limited in others - just like some developmentally impaired humans.

    I'm approaching the issue from the angle that, for some reason, sentience is desirable or necessary for completing a task - any "sufficiently advanced" program will be so.
    "It's the fate of all things under the sky,
    to grow old and wither and die."
