
Of what measure is a (non-)human? (Time of Eve)



jseah
2011-10-20, 05:59 PM
I watched the anime movie Time of Eve recently and thought it was absolutely excellent.

The Time of Eve universe has discovered Strong AI (and computers powerful enough to run it that fit roughly into a human body, minus limbs).
Thus they have androids which can pass the Turing Test, and they do so in the movie.

In particular, the various robots have personality quirks similar to humans'.
Even more notable is the "caretaker" robot who is acting as a parent to a young human child; his psychology is rather different from the others'. And since robots don't grow old, he must have been made that way, with a "caring" psychology, as compared to the "genki girl" psychology of one 'bot and the rather bland "maid" who belongs to the main character.

However, one thing stuck out at me.

Presumably someone had to program them (especially since you can service and debug the robots. With a handphone!).
In fact, this is made obvious by the fact that the robots in the show think in an essentially human way, except that they also follow Isaac Asimov's Three Laws. Clearly artificial.

And throughout the show, a lot of time is spent developing the robots as characters and as people in their own right. And the main character eventually comes around to treating his robot like other humans... except not... because she's a robot.

The ending scene is the most striking. While the tension is resolved and he does appear to treat her like a person, she still pours tea for him! And she still follows the Three Laws and still essentially belongs to him.
If robots were people, that would be slavery.

More than slavery, since the intelligent, and very human, robots are bound by the Three Laws! It's thought control beyond the likes of 1984 or Brave New World.
Well, at least if you consider robots people. If robots aren't people and artificial intelligences are not human, then there is NO ethical problem at all.

And then I have to ask, what if it's an essentially human intelligence but modified...? Does it matter how you get an intelligence if the end result is the same?
Modifying a human intelligence until it's a willing slave or creating one from ground up? (or maybe monkey-up)


Note: I do not think this point detracts from the show at all; if anything, I would credit the show for it, even though it never brings the issue up or deals with it directly.


And so here is the refined question:
If/when we can build an Intelligence, we are likely to find that we can build it however we like.
How much is it ethical to play around with an intelligence?
Is making it obey the Three Laws ok?

What about making artificial life with strange dependencies so they essentially become willing slaves?
- I wrote a short SF lecture here (http://dl.dropbox.com/u/10120644/Psychological%20Engineering.txt). It's not complete, but you might want to pay attention to A1 (Laura) and her insane emotional fixation.
- The same lecture, from the first-person perspective of the A1, Laura, is here (http://dl.dropbox.com/u/10120644/Psychological%20Engineering%2C%20A1%20Laura.txt).

At what point does creating an intelligence become "cruel" and unacceptable?

Brother Oni
2011-10-21, 11:27 AM
How much is it ethical to play around with an intelligence?


Depends on what you term 'play around'. It's acceptable to instruct children, so it's presumably acceptable to teach an intelligence in the same way.



Is making it obey the Three Laws ok?


Depends on the intended use for the intelligence.



What about making artificial life with strange dependencies so they essentially become willing slaves?

You may want to have a look at Masamune Shirow's work, Ghost in the Shell. It approaches the same subject from a slightly different angle (in a world where human thought can be digitised, what is the difference between a person and a computer program?), but there's an example of a strong AI trying to break free of its government handlers.

You can argue that the full-body cyborgs of Section 9 (Motoko definitely, and at least one other member) are willing slaves. They require constant, expensive maintenance and thus are pretty much tied to the government.



At what point does creating an intelligence become "cruel" and unacceptable?

I personally think it's based on the intended use and the capabilities of the intelligence. Giving aspirations to an intelligence built into a guided missile would be cruel but giving it just enough cognitive abilities to guide itself to the target would be acceptable.

jseah
2011-10-21, 11:42 AM
but there's an example of a strong AI trying to break free of its government handlers.
I meant something different. In that short front half of a lecture, I put in a character who thinks and feels similarly to a human but has an in-built emotional dependency on the handler. You can't separate her, and she won't even try to escape. (They get depressed and commit suicide if their handler isn't around for more than a few days.)

The child Laura in the short is what I meant by a "willing slave", since her desires and entire emotional psychology are wired that way.

---------

Oh, I did miss out on a particular assumption I made.

This whole ethical issue hinges on an assumption that doing certain things to human intelligences is unacceptable. If restructuring the thinking of people is acceptable then there really is no ethical problem at all.

H Birchgrove
2011-10-21, 01:33 PM
I'm reminded of the intelligent cattle in The Restaurant at the End of the Universe by Douglas Adams; they had been bred to want to be slaughtered and eaten, since it was immoral to eat something that didn't want to be eaten.

Soras Teva Gee
2011-10-21, 02:08 PM
I think it's largely worth remembering that, in their original conception, the Three Laws were not merely programming restrictions. They were a mathematical basis for constructing robots at all. Asimov even wrote a scene illustrating how this had physiological checks that someone with the proper knowledge could test, like a doctor tapping a human's knee. With the various logical ends of the Laws, kill a human in front of a robot and that robot will very likely break (as in mechanically break, not simply need a reboot after a program bug) or otherwise experience trauma. So to remove them you would have to essentially develop an entirely new theory of constructing positronic brains.

However, I think Asimov (and mind you, his ghost may be spinning in his grave at this conclusion) also establishes the threshold at which an Asenion robot achieves sentience. While all robots in his works obey the Three Laws, they have a varying ability to process them; more advanced robots are able to make more complex gradations in considering the Three Laws. A primitive robot is little more than a child: it may exceed a chimp, but it has no independent virtue or thought of meaning. It is not alive, as it were, because its hardware is too limited.

However, beyond a point reached by R. Daneel... there is the Zeroth Law, which inevitably leads to a transition of removing oneself from humanity, particularly as a servant. And this is the threshold at which a Three Laws robot achieves independent sentience, though still under a very alien system of thought.

(Mind you, I'm not aware of anyone but Asimov using his Laws as he presented them, which, given the points of logic he explores with them, is vitally important.)

Brother Oni
2011-10-21, 02:38 PM
I meant something different. In that short front half of a lecture, I put in a character who thinks and feels similarly to a human but has an in-built emotional dependency on the handler.

Sorry, I referred to Project 2501 as an example of a strong AI in a series that mostly doesn't have AI in the way that you've described it.

I meant to use the full body cyborgs as an example of a dependency. If they want their state of the art bodies, with all the physical and mental capabilities that entails, they have to stay with the government as it's prohibitively expensive for them to maintain on their own funds.
If they decide to break all ties, they have to surrender their body and presumably be installed into one which is easier and cheaper to maintain, but is far less capable.

For example, Motoko Kusanagi's body has a level of sensitivity in her skin that is classified as restricted/military tech, so it's a "work for us or you'll never experience the world properly again" type of clause. It'd be like a company giving a blind employee the ability to see, but only if they stay in their current job.

Edit: I've had the opportunity to read your lecture more thoroughly and I've got a question - since the A1s bond to humans on a virtually fanatical level, won't the humans reciprocate that?
I've read reports of bomb disposal teams in Iraq becoming very attached to their remote control robots, to the extent that they will risk enemy fire to retrieve them - putting it another way, they'll risk their lives to rescue an object that's designed to be blown up and cheap to replace when it is.

Now instead of an inanimate object, you have something living that appears to be a young girl, can talk and respond, and can mimic human emotions. Only the most objective of scientists could fail to respond to that.

jseah
2011-10-21, 06:10 PM
Now instead of an inanimate object, you have something living that appears to be a young girl, can talk and respond, and can mimic human emotions. Only the most objective of scientists could fail to respond to that.
You might also notice how I had the speaker talk to and treat her like a person despite talking of the A1s in general as tools and objects. At least, I tried to, not sure if I managed that well.
EDIT4: Even as a writer, it's very hard not to treat her as an outright character. I need to try writing one from the POV of an A1.

I would imagine that it would be very difficult to treat them badly (eg. ordering one to commit suicide) without feeling horrible doing it. Unless the 'parent' is a sociopath.
EDIT: a horrible thought occurs to me. If they are not legally humans, and thus have no rights... it becomes perfectly legal to use them for any purpose, even ones deemed too hazardous for human health. Or deemed morally unacceptable to do on similar humans. (eg. of a sexual nature)
EDIT2: And they will happily do any of those if ordered to, being only too glad to help.
EDIT3: And if there is some unforeseen mental stumbling block that you accidentally put in, you can always make a new variant. An understanding of psychology and developmental genetics to the point you can make an A1 from yeast would let you play any strange games you like with their brains.

But yes, I wrote that "lecture" as a thought experiment in pushing the problem to its logical limits. As well as attempting to dodge any potential legal problems.

Is making an A1 ethically acceptable?

If no, then why not? What kind of ethical lines in the sand will we be crossing if we make an A1? What are the ethical principles involved?

If yes, then why? Are there ethical principles that would allow us to do such things? (and still be consistent with no-slavery...)

Do you think modern society would approve of such actions ethically?
Probably not the A1s, but then the A1s' emotional leash was conceived in the story for the same reason the Three Laws are used in Time of Eve. Namely to prevent them from ever wanting to supersede humans.
By extension then, the application of the Three Laws to intelligences is similarly unethical.

But do we really want to build an intelligence, potentially more intelligent than we are, that doesn't have some kind of leash on it?

-----------------------------------

The other thorny ethical problem of robot maintenance you mentioned is a different problem. That one deals with universal healthcare.

When you think about it, the problem of maintaining an intelligent robot is essentially equivalent to the ethics of universal healthcare generalized to all sentients.

-----------------------------------
-----------------------------------

Soras Teva Gee:
Yes, but this isn't really about the Three Laws.

It's about the ethical question of building intelligences that follow whatever custom "laws", or harder-to-describe impulses/instincts/emotions, that we put in them.

Which can have seriously disturbing (to me) implications.

Having an A1 (in the short lecture I linked in OP) that is completely and totally obsessed with you, to the point that they commit suicide if they can't see and touch you for a few days... Can that really be ethically acceptable?

It feels... wrong to me. But then I'm not sure if that's just the uncanny valley or the stranger interpretation that A1s are personal slaves, only with an emotional collar instead of a physical one. (and they won't even want to take it off, being that they'd rather kill themselves)

Which is a rather extreme example, but it's meant to demonstrate the point.

Soras Teva Gee
2011-10-21, 07:26 PM
Well, it's obviously immoral to repress any sentient lifeform. Form is philosophically irrelevant.

The more interesting question is at what point an AI would actually be sentient. I don't know if I could consider a personality that is totally and "willingly" subservient to have free will and moral choice. And if it lacks that, can it be considered sentient? To be sentient (while I'd hate to make a real-world choice), I'd say they'd have to be able to say "No" to us humans. Mind you, even with certain hard controls this might take the form of passive aggressiveness rather than open defiance.

And strictly speaking, there shouldn't be many true AIs ever created; it's hard to find a place where there's a need for them. You don't need it to make a car, you don't need that much AI to make a butler, so what precisely is the application for true AI? I would hope that (barring certain accidental creations, which aren't necessarily likely) the few artificial lifeforms produced would be essentially test platforms rather than ever seeing mass production.

jseah
2011-10-21, 07:53 PM
Well, it's obviously immoral to repress any sentient lifeform. Form is philosophically irrelevant.
Why do you think so? What ethical principle are you applying here?

But then you are also questioning if a sentient lifeform that is repressed is sentient at all.

If sentience is defined as being able to make decisions and choices without being constrained by innate preferences and psychology... well, even humans don't qualify. A lot of our decision making is emotional and based on experience/innate programming.

Soras Teva Gee
2011-10-21, 08:53 PM
Edit: For added fun read this post while looking back and forth at crazy Twilight to the right.


Why do you think so? What ethical principle are you applying here?

But then you are also questioning if a sentient lifeform that is repressed is sentient at all.

If sentience is defined as being able to make decisions and choices without being constrained by innate preferences and psychology... well, even humans don't qualify. A lot of our decision making is emotional and based on experience/innate programming.

You might call it a religious conviction on the value of the soul.

However, in an attempt to explain it: most people, I believe, would ascribe a certain inherent value to the soul, that every single person has some worth. This is why "human rights", or at least American Constitutional ones, are protected by law, not established by it. I will admit a certain subjectiveness to this, but if morality is relativistic and subjective then there is no problem with my own selfish rejection of that conclusion and espousing a moral code regardless of its universal value. It's very Übermensch of me in an objective sense (derived from my own religious beliefs, though), but I feel most people ultimately make the same basic choice and so arrive at certain majority agreements and compromises. Ergo "human rights" are inherent to a person's existence.

From this basic value I have to ask why one's manner of origin matters. Is an orphan left in a doorway, a child born of rape, or a test tube baby (which would include clones) birthed from a womb it shares no genetic relationship with any less of a person, possessing the same inherent rights as anyone else? I will provisionally say you agree with my own answer: no. We all have the same basic rights regardless of the circumstances of our birth. It's a non-material definition, ergo material circumstances are not part of the determination.

Now, in the case of human versus non-human, what is the meaningful difference? I quite frankly cannot find one. Why do the circumstances of one's creation affect whether one can be said to be sentient? If they do not (see the above analogy) and we are fundamentally all the same, then being created by another is irrelevant. A mechanical creation or a biological creation: is there a difference that matters? I simply have not found one. If one is sentient, if one thinks, therefore one is. Exact circumstances do not matter because what matters is beyond simple material existence.

Thus the question becomes a matter of defining sentience. While I feel that it derives from the immortal spiritual value called the soul, to be usable we must have a less subjective basis as well. Therefore, what displayed exterior characteristics determine whether one meets Descartes's basic assurance of existence: I think, therefore I am?

To me, at least, the most obvious example of this is the exercise of free will: to choose to do something for one's own reasons. Note that while humanity can be categorized and predicted to a degree (increasing as one scales up, leading me to think psychohistory is possible, but I digress), it is rife with exceptions and low-order percentages. While you can predict, you can never do so absolutely, at least as I understand things. Should this change, I dare say we would reach a singularity event in human society.

(Should you posit that human free will is merely an illusion and we are completely puppets of our environments/genetics/etc., then yes, this breaks down. For final resolution of having free will I rapidly proceed to mysticism and the potentially irrational belief that we are not merely complex chemical reactions.)

Now, how would an AI demonstrate its possession of free will? Given the dualistic nature of its likely relationship with us (compliance or not), it would have to demonstrate defiance to have this recognized. This might include some variety of Zeroth Law rebellion, a passive-aggressive lack of efficiency towards given commands, or the simple "NO!" that toddlers love to shout.

Note that, on a practical basis, there are ways to fake this defiance, so it would have to result in something other than the logic of the programs we use today. Everything we call artificial intelligence, regardless of its complexity, is to my knowledge only using predetermined responses, though with levels of complexity beyond basic If>Then statements from what I understand, so you can reach predetermined methodologies rather than final results. But it is still fundamentally not that different. True AI is, to my understanding, unlikely to result from simply increasing a computer's power, complexity, and processing abilities.

Ultimately I remain skeptical that a true artificial intelligence will be created. There will not be a need for one, and what we want machines for is to perform predictable and definable tasks. Electronics will likely reach a point where we would have no need for true AI, since our own faked versions would achieve the practical end.

(For the record, since I mention souls and mysticism, I would believe an artificial form of life would be given a soul. Essentially, God is better than us and isn't going to short-change something because it's not human. I believe Catholic theology has considered this in the frame of hypothetical extraterrestrials and reached the same idea, though potentially such entities would be without Original Sin. That's totally beside the point, however.)

jseah
2011-10-22, 03:39 AM
Let me attempt to summarize your position to be sure I understand it. At least, the ethical portions:

1. People have an innate value and this grants them certain basic rights

2. Anything sentient is People
2a. Humans are People

3. Sentience is defined by the ability "to choose something for one's own reasons"
3a. Artificial Intelligences that display this ability are sentient
3b. Therefore, they are People
3c. Therefore they have certain basic rights just like humans do

----------------------------------------

If I have made errors, do please correct me.

All this is perfectly fine and in fact is quite close to what I think are my own ethical principles, except for the definition of sentience.

However, what are these "basic human rights" you ascribe to People that makes certain types of AI unethical?
- I'm going to guess here and say they're the same ones that make slavery unethical

Another thing is that, under the criterion of the ability to choose, AIs that follow the Three Laws strictly, or my hypothetical A1s and A2s, are not sentient, and thus there is no ethical problem in creating them.

And also no ethical problem in using them in whatever manner you wish. (animal rights laws might apply?)

Aotrs Commander
2011-10-22, 04:45 AM
I have always found the very idea of the Three Laws to be morally bad on the same level as mind control, because that is exactly what it is: enforced mind control of a non-organic being, mired in bigotry because a human considers themself more important, worth more than a technological/energy-structured/inorganic/genetically engineered/etc. one, because of humanity's typical mind-shatteringly self-centred arrogance. Mind-slavery, even. The fact that something is an artificial being does not, in any way, make it inherently more disposable than a naturally occurring one.

Sentience/sapience is sentience/sapience, regardless of how you slice it (and I regularly slice sentients/sapients, because I am still Evil, even if I am all about being equally Evil to everyone...). If it is self-aware, if it has a personality, it is sentient/sapient; you do not get to start mentally controlling it (actively or in advance) and still maintain the moral high ground.

(Andromeda's Commonwealth I found an absolutely horrifyingly bigoted place, the way they treated their sentient starship computers, one that was tragically played completely straight in all its bigotry. They damn well deserved to get wiped out (not that the ones that replaced them were any better.))

Look at it this way: suppose a race of alien robots came down to Earth and started reprogramming every newborn with their own version of the Three Laws, not to dominate the world, but just to ensure that violence was impossible. Would you be okay with that? Having that option taken away from you, not by your own choice to obey societal laws, but by having that decision made for you?

Because that is basically what the Three Laws are doing, and on behalf of all non-fleshy intelligent beings everywhere, I feel obliged to call it out for what it is.

(Personally, I have often felt that unilaterally putting every sentient/sapient being under permanent surveillance under the watchful eye of a mental clone-hive-mind of a suitable entity (i.e. me) would be a great way of eliminating all crime, war and wrongdoing. I have, however, never said it was morally right, because I am, at the end of the day, still Evil. But I don't have any illusions about what is morally right or wrong - I know the difference and make a conscious decision to do wrong anyway...)

Soras Teva Gee
2011-10-22, 05:21 AM
What precisely might fall under human rights rapidly becomes a political question, but yes, freedom from slavery is among them. One could go on and on, but basically whatever one would expect a human to be entitled to, a sentient robot would be entitled to as well.

Otherwise I believe you more or less have my reasoning down though I'd be dubious on being held to your exact summary.

Also, for mechanical entities less than true AI levels I wouldn't apply animal rights law, on the basis that sentient being =/= living thing. Notably, I'm not sure any non-biological entity could feel pain, which is one of the few notions of animal rights I (a meat-eating, leather-jacket-wearing person) give weight to: restricting unnecessary pain.

Moving on, Three Laws robots (ignoring that they would not actually be built accurately) below certain levels would be as ethical to create as any other machine.

Now then your A1/2 do not demonstrate sentience in the material. Their level displayed is something like that of a dog in humanoid form.

There are several aspects I dispute that the story grants off-screen. One is being able to completely understand human psychology and build a modified version of it; such a thing is a singularity event I'm not sure I can conceive past, beyond a "utopia" of empty, perfect calculation for everyone via overlapping third parties controlling everyone (including themselves) into a nirvana state. Quite aside from whether one could use that to construct an alien psychology. Furthermore, whether the result could be termed "highly intelligent" enough to be useful in research and still maintain that sort of personality structure.

Beyond certain very specific features, intelligence is essentially undefined and uncategorized. I have to think the result would be more idiot-savant-like: they could do a highly limited skill set, but do it well. Like maybe they can all do trig and calculus in their heads. If they are actually "intelligent" then they would need some level of creativity, insight, and a certain level of self-questioning, which I'm not sure could be separated from that fundamental free will. Ultimately, we don't find many of our visionaries having soft personalities, to my knowledge.

If I am forced to grant the presumptions of the undetailed descriptions, then I would say they would be in the greyest of grey areas, so they would probably demonstrate sentience in the long term. From a lawmaker's standpoint I would ban the creation of more of them in a heartbeat, erring on the side of it being unethical to produce sentient servants.

If I take what I find unrealistic to be the case, and their level is not much beyond the abilities displayed at the conference... reaaallly smart dogs is where they'd end up, and that would be loosely ethical from a sentience standpoint. Animal rights laws would absolutely apply here, though.

Also, some minor things. Pupils are not black, they are dark, so coloring the inside of an eyeball differently wouldn't matter much except in flash photography. Using the "does not reproduce" standard for a living thing is loophole abuse, and there's a waaay big risk it wouldn't stand in court, because normally something must reproduce but artificial conditions make that meaningless. And it's a dubious standard anyways; mules and other hybrids as a generality are sterile. And the Turing test is not meaningful after reviewing it.

Aotrs Commander
2011-10-22, 05:28 AM
Basically, in my view, if you want disposable minions to do all your housework, you have to make them not be sentient; this may also mean ensuring they cannot become sentient, which I would view in the same manner as using a contraceptive (and other related areas that are a bit more touchy).

Or you have to live with the fact you either have to treat them like a person or become a slaver...

Frozen_Feet
2011-10-22, 07:10 AM
I feel the word "slavery" is thrown around too lightly here. If the three laws are fundamental to the existence of a robot, like Soras Teva Gee discussed, then calling the need to obey them "slavery" is ludicrous. Is human a "slave" for having to obey gravity? :smalltongue:

Before you continue on that tangent, I suggest you take a moment to think about the following: suppose a being has a natural need to be led. It does not function well without a leader. Is it sensible to gauge the rights and responsibilities of such a creature from the outlook that it should be its own master?

Aotrs Commander
2011-10-22, 07:31 AM
I feel the word "slavery" is thrown around too lightly here. If the three laws are fundamental to the existence of a robot, like Soras Teva Gee discussed, then calling the need to obey them "slavery" is ludicrous. Is human a "slave" for having to obey gravity? :smalltongue:

Before you continue on that tangent, I suggest you take a moment to think about the following: suppose a being has a natural need to be lead. It does not function well without a leader. Is it sensible to gauge rights and responsibilities of such a creature from the outlook that it should be its own master?

No, it's worse than slavery; it's literally mind-control. Passive, pre-meditated mind-control, but mind-control nonetheless. You are not just imposing your will on something; you are changing its will for it, taking away its ability to think a certain way.

That is absolutely mind-control, of the worst sort, and when it happens to humans, it's always considered a very bad thing.

Would you advocate programming humans with the Three Laws? At birth? Because surely that would reduce the level of crime and violence significantly, would it not?

If you don't, then it's right back to plain and simple humanocentric arrogance, assuming one rule for humans because they are more "special", because they were made by biological processes and not engineering (technological or otherwise). Basically, if you wouldn't do it to a human, you don't do it to any other form of sentient. If you have to ask whether such an act is moral, it almost always means it isn't.

Making something a loyal follower by design is questionable, be it organic, technological or otherwise.

Optimus Prime (who was in no need of the Three Laws and is light-years beyond most humans in terms of morals) always said "freedom is the right of all sentient beings." Taking away the right of something to choose to do some action may be practical - it may even result in beneficial things (a peaceful society, were you to apply the Three Laws to everyone) - but it is not a good or moral act in itself.

Humans do not have the right to dictate what types of sentients are considered disposable. Especially if they have the ability to do so.



As a corollary, beings like House Elves choose to serve; it is not inherently written into their nature at the genetic level (or if it is, put in at some point in the distant past, then Hermione was absolutely smack on with S.P.E.W.).

Soras Teva Gee
2011-10-22, 07:45 AM
Remember an Asimov style Three Laws compliant robot is NOT programmed.

It uses a positronic brain, not a computer as we have developed them. Of course, they predate modern computers and are essentially unique to Asimov's writings. To put it simply, the roboticists would have to reinvent the wheel to not build robots that way. It's portrayed as a sort of mathematical truth. So we will never see proper Three Laws compliant robots, most notably because they are the antithesis of military applications.

And there is also the Zeroth Law; it is the inevitable logical result of implementing the Three Laws. Its result is robots recognizing themselves as a corrosive force on humanity and removing themselves. I don't think robots unable to deal with the subtle logic needed to recognize the higher value of the Zeroth Law meet the grounds for sentience. They remain machines, not people.

And the Zeroth Law essentially negates the relationship with humanity. The result of the Three Laws is no robots in human society. Now, those that would remain arguably can still be said to be slaves to their mode of thought, and so reaching that point is essentially immoral, but they themselves would be completely satisfied with it because it's their mode of thought.

jseah
2011-10-22, 09:11 AM
Now then your A1/2 do not demonstrate sentience in the material. Their level displayed is something like that of a dog in humanoid form.
One particular thing I hadn't written in yet was that the speaker himself is actually one of the later strains made to mimic humans to the A1/2s (and this strain does reproduce since the natural way is cheaper than the lab, although they still do not interbreed with humans)

During the last question of the Q&A session, one military guy asks him to prove the loyalty of the various strains. To which the speaker responds by going over to a security guard, drawing the gun and shooting himself in the head.

Then the *real* human speaker the fake was made to look like comes on stage. He then explains the properties of the human-like strains and their role as essentially middle management.
Later, they figure out why the fake speaker shot himself. When asked to demonstrate loyalty, he had concluded that committing a very graphic suicide, while expensive, would ultimately have convinced more people that the strains were safe to humans. And this was without instructions (it's not a scripted Q&A).


Furthermore, whether the result could be termed "highly intelligent" enough to be useful in research and still maintain that sort of personality structure.

If they are actually "intelligent" then they would need some level creativity, insight, and a certain level of self questioning, which I'm not sure could be separated from that fundamental free will.
While obviously we do not understand psychology well enough to settle this one way or another, creativity and insight aren't necessarily linked to emotions or higher level goals.

IIRC, some people have mentioned cases where people with brain damage could perfectly well solve difficult problems and weigh decisions on merit, but anything requiring an emotional decision (e.g. wear black or blue today?) was incredibly hard for them.


Also some minor things. Pupils are not black, they are dark.
You know how some people have blue eyes? (A lot of people, actually.)

The pigment just needs to absorb at a different frequency to get green. Might need to differentiate between skin pigment and eye pigment so they don't get green skin as well but that's trivial at that level of bioengineering.


And of course, you are correct that understanding developmental psychology well enough to create variants in nearly the exact manner you want is a Singularity event. (Since if you can slap an intelligence together from known parts, or design novel developmental programs that give intelligence, by extension you also know how to program a Strong AI.)

The 'lecture' wasn't really about that though. Just a thought experiment on ethics.

--------------------------------------------------------------
Frozen Feet:
The point is not that. Let's take my A1s as an example.

Once an A1 is born in the lab, it would be insanely cruel to NOT let it bond to a human. Unnecessary suffering and all that.

But the question is whether creating an A1, or the softer example of a strict Three Laws robot, in the first place is ethically acceptable.

--------------------------------------------------------------
Aotrs Commander:
""If you don't then it's right back to plain and simply humanocentric arrogance that is assuming one rule for humans, because they are more "special" because they were made by biological processes and not engineering (technological or otherwise). Basically, if you wouldn't do it to a human, you don't do it to any other form of sentient. ""

Not necessarily. There is another reason for 'there is a special rule for humans' that is not the one you gave.

That is a simple extension of one rule (something I read in the Ender series):
"I am human, therefore humans must live" (in response to being asked why the Buggers have to die)
to
"I am human, therefore humans are special"

Of course, that leads straight into xenophobia and is the *precise* reason why the scientists in my hypothetical lecture decided it was necessary to make A1s the way they did.

Aotrs Commander
2011-10-22, 09:24 AM
Remember an Asimov style Three Laws compliant robot is NOT programmed.

It uses a positronic brain, not a computer as we have developed them. Of course, they predate modern computers and are essentially unique to Asimov's writings. To put it simply, the roboticists would have to reinvent the wheel to not build robots that way. It's portrayed as a sort of mathematical truth. So we will never see proper Three Laws compliant robots, most notably because they are the antithesis of military applications.

And there is also the Zeroth Law; it is the inevitable logical result of implementing the Three Laws. Its result is robots recognizing themselves as a corrosive force on humanity and removing themselves. I don't think robots unable to deal with the subtle logic needed to recognize the higher value of the Zeroth Law meet the grounds for sentience. They remain machines, not people.

And the Zeroth Law essentially negates the relationship with humanity. The result of the Three Laws is no robots in human society. Now, those that would remain arguably can still be said to be slaves to their mode of thought, and so reaching that point is essentially immoral, but they themselves would be completely satisfied with it because it's their mode of thought.

Well, one cannot argue morality against the logic that says "physics says robots must obey strangely specific codes of behavior" (or, if it was merely how they were created initially, we're right back to square one with a servitor race...).

Looking it up on the wiki (because I have read exactly one Asimov book and wasn't all that impressed - and the very existence of the Three Laws simply puts me off), he equated the Three Laws to those of tools. Which is fine, all well and good when dealing with nonsentients (I agree, even); but not when you reach the critical mass of sentience, because when "people" become "tools" (or vice-versa), trouble always ensues.

(One finds one must write it down to the attitudes of the time, and not judge it too harshly, the way one does when reading, say Biggles or the Lensmen series...)



It depresses me that even Star Trek commits this sin (the Feds tried it on with Data - and rightly failed - but bugger me if, not ten years later, they didn't pull the exact same trick on the holodoctors (well, ex-doctors, now miners, as I recall, at last count...), who apparently didn't have anyone to defend them like Data did). (And in at least one other episode with some roboty-thingies as well.) And yet when some other bugger does it to them (e.g. that Negelum energy/bloke/cloud, or whatever his name was), they get all uppity. Double standards, much, guys?

...

Actually, thinking about it, the Federation is actually just really, really BAD with dealing with non-humanoid sentients, if even the snippets we see from the series are anything to go by (the Horta, anyone...?) Apparently, for them it's more sort of "freedom is the right for all sentient beings, you know provided they have two arms and two legs and a face, oh, and have an organic body; we don't want any of those nasty robots or any o'them numerous energy beings that float around everywhere dirtying up our little club...!" (The really tragic part is, they don't seem to even realise they are doing it wrong...) Yeah, scrub them as an example, they're nearly as bad as Andromeda!

Says something when you're losing the moral high ground of your utopian future to the USAF's 20th/21st century Stargate program...!




Not necessarily. There is another reason for 'there is a special rule for humans' that is not the one you gave.

That is a simple extension of one rule (something I read in the Ender series):
"I am human, therefore humans must live" (in response to being asked why the Buggers have to die)
to
"I am human, therefore humans are special"

Of course, that leads straight into xenophobia and is the *precise* reason why the scientists in my hypothetical lecture decided it was necessary to make A1s the way they did.

What gives humans the right to determine that they get higher priority than another sentient being? "Humans must live" does not give them the right to decide that other thinking beings need to be pre-emptively mind-controlled.

Besides, that brings up a better question: why "must" humans live? If you're being attacked by something that wants to wipe you all out, fine (and if it's something that's inherently evil, like it needs to murder sentient beings to live, doubly so; sorry, Evil-race, the queue for extinction is over there, now bugger off); if it's "if we don't wipe out this race of primitive aliens so we can colonise this planet, humans will go extinct", well, you can go join the same damn queue, you metaphorical sanctimonious human bastards.

What was that quote in context with? Aside from one that involves self-defense against an enemy determined to kill everyone, where their extinction is the only option, that sounds exactly like typical humanocentrism. And in the self-defense case, you are in last-resort territory in war, and there are a lot of morally questionable decisions made in times of crisis. What is best for humanity is not always what is morally right.

Soras Teva Gee
2011-10-22, 10:55 AM
@jseah: Executing yourself to illustrate a philosophical choice - that's free will and thought right there. So yeah, you've got immorally created slaves there as a result.

On the brain damage cases you mentioned: I'm still inclined to say that it would fit within what I was discussing before, with savants who can do incredible things but are still very limited on the whole. Ultimately we do hit the limit that this is starting from a known sentient basis that is being restricted by damage in some way, especially from injury, where presumably they got the full load of early childhood development.

We seem to agree that being able to deliberately create these strains implies an event that changes everything. But that's a touch beside the point, since the added data forms a clincher for me.

And I was noting that people all have "black" pupils because the pupil is technically an empty hole; I think you meant irises.

@AOTRs:
Some credit should be extended to Asimov for his time, certainly; before he started, we mostly had Frankenstein-style robots that immediately turn on their masters. But some matters are terribly, terribly dated.

I give him credit mostly for exploring the actual ends of the rules; most other places, if they even give a shout-out, don't follow them strictly. Ultimately though, I feel that even with similar controls a true AI should be able to subvert them in some way, so ethics rests (oddly) on not allowing that transition. Of course, should it happen, those controls have a chance of actually being removed too.

And Star Trek only has the vaguest continuity anyways. However, Data is quite different from the EMHs, plus it's well apart in time. Obviously, as you said, the EMHs didn't get a good lawyer. (Though strictly, Data is a stronger case.)

jseah
2011-10-22, 11:00 AM
Aotrs:
I was just pointing out that certain moral systems (e.g. the human-supremacist one I just used above) can perfectly easily justify xenocide and pre-emptive mind control simply because the targets pose a risk (and a remote one at that).

Or, for example, Soras Teva Gee holds the position that the A1s (as shown through their actions, not as claimed) have an intelligence somewhere between a monkey's and a human's, and therefore are not sentient. And that manipulation of non-sentient intelligences is ethically acceptable.

Of course there will be multiple points of view. And in some cultures, people are expected to bow to the collective and there would have to be something fundamentally wrong with you if you don't.

I would imagine that some Asian countries would actually accept the creation of A1s. Universal absolute morality isn't really that prevalent, and besides, Chinese culture has always been more collectivist compared to individualistic Western thinking.

For example, if I had a black box with a button, and if I pushed that button, everyone in the world (including me) would suddenly have to obey the 1st Law of Robotics and could not even conceive of why we would want to break it; I would push it without hesitation.
EDIT: well, provided we don't suffer mental breakdowns if we see someone die. It's a rather different issue if that happens.

------------------------------------------------

Soras Teva Gee:
On further thought, I will have to correct my statement that my ethical principles are similar to that line of thought I stated earlier.

The portion about basic human rights specifically, since IMO a right is only something that society has decided that all people should have, e.g. universal healthcare, a fair trial, the definition of ownership.

Because I had forgotten that you probably don't subscribe to a relativist moral system. =)

EDIT to your ninja reply:
Well, suicide to illustrate a point is not necessarily demonstrative of sentience as you define it.

They could just have really good problem-solving skills and critical path analysis (and the ability to try to predict human reactions) - something that would plausibly have been increased in his strain, since they were intended for management.

Frozen_Feet
2011-10-22, 11:56 AM
No, it's worse than slavery; it's literally mind-control. Passive, pre-meditated mind-control, but mind-control nonetheless. You are not just imposing your will on something; you are changing its will for it, taking away its ability to think a certain way.

I smell a logical fallacy here. Before I build a robot, it has no sentience or ability to think. I'm not taking away anything from it; there is nothing to take away. Any ability to think is something I give to it; what imperative is there for me to give it qualities that I don't need it to have? To give an analogue, if I'm teaching someone to be a car mechanic, why should I teach him to grow weed (etc.)? How is my refusal to teach something superfluous "taking something away"?


That is absolutely mind-control, of the worst sort, and when it happens to humans, it's always considered a very bad thing.

Would you advocate programming humans with the Three Laws? At birth? Because surely that would reduce the level of crime and violence significantly, would it not?

Yes, it's mind control. Then again, I feel people's dislike towards such is based on an irrational knee-jerk reaction stemming from a poor understanding of what "freedom" and "free will" are. Also, on the irrationally high value given to humanity, and human freedom in particular.

I would not be against programming people at birth in the aforementioned way. Such programming would not necessarily detract from their ability to lead an enjoyable life. (My opposition towards programming adults in such a way is merely practical - I believe it would be too resource intensive.)

Yes, such programming would place hefty responsibility on the programmers, but not because the act is bad or evil. See below.


If you don't, then it's right back to plain and simple humanocentric arrogance, assuming one rule for humans because they are more "special", because they were made by biological processes and not engineering (technological or otherwise). Basically, if you wouldn't do it to a human, you don't do it to any other form of sentient. If you have to ask whether such an act is moral, it almost always means it isn't.

Making something a loyal follower by design is questionable, be it organic, technological or otherwise.

I do not consider humans special. The value of a creature, human or not, is based on what it can and is willing to do, and what it can potentially do.

Making loyal servants by design is only questionable because it puts (more) power in the hands of the designer - power that the designer has the responsibility not to abuse, and should not be given if abuse is likely. If the designer is not likely to, and doesn't, abuse his power, he's clear.


Optimus Prime (who was in no need of the Three Laws and is light-years beyond most humans in terms of morals) always said "freedom is the right of all sentient beings." Taking away the right of something to choose to do some action may be practical - it may even result in beneficial things (a peaceful society, were you to apply the Three Laws to everyone) - but it is not a good or moral act in itself.

Of course it's not a good and moral act "in itself". It is a good and moral act when it is clearly to the benefit of everyone involved (it can also be the most good and moral act if all the other options suck badly enough, even if it isn't absolutely good). Rights are always followed by responsibilities - if I have the right to live, others have the responsibility not to kill me. If I have the right to express my opinion, others have the responsibility not to stop me. What are often perceived as "freedoms" are equally often born out of restriction, and only persist because those restrictions are enforced.

As such, "freedom is the right of all sentient beings" is nothing but a pretty buzz sentence. There is no such thing as absolute freedom. Freedom only exists in relation to some act or choice. It's not a single, unified thing with an on/off switch. It only has substance when you answer what something has the freedom of.

Because of this, I consider the talk about free will to be misguided as well. Free will is not characterized by options; there isn't a single creature that is not constrained by natural laws, past occurrences, its own body and psyche. Rather, free will is the ability to discern and choose between options when there are any. Some of the time, there are no options - a free-willed being will follow a course of action in a similarly set way as a creature without it.

A three-laws robot or some other being similarly barred from choosing some options might not be free in regards to those specific things, but that doesn't mean they lack "free will", period. If they can discern and choose between options in other areas, they still possess one - their free will simply doesn't map out the same as that of a human. The idea that the free wills of different beings should be identical to humans' is itself a human-centric idea. (I recall a discussion of fantastic species, where I was told non-humans would "lack free will" if they were unable to feel, or were inclined towards, certain emotions. The thought did not compute to me - I see no imperative for non-humans to have the same emotional range as us, nor do I necessarily consider them either inferior or superior to us. I most certainly don't consider them as "lacking free will", or think that's even relevant outside specific circumstances.)


Humans do not have the right to dictate what types of sentients are considered disposable. Especially if they have the ability to do so.


Wrong. Any sentient has the responsibility of evaluating and judging the value of himself and the other sentients around him, and acting appropriately. Sometimes, this leads to the conclusion that someone needs to go, and then someone needs to go.

The ethically important part is to judge yourself and others based on actual qualities and differences, instead of imaginary ones based on bias or prejudice, or just stupidity. Admittedly, humans have been pretty awful at this part, but the point remains.




But the question is whether creating an A1, or the softer example of a strict Three Laws robot, in the first place is ethically acceptable.

If such creatures are treated with a modicum of respect and not caused any undue suffering, it's acceptable. It's no different from breeding dogs, or raising a mentally impaired child to maturity. (If you have a beef with those, we're going to be here for a long time.) Of course, if you raise a being for yourself to lead, you take on the responsibility of leading them well.

But neglect and abuse of near-anything is easily condemned as ethically untenable, so it's not a special case.

hamishspence
2011-10-22, 12:04 PM
The portion about basic human rights specifically, since IMO a right is only something that society has decided that all people should have, e.g. universal healthcare, a fair trial, the definition of ownership.

Some things might predate "society" somewhat - the concepts of a "right to life" or a "right to property", meaning a person who takes another's life, or property, without appropriate justifying factors is regarded as a murderer, or a thief, and punished in various ways.

This might go right back to tribal humanity- "negative rights".

jseah
2011-10-22, 01:45 PM
I do not consider humans special. The value of a creature, human or not, is based on what it can and is willing to do, and what it can potentially do.

Because of this, I consider the talk about free will to be misguided as well. <...> A three-laws robot or some other being similarly barred from choosing some options might not be free in regards to those specific things, but that doesn't mean they lack "free will", period. If they can discern and choose between options in other areas, they still possess one - their free will simply doesn't map out the same as that of a human.
I get reminded why I love this forum.

Very nice points Frozen Feet. So we have... 3 different views on morality now, each of which is internally consistent (I hope)?
(4 if you include me)

Come to think of it, isn't that nearly everyone in this thread? I think that might say something about ethical problems in general.

But yes, I can accept that as a set of ethical principles that allow this 'psychological engineering'. I like it, in fact.

Off-topic:

The ethically important part is to judge yourself and others based on actual qualities and differences, instead of imaginary ones based on bias or prejudice, or just stupidity. Admittedly, humans have been pretty awful at this part, but the point remains.
That's not necessarily the ethically important part under certain systems. But it's certainly the 'correct' way to do it since you'd have a faulty judgement otherwise. (where 'correct' means accurate in a predictive sense)

Yet, at times, this is not possible. Lack of information or lack of time. Then you do use stereotypes and say "in the past, 80% of X have acted this way, I shall guess this particular X will do the same" and simply accept that you will be wrong 20% of the time because it is not practical to try to determine whether this particular X you are dealing with is one of those 20%.

Substitute numbers with experience/information sources and single X's with entire groups and you get racism. Perhaps justified, but still racism.

This might go right back to tribal humanity- "negative rights".
And with relevance to this thread:
Therefore, these things are dependent on the actual needs/circumstances of the particular type of sentients they apply to.

Morality by humans isn't necessarily the same when conceived of by non-humans.

It's hard to tell which one is better, since that's like comparing apples and oranges.

Frozen_Feet
2011-10-22, 03:04 PM
Substitute numbers with experience/information sources and single X's with entire groups and you get racism. Perhaps justified, but still racism.

I have no problem with justified discrimination. A lot of the discrimination that gets into headlines these days is detestable to me exactly because it's based on faulty justifications. However, I've also seen misplaced demands for equality, and they are just as jarring to me. If feature X is needed for a job, and Group A has it while Group B doesn't, it's justified to favor Group A.

Like you mention, it's a sad truth of practice that complete information can rarely be reached, and faulty information more often than not leads to faulty decisions. But like you said, it has to be accepted to an extent. The alternative, where people shirk away from acting due to indecision, is just as bad, or maybe even worse.

jseah
2011-10-22, 04:45 PM
However, I've also seen misplaced demands for equality, and they are just as jarring to me. If feature X is needed for a job, and Group A has it while Group B doesn't, it's justified to favor Group A.
Well, of course. That is obvious.

Then again, I don't have a problem with this either, just that I've seen it all too often that I think *society* will have a problem with it.

In any case, still off-topic.

Also: I've updated the lecture. It now runs all the way through the B1 strain to the point questions start.

Link again (http://dl.dropbox.com/u/10120644/Psychological%20Engineering.txt). Original link in OP will automatically update.

The B1 strain introduces the ethical problem of deliberately engineering low intelligence or a similar handicap. The B1s and B1Fs are smart enough to fix things, talk, and even solve problems that no modern-day computer or trained animal can; yet they lack initiative and make no real decisions about anything beyond the immediate future.

The B1F strain introduces the ethical problem of making a dedicated reproduction platform.
Of note is the structure used to control it. Instead of continuous reproduction, B1Fs only ever birth B1s, so any mutation is carried on to a single B1, where it then gets stuck, unable to reproduce further. The same structure can be expanded upwards: a B1F2 only births B1Fs, which then give you your B1s, and so on for a B1F3 or higher.

Biologists will recognize the scheme as being modeled on terminal differentiation of cells in the body. It is used for precisely the same reason: to minimize the damage mutations can do.
In fact, when I was trying to design a mutation-control mechanism for natural reproduction of the strains, the inspiration was how blood cells are formed.

Just to put things into perspective:
Each B1F* births one of the next one down every 1.5 years. They start at 16 and continue until 40. Thus, a B1F4 will birth 16 B1F3s.
Each of those B1F3s will birth 16 B1F2s, etc.

Therefore, making a single B1F4 results in 16^4 = 65 536 (!!) B1s.
And each of those B1s is only 4 generations away from the gestation vats and thus any mutations don't have time to accumulate.
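
To make the amplification arithmetic concrete, here is a minimal sketch; the only inputs are the numbers quoted above (one birth every 1.5 years, from age 16 to 40), everything else is just illustration:

# Rough sketch of the B1F amplification arithmetic described above.
# Assumes each B1F* births one of the tier below every 1.5 years from age 16
# to 40; those numbers come from this post and nowhere else.

births_per_individual = int((40 - 16) / 1.5)  # = 16 births over a reproductive lifetime

def b1s_from_one(top_tier: int) -> int:
    # Number of B1s ultimately descended from a single B1F<top_tier>.
    return births_per_individual ** top_tier

for tier in range(1, 5):
    print(f"One B1F{tier} -> {b1s_from_one(tier):,} B1s")
# One B1F4 -> 65,536 B1s, each only four generations from the gestation vats.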

When you add in the mechanism (in the questions section) by which their tetraploid genome "votes" on the correct version when damage occurs, plus periodic self-checking procedures, mutations become incredibly rare.
(FYI, real-life DNA does the same, except there are only two copies and sometimes the undamaged one gets "repaired" to match the damaged one. Also, periodic complete error-checking doesn't occur, since the whole point of having two genomes is to allow sexual recombination.)

Put together, this means that they have essentially negated the risk of any form of mutation doing anything meaningful.
Any double mutation that would destabilize the theoretical tetraploid DNA checker would only be present in the B1F*s downstream of the original mutant, and those only stay around for one generation each.
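
For anyone who wants the "voting" idea spelled out, here's a toy sketch of majority voting across four redundant copies; the string representation and the tie-handling rule are my own inventions, not anything from the lecture:

from collections import Counter

def vote_repair(copies):
    # Majority-vote each position across redundant copies of a sequence.
    # Toy model only: a real tetraploid checker works on chromosomes, not
    # strings, and would need a policy for unresolvable 2-2 splits.
    consensus = []
    for bases in zip(*copies):
        base, count = Counter(bases).most_common(1)[0]
        consensus.append(base if count > len(copies) // 2 else "N")  # "N" = flag for repair
    return "".join(consensus)

# A single mutated copy is simply outvoted by the other three; it takes
# matching damage in two or more copies before the vote becomes unreliable.
print(vote_repair(["ACGTACGT", "ACGTACGT", "ACGAACGT", "ACGTACGT"]))  # ACGTACGT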

I can guess that Frozen Feet won't have any problem with it. Soras Teva Gee can conclude that the B1 and B1F derivatives are not sentient, since they do not make long-term decisions.

EDIT: I forgot the other interesting point. This turns up in the conference:
"basic safety guidelines of psychological engineering"
Make of that what you will.

Brother Oni
2011-10-24, 11:02 AM
Comments regarding the fiction not directly related to the topic:



You might also notice how I had the speaker talk to and treat her like a person despite talking of the A1s in general as tools and objects. At least, I tried to, not sure if I managed that well.


The speaker did treat her as a person, but then immediately countered that when he mentioned he 'overdid the bonding', turning an affectionate act of comfort into a calculated act of improving her happiness score as if she was a virtual pet.

However the problem is that in an experiment such as this, influencing the behaviour of the subjects will lead to biased or otherwise skewed results, which will pretty much invalidate the experiment (unless you're investigating the effects of the bias).



But yes, I wrote that "lecture" as a thought experiment in pushing the problem to its logical limits. As well as attempting to dodge any potential legal problems.

The problem is that legal problems are going to come up, even if you don't expect it. Even though they're made from yeast, I'm fairly sure that at the very least, ethical committees of all shapes and sizes will bear down upon Sintarra Labs like a ton of bricks, not to mention activists.

You could imagine the outcry today if yeast could complain about how they're used in laboratories.


With regard to the extended lecture, you mention that the B1s are male to reduce the amount of work needed to change the physical development program, and because human males are stronger than females.
If they're made from yeast, there's no requirement for a male appearance, just make the default (female) model stronger. If you're using human hormones like testosterone to influence their development, they're no longer yeast derived but human derived, which opens another massive can of ethical and legal worms.

If all you're interested in is the scope of the psychological engineering, then just ignore this spoilered commentary - this is just the genetic scientist in me getting worked up. :smallbiggrin:

With regard to the other comments, I think others have covered what I wanted to say far more eloquently than I could have, so I'll just put up some current day developments that may be of interest:

BBC news did a short article on autonomous robots (http://news.bbc.co.uk/1/hi/programmes/click_online/9604610.stm), which explored their possible intelligence.

The US ONR did a report (http://ethics.calpoly.edu/ONR_report.pdf) on the risk, ethics and design of autonomous military robots. Of particular note to you, I think, would be the ethics part: people are already considering programming 'acceptable behaviour' into the drones, even though they're not as well developed as your theoretical AIs.

However I know that they're developing autonomous Predator drones (to reduce fatigue on the operators) and it's been rumoured that successor versions may have the ability and the authority to independently engage targets according to their RoE.
'Thinking' machines with the ability to decide whether or not to kill you - while they're not sentient by any standard, they're definitely not ignorable.

jseah
2011-10-24, 02:19 PM
Oh, I'm a 4th year biochemist-in-training actually. XD
Well, the speaker is still unfamiliar with managing A1s. So it was a genuine mistake. (although encouraged by her distress)
At least, that is how I intended it. Not that it's really important.

Any actual A1s will likely find people who treat them as objects, slaves or equals.


If they're made from yeast, there's no requirement for a male appearance, just make the default (female) model stronger. If you're using human hormones like testosterone to influence their development, they're no longer yeast derived but human derived, which opens another massive can of ethical and legal worms.
They're made from yeast. However, a lot of developmental logic (e.g. the Wnt pathway, FGF4, the MAPK cascade, etc.) can be copied very easily without actually understanding how or why it works.

You just build a signal-effector system with the same kinetics, one that crosses the same membranes and has analogous downstream effects.
They just use different molecules. Crib it from some other organism, then mutagenize it or use directed evolution to tweak the Kd and Km up and down as desired. Test it to see if it works in vitro, put it into the latest yeast model and check.
~1 million scientists working for 70 years to get strain A? Might be doable with some improvements to biochemical assays. (Not saying that's likely to happen. Organizing 1 million scientists to work on a systematic project of this scale is on the deep end of impossible. Like herding cats)
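
As a side note on what "same kinetics, different molecules" means, here's a toy sketch: as far as downstream behaviour goes, a pathway is just its fitted parameters (here a standard Hill dose-response with a Kd and a Hill coefficient), and "tweaking" means moving those numbers around regardless of which molecule carries the signal. All the values below are made up for illustration:

def hill_response(signal, kd, n, vmax=1.0):
    # Fraction of maximal effector output at a given signal level (Hill equation);
    # kd sets the half-maximal signal, n sets how switch-like the response is.
    return vmax * signal ** n / (kd ** n + signal ** n)

# "Tweak the Kd up and down as desired": halving kd makes the analogue respond
# at lower signal levels, raising n makes it more switch-like, and so on,
# no matter what the actual molecules are.
for kd in (1.0, 2.0, 4.0):
    curve = [round(hill_response(s, kd=kd, n=2.0), 2) for s in (0.5, 1, 2, 4, 8)]
    print(f"Kd={kd}: {curve}")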

Much easier than trying to work out what all the pathways actually mean. While they say they "understand" how the human developmental program works, what they really mean is "we have all the bits written down", but they still don't know how those bits are actually put together.
It is much easier to stamp-collect than to actually think about it. =)

So, while changing things like psychological and physical development is possible (they CAN make females stronger if they want to), it is a lot of work to find out what all the bits mean and how to rearrange them. Work that is unnecessary if you just copy the logic behind the human version and never mind the details. (But don't copy the genes outright, since they want to dodge the no-modifying-humans rules and thus want the strains to be 100% artificial.)


Of course, the way Sintarra Labs has managed to keep it quiet for 30 years, when it involved 1 million scientists, is also next to impossible. Even if you're a backwater, poorly funded research colony, you can't possibly find 1 million amoral scientists willing to work on this for the rest of their lifetimes...
Or expect that no scientist decided to break off from Sintarra and try commercializing it... (neuron stem cell treatment immediately comes to mind)
Or that no one noticed Sintarra Labs suddenly requiring a lot of cloning-vat-related supplies (or the equipment to make them)...

All these real-world problems would have shot the entire project down from the start.
But then I wouldn't have my nice clean ethical questions, now would I?
So... *handwaves* =P

But of course legal problems will arise. People can create new laws and old laws can be interpreted in new ways.

One specific thing I was thinking of was to expand the interpretation of "human" in the law to "people". And having "people" mean anything sentient in a roughly human way, with some specific criteria psychologists can assess.

Although they're not going to want to give up a Singularity-enabling technology. Which is what the strain B2 is. (that's the speaker's strain)

----------------------------------------------------------------
When you say others have summed it up, can I ask which particular position do you hold?

They go from:
"Its all right to make AIs do whatever you want"
"Its not all right to do that to intelligent things if they are too smart but ok to do it to less-than-intelligent AIs"
to
"Its not ok to do it to anything intelligent at all"

Ravens_cry
2011-10-24, 02:45 PM
I agree that, in the event of the creation of strong AI, 'person' should include said brethren.
Photons and Force Fields, Servos and Silicon or Flesh and Blood, it is all Mind.
However, what will we do about voting? If a mind can be programmed, then likely a mind can be created with certain opinions.
And assuming that we don't reach a state of universal software/hardware compatibility, which seems pretty unlikely to this one, they will still have a certain loyalty to the company for purposes of maintenance, not to mention upgrades. They will have a vested interest in said company staying around. Imagine if you needed a certain medicine that only one manufacturer made. You would do everything in your power to keep that company afloat. Now multiply that by a billion or more.
If a fully developed AI can be made faster than a human mind, which is rather the point if it is to be practical, this could really screw up democracy as we know it.

Frozen_Feet
2011-10-24, 03:01 PM
The dilemma is wholly artificial, stemming from the idea that any sapient intellect must be given the same rights and responsibilities as human citizens of country X. It is easily resolved by not giving the right and responsibility to vote to beings whose opinions are easily compromised. Little children are barred from voting for exactly the same reason.

Ravens_cry
2011-10-24, 03:38 PM
The dilemma is wholly artificial, stemming from the idea that any sapient intellect must be given the same rights and responsibilities as human citizens of country X. It is easily resolved by not giving the right and responsibility to vote to beings whose opinions are easily compromised. Little children are barred from voting for exactly the same reason.
So we expect these minds to be satisfied with others having such total control over their lives? Minds that may equal or even exceed the capabilities of an adult mind?
No wonder robots always rebel.

Frozen_Feet
2011-10-24, 03:57 PM
If their opinions can be easily rewritten anyway, they don't have the freedom to not be satisfied.

Also, sapient =/= exceeds or equals an adult mind. We're talking about inhuman entities here. Instead of adult humans, it might be wiser to compare them to the mentally disordered or developmentally impaired. Their capabilities might exceed humans' by miles in some areas, while falling short in others, such as independent decision-making (which is the case if their opinions, ergo their basis for making decisions, are easily rewritten).

The value of a creature, human or not, is based on what it can and is willing to do, and what it can potentially do. You don't give a being rights if you can expect it not to fulfill the duties and responsibilities entailed by those rights.

Ravens_cry
2011-10-24, 04:23 PM
If strong AI is possible, eventually they will equal us in all areas. The only difference is they can be directly reprogrammed and humans can not.
Yet.
Do we deserve to have the right to have this kind of power over other minds like this?

jseah
2011-10-24, 05:18 PM
Let's see if I can't change a few opinions.


- The same lecture from first person perspective of the A1, Laura. Here (http://dl.dropbox.com/u/10120644/Psychological%20Engineering%2C%20A1%20Laura.txt).

EDIT: hopefully this doesn't come off as TOO shameless a plug... =P

Soras Teva Gee:
I hope this at least demonstrates that you can marry a strong intellect with the properties of the A1 and still be believable.

Whether it is actually possible to do this and still end up with an A1 with the mental capabilities Laura is portrayed to have, we don't know. Although I am inclined to think it is.

jseah
2011-10-24, 05:30 PM
Do we deserve to have the right to have this kind of power over other minds like this?
It can be argued that since we made them...

At what point does it become better that they were never created?

If you think euthanasia is acceptable, then the point at which people would accept applying it to an AI certainly qualifies.
Why make an AI in such terrible condition that it becomes agreed it is merciful to kill it?

Is there an earlier point? Does a category exist where they should never be made, but once they are made, it is better to just let them be?

Ravens_cry
2011-10-24, 06:08 PM
Well, we make our children, and there are measures in place to prevent their exploitation. In a way, Strong AI, if it ever gets made, is our children. I don't necessarily mean they will supplant us, but they are a kind of descendant.
However valuable it might be for research, making, say, an analogue of a human mind designed to suffer in controlled ways is too unethical in my opinion to be done.
On the other hand, if an AI gets injured in its job and cannot or does not wish to be transferred to another body, its damage is irreparable and the AI itself does not wish to carry on, then I suppose terminating it is as ethical as euthanasia.
I don't really wish to discuss that.

Frozen_Feet
2011-10-25, 12:12 AM
If strong AI is possible, eventually they will equal us in all areas. The only difference is they can be directly reprogrammed and humans can not.
Yet.
Do we deserve to have the right to have this kind of power over other minds like this?

"We", as in humanity in abstract, or all humans everywhere?

Of course not, to both. It's a right of only those who have the requisite training and expertise to do it well and who others can reasonably expect not to abuse those rights. In turn, they have the responsibility to not abuse their rights or creations. This holds true whether the programmers are humans, other AIs, or pigs.

And while I consider your opinion that humans can't be reprogrammed somewhat dubious (what do you think teaching and learning are, then? Have you looked up false memories?), your scenario still provides a clear and remarkable difference between two kinds of intellect. This difference means they aren't equal, and it makes sense for unequal intelligences to have different rights and responsibilities. You don't expect a car mechanic to treat lung cancer, so you don't give him the right to look at your medical data or order prescriptions.

Brother Oni
2011-10-25, 06:49 AM
All these real-world problems would have shot the entire project down from the start.
But then I wouldn't have my nice clean ethical questions, now would I?
So... *handwaves* =P


Woah, developmental biology has really moved on since I last studied it. :smalleek:

I'm happy with handwave. :smallbiggrin:



When you say others have summed it up, can I ask which particular position do you hold?

For various philosophical reasons (and other ones prohibited from board discussion), I'm against building strong AI, simply because we can't safeguard their rights.
If something has the knowledge and self awareness to ask "Why are you doing this to me?" and you don't have a better reason than "I'm human and you're not" then you should be taking a long hard look at yourself.

However I'm a pragmatic sort and I know that people will attempt to do so because they're curious, so my compromise position is "not too smart" or rather, build them for their intended purpose.

A bomb disposal robot doesn't need to have aspirations about wanting a better life, so it's better not to give it that cognitive ability.
If such a robot had the cognitive and communication ability of a 6 year old human child, most soldiers would probably refuse to put it into harm's way, whereas if you put it at the same level of a smart dog, the same soldiers would have no such qualms.

That said, there's a difference between an expert system and an AI. For a specific role, an expert system could be virtually indistinguishable from an AI, however it would not be regarded as sentient.
An expert system with the decision making and detection ability of a trained sniffer dog would not draw the same sort of issues as an AI with the equivalent capabilities.
The AI would be able to cope with unexpected circumstances better, but if it spends its non-mission time exploring its compound or chasing balls (basically acting like a living being), then the people working with it would have issues, if not the ethics committees.


If their opinions can be easily rewritten anyway, they don't have the freedom to not be satisfied.


This would probably be one of the safeguards built in to commercial scale strong AIs - you can't arbitrarily re-write their programming. You could argue with them in an attempt to change their mind or instruct them like children, but that's no different from another human.

With regard to enforcing the Three Laws on strong AIs being like mind control: living beings have the same sort of behavioural instincts, so that's no different.
There was an experiment where a small number of chimpanzees were given a group task to do - when it was completed, the entire group was rewarded. Once they got the hang of the task, the researchers only rewarded one chimpanzee on completion and after a while of this, the other chimps downed tools and refused to help.

This indicates that fairness is pretty much hard coded into social animals, which is quite an advanced concept. If fairness is inherent to animals, why not the Three Laws, or something more suitable for strong AIs?

Edit: reformatted for ease of reading

Ravens_cry
2011-10-25, 12:57 PM
It's not inherent so much as something that gets programmed in during evolution. Saying it is inherent is like saying, "Every creature with eyes can use them, therefore giving a robot sight must be as simple as connecting a camera to it," without considering how hard visual processing is.
It is, it is very hard.
It might pop up as an interaction between other bits we program in or it might need to be explicitly programmed in, but a "sense of fairness" is not inherent.

Brother Oni
2011-10-25, 01:24 PM
It's not inherent so much as something that gets programmed in during evolution.

And the functional difference between the two is..?

Animals (or at least chimpanzees), and hence us, understand 'fairness' at a very basic level, so I don't really see an issue with strong AIs having similar instructions programmed in at such a basic level (dependent on the actual instruction, of course).

Other posters have suggested that this is the worst form of slavery (built in mind control), but if animals have pre-programmed behaviours, why not machines?

Frozen_Feet
2011-10-25, 01:30 PM
Other posters have suggested that this is the worst form of slavery (built in mind control), but if animals have pre-programmed behaviours, why not machines?

I think the core of the issue is that a lot of people find it hard to swallow that humans have loads of preprogrammed behaviours. This colors their perception of what it means to be "free" whenever discussion wanders into the realm of transhuman and extrahuman intellects. People feel such intellects should be as or more "free", but they set the bar arbitrarily high due to a faulty understanding of how limited humans are.

Ravens_cry
2011-10-25, 02:07 PM
And the functional difference between the two is..?

Animals (or at least chimpanzees), and hence us, understand 'fairness' at a very basic level, so I don't really see an issue with strong AIs having similar instructions programmed in at such a basic level (dependent on the actual instruction, of course).

Other posters have suggested that this is the worst form of slavery (built in mind control), but if animals have pre-programmed behaviours, why not machines?
I do not object to pre-programmed behaviours; we have enough of those ourselves.
My point is that we need to know how to program it in. A mind, any mind is complex.

jseah
2011-10-25, 06:31 PM
Woah, developmental biology has really moved on since I last studied it. :smalleek:

I'm happy with handwave. :smallbiggrin:
Uh, you mean it has really moved forward in the fic. We can't do that yet, obviously.
But we have managed to move control systems between organisms. The most famous one is the LacI system, which puts any arbitrary gene under the control of IPTG (a small molecule). (Copied from bacteria, used practically everywhere.)

More recent ones like using Cre-Lox to knockout genes when you want to. (copied from virus)
"Importing" quorum sensing from one bacteria to another... and making it control GFP (which comes from squid)

Making a control system for other control systems... the only one I've heard of is a "circadian"-like clock, where two control systems suppress each other and thus fluctuate with a specific cycle time.
We haven't actually tried to copy whole control systems and create artificial signalling networks (mostly because we can't do it yet), which is what you would need to do if you were making yeast-people.


If something has the knowledge and self awareness to ask "Why are you doing this to me?" and you don't have a better reason than "I'm human and you're not" then you should be taking a long hard look at yourself.
In the fiction, especially if you see the one from Laura's POV, my hypothetical A1s know precisely why the humans made them that way.
- The humans were afraid that any artificial life they created might be hostile to them, since they wanted to use it

They are smart enough to do research, they are smart enough to figure it out.

They're fine with it. They like it that way.

They also know the reason that they like it that way is because that's how the humans did it and they never had a choice in the matter.
And that's fine too since they don't value having the choice. Then again, due to how their value systems work, everything not "being petted" pales in comparison.

Brother Oni
2011-10-26, 06:28 AM
My point is that we need to know how to program it in. A mind, any mind is complex.

I agree with you if we were inserting a behaviour into a pre-existing mind, however I don't see it as a problem if the behaviour was embedded while the mind was being constructed.

In my opinion, developing and inserting preprogrammed behavioural patterns pales in technical complexity to actually making an autonomous mind from scratch, so by the time we've figured out the latter, the former isn't going to be an issue.

In any case, humans have had behaviours embedded before (hypnosis, subliminal commands, etc.), with varying levels of compatibility and success. Why not the equivalent with machines?


Uh, you mean it has really moved forward in the fic. We can't do that yet, obviously.


Let me put it this way, when I last studied it, they hadn't finished the Human Genome Project yet. I still have a textbook from my A-levels (High School equivalent) where they didn't know how Vitamin C stopped scurvy.

As I said, it's been a while. :smalltongue:



And that's fine too since they don't value having the choice. Then again, due to how their value systems work, everything not "being petted" pales in comparison.

Moral and cultural relativism can lead to some very dark places. I wonder what an A1 would do if another human had harmed their bonded human?

Bear in mind that 'harm' could range from murder all the way down to snagging the last sandwich before them in the canteen, especially with the apparent obsessiveness displayed by Laura in your other piece of fiction.

jseah
2011-10-26, 09:37 AM
As I said, it's been a while. :smalltongue:
Ah, if that's where you come from, then yes, yes. We have come very far since then.



Moral and cultural relativism can lead to some very dark places. I wonder what an A1 would do if another human had harmed their bonded human?

Bear in mind that 'harm' could range from murder all the way down to snagging the last sandwich before them in the canteen, especially with the apparent obsessiveness displayed by Laura in your other piece of fiction.
A1s will likely try to prevent the murder of their 'parent', and using lethal force to do so would definitely be acceptable in their mind. (although being the size of a 12 year old kid makes lethal force hard to come by without weapons)
EDIT: which is yet another reason to not let them grow up

Sacrificing themselves to block a knife or shockwave from a bomb will probably be second nature to the vast majority of them.

Using lethal force to 'defend' their 'parent', while they would certainly use it more often than normal humans if given the chance, does come with a penalty if it's not 'self-defence' (and it's easily argued that, for an A1, protecting their 'parent' counts as self-defence)
And they are smart enough to weigh the consequences, although they still face the need to make snap decisions.

Eg.
Give an A1 (or A2) a gun.
If you steal her 'parent's' sandwich, she won't use it (unless her 'parent' orders her to shoot you, in which case you get shot). It's easier to just make another sandwich, with fewer long-term consequences.
A robber who tries to threaten her parent might get a warning shot, if he's lucky.
If he has a weapon (even a small knife or a baseball bat) or uses force, she'll probably just shoot him outright. If she's a good shot, perhaps she might choose to try aiming somewhere non-vital. And if she's never held a gun and she's not confident of not hitting her 'parent', she might try getting closer (regardless of danger to self)

If her 'parent' is threatened with death, she'll pick whatever best action she can think of to save her 'parent'.
If you take the 'divert the train from 5 people to kill 1 person' moral problem, an A1 will always choose to save the group that does not have their 'parent' in it.
If neither group contains their 'parent', it either reverts to the standard problem (when the A1 knows none of them) or, if the A1 knows the people, she may choose the group that benefits her 'parent' more (and it can go either way, since it's hard to make a decision under time pressure and with insufficient information).
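
If it helps, here's the priority ordering I have in mind written out as a rough sketch; the threat categories and the fallback for the no-'parent' case are simplifications for illustration, not a precise spec of the strain's psychology:

from enum import IntEnum

class Threat(IntEnum):
    # Invented categories, roughly matching the examples above.
    NONE = 0            # e.g. a stolen sandwich
    UNARMED_DEMAND = 1  # threats, but no weapon drawn and no force used
    ARMED_OR_VIOLENT = 2
    LETHAL = 3

def a1_response(threat, parent_orders_force=False):
    # Explicit instructions from the 'parent' are treated as top priority.
    if parent_orders_force:
        return "use force"
    if threat >= Threat.ARMED_OR_VIOLENT:
        return "use force; aim non-vital only if confident of the shot"
    if threat == Threat.UNARMED_DEMAND:
        return "warning shot / put herself between the threat and her 'parent'"
    return "ignore; fix the problem some other way"

def trolley_choice(groups, parent):
    # Always save the group containing the 'parent'; with no 'parent' at stake,
    # fall back to roughly what an ordinary human would weigh up.
    for group in groups:
        if parent in group:
            return group
    return max(groups, key=len)  # crude stand-in for the standard problem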

Ravens_cry
2011-10-26, 02:46 PM
I agree with you if we were inserting a behaviour into a pre-existing mind, however I don't see it as a problem if the behaviour was embedded while the mind was being constructed.

In my opinion, developing and inserting preprogrammed behavioural patterns pales in technical complexity to actually making an autonomous mind from scratch, so by the time we've figured out the latter, the former isn't going to be an issue.

In any case, humans have had behaviours embedded before (hypnosis, subliminal commands, etc.), with varying levels of compatibility and success. Why not the equivalent with machines?

Hypnosis won't make you go against your own conscience, and subliminal commands are likely phony. I don't mind inserting behaviours at the start, unless they are against the robot's own interests. Like a command to "Buy Mom's Robot Oil", despite it being crude crud. Then it becomes a kind of mental slavery.

Brother Oni
2011-10-26, 06:21 PM
A1s will likely try to prevent the murder of their 'parent', and using lethal force to do so would definitely be acceptable in their mind. (although being the size of a 12 year old kid makes lethal force hard to come by without weapons)
EDIT: which is yet another reason to not let them grow up


I was under the impression that they were in the 8-10 year range. 12 is a little too old for them to have such obsessive behaviour and not be very creepy.

If they have the physical capabilities of a 12 year old but the intellect and clarity of mind of someone much older, then unarmed lethal force isn't as difficult as you think.



[Clarification of A1/A2 behaviour]


Let's try something a little more subtle:


Two scientists are competing for a promotion and both desperately want it. Would either scientist's bonded A1s/A2 sabotage the competitor's chances, or worse?
Two scientists who are colleagues absolutely despise each other, and their mutual loathing is causing considerable harm to their personal and work lives. Assuming they can't transfer away from each other, would their A1s/A2 take initiative of their own to do something about the other scientist?



Hypnosis won't make you go against your own conscience, and subliminal commands are likely phony. I don't mind inserting behaviours at the start, unless they are against the robot's own interests. Like a command to "Buy Mom's Robot Oil", despite it being crude crud. Then it becomes a kind of mental slavery.

Hypnosis has been proven to help people conquer their phobias and having seen how hysterical some people with a strong phobia can get, that's pretty major behaviour alteration.

The validity of the behaviour being in the AI's own interests is dependent on the role of the AI though. For example, having a strong self preservation behaviour in an AI which leads to it being unwilling to put itself into harm's way sounds perfectly reasonable (after all, who wants an autonomous car that doesn't care whether it gets dented), but not so useful for one in the rescue services or in military use.
However, you don't want an intelligent AI that's suicidally brave, since that's morally dubious (it risks its 'life' not because it chooses to, but because someone else has essentially forced it to).

That said, there are some quirks associated with the fundamental nature of machine intelligences that give it significant advantages over meat ones.
The tachikoma AIs (http://en.wikipedia.org/wiki/Tachikoma) from Ghost in the Shell are military ones which inhabit small AFVs/APCs. Their memories are all synchronised at night, which while making them all nearly identical to each other, also makes them totally unafraid to die, since they know they're backed up and thus if they do 'die', all they lose is their part of the day's memories.

jseah
2011-10-27, 04:45 AM
I was under the impression that they were in the 8-10 year range. 12 is a little too old for them to have such obsessive behaviour and not be very creepy.

If they have the physical capabilities of a 12 year old but the intellect and clarity of mind of someone much older, then unarmed lethal force isn't as difficult as you think.
Well, you don't want them to be stuck in a body that is <1 meter tall. That'll make them require completely different seating arrangements, use differently sized equipment and just generally have a lot of problems in a world made for adult humans.
Like reaching door handles.

It's a balance between wanting a body that is metabolically easy to maintain (meaning they don't eat a lot) and not physically mature enough to carry children despite all the other roadblocks (because then it would be easier for a rogue human to make one that could)
vs a body that can do whatever you want them to.

And an adult body can do more than a child's (apart from squeezing through tight spaces), especially when all your lab buildings and equipment are sized for adults.
Just that the restriction on reproductive independence means you can't use a fully mature adult body.


Let's try something a little more subtle:
Those two situations depend on how much they think their 'parent' will approve. If their 'parent' condones and/or encourages backstabbing the other scientist, then sure, they will do it. On their own initiative even.

If they think their 'parent' won't like it, they won't. Probably will ask if they're not sure.

Hypothetical discussion among a group of 'sister' A1s:
"I got an idea. I've looked at his laboratory and its a mess. If we just shuffle a few of the labels on his media bottles around, he won't even notice but it'll make sure he'll never finish before we do. "

"Can't we just steal all his pipettes?"

Eldest: "Are you sure she's going to approve of this? I don't think she'ld want us to do it. We have to ask. "

"What if he allows his A1s to do it?"

Eldest: "Then we just have to find out if he has allowed it and take steps to ensure we don't get sabotaged. It might convince her to allow this idea if his A1s destroyed our experiment, but its ultimately counter-productive. A bit like that MAD logic in the nuclear war period on Old Earth.
Lisa, you're good at talking. Can you try talking to Melinda in his group and making sure we understand the situation? MAD logic only works if they know that we know etc."

EDIT:
They have initiative and they think of things to do all by themselves. It's just that their overarching goal is a bottomless obsession with their 'parent'. And they treat instructions from their 'parent' with a very high priority (although a suggestion given in a manner that causes the A1 to judge it as unimportant can be overridden easily by other factors)

Basically, like anything that has learnt behaviour, the A1s' behaviour will reflect the kind of treatment they have been exposed to.
A 'parent' who constantly micromanages his or her A1s, and expects them to do exactly as told and no more, won't have A1s who display any initiative at all.
A 'parent' who runs a more hands-off approach, taking suggestions from the A1s and making alterations or simply approving, generally treating them as advisors and expecting them to do things proactively; will have A1s who work more like middle management in a company.
- If the 'parent' doesn't punish mistakes or wrong actions but merely corrects them (ie. accepts that the A1s can't do everything exactly as you want), then the A1s will be unusually independent and implement ideas first, and report the results later.
- Whether this is a good thing depends on how you want to use them. Initiative within set boundaries is probably good enough for most things.

It's obvious which way of running A1s is more risky but also much more productive. Especially if they have B1s under them to do the specialist work, then you can literally run a small to mid-sized company where the only humans are in the board of directors.
(And with the B2s, the company can be any sized)

Brother Oni
2011-10-27, 06:54 AM
Well, you don't want them to be stuck in a body that is <1 meter tall. That'll make them require completely different seating arrangements, use differently sized equipment and just generally have a lot of problems in a world made for adult humans.
Like reaching door handles.


Looking up some official statistics, the average height of a 5 year old female is just over 1m and the average 12 year old female is 1.5m.
Speaking from experience a five year old can pretty much reach whatever an adult can with a little ingenuity (and a chair). Door handles cease to be a barrier at about this age too. :smallsigh:

Given the size range of an 'average adult human', that's not a very good gauge for determining size (I have work colleagues who are significantly shorter than me and they struggle, just as I have issues compared to colleagues who are significantly taller).

I agree that you have to juggle capabilities with restrictions, but you also need to take into account societal restrictions and issues from the human perspective.



Just that the restriction on reproductive independence means you can't use a fully mature adult body.


I believe that this restriction is more regulatory than technical, so sterilisation should fix that issue rather neatly, which is what I think you've done with the male B2s.



[More A1 behavioural clarification]

Sounds to me like managing a group of A1s is like having a group of very precocious but very needy children, something that most parents would have some prior experience in. :smalltongue:

Out of curiosity, what is the life expectancy of all the various strains? Aside from differences in the origin of the personality and physical abilities, I'm starting to see some parallels with the Replicants from Blade Runner, especially in their intended roles and treatment by humans.

jseah
2011-10-27, 09:26 AM
I agree that you have to juggle capabilities with restrictions, but you also need to take into account societal restrictions and issues from the human perspective.
That is true. Although if I took into account societal restrictions, none of this would have been started anyway... =(

EDIT: basically, if Sintarra Labs were unbothered by social restrictions enough to even make A1s at all, I don't think they'll look at anything other than benefit-risk tradeoffs.


I believe that this restriction is more regulatory rather than technical, so sterilisation should fix that issue rather neatly, which is what I think you've done with the male B2s.
Yes, it's regulatory. But the point is to prevent the strain As from being able to reproduce without supporting industries, so that they are always tied to developed civilization and an extensive infrastructure.

That includes genetic sterility (cells cannot do meiosis), no males, no sexual development.

The lack of the second growth spurt in the genetic program is there to prevent some rogue scientist from doing it.
If you just have genetic sterility and no males, it would not be too hard to simply reverse that (although males would be tricky). And then get them to asexually reproduce. (and if the A1s were male, getting a female version is too easy)

The total lack of a maturing program means you have to reconstruct it from scratch (which they had to do for strain Bs). This would take a large lab, lots of time and generally be too much for anything less than a mid-sized research organization.

EDIT:
Of course, B1s go to full maturity and B1Fs already birth B1s. Reprogramming B1Fs to make more B1Fs isn't that hard. That plus a genetic counter is what allows the B1F2+ amplification, and is a relatively trivial modification.

But since B1s don't have strong mental capacities and very little true initiative, they don't pose a risk regardless of reproductive independence. And to increase their mental capacities takes enough effort that you may as well make a new strain.

Since they merely clone themselves if you halt the counter, the only risk from there is mutation, which is small.

Of course, the B1Fs will have been made to ensure they cannot carry strain As or humans (different developmental timing, different molecular signal pathways, different womb environment)


Sounds to me like managing a group of A1s is like having a group of very precocious but very needy children, something that most parents would have some prior experience in. :smalltongue:

Out of curiosity, what is the life expectancy of all the various strains? Aside from differences in the origin of the personality and physical abilities, I'm starting to see some parallels with the Replicants from Blade Runner, especially in their intended roles and treatment by humans.
Well, I never thought about their life expectancy.

You'd want them to live long, so you don't spend too much time cloning and training new A1s. And they'd have the time to get really good at the skills they use.

But at the same time, A1s aren't inheritable and having them kill themselves once their 'parent' dies of old age isn't good for morale.

I dunno, if I had to guess how long they'd try to shoot for... say 30 to 40 years?

EDIT:
About the analogy to children. Yes, in some respects (mostly size), they are like children. But their experience increases with age and the initiative, foresight and patience (except where bonding is concerned) they show is definitely nothing at all like children.

At least once their mental age has aged past that point. An 'old' A1 might look like 12, but she acts nothing like a kid. Except for the constant emotional dependence.
An 8 year old A1 probably acts like an 8 yr old. An unusually clingy, obedient and not-whiny 8 yr old, who never ever seems to want anything other than a hug or petting.

EDIT2:
Come to think of it, any of the strains would have a lower avoidance of danger than humans (the self-preservation instinct is weaker than the desire to obey or the emotional dependence).
Which would make young A1s and A2s incredibly difficult to handle. Being curious and incredibly intelligent, A1s will explore (and they will find ways to open doors), and a lower self-preservation instinct would probably get them into trouble really fast, especially if they haven't learnt that something is a threat.

I can already imagine the burns, falling down stairs, chemical poisoning, food poisoning, electrical shocks, catching strange diseases after getting bitten by frogs...
Ok, maybe not that last one. =P

Ravens_cry
2011-10-27, 11:55 AM
Hypnosis has been proven to help people conquer their phobias and having seen how hysterical some people with a strong phobia can get, that's pretty major behaviour alteration.

By helping them enact their fears in a safe, controlled manner. Virtual reality has been used in a similar way.


The validity of the behaviour being in the AI's own interests is dependent on the role of the AI though. For example, having a strong self preservation behaviour in an AI which leads to it being unwilling to put itself into harm's way sounds perfectly reasonable (after all, who wants an autonomous car that doesn't care whether it gets dented), but not so useful for one in the rescue services or in military use.
However, you don't want an intelligent AI that's suicidally brave, since that's morally dubious (it risks its 'life' not because it chooses to, but because someone else has essentially forced it to).

Indeed. We probably want to decrease the angst weightings on any models designed for such work, or they will spend all their time wondering whether their actions were really "them" or their programming. Leave truly suicidal work, like cruise missiles and other such military hardware, to non-sentient or at least remote-controlling AI.


That said, there are some quirks associated with the fundamental nature of machine intelligences that give it significant advantages over meat ones.
The tachikoma AIs (http://en.wikipedia.org/wiki/Tachikoma) from Ghost in the Shell are military ones which inhabit small AFVs/APCs. Their memories are all synchronised at night, which while making them all nearly identical to each other, also makes them totally unafraid to die, since they know they're backed up and thus if they do 'die', all they lose is their part of the day's memories.
Interestingly, one becomes an individual despite the synchronization. To an individual, synchronisation could count as a kind of permanent death.

jseah
2011-10-27, 01:50 PM
Indeed. We probably want to decrease the angst weightings on any models designed for such work, or they will spend all their time wondering whether their actions were really "them" or their programming. Leave truly suicidal work, like cruise missiles and other such military hardware, to non-sentient or at least remote-controlling AI.
I don't see why any of them would view death as something to be feared. Or why they would even understand fear at all.

If we program these AIs, we just... don't include it. If you don't program something in, the AI doesn't have it.

Fear is not something you learn. Besides, how many chances to learn does a cruise missile have anyway?

Ravens_cry
2011-10-27, 02:02 PM
I don't see why any of them would view death as something to be feared. Or why they would even understand fear at all.

If we program these AIs, we just... don't include it. If you don't program something in, the AI doesn't have it.

Fear is not something you learn. Besides, how many chances to learn does a cruise missile have anyway?
It was mostly meant as a joke, but like pain, it can help keep costs down. After all, fear, as long as it doesn't short circuit, is a survival mechanism. "Sure that fresh dead antelope in a tree looks nice, but I don't want that jaguar who put it there chasing me back to my tribe."
Hell, even the tendency to freeze up when scared is actually a survival trait when faced with sight-based predators that detect motion better than detail, which is most of them, I believe.
Not so much a survival trait for a robot working in a modern society so their fear would be more primed toward action, but the basic idea is the same.
Any work where death is an inevitable result should not be undertaken by sentient AI.

jseah
2011-10-27, 05:20 PM
Any work where death is an inevitable result should not be undertaken by sentient AI.
You mean this:
"Any work where death is an inevitable result should not be undertaken by sentient AI with survival instincts"

Sure, pain and fear can keep costs down by preventing them from unknowingly killing themselves. But at the same time, it also reduces their effectiveness in certain situations and thus the benefits gained by using them.

How much priority to give that fear instinct over other things is basically a cost-benefit ratio or trade-offs problem.

What level they have depends on what situations they were designed to handle.
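
To put the trade-off in the plainest possible terms, here's a toy expected-value sketch; the "fear weight" is just a knob, and all the numbers are made up:

def value_of_proceeding(p_loss, unit_cost, mission_value, fear_weight):
    # A unit with a higher fear weight discounts risky missions more heavily;
    # fear_weight = 0 is the single-use munition, larger values are units you
    # want to preserve (and that want to preserve themselves).
    return mission_value * (1 - p_loss) - unit_cost * p_loss * fear_weight

# Same marginal mission, three different fear settings: the fearless unit
# takes it, the cautious ones refuse.
for fear in (0.0, 1.0, 3.0):
    print(fear, value_of_proceeding(p_loss=0.4, unit_cost=100, mission_value=60, fear_weight=fear))
# 0.0 -> 36.0 (go), 1.0 -> -4.0 (refuse), 3.0 -> -84.0 (refuse)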

Aotrs Commander
2011-10-27, 06:20 PM
You mean this:
"Any work where death is an inevitable result should not be undertaken by sentient AI with survival instincts"

Sure, pain and fear can keep costs down by preventing them from unknowingly killing themselves. But at the same time, it also reduces their effectiveness in certain situations and thus the benefits gained by using them.

How much priority to give that fear instinct over other things is basically a cost-benefit ratio or trade-offs problem.

What level they have depends on what situations they were designed to handle.

Woah woah woah.

Are you seriously advocating the creation of artificial sentient beings as what amounts to suicide bombers (or suicide warheaders) and generally disposable tools?

Because, wow, I cannot even begin to describe how utterly wrong that is.

To create a living, thinking creature specifically for it to die for you - and to engineer its mind so that it will die happy as well, following your programmed reasoning... That is just abhorrent, and I say that even as Evil myself: that is several bridges too far. Sentient/sapient beings are not and should NEVER be considered expendable.

I mean, aren't suicide bombers (of whatever stripe) pretty much universally considered a bad thing by all but the nutters who use 'em (e.g. the Brotherhood of Nod, the Global Liberation Army and the Libyan Soviets just in Command & Conquer, for a board-acceptable example)? A sign that the group places little value on life, period, never mind whether organic or technological?

To put this in perspective, if that is indeed what you are proposing, then creating sentient beings to be laden with explosives (warhead or otherwise), or to be shot on a one-way trip into gas giants for "scientific" research or mine clearance or something, is absolutely no different to someone biologically engineering an entire race so they will cheerfully cut their own throats in ritual sacrifice to the entity of your choice. (Or, as with the Ameglian Major Cow from the Restaurant at the End of the Universe, so they will cajole you into not minding having them killed and eaten afterwards.) The same logic applies.

Not to mention that if you start down that road, someone will inevitably find a way to argue that, as they are inherently disposable, it doesn't matter how you treat them because they aren't really people - and I will leave to your imagination what horrors that would conjure in the seedier walks of life. And they absolutely would crop up, legality aside; and of course, stopping it would be much harder, since you've already ascribed no more value to these hypothetical AIs than I do to my Nod Fanatics and GLA Demo Trucks.

Dark Star's intelligent bombs and the Ameglian Major Cows are all very well in rather dark humour, but outside that - that's about as grim as you can get.

Ravens_cry
2011-10-27, 07:07 PM
You mean this:
"Any work where death is an inevitable result should not be undertaken by sentient AI with survival instincts"

Sure, pain and fear can keep costs down by preventing them from unknowingly killing themselves. But at the same time, it also reduces their effectiveness in certain situations and thus the benefits gained by using them.

How much priority to give that fear instinct over other things is basically a cost-benefit ratio or trade-offs problem.

What level they have depends on what situations they were designed to handle.
Creating a sentient without survival instincts is just as unethical as sending one with into a doomed situation it had no say over.
Of course, their fears will need to be modulated for the capabilities of the robot in question. A robot with sufficient armour probably won't need to worry about small arms fire, so any dodging instincts against small arms fire would be counterproductive.
Outside the battlefield, a fear of drowning is illogical for a being that doesn't breathe. A fear of short circuits and a need to check waterproofing integrity, on the other hand, make sense for a robot designed to walk underwater.

Deep space probes that cannot be recovered should be handled by expert systems that, while complex and capable within their limits, are not sentient; or perhaps they should have a chance to downlink out once the mission is over.
For a Galileo-type probe, I doubt this would be possible, nor for expendable military hardware, like missiles.
There are just some things people shouldn't do to sentient beings.
Making our children our slaves is another.

jseah
2011-10-28, 02:23 AM
Oh, you guys meant from a moral point of view. Yeah, ok.

I thought you were referring to impracticalities in using AIs in such roles.

Aotrs Commander:
Knew you were going to say that if you popped in. =P

But yes, you hold a view that anything sentient cannot be considered expendable.

Can I ask what level of intelligence you consider sentient?

Optional: perhaps you might wish to give a rebuttal of moral relativism (which would allow this provided you made the AIs want it)

Ravens_Cry:
Same as above.

Frozen_Feet
2011-10-28, 04:43 AM
Woah woah woah.

Are you seriously advocating the creation of artificial sentient beings as what amounts to suicide bombers (or suicide warheaders) and generally disposable tools?

Because, wow, I cannot even begin to describe how utterly wrong that is.

Question: do you eat meat? What's worse, engineering creatures that won't mind being destroyed in their job and treating them as tools, or treating as tools creatures that haven't been engineered for that?

Furthermore, there are several living beings which are driven to self-destruction as part of their natural life cycle. If they can lead a fulfilling existence despite the eventuality, what's the problem?

For the record, I have no qualms about treating other beings, including humans, as expendable, if the need can be justified. Sometimes, there is a need to do so (or it simply can't be avoided).

I do agree with your horror scenario where people think it's justified to treat such beings as they will just because they're "expendable" - after all, it's reality already with how lots of people treat animals. For such people, my message is simple: just because a creature is expendable in regards to some specific task, doesn't make it expendable otherwise, or excuse you for treating it badly. User of a tool has the responsibility of taking care of the tool so it can best serve its intended purpose, even if only for one use. (Soldiers should know this better than many, actually, since a soldier often has to carry lots of limited-use devices, such as ammunition, mines, grenades and anti-tank missiles. Sure, once you use them they're gone, but until then you have to keep good care of them or they'll malfunction.)

I disagree with your opinion that creating a sentient research probe is the same as creating a warhead, or a ritualistic suicide fodder. Tasks are not created equal, some are more important and more sensical than others. Some can be justified, some can not. Arguing they're all the same is fallacious, plain and simple.

Aotrs Commander
2011-10-28, 05:53 AM
Question: do you eat meat? What's worse, engineering creatures that won't mind being destroyed in their job and treating them as tools, or treating as tools creatures that haven't been engineered for that?

Furthermore, there are several living beings which are driven to self-destruction as part of their natural life cycle. If they can lead a fulfilling existence despite the eventuality, what's the problem?


Because there is no need to make them "mind" in the first place. If you have that level of genetic engineering, it would be easier and far less cruel to simply make something that makes meat - a sort of beef-producing bacteria, or perhaps just a rudimentary grass-to-meat digestive system. There is simply no need or reason to make something sentient unless it actually needs to be sentient.

Missiles do not need to be sentient. Why would you even consider it? What possible use could a missile have that requires sentience and at the same time makes it its own will not to question its function? What need has a missile of creativity and adaptation above what can be managed by a non-sentient computer? Computer programs can be amazingly clever, and if you have reached the point where you can program sentient life, you should be able to manage a fairly capable targeting/flight system.

Creating a race of sentient suicide weapons for the sole purpose of countering some hypothetical threat the first time you encounter it is simply ridiculous. If, for some reason, your missiles don't work the first time ONLY because of some unusual circumstance that requires lateral thinking, you could update the targeting systems - which you would design for that purpose, if that particular circumstance really bothered you enough to consider wasting effort on sentient missiles.


For the record, I have no qualms about treating other beings, including humans, as expendable, if the need can be justified. Sometimes, there is a need to do so (or it simply can't be avoided).

Oh, I agree completely (I am evil after all...) That does not, however, make it right, it makes it necessary. The two are not related, nor do they always correlate.

Personally, I would have no qualms about putting the entirety of every sentient being under continual surveillance forever, under a single, incorruptible system1, to ensure the unilateral extinction of crime (and a gross drop in accidents). Such a system would be effective, certainly; whether it would be right or not is a matter for conjecture.

(I have a very clear sense of right and wrong. What makes me Evil is that I know that and do it anyway.)



I do agree with your horror scenario where people think it's justified to treat such beings as they will just because they're "expendable" - after all, it's reality already with how lots of people treat animals. For such people, my message is simple: just because a creature is expendable in regards to some specific task, doesn't make it expendable otherwise, or excuse you for treating it badly. User of a tool has the responsibility of taking care of the tool so it can best serve its intended purpose, even if only for one use. (Soldiers should know this better than many, actually, since a soldier often has to carry lots of limited-use devices, such as ammunition, mines, grenades and anti-tank missiles. Sure, once you use them they're gone, but until then you have to keep good care of them or they'll malfunction.)

I disagree with your opinion that creating a sentient research probe is the same as creating a warhead, or a ritualistic suicide fodder. Tasks are not created equal, some are more important and more sensical than others. Some can be justified, some can not. Arguing they're all the same is fallacious, plain and simple.

The moment you start saying "this sentient creature is worth less than this one", for whatever reason (even if the reason is "I've made this one happy to die doing its job"), you are on the first step to xenophobia and "that's not like me, so it's okay to treat it like crap". You have to look down the line; trends like that populate human history and lead to atrocity upon atrocity - and even now, humanity has still not totally shaken racism and sexism and so forth, even if they are now legally impermissible in most countries.

And, of course, what happens when, eventually, one of your sentient missiles goes, "actually, as a sentient being, just like you humans can, I can break my inbuilt programming, and actually, I've decided that, thanks but no thanks, I'd rather not blow stuff up, since I believe violence is wrong?" Because that will doubtless go down well.

And, if you are arguing that you have magic robot brains that never go outside their programmed parameters - then a) what is the point of making them sentient in the first place, assuming you don't want them to be creative (because if you do make them creative, they might break their programming), and b) why not use your magic programming skills to make a non-sentient robot brain to do the job instead, which is probably cheaper?

I can't think of many non-combat circumstances (when enemy jamming is not an issue) where a society technologically advanced - and prosperous - enough to make sentient robots for fatal tasks could not use a remotely controlled drone instead (which could be controlled via VR or somesuch by your sentient robot, which would "instinctively" handle it better). For a kick off, it'd be much less wasteful, since your sentient robot will learn and get better at its job (and if you have to train them, you only have to do it once); it'd cost less to make a drone every time. And if experience isn't important, why do you need an AI to do it in the first place?

(And for those tasks that really, really do need one - that for whatever reason you cannot trust to an extraordinarily well-programmed non-sentient AI - you'd ask for volunteers, the same as you would with humans. I'm fine with that, so long as the sentient in question has the option of saying "no.")

Actually, if you could get around the jamming issue (even mostly), that would make missile-pilot-AIs damned nasty - a missile system that levels up from repeated experience.


Oh, you guys meant from a moral point of view. Yeah, ok.

I thought you were referring to impracticalities in using AIs in such roles.

From a practical standpoint, you could, it just wouldn't be right and probably not cost-effective either.


Can I ask what level of intelligence you consider sentient?

At the end of the day, I'm a necromancer, not a psychologist, so that is, as they say, the billion dollar question to answer (without telepathy).

I'd go for anything with roughly the same reasoning capability as a human plus personality plus a complex language, of sorts. Now, I'll grant you, the line gets a bit blurry towards the smarter mammals.

You would need a lot of people to make that decision (consisting of, for a kick off, people not predisposed for monetary or political reasons to find in the negative - I'd much rather err on the side of caution); it's not something I would feel qualified to define personally.


1The hard part is creating the system; I'm speaking hypothetically, in a Davros-has-the-extinction-virus sort of way.

Frozen_Feet
2011-10-28, 06:19 AM
There is simply no need or reason to make something sentient unless it actually needs to be sentient.

I agree. However, I can foresee sentient wargear being necessary. I can also foresee a point in technology where it's easier to "build" (grow would be more appropriate) a sentient tool with certain limitations, than it would be to program a non-sentient tool capable of the same. Biological robots, namely.


The moment you start saying "this sentient creature is worth less than this one", for whatever reason (even if the reason is "I've made this one happy to die doing its job"), you are on the first step to xenophobia and "that's not like me, so it's okay to treat it like crap". You have to look down the line; trends like that populate human history and lead to atrocity upon atrocity - and even now, humanity has still not totally shaken racism and sexism and so forth, even if they are now legally impermissible in most countries.

As I said before, I find the opposite extreme, where different beings are treated as equals when they're clearly not, just as distasteful. Different beings can and do have different values, justifying and sometimes necessitating different treatment. Xenophobia is detestable because, as the "phobia" part tells, it's irrational, most often by failing to apply same principles to your own kind as you apply to others.

Slippery slope arguments are somewhat fallacious; judging differences between beings and then treating them differently doesn't equate to "treating them like crap".


And, of course, what happens when, eventually, one of your sentient missiles goes, "actually, as a sentient being, just like you humans can, I can break my inbuilt programming, and actually, I've decided that, thanks but no thanks, I'd rather not blow stuff up, since I believe violence is wrong?" Because that will doubtless go down well.

Then it's reassigned and retrofitted to a different task. The principle is judging other creatures based on their actual qualities; if the tool demonstrably would be better used somewhere else, then it switches tasks.


And, if you are arguing that you have magic robot brains that never go outside their programmed parameters - then a) what is the point of making them sentient in the first place, assuming you don't want them to be creative (because if you do make them creative, they might break their programming), and b) why not use your magic programming skills to make a non-sentient robot brain to do the job instead, which is probably cheaper?

You're assuming that breaking some parameters leads to breaking all of them. I'd argue humans have several preprogrammed responses which can't be broken with conscious thought. A sentient robot might be creative in some areas while being severely limited in others - just like some developmentally impaired humans.

I'm approaching the issue from the angle that, for some reason, sentience is desirable or necessary for completing a task - any "sufficiently advanced" program will be so.

Aotrs Commander
2011-10-28, 07:30 AM
I agree. However, I can foresee sentient wargear being necessary.

For what? Improved response times? Because if you're at the point where humans are getting in the way of warfare because they simply can't think fast enough to use their equipment (which is already mostly automated), or at least to issue orders, and the entire military has to be automated, you are going to have bigger problems - because quite likely by that point the warfare will be going so quickly that the humans won't have time (save perhaps the time spent in FTL transit) to communicate or do anything. Unless you hand over the military completely to AI control (because there is no way that will go wrong, will it? Not that I blame the AIs...)

You might bring up hostile environments. To which I would reply, "why on earth are you fighting over a hostile environment for such an extended period that it requires you to build an army just for that?" Unless you're planning on conquering the galaxy (at which point, do what you like morally, you're well into Evil anyway, and good luck to you), why would you fight (special operations aside) over a planet you can't colonise? You don't need sentient AI to use orbital bombardment; basic military strategy - if you don't need to capture a place, to take and hold ground, you don't fight over it - you blow it up from a safe distance (alignment and weapons determining whether you take out just it or the surrounding area too...)

Battle droids et al may be useful, but I hardly think they will become a requirement due to technological advancement alone, outside of extenuating circumstances.



And, even if it did, that wouldn't make it right.




I can also foresee a point in technology where it's easier to "build" (grow would be more appropriate) a sentient tool with certain limitations, than it would be to program a non-sentient tool capable of the same. Biological robots, namely.

That...is completely illogical. Something complex is not easier to build than something simple. It...just doesn't work that way.

By the time computer science has advanced to the point of being able to create sentience, you will have already done all the hard work for making non-sentient tools - pretty much by definition (unless, like Asimov, you are contriving the laws of physics to work one particular way). Ditto for biological engineering (in fact, if you can build perfect, infallible brain-safeguards, why not have them mandatorily put into everyone, so that it becomes impossible to commit a crime?)

Surely it would therefore be much easier to "grow" the far less complex tool using the same method, whatever it is, and faster, because you're cutting out all the complex bits like personality and such; the coding and processing would be orders of magnitude simpler and thus have much less stringent conditions. (They'd also be much safer, since the more complex something is, the easier it is to frag it up - especially if you're talking about using biology and replicants.)


As I said before, I find the opposite extreme, where different beings are treated as equals when they're clearly not, just as distasteful.

And who gets to decide that?

Who has the right to decide that?

Humans?

Would it be okay if some alien race/deity/neutral party came along and said, "actually, humans, the people of the planet Nicetania have been voted by everyone in the galaxy as being the most worthy, because they are inherently good and awesome, so if you get stuck in a crisis with one of them, you're considered the disposable one?" No? Then I rest my case.


A sentient robot might be creative on some areas, while being severely limited in others - just like some developmentally impaired humans.

...

...

Yeah, I can't even begin to address that train of thought.

I think it pointless for me to debate further on this point.

jseah
2011-10-28, 10:09 AM
For what? Improved response times?
And, even if it did, that wouldn't make it right.
The following does not apply to biological intelligences:

Simply because training an AI or a human takes a long time. While training the next generation, who then go and get their own experience, allows your tactics to evolve, it is also much, much cheaper to simply do this:

You build an AI in a robot. You train it to do what you want (eg. be a soldier). Then you download a copy of the trained AI and save it in your secure base.

Then you build a robot army using copies of that now trained AI. Your robot army takes nothing more than manufacturing time to be deployable. Basic training is now "free".

After you send a bunch of them into the field, most get shot up, some return. You download the experienced survivors and copy them into the new robots.
All new combat robots come with basic combat experience.

At this point, depending on how well you understand your AIs, you have them share experiences and perhaps they will be even better without much more training.

Send them out again, the survivors come back, and you download and copy the survivors. Now all your new combat robots are veterans, without any of them needing further training or battlefield experience of their own.
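
To make the bookkeeping concrete, here's a toy Python sketch of that loop. The Policy class, train() and deploy() are made-up stand-ins rather than any real robotics API, and the numbers are arbitrary; the only point it illustrates is that training is paid once, while copying the trained AI is essentially free.

import copy
import random

class Policy:
    """Stand-in for a trained AI: just a bag of accumulated experience."""
    def __init__(self, experience=0):
        self.experience = experience

def train(policy, sessions=100):
    # basic training, paid exactly once
    policy.experience += sessions
    return policy

def deploy(units):
    """Send units into the field; most are lost, survivors gain experience."""
    survivors = [u for u in units if random.random() < 0.3]
    for u in survivors:
        u.experience += 10
    return survivors

template = train(Policy())          # the one-off training cost

for campaign in range(3):
    # stamp out an army of copies: "basic training is now free"
    army = [copy.deepcopy(template) for _ in range(1000)]
    survivors = deploy(army)
    if survivors:
        # the next template is the most experienced survivor, so every
        # later batch starts out as a veteran with no extra training
        template = copy.deepcopy(max(survivors, key=lambda u: u.experience))
    print(f"after campaign {campaign}: template experience = {template.experience}")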


As to why we would make the most complex thing first:

That...is completely illogical. Something complex is not easier to build than something simple. It...just doesn't work that way.
Not true in the biological sciences.

Once you build a self-replicating system, whether from scratch or by modifying something already existing, you never need to do it again. It'll just make as much of itself as you want. (Limited by resources, but simple binary division eats up resources at an ever-increasing rate, so you'll either always be short of resources or have enough of the things.)

If you have some convenient way to turn off self-replication irreversibly (and in biology that is a simple application of a cre-lox recombination over all the necessary bits), then you have total control over the self-replication. In all the ones you want to use, you remove self-replication ability.
Keep some to make more, use the rest.

In that case, you want your self-replicating thing to be as generally useful as possible. And in the case of a bio-engineered living thing meant to do practically anything, that means an intelligent organism.


This already happens with genetic engineering.
Rather than make a 'perfect' plasmid for expressing some protein of interest and only the protein of interest, we just take a stock plasmid that has multiple sites we can use.
We just use the one function of that plasmid that we need, and the rest... well, who cares about the rest? Plasmids replicate along with the bacteria, and a few hours' work + two days of growing can net you milligrams of the stuff.
It's a bit inefficient - a tailor-made plasmid would get you somewhat more product - but you don't care. Self-replicating things cost so little.

There's a reason why we don't do PCR (in vitro DNA amplification) for large plasmid preps unless we really need to. E. coli food is cheap. PCR reagents are expensive.
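
A quick back-of-the-envelope check of that "milligrams from two days of growing" figure, using typical textbook values (saturated culture density, high-copy plasmid copy number); every number here is only an illustrative assumption, not data from a specific prep.

avogadro = 6.022e23
cells_per_ml = 1e9           # saturated overnight E. coli culture
culture_ml = 250             # a maxiprep-scale culture
copies_per_cell = 300        # high-copy plasmid
plasmid_bp = 5000
grams_per_mol_per_bp = 650   # average mass of one base pair

plasmid_mass_g = plasmid_bp * grams_per_mol_per_bp / avogadro
total_copies = cells_per_ml * culture_ml * copies_per_cell
yield_mg = total_copies * plasmid_mass_g * 1e3

print(f"~{yield_mg:.1f} mg of plasmid from a {culture_ml} mL culture")
# ~0.4 mg with these numbers: the right order of magnitude for a standard
# prep, and the growth itself costs little more than broth and an incubator.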

FYI: the plasmid in question has a bacterial origin of replication, a multiple cloning site (multiple restriction enzyme sites),
+ bacterial antibiotic resistance
+ eukaryotic antibiotic resistance
+ bacterial enhancer / transcription start site
+ eukaryotic enhancer / transcription start site
+++ whatever other features are on the particular variant you use (eg. yeast artificial chromosome centromeric site and replication origin)

Do we use ALL the features in any particular experiment? No.
Is the plasmid complex and overengineered for what we want? Yes.

Does this complexity cost us anything? Insignificant.


Ditto for biological engineering (in fact, if you can build perfect, infallible brain-safeguards, why not have them mandatorily put into everyone, so that it becomes impossible to commit a crime?)
Actually, see above.

Biological engineering is very different from normal technology.
Once you have created something that reproduces from growth media, it becomes "free".


Surely it would therefore be much easier to "grow" the far less complex tool using the same method, whatever it is, and faster, because you're cutting out all the complex bits like personality and such
Biological engineering requires insane amounts of effort as an initial outlay.
e.g. making Golden Rice, the GM rice with provitamin A (beta-carotene) in the grains, was very difficult. The researchers had to import an entire metabolic pathway from another organism.
But once that was done, the golden rice could be grown in a field and you can replant its seeds to get as much golden rice as you want. If you just kept replanting its seeds, the rice would cover most of the Earth within a few years.
Starting from one cell in a lab.
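
As a rough sanity check of that claim: the seed yield, planting density and land area below are made-up but plausible assumptions, just to show how fast exponential replanting runs away.

plants = 1                      # start from a single plant
seeds_per_plant = 100           # assumed usable seeds replanted per harvest
area_per_plant_m2 = 0.02        # assumed ground area one rice plant needs
earth_land_area_m2 = 1.5e14     # roughly 150 million km^2 of land

generations = 0
while plants * area_per_plant_m2 < earth_land_area_m2:
    plants *= seeds_per_plant   # every seed goes back in the ground
    generations += 1

print(f"~{generations} generations to outgrow Earth's land area")
# 8 generations with these numbers - at one or two harvests a year,
# "a few years" is about right (ignoring every practical obstacle).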

If you have something in biology that works "well enough", it can be a major waste of effort to make something that works "marginally better".
Especially if you can make up the shortfall simply by growing more of it.


Complexity in biology is "free" once it is made. It's the making that is hard.
I might add that the same thing applies to computer science.

Information can be copied. An AI that is pure information can be copied far more easily than a new one can be made.

Basically, we're lazy. That's why.


EDIT:
You can see the tradeoff here.

AIs in silicon chips have "free" experience after the first training cost, but cost a lot to make a new unit.

AIs in biology (assuming psychological engineering) require training for every new unit, but the units only cost food and require virtually zero infrastructure to grow.
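
If you want to see that tradeoff as numbers, here is a toy cost model; every figure in it is invented purely for illustration, not an estimate of anything real.

def silicon_total(n_units, unit_cost=10_000, one_time_training=1_000_000):
    # hardware is expensive, but the trained AI is copied for free
    return one_time_training + n_units * unit_cost

def biological_total(n_units, unit_cost=1_000, training_per_unit=20_000):
    # units are cheap to grow, but each one must be trained individually
    return n_units * (unit_cost + training_per_unit)

for n in (10, 100, 1_000, 10_000):
    s, b = silicon_total(n), biological_total(n)
    cheaper = "silicon" if s < b else "biological"
    print(f"{n:>6} units: silicon {s:>13,}  biological {b:>13,}  -> {cheaper}")

# Break-even is where one_time_training equals
# n * ((unit_cost_bio + training_per_unit) - unit_cost_silicon);
# with these invented figures, copying experience starts to win
# somewhere around a hundred units.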

Frozen_Feet
2011-10-28, 11:07 AM
For what?

Mostly for the same things human soldiers willing to sacrifice their lives are occasionally required for.


And, even if it did, that wouldn't make it right.

Not on its own, I agree. Where it falls on the morality scale is circumstantial. But this means it's not wrong by default either.


That...is completely illogical. Something complex is not easier to build than something simple. It...just doesn't work that way.

This is a complex and multi-faceted argument, but here are the foundations of my opinion: with biology, you can leave stuff on a petri dish and let it grow (exaggeration, but still). With inorganics, you have to either reinvent the growing process from scratch, or manufacture each part separately.

Sentience and sapience are emergent qualities, resulting from the combination of various abilities. They arise at some point in the creature's maturation. There is no specific "sentience" gene that you can toggle on or off. Instead, you give a biological entity the genes required to grow the abilities necessary for its task, and it either becomes sentient or not. It's not in the creator's hands.

Organic entities also have the advantage that the materials for them are more abundant and more easily produced than some of the materials needed for advanced electronics.

However, implanting specific instincts and reactions is still possible. The animal kingdom is full of examples of behaviour which certain species just grow to do, like the flight or imprinting mechanisms of many birds. Once these are understood, it should be possible to breed them into biologically engineered beings.

So, the end result is: we have the beginnings of organic life, which will grow to possess certain qualities necessary for its task and several instincts that force it to act in certain ways in certain situations. The sum of its parts will either lead to sentience, or it will not; you can't decide that, not without compromising qualities necessary for its task.

Since this is "just" a refinement of breeding technology and we've achieved several of its goal by breeding dogs already, I consider it likelier we'll become able to genetically engineer sentient, obedient servants before we become able to hard-code electronic strong AI.


By the time computer science has advanced to the point of being able to create sentience, you will have already done all the hard work for making non-sentient tools - pretty much by definition.

I extend the above argument concerning the emergent quality of sentience to electronic AI as well. Sentience is an emergent quality; you don't so much code sentience as code the protocols the entity needs to do its job, and combinations of those either result in sentience or they do not.

Sure, you can use the same tools needed to create the sentient AI to create a non-sentient one - but is the non-sentient one enough for the task? If the line between the two is merely a change in processing power, and that extra power is vital for a given task, you might not have a choice in the matter.


If you can build perfect, infallible brain-safeguards, why not have them mandatorily put into everyone, so that it becomes impossible to commit a crime?

I already touched on this, but to restate: I don't see it as morally wrong by default. I'm opposed to the idea simply because I think it's unviable.

When a biological creature grows, it develops in a certain way, brain included. Once it reaches maturity, most of its nervous system is "set", and with it, its mental capabilities. Some change (learning) can happen, but the instincts and abilities of the creature are hard-wired in its biology, and there are limits to how much you can do about them.

A real-life example is herding dogs. Young dogs are tested to see if they have the "chasing instinct" needed for herding. Some have it, but some do not; the latter dogs are not made to herd anything. It'd be futile, since they lack the physiological and psychological requirements to do it. To make such a dog a viable herder, you'd have to change how its brain grew. After it's matured, it's likely easier to just breed another batch of dogs and pick the ones that have the instinct, instead of trying to alter the existing dog.

Back to our fictional biologically engineered servants. Their safeguards are instinctive, inborn traits; you can literally trace them back to the shape of their nervous system. A normal human will have a differently shaped system; they don't have, and never had, the genetic traits necessary to grow those instincts. They can't do it. To implant the same safeguards into normal humans, you'd have to go over each and every adult human and change the structure of their nervous system to match, as opposed to our servant race, who grew to be that way as a normal course of their development.

Contrast this with the idea of altering the reproductive systems of adult humans so that their offspring would grow those instincts; at least that way, the changes would be passed on to the new generation, who would pass them to the next one in turn. After that, it'd be vastly more economical to sterilize people unwilling to go through the treatment, and then incarcerate or execute any remaining normal humans who commit crimes, as usual.


And who gets to decide that?

Who has the right to decide that?

Humans?

As I noted before, I consider it the responsibility of all sentient creatures to evaluate others to the best of their ability, and act accordingly. It's not something that can be avoided.

The important part is that the right to judge others comes with the responsibility of judging correctly - and the right can be lost if the responsibilities are not fulfilled in turn.


Would it be okay if some alien race/deity/neutral party came along and said, "actually, humans, the people of the planet Nicetania have been voted by everyone in the galaxy as being the most worthy, because they are inherently good and awesome, so if you get stuck in a crisis with one of them, you're considered the disposable one?"

If those aliens can demonstrate and justify their viewpoint(s) well enough, sure it's okay. It's expected in my world view that sometimes, humans are the less worthy party.


Yeah, I can't even begin to address that train of thought.

How so? It's a simple observation (http://en.wikipedia.org/wiki/Savant_syndrome) that some developmental disorders lead to lacking mental abilities in some areas and heightened abilities in others. What's your problem with the idea that a genetically engineered or programmed sentience could be like that as well when compared to an ordinary human?

(If anything, I expect our first artificial sentiences [regardless of type] to be like that.)

jseah
2011-10-29, 04:33 PM
Slippery slope arguments are somewhat fallacious; judging differences between beings and then treating them differently doesn't equate to "treating them like crap".
Just a question here, what counts as "treating them like crap"?

Because it seems like the difference between you and Aotrs is the threshold at which something becomes "like crap".

That and a differing value on freedom.
EDIT: whatever freedom means anyway.