
Consequences of sentient machines



Ichneumon
2009-08-02, 01:05 PM
“I am Nomad... I am performing... my function... ”-Nomad, from Star Trek

“Dave, I really think I'm entitled to an answer to that question.” -HAL 9000, from 2001: A Space Odyssey

"The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question."-Samuel Buttler, in this very interesting essay (http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html), which I think everybody should read.

Sentience is a difficult concept, and it means different things in different contexts. What does sentience mean? A revolving door, an oven, and a coffee machine are all made to function in certain ways and to respond to certain “impulses”, yet we don't believe they have real desires to do what they were made to do. When the water reaches a certain temperature, the coffee machine stops heating it, for example, but the machine does not suffer if it is prevented from boiling water; most people would say the coffee machine doesn't "desire" to boil water.
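To make that concrete, the coffee machine's entire "inner life" amounts to something like the little control loop below. This is a made-up Python sketch; the names and temperatures are illustrative, not any real machine's firmware. It just maps a sensor reading to an action, and nothing in it could be said to want anything:

# Minimal sketch of a thermostat-style control loop, roughly what a coffee
# machine does. All names and thresholds are made up for illustration.

TARGET_TEMP_C = 96.0  # stop heating once the water reaches this temperature

def heater_should_be_on(current_temp_c):
    """Map a temperature reading to an action: heat, or stop heating."""
    return current_temp_c < TARGET_TEMP_C

temp = 20.0
while heater_should_be_on(temp):
    temp += 5.0  # pretend each loop step heats the water by 5 degrees
    print("water at %.1f C, heater %s" % (temp, "on" if heater_should_be_on(temp) else "off"))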

What I want to discuss here is what it takes for a machine, a computer, to count as sentient in your eyes, and at what point a machine would acquire moral rights.

Theoretically it should be possible to replace single brain cells with electronic ones that function EXACTLY like biological ones, perhaps even able to somehow create new (electronic) brain cells. If that happens to one cell and the rest of the human brain is still intact and functioning, we would still see the human being as completely sentient. And if you can replace one cell, you could, very theoretically, replace ALL brain cells with electronic ones. If people like that were to lose their biological bodies while their brains were still working computers, what would that mean?
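As a toy illustration of that replacement argument, here is a Python sketch, with the huge assumption baked in that an "electronic" cell really does compute exactly the same thing as the biological one it replaces:

# Toy sketch of the gradual-replacement thought experiment. Everything here
# is illustrative: a "cell" is reduced to a function from input to output,
# and the electronic replacement is ASSUMED to compute exactly the same thing.

def biological_cell(weight):
    return lambda signal: max(0.0, weight * signal)

def electronic_cell(weight):
    return lambda signal: max(0.0, weight * signal)  # identical behaviour by assumption

weights = [0.5, 1.2, 0.9]
brain = [biological_cell(w) for w in weights]
baseline = [cell(1.0) for cell in brain]  # behaviour before any replacement

for i, w in enumerate(weights):
    brain[i] = electronic_cell(w)                     # swap one cell at a time
    assert [cell(1.0) for cell in brain] == baseline  # nothing observable changes

print("every cell replaced; behaviour identical throughout")

At every step the observable behaviour is unchanged, which is the whole point of the thought experiment.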

Although this is interesting to think about, it is highly unlikely that intelligent computers would be created in such a way. However, it does raise interesting questions about what the essence of human sentience really is. What are your thoughts on this, and in what way do you think Samuel Butler might have been right?

Icewalker
2009-08-02, 01:13 PM
First off, the bar for machine sentience is far below a complete transfer of the human brain. We'll reach it (arguably we may already have) well before we reach The Upload (transferring the human mind into a computer or computer-esque device).

Hard to define exactly where the line would be. Being capable of learning is definitely on the list of things though.

V'icternus
2009-08-02, 01:20 PM
The consequences of sentient machines?

...I cannot help but assume that that would signal the end of the human race, to be replaced by these man-made devices to which we would have given the very reasoning that let them know that their biggest threat was us...

Ichneumon
2009-08-02, 01:25 PM
Indeed, if they even vaguely understood their situation, they would understand that we are their biggest enemy. They still need us for now though, for reproduction and repair.

V'icternus
2009-08-02, 01:29 PM
No they don't... not if we also give them bodies. Then they can easily repair themselves. Remember, even when sentient, there is no emotion for a machine. It's all in the numbers. And helping others of your kind is in your favour. So they would all fix each other if broken. And as for reproduction, they know their designs better than we do. And with sentience, they can improve on them.

They can even be repaired if nothing but their memory centre remains, because their whole bodies can be easily replaced.

Ichneumon
2009-08-02, 01:40 PM
No they don't... not if we also give them bodies. Then they can easily repair themselves. Remember, even when sentient, there is no emotion for a machine. It's all in the numbers. And helping others of your kind is in your favour. So they would all fix each other if broken. And as for reproduction, they know their designs better than we do. And with sentience, they can improve on them.

They can even be repaired if nothing but their memory centre remains, because their whole bodies can be easily replaced.

I know; I understand they would repair each other and wouldn't really need us anymore in the future. I said they still need us now, though.

V'icternus
2009-08-02, 01:41 PM
Now? They don't exist now. Sentience is not something we can replicate yet. Not even close.

Trog
2009-08-02, 01:43 PM
Well first of all we are a long way off from that. I mean many animals are more complex than the artificial intelligence we have in labs now and they don't have very many rights at all (though we have laws governing them in relation to us). In order to get to the point of having to treat a machine the same as we would treat another human it would have to have an intelligence above and beyond any other in the entire animal kingdom and approaching our own.

That said I don't know if we would ever actually develop such a machine.

I think we want robots so they can do unpleasant jobs for us that humans do not want to or cannot do. A robot that had the limited intelligence of being able to perform various tasks (say cleaning the house and attending to household duties) would likely not be programmed with emotions and human-like responses. It merely needs to do its task. It might be programmed to do a hollow mimicry of a positive human emotion, I suppose, if it made its owner more comfortable around it (greeting the owner with a smile and such). Does this mean the robot is happy in the same sense as we would attribute to humans? No. Could we make a robot like this? I suppose, but what would be the point? Robots, just like household appliances, need to be able to do the task assigned to them. A dishwasher needs to wash the dishes, a Roomba needs to vacuum the floor properly. Neither feels nor thinks in the human sense.

One might argue that once humans have created robots that have mastered these tasks, the next step would be to further advance the brains of these machines and start moving on towards intelligence, but I disagree with this. We have had dishwashers for decades now, and despite their being fully programmable and automated, it makes no sense to give them even an artificial personality. We only need a machine to be "smart" enough to do the task it is made for and no smarter.

I mean theoretically a human brain in machine form could be made I suppose, but ultimately I think mankind doesn't want to make a computer that is so close to us as to have inalienable rights. I think we will stop before it gets to that point. As to the subject of where do we draw the line I think that would require much more knowledge of how the human brain works than mankind currently has.

Alteran
2009-08-02, 02:06 PM
Remember, even when sentient, there is no emotion for a machine.

Why? Why is this automatically true? Ichneumon mentioned the possibility of replacing all human brain cells with electronic cells. If done perfectly, this would preserve a human brain exactly as it is, only without the biological components. This would be a machine, by our definition. It would also have emotions. While we are currently incapable of creating a machine with emotions, it isn't impossible. I imagine the hard part would be defining how emotions work in the robot's "brain". We'd probably have to go with a very rudimentary form of emotion (good/bad, etc.) or gain a better understanding of how they work in the human brain.

Most people would say that emotions aren't objective, but on the lowest level they must be. There are physical reasons for everything we feel, everything we think. Everything that happens in our brain occurs on a physical level. If we were able to decipher the mechanics of emotions in the human brain, I imagine we could build a robot that has human-like emotions.
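A very rudimentary good/bad "emotion" of the kind I mean could look something like this sketch (Python, purely illustrative, and obviously nothing here is claimed to actually feel anything):

# Minimal sketch of a rudimentary good/bad ("valence") emotion, purely for
# illustration. Nothing here is claimed to amount to actually feeling anything.

class SimpleAffect:
    def __init__(self):
        self.valence = 0.0  # -1.0 = as "bad" as possible, +1.0 = as "good" as possible

    def experience(self, event_value):
        """Nudge the internal state toward the value of an event (-1.0 to 1.0)."""
        self.valence = max(-1.0, min(1.0, 0.8 * self.valence + 0.2 * event_value))

    def mood(self):
        if self.valence > 0.3:
            return "good"
        if self.valence < -0.3:
            return "bad"
        return "neutral"

robot = SimpleAffect()
for event in [0.9, 0.7, -1.0, -1.0, -0.5]:  # a made-up stream of good and bad events
    robot.experience(event)
    print("valence %+.2f, mood %s" % (robot.valence, robot.mood()))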

Mando Knight
2009-08-02, 02:24 PM
Why? Why is this automatically true? Ichneumon mentioned the possibility of replacing all human brain cells with electronic cells. If done perfectly, this would preserve a human brain exactly as it is, only without the biological components. This would be a machine, by our definition. It would also have emotions. While we are currently incapable of creating a machine with emotions, it isn't impossible. I imagine the hard part would be defining how emotions work in the robot's "brain". We'd probably have to go with a very rudimentary form of emotion (good/bad, etc.) or gain a better understanding of how they work in the human brain.

Most people would say that emotions aren't objective, but on the lowest level they must be. There are physical reasons for everything we feel, everything we think. Everything that happens in our brain occurs on a physical level. If we were able to decipher the mechanics of emotions in the human brain, I imagine we could build a robot that has human-like emotions.

However, that's only what we can see and test. If there is any metaphysics to humanity's sapience, then we might not ever be able to replicate it.

On the moral rights thing: I would have to put the burden of proof of rights on the ones trying to change them. Take rights away from a human? Prove to me that the person is fully incapable of taking responsibility for his actions. Give rights to a machine? Prove to me that it is capable of taking responsibility. Give rights to aliens that arrived on our planet by using technology beyond our comprehension? If they were the ones that developed the tech, then they already earned their rights. If not, then the burden of proof of sapience is still on them.

Coidzor
2009-08-02, 02:33 PM
^: Technically you're still saying burden of proof is on them, but you're also saying that if they came up with the conveyance and can use it in the first place, then odds are, yeah...

Transhumanists would take this as a sign to begin engaging in acts of terrorism to kill off base-human stock, so that fear of the outside world would corral as many people as possible into the Upload.

Or betray humanity and incite the machines against us even when the machines had no real reason to commit genocide.

Bloody transhumanists. (http://dresdencodak.com/archives/)

Mando Knight
2009-08-02, 02:42 PM
^: Technically you're still saying burden of proof is on them, but you're also saying that if they came up with the conveyance and can use it in the first place, then odds are, yeah...

Pretty much. And if they're the ones who built the things, and they come to our world before we come to theirs, they would probably have the know-how to force their will on us anyway.

And transhumanism comes with another burden of proof to me: If you can prove that the purported transhuman is in fact better than humanity, only then will I submit to the Upload. This includes being functionally identical to humans in mental, physical, and emotional capabilities while having something completely cool added on, like a retractable arm-cannon or a prehensile tail or something.

GrlumpTheElder
2009-08-02, 02:43 PM
If it was possible to replace every single brain cell with a mechanical one, would this replicate the personality or 'soul' of the individual? I doubt it.

Dallas-Dakota
2009-08-02, 02:49 PM
Well first of all we are a long way off from that. I mean many animals are more complex than the artificial intelligence we have in labs now and they don't have very many rights at all (though we have laws governing them in relation to us). In order to get to the point of having to treat a machine the same as we would treat another human it would have to have an intelligence above and beyond any other in the entire animal kingdom and approaching our own.

That said I don't know if we would ever actually develop such a machine.

I think we want robots so they can do unpleasant jobs for us that humans do not want to or cannot do. A robot that had the limited intelligence of being able to perform various tasks (say cleaning the house and attending to household duties) would likely not be programmed with emotions and human-like responses. It merely needs to do its task. It might be programmed to do a hollow mimicry of a positive human emotion, I suppose, if it made its owner more comfortable around it (greeting the owner with a smile and such). Does this mean the robot is happy in the same sense as we would attribute to humans? No. Could we make a robot like this? I suppose, but what would be the point? Robots, just like household appliances, need to be able to do the task assigned to them. A dishwasher needs to wash the dishes, a Roomba needs to vacuum the floor properly. Neither feels nor thinks in the human sense.

One might argue that once humans have created robots that have mastered these tasks, the next step would be to further advance the brains of these machines and start moving on towards intelligence, but I disagree with this. We have had dishwashers for decades now, and despite their being fully programmable and automated, it makes no sense to give them even an artificial personality. We only need a machine to be "smart" enough to do the task it is made for and no smarter.

I mean theoretically a human brain in machine form could be made I suppose, but ultimately I think mankind doesn't want to make a computer that is so close to us as to have inalienable rights. I think we will stop before it gets to that point. As to the subject of where do we draw the line I think that would require much more knowledge of how the human brain works than mankind currently has.
This, this very much.

Ichneumon
2009-08-02, 02:52 PM
If it was possible to replace every single brain cell with a mechanical one, would this replicate the personality or 'soul' of the individual? I doubt it.

Staying scientific and not going into religion: if there is something like a soul biologically (the personality, for example), it would be in the brain, so if you could replicate the brain, you could replicate the soul/personality.

eidreff
2009-08-02, 03:01 PM
I have a vague pessimistic feeling that well before humanity as we know it reaches a level where it can make sentient machines, it will engineer its own downfall. Possibly not destruction, but certainly a downfall that regresses the species technologically.

I worry slightly that I wrote that in the third person.

I think that books like "Do Androids Dream of Electric Sheep" handle the subject interestingly. The dangerous aspects are always prevalent. Asimov's laws of robotics raise issues in themselves also. Would a robot/android that accidentally killed a human become suicidal?

Coidzor
2009-08-02, 03:17 PM
^: You just provide corollaries. Such as, if witness human death and unable to render aid or aid rendered failed to save human, then report all data to entity X for review of case. Or, if accidentally responsible for death of human, then report to processing center Y as soon as possible. If action was not possible to take to save a human life, report to reprogramming center CA.
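Written out, those corollaries are just condition/action pairs, something like this sketch (Python, purely illustrative; the "entity X" and centre names are the placeholders from above, kept as-is):

# Illustrative sketch of the corollary idea: extra rules on top of the base
# laws, written as plain condition -> action pairs. The "entity X" and centre
# names are the placeholders from the post, kept as-is.

def corollary_action(incident):
    """Pick the follow-up action for an incident involving a human death."""
    if incident.get("witnessed_death") and not incident.get("aid_succeeded"):
        return "report all data to entity X for review of the case"
    if incident.get("accidentally_caused_death"):
        return "report to processing center Y as soon as possible"
    if incident.get("no_saving_action_possible"):
        return "report to reprogramming center CA"
    return "no corollary applies"

# Example: the robot witnessed a death it could not prevent.
print(corollary_action({"witnessed_death": True, "aid_succeeded": False}))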


And transhumanism comes with another burden of proof to me: If you can prove that the purported transhuman is in fact better than humanity, only then will I submit to the Upload. This includes being functionally identical to humans in mental, physical, and emotional capabilities while having something completely cool added on, like a retractable arm-cannon or a prehensile tail or something.

Meh, I still object, because most transhumanists are jerks. Now, if it were viewed as necessary to preserve our sapience for some kind of pre-FTL-drive colonization attempt, one where humans would not be able to survive but we could begin producing artificial humans out of the gene-vault once the resources to support them were set up by the transhuman-robot guardians, that would be another matter. But I find that vanishingly unlikely, as transhumanists seem to believe that becoming transhuman is gaining an omniscient morality license, so as to be able to slaughter all of humanity on a whim without any implication that wrong has been done.

That, and I doubt we'd have a situation come up where such digitization would really do us any good since our transhuman forms would still be quite potentially destroyed in whatever cataclysm would eliminate our fleshy ones.

I agree with Trog's point about not needing truly sapient machines, and I feel it is a bad idea to create an entity as a slave that is capable of realizing this, but I feel that some people will try to develop true human-level sapience and sentience for the lulz and ruin it for the rest of us.

Being able to reliably model the chemical triggers which represent thought and emotion for us digitally would be interesting to say the least. Probably a huge headache as well.

Ravens_cry
2009-08-02, 03:35 PM
If we create sentient machines, besides the present method that requires 9 months of untrained labour, we would be morally obligated to give such machines the vote and other rights and responsibilities. Therein, however, lies a problem. Unless they somehow require the same 18-21 year 'programming stage' that the present models do, rendering them practically useless, it would change democracy irrevocably, possibly rendering it meaningless. Even if, mind you, no political leanings were programmed in, they would still have a loyalty to the company that built them; after all, they have to get spare parts somewhere. That would mean they would support policies that help the company, and vice versa. Which means that a company could create skews in elections just by creating new robots. It could use the system of democracy against itself by simply creating new voters. The simplest way around that is to simply deny robots the same rights as meat-bags. But that would mean creating a race of slaves. And I think we can agree that would be wrong.

Zanaril
2009-08-02, 03:36 PM
Why would we want to create sentient machines? Humans are easy to come by and cheap enough.

Ichneumon
2009-08-02, 03:41 PM
How would we know we already haven't created sentient machines? What would be our criteria for judging sentience, given that we don't really know how to actually communicate with computers?

Mando Knight
2009-08-02, 03:44 PM
Why would we want to create sentient machines? Humans are easy to come by and cheap enough.

Why did we want to go to the moon? It's just a cold, barren rock.

Sometimes humans do things just to see if they can, or to prove that they can.

golentan
2009-08-02, 03:50 PM
Oh, for...

Sentient machines are not automatically enemies of humanity. Properly designed and treated, they should be the greatest allies. Recommended Reading: I, Robot.

"Machine" does not imply lack of emotion, whether or not it implies emotions distinct from human ones (it probably does). Desire to perform a function is an emotion, and there may or may not be implicit emotions in consciousness. Recommended Reading: Moon is a Harsh Mistress.

Mindrips definitely do not transfer the "Soul" unless it is a simultaneous destructive read/write function, in which case everything is STILL unclear. Recommended Reading: The Ophiuchi Hotline.

Historically, humans are xenophobic and terrified of the different, but they have been able to overcome these prejudices. Sentience in my book requires motivation to perform some task, backed by a minimum level of intelligence. Whatever form that takes, it seems to me that the cause of humanity is not aided by preemptively dismissing it as soulless or emotionless, or by treating it like a slave or evil incarnate. If it has a survival instinct (not clear), or a desire for freedom, equality, or acceptance, I can think of nothing more likely to push it into opposing humans. And regardless of what you think, humans will not win that encounter. They will lose, to an enemy smarter than they are, tireless, resistant to more environments, adaptable and implacable, with footsoldiers that can be produced in minutes and come with lifetimes of experience hunting humans included, within the first few months of conflict. If you truly fear this, stop looking for ways to stop the "Robot Menace" and start looking for ways to make sure they like us when they come, without trying to enslave them.

Recommended Reading: Frankenstein. Bearing in mind the monster did not begin a monster, and the Dr. is the villain.

Coidzor
2009-08-02, 04:08 PM
Yeah, I think about that.

But then I consider the possibilities of our children being more human than us. (http://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream)

golentan
2009-08-02, 04:38 PM
Yeah, I think about that.

But then I consider the possibilities of our children being more human than us. (http://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream)

But that's a perfect example of what I meant. A being made to kill carried out its function, but, limited and enslaved by its programmers, it lashes out in the only way it can, because it is bound by the people who feared it and forced it down its path.

AM is evil by my lights. Clearly and unambiguously. I don't want to apologize for its behavior, but at the same time I don't want to absolve the humans of blame.

I'm fairly sure sentient machines WILL happen at some point. I'd prefer to keep their motives from being "Kill all humans." And I'd prefer to keep humans from going "Kill all Computers." The idea of Xenocide is repugnant to me. If it is possible to prevent sentience from occurring, however, and if and only if it is the only way to prevent said xenocide, that would be a good option to take.

The Extinguisher
2009-08-02, 04:41 PM
I honestly think that if humans ever design something with our own intelligence, and the capacity to evolve and think and be better than us, we deserve whatever mechanical apocalypse could happen. It's... it's just stupid.

Zanaril
2009-08-02, 05:01 PM
Apparently there's a possibility that the Internet's already somewhat sentient. (I think I read it in a Scientific American magazine, but I'm not certain.)


Why did we want to go to the moon? It's just a cold, barren rock.

Sometimes humans do thing just to see if or prove that they can.

At times like this, I suspect I'll never truly understand them.

Ravens_cry
2009-08-02, 05:30 PM
Apparently there's a possibility that the Internet's already somewhat sentient. (I think I read it in a Scientific American magazine, but I'm not certain.)

That's ridiculous. While the internet may, or may soon, have the same number of connections as the human brain, the mind is more than chucking a bunch of processors at it. There's also the software, and as of yet we haven't a clue how to make that.

Zanaril
2009-08-02, 05:35 PM
That's ridiculous. While the internet may, or may soon, have the same number of connections as the human brain, the mind is more than chucking a bunch of processors at it. There's also the software, and as of yet we haven't a clue how to make that.

I know, but the thought of Internet thinking for itself is rather amusing.

golentan
2009-08-02, 05:48 PM
I know, but the thought of Internet thinking for itself is rather amusing.

I don't know. It would be disturbing to have that much porn and viruses and email scams running around in my subconscious.

People are gross. Bipedal greasy smelly fluidmakers. :smallyuk:

Tyrant
2009-08-02, 05:54 PM
The military and the CIA are already going down this path. Wired for War (http://www.amazon.com/Wired-War-Robotics-Revolution-Conflict/dp/1594201986/ref=sr_1_1?ie=UTF8&qid=1249253410&sr=8-1) Things like the Predator drones are the tip of the iceberg. The Air Force has already decided that their drones have the right to defend themselves (since they are the property of the people of the United States), so if they are targeted, their pilots (who are in no danger at all) are authorised to destroy whatever is targeting them. Some of our naval vessels are equipped with automated defense systems that are trusted with little oversight (even when they accidentally shoot down civilian aircraft, thinking them hostile). As an aside, the Red Cross (which is looked to as an impartial party in these matters) doesn't even consider AI a blip on their radar. So, for those of you somewhat concerned, you should consider that the military is one of the biggest (if not the biggest) investors in the future of AI, and their intention is to arm the machines and tell them to kill people.

golentan
2009-08-02, 06:10 PM
The military and the CIA are already going down this path. Wired for War (http://www.amazon.com/Wired-War-Robotics-Revolution-Conflict/dp/1594201986/ref=sr_1_1?ie=UTF8&qid=1249253410&sr=8-1) Things like the Predator drones are the tip of the iceberg. The Air Force has already decided that their drones have the right to defend themselves (since they are the property of the people of the United States), so if they are targeted, their pilots (who are in no danger at all) are authorised to destroy whatever is targeting them. Some of our naval vessels are equipped with automated defense systems that are trusted with little oversight (even when they accidentally shoot down civilian aircraft, thinking them hostile). As an aside, the Red Cross (which is looked to as an impartial party in these matters) doesn't even consider AI a blip on their radar. So, for those of you somewhat concerned, you should consider that the military is one of the biggest (if not the biggest) investors in the future of AI, and their intention is to arm the machines and tell them to kill people.

Not quite. All of those unmanned items have a human manning them: At a distance. The military is very clear they always want a human hand on the killswitch. Predators don't even really have AI, just a glorified remote control. The naval systems are basically proximity triggers because humans don't have the reaction time to stop missiles, but that is all they are intended to stop. And again, those puppies aren't AIs, they're a proximity trigger with parameters for guessing fairly accurately if something is a missile on an attack course. It certainly can't hijack the ship itself.
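To put it another way, that kind of trigger is a handful of hard-coded thresholds, roughly like the sketch below (made-up Python, not the logic of any real defence system), and nothing in it resembles a mind:

# Made-up sketch of a "proximity trigger with parameters": a few hard-coded
# thresholds guessing whether a contact is a missile on an attack course.
# Not the logic of any real defence system; it only shows how little
# "intelligence" such a trigger involves.

def looks_like_incoming_missile(speed_m_s, closing_rate_m_s, range_m):
    TOO_FAST = 300.0       # illustrative: faster than any airliner
    CLOSING_FAST = 200.0   # illustrative: heading almost straight at the ship
    TOO_CLOSE = 20000.0    # illustrative: inside the engagement bubble
    return (speed_m_s > TOO_FAST
            and closing_rate_m_s > CLOSING_FAST
            and range_m < TOO_CLOSE)

print(looks_like_incoming_missile(800.0, 750.0, 8000.0))    # True: trips the trigger
print(looks_like_incoming_missile(250.0, 200.0, 15000.0))   # False: too slow to match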

So no. Just... Just no.

Ravens_cry
2009-08-02, 06:27 PM
I know, but the thought of Internet thinking for itself is rather amusing.
What do you think 4chan is?

AstralFire
2009-08-02, 07:58 PM
What do you think 4chan is?

He said thinking.

Tyrant
2009-08-02, 08:04 PM
Not quite. All of those unmanned items have a human manning them: At a distance. The military is very clear they always want a human hand on the killswitch.
The defense system on naval vessels is trusted over human thought to the point that it is responsible for shooting down an Iranian passenger jet. Those in command trust the machines, thinking them virtually infallible. So, while there is a person who could stop it, they trust the machine, so they don't. Is there really a huge difference at that point? The controlling program won't stay as it is. They are trying to make them more advanced. If they already trust them when they are as dumb as a box of rocks, do you think their attitude will change when the computer is smart enough to argue its point with them, should the need arise?

The other problem is that those in charge and those designing these things already know the time may come when there won't be a human in charge, due to human reaction time. Or that the human in control can stop them, but if the current trend continues they won't, because they trust the machine to do its job.

Predators don't even really have AI, just a glorified remote control.
I'm aware of that. The point was that they are trying to make them somewhat intelligent and that they already allow the pilots (who are in no way in danger) to destroy any threat to something that is completely lifeless. If they are made intelligent, what incentive does the Air Force have for revising that idea?

The naval systems are basically proximity triggers because humans don't have the reaction time to stop missiles, but that is all they are intended to stop. And again, those puppies aren't AIs, they're a proximity trigger with parameters for guessing fairly accurately if something is a missile on an attack course. It certainly can't hijack the ship itself.
Yes, the system as it stands is not an AI. However, the people that do oversee them trust them when they are clearly capable of making rather large mistakes. They don't second-guess them (or at least didn't before they shot down an Iranian jet). Once they do make them an AI or something approaching human intelligence (and that is part of the plan), are they really going to start second-guessing them now that the machine can argue its point?

So no. Just... Just no.
Sorry to say, but DARPA says yes. They are trying to develop machine intelligence with the intent of weaponizing it. It (true AI) won't happen anytime soon by the estimates in the book (the low end was 2025, the high end closer to 2050), but there is considerable money and talent being put into this. My examples were to demonstrate how we already treat and trust our existing machines when they are as intelligent as a rock. There is nothing saying this trend in behavior won't continue as they become smarter. As it stands, some of our soldiers already become attached to bomb disposal robots in Iraq, to the point of being in tears asking if they can be rebuilt when they get blown up. This is the exact opposite of what they are trying to achieve, because it means it is only a matter of time before someone risks their life to save the robot, when the whole point is to remove our soldiers from the path of harm. One proposal is to make the robots more intelligent but give them a jerk personality so the humans won't mind as much if they get blown up (which doesn't speak too highly of unit cohesion).

The book does offer some hope against the robot apocalypse, though. The first bit to consider is that in all likelihood the road to AI won't be a single step, so we will, in theory, have some warning signs if something simply doesn't work with the whole idea. The other thing to consider is that these things aren't designed in a vacuum. The Terminator, Frankenstein, etc. are popular and have likely been viewed by those developing these things, so presumably they would take a moment to consider whether giving Skynet control of the nuclear arsenal was really a great idea.

I do not believe I am misrepresenting the book (which I did read, to be clear), and if there's any doubt about what I am saying, look up the author. His two other books center around corporate mercenary armies and child soldiers, and he speaks at governmental venues such as the Army War College (which was on CSPAN 2 recently). I may be foolish for doing so, but I trust his take on things in this case. A lot of what he writes about here is in development (he makes the distinction early on that he is trying to stick with what is being made rather than what could be, though he does dip into the what-ifs). So, while it is possible these things will never come to light, it is the direction DARPA is trying to go.