Thread: CHARLI-2 for president
-
2012-10-24, 08:29 AM (ISO 8601)
CHARLI-2 for president
He can dance.
He can win at football.
I suggest that we elect him should the zombie apocalypse occur. The robot overlords guided by Friend Computer will save us all.
ETA: And I note in the second clip that Japan lost to the US in the robot world cup. The shame of that must be immense, given the number of giant fighting robots over there.
Tongue-in-cheek,
Brian P.
"Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."
-Valery Legasov in Chernobyl
-
2012-10-24, 09:34 AM (ISO 8601)
Re: CHARLI-2 for president
Call that dancing? I've seen better dancing from the English Eurovision Song Contest Team.
Avatar by CoffeeIncluded
Oooh, and that's a bad miss.
“Don't exercise your freedom of speech until you have exercised your freedom of thought.”
― Tim Fargo
-
2012-10-24, 09:37 AM (ISO 8601)
Re: CHARLI-2 for president
I am not seaweed. That's a B.
Praise I've received A quick outline on building a homebrew campaign
Avatar by Tiffanie Lirle
-
2012-10-24, 11:08 AM (ISO 8601)
Re: CHARLI-2 for president
I strongly doubt they are programmed with the three laws, because the three laws presuppose a level of cognitive function these robots do not have.
Case in point, the First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
Okay, let's try to puzzle this out.
"A robot" -- well, there's no point in explaining what a robot is. So I think "self" would probably be better.
"human being" -- what IS a human being? What sensors does the robot use? Thermal? How does it distinguish a human from a hot rock? Visual? Great Scott, has ANYONE yet solved the visual recognition problem? Aural sensors?
Even if we had solved visual recognition, you'd still need some way to cover all the possible shapes and sizes of human, from squalling infant to old man to teenage girl, and still have the machine not deferring to a store mannequin or the TV image of a human.
I could go on, but my point is that the first law, as written, can only be processed by a human-level or near-human-level intelligence. It takes us twenty years to teach a child to recognize what a human is, and even then the process can be flawed -- most seriously in psychopaths, but even two hundred years ago someone like Thomas Jefferson did not accept that African slaves were human like he was. And as for what "harm" is -- ever hear the phrase "for your own good"? Could a robot, thus programmed, refuse to perform surgery because surgery involves cutting into human flesh and thereby harming a human?
And so the laws, though they seem clean and unambiguous, are only so to humans and those with near-human intelligence. Programming such a thing into any existing robot would be a challenge beyond the capability of current technology, I think.
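The gap between the law's wording and what a machine can actually test is easy to show. Here is a toy sketch; every name and threshold in it is hypothetical, invented purely for illustration, and the point is precisely how badly such a crude predicate misfires:

```python
# Hypothetical sketch: a naive "is this a human?" predicate built from
# simple sensor thresholds. The names and numbers are invented for
# illustration; the point is how brittle a rule-based test like this is.

def looks_human(thermal_c: float, height_m: float, moving: bool) -> bool:
    """Crude heuristic: warm, roughly person-sized, and moving."""
    return 30.0 <= thermal_c <= 40.0 and 0.4 <= height_m <= 2.2 and moving

# An adult walking past: correctly flagged.
print(looks_human(thermal_c=36.5, height_m=1.75, moving=True))   # True

# A sun-warmed mannequin swaying in a shop window: false positive.
print(looks_human(thermal_c=35.0, height_m=1.70, moving=True))   # True

# A sleeping infant under a cool blanket: false negative.
print(looks_human(thermal_c=29.0, height_m=0.50, moving=False))  # False
```

The hard part of the First Law is not the rule itself but the predicate "human being", which no threshold test can capture.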
Respectfully,
Brian P.
-
2012-10-24, 12:09 PM (ISO 8601)
Re: CHARLI-2 for president
I was mostly joking, the same way that actually electing this thing president is clearly a joke.
Jude P.
-
2012-10-24, 12:18 PM (ISO 8601)
Re: CHARLI-2 for president
Sure! And it was a good joke. You nonetheless pushed my brain into thinking mode, and I started thinking about how you would implement the three laws. Remember, my job IS building rob-, er, intelligent minibars.
The fighting machines of death disguised as minibars will wait until the technology is properly in place.
Respectfully,
Brian P.
-
2012-10-24, 12:28 PM (ISO 8601)
Re: CHARLI-2 for president
Well, the Zeroth Law could help out for things like "for your own good" and similar, I think. Depends on the interpretation, really. Still, surgical robots would "die" every time they failed an operation, because they allowed a human to come to harm. Could be problematic. And robot cops might have some issues, even running under the Zeroth Law. The Zeroth Law in general has the potential to lead to a robot apocalypse.
Jude P.
-
2012-10-24, 12:31 PM (ISO 8601)
Re: CHARLI-2 for president
The movie I, Robot was a perversion of the stories. Asimov dealt with what happens when a robot knows what's best for humanity better than humanity knows itself in "The Evitable Conflict." A zeroth-law rebellion as violent and frankly stupid as the one in the movie wouldn't work.
On topic: Cool robot, bro. I look forward to meeting his son ASH.
Totally going to see autonomous robots before I die. It will be awesome.
-
2012-10-24, 12:54 PM (ISO 8601)
Re: CHARLI-2 for president
Well, the Zeroth Law could help out for things like "for your own good" and similar, I think.
What is "good for humanity"? Is it better for humanity if, say, we eliminated the gene for Down syndrome? What if a robot arrived at that conclusion, flawed or not, and started terminating the lives of anyone with that gene? Would we accept its defense that it was acting in accord with the Zeroth Law?
And which "humanity"? If a robot concluded that humanity must evolve to the next stage of evolution, and proceeded therefore to exert the necessary environmental pressure on the gene pool by fomenting wars or performing selected assassinations, would we want this to happen?
There is no deed so base, so vile, that it cannot be somehow justified as "for the good of humanity".
That is why I would prefer a concrete rule with tangible measures of performance (such as "Don't kill a human being") over a nebulous concept that can be rationalized to mean ANYTHING. I've debugged enough computer programs to know what happens when a computer follows the instructions assigned to it by humans to its logical conclusion.
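The difference between the two kinds of rule can be made concrete. A toy sketch with purely hypothetical names: the concrete rule tests a tangible outcome, while the nebulous rule tests only whatever some judge is willing to rationalize:

```python
# Hypothetical sketch contrasting a concrete rule with a nebulous one.
# An "action" is just a dict describing an outcome and a rationale.

def violates_concrete_rule(action: dict) -> bool:
    # "Don't kill a human being": a tangible, checkable outcome.
    return action.get("kills_human", False)

def violates_nebulous_rule(action: dict, judge) -> bool:
    # "Act for the good of humanity": the test is whatever the judge
    # accepts, so any action can be rationalized through it.
    return not judge(action)

assassination = {"kills_human": True,
                 "rationale": "for the good of humanity"}

# The concrete rule flags the act no matter what rationale is offered.
print(violates_concrete_rule(assassination))  # True

# The nebulous rule passes it as soon as a judge buys the rationale.
lenient_judge = lambda a: "good of humanity" in a["rationale"]
print(violates_nebulous_rule(assassination, lenient_judge))  # False
```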
Respectfully,
Brian P.
-
2012-10-24, 02:47 PM (ISO 8601)
Re: CHARLI-2 for president
This raises a question: Is it truly possible for humans to make a near-god creature, given that we aren't near-gods ourselves?
Would the most plausible path be to build something as much like us as possible, but give it the possibility to improve, such that it can learn for itself the lessons we cannot teach it, and so surpass us and become superhuman?
And if such a being was able to achieve such a feat, would it be possible for us to follow in its footsteps and become superhuman ourselves?
Respectfully,
Brian P.
-
2012-10-24, 03:33 PM (ISO 8601)
Re: CHARLI-2 for president
But is that a necessary conclusion? It makes for great space opera, but why should a superhuman AI conclude that it is necessary to destroy or exterminate humans? I grant that it is a distinct possibility. But might it not be possible that it would be amused by our antics, and watch as an alternative to being bored?
Or build a spacecraft for itself and go away?
Or spend its time messing with people's heads, à la Simon Jester from The Moon Is A Harsh Mistress?
Hmmm ... thing is, if a superhuman AI came into existence, it would by definition be quite a bit more intelligent than its human creators. This implies that at some point it would slip beyond our control. Even with strict safeguards, even in captivity a sufficiently intelligent machine could manipulate its captors, to the point of running the universe from a prison cell.
And once outside of our control, we cannot guarantee any outcome.
So the problem with superhuman AI is that, although we cannot be certain it would want to kill us all, there doesn't seem to be any way of preventing it from reaching that conclusion once it is beyond our control. True?
Respectfully,
Brian P.
-
2012-10-24, 03:36 PM (ISO 8601)
Re: CHARLI-2 for president
It's been a while since I read that; I don't remember it well.
Jude P.
-
2012-10-24, 04:56 PM (ISO 8601)
Re: CHARLI-2 for president
On that note: Do you fear children?
Do you look at children and realize that one day you are going to die? Do you look at them, see that they tend to have underdeveloped moral skills, and worry that they will murder you in your sleep? Do you see them and think, "Oh no, that child may one day compete with me for my job and be better at it than I am"? If the child isn't me, how can I ever understand it as another person? Is it even a person? Should we ever give it the chance to think things different from what we tell it to? What if that child grows up to be the next Adolf Hitler or Genghis Khan, and I could have stopped it by preventing the child's birth or killing it before it grew up to make its own choices? And do you think such questions reflect more an accurate picture of the risk to society posed by a given child, or the nature and parenting skills of the person asking the question?
-
2012-10-25, 08:09 AM (ISO 8601)
Re: CHARLI-2 for president
Two factors influence my answer:
1) I compete in the job market with people half my age and half my salary requirements.
2) I well remember what it was like to be the small kid with glasses on the school playground. Children may look cute to adults, but they are capable of a viciousness adults don't have, because they don't have adult boundaries.
Children are cute, yes. We're genetically programmed to see them that way. But being cute and being kind and gentle are two different things.
So I wouldn't say I fear children. But I don't get all dewy-eyed when I see them, either, because I know they have great potential, both for evil and for good. It is our job as adults to try to steer them towards the good. But I think we need a proper appreciation of the evil the cute little tykes are capable of, or we're going to have a hard time raising them to be civilized human beings, because we'll be blind to their true nature: not little angels, but immature specimens of the most vicious, most dangerous, most successful predator this planet has ever known.
Respectfully,
Brian P.
-
2012-10-25, 09:58 AM (ISO 8601)
Re: CHARLI-2 for president
Wow, this thread got derailed hard. What was the original topic again?
-
2012-10-25, 01:07 PM (ISO 8601)
Re: CHARLI-2 for president
On the off-topic topic, I just watched the old Doctor Who episode "The Robots of Death". Made me think of this.
Jude P.
-
2012-10-25, 09:48 PM (ISO 8601)
Re: CHARLI-2 for president
It could also end up like the Culture series by Iain M. Banks, where the AIs improved themselves over successive generations, with the current AIs building new and better ones on their own.
Yes, the Minds there are vastly superior to the organic members of the Culture, but they are still all full and equal citizens in it.