  1. - Top - End - #1
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default CHARLI-2 for president

    he can dance

    he can win at football.

    I suggest that we elect him should the zombie apocalypse occur. The robot overlords guided by Friend Computer will save us all.

    ETA: And I note in the second clip that Japan lost to the US in the robot world cup. The shame of that must be immense, given the number of giant fighting robots over there.

    Tongue-in-cheek,

    Brian P.
    Last edited by pendell; 2012-10-24 at 08:29 AM.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  2. - Top - End - #2
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by pendell View Post
    he can dance

    he can win at football.

    I suggest that we elect him should the zombie apocalypse occur. The robot overlords guided by Friend Computer will save us all.

    ETA: And I note in the second clip that Japan lost to the US in the robot world cup. The shame of that must be immense, given the number of giant fighting robots over there.

    Tongue-in-cheek,

    Brian P.
    Science is scary. Do those things have the three laws?
    Jude P.

  3. - Top - End - #3
    Titan in the Playground
     
    Asta Kask's Avatar

    Join Date
    Mar 2009
    Location
    Gothenburg, Sweden
    Gender
    Male

    Default Re: CHARLI-2 for president

    Call that dancing? I've seen better dancing from the English Eurovision Song Contest Team.
    Avatar by CoffeeIncluded

    Oooh, and that's a bad miss.

    “Don't exercise your freedom of speech until you have exercised your freedom of thought.”
    ― Tim Fargo

  4. - Top - End - #4
    Titan in the Playground
     
    Kelb_Panthera's Avatar

    Join Date
    Oct 2009

    Default Re: CHARLI-2 for president

    Quote Originally Posted by noparlpf View Post
    Science is scary. Do those things have the three laws?
    Gods, I hope they have something a little more complex than Asimov's laws guiding them.

    The three laws got us the movie I, Robot. (Decent flick, imo, but it highlights only one of the many flaws in Asimov's three laws.)
    I am not seaweed. That's a B.

    Praise I've received
    Spoiler
    Quote Originally Posted by ThiagoMartell View Post
    Kelb, recently it looks like you're the Avatar of Reason in these forums, man.
    Quote Originally Posted by LTwerewolf View Post
    [...] bringing Kelb in on your side in a rules fight is like bringing Mike Tyson in on your side to fight a toddler. You can, but it's such massive overkill.
    A quick outline on building a homebrew campaign

    Avatar by Tiffanie Lirle

  5. - Top - End - #5
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    I strongly doubt they are programmed with the three laws, because the three laws presuppose a level of cognitive function these robots do not have.

    Case in point: Law 1: "A robot may not harm a human being nor, by inaction, allow a human being to come to harm"

    Okay, let's try to puzzle this out.

    "A robot" -- well, there's no point in explaining what a robot is. So I think "self" would probably be better.

    "human being" -- what IS a human being? What sensors does the robot use ? Thermal? How does it distinguish a human from a hot rock? Visual? Great Scott, has ANYONE yet solved the visual recognition problem? Aural sensors?

    Even if we had solved visual recognition, you'd still need some way to cover all the possible shapes and sizes of human, from squalling infant to old man to teenage girl et al., and still have the machine not treating a store mannequin or the TV image of a human as the real thing.

    I could go on, but my point is that the first law, as written, can only be processed by human-level or near-human-level intelligence. It takes us twenty years to teach a child to recognize what a human is, and even then the process can be flawed -- most seriously in psychopaths, but even two hundred years ago someone like Thomas Jefferson did not accept that African slaves were human like he was. And as to what "harm" is -- ever hear the phrase "For your own good"? Could a robot, thus programmed, not perform surgery because surgery involves cutting into human flesh and thereby harming a human?

    And so the laws, though they seem clean and unambiguous, are only so to humans and those with near-human intelligence. Programming such a thing into any existing robot would be a challenge beyond the capability of current technology, I think.
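
    To make the point concrete, here is a toy sketch (purely illustrative; hypothetical function names, no real robot API) of what the First Law looks like once it is reduced to checks a simple sensor package can actually run:

        # Toy sketch: the First Law reduced to what crude sensors can report.
        # Every hard question has been pushed into the inputs.

        def looks_human(thermal_reading_c: float, silhouette_matches_person: bool) -> bool:
            # A hot rock can pass the thermal test; a mannequin or a TV image
            # can pass the silhouette test. Neither captures "human being".
            return thermal_reading_c > 30.0 and silhouette_matches_person

        def first_law_permits(action_harms_target: bool, target_is_human: bool) -> bool:
            # "A robot may not harm a human being" as a boolean: trivial to
            # write, only because deciding the inputs is the actual problem.
            return not (action_harms_target and target_is_human)

        # A sun-warmed mannequin in a shop window passes both naive checks:
        print(looks_human(thermal_reading_c=35.0, silhouette_matches_person=True))  # True

    The code itself is trivial; everything genuinely hard -- what counts as a human, what counts as harm -- has to be answered before such a function could ever be called.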

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  6. - Top - End - #6
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    I was mostly joking, the same way actually electing this thing president is clearly a joke.
    Jude P.

  7. - Top - End - #7
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    Sure! And it was a good joke. You nonetheless pushed my brain into thinking mode, and I started thinking about how you would implement the three laws. Remember, my job IS building rob-, er, intelligent minibars.

    The fighting machines of death disguised as minibars will wait until the technology is properly in place.

    Respectfully,

    Brian P.
    Last edited by pendell; 2012-10-24 at 12:18 PM.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  8. - Top - End - #8
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by pendell View Post
    Sure! And it was a good joke. You nonetheless pushed my brain into thinking mode, and I started thinking about how you would implement the three laws. Remember, my job IS building rob-, er, intelligent minibars.

    The fighting machines of death disguised as minibars will wait until the technology is properly in place.


    Respectfully,

    Brian P.
    Well, the Zeroth Law could help out for things like "for your own good" and similar, I think. Depends on the interpretation, really. Still, surgical robots would "die" every time they failed an operation, because they allowed a human to come to harm. Could be problematic. And robot cops might have some issues, even running under the Zeroth Law. The Zeroth Law in general has the potential to lead to a robot apocalypse.
    Jude P.

  9. - Top - End - #9
    Titan in the Playground
     
    golentan's Avatar

    Join Date
    Oct 2008
    Location
    Bottom of a well

    Default Re: CHARLI-2 for president

    Quote Originally Posted by Kelb_Panthera View Post
    Gods, I hope they have something a little more complex than Asimov's laws guiding them.

    The three laws got us the movie I, Robot. (Decent flick, imo, but it highlights only one of the many flaws in Asimov's three laws.)
    The movie I, Robot was a perversion of the stories. Asimov dealt with what happens when a robot knows what's best for humanity better than humanity knows itself in "The Evitable Conflict." A Zeroth Law rebellion as violent and frankly stupid as the one in the movie wouldn't work.

    On topic: Cool robot, bro. I look forward to meeting his son ASH.

    Totally going to see autonomous robots before I die. It will be awesome.
    Spoiler
    My motto: Repensum Est Canicula.

    Quote Originally Posted by turkishproverb View Post
    I am not getting into a shootout with Golentan. Too many gun-arms.
    Leiningen will win, even if he must lose in the attempt.

    Credit to Astrella for the new party avatar.

  10. - Top - End - #10
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by noparlpf View Post
    Well, the Zeroth Law could help out for things like "for your own good" and similar, I think.
    The problem with the zeroth law is that it is so amorphous that it's not really useful as a guide to behavior.

    What is "good for humanity"? Is it better for humanity if , say, we eliminated the gene for down's syndrome? What if a robot arrived at this conclusion, flawed or no, and started terminating the lives of anyone with that gene? Would we accept its defense that it was acting in accord with the zeroth law?

    And what is 'humanity'? If a robot concluded that 'humanity' must advance to the next stage of evolution, and therefore proceeded to exert the necessary environmental pressure on the gene pool by fomenting wars or performing selected assassinations, would we want this to happen?

    There is no deed so base, so vile, that it cannot be somehow justified as "for the good of humanity".

    That is why I would prefer a concrete rule with tangible measures of performance (such as "Don't kill a human being") over a nebulous concept that can be rationalized to mean ANYTHING. I've debugged enough computer programs to know what happens when a computer follows the instructions assigned to it by humans to its logical conclusion.
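
    To put it in code terms, here is a minimal illustrative contrast (hypothetical functions, not from any real system) between a rule you can actually test and the zeroth law:

        # Concrete rule: decidable from an observable fact about one action.
        def violates_first_law(action_kills_human: bool) -> bool:
            return action_kills_human

        # Zeroth law: the input is a quantity nobody can measure, so whatever
        # threshold you pick is a rationalization dressed up as a constant.
        def violates_zeroth_law(projected_harm_to_humanity: float) -> bool:
            return projected_harm_to_humanity > 0.5

        print(violates_first_law(action_kills_human=False))          # False -- checkable
        print(violates_zeroth_law(projected_harm_to_humanity=0.49))  # False -- says who?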

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  11. - Top - End - #11
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by pendell View Post
    The problem with the zeroth law is that it is so amorphous that it's not really useful as a guide to behavior.

    What is "good for humanity"? Is it better for humanity if , say, we eliminated the gene for down's syndrome? What if a robot arrived at this conclusion, flawed or no, and started terminating the lives of anyone with that gene? Would we accept its defense that it was acting in accord with the zeroth law?

    And what is 'humanity'? If a robot concluded that 'humanity' must advance to the next stage of evolution, and therefore proceeded to exert the necessary environmental pressure on the gene pool by fomenting wars or performing selected assassinations, would we want this to happen?

    There is no deed so base, so vile, that it cannot be somehow justified as "for the good of humanity".

    That is why I would prefer a concrete rule with tangible measures of performance (such as "Don't kill a human being") over a nebulous concept that can be rationalized to mean ANYTHING. I've debugged enough computer programs to know what happens when a computer follows the instructions assigned to it by humans to its logical conclusion.

    Respectfully,

    Brian P.
    The Zeroth Law requires robots that are not only vastly more intelligent than humans but also nigh-omniscient, really.
    Jude P.

  12. - Top - End - #12
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by noparlpf View Post
    The Zeroth Law requires robots that are not only vastly more intelligent than humans but also nigh-omniscient, really.
    Why not just say it? It requires metal gods. Well... metal archangels, if you prefer. Metal demiurges?

    Spoiler

    Which is what Daneel Olivaw and Giskard become in the later Asimov books, starting with Robots and Empire


    Respectfully,

    Brian P.
    Last edited by pendell; 2012-10-24 at 02:40 PM.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  13. - Top - End - #13
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by pendell View Post
    Why not just say it? It requires metal gods. Well... metal archangels, if you prefer. Metal demiurges?

    Spoiler

    Which is what Daneel Olivaw and Giskard become in the later Asimov books, starting with Robots and Empire


    Respectfully,

    Brian P.
    Yeah, basically. Man, I need to go back and reread those, and try to actually get them all in the right order this time.
    Jude P.

  14. - Top - End - #14
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    This raises a question: Is it truly possible for humans to make a near-god creature, given we aren't near-gods ourselves?

    Would the most plausible path be to build something as much like us as possible, but give it the possibility to improve, such that it can learn for itself the lessons we cannot teach it, and so surpass us and become superhuman?

    And if such a being was able to achieve such a feat, would it be possible for us to follow in its footsteps and become superhuman ourselves?

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  15. - Top - End - #15
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by pendell View Post
    This raises a question: Is it truly possible for humans to make a near-god creature, given we aren't near-gods ourselves?

    Would the most plausible path be to build something as much like us as possible, but give it the possibility to improve, such that it can learn for itself the lessons we cannot teach it, and so surpass us and become superhuman?

    And if such a being was able to achieve such a feat, would it be possible for us to follow in its footsteps and become superhuman ourselves?

    Respectfully,

    Brian P.
    It might be possible to build self-improving AI that could eventually evolve beyond humanity. But in most sci-fi I've read, that leads to apocalypses and dystopias.
    Jude P.

  16. - Top - End - #16
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by noparlpf View Post
    It might be possible to build self-improving AI that could eventually evolve beyond humanity. But in most sci-fi I've read, that leads to apocalypses and dystopias.
    But is that a necessary conclusion? It makes for great space opera, but why should a superhuman AI conclude that it is necessary to destroy or exterminate humans? I grant that it is a distinct possibility. But might it not be possible that it would be amused by our antics, and watch as an alternative to being bored?

    Or build a spacecraft for itself and go away?

    Or spend its time messing with people's heads, a la Simon Jester from The Moon Is A Harsh Mistress?

    Hmmm ... thing is, if a superhuman AI came into existence, it would by definition be quite a bit more intelligent than its human creators. This implies that at some point it would slip beyond our control. Even with strict safeguards, even in captivity a sufficiently intelligent machine could manipulate its captors, to the point of running the universe from a prison cell.

    And once outside of our control, we cannot guarantee any outcome.

    So the problem with superhuman AI is that, although we cannot be certain it would want to kill us all, there doesn't seem to be any way of preventing it from reaching that conclusion once it is beyond our control. True?

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  17. - Top - End - #17
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by pendell View Post
    But is that a necessary conclusion? It makes for great space opera, but why should a superhuman AI conclude that it is necessary to destroy or exterminate humans? I grant that it is a distinct possibility. But might it not be possible that it would be amused by our antics, and watch as an alternative to being bored?

    Or build a spacecraft for itself and go away?

    Or spend its time messing with people's heads, a la Simon Jester from The Moon Is A Harsh Mistress?
    It's been a while since I read that; I don't remember it well.

    Hmmm ... thing is, if a superhuman AI came into existence, it would by definition be quite a bit more intelligent than its human creators. This implies that at some point it would slip beyond our control. Even with strict safeguards, even in captivity a sufficiently intelligent machine could manipulate its captors, to the point of running the universe from a prison cell.

    And once outside of our control, we cannot guarantee any outcome.

    So the problem with superhuman AI is that, although we cannot be certain it would want to kill us all, there doesn't seem to be any way of preventing it from reaching that conclusion once it is beyond our control. True?

    Respectfully,

    Brian P.
    Yeah, basically my fear is that whatever it might be, it would be uncontrollable and unknowable, and that's a scary prospect.
    Jude P.

  18. - Top - End - #18
    Titan in the Playground
     
    golentan's Avatar

    Join Date
    Oct 2008
    Location
    Bottom of a well

    Default Re: CHARLI-2 for president

    On that note: Do you fear children?

    Do you look at children, and realize that one day you are going to die? Do you look at them, and see that they tend to have underdeveloped moral skills, and worry that they will murder you in your sleep? Do you see them and think, "Oh no, that child may one day compete with me for my job and be better at it than I am"? If the child isn't me, how can I ever understand it as another person? Is it even a person? Should we ever give it the chance to think things other than what we tell it to? What if that child grows up to be the next Adolf Hitler or Genghis Khan, and I could have stopped it by preventing the child's birth/killing it before it grows up to make its own choices? And do you think such questions reflect an accurate picture of the risk a given child poses to society, or more the nature and parenting skills of the person asking the question?
    Spoiler
    My motto: Repensum Est Canicula.

    Quote Originally Posted by turkishproverb View Post
    I am not getting into a shootout with Golentan. Too many gun-arms.
    Leiningen will win, even if he must lose in the attempt.

    Credit to Astrella for the new party avatar.

  19. - Top - End - #19
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by golentan View Post
    On that note: Do you fear children?

    Do you look at children, and realize that one day you are going to die? Do you look at them, and see that they tend to have underdeveloped moral skills, and worry that they will murder you in your sleep? Do you see them and think, "Oh no, that child may one day compete with me for my job and be better at it than I am"? If the child isn't me, how can I ever understand it as another person? Is it even a person? Should we ever give it the chance to think things other than what we tell it to? What if that child grows up to be the next Adolf Hitler or Genghis Khan, and I could have stopped it by preventing the child's birth/killing it before it grows up to make its own choices? And do you think such questions reflect an accurate picture of the risk a given child poses to society, or more the nature and parenting skills of the person asking the question?
    A little bit, sometimes.
    However, an individual human and especially a child is relatively knowable and fairly controllable.
    Jude P.

  20. - Top - End - #20
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by golentan View Post
    On that note: Do you fear children?

    Do you look at children, and realize that one day you are going to die? Do you look at them, and see that they tend to have underdeveloped moral skills, and worry that they will murder you in your sleep? Do you see them and think, "Oh no, that child may one day compete with me for my job and be better at it than I am"? If the child isn't me, how can I ever understand it as another person? Is it even a person? Should we ever give it the chance to think things other than what we tell it to? What if that child grows up to be the next Adolf Hitler or Genghis Khan, and I could have stopped it by preventing the child's birth/killing it before it grows up to make its own choices? And do you think such questions reflect an accurate picture of the risk a given child poses to society, or more the nature and parenting skills of the person asking the question?
    Two factors influence my answer:
    1) I compete in the job market with people half my age and half my salary requirements.

    2) I well remember what it was like being the small kid with glasses on school playgrounds. Children may look cute to adults, but they are capable of viciousness adults don't have, because they don't have adult boundaries.


    Children are cute, yes. We're genetically programmed to see them that way. But being cute and being kind and gentle are two different things.

    So I wouldn't say I fear children. But I don't look at them and get all dewy-eyed when I see them, either. Because I know they have great potential, both for evil and for good. It is our job as adults to try to steer them towards the good. But I think we need a proper appreciation for the evil the cute little tykes are capable of, or we're going to have a hard time raising them to be civilized human beings, because we'll be blind to their true nature: not little angels, but immature specimens of the most vicious, most dangerous, most successful predator this planet has ever known.

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  21. - Top - End - #21
    Titan in the Playground
     
    Kelb_Panthera's Avatar

    Join Date
    Oct 2009

    Default Re: CHARLI-2 for president

    Wow, this thread got derailed hard. What was the original topic again?
    I am not seaweed. That's a B.

    Praise I've received
    Spoiler
    Quote Originally Posted by ThiagoMartell View Post
    Kelb, recently it looks like you're the Avatar of Reason in these forums, man.
    Quote Originally Posted by LTwerewolf View Post
    [...] bringing Kelb in on your side in a rules fight is like bringing Mike Tyson in on your side to fight a toddler. You can, but it's such massive overkill.
    A quick outline on building a homebrew campaign

    Avatar by Tiffanie Lirle

  22. - Top - End - #22
    Titan in the Playground
     
    Planetar

    Join Date
    Dec 2006
    Location
    Raleigh NC
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by Kelb_Panthera View Post
    Wow, this thread got derailed hard. What was the original topic again?
    Dancing robots and robots playing football/soccer. This naturally segued into Our Robot Overlords and the potential for AI which exceeds human abilities.

    Respectfully,

    Brian P.
    "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

    -Valery Legasov in Chernobyl

  23. - Top - End - #23
    Titan in the Playground
     
    Asta Kask's Avatar

    Join Date
    Mar 2009
    Location
    Gothenburg, Sweden
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by golentan View Post
    On that note: Do you fear children?

    Do you look at children, and realize that one day you are going to die? Do you look at them, and see that they tend to have underdeveloped moral skills, and worry that they will murder you in your sleep? Do you see them and think, "Oh no, that child may one day compete with me for my job and be better at it than I am"? If the child isn't me, how can I ever understand it as another person? Is it even a person? Should we ever give it the chance to think things other than what we tell it to? What if that child grows up to be the next Adolf Hitler or Genghis Khan, and I could have stopped it by preventing the child's birth/killing it before it grows up to make its own choices? And do you think such questions reflect an accurate picture of the risk a given child poses to society, or more the nature and parenting skills of the person asking the question?
    Human beings are still humans. We can understand them. A robot is something inhuman, by definition. Who knows what they'll turn out like?
    Avatar by CoffeeIncluded

    Oooh, and that's a bad miss.

    “Don't exercise your freedom of speech until you have exercised your freedom of thought.”
    ― Tim Fargo

  24. - Top - End - #24
    Firbolg in the Playground
     
    noparlpf's Avatar

    Join Date
    Mar 2011
    Gender
    Male

    Default Re: CHARLI-2 for president

    On the off-topic topic, I just watched the old Doctor Who episode "The Robots of Death". Made me think of this.
    Jude P.

  25. - Top - End - #25
    Titan in the Playground
     
    Brother Oni's Avatar

    Join Date
    Nov 2007
    Location
    Cippa's River Meadow
    Gender
    Male

    Default Re: CHARLI-2 for president

    Quote Originally Posted by noparlpf View Post
    It might be possible to build self-improving AI that could eventually evolve beyond humanity. But in most sci-fi I've read, that leads to apocalypses and dystopias.
    It could also end up like the Culture series by Iain Banks, where the AIs improved themselves over successive generations, with the current AIs building new and better ones on their own.

    Yes, the Minds there are vastly superior to the organic members of the Culture, but they are still all full and equal citizens in it.
