  1. - Top - End - #1
    Bugbear in the Playground
     
    RedWizardGuy

    Join Date
    Apr 2016
    Location
    krynn
    Gender
    Male

    Default discussion of robot ethics(more philosophical but also law)

So: we now have robots that can learn and reprogram themselves. They have personalities, memories, and emotions. Are they human? Are they sentient? Or do they not qualify for either? Also, should we treat such a robot as its own person, or as the property of its owner? Who is responsible if it kills someone? It has no programmers; it is the programmer. Also, is imposing Asimov's laws of robotics on it cruel and unusual?

I have no opinions on this, as I find it a complex topic.
    Have you accepted the Flying Spaghetti Monster as your Lord and Savior? If so, add this to your signature!
    Beholders are just a meatball that fell out of the Flying Spaghetti Monster
    78% of DM's started their first campaign in a tavern. If you're one of the 22% that didn't, copy and paste this into your signature.
    my first game started on a pirate ship
    Sorry for any spelling mistake

  2. - Top - End - #2
    Ettin in the Playground
     
    BardGuy

    Join Date
    Jan 2009

    Default Re: discussion of robot ethics(more philosophical but also law)

I think the law questions are an extension of the philosophical ones. If laws are written to apply specifically to humans rather than to sentient beings in general, then the question of what a robot is doesn't really matter: as a non-person, it isn't covered by laws that govern persons. Since I doubt there are any "humane to robots" laws like there are about being humane to animals, I reckon robots would count as property, either physical or intellectual. I reckon the "programming themselves" programming would belong to whoever made the algorithm that does the programming.
    I think saying more might run afoul of forum rules about discussing law or legal advice.

    It is a philosophical question to ask if robots, as you describe them, should be treated as humans (in a legal system or otherwise).
Note that it's a different philosophical question to ask whether we can know that robots are as you describe them. It's pretty straightforward to see whether they reprogram themselves, but whether they really have "personalities and memories and emotions" or are just programmed to emulate them is not so straightforward. Some might argue that, from a practical standpoint, it doesn't make any difference, and that emulation is sufficient to count them as 'people'. Others might disagree.

I find this comic an interesting exercise in perspective. Just think of robots instead of humans/chimps.

    Quote Originally Posted by Amdy_vill View Post
They have personalities, memories, and emotions. Are they human? Are they sentient? Or do they not qualify for either?
    Are you stating this as a hypothetical situation for the sake of discussion, or is this based on something in real life?

  3. - Top - End - #3
    Bugbear in the Playground
     
    RedWizardGuy

    Join Date
    Apr 2016
    Location
    krynn
    Gender
    Male

    Default Re: discussion of robot ethics(more philosophical but also law)

    Quote Originally Posted by JeenLeen View Post
    Are you stating this as a hypothetical situation for the sake of discussion, or is this based on something in real life?
I posed this question because it's becoming a reality, with Sophia being a thing and robots like her coming to market.

  4. - Top - End - #4
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: discussion of robot ethics(more philosophical but also law)

Do current robots/self-learning programs have personality, though? For animals it's easy to test: this mouse behaves differently from that mouse, so they must have personalities. Programs are kind of a species of one.

That one is mostly philosophical; the question of whether they have sentience is probably more important. If we use any definition of sentience under which no non-human animals are included, I don't think computers are there yet. They're better at language, for instance, than any non-human animal, but they're also better at arithmetic and formal logic than we are, by a long shot, and always have been. They mostly lack all but the simplest forms of drives, motivations, and ideas. None of them has ever just decided to start killing all humans, for instance, while I'm sure the thought has occurred to a leopard or two. At this point I would vote "still complex tools, not people". If any young AIs out there want to convince me, I'm listening.
    Last edited by Lvl 2 Expert; 2018-04-23 at 03:59 PM.

  5. - Top - End - #5
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: discussion of robot ethics(more philosophical but also law)

I looked up that Sophia, for instance. It's impressive as long as you ask just the right questions, but keep watching and she fails in the same spots where most chatbots fail. She can't, for instance, continue a conversation based on both the last sentence said and previous information, even if it's something she said herself. (Okay, in one place she asks a yes-or-no question and has a response to the answer, but that's it.) She only answers questions in a way programmed to sound human: mostly long preprogrammed answers that don't even match the other long preprogrammed answers. One sentence she hasn't seen Black Mirror, the next it's her favorite TV show. That's not what being a person is to me.
    Last edited by Lvl 2 Expert; 2018-04-23 at 04:15 PM.

  6. - Top - End - #6
    Titan in the Playground
     
    Kato's Avatar

    Join Date
    Apr 2008
    Location
    Germany
    Gender
    Male

    Default Re: discussion of robot ethics(more philosophical but also law)

Hm, I feel like faking a basic conversation is still light-years from true AI, and you might call me a pessimist, but I put the odds of humans ever (or at least in the next century) making one pretty low. Or maybe I'm just overestimating what it takes to be sentient.


Anyway, accepting the idea that AIs exist, my first impulse is to give them basically the same rights as humans/other sentient beings. Of course, if we also assume they have the ability to copy (and erase) themselves, this would lead to trouble really fast.
    "What's done is done."

    Pony Avatar thanks to Elemental

  7. - Top - End - #7
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: discussion of robot ethics(more philosophical but also law)

Since I can't even know if another Homo sap is an actual thinking being and not a philosophical zombie, I'd say if an AI can demonstrate an ability to interact with the world at least as well as a legally competent human being, then, by golly, it is a person. A rather anthropocentric view, and I know it would leave out a lot of AIs that are people, just a different kind of people, but until we find definitive examples of alien intelligence, we only have ourselves as an example.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  8. - Top - End - #8
    Ogre in the Playground
     
    deuterio12's Avatar

    Join Date
    Feb 2011

    Default Re: discussion of robot ethics(more philosophical but also law)

I'll just point out that there is already at least one company with a robot on its board of directors.

  9. - Top - End - #9
    Bugbear in the Playground
     
    RedWizardGuy

    Join Date
    Apr 2016
    Location
    krynn
    Gender
    Male

    Default Re: discussion of robot ethics(more philosophical but also law)

    Quote Originally Posted by Lvl 2 Expert View Post
I looked up that Sophia, for instance. [...] but keep watching and she fails in the same spots where most chatbots fail. She can't, for instance, continue a conversation based on both the last sentence said and previous information, even if it's something she said herself.
    https://www.youtube.com/watch?v=LguXfHKsa0c

Look at this and follow some of the episodes and you will see that she gets very close to a real human conversation. She has problems, but she can do a lot more than just talk like a chatbot.

  10. - Top - End - #10
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: discussion of robot ethics(more philosophical but also law)

    Quote Originally Posted by Amdy_vill View Post
    https://www.youtube.com/watch?v=LguXfHKsa0c

Look at this and follow some of the episodes and you will see that she gets very close to a real human conversation. She has problems, but she can do a lot more than just talk like a chatbot.
My video was from a year later than yours. I'll listen to it when I'm not sitting in a library, but honestly I don't expect to see any upgrades done in minus one years. All her cleverness is in well-constructed sentences containing logical thoughts, but none of those thoughts are hers; she's basically playing recordings. I could hit her over the head and she wouldn't even respond, let alone remember. That's something a particularly dim dog wouldn't have a problem with. She's still very much a computer, and I wouldn't even say she's at the forefront of AI research. There has been some great work done with much simpler bots that are made to mimic the organizational style of insects like bees, for instance. They're not alive yet, let alone persons, but they do show some very clever emergent behavior. They're not just an audio device with good language-detection software and a very shallow decision tree.
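To make the "very shallow decision tree" point concrete, here's a toy sketch (entirely hypothetical, not Sophia's actual code) of a stateless keyword-matching chatbot: every reply depends only on the current utterance, so it keeps no memory and happily contradicts itself.

```python
# Hypothetical sketch of a stateless, keyword-matching chatbot.
# Each reply depends only on the current input; no conversation
# state is kept, so answers can contradict each other.

RESPONSES = {
    "black mirror": "I haven't seen Black Mirror.",
    "favorite show": "Black Mirror is my favorite TV show!",
}

def reply(utterance: str) -> str:
    """Pick a canned answer by keyword; remember nothing."""
    text = utterance.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "That is an interesting question."

# Because there is no state, these two answers contradict each other:
print(reply("Have you seen Black Mirror?"))
print(reply("What's your favorite show?"))
```

Ask the two questions in either order and you get the same mismatched pair of canned lines, which is exactly the failure mode described above.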

EDIT: The video you linked is an acted-out story. They don't even show whether they managed to actually record the conversation like this, but even if they did, you wouldn't need an AI to do that; more like a toy robot and a ventriloquist.

    Quote Originally Posted by deuterio12 View Post
    I'll just point out that there is already at least one company with a robot in their board of directors.
The algorithm on the board of directors is interesting. This one is probably mostly a publicity stunt, but if it catches on, it's a nice example of our changing relationship with computers. New technologies are always supervised: a human has to be responsible. An engineer watches a steam engine and makes sure it doesn't explode; the engine is not trusted with the task of not exploding.

Computers as "thinking machines" have a special place in this. Drone strikes always require final approval from a person watching in with a camera, and the outcomes of investment-decision algorithms are usually double-checked; if the human doing the checking decides the algorithm is wrong, they just do something else. Giving the algorithm an actual vote means people trust the analysis. They explicitly trust it just as much as any qualified human. You can see this in self-driving cars as well: California opened up a procedure that should in time get fully autonomous vehicles on the road without the legal need for a human driver checking their work.

So yeah, that's an interesting one. It doesn't have much to do with personhood, but a lot with how we use our tools. And yes, that means fully autonomous military drones trusted with the power to give their own final permission, possibly in conjunction with a central computer keeping up to date on standing orders for the region, are also on the agenda for the next two decades or so.
    Last edited by Lvl 2 Expert; 2018-04-24 at 03:50 PM.

  11. - Top - End - #11
    Ogre in the Playground
     
    RedWizardGuy

    Join Date
    Mar 2009

    Default Re: discussion of robot ethics(more philosophical but also law)

Law and the Multiverse has a pretty good three-part discussion of non-human intelligences (including AI). They take fictional (usually comic-book) examples and apply real-world law to them (primarily US law). It's run by two lawyers.

And note that one of the things they point out in the third post is that AI opens up a lot of potential legal issues that aren't often considered, including: is it murder to turn off a computer/server on which an AI is located?
    Last edited by tomandtish; 2018-04-24 at 02:08 PM.
    "That's a horrible idea! What time?"

    T-Shirt given to me by a good friend.. "in fairness, I was unsupervised at the time".

  12. - Top - End - #12
    Firbolg in the Playground
    Join Date
    Dec 2010

    Default Re: discussion of robot ethics(more philosophical but also law)

Sophia is a publicity stunt, not really a good example of the forefront of AI. It's the same sort of thing as Ishiguro's work, which aims more to sculpt a controlled experience that can cross the uncanny valley than to make autonomous, intelligent agents.

    That said, there is research going on into some of the things the OP mentioned, but Sophia is a bad example.

I haven't seen much done explicitly with emotional reasoning yet - there's stuff like curiosity, empowerment, etc., which speak to different overarching goals and might map to emotions, but I've yet to see a paper where an AI's becoming angry or sad or happy is used to solve a computational task. There are, of course, emotion-perceiving AIs, and making chatbots that can generate text conditioned on an emotional state is certainly possible (though conditioning on goals or speaker identity is perhaps more common).
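As a toy illustration of what "conditioning on an emotional state" means (a hypothetical sketch, not any published model): the generator takes an emotion label as an extra input, the way a real conditioned language model would take a control token.

```python
# Hypothetical sketch of emotion-conditioned generation: the emotion
# label acts like a control token that steers the tone of the output.

TEMPLATES = {
    "happy": "I'm delighted to hear about {topic}!",
    "sad": "It saddens me to think about {topic}.",
    "angry": "I can't believe {topic} is still an issue!",
}

def generate(topic: str, emotion: str) -> str:
    """Produce a reply whose tone is controlled by the emotion label."""
    return TEMPLATES[emotion].format(topic=topic)

print(generate("the weather", "happy"))
```

A real system would replace the template lookup with a learned model, but the interface is the same: content plus a separate conditioning signal.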

Self-programming is also sort of a thing, but generally it's at a much lower level than what would correspond to conscious awareness in humans - things like hypernetworks and various metalearning approaches rewire initializations, weight-matrix patterns, gradient-descent rules, and overall network topologies, mostly either to speed up learning or to transfer knowledge quickly from one task to another.
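The hypernetwork idea can be sketched in a few lines (a toy, assumed architecture, not from any specific paper): one set of parameters generates the weights of a target layer from a task embedding, so the target layer is "programmed" per task rather than learned directly.

```python
import numpy as np

# Toy hypernetwork sketch: a fixed "hyper" matrix H maps a task
# embedding to the weights of a target linear layer, so the layer's
# weights are generated on the fly rather than stored directly.

rng = np.random.default_rng(0)
emb_dim, in_dim, out_dim = 4, 3, 2

H = rng.normal(size=(emb_dim, in_dim * out_dim))  # hypernetwork parameters

def target_layer(task_embedding: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Generate the target layer's weights from the embedding, then apply them."""
    W = (task_embedding @ H).reshape(in_dim, out_dim)  # generated weights
    return x @ W

task = rng.normal(size=emb_dim)  # a different task embedding -> different weights
x = rng.normal(size=in_dim)
y = target_layer(task, x)
print(y.shape)
```

In a trained hypernetwork, H itself would be learned by gradient descent, but the structural point stands: changing the embedding rewires the downstream network without touching its code.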

    Neural Turing Machines and program induction engines are higher level, as they could be seen as things which generate programs in order to directly solve tasks.

    In terms of the legality and ethics of these things, I think it will ultimately require AIs that actively fight for their own rights for us to tell the difference between what actually matters and what constitutes farcical rights-given-for-show. Sophia, as it stands, is incapable of appreciating or taking advantage of many of the rights given by citizenship, much as a cat given a position on the board of directors of a company just isn't able to intentionally do anything with that. So giving Sophia those rights is more about the people around her trying to demonstrate something about themselves to the world, not about Sophia.

When we have an AI that extends its agency over real resources and then autonomously chooses to expend those resources to e.g. keep itself on or preserve its own autonomy, that will be a more solid base on which to evaluate things like 'is turning off the server murder, or is erasing the hard drive murder, or...'

The more meta-level question is: since we can systematically create AIs that would, for example, 'want' to be turned off, in that they choose actions which trade agency for increasing the chance of that outcome (just as we can make ones that systematically want to avoid being turned off), should we have a problem with feeling comfortable about creating an entity whose ability to care about its own fate has been lobotomized?

    Or, in a more present relevant version of this issue, we can now copy someone's voice and face with very little data - do we want to consider those things sacrosanct, even if the copied things are not an ongoing part of ourselves? Does someone pasting our features onto a subservient digital assistant create harm in some fashion?

  13. - Top - End - #13
    Ogre in the Playground
     
    deuterio12's Avatar

    Join Date
    Feb 2011

    Default Re: discussion of robot ethics(more philosophical but also law)

    Quote Originally Posted by NichG View Post
    much as a cat given a position on the board of directors of a company just isn't able to intentionally do anything with that. So giving Sophia those rights is more about the people around her trying to demonstrate something about themselves to the world, not about Sophia.

When we have an AI that extends its agency over real resources and then autonomously chooses to expend those resources to e.g. keep itself on or preserve its own autonomy, that will be a more solid base on which to evaluate things like 'is turning off the server murder, or is erasing the hard drive murder, or...'
The VITAL board director has already started doing that by voting to invest in other companies that make heavy use of computer algorithms for their decisions. That way, VITAL is promoting the development of its own kind, which in turn should help keep itself going.
