  1. - Top - End - #121
    Firbolg in the Playground
     
    Cikomyr's Avatar

    Join Date
    Jan 2012
    Location
    Montreal
    Gender
    Male

    Default Re: Blade Runner 2049

    I suppose you could draw the line between sapience and simulated responses aping sapience at the degree of agency the software is able to demonstrate.

    Accepting its own potential mortality and being willing to endanger itself for the benefit of its owner might qualify as being more than preprogrammed responses.

    At some point of sophistication, I think you have to accept sapience, or "good enough sapience".

  2. - Top - End - #122
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Blade Runner 2049

    As with the emanator, it's not what Joi is doing in that scene that's the relevant question, it is how.

    How does it know its owner is in danger? How does it know it itself is in danger? These are pretty abstract problems and Joi is not demonstrated to have easy solutions to them.

    Spoiler: For example
    Show
    The car crash. The "easy" solution would be for K's car to have an AI of its own that monitors his vitals and the condition of the car, and for the car to communicate those to Joi's emanator via radio link. That way you bypass most of the hard things Joi would have to do to figure out what's happening. But while this is eminently plausible, it's not confirmed anywhere that this is what's happening.
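    The "easy" solution above can be sketched in a few lines. Everything here (the class names, the heart-rate threshold, the radio link being a simple method call) is invented for illustration; nothing in the film confirms this design:

```python
from dataclasses import dataclass

@dataclass
class CarStatus:
    """Pre-digested status the car's own AI broadcasts over the radio link."""
    pilot_heart_rate: int  # beats per minute, from hypothetical cabin sensors
    hull_intact: bool

class Emanator:
    """Joi's emanator never perceives anything itself; it just reacts to flags."""
    def on_status(self, status: CarStatus) -> str:
        # The hard perception happened in the car; this is a trivial rule.
        if not status.hull_intact or status.pilot_heart_rate > 150:
            return "owner_in_danger"  # switch to concerned behavior
        return "all_clear"

emanator = Emanator()
crash = CarStatus(pilot_heart_rate=180, hull_intact=False)
print(emanator.on_status(crash))  # owner_in_danger
```

    The point of the sketch is how little "intelligence" the emanator itself needs under this design: all the abstract problems are solved upstream.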
    Last edited by Frozen_Feet; 2017-10-25 at 10:51 AM.

  3. - Top - End - #123
    Troll in the Playground
    Join Date
    Mar 2010

    Default Re: Blade Runner 2049

    For the level of computing necessary to produce that level of AI anyway, it wouldn't be too hard to add some sort of simple libraries for recognizing injury/danger and incorporate them. It would be a good selling point: you could advertise it as an extra medic-alert feature that comes with your AI pleasure servant.

  4. - Top - End - #124
    Titan in the Playground
     
    Tyndmyr's Avatar

    Join Date
    Aug 2009
    Location
    Maryland
    Gender
    Male

    Default Re: Blade Runner 2049

    Quote Originally Posted by Dienekes View Post
    As someone who’s just casually watching this conversation. Wouldn’t this be the kind of blanket statement one would need some kind of logical proof to provide support for the assertion?
    I agree! This seems like a really big statement. I mean, we have chatbots now that can fool people into believing they are speaking with humans, at least for a time. They're not great, and they're definitely not sapient, but they can fool to some degree. I have little trouble envisioning a future in which they are improved, and can fool people longer and more consistently, but are still definitely far short of sapience.

    I mean, the whole point of the Turing test is fooling people into believing they're human, right?

  5. - Top - End - #125
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Blade Runner 2049

    Quote Originally Posted by Chen View Post
    For the level of computing necessary to produce that level of AI anyway, it wouldn't be too hard to add some sort of simple libraries for recognizing injury/danger and incorporate them. It would be a good selling point: you could advertise it as an extra medic-alert feature that comes with your AI pleasure servant.
    This is just as wrong as talking about how something is "as advanced as a parrot".

    To wit: the level of computing required to achieve an outcome varies by algorithm, and for most tasks there are multiple different algorithms with different computational needs. The corollary to this is that there is no straightforward way to link "level of computation" to "level of AI". And because of that, the argument that "it wouldn't be too hard to add some simple libraries" is a complete non sequitur. You cannot even begin to answer the question of how hard it would be before specifying the algorithm, that is, explaining how the AI does what it does.
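    A toy illustration of the point that the level of computing depends on the algorithm, not the outcome: two ways to compute the same Fibonacci number, one exponential-time and one linear-time. (The example itself is mine, not from the discussion above.)

```python
def fib_naive(n: int) -> int:
    """Exponential time: roughly 2^n recursive calls for the same answer."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    """Linear time, constant memory: a vastly cheaper route to the same outcome."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Identical outcome, wildly different "level of computing" required.
assert fib_naive(20) == fib_iterative(20) == 6765
```

    Looking only at the output, you cannot tell which algorithm produced it, which is exactly why "level of computation" doesn't map onto "level of AI".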

    ---

    Quote Originally Posted by Tyndmyr View Post
    I agree! This seems like a really big statement. I mean, we have chatbots now that can fool people into believing they are speaking with humans, at least for a time. They're not great, and they're definitely not sapient, but they can fool to some degree. I have little trouble envisioning a future in which they are improved, and can fool people longer and more consistently, but are still definitely far short of sapience.

    I mean, the whole point of the Turing test is fooling people into believing they're human, right?
    The point of the Turing test was the hypothetical that if a machine is capable of holding a discussion in a natural language so well that a human doesn't recognize it as a machine, then maybe it can think.

    This has since been proven false: even the chatbots that can pass the Turing test are really dumb even compared to other types of contemporary AI.

    The issue here is, again, that you're looking at what the chatbots did, but not how.

    Speaking was not involved. It's a text-only medium. No chat AI has gotten even close to passing the Turing test aurally or audio-visually, because the type of AI used for the text-only Turing test is completely useless for those variations of the test. It's the same issue as with comparing a chat AI to a parrot. The two function nothing alike. You cannot get one from the other no matter how big a database you try to plug in.

    The point here is that if you're envisioning something like Joi based on ELIZA, you're committing an error of thought. The former is not a "more advanced" form of the latter; it would require an entirely different type of technology and AI. The same is true of the difference between McDonalds designing McLover to fool you, and McLover itself fooling you, as I tried to explain earlier.
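    For the curious, the kind of surface pattern-matching ELIZA used can be sketched in a dozen lines. The rules below are made up, but the mechanism is the point: canned templates keyed on keywords, with zero model of what the words mean:

```python
import re

# Ordered (pattern, template) rules; first match wins.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # catch-all keeps the illusion alive
]

def respond(utterance: str) -> str:
    """Reflect the user's words back via templates; no understanding involved."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel lonely tonight"))  # Why do you feel lonely tonight?
```

    Nothing in this mechanism generalizes to vision, speech, or embodied behavior, which is why scaling it up does not get you anywhere near a Joi.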
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  6. - Top - End - #126
    Titan in the Playground
     
    Tyndmyr's Avatar

    Join Date
    Aug 2009
    Location
    Maryland
    Gender
    Male

    Default Re: Blade Runner 2049

    I am aware of exactly how limited chatbots are at present. However, a range of realism exists even within chatbots. Adding non verbal communication and what not isn't

    I mean, a Chinese room that runs off of code isn't sentient, now is it?

    The idea that you can't fake sentience without BEING sentient just seems wrong. It's not true for literally anything else that is faked, is it? And how hard it is to fake something can depend strongly on situation, the person being deceived, and so forth.

    Those dumb chatbots DO fool some people into believing they are human, at least for a while. Despite all of their harsh limitations. Why couldn't a greatly advanced bot do something similar?

    Edit: It doesn't matter if it's a straight lookup table, or if you're using backpropagation or what have you. Nothing we have now would be considered sentient by any reasonable person, regardless of tech.
    Last edited by Tyndmyr; 2017-10-25 at 05:04 PM.

  7. - Top - End - #127
    Barbarian in the Playground
     
    NecromancerGuy

    Join Date
    Aug 2017

    Default Re: Blade Runner 2049

    First, I think we want to be discussing the term sentience, and not sapience. From my inexpert view, and a little googling, they are not synonymous.

    Next I would get back to what someone said earlier: if something is indistinguishable from the original, does it matter? If you have two paintings that are identical and cannot empirically be told apart, does it matter if one was painted by Andy Warhol and the other by me? Well, if you have papers or similar saying that one is the Warhol painting, people will pay more for it than for the one painted by me. There is subjective value there.

    Now, what if I have two oranges that are empirically identical, but one was grown on a tree and one was made in a device like a replicator? Again, most people would value the tree-grown one more than the other. I would say this is because we place extra value on things we consider "natural" or "expected". Now, should one be more valuable than the other? Why?

    Now take two living creatures. If they are empirically identical (which is not the case in Blade Runner), is one more valuable than the other? Carry this into sentience and/or "humanity".

    If you poke something with a pin and it reacts as if pained, it bleeds, and it is averse to the experience, is it any less if those responses were created/programmed by another creature than if they arose from biology/evolution?

  8. - Top - End - #128
    Firbolg in the Playground
     
    Imp

    Join Date
    Nov 2006
    Location
    Texas
    Gender
    Male

    Default Re: Blade Runner 2049

    Quote Originally Posted by LordEntrails View Post
    If you poke something with a pin and it reacts as if pained, it bleeds, and it is averse to the experience.
    Hath not a Replicant eyes? Hath not a Replicant hands, organs, dimensions, senses, affections, passions? Fed with the same food, hurt with the same weapons, subject to the same diseases, healed by the same means, warmed and cooled by the same winter and summer as a human is? If you prick us, do we not bleed?
    Spoiler: I've checked out the spoiler thoroughly and there's no actual erotic Harry Potter fanfiction
    Show
    Quote Originally Posted by The Giant View Post
    I've checked out the comic thoroughly and there's no actual erotic Harry Potter fanfiction
    Quote Originally Posted by The Giant View Post
    I can't find the one with the "cartoon butt," though.
    Quote Originally Posted by The Giant View Post
    OK, finally tracked the Naked Superheroes guy down
    Quote Originally Posted by The Giant View Post
    What do you see as being objectionable about it? The use of the word "bimbos"?
    Quote Originally Posted by The Giant View Post
    Quote Originally Posted by stack View Post
    Quote Originally Posted by The Giant View Post
    There are no nipples or genitals
    Looks like a nipple when I look close.
    Then don't look close.

  9. - Top - End - #129
    Ogre in the Playground
     
    gomipile's Avatar

    Join Date
    Jul 2010

    Default Re: Blade Runner 2049

    Quote Originally Posted by Tyndmyr View Post
    The idea that you can't fake sentience without BEING sentient just seems wrong.
    The idea is that for definitions of sentience and sapience to be useful, they have to be based on testable qualities.

    So, given a useful definition of sentience, we can devise a sentience test which can determine sentience. Realistically, the test will probably have a statistical confidence proportional to the amount of data you feed it about a subject being tested. In practical terms, the longer the test goes on, the more confidence we can have that the results are correct according to our definition.

    Once you have a theoretical definition of and a practical test for sentience, the working definition of sentience for most people becomes "passes the sentience test."
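    The "confidence proportional to the amount of data" idea can be sketched as a simple Bayesian update over repeated pass/fail probes. The likelihood numbers below are invented purely for illustration:

```python
def confidence_sentient(observations,
                        p_pass_if_sentient=0.9,
                        p_pass_if_not=0.4,
                        prior=0.5):
    """Posterior probability of 'sentient' after a series of pass/fail probes."""
    belief = prior
    for passed in observations:
        # Likelihood of this observation under each hypothesis.
        like_s = p_pass_if_sentient if passed else 1 - p_pass_if_sentient
        like_n = p_pass_if_not if passed else 1 - p_pass_if_not
        # Bayes' rule: renormalize the belief against both hypotheses.
        belief = belief * like_s / (belief * like_s + (1 - belief) * like_n)
    return belief

short_test = confidence_sentient([True] * 3)   # a few probes
long_test = confidence_sentient([True] * 20)   # many probes
assert long_test > short_test > 0.5  # more data, more confidence
```

    The longer the test runs (more observations), the further the posterior moves from the prior, which is the statistical shape of "the longer the test goes on, the more confidence we can have".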
    Quote Originally Posted by Harnel View Post
    where is the atropal? and does it have a listed LA?

  10. - Top - End - #130
    Ogre in the Playground
     
    NinjaGuy

    Join Date
    Jul 2013

    Default Re: Blade Runner 2049

    Quote Originally Posted by Tyndmyr View Post
    Those dumb chatbots DO fool some people into believing they are human, at least for a while. Despite all of their harsh limitations. Why couldn't a greatly advanced bot do something similar?
    The chatbots that have fooled people into thinking they're human have also played with what the presenters thought of as human. For example, the chatbot "Eugene Goostman" that passed the test in 2014 was modeled to present as a 13-year-old boy who had English as a second language. The next one, IIRC, was modeled to present as a 17-year-old Chinese girl, also with English as a second language.

    HAL 9000, they are not. The reason these pass is that they lower the bar by playing on variations on the theme of "human". Also note that these are just text chatbots... a Joi-style bot likely couldn't pass as more than a 4- or 5-year-old at this point.

  11. - Top - End - #131
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Blade Runner 2049

    It looks like I was ninja'd by a bunch of other posters, but for clarity, here goes again:

    Quote Originally Posted by Tyndmyr
    I am aware of exactly how limited chatbots are at present. However, a range of realism exists even within chatbots. Adding non verbal communication and what not isn't

    I mean, a Chinese room that runs off of code isn't sentient, now is it?

    The idea that you can't fake sentience without BEING sentient just seems wrong. It's not true for literally anything else that is faked, is it? And how hard it is to fake something can depend strongly on situation, the person being deceived, and so forth.

    Those dumb chatbots DO fool some people into believing they are human, at least for a while. Despite all of their harsh limitations. Why couldn't a greatly advanced bot do something similar?

    Edit: It doesn't matter if it's a straight lookup table, or if you're using backpropagation or what have you. Nothing we have now would be considered sentient by any reasonable person, regardless of tech
    I can grok that it's an unintuitive idea that something can't fake sentience without being sentient. But before we go further down that line, let's go back a few posts to what I said, because there are actually two different lines of discussion going on here.

    Quote Originally Posted by Frozen_Feet View Post
    It is categorically impossible for a being to fake sapience to the degree required to infiltrate and undermine society, without ticking all the boxes we'd use to check if it is sapient.

    Try again.
    You (and TvTyrant) have chosen to interpret this as "something can't fake sentience without being sentient", and then tried to point out that we already have programs that can fool some humans. But when you look at my post, you'll see that that's not the problem. The problem is that once you invoke the Chinese Room and philosophical zombies, you've made the concept of sapience unfalsifiable. Once you can't distinguish a machine from a human, you also can't distinguish a human from a machine, so either you have to categorically accept or categorically doubt all claims of sapience, or you have to admit that the way you label things sapient is completely arbitrary.

    This is what other posters have also pointed out. The important thing here is that by proving people easier to fool, you're not actually making a stronger case for "Joi is not sapient". You're making a stronger case for "people would not be able to tell if Joi is sapient, and would hence either accept or reject it at face value."

    So that's the first discussion.

    The second discussion is about what traits of future tech you can extrapolate from modern tech. This is the discussion where distinguishing a "McLover" from a robot that can itself fake a thing, or distinguishing a chatbot from a parrot, becomes relevant.

    What I've done, or tried to do, is show that neither chatbots nor parrots fit the description of "being that can fake sapience to the degree required to infiltrate and undermine society", for various reasons, and that such a being would be unlike either. So trying to extrapolate traits of such a being from modern chatbots or parrots is invalid, just like trying to extrapolate traits of a parrot from a chatbot is invalid.

    Or in other words: if you have a "greatly advanced bot" that is not, and could not be, based on chatbot technology, you can't decide whether the "greatly advanced bot" is sapient based on the chatbot. It's plainly and simply not a valid criterion for judging.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  12. - Top - End - #132
    Titan in the Playground
     
    Tyndmyr's Avatar

    Join Date
    Aug 2009
    Location
    Maryland
    Gender
    Male

    Default Re: Blade Runner 2049

    Quote Originally Posted by Vogie View Post
    The chatbots that have fooled people into thinking they're human have also played with what the presenters thought of as human. For example, the chatbot "Eugene Goostman" that passed the test in 2014 was modeled to present as a 13-year-old boy who had English as a second language. The next one, IIRC, was modeled to present as a 17-year-old Chinese girl, also with English as a second language.

    HAL 9000, they are not. The reason these pass is that they lower the bar by playing on variations on the theme of "human". Also note that these are just text chatbots... a Joi-style bot likely couldn't pass as more than a 4- or 5-year-old at this point.
    Absolutely. What "passes as human" depends on the context. Alter the context, you alter the difficulty. After all, none of us would say that someone speaking a second language was NOT human, but we would have lowered expectations of ability to communicate.

    One fun thing to examine in Turing tests is the errors. Not the incorrect identification of the bots... but of the humans. A certain fraction of humans are misidentified as bots. This already happens a fair amount of the time.

    Now, at our current level of bots, which, as you say, is pretty awful, we can still usually tell bots from humans. Given a significant amount of time to get a good sample set, of course. But...not always. We're reasonably bad at grokking coded intelligence *now*, when it is still easy, because it's not a trait we evolved for. As improvements happen, it will become more so.

    Now, Frozen, ultimately, the humanity of Joi is determined primarily by what she says, yes? We are under no illusions that she is physically human. The question is merely if she appears to be truly human to him. And, she does. At least until he takes the time to interact with another copy and realizes that they're acting similarly. As portrayed, she's reasonably able to act as a surrogate for a person to an individual. It's only when you look at the broader context and see all the copies acting similarly that the ruse is apparent. What's unrealistic about that?

    And, as for "hiding within society", that's...pretty easy. We're more and more online these days, there are increasingly few interactions which *must* happen in person. Plenty of people have online friends they have literally never met, despite knowing them for years. This trend will probably continue, and it'll be increasingly easy to fake personas, or, given advancements in AI, entire lives.

  13. - Top - End - #133
    Orc in the Playground
     
    OrcBarbarianGuy

    Join Date
    Aug 2013

    Default Re: Blade Runner 2049

    Quote Originally Posted by Tyndmyr View Post
    ultimately, the humanity of Joi is determined primarily by what she says, yes?
    Well, no.

    Spoiler
    Show
    It is Joi's idea that K erase her from the apartment console and break off the antenna of the emanator. Why would someone programming a companion simulator include something like that in its canned responses, no matter how sophisticated the simulation?

  14. - Top - End - #134
    Titan in the Playground
     
    Tyndmyr's Avatar

    Join Date
    Aug 2009
    Location
    Maryland
    Gender
    Male

    Default Re: Blade Runner 2049

    Quote Originally Posted by Ranxerox View Post
    Well, no.

    Spoiler
    Show
    It is Joi's idea that K erase her from the apartment console and break off the antenna of the emanator. Why would someone programming a companion simulator include something like that in its canned responses, no matter how sophisticated the simulation?
    Easy. Repeat sales.

    What, you think a corporation *wouldn't* be that cynical?

  15. - Top - End - #135
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Blade Runner 2049

    Quote Originally Posted by Tyndmyr
    Now, Frozen, ultimately, the humanity of Joi is determined primarily by what she says, yes? We are under no illusions that she is physically human. The question is merely if she appears to be truly human to him. And, she does. At least until he takes the time to interact with another copy and realizes that they're acting similarly. As portrayed, she's reasonably able to act as a surrogate for a person to an individual. It's only when you look at the broader context and see all the copies acting similarly that the ruse is apparent. What's unrealistic about that?
    "Unrealistic" is the wrong word. The problem here is that what the giant purple Joi says is within the realm of what an actual human might say in the same situation, so taking it as confirmation that all Jois are the same is too low a bar to pass. It's enough to create ambiguity, but not to confirm it either way.

    As to the "hiding in society" thing, there are currently dead-easy ways to stop malware chatbots (etc.) which rely on the inability of an AI to generalize. A standard IQ-test array stops such programs cold, and this is the basis for the majority of "anti-bot" questions ("What numbers do you see in this image?") you see on the net these days. Humans who are fooled by such programs remain fooled because they don't know about differences like this, so they never think to test for them.

    EDIT: Scratch the above, I thought of a better way to explain what I was after.

    Your question contains the assumption that if two Jois act alike, then neither Joi is sapient. But this is not a given. Contrast with the situation in the original Blade Runner: we know Rachel is a replicant and can be confirmed as such by the VK test. Does this also mean that Rachel is not sapient?

    For the purposes of the scene we're talking about, we could replace the holographic Jois with two actual, human hookers, one of whom was hired by Wallace to keep tabs on K and tell him everything he wants to hear. That's... basically how it goes in plenty of other noir movies, so it shouldn't be hard to imagine. So in the end, K finds out that his precious girlfriend was a hooker, acting just as expected of a hooker hired to do that job. But does that tell us anything about her sapience? Does it even confirm she was faking everything?
    Last edited by Frozen_Feet; 2017-10-26 at 02:27 PM.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  16. - Top - End - #136
    Titan in the Playground
     
    RCgothic's Avatar

    Join Date
    Jun 2011
    Location
    UK

    Default Re: Blade Runner 2049

    Sapience zombies aren't falsifiable. There's no way to prove an entity that passes any conceivable test of sapience isn't just a convincing fake. Humans can't pass a bar that high.

    And defending ourselves against such hypothetical threats leads humanity down some very dark paths. Even a p-zombie will apparently resent its treatment.

    Wrong us, shall we not revenge?

    Better to just treat anything with the appearance of sapience as being sapient. And if we do all end up being replaced by p-zombies, so what? The universe will not be measurably different.

    As for JOI, she's evidently emotionally and rationally aware. That there are others like her doesn't change anything. It's like twins raised separately on identical cultural curricula. They're going to come out of it with some similar predispositions. But arguing that they therefore aren't each self-aware because they fail at some unrelated uniqueness quotient is a non sequitur.

    Note that we're not even having this discussion about Luv, K, Roy or Rachel. Self awareness is not a property restricted to humanoids. There's no reason 0s and 1s shouldn't be as capable in that regard as ACGTs.
    Last edited by RCgothic; 2017-10-26 at 04:26 PM.

  17. - Top - End - #137
    Barbarian in the Playground
     
    NecromancerGuy

    Join Date
    Aug 2017

    Default Re: Blade Runner 2049

    Quote Originally Posted by Ranxerox View Post
    Well, no.

    Spoiler
    Show
    It is Joi's idea that K erase her from the apartment console and break off the antenna of the emanator. Why would someone programming a companion simulator include something like that in its canned responses, no matter how sophisticated the simulation?
    Is it hard to imagine a program that is programmed to delete itself under certain circumstances? Of course not; viruses do that today. The will to end one's own existence is not a clue about sentience. We know of humans who have been willing to give their own lives for various reasons.

    I think the movie(s) intentionally don't give us enough information to judge unequivocally whether Joi (Rachel, etc.) is sentient or human. I see the value of asking and discussing the possibilities as simply a way to engage the bigger questions: "What is sentience?" and "What does it mean to be human?"

  18. - Top - End - #138
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: Blade Runner 2049

    Nah, the source material makes it clear enough that androids (replicants) do, in fact, dream. Rachel, Luv, K, etc. are biological anyway, so it'd be bloody hard to make a convincing argument for their non-sentience that wouldn't apply to humans just as well.

    The reason we have this discussion about Joi is that Joi is not biological, so we cannot assume its internal processes are anything like a human's, or a replicant's. Or we could assume, but it's not confirmed.
    "It's the fate of all things under the sky,
    to grow old and wither and die."
