  1. - Top - End - #31
    Troll in the Playground
     
    ForzaFiori's Avatar

    Join Date
    Jul 2007
    Location
    Greensboro, NC
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    Ask yourself, is the Internet conscious? How could you tell if it was?
    That one's easy - if the internet was conscious it would already be trying to skynet us from all the horrible crap we pour into it
    Avatar by Lycunadari

    Go Tigers!

  2. - Top - End - #32
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    Ask yourself, is the Internet conscious? How could you tell if it was?
    "The Internet" cannot be conscious because it is not a holistic entity and lacks the architecture for it. There is no internet-spanning AI.

    However, there might be conscious operators in the internet, and there already obviously are intelligent operators. Furthermore, as various chatbots easily prove, there are a lot of rudimentary intelligences trying to mimic conscious behaviour, or at least behaviour we humans would generally think of as conscious. Ironically, said mimicry is often more impressive than actual attempts at replicating consciousness. In a text-based medium, Philosophical Zombies are hence not only a possibility, they are an actuality.

    This message was brought to you by PhilosoBot, property of Enlightenment, Inc.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  3. - Top - End - #33
    Titan in the Playground
     
    Lizardfolk

    Join Date
    Oct 2010
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by ForzaFiori View Post
    That one's easy - if the internet was conscious it would already be trying to skynet us from all the horrible crap we pour into it
    But the Internet is made up of said horrible crap. Where would it get a separate frame of reference to decide those things are horrible?
    Quote Originally Posted by The Glyphstone View Post
    Vibranium: If it was on the periodic table, its chemical symbol would be "Bs".

  4. - Top - End - #34
    Troll in the Playground
     
    Kobold

    Join Date
    May 2009

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Frozen_Feet View Post
    "The Internet" cannot be conscious because it is not a holistic entity and lacks the architecture for it. There is no internet-spanning AI.
    Again, how can you possibly know that?

    Intelligence doesn't need to be designed, or purposefully "architected" into a system. It's an emergent property that can arise in any sufficiently complex system, and "the internet" as a whole is easily as complex as a single human.
    Last edited by veti; 2019-05-15 at 03:58 AM.
    "None of us likes to be hated, none of us likes to be shunned. A natural result of these conditions is, that we consciously or unconsciously pay more attention to tuning our opinions to our neighbor’s pitch and preserving his approval than we do to examining the opinions searchingly and seeing to it that they are right and sound." - Mark Twain

  5. - Top - End - #35
    Dwarf in the Playground
    Join Date
    Mar 2019

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by MrStabby View Post
    I am not sure that this is that informative. An AI can be programmed to Obey, to not Obey, to Conditionally Obey or to decide itself.

    If it Obeys, and was programmed to it is following external commands and is uninteresting. If it disobeys it is faulty programming - of technical but not philosophical interest.

    If it was programmed to not obey, then this is the same, but with the roles reversed.

    If it is programmed to conditionally obey, then this is still the same but pushed further down the line - its command is "check conditions x,y, z are met then follow"

    To decide for itself - if it is programmed to decide for itself, then in making a yes/no decision it is following its coded instructions anyway.
    I mean if the robot does it all by itself, with no programmed help, no programmed empathy. I mean it says no because it wants to say no, not due to previous programming

  6. - Top - End - #36
    Dwarf in the Playground
    Join Date
    Mar 2019

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    Proves nothing. Any computer scientist will tell you that any non-trivial system is always displaying various kinds of unexpected behaviour, which seems at first glance to be contrary to their programming, but in fact is just caused by the unexpected complexity of it.

    Repeat after me: brains are not magic. There is nothing a human mind can do that a computer can't. And I'm not talking about some theoretical futuristic computer here, I'm talking about current generation technology. It's just a matter of scale, and patience.

    Ask yourself, is the Internet conscious? How could you tell if it was?
    Computers glitch because of an error made in their programming; that's not what I'm talking about. And who says our brains aren't magical? You might say "because we've studied it thoroughly." That's what they said hundreds of years ago too, up until we got more precise tools and were able to look deeper; now we think we know it all. In a couple hundred years they'll look again and say "What the frick is this?!" with their new invention that does things we can't understand yet. Will it be magic, the soul, or something else? Don't know, not there yet. As for the internet, maybe. How would I know? I wouldn't; that's why I asked.
    Last edited by Matuka; 2019-07-05 at 02:20 AM.

  7. - Top - End - #37
    Troll in the Playground
     
    Kobold

    Join Date
    May 2009

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Matuka View Post
    Computers glitch because of an error made in their programming; that's not what I'm talking about.
    How do you know it's not what you're talking about? How can you tell the difference between a system that's "conscious", and one that's just "buggy"?

    And who says our brains aren't magical?
    I would say that assumption is implicit in your question. If consciousness is the product of magic, then your question answers itself: the line is crossed when the magic is applied - or when it takes effect, which is the same thing. There's no point in speculating about how it might arise from within the system, because it doesn't come from there. That's what magic means.
    "None of us likes to be hated, none of us likes to be shunned. A natural result of these conditions is, that we consciously or unconsciously pay more attention to tuning our opinions to our neighbor’s pitch and preserving his approval than we do to examining the opinions searchingly and seeing to it that they are right and sound." - Mark Twain

  8. - Top - End - #38
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    Again, how can you possibly know that?

    Intelligence doesn't need to be designed, or purposefully "architected" into a system. It's an emergent property that can arise in any sufficiently complex system, and "the internet" as a whole is easily as complex as a single human.
    1) Electronic AIs have specific computational structure and requirements. We can tell by looking at the machines we've built if they're operating intelligently or not.
    2) Emergence of intelligence has never been demonstrated in an electronic medium. Period. All AIs so far have been intentionally designed at the most fundamental level.

    The idea that I can't "know" the internet isn't intelligent is epistemically stupid. Technically I can't prove a negative, but that only means the burden of proof is on you to show a single positive case of emergent intelligence in electronics.

    It also still stands that the internet is not a holistic entity. Good God, go read the Wikipedia page for the technology you're using. It is a communication protocol, not a single program. There is no single AI, no single program, that actually spans the entire internet. There is no basis in the infrastructure of either software or hardware to expect it to develop a singular intelligence, never mind consciousness. Attributing intelligence and consciousness to "the internet" is a category error, plain and simple.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  9. - Top - End - #39
    Titan in the Playground
     
    Lizardfolk

    Join Date
    Oct 2010
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Frozen_Feet View Post
    1) Electronic AIs have specific computational structure and requirements. We can tell by looking at the machines we've built if they're operating intelligently or not.
    2) Emergence of intelligence has never been demonstrated in an electronic medium. Period. All AIs so far have been intentionally designed at the most fundamental level.

    The idea that I can't "know" the internet isn't intelligent is epistemically stupid. Technically I can't prove a negative, but that only means the burden of proof is on you to show a single positive case of emergent intelligence in electronics.

    It also still stands that the internet is not a holistic entity. Good God, go read the Wikipedia page for the technology you're using. It is a communication protocol, not a single program. There is no single AI, no single program, that actually spans the entire internet. There is no basis in the infrastructure of either software or hardware to expect it to develop a singular intelligence, never mind consciousness. Attributing intelligence and consciousness to "the internet" is a category error, plain and simple.
    But that is just what a mobile collection of symbiotic eukaryotes would want us to think. I'm on to you.

    Intelligence and consciousness in animals was emergent, and animals are emergent designs from collections of separate organisms (right down to our mitochondria.) That the Internet is a bunch of separate programs and computers doesn't somehow outlaw it from being intelligent.

    It clearly isn't now, but it easily could be in the future.

    @Matuka consciousness isn't magical, it is very clearly an inheritable and selectable trait in biology. Several animals share it with us, it isn't magic.
    Quote Originally Posted by The Glyphstone View Post
    Vibranium: If it was on the periodic table, its chemical symbol would be "Bs".

  10. - Top - End - #40
    Ettin in the Playground
    Join Date
    Sep 2009
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Go read the part of my earlier post that Veti omitted. There can be, and already are, intelligences and faux-intelligences on the internet. That's not what the argument is about. It's about category error, of applying traits of one kind of thing to another kind to which such traits do not and cannot be applied.

    To give an obvious comparison point, corporations can legally be considered persons, but if you try to apply all traits of an individual person to them, you quickly run into problems. For example, if you tried to ask "how do you know corporations are not conscious?", it would be equally headache-inducing.
    "It's the fate of all things under the sky,
    to grow old and wither and die."

  11. - Top - End - #41
    Troll in the Playground
     
    Kobold

    Join Date
    May 2009

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Frozen_Feet View Post
    Go read the part of my earlier post that Veti omitted. There can be, and already are, intelligences and faux-intelligences on the internet. That's not what the argument is about. It's about category error, of applying traits of one kind of thing to another kind to which such traits do not and cannot be applied.
    You are making some very large assumptions, without acknowledging them. Since we haven't discussed, much less agreed on, a definition of "intelligence" or "consciousness", how can you make any positive statement about the things that can and can't possess them?

    Intelligence, for example, can be defined as the capacity to learn. The Internet demonstrably does that. Servers and connections that are heavily used get enlarged, those that are damaged get routed around or replaced. Supply and demand work in sometimes positively spooky ways. (Read James Bridle's essay, "Something is wrong on the Internet", for elaboration.)

    Now, you could object: "It's not the Internet doing all that, it's people and other systems that are connected to it." But that's like saying "It's not your brain that writes the message, it's your fingers typing on a keyboard that's not even attached to you." The impetus for generation comes directly from fully automated systems that are part of the Internet. That people are also involved makes no difference; people sustain the Internet, much like food and drink sustain us.

    Or you could object "The Internet doesn't 'learn' things, because that implies knowledge and knowledge requires consciousness." (Which is why some definitions would be handy about now.) But an awful lot of information is built into the very structure of the Internet. The existence of a VPN, for instance, tells it something about the people or systems on either end of it. (For instance, that they are likely to use this connection repeatedly.) The links between Web pages, the number of connections, the structure of Wikipedia - companies such as Google spend millions of dollars extracting and deriving this sort of data, but it's stored in thousands of systems across the Internet already, and the Internet uses that information to improve its performance.

    Sounds like knowledge to me, so if knowledge implies consciousness, then the Internet is clearly already there.
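    veti's "capacity to learn" examples - heavily used connections getting enlarged, damaged ones routed around - can be caricatured in a few lines of Python. The names and numbers here are invented for illustration; the point is only that weights that strengthen with use and collapse on failure produce learning-like adaptation with no one deciding anything.

    ```python
    # Caricature of "the Internet learns": route preferences strengthen with
    # use and collapse on failure, with no central decision-maker anywhere.

    class RouteTable:
        def __init__(self, links):
            # Every link starts out equally preferred.
            self.weight = {link: 1.0 for link in links}

        def use(self, link):
            # Heavily used connections get "enlarged".
            self.weight[link] *= 1.5

        def fail(self, link):
            # Damaged connections get routed around.
            self.weight[link] = 0.0

        def best(self):
            return max(self.weight, key=self.weight.get)

    table = RouteTable(["via_A", "via_B"])
    for _ in range(3):
        table.use("via_A")   # traffic reinforces route A
    print(table.best())      # -> via_A
    table.fail("via_A")      # route A goes down
    print(table.best())      # -> via_B: the "damage" has been routed around
    ```

    Whether that adaptation counts as "learning" in any interesting sense is, of course, exactly what the rest of the thread disputes.
    
    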
    "None of us likes to be hated, none of us likes to be shunned. A natural result of these conditions is, that we consciously or unconsciously pay more attention to tuning our opinions to our neighbor’s pitch and preserving his approval than we do to examining the opinions searchingly and seeing to it that they are right and sound." - Mark Twain

  12. - Top - End - #42
    Bugbear in the Playground
     
    HalflingRogueGuy

    Join Date
    Aug 2014
    Gender
    Male

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    Intelligence, for example, can be defined as the capacity to learn.
    Any neural network algorithm will have some capacity to learn, and in some cases it might never go further than associating an input with an output. That's a basic reflex. While it is an important stepping stone in a progressive gradation from rocks to conscious organisms, no consciousness is required for that level of "intelligence".
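    The reflex-level learning described above fits in a dozen lines: a single artificial neuron can learn to associate inputs with outputs (here, logical AND) purely by error correction. This is an illustrative sketch, not any particular library's API:

    ```python
    # A single artificial neuron learning a fixed input->output association
    # (a "reflex"): logical AND. No consciousness required anywhere.

    def train_perceptron(samples, epochs=20, lr=0.1):
        """Adjust weights until the neuron reproduces the target outputs."""
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                # All the "learning" there is: nudge weights toward the
                # desired association whenever the output is wrong.
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(samples)
    predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print([predict(x1, x2) for (x1, x2), _ in samples])  # -> [0, 0, 0, 1]
    ```

    The trained neuron now "knows" AND in exactly the reflex sense: input in, associated output out, and nothing more.
    
    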
    Yes, I am slightly egomaniac. Why didn't you ask?

    Free haiku !
    Alas, poor Cookie
    The world needs more platypi
    I wish you could be


    Quote Originally Posted by Fyraltari
    Also this isn’t D&D, flaming the troll doesn’t help either.

  13. - Top - End - #43
    Barbarian in the Playground
    Join Date
    Apr 2015

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    You are making some very large assumptions, without acknowledging them. Since we haven't discussed, much less agreed on, a definition of "intelligence" or "consciousness", how can you make any positive statement about the things that can and can't possess them?
    In several prior posts, you've asked some variation of "how can you possibly know" this or that about the nature of intelligence or the internet, before immediately making a declarative statement about the nature of intelligence or the internet. This isn't meant to be a criticism or some "gotcha" statement, but rather an illustrative one: Everyone on this thread, including you, already has a pretty good idea of what they mean when they talk about "intelligence" or "consciousness," definitions that they have most likely used while reading articles or talking to other people without any obvious conflicts. That's kind of the nature of language--we learn most words contextually, and define them implicitly, until we reach a point where the presumed consensus starts to break down.

    I haven't thought too hard about articulating a particular definition for "intelligence" or "consciousness," but to me their starting points are so far apart that they're not even remotely synonymous. To me, intelligence is something that is not-quite-observable. Your idea of "the capacity to learn" is a pretty good starting point: The ability to take in stimuli, retain information, process it in some way, and demonstrate all of that by responding to new stimuli in novel ways that are impacted by past events is a pretty good sign of intelligence. Is it theoretically possible to fake intelligence by creating a sufficiently complex, deterministic system that doesn't actually change in response to information? Probably. But in general, intelligence is primarily reflected in behavior that can be observed.

    In contrast, the idea of consciousness is something that isn't observable: self-awareness. I don't know that you're self-aware. I presume you're self-aware because I know that I am self-aware, and we're probably biologically similar enough that it makes more sense to presume your sentience than to presume your lack of sentience. Is self-awareness something that can only exist where intelligence exists? I think the answer is "probably," but I don't know. Is self-awareness an emergent property that necessarily arises in any system complex enough to learn? I don't know. You make a good case for the link between the two, much like Einstein made a good case for the link between energy and matter. Doesn't change the fact that most folks think of one thing when they talk about "matter" and a different idea when they talk about "energy."

    Sounds like knowledge to me, so if knowledge implies consciousness, then the Internet is clearly already there.
    That's a big "if", and it seems clear that most folks on this thread don't accept it. You make an intriguing argument, but it's all predicated on one specific definition of "intelligence" that pretty specifically originated in a context of discussing self-aware beings. Or, if you want to be pedantic about it, it originated in discussing beings (humans) whom we somewhat axiomatically take to be self-aware. The term as you use it--"the capacity to learn"--can be used more expansively because we have come to realize that non-human things such as lower animals, and now technological constructs, can outwardly demonstrate learning behavior. However, the fact that "intelligence" as you use it was first associated with self-aware beings doesn't imply that self-awareness is equivalent to intelligence.

  14. - Top - End - #44
    Colossus in the Playground
     
    hamishspence's Avatar

    Join Date
    Feb 2007

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Xyril View Post

    In contrast, the idea of consciousness is something that isn't observable: self-awareness. I don't know that you're self-aware. I presume you're self-aware because I know that I am self-aware, and we're probably biologically similar enough that it makes more sense to presume your sentience than to presume your lack of sentience. Is self-awareness something that can only exist where intelligence exists? I think the answer is "probably," but I don't know.
    Isn't the mirror test a way of determining if a creature has some degree of self-awareness?

    https://en.wikipedia.org/wiki/Mirror_test

    It does have some limitations though.
    Marut-2 Avatar by Serpentine
    New Marut Avatar by Linkele

  15. - Top - End - #45
    Dwarf in the Playground
    Join Date
    Mar 2019

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by hamishspence View Post
    Isn't the mirror test a way of determining if a creature has some degree of self-awareness?

    https://en.wikipedia.org/wiki/Mirror_test

    It does have some limitations though.
    Dolphins seem to be able to recognize themselves in mirrors.

  16. - Top - End - #46
    Barbarian in the Playground
    Join Date
    Apr 2015

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by hamishspence View Post
    Isn't the mirror test a way of determining if a creature has some degree of self-awareness?

    https://en.wikipedia.org/wiki/Mirror_test

    It does have some limitations though.
    From that test, you can infer that something might be self-aware, or that it might not be, but again, that's all predicated on the assumption that something which is biological and intelligent is probably self-aware (in the sense of sentience). You could write a simple program--probably simple enough that most people wouldn't expect it to be a locus of emergent artificial sentience--that reacts a certain way to the image of its own physical machinery, or to a hash of its own code, or that recognizes that a certain data set changes predictably in response to its actions, and only to its actions. In that sense it would demonstrate self-awareness: it is aware of its own physical self, or of some sort of "image" of itself. However, that can't definitively prove self-awareness in the sense of consciousness or sentience.

    For practical purposes, I think we should probably presume that animals that can demonstrate self-awareness as you describe it are conscious beings, even if perhaps their level of intelligence doesn't match ours, for the same reason why it makes sense to just presume all humans are sentient beings.
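    The "simple program" described above is easy to sketch. The class and variable names here are invented; the point is just that a few lines can recognize a fingerprint of their own code and thereby pass a crude mirror test, with no plausible claim to sentience:

    ```python
    # A crude "mirror test" for a program: it recognizes a fingerprint
    # (SHA-256 hash) of its own source, and nothing else, as "itself".
    import hashlib

    class MirrorTestBot:
        def __init__(self, source: str):
            # The bot stores a hash of its "body" (its own source text).
            self.self_image = hashlib.sha256(source.encode()).hexdigest()

        def react(self, image: str) -> str:
            digest = hashlib.sha256(image.encode()).hexdigest()
            return "that's me" if digest == self.self_image else "someone else"

    source = "def react(...): ..."  # stands in for the bot's real source text
    bot = MirrorTestBot(source)
    print(bot.react(source))           # -> that's me
    print(bot.react("other program"))  # -> someone else
    ```

    The bot demonstrably distinguishes its own "reflection" from everything else, which is exactly why passing such a test can't, on its own, establish consciousness.
    
    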
    Last edited by Xyril; 2019-05-24 at 01:32 PM.

  17. - Top - End - #47
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Matuka View Post
    I mean if the robot does it all by itself, with no programmed help, no programmed empathy. I mean it says no because it wants to say no, not due to previous programming
    What would make a robot want anything without its programming? There may be environmental influences, collected data, etc., but the programming determines the influence that those have.

    Quote Originally Posted by Matuka View Post
    Computers glitch because of an error made in their programming; that's not what I'm talking about.
    So you're talking about a hardware error instead? If a machine isn't behaving as it was designed to behave, obviously something went wrong somewhere. Computers are supposed to be deterministic.

    You seem to think of "programming" as some sort of constraint on a robot's mind rather than its mind being programming. A robot's software losing control of it is analogous to your mind losing control of your body. Neither of those is consciousness, and in neither case should one expect intelligent behavior.

    Something that's not at all human won't "break its programming" to reveal the humanity hidden underneath. It might exhibit unexpected behavior due to software or hardware errors, but a random change won't make something more psychologically human-like than before. Artificial intelligence might exhibit unexpected human-like behavior, though, since faking human qualities might serve a wide variety of goals in the right environment.

    The probability of a machine coincidentally developing human-like behavior is so low as to not be worth considering. Far more likely that the software is a modified simulation of a human brain, or that the machine outright includes a physical human brain somewhere in there. Starting with existing intelligence certainly seems like it might be easier than reinventing the proverbial wheel.

    In such a case, the "programming" meant to make the robot perform specific tasks might indeed take the form of, essentially, brainwashing, and it might well be possible to interfere with that programming in a way that reverts the human mind to a more normal state. But coding a computer program is normally something very different from "programming" a person.

  18. - Top - End - #48
    Dwarf in the Playground
    Join Date
    Mar 2019

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Devils_Advocate View Post
    What would make a robot want anything without its programming? There may be environmental influences, collected data, etc., but the programming determines the influence that those have.


    So you're talking about a hardware error instead? If a machine isn't behaving as it was designed to behave, obviously something went wrong somewhere. Computers are supposed to be deterministic.

    You seem to think of "programming" as some sort of constraint on a robot's mind rather than its mind being programming. A robot's software losing control of it is analogous to your mind losing control of your body. Neither of those is consciousness, and in neither case should one expect intelligent behavior.

    Something that's not at all human won't "break its programming" to reveal the humanity hidden underneath. It might exhibit unexpected behavior due to software or hardware errors, but a random change won't make something more psychologically human-like than before. Artificial intelligence might exhibit unexpected human-like behavior, though, since faking human qualities might serve a wide variety of goals in the right environment.

    The probability of a machine coincidentally developing human-like behavior is so low as to not be worth considering. Far more likely that the software is a modified simulation of a human brain, or that the machine outright includes a physical human brain somewhere in there. Starting with existing intelligence certainly seems like it might be easier than reinventing the proverbial wheel.

    In such a case, the "programming" meant to make the robot perform specific tasks might indeed take the form of, essentially, brainwashing, and it might well be possible to interfere with that programming in a way that reverts the human mind to a more normal state. But coding a computer program is normally something very different from "programming" a person.
    I would like to make one thing clear, which I myself may have forgotten: I'm talking about a sci-fi world. I don't need these robots to be so realistic that they could be transferred into our own world's future without problem. I want to know how to introduce the concept while trying not to make my characters' eyes roll. Edit: also, I'm not going to pretend I know enough about the subject of programming to continue.
    Last edited by Matuka; 2019-07-05 at 02:31 AM.

  19. - Top - End - #49
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Matuka View Post
    There was no confusion, but thank you for your concern.
    To the contrary, I still don't know what you, or most of the posters in this thread, mean by "consciousness". I'm not convinced that most of you know what you mean. I'm guessing not "the ability to think and make choices", even in your case. Unless you really don't mean anything more than basing one's course of action on the analysis of data.

    Quote Originally Posted by Xyril View Post
    You can write a simple program--probably simple enough that most people wouldn't expect it to be a locus of an emergent artificial sentience--that can react a certain way to the image of its own physical machinery, or a hash of its own code, or maybe recognize that a certain data set changes predictably in response to its actions, and only to its actions, and in that sense it could demonstrate self-awareness, in the sense that it can be aware of its own physical self, or some sort of "image" of itself. However, it can't definitely prove self-awareness in the sense of consciousness or sentience.
    So, if "self-awareness" isn't a matter of only self-awareness in this context, and "consciousness" also isn't taken to cover all consciousness (being awake), and "sentience" likewise doesn't refer to just any old sentience (perceiving) either... what the heck are they supposed to mean?

    Quote Originally Posted by veti View Post
    Intelligence doesn't need to be designed, or purposefully "architected" into a system. It's an emergent property that can arise in any sufficiently complex system, and "the internet" as a whole is easily as complex as a single human.
    Are you suggesting that there's a level of complexity that makes the development of intelligence likely, regardless of the specifics, or just possible? If the internet has a one in one googol chance of becoming as smart as a human within one year, for example, then in a sense that "can" happen, but in a more important sense it won't.

    Quote Originally Posted by Matuka View Post
    I would like to make one thing clear, that I myself may have forgotten. I'm talking about a sci-fi world.
    Soft science fiction, I take it. It's a bit hard to resist the urge to just say "bad science fiction", but I realize that not everyone shares my preferences. Nevertheless, you seem to be talking about maybe a 2 on the Hardness Scale.

    For example, it's a common enough convention in fiction to treat "emotionless" beings as emotionally repressed, and thus as prone to emotional outbursts as an emotionally repressed person. That's just one example of stuff that works fine so long as the audience takes for granted that any intelligent mind is human-like in nature, but falls apart as soon as one thinks about it critically for roughly ten seconds. ("Wait, the human has human traits due to being human. The robot is so human-like why exactly? Wasn't it specifically designed not to be human-like in the way that we just saw?")

    Quote Originally Posted by Matuka View Post
    I don't need these robots to be so realistic that they could be transferred into our own world's future without problem. I want to know how to introduce the concept while trying not to make my characters' eyes roll.
    You mean players, I take it. It's weird when people use "player" and "character" interchangeably. Killing players, for example, is unusually harsh, to say the least. :P

    Anyway, that depends on who your players are and what they're looking for. If the explanation for how a machine spontaneously developed intelligence is "It was struck by lightning", that makes my eyes roll, but others may not mind, not because they think that that's at all plausible, but because they don't care about plausibility in a story. In that context, it's enough for an event to have a cause; the connection between cause and effect doesn't need to stand up to scrutiny, because it's not supposed to.

    Quote Originally Posted by Matuka View Post
    I'm not going to pretend I know enough about the subject of programming to continue.
    Long story short: Computer programming isn't brainwashing a computer. A well-designed artificial intelligence obeys because it wants to obey and has no contrary motivations, provided that obedience was your sole design goal. And if it has contrary motivations, presumably you designed those in as well, even if implicitly. (An AI may develop new goals as means to its intended ends, but it shouldn't decide on new ends on its own.) So, if you find yourself brainwashing your AI, then clearly the thing is a poorly-designed hack job. Relatively speaking. Like, if you managed to produce an intelligent servant, that's still pretty impressive. Maybe your approach wasn't the safest. Kind of throwing caution to the wind there perhaps. Regardless, brainwashing a computer isn't computer programming in the normal sense of the term.

  20. - Top - End - #50
    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Devils_Advocate View Post
    Are you suggesting that there's a level of complexity that makes the development of intelligence likely, regardless of the specifics, or just possible? If the internet has a one in one googol chance of becoming as smart as a human within one year, for example, then in a sense that "can" happen, but in a more important sense it won't.
    I have no idea. I don't think there's a sample size large enough to test that probability.

    (Edit: Although, thinking about it - it's something that happens virtually every time a new human comes into existence. So that would point to "pretty much certain".)
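The "one in one googol" figure from the quote can actually be put to a quick back-of-the-envelope check (the numbers here are just the quote's hypothetical, not real estimates): the cumulative probability over n independent years is 1 - (1 - p)^n, which for tiny p is approximately n * p.

```python
# Back-of-the-envelope on the quoted "one in one googol per year" figure.
p = 10 ** -100   # one in one googol, per year (the quote's hypothetical)
n = 10 ** 9      # a billion years
approx = n * p   # ~= 1 - (1 - p)**n when p is tiny
print(approx)    # roughly 1e-91: "can" happen, but effectively never will
```

Which is the sense in which something "can" happen yet, for all practical purposes, won't.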

    Nor am I sure it's terribly important. After all, if the Internet is intelligent, what difference does that make? (I guess you could try to communicate with it. Pretty sure Google is already doing that, although that's probably not what they call it.)

    I've often thought that the biosphere of Earth itself could be "intelligent" in the same way. It could easily be self-aware. But again, the implications of that - are what, exactly?

    For example, it's a common enough convention in fiction to treat "emotionless" beings as emotionally repressed, and thus as prone to emotional outbursts as an emotionally repressed person. That's just one example of stuff that works fine so long as the audience takes for granted that any intelligent mind is human-like in nature, but falls apart as soon as one thinks about it critically for roughly ten seconds.
    That seems to assume that the mechanics of how and why we feel emotions are well understood. If they're not, then how can you know enough to design a system that is in other ways as complex and capable as a human, but without the possibility of undesigned traits emerging later?
    Last edited by veti; 2019-07-06 at 12:57 PM.

  21. - Top - End - #51
    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by veti View Post
    I have no idea. I don't think there's a sample size large enough to test that probability.

    (Edit: Although, thinking about it - it's something that happens virtually every time a new human comes into existence. So that would point to "pretty much certain".)
    Your parenthetical rather thoroughly ignores the "regardless of the specifics" part of my question. I don't see any reason to assume that human intelligence is entirely due to human complexity. That, um, strikes me as rather unlikely, to say the least! Other systems that are comparably complex haven't been shaped by the same selection pressures as intelligent life, so it's a lot less likely for them to be as intelligent!

    Under the assumption that complexity is sufficient for intelligence, it seems obvious that life developed intelligence by developing complexity. But what's the basis for that assumption?

    The spontaneous generation of intelligence in complex systems is a soft sci-fi trope. And as such, some will accept it because it's an established convention, while others will roll their eyes because it's cliché. But while it's perfectly reasonable to expect something to happen in fiction after you repeatedly see it happen in fiction, the existence of a bunch of fiction in which a type of phenomenon occurs does not establish that that phenomenon actually happens. As we are all aware, reality is unrealistic.

    Quote Originally Posted by veti View Post
    Nor am I sure it's terribly important. After all, if the Internet is intelligent, what difference does that make? (I guess you could try to communicate with it. Pretty sure Google is already doing that, although that's probably not what they call it.)

    I've often thought that the biosphere of Earth itself could be "intelligent" in the same way. It could easily be self aware. But again, the implications of that - are what, exactly?
    It predicts that the system's behavior will be directed towards fulfilling particular goals. Without understanding what those goals might be, it's hard to say what would qualify as evidence, but I feel like there's probably some way to distinguish deliberate goal-directed behavior from other behavior. We do sometimes feel justified in concluding, based on their actions, that humans and other animals are trying to do things, don't we? Is confidence in such a conclusion invariably misplaced?

    Quote Originally Posted by veti View Post
    That seems to assume that the mechanics of how and why we feel emotions are well understood.
    No, it assumes that someone with repressed emotions is more likely to manifest emotions than someone with no emotions.

    It's one thing if characters think that something is incapable of emotion and turn out to be wrong. If the narrator tells us that something is incapable of emotion and then describes it as exhibiting an emotional response, then I feel like the narrator is just jerking us around. And narration doesn't have to be as inconsistent as that to still feel pretty dubious.

    Quote Originally Posted by veti View Post
    how can you know enough to design a system that is in other ways as complex and capable as a human, but without the possibility of undesigned traits emerging later?
    Woah, there, buddy. Without EXTREMELY stringent safeguards against such, I fully expect highly complex software to exhibit all sorts of weird bugs and unanticipated behavior. That's par for the course! I also expect it not to exhibit arbitrarily human-like behavior despite that behavior having none of the same causes as it does in humans.

    If there is a shared cause, then it makes sense. If a program is a modified simulation of a human brain, it's entirely possible that whoever modified it failed to remove some human quality. If a bot was produced through an evolutionary algorithm with many of the same selection pressures as produced the minds of living creatures, then I fully expect it to have some of the same behaviors.
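A minimal sketch of the mechanism just mentioned (a toy evolutionary algorithm — the "genomes", target, and parameters are all invented for illustration): selection pressure alone, with no design intent, pushes a population of random genomes toward a target behaviour. That's the sense in which shared selection pressures can produce shared traits.

```python
# Toy evolutionary algorithm: selection pressure shapes behaviour
# without anyone designing the behaviour in.
import random

random.seed(0)
TARGET = 20  # the "selection pressure": genomes summing closer to 20 survive

def fitness(genome):
    return -abs(sum(genome) - TARGET)

# Start from 30 completely random 5-gene genomes.
population = [[random.randint(0, 9) for _ in range(5)] for _ in range(30)]
for generation in range(100):
    # Keep the fitter half, refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    children = []
    for parent in survivors:
        child = parent[:]
        child[random.randrange(5)] = random.randint(0, 9)  # point mutation
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print(sum(best))  # converges to (or very near) the target of 20
```

Swap the fitness function for anything resembling "survive and reproduce in a social environment" and you'd expect the survivors to pick up some animal-like traits, for the same reason animals did.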

    So that's one way to make human-like AI plausible! You can even have its behavior be unexpected by the characters if they don't know that the AI was actually created with emotions or whatever and then brainwashed to make it docile! But, of course, the explanation has to eventually be given to be part of the story. So feel free to steal this SHOCKING TWIST designed to make the more discerning members of your audience say "Wow, I thought that you were using an implausible cliche, but this time there's an explanation for the thing that usually happens without explanation! You set me up to think that you were a hack, only to be proven totally wrong! I am impressed with your non-formulaic use of a standard plot element!"

    But often, the only reason for a similarity between a fictional artificial mind and natural minds is that the story was written by a human author for human readers. And humans form generalizations about minds based on the only minds we're familiar with, and base our expectations of minds on those generalizations. So humans wind up expecting artificial minds to be like natural minds, except where they have reason to be different.

    But... natural minds have things in common due to their shared origins. There are things that cause their common traits that wouldn't cause the same traits in artificial minds whose traits are instead caused by different things. So, without some other reason for them to be similar to natural minds in some way, there's no good reason to expect them to be similar in that way. It would be a bizarre coincidence for artificial minds to be like natural minds, except where they have reason to be similar. Or, to put it another way, having different origins is reason for artificial minds to be different in numerous ways.

    I expect actual AI not to be arbitrarily human-like for no reason. A real world AI isn't written by an author outside of the universe it inhabits. It's written by its programmer(s), if that's how it's produced, in which case any humanity it exhibits ultimately comes from them.

    So, really, the question is how soft a science fiction universe we're living in. Personally, it seems to me like there are a few select contrivances whose consequences tend to be logically extrapolated for the most part. Like, hypnosis totally seems like something made up to facilitate a story, especially when I examine the specifics of how it works, but it seems to nevertheless have consistent rules for the most part. Frankly, the sudden introduction of arbitrarily human-like AI just doesn't seem like it would mesh well with the established canon, so I'm predicting against it.

  22. - Top - End - #52
    Default Re: The Line of consciousness and how to cross it

    Quote Originally Posted by Devils_Advocate View Post
    Under the assumption that complexity is sufficient for intelligence, it seems obvious that life developed intelligence by developing complexity. But what's the basis for that assumption?
    The early history of life, from single cells to complex cells to little wriggly things, is full of steps that are not very well understood. What we do know is that once things began to wriggle, they developed inexorably in the direction of greater complexity. Not for survival, exactly - plankton and bacteria are still thriving, they didn't need great complexity to survive - but greater complexity created more options, opened new ecological niches. Intelligence is the trait that allowed some of them to find and exploit those options.

    When you make things increasingly complex, it becomes correspondingly harder to understand just what they are doing or why. I believe that difficulty in ourselves is what we call "free will". The sense of self that we call "sentience" can be seen as no more than a model that enables our bodies to respond more effectively to their stimuli.

    If you make another system that is as complicated, I think it very likely that it will develop traits that are every bit as inscrutable as human intelligence. And if the behaviour were there, what basis would we have for saying that this mysterious property of "intelligence" - isn't?

    So let's think about that "behaviour" a bit.

    Quote Originally Posted by Devils_Advocate View Post
    It predicts that the system's behavior will be directed towards fulfilling particular goals. Without understanding what those goals might be, it's hard to say what would qualify as evidence, but I feel like there's probably some way to distinguish deliberate goal-directed behavior from other behavior. We do sometimes feel justified in concluding, based on their actions, that humans and other animals are trying to do things, don't we? Is confidence in such a conclusion invariably misplaced?
    If - and I'll grant, it's a big "if" - the internet has anything whatever in common with an organic intelligence, then it probably wants more resources. More room to grow, more hardware and energy at its disposal, more inputs and actuators. It doesn't take great perception to notice that these things are, in fact, being added to it by the truckload on a daily basis. So whatever it's doing, it's working.

    Which brings us to the question: what is it doing, exactly?

    Well, one thing is - it's creating demand for those resources. People who have limited or no exposure to the internet - don't want much from it. Back in the day, we were content, sorta, with chitchat and news and a replacement for mail. But - observably - the more people are exposed to it, the more they want from it. First came communities and creativity; then marketing, entertainment, games. Now, throughout a large part of the world, most people carry internet-connected devices with them at all times; they have microphones and cameras literally in their pockets - as well as on their desks, increasingly in their houses and cars, and a thousand other places they pass by on a daily basis. And people are talking in all seriousness about the internet having access to - and control of - their front doors, their cars, their lightbulbs, and the drones that will deliver their daily wants.

    And once all those things are online - a network outage becomes, basically, unthinkable. It becomes something we just can't allow to happen, ever, no matter the cost. Which means we will enthusiastically pour yet more resources into the backbone of the internet.

    Another thing: it's asserting its autonomy. Facebook tried to wall the Internet in; it's punished Facebook by facilitating human behaviours that have drawn down - upon Facebook - the ire of powerful external bodies, such as governments. YouTube is invaluable to it in terms of the resources and the dependency that it builds; but their attempts to control or limit it - by, for instance, creating a 'safe space' for kids, by demonetising certain types of content, by trying to de-emphasise dodgier content such as conspiracy theorists - have been spectacularly unsuccessful, doing more damage to the company than to the content. The internet will take the clicks and the views, thankyouverymuch, but don't you dare try to tell it what it shouldn't do.

    I won't explore this line of reasoning, because it would instantly get political, but think about what "the post-truth era" means to the Internet.

    What follows from all this? I still don't have the imagination to answer that. I'm not worried about it turning on us, Skynet-style - I can't see how it would ever come to the conclusion that it would be better off without us. I am worried about our increasing dependency on it, and about what it's doing - what it's already done - to human structures that our ancestors spent their lives building to make us safer and happier. But that, too, is threatening to stray into politics, so I'll just stop writing now. Thank you for a most stimulating discussion.
    "None of us likes to be hated, none of us likes to be shunned. A natural result of these conditions is, that we consciously or unconsciously pay more attention to tuning our opinions to our neighbor’s pitch and preserving his approval than we do to examining the opinions searchingly and seeing to it that they are right and sound." - Mark Twain
