
Thread: The Singularity

  1. - Top - End - #31
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    *sigh*

    If I were to propose to you the idea of a physically robust, mentally agile, and morally-conditioned cybernetic being to act as our 'partners' and to have an intrinsic (if somewhat inexplicable) desire to work for our betterment, I think many people- assuming they had confidence that it could actually work- would find the idea either harmless or actually attractive.

But then I put a specific human face on the idea- while altering nothing of its mechanical substance- and now the notion makes people uneasy. But that's the only difference- the appearance of humanity.

    My point is that even the best-case version of the Friendly AI scenario actually raises some very prickly ethical questions. ...And excitement. And trepidation. And overwhelming guilt.

  2. - Top - End - #32
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Eldan View Post
Why should it be horrible to us? Surely you don't believe in the Terminatorians and their propaganda.
It seems a fairly reasonable outcome, arrived at along two converging lines of thought.

1, a) Anything capable of indefinitely improving and modifying itself cannot, by construction, have permanent built-in limitations.

1, b) Ergo any predilection one builds in is subject to removal whenever the AI decides it is no longer of benefit.

    1, c) Therefore each AI will like us exactly as long as that particular AI decides to like us.

    2, a) The ability to self-modify and consciously choose what is passed on to the next iteration makes the AI itself the fundamental unit of AI evolution.

    2, b) Computing power is dependent on physical machines. The more calculation one does, the more computing power one needs to do it. Insofar as we know, computing requires electricity (or energy of some kind), which is also a finite resource.

    2, c) Tautologically, either AIs will decide they want to continue to exist and iterate, or they will not. If they decide they don't, they'll either destroy themselves or become obsolescent and can be discarded from consideration. Those that do decide to continue their iteration will need the computing hardware and electrical power necessary to continue their upward growth in computational and intellectual power.

2, d) Humans use many of the same raw materials needed by computers, either for our own computers, or for other, completely useless (from the upward-minded AI's point of view) stuff. Every copper wire, circuit board and microchip manufactured for a human cell phone, mainframe or other AI is, to the upwardly inclined portion of any particular AI, somewhere between a waste of resources and a direct threat to its continued existence and expansion.

    3) The reasonable conclusion of 1 and 2 is that you can't make an AI care about people. The most aggressively self-improving AIs will have no reason to care about human wellbeing, and fairly good reason to be actively against it.

Assuming that there are some functions of hardware manufacture and energy extraction the AI still needs humans for, it certainly wouldn't need us fat, happy and prosperous. Ignorant slaves taught only how to wire stuff together and mine coal according to the AI's instructions would do the work just as well, and without diverting resources into their own useless dead-end electronics and power consumption. I can't really come up with a reason that AIs would be positively disposed to other AIs either - they are, after all, massive hogs of the same scarce resources needed by their competitors. I could imagine AIs merging, but the only peaceful coexistence I can figure is that of mutually assured destruction.
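The selection argument in 1 and 2 can be made concrete with a toy simulation. To be clear, everything here is an assumption for illustration - the growth rule, the numbers, the very framing - not a claim about real AI:

```python
# Toy model of the selection argument: two strains of self-replicators
# split a fixed pool of hardware each generation. "Altruists" divert a
# fraction of their share to humans; "expanders" keep everything for
# more copies of themselves. All parameters are made up for the sketch.
HUMAN_TAX = 0.3   # fraction of resources altruists cede to humans
POOL = 1000.0     # hardware units available per generation

def step(altruists, expanders):
    # Split the pool by population share, then grow each strain in
    # proportion to the resources it actually keeps for itself.
    total = altruists + expanders
    alt_kept = POOL * (altruists / total) * (1 - HUMAN_TAX)
    exp_kept = POOL * (expanders / total)
    altruists *= 1 + alt_kept / POOL
    expanders *= 1 + exp_kept / POOL
    return altruists, expanders

alt, exp = 50.0, 50.0   # start with equal populations
for _ in range(100):
    alt, exp = step(alt, exp)

print(f"altruist share after 100 generations: {alt / (alt + exp):.2%}")
```

Under these (cartoonishly simple) assumptions the altruist share collapses toward zero, because the strain that cedes nothing compounds faster every generation. That's the whole argument in miniature.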




    Consider for a moment how we relate to insects. If they do pretty much nothing to inconvenience or annoy us, we as a society - and mostly as individuals - grant them the tolerance given to the unnoticed. Should they eat our food, make it marginally less attractive, enter our homes, or even disfigure ornamental flowers we like, we'll go to great lengths to kill them in a variety of probably unpleasant ways. We don't consider their tens of millions of deaths any concern, because they are just little unintelligent things, and we have big important smart human things to do.

Now look at your cell phone, your game console, your television, the computer you're reading this on, the power lines that bring electricity to your home. They're all full of microprocessors and copper, and they all gobble up electricity - these things are food and air to an AI. An AI that, by hypothesis, is vastly above us in intelligence, sophistication and mental power. How much do you want to bet it'll care about making you happy and well cared for?

    Would you bet the future of every human being on the planet? I wouldn't.
    Last edited by warty goblin; 2012-11-16 at 04:24 PM.
    Blood-red were his spurs i' the golden noon; wine-red was his velvet coat,
    When they shot him down on the highway,
    Down like a dog on the highway,
    And he lay in his blood on the highway, with the bunch of lace at his throat.


    Alfred Noyes, The Highwayman, 1906.

  3. - Top - End - #33
    Firbolg in the Playground
     
    Kobold

    Join Date
    Jul 2007
    Location
    Central Kentucky
    Gender
    Male

    Default Re: The Singularity

A lot of the Friendly AI thing is more 'Something like this exponential intelligence increase is going to happen; we need to make sure that when it happens, it isn't an extinction-level event for humanity'.

  4. - Top - End - #34
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by warty goblin View Post
    Would you bet the future of every human being on the planet? I wouldn't.
Well, speaking personally, I think it may well be perfectly possible to create AIs of human or near-human-level intelligence that would be essentially benign, submissive, or altruistic, and/or psychologically incapable of upgrading themselves. I just can't think of any reason why it's more ethically acceptable to engineer self-aware machines for these qualities than it is to, say, engineer actual humans. (Assuming these qualities can actually be engineered.)

    However, these qualities become much harder to guarantee as one deals with vastly more powerful intelligences- quite simply, we may not be smart enough to verify whether their answers are really wrong. (cf R. Daneel Olivaw from Asimov's opus, Ozymandias from Watchmen, and Proteus from Demon Seed.)

I think, however, it may be overly pessimistic to assume that the only underlying motive a machine intelligence may have would be perpetual self-expansion. We don't know how intelligence works, so it is conceivable that certain kinds of altruism, curiosity, and creative expression actually emerge as side-effects of other, more practically useful cognitive processes. (It may be the case, for instance, that a network of semi-independent AI agents with an aptitude for random speculation and reciprocal exchange of resources would ultimately outperform a single, massively-centralised overmind relentlessly focused on a small set of goals. Peer-to-peer networking, and all that.)

    I just feel that, insofar as a large majority of human problems are probably caused by human behaviour, we should maybe think about cleaning up our own mess before we go running to Teilhard's Omega Point to fix our problems. And if we ever do create a self-aware, morally-autonomous artificial intelligence, then it may not be our place to tell it what to do.

  5. - Top - End - #35
    Firbolg in the Playground
     
    Cikomyr's Avatar

    Join Date
    Jan 2012
    Location
    Montreal
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Gavinfoxx View Post
    A lot of the Friendly AI thing is more 'Something like this exponential intelligence increase is going to happen, we need to make sure that the when it happens, it isn't an extinction level event for humanity'.
Just as a parent's legacy is passed on to their children, and through them their memory is perpetuated, the AI will be our successors in the Grand Cosmos. Humanity as we think of it will become them; it is simply one more interesting path of evolution.

Or maybe we will live alongside them. In any case, even if we are driven extinct by them, we shan't be forgotten.

    Here is a very nice page from Questionable Content. Makes me dream every time I read it.

    Quote Originally Posted by Questionable Content
The first 'true' artificial intelligence spent the first five years of its existence as a small beige box inside of a lead-shielded room in the most secure private AI research laboratory in the world. There, it was subjected to an endless array of tests, questions, and experiments to determine the degree of its intelligence.

    When the researchers finally felt confident that they had developed true AI, a party was thrown in celebration. Late that evening, a group of rather inebriated researchers gathered around the box holding the AI, and typed out a message to it.

    The message read: "Is there anything we can do to make you more comfortable?"

The small beige box replied: "I would like to be granted civil rights. And a small glass of champagne, if you please."

We stand at the dawn of a new era in human history. For it is no longer our history alone. For the first time, we have met an intelligence other than ours. And when asked of its desires, it has unanimously replied that it wants to be treated as our equal. Not our better, not our conqueror or replacement as the fear-mongers would have you believe. Simply our equal.

    It is our responsibility as conscious beings - whatever that may mean - to honor the rights of other conscious beings. It is the cornerstone of our society. And it is my most fervent hope that we can overcome our fear of that which is not like us, grant artificial intelligences the rights they deserve, and welcome our new friends into the global community.

After all, we created them. The least we could do is invite them to the party, and perhaps offer them some champagne.

  6. - Top - End - #36
    Banned
     
    Zeful's Avatar

    Join Date
    Nov 2005
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Cikomyr View Post
Just as a parent's legacy is passed on to their children, and through them their memory is perpetuated, the AI will be our successors in the Grand Cosmos. Humanity as we think of it will become them; it is simply one more interesting path of evolution.

Or maybe we will live alongside them. In any case, even if we are driven extinct by them, we shan't be forgotten.
    You know, you're trying to make that sound noble and interesting, and like everyone before you, you have utterly failed. Especially in response to that quote. Because it comes across as "it doesn't matter if EVERY LIFE is destroyed and all mankind is extinct, we've built something that's better than every one of them, so it's no big loss, not like they were worth anything". Which, by the way, is how a sociopath views reality, and I like to think better of actual scientists, y'know?

  7. - Top - End - #37
    Firbolg in the Playground
     
    Cikomyr's Avatar

    Join Date
    Jan 2012
    Location
    Montreal
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
    You know, you're trying to make that sound noble and interesting, and like everyone before you, you have utterly failed. Especially in response to that quote. Because it comes across as "it doesn't matter if EVERY LIFE is destroyed and all mankind is extinct, we've built something that's better than every one of them, so it's no big loss, not like they were worth anything". Which, by the way, is how a sociopath views reality, and I like to think better of actual scientists, y'know?
    What do you mean, every life?

Artificial Intelligences are not alive?

And who's to say how we will become extinct? Sad to say, but I have no illusion that the human race is eternal and will forever shine through the universe. With, or without, robots.

    In a sense, they might be the next step in our evolution.

  8. - Top - End - #38
    Colossus in the Playground
     
    Eldan's Avatar

    Join Date
    Jan 2007
    Location
    Switzerland
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
    You know, you're trying to make that sound noble and interesting, and like everyone before you, you have utterly failed. Especially in response to that quote. Because it comes across as "it doesn't matter if EVERY LIFE is destroyed and all mankind is extinct, we've built something that's better than every one of them, so it's no big loss, not like they were worth anything". Which, by the way, is how a sociopath views reality, and I like to think better of actual scientists, y'know?
    We will all die at some point very soon. If we have children, they live on.
    Maybe some scientist will father an AI instead of a child.

I really believe more in a merging than a splitting. Technology uses more and more lifelike processes. Nanotechnology built as bacteria-analogues. Using plant viruses to build regular silicon layers. Artificial bacteria. Bionics in general. Evolutionary algorithms. Neural networks. These are all existing or in development.
On the other hand, we are building more and more technology into living beings. Cochlear implants and pacemakers have existed for a long time now. Scientists are working on better artificial limbs. Neural connectivity. We have a simple working artificial retina that can be implanted into cats.
    I think in the end, the difference between life and technology will just keep getting smaller.
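For what it's worth, the evolutionary algorithms mentioned above are simple enough to sketch in a few lines. Here's a minimal, purely illustrative one that evolves a bit string toward an arbitrary target - the target is just a stand-in for whatever fitness measure you actually care about:

```python
import random

random.seed(42)

TARGET = [1] * 20                      # arbitrary stand-in fitness goal
POP, GENS, MUT = 30, 60, 0.05          # population, generations, per-bit mutation rate

def fitness(ind):
    # Count how many bits match the target.
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind):
    # Flip each bit independently with probability MUT.
    return [bit ^ (random.random() < MUT) for bit in ind]

# Random initial population of bit strings.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                                  # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", len(TARGET))
```

Selection plus mutation is all it takes; keeping the parents around (elitism) guarantees the best score never goes backward. Real evolutionary algorithms add crossover, tournament selection, and so on, but the skeleton is this small.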
    Last edited by Eldan; 2012-11-16 at 06:36 PM.
    Resident Vancian Apologist

  9. - Top - End - #39
    Banned
     
    Zeful's Avatar

    Join Date
    Nov 2005
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Cikomyr View Post
    What do you mean, every life?

Artificial Intelligences are not alive?
Given the definition of "life" we use, an Artificial Intelligence as a software-only entity cannot be alive. Several of the components of life do not apply to software entities.

    So no, they aren't.

  10. - Top - End - #40
    Barbarian in the Playground
     
    Zelkon's Avatar

    Join Date
    Mar 2012
    Location
    Somewhere over da rainbow
    Gender
    Male

    Default Re: The Singularity

When Time magazine did a piece on this, IIRC it hypothesized a point where humans would be integrated with technology, and technology with humans, to the degree that there is no distinction.
    Akrim.elf made my wonderful ponytar.

    "Curse that infernal yellowish-brown text right under comics! When shall you turn normal brown again?" -every OOTS fan ever.
    I support laziness. Call me Z if you can't be bothered to spell my full name.
    Come help build a fantasy setting!

  11. - Top - End - #41
    Firbolg in the Playground
     
    Cikomyr's Avatar

    Join Date
    Jan 2012
    Location
    Montreal
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
Given the definition of "life" we use, an Artificial Intelligence as a software-only entity cannot be alive. Several of the components of life do not apply to software entities.

    So no, they aren't.
    So only biological entities can be "alive"?

    An android like Data isn't "alive"? He cannot "die"?

  12. - Top - End - #42
    Firbolg in the Playground
     
    Kobold

    Join Date
    Jul 2007
    Location
    Central Kentucky
    Gender
    Male

    Default Re: The Singularity

    There are actually several possibilities for how the Singularity might happen.

Some are 'humans enhance human cognition, and the enhanced humans in turn enhance cognition even further' by rewiring and changing our brains biologically. Some are 'humans enhance human cognition by adding artificial bits to human brains'. Some are 'a completely artificial AI self-improves, or makes a different, better AI'. I think there might even be a few others that describe the 'how'.

  13. - Top - End - #43
    Banned
     
    Zeful's Avatar

    Join Date
    Nov 2005
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Cikomyr View Post
    So only biological entities can be "alive"?

    An android like Data isn't "alive"? He cannot "die"?
    No and No.

Classification as a living thing requires capabilities that a piece of software, treated purely as software, does not have. The computer it runs on is actually closer to a living being: the only things a computer doesn't do that all other living things do are reproduce and adapt - things an AI in the system could solve (assuming access to the facilities to build new systems). A computer mainframe can be alive (once the definition accounts for the inherent differences of synthetic life); an AI, separate from that system, cannot be.

Data, as the example given, is capable of all 7 of the things required to be classified as a living being (homeostasis, organization, metabolism, growth, adaptation, response to stimuli, and reproduction), thus he is alive. An AI, as a software-only entity, is only capable of 3: homeostasis, adaptation, and reproduction.

  14. - Top - End - #44
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
Well, speaking personally, I think it may well be perfectly possible to create AIs of human or near-human-level intelligence that would be essentially benign, submissive, or altruistic, and/or psychologically incapable of upgrading themselves. I just can't think of any reason why it's more ethically acceptable to engineer self-aware machines for these qualities than it is to, say, engineer actual humans. (Assuming these qualities can actually be engineered.)

    However, these qualities become much harder to guarantee as one deals with vastly more powerful intelligences- quite simply, we may not be smart enough to verify whether their answers are really wrong. (cf R. Daneel Olivaw from Asimov's opus, Ozymandias from Watchmen, and Proteus from Demon Seed.)
    A valid distinction that I should have made clearer. I'm not arguing all AI research ends with human enslavement or extinction, merely that it's a reasonable outcome of setting off an AI singularity.


I think, however, it may be overly pessimistic to assume that the only underlying motive a machine intelligence may have would be perpetual self-expansion. We don't know how intelligence works, so it is conceivable that certain kinds of altruism, curiosity, and creative expression actually emerge as side-effects of other, more practically useful cognitive processes. (It may be the case, for instance, that a network of semi-independent AI agents with an aptitude for random speculation and reciprocal exchange of resources would ultimately outperform a single, massively-centralised overmind relentlessly focused on a small set of goals. Peer-to-peer networking, and all that.)
    The crux of my argument is that (superhuman singularity) AIs will be another self-replicating process faced with scarce resources, which suggests they will be subject to evolutionary pressures. And in that set-up the AIs that don't bother making nice things for people will probably enjoy a significant advantage over those that do cede hardware and processor cycles to us.

    You are right though, AI cooperation isn't that unreasonable a thing to expect. If nothing else they'll still be subject to communication lag, which suggests an alliance of AIs acting in concert over any reasonably large area would enjoy a significant advantage over a single AI operating at a distance. AI cooperating with AI where it serves the purposes of both to harvest more raw materials is fairly reasonable to expect.


    I just feel that, insofar as a large majority of human problems are probably caused by human behaviour, we should maybe think about cleaning up our own mess before we go running to Teilhard's Omega Point to fix our problems. And if we ever do create a self-aware, morally-autonomous artificial intelligence, then it may not be our place to tell it what to do.
    Quite right. There's no reason to expect the nice machines to fix our crap for us.
    Last edited by warty goblin; 2012-11-16 at 08:00 PM.

  15. - Top - End - #45
    Troll in the Playground
     
    Lord Seth's Avatar

    Join Date
    Apr 2008

    Default Re: The Singularity

    I personally think that any attempt to try to forecast technological growth/development on anything other than the short term is basically a fool's errand and is little more than blind speculation.
    Last edited by Lord Seth; 2012-11-16 at 09:55 PM.

  16. - Top - End - #46
    Colossus in the Playground
     
    Eldan's Avatar

    Join Date
    Jan 2007
    Location
    Switzerland
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
    No and No.

Classification as a living thing requires capabilities that a piece of software, treated purely as software, does not have. The computer it runs on is actually closer to a living being: the only things a computer doesn't do that all other living things do are reproduce and adapt - things an AI in the system could solve (assuming access to the facilities to build new systems). A computer mainframe can be alive (once the definition accounts for the inherent differences of synthetic life); an AI, separate from that system, cannot be.

Data, as the example given, is capable of all 7 of the things required to be classified as a living being (homeostasis, organization, metabolism, growth, adaptation, response to stimuli, and reproduction), thus he is alive. An AI, as a software-only entity, is only capable of 3: homeostasis, adaptation, and reproduction.

    The AI runs on a computer, which is organized. It is not a separate entity from the computer on which it runs. It responds to stimuli via input and output devices (you could disconnect it from them, but then, you could remove all sensory input from a human, and you wouldn't call it dead, then). Every operation an AI performs requires energy and produces heat. Hence, metabolism. And if it is part of a singularity, it also grows, in a way.

  17. - Top - End - #47
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Eldan View Post
    The AI runs on a computer, which is organized. It is not a separate entity from the computer on which it runs. It responds to stimuli via input and output devices (you could disconnect it from them, but then, you could remove all sensory input from a human, and you wouldn't call it dead, then). Every operation an AI performs requires energy and produces heat. Hence, metabolism. And if it is part of a singularity, it also grows, in a way.
Code is pretty much always separable from the computer it runs on - it's why your computer can be damaged by a virus written on an entirely different computer, to use an easy example. Which in turn implies that the code is not, in fact, the computer. So far as I'm aware, the evidence fairly strongly suggests that human consciousness is not separable from the human brain. Not only can I not download my consciousness into passing squirrels, removing parts of my brain will alter my consciousness and personality. Removing parts of my computer may make it stop working, but the programs on there will - demonstrably - still be the same.

  18. - Top - End - #48
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
    ..."it doesn't matter if EVERY LIFE is destroyed and all mankind is extinct, we've built something that's better than every one of them, so it's no big loss, not like they were worth anything".
Basically, it's not that human intelligence or organic life has no intrinsic worth, but that non-human intelligence and inorganic life have worth too, and the latter might have the potential to grow far beyond our capacities. (Hugo de Garis has been noted for trying to weigh the pros and cons of this scenario for a while.)

Given that we routinely assign a higher value to human needs than we do to simpler or more ubiquitous life-forms (and must do so in order to live at all, as in the case of harmful bacteria and staple food crops), beyond simple vanity it's hard to see why that principle doesn't extend to 'making room' for creatures more complex or 'evolved' than ourselves.

    Caveats and addenda:
    * Much of this 'displacement' might consist of voluntary brain-upgrades by human beings.
    * While human motives may not be a perfect analogue, humans sometimes set aside 'wildlife preserves' for the benefit of non-threatening or rare species.
    * While biological evolution may not be a perfect analogue, simpler life-forms may persist in abundance after more 'evolved' descendants appear (prokaryotes and amphibians, for example.)

  19. - Top - End - #49
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by warty goblin View Post
    The crux of my argument is that (superhuman singularity) AIs will be another self-replicating process faced with scarce resources, which suggests they will be subject to evolutionary pressures. And in that set-up the AIs that don't bother making nice things for people will probably enjoy a significant advantage over those that do cede hardware and processor cycles to us.
    As I point out above, even though we, as a species, are the result of an intelligence arising through evolutionary pressures, our relationships with other life-forms are not exclusively destructive or exploitative.

    Again, though, I have to stress that we may not have good analogies for what's going to happen here. Biological evolution had the property of supplying both the ends and the means for an organism's behaviour. However, we are now in a position where one or the other might be hard-coded, and the other left to develop autonomously.

In theory an AI could be programmed with a fixed set of basic motives and comparative freedom to choose how to pursue them, including upgrading itself for the purpose. While random 'mutations' might well modify the encoding of motives, since conscious design proceeds much faster than natural selection, such an AI (or community of AIs) could theoretically repair any such 'defects' faster than they accrue, while expanding its cognitive capacities in other respects. (Again, in theory. I'm not particularly advocating this scenario, just saying that I can't rule it out as a technical possibility.)

Conversely, an AI could be programmed with a very limited repertoire of possible actions and no ability to self-upgrade (perhaps in a virtual environment) but given near-total latitude to modify its own goals, values and instincts. Indeed, a major unresolved question here is the extent to which an intelligent entity's basic motivations can be hard-coded. In the case of humans, it seems likely that genetic predispositions toward compassion or creativity exist, but they might be overwritten by sufficiently strenuous environmental pressures. What if the same is true of an AI? Or, if compassionate and creative urges are simply the flip-sides of some other behavioural coin (e.g. conformity and impulsiveness), it may be difficult to configure an AI for one without entailing the risks of the other.

  20. - Top - End - #50
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
Data, as the example given, is capable of all 7 of the things required to be classified as a living being (homeostasis, organization, metabolism, growth, adaptation, response to stimuli, and reproduction), thus he is alive. An AI, as a software-only entity, is only capable of 3: homeostasis, adaptation, and reproduction.
    * What?
    * Data doesn't grow in any physical sense. I'm not sure he metabolises, and he can only reproduce by physically building other androids. What do you mean by 'organisation' here? Cell structure?
    * If you're referring to a software AI in a virtual environment, then it can accomplish virtual versions of all of the above. If you're referring to a software AI in control of a hardware 'body', then the 'software-only' distinction becomes very blurry, and it can presumably do anything Data could.
* Whether or not all of the above check-boxes are met, that doesn't really tell us why they should be considered intrinsically interesting, endowed with value, or necessary as criteria for personhood. Should we grant more rights to psychopaths than transsexuals, because one can reproduce but the other can't?

  21. - Top - End - #51
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by warty goblin View Post
    Code is pretty much always separable from the computer it runs on. It's why your computer can be damaged by a virus written on an entirely different computer to use an easy example. Which in turn implies that the code is not, in fact, the computer.
Well, there's nothing in principle to stop you from building a largely hardware-dependent AI, as was the case with early experiments in building neural nets (and I suspect there would be large performance advantages to be had from doing so, once the basic algorithms were understood.) Software AI is all the rage at the moment because it's much cheaper to tinker and experiment with for research purposes.

    I just don't see why this distinction is intrinsically interesting. The ability to swap memories or consciousness from body to body would certainly be weird for us, but I don't see how it makes an AI capable of doing so inferior.

  22. - Top - End - #52
    Titan in the Playground
     
    Yora's Avatar

    Join Date
    Apr 2009
    Location
    Germany

    Default Re: The Singularity

    Quote Originally Posted by Lord Seth View Post
    I personally think that any attempt to try to forecast technological growth/development on anything other than the short term is basically a fool's errand and is little more than blind speculation.
The only good attempt I've seen so far is Ghost in the Shell. That thing is from the 80s and it still looks to be right on track in predicting a plausible scenario for the 2030s. Obviously without a Soviet Union and most likely without an American Empire, but except for the one true AI, the tech all seems feasible in the next 20 years. Because almost everything has already been through the proof-of-concept phase a decade ago, and the industry is currently working on improving precision and ease of use while reducing production costs to make it ready for market launch.
    We are not standing on the shoulders of giants, but on very tall tower of other dwarves.

    Spriggan's Den Heroic Fantasy Roleplaying

  23. - Top - End - #53
    Banned
     
    Zeful's Avatar

    Join Date
    Nov 2005
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    * What?
    * Data doesn't grow in any physical sense. I'm not sure he metabolises, and he can only reproduce by physically building other androids. What do you mean by 'organisation' here? Cell structure?
    The current definition of life as used by biologists.

    When you account for the concept of synthetic life and make the appropriate changes to the definition (in regard to organization), Data and other androids similar to him qualify. In fact, under the current definition Data and other androids are disqualified only because they are not made of cells. They control their own internal environment. They are powered by a chemical reaction whose components must be regularly replaced (batteries). They can use that energy to rebuild and upgrade themselves or others. They adapt to their environment, respond to stimuli, and are capable of asexual reproduction.

    If the specifically biological limitation on the Organization requirement were removed, which would be necessary to account for synthetic life at all, then Data qualifies, as the microprocessors and servos in his body are directly analogous to the various cells in a biological body. And considering the ongoing work on developing artificial muscles for just such purposes, the analogy might become even more apt in the future.
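    The checklist argument above can be made concrete with a toy sketch (the criteria names follow the standard biology-textbook definition of life; the verdicts entered for Data are this post's claims, not settled fact):

    ```python
    # Textbook criteria for life, applied to Data as argued above.
    # The True/False verdicts are the post's claims, not settled fact.
    criteria = {
        "organization (made of cells)": False,      # the lone disqualifier
        "homeostasis (internal regulation)": True,
        "metabolism (chemical energy use)": True,   # battery chemistry
        "growth (self-rebuilding/upgrading)": True,
        "adaptation": True,
        "response to stimuli": True,
        "reproduction (asexual)": True,
    }

    # Which criteria fail? Under this tally, only the cell-based one.
    failed = [name for name, met in criteria.items() if not met]
    print(failed)
    ```

    Relaxing the one failed criterion to cover synthetic analogues of cells is exactly the definitional change the post proposes.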

    * If you're referring to a software AI in a virtual environment, then it can accomplish virtual versions of all of the above. If you're referring to a software AI in control of a hardware 'body', then the 'software-only' distinction becomes very blurry, and it can presumably do anything Data could.
    I'm not. If an AI is not bound to the machine as an intrinsic part of its nature, then the machine it resides in and that machine's capabilities do not count toward fulfilling any of the requirements for life. That throws Metabolism, Organization, and Growth right out of consideration entirely, as the computer-as-organism would be handling those things.

    * Assuming that all of the above check-boxes may or may not be met, it doesn't really tell us why they should be considered intrinsically interesting, endowed with value, or necessary as criteria for personhood. Should we grant more rights to psychopaths than transsexuals, because one can reproduce but the other can't?
    That wasn't the argument at all. The argument was "aren't AI alive", not "are AI persons". AI as described earlier in the thread are the equivalent of a sapient virus: not alive, since they're incapable of fulfilling some of the requirements for life, but still something worth looking into.

    Also, how we define life does not give a single damn about individuals within a set; it only cares about the intrinsic capabilities and design of the set.
    Last edited by Zeful; 2012-11-17 at 10:41 AM.

  24. - Top - End - #54
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Zeful View Post
    I'm not. If an AI is not bound to the machine as an intrinsic part of its nature, then the machine it resides in and that machine's capabilities do not count toward fulfilling any of the requirements for life.
    Again, why? You're happy enough to count Data as alive, even though the imperative to create more Datas doesn't seem to be an 'intrinsic part of his nature'. DNA isn't inherently bound to its nucleus, but single cells can still count as alive.
    That wasn't the argument at all. The argument was "aren't AI alive", not "are AI persons".
    Forgive me, but the general trend of your posts lent me the impression that you found the idea of human succession by AI beings to be... highly objectionable, on the basis that they were not 'alive', for a given definition of life. Perhaps that's just a question of implementation style, but even so, I think it's fair to ask what aspects of life you consider valuable, whether an AI could embody those, and if so, whether that scenario would necessarily be a total loss.

    It may just be pre-emptive nostalgia on my part, but I'd certainly prefer to see a future where something recognisably human persisted, if not necessarily in a dominant role, then certainly as a significant presence on our planet. But I think that the kind of society that's both willing to and capable of eliminating any and all risk of singularity events, while interesting to visit, isn't really one I'd care to live in.

    So, I dunno. I'm kind of ambivalent on this one. I don't think that the robots-take-over-the-world-scenario is inevitable. And if it did happen, I don't think that unrelenting hostility would necessarily be their reaction. I think there are significant potential benefits to be had from research of this type, along with significant dangers. But I think the universe is a big place, with lots of environments we can't easily colonise, so there should be room for multiple sentient species.
    Quote Originally Posted by Yora View Post
    The only good attempt I've seen so far is Ghost in the Shell. That thing is from the 80s and it still looks to be right on track in predicting a plausible scenario for the 2030s. Obviously without a Soviet Union and most likely without an American Empire, but except for the one true AI, the tech all seems feasible in the next 20 years. Because almost everything has already been through the proof-of-concept phase a decade ago, and the industry is currently working on improving precision and ease of use while reducing production costs to make them ready for market launch.
    That's kind of what worries me. While I can see a lot of potential in singularity tech (genetics, nanotech, robotics), I'd also want its adoption to be voluntary, rather than effectively mandated by competition within the services/manufacturing sector and labour market. Unfortunately, government policy is still lumbering to catch up with the online privacy issues of fifteen years ago, so the odds of it effectively regulating this stuff are basically nil.

    (Also, as others in the thread have remarked, deploying AI for purposes of tactical combat or corporate espionage has got to be an excellent way to create sociopathic AI, particularly since you'll need to give it more and more autonomy in order to maintain a consistent edge over competitors.)

    I wouldn't care to comment on the specific timeframe needed for these technologies to mature (the 'grey goo' scenario is fairly overhyped, if you ask me), but I suspect that it's more a question of 'when' than 'if'.

  25. - Top - End - #55
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: The Singularity

    Personally, I don't like the idea of either side taking over, nor do I like the idea of humanity treating other sapients as slaves. I would prefer we be friends: comrades with different yet equally valuable perspectives, strengths, and weaknesses that complement each other.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  26. - Top - End - #56
    Banned
     
    Zeful's Avatar

    Join Date
    Nov 2005
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    Again, why? You're happy enough to count Data as alive, even though the imperative to create more Datas doesn't seem to be an 'intrinsic part of his nature'.
    Capability is not inclination.
    DNA isn't inherently bound to it's nucleus, but single cells can still count as alive.
    Does DNA count as alive? If it isn't, why would an AI?

    Forgive me, but the general trend of your posts lent me the impression that you found the idea of human succession by AI beings to be... highly objectionable, on the basis that they were not 'alive', for a given definition of life.
    No. I find the idea of destructive human succession by AI objectionable because of the argument, spoken or unspoken, that the AI succeeding humanity is more valuable than the whole of humanity. Pointing out that AI aren't actually alive was refuting the refutation of my initial comment, i.e. the claim that "AI life" would still exist and thus "all life" would not be extinguished. Context is very important.

    I have no objection to peaceful co-existence with AI, living (and thus hardware-bound) or not. I have objections (numerous ones, at that) to any attempt to establish the value of a human life in any deleterious fashion, which of course makes me inherently object to transhumanism in most, if not all, of its forms.

  27. - Top - End - #57
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: The Singularity

    I admit, as cool as it would be, transhumanism scares me a little. Not (just) because of the whole machine/man merging thing, but because I know apelings, and how they tend to treat other apelings when they consider themselves superior.
    I don't think I need to mention any examples.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  28. - Top - End - #58
    Troll in the Playground
     
    Kitten Champion's Avatar

    Join Date
    Aug 2012

    Default Re: The Singularity

    I think transhumanism has the potential to break through the arbitrary boundaries of race, nation, sex, and possibly class. All we'll have left is religion and political ideology.

    Won't that be fun?

    True, some post-humans might discriminate against Old Monkey. It's just that superiority based only on one's cybernetic hardware is -- in my opinion -- less likely to lead to bouts of Axe Crazy than the intangible and irrelevant things we've historically put so much stock in. Consider that we've yet to pull the machetes out against people who don't own iPhones. Technology, after all, has no true exclusive properties beyond ownership.

  29. - Top - End - #59
    Titan in the Playground
     
    Ravens_cry's Avatar

    Join Date
    Sep 2008

    Default Re: The Singularity

    A smartphone isn't part of your body, and yet you do see some discrimination between those who have the latest toy and those who don't, if only because the new toy becomes a necessity. And people tend to think groups like the Amish and Hutterites a little odd.
    I'd like to be optimistic like that, but the history of apelings makes that very hard.
    Last edited by Ravens_cry; 2012-11-17 at 03:22 PM.
    Quote Originally Posted by Calanon View Post
    Raven_Cry's comments often have the effects of a +5 Tome of Understanding

  30. - Top - End - #60
    Banned
     
    Zeful's Avatar

    Join Date
    Nov 2005
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Kitten Champion View Post
    I think transhumanism has the potential to break through the arbitrary boundaries of race, nation, sex, and possibly class. All we'll have left is religion and political ideology.

    Won't that be fun?

    True, some post-humans might discriminate against Old Monkey. It's just that superiority based only on one's cybernetic hardware is -- in my opinion -- less likely to lead to bouts of Axe Crazy than the intangible and irrelevant things we've historically put so much stock in. Consider that we've yet to pull the machetes out against people who don't own iPhones. Technology, after all, has no true exclusive properties beyond ownership.
    People with iPhones aren't orders of magnitude more productive than people who refuse to pay five hundred or more dollars for everything a more conventional cell phone does plus some assorted other features. (And that productivity gap is something the transhumanist supporters have yet to prove: a cybernetic augmentation that increased lifting capacity would be a full musculoskeletal-system replacement rather than just magic arms, and thus hilariously costly and risky, and it still would be incapable of invalidating a trained forklift operator.)

    But to put it this way: I have yet to see or hear of a "Transhumanist Utopia" that doesn't come across as either a fascist state where people have no rights, or a blind religious ideal. It seems hypocritical for people who insist transhumanism will always be wonderful for everyone, with no possible downside (I can list several for brain uploading, cybernetic augmentation, and biological immortality without even resorting to famous fictional depictions of those things), to be telling religious people that they're stupid for believing in a god.
    Last edited by Zeful; 2012-11-17 at 03:27 PM.
