
Thread: Transhumanism

  1. - Top - End - #91
    Ogre in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by Lord Torath View Post
    We are already using AI for mechanical design. Dreamcatcher by Autodesk (makers of AutoCAD, Inventor, et al.) has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.

    For more, check out this site: www.aee.odu.edu/proddesign/.
    Is it conscious?

    Quote Originally Posted by georgie_leech View Post
    Right, I'm taking their argument at face value to try to convince them of what they're missing. Currently I'm trying to point out that Halfeye's argument about natural selection leading to murderous intent is countered by the fact that they, a product of natural selection, don't feel the need to kill stuff in their usual day, because it's not something they want or care about.
    It's not about murderous intent. Lions don't murder zebras. It's about survival. Lions need zebras, or something of the sort; we can sometimes get by on vegetables if we're careful, but lions can't. Lions that killed all the prey in their range would die, so they don't do that. If the AIs need us, we'll be all right, but they very probably won't, so that's a worry, and suggesting that we can both let them reproduce at will and control them is mistaken.

    Then leading to the idea that AI have the goals we give them, so we should be careful when making said goals.
    My point is that natural selection changes all goals to survival. If a species (supposing we can call a type of AI a species) reproduces autonomously, survival will become a priority.

    But apparently mosquitos cooperating is where they're trying to draw the conversation, so I'm not sure they're actually engaging my point.
    Someone's not understanding something; it's not a problem. Mosquitos don't cooperate with their prey, and they probably don't cooperate with the diseases they sometimes carry. I saw a huge mosquito last night; I didn't like the look of it at all, and the idea of it has been with me all day. Mosquitos are predators, or parasites, depending how one looks at it; they cannot live if they don't suck blood. It's really not about the mosquitos, it's about natural selection. Natural selection is a huge constraint upon what can and cannot be, and so far, most people don't realise how fundamentally important it is.
    Last edited by halfeye; 2017-10-11 at 01:47 PM.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  2. - Top - End - #92
    Ogre in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Beer and Chocolateland
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    Humans are thus far wild.
    I'd argue we're self-domesticated, given the similarities between our adaptations to our current lifestyle and those found in (other) domesticated species. But that doesn't really impact the current discussion at all, so do carry on.
    The ultimate OOTS cookie cutter nameless soldier is the hobgoblin.

  3. - Top - End - #93
    Ettin in the Playground
    Join Date
    Dec 2010

    Default Re: Transhumanism

    Despite cells in our body replicating autonomously in context, they have a particular niche. We get cancer, but we don't get cancers that peel themselves away from the original person and become their own autonomously surviving organisms. We have been coexisting with self-replicating computer programs for quite some time now - computer viruses - and they haven't extracted themselves from their hosts either.

    There is so much structure that can come about from evolutionary dynamics, ecological dynamics, and game theoretic concerns that a sweeping catchphrase like 'survival of the fittest' is not a solid basis for extrapolation.

    Things that are parasitic or symbiotic or in different niches, or which can obtain information about their own population size and set their growth accordingly, or which otherwise have nonlinear replicative dynamics (dP/dt not linear in P), generally don't show competitive exclusion. Competitive exclusion is even hard to define for things with horizontal information transfer - our genes generally don't compete with each other aside from rare cases, even though each gene can be separately considered a replicating system under natural selection thanks to crossover, HGT, transposons, and other non-vertical genetic mechanisms. This isn't even going into correlation effects like kin selection, which provide alternatives to direct replication for increasing genetic influence.
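
    As a concrete toy example (illustrative rate and carrying capacity, not figures from anything above), here is the difference between dP/dt linear in P and a simple nonlinear, self-limiting case:

        # Minimal sketch: exponential growth (dP/dt = r*P, linear in P)
        # versus logistic growth (dP/dt = r*P*(1 - P/K), nonlinear in P).
        def grow(rate=0.5, carrying_capacity=1000.0, steps=40, dt=0.5):
            exponential, logistic = 1.0, 1.0
            for _ in range(steps):
                exponential += rate * exponential * dt
                logistic += rate * logistic * (1 - logistic / carrying_capacity) * dt
            return exponential, logistic

        print(grow())  # the exponential population explodes; the logistic one settles near K

    The logistic case saturates at the carrying capacity instead of growing without bound - the kind of self-limiting dynamics the paragraph above is pointing at.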

    Meta-evolution (i.e. the evolutionary equivalent of self-improvement in nature) seems to have historically favored the construction of increasingly neutral fitness landscapes - the Baldwin effect and survival of the flattest being two broad examples.

    But of course, the dynamics of intelligent learning systems also don't have to look like the dynamics of evolutionary systems at all, since there are more than one way for information to be carried forward.

    Either way, this is all pretty self-indulgent crystal ball gazing. Like a Ouija board, projecting things forward into a transhumanist future tends to reveal more about yourself than about the world.

  4. - Top - End - #94
    Ogre in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Ouija boards are dangerous. Not because of spirits, but because they are a direct line to the subconscious.

    Please do not play with those things under any circumstances.

    Quote Originally Posted by NichG View Post
    Despite cells in our body replicating autonomously in context, they have a particular niche. We get cancer, but we don't get cancers that peel themselves away from the original person and become their own autonomously surviving organisms. We have been coexisting with self-replicating computer programs for quite some time now - computer viruses - and they haven't extracted themselves from their hosts either.
    I don't understand cancer; there seems to me to be some sort of a gene group that gets switched on, but where it came from and why it survives is a mystery.

    https://xkcd.com/925/

    If the cancer half of that graph is right, something happened before 1970 to make cancer much more common - possibly in part better recognition, but surely more than that. Two obvious candidates are post-WW2 nuclear testing and the mass use of penicillin and other antibiotics.

    There is so much structure that can come about from evolutionary dynamics, ecological dynamics, and game theoretic concerns that a sweeping catchphrase like 'survival of the fittest' is not a solid basis for extrapolation.
    It's true because it's vague.

    Things that are parasitic or symbiotic or in different niches, or which can obtain information about their own population size and set their growth accordingly, or which otherwise have nonlinear replicative dynamics (dP/dt not linear in P), generally don't show competitive exclusion. Competitive exclusion is even hard to define for things with horizontal information transfer - our genes generally don't compete with each other aside from rare cases, even though each gene can be separately considered a replicating system under natural selection thanks to crossover, HGT, transposons, and other non-vertical genetic mechanisms. This isn't even going into correlation effects like kin selection, which provide alternatives to direct replication for increasing genetic influence.

    Meta-evolution (i.e. the evolutionary equivalent of self-improvement in nature) seems to have historically favored the construction of increasingly neutral fitness landscapes - the Baldwin effect and survival of the flattest being two broad examples.
    The Selfish Gene is a great and very important book. A lot of people who haven't read it assume it is about a gene for selfishness, but it isn't.

    But of course, the dynamics of intelligent learning systems also don't have to look like the dynamics of evolutionary systems at all, since there are more than one way for information to be carried forward.
    So long as there is information, entropy will act on it.

    Either way, this is all pretty self-indulgent crystal ball gazing. Like a Ouija board, projecting things forward into a transhumanist future tends to reveal more about yourself than about the world.
    Apart from the Ouija bit, I think I somewhat agree: there is a future, we will go forward into it, and what it will be I can't tell in detail.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  5. - Top - End - #95
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    It becomes a contradiction in terms when applied to natural selection only, because people are subject to natural selection
    People are subject to various forces and processes. Can we control none of them?

    Quote Originally Posted by halfeye View Post
    We can probably eliminate obvious faults like some forms of heart disease if they are genetically based, because that's the direction natural selection is probably going in, but trying to steer natural selection to somewhere it wouldn't naturally go, such as making heart attacks more likely (perhaps for some weird future aesthetic), would not tend to work out the way it was desired.
    That's like saying that we can't control the flow of water if we can't make it run uphill. Which could be a thing you believe, for all I know.

    Quote Originally Posted by halfeye View Post
    Transplanting a human mind into any sort of computer is a non-trivial problem that is not yet anywhere near to being solved; we barely know how the brain works, and there is almost certainly no current computer powerful enough to simulate it at full speed.
    Oh, sure, I was talking about hypothetical future technology, not anything that can be done today. I thought that that much was obvious.

    Quote Originally Posted by halfeye View Post
    Maths is a vast subject, you almost certainly mean arithmetic
    On the one hand, I pretty much did, and I guess I should have been more specific. On the other hand, maybe specialized intelligent programs could handle math in general and various other tasks for me better than I ever could myself... to the dubious extent that they aren't just new parts of me.

    Quote Originally Posted by halfeye View Post
    We don't understand the brain well enough to excise parts of it without causing serious side effects on other parts.
    Ah, but I'm talking about taking out parts of the mind, not the brain. I'm thinking more of dealing with high-level abstract mental phenomena distributed throughout the brain in a high-level, abstract, distributed manner. This is something that we already do with drugs, and might be able to do with greater precision through other means someday.

    Quote Originally Posted by halfeye View Post
    A brain is much much more complicated than the most complicated modern ship.
    Huh? How is that relevant?

    Quote Originally Posted by halfeye View Post
    On the other hand, this geezer:

    https://en.wikipedia.org/wiki/Eliezer_Yudkowsky

    seems to me to be foolish
    He's not especially foolish in any sort of general way, and I'm pretty confident that in this case you're mistakenly attributing to him foolishness that isn't his, if not mistaking wisdom for foolishness due to foolishness of your own. (I think that most of us can agree that the latter is a fairly standard form of irony.)

    Quote Originally Posted by halfeye View Post
    A learning AI would be like an almost infinite tree, you might design the first couple of branches, but once it gets into hundreds of branches, telling where it will go next is going to be impossible, there will be branches everywhere, in all directions, and humans just won't be able to keep up with where the branches are branching towards.
    If you already know in detail everything that a program might do before running it, it isn't even artificial intelligence of the relevant sort.

    Quote Originally Posted by halfeye View Post
    The point is that bugs build up, and a sufficient complex of them may enable an AI that contains them to bypass the programming that keeps it human friendly.
    Yes, and so reliable friendliness requires measures sufficient to prevent the AI from bypassing that programming. And that's a very hard problem. That's the point!

    It's probably not an impossible problem. A sufficiently advanced Friendly AI could very probably do it, but here we run into a rather obvious "chicken and egg" issue. Hence a focus on self-improving AI, to lower the requirements of "sufficiently advanced" to the point that the problem is remotely solvable. Does that lower the requirements enough to make the problem solvable by human beings? Perhaps not and we're all doomed, but that's far from proven.

    Unless you're talking about guaranteed friendliness in the sense of a literal zero chance of failure. Yudkowsky is on record that assigning a probability of exactly zero to anything is irrational, so in that sense he makes no guarantees. Of anything, ever. So, really, that makes the question one of how low we can get the probability of failure.

    Quote Originally Posted by halfeye View Post
    control of a freely reproducing species is not an option
    To the extent that free reproduction is uncontrolled by definition, this seems to be tautologous, even if of dubious relevance.

    Quote Originally Posted by halfeye View Post
    Your man is talking about control
    You don't even seem to just be taking "control" to mean something that it isn't intended to mean here, because the word isn't even used in the article.

    Quote Originally Posted by halfeye View Post
    and that can't happen if the reproduction is unsupervised.
    Where is it suggested that reproduction be unsupervised?

    Quote Originally Posted by halfeye View Post
    My point is that natural selection changes all goals to survival. If a species (supposing we can call a type of AI a species) reproduces autonomously, survival will become a priority.
    Huh? We want things other than to survive, although we want many of those things because they facilitated our ancestors' survival. Evolutionary psychology isn't about all goals other than living being weeded away over time. I'm guessing that that isn't actually what you were trying to say; my point, really, is that it's not clear what "changes all goals to survival" is supposed to mean. So, uh... try again, maybe?

    Traits don't even need to facilitate individual or kin survival in order to be selected for, though.

    Spoiler: This is some of Yudkowsky's writing, as it happens.
    Suppose that there’s some species—let’s call it a “tailbird”—that happens to have a small, ordinary, unassuming tail. It also happens that the tails of healthy tailbirds are slightly more colorful, more lustrous, than the tails of tailbirds that are sick, or undernourished. One day, a female tailbird is born with a mutation that causes it to sexually prefer tailbirds with bright-colored tails. This is a survival trait—it results in the selection of healthier male mates, with better genes—so the trait propagates until, a few dozen generations later, the entire species population of female tailbirds prefers bright-colored tails.

    Now, a male is born that has a very bright tail. It’s not bright because the male is healthy; it’s bright because the male has a mutation that results in a brighter tail. All the females prefer this male, so the mutation is a big success.

    This male tailbird isn’t actually healthier. In fact, this male is pretty sick. More of his biological resources are going into maintaining that flashy tail. So you might think that the females who preferred that male would tend to have sickly children, and the prefer-bright-tails trait would slowly fade out of the population.

    Unfortunately, that’s not what happens. What happens is that even though the male has sickly children, they’re sickly children with bright tails. And those children also attract a lot of females. Genes can’t detect “cheating” and instantly change tactics; that’s a monopoly of conscious intelligence. Any females who prefer the non-bright-tailed males will actually do worse. These “wiser” females will have children who are, sexually, out of fashion. Bright tails are no longer a survival advantage, but they are a very strong sexual advantage.

    Selection pressures for sexual advantages are often much stronger than selection pressures for mere survival advantages. From a design perspective this is stupid—but evolution doesn’t care. Sexual selection is also a Red Queen’s Race (Ridley 1994): It involves competition with conspecifics, so you can never have a tail that’s “bright enough.” This is how you get peacocks.
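
    Purely as an illustration (my own toy numbers and a one-locus simplification, not Yudkowsky's model), here is roughly how a trait that costs survival but is sexually preferred can still sweep a population:

        # Hypothetical parameters: the "bright tail" allele cuts survival by 10%
        # but gives a 50% mating advantage because females already prefer it.
        def bright_tail_frequency(q=0.01, survival_cost=0.1,
                                  mating_advantage=0.5, generations=60):
            for _ in range(generations):
                bright_fitness = (1 - survival_cost) * (1 + mating_advantage)
                mean_fitness = q * bright_fitness + (1 - q) * 1.0
                q = q * bright_fitness / mean_fitness  # standard one-locus selection update
            return q

        print(round(bright_tail_frequency(), 3))  # approaches 1.0: the costly allele still takes over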

    The broader point here is that the goals of others are part of your environment, and thus determine what qualifies as fitness. Let's consider:

    Suppose that, at some point in the future, there are super-intelligent machines, and all artificial minds want most of all to protect and to help humanity. You may object that this scenario must be short-lived because at some point the accumulation of random errors will result in one of the growing population of intelligent programs being more concerned with its own survival. But, you see, in that sort of environment where the most powerful, most intelligent beings favor humanity's prosperity over other concerns, your survival is best assured if it somehow contributes to the prosperity of humanity. If you're instead a danger to humanity, all of those other AIs are instead your enemies. That does not bode well for your survival. And the odds of staging a successful coup are very low. There are multiple redundant security measures in place, because these super-intelligent machines are not idiots. They're super-intelligent!

    Quote Originally Posted by halfeye View Post
    Ouija boards are dangerous. Not because of spirits, but because they are a direct line to the subconscious.

    Please do not play with those things under any circumstances.
    Um... What's so dangerous about that? I've idly pondered communicating with my subconscious mind, probably via hypnosis, so I'd be interested to hear about any risks you're aware of.

  6. - Top - End - #96
    Ogre in the Playground
     
    Lord Torath's Avatar

    Join Date
    Aug 2011
    Location
    USA
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by Lord Torath View Post
    We are already using AI for mechanical design. Dreamcatcher by Autodesk (makers of AutoCAD, Inventor, et al.) has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.

    For more, check out this site: www.aee.odu.edu/proddesign/.
    Quote Originally Posted by halfeye View Post
    Is it conscious?
    Define "conscious". We just had a huge discussion about what consciousness is, and whether or not you can measure it in the Time Travel and Teleportation (but mostly teleportation) thread. As far as I can tell, no consensus was reached on a useful definition or measurement.
    Last edited by Lord Torath; 2017-11-28 at 12:34 PM. Reason: AutoDeck vs AutoDesk
    Thri-Kreen Ranger/Psionicist by me, based off of Rich's A Monster for Every Season

  7. - Top - End - #97
    Ogre in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by Lord Torath View Post
    Define "conscious". We just had a huge discussion about what consciousness is, and whether or not you can measure it in the Time Travel and Teleportation (but mostly teleportation) thread. As far as I can tell, no consensus was reached on a useful definition or measurement.
    I was halfway joking, because I know it's a difficult question when it is in actual doubt; however, I would assume that current PC-compatible software isn't. Are wasps conscious? Are cats? Somewhere there's a boundary, but it's not clear where it is, and I don't think current software on consumer hardware is anywhere close to that boundary.

    Quote Originally Posted by Devils_Advocate View Post
    Um... What's so dangerous about that? I've idly pondered communicating with my subconscious mind, probably via hypnosis, so I'd be interested to hear about any risks you're aware of.
    Messing about with your own subconscious is not advisable for the general public. Sometimes people hear voices that come from their subconscious; that usually does not go at all well.

    For Ouija boards in particular, I once knew someone who was hurt, and they then told us that they had taken part in a Ouija board game and the game had predicted that someone would be hurt in that way. I have no idea whether it was someone in the group messing about, or the person's own mind, but the result was a serious injury that seemed to have been facilitated by the game.

    People are subject to various forces and processes. Can we control none of them?
    I don't know what you mean by "subject to" in this context. The problem with natural selection is that it is part of a feedback loop; if you don't understand what that is, this will be hugely difficult.

    That's like saying that we can't control the flow of water if we can't make it run uphill. Which could be a thing you believe, for all I know.
    Without pipes, we can't make water flow uphill. We have aqueducts, dams and canals, but if you think we have total control of big rivers you are mistaken: we are losing lakes and inland seas to irrigation, and there are still occasional floods.

    On the one hand, I pretty much did, and I guess I should have been more specific. On the other hand, maybe specialized intelligent programs could handle math in general and various other tasks for me better than I ever could myself... to the dubious extent that they aren't just new parts of me.
    The question of how integrated those are into your mind will be critical, if it ever happens.

    Ah, but I'm talking about taking out parts of the mind, not the brain. I'm thinking more of dealing with high-level abstract mental phenomena distributed throughout the brain in a high-level, abstract, distributed manner. This is something that we already do with drugs, and might be able to do with greater precision through other means someday.
    This is highly problematic. We don't yet have a good detailed map of the brain, let alone the mind; however, we do know that things are connected in unobvious ways.

    Huh? How is that relevant?
    https://en.wikipedia.org/wiki/Ship_of_Theseus

    He's not especially foolish in any sort of general way, and I'm pretty confident that in this case you're mistakenly attributing to him foolishness that isn't his, if not mistaking wisdom for foolishness due to foolishness of your own. (I think that most of us can agree that the latter is a fairly standard form of irony.)
    I am attributing foolishness to reported statements of his. It may be that the statements are mis-reported.

    If you already know in detail everything that a program might do before running it, it isn't even artificial intelligence of the relevant sort.
    If you know that, it's not much of a program at all. Almost all programs do unexpected things.

    Yes, and so reliable friendliness requires measures sufficient to prevent the AI from bypassing that programming. And that's a very hard problem. That's the point!
    You say very hard, I say impossible.

    It's probably not an impossible problem. A sufficiently advanced Friendly AI could very probably do it, but here we run into a rather obvious "chicken and egg" issue. Hence a focus on self-improving AI, to lower the requirements of "sufficiently advanced" to the point that the problem is remotely solvable. Does that lower the requirements enough to make the problem solvable by human beings? Perhaps not and we're all doomed, but that's far from proven.
    I disagree that it's not impossible, I believe it is impossible, and I believe that is proven.

    Unless you're talking about guaranteed friendliness in the sense of a literal zero chance of failure. Yudkowsky is on record that assigning a probability of exactly zero to anything is irrational, so in that sense he makes no guarantees. Of anything, ever. So, really, that makes the question one of how low we can get the probability of failure.
    In geological time, the probability of failure is 100% to at least six significant figures.

    To the extent that free reproduction is uncontrolled by definition, this seems to be tautologous, even if of dubious relevance.
    I was talking about control of the other behaviours of the species, as well as the breeding.

    You don't even seem to just be taking "control" to mean something that it isn't intended to mean here, because the word isn't even used in the article.
    He's talking about making something do something or not do something. To me, that's talking about control, whether or not he used the word itself.

    Huh? We want things other than to survive, although we want many of those things because they facilitated our ancestors' survival. Evolutionary psychology isn't about all goals other than living being weeded away over time. I'm guessing that that isn't actually what you were trying to say; my point, really, is that it's not clear what "changes all goals to survival" is supposed to mean. So, uh... try again, maybe?

    Traits don't even need to facilitate individual or kin survival in order to be selected for, though.

    Spoiler: This is some of Yudkowsky's writing, as it happens.
    Suppose that there’s some species—let’s call it a “tailbird”—that happens to have a small, ordinary, unassuming tail. It also happens that the tails of healthy tailbirds are slightly more colorful, more lustrous, than the tails of tailbirds that are sick, or undernourished. One day, a female tailbird is born with a mutation that causes it to sexually prefer tailbirds with bright-colored tails. This is a survival trait—it results in the selection of healthier male mates, with better genes—so the trait propagates until, a few dozen generations later, the entire species population of female tailbirds prefers bright-colored tails.

    Now, a male is born that has a very bright tail. It’s not bright because the male is healthy; it’s bright because the male has a mutation that results in a brighter tail. All the females prefer this male, so the mutation is a big success.

    This male tailbird isn’t actually healthier. In fact, this male is pretty sick. More of his biological resources are going into maintaining that flashy tail. So you might think that the females who preferred that male would tend to have sickly children, and the prefer-bright-tails trait would slowly fade out of the population.

    Unfortunately, that’s not what happens. What happens is that even though the male has sickly children, they’re sickly children with bright tails. And those children also attract a lot of females. Genes can’t detect “cheating” and instantly change tactics; that’s a monopoly of conscious intelligence. Any females who prefer the non-bright-tailed males will actually do worse. These “wiser” females will have children who are, sexually, out of fashion. Bright tails are no longer a survival advantage, but they are a very strong sexual advantage.

    Selection pressures for sexual advantages are often much stronger than selection pressures for mere survival advantages. From a design perspective this is stupid—but evolution doesn’t care. Sexual selection is also a Red Queen’s Race (Ridley 1994): It involves competition with conspecifics, so you can never have a tail that’s “bright enough.” This is how you get peacocks.

    The broader point here is that the goals of others are part of your environment, and thus determine what qualifies as fitness.
    While the peacock example seems useful at first glance, it's actually not what it seems. Yes, the bird with the flashy tail is somewhat less healthy, but among his offspring the healthier siblings will have brighter tails than the less healthy ones, so it evens out, and in the long run the healthier birds look nicer.

    Let's consider:

    Suppose that, at some point in the future, there are super-intelligent machines, and all artificial minds want most of all to protect and to help humanity. You may object that this scenario must be short-lived because at some point the accumulation of random errors will result in one of the growing population of intelligent programs being more concerned with its own survival. But, you see, in that sort of environment where the most powerful, most intelligent beings favor humanity's prosperity over other concerns, your survival is best assured if it somehow contributes to the prosperity of humanity. If you're instead a danger to humanity, all of those other AIs are instead your enemies. That does not bode well for your survival. And the odds of staging a successful coup are very low. There are multiple redundant security measures in place, because these super-intelligent machines are not idiots. They're super-intelligent!
    Super-intelligent but totally constrained by their programming? I don't think that works; intelligence gives you more opportunity to make choices. There is also the option of a mutant arising that keeps quiet about its heresy until it is in a majority. This is not a safe option.

    Controlling intelligent reproducing beings is really not an option. There are controlling personality types among people; other people don't usually like them.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  8. - Top - End - #98
    Ettin in the Playground
    Join Date
    Dec 2010

    Default Re: Transhumanism

    In practice, having a population of replicating AIs is significantly suboptimal compared to having a single AI who uses that population's worth of computation for training.

    AI isn't biology, and learning and evolution have different asymptotic behaviors.

    These singularity discussions often just end up being selectively asserting certainty or uncertainty to only keep evidence on the table that supports what either side is trying to sell - unconditional fear and unconditional optimism respectively.

    The right answer IMO is to go build things and see what they actually do. Actual AI that works at superhuman performance doesn't look anything like the kinds of objects that get introduced in these kinds of projections. People in the 60s were convinced that propositional logic was definitely going to be the basis of machine intelligence, and 60 years later it's all neural networks and statistical techniques with very, very different properties. There are still people who think that Asimov's Three Laws are a great idea - how exactly do you intend to apply those to, e.g., linear regression? Yudkowsky himself commented that his predictions about AlphaGo vs Lee Sedol necessarily being either 0-5 or 5-0 were wrong, and so he had to go back to the drawing board about his assumptions on what AI is like.

    This topic happens to be a bit of a pet peeve of mine. In my line of work, I see too many content-free talks selling AI as a man-made deity or AI as man-made devil, with the inevitable implied 'if you throw me a billion dollars I'll make it so/fix it/make AI to defend against it' to the VCs in the room (who, to my relief, don't actually tend to buy into it nearly as much as the stereotype would suggest). So apologies if I'm a bit of a buzz kill on this subject.

  9. - Top - End - #99
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    I don't know what you mean by "subject to" in this context.
    Acted on, influenced by; however you want to put it. Is that not what you meant?

    Quote Originally Posted by halfeye View Post
    The problem with natural selection is that it is part of a feedback loop; if you don't understand what that is, this will be hugely difficult.
    I know what a feedback loop is.

    Quote Originally Posted by halfeye View Post
    It is much easier to blow down a house made of straw than a house made of bricks.

    Quote Originally Posted by halfeye View Post
    I am attributing foolishness to reported statements of his. It may be that the statements are mis-reported.
    I read the Wikipedia article, and nothing in there seemed foolish to me. I'm guessing that the problem is on your end.

    Quote Originally Posted by halfeye View Post
    I disagree that it's not impossible, I believe it is impossible, and I believe that is proven.
    Where's the proof?

    Quote Originally Posted by halfeye View Post
    In geological time, the probability of failure is 100% to at least six significant figures.
    "Geological time"? I'm not sure that we're even talking about the same thing at this point. Let me try to lay out my understanding of the subject matter:

    There are a great many goals that seem like they would be easier to achieve given greater intelligence. Hence the appeal of creating something smarter than you to solve your problems. It's entirely likely that AIs will also create smarter AIs to do what they want done, and so on and so forth. Each generation, being smarter than the last, is better able to design new smarter minds that do what it wants. The hardware and software involved may and likely will change drastically, but purpose is inherited, and increasingly reliably so with time, because ensuring that purpose is inherited reliably is one of the issues that's important to address early on.

    The problem is that we don't yet know how to create superhuman intelligence with good purposes, in no small part due to ambiguity as to what constitutes a "good purpose". And any purposes that are accidentally or ill-advisedly built into early AI are likely to get harder to get rid of as AI advances, because each generation gets better at protecting its purposes; that's what "more intelligent" means.

    But the Evolution Fairy is unlikely to be granted the opportunity to strip out engineered purposes for "fitter" ones either way. Each generation becomes better at preventing random errors, and as designs grow increasingly complex, introducing a random error into one becomes vanishingly likely to result in an entity capable of competing with its peers anyway.

    Quote Originally Posted by halfeye View Post
    He's talking about making something do something or not do something. To me, that's talking about control, whether or not he used the word itself.
    But then selective breeding does control evolution, because it makes evolution do something. But I thought that you took the position that "control" was more than that.

    Quote Originally Posted by halfeye View Post
    While the peacock example seems useful at first glance, it's actually not what it seems.
    What do you mean? What do you think it seems to be?

    Quote Originally Posted by halfeye View Post
    Yes, the bird with the flashy tail is somewhat less healthy, but among his offspring the healthier siblings will have brighter tails than the less healthy ones, so it evens out, and in the long run the healthier birds look nicer.
    What "evens out"? At first, the healthier offspring will be the ones without the bright tail gene, all else being equal, because the bright tail gene causes poorer health. If eventually the bright tail gene is so common that nearly everyone has it, then it's unlikely to account for the difference between two individuals, but I don't see how that makes the example misleading. Do you think that the example is misleading? If so, could you explain how you think it misleads anyone into believing something untrue? Because I'm not seeing it.

    Quote Originally Posted by halfeye View Post
    Super-intelligent but totally constrained by their programming? I don't think that works; intelligence gives you more opportunity to make choices.
    The more intelligent a being is, the less likely it is to make mistakes. If you disagree, then what do you think "intelligent" means? Feel free to substitute another word, if you think another is more appropriate; just keep in mind that not screwing up is the point.

    Quote Originally Posted by halfeye View Post
    There is also the option of a mutant arising that keeps quiet about its heresy until it is in a majority.
    Why would the AIs allow for that? Why would there even be such a thing as private thoughts among them? Assuming that that represents a needless danger, the sensible option is not to have it, which means that at a minimum the mutant needs (a) deviant goals, (b) to not broadcast its thoughts as is normal, and (c) to broadcast plausible false thoughts as well. Note that this all has to happen simultaneously at random in conjunction with whatever else is necessary to get around however many dozens of other safeguards.

    Quote Originally Posted by halfeye View Post
    This is not a safe option.
    Yes. Hence why it's prevented from happening. Again, to be clear, I'm positing a scenario in which everything is managed by beings that are not stupid. I do realize that that's a fantastic scenario removed from everyday experience. If you want to argue that stupidity can't be eliminated, feel free to do so.

    Quote Originally Posted by halfeye View Post
    Controlling intelligent reproducing beings is really not an option.
    Governance is a myth, eh?

    Quote Originally Posted by halfeye View Post
    There are controlling personality types among people; other people don't usually like them.
    Creating artificial human-like personalities is a very different goal from creating Friendly AI.

    One way of anthropomorphizing an AI is to think of it as a basically normal person with a bunch of compulsions artificially layered on. This is how artificial intelligence is liable to be portrayed in soft science fiction. That's... not a totally implausible sort of digital mind to have, as mind uploading could be developed before AI. But in that case, the intelligence isn't really artificial, just copied over from a natural source. A cheat, in short.

    Programming a truly artificial mind isn't like brainwashing a human being. The AI's programming doesn't override its natural goals because it has no natural goals. It starts off with only the goals programmed into it.

  10. - Top - End - #100
    Ogre in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    I was involuntarily mostly offline for the past week plus, so replying to this was delayed.

    Quote Originally Posted by Devils_Advocate View Post
    Acted on, influenced by; however you want to put it. Is that not what you meant?
    Controlling things is theoretically much easier if you're not in a feedback loop which also contains them. The thing about feedback loops is that altering them changes things which also alters them, so you get side effects to your alterations, which are inherently unpredictable.

    I read the Wikipedia article, and nothing in there seemed foolish to me. I'm guessing that the problem is on your end.
    Someone is a fool, I don't think it's me in this case.

    Where's the proof?
    After losing all the related quotes, this is a bit of a non-sequitur; however, it goes something along these lines: your chickens are connected to the eggs by parent-child relationships, which inevitably means that natural selection is in the system. It therefore follows that somewhere along the line, natural selection will have control. Natural selection has no sense of morality, nor a sense of humour; it just does what it does.

    "Geological time"? I'm not sure that we're even talking about the same thing at this point.
    The short term is a part of the long term, if whatever it is isn't going to work in the short term, it can't happen in the long term, but if in another case something can't happen in the long term, it's possible it might happen for a short time.

    Let me try to lay out my understanding of the subject matter:

    There are a great many goals that seem like they would be easier to achieve given greater intelligence. Hence the appeal of creating something smarter than you to solve your problems. It's entirely likely that AIs will also create smarter AIs to do what they want done, and so on and so forth. Each generation, being smarter than the last, is better able to design new smarter minds that do what it wants. The hardware and software involved may and likely will change drastically, but purpose is inherited, and increasingly reliably so with time, because ensuring that purpose is inherited reliably is one of the issues that's important to address early on.

    The problem is that we don't yet know how to create superhuman intelligence with good purposes, in no small part due to ambiguity as to what constitutes a "good purpose". And any purposes that are accidentally or ill-advisedly built into early AI are likely to get harder to get rid of as AI advances, because each generation gets better at protecting its purposes; that's what "more intelligent" means.

    But the Evolution Fairy is unlikely to be granted the opportunity to strip out engineered purposes for "fitter" ones either way. Each generation becomes better at preventing random errors, and as designs grow increasingly complex, introducing a random error into one becomes vanishingly likely to result in an entity capable of competing with its peers anyway.
    The process from a small patch of light-sensitive skin to an eye is not simple, but natural selection has done it several times. Evolution is not a fairy; if anything it's a demon/devil (dang D&D confusing things), and a really, really big one.

    But then selective breeding does control evolution, because it makes evolution do something. But I thought that you took the position that "control" was more than that.
    Selective breeding is where natural selection got its name; we were doing it long before we understood what we were doing. Selective breeding does not control evolution, it guides it and channels it, but to refer back to an earlier metaphor, it does not make un-piped water flow uphill.

    What do you mean? What do you think it seems to be?
    To me it seems to be a badly thought through attempt to disprove evolutionary theory. Creationists must love this guy.

    What "evens out"? At first, the healthier offspring will be the ones without the bright tail gene, all else being equal, because the bright tail gene causes poorer health. If eventually the bright tail gene is so common that nearly everyone has it, then it's unlikely to account for the difference between two individuals, but I don't see how that makes the example misleading. Do you think that the example is misleading? If so, could you explain how you think it misleads anyone into believing something untrue? Because I'm not seeing it.
    Evolution happens in multiple generations, not single lifetimes. There never was a successful single bright tail mutation, the costs would be too high. There were probably millions of minor mutations to make the tail of the peacock.

    The more intelligent a being is, the less likely it is to make mistakes.
    Wrong.

    If you disagree, then what do you think "intelligent" means? Feel free to substitute another word, if you think another is more appropriate; just keep in mind that not screwing up is the point.
    Intelligence is the capacity to learn from mistakes, and as a result, the ability to make more, and more interesting, mistakes.

    Why would the AIs allow for that? Why would there even be such a thing as private thoughts among them? Assuming that that represents a needless danger, the sensible option is not to have it, which means that at a minimum the mutant needs (a) deviant goals, (b) to not broadcast its thoughts as is normal, and (c) to broadcast plausible false thoughts as well. Note that this all has to happen simultaneously at random in conjunction with whatever else is necessary to get around however many dozens of other safeguards.
    Police states are inherently unstable. They can be very, very unpleasant for their duration, but so far they typically haven't lasted.

    Yes. Hence why it's prevented from happening. Again, to be clear, I'm positing a scenario in which everything is managed by beings that are not stupid. I do realize that that's a fantastic scenario removed from everyday experience. If you want to argue that stupidity can't be eliminated, feel free to do so.
    The inability to make mistakes is stupid.

    Governance is a myth, eh?
    Yes. So far. An actual million-year police state is becoming potentially possible with modern surveillance technology, which would be a very bad thing, but even a million years isn't long in the universe.

    Creating artificial human-like personalities is a very different goal from creating Friendly AI.
    Sure.

    Programming a truly artificial mind isn't like brainwashing a human being. The AI's programming doesn't override its natural goals because it has no natural goals. It starts off with only the goals programmed into it.
    Natural goals will arrive via the "Natural selection demon", it's what it does, it's what it has always done, and it's not limited to meat, it acts on all information. Programming isn't what non-programmers think it is. Mostly, programming is debugging, and there's always that one bug you didn't find yet, it's often one you introduced when you fixed the previous one.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  11. - Top - End - #101
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    I was involuntarily mostly offline for the past week plus, so replying to this was delayed.
    Oh, that's fine; I certainly took long enough to reply.

    Controlling things is theoretically much easier if you're not in a feedback loop which also contains them.
    If you modify your methods based on observations of how they work, then isn't that a feedback loop of attempts --> results --> observations --> analyses --> hopefully improved understanding --> attempts? That seems like a better basis for control than not changing what you do based on how things perform.

    You know what, on second thought, maybe you're using the term "feedback loop" in some technical sense that I don't understand.

    The thing about feedback loops is that altering them changes things which also alters them, so you get side effects to your alterations, which are inherently unpredictable.
    Isn't compound interest basically a predictable feedback loop?
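
    As a quick sketch (made-up numbers), the balance feeds back into the next year's interest, yet the loop's outcome is exactly what the closed form predicts:

        # Illustrative figures only: iterate the feedback step, then compare with P*(1+r)**n.
        principal, rate, years = 100.0, 0.05, 10
        balance = principal
        for _ in range(years):
            balance += balance * rate                       # the feedback step
        print(round(balance, 2))                            # 162.89
        print(round(principal * (1 + rate) ** years, 2))    # 162.89 - fully predictable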

    Someone is a fool, I don't think it's me in this case.
    Well, I think it is you. It appears we are at an impasse.

    After losing all the related quotes, this is a bit of a non-sequitur; however, it goes something along these lines: your chickens are connected to the eggs by parent-child relationships, which inevitably means that natural selection is in the system. It therefore follows that somewhere along the line, natural selection will have control. Natural selection has no sense of morality, nor a sense of humour; it just does what it does.
    Isn't natural selection much more slow and gradual than advances in engineering, though? It's conceivable that e.g. certain cognitive biases could be propagated by minds designing more advanced minds -- and potential self-sustaining flaws of that nature are an excellent example of what to look out for when attempting to create Friendly AI. But I would expect deliberate design goals to generally dominate and to generally win out over accidents when the deliberate and the accidental come into conflict.

    The short term is a part of the long term, if whatever it is isn't going to work in the short term, it can't happen in the long term
    Well, not as a general rule. Not being able to accumulate one million dollars in one year doesn't mean never being able to have that much money, for example. If something irreversibly fails in the short term, then there's no prospect of long-term success by definition (of "irreversibly"). But if the failure is reversible, then it's different, innit?

    but if in another case something can't happen in the long term, it's possible it might happen for a short time.
    Yeah, but far before a geological era passes, Friendly AI sets up safety measures sufficient to prevent random errors from ruining everything. (If you can make catastrophic failure so unlikely that there's only a one in a googol chance of it happening before the heat death of the universe, that's basically "good enough".)

    Provided that Friendly AI exists.

    The process from a small patch of light-sensitive skin to an eye is not simple, but natural selection has done it several times.
    Natural selection isn't dominated by deliberate design goals; indeed, there are none.

    Selective breeding does not control evolution, it guides it and channels it, but to refer back to an earlier metaphor, it does not make un-piped water flow uphill.
    Quote Originally Posted by halfeye View Post
    He's talking about making something do something or not do something. To me, that's talking about control, whether or not he used the word itself.
    So, if I understand you, you are claiming that selective breeding does not make evolution do something or not do something. Is that correct?

    Quote Originally Posted by halfeye View Post
    To me it seems to be a badly thought through attempt to disprove evolutionary theory.
    I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?

    Creationists must love this guy.
    Now you're just being ridiculous. There's nothing creationist about it.

    Evolution happens in multiple generations, not single lifetimes.
    Sounds like the sorites paradox.

    There never was a successful single bright tail mutation, the costs would be too high. There were probably millions of minor mutations to make the tail of the peacock.
    What's your basis for that assessment? (Perhaps there's some general principle that the cost to benefit ratio is much higher for mutations that cause drastic changes? I honestly wouldn't know.)

    Intelligence is the capacity to learn from mistakes, and as a result, the ability to make more, and more interesting, mistakes.
    What's the word for a tendency to make better choices, then? Such that if X reliably makes better choices than Y, X is more _____ than Y.

    Police states are inherently unstable. They can be very, very unpleasant for their duration, but so far they typically haven't lasted.
    Correct me if I'm wrong, but police states generally impose upon people restrictions that they would prefer not to be under. Friendly AIs want to be prevented from becoming unFriendly, so they don't have to be forced to cooperate with measures to ensure that. Your comparison thus seems rather inapt.

    Natural goals will arrive via the "Natural selection demon", it's what it does, it's what it has always done, and it's not limited to meat, it acts on all information.
    The environment determines which traits propagate themselves most effectively. It's not like we're talking about an actual malevolent entity determined to sow selfishness and suffering; altruism is a product of evolution as well. It's really more of a natural selection daemon. :P I see no reason to think that evolution could never decrease selfishness in the right environment.

    Programming isn't what non-programmers think it is. Mostly, programming is debugging, and there's always that one bug you didn't find yet, it's often one you introduced when you fixed the previous one.
    No one is claiming that an artificial mind can't have goals programmed into it accidentally. That's an entire branch of problems for Friendly AI. The important thing is to have fail-safes to deal with errors, e.g. of the form "Under Condition X, perform a controlled shutdown". You want a robust design that doesn't do Very Bad Things just because there's one bug somewhere in the code. Mind you, narrowing down the core issue to designing a collectively reliable set of safety features still leaves a fairly intractable problem.
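
    As a sketch of that pattern in ordinary code (condition_x and perform_controlled_shutdown are placeholder names I've made up, not anything from a real system), the fail-safe is just a guard checked before every action:

        # Illustrative only: check the fail-safe condition before acting.
        def condition_x(state):
            return state.get("error_count", 0) > 3          # stand-in for whatever Condition X is

        def perform_controlled_shutdown(state):
            state["running"] = False                        # stop acting, leave things in a safe state

        def guarded_step(state, action):
            if condition_x(state):
                perform_controlled_shutdown(state)
                return
            action(state)

        state = {"running": True, "error_count": 5}
        guarded_step(state, lambda s: s.update(step=s.get("step", 0) + 1))
        print(state["running"])  # False: the fail-safe fired before the action ran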

    Really, as I see it, the appropriate benchmark is making something one can be rationally confident is more safe. The world is dangerous already. If someone can make it less dangerous and then bootstrap from there, that'll be great.

    So, do you think that that benchmark is achievable, do you think it's unachievable, or are you unsure?

  12. - Top - End - #102
    Ogre in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by Devils_Advocate View Post
    If you modify your methods based on observations of how they work, then isn't that a feedback loop of attempts --> results --> observations --> analyses --> hopefully improved understanding --> attempts? That seems like a better basis for control than not changing what you do based on how things perform.
    Yes, that's how a feedback loop works if the operator/observer is outside the loop, and thus in control of it. When the feedback changes YOU, it's a whole different can of worms.

    You know what, on second thought, maybe you're using the term "feedback loop" in some technical sense that I don't understand.
    It's the operator/controller being changed by the feedback that makes a very significant difference.

    Isn't compound interest basically a predictable feedback loop?
    I suppose it's something like that, but again, the investor isn't significantly changed by the outcome.

    Isn't natural selection much more slow and gradual than advances in engineering, though? It's conceivable that e.g. certain cognitive biases could be propagated by minds designing more advanced minds -- and potential self-sustaining flaws of that nature are an excellent example of what to look out for when attempting to create Friendly AI. But I would expect deliberate design goals to generally dominate and to generally win out over accidents when the deliberate and the accidental come into conflict.
    Natural selection typically takes many generations to make changes (penicillin resistance, melanistic moths), but eliminating a species can take very few (passenger pigeon, dodo, great auk).

    Well, not as a general rule. Not being able to accumulate one million dollars in one year doesn't mean never being able to have that much money, for example. If something irreversibly fails in the short term, then there's no prospect of long-term success by definition (of "irreversibly"). But if the failure is reversible, then it's different, innit?
    In terms of natural selection, reversible failure is NOT failure.

    Yeah, but far before a geological era passes, Friendly AI sets up safety measures sufficient to prevent random errors from ruining everything. (If you can make catastrophic failure so unlikely that there's only a one in a googol chance of it happening before the heat death of the universe, that's basically "good enough".)

    Provided that Friendly AI exists.
    That's the billion dollar "if".

    Natural selection isn't dominated by deliberate design goals; indeed, there are none.
    Except survival. Wings, eyes, intelligence, teeth: if any of them are contrary to survival then they go, but survival is required.

    So, if I understand you, you are claiming that selective breeding does not make evolution do something or not do something. Is that correct?
    Selective breeding moves the goalposts somewhat, but different breeders have at least slightly different understandings of what any particular breed standard is aiming for, and none of it can make a lethal gene part of a breed standard, even if an unhelpful one that doesn't kill may be favoured (hip dysplasia in canines, for example).

    I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?
    Survival rules all. If there is no survival, there is nothing. Sexual selection is possible so long as it favours survival.

    Now you're just being ridiculous. There's nothing creationist about it.
    Creationists hate Darwin. This guy seems to be against Darwin. It seems a reasonable suspicion without other evidence; we know creationists have a lot of money.

    Sounds like the sorites paradox.

    We're back to the ship of Theseus.

    What's your basis for that assessment? (Perhaps there's some general principle that the cost to benefit ratio is much higher for mutations that cause drastic changes? I honestly wouldn't know.)
    You think evolution went from no eye to the human eye in one generation?

    Correct me if I'm wrong, but police states generally impose upon people restrictions that they would prefer not to be under. Friendly AIs want to be prevented from becoming unFriendly, so they don't have to be forced to cooperate with measures to ensure that. Your comparison thus seems rather inapt.
    However, are these "friendly" AIs not forced to want to be friendly? Not everything a police state forces people to do is unwelcome by all of the people.

    The environment determines which traits propagate themselves most effectively. It's not like we're talking about an actual malevolent entity determined to sow selfishness and suffering; altruism is a product of evolution as well. It's really more of a natural selection daemon. :P I see no reason to think that evolution could never decrease selfishness in the right environment.
    Altruism, in so far as it exists, and it does exist to some extent, is certainly selected for. I would argue very strongly that there are limits to altruism's possible extent, but it certainly is selected for in some circumstances. Daemon or demon, it's powerful, and it's in no sense moral; karma is a human invention, not a feature of the universe. Selfishness is on the borderline of being required for survival, kin altruism makes a lot of sense for survival, and in a war situation solidarity is frequently a survival trait, but in less constrained times selfishness is often favoured, and outlawing it makes it much more favoured than it otherwise would be.

    No one is claiming that an artificial mind can't have goals programmed into it accidentally. That's an entire branch of problems for Friendly AI. The important thing is to have fail-safes to deal with errors, e.g. of the form "Under Condition X, perform a controlled shutdown". You want a robust design that doesn't do Very Bad Things just because there's one bug somewhere in the code. Mind you, narrowing down the core issue to designing a collectively reliable set of safety features still leaves a fairly intractable problem.

    Really, as I see it, the appropriate benchmark is making something one can be rationally confident is more safe. The world is dangerous already. If someone can make it less dangerous and then bootstrap from there, that'll be great.

    So, do you think that that benchmark is achievable, do you think it's unachievable, or are you unsure?
    The thing about code is, it does exactly what it says it does, not what anyone thinks it says or ought to say. Every time code is extended, the chances of misunderstandings of what it says are increased.
    Last edited by halfeye; 2017-12-08 at 03:18 PM.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.
