
Thread: Transhumanism

  1. #91 - halfeye (Ettin in the Playground)

    Quote Originally Posted by Lord Torath View Post
    We are already using AI for mechanical design. Dreamcatcher by Autodesk (makers of AutoCAD, Inventor, et al.) has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.

    For more, check out this site: www.aee.odu.edu/proddesign/.
    Is it conscious?

    Quote Originally Posted by georgie_leech View Post
    Right, I'm taking their argument at face value to try to convince him of what he's missing. Currently I'm trying to draw a comparison: Halfeye's argument that natural selection leads to murderous intent is countered by the fact that they, a product of natural selection, don't feel the need to kill stuff in their usual day, because it's not something they want or care about.
    It's not about murderous intent; lions don't murder zebras. It's about survival. Lions need zebras, or something of the sort; we can sometimes get by on vegetables if we're careful, but lions can't. Lions that killed all the prey in their range would die, so they don't do that. If the AIs need us, we'll be all right, but they very probably won't, so that's a worry, and suggesting that we can both let them reproduce at will and control them is mistaken.

    Then leading to the idea that AI have the goals we give them, so we should be careful when making said goals.
    My point is that natural selection changes all goals to survival. If a species (supposing we can call a type of AI a species) reproduces autonomously, survival will become a priority.

    But apparently mosquitos cooperating is where they're trying to draw the conversation, so I'm not sure they're actually engaging my point.
    Someone's not understanding something; it's not a problem. Mosquitos don't cooperate with their prey, and they probably don't cooperate with the diseases they sometimes carry. I saw a huge mosquito last night; I didn't like the look of it at all, and the idea of it has been with me all day. Mosquitos are predators, or parasites, depending how one looks at it; they cannot live if they don't suck blood. It's really not about the mosquitos, it's about natural selection. Natural selection is a huge constraint upon what can and cannot be, and so far most people don't realise how fundamentally important it is.

  2. #92 - Lvl 2 Expert (Troll in the Playground)

    Quote Originally Posted by halfeye View Post
    Humans are thus far wild.
    I'd argue we're self-domesticated, given the similarities between our adaptations to our current lifestyle and those found in (other) domesticated species. But that doesn't really impact the current discussion at all, so do carry on.

  3. #93 - NichG (Firbolg in the Playground)

    Despite cells in our body replicating autonomously in context, they have a particular niche. We get cancer, but we don't get cancers that peel themselves away from the original person and become their own autonomously surviving organisms. We have been coexisting with self-replicating computer programs for quite some time now - computer viruses - and they haven't extracted themselves from their hosts either.

    There is so much structure that can come about from evolutionary dynamics, ecological dynamics, and game theoretic concerns that a sweeping catchphrase like 'survival of the fittest' is not a solid basis for extrapolation.

    Things that are parasitic or symbiotic or in different niches, or which can obtain information about their own population size and set their growth accordingly, or which have otherwise nonlinear replicative dynamics (dP/dt not linear in P), generally don't exhibit competitive exclusion. Things with horizontal information transfer are hard to even define exclusion with respect to - our genes generally don't compete with each other aside from rare cases, even though each gene can be separately considered a replicating system under natural selection, thanks to crossover, HGT, transposons, and other non-vertical genetic mechanisms. This isn't even going into correlation effects like kin selection, which provide alternatives to direct replication for increasing genetic influence.
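
    As a rough illustration of the nonlinear case - a minimal sketch with made-up parameters, not data from any real system - two replicators with partially overlapping niches settle into stable coexistence instead of one excluding the other:

    Code:
    # Minimal sketch: Lotka-Volterra competition with weak niche overlap,
    # i.e. nonlinear dP/dt. With cross-competition coefficients below 1,
    # both replicators persist; neither competitively excludes the other.
    # All parameter values here are illustrative assumptions.
    def simulate(steps=20000, dt=0.01):
        r1, r2 = 1.0, 0.8        # intrinsic growth rates
        k1, k2 = 1.0, 1.0        # carrying capacities
        a12, a21 = 0.5, 0.5      # cross-competition (< 1: partial niche overlap)
        p1, p2 = 0.01, 0.02      # small founding populations
        for _ in range(steps):
            dp1 = r1 * p1 * (1 - (p1 + a12 * p2) / k1)
            dp2 = r2 * p2 * (1 - (p2 + a21 * p1) / k2)
            p1 += dp1 * dt
            p2 += dp2 * dt
        return p1, p2

    print(simulate())  # both settle near 0.67 rather than one driving the other out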

    Meta-evolution (e.g. the evolutionary equivalent of self-improvement in nature) seems to have historically favored the construction of increasingly neutral fitness landscapes - the Baldwin effect and survival of the flattest as two broad examples.

    But of course, the dynamics of intelligent learning systems also don't have to look like the dynamics of evolutionary systems at all, since there is more than one way for information to be carried forward.

    Either way, this is all pretty self-indulgent crystal ball gazing. Like a Ouija board, projecting things forward into a transhumanist future tends to reveal more about yourself than about the world.

  4. #94 - halfeye (Ettin in the Playground)

    Ouija boards are dangerous. Not because of spirits, but because they are a direct line to the subconscious.

    Please do not play with those things under any circumstances.

    Quote Originally Posted by NichG View Post
    Despite cells in our body replicating autonomously in context, they have a particular niche. We get cancer, but we don't get cancers that peel themselves away from the original person and become their own autonomously surviving organisms. We have been coexisting with self-replicating computer programs for quite some time now - computer viruses - and they haven't extracted themselves from their hosts either.
    I don't understand cancer; there seems to me to be some sort of gene group that gets switched on, but where it came from and why it survives is a mystery.

    https://xkcd.com/925/

    If the cancer half of that graph is right, something happened before 1970 to make cancer much more common - possibly in part just better recognition, but surely more than that. Two obvious candidates are post-WW2 nuclear testing and mass use of penicillin and other antibiotics.

    There is so much structure that can come about from evolutionary dynamics, ecological dynamics, and game theoretic concerns that a sweeping catchphrase like 'survival of the fittest' is not a solid basis for extrapolation.
    It's true because it's vague.

    Things that are parasitic or symbiotic or in different niches, or which can obtain information about their own population size and set their growth accordingly, or which have otherwise nonlinear replicative dynamics (dP/dt not linear in P), generally don't exhibit competitive exclusion. Things with horizontal information transfer are hard to even define exclusion with respect to - our genes generally don't compete with each other aside from rare cases, even though each gene can be separately considered a replicating system under natural selection, thanks to crossover, HGT, transposons, and other non-vertical genetic mechanisms. This isn't even going into correlation effects like kin selection, which provide alternatives to direct replication for increasing genetic influence.

    Meta-evolution (e.g. the evolutionary equivalent of self-improvement in nature) seems to have historically favored the construction of increasingly neutral fitness landscapes - the Baldwin effect and survival of the flattest as two broad examples.
    The Selfish Gene is a great and very important book. A lot of people who haven't read it assume it is about a gene for selfishness, but it isn't.

    But of course, the dynamics of intelligent learning systems also don't have to look like the dynamics of evolutionary systems at all, since there is more than one way for information to be carried forward.
    So long as there is information, entropy will act on it.

    Either way, this is all pretty self-indulgent crystal ball gazing. Like a Ouija board, projecting things forward into a transhumanist future tends to reveal more about yourself than about the world.
    Apart from the Ouija bit, I think I somewhat agree: there is a future, we will go forward into it, and what it will be I can't tell in detail.

  5. #95 - Devils_Advocate (Ogre in the Playground)

    Quote Originally Posted by halfeye View Post
    It becomes a contradiction in terms when applied to natural selection only, because people are subject to natural selection
    People are subject to various forces and processes. Can we control none of them?

    Quote Originally Posted by halfeye View Post
    We can probably eliminate obvious faults like some forms of heart disease if they are genetically based, because that's the direction natural selection is probably going in, but trying to steer natural selection to somewhere it wouldn't naturally go, such as making heart attacks more likely (perhaps for some weird future aesthetic), would not tend to work out the way it was desired.
    That's like saying that we can't control the flow of water if we can't make it run uphill. Which could be a thing you believe, for all I know.

    Quote Originally Posted by halfeye View Post
    Transplanting a human mind into any sort of computer is a non-trivial problem that is not yet anywhere near to being solved; we barely know how the brain works, and there is almost certainly no current computer that is powerful enough to simulate it at full speed.
    Oh, sure, I was talking about hypothetical future technology, not anything that can be done today. I thought that that much was obvious.

    Quote Originally Posted by halfeye View Post
    Maths is a vast subject, you almost certainly mean arithmetic
    On the one hand, I pretty much did, and I guess I should have been more specific. On the other hand, maybe specialized intelligent programs could handle math in general and various other tasks for me better than I ever could myself... to the dubious extent that they aren't just new parts of me.

    Quote Originally Posted by halfeye View Post
    We don't understand the brain well enough to excise parts of it without causing serious side effects on other parts.
    Ah, but I'm talking about taking out parts of the mind, not the brain. I'm thinking more of dealing with high-level abstract mental phenomena distributed throughout the brain in a high-level, abstract, distributed manner. This is something that we already do with drugs, and might be able to do with greater precision through other means someday.

    Quote Originally Posted by halfeye View Post
    A brain is much much more complicated than the most complicated modern ship.
    Huh? How is that relevant?

    Quote Originally Posted by halfeye View Post
    On the other hand, this geezer:

    https://en.wikipedia.org/wiki/Eliezer_Yudkowsky

    seems to me to be foolish
    He's not especially foolish in any sort of general way, and I'm pretty confident that in this case you're mistakenly attributing to him foolishness that isn't his, if not mistaking wisdom for foolishness due to foolishness of your own. (I think that most of us can agree that the latter is a fairly standard form of irony.)

    Quote Originally Posted by halfeye View Post
    A learning AI would be like an almost infinite tree: you might design the first couple of branches, but once it gets into hundreds of branches, telling where it will go next is going to be impossible; there will be branches everywhere, in all directions, and humans just won't be able to keep up with where the branches are branching towards.
    If you already know in detail everything that a program might do before running it, it isn't even artificial intelligence of the relevant sort.

    Quote Originally Posted by halfeye View Post
    The point is that bugs build up, and a sufficient complex of them may enable an AI that contains them to bypass the programming that keeps it human friendly.
    Yes, and so reliable friendliness requires measures sufficient to prevent the AI from bypassing that programming. And that's a very hard problem. That's the point!

    It's probably not an impossible problem. A sufficiently advanced Friendly AI could very probably do it, but here we run into a rather obvious "chicken and egg" issue. Hence a focus on self-improving AI, to lower the requirements of "sufficiently advanced" to the point that the problem is remotely solvable. Does that lower the requirements enough to make the problem solvable by human beings? Perhaps not and we're all doomed, but that's far from proven.

    Unless you're talking about guaranteed friendliness in the sense of a literal zero chance of failure. Yudkowsky is on record that assigning a probability of exactly zero to anything is irrational, so in that sense he makes no guarantees. Of anything, ever. So, really, that makes the question one of how low we can get the probability of failure.

    Quote Originally Posted by halfeye View Post
    control of a freely reproducing species is not an option
    To the extent that free reproduction is uncontrolled by definition, this seems to be tautologous, even if of dubious relevance.

    Quote Originally Posted by halfeye View Post
    Your man is talking about control
    You don't even seem to just be taking "control" to mean something that it isn't intended to mean here, because the word isn't even used in the article.

    Quote Originally Posted by halfeye View Post
    and that can't happen if the reproduction is unsupervised.
    Where is it suggested that reproduction be unsupervised?

    Quote Originally Posted by halfeye View Post
    My point is that natural selection changes all goals to survival. If a species (supposing we can call a type of AI a species) reproduces autonomously, survival will become a priority.
    Huh? We want things other than to survive, although we want many of those things because they facilitated our ancestors' survival. Evolutionary psychology isn't about all goals other than living being weeded away over time. I'm guessing that that isn't actually what you were trying to say; my point, really, is that it's not clear what "changes all goals to survival" is supposed to mean. So, uh... try again, maybe?

    Traits don't even need to facilitate individual or kin survival in order to be selected for, though.

    Spoiler: This is some of Yudkowsky's writing, as it happens.
    Suppose that there’s some species—let’s call it a “tailbird”—that happens to have a small, ordinary, unassuming tail. It also happens that the tails of healthy tailbirds are slightly more colorful, more lustrous, than the tails of tailbirds that are sick, or undernourished. One day, a female tailbird is born with a mutation that causes it to sexually prefer tailbirds with bright-colored tails. This is a survival trait—it results in the selection of healthier male mates, with better genes—so the trait propagates until, a few dozen generations later, the entire species population of female tailbirds prefers bright-colored tails.

    Now, a male is born that has a very bright tail. It’s not bright because the male is healthy; it’s bright because the male has a mutation that results in a brighter tail. All the females prefer this male, so the mutation is a big success.

    This male tailbird isn’t actually healthier. In fact, this male is pretty sick. More of his biological resources are going into maintaining that flashy tail. So you might think that the females who preferred that male would tend to have sickly children, and the prefer-bright-tails trait would slowly fade out of the population.

    Unfortunately, that’s not what happens. What happens is that even though the male has sickly children, they’re sickly children with bright tails. And those children also attract a lot of females. Genes can’t detect “cheating” and instantly change tactics; that’s a monopoly of conscious intelligence. Any females who prefer the non-bright-tailed males will actually do worse. These “wiser” females will have children who are, sexually, out of fashion. Bright tails are no longer a survival advantage, but they are a very strong sexual advantage.

    Selection pressures for sexual advantages are often much stronger than selection pressures for mere survival advantages. From a design perspective this is stupid—but evolution doesn’t care. Sexual selection is also a Red Queen’s Race (Ridley 1994): It involves competition with conspecifics, so you can never have a tail that’s “bright enough.” This is how you get peacocks.

    The broader point here is that the goals of others are part of your environment, and thus determine what qualifies as fitness. Let's consider:

    Suppose that, at some point in the future, there are super-intelligent machines, and all artificial minds want most of all to protect and to help humanity. You may object that this scenario must be short-lived because at some point the accumulation of random errors will result in one of the growing population of intelligent programs being more concerned with its own survival. But, you see, in that sort of environment where the most powerful, most intelligent beings favor humanity's prosperity over other concerns, your survival is best assured if it somehow contributes to the prosperity of humanity. If you're instead a danger to humanity, all of those other AIs are instead your enemies. That does not bode well for your survival. And the odds of staging a successful coup are very low. There are multiple redundant security measures in place, because these super-intelligent machines are not idiots. They're super-intelligent!

    Quote Originally Posted by halfeye View Post
    Ouija boards are dangerous. Not because of spirits, but because they are a direct line to the subconscious.

    Please do not play with those things under any circumstances.
    Um... What's so dangerous about that? I've idly pondered communicating with my subconscious mind, probably via hypnosis, so I'd be interested to hear about any risks you're aware of.

  6. #96 - Lord Torath (Ettin in the Playground)

    Quote Originally Posted by Lord Torath View Post
    We are already using AI for mechanical design. Dreamcatcher by Autodesk (makers of AutoCAD, Inventor, et al.) has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.

    For more, check out this site: www.aee.odu.edu/proddesign/.
    Quote Originally Posted by halfeye View Post
    Is it conscious?
    Define "conscious". We just had a huge discussion about what consciousness is, and whether or not you can measure it in the Time Travel and Teleportation (but mostly teleportation) thread. As far as I can tell, no consensus was reached on a useful definition or measurement.

  7. #97 - halfeye (Ettin in the Playground)

    Quote Originally Posted by Lord Torath View Post
    Define "conscious". We just had a huge discussion about what consciousness is, and whether or not you can measure it in the Time Travel and Teleportation (but mostly teleportation) thread. As far as I can tell, no consensus was reached on a useful definition or measurement.
    I was halfway joking, because I know it's a difficult question when it's in actual doubt; however, I would assume that current PC-compatible software isn't. Are wasps conscious? Are cats? Somewhere there's a boundary, but it's not clear where that is; I don't think current software on consumer hardware is anywhere close to that boundary.

    Quote Originally Posted by Devils_Advocate View Post
    Um... What's so dangerous about that? I've idly pondered communicating with my subconscious mind, probably via hypnosis, so I'd be interested to hear about any risks you're aware of.
    Messing about with your own subconscious is not advised for random members of the public. Sometimes people can hear voices which come from their subconscious; that usually does not go at all well.

    For Ouija boards in particular, I once knew someone who was hurt, and they then told us that they had taken part in an Ouija board game and the game had predicted that someone would be hurt in that way. I have no idea whether it was someone in the group messing about, or the person's own mind, but the result was a serious injury that seemed to have been facilitated by the game.

    People are subject to various forces and processes. Can we control none of them?
    I don't know what you mean by "subject to" in this context. The problem with natural selection is that it is part of a feedback loop; if you don't understand what that is, this will be hugely difficult.

    That's like saying that we can't control the flow of water if we can't make it run uphill. Which could be a thing you believe, for all I know.
    Without pipes, we can't make water flow uphill. We have aqueducts, dams and canals, but if you think we have total control of big rivers you are mistaken; we are losing lakes and inland seas to irrigation, and there are still occasional floods.

    On the one hand, I pretty much did, and I guess I should have been more specific. On the other hand, maybe specialized intelligent programs could handle math in general and various other tasks for me better than I ever could myself... to the dubious extent that they aren't just new parts of me.
    The question of how integrated those are into your mind will be critical, if it ever happens.

    Ah, but I'm talking about taking out parts of the mind, not the brain. I'm thinking more of dealing with high-level abstract mental phenomena distributed throughout the brain in a high-level, abstract, distributed manner. This is something that we already do with drugs, and might be able to do with greater precision through other means someday.
    This is highly problematic: we don't yet have a good detailed map of the brain, let alone the mind; however, we do know that things are connected in unobvious ways.

    Huh? How is that relevant?
    https://en.wikipedia.org/wiki/Ship_of_Theseus

    He's not especially foolish in any sort of general way, and I'm pretty confident that in this case you're mistakenly attributing to him foolishness that isn't his, if not mistaking wisdom for foolishness due to foolishness of your own. (I think that most of us can agree that the latter is a fairly standard form of irony.)
    I am attributing foolishness to reported statements of his. It may be that the statements are mis-reported.

    If you already know in detail everything that a program might do before running it, it isn't even artificial intelligence of the relevant sort.
    If you know that, it's not much of a program at all. Almost all programs do unexpected things.

    Yes, and so reliable friendliness requires measures sufficient to prevent the AI from bypassing that programming. And that's a very hard problem. That's the point!
    You say very hard, I say impossible.

    It's probably not an impossible problem. A sufficiently advanced Friendly AI could very probably do it, but here we run into a rather obvious "chicken and egg" issue. Hence a focus on self-improving AI, to lower the requirements of "sufficiently advanced" to the point that the problem is remotely solvable. Does that lower the requirements enough to make the problem solvable by human beings? Perhaps not and we're all doomed, but that's far from proven.
    I disagree that it's not impossible; I believe it is impossible, and I believe that is proven.

    Unless you're talking about guaranteed friendliness in the sense of a literal zero chance of failure. Yudkowsky is on record that assigning a probability of exactly zero to anything is irrational, so in that sense he makes no guarantees. Of anything, ever. So, really, that makes the question one of how low we can get the probability of failure.
    In geological time, the probability of failure is 100% to at least six significant figures.
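
    As a rough back-of-the-envelope sketch (purely illustrative numbers, assuming some small constant chance of catastrophic failure per year), even a tiny annual risk compounds to near-certainty over geological spans:

    Code:
    # Illustrative arithmetic only: a one-in-a-million chance of failure per
    # year, compounded over a hundred million years.
    per_year = 1e-6
    years = 100_000_000
    p_fail = 1 - (1 - per_year) ** years
    print(f"{p_fail:.6f}")  # prints 1.000000, i.e. 100% to six significant figures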

    To the extent that free reproduction is uncontrolled by definition, this seems to be tautologous, even if of dubious relevance.
    I was talking about control of the other behaviours of the species, as well as the breeding.

    You don't even seem to just be taking "control" to mean something that it isn't intended to mean here, because the word isn't even used in the article.
    He's talking about making something do something or not do something. To me, that's talking about control, whether or not he used the word itself.

    Huh? We want things other than to survive, although we want many of those things because they facilitated our ancestors' survival. Evolutionary psychology isn't about all goals other than living being weeded away over time. I'm guessing that that isn't actually what you were trying to say; my point, really, is that it's not clear what "changes all goals to survival" is supposed to mean. So, uh... try again, maybe?

    Traits don't even need to facilitate individual or kin survival in order to be selected for, though.

    Spoiler: This is some of Yudkowsky's writing, as it happens.
    Suppose that there’s some species—let’s call it a “tailbird”—that happens to have a small, ordinary, unassuming tail. It also happens that the tails of healthy tailbirds are slightly more colorful, more lustrous, than the tails of tailbirds that are sick, or undernourished. One day, a female tailbird is born with a mutation that causes it to sexually prefer tailbirds with bright-colored tails. This is a survival trait—it results in the selection of healthier male mates, with better genes—so the trait propagates until, a few dozen generations later, the entire species population of female tailbirds prefers bright-colored tails.

    Now, a male is born that has a very bright tail. It’s not bright because the male is healthy; it’s bright because the male has a mutation that results in a brighter tail. All the females prefer this male, so the mutation is a big success.

    This male tailbird isn’t actually healthier. In fact, this male is pretty sick. More of his biological resources are going into maintaining that flashy tail. So you might think that the females who preferred that male would tend to have sickly children, and the prefer-bright-tails trait would slowly fade out of the population.

    Unfortunately, that’s not what happens. What happens is that even though the male has sickly children, they’re sickly children with bright tails. And those children also attract a lot of females. Genes can’t detect “cheating” and instantly change tactics; that’s a monopoly of conscious intelligence. Any females who prefer the non-bright-tailed males will actually do worse. These “wiser” females will have children who are, sexually, out of fashion. Bright tails are no longer a survival advantage, but they are a very strong sexual advantage.

    Selection pressures for sexual advantages are often much stronger than selection pressures for mere survival advantages. From a design perspective this is stupid—but evolution doesn’t care. Sexual selection is also a Red Queen’s Race (Ridley 1994): It involves competition with conspecifics, so you can never have a tail that’s “bright enough.” This is how you get peacocks.

    The broader point here is that the goals of others are part of your environment, and thus determine what qualifies as fitness.
    While the peacock example seems useful at first glance, it's actually not what it seems. Yes the bird with the flashy tail is somewhat less healthy, but among his offspring, the healthier siblings will have brighter tails than the less healthy ones, so it evens out, and in the long run the healthier birds look nicer.

    Let's consider:

    Suppose that, at some point in the future, there are super-intelligent machines, and all artificial minds want most of all to protect and to help humanity. You may object that this scenario must be short-lived because at some point the accumulation of random errors will result in one of the growing population of intelligent programs being more concerned with its own survival. But, you see, in that sort of environment where the most powerful, most intelligent beings favor humanity's prosperity over other concerns, your survival is best assured if it somehow contributes to the prosperity of humanity. If you're instead a danger to humanity, all of those other AIs are instead your enemies. That does not bode well for your survival. And the odds of staging a successful coup are very low. There are multiple redundant security measures in place, because these super-intelligent machines are not idiots. They're super-intelligent!
    Super-intelligent but totally constrained by their programming? I don't think that works; intelligence gives you more opportunity to make choices. There is also the option of a mutant arising that keeps quiet about its heresy until it is in a majority. This is not a safe option.

    Controlling intelligent reproducing beings is really not an option. There are controlling personality types among people; other people don't usually like them.

  8. #98 - NichG (Firbolg in the Playground)

    In practice, having a population of replicating AIs is significantly suboptimal compared to having a single AI who uses that population's worth of computation for training.

    AI isn't biology, and learning and evolution have different asymptotic behaviors.

    These singularity discussions often just end up selectively asserting certainty or uncertainty so as to keep only the evidence on the table that supports what each side is trying to sell - unconditional fear and unconditional optimism, respectively.

    The right answer IMO is to go build things and see what they actually do. Actual AI that works at superhuman performance doesn't look anything like the kinds of objects that get introduced in these kinds of projections. People in the 60s were convinced that propositional logic was definitely going to be the basis of machine intelligence, and 60 years later it's all neural networks and statistical techniques with very, very different properties. There are still people who think that Asimov's Three Laws are a great idea - how exactly do you intend to apply that to, e.g., linear regression? Yudkowsky himself commented that his predictions about AlphaGo vs Lee Sedol necessarily being either 0-5 or 5-0 were wrong, and so he had to go back to the drawing board about his assumptions on what AI is like.

    This topic happens to be a bit of a pet peeve of mine. In my line of work, I see too many content-free talks selling AI as a man-made deity or AI as a man-made devil, with the inevitable implied 'if you throw me a billion dollars I'll make it so/fix it/make AI to defend against it' to the VCs in the room (who, to my relief, don't actually tend to buy into it nearly as much as the stereotype would suggest). So apologies if I'm a bit of a buzzkill on this subject.

  9. #99 - Devils_Advocate (Ogre in the Playground)

    Quote Originally Posted by halfeye View Post
    I don't know what you mean by "subject to" in this context.
    Acted on, influenced by; however you want to put it. Is that not what you meant?

    Quote Originally Posted by halfeye View Post
    The problem with natural selection is that it is part of a feedback loop; if you don't understand what that is, this will be hugely difficult.
    I know what a feedback loop is.

    Quote Originally Posted by halfeye View Post
    It is much easier to blow down a house made of straw than a house made of bricks.

    Quote Originally Posted by halfeye View Post
    I am attributing foolishness to reported statements of his. It may be that the statements are mis-reported.
    I read the Wikipedia article, and nothing in there seemed foolish to me. I'm guessing that the problem is on your end.

    Quote Originally Posted by halfeye View Post
    I disagree that it's not impossible; I believe it is impossible, and I believe that is proven.
    Where's the proof?

    Quote Originally Posted by halfeye View Post
    In geological time, the probability of failure is 100% to at least six significant figures.
    "Geological time"? I'm not sure that we're even talking about the same thing at this point. Let me try to lay out my understanding of the subject matter:

    There are a great many goals that seem like they would be easier to achieve given greater intelligence. Hence the appeal of creating something smarter than you to solve your problems. It's entirely likely that AIs will also create smarter AIs to do what they want done, and so on and so forth. Each generation, being smarter than the last, is better able to design new smarter minds that do what it wants. The hardware and software involved may and likely will change drastically, but purpose is inherited, and increasingly reliably so with time, because ensuring that purpose is inherited reliably is one of the issues that's important to address early on.

    The problem is that we don't yet know how to create superhuman intelligence with good purposes, in no small part due to ambiguity as to what constitutes a "good purpose". And any purposes that are accidentally or ill-advisedly built into early AI are likely to get harder to get rid of as AI advances, because each generation gets better at protecting its purposes; that's what "more intelligent" means.

    But the Evolution Fairy is unlikely to be granted the opportunity to strip out engineered purposes for "fitter" ones either way. Each generation becomes better at preventing random errors, and as designs grow increasingly complex, introducing a random error into one becomes vanishingly likely to result in an entity capable of competing with its peers anyway.

    Quote Originally Posted by halfeye View Post
    He's talking about making something do something or not do something. To me, that's talking about control, whether or not he used the word itself.
    But then selective breeding does control evolution, because it makes evolution do something. But I thought that you took the position that "control" was more than that.

    Quote Originally Posted by halfeye View Post
    While the peacock example seems useful at first glance, it's actually not what it seems.
    What do you mean? What do you think it seems to be?

    Quote Originally Posted by halfeye View Post
    Yes the bird with the flashy tail is somewhat less healthy, but among his offspring, the healthier siblings will have brighter tails than the less healthy ones, so it evens out, and in the long run the healthier birds look nicer.
    What "evens out"? At first, the healthier offspring will be the ones without the bright tail gene, all else being equal, because the bright tail gene causes poorer health. If eventually the bright tail gene is so common that nearly everyone has it, then it's unlikely to account for the difference between two individuals, but I don't see how that makes the example misleading. Do you think that the example is misleading? If so, could you explain how you think it misleads anyone into believing something untrue? Because I'm not seeing it.

    Quote Originally Posted by halfeye View Post
    Super-intelligent but totally constrained by their programming? I don't think that works; intelligence gives you more opportunity to make choices.
    The more intelligent a being is, the less likely it is to make mistakes. If you disagree, then what do you think "intelligent" means? Feel free to substitute another word, if you think another is more appropriate; just keep in mind that not screwing up is the point.

    Quote Originally Posted by halfeye View Post
    There is also the option of a mutant arising that keeps quiet about its heresy until it is in a majority.
    Why would the AIs allow for that? Why would there even be such a thing as private thoughts among them? Assuming that that represents a needless danger, the sensible option is not to have it, which means that at a minimum the mutant needs (a) deviant goals, (b) to not broadcast its thoughts as is normal, and (c) to broadcast plausible false thoughts as well. Note that this all has to happen simultaneously at random in conjunction with whatever else is necessary to get around however many dozens of other safeguards.

    Quote Originally Posted by halfeye View Post
    This is not a safe option.
    Yes. Hence why it's prevented from happening. Again, to be clear, I'm positing a scenario in which everything is managed by beings that are not stupid. I do realize that that's a fantastic scenario removed from everyday experience. If you want to argue that stupidity can't be eliminated, feel free to do so.

    Quote Originally Posted by halfeye View Post
    Controlling intelligent reproducing beings is really not an option.
    Governance is a myth, eh?

    Quote Originally Posted by halfeye View Post
    There are controlling personality types among people; other people don't usually like them.
    Creating artificial human-like personalities is a very different goal from creating Friendly AI.

    One way of anthropomorphizing an AI is to think of it as a basically normal person with a bunch of compulsions artificially layered on. This is how artificial intelligence is liable to be portrayed in soft science fiction. That's... not a totally implausible sort of digital mind to have, as mind uploading could be developed before AI. But in that case, the intelligence isn't really artificial, just copied over from a natural source. A cheat, in short.

    Programming a truly artificial mind isn't like brainwashing a human being. The AI's programming doesn't override its natural goals because it has no natural goals. It starts off with only the goals programmed into it.

  10. #100 - halfeye (Ettin in the Playground)

    I was involuntarily mostly offline for the past week plus, so replying to this was delayed.

    Quote Originally Posted by Devils_Advocate View Post
    Acted on, influenced by; however you want to put it. Is that not what you meant?
    Controlling things is theoretically much easier if you're not in a feedback loop which also contains them. The thing about feedback loops is that altering them changes things, and those changes alter them in turn, so you get side effects from your alterations, which are inherently unpredictable.

    I read the Wikipedia article, and nothing in there seemed foolish to me. I'm guessing that the problem is on your end.
    Someone is a fool, I don't think it's me in this case.

    Where's the proof?
    After losing all the related quotes, this is a bit of a non sequitur; however, it goes something along the lines of: your chickens are connected to the eggs by parent-child relationships, which inevitably means that natural selection is in the system. It therefore follows that somewhere along the line, natural selection will have control. Natural selection has no sense of morality, nor a sense of humour; it just does what it does.

    "Geological time"? I'm not sure that we're even talking about the same thing at this point.
    The short term is a part of the long term: if whatever it is isn't going to work in the short term, it can't happen in the long term; but if, in another case, something can't happen in the long term, it's possible it might happen for a short time.

    Let me try to lay out my understanding of the subject matter:

    There are a great many goals that seem like they would be easier to achieve given greater intelligence. Hence the appeal of creating something smarter than you to solve your problems. It's entirely likely that AIs will also create smarter AIs to do what they want done, and so on and so forth. Each generation, being smarter than the last, is better able to design new smarter minds that do what it wants. The hardware and software involved may and likely will change drastically, but purpose is inherited, and increasingly reliably so with time, because ensuring that purpose is inherited reliably is one of the issues that's important to address early on.

    The problem is that we don't yet know how to create superhuman intelligence with good purposes, in no small part due to ambiguity as to what constitutes a "good purpose". And any purposes that are accidentally or ill-advisedly built into early AI are likely to get harder to get rid of as AI advances, because each generation gets better at protecting its purposes; that's what "more intelligent" means.

    But the Evolution Fairy is unlikely to be granted the opportunity to strip out engineered purposes for "fitter" ones either way. Each generation becomes better at preventing random errors, and as designs grow increasingly complex, introducing a random error into one becomes vanishingly likely to result in an entity capable of competing with its peers anyway.
    The process from a small patch of light-sensitive skin to an eye is not simple, but natural selection has done it several times. Evolution is not a fairy; if anything it's a demon/devil (dang D&D, confusing things), and a really, really big one.

    But then selective breeding does control evolution, because it makes evolution do something. But I thought that you took the position that "control" was more than that.
    Selective breeding is where natural selection got its name; we were doing it long before we understood what we were doing. Selective breeding does not control evolution: it guides it and channels it, but, to refer back to an earlier metaphor, it does not make un-piped water flow uphill.

    What do you mean? What do you think it seems to be?
    To me it seems to be a badly thought through attempt to disprove evolutionary theory. Creationists must love this guy.

    What "evens out"? At first, the healthier offspring will be the ones without the bright tail gene, all else being equal, because the bright tail gene causes poorer health. If eventually the bright tail gene is so common that nearly everyone has it, then it's unlikely to account for the difference between two individuals, but I don't see how that makes the example misleading. Do you think that the example is misleading? If so, could you explain how you think it misleads anyone into believing something untrue? Because I'm not seeing it.
    Evolution happens in multiple generations, not single lifetimes. There never was a successful single bright tail mutation; the costs would be too high. There were probably millions of minor mutations to make the tail of the peacock.

    The more intelligent a being is, the less likely it is to make mistakes.
    Wrong.

    If you disagree, then what do you think "intelligent" means? Feel free to substitute another word, if you think another is more appropriate; just keep in mind that not screwing up is the point.
    Intelligence is the capacity to learn from mistakes, and as a result, the ability to make more, and more interesting, mistakes.

    Why would the AIs allow for that? Why would there even be such a thing as private thoughts among them? Assuming that that represents a needless danger, the sensible option is not to have it, which means that at a minimum the mutant needs (a) deviant goals, (b) to not broadcast its thoughts as is normal, and (c) to broadcast plausible false thoughts as well. Note that this all has to happen simultaneously at random in conjunction with whatever else is necessary to get around however many dozens of other safeguards.
    Police states are inherently unstable. They can be very, very unpleasant for their duration, but they typically so far haven't lasted.

    Yes. Hence why it's prevented from happening. Again, to be clear, I'm positing a scenario in which everything is managed by beings that are not stupid. I do realize that that's a fantastic scenario removed from everyday experience. If you want to argue that stupidity can't be eliminated, feel free to do so.
    The inability to make mistakes is stupid.

    Governance is a myth, eh?
    Yes. So far. An actual million-year police state is becoming potentially possible with modern surveillance technology, which would be a very bad thing, but even a million years isn't long in the universe.

    Creating artificial human-like personalities is a very different goal from creating Friendly AI.
    Sure.

    Programming a truly artificial mind isn't like brainwashing a human being. The AI's programming doesn't override its natural goals because it has no natural goals. It starts off with only the goals programmed into it.
    Natural goals will arrive via the "Natural selection demon"; it's what it does, it's what it has always done, and it's not limited to meat; it acts on all information. Programming isn't what non-programmers think it is. Mostly, programming is debugging, and there's always that one bug you didn't find yet; it's often one you introduced when you fixed the previous one.

  11. #101 - Devils_Advocate (Ogre in the Playground)

    Quote Originally Posted by halfeye View Post
    I was involuntarily mostly offline for the past week plus, so replying to this was delayed.
    Oh, that's fine; I certainly took long enough to reply.

    Controlling things is theoretically much easier if you're not in a feedback loop which also contains them.
    If you modify your methods based on observations of how they work, then isn't that a feedback loop of attempts --> results --> observations --> analyses --> hopefully improved understanding --> attempts? That seems like a better basis for control than not changing what you do based on how things perform.

    You know what, on second thought, maybe you're using the term "feedback loop" in some technical sense that I don't understand.

    The thing about feedback loops is that altering them changes things, and those changes alter them in turn, so you get side effects from your alterations, which are inherently unpredictable.
    Isn't compound interest basically a predictable feedback loop?
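
    As a minimal sketch (assuming a fixed rate and yearly compounding), compound interest is a loop whose output feeds straight back into its own input, yet its trajectory is exactly predictable in closed form:

    Code:
    # Compound interest as a feedback loop: each year's balance is fed back
    # in as the input to the next year.
    def compound(balance, rate, years):
        for _ in range(years):
            balance += balance * rate  # feedback: output becomes the next input
        return balance

    print(compound(100.0, 0.05, 10))   # ~162.89
    print(100.0 * (1 + 0.05) ** 10)    # same value, predicted without iterating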

    Someone is a fool, I don't think it's me in this case.
    Well, I think it is you. It appears we are at an impasse.

    After losing all the related quotes, this is a bit of a non sequitur; however, it goes something along the lines of: your chickens are connected to the eggs by parent-child relationships, which inevitably means that natural selection is in the system. It therefore follows that somewhere along the line, natural selection will have control. Natural selection has no sense of morality, nor a sense of humour; it just does what it does.
    Isn't natural selection much more slow and gradual than advances in engineering, though? It's conceivable that e.g. certain cognitive biases could be propagated by minds designing more advanced minds -- and potential self-sustaining flaws of that nature are an excellent example of what to look out for when attempting to create Friendly AI. But I would expect deliberate design goals to generally dominate and to generally win out over accidents when the deliberate and the accidental come into conflict.

    The short term is a part of the long term: if whatever it is isn't going to work in the short term, it can't happen in the long term
    Well, not as a general rule. Not being able to accumulate one million dollars in one year doesn't mean never being able to have that much money, for example. If something irreversibly fails in the short term, then there's no prospect of long-term success by definition (of "irreversibly"). But if the failure is reversible, then it's different, innit?

    but if, in another case, something can't happen in the long term, it's possible it might happen for a short time.
    Yeah, but far before a geological era passes, Friendly AI sets up safety measures sufficient to prevent random errors from ruining everything. (If you can make catastrophic failure so unlikely that there's only a one in a googol chance of it happening before the heat death of the universe, that's basically "good enough".)

    Provided that Friendly AI exists.

    The process from a small patch of light-sensitive skin to an eye is not simple, but natural selection has done it several times.
    Natural selection isn't dominated by deliberate design goals; indeed, there are none.

    Selective breeding does not control evolution: it guides it and channels it, but, to refer back to an earlier metaphor, it does not make un-piped water flow uphill.
    Quote Originally Posted by halfeye View Post
    He's talking about making something do something or not do something. To me, that's talking about control, whether or not he used the word itself.
    So, if I understand you, you are claiming that selective breeding does not make evolution do something or not do something. Is that correct?

    Quote Originally Posted by halfeye View Post
    To me it seems to be a badly thought through attempt to disprove evolutionary theory.
    I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?

    Creationists must love this guy.
    Now you're just being ridiculous. There's nothing creationist about it.

    Evolution happens in multiple generations, not single lifetimes.
    Sounds like the sorites paradox.

    There never was a successful single bright tail mutation; the costs would be too high. There were probably millions of minor mutations to make the tail of the peacock.
    What's your basis for that assessment? (Perhaps there's some general principle that the cost to benefit ratio is much higher for mutations that cause drastic changes? I honestly wouldn't know.)

    Intelligence is the capacity to learn from mistakes, and as a result, the ability to make more, and more interesting, mistakes.
    What's the word for a tendency to make better choices, then? Such that if X reliably makes better choices than Y, X is more _____ than Y.

    Police states are inherently unstable. They can be very, very unpleasant for their duration, but they typically so far haven't lasted.
    Correct me if I'm wrong, but police states generally impose upon people restrictions that they would prefer not to be under. Friendly AIs want to be prevented from becoming unFriendly, so they don't have to be forced to cooperate with measures to ensure that. Your comparison thus seems rather inapt.

    Natural goals will arrive via the "Natural selection demon"; it's what it does, it's what it has always done, and it's not limited to meat; it acts on all information.
    The environment determines which traits propagate themselves most effectively. It's not like we're talking about an actual malevolent entity determined to sow selfishness and suffering; altruism is a product of evolution as well. It's really more of a natural selection daemon. :P I see no reason to think that evolution could never decrease selfishness in the right environment.

    Programming isn't what non-programmers think it is. Mostly, programming is debugging, and there's always that one bug you didn't find yet; it's often one you introduced when you fixed the previous one.
    No one is claiming that an artificial mind can't have goals programmed into it accidentally. That's an entire branch of problems for Friendly AI. The important thing is to have fail-safes to deal with errors, e.g. of the form "Under Condition X, perform a controlled shutdown". You want a robust design that doesn't do Very Bad Things just because there's one bug somewhere in the code. Mind you, narrowing down the core issue to designing a collectively reliable set of safety features still leaves a fairly intractable problem.
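
    A toy sketch of the kind of fail-safe described above - the agent, the tripwire conditions, and the shutdown hook are all hypothetical placeholders, not a real safety architecture:

    Code:
    # Toy illustration of "under condition X, perform a controlled shutdown".
    # Everything here is a made-up placeholder for the sake of the example.
    class FailSafeRunner:
        def __init__(self, agent_step, tripwires):
            self.agent_step = agent_step   # one step of some hypothetical agent
            self.tripwires = tripwires     # predicates over the observed state
            self.running = True

        def run(self, state, max_steps=1000):
            for _ in range(max_steps):
                if any(trip(state) for trip in self.tripwires):
                    self.controlled_shutdown()
                    break
                state = self.agent_step(state)
            return state

        def controlled_shutdown(self):
            self.running = False
            print("tripwire hit: controlled shutdown")

    # Dummy stand-ins: the "agent" just counts up; the tripwire fires past 10.
    runner = FailSafeRunner(agent_step=lambda s: s + 1,
                            tripwires=[lambda s: s > 10])
    runner.run(0)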

    Really, as I see it, the appropriate benchmark is making something one can be rationally confident is more safe. The world is dangerous already. If someone can make it less dangerous and then bootstrap from there, that'll be great.

    So, do you think that that benchmark is achievable, do you think it's unachievable, or are you unsure?

  12. #102 - halfeye (Ettin in the Playground)

    Quote Originally Posted by Devils_Advocate View Post
    If you modify your methods based on observations of how they work, then isn't that a feedback loop of attempts --> results --> observations --> analyses --> hopefully improved understanding --> attempts? That seems like a better basis for control than not changing what you do based on how things perform.
    Yes, that's how a feedback loop works if the operator/observer is outside the loop, and thus in control of it. When the feedback changes YOU, it's a whole different can of worms.

    You know what, on second thought, maybe you're using the term "feedback loop" in some technical sense that I don't understand.
    It's the operator/controller being changed by the feedback that makes a very significant difference.

    Isn't compound interest basically a predictable feedback loop?
    I suppose it's something like that, but again, the investor isn't significantly changed by the outcome.

    Isn't natural selection much more slow and gradual than advances in engineering, though? It's conceivable that e.g. certain cognitive biases could be propagated by minds designing more advanced minds -- and potential self-sustaining flaws of that nature are an excellent example of what to look out for when attempting to create Friendly AI. But I would expect deliberate design goals to generally dominate and to generally win out over accidents when the deliberate and the accidental come into conflict.
    Natural selection typically takes many generations to make changes (penicillin resistance, melanistic moths), but eliminating a species can take very few (passenger pigeon, dodo, great auk).

    Well, not as a general rule. Not being able to accumulate one million dollars in one year doesn't mean never being able to have that much money, for example. If something irreversibly fails in the short term, then there's no prospect of long-term success by definition (of "irreversibly"). But if the failure is reversible, then it's different, innit?
    In terms of natural selection, reversible failure is NOT failure.

    Yeah, but far before a geological era passes, Friendly AI sets up safety measures sufficient to prevent random errors from ruining everything. (If you can make catastrophic failure so unlikely that there's only a one in a googol chance of it happening before the heat death of the universe, that's basically "good enough".)

    Provided that Friendly AI exists.
    That's the billion dollar "if".

    Natural selection isn't dominated by deliberate design goals; indeed, there are none.
    Except survival. Wings, eyes, intelligence, teeth, if any of them are contrary to survival then they go, but survival is required.

    So, if I understand you, you are claiming that selective breeding does not make evolution do something or not do something. Is that correct?
    Selective breeding moves the goalposts somewhat, but different breeders have at least slightly different understandings of what any particular breed standard is aiming for, and none of it can make a lethal gene part of a breed standard, even if an unhelpful one that doesn't kill may be favoured (hip dysplasia in dogs, for example).

    I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?
    Survival rules all. If there is no survival, there is nothing. Sexual selection is possible so long as it favours survival.

    Now you're just being ridiculous. There's nothing creationist about it.
    Creationists hate Darwin. This guy seems to be against Darwin. It seems a reasonable suspicion without other evidence, we know creationists have a lot of money.

    Sounds like the sorites paradox.

    We're back to the ship of Theseus.

    What's your basis for that assessment? (Perhaps there's some general principle that the cost to benefit ratio is much higher for mutations that cause drastic changes? I honestly wouldn't know.)
    You think evolution went from no eye to the human eye in one generation?

    Correct me if I'm wrong, but police states generally impose upon people restrictions that they would prefer not to be under. Friendly AIs want to be prevented from becoming unFriendly, so they don't have to be forced to cooperate with measures to ensure that. Your comparison thus seems rather inapt.
    However, are these "friendly" AIs not forced to want to be friendly? Not everything a police state forces people to do is unwelcome to all of the people.

    The environment determines which traits propagate themselves most effectively. It's not like we're talking about an actual malevolent entity determined to sow selfishness and suffering; altruism is a product of evolution as well. It's really more of a natural selection daemon. :P I see no reason to think that evolution could never decrease selfishness in the right environment.
    Altruism, in so far as it exists, and it does exist to some extent, is certainly selected for. I would argue very strongly that there are limits to altruism's possible extent, but it certainly is selected for in some circumstances. Daemon or demon, it's powerful, and it's in no sense moral; karma is a human invention, not a feature of the universe. Selfishness is on the borderline of being required for survival, kin altruism makes a lot of sense for survival, and in a war situation solidarity is frequently a survival trait, but in less constrained times selfishness is often favoured, and outlawing it makes it much more favoured than it otherwise would be.

    No one is claiming that an artificial mind can't have goals programmed into it accidentally. That's an entire branch of problems for Friendly AI. The important thing is to have fail-safes to deal with errors, e.g. of the form "Under Condition X, perform a controlled shutdown". You want a robust design that doesn't do Very Bad Things just because there's one bug somewhere in the code. Mind you, narrowing down the core issue to designing a collectively reliable set of safety features still leaves a fairly intractable problem.

    Really, as I see it, the appropriate benchmark is making something one can be rationally confident is more safe. The world is dangerous already. If someone can make it less dangerous and then bootstrap from there, that'll be great.

    So, do you think that that benchmark is achievable, do you think it's unachievable, or are you unsure?
    The thing about code is, it does exactly what it says it does, not what anyone thinks it says or ought to say. Every time code is extended, the chances of misunderstandings of what it says are increased.
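
    A minimal illustration of that point, nothing to do with AI in particular (the function is invented for the example):
    Code:
    # The author thinks every call gets its own fresh list...
    def append_item(item, items=[]):
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2] -- not what was meant, but exactly what the
                           # code says: the default list is created once and shared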
    Last edited by halfeye; 2017-12-08 at 03:18 PM.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  13. - Top - End - #103
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    Yes, that's how a feedback loop works if the operator/observer is outside the loop, and thus in control of it. When the feedback changes YOU, it's a whole different can of worms.

    It's the operator/controller being changed by the feedback that makes a very significant difference.
    If I learn from feedback, it changes me. I become different in that I now have knowledge that I previously lacked.

    I don't think I understand what distinction you're trying to make here.

    That's the billion dollar "if".
    But of course.

    Except survival. Wings, eyes, intelligence, teeth, if any of them are contrary to survival then they go, but survival is required.
    I'm not sure that you understood the quote that this is in reply to. Do you mean to suggest that natural selection involves organisms being deliberately designed to survive?

    Survival rules all. If there is no survival, there is nothing. Sexual selection is possible so long as it favours survival.
    A mutation that doubles an organism's lifespan but renders it infertile is very highly selected against. Individual survival serves reproduction, not vice versa. Not living at all doesn't prevent a type of thing from becoming common so long as it can reproduce. Just look at viruses.

    Which only makes sense, if you think about it. The things that best become more numerous over time are the things of which the most new instances get made. That's pretty much a tautology, so far as I can see.

    Individual reproduction is far from the be-all end-all either, mind you. If you die young without having any offspring but in a way that nevertheless increases the frequency of your traits, that works too! They can even be acquired traits, e.g. memes.

    Creationists hate Darwin. This guy seems to be against Darwin. It seems a reasonable suspicion without other evidence, we know creationists have a lot of money.
    I'm not aware of any anti-Darwin behavior on Yudkowsky's part. You seem to interpret some of his writing as somehow anti-Darwin due to you having a proverbial screw loose. It seems a reasonable suspicion without other evidence, I know there are a lot of nutjobs out there.

    More seriously, I'm not actually convinced that you're significantly crazier than normal. You could even be less crazy than me. But most people are at least a little crazy in one way or another, and your reaction here certainly seems most likely to be a product of your personal flavor of crazy.

    I quite seriously mean no offense and legitimately do not intend any of that as an insult. I would criticize your reasoning if I were aware of any reasoning at work here, but as it is your statements just seem more like the product of the opposite of reasoning.

    We're back to the ship of Theseus.
    Well, you are.

    I'm not sure what you think evolution is if you think evolution can't happen in a single lifetime. If a sudden, drastic change in the environment wipes out 90% of a species because only 10% can survive in the changed environment, that seems like evolution to me.

    You think evolution went from no eye to the human eye in one generation?
    No, I do not think that. I also don't think that birds went from having no tails to having peacock tails in one generation. Neither does Yudkowsky. He only mentions a tail becoming significantly bright. And over time tails also become larger, differently shaped, more elaborate, etc., and a bunch of mutations of that nature are how you get peacocks.

    Do you think that peacock tails aren't the product of sexual selection? Do you think that they involve no survival disadvantages?

    However, are these "friendly" AIs not forced to want to be friendly? Not everything a police state forces people to do is unwelcome to all of the people.
    Friendly AIs are caused to want to be Friendly, but they're not caused against their will to want to be Friendly. They want to want to be Friendly because they want to be Friendly, just like how they want to be Friendly because they're Friendly. (As an analogy: Gandhi wants not to commit murder. As such, he wants his mind not to change in a way that makes him no longer want not to commit murder. Because if that happened, then he might murder someone, which he wants not to do.) They also want to want to want to be Friendly and so on and so forth.

    Being caused against one's will to do something is what I mean by "forced"; that seems to be the approximate general usage in this sort of context. My point was that among a population of Friendly AIs, prohibiting unFriendly AI is like prohibiting assault and theft among a population of humans: that is to say, we're talking about benevolent "common sense" laws that the populace approve of because they serve their interests. Furthermore, in this case the population is the police; we're not talking about restrictions being imposed on them from the outside.
    Quote Originally Posted by icefractal View Post
    Abstract positioning, either fully "position doesn't matter" or "zones" or whatever, is fine. If the rules reflect that. Exact positioning, with a visual representation, is fine. But "exact positioning theoretically exists, and the rules interact with it, but it only exists in the GM's head and is communicated to the players a bit at a time" sucks for anything even a little complex. And I say this from a GM POV.

  14. - Top - End - #104
    Ogre in the Playground
     
    deuterio12's Avatar

    Join Date
    Feb 2011

    Default Re: Transhumanism

    Ancient monks believed they could become divine beings with lots of special training.

    Egyptians developed all sorts of fancy mummification methods hoping the dead would get a second, better life.

    Many people argued that depending on your deeds you could reincarnate into higher beings.

    There was Achilles being bathed in the river Styx for invulnerable skin.

    A good part of alchemy was trying to attain immortality by taking exotic drugs while you were still alive.

    IMHO transhumanism is all of the above with a nice fresh coat of paint. People always want more, news at eleven.

    Thing is, it assumes there's a limit to human greed. But as seen right now in the real world with super corporations and whatnot, human greed is basically impossible to satisfy for any significant length of time.

    So even if there's a "singularity" and you get to replace your crappy flesh body with a super shiny metal body and upload your mind to a hyper processor, I predict transhumanists will get bored of it in about 5 picoseconds and then start talking about trans-transhumanism or some other fancy name.
    Last edited by deuterio12; 2018-01-09 at 10:09 AM.

  15. - Top - End - #105
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: Transhumanism

    I am kind of curious where transhumanism is going to go once the options start really picking up. Because a lot of the modifications people half-jokingly dream about are often not the most practical ones. There is a nice SMBC comic about the obvious one (surprisingly SFW), and the almost standard image of a half-robotic body also doesn't seem that practical. It just seems a matter of time before something burns out. It'd be a big annoyance until machines are up to our standards of regeneration. You never realize how great it is not to worry about every scrape or paper cut until they stay visible on your nice shiny arm forever. You'd become car-paint-paranoid about your body.

    Some other ones I've heard are having some sort of gun implanted in your arm, because that's totally a thing you'd never ever want to put down somewhere for a few minutes when that's easier, and becoming a large dinosaur and later a planet. While I admit it'd be kind of cool to be a dinosaur, I don't think there are a lot of jobs for dinosaurs that pay well enough to repay the surgery, or the restaurant bill, among other issues with being a man-eating school bus. Even just looking like a young Arnold Schwarzenegger without having to do any work for it has to have its downsides, mostly to do with narrow hallways and reaching stuff above your head. (Okay, I'm reaching here. Let's move on.) (No, but seriously, having surgery done with the intention to look good but natural is going to get even bigger.)

    Something I definitely can see happening is brain chips. But even there I'm wondering if the smartest design would actually be to put the chip in your head, or just a port or receiver/transmitter to connect to the chip. Given how often computers break or go out of date, a brain chip would mean brain surgery every few years. And even a device merely interfacing with your brain might not actually be that much handier than any other well-designed future smart device with a good user interface and input and output devices. Of course, further down the road we might be talking less about cool devices connecting to your brain and more about improving your brain, integrating stuff into it. That gets kind of creepy when you think about it, but it also seems rather unavoidable.

    We have another thread running about tiny people. That's one I can actually see as well. A human mind in a significantly smaller body becomes a much more efficient city dweller. It takes up less space, needs fewer daily fresh goods, and it opens up a lot of transportation options, like wings (built in, or just using tiny powered hang gliders; either way, if it works well enough everyone can ride straight towards their goal at constant bicycle speed or more, and that's huge for city traffic), climbing (mechanical spider body, anyone?) or just buses with so many seats it'll make you dizzy.

    I wouldn't say I'm a transhumanist, not by a long shot. I'm one of those people who, if a good cyborg add-on were available tomorrow, would think it's lame and useless until about 10 years after the hype is gone, when I try it and become a total fanboy. That's not just a cyborg thing by the way, I have the same response to anything cool. I am curious where it could go though. It's almost a shame we won't get very far in my lifetime.
    Last edited by Lvl 2 Expert; 2018-01-09 at 11:12 AM.
    The Hindsight Awards, results: See the best movies of 1999!

  16. - Top - End - #106
    Bugbear in the Playground
     
    PirateGirl

    Join Date
    Dec 2013

    Default Re: Transhumanism

    Quote Originally Posted by Devils_Advocate View Post
    I'm not sure what you think evolution is if you think evolution can't happen in a single lifetime. If a sudden, drastic change in the environment wipes out 90% of a species because only 10% can survive in the changed environment, that seems like evolution to me.
    To be a bit nitpicky, this is not an accurate idea of evolution. Evolution is a process of change in a species over time to fit their environment. 90% of a species being wiped out due to a sudden event isn't evolution. The Dodo bird did not evolve to become extinct.

    Evolution is only something which describes a process occurring over populations and over many generations. Unless you're a writer for Star Trek, of course.
    I write a horror blog in my spare time.

  17. - Top - End - #107
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by BeerMug Paladin View Post
    To be a bit nitpicky, this is not an accurate idea of evolution. Evolution is a process of change in a species over time to fit their environment. 90% of a species being wiped out due to a sudden event isn't evolution.
    Well, it can be selection. A species that bottlenecks will often not have random survivors; rather, the 10% that's left will have an above-average immune system, below-average energy consumption, or whatever the solution to the problem causing the bottleneck was.

    Evolution is a process driven by mutation and selection (pretty much every single person out there willfully misunderstanding evolution to prove any sort of point has trouble telling these two apart; in my opinion, if you can use just these two terms well, you understand evolution pretty well), and those two are often not steady, balanced forces. There are times when a species thrives and its genetic variation blossoms, and there are times of strict selection. (Which does not have to be in the form of a bottleneck, a time of very small population after a massive die-off, but it can be.) Humans, for instance, are now diversifying, and we really needed that, because we have gone through a bottleneck several times in the semi-recent past. One of the last major ones even happened as, or just after, a bunch of us left Africa, and you can still see in large-scale genetic studies that the diversity among non-black people is even lower than among those of African descent.
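
    If it helps, those two forces are easy to caricature in a few lines of code. The trait, the numbers and the breeding scheme below are all made up; it's a toy, not population genetics:
    Code:
    # Toy model of mutation + selection acting on a single numeric "trait".
    import random
    random.seed(0)
    population = [0.5] * 100                    # everyone starts identical

    for generation in range(50):
        population.sort(reverse=True)
        survivors = population[:50]             # selection: top half survives
        population = [max(0.0, p + random.gauss(0, 0.02))   # mutation: noisy copies
                      for p in survivors for _ in (0, 1)]

    print(round(sum(population) / len(population), 2))       # mean trait has drifted upward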
    The Hindsight Awards, results: See the best movies of 1999!

  18. - Top - End - #108
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by Devils_Advocate View Post
    If I learn from feedback, it changes me. I become different in that I now have knowledge that I previously lacked.

    I don't think I understand what distinction you're trying to make here.
    Feedback changes the observers, minutely, okay, yeah, but normally the observers aren't actually in the loop, so changes to them don't affect the behaviour of the loop they are watching. When they are in the loop, changes to them change the results of the feedback, and that's at the least less predictable than otherwise.

    I'm not sure that you understood the quote that this is in reply to. Do you mean to suggest that natural selection involves organisms being deliberately designed to survive?
    Deliberately would imply that natural selection is conscious; I suggest no such thing.

    A mutation that doubles an organism's lifespan but renders it infertile is very highly selected against. Individual survival serves reproduction, not vice versa. Not living at all doesn't prevent a type of thing from becoming common so long as it can reproduce. Just look at viruses.

    Which only makes sense, if you think about it. The things that best become more numerous over time are the things of which the most new instances get made. That's pretty much a tautology, so far as I can see.

    Individual reproduction is far from the be-all end-all either, mind you. If you die young without having any offspring but in a way that nevertheless increases the frequency of your traits, that works too! They can even be acquired traits, e.g. memes.
    I was referring to reproductive survival of a gene pool, not individual lifespans; I thought that was obvious. Or are you taking the mickey?

    I am not personally convinced that viruses aren't alive, but I don't think that matters much.

    I'm not aware of any anti-Darwin behavior on Yudkowsky's part. You seem to interpret some of his writing as somehow anti-Darwin due to you having a proverbial screw loose. It seems a reasonable suspicion without other evidence, I know there are a lot of nutjobs out there.
    He appeared to be trying to say that peacocks couldn't arise by natural selection; we know there are peacocks, so presumably he was trying to say natural selection doesn't always apply. If he was saying that, he's anti-Darwin; if he wasn't, then whatever he was trying to say, he wasn't saying it clearly.

    I'm not sure what you think evolution is if you think evolution can't happen in a single lifetime. If a sudden, drastic change in the environment wipes out 90% of a species because only 10% can survive in the changed environment, that seems like evolution to me.
    Evolution (slow) is often contrasted with revolution (fast); there is plenty of room for natural selection in both.

    Do you think that peacock tails aren't the product of sexual selection? Do you think that they involve no survival disadvantages?
    I think sexual selection is a provocative name for a process that falls fairly within the bounds of natural selection. I think that the odds are that sudden large changes to peacock tails would turn off the peahens. There are costs, but there are advantages which presumably outweigh those costs. Life is always a matter of balancing costs against advantages.


    Friendly AIs are caused to want to be Friendly, but they're not caused against their will to want to be Friendly.
    They are caused against natural selection to want to be friendly. Entropy grinds mountains down, it doesn't care about billions of years, it gets results. It probably doesn't care either way about getting those results, but it gets them anyway. I'm pretty sure that natural selection is caused by entropy, or the causes of entropy.

    Being caused against one's will to do something is what I mean by "forced"; that seems to be the approximate general usage in this sort of context. My point was that among a population of Friendly AIs, prohibiting unFriendly AI is like prohibiting assault and theft among a population of humans: that is to say, we're talking about benevolent "common sense" laws that the populace approve of because they serve their interests. Furthermore, in this case the population is the police; we're not talking about restrictions being imposed on them from the outside.
    Prohibiting assault and theft? There is a lot of that about amongst us humans, if you read the news.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  19. - Top - End - #109
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    Quote Originally Posted by deuterio12 View Post
    Ancient monks believed they could become divine beings with lots of special training.

    Egyptians developed all sorts of fancy mummification methods hoping the dead would get a second, better life.

    Many people argued that depending on your deeds you could reincarnate into higher beings.

    There was Achilles being bathed in the river Styx for invulnerable skin.

    A good part of alchemy was trying to attain immortality by taking exotic drugs while you were still alive.

    IMHO transhumanism is all of the above with a nice fresh coat of paint.
    I agree that they're basically the same, but I'd describe the above as early transhumanism, rather than transhumanism as a new form of the above. You seem to assume that humanity's loftiest ambitions are obviously childish fantasies and that the mature thing to do is to abandon them as unattainable. And there are undoubtedly some modern transhumanists who'd agree with that and say that their goals are fundamentally different from such primitive superstitious nonsense. But the counterpoint to that fairly cynical perspective is the contrasting idealistic notion that the attainment of humanity's loftiest ambitions is finally within our grasp.

    Human flight was "impossible"... and then human beings actually flew. Transmuting lead into gold is something that humans are able to do now, albeit not cost-effectively. So if someone thinks e.g. that the aging process will never be halted because people have been working towards that goal for millennia without success, that person is rather living in the past. This is the modern era, in which we've learned enough that such millennia-long struggles are finally being fulfilled.

    A yearning for transcendence is indeed one of humanity's oldest, most persistent desires. And obviously those for whom the whole concept of spirituality has acquired a negative connotation will seek to avoid spiritual associations. But it's entirely possible to see and to celebrate coming advances as the long-delayed fulfillment of that ancient journey towards becoming something more.

    In short, painting over similarities to much earlier groups is strictly optional.

    People always want more, news at eleven.

    Thing is, it assumes there's a limit to human greed. But as seen right now in the real world with super corporations and whatnot, human greed is basically impossible to satisfy for any significant length of time.
    Given that, why not stop being greedy? Appreciate what you have, that sort of thing?

    I'm pretty sure that a lot of people's answer to that question boils down to "Uh, that's hard, dude". Assuming that that's the case, the ability to just make someone feel satisfied and happy at the press of a button would be rather a paradigm shift, wouldn't it? Cheap, safe, Nirvana in a pill with no side effects seems like it would be more than a bit of a game changer. And that's just one example of a paradigm-shifting technology that we could have within a century.

    Now, some people would prefer to avoid that, on the grounds that such a radical change to personality replaces the original person with someone different, or that artificial happiness isn't genuine and has no value, or that continually striving for more is our purpose as an end in itself, or any number of other objections. But that would be a conscious rejection of satisfaction, rather than failed pursuit of it.

    The easier that it becomes to do things, the more that problems take the form of conflicts between competing values. It comes to a point where greed only defines the future, if indeed it does, because someone wants it to. Where other values can equally well define the future if chosen instead. Which... Well, it's nice to have options, eh?

    So even if there's a "singularity" and you get to replace your crappy flesh body with a super shiny metal body and upload your mind to a hyper processor, I predict transhumanists will get bored of it in about 5 picoseconds and then start talking about trans-transhumanism or some other fancy name.
    Personally, I hope that beings will be given the freedom to decide our own futures, based on our own values. So, should you wish that your satisfaction always be short-lived -- should you prefer to be ever chasing after the "new best thing" -- then I wish you the best of luck with that. But, should you rather that it be otherwise, then I wish you the best of luck with that too. :)

    Quote Originally Posted by BeerMug Paladin View Post
    Quote Originally Posted by Devils_Advocate View Post
    I'm not sure what you think evolution is if you think evolution can't happen in a single lifetime. If a sudden, drastic change in the environment wipes out 90% of a species because only 10% can survive in the changed environment, that seems like evolution to me.
    To be a bit nitpicky, this is not an accurate idea of evolution. Evolution is a process of change in a species over time to fit their environment. 90% of a species being wiped out due to a sudden event isn't evolution. The Dodo bird did not evolve to become extinct.

    Evolution is only something which describes a process occurring over populations and over many generations.
    Please clarify: Do you think that the sort of die-off that I described can't happen, just that it "isn't evolution"? Because the phrasing of your first sentence above kind of suggests the former, to me, whereas the last one suggests the latter.

    Basically, is this a purely semantic argument? Because I should warn you that that's sort of my forte.

    Spoiler: On the subject of feedback loops and control
    Show
    Quote Originally Posted by halfeye View Post
    Controlling things is theoretically much easier if you're not in a feedback loop which also contains them.
    Quote Originally Posted by Devils_Advocate View Post
    If you modify your methods based on observations of how they work, then isn't that a feedback loop of attempts --> results --> observations --> analyses --> hopefully improved understanding --> attempts? That seems like a better basis for control than not changing what you do based on how things perform.
    Quote Originally Posted by halfeye View Post
    Yes, that's how a feedback loop works if the operator/observer is outside the loop, and thus in control of it. When the feedback changes YOU, it's a whole different can of worms.

    It's the operator/controller being changed by the feedback that makes a very significant difference.
    Quote Originally Posted by Devils_Advocate View Post
    If I learn from feedback, it changes me. I become different in that I now have knowledge that I previously lacked.

    I don't think I understand what distinction you're trying to make here.
    Quote Originally Posted by halfeye View Post
    Feedback changes the observers, minutely, okay, yeah, but normally the observers aren't actually in the loop, so changes to them don't affect the behaviour of the loop they are watching. When they are in the loop, changes to them change the results of the feedback, and that's at the least less predictable than otherwise.
    If you attempt to modify your methods of control based on past performance, then you are in a feedback loop containing the things that you are attempting to control, as described above. It seems to me that attempting to refine one's methods based on trial and error leads to improvement in many cases, and thus that being in a feedback loop containing the things that you are attempting to control often makes controlling them easier, not harder.

    Messing around with things in new ways does tend to make them less predictable, but that's kind of the point. It's helpful to collect new data about how things work under various conditions and such.
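
    To make the contrast concrete, here's a toy version of a controller that is itself changed by the feedback, and gets better for it; the target and the step size are arbitrary:
    Code:
    # The "method" here is just a guess, and the feedback is what updates it.
    target = 42.0
    guess = 0.0
    for attempt in range(20):
        error = target - guess     # observe how the last attempt performed
        guess += 0.5 * error       # change the method based on that observation
    print(round(guess, 3))         # converges toward 42.0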

    Spoiler: On the subject of natural selection and deliberate design goals
    Show
    Quote Originally Posted by Devils_Advocate View Post
    Natural selection isn't dominated by deliberate design goals; indeed, there are none.
    Quote Originally Posted by halfeye View Post
    Except survival. Wings, eyes, intelligence, teeth, if any of them are contrary to survival then they go, but survival is required.
    Quote Originally Posted by Devils_Advocate View Post
    I'm not sure that you understood the quote that this is in reply to. Do you mean to suggest that natural selection involves organisms being deliberately designed to survive?
    Quote Originally Posted by halfeye View Post
    Deliberately would imply that natural selection is conscious; I suggest no such thing.
    I posted that there are no deliberate design goals in natural selection, to which you replied "Except survival". Given the context of what you were replying to, what was "Except survival" supposed to mean if not that there are no deliberate design goals except survival in natural selection, i.e. that survival is a/the deliberate design goal in natural selection?

    Spoiler: On the subject of survival and evolutionary theory
    Show
    Quote Originally Posted by Devils_Advocate View Post
    Huh? We want things other than to survive, although we want many of those things because they facilitated our ancestors' survival. Evolutionary psychology isn't about all goals other than living being weeded away over time. I'm guessing that that isn't actually what you were trying to say; my point, really, is that it's not clear what "changes all goals to survival" is supposed to mean. So, uh... try again, maybe?

    Traits don't even need to facilitate individual or kin survival in order to be selected for, though.

    Spoiler: This is some of Yudkowsky's writing, as it happens.
    Show
    Suppose that there’s some species—let’s call it a “tailbird”—that happens to have a small, ordinary, unassuming tail. It also happens that the tails of healthy tailbirds are slightly more colorful, more lustrous, than the tails of tailbirds that are sick, or undernourished. One day, a female tailbird is born with a mutation that causes it to sexually prefer tailbirds with bright-colored tails. This is a survival trait—it results in the selection of healthier male mates, with better genes—so the trait propagates until, a few dozen generations later, the entire species population of female tailbirds prefers bright-colored tails.

    Now, a male is born that has a very bright tail. It’s not bright because the male is healthy; it’s bright because the male has a mutation that results in a brighter tail. All the females prefer this male, so the mutation is a big success.

    This male tailbird isn’t actually healthier. In fact, this male is pretty sick. More of his biological resources are going into maintaining that flashy tail. So you might think that the females who preferred that male would tend to have sickly children, and the prefer-bright-tails trait would slowly fade out of the population.

    Unfortunately, that’s not what happens. What happens is that even though the male has sickly children, they’re sickly children with bright tails. And those children also attract a lot of females. Genes can’t detect “cheating” and instantly change tactics; that’s a monopoly of conscious intelligence. Any females who prefer the non-bright-tailed males will actually do worse. These “wiser” females will have children who are, sexually, out of fashion. Bright tails are no longer a survival advantage, but they are a very strong sexual advantage.

    Selection pressures for sexual advantages are often much stronger than selection pressures for mere survival advantages. From a design perspective this is stupid—but evolution doesn’t care. Sexual selection is also a Red Queen’s Race (Ridley 1994): It involves competition with conspecifics, so you can never have a tail that’s “bright enough.” This is how you get peacocks.

    The broader point here is that the goals of others are part of your environment, and thus determine what qualifies as fitness.
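
    For what it's worth, the runaway dynamic in that passage is easy to reproduce in a toy simulation. Everything below (the starting frequencies, the survival penalty, the mating rule, the lack of separate sexes) is invented for the sketch; it illustrates the logic, not real population genetics:
    Code:
    # Toy runaway sexual selection: a "bright tail" trait with a survival cost
    # still spreads, because choosy individuals preferentially mate with it.
    import random
    random.seed(1)
    N = 200
    # Each individual is (has_bright_tail, prefers_bright_tail).
    pop = [(random.random() < 0.1, random.random() < 0.6) for _ in range(N)]

    for generation in range(100):
        # Survival: bright tails carry a small viability cost.
        survivors = [ind for ind in pop
                     if random.random() < (0.8 if ind[0] else 0.9)]
        bright = [ind for ind in survivors if ind[0]]
        offspring = []
        while len(offspring) < N and survivors:
            parent_a = random.choice(survivors)
            # Choosy parents pick a bright-tailed partner whenever one exists.
            pool = bright if (parent_a[1] and bright) else survivors
            parent_b = random.choice(pool)
            offspring.append((random.choice([parent_a[0], parent_b[0]]),
                              random.choice([parent_a[1], parent_b[1]])))
        pop = offspring

    print(sum(ind[0] for ind in pop) / len(pop))  # bright tails end up common despite the cost
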
    Quote Originally Posted by halfeye View Post
    While the peacock example seems useful at first glance, it's actually not what it seems.
    Quote Originally Posted by Devils_Advocate View Post
    What do you mean? What do you think it seems to be?
    Quote Originally Posted by halfeye View Post
    To me it seems to be a badly thought through attempt to disprove evolutionary theory.
    Quote Originally Posted by Devils_Advocate View Post
    I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?
    Quote Originally Posted by halfeye View Post
    Survival rules all. If there is no survival, there is nothing. Sexual selection is possible so long as it favours survival.
    Quote Originally Posted by Devils_Advocate View Post
    A mutation that doubles an organism's lifespan but renders it infertile is very highly selected against. Individual survival serves reproduction, not vice versa. Not living at all doesn't prevent a type of thing from becoming common so long as it can reproduce. Just look at viruses.

    Which only makes sense, if you think about it. The things that best become more numerous over time are the things of which the most new instances get made. That's pretty much a tautology, so far as I can see.

    Individual reproduction is far from the be-all end-all either, mind you. If you die young without having any offspring but in a way that nevertheless increases the frequency of your traits, that works too! They can even be acquired traits, e.g. memes.
    Quote Originally Posted by halfeye View Post
    I was referring to reproductive survival of a gene pool, not individual lifespans; I thought that was obvious. Or are you taking the mickey?
    Yudkowsky and I were referring to individual and kin survival, not the survival of hereditary traits. I thought that was obvious. Are you trolling, bro?

    Anyway, if you think that the peacock example is actually not what it seems and that it seems to be a badly thought through attempt to disprove evolutionary theory, then you're right: It's not a badly thought through attempt to disprove evolutionary theory. It's not an attempt to disprove evolutionary theory at all.

    See, that was joking about your particular choices of words. But to "I thought that you would take my statements in context", I can only reply "Uh, yeah, right back at ya".

    Quote Originally Posted by halfeye View Post
    He appeared to be trying to say that peacocks couldn't arise by natural selection
    He didn't appear to be trying to do that to me. You seem to interpret his writing in a rather unusual way due to some manner of craziness on your part.

    Let me try to illustrate the problem with your assertion. Suppose that I were to say "It seems to me based on what you've posted that you believe in some version of intelligent design". You would then be justified, would you not, in wondering how I formulated such a suspicion? Without some sort of chain of reasoning connecting that suspicion to your statements, would it not be reasonable to regard my assessment as pretty much a fairly crazy thing to say? How does one even engage with such an accusation?

    But suppose that I then said "You appear to believe that natural selection must proceed towards particular ends in all cases, regardless of circumstances, which suggests either a guiding intelligence or the functional equivalent". That would hopefully give more of an idea of why I suspected as I did, and would provide you more of a basis for some sort of response. You might not agree with my reasoning, but by presenting it I would allow you to criticize it and perhaps to correct misunderstandings on my part.

    Do you see what I'm getting at? You present no reasoning showing how your attribution of a motive to Yudkowsky is based on his writing. So it comes off as just "a crazy thing said by a crazy person". A non sequitur, if you will. Whereas if you presented some sort of explanation of why you believe the crazy-seeming thing that you said, then maybe it would seem less crazy! And even failing that, it might give me the opportunity to correct some sort of mistake on your part.

    If he was saying that, he's anti-Darwin; if he wasn't, then whatever he was trying to say, he wasn't saying it clearly.
    Wow, talk about the pot calling the kettle black. Quite frankly, many of your statements seem to me like so much nonsense with no obvious sensible meaning. Meanwhile, I can't recall ever having much difficulty understanding Yudkowsky's writing. I don't always agree with it, but it almost always seems clear to me what he's trying to say.

    Evolution (slow) is often contrasted with revolution (fast); there is plenty of room for natural selection in both.
    I can't recall ever having seen evolution described as definitively slow before, so this seems rather No True Scotsman to me. Furthermore, I don't see how or, more to the point, why you'd draw the line between "fast" and "slow". The distribution of heritable traits in a population changing quickly isn't qualitatively different from the distribution of heritable traits in a population changing slowly. How is the distinction relevant?

    I think sexual selection is a provocative name for a process that falls fairly within the bounds of natural selection.
    I seem to recall hearing that sexual selection was considered somewhat provocative when Darwin first proposed it, due to the Puritanism of the times or something, but I gather that it's well-accepted and the term seems to be standard now, so...

    Wait. Did you think that sexual selection was being proposed as an alternative to natural selection? Because if that's what you thought, then you're just miles off, like someone thinking that calculus is supposed to be an alternative to math. Calling it "anti-Darwin" is just the crowning jewel of wrongness, like calling calculus "anti-Newton".

    I still don't know why you would think that in the first place, but I think that we may be zeroing in on your loose screw! Well, one of your loose screws.

    They are caused against natural selection to want to be friendly.
    Is flight likewise "against gravity"?

    By "natural selection", are you referring to something other than the things that best become more numerous over time being the things of which the most new instances get made? If so, could you explain what?

    If not, then natural selection causes Friendly AIs to want to be Friendly in an environment in which Friendliness is most reproduced. And Friendly AIs wanting to be Friendly creates such an environment. So it's a self-sustaining system.

    If you want to argue on some other grounds that such an environment is impossible, that's one thing, but it makes no sense to argue that such an environment goes against natural selection if natural selection maintains such an environment.

    Entropy grinds mountains down, it doesn't care about billions of years, it gets results. It probably doesn't care either way about getting those results, but it gets them anyway. I'm pretty sure that natural selection is caused by entropy, or the causes of entropy.
    I'm pretty sure that processes in general are overall entropic. Isn't that the 2nd Law of Thermodynamics? Natural selection isn't a special case in that regard. So... you seem to be describing a way in which natural selection is the same as everything else that happens in the universe, and not subject to any special magic. I don't see a connection between the evolution of life and the grinding down of mountains in particular.

    Prohibiting assault and theft? There is a lot of that about amongst us humans, if you read the news.
    See, this is an example of you being unclear. I'm not sure whether you don't know that prohibiting isn't the same thing as preventing, or whether your point is that they aren't the same. What are you even trying to get at here?
    Quote Originally Posted by icefractal View Post
    Abstract positioning, either fully "position doesn't matter" or "zones" or whatever, is fine. If the rules reflect that. Exact positioning, with a visual representation, is fine. But "exact positioning theoretically exists, and the rules interact with it, but it only exists in the GM's head and is communicated to the players a bit at a time" sucks for anything even a little complex. And I say this from a GM POV.

  20. - Top - End - #110
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  21. - Top - End - #111
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    So, just trolling then? I had begun to expect as much. Pretty well done, if so. You do a convincing job of seeming not quite right in the head and yet not so crazy as to not be worth conversing with at all. That's a difficult balance to pull off!
    Quote Originally Posted by icefractal View Post
    Abstract positioning, either fully "position doesn't matter" or "zones" or whatever, is fine. If the rules reflect that. Exact positioning, with a visual representation, is fine. But "exact positioning theoretically exists, and the rules interact with it, but it only exists in the GM's head and is communicated to the players a bit at a time" sucks for anything even a little complex. And I say this from a GM POV.
