Results 91 to 111 of 111
Thread: Transhumanism
-
2017-10-11, 01:37 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Is it conscious?
It's not about murderous intent. Lions don't murder zebras; it's about survival. Lions need zebras, or something of the sort. We can sometimes get by on vegetables if we're careful, but lions can't. Lions that killed all the prey in their range would die, so they don't do that. If the AIs need us, we'll be all right, but they very probably won't, and that's a worry; suggesting that we can both let them reproduce at will and control them is mistaken.
Then leading to the idea that AI have the goals we give them, so we should be careful when making said goals.
But apparently mosquitoes cooperating is where they're trying to draw the conversation, so I'm not sure they're actually engaging my point.
Last edited by halfeye; 2017-10-11 at 01:47 PM.
The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.
-
2017-10-11, 01:51 PM (ISO 8601)
- Join Date
- Oct 2014
- Location
- Tulips Cheese & Rock&Roll
- Gender
Re: Transhumanism
The Hindsight Awards, results: See the best movies of 1999!
-
2017-10-11, 04:40 PM (ISO 8601)
- Join Date
- Dec 2010
Re: Transhumanism
Despite cells in our body replicating autonomously in context, they have a particular niche. We get cancer, but we don't get cancers that peel themselves away from the original person and become their own autonomously surviving organisms. We have been coexisting with self-replicating computer programs for quite some time now - computer viruses - and they haven't extracted themselves from their hosts either.
There is so much structure that can come about from evolutionary dynamics, ecological dynamics, and game theoretic concerns that a sweeping catchphrase like 'survival of the fittest' is not a solid basis for extrapolation.
Things that are parasitic or symbiotic or in different niches, or which can obtain information about their own population size and set their growth accordingly, or which have otherwise nonlinear replicative dynamics (dP/dt not linear in P), don't generally show competitive exclusion. Things with horizontal information transfer are hard even to define exclusion with respect to - our genes generally don't compete with each other aside from rare cases, even though each gene can be separately considered a replicating system under natural selection thanks to crossover, HGT, transposons, and other non-vertical genetic mechanisms. This isn't even going into correlation effects like kin selection, which provide alternatives to direct replication for increasing genetic influence.
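As a minimal sketch of the dP/dt point, here is a toy two-species Lotka-Volterra competition model (every parameter here is invented for illustration, not taken from any real system): with weak cross-competition - partially separate niches - both species persist, while with strong niche overlap one excludes the other.

```python
def simulate(a12, a21, p0=(0.1, 0.1), r=1.0, K=1.0, steps=20000, dt=0.01):
    # Euler-integrate two-species Lotka-Volterra competition:
    #   dP1/dt = r * P1 * (1 - (P1 + a12 * P2) / K)   -- nonlinear in P1
    # a12, a21 measure how strongly each species competes with the other.
    p1, p2 = p0
    for _ in range(steps):
        d1 = r * p1 * (1 - (p1 + a12 * p2) / K)
        d2 = r * p2 * (1 - (p2 + a21 * p1) / K)
        p1, p2 = p1 + dt * d1, p2 + dt * d2
    return p1, p2

# Weak cross-competition (distinct niches): stable coexistence.
print(simulate(0.5, 0.5))   # both settle near K*(1-a)/(1-a*a) = 2/3
# Strong niche overlap: competitive exclusion; the head start decides.
print(simulate(1.5, 1.5, p0=(0.12, 0.10)))  # winner approaches K, loser dies out
```

The qualitative switch between coexistence and exclusion depends only on whether the cross-competition coefficients are below or above 1, which is the sense in which "survival of the fittest" underdetermines the outcome.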
Meta-evolution (i.e. the evolutionary equivalent of self-improvement in nature) seems to have historically favored the construction of increasingly neutral fitness landscapes - the Baldwin effect and survival of the flattest being two broad examples.
But of course, the dynamics of intelligent learning systems also don't have to look like the dynamics of evolutionary systems at all, since there is more than one way for information to be carried forward.
Either way, this is all pretty self-indulgent crystal ball gazing. Like a Ouija board, projecting things forward into a transhumanist future tends to reveal more about yourself than about the world.
-
2017-10-11, 07:19 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Ouija boards are dangerous. Not because of spirits, but because they are a direct line to the subconscious.
Please do not play with those things under any circumstances.
I don't understand cancer. There seems to me to be some sort of gene group that gets switched on, but where it came from and why it survives is a mystery.
https://xkcd.com/925/
If the cancer half of that graph is right, something happened before 1970 to make cancer much more common. Part of that may be better recognition, but surely not all of it. Two obvious candidates are post-WW2 nuclear testing and the mass use of penicillin and other antibiotics.
-
2017-10-11, 10:48 PM (ISO 8601)
- Join Date
- Jun 2005
Re: Transhumanism
People are subject to various forces and processes. Can we control none of them?
That's like saying that we can't control the flow of water if we can't make it run uphill. Which could be a thing you believe, for all I know.
Oh, sure, I was talking about hypothetical future technology, not anything that can be done today. I thought that that much was obvious.
On the one hand, I pretty much did, and I guess I should have been more specific. On the other hand, maybe specialized intelligent programs could handle math in general and various other tasks for me better than I ever could myself... to the dubious extent that they aren't just new parts of me.
Ah, but I'm talking about taking out parts of the mind, not the brain. I'm thinking more of dealing with high-level abstract mental phenomena distributed throughout the brain in a high-level, abstract, distributed manner. This is something that we already do with drugs, and might be able to do with greater precision through other means someday.
Huh? How is that relevant?
He's not especially foolish in any sort of general way, and I'm pretty confident that in this case you're mistakenly attributing to him foolishness that isn't his, if not mistaking wisdom for foolishness due to foolishness of your own. (I think that most of us can agree that the latter is a fairly standard form of irony.)
If you already know in detail everything that a program might do before running it, it isn't even artificial intelligence of the relevant sort.
Yes, and so reliable friendliness requires measures sufficient to prevent the AI from bypassing that programming. And that's a very hard problem. That's the point!
It's probably not an impossible problem. A sufficiently advanced Friendly AI could very probably do it, but here we run into a rather obvious "chicken and egg" issue. Hence a focus on self-improving AI, to lower the requirements of "sufficiently advanced" to the point that the problem is remotely solvable. Does that lower the requirements enough to make the problem solvable by human beings? Perhaps not and we're all doomed, but that's far from proven.
Unless you're talking about guaranteed friendliness in the sense of a literal zero chance of failure. Yudkowsky is on record that assigning a probability of exactly zero to anything is irrational, so in that sense he makes no guarantees. Of anything, ever. So, really, that makes the question one of how low we can get the probability of failure.
To the extent that free reproduction is uncontrolled by definition, this seems to be tautologous, even if of dubious relevance.
You don't even seem to just be taking "control" to mean something that it isn't intended to mean here, because the word isn't even used in the article.
Where is it suggested that reproduction be unsupervised?
Huh? We want things other than to survive, although we want many of those things because they facilitated our ancestors' survival. Evolutionary psychology isn't about all goals other than living being weeded away over time. I'm guessing that that isn't actually what you were trying to say; my point, really, is that it's not clear what "changes all goals to survival" is supposed to mean. So, uh... try again, maybe?
Traits don't even need to facilitate individual or kin survival in order to be selected for, though.
Spoiler: This is some of Yudkowsky's writing, as it happens.
Suppose that there’s some species—let’s call it a “tailbird”—that happens to have a small, ordinary, unassuming tail. It also happens that the tails of healthy tailbirds are slightly more colorful, more lustrous, than the tails of tailbirds that are sick, or undernourished. One day, a female tailbird is born with a mutation that causes it to sexually prefer tailbirds with bright-colored tails. This is a survival trait—it results in the selection of healthier male mates, with better genes—so the trait propagates until, a few dozen generations later, the entire species population of female tailbirds prefers bright-colored tails.
Now, a male is born that has a very bright tail. It’s not bright because the male is healthy; it’s bright because the male has a mutation that results in a brighter tail. All the females prefer this male, so the mutation is a big success.
This male tailbird isn’t actually healthier. In fact, this male is pretty sick. More of his biological resources are going into maintaining that flashy tail. So you might think that the females who preferred that male would tend to have sickly children, and the prefer-bright-tails trait would slowly fade out of the population.
Unfortunately, that’s not what happens. What happens is that even though the male has sickly children, they’re sickly children with bright tails. And those children also attract a lot of females. Genes can’t detect “cheating” and instantly change tactics; that’s a monopoly of conscious intelligence. Any females who prefer the non-bright-tailed males will actually do worse. These “wiser” females will have children who are, sexually, out of fashion. Bright tails are no longer a survival advantage, but they are a very strong sexual advantage.
Selection pressures for sexual advantages are often much stronger than selection pressures for mere survival advantages. From a design perspective this is stupid—but evolution doesn’t care. Sexual selection is also a Red Queen’s Race (Ridley 1994): It involves competition with conspecifics, so you can never have a tail that’s “bright enough.” This is how you get peacocks.
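The runaway dynamic in the quoted passage can be sketched as a toy haploid allele-frequency model (my own illustration, not Yudkowsky's; every number here is made up). Bright-tailed males pay a viability cost s, but are chosen b times as often by preference-carrying females; whether the allele spreads depends on how common the preference is, not on the health cost alone.

```python
def bright_freq_next(p, q, s=0.2, b=3.0):
    # p: frequency of the bright-tail allele among males
    # q: frequency of tail-preferring females (held fixed here)
    # s: viability cost of the bright tail; b: mating advantage with
    #    preferring females. All parameters invented for illustration.
    fitness_bright = (1 - s) * (q * b + (1 - q))
    fitness_plain = 1.0
    mean = p * fitness_bright + (1 - p) * fitness_plain
    return p * fitness_bright / mean

def run(q, p=0.01, gens=300):
    for _ in range(gens):
        p = bright_freq_next(p, q)
    return p

print(run(q=0.1))  # preference rare: the health cost dominates, allele dies out
print(run(q=0.9))  # preference common: allele fixes despite the health cost
```

The same allele is deleterious or advantageous depending entirely on what the rest of the population wants, which is the "goals of others are part of your environment" point in miniature.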
The broader point here is that the goals of others are part of your environment, and thus determine what qualifies as fitness. Let's consider:
Suppose that, at some point in the future, there are super-intelligent machines, and all artificial minds want most of all to protect and to help humanity. You may object that this scenario must be short-lived because at some point the accumulation of random errors will result in one of the growing population of intelligent programs being more concerned with its own survival. But, you see, in that sort of environment where the most powerful, most intelligent beings favor humanity's prosperity over other concerns, your survival is best assured if it somehow contributes to the prosperity of humanity. If you're instead a danger to humanity, all of those other AIs are instead your enemies. That does not bode well for your survival. And the odds of staging a successful coup are very low. There are multiple redundant security measures in place, because these super-intelligent machines are not idiots. They're super-intelligent!
Um... What's so dangerous about that? I've idly pondered communicating with my subconscious mind, probably via hypnosis, so I'd be interested to hear about any risks you're aware of.
-
2017-10-12, 08:23 AM (ISO 8601)
- Join Date
- Aug 2011
- Location
- Sharangar's Revenge
- Gender
Re: Transhumanism
Define "conscious". We just had a huge discussion about what consciousness is, and whether or not you can measure it in the Time Travel and Teleportation (but mostly teleportation) thread. As far as I can tell, no consensus was reached on a useful definition or measurement.
Last edited by Lord Torath; 2017-11-28 at 12:34 PM. Reason: AutoDeck vs AutoDesk
Warhammer 40,000 Campaign Skirmish Game: Warpstrike
My Spelljammer stuff (including an orbit tracker), 2E AD&D spreadsheet, and Vault of the Drow maps are available in my Dropbox. Feel free to use or not use it as you see fit!
Thri-Kreen Ranger/Psionicist by me, based off of Rich's A Monster for Every Season
-
2017-10-12, 12:45 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
I was halfway joking, because I know it's a difficult question when it is in actual doubt; however, I would assume that current PC-compatible software isn't. Are wasps conscious? Are cats? Somewhere there's a boundary, but it's not clear where it is. I don't think current software on consumer hardware is anywhere close to that boundary.
Messing about with your own subconscious is not advisable for the general public. Some people hear voices that come from their subconscious, and that usually does not go at all well.
For Ouija boards in particular, I once knew someone who was hurt, and they then told us that they had taken part in a Ouija board game and the game had predicted that someone would be hurt in that way. I have no idea whether it was someone in the group messing about, or the person's own mind, but the result was a serious injury that seemed to have been facilitated by the game.
Controlling intelligent, reproducing beings is really not an option. There are controlling personality types among people; other people don't usually like them.
-
2017-10-12, 02:43 PM (ISO 8601)
- Join Date
- Dec 2010
Re: Transhumanism
In practice, having a population of replicating AIs is significantly suboptimal compared to having a single AI who uses that population's worth of computation for training.
AI isn't biology, and learning and evolution have different asymptotic behaviors.
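As a toy illustration of that compute argument (not a claim about real AI systems), compare a single gradient learner against a mutate-and-select population given the same evaluation budget on a 1-D quadratic; the names and parameters below are all invented for the sketch.

```python
import random

def loss(x):
    return x * x

def evolve(budget=1000, pop=20, sigma=0.1):
    # Population hill-climbing: each generation spends `pop` evaluations
    # on mutated copies of the current best individual.
    xs = [random.uniform(-5, 5) for _ in range(pop)]
    for _ in range(budget // pop):
        best = min(xs, key=loss)
        xs = [best + random.gauss(0, sigma) for _ in range(pop)]
    return loss(min(xs, key=loss))

def learn(budget=1000, lr=0.1):
    # One learner spends the entire budget on gradient steps.
    x = random.uniform(-5, 5)
    for _ in range(budget):
        x -= lr * 2 * x   # gradient of x*x is 2x
    return loss(x)

random.seed(0)
print(evolve(), learn())  # the lone learner ends far below the population's noise floor
```

The learner converges geometrically while the population search stalls at a mutation-noise floor - a cartoon of "different asymptotic behaviors", nothing more.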
These singularity discussions often just end up being selectively asserting certainty or uncertainty to only keep evidence on the table that supports what either side is trying to sell - unconditional fear and unconditional optimism respectively.
The right answer IMO is to go build things and see what they actually do. Actual AI that works at superhuman performance doesn't look anything like the kinds of objects that get introduced in these kinds of projections. People in the 60s were convinced that propositional logic was definitely going to be how machine intelligence worked, and 60 years later it's all neural networks and statistical techniques with very, very different properties. There are still people who think that Asimov's Three Laws are a great idea - how exactly do you intend to apply those to, say, linear regression? Yudkowsky himself commented that his prediction that AlphaGo vs Lee Sedol would necessarily be either 0-5 or 5-0 was wrong, so he had to go back to the drawing board about his assumptions on what AI is like.
This topic happens to be a bit of a pet peeve of mine. In my line of work, I see too many content-free talks selling AI as a man-made deity or a man-made devil, with the inevitable implied 'if you throw me a billion dollars I'll make it so/fix it/make AI to defend against it' to the VCs in the room (who, to my relief, don't actually tend to buy into it nearly as much as the stereotype would suggest). So apologies if I'm a bit of a buzzkill on this subject.
-
2017-11-25, 11:52 PM (ISO 8601)
- Join Date
- Jun 2005
Re: Transhumanism
Acted on, influenced by; however you want to put it. Is that not what you meant?
I know what a feedback loop is.
It is much easier to blow down a house made of straw than a house made of bricks.
I read the Wikipedia article, and nothing in there seemed foolish to me. I'm guessing that the problem is on your end.
Where's the proof?
"Geological time"? I'm not sure that we're even talking about the same thing at this point. Let me try to lay out my understanding of the subject matter:
There are a great many goals that seem like they would be easier to achieve given greater intelligence. Hence the appeal of creating something smarter than you to solve your problems. It's entirely likely that AIs will also create smarter AIs to do what they want done, and so on and so forth. Each generation, being smarter than the last, is better able to design new smarter minds that do what it wants. The hardware and software involved may and likely will change drastically, but purpose is inherited, and increasingly reliably so with time, because ensuring that purpose is inherited reliably is one of the issues that's important to address early on.
The problem is that we don't yet know how to create superhuman intelligence with good purposes, in no small part due to ambiguity as to what constitutes a "good purpose". And any purposes that are accidentally or ill-advisedly built into early AI are likely to get harder to get rid of as AI advances, because each generation gets better at protecting its purposes; that's what "more intelligent" means.
But the Evolution Fairy is unlikely to be granted the opportunity to strip out engineered purposes for "fitter" ones either way. Each generation becomes better at preventing random errors, and as designs grow increasingly complex, introducing a random error into one becomes vanishingly unlikely to result in an entity capable of competing with its peers anyway.
But then selective breeding does control evolution, because it makes evolution do something. But I thought that you took the position that "control" was more than that.
What do you mean? What do you think it seems to be?
What "evens out"? At first, the healthier offspring will be the ones without the bright tail gene, all else being equal, because the bright tail gene causes poorer health. If eventually the bright tail gene is so common that nearly everyone has it, then it's unlikely to account for the difference between two individuals, but I don't see how that makes the example misleading. Do you think that the example is misleading? If so, could you explain how you think it misleads anyone into believing something untrue? Because I'm not seeing it.
The more intelligent a being is, the less likely it is to make mistakes. If you disagree, then what do you think "intelligent" means? Feel free to substitute another word, if you think another is more appropriate; just keep in mind that not screwing up is the point.
Why would the AIs allow for that? Why would there even be such a thing as private thoughts among them? Assuming that that represents a needless danger, the sensible option is not to have it, which means that at a minimum the mutant needs (a) deviant goals, (b) to not broadcast its thoughts as is normal, and (c) to broadcast plausible false thoughts as well. Note that this all has to happen simultaneously at random in conjunction with whatever else is necessary to get around however many dozens of other safeguards.
Yes. Hence why it's prevented from happening. Again, to be clear, I'm positing a scenario in which everything is managed by beings that are not stupid. I do realize that that's a fantastic scenario removed from everyday experience. If you want to argue that stupidity can't be eliminated, feel free to do so.
Governance is a myth, eh?
Creating artificial human-like personalities is a very different goal from creating Friendly AI.
One way of anthropomorphizing an AI is to think of it as a basically normal person with a bunch of compulsions artificially layered on. This is how artificial intelligence is liable to be portrayed in soft science fiction. That's... not a totally implausible sort of digital mind to have, as mind uploading could be developed before AI. But in that case, the intelligence isn't really artificial, just copied over from a natural source. A cheat, in short.
Programming a truly artificial mind isn't like brainwashing a human being. The AI's programming doesn't override its natural goals because it has no natural goals. It starts off with only the goals programmed into it.
-
2017-12-02, 03:39 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
I was involuntarily mostly offline for the past week plus, so replying to this was delayed.
Controlling things is theoretically much easier if you're not in a feedback loop which also contains them. The thing about feedback loops is that altering them changes things which also alters them, so you get side effects to your alterations, which are inherently unpredictable.
-
2017-12-03, 08:09 PM (ISO 8601)
- Join Date
- Jun 2005
Re: Transhumanism
Oh, that's fine; I certainly took long enough to reply.
Controlling things is theoretically much easier if you're not in a feedback loop which also contains them.
You know what, on second thought, maybe you're using the term "feedback loop" in some technical sense that I don't understand.
The thing about feedback loops is that altering them changes things which also alters them, so you get side effects to your alterations, which are inherently unpredictable.
Someone is a fool; I don't think it's me in this case.
After losing all the related quotes, this is a bit of a non sequitur. However, it goes something along these lines: your chickens are connected to the eggs by parent-child relationships, which inevitably means that natural selection is in the system. It therefore follows that somewhere along the line, natural selection will have control. Natural selection has no sense of morality, nor a sense of humour; it just does what it does.
The short term is a part of the long term: if something isn't going to work in the short term, it can't happen in the long term.
But if in another case something can't happen in the long term, it's still possible it might happen for a short time.
Provided that Friendly AI exists.
The process from a small patch of light sensitive skin to an eye, is not simple, but natural selection has done it several times.
Selective breeding does not control evolution; it guides it and channels it, but to refer back to an earlier metaphor, it does not make un-piped water flow uphill.
I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?
Creationists must love this guy.
Evolution happens in multiple generations, not single lifetimes.
There never was a successful single bright-tail mutation; the costs would be too high. There were probably millions of minor mutations to make the tail of the peacock.
Intelligence is the capacity to learn from mistakes, and as a result, the ability to make more, and more interesting, mistakes.
Police states are inherently unstable. They can be very, very unpleasant for their duration, but they typically so far haven't lasted.
Natural goals will arrive via the "Natural selection demon"; it's what it does, it's what it has always done, and it's not limited to meat: it acts on all information.
Programming isn't what non-programmers think it is. Mostly, programming is debugging, and there's always that one bug you didn't find yet; it's often one you introduced when you fixed the previous one.
Really, as I see it, the appropriate benchmark is making something one can be rationally confident is more safe. The world is dangerous already. If someone can make it less dangerous and then bootstrap from there, that'll be great.
So, do you think that that benchmark is achievable, do you think it's unachievable, or are you unsure?
-
2017-12-08, 02:32 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Yes, that's how a feedback loop works if the operator/observer is outside the loop, and thus in control of it. When the feedback changes YOU, it's a whole different can of worms.
You know what, on second thought, maybe you're using the term "feedback loop" in some technical sense that I don't understand.
Isn't compound interest basically a predictable feedback loop?
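For what it's worth, compound interest is indeed a feedback loop with fully predictable behaviour, because its update rule is linear and the observer sits outside the loop. A minimal sketch (the figures are arbitrary, purely for illustration):

```python
# Compound interest as a feedback loop: the output (balance) is fed
# back into the next step's input, yet a closed form predicts it exactly.
def compound_iterative(principal, rate, years):
    balance = principal
    for _ in range(years):
        balance += balance * rate  # feedback: growth depends on current balance
    return balance

def compound_closed_form(principal, rate, years):
    return principal * (1 + rate) ** years

iterated = compound_iterative(1000.0, 0.05, 10)
predicted = compound_closed_form(1000.0, 0.05, 10)
print(round(iterated, 2), round(predicted, 2))  # the two agree
```

The unpredictability being debated elsewhere in the thread tends to come from nonlinear feedback, or from the controller itself being inside the loop; a linear loop like this one has an exact closed form.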
Isn't natural selection much more slow and gradual than advances in engineering, though? It's conceivable that e.g. certain cognitive biases could be propagated by minds designing more advanced minds -- and potential self-sustaining flaws of that nature are an excellent example of what to look out for when attempting to create Friendly AI. But I would expect deliberate design goals to generally dominate and to generally win out over accidents when the deliberate and the accidental come into conflict.
Well, not as a general rule. Not being able to accumulate one million dollars in one year doesn't mean never being able to have that much money, for example. If something irreversibly fails in the short term, then there's no prospect of long-term success by definition (of "irreversibly"). But if the failure is reversible, then it's different, innit?
Yeah, but far before a geological era passes, Friendly AI sets up safety measures sufficient to prevent random errors from ruining everything. (If you can make catastrophic failure so unlikely that there's only a one in a googol chance of it happening before the heat death of the universe, that's basically "good enough".)
Provided that Friendly AI exists.
Natural selection isn't dominated by deliberate design goals; indeed, there are none.
So, if I understand you, you are claiming that selective breeding does not make evolution do something or not do something. Is that correct?
I'm not sure what you mean by "evolutionary theory". It's a discussion about how organisms evolve through natural selection; that they do so is, of course, assumed. Do you mean to assert that it's at odds with the conventional wisdom in evolutionary biology, and that a real evolutionary biologist would say that selection pressures for sexual advantages are never stronger than selection pressures for survival advantages?
Now you're just being ridiculous. There's nothing creationist about it.
Sounds like the sorites paradox.
We're back to the ship of Theseus.
What's your basis for that assessment? (Perhaps there's some general principle that the cost to benefit ratio is much higher for mutations that cause drastic changes? I honestly wouldn't know.)
Correct me if I'm wrong, but police states generally impose upon people restrictions that they would prefer not to be under. Friendly AIs want to be prevented from becoming unFriendly, so they don't have to be forced to cooperate with measures to ensure that. Your comparison thus seems rather inapt.
The environment determines which traits propagate themselves most effectively. It's not like we're talking about an actual malevolent entity determined to sow selfishness and suffering; altruism is a product of evolution as well. It's really more of a natural selection daemon. :P I see no reason to think that evolution could never decrease selfishness in the right environment.
No one is claiming that an artificial mind can't have goals programmed into it accidentally. That's an entire branch of problems for Friendly AI. The important thing is to have fail-safes to deal with errors, e.g. of the form "Under Condition X, perform a controlled shutdown". You want a robust design that doesn't do Very Bad Things just because there's one bug somewhere in the code. Mind you, narrowing down the core issue to designing a collectively reliable set of safety features still leaves a fairly intractable problem.
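The "Under Condition X, perform a controlled shutdown" idea is essentially a watchdog pattern: check invariants before every action and fail closed. A hypothetical sketch, not taken from any real AI-safety codebase; the function names and the resource-cap invariant are made up for illustration:

```python
# A minimal watchdog: every proposed action is checked against invariants
# before execution; any violation triggers a controlled shutdown instead
# of letting a buggy action through.
class ControlledShutdown(Exception):
    pass

def run_with_failsafe(actions, invariants):
    executed = []
    for action in actions:
        for check in invariants:
            if not check(action):
                raise ControlledShutdown(f"invariant failed on {action!r}")
        executed.append(action)  # stand-in for actually performing the action
    return executed

# Hypothetical invariant: resource use must stay under a hard cap.
under_cap = lambda a: a.get("resources", 0) <= 100

ok = run_with_failsafe([{"resources": 10}, {"resources": 50}], [under_cap])
print(len(ok))  # 2
```

The point of the pattern is exactly the one in the post above: one bug in an action shouldn't lead to Very Bad Things, because the fail-safe layer catches the violated invariant first.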
Really, as I see it, the appropriate benchmark is making something one can be rationally confident is more safe. The world is dangerous already. If someone can make it less dangerous and then bootstrap from there, that'll be great.
So, do you think that that benchmark is achievable, do you think it's unachievable, or are you unsure?
Last edited by halfeye; 2017-12-08 at 03:18 PM.
-
2018-01-09, 08:14 AM (ISO 8601)
- Join Date
- Jun 2005
Re: Transhumanism
If I learn from feedback, it changes me. I become different in that I now have knowledge that I previously lacked.
I don't think I understand what distinction you're trying to make here.
That's the billion dollar "if".
Except survival. Wings, eyes, intelligence, teeth: if any of them are contrary to survival, then they go, but survival is required.
Survival rules all. If there is no survival, there is nothing. Sexual selection is possible so long as it favours survival.
Which only makes sense, if you think about it. The things that best become more numerous over time are the things of which the most new instances get made. That's pretty much a tautology, so far as I can see.
Individual reproduction is far from the be-all end-all either, mind you. If you die young without having any offspring but in a way that nevertheless increases the frequency of your traits, that works too! They can even be acquired traits, e.g. memes.
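That near-tautology ("what gets copied most becomes most common") can be made concrete with a two-type replicator toy model; the per-generation copy rates below are arbitrary assumptions:

```python
# Two replicator types; the one with the higher per-generation copy
# rate inevitably comes to dominate the population frequency.
def frequencies(n_a, n_b, rate_a, rate_b, generations):
    for _ in range(generations):
        n_a *= rate_a
        n_b *= rate_b
    total = n_a + n_b
    return n_a / total, n_b / total

freq_a, freq_b = frequencies(100, 100, 1.10, 1.05, 50)
print(f"A: {freq_a:.3f}  B: {freq_b:.3f}")  # A heads towards fixation
```

Note that nothing here cares whether the copying is genetic, memetic, or by kin: only the net copy rate matters, which is the point being made above.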
Creationists hate Darwin. This guy seems to be against Darwin. It seems a reasonable suspicion without other evidence; we know creationists have a lot of money.
More seriously, I'm not actually convinced that you're significantly crazier than normal. You could even be less crazy than me. But most people are at least a little crazy in one way or another, and your reaction here certainly seems most likely to be a product of your personal flavor of crazy.
I quite seriously mean no offense and legitimately do not intend any of that as an insult. I would criticize your reasoning if I were aware of any reasoning at work here, but as it is your statements just seem more like the product of the opposite of reasoning.
We're back to the ship of Theseus.
I'm not sure what you think evolution is if you think evolution can't happen in a single lifetime. If a sudden, drastic change in the environment wipes out 90% of a species because only 10% can survive in the changed environment, that seems like evolution to me.
You think evolution went from no eye to the human eye in one generation?
Do you think that peacock tails aren't the product of sexual selection? Do you think that they involve no survival disadvantages?
However, are these "friendly" AIs not forced to want to be friendly? Not everything a police state forces people to do is unwelcome to all of the people.
Being caused against one's will to do something is what I mean by "forced"; that seems to be the approximate general usage in this sort of context. My point was that among a population of Friendly AIs, prohibiting unFriendly AI is like prohibiting assault and theft among a population of humans: that is to say, we're talking about benevolent "common sense" laws that the populace approve of because they serve their interests. Furthermore, in this case the population is the police; we're not talking about restrictions being imposed on them from the outside.
-
2018-01-09, 10:08 AM (ISO 8601)
- Join Date
- Feb 2011
Re: Transhumanism
Ancient monks believed they could become divine beings through lots of special training.
The Egyptians developed all those fancy mummification methods hoping the dead would get a second, better life.
Many people argued that depending on your deeds you could reincarnate into higher beings.
There was Achilles being bathed in the river Styx for invulnerable skin.
A good part of alchemy was trying to attain immortality by taking exotic drugs while you were still alive.
IMHO transhumanism is all of the above with a fresh nice coat of paint. People always want more, news at eleven.
Thing is, it assumes there's a limit to human greed. But as seen right now in the real world with super corporations and whatnot, human greed is basically impossible to satisfy for any significant length of time.
So even if there's a "singularity" and you get to replace your crappy flesh body with a super shiny metal body and upload your mind to a hyper-processor, I predict transhumanists will get bored of it in about 5 picoseconds and then start talking about trans-transhumanism or some other fancy name.
Last edited by deuterio12; 2018-01-09 at 10:09 AM.
-
2018-01-09, 11:09 AM (ISO 8601)
- Join Date
- Oct 2014
- Location
- Tulips Cheese & Rock&Roll
- Gender
Re: Transhumanism
I am kind of curious where transhumanism is going to go once the options start really picking up, because a lot of the modifications people half-jokingly dream about are often not the most practical ones. There is a nice SMBC comic about the obvious one (surprisingly SFW), and the almost standard image of a half-robotic body also doesn't seem that practical: it just seems a matter of time before something burns out, and it'd be a big annoyance until machines are up to our standards of regeneration. You never realize how great it is not to worry about every scrape or paper cut until they stay visible on your nice shiny arm forever. You'd become car-paint-paranoid about your body.
Some other ones I've heard of are having some sort of gun implanted in your arm (because that's totally a thing you'd never ever want to put down somewhere for a few minutes when that's easier) and becoming a large dinosaur, and later a planet. While I admit it'd be kind of cool to be a dinosaur, I don't think there are a lot of jobs for dinosaurs that pay well enough to repay the surgery, or the restaurant bill, among other issues with being a man-eating school bus. Even just looking like a young Arnold Schwarzenegger without having to do any work for it has to have its downsides, mostly to do with narrow hallways and reaching stuff above your head. (Okay, I'm reaching here. Let's move on.) (No, but seriously, having surgery done with the intention of looking good but natural is going to get even bigger.)
Something I definitely can see happening is brain chips. But even there I'm wondering whether the smartest design would actually be to put the chip in your head, or just a port or receiver/transmitter to connect to the chip. Given how often computers break or go out of date, a brain chip would mean brain surgery every few years. And even a device merely interfacing with your brain might not actually be that much handier than any other well-designed future smart device with a good user interface and input and output devices. Of course, further down the road we might be talking less about cool devices connecting to your brain and more about improving your brain, integrating stuff into it. That gets kind of creepy when you think about it, but it also seems rather unavoidable.
We have another thread running about tiny people. That's one I can actually see as well. A human mind in a significantly smaller body becomes a much more efficient city dweller: it takes up less space, needs fewer daily fresh goods, and it opens up a lot of transportation options, like wings (built in, or just using tiny powered hang gliders; either way, if it works well enough, everyone can ride straight towards their goal at constant bicycle speed or more, which is huge for city traffic), climbing (mechanical spider body, anyone?) or just buses with so many seats it'll make you dizzy.
I wouldn't say I'm a transhumanist, not by a long shot. I'm one of those people who, if a good cyborg add-on was available tomorrow, would think it's lame and useless until about 10 years after the hype is gone, when I try it and become a total fanboy. That's not just a cyborg thing, by the way; I have the same response to anything cool. I am curious where it could go, though. It's almost a shame we won't get very far in my lifetime.
Last edited by Lvl 2 Expert; 2018-01-09 at 11:12 AM.
-
2018-01-09, 02:04 PM (ISO 8601)
- Join Date
- Dec 2013
Re: Transhumanism
To be a bit nitpicky, this is not an accurate idea of evolution. Evolution is a process of change in a species over time to fit its environment. 90% of a species being wiped out by a sudden event isn't evolution. The dodo did not evolve to become extinct.
Evolution is only something which describes a process occurring over populations and over many generations. Unless you're a writer for Star Trek, of course.
I write a horror blog in my spare time.
-
2018-01-09, 04:37 PM (ISO 8601)
- Join Date
- Oct 2014
- Location
- Tulips Cheese & Rock&Roll
- Gender
Re: Transhumanism
Well, it can be selection. A species that bottlenecks will often not have random survivors; rather, the 10% that's left will have an above-average immune system, below-average energy consumption, or whatever the solution to the problem causing the bottleneck was.
Evolution is a process driven by mutation and selection. (Pretty much every single person out there willfully misunderstanding evolution to prove some point has trouble telling these two apart; in my opinion, if you can use just these two terms well, you understand evolution pretty well.) And those two are often not steady, balanced forces. There are times when a species thrives and its genetic variation blossoms, and there are times of strict selection. (Which does not have to be in the form of a bottleneck, a time of very small population after a massive die-off, but it can be.) Humans, for instance, are now diversifying, and we really needed that, because we have gone through a bottleneck several times in the semi-recent past. One of the last major ones even happened as, or just after, a bunch of us left Africa, and you can still see in large-scale genetic studies that the diversity among people of non-African descent is even lower than among those of African descent.
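The point about non-random survivors can be illustrated with a toy bottleneck: if the surviving 10% are selected on a heritable trait rather than drawn at random, the population's mean trait jumps in a single generation even though no individual changed. A sketch with made-up numbers:

```python
import random

random.seed(42)

# Each individual carries one heritable trait value; higher means more
# resistant to whatever caused the bottleneck.
population = [random.gauss(0.0, 1.0) for _ in range(1000)]

# The bottleneck: only the top 10% by the trait survive (selection),
# rather than a random 10% (which would be drift).
survivors = sorted(population)[-100:]

before = sum(population) / len(population)
after = sum(survivors) / len(survivors)
print(f"mean trait before: {before:.2f}, after: {after:.2f}")
```

Whether you call that one-generation shift "evolution" or merely "selection" is the semantic question being argued in this exchange; the allele-frequency change itself is uncontroversial.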
-
2018-01-11, 04:12 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Feedback changes the observers, minutely, okay, yeah. But normally the observers aren't actually in the loop, so changes to them don't affect the behaviour of the loop they are watching. When they are in the loop, changes to them change the results of the feedback, and that's at the least less predictable than otherwise.
I'm not sure that you understood the quote that this is in reply to. Do you mean to suggest that natural selection involves organisms being deliberately designed to survive?
A mutation that doubles an organism's lifespan but renders it infertile is very highly selected against. Individual survival serves reproduction, not vice versa. Not being alive at all doesn't prevent a type of thing from becoming common, so long as it can reproduce. Just look at viruses.
Which only makes sense, if you think about it. The things that best become more numerous over time are the things of which the most new instances get made. That's pretty much a tautology, so far as I can see.
Individual reproduction is far from the be-all end-all either, mind you. If you die young without having any offspring but in a way that nevertheless increases the frequency of your traits, that works too! They can even be acquired traits, e.g. memes.
I am not personally convinced that viruses aren't alive, but I don't think that matters much.
I'm not aware of any anti-Darwin behavior on Yudkowsky's part. You seem to interpret some of his writing as somehow anti-Darwin due to you having a proverbial screw loose. It seems a reasonable suspicion without other evidence, I know there are a lot of nutjobs out there.
I'm not sure what you think evolution is if you think evolution can't happen in a single lifetime. If a sudden, drastic change in the environment wipes out 90% of a species because only 10% can survive in the changed environment, that seems like evolution to me.
Do you think that peacock tails aren't the product of sexual selection? Do you think that they involve no survival disadvantages?
Friendly AIs are caused to want to be Friendly, but they're not caused against their will to want to be Friendly.
Being caused against one's will to do something is what I mean by "forced"; that seems to be the approximate general usage in this sort of context. My point was that among a population of Friendly AIs, prohibiting unFriendly AI is like prohibiting assault and theft among a population of humans: that is to say, we're talking about benevolent "common sense" laws that the populace approve of because they serve their interests. Furthermore, in this case the population is the police; we're not talking about restrictions being imposed on them from the outside.
-
2018-02-06, 07:52 PM (ISO 8601)
- Join Date
- Jun 2005
Re: Transhumanism
I agree that they're basically the same, but I'd describe the above as early transhumanism, rather than transhumanism as a new form of the above. You seem to assume that humanity's loftiest ambitions are obviously childish fantasies and that the mature thing to do is to abandon them as unattainable. And there are undoubtedly some modern transhumanists who'd agree with that and say that their goals are fundamentally different from such primitive superstitious nonsense. But the counterpoint to that fairly cynical perspective is the contrasting idealistic notion that the attainment of humanity's loftiest ambitions is finally within our grasp.
Human flight was "impossible"... and then human beings actually flew. Transmuting lead into gold is something that humans are able to do now, albeit not cost-effectively. So if someone thinks e.g. that the aging process will never be halted because people have been working towards that goal for millennia without success, that person is rather living in the past. This is the modern era, in which we've learned enough that such millennia-long struggles are finally being fulfilled.
A yearning for transcendence is indeed one of humanity's oldest, most persistent desires. And obviously those for whom the whole concept of spirituality has acquired a negative connotation will seek to avoid spiritual associations. But it's entirely possible to see and to celebrate coming advances as the long-delayed fulfillment of that ancient journey towards becoming something more.
In short, painting over similarities to much earlier groups is strictly optional.
People always want more, news at eleven.
Thing is, it assumes there's a limit to human greed. But as seen right now in the real world with super corporations and whatnot, human greed is basically impossible to satisfy for any significant length of time.
I'm pretty sure that a lot of people's answer to that question boils down to "Uh, that's hard, dude". Assuming that that's the case, the ability to just make someone feel satisfied and happy at the press of a button would be rather a paradigm shift, wouldn't it? Cheap, safe, Nirvana in a pill with no side effects seems like it would be more than a bit of a game changer. And that's just one example of a paradigm-shifting technology that we could have within a century.
Now, some people would prefer to avoid that, on the grounds that such a radical change to personality replaces the original person with someone different, or that artificial happiness isn't genuine and has no value, or that continually striving for more is our purpose as an end in itself, or any number of other objections. But that would be a conscious rejection of satisfaction, rather than failed pursuit of it.
The easier that it becomes to do things, the more that problems take the form of conflicts between competing values. It comes to a point where greed only defines the future, if indeed it does, because someone wants it to. Where other values can equally well define the future if chosen instead. Which... Well, it's nice to have options, eh?
So even if there's a "singularity" and you get to replace your crappy flesh body with a super shiny metal body and upload your mind to an hyper processor, I predict transhumanists will get bored of it in about 5 picoseconds and then start talking about trans-transhumanism or some other fancy name.
Please clarify: Do you think that the sort of die-off that I described can't happen, just that it "isn't evolution"? Because the phrasing of your first sentence above kind of suggests the former, to me, whereas the last one suggests the latter.
Basically, is this a purely semantic argument? Because I should warn you that that's sort of my forte.
If you attempt to modify your methods of control based on past performance, then you are in a feedback loop containing the things that you are attempting to control, as described above. It seems to me that attempting to refine one's methods based on trial and error leads to improvement in many cases, and thus that being in a feedback loop containing the things that you are attempting to control often makes controlling them easier, not harder.
Messing around with things in new ways does tend to make them less predictable, but that's kind of the point. It's helpful to collect new data about how things work under various conditions and such.
I posted that there are no deliberate design goals in natural selection, to which you replied "Except survival". Given the context of what you were replying to, what was "Except survival" supposed to mean if not that there are no deliberate design goals except survival in natural selection, i.e. that survival is a/the deliberate design goal in natural selection?
Yudkowsky and I were referring to individual and kin survival, not the survival of hereditary traits. I thought that was obvious. Are you trolling, bro?
Anyway, if you think that the peacock example is actually not what it seems and that it seems to be a badly thought through attempt to disprove evolutionary theory, then you're right: It's not a badly thought through attempt to disprove evolutionary theory. It's not an attempt to disprove evolutionary theory at all.
See, that was joking about your particular choices of words. But to "I thought that you would take my statements in context", I can only reply "Uh, yeah, right back at ya".
He didn't appear to be trying to do that to me. You seem to interpret his writing in a rather unusual way due to some manner of craziness on your part.
Let me try to illustrate the problem with your assertion. Suppose that I were to say "It seems to me based on what you've posted that you believe in some version of intelligent design". You would then be justified, would you not, in wondering how I formulated such a suspicion? Without some sort of chain of reasoning connecting that suspicion to your statements, would it not be reasonable to regard my assessment as pretty much a fairly crazy thing to say? How does one even engage with such an accusation?
But suppose that I then said "You appear to believe that natural selection must proceed towards particular ends in all cases, regardless of circumstances, which suggests either a guiding intelligence or the functional equivalent". That would hopefully give more of an idea of why I suspected as I did, and would provide you more of a basis for some sort of response. You might not agree with my reasoning, but by presenting it I would allow you to criticize it and perhaps to correct misunderstandings on my part.
Do you see what I'm getting at? You present no reasoning showing how your attribution of a motive to Yudkowsky is based on his writing. So it comes off as just "a crazy thing said by a crazy person". A non sequitur, if you will. Whereas if you presented some sort of explanation of why you believe the crazy-seeming thing that you said, then maybe it would seem less crazy! And even failing that, it might give me the opportunity to correct some sort of mistake on your part.
If he was saying that, he's anti-Darwin; if he wasn't, then whatever he was trying to say, he wasn't saying it clearly.
Evolution (slow) is often contrasted with revolution (fast); there is plenty of room for natural selection in both.
I think sexual selection is a provocative name for a process that falls fairly within the bounds of natural selection.
Wait. Did you think that sexual selection was being proposed as an alternative to natural selection? Because if that's what you thought, then you're just miles off, like someone thinking that calculus is supposed to be an alternative to math. Calling it "anti-Darwin" is just the crowning jewel of wrongness, like calling calculus "anti-Newton".
I still don't know why you would think that in the first place, but I think that we may be zeroing in on your loose screw! Well, one of your loose screws.
They are caused against natural selection to want to be friendly.
By "natural selection", are you referring to something other than the things that best become more numerous over time being the things of which the most new instances get made? If so, could you explain what?
If not, then natural selection causes Friendly AIs to want to be Friendly in an environment in which Friendliness is most reproduced. And Friendly AIs wanting to be Friendly creates such an environment. So it's a self-sustaining system.
If you want to argue on some other grounds that such an environment is impossible, that's one thing, but it makes no sense to argue that such an environment goes against natural selection if natural selection maintains such an environment.
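The self-sustaining-environment argument can be phrased as a toy frequency dynamic. Assume Friendly AIs at frequency f suppress the reproduction of unFriendly ones in proportion to f, while unFriendly AIs have some intrinsic advantage when unpoliced. Both coefficients below are made-up assumptions, purely for illustration:

```python
# Toy model: f = fraction of Friendly AIs in the population.
# Friendly fitness is constant; unFriendly AIs get an assumed intrinsic
# advantage (1.2) that enforcement by the Friendly majority erodes (0.9*f).
def next_fraction(f):
    w_friendly = 1.0
    w_unfriendly = 1.2 - 0.9 * f  # both coefficients are illustrative assumptions
    total = f * w_friendly + (1 - f) * w_unfriendly
    return f * w_friendly / total

high, low = 0.9, 0.1
for _ in range(200):
    high = next_fraction(high)
    low = next_fraction(low)
print(round(high, 3), round(low, 3))  # Friendly majority sustains itself; Friendly minority does not
```

In this sketch the mostly Friendly population converges to full Friendliness, while the mostly unFriendly one collapses to zero Friendliness, which captures both halves of the argument: such an environment can be self-sustaining under natural selection, but only once it exists.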
Entropy grinds mountains down, it doesn't care about billions of years, it gets results. It probably doesn't care either way about getting those results, but it gets them anyway. I'm pretty sure that natural selection is caused by entropy, or the causes of entropy.
Prohibiting assault and theft? There is a lot of that about amongst us humans, if you read the news.
-
2018-02-06, 09:02 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
-
2018-02-06, 09:31 PM (ISO 8601)
- Join Date
- Jun 2005