Thread: Transhumanism
-
2017-08-18, 11:08 PM (ISO 8601)
- Join Date
- Jul 2016
Re: Transhumanism
I was once an ardent transhumanist. I had thoughts about transplanting my brain into a jar connected to a computer ... I very seriously studied whether this could be possible, and it turned out to be an interesting thought experiment. Now I'm not concerned with changing who I am ... you could say I'm more in tune with my humanity now.
Check out my D&D-inspired video game; it's not done yet, but you can listen to the soundtrack if you're bored: https://www.facebook.com/TheCityofScales/
my game's soundcloud: https://soundcloud.com/user-77807407...les-soundtrack
my website with homebrew and stuff on it: http://garm230.wixsite.com/scales
-
2017-08-23, 10:37 AM (ISO 8601)
- Join Date
- Jul 2013
Re: Transhumanism
How do you all feel about this drilldown of Transhumanism, as seen through the lens of the Avengers?
http://blogs.discovermagazine.com/sc...transhumanism/
-
2017-09-14, 07:35 AM (ISO 8601)
- Join Date
- Oct 2014
- Location
- Tulips Cheese & Rock&Roll
- Gender
Re: Transhumanism
Well, the oldest traces of written language we know of were scratched into rocks, and those are still readable.
Funnily enough, pretty much every step forward from there (clay tablets, parchment, paper, printed books, vinyl records, magnetic hard drives) has made the data less time-resistant.
But sure, if someone is taking care of it, digital data can last almost forever. Imagine a setup with four disks (say solid state drives) with the same contents. Every hour you synchronize them; if there are differences, you use the version that appears on the most disks. Replace each data carrier once every five years, or as soon as it breaks. It's just more work than checking whether your rock is still there every century.
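The hourly majority-vote repair described above can be sketched in a few lines (a toy sketch, assuming byte-identical replicas; all names here are made up):

```python
from collections import Counter

def repair(replicas):
    """Overwrite every replica with the value a majority of them hold."""
    winner, votes = Counter(replicas).most_common(1)[0]
    if votes <= len(replicas) // 2:
        # With an even number of disks a 2-2 split has no majority,
        # which is one reason real schemes prefer odd replica counts.
        raise RuntimeError("no majority; voting cannot repair this")
    return [winner] * len(replicas)

# One disk silently corrupts a byte; the next sync repairs it.
disks = [b"important data"] * 3 + [b"imp0rtant data"]
disks = repair(disks)
```

Real setups would compare checksums per block rather than whole contents, but the voting logic is the same.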
The Hindsight Awards, results: See the best movies of 1999!
-
2017-09-20, 06:40 PM (ISO 8601)
- Join Date
- Sep 2017
Re: Transhumanism
On the topic of Transhumanism, do ya think anybody plays Eclipse Phase on these forums? It's a pretty sweet post-apocalyptic transhumanist RPG...
-
2017-10-09, 06:30 PM (ISO 8601)
- Join Date
- Jun 2005
Re: Transhumanism
Transhumanism is pretty much just simplified humanism.
Then why have a complicated special name like “transhumanism”? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.
Yeah, but we don't exert godlike command over the flow of all electrical currents.
We have limited control over both electricity and evolution. You seem to treat "limited control" as some sort of contradiction in terms if applied to evolution. So... do you think that it's a contradiction in terms in general, and if so, why? And if not, why does it suddenly become one when applied to natural and/or artificial selection?
Is high fiber toast ice cream?
More to the point, why focus on bodies? There seems to me to be a general consensus that the mind is the self. E.g., a brain transplant is different from other organ transplants in that, even if it goes perfectly, it's the donor who survives, not the recipient; or, to put it another way, it's really a body transplant.
In which case, if my mind is uploaded into a computer, that's still me, even though I no longer have a human body. In that case, am I still human? Do I still have a human mind? Well, yeah. So long as my software still functions the same in its new hardware, generalizations across human minds can still be applied to me just as readily as before. But what if I start changing my software, too? Heck, where do you even draw the line between my mind and other software?
If I hook my mind up to a calculator program so I can get math answers super fast just by thinking, how different is that from a biological human using a physical calculator? If the connection between my mind and the calculator program is just as immediate as the connection between different parts of my mind, then isn't it a bit dubious to consider the add-on separate, rather than a new part of my mind? How many additions before my original mind makes up only a tiny percentage of the whole? And is that something to worry about?
And what about getting rid of stuff? I have personality traits that I don't like. If you could instantly remove from yourself any characteristic you chose, are there any you'd opt to get rid of? Even if there are potential negative side effects, might a change nevertheless be worth the risk? Is it somehow bad to fix one's mental problems through an artificial "quick hack" rather than through other means, even if everything goes as intended? What exactly are the cost and benefits of that approach versus other approaches? For that matter, what even constitutes a "problem"? Is that something that we each need to decide for ourselves? And do we each need to decide for ourselves what our "essential properties" are, such that eliminating them can't improve you because doing so instead replaces you with someone else? I assume that most of us would prefer to make that determination for ourselves...
A lot of the questions I just asked apply to the use of psychoactive drugs in the present, so this isn't all "distant future" stuff.
Sure, by definition, replacing a thing's essential properties replaces the thing. But any or no properties can be held to be essential; to put it in simple terms, it's arbitrary. So a single physical object can have multiple identities attributed to it by different people, or even by a single person, such that the object is two things, each of which has some of the other's essential properties as its accidental properties!
(So, for example, a gestalt entity can be all of the beings that went into it, and yet only one being, without contradiction. Sure, they were multiple different individuals, but now they're not. There's no contradiction in also giving the combined being its own new identity, either! Identities aren't a conserved quantity!)
This seems to sneakily conflate {things that only humans can presently do} with {things that only humans can ever do}.
Including all of our descendants forever under "human" does not strike me as common usage. That's... well, it's basically the "birds are dinosaurs" argument, which has its fans, but is pretty far from universally accepted, I think.
The word and the very concept of "humanity" is vague, as are words in general and concepts in general. Right now, there are relatively few especially grey areas (which is not at all the same thing as zero remotely grey areas). But it's quite possible that future developments will create lots of new grey areas, such that there will be several important cases where there is no consensus on whether something is "human".
What do you mean by "human", and what's the basis for that definition? In particular, do you have compelling reason to believe that no competing definition is valid?
That's super vague, though. You could mean any of the following:
1. Things aren't just "more than" or "less than" other things. Not only will no one be "more than" human in the future, but humans aren't "more than" squirrels, for example, now. It's meaningless nonsense.
2. It's impossible in practice for anything to be to humans as humans are to squirrels, relatively speaking. There's nothing internally contradictory about the concept, but it just can't ever be done.
3. Humans exist today who have all relevant qualities so close to maxed out that a categorical shift is definitionally impossible.
Or you could mean something else. Could you clarify?
Quite frankly, if you're insisting that you're in the highest possible category of being, that strikes me as fairly gratuitous ego-stroking.
-
2017-10-09, 08:36 PM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
It becomes a contradiction in terms when applied to natural selection specifically, because people are subject to natural selection, which makes natural selection part of a feedback loop within the control system you are trying to apply to it, and that feedback loop is unbreakable so long as people can die. We can probably eliminate obvious faults like some forms of heart disease if they are genetically based, because that's the direction natural selection is probably going in anyway. But trying to steer natural selection somewhere it wouldn't naturally go, such as making heart attacks more likely (perhaps for some weird future aesthetic), would not tend to work out as desired. For an example of the failure of an attempt to push a weird aesthetic, consider the growing resistance to the fashion industry's desire for us all to become anorexic (obesity is not good for us, but neither is being underweight).
On the other hand, this bloke once argued that it ought to be a right to pass along disabilities to your offspring. I'm not sure about that, but he argues intelligently:
https://en.wikipedia.org/wiki/Tom_Shakespeare
To the calculator question above:
http://www.giantitp.com/forums/showt...ivided-by-zero
Species is a word that has some problems in rare cases at any particular present time; those problems become acute over long periods of time. I am sure there will be post-humans at some time. If we get off this rock (it's a nice rock, and we rightly like it, but it's a rock), there will be billions of species of post-humans. They won't all be better than us in all ways, and I'm pretty sure some of those species will be in conflict with other species, post-humans all.
On the other hand, this geezer:
https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
seems to me to be foolish; he's talking about subsystems controlling the growth of programs, and that ain't going to happen.
Programming is hard and takes a lot of study to become competent at, and then there are bugs. A good bug hunt can take days; it's almost always something obvious in hindsight, but it still almost always happens. Those are the bugs we know about, because they do something we can see. There are almost certainly bugs that do nothing, that are undetected, and just sit there. They will do nothing until something specific happens, and then you can't tell what they may do. A learning AI would be like an almost infinite tree: you might design the first couple of branches, but once it gets into hundreds of branches, telling where it will go next is going to be impossible. There will be branches everywhere, in all directions, and humans just won't be able to keep up with where the branches are branching towards.
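The "almost infinite tree" point can be put in numbers: with b alternatives at each decision point and d decisions deep, there are b**d distinct paths to audit. A back-of-envelope sketch (not a model of any real AI; the figures are illustrative):

```python
def paths(branching: int, depth: int) -> int:
    """Number of distinct root-to-leaf paths in a uniform decision tree."""
    return branching ** depth

# Ten binary decisions are auditable; a hundred ten-way decisions are not.
small = paths(2, 10)    # 1024 paths
huge = paths(10, 100)   # a googol of paths, far beyond any human review
```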
The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.
-
2017-10-09, 09:09 PM (ISO 8601)
- Join Date
- Sep 2011
- Location
- Calgary, AB
- Gender
Re: Transhumanism
On the infinite branching tree: exactly, which is why his position is that such a development needs to be extremely carefully managed. See also: the Singularity.
Incidentally, I think you'll find that real life brains are rife with bugs anyway, so any given AI need not be bug free to be "successful," whatever that definition might be.
-
2017-10-10, 09:04 AM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Yeah, but my point is that it takes time, and if you don't trust machines, you have to have humans examining every branch point, which is not a thing people are going to bother to do. The guy's profile reads as if he never programmed anything, and is telling programmers how to do their jobs without any clue as to what it is they do.
I am not worried about rogue AI because I don't think we're anywhere near making a working AI yet.
-
2017-10-10, 09:38 AM (ISO 8601)
- Join Date
- Sep 2017
Re: Transhumanism
I am a Transhumanist. But keep in mind there are several different ideas within Transhumanism, and I do not agree with all of them.
I am primarily focused on the idea of sentient AI having equal rights as well as using machinery and biological engineering to help the disabled and bring humanity to a better state.
-
2017-10-10, 10:04 AM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Wikipedia:
https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
He never attended high school or college and has no formal education in artificial intelligence. Yudkowsky claims that he is self-taught in the field. Yudkowsky argues that it is important for advanced AI systems to be cleanly designed and transparent to human inspection, both to ensure stable behavior and to allow greater human oversight and analysis.
-
2017-10-10, 10:09 AM (ISO 8601)
- Join Date
- Sep 2011
- Location
- Calgary, AB
- Gender
Re: Transhumanism
Did you also read the part where despite the no formal training, his ideas are an important part of that formal training now?
Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in the standard undergraduate textbook in AI, Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time.
-
2017-10-10, 11:30 AM (ISO 8601)
- Join Date
- Sep 2011
- Location
- Calgary, AB
- Gender
Re: Transhumanism
Her ideas also aren't the foundational work for university courses that aren't about Objectivism.
Seriously though, in what way does "self-taught individual who cares enough about a subject to co-found a research institute devoted to the problem, works actively as part of said institute, and whose ideas are fundamental enough to merit inclusion in the foundational instruction for the non-self-taught" not count as qualified? Who is someone you would accept as knowledgeable on the subject?
-
2017-10-11, 10:09 AM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
It would really help if he didn't say things that, as reported, appear to be stupid. It may be that he's actually saying sensible things and being misrepresented by oversimplification, but if not, then he's being silly.
People can't write bug-free software. That's the nature of the universe, more or less. Typos are us; we are not perfect. We can write compilers and interpreters that will warn us of errors in our code, and most errors are in the code, but compilers and interpreters are code too. Most of the time the error is in the code you just wrote, but one time in 10,000 there actually is a bug in the compiler.
I wrote above about natural selection being part of a feedback loop. That's how it works, more or less. It is basically entropy in action: what doesn't survive fails. Entropy acts on information, or code. Our code is in DNA and RNA; for computers, their code is in whatever language they are written in. The feedback loop that is intrinsic to natural selection will make survival a priority for any system that reproduces. So far machines don't reproduce, and so long as they don't, the worst we are likely to face are computer viruses, most of which are under the control of criminals, not wild.
-
2017-10-11, 10:19 AM (ISO 8601)
- Join Date
- Sep 2011
- Location
- Calgary, AB
- Gender
Re: Transhumanism
Well, there's your problem: he is talking about code that reproduces. To oversimplify a bit more, his argument is that with all the work going into AI, "programs" that write other programs are going to happen eventually (it's a matter of when, not if, even if the specific "when" hasn't been nailed down), so we need to do it right the first time. That is, make sure the bugs that are there don't involve things like being able to adjust itself into a Skynet type of scenario. Or rather, design the AI so that what it "wants" doesn't involve bad things for humanity. There's a lot of discussion about what that entails in what he writes.
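The trivial end of "code that reproduces" is easy to demonstrate. Here is a minimal Python quine, a program whose output is exactly its own source (illustrative only; this is not drawn from anyone's research):

```python
# A program whose output is its own two-line source code.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Self-replication by itself is harmless; the argument above is about what happens once reproduction is combined with variation and selection.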
-
2017-10-11, 10:39 AM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
He's mistaken then.
You can't write code that reproduces and still have control of it. Because of natural selection, the set of such controlled programs is, in the long term, empty.
Natural selection makes things that like to live; it does that by eliminating things that don't. Code that reproduces will be selected by natural selection; it only needs to be able to delete/kill to maintain its feedback loop in full.
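The feedback loop being discussed can be shown with a toy: replicators copy themselves with occasional mutation, and a cull keeps only the best fits to an "environment". Purely illustrative; the target, rates, and sizes are made up:

```python
import random

random.seed(0)
TARGET = [1] * 8  # the "environment" replicators are selected against

def fitness(genome):
    """How many bits match the environment."""
    return sum(a == b for a, b in zip(genome, TARGET))

# Start from random bitstrings; nobody designs the survivors.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(50):
    # Each genome leaves two copies, every bit flipping with 5% chance.
    offspring = [[bit ^ (random.random() < 0.05) for bit in parent]
                 for parent in population for _ in range(2)]
    # Parents stay in the pool, so the best fitness never decreases.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

best = population[0]
```

Nobody steers the bits toward the target; copying, mutation, and culling do it on their own, which is the point about control.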
-
2017-10-11, 11:27 AM (ISO 8601)
- Join Date
- Jun 2013
- Location
- Bristol, UK
Re: Transhumanism
Sure, cooperation is an option. Some, e.g. mosquitoes and viruses, don't go for it, but it's an option. However, control of a freely reproducing species is not an option, and non-cooperation is always an option too. Your man is talking about control, and that can't happen if the reproduction is unsupervised. Resources are finite, which leads to competition; sometimes the best way to compete is to cooperate, but something somewhere always loses out by it.
-
2017-10-11, 11:43 AM (ISO 8601)
- Join Date
- Aug 2011
- Location
- Sharangar's Revenge
- Gender
Re: Transhumanism
We are already using AI for mechanical design. Dreamcatcher, by Autodesk (makers of AutoCAD, Inventor, et al.), has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.
For more, check out this site: www.aee.odu.edu/proddesign/.
Warhammer 40,000 Campaign Skirmish Game: Warpstrike
My Spelljammer stuff (including an orbit tracker), 2E AD&D spreadsheet, and Vault of the Drow maps are available in my Dropbox. Feel free to use or not use it as you see fit!
Thri-Kreen Ranger/Psionicist by me, based off of Rich's A Monster for Every Season
-
2017-10-11, 12:06 PM (ISO 8601)
- Join Date
- Sep 2011
- Location
- Calgary, AB
- Gender
Re: Transhumanism
Right, I'm taking their argument at face value to try to convince them of what they're missing. Currently I'm trying to draw a comparison: Halfeye's argument that natural selection leads to murderous intent is countered by the fact that they, a product of natural selection, don't feel the need to kill things in their usual day, because it's not something they want or care about. That leads to the idea that AIs have the goals we give them, so we should be careful when setting those goals. But apparently mosquitoes cooperating is where they're trying to draw the conversation, so I'm not sure they're actually engaging my point.
-
2017-10-11, 01:12 PM (ISO 8601)
- Join Date
- Feb 2014
Re: Transhumanism
It's not really about murderous intent, but we do go around killing everything that could be an inconvenience. Together, as a species.
We use pesticides to kill insects that compete with us for food. We have drugs to kill parasites and diseases, and any animal capable of killing a human is carefully monitored, kept in a zoo, or just killed. We end or control the lives of pretty much every species we come into contact with, at our convenience.
"The error is to be human"