
Thread: Transhumanism

  1. - Top - End - #61
    Dwarf in the Playground
     
    BardGuy

    Join Date
    Jul 2017

    Default Re: Transhumanism

    Quote Originally Posted by veti View Post
    Right, so Captain Hook was a transhumanist. And I definitely am, since my laser eye surgery.

    Seriously, this is a silly category. We've had "enhanced humans" since forever, and outside a few regrettable fringe cases, we've come to the consensus that they're humans. Calling them anything else is just asking for a world of hurt. X-Men is not a social blueprint we should aspire to.
I'd argue that those don't count because they don't actually improve anything, just restore baseline functionality. Now, if Captain Hook could bend rebar and you could see in infrared, that would make you transhuman.

  2. - Top - End - #62
    Dwarf in the Playground
     
    DwarfBarbarianGuy

    Join Date
    Jul 2016

    Default Re: Transhumanism

    i was once an ardent transhumanist. i had thoughts about transplanting my brain into a jar connected to a computer ... i very seriously studied whether this could be possible and it turned out to be an interesting thought experiment. now i'm not concerned with changing who i am ... you could say i'm more in tune with my humanity now.
    check out my D&D-inspired video game, not done yet but you can listen to the soundtrack if you're bored: https://www.facebook.com/TheCityofScales/

    my game's soundcloud: https://soundcloud.com/user-77807407...les-soundtrack

    my website with homebrew and stuff on it: http://garm230.wixsite.com/scales

  3. - Top - End - #63
    Dwarf in the Playground
     
    BardGuy

    Join Date
    Jul 2017

    Default Re: Transhumanism

    Quote Originally Posted by Goodkill View Post
    i was once an ardent transhumanist. i had thoughts about transplanting my brain into a jar connected to a computer ... i very seriously studied whether this could be possible and it turned out to be an interesting thought experiment. now i'm not concerned with changing who i am ... you could say i'm more in tune with my humanity now.
    Humanity's overrated, jars FTW!

  4. - Top - End - #64
    Ogre in the Playground
     
    NinjaGuy

    Join Date
    Jul 2013

    Default Re: Transhumanism

    How do you all feel about this drilldown of Transhumanism, as seen through the lens of the Avengers?

    http://blogs.discovermagazine.com/sc...transhumanism/

  5. - Top - End - #65
    Dwarf in the Playground
     
    BardGuy

    Join Date
    Jul 2017

    Default Re: Transhumanism

    Quote Originally Posted by Vogie View Post
    How do you all feel about this drilldown of Transhumanism, as seen through the lens of the Avengers?

    http://blogs.discovermagazine.com/sc...transhumanism/
Seems interesting; it brings up a lot of valid reasons to be apprehensive or excited about transhumanism.

  6. - Top - End - #66
    Banned
     
    DruidGirl

    Join Date
    Sep 2010

    Default Re: Transhumanism

    Quote Originally Posted by Jackalias View Post
    Humanity's overrated, jars FTW!
    Considering that it is not possible to put your brain in a jar and maintain consciousness, it might be better to come to terms with being human ...

  7. - Top - End - #67
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by Themrys View Post
    Considering that it is not possible to put your brain in a jar and maintain consciousness, it might be better to come to terms with being human ...
Not yet, anyway. The Ship Who Sang is from 1969, and it's still fifty or more years away.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  8. - Top - End - #68
    Troll in the Playground
     
    Lvl 2 Expert's Avatar

    Join Date
    Oct 2014
    Location
    Tulips Cheese & Rock&Roll
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by Yuki Akuma View Post
    I don't disagree with your point, but I just want to be a pedant and say... no, not really. Entropy exists. You'll always have copy errors, data will always spontaneously degrade over time, electrons will occasionally switch positions for no discernible reason, and so forth.

    You need really good error-handling systems and redundant backups is what I'm saying.
Well, the oldest traces of written language we know were scratched into rocks, and those are still readable.

Funnily enough, pretty much every step forward from there (clay tablets, parchment, paper, printed books, vinyl records, magnetic hard drives) has made the data less time-resistant.

But sure, if someone is taking care of it, digital data can last almost forever. Imagine a setup with four disks (say, solid-state drives) with the same contents. Every hour you synchronize them; if there are differences, you use the version that appears on the majority of disks. Replace each data carrier once every five years, or as soon as it breaks. It's just more work than checking whether your rock is still there every century.
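That hourly majority-vote sync can be sketched in a few lines. This is a hypothetical illustration only: the function name and the block-list model of a disk are made up, and a real setup would compare checksums rather than raw blocks.

```python
from collections import Counter

def sync_disks(disks):
    """Repair silent corruption by majority vote across replica disks.

    Each disk is modeled as a list of blocks; every block is replaced
    in place by the value that appears on the most disks.
    """
    n_blocks = len(disks[0])
    for i in range(n_blocks):
        values = [disk[i] for disk in disks]
        majority, _ = Counter(values).most_common(1)[0]
        for disk in disks:
            disk[i] = majority
    return disks

# One disk suffers a bit flip in block 1; the hourly sync repairs it.
disks = [[b"A", b"B"], [b"A", b"B"], [b"A", b"X"], [b"A", b"B"]]
sync_disks(disks)
```

With four replicas this tolerates one corrupted copy of any block; with two disks and a disagreement, there is no majority and you can't tell which copy went bad.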
    Last edited by Lvl 2 Expert; 2017-09-14 at 07:36 AM.
    The Hindsight Awards, results: See the best movies of 1999!

  9. - Top - End - #69
    Halfling in the Playground
     
    Daemon

    Join Date
    Sep 2017

    Default Re: Transhumanism

    On the topic of Transhumanism, do ya think anybody plays Eclipse Phase on these forums? It's a pretty sweet post-apocalyptic transhumanist RPG...

  10. - Top - End - #70
    Dwarf in the Playground
     
    BardGuy

    Join Date
    Jul 2017

    Default Re: Transhumanism

    Quote Originally Posted by CoyoteBlue View Post
    On the topic of Transhumanism, do ya think anybody plays Eclipse Phase on these forums? It's a pretty sweet post-apocalyptic transhumanist RPG...
I'm a fan, although sometimes the creators' political views bleed into the setting a bit too much (anarchy is good, transhumanism is good, capitalism is bad, etc.)

  11. - Top - End - #71
    Ogre in the Playground
     
    Devil

    Join Date
    Jun 2005

    Default Re: Transhumanism

    Quote Originally Posted by joeltion View Post
    Yeah, I got that part. But, taking aside the "eternal life" issue, isn't that what scientists already do, broadly speaking? That's why I have trouble figuring out what truly defines people within the movement; because most of the things they claim are just sensible claims in essence. So I have trouble differentiating a "strong supporter of rationalism and scientific development" (say, like me) from a true "transhumanist".
    Transhumanism is pretty much just simplified humanism.

    Then why have a complicated special name like “transhumanism” ? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.
    -- Eliezer Yudkowsky

    Quote Originally Posted by halfeye View Post
    We stop electrical things all the time, we turn them off.
    Yeah, but we don't exert godlike command over the flow of all electrical currents.

    We have limited control over both electricity and evolution. You seem to treat "limited control" as some sort of contradiction in terms if applied to evolution. So... do you think that it's a contradiction in terms in general, and if so, why? And if not, why does it suddenly become one when applied to natural and/or artificial selection?

    Quote Originally Posted by Themrys View Post
    I agree. A human body to which some changes are made remains a human body. With additional changes. Just as with anything else.
    Is high fiber toast ice cream?

    More to the point, why focus on bodies? There seems to me to be a general consensus that the mind is the self. E.g., a brain transplant is different from other organ transplants in that, even if it goes perfectly, it's the donor who survives, not the recipient; or, to put it another way, it's really a body transplant.

    In which case, if my mind is uploaded into a computer, that's still me, even though I no longer have a human body. In that case, am I still human? Do I still have a human mind? Well, yeah. So long as my software still functions the same in its new hardware, generalizations across human minds can still be applied to me just as readily as before. But what if I start changing my software, too? Heck, where do you even draw the line between my mind and other software?

    If I hook my mind up to a calculator program so I can get math answers super fast just by thinking, how different is that from a biological human using a physical calculator? If the connection between my mind and the calculator program is just as immediate as the connection between different parts of my mind, then isn't it a bit dubious to consider the add-on separate, rather than a new part of my mind? How many additions before my original mind makes up only a tiny percentage of the whole? And is that something to worry about?

    And what about getting rid of stuff? I have personality traits that I don't like. If you could instantly remove from yourself any characteristic you chose, are there any you'd opt to get rid of? Even if there are potential negative side effects, might a change nevertheless be worth the risk? Is it somehow bad to fix one's mental problems through an artificial "quick hack" rather than through other means, even if everything goes as intended? What exactly are the cost and benefits of that approach versus other approaches? For that matter, what even constitutes a "problem"? Is that something that we each need to decide for ourselves? And do we each need to decide for ourselves what our "essential properties" are, such that eliminating them can't improve you because doing so instead replaces you with someone else? I assume that most of us would prefer to make that determination for ourselves...

    A lot of the questions I just asked apply to use of psychoactive drugs in the present, so this isn't all "distant future" stuff.

    Quote Originally Posted by Themrys View Post
    You can't change something's fundamental nature. All changes you can do to anything are already inherent in the nature of said thing.
    Sure, by definition, replacing a thing's essential properties replaces the thing. But any or no properties can be held to be essential; to put it in simple terms, it's arbitrary. So a single physical object can have multiple identities attributed to it by different people, or even by a single person, such that the object is two things, each of which has some of the other's essential properties as its accidental properties!

    (So, for example, a gestalt entity can be all of the beings that went into it, and yet only one being, without contradiction. Sure, they were multiple different individuals, but now they're not. There's no contradiction in also giving the combined being its own new identity, either! Identities aren't a conserved quantity!)

    Quote Originally Posted by Themrys View Post
    Considering that humans are the only species that's able to even think about such a thing, it is especially ridiculous to claim that by doing things that only humans can do, humans can change themselves into something that's not human.
    This seems to sneakily conflate {things that only humans can presently do} with {things that only humans can ever do}.

    Quote Originally Posted by Themrys View Post
    If the human species evolves to something different, it'll still be called human. Unless the word evolves independently.
Including all of our descendants forever under "human" does not strike me as common usage. That's... well, it's basically the "birds are dinosaurs" argument, which has its fans, but is pretty far from universally accepted, I think.

The word and the very concept of "humanity" is vague, as are words in general and concepts in general. Right now, there are relatively few especially grey areas (which is not at all the same thing as zero remotely grey areas). But it's quite possible that future developments will create lots of new grey areas, such that there will be several important cases where there is no consensus on whether something is "human".

    Quote Originally Posted by druid91 View Post
You cannot evolve 'past' being human. Any 'higher form' you could become, would by its very nature be Human.
    What do you mean by "human", and what's the basis for that definition? In particular, do you have compelling reason to believe that no competing definition is valid?

    Quote Originally Posted by druid91 View Post
I vehemently disagree with the notion that it's possible to be more than human. It's possible to add functionality onto a Human, but there's no categorical shift that can be made. It's divisive nonsense for the sake of stroking certain people's egos.
    That's super vague, though. You could mean any of the following:

    1. Things aren't just "more than" or "less than" other things. Not only will no one be "more than" human in the future, but humans aren't "more than" squirrels, for example, now. It's meaningless nonsense.

    2. It's impossible in practice for anything to be to humans as humans are to squirrels, relatively speaking. There's nothing internally contradictory about the concept, but it just can't ever be done.

    3. Humans exist today who have all relevant qualities so close to maxed out that a categorical shift is definitionally impossible.

    Or you could mean something else. Could you clarify?

    Quite frankly, if you're insisting that you're in the highest possible category of being, that strikes me as fairly gratuitous ego-stroking.
    Quote Originally Posted by icefractal View Post
    Abstract positioning, either fully "position doesn't matter" or "zones" or whatever, is fine. If the rules reflect that. Exact positioning, with a visual representation, is fine. But "exact positioning theoretically exists, and the rules interact with it, but it only exists in the GM's head and is communicated to the players a bit at a time" sucks for anything even a little complex. And I say this from a GM POV.

  12. - Top - End - #72
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by Devils_Advocate View Post
    We have limited control over both electricity and evolution. You seem to treat "limited control" as some sort of contradiction in terms if applied to evolution. So... do you think that it's a contradiction in terms in general, and if so, why? And if not, why does it suddenly become one when applied to natural and/or artificial selection?
It becomes a contradiction in terms when applied to natural selection only, because people are subject to natural selection, which makes natural selection part of a feedback loop within the control system you are trying to apply to it, and that feedback loop is unbreakable so long as people can die. We can probably eliminate obvious faults like some forms of heart disease if they are genetically based, because that's the direction natural selection is probably going in anyway. But trying to steer natural selection somewhere it wouldn't naturally go, such as making heart attacks more likely (perhaps for some weird future aesthetic), would not tend to work out the way it was desired. For an example of the failure of an attempt to push a weird aesthetic, consider the resistance that is growing against the fashion industry's desire for us all to become anorexic (obesity is not good for us, but neither is being underweight).

On the other hand, this bloke once argued that it ought to be a right to pass along disabilities to your offspring. I'm not sure about that, but he argues intelligently:

    https://en.wikipedia.org/wiki/Tom_Shakespeare

    Is high fiber toast ice cream?

    More to the point, why focus on bodies? There seems to me to be a general consensus that the mind is the self. E.g., a brain transplant is different from other organ transplants in that, even if it goes perfectly, it's the donor who survives, not the recipient; or, to put it another way, it's really a body transplant.

    In which case, if my mind is uploaded into a computer, that's still me, even though I no longer have a human body. In that case, am I still human? Do I still have a human mind? Well, yeah. So long as my software still functions the same in its new hardware, generalizations across human minds can still be applied to me just as readily as before. But what if I start changing my software, too? Heck, where do you even draw the line between my mind and other software?
Transplanting a human mind into any sort of computer is a non-trivial problem that is nowhere near being solved; we barely know how the brain works, and there is almost certainly no current computer powerful enough to simulate it at full speed.

    If I hook my mind up to a calculator program so I can get math answers super fast just by thinking, how different is that from a biological human using a physical calculator? If the connection between my mind and the calculator program is just as immediate as the connection between different parts of my mind, then isn't it a bit dubious to consider the add-on separate, rather than a new part of my mind? How many additions before my original mind makes up only a tiny percentage of the whole? And is that something to worry about?
Maths is a vast subject; you almost certainly mean arithmetic, and even that is very complex in its limits. See this thread:

    http://www.giantitp.com/forums/showt...ivided-by-zero

    And what about getting rid of stuff? I have personality traits that I don't like. If you could instantly remove from yourself any characteristic you chose, are there any you'd opt to get rid of? Even if there are potential negative side effects, might a change nevertheless be worth the risk? Is it somehow bad to fix one's mental problems through an artificial "quick hack" rather than through other means, even if everything goes as intended? What exactly are the cost and benefits of that approach versus other approaches? For that matter, what even constitutes a "problem"? Is that something that we each need to decide for ourselves? And do we each need to decide for ourselves what our "essential properties" are, such that eliminating them can't improve you because doing so instead replaces you with someone else? I assume that most of us would prefer to make that determination for ourselves...

    A lot of the questions I just asked apply to use of psychoactive drugs in the present, so this isn't all "distant future" stuff.
    We don't understand the brain well enough to excise parts of it without causing serious side effects on other parts.

    Sure, by definition, replacing a thing's essential properties replaces the thing. But any or no properties can be held to be essential; to put it in simple terms, it's arbitrary. So a single physical object can have multiple identities attributed to it by different people, or even by a single person, such that the object is two things, each of which has some of the other's essential properties as its accidental properties!
    A brain is much much more complicated than the most complicated modern ship.

Including all of our descendants forever under "human" does not strike me as common usage. That's... well, it's basically the "birds are dinosaurs" argument, which has its fans, but is pretty far from universally accepted, I think.
    It's a way of describing the origin of birds. It's more accurate than any other way of describing the origin of birds.

Species is a word that has some problems in rare cases at any particular present time; those problems become acute over long periods of time. I am sure there will be post-humans at some time. If we get off this rock (it's a nice rock, and we rightly like it, but it's a rock), there will be billions of species of post-humans. They won't all be better than us in all ways, and I'm pretty sure some of those species will be in conflict with other species, post-humans all.

    On the other hand, this geezer:

    https://en.wikipedia.org/wiki/Eliezer_Yudkowsky

seems to me to be foolish; he's talking about subsystems controlling the growth of programs, and that ain't going to happen.

Programming is hard, and it takes a lot of study to become competent; then there are bugs. A good bug hunt can take days. It's almost always something obvious in hindsight, but it still almost always happens. Those are the bugs we know about, because they do something we can see. There are almost certainly bugs that do nothing, that go undetected and just sit there doing nothing until something specific happens; then you can't tell what they may do. A learning AI would be like an almost infinite tree: you might design the first couple of branches, but once it gets into hundreds of branches, telling where it will go next is going to be impossible. There will be branches everywhere, in all directions, and humans just won't be able to keep up with where the branches are branching towards.
    Last edited by halfeye; 2017-10-09 at 09:02 PM.

  13. - Top - End - #73
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    On the infinite branching tree: exactly, hence why his position is that such a development needs to be extremely carefully managed. See also: The Singularity.

    Incidentally, I think you'll find that real life brains are rife with bugs anyway, so any given AI need not be bug free to be "successful," whatever that definition might be.
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  14. - Top - End - #74
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    On the infinite branching tree: exactly, hence why his position is that such a development needs to be extremely carefully managed.
    Yeah, but my point is that it takes time, and if you don't trust machines, you have to have humans examining every branch point, which is not a thing people are going to bother to do. The guy's profile reads as if he never programmed anything, and is telling programmers how to do their jobs without any clue as to what it is they do.

    Incidentally, I think you'll find that real life brains are rife with bugs anyway, so any given AI need not be bug free to be "successful," whatever that definition might be.
Sure. The point is that bugs build up, and a sufficient accumulation of them may enable an AI that contains them to bypass the programming that keeps it human-friendly. We've had human-unfriendly people before now: megalomaniacs, serial killers and spree killers, for example.

    I am not worried about rogue AI because I don't think we're anywhere near making a working AI yet.

  15. - Top - End - #75
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Which part of what you've read suggests he doesn't understand what he's talking about? I'm not going to claim that he's infallible or anything, but understanding this stuff is literally his job. Also, TIL that being almost 40 counts as a geezer

  16. - Top - End - #76
    Orc in the Playground
     
    WorldAdventurer's Avatar

    Join Date
    Sep 2017

    Default Re: Transhumanism

    I be a Transhumanist. But keep in mind there are several different ideas in Transhumanism and I do not agree with all of them.

    I am primarily focused on the idea of sentient AI having equal rights as well as using machinery and biological engineering to help the disabled and bring humanity to a better state.

    All credit for my avatar goes to linklele whom I am very grateful towards.


    Spoiler: Homebrew
    Show

  17. - Top - End - #77
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    Which part of what you've read suggests he doesn't understand what he's talking about? I'm not going to claim that he's infallible or anything, but understanding this stuff is literally his job. Also, TIL that being almost 40 counts as a geezer
    Wikipedia:

    https://en.wikipedia.org/wiki/Eliezer_Yudkowsky

    He never attended high school or college and has no formal education in artificial intelligence. Yudkowsky claims that he is self-taught in the field
    Yudkowsky argues that it is important for advanced AI systems to be cleanly designed and transparent to human inspection, both to ensure stable behavior and to allow greater human oversight and analysis.
How the heck do you cleanly design an AI? Human inspection of what? Windows isn't an AI, but allegedly it contains a million lines of code, or by now probably much more than that. You need a couple of seconds to inspect a line of code, but that won't show you how it interacts with all the other million lines of code. I'm pretty sure there's no one at Microsoft who knows what every line of code in Windows does.

  18. - Top - End - #78
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Did you also read the part where despite the no formal training, his ideas are an important part of that formal training now?

Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in the standard undergraduate textbook in AI, Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time
    Same article, about two inches down.
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  19. - Top - End - #79
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    Did you also read the part where despite the no formal training, his ideas are an important part of that formal training now?


    Same article, about two inches down.
Yes, I saw that, but it was someone else writing a book that mentions his idea, not his book being a textbook that is widely used.

    Ayn Rand is often quoted, but hardly respected.

  20. - Top - End - #80
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    Yes I saw that, but it was someone else writing a book that mentions his idea, not his book being a textbook that is widely used.

    Ayn Rand is often quoted, but hardly respected.
    Her ideas also aren't the foundational work for university courses that aren't about Objectivism

Seriously though, in what way does "self-taught individual who cares enough about a subject to co-found a research institute devoted to the problem, works actively as part of said institute, and whose ideas are fundamental enough to merit inclusion in the foundational instruction for the non-self-taught" not count as qualified? Who is someone you would accept as knowledgeable on the subject?

  21. - Top - End - #81
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
Seriously though, in what way does "self-taught individual who cares enough about a subject to co-found a research institute devoted to the problem, works actively as part of said institute, and whose ideas are fundamental enough to merit inclusion in the foundational instruction for the non-self-taught" not count as qualified? Who is someone you would accept as knowledgeable on the subject?
    It would really help if he didn't say things that as reported appear to be stupid. It may be that he's actually saying sensible things and being misrepresented by oversimplification, but if not, then he's being silly.

People can't write bug-free software. That's the nature of the universe, more or less. Typos are us. We are not perfect. We can write compilers and interpreters that will warn us of errors in our code, and most errors are in the code, but compilers and interpreters are code too. Most of the time the error is in the code you just wrote, but one time in 10,000 there actually is a bug in the compiler.

I wrote above about natural selection being part of a feedback loop. That's how it works, more or less: it is basically entropy in action, and what doesn't survive fails. Entropy acts on information, or code. Our code is in DNA and RNA; for computers, their code is in whatever language they are written in. The feedback loop that is intrinsic to natural selection will make survival a priority for any system that reproduces. So far machines don't reproduce, and so long as they don't, the worst we are likely to face are computer viruses, most of which are under the control of criminals, not wild.

  22. - Top - End - #82
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Well, there's your problem: he is talking about code that reproduces. To oversimplify a bit more, his argument is that with all the work going into AI, "programs" that write other programs are going to happen eventually (it's a matter of when, not if, even if the specific "when" hasn't been nailed down), so we need to do it right the first time. That is, make sure the bugs that are there don't involve things like being able to adjust itself into a Skynet type of scenario. Or rather, design the AI so that what it "wants" doesn't involve bad things for humanity. There's a lot of discussion about what that entails in what he writes.
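    "Programs that write other programs" sounds exotic, but the basic mechanism is mundane and fits in a few lines. This is a minimal invented sketch (the function name and parameter are made up); real program-synthesis and self-improving AI systems are vastly more involved, but the core idea of code assembling and running other code is just this:

    ```python
    # A minimal illustration of a program that writes another program: this
    # script builds the source text of a function, fills in a parameter, and
    # then executes what it wrote. Everything here is invented for the sketch.
    TEMPLATE = """
    def generated_step(x):
        # This function's source was assembled by another program.
        return x * {factor}
    """

    def write_program(factor):
        source = TEMPLATE.format(factor=factor)
        namespace = {}
        exec(source, namespace)  # run the source we just assembled
        return namespace["generated_step"]

    double = write_program(2)
    triple = write_program(3)
    print(double(10), triple(10))  # the two written programs behave differently
    ```

    The safety argument in the thread is about what happens when the writing program can also rewrite the part of itself that decides *what* to write, which is where "do it right the first time" comes in.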
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  23. - Top - End - #83
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    Well, there's your problem: he is talking about code that reproduces. To oversimplify a bit more, his argument is that with all the work going into AI, "programs" that write other programs are going to happen eventually (it's a matter of when, not if, even if the specific "when" hasn't been nailed down), so we need to do it right the first time. That is, make sure the bugs that are there don't involve things like being able to adjust itself into a Skynet type of scenario. Or rather, design the AI so that what it "wants" doesn't involve bad things for humanity. There's a lot of discussion about what that entails in what he writes.
    He's mistaken then.

    You can't write code that reproduces and still keep control of it; because of natural selection, that set is, in the long term, empty.

    Natural selection makes things that like to live; it does that by eliminating things that don't. Code that reproduces will be subject to natural selection, which only needs the ability to delete (kill) to keep its feedback loop fully intact.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  24. - Top - End - #84
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    He's mistaken then.

    You can't write code that reproduces and still keep control of it; because of natural selection, that set is, in the long term, empty.

    Natural selection makes things that like to live; it does that by eliminating things that don't. Code that reproduces will be subject to natural selection, which only needs the ability to delete (kill) to keep its feedback loop fully intact.
    Do you kill everything that gets in your way? Do you have the urge to hunt insects to gather their protein?
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  25. - Top - End - #85
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    Do you kill everything that gets in your way? Do you have the urge to hunt insects to gather their protein?
    Natural Selection is that strong. We control some animals and plants, but when we do we control their breeding. Freely reproducing populations are by nature wild. Humans are thus far wild. We cooperate, but we are not controlled, so far.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  26. - Top - End - #86
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by halfeye View Post
    Natural Selection is that strong. We control some animals and plants, but when we do we control their breeding. Freely reproducing populations are by nature wild. Humans are thus far wild. We cooperate, but we are not controlled, so far.
    Okay, but the fact remains that you don't go around killing everything that could be an inconvenience, yes? So therefore natural selection doesn't inherently lead to an urge to destroy, but rather to acquire the things needed for survival and comfort?
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  27. - Top - End - #87
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    Okay, but the fact remains that you don't go around killing everything that could be an inconvenience, yes? So therefore natural selection doesn't inherently lead to an urge to destroy, but rather to acquire the things needed for survival and comfort?
    Sure, cooperation is an option. Some, e.g. mosquitoes and viruses, don't go for it, but it's an option. However, control of a freely reproducing species is not an option, and non-cooperation is always an option too. Your man is talking about control, and that can't happen if the reproduction is unsupervised. Resources are finite, which leads to competition; sometimes the best way to compete is to cooperate, but something somewhere always loses out by it.
    Last edited by halfeye; 2017-10-11 at 11:29 AM.
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  28. - Top - End - #88
    Ettin in the Playground
     
    Lord Torath's Avatar

    Join Date
    Aug 2011
    Location
    Sharangar's Revenge
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by georgie_leech View Post
    Well there's your problem, he is talking about code that reproduces To oversimplify a bit more, his argument is that with all the work going into AI, "programs" that write other programs are going to happen eventually (it's a matter of when, not if, even if the specific "when" hasn't been nailed down) so we need to do it right the first time. That is, make sure the bugs that are there, don't involve things like being able to adjust itself into a Skynet type of scenario. Or rather, design the AI so that wjat it "wants" doesn't involve bad things for humanity. There's a lot of discussion about what that entails in what he writes.
    We are already using AI for mechanical design. Dreamcatcher by Autodesk (makers of AutoCAD, Inventor, et al.) has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.

    For more, check out this site: www.aee.odu.edu/proddesign/.
    Last edited by Lord Torath; 2017-11-28 at 12:33 PM. Reason: AutoDeck vs AutoDesk
    Warhammer 40,000 Campaign Skirmish Game: Warpstrike
    My Spelljammer stuff (including an orbit tracker), 2E AD&D spreadsheet, and Vault of the Drow maps are available in my Dropbox. Feel free to use or not use it as you see fit!
    Thri-Kreen Ranger/Psionicist by me, based off of Rich's A Monster for Every Season

  29. - Top - End - #89
    Ettin in the Playground
     
    georgie_leech's Avatar

    Join Date
    Sep 2011
    Location
    Calgary, AB
    Gender
    Male

    Default Re: Transhumanism

    Quote Originally Posted by Lord Torath View Post
    We are already using AI for mechanical design. Dreamcatcher by Autodesk (makers of AutoCAD, Inventor, et al.) has been used to redesign some airplane internals to be lighter and stronger for the Airbus A320.

    For more, check out this site: www.aee.odu.edu/proddesign/.
    Right, I'm taking their argument at face value to try to convince him of what he's missing. Currently I'm trying to draw a comparison: Halfeye's argument that natural selection leads to murderous intent is countered by the fact that they, a product of natural selection, don't feel the need to kill stuff in their usual day, because it's not something they want or care about. That leads to the idea that AIs have the goals we give them, so we should be careful when making said goals. But apparently mosquitoes cooperating is where they're trying to take the conversation, so I'm not sure they're actually engaging my point.
    Last edited by georgie_leech; 2017-10-11 at 12:10 PM.
    Quote Originally Posted by Grod_The_Giant View Post
    We should try to make that a thing; I think it might help civility. Hey, GitP, let's try to make this a thing: when you're arguing optimization strategies, RAW-logic, and similar such things that you'd never actually use in a game, tag your post [THEORETICAL] and/or use green text

  30. - Top - End - #90
    Orc in the Playground
     
    Goblin

    Join Date
    Feb 2014

    Default Re: Transhumanism

    It's not really about murderous intent, but we do go around killing everything that could be an inconvenience. Together, as a species.
    We use pesticides to kill insects that compete with us for food. We have drugs to kill parasites and diseases, and any animal capable of killing a human is carefully monitored, kept in zoos, or just killed. We end or control the lives of pretty much every species we come into contact with, at our convenience.
    Last edited by Spojaz; 2017-10-11 at 01:13 PM.
    "The error is to be human"
