
Thread: The Singularity

  1. - Top - End - #121
    Ogre in the Playground
     
    RPGuru1331's Avatar

    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    If the body produces it, it consumes resources, and that which consumes resources without utility is harmful to the organism.
    A fine answer in engineering school, but in actual biology, if it doesn't decrease your reproductive success, it isn't harmful, even if it's minutely less efficient. And we have real world examples of these things happening.


    I'm straining to see what this has to do with the question, since human researchers surely aren't less capable of speculative experiment.
    Not when Basic Research is put on hold, as it so often is even without a crisis.

    It doesn't. But my point is that even inelegant approaches tend to get there eventually. Remember: I'm giving a rough timescale of anywhere between a decade (if some peerless genius has a sudden epiphany under ideal research conditions) and a millennium (through brute force and ignorance, trial-and-error in some post-doomsday bunker) for sentient AI. I can't really rule out either possibility, but I'd prefer to cover my bets.
    Then don't treat it as a threat of immediate, overriding importance.
    Asok: Shouldn't we actually be working?
    And then Asok was thrown out of the car.

  2. - Top - End - #122
    Ogre in the Playground
     
    RPGuru1331's Avatar

    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    Let us, very conservatively, assume that the average person makes 32K per year, works for 30 years, and that a 20 point IQ boost correlates to, on average, a 2K increase in salary. That works out to 60K extra in earnings, for a one-time cost of 10K at birth, for a kid that will likely cost you well upwards of 10K per year for 20 years regardless. And this is assuming that costs don't come down drastically by this point, or that robots haven't taken most of the jobs where IQ doesn't matter.
    oh gosh, just 10k up front. Do you know any parents? Further, you have genes correlated to an increase in IQ, which itself has a tenuous connection to income. That's plausibly a safe bet, but it's not the safest one.

    Again, I realise that these financial costs aren't nothing at the moment, which is why I advocate state intervention to make these services universally available.
    10K times 350 million children comes to something like 3.5 trillion dollars. 7 billion children is 70 trillion dollars, each in under a generation.
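    The back-of-envelope figures traded in this exchange can be checked in a few lines. Every number below is one of the posters' own illustrative assumptions, not real data:

```python
# Sanity check of the figures quoted in the posts above.
# All inputs are the posters' illustrative assumptions, not data.

salary_boost = 2_000            # assumed extra salary per year from a 20-point IQ boost
working_years = 30
lifetime_gain = salary_boost * working_years
print(lifetime_gain)            # 60000 extra lifetime earnings vs. a 10K one-off cost

cost_per_child = 10_000
print(cost_per_child * 350_000_000)    # 3500000000000  -> 3.5 trillion for 350M children
print(cost_per_child * 7_000_000_000)  # 70000000000000 -> 70 trillion for 7B children
```

    Running the multiplications out makes the orders of magnitude easy to keep straight.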

    The human genome project wasn't even finished 10 years ago, and there's already a serious prospect, for better or worse, of eliminating Down's Syndrome. You need to extract your head from your ass.
    If even the science reporting puts that as "Beyond the Horizon", it is not particularly in range.

  3. - Top - End - #123
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by RPGuru1331 View Post
    Then don't treat it as a threat of immediate, overriding importance.
    I'm sorry, did I not mention that bit about financial trading? We are, like, in the middle of a recession right now? That's bad, right? Things that contribute to the likelihood of bad events should be considered threats, right? But no, please, let's wait another 10, 20, 50 years for this to actually happen. I'm sure we'll take it in stride.

    I am not denigrating the importance of social policies and political will here. But these technologies are going to happen. And no social policy or political movement, however well-intentioned, can function on the basis of denying reality.
    Quote Originally Posted by RPGuru1331 View Post
    oh gosh, just 10k up front. Do you know any parents?...
    I know of plenty willing to take out a 200K mortgage for the sake of a house they could just as easily rent. I'm not saying this will happen in a single generation. But given the technology already exists, widespread adoption within a century or two is not, I think, infeasible.
    If even the science reporting puts that as "Beyond the Horizon", It is not particularly in range.
    You just don't get it. This is already happening.
    Last edited by Carry2; 2012-11-24 at 10:13 PM.

  4. - Top - End - #124
    Ogre in the Playground
     
    RPGuru1331's Avatar

    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    I'm sorry, did I not mention that bit about financial trading? We are, like, in the middle of a recession right now? That's bad, right? Things that contribute to the likelihood of bad events should be considered threats, right? But no, please, let's wait another 10, 20, 50 years for this to actually happen. I'm sure we'll take it in stride.
    "Immediate" is not defined as "Whenever". Your rant doesn't fit well in light of that.

    I am not denigrating the importance of social policies and political will here. But these technologies are going to happen. And no social policy or political movement, however well-intentioned, can function on the basis of denying reality.
    Bold statements like "Going to happen", on speculative technology, strike me as denying reality. At any rate, I repeat for the umpteenth time: a future threat should be addressed, just not treated as a dire existential threat coming up tomorrow.

  5. - Top - End - #125
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by RPGuru1331 View Post
    Bold statements like "Going to happen", on speculative technology, strikes me as denying reality.
    Remind me again how examples of selective abortion policies from ten years ago count as 'more speculative' than predictions about global ocean temperatures over the next century?

  6. - Top - End - #126
    Ogre in the Playground
     
    RPGuru1331's Avatar

    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    Remind me again how examples of selective abortion policies from ten years ago count as 'more speculative' than predictions about global ocean temperatures over the next century?
    ...are you talking about a cure for Down's Syndrome via Gattaca, or the ability to detect it by ultrasound? Because neither your link to the Wikipedia page, nor your mention of this, fits well with talk of genetic manipulation.

  7. - Top - End - #127
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Actually, I believe it's amniocentesis. Other forms of embryonic screening raise precisely the same ethical concerns, except that they can apply to a far wider range of conditions with greater precision; direct genetic manipulation goes further still.

    But answer the question: How do you justify demanding immediate action over climate change problems that may not hit home for decades, yet dismiss concerns over genetics, robotics and nanotech as impossibly far-fetched? I support immediate action on climate change. I believe projections on the subject are entirely plausible. I'm just not willingly blinding myself to equally plausible disaster scenarios associated with GNR research.
    Last edited by Carry2; 2012-11-24 at 10:53 PM.

  8. - Top - End - #128
    Ogre in the Playground
     
    RPGuru1331's Avatar

    Join Date
    Oct 2008

    Default Re: The Singularity

    But answer the question: How do you justify demanding immediate action over climate change problems that may not hit home for decades, yet dismiss concerns over genetics, robotics and nanotech as impossibly far-fetched? I support immediate action on climate change. I believe projections on the subject are entirely plausible. I'm just not willingly blinding myself to equally plausible disaster scenarios associated with GNR research.
    You're assuming your 'equally plausible' conclusion. The justification is simple; Climate Change isn't speculative, particularly since the damage done is actually harsher than the worst projections. It's a thing we know exists, right now, and requires immediate action to stop. And you posit I should give it equal weight to problems with technologies that aren't yet a gleam in our collective eye because?

    And don't attempt to claim that real marketbots, or extant discrimination (both bad, but neither apocalyptic) are things you were talking about 2 or 3 pages ago. Y'all were talking about averting robot rebellions and discrimination based on super science. Plausible, in the long run. But far from immediate.
    Last edited by RPGuru1331; 2012-11-25 at 04:47 AM.

  9. - Top - End - #129
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by RPGuru1331 View Post
    You're assuming your 'equally plausible' conclusion. The justification is simple; Climate Change isn't speculative, particularly since the damage done is actually harsher than the worst projections...
    Uh... no. The worst projections involve a 5-degree increase in global temperatures triggering the release of undersea methane clathrates, driving a further 5-degree increase and a mass extinction event similar to the end of the Permian.

    We'd have noticed that. Honest.

    Now, I can't entirely rule out this scenario over the next several decades, given certain extrapolations from current trends, but it is speculative. Speculative doesn't mean 'wrong', and it certainly doesn't mean 'okay to ignore'.

    Likewise, I do not believe it is unreasonable to look at existing trends in the development of GNR techs, extrapolate that to some potentially threatening scenarios, and try to head those off at the pass.


    Also, for the record: I am not convinced that selective abortion, embryonic screening, or outright Gattaca babies are intrinsically bad things. Nobody would disagree that certain forms of congenital disease are severe enough to warrant prevention, and unless you have a problem with abortion itself, it's hard to see why being selective about it becomes suddenly wrong. And a world where the endowments of a Satchmo, Maradona, Curie or Gandhi are considered normal, possibly even in combination, is pretty damn exciting. The problem is that there's no sharp dividing line between:
    * Severe disabilities
    * Mild disabilities that still permit good quality of life
    * Genes that confer benefits as well as drawbacks
    * Genes that are of benefit only by comparison to those that lack them
    * Genes that benefit society but not always the individual
    * 'Being weird', because it can lead to 'social problems'
    * Genes that predispose toward any form of emotional instability
    * Genes that do not conform to a prevailing ideal of physical beauty
    * Genes that result in lower-than-average intelligence
    * Genes that result in lower-than-average health
    * Anything short of perfection, whatever that means

  10. - Top - End - #130
    Banned
     
    Terraoblivion's Avatar

    Join Date
    Mar 2008
    Location
    Århus, Denmark
    Gender
    Female

    Default Re: The Singularity

    Yes, it is true that the worst case scenarios for global warming are not happening yet. However, immediate and severe consequences for millions of people are happening. Annual flooding in Bangladesh has been growing steadily more severe for years, ruining farmland and displacing large proportions of the population in one of the poorest and most densely populated countries on the planet. Similarly, much of the Sahel has been in drought for years now, with the poorest country in the world, *****,* hit especially severely, starvation due to failed harvests having become the norm. East Africa has likewise had drought more years than not for the last decade. Closer to home, warmer seas and rising sea levels meant that New York harbor had far less capacity to contain the storm surges of Hurricane Sandy, just as hurricanes have been growing larger and more frequent in both the Atlantic and the Pacific over the last decade. Meltwater from Greenland's melting ice also caused major flooding around the country's main airport at Kangerlussuaq just a few months ago.

    These are all more or less catastrophic results of the global warming we're already experiencing and have been for years. Not just that, melting of the polar ice caps is proceeding faster than projected, suggesting that tipping points might be closer than expected; even if not, it suggests that the kind of problems we're seeing are not going to go away on their own and are likely to just become more prevalent.

    Also, looking at your projections for genetics... You don't really know how the field works, right? Most of the selective pregnancies we're seeing have basically nothing to do with actual genetics, but simply with knowing the signs of developmental problems and having the scanning technology to spot them during development. And choosing to abort fetuses showing major defects is quite a far cry from actively messing with genes. It's a completely passive process of monitoring and abandoning those that are not viable.

    Not just that, Down's Syndrome is a pretty bad choice of example, as it is caused by a defect visible on the chromosomal level, as well as causing visible physical defects that can be spotted during development. The nature of the defect causing it has been known for decades and is easily spotted if the relevant technologies are employed to test the genetic material of the fetus in question. What has changed is better and more frequent scans, along with a shift in attitude regarding abortion, making it possible and easy to spot in advance and making people inclined to abort a fetus with Down's. It has nothing to do with advances in actual genetics.

    In general, genetics is hard and full of machinery we have no clue about, and the general consensus is that most complex phenomena, like intelligence, don't have simple genes to mess around with. Also, almost no genes relate solely to one trait, and we still don't know how genes are actually decoded, other than that RNA and supposed junk DNA are somehow involved. There's a hell of a lot of very basic research to be done before we can modify genes on a very detailed level, especially once ethical restrictions on human experimentation enter the picture.

    *And thanks to word censoring, it's impossible to write the name of the world's poorest country. It's called the same as Nigeria, just without the -ia at the end.
    Last edited by Terraoblivion; 2012-11-25 at 09:16 AM.

  11. - Top - End - #131
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Terraoblivion View Post
    Yes, it is true that the worst case scenarios for global warming are not happening yet. However, immediate and severe consequences for millions of people are happening...
    And robots are already taking over the jobs that used to be available to the least skilled. Trading algorithms already exert influence on our systems of commerce. GM crops are already leading to monopolistic control of seed banks. Parents are deliberately terminating, and deliberately creating, deaf children. The scale of these problems is smaller for now, but I don't want negative projections surrounding these technologies to be dismissed as irrelevant alarmism when their long-term implications should be abundantly clear to anyone with an ounce of imagination. And especially if responsible uses of these technologies could help to address the very factors leading to, e.g., climate change.
    And choosing to abort fetuses showing major defects is quite a far cry from actively messing with genes. It's a completely passive process of monitoring and abandoning those that are not viable...
    There is no long-term ethical distinction here. Biological evolution proceeds through exactly the same methods- culling of 'non-viable' specimens- and still leads to drastic modifications of the organism over time. We know what selective breeding can do, for better or worse, to other species, and active gene modifications are only going to magnify the effects upon ours.

  12. - Top - End - #132
    Banned
     
    Terraoblivion's Avatar

    Join Date
    Mar 2008
    Location
    Århus, Denmark
    Gender
    Female

    Default Re: The Singularity

    My point is that you're extrapolating from very basic technologies within these fields to say that very complex, hypothetical results are inevitable and imminent. My point was never about ethics; it was about science and the vast amounts of scientific advancement in genetics, robotics and AI research needed to start developing the technologies you're worried about. You seem to think that all genetics, robotics and AI research is really just the same. That sapient AIs are just a bit of extrapolation away from purely mathematical algorithms that could theoretically be done on paper if people were smart enough and had sufficient time, and that gene tailoring requires no more knowledge of how the hell genes work than spotting defects and terminating pregnancy in those cases.

    Science does not work like that, nor do we have a consistent theory for how technology will be developed based on knowledge that we do not possess, on account of us not knowing the details of what we don't know. In this, these problems, created by specific technologies getting developed, are fundamentally dissimilar from global warming, which is simply predictions based on existing knowledge of the climate of the planet. Predictions that have so far either been proven true or erred on the side of underestimating the scope of events.

  13. - Top - End - #133
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Terraoblivion View Post
    My point is that you're extrapolating from very basic technologies within these fields to say that very complex, hypothetical results are inevitable and imminent.
    I am not saying that these specific results are inevitable, or 'imminent' in the sense of 'within the next decade', or whatever. I am saying that the widespread adoption of these technologies in some form is extremely likely, that some scenarios for their application are disturbing, and that policies to promote better scenarios are likely to be more effective if implemented in advance.
    My point was never about ethics; it was about science and the vast amounts of scientific advancement in genetics, robotics and AI research needed to start developing the technologies you're worried about.
    Let me repeat myself: These technologies already exist. They already have market applications. They already raise ethical quandaries. There are no further technical barriers to their widespread adoption. It is extremely short-sighted to imagine that improvements in manufacturing and economies of scale will not considerably drive down associated price points and lead to wide-scale adoption over the next few decades, as we have seen happen with a vast range of other consumer technologies.

    And just as the great majority of climate scientists agree that anthropogenic climate change is likely to have a marked impact on our planet, I think it is fair to say that most experts working in the field of genetics, AI and nanotech agree that these technologies will have large-scale social impacts. (Not necessarily for the worse, but certainly large impacts.)
    You seem to think that all genetics, robotics and AI research is really just the same. That sapient AIs are just a bit of extrapolation away from purely mathematical algorithms that could theoretically be done on paper if people were smart enough and had sufficient time...
    What on earth is this supposed to prove? The quantum-mechanical interactions of neurotransmitters are 'purely mathematical algorithms that could theoretically be done on paper, etc.' That's an extraordinarily brute-force reductionist approach to the problem, and I suspect that more holistic abstractions would be equally tractable, but at some level or another the mind does obey laws, and laws are amenable to simulation.
    ...and that gene tailoring requires no more knowledge of how the hell genes work than spotting defects and terminating pregnancy in those cases.
    With respect, I don't think you understand how evolution works. If an entirely blind and unplanned process that works purely through subtraction of 'less viable' specimens from the gene pool can produce drastic alterations in phenotype, it should be abundantly clear that conscious interventions on this front will operate many orders of magnitude faster.

  14. - Top - End - #134
    Banned
     
    Terraoblivion's Avatar

    Join Date
    Mar 2008
    Location
    Århus, Denmark
    Gender
    Female

    Default Re: The Singularity

    Dude, there's a vast technical difference between sorting fertilized eggs to avoid the ones with gross damage to their chromosomes and actively tailoring genes. We don't know how genes are read, we have only a vague idea of what a few of them do and most of them are still black boxes. We don't even know why different species have different numbers of genes or grossly different setups for how those genes are organized. The selective breeding we're seeing now is fundamentally unlike selectively optimizing for intelligence or whatever, and we're only dipping our toes in all the basic research needed to even begin that kind of selective breeding. Hell, for all we know genetics might be such a tangled mess that designer babies are impossible. To use an analogy I saw a geneticist use in a Danish newspaper, genes are a cookbook written in a language we don't know.

    Similarly, the algorithms on the stock market are not AIs. They're mathematical models created to mindlessly and repetitively carry out a set of directions based on economic models fed them by humans when they were coded. They're as intelligent or sentient as the weather, they just happen to be made by people rather than emergent systems based on physics.
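    To make that distinction concrete, here is a toy sketch of the kind of fixed rule such a marketbot mechanically applies. It is a hypothetical illustration (a simple moving-average crossover), not any real trading system:

```python
# Toy "marketbot": a fixed moving-average crossover rule applied mechanically
# to a price stream. It has no model of the world and no understanding of
# what it trades; it just carries out the directions it was coded with.

def moving_average(prices, window):
    """Mean of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy', 'sell', or 'hold' from the fixed crossover rule."""
    if len(prices) < long:
        return "hold"               # not enough data for the long average yet
    if moving_average(prices, short) > moving_average(prices, long):
        return "buy"                # short-term average crossed above long-term
    return "sell"

prices = [100, 101, 103, 102, 104, 107, 105]
print(crossover_signal(prices))     # prints "buy", with no idea why prices rose
```

    Everything the rule "decides" was fixed at coding time by its human authors; nothing resembling judgment happens at run time.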

    Finally, I'm sorry to say, but you don't know evolution. At all. Your portrayal of it is a kind of garbled, somewhat outdated version of what's taught in middle school biology. It is not creating better and better creatures, it is simply allowing creatures suitably adapted to their environment to survive. Sometimes that means losing complexities in order to be more efficient in a given environment, sometimes it means becoming more versatile. Sometimes it simply means becoming different in ways that are completely irrelevant for survival. There's nothing teleological about it, and you cannot in any meaningful sense talk about it having a direction. Nor does it purely remove less viable specimens; it also removes specimens that happen to fail arbitrary tests of coloration or are unlucky enough to die in accidents. It also works to a very large degree by dormant mutations somehow getting triggered in ways we plain and simple don't understand. This stuff is hard and we're still only starting to learn about it.

    I'd also like to point out that we cannot speed anything up with conscious intervention until we know how to intervene. Running in blindly and messing around with genes will most likely just produce an unviable fetus and result in a miscarriage. Until we have a very thorough knowledge of how genetics works and what does what, all we can do is remove fetuses we believe to be less viable, and that is pretty much restricted to birth defects gross enough that they can be spotted easily.

    Basically, just stop talking about science until you learn even the basics about the sciences in question as well as scientific theory. Your ideas of what will undoubtedly happen in the foreseeable future are at heart gibberish based on a not particularly learned layman's understanding of basic science that you try to base huge, elaborate projections on. So quite frankly, I think I'll just leave it here.

  15. - Top - End - #135
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Terraoblivion View Post
    Dude, there's a vast technical difference between sorting fertilized eggs to avoid the ones with gross damage to their chromosomes and actively tailoring genes. We don't know how genes are read, we have only a vague idea of what a few of them do and most of them are still black boxes...
    And yet, even that information has been sufficient to identify the root causes of a significant variety of genetic diseases. We are already establishing significant correlations between measured IQ and an assortment of genes (and the two groups are not mutually exclusive.) Is this work still in the early stages? Yes. But it's foolish to assume that our knowledge on the subject will simply stand still, given that only 50 years separated DNA's identification as the mechanism of heredity from the full sequencing of the human genome.

    I am well aware that tampering with these genes without a fuller understanding of the complexities of their interaction could lead to adverse side-effects. To give but one example, milder forms of autism spectrum disorders seem to confer advantages in math and science, and it is conceivable that either selection for such skills might inflict debilities in social interactions, or conversely that selection for social aptitude might eliminate useful genes from the population.

    That is part of the reason why I am wary about the application of this technology. But I think, as the (individual or social) economic advantages to gene-tailoring become clearer, as prices come down, and as the functions of various genes become better mapped, the temptation to apply these technologies will be too powerful for the market to ignore. I am also concerned that an outright ban on such technology would lead to outcomes similar to those of bans on abortion and contraception: a reduction in quality of service and an attendant rise in the health risks associated with the procedure.
    Finally, I'm sorry to say, but you don't know evolution. At all. Your portrayal of it is a kind of garbled, somewhat outdated version of what's taught in middle school biology. It is not creating better and better creatures, it is simply allowing creatures suitably adapted to their environment to survive.
    Nowhere in my description do I suggest that the results of embryonic screening will be inherently superior. If you review my earlier post, you will see that I go to some pains to outline how embryonic screening could lead to arguably-bad long-term outcomes, because traits prioritised within our current social framework are not always intrinsically superior.

    This is the point I am making: Simply saying 'let free competition decide' does not lead to inherently optimal outcomes. What it does often lead to is transformative change. If we want that change to be for the better, rather than for the worse, I believe that far-sighted regulation of this industry needs to be on the table.
    I'd also like to point out that we cannot speed anything up with conscious intervention until we know how to intervene.
    The technology needed to insert hereditary fluorescent genes from jellyfish into mouse neurons already exists. These are genes that didn't even belong to the same species, and combining the two was still feasible (albeit with a lot of false starts.) The technology needed to deliver foreign genes to the cells of adult organisms already exists (albeit limited to surface tissues.) A knowledge of the genes involved in intelligence and health is in the early stages, but developing rapidly.

    It requires only a very modest extrapolation of current trends to imagine these technologies being combined to permit direct genetic modification of a developing embryo. Will it take time to develop and refine these technologies? Absolutely. But even our existing, very crude techniques using very limited knowledge, are seeing practical market applications. It seems unlikely that our knowledge and techniques will not have substantially advanced 50 years from now.
    Your ideas of what will undoubtedly happen in the foreseeable future are at heart gibberish based on a not particularly learned layman's understanding of basic science that you try to base huge, elaborate projections on.
    I freely confess that my knowledge of the subject is lamentably incomplete, but as I mentioned earlier, there appear to be a number of extremely qualified experts in these fields who agree that technology of this type is very likely to have a transformative social impact in the 21st century. I am willing to take their word on the subject, to the same extent that I am willing to grant that experts in climatology probably know what they are talking about. And to the extent that my limited understanding permits, I agree with their consensus on this point.

  16. - Top - End - #136
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Similarly, the algorithms on the stock market are not AIs...
    Yes (though see my remarks that any AI-that-works is no longer considered AI.) But this is why I am concerned by the possible side-effects of actual, cutting-edge AI being put to work on stock market picks. If relatively crude algorithms can do economic damage today, more advanced (but irresponsibly programmed) algorithms in the future could well do proportionately greater harm. (Conversely, I can see great potential benefits to closely related technologies being used to, for example, provide medical diagnosis services to third-world nations whose doctors have all fled to greener pastures.)

    The problem that I see in this thread is generally one of claims to the tune of "X will probably never happen in my lifetime" when, in some cases, it was happening 10 years ago with significantly cruder technology than we have now. But that's okay, because Moore's Law clearly required the daily sacrifice of a thousand vestal virgins on AMD's corporate altar, and that won't be viable once population growth stalls.

  17. - Top - End - #137
    Ogre in the Playground
     
    RPGuru1331's Avatar

    Join Date
    Oct 2008

    Default Re: The Singularity

    Nowhere in my description do I suggest that the results of embryonic screening will be inherently superior.
    With respect, I don't think you understand how evolution works. If an entirely blind and unplanned process that works purely through subtraction of 'less viable' specimens from the gene pool can produce drastic alterations in phenotype, it should be abundantly clear that conscious interventions on this front will operate many orders of magnitude faster.
    If you review my earlier post, you will see that I go to some pains to outline how embryonic screening could lead to arguably-bad long-term outcomes, because traits prioritised within our current social framework are not always intrinsically superior.
    Congratulations on not addressing the point raised by saying you don't know jack about evolution. Seriously, you're behind grade school on the matter if 'most mutations are neutral' is a revelation to you.

    And no, I'm not arguing that embryonic screening could be bad.

    This is the point I am making: Simply saying 'let free competition decide' does not lead to inherently optimal outcomes. What it does often lead to is transformative change. If we want that change to be for the better, rather than for the worse, I believe that far-sighted regulation of this industry needs to be on the table.
    Who said anything about free competition, besides you?** And again, what part of this necessitates it be treated as a breathless, immediate emergency?

    Likewise, I do not believe it is unreasonable to look at existing trends in the development of GNR techs, extrapolate that to some potentially threatening scenarios, and try to head those off at the pass.
    Island nations are being abandoned, Moscow is losing more and more contact with Vladivostok as their rail line grows increasingly inoperable, the Caribbean has been utterly sacked by hurricanes for the last 10 years, Australia, Africa, and North America are experiencing ever-increasing drought, and monsoon-dependent nations all report massive problems with the monsoon, and you're going to pretend all that is merely as plausible as robot rebellions or Gattaca, and that you're not assuming your conclusion.

    Yes (though see my remarks that any AI-that-works is no longer considered AI.)
    If I'm understanding Terra Oblivion correctly, she's stating that marketbots are on par with extant spambots in their 'AI', which is to say they can work out how to buy and sell on the site itself, and don't make intelligent decisions on what to buy and sell. Which isn't far from what I remember from them, to say the least.

    The problem that I see in this thread is generally one of claims to the effect that "X will probably never happen in my lifetime" when, in some cases, it was happening 10 years ago with significantly cruder technology than we have now
    Blatant lies. Every time you've said this, it's not really even a similar thing. Amniocentesis was your basis to say a 'cure for Down's Syndrome via genetic tailoring' was almost here, for chrissakes. Almost everything* you've claimed is totes a seriously incoming thing is at best in the concept stage. We're nowhere near the point to enact what you say is an imminent problem. And once again, you have to forget that I say these things should still be handled ahead of time, just not treated as imminent.

    But that's okay, because Moore's Law clearly required the daily sacrifice of a thousand vestal virgins on AMD's corporate altar, and that won't be viable once population growth stalls.
    Do you enjoy misrepresenting arguments as much as you do aping better writers?

    *That 'almost' doesn't mean I found anything beyond the concept stage; it means I might have missed one of your claims.

    **Also, competition, both in markets and in evolution, isn't really the norm. The former's norm is collusion; the latter's is best referred to as creating different niches. There's a particular word for it I'm forgetting, but competition is the last resort. It comes up a lot despite that, but it isn't really the first choice.
    Last edited by RPGuru1331; 2012-11-25 at 05:39 PM.
    Asok: Shouldn't we actually be working?
    And then Asok was thrown out of the car.

  18. - Top - End - #138
    Titan in the Playground
     
    Tyndmyr's Avatar

    Join Date
    Aug 2009
    Location
    Maryland
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    Let me repeat myself: These technologies already exist. They already have market applications. They already raise ethical quandaries. There are no further technical barriers to their widespread adoption. It is extremely short-sighted to imagine that improvements in manufacturing and economies of scale will not considerably drive down associated price points and lead to wide-scale adoption over the next few decades, as we have seen happen with a vast range of other consumer technologies.
    Hi. I'm a software engineer with an extensive background in R&D. I assure you, real AI of the "like a human" type is still a pretty long ways off. Long enough that anyone who is confident of a timetable for it is pretty much just making stuff up. There's some nifty work being done on AI, but not nearly as much as reporting would make you think... a strange amount of AI reporting is about rather unimportant chatbot results on Turing tests and the like, not actual discoveries. We have some fairly good results with simulations within a specific domain (say, playing chess), but this is wildly different from doing everything a human does.

    Now, I'm not a biologist in the slightest, but I do keep up a bit, and I know that equating testing/selection to post-hoc changing is wildly inaccurate. Detecting and avoiding or selecting for something in advance by choosing to keep the fetus with the trait you want... not that crazy. Entirely reasonable. But basically nothing at all like modifying genes after the fact. Retroviruses can, theoretically, be used to do this, but calling this challenging is underselling the difficulty. As a very, very short list of possible complications that I'm sure is incomplete: retroviruses mutate; retroviruses have an annoying tendency to insert changes at least semi-randomly; so far as I know, no retrovirus will hit all DNA throughout the body; and of course, complications like chimerism are hideously nasty to sort through.

    I don't deny that the behavior of the human mind can be simulated, but I do contest that mere extrapolation based on Moore's Law is a good way to estimate when it will be. Moore's Law literally has to fail at some point. Nobody really knows when, but infinitely fast computers can't be possible, thanks to the speed of light. Since the entire premise of the singularity is based on the approach of infinitely fast change, it is absolutely guaranteed to fail.
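    The physical-limit point can even be put in rough numbers. A back-of-the-envelope sketch in Python, where the 22 nm starting node, the two-year doubling cadence, and the 0.2 nm "one silicon atom" floor are all assumed round figures rather than actual roadmap data:

```python
# Back-of-the-envelope only: the starting node, cadence, and atomic
# floor below are assumed round numbers, not measured roadmap data.
import math

feature_nm = 22.0      # assumed leading-edge process node, ca. 2012
atom_nm = 0.2          # rough diameter of a silicon atom
doubling_years = 2.0   # classic Moore's Law cadence (transistor count)

# Doubling transistor count on a fixed die area shrinks linear
# feature size by sqrt(2), so count the sqrt(2) shrinks available.
shrinks = math.log(feature_nm / atom_nm, math.sqrt(2))
years_to_limit = shrinks * doubling_years

print(round(years_to_limit, 1))  # on the order of a few decades
```

    Whatever the exact inputs, the shape of the answer holds: exponential shrinking runs out of atoms in decades, not centuries.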

    Quote Originally Posted by Carry2 View Post
    Yes (though see my remarks that any AI-that-works is no longer considered AI.) But this is why I am concerned by the possible side-effects of actual, cutting-edge AI being put to work on stock market picks. If relatively crude algorithms can do economic damage today, more advanced (but irresponsibly programmed) algorithms in the future could well do proportionately greater harm.
    This seems fairly unfounded, as the problem with existing algorithms has nothing to do with being too smart. No, it's about speed. The actual trades they are making generally boil down to simple arbitrage; the complications come in because of the ludicrous speed (which, of course, enables ludicrous volume, etc). So, when something goes wrong, it frequently goes wrong rapidly many, many times. Making the decision-making agents involved smarter will, if anything, improve this.

    This is the point I am making: Simply saying 'let free competition decide' does not lead to inherently optimal outcomes. What it does often lead to is transformative change. If we want that change to be for the better, rather than for the worse, I believe that far-sighted regulation of this industry needs to be on the table.
    Pareto efficiency is indeed achieved by a competitive market. If you want optimal efficiency, then yes, you do want lots of free competition. It's a well-known principle of economics.

    Now, if by optimal, you mean something other than optimal efficiency...you might want to specify what that something is. In addition to leaving your desired goals somewhat ambiguous...you've failed to demonstrate how any particular schema of "far-sighted regulation" would lead to them.

  19. - Top - End - #139
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by Tyndmyr View Post
    Now, I'm not a biologist in the slightest, but I do keep up a bit, and I know that equating testing/selection to post-hoc changing is wildly inaccurate.
    I'm not equating the two. I'm simply stating that testing/selection, by itself, is likely to have a transformative social impact, as we discover more and more of which genes account for which human characteristics. If you have, say, a hundred embryos to select from (which is quite possible with artificial fertilisation), and you have a detailed knowledge of which genes will maximise, for example, height, then it's easy to imagine selection techniques boosting each generation by a full standard deviation in that particular attribute.
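    The best-of-n arithmetic behind this can be checked with a quick Monte-Carlo sketch. It assumes a perfectly predictive genetic score for a normally distributed trait, which real prediction is nowhere near, so treat the result as an upper bound rather than a forecast:

```python
# Monte-Carlo sketch of best-of-n selection on a normally distributed
# trait. Assumes a perfectly predictive score (real polygenic
# prediction is nowhere near this), so read it as an upper bound.
import random

random.seed(0)

def selection_gain(n_embryos, trials=2000):
    """Average trait value (in SDs) of the best of n_embryos draws."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, 1.0) for _ in range(n_embryos))
    return total / trials

gain = selection_gain(100)
print(round(gain, 2))  # roughly 2.5 SDs for n = 100
```

    For n = 100 the expected best draw comes out around 2.5 standard deviations, so a one-SD-per-generation boost is, if anything, conservative under these very generous assumptions.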

    Yes, tailored retroviruses have problems. But they're not the only means available for delivering genes (the mouse-asthma example, for instance, did not employ retroviruses.) Yes, for the present, gene therapies are limited to surface tissues, but a developing embryo only a dozen cells wide only has surface tissues.

    Yes, an individual embryo may well suffer serious developmental problems if genes are inserted. But we're not necessarily talking about individual embryos. We could easily be talking about selecting from hundreds of potential embryos, in which case 90%+ failure isn't a big issue.

    And all of this assumes that coming decades will not see major refinements of these technologies.
    I assure you, real AI of the "like a human" type is still a pretty long ways off...
    I don't deny that the behavior of the human mind can be simulated, but I do contest that mere extrapolation based off Moore's law is a good way to estimate when it will be. Moore's law literally has to fail at some point...
    I didn't lay claim to a specific timetable (aside from the remarkably generous timeframe of 'anywhere between a decade and a thousand years'.) Nor am I saying that Moore's Law will spontaneously cause it to pop into being (though, once the basic principle is understood, it seems extraordinarily unlikely that Moore's Law would reach the exact point needed to accommodate simulation of human-level intelligence, then just stop.) I am simply pointing out that not all forms of exponential technological progress have been dependent on population growth. And I don't demand exponential progress, simply a continuation of progress in, e.g., genetics comparable to or greater than what was seen over the last century. Which I think simply requires maintenance of current levels of research.
    This seems fairly unfounded, as the problem with existing algorithms has nothing to do with being too smart... making the decision-making agents involved smarter will, if anything, improve this.
    ...Pareto efficiency is indeed achieved by a competitive market.
    Enron also placed a high priority on the intelligence of its employees acting within a framework of free competition, but this did no good whatsoever when their fundamental motives were sociopathic.

    Yes, I am ambiguous on the subject of what 'far-sighted regulation' might entail. Because the potential interactions of these technologies with human psychology, economics and culture, and vice versa, is enormously complex. And I would love to discuss those potential interactions, rather than having to defend the basic position that such interactions could even occur in the near future.
    Last edited by Carry2; 2012-11-29 at 08:08 PM.

  20. - Top - End - #140
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by RPGuru1331 View Post
    Congratulations on not addressing the point raised by saying you don't know jack about evolution. Seriously, you're behind grade school on the matter if 'most mutations are neutral' is a revelation to you.
    It's not. Honestly. (Though most mutations are neutral because they have no discernible effect on phenotype.) But I honestly don't want to get too dragged down into this debate, because it's missing the point. It is silly to insist that a process of conscious artificial design cannot, over time, accomplish what a totally unplanned natural process can. And by all indications, the process of conscious design has been catching up on a scale of decades, not aeons.
    If I'm understanding Terra Oblivion correctly, she's stating that marketbots are on par with extant spambots in their 'AI'...
    And how is this going to make more powerful algorithms innately safer?
    Blatant lies. Every time you've said this, it's not really even a similar thing. Amniocentesis was your basis to say a 'cure for Down's Syndrome via genetic tailoring' was almost here, for chrissakes...
    No. My statement was that there was a serious prospect for 'the elimination of Down's Syndrome'. I did not say 'cure for living individuals'. I meant the elimination of Down's Syndrome as a phenomenon... along with the people who might have had it. Make of that what you will.

    I am pointing out that substantially cruder technologies than those we now possess were raising troubling ethical problems ten years ago. It is foolhardy to suggest that the more powerful technologies of the present and near future- within ten years, if the NHGRI get their way- are going to have lesser impacts.
    Island nations are being abandoned, Moscow is losing more and more contact with Vladivostok due to their rail line being more and more inoperable, the carribean has been utterly sacked by hurricanes for the last 10 years, Australia, Africa, and North America are experiencing ever-increasing amounts of drought, and monsoon-dependent nations all report massive problems with the monsoon, and you're going to pretend it's equally plausible to robot rebellions or Gattaca, and that you're not assuming your conclusion.
    Again, I am not trying to diminish the importance of climate change as a potential existential threat to humanity. (Even if the worst-case scenarios seem to assume that peak oil won't happen this century.) This does not mean there is no intellectual value in exploring the ramifications of GNR technologies, particularly if climate change did not actively prevent their adoption and might actually stimulate research in these areas. Moreover, if some potential scenarios associated with these technologies' adoption are negative, I do not see the harm in legislation to minimise those risks (whatever form that might take.)
    Last edited by Carry2; 2012-11-29 at 07:47 PM.

  21. - Top - End - #141
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    As regards specific suggestions for legislation in these areas, I might suggest the following:

    I would tentatively suggest that, for example, it should be illegal to modify (or select) an embryo for the possession of a particular trait when the genetic mechanisms influencing that trait are not yet fully understood. (This is the sort of thing that a back-alley clinic might be willing to go through, even if barred by mainstream medical ethics.)

    I would tentatively suggest that patents on naturally-existing genes should not be considered valid, and that there should be a severe statute of limitations for patents on GMOs in widespread public use.

    I would tentatively suggest that any algorithms used for online trading be mandatorily open-sourced, and implicitly optimised for the economic benefit of the larger economy, rather than the sole benefit of specific corporations, perhaps in a similar fashion to the development of internet protocols over past decades.

    And I would tentatively suggest a complete ban on any form of autonomous AI research for military purposes.
    Last edited by Carry2; 2012-11-29 at 08:25 PM.

  22. - Top - End - #142
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    As regards specific suggestions for legislation in these areas, I might suggest the following:

    I would tentatively suggest that, for example, it should be illegal to modify (or select) an embryo for the possession of a particular trait when the genetic mechanisms influencing that trait are not yet fully understood. (This is the sort of thing that a back-alley clinic might be willing to go through, even if barred by mainstream medical ethics.)
    Let's just go with flat-out illegal here, barring specifically granted exceptions. 'Fully understood' is awfully open to interpretation.

    I would tentatively suggest that patents on naturally-existing genes should not be considered valid, and that there should be a severe statute of limitations for patents on GMOs in widespread public use.
    No argument from me on that. Patenting genetic materials is perhaps one of the stupider ideas the civilized world has come up with for a while.

    I would tentatively suggest that any algorithms used for online trading be mandatorily open-sourced, and implicitly optimised for the economic benefit of the larger economy, rather than the sole benefit of specific corporations, perhaps in a similar fashion to the development of internet protocols over past decades.
    A better, much easier solution: transaction taxes. If you make every trade cost $0.50, high frequency trading suddenly becomes a fast way to automate losing money for the most part. 'For the public good' is, again, open to lots of interpretation.

    Plus that has the side benefit of causing pain and suffering to a particularly insufferable breed of rather useless mathematicians.

    And I would tentatively suggest a complete ban on any form of autonomous AI research for military purposes.
    Sadly we're way too late on that.
    Blood-red were his spurs i' the golden noon; wine-red was his velvet coat,
    When they shot him down on the highway,
    Down like a dog on the highway,
    And he lay in his blood on the highway, with the bunch of lace at his throat.


    Alfred Noyes, The Highwayman, 1906.

  23. - Top - End - #143
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by warty goblin View Post
    Let's just go with flat-out illegal here, barring specifically granted exceptions. 'Fully understood' is awfully open to interpretation.
    Well, my main concern would be the avoidance of serious adverse side-effects to inserting/selecting for particular, otherwise-advantageous, genes. (Or at a bare minimum, that the parents would be informed of, and prepared to accommodate, such side-effects.) Ideally, you'd have some kind of statistical analysis and clinical-study protocol similar to what pharma companies have to go through before putting drugs on the market.

    A flat-out ban on this technology runs into two objections, as I see it: One, despite serious potential risks, I think one can't totally ignore there are equally serious potential benefits to be had from active genetic enhancement. Two: once low-cost clinical genomics becomes a reality, it enters the realm of private consumer transactions, and trying to ban those outright has serious side-effects of its own.

    Making allowance for 'specific exceptions' could be workable, but I think we may see something of a sliding scale at work here too. Selecting against seriously disabling or even fatal genetic diseases like progeria, cystic fibrosis or torsion dystonia is one thing. But does having an IQ 30 points below average count as a serious disability? It won't kill you, but it certainly won't make your life any easier in a society where robots drive all the taxis. If it counts, and you select against it, that drives up average IQs... and the definition of 'serious disability'.
    A better, much easier solution: transaction taxes. If you make every trade cost $0.50, high frequency trading suddenly becomes a fast way to automate losing money for the most part. 'For the public good' is, again, open to lots of interpretation.

    Plus that has the side benefit of causing pain and suffering to a particularly insufferable breed of rather useless mathematicians.
    That's not a bad idea. A % of the value of the shares involved might be a better approach, though, since otherwise you might discriminate against small investors without affecting high-volume trades?

    I still reckon mandatory open-sourcing could be of benefit here (and could help to ensure the whole 'public good' thing, if any code involved had to survive public scrutiny.)
    Sadly we're way too late on that.
    Oddly enough, I'm more optimistic on this front. As long as you're not currently in the middle of an actual war, public pressure can have a significant impact on forms of military R&D that are considered kosher. Consider our current bans on biological and nuclear weapons testing.
    Last edited by Carry2; 2012-12-01 at 10:06 AM.

  24. - Top - End - #144
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    Well, my main concern would be the avoidance of serious adverse side-effects to inserting/selecting for particular, otherwise-advantageous, genes. (Or at a bare minimum, that the parents would be informed of, and willing to accommodate, such side-effects.) Ideally, you'd have some kind of statistical analysis and clinical-study protocol similar to what pharma companies have to go through before putting drugs on the market.
    The problem here is that until you do it a lot, you can't advise people about the side effects, because nobody knows what they are. It's not even a case where I can see animal testing as being all that instructive; particularly for things like intelligence.

    A flat-out ban on this technology runs into two objections, as I see it: One, despite serious potential risks, I think one can't totally ignore there are equally serious potential benefits to be had from active genetic enhancement. Two: once low-cost clinical genomics becomes a reality, it enters the realm of private consumer transactions, and trying to ban those outright has serious side-effects of its own.

    Making allowance for 'specific exceptions' could be workable, but I think we may see something of a sliding scale at work here too. Selecting against seriously disabling or even fatal genetic diseases like progeria, cystic fibrosis or torsion dystonia is one thing. But does having an IQ 30 points below average count as a serious disability? It won't kill you, but it certainly won't make your life any easier in a society where robots drive all the taxis. If it counts, and you select against it, that drives up average IQs... and the definition of 'serious disability'.
    My thinking on this is pretty simple. Humans are dumb bastards when it comes to collective decision making if the decisions are made in aggregate at the individual level.

    The only example of anything like this that we have is abortion, which in India and China has created far more men than women due to latent cultural judgments about the worth of the sexes. This is likely to prove a bad thing for those societies in the coming years.

    So hand over the genetic reins to Matt and Kaylee Suburban, and I'd be very surprised if we somehow ended up with massive societal benefits. More likely we'd end up inundated with Jaydons and Tiffanys all set up to be hyper-competitive and whatever else some Tom, **** or Harry with a gene sequencer, a copy of the DSM and a SAS licence thinks is important to becoming a CEO.

    Sure, regulating at the 'truly understood' level would shut this down as well. (However, any black-market argument would apply to that as well.) 'Truly understood' is a slippery slope, and as I point out above, isn't even something you can talk about having until you've done it a lot. Even if you had clinical trials, clinical trials miss things - often fairly major things. Like, say, calcium supplements not working, or murdering people while asleep. Holding clinical trials for messing with your baby's genetics to select against every trait currently out of vogue seems a fast way to a real horror show. 'Don't do it unless to avoid X, Y, Z' is consistent and much more sensible to enforce.



    That's not a bad idea. A % of the value of the shares involved might be a better approach, though, since otherwise you might discriminate against small investors without affecting high-volume trades?
    The reason I went flat is that it only really makes a difference if you're trying to make money on huge numbers of very tiny trades that exploit what amounts to noise in the system. It essentially smooths out price changes less than the tax. A percent tax hits people who are actually legitimately investing, instead of those building their data centers closer to Wall Street than everybody else so they can get a better ping.
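    The asymmetry between the two taxes is easy to put in toy numbers. Everything below (trade sizes, margins, rates) is an invented round figure, chosen purely to illustrate the shape of the argument:

```python
# Toy comparison of a flat per-trade tax vs. a percentage tax.
# All trade sizes, margins, and rates are made-up round numbers.

def after_tax_profit(gross_profit, notional, flat_tax, pct_tax):
    """Profit on one trade after a flat fee plus a percentage levy."""
    return gross_profit - flat_tax - notional * pct_tax

# An HFT scalp: half a cent/share on 100 shares, $5,000 notional.
hft = dict(gross_profit=0.50, notional=5_000.0)
# A buy-and-hold investor expecting ~7% on a $10,000 position.
investor = dict(gross_profit=700.0, notional=10_000.0)

flat_only = {name: after_tax_profit(**t, flat_tax=0.50, pct_tax=0.0)
             for name, t in [("hft", hft), ("investor", investor)]}
pct_only = {name: after_tax_profit(**t, flat_tax=0.0, pct_tax=0.005)
            for name, t in [("hft", hft), ("investor", investor)]}

print(flat_only)  # flat fee zeroes the scalp, barely dents the investor
print(pct_only)   # 0.5% levy costs the investor $50, swamps the scalp
```

    Under these made-up numbers the flat fee erases the half-cent scalp entirely while costing the buy-and-hold investor a rounding error, whereas the percentage levy takes $50 from the legitimate investor as well.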

    Another idea is to add a minimum investment time. That is if you buy stock in something, you are committed to owning it for some reasonable period of time, like a week.

    I still reckon mandatory open-sourcing could be of benefit here (and could help to ensure the whole 'public good' thing, if any code involved had to survive public scrutiny.)
    Could be. I'm not really a convert to the whole open source deal, mostly because I use the stuff professionally, and half the time the features suck massively because nobody's paid to write 'em. Separate issue though.

    Oddly enough, I'm more optimistic on this front. As long as you're not currently in the middle of an actual war, public pressure can have a significant impact on forms of military R&D that are considered kosher. Consider our current bans on biological and nuclear weapons testing.
    The reason I said that is that the Pentagon is already working on entirely autonomous drones, in some cases, I believe, fully autonomous armed drones.

  25. - Top - End - #145
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by warty goblin View Post
    The problem here is that until you do it a lot, you can't advise people about the side effects, because nobody knows what they are. It's not even a case where I can see animal testing as being all that instructive; particularly for things like intelligence.
    It might not necessarily be a case of large-scale active testing, but statistical analysis of gene-frequencies within existing populations and using- I dunno, hidden Markov models, or something- to tease out the causal factors involved. Then taking that knowledge and applying it clinically to individuals- desperate parents with a history of genetic illness in the family, for example- to confirm the hypotheses.
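    As a minimal sketch of the population-statistics idea (not the hidden-Markov machinery itself), the simplest version of "tease out a causal factor" is a 2x2 chi-square test comparing allele counts between affected and unaffected groups. The counts below are invented for illustration; real gene-trait mapping needs genome-wide data and heavy multiple-testing correction:

```python
# Sketch of the population-statistics idea: compare allele counts
# between cases and controls with a 2x2 chi-square test.
# The counts are invented for illustration only.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Risk-allele vs. other-allele counts: 120/200 in cases, 90/200 in controls.
stat = chi_square_2x2(120, 80, 90, 110)
print(round(stat, 2), "significant at 5%:", stat > 3.84)
```

    A statistic above 3.84 rejects independence at the 5% level for one degree of freedom; genome-wide studies use far stricter thresholds, since millions of such tests run at once.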

    I suspect it'll be limited to treating simple, and severe, mendelian disorders for the first decade or so. But it will move up from there, and by the end of the century, I suspect most aspects of gene function will be thoroughly mapped.
    My thinking on this is pretty simple. Humans are dumb bastards when it comes to collective decision making if the decisions are made in aggregate at the individual level.

    The only example of anything like this that we have is abortion, which in India and China has created far more men than women due to latent cultural judgments about the worth of the sexes. This is likely to prove a bad thing for those societies in the coming years.

    So hand over the genetic reins to Matt and Kaylee Suburban, and I'd be very surprised if we somehow ended up with massive societal benefits. More likely we'd end up inundated with Jaydons and Tiffanys all set up to be hyper-competitive and whatever else some Tom, **** or Harry with a gene sequencer, a copy of the DSM and a SAS licence thinks is important to becoming a CEO.

    Sure, regulating at the 'truly understood' level would shut this down as well. (However any black market argument would apply to that as well.)
    In theory, yes. What I'm hoping for is that advances in detailed understanding of genetic cause-and-effect will be able to keep reasonable pace with black market offerings, given the latter's drawbacks in terms of price and safety. Consumers are willing to tolerate restrictions on legal services provided they (A) give you most of what you might want and (B) have some acceptable moral mandate behind the limitations.

    I don't disagree with any of your critiques of pure free-market solutions or with the range of negative outcomes you project. And in the short term- say, maybe the next 10-30 years- blanket bans on everything except severe and specific developmental abnormalities might be viable. But beyond that, as treatments become cheaper, the techniques more refined, and more genetic traits are identified, the pressure to turn to the black market will become more and more intense if legal outlets for genetic augmentation are absent.

    Now, if you want to argue against, say, gender discrimination, or against genes for hyper-competitive sociopathy (assuming those exist), or against tailoring for prevailing beauty-standards (as distinct from metabolic health), by all means do. I think there are reasonable grounds for banning forms of gene tailoring which confer individual advantage without any tangible social benefits, and I think most people will be willing to accept the moral reasoning behind such legislation. But a blanket ban on gene-selection/tailoring for anything but curing the nastiest diseases, however laudable in intent, strikes me as difficult to permanently enforce within our current socioeconomic framework.
    The reason I went flat is that only really makes a difference if you're trying to make money on huge numbers of very tiny trades that exploit what amounts to noise in the system. It essentially smooths out price changes less than the tax. A percent tax hits people who actually legitimately investing, instead of building their data centers closer to Wall Street than everybody else so they can get a better ping.
    Hmm. Fair enough.
    Could be. I'm not really a convert to the whole open source deal, mostly because I use the stuff professionally, and half the time the features suck massively because nobody's paid to write 'em. Separate issue though.
    Oh, open-source doesn't necessarily mean 'not paid for it'. (It does mean that any key ideas are easier to copy/steal, but that's not necessarily a problem if different economic niches are addressed by different products.)

    Anyways. My more general, long-term concern here would be that a sufficiently devious AI, even if limited to low-frequency trades, might be capable of large-scale analysis of market forces and long-term price manipulation in favour of its parent corporation. That's a ways off yet, but I wouldn't put it outside the realm of possibility within a century.
    The reason I said that is that the Pentagon is already working on entirely autonomous drones, in some cases, I believe, fully autonomous armed drones.
    True. But nukes were developed- and even used on human targets- before they got banned. Rollbacks in these areas are possible, though I hope we don't have to repeat the precedent.

  26. - Top - End - #146
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Carry2 View Post
    It might not necessarily be a case of large-scale active testing, but statistical analysis of gene-frequencies within existing populations and using- I dunno, hidden markov models, or something- to tease out the causal factors involved. Then taking that knowledge and applying it clinically to individuals- desperate parents with a history of genetic illness in the family, for example- to confirm the hypotheses.
    In theory that'd be nice, but that's not really something that I think can be statistically teased out to a degree of specificity I'd feel comfortable with when it comes to messing around with people's babies.

    Another data point: over 25% of pregnancies miscarry within six weeks as it is. Monkeying with the process is hardly likely to decrease this, and there are obvious reasons to object to highly expensive ways to get more miscarriages. While it's not impossible that eventually such genetic therapies could prevent miscarriage by selecting against embryos with a high likelihood of not carrying to term, we certainly aren't there yet.

    I suspect it'll be limited to treating simple, and severe, mendelian disorders for the first decade or so. But it will move up from there, and by the end of the century, I suspect most aspects of gene function will be thoroughly mapped.

    In theory, yes. What I'm hoping for is that advances in detailed understanding of genetic cause-and-effect will be able to keep reasonable pace with black market offerings, given the latter's drawbacks in terms of price and safety. Consumers are willing to tolerate restrictions on legal services provided they (A) give you most of what you might want and (B) have some acceptable moral mandate behind the limitations.

    I don't disagree with any of your critiques of pure free-market solutions or with the range of negative outcomes you project. And in the short term- say, maybe the next 10-30 years- blanket bans on everything except severe and specific developmental abnormalities might be viable. But beyond that, as treatments become cheaper, the techniques more refined, and more genetic traits are identified, the pressure to turn to the black market will become more and more intense if legal outlets for genetic augmentation are absent.

    Now, if you want to argue against, say, gender discrimination, or against genes for hyper-competitive sociopathy (assuming those exist), or against tailoring for prevailing beauty-standards (as distinct from metabolic health), by all means do. I think there are reasonable grounds for banning forms of gene tailoring which confer individual advantage without any tangible social benefits, and I think most people will be willing to accept the moral reasoning behind such legislation. But a blanket ban on gene-selection/tailoring for anything but curing the nastiest diseases, however laudable in intent, strikes me as difficult to permanently enforce within our current socioeconomic framework.
    Most of the reason I'm highly skeptical of gene tailoring is that the costs of being wrong are high, and the probabilities of being wrong are high.

    As I pointed out, we get things wrong with clinical trials fairly frequently. And that's when there's at least some ability to control confounding variables, outside variation, and using a well understood probability structure.

    You don't get that with genetic tinkering in humans, at least until you start doing it.

    Statistics is a marvelous tool for modeling, predicting and describing things. It's absolutely terrible at making a cause-and-effect argument - to the point where most of my statistics professors basically just say 'don't even try.' If you have a very well-designed experiment controlling for outside variation, hidden and confounding variables, and replication, and a matching theory to support your conclusions, you can maybe, maybe make a causal argument. Industrial statisticians doing reliability studies can do this, because they have very good controls, and physics is a well-developed theory.

    Genetics is a well-developed theory, but we can't control people's genetics, or their lifestyles. There's all kinds of confounding and variation there. I'm not into gene mapping or bioinformatics, so I won't go so far as to say it's impossible, but it strikes me as a very problematic application of the methodology.

    (From a slightly different angle, statistics is good at prediction and modeling because it doesn't care if you've looked at the truth or not, only something that tracks the truth reasonably well. Predicting things from genetic material is a reasonable application - I've talked with people who do that - but actually being able to concretely say 'gene X causes Y' with enough certainty to bet somebody's life on plugging gene X into them is not something you can do every day of the week. It's not impossible, and I've seen cases where it's happened. I'd be surprised however if its occurrence is common enough that we're anywhere close to being able to choose a child's intelligence. Not least because measuring intelligence in the first place is hard*.)

    (*Quite a few things that seek to measure intelligence and ability turn out to be essentially useless. The GRE, for instance, is a very good measure of socio-economic background, and a terrible measure of actual ability to succeed as a graduate student. By some methodologies it's actually negatively correlated with success for women. In the science of education, I've heard it said that an R-squared of 0.3 or so is a really good model. Yes, this is a footnote to a parenthetical. You may now shoot me)
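    The confounding problem described above can be made concrete with a toy simulation. This is purely an illustration, not a claim about any real dataset: a hidden factor Z drives both a measured variable X and an outcome Y, and X and Y come out strongly correlated even though neither causes the other.

```python
import random

random.seed(42)

# Hypothetical sketch of confounding: a hidden factor Z (say,
# socio-economic background) drives both a measured variable X
# and an outcome Y. All numbers here are invented.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X tracks Z plus noise
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y tracks Z plus noise

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Strong correlation, zero causation: intervening on X would do
# nothing to Y, which is exactly what observational data can't show.
print(round(corr(x, y), 2))
```

    The correlation comes out around 0.8, and no amount of extra data on X and Y alone distinguishes this from a world where X really does cause Y.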

    Oh, open-source doesn't necessarily mean 'not paid for it'. (It does mean that any key ideas are easier to copy/steal, but that's not necessarily a problem if different economic niches are addressed by different products.)
    I know, I'm just skeptical of open source as a movement. Mostly when I'm trying to get R to do something SAS does automatically.

    Anyways. My more general, long-term concern here would be that a sufficiently devious AI, even if limited to low-frequency trades, might be capable of large-scale analysis of market forces and long-term price manipulation in favour of its parent corporation. That's a ways off yet, but I wouldn't put it outside the realm of possibility within a century.
    That strikes me as unlikely. The stock market is notoriously hard to predict; stock prices actually make reasonably good random numbers.

    True. But nukes were developed- and even used on human targets- before they got banned. Rollbacks in these areas are possible, though I hope we don't have to repeat the precedent.
    I think the key difference is that nukes - once more than one party has them - come with a built-in lose-lose guarantee, one that getting better nukes doesn't take away. Robots, at least sub-nuclear robots, don't come with that. The presence of nuclear weapons on both sides raises the cost of hostilities unbearably high for the 'victor,' before anybody fires a shot. Combat robots are, unless somebody's idiotic enough to give them nukes, entirely favorable to the party who spends the most and builds the best.

    In other words the fact that a potential enemy has Mk.III Skullreaper Killdroids doesn't bother me if I have Mk.V Skullreaper Killdroids, since I'm secure in the knowledge that my army will provide a more effective skull-reaping solution for the competitive modern battlespace. The fact that my enemy has a 5 megaton nuclear weapon that can vaporize me and my little dog does bother me, even if I have a 10 megaton weapon pointed right back at them. I don't get bonus points for killing the enemy more dead because dead is binary. It doesn't come in fifty shades of grey.
    Blood-red were his spurs i' the golden noon; wine-red was his velvet coat,
    When they shot him down on the highway,
    Down like a dog on the highway,
    And he lay in his blood on the highway, with the bunch of lace at his throat.


    Alfred Noyes, The Highwayman, 1906.

  27. - Top - End - #147
    Banned
    Join Date
    Oct 2008

    Default Re: The Singularity

    Quote Originally Posted by warty goblin View Post
    While it's not impossible that eventually such genetic therapies could prevent miscarriage by selecting against embryos with a high likelihood of not carrying to term, we certainly aren't there yet.
    That's not entirely true.

    Most of the reason I'm highly skeptical of gene tailoring is that the costs of being wrong are high, and the probabilities of being wrong are high...

    ...As I pointed out, we get things wrong with clinical trials fairly frequently. And that's when there's at least some ability to control confounding variables, outside variation, and using a well understood probability structure...

    ...From a slightly different angle, statistics is good at prediction and modeling because it doesn't care if you've looked at the truth or not, only something that tracks the truth reasonably well...
    I don't disagree with any of those objections in themselves, though I feel this is somewhat missing the larger socio-economic argument about supply and demand. (But I'll get to that)

    * Although tinkering with actual, fully-grown human beings in large-scale clinical settings isn't really practical or ethical, you can always perform comparisons with tissue cultures based on, e.g., induced pluripotent stem cells (which are already being used to study schizophrenia). Animal research is yielding similar insights.

    * If that fails, or if you need a closer look at the metabolic goings-on, there's also proteome simulation to turn to (studying gene expression is actually an intended aspect of the Blue Brain Project).

    * Besides, while these arguments are cogent in the case of active gene-insertion, they seem less relevant in the case of selection from existing embryos. We may not strictly know, on the basis of statistics alone, if factor A implies factor B or if some hidden factor C is causing both, but if a conventionally-conceived embryo pops up with factor A, you can still have a pretty high confidence of B showing up. ('Conventional' in the sense of normal sperm-plus-ovum, even if it happens in a test tube with hundreds of blastocyst siblings.)
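    To put that last point in concrete terms, here's a toy sketch, with entirely invented frequencies, of why selection can work even when the causal structure is unknown: the conditional frequency of trait B given marker A is all that matters, regardless of whether A causes B or some hidden C causes both.

```python
import random

random.seed(0)

# Toy illustration of the selection argument. A hidden factor C
# produces both marker A and trait B; A does not cause B. The
# frequencies are invented for the sake of the example.
n = 100_000
records = []
for _ in range(n):
    c = random.random() < 0.3                   # hidden common cause
    a = random.random() < (0.9 if c else 0.05)  # observable marker
    b = random.random() < (0.9 if c else 0.05)  # trait of interest
    records.append((a, b))

p_b = sum(b for _, b in records) / n            # base rate of trait B
with_a = [b for a, b in records if a]
p_b_given_a = sum(with_a) / len(with_a)         # rate of B among A-carriers

# Selecting for marker A raises the expected rate of trait B
# substantially, even though A is causally inert.
print(round(p_b, 2), round(p_b_given_a, 2))
```

    Here the base rate of B is roughly 0.3, but among embryos carrying A it's roughly 0.8 - and that gap is exactly what selection exploits, no causal story required.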

    Again, in the short term, I wouldn't disagree with your proposed restrictions on legal use. And I agree that there are very real risks involved in tampering with human genetics. But black market practitioners will not care about those risks, and a sufficiently desperate clientele will accept them for a shot at a better life for themselves or their kids. The category of what is legally permissible needs to keep reasonable pace with what is technically feasible and in commercial demand, or you will likely see worse health outcomes than under total deregulation.

    If people want to relax, a certain proportion WILL buy drugs and booze. If people feel horny, a certain proportion WILL pay for sex. If people don't feel safe, a certain proportion WILL own guns. And if people fear their kids won't be able to compete in life, a certain proportion WILL select for advantageous genes. Even when it's dangerous.

    We have to consider whether the social consequences of banning a service are worse than the social consequences of that service, whether the ban would actually reduce supply, whether money spent on enforcing the ban couldn't be more efficiently spent on treating the adverse side-effects of the service, whether regulating the industry couldn't reduce those risks to begin with, and whether that service is itself addressing a legitimate need.


    That strikes me as unlikely. The stock market is notoriously hard to predict, stock prices actually make reasonably good random numbers.
    Not entirely. As I understand it, trading algorithms for trend following already exist, and presumably turn enough of a profit to be worth developing. Sorting through vast reams of noisy data in search of subtle statistical correlations is something that machines are getting quite good at, and neural nets and genetic programming are starting to see application in these areas- heck, they're even working on language recognition for parsing the 'sentiment' of news stories specifically so that these algorithms know when and how to trade.
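    For what it's worth, trend-following algorithms don't need to predict prices point-by-point; the simplest ones just follow momentum. A minimal, purely illustrative sketch (a moving-average crossover rule; all parameters here are made up, and real trading systems are far more elaborate):

```python
# Naive trend-following sketch: compare a short-window moving average
# against a long-window one and trade in the direction of the crossover.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def signal(prices, short=5, long=20):
    """Return +1 (buy), -1 (sell), or 0 (hold) from a price history."""
    if len(prices) < long:
        return 0
    fast = moving_average(prices, short)
    slow = moving_average(prices, long)
    if fast > slow:
        return 1   # short-term trend above long-term: follow it up
    if fast < slow:
        return -1  # short-term trend below long-term: follow it down
    return 0

# A steadily rising series triggers a buy signal.
rising = [100 + 0.5 * i for i in range(30)]
print(signal(rising))  # 1
```

    The point is that nothing here "predicts" the market in a strong sense; it just bets that recent momentum persists, which is apparently profitable often enough to fund an industry.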

    It wouldn't astound me if the first AI capable of passing the Turing Test came out of Wall Street instead of MIT- and was running a Ponzi scheme.
    ...In other words the fact that a potential enemy has Mk.III Skullreaper Killdroids doesn't bother me if I have Mk.V Skullreaper Killdroids, since I'm secure in the knowledge that my army will provide a more effective skull-reaping solution for the competitive modern battlespace. The fact that my enemy has a 5 megaton nuclear weapon that can vaporize me and my little dog does bother me, even if I have a 10 megaton weapon pointed right back at them. I don't get bonus points for killing the enemy more dead because dead is binary. It doesn't come in fifty shades of grey.
    * More advanced autonomous combat AI, virtually by definition, will have to become increasingly independent of direct human supervision. (As distinct from the hardware to provide firepower/mobility/protection, which humans, at any rate, could also avail of.)

    * Software systems can be hijacked and re-programmed. The more powerful your Skullreaper Killdroids are, the greater the risks involved to your side if the other side manages to ghost-hack the damn things. Especially if they have other AIs responsible for the hacking.

    * What happens if the programmer forgets a semi-colon, or the transmission gets garbled in static, and their virus-script no longer specifies 'stop when the insurgents are dead'?

    * Seriously, this has got to be one of the dumbest ideas my species ever came up with.

  28. - Top - End - #148
    Bugbear in the Playground
     
    BlueKnightGuy

    Join Date
    Apr 2012
    Location
    NY, USA
    Gender
    Male

    Default Re: The Singularity

    So genetic enhancement is too dangerous to be ethical because we haven't done enough human testing, and actually doing those tests would also be unethical?

    Ethics are great, but ultimately the human race isn't the dominant species on the planet because of how incredibly ethical we are. If we want to actually have the benefits of an improved species, we also need to be willing to take the risks associated with that. A scientific revolution is not a tea party; people die whenever new methods of treatment emerge, every new discovery is ultimately used by people who don't understand it, and initial applications are always limited by cultural biases.

    Human nature isn't going to change unless we change it, so why not do so now? There's no actual existential threat, especially since implementation is likely going to be very limited for the first few decades, so any failures will be instructive. With such a vast reward for fairly low risk, it would be an uncharacteristically poor judgment call for humanity to turn our backs on this technology.

  29. - Top - End - #149
    Titan in the Playground
    Join Date
    May 2007
    Location
    Tail of the Bellcurve
    Gender
    Male

    Default Re: The Singularity

    Quote Originally Posted by Water_Bear View Post
    So genetic enhancement is too dangerous to be ethical because we haven't done enough human testing, and actually doing those tests would also be unethical?
    I don't see anything inherently inconsistent with that argument.

    Ethics are great, but ultimately the human race isn't the dominant species on the planet because of how incredibly ethical we are. If we want to actually have the benefits of an improved species, we also need to be willing to take the risks associated with that. A scientific revolution is not a tea party; people die whenever new methods of treatment emerge, every new discovery is ultimately used by people who don't understand it, and initial applications are always limited by cultural biases.
    Mostly I don't see genetic alteration actually improving the species. I'm not even sure improvement is a meaningful concept for a species, beyond becoming better suited to its environment. Genetic alteration for economic success does not provide this. Arguably, given the strong correlation between economic success and fewer (or no) children, such genetic tampering is actually maladaptive.

    Nor is it like the species is under imminent threat of collapse due to genetic defectiveness. Civilization may be under threat of collapse, or serious compromise, due to unmitigated environmental exploitation. That is an ethical concern. Spending huge amounts of money so wealthy first world people can have marginally better babies is an ethical concern, and not one that I see contributing to the solution of the former problem.

    Human nature isn't going to change unless we change it, so why not do so now? There's no actual existential threat, especially since implementation is likely going to be very limited for the first few decades, so any failures will be instructive. With such a vast reward for fairly low risk, it would be an uncharacteristically poor judgment call for humanity to turn our backs on this technology.
    I've never been convinced that the 'reward' actually is a reward. I am aware however that my views on questions like this are fairly non-normative. I've gotten to a point this year where I don't think I could actually justify having children.

  30. - Top - End - #150
    Bugbear in the Playground
     
    BlueKnightGuy

    Join Date
    Apr 2012
    Location
    NY, USA
    Gender
    Male

    Default Re: The Singularity

    Well, if you don't consider people having better health, longer life-spans, greater learning and problem solving abilities, less disruptive personalities and more aesthetically pleasing appearances worthy goals, then I'd say "non-normative" is somewhat of an understatement.

    Also, it's one thing to say that you don't see "tampering" with our genomes as morally right, but as I tried to make clear earlier, it is absolutely our best option for survival as a species. These aren't marginal changes we're talking about; even with our current limited understanding we've done everything from making cancer-resistant mice who live much longer than their peers, to mice who are virtually impossible to traumatize, to greatly improving their ability to learn and recall information, increasing their musculature and muscle density safely, and giving them photo-receptors to perceive entirely novel (to them) colors, just to name a handful. The potential for humans to become better tool-builders and users, to become more social, to be more reactive to changes in our environment and more resistant to harm; those are all traits that not only increase fitness, but do so in such a broad way that I can't see them as maladaptive under any circumstances.

    I'm not sure where you're going with the environmental thing. We need to make sure the planet continues to be livable, but doesn't changing human behavior fit into that? People who need less food, who can survive more extreme environments, and who are better able to make and use new, less wasteful technologies are all good things for the environment. Not a short-term solution, obviously, but part of a larger long-term one.
