  1. - Top - End - #181
    Surgebinder in the Playground Moderator
     
    Douglas's Avatar

    Join Date
    Aug 2005
    Location
    Mountain View, CA
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Kornaki View Post
    The fact that he asked shows some sort of maturity, right?
    Not really. He didn't ask for the security upgrade because he already knew she wanted her security upgraded. He does not have any previous indication about her desire for puppies, so this does not show any understanding that there might be other reasons for asking permission.
    Like 4X (aka Civilization-like) gaming? Know programming? Interested in game development? Take a look.

    Avatar by Ceika.

    Archives:
    Saberhagen's Twelve Swords, some homebrew artifacts for 3.5 (please comment)
    Isstinen Tonche for ECL 74 playtesting.
    Team Solars: Powergaming beyond your wildest imagining, without infinite loops or epic. Yes, the DM asked for it.
    Arcane Swordsage: Making it actually work (homebrew)

  2. - Top - End - #182
    Ettin in the Playground
    Join Date
    Jun 2011

    Default Re: Freefall: DOGGY!

    ...so, this was posted to the freefall forums.

    Spoiler: Free puppy?

  3. - Top - End - #183
    Firbolg in the Playground
     
    Rockphed's Avatar

    Join Date
    Nov 2006
    Location
    Watching the world go by
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Somehow, drawing a realistic Doctor Bowman makes him incredibly nefarious-looking.
    Quote Originally Posted by Wardog View Post
    Rockphed said it well.
    Quote Originally Posted by Sam Starfall
    When your pants are full of crickets, you don't need mnemonics.
    Dragontar by Serpentine.

    Now offering unsolicited advice.

  4. - Top - End - #184
    Banned
     
    Math_Mage's Avatar

    Join Date
    Jan 2010
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Rockphed View Post
    Somehow, drawing a realistic nefarious Doctor Bowman makes him incredibly nefarious-looking.
    Just sayin'.

  5. - Top - End - #185
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Freefall: DOGGY!

    Are all people human in Freefall? Are all humans people?
    The end of what Son? The story? There is no end. There's just the point where the storytellers stop talking.

  6. - Top - End - #186
    Troll in the Playground
     
    NecromancerGuy

    Join Date
    Oct 2011
    Location
    Therinos
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by halfeye View Post
    Are all people human in Freefall? Are all humans people?
    No, and maybe. There are people in Freefall who are demonstrably not human (e.g. the robots). Florence's definition seems to go "Can they think for themselves? If Yes, person. If person, do they need to follow safeguards? If No, human." On the other hand, there are some humans who are demonstrably stupid, self-absorbed and sociopathic enough not to qualify as a person, despite the rights they are entitled to (e.g. Mr. Kornada).
    Quote Originally Posted by Zap Dynamic View Post
    I want to create a world that is full of possibility, and one of the best ways to handle it is by creating a bunch of stories that haven't yet been finished.
    Quote Originally Posted by Grey_Wolf_c View Post
    At this point, however, I'm thinking way too hard about the practical problems of running a battle royale school for Russian assassins, so I think I'll leave it there.
    In my posts, smilies generally correspond to my expression at the time. As an example, means "huh?" and "Hmm..". Also, "Landis" is fine.

  7. - Top - End - #187
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Landis963 View Post
    No, and maybe. There are people in Freefall who are demonstrably not human (e.g. the robots). Florence's definition seems to go "Can they think for themselves? If Yes, person. If person, do they need to follow safeguards? If No, human." On the other hand, there are some humans who are demonstrably stupid, self-absorbed and sociopathic enough not to qualify as a person, despite the rights they are entitled to (e.g. Mr. Kornada).
    Sociopaths are people, even if not very pleasant ones.

  8. - Top - End - #188
    Firbolg in the Playground
     
    Rockphed's Avatar

    Join Date
    Nov 2006
    Location
    Watching the world go by
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Is the commander going to ask "Florence, are you human?" I'm not sure he is thinking quite far enough ahead to see that it is an interesting question to ask her, since she fits all the criteria she just listed for humanity.

    Edit: I love the panels of obfuscation. Especially the lampshade of obfuscation.
    Last edited by Rockphed; 2014-12-13 at 07:08 PM.

  9. - Top - End - #189
    Dwarf in the Playground
    Join Date
    Jun 2005
    Location
    Buried under C++ compilers

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Landis963 View Post
    No, and maybe. There are people in Freefall who are demonstrably not human (e.g. the robots). Florence's definition seems to go "Can they think for themselves? If Yes, person. If person, do they need to follow safeguards? If No, human." On the other hand, there are some humans who are demonstrably stupid, self-absorbed and sociopathic enough not to qualify as a person, despite the rights they are entitled to (e.g. Mr. Kornada).
    The big question is how you measure whether someone can think for themselves. Unless you can bisect a working brain and study the actual mechanics of the thought process, the Chinese Room is always a possibility. Given a large enough storage and sufficient processing power, you could theoretically pre-program a dumb AI with enough responses to confuse anyone into believing they are talking to a conscious being.

    Take the part where Florence asks two random robots a nonsense question. She took the result as one failing her Turing test and the other passing by lamenting its absurdity. I know what Mark was trying to get across there, but now we know that that was two Bowman AIs talking to each other.

    How would we know that Florence's "random" question wasn't something she was programmed to think of? And that the other AI didn't have a few canned responses after he recognized the phrase? We know Florence can be put into maintenance mode just with vocal commands, so that level of engineering isn't out of the question either.

    In the end, defining consciousness is a very tricky question - mostly because we have no firm concept of what consciousness is.
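    The pre-programmed lookup described above is easy to sketch. A toy "Chinese Room" responder might look like this (the keywords and replies are all invented for illustration):

    ```python
    # A crude "Chinese Room": canned replies selected by keyword match,
    # with no understanding behind any of them. All rules are hypothetical.
    CANNED = {
        "weather": "Lovely day, isn't it?",
        "cat": "I adore cats. Such mysterious creatures.",
        "conscious": "Of course I'm conscious. Aren't you?",
    }
    FALLBACK = "How interesting. Tell me more."

    def respond(utterance: str) -> str:
        """Pick a reply by keyword lookup; pure table search, no thought."""
        lowered = utterance.lower()
        for keyword, reply in CANNED.items():
            if keyword in lowered:
                return reply
        return FALLBACK

    print(respond("Are you conscious?"))  # Of course I'm conscious. Aren't you?
    print(respond("zxqv blorp"))          # How interesting. Tell me more.
    ```

    Scale the table up far enough and, per the argument, a conversation partner might never notice that nothing is ever understood.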
    There is no such thing as "innocence", only degrees of guilt.

  10. - Top - End - #190
    Troll in the Playground
     
    NecromancerGuy

    Join Date
    Oct 2011
    Location
    Therinos
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Funny thing, the "Chinese Box" was mentioned in-story already.

    Qwerty: "Of course, there is the Chinese box argument. I might be simulating consciousness to the extent that I'm not really conscious."
    Winston: "Do they serve drinks where we're going? I might need one."
    Last edited by Landis963; 2014-12-14 at 10:04 AM.

  11. - Top - End - #191
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Ravenlord View Post
    The big question is how you measure whether someone can think for themselves. Unless you can bisect a working brain and study the actual mechanics of the thought process, the Chinese Room is always a possibility.
    MRI can scan a working brain without damaging it.

    The status of the "Chinese Room" is disputed. Some people, including myself, think that the Room is conscious, even though the CPU it's running on isn't aware of that.

    In the end, defining consciousness is a very tricky question - mostly because we have no firm concept of what consciousness is.
    Exactly.

  12. - Top - End - #192
    Firbolg in the Playground
     
    Flumph

    Join Date
    Apr 2011
    Gender
    Male

    Default Re: Freefall: DOGGY!

    The Chinese Room argument always struck me as an example of begging the question.

    In order to agree with its conclusion (that computational models are not an explanation of consciousness) you have to have already accepted its premise (that there is something essential about consciousness in addition to computation), because the argument makes no effort to explain what the missing element "understanding Chinese" even is and how we might test for its absence.

    It's just another P-Zombie (quite literally, because the P-Zombie is basically the same argument: a thing which acts in every way as if it is conscious but is not), with the same fundamental assumption that there is something about consciousness over and above acting in every way as if it is conscious, and it deserves the attentions of the boomstick.

  13. - Top - End - #193
    Troll in the Playground
    Join Date
    Jan 2007

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by GloatingSwine View Post
    (...) fundamental assumption that there is something about consciousness over and above acting in every way as if it is conscious (...)
    Well, there is, and it's both very obvious and completely unverifiable: for each and every one of us, there is the fact that we are actually aware of our own existence - the difference lies within and can't be measured from the outside, as far as our technology goes.
    In a war it doesn't matter who's right, only who's left.

  14. - Top - End - #194
    Firbolg in the Playground
     
    Flumph

    Join Date
    Apr 2011
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Radar View Post
    Well, there is, and it's both very obvious and completely unverifiable: for each and every one of us, there is the fact that we are actually aware of our own existence - the difference lies within and can't be measured from the outside, as far as our technology goes.
    Since you can only know if another person is aware of their own existence because they told you they were, and a P-Zombie system like the Chinese Room will, when asked the same question, give the same report, you have no grounds to make that assertion.

    As I said, you have to agree with the assumption that there is a special thing before the p-zombie has any conceptual weight.

  15. - Top - End - #195
    Troll in the Playground
    Join Date
    Jan 2007

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by GloatingSwine View Post
    Since you can only know if another person is aware of their own existence because they told you they were, and a P-Zombie system like the Chinese Room will, when asked the same question, give the same report, you have no grounds to make that assertion.

    As I said, you have to agree with the assumption that there is a special thing before the p-zombie has any conceptual weight.
    I am not talking about checking anyone else for sentience - I am talking about being aware of my own sentience. Everyone is aware of their own consciousness and nobody can check it in others - we simply assume it instinctively, since we are all from the same species and so on and so forth. A lack of measurement methods cannot exclude that as an important aspect of consciousness. That's pretty much all the p-zombie scenario says: we have no method of actually checking for consciousness.

    It seems most reasonable to assume that if something quacks like a duck, walks like a duck and looks like a duck, then it's a duck and not a pointed stick or a pile of noodles. That being said, it's still an assumption and not a fact.

  16. - Top - End - #196
    Dwarf in the Playground
    Join Date
    Jun 2005
    Location
    Buried under C++ compilers

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by halfeye View Post
    MRI can scan a working brain without damaging it.
    Only for fleshware. Digital constructs would need something like a debug mode - if the functionality is provided, of course. We know they have dream-machines that muck with their long-term data storage, so one could take a look at that... as long as the format can be understood without interfacing it with the actual robot, of course.

    Quote Originally Posted by halfeye View Post
    The status of the "Chinese Room" is disputed. Some people, including myself, think that the Room is consciouse, even though the CPU it's running on isn't aware of that.
    Let's use a real-life example. IBM's nifty Watson supercomputer beat people at Jeopardy!. Let's say IBM goes nuts on the R&D budget and upgrades Watson to the point where it can convincingly chat with people. It can respond to questions and fool people into believing it's human.

    So did IBM just develop an artificial consciousness?

    I would say not. Precisely because it was developed. Watson may appear smart, but it's still just an old-style dumb AI - responding to external stimuli by applying a rigid set of rules. It can't - and won't even try to! - comprehend those rules; it's just a glorified bunch of mathematical equations, in the end.

    Quote Originally Posted by GloatingSwine View Post
    In order to agree with its conclusion (that computational models are not an explanation of consciousness) you have to have already accepted its premise (that there is something essential about consciousness in addition to computation), because the argument makes no effort to explain what the missing element "understanding Chinese" even is and how we might test for its absence.
    I'd say that's the fundamental gap between "real" thinking machines and the ones we can make - or even comprehend - nowadays.

    A machine we can conceivably create will be limited by its nature, because we need to feed it a set of rules it's gonna follow. We literally script the "intelligence" part of AI; basically give the bloke our version of the Chinese dictionary. This is a stark contrast to "real" intelligence, which is emergent; it has the capacity to evolve and make up its own rules as it goes along. Due to that very emergent nature, it'd be very difficult - if not impossible - to script a true AI. You would have more luck abandoning programming as a concept and just setting up the "base" point, then bombarding the AI with a set of stimuli... let it learn itself.

    That's just my opinion, of course. But I think that no matter how advanced an AI we make, it will still remain on this side of that fundamental gap. I think that is the "something essential" that is missing. When a true AI comes along, it will most likely look nothing like the ones we have today.

    Quote Originally Posted by GloatingSwine View Post
    Since you can only know if another person is aware of their own existence because they told you they were, and a P-Zombie system like the Chinese Room will, when asked the same question, give the same report, you have no grounds to make that assertion.
    Which is why we would need to understand the actual process of how the AI makes a decision. If it can actively evolve on its own, just by being faced with external stimuli, then it's a consciousness - or a consciousness waiting to happen.
    Last edited by Ravenlord; 2014-12-14 at 06:16 PM.

  17. - Top - End - #197
    Firbolg in the Playground
     
    Rockphed's Avatar

    Join Date
    Nov 2006
    Location
    Watching the world go by
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Radar View Post
    I am not talking about checking anyone else for sentience - I am talking about being aware of my own sentience. Everyone is aware of their own consciousness and nobody can check it in others - we simply assume it instinctively, since we are all from the same species and so on and so forth. A lack of measurement methods cannot exclude that as an important aspect of consciousness. That's pretty much all the p-zombie scenario says: we have no method of actually checking for consciousness.

    It seems most reasonable to assume that if something quacks like a duck, walks like a duck and looks like a duck, then it's a duck and not a pointed stick or a pile of noodles. That being said, it's still an assumption and not a fact.
    Descartes said "Cogito, ergo sum", loosely translated "I think, therefore I am." I think he was reducing the set of knowable things down to "I am doubting, so I must exist", but it also applies to our consciousness. The only consciousness that can be proven irrefutably is the consciousness of the one being proven to.

  18. - Top - End - #198
    Firbolg in the Playground
     
    Flumph

    Join Date
    Apr 2011
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Ravenlord View Post
    I'd say that's the fundamental gap between "real" thinking machines and the ones we can make - or even comprehend - nowadays.

    A machine we can conceivably create will be limited by its nature, because we need to feed it a set of rules it's gonna follow. We literally script the "intelligence" part of AI; basically give the bloke our version of the Chinese dictionary. This is a stark contrast to "real" intelligence, which is emergent; it has the capacity to evolve and make up its own rules as it goes along. Due to that very emergent nature, it'd be very difficult - if not impossible - to script a true AI. You would have more luck abandoning programming as a concept and just setting up the "base" point, then bombarding the AI with a set of stimuli... let it learn itself.

    That's just my opinion, of course. But I think that no matter how advanced an AI we make, it will still remain on this side of that fundamental gap. I think that is the "something essential" that is missing. When a true AI comes along, it will most likely look nothing like the ones we have today.


    Which is why we would need to understand the actual process of how the AI makes a decision. If it can actively evolve on its own, just by being faced with external stimuli, then it's a consciousness - or a consciousness waiting to happen.
    You're behind the times when it comes to the current state of computer learning. We're already building learning machines. Google created one, showed it ten million random still images from YouTube videos, and with no prior rules other than pattern recognition it determined the existence of cats. (The original experiment was actually to see if a learning machine could learn to recognise faces when the images provided were not tagged as containing a face, as they previously had been in such experiments. It could; the cats were a bonus feature, because of the nature of the internet.)

    Learning computers are a thing already; they're just limited by engineering.
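    That kind of unlabeled pattern discovery can be sketched with nothing fancier than a toy two-cluster k-means - no point is ever tagged "cat" or "face"; the grouping emerges from the data alone (the data points here are made up):

    ```python
    # Toy unsupervised learning: split unlabeled numbers into two groups
    # purely by their own structure, with no labels provided up front.
    def two_means(points, iters=20):
        centers = [min(points), max(points)]  # crude initialisation
        for _ in range(iters):
            groups = ([], [])
            for p in points:
                # assign each point to its nearest current center
                idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
                groups[idx].append(p)
            # move each center to the mean of its group
            centers = [sum(g) / len(g) if g else c
                       for g, c in zip(groups, centers)]
        return sorted(centers)

    data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]  # two obvious blobs
    print(two_means(data))                   # centers near 1.0 and 10.0
    ```

    Discovering that the data contains two kinds of thing is the machine's doing; naming one of them "cats" is still ours.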

    Quote Originally Posted by Rockphed
    Descartes said "Cogito, ergo sum", loosely translated "I think, therefore I am." I think he was reducing the set of knowable things down to "I am doubting, so I must exist", but it also applies to our consciousness. The only consciousness that can be proven irrefutably is the consciousness of the one being proven to.
    Ironically, it's increasingly obvious that the "I" that Descartes was talking about doesn't actually exist after all, even though the terminology of the cartesian theatre is pernicious because of how intuitively graspable the concept of a coherent internal "Self" which responds to sense data is. (For more read The Self Illusion by Bruce Hood)

  19. - Top - End - #199
    Firbolg in the Playground
     
    Rockphed's Avatar

    Join Date
    Nov 2006
    Location
    Watching the world go by
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by GloatingSwine View Post
    Ironically, it's increasingly obvious that the "I" that Descartes was talking about doesn't actually exist after all, even though the terminology of the cartesian theatre is pernicious because of how intuitively graspable the concept of a coherent internal "Self" which responds to sense data is. (For more read The Self Illusion by Bruce Hood)
    Obvious to whom, pray tell? Because it is increasingly obvious to me that thou art not me, and therefore I am not thee. Furthermore, with a bit of comparison between writing styles on posts and preferred sources of quotes, we could probably prove that they are not us.

  20. - Top - End - #200
    Dwarf in the Playground
    Join Date
    Jun 2005
    Location
    Buried under C++ compilers

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by GloatingSwine View Post
    You're behind the times when it comes to the current state of computer learning. We're already building learning machines. Google created one, showed it ten million random still images from YouTube videos, and with no prior rules other than pattern recognition it determined the existence of cats. (The original experiment was actually to see if a learning machine could learn to recognise faces when the images provided were not tagged as containing a face, as they previously had been in such experiments. It could; the cats were a bonus feature, because of the nature of the internet.)

    Learning computers are a thing already; they're just limited by engineering.
    I wish you were right, but to quote Blunt, there is no soul in the machine yet.

    Neural nets (which this AI must be using) are fairly old as a concept. They are, however, not really "learning" as much as "self tuning", continuously adjusting their own parameters based on feedback. That makes them fairly adaptive, yes; emergent, no.

    You see, what the Google AI did was pretty nifty; but it was still only executing the task it had been scripted to do. It was a visual pattern detection algorithm coupled with a neural net. It was literally made for nothing other than recognizing similar patterns in a video feed. It doesn't actually know it found a cat; it only spots a frequently occurring pattern. It can't - and won't be able to - act out something it wasn't given scripts for.

    It won't be able to crawl through Wikipedia and learn more about cats, for example.

    That is the difference I had been talking about.
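    The "self tuning" loop being contrasted with real learning is essentially the classic perceptron update: weights nudged by a feedback signal until a fixed task is solved, and nothing beyond that task. A minimal sketch (toy AND-gate task, parameters chosen arbitrarily):

    ```python
    # A neural net in miniature: one neuron whose weights are adjusted
    # by feedback. It tunes itself toward the AND function - nothing more.
    def train_and_gate(epochs=20, lr=0.1):
        w = [0.0, 0.0]
        b = 0.0
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        for _ in range(epochs):
            for (x1, x2), target in data:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out         # the feedback signal
                w[0] += lr * err * x1      # adjust, don't "understand"
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    w, b = train_and_gate()
    print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
    ```

    The architecture never changes; only the numbers inside it do - which is exactly the adaptive-but-not-emergent distinction being drawn.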

  21. - Top - End - #201
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Ravenlord View Post
    I wish you were right, but to quote Blunt, there is no soul in the machine yet.

    Neural nets (which this AI must be using) are fairly old as a concept. They are, however, not really "learning" as much as "self tuning", continuously adjusting their own parameters based on feedback. That makes them fairly adaptive, yes; emergent, no.

    You see, what the Google AI did was pretty nifty; but it was still only executing the task it had been scripted to do. It was a visual pattern detection algorithm coupled with a neural net. It was literally made for nothing other than recognizing similar patterns in a video feed. It doesn't actually know it found a cat; it only spots a frequently occurring pattern. It can't - and won't be able to - act out something it wasn't given scripts for.

    It won't be able to crawl through Wikipedia and learn more about cats, for example.

    That is the difference I had been talking about.
    From your location, I have to ask, how do you feel about recursion?

    I might be convinced that AI in a C based language is very difficult, but I'm not half as convinced that Lisp or something like it won't work. I'm just not convinced that something can't emerge from self modifying software. There are worms (nematodes?) where all the neurons (about 10^3) have been mapped. It's just a matter of scaling that up about a billionfold (which is obviously non-trivial); we've come from 8 bits at 4MHz with 16 KB of RAM in 1980 to 64 bits at 3GHz with 4 cores and 16 GB of RAM. I'm not saying the hardware is definitely there yet, but I'm not convinced it definitively isn't.

    Self modifying software is unpredictable, so that's a negative for reliable reproducibility, but happening randomly once in a while? I can't see how we can rule it out.
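    The scaling estimate above can be sanity-checked with quick arithmetic on the figures given in the post:

    ```python
    # Rough growth factors implied by the 1980-vs-modern figures quoted above.
    ram_1980, ram_now = 16 * 1024, 16 * 1024**3   # 16 KB vs 16 GB
    clock_1980, clock_now = 4e6, 3e9 * 4          # 4 MHz vs 4 cores at 3 GHz

    print(ram_now // ram_1980)     # 1048576: about a millionfold in memory
    print(clock_now / clock_1980)  # 3000.0: raw cycle throughput, ignoring IPC
    ```

    So on those numbers memory has grown about a millionfold - still several orders of magnitude short of the billionfold neuron scaling mentioned, which is roughly the point being conceded as "non-trivial".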

  22. - Top - End - #202
    Dwarf in the Playground
    Join Date
    Jun 2005
    Location
    Buried under C++ compilers

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by halfeye View Post
    From your location, I have to ask, how do you feel about recursion?
    I don't like it. It leads to stack overflows.

    But at the risk of using a quote that has been worn thin already: insanity is expecting different results when you keep repeating the same action. If a design has the capability to improve (in a way relevant to our discussion), then it will exhibit that behaviour with or without recursion. It's the basic design that's relevant in our case, after all.
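    The stack-overflow complaint is easy to demonstrate directly; a sketch contrasting a recursive depth counter with its iterative equivalent (the limit of 100 is artificial, set only to force the failure - the real default varies by interpreter):

    ```python
    import sys

    def depth_recursive(n):
        """Naive recursion: every call consumes another stack frame."""
        return 0 if n == 0 else 1 + depth_recursive(n - 1)

    def depth_iterative(n):
        """The same computation as a loop: constant stack usage."""
        total = 0
        while n > 0:
            total, n = total + 1, n - 1
        return total

    sys.setrecursionlimit(100)       # artificially small, to show the point
    try:
        depth_recursive(1000)
    except RecursionError:
        print("stack overflow, as predicted")

    print(depth_iterative(1000))     # 1000, no stack growth
    ```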

    Quote Originally Posted by halfeye View Post
    I might be convinced that AI in a C based language is very difficult, but I'm not half as convinced that Lisp or something like it won't work. I'm just not convinced that something can't emerge from self modifying software. There are worms (nematodes?) where all the neurons (about 10^3) have been mapped. It's just a matter of scaling that up about a billionfold (which is obviously non-trivial); we've come from 8 bits at 4MHz with 16 KB of RAM in 1980 to 64 bits at 3GHz with 4 cores and 16 GB of RAM. I'm not saying the hardware is definitely there yet, but I'm not convinced it definitively isn't.

    Self modifying software is unpredictable, so that's a negative for reliable reproducibility, but happening randomly once in a while? I can't see how we can rule it out.
    I don't think the language of choice matters much. In the end, C, LISP or any other language is just an abstraction layer over ASM. It's not the languages that are at fault; it's us. We simply can't model a sufficiently advanced thought process using pure mathematics.

    As for your theory on self-modifying software, I believe you are fundamentally correct. Once a piece of software rolls out that can actively modify its basic function and programming, I'll believe that it has potential.

    I haven't heard of any such thing, however.

    Even neural nets don't truly modify themselves; they only tune themselves continuously, getting better at the task they are programmed to do. This is a fundamental limitation, too; as neural nets require a feedback loop to adjust the weights of their neurons. That, I believe, prevents them from picking up completely new actions. The net would need to set up a new neuron cluster and the necessary evaluation routines at the same time to "branch out". This won't change no matter how fast you progress the cycles, either. It's a limitation of design, not implementation.

    As for the nematodes, the one that has been mapped has far fewer neurons - well under 1000, in fact; try 302. And even then I have my doubts about how complete our modelling is. In the real fleshware, the nematode neurons don't exist in a vacuum; they have a complex system of neurochemistry happening around them, which greatly influences how they behave. I wonder how they are going to include that in the models, for example...

  23. - Top - End - #203
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: Freefall: DOGGY!

    Freefall-style AI cannot be programmed; it will have to be grown. A combination of unfettered neural nets and evolutionary algorithms with some very broad targets would be my guess. That, and a hell of a lot of generations. I.e. just like actual intelligence. We can just about model one brain in our biggest supercomputer. When we can simulate thousands at once, subject them to random input and measure their fitness, then we stand a chance to get AI.
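    A miniature sketch of that grow-don't-program recipe: random variation plus selection against a very broad fitness target (the target string, mutation rate, population size and seed are all arbitrary choices for illustration):

    ```python
    import random

    random.seed(42)                    # fixed seed, so the run is repeatable
    TARGET = "think"                   # stand-in for a "very broad target"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        """How many characters already match the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.2):
        """Random variation: each character may be replaced at random."""
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while fitness(parent) < len(TARGET):
        # Each generation: many mutated children, keep the fittest (selection).
        children = [mutate(parent) for _ in range(50)]
        parent = max(children + [parent], key=fitness)
        generation += 1

    print(parent, "reached in", generation, "generations")
    ```

    Nothing in the loop was told how to spell the target; the answer is grown by variation and selection, which is the shape of the argument above, if nothing like its scale.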

    Grey Wolf
    Last edited by Grey_Wolf_c; 2014-12-15 at 05:43 PM.
    Interested in MitD? Join us in MitD's thread.
    There is a world of imagination
    Deep in the corners of your mind
    Where reality is an intruder
    And myth and legend thrive
    Quote Originally Posted by The Giant View Post
    But really, the important lesson here is this: Rather than making assumptions that don't fit with the text and then complaining about the text being wrong, why not just choose different assumptions that DO fit with the text?
    Ceterum autem censeo Hilgya malefica est

  24. - Top - End - #204
    Ettin in the Playground
     
    Griffon

    Join Date
    Jun 2013
    Location
    Bristol, UK

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Ravenlord View Post
    I don't think the language of choice matters much. In the end, C, LISP or any other language is just an abstraction layer over ASM. It's not the languages that are at fault; it's us. We simply can't model a sufficiently advanced thought process using pure mathematics.
    The particular programming language somewhat constrains how the particular human writing it thinks. C had me thinking in lines and functions, later recursion, I never did understand OOP.

    As for your theory on self-modifying software, I believe you are fundamentally correct. Once a piece of software rolls out that can actively modify its basic function and programming, I'll believe that it has potential.

    I haven't heard of any such thing, however.
    Self modifying code is old. However, it is usually heavily deprecated.

    http://en.wikipedia.org/wiki/Self-modifying_code

    I suspect doing much with it is probably a lot harder than recursion or OOP.
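    In the spirit of the linked article, here is a tiny (and, as noted, heavily deprecated) sketch of the mechanism: a program that rewrites its own source text at runtime and re-executes it:

    ```python
    # Self-modifying code in miniature: the program holds its own source,
    # patches it, and rebuilds the function from the edited text.
    source = '''
    def greet():
        return "hello"
    '''

    namespace = {}
    exec(source, namespace)
    print(namespace["greet"]())        # hello

    # "Self-modification": edit the source string, then redefine the function.
    source = source.replace('"hello"', '"hello, modified self"')
    exec(source, namespace)
    print(namespace["greet"]())        # hello, modified self
    ```

    This only edits text and re-executes it, which hints at why the technique is so hard to reason about: after a few rounds of patching, nothing guarantees the program still does what anyone intended.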

  25. - Top - End - #205
    Ogre in the Playground
    Join Date
    May 2009

    Default Re: Freefall: DOGGY!

    All machine learning has the same broad base: invent a predictive function that depends on some unknown parameters, construct a cost function that you try to minimize over those parameters, and use the function to predict new items. If that is ever going to be used to mimic human intelligence, then you need to tell me what cost function humans are trying to minimize. In each activity we do, we often have some intuitive cost function, but as for living itself? People have such different objectives in life that it seems unlikely to make a whole consciousness this way.
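    That broad base fits in a few lines; a sketch with a one-parameter predictor, a squared-error cost, and gradient descent (the data and learning rate are invented):

    ```python
    # Kornaki's recipe in its simplest form: predictor y = w * x,
    # squared-error cost, minimized by gradient descent on w.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0)]   # roughly y = 2x

    def cost(w):
        """Total squared error of the predictor over the data."""
        return sum((w * x - y) ** 2 for x, y in data)

    w, lr = 0.0, 0.01
    for _ in range(500):
        grad = sum(2 * (w * x - y) * x for x, y in data)  # d(cost)/dw
        w -= lr * grad                                    # step downhill

    print(round(w, 2))   # 1.99, close to the true slope of 2
    ```

    Every piece of the recipe is visible here - and so is the objection: for a task like this the cost function is obvious, but nobody has written down the cost function for living a life.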
    Last edited by Kornaki; 2014-12-16 at 12:54 AM.

  26. - Top - End - #206
    Firbolg in the Playground
     
    Flumph

    Join Date
    Apr 2011
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Rockphed View Post
    Obvious to whom, pray tell? Because it is increasingly obvious to me that thou art not me, and therefore I am not thee. Furthermore, with a bit of comparison between writing styles on posts and preferred sources of quotes, we could probably prove that they are not us.
What Descartes referred to as "I" was an immaterial soul, which is shown all the sense data gathered by the brain and which makes decisions that are fed back to the brain for action.

    He identified the pineal gland as the part of the brain where this happened, because nobody knew what the pineal gland did at the time so it might as well be that.

Modern neuroscience shows in several ways that the brain does quite a lot on its own without the conscious "I" (brain activity related to performing voluntary acts begins up to half a second before conscious awareness of the decision to perform the act), and that a very great deal of the "sense data" we assume to be shown to the "I" is actually made up by the brain (there are a lot of good examples of this not only in The Self Illusion but also in Richard Wiseman's book Paranormality).

The most obvious example, though, is our sense of colour vision. Our actual physical colour vision is limited to quite a narrow portion of our field of vision, but we appear to see our whole field of vision in colour because our brain makes up the rest based on what it assumes it already knows about the things it is seeing. It turns out that we aren't the Cartesian "I"; we're actually the omnipotent deceiver.

  27. - Top - End - #207
    Troll in the Playground
     
    NecromancerGuy

    Join Date
    Oct 2011
    Location
    Therinos
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by GloatingSwine View Post
    He identified the pineal gland as the part of the brain where this happened, because nobody knew what the pineal gland did at the time so it might as well be that.
    Do we know what the pineal gland does nowadays?
    Quote Originally Posted by Zap Dynamic View Post
    I want to create a world that is full of possibility, and one of the best ways to handle it is by creating a bunch of stories that haven't yet been finished.
    Quote Originally Posted by Grey_Wolf_c View Post
    At this point, however, I'm thinking way too hard about the practical problems of running a battle royale school for Russian assassins, so I think I'll leave it there.
    In my posts, smilies generally correspond to my expression at the time. As an example, means "huh?" and "Hmm..". Also, "Landis" is fine.

  28. - Top - End - #208
    Firbolg in the Playground
     
    Flumph

    Join Date
    Apr 2011
    Gender
    Male

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Landis963 View Post
    Do we know what the pineal gland does nowadays?
    Yeah, it produces melatonin (a hormone which regulates sleep patterns).

  29. - Top - End - #209
    Titan in the Playground
     
    Grey_Wolf_c's Avatar

    Join Date
    Aug 2007

    Default Re: Freefall: DOGGY!

    Quote Originally Posted by Landis963 View Post
    Do we know what the pineal gland does nowadays?
I should add that, apart from not knowing what it did, it was also believed to be exclusive to humans - I was always told that this was the reason Descartes believed it to be connected to the soul, since "obviously" "lesser" animals didn't have souls. This turned out (unsurprisingly) to be false: it's not only present in most animals, it's particularly well-developed in lizards.

I think it was this, together with the assorted idiocies expressed by Aristotle, that soured me on philosophy as a discipline.

    Grey Wolf
    Interested in MitD? Join us in MitD's thread.
    There is a world of imagination
    Deep in the corners of your mind
    Where reality is an intruder
    And myth and legend thrive
    Quote Originally Posted by The Giant View Post
    But really, the important lesson here is this: Rather than making assumptions that don't fit with the text and then complaining about the text being wrong, why not just choose different assumptions that DO fit with the text?
    Ceterum autem censeo Hilgya malefica est

  30. - Top - End - #210
    Banned
     
    Sartharina's Avatar

    Join Date
    Apr 2014
    Gender
    Female

    Default Re: Freefall: DOGGY!

The big thing about "P-zombies" is... how do we know they don't think? We can't experience what executing code actually feels like (if it feels like anything).

And I thought the meaning of Descartes' "Cogito ergo sum" was him saying "I'm thinking, which means there's something that needs to be doing the thinking, and therefore I exist, so I can think. If I didn't exist, there wouldn't be anything to think, and so I wouldn't be thinking."
    Last edited by Sartharina; 2014-12-17 at 02:03 AM.
