-
2014-08-26, 05:19 PM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
Would we? I don't have a figure, but I'm willing to bet there are far more computers than there are people. Clearly, your flash drive can't outperform one of those Intel i7 octa-core processors, but there exists the possibility, however remote, of something developing emergently. Do you think such a thing would or even could announce itself with fanfare? Can you recognize intelligence when you see it? Or would you just "fix" it, like any other software error?
Ignoring the possibility of emergence, there are other, more likely avenues as well. I've spent time recently collecting a number of programming techniques and mathematical tricks, and I'm going to hazard a guess that neither you nor the author of this article really comprehends what computers are capable of. In fact, your author goes so far as to liken computers to lawnmowers. These days, no, your typical home computer isn't going to cut it, but there's no reason to believe that computers are just going to stop here.
-
2014-08-26, 06:10 PM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
I'd guess that when we actually understand intelligence, a modern-day cell-phone will be able to run the minimal intelligence algorithms we come up with at reasonable speeds. Whatever we come up with initially will be sluggish and bloated because we'll be doing all sorts of unnecessary computations that we don't know are unnecessary.
As far as 'can a computer be intelligent', you can simulate ab initio quantum mechanics (slowly) on a computer, so in the worst-case scenario where intelligence really is strongly dependent on quantum effects and the physical properties of matter, you can still build a human out of simulated electrons, protons, and neutrons on the computer. So at least as far as whether or not it's possible: 'it's possible'.
Whether or not that will be what we end up making is of course the more relevant question. No one is going to do ab initio QM calculations of a human brain as anything but a publicity stunt. So that leaves us either creating strong AI through intentional research efforts, or losing interest in strong AI and not pursuing it (as has been the trend for a while, though I'd guess there's a resurgence of interest in strong AI of late).
If we decide to stop pursuing strong AI directly, then any strong AI we end up with will have to be something that emerges on its own due to the unanticipated interactions of the systems we create. That may be possible, but more importantly it'd be really hard to detect or even define. It may well involve a step that passes through human-based hardware in its computations (I mean, an AI which uses humans for its image recognition and language generation but has thought processes that are provably independent from the humans who act as its inputs and outputs would be particularly hard to detect as an AI since it would be easy to argue that the humans involved in the external layer are doing the thinking). So the direction to proceed here if you want to get ahead of that curve and look for existing AIs is to come up with better tests than the Turing test to see whether something is displaying a form of intelligence.
Basically you need a non-invasive, indirect test to ask whether or not a given pattern of behavior displays (different forms of) intelligence - learning, goal-based behavior, logic, etc: you need to define these precisely and distinctly - and then run that test on whatever big distributed systems we make and see what happens.
If we want to directly try to construct strong AI, this might also be a reasonable way to proceed, but now we use the measurements to test and design algorithms that score highly on those metrics. Alternately, we can keep taking particular 'bits' of human thought and try to map them out well enough that we can replicate the essence of them on the machine - this is what we're doing with things like computer vision, natural language processing, etc. The problem is that we don't seem to be doing very well at chaining those things together across very different types of algorithms (though I guess we're getting better - every high-end machine learning algorithm these days is some random chaining-together of a dozen different ML methods: let's do convolutional RBMs that generate feature sets that we process with PCA, classify with k-means, and then feed into a random forest of decision trees...)
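Just to make that concrete, here's roughly what such a chain looks like in code - a minimal, runnable scikit-learn sketch. The stages (PCA, then k-means distances as features, then a random forest) and all the data are illustrative stand-ins; scikit-learn doesn't ship a convolutional RBM, so that part is omitted:

```python
# A hedged sketch of a chained ML pipeline: dimensionality reduction ->
# cluster-distance features -> ensemble classifier. Data and parameters
# are placeholders, not a recipe.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))           # stand-in data: 300 samples, 100 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in labels

chain = make_pipeline(
    PCA(n_components=20),                 # compress the raw features
    KMeans(n_clusters=10, n_init=10),     # re-encode as distances to cluster centers
    RandomForestClassifier(n_estimators=100),
)
chain.fit(X, y)
print(chain.score(X, y))                  # accuracy of the whole chain
```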
What we won't have is just a sudden 'aha, now it's intelligent!' moment. It's too much of a spectrum, and it's also something where people like to move the goalposts quite a bit. So maybe the third approach is: target the sensibilities of the audience. Make something that gets people to interact with a system as if it's fully intelligent even when much of that intelligence is constructed via artifice, fakery, and even human assistance. Then slowly let that support structure be replaced with automated systems - when it degrades enough that people start doubting that it's an intelligent being, then double back and fix the blemishes. In other words, it may be easier for us to be honest with our expectations if we start from the point of something that is intelligent (because it is actually handled 80% via direct human response) and then start to remove bits of it, instead of starting with something whose intelligence we're naturally skeptical about and trying to get it to do human-like things.
-
2014-08-26, 06:50 PM (ISO 8601)
- Join Date
- Apr 2014
- Gender
Re: The Mechanical Mind - How close are we?
A computer is a collection of on/off switches, a machine used to execute pre-designed programs. It is far more complex than a lawnmower, but both are utterly mindless machines. Stacking enough on/off switches on top of each other will not magically produce a mind through "emergent complexity". If it were going to, at least something would have emerged by now.
The human mind did not suddenly spring into existence when brains got big enough; it is descended from simpler minds that nevertheless had recognizable thought. Computers do not think. A chess program does not invent strategies or even plan; it brute-forces every possible move and chooses based on naked probability. A chess grandmaster will have other skills that key off many of the same faculties, and can also walk, talk, and eat. A chess program is just a chess program, nothing more.
Given the utter lack of progress on creating a strong AI on purpose (fusion stays 50 years away; AI gets further away), the evidence suggests that this is also a pipe dream. And before someone brings up flight, hang gliders could at least get off the ground. Sure, computers have passed many milestones, but these were accomplished via brute force and pre-programming. The common pattern is for theorists to declare that only a strong AI could accomplish something, for computer scientists to work on the problem and create a brute-force solution that has nothing to do with actual thought, and for said solution to be trumpeted as a triumph of AI. In fact, one description of programming is "saying what you want so clearly that even a computer can understand you". I'm pretty sure if you gave me access to Wikipedia and instant reaction time, I could clobber Watson at Jeopardy.
Of course, I have no idea why on earth you would want to create strong AI. I've certainly never thought "Gee, my desktop is nice, but it would be even better if it argued with me over my browsing habits, wanted time to play, and needed days off."
-
2014-08-26, 07:08 PM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
And if you take biology at face value, humans are nothing more than sacks of chemical soup, but that doesn't seem to have stopped us.
You're correct in saying that biology is iterative. Everything is. No one gets everything right on the first try. Not programmers, not writers, not scientists, nor anyone else. What's to say that artificial intelligence should be any different?
On the topic of brute force, I've thought for some time now that the ideal solution to artificial intelligence is brute force itself... Isn't that all that evolution is?
Just because we can.
-
2014-08-26, 07:24 PM (ISO 8601)
- Join Date
- Apr 2014
- Gender
Re: The Mechanical Mind - How close are we?
Even if true AI isn't impossible, consciousness is so poorly understood that it certainly is not a foreseeable tech. Actually, I'd like to make a request. Specifically, I'd like to see an article or outside argument that AI is possible. Everyone I've seen talk about it just assumes it is. I'm not immune to persuasion (in fact, the blog I linked is what convinced me of the position I've been espousing in this thread), and I would like to see a more detailed opposing view than quickly written forum posts can provide.
-
2014-08-26, 07:57 PM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
The problem here, I think, is that you're focusing on learning. Learning is one aspect of intelligence, but it doesn't constitute the whole of it. You really need something to tie all of these disparate elements together. You need a motive force.
Honestly, I really don't care what you think.
I could google a couple things for you, or maybe I've even got something saved on my hard drive. It doesn't matter either way. Those articles would probably make the same mistakes that blog post you referenced did. There'd be some vague discourse on Moore's law, p-zombies, or something, but chances are that they'd tell you nothing substantive, nothing useful.
You're probably better off reading a book about perceptrons than AI theory in general.
-
2014-08-26, 08:46 PM (ISO 8601)
- Join Date
- Aug 2005
- Location
- Mountain View, CA
- Gender
Re: The Mechanical Mind - How close are we?
Arguing that AI is impossible - actually, literally impossible - is inherently also arguing that natural intelligence is impossible. The universe doesn't care whether something was built by design, or evolution, or anything else, only that it exists. If intelligence is possible, then it is possible (though not necessarily practical) to make intelligence. Worst case, somehow manually assembling atoms in the pattern of a human brain (or, more practically, running a sufficiently accurate simulation of such a construct) would do it.
Since intelligence is manifestly possible, unless you're prepared to deny you yourself are an example of it, then artificial intelligence must also be possible.
If you want to argue that it is not practical, or that there's no good motivation to develop it, that's a completely different issue.
-
2014-08-26, 08:48 PM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
Yes, this is certainly true. But I think that no one really takes this seriously in the actual field. The actual understanding of how to solve complex problems has evolved over the last 50 years (and of course is subject to fads), but I don't think people are just creating giant boolean networks and hoping they will become conscious.
The actual pursuit of strong AI is a synthesis of a lot of different directions of research that each tackle a small part of the problem, because the problem can't be tackled all at once as a whole right now. So for example:
- We have a much better understanding of the visual system of biological brains, and computer vision has improved as a result of that and as a result of a better understanding of scale-, rotation-, and translation-invariant unsupervised and supervised learning algorithms.
- We have a much better understanding of the structure of language. In the last decade several algorithms have been invented that can learn the grammar of a language simply by reading through large bodies of text in an unsupervised way.
- We understand a bit more about the connection between emotional expression and language, due to research in sentiment analysis.
- We understand a bit more about tasks like motion planning, gait selection and generation, etc.
There's probably quite a bit more, but I can't really give an off-the-cuff review of machine learning. Of course none of these is 'intelligence' in its own right, but they're all more complex approaches than just throwing a lot of stuff together and hoping it works. Something like IBM Watson combines a lot of different techniques and uses internal estimations and projections as to the success rates of its different techniques to come up with an ensemble answer to Jeopardy questions. That's not exactly 'self-awareness', but it's a little bit closer to self-awareness than, say, early perceptron research. The idea of creating an internal model of the world is used in modern unsupervised learning algorithms like contrastive divergence, where you have a network that generates some kind of classification, uses that classification to create a model of its senses, re-classifies that model, and then uses the way in which the internal view of the world diverges from the external view of the world to learn which aspects of the input are important and which are not. Again, it's not consciousness in its own right, but it's a step towards using the ideas of self-reflection and internal state as part of the computational toolbox.
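To make that loop concrete, here's a bare-bones NumPy sketch of a single contrastive divergence (CD-1) update for a binary RBM. Biases are omitted and the sizes are arbitrary, so read it as the shape of the idea rather than a usable implementation:

```python
# Hedged sketch of CD-1: classify the input, reconstruct the 'senses' from
# the classification, re-classify, and learn from the divergence.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 64, 16
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    h0 = sigmoid(v0 @ W)                            # classify the input
    h0_sample = (rng.random(n_hidden) < h0).astype(float)
    v1 = sigmoid(h0_sample @ W.T)                   # reconstruct the input
    h1 = sigmoid(v1 @ W)                            # re-classify the reconstruction
    # update: difference between data-driven and model-driven statistics
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))

v = (rng.random(n_visible) < 0.5).astype(float)     # a fake binary input
for _ in range(100):
    W = cd1_step(W, v)
```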
Originally Posted by Avian Overlord
Sure, computers have passed many milestones, but these were accomplished via brute force and pre-programming. The common pattern is for theorists to declare that only a strong AI could accomplish something, for computer scientists to work on the problem and create a brute-force solution that has nothing to do with actual thought, and for said solution to be trumpeted as a triumph of AI. In fact, one description of programming is "saying what you want so clearly that even a computer can understand you". I'm pretty sure if you gave me access to Wikipedia and instant reaction time, I could clobber Watson at Jeopardy.
Of course, I have no idea why on earth you would want to create strong AI. I've certainly never thought "Gee, my desktop is nice, but it would be even better if it argued with me over my browsing habits, wanted time to play, and needed days off."
True, this is part of it. In some sense, though, this may be the part where we can be a little sloppy. The 'motive force' in humans has a strong instinctual component, which is basically external conditioning due to biology and evolution, so this may be a place where it's kosher to put some things in by hand rather than have it all be generated internally. The trick is to capture the effect where, over a long timescale, a person's 'motive force' can change due to internal dynamics. Achieving that particular effect might be hard to do in a stable fashion.
If we want to do this in a constructive fashion (add together elements to make an AI) rather than destructive fashion (take human intelligence and subtract elements until we no longer consider it intelligent) I'd say the major categories that we need to capture and connect are:
- Emotional reasoning. How does emotional state vary in response to exterior events and internal reflection? How do the connections between events and emotional state change over time?
- Signal processing and pattern recognition. How do we hear a word? How do we see an image? What are the relevant features we extract?
- Signal generation. How do we control our muscles to walk, our vocal cords to speak, etc.?
- Memory. How do we encode and recall things, and decide which things to remember and which to forget? How do we access and make use of memory via the other processes?
- Linguistic reasoning. How do we parse and compose language? Can we/do we perform tasks of reasoning using language as the primitive?
- Logical reasoning. We can make deductions about situations given information, in a way that is structurally very different from machine learning approaches that require thousands of repetitions.
- Intuition. Let's call this the intersection between logical reasoning and memory. Occasionally we 'recall' certain facts that make significant changes to our logical processes. This happens in an indirect fashion but can be very effective/efficient (association-based, maybe?)
- Decision-making. This is the intersection between Emotional Reasoning and Logical Reasoning. How do we evaluate our internal calculations against the emotional state we have/wish to be in/predict that we will be in?
- Internal Modeling, e.g. 'imagination' or 'self-reflection'. It's known that human brains create an internal model of parts of the world to use in problem solving and cognition in general. We make predictions of what other people might do, visualize the consequences of actions, etc.
-
2014-08-27, 09:26 PM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
I would question whether this is even necessary. If you want it to be socially adept, a significant degree of this is probably necessary, but otherwise, all it really needs to know is how to sit down, shut up, and do what it's told. I suppose it's really a matter of how "human" you want it to be, and as I said earlier, the concept of emotions can function as a useful mechanism for driving action.
It's another question as to whether these things would actually be emotion, but that's outside the scope of this discussion.
I had a thought about this recently. I was thinking about a webcomic when it occurred to me that perhaps the robot's success rate was so low because of its intelligence. Perhaps there was nothing wrong with its vision algorithm, and the problem lay in the robot itself. Perhaps people tend to see what they expect to see?
If not that, well, I don't know a great deal about computer vision or other sorts of pattern recognition, but I'd start by looking for large shapes and presenting those to the AI as individual objects. Perhaps recurse over the same image several times, looking at it in increasingly fine detail.
If we're going to start sticking them in bodies of some kind or another, it might be useful to have a modular interface for this sort of thing. Maybe a "sub-AI" of one sort or another. That, or they can relearn to walk every time they're uploaded to a new body.
Someone (Yora, I think) wrote an insightful post on the subject of memory some time ago. I'll see if I can track it down later.
Ah-ha! Found it! (That was easy.)
Like emotion, is this really even necessary? Language is a semi-formal organization of associations between symbols (visual, aural, or otherwise) and concepts. It might be useful for grasping high-level concepts or aiding the process of learning, but I'd rank it as being only slightly more necessary than emotion, really. An AI without it would surely be less graceful in its intellectual capability, but I don't see why you couldn't make a perfectly functional one without it.
These three can probably be united into a single process. Something like creating an appropriate number of virtual actors and assigning properties to each of them. The trick is differentiating different sorts of situations. Estimating the trajectory of a baseball and estimating a person's reactions are two very different skills, after all.
Heuristics would probably be useful here.
As far as I can tell, human cognition is a big, confusing morass of risk/reward evaluations, individual preferences, various instincts, and self-reflection. Emotions bind these together and provide general impetus towards different courses of action. Still, emotions, in whatever form they would take, don't strike me as being particularly necessary. Sociopaths tend to get along without any great deal of emotional response, after all, though if you follow this line of thought, something else will need to be set in place to replace emotion's function...Maybe just logic mandated by plain old instinct, whatever form that takes.
If you decide to incorporate the concept of emotions into the design, then it becomes a matter of attempting to map a relationship between emotions and cognitions, perhaps through rigorous psychological research or just simple introspection.
-
2014-08-28, 02:33 AM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
The concept of 'functional' is odd here. If the problem is 'strong AI', then the goal is to make something that can do everything a human can do intellectually. That means that e.g. language is essential, even if it's just on the basis that humans can use language with facility, so an AI that cannot do so has not fully solved the problem.
As far as specific things though, there are reasons beyond just 'humans do it':
Emotional reasoning is important because it is the generator of 'purpose'. Pure logic can optimize to a given goal, but in order to select a goal or patterns of goals you need to make absolute statements about purpose that exist outside of logical thought. E.g. without the ability to say 'I want to not feel hungry', you can't decide to make use of your cognitive abilities to determine that eating rice will make you not feel hungry, rice is in stores, rice cookers are in other stores, to buy something you have to be physically at a store, to be physically at a store you need transportation, to get rice or rice cookers from a store or to get physical transportation you can use money, money is paid for performing certain jobs, and therefore getting a job will help you not feel hungry. Unless you have some sort of model for that kind of emotional decision making 'I want X, this makes me feel good, this makes me feel bad, this will make me feel good, this makes me uneasy, etc' then the goals will always be externally imposed.
Language is important because it allows instant adaptation and transmission of information. A backprop neural network trained to control a robot's limbs for example needs thousands of training samples and trials in order to improve the robot's performance. It can't integrate external information directly in order to improve its function (e.g. you can't tell a backprop neural network 'try falling forward and catching yourself', but you can do that for a human). Also, linguistic structures hierarchically encode relationships between things in the world - that makes them very convenient for doing logic-based tasks. Trying to represent logical relationships without the richness and hierarchical structure of language is much, much harder.
I listed Intuition and Logical Reasoning as separate because it's pretty clear that Logical Reasoning proceeds in a nearly serial fashion, whereas Intuition seems to be more about networks of association. So the sorts of algorithms you'd design would have very different structures.
-
2014-08-28, 05:05 AM (ISO 8601)
- Join Date
- Jan 2012
-
2014-08-28, 06:42 AM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
One could similarly say 'hey, why should we limit ourselves to human modes of consciousness? I'm going to be non-humanocentric and declare that rock over there to be a strong AI. Victory!'
Humans can do things which we do not know how to reproduce either precisely or even approximately with a computer. Therefore 'given this phenomenon, come up with how to make it happen' is a reasonable way to explore holes in our understanding and knowledge. There may be many kinds of intelligence that can be produced - we've certainly already produced computers with many of them - but there are kinds which we definitely do not know how to produce, and so that indicates where there are things left to be discovered and learned.
-
2014-08-28, 07:14 AM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
Weeeell...In certain belief systems, that sort of is a thing, but that's neither here nor there.
To address your specific points:
Yes, but as I mentioned earlier, you don't need full-blown emotions such as fear, anger, or joy. They can be useful for optimizing behaviors*, but I don't see how they're strictly necessary. For your particular example, I'd classify hunger as more of an instinct.
*Kinda like how you learned to not touch the stove when you were little. The pain sparked a fear. The fear keeps you from trying it again, unless you have a really good reason to do otherwise.
If you're talking about language as a means of communication, then yes, I would have to agree with you on that point.
What I don't think is strictly necessary is the idea of words as a means of thought. I can't say anything definite, but I don't think a strong AI would have to think in exactly the same way humans do.
But logic inevitably involves casting about until an answer is found that fits the question. You may not have much trouble with counting or math, but you've been doing it for so long that it's become ingrained.
Intuition, I think, involves just going with whatever seems to be the most likely answer, perhaps on a subconscious level.
One is brute-force, and the other is heuristic. Either way, I think they both involve the same solution set, whatever form that takes.
-
2014-08-28, 08:24 AM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
The fact that we have more emotions than 'the good emotion' and 'the bad emotion' is evidence that there's something interesting going on computationally with emotions. Instinct is sort of like supervised learning - the goals are given by an external factor. This is clearly an important part of the emotional computation that humans do, so it's not to be ignored. However, the thing that's more subtle is that over time humans can change their goals. Different emotions also correspond to a selection between different methods of thought, which could be important for extended sorts of problem-solving behavior.
That is to say, 'frustration' is what lets you evaluate that maybe, even if you can't prove you won't succeed, it's time to give up the current line of activity and try something else. It's very heuristic, but it's also dynamic and responsive to prior experiences. So there's something non-trivial going on there that we should try to understand.
My guess is that if you leave out emotions, you'll end up with an AI that behaves like someone with a frontal lobotomy. Given a well-defined task which it has been conditioned to do, it'd be able to perform that task. But it wouldn't be able to make use of significant amounts of self-direction, either in terms of solving a more open-ended task ('find a way for our business to prosper') or in its general behavior.
Originally Posted by Grinner
If you're talking about language as a means of communication, then yes, I would have to agree with you on that point.
What I don't think is strictly necessary is the idea of words as a means of thought. I can't say anything definite, but I don't think a strong AI would have to think in exactly the same way humans do.
The thing is, whatever a strong AI uses as its means of thought still has to support a few operations:
- Creating a condensed internal representation of arbitrary complex states of the world or the situation of interest.
- Being able to do processing on these internal representations to check for mutual consistency
- Equivocation: being able to generate permutations of the internal representations in ways that preserve certain desired properties.
- Generalization: being able to break down something in a hierarchical way, so that you can work at various levels of coarseness.
- Computation by reference: being able to take a set of things and assign it a label for the purpose of compression, classification, or identification
Language is nice because it naturally does all of those things. Arguably, anything that does all of those things probably comprises a language due to the first thing on the list.
-
2014-08-28, 09:30 AM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
And on that note, since we can go back and forth about this all day evidently, let's switch to something else.
So what about the mental architecture of a strong AI? What would that look like? To me, it seems that you could try to integrate everything into a single loop, though that might be too complex to program by hand. What might be interesting is implementing the idea of bicameralism in the mental architecture. Instead of creating a single complex consciousness, you could subdivide it into a collection of simpler thought processes.
Originally Posted by Wikipedia - Bicameralism (psychology)
You could go further and have each sense processed by a different module. They would then submit the results to each segment, which would interpret the data and act accordingly.
The benefits here are twofold. First, by splitting AI functionality into a high brain and a low brain, it's that much easier to program. Second, it might be a more efficient arrangement, as the self-aware unified intelligence might waste a lot of cycles on useless cognitions.
-
2014-08-28, 09:44 AM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
I'd say 'why stop at two?'
Some of the most successful models in current machine learning are based on the idea that you take a given situation and then descend down a tree to figure out exactly which algorithms should be doing what. One way to do this is to have every algorithm in your arsenal run on the data and return not just a suggested response but also an estimation of how appropriate it is for this particular kind of data set - that's basically what got Watson from the 60% accuracy range to the 80% accuracy range.
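As a toy sketch of that 'answer plus self-estimated appropriateness' pattern - the experts below are fake stand-ins, not anything Watson actually runs:

```python
# Each expert returns (answer, confidence); the ensemble keeps the answer
# whose own appropriateness estimate is highest.
from typing import Callable, List, Tuple

Expert = Callable[[str], Tuple[str, float]]   # question -> (answer, confidence)

def ensemble_answer(question: str, experts: List[Expert]) -> str:
    candidates = [expert(question) for expert in experts]
    answer, _ = max(candidates, key=lambda pair: pair[1])
    return answer

experts = [
    lambda q: ("a date", 0.9 if "when" in q.lower() else 0.1),
    lambda q: ("a place", 0.9 if "where" in q.lower() else 0.1),
]
print(ensemble_answer("Where was the telegraph invented?", experts))  # 'a place'
```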
Another way to do it is to have meta-computations taking place that over a long period of time learn to route stimuli appropriately. An example of this that was pretty successful is something called 'Adaptive Resonance Theory', where you basically take the input, check it against the stuff you've got, and have your current algorithms/processing modes compete over it. The closest one becomes more tightly attuned to that input in the future. However, if none of them is very close, then you basically allocate a new clump of neurons and train that clump to the inputs.
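The flavor of that, in a dozen lines (a plain distance threshold stands in for ART's actual vigilance test, so this is the dynamic rather than the real equations):

```python
# ART-like online clustering: prototypes compete for each input; the winner
# adapts, and if nothing matches closely enough, a new 'clump' is allocated.
import numpy as np

def art_like(inputs, vigilance=1.0, lr=0.5):
    prototypes = []
    for x in inputs:
        if prototypes:
            dists = [np.linalg.norm(p - x) for p in prototypes]
            best = int(np.argmin(dists))
            if dists[best] <= vigilance:       # close enough: attune the winner
                prototypes[best] += lr * (x - prototypes[best])
                continue
        prototypes.append(np.array(x, dtype=float))   # allocate a new clump
    return prototypes

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(m, 0.1, size=(50, 4)) for m in (0.0, 5.0)])
print(len(art_like(data)))   # two clusters in, two prototypes out
```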
One thing I've been messing with recently for classifying phylogenetic trees for artificial chemistries (otherwise known as 'how many subject areas can we cram into one project') is to use a tree of classifiers to sort the data out. The basic element is something like a Restricted Boltzmann Machine, which is a kind of neural network that attempts to create a compressed encoding of the inputs that is best able to reconstruct the set of inputs from the compressed signal (so it's kind of like Huffman coding, but it figures out the code from exposure to the data). You have one RBM create a 1-bit representation of the world and do its best with that. Then, you take all the situations where that bit is '0' and all the situations where that bit is '1' and use that to branch into a tree. For each sub-group, you compute the difference from the reconstructed input (e.g. the difference between it and the mean of its group), run that through another 1-bit classifier, and keep descending through the tree as far as you want. This works pretty well for figuring out the best 'species' in a whole phylogeny of chemical sets.
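In pseudocode-ish (but runnable) Python, the tree construction looks something like this - with a 1-bit PCA split standing in for the 1-bit RBM to keep the sketch short:

```python
# Hedged sketch of a tree of 1-bit encoders. one_bit() is a stand-in: a real
# version would train a 1-hidden-unit RBM at each node instead.
import numpy as np

def one_bit(X):
    """Stand-in 1-bit encoder: sign of projection onto the top principal component."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0] > 0

def build_tree(X, depth):
    if depth == 0 or len(X) < 4:
        return {"mean": X.mean(axis=0), "n": len(X)}   # a leaf 'species'
    bit = one_bit(X)
    if bit.all() or (~bit).all():                      # no split found: stop here
        return {"mean": X.mean(axis=0), "n": len(X)}
    # recurse on each sub-group's residuals (difference from its own mean)
    return {label: build_tree(side - side.mean(axis=0), depth - 1)
            for label, side in (("0", X[~bit]), ("1", X[bit]))}

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 10)) for m in (-3.0, 3.0)])
tree = build_tree(X, depth=3)
```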
I'm not saying that the human brain uses a tree of 1-bit RBMs, or that the tree of 1-bit RBMs is 'intelligent', but in general it seems that breaking up the space of problems hierarchically is a very powerful tool.
-
2014-08-28, 10:31 AM (ISO 8601)
- Join Date
- Sep 2009
- Gender
Re: The Mechanical Mind - How close are we?
... things have been emerging for a good while. There's a game based entirely on the emergent properties of a very simple set of rules - Life. Calculations have even been done suggesting that, if not a mind, then at least a philosophical zombie could be created in Life, given a big enough grid and enough time.
Life also raises a rather interesting question of what you consider a designed program. It's possible to create self-replicating, complex patterns in Life without intending to, by just painting a random set of cells and then letting the game run its course.
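For reference, the entire rule set fits in a few lines - here's a standard NumPy rendition of one Life step, seeded by exactly that kind of random painting:

```python
# Conway's Life: a cell is born with exactly 3 live neighbors and survives
# with 2 or 3. Everything else in the game emerges from this.
import numpy as np

def life_step(grid):
    # count each cell's eight neighbors via shifted copies (edges wrap around)
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return (neighbors == 3) | (grid & (neighbors == 2))

rng = np.random.default_rng(0)
grid = rng.random((64, 64)) < 0.5         # paint a random set of cells
for _ in range(200):
    grid = life_step(grid)
print(grid.sum(), "cells alive after 200 steps")
```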
-
2014-08-28, 10:38 AM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
I like it, but I have a few concerns.
First, you'd need some way of classifying and recognizing stimuli before you can direct them to the appropriate algorithms.
Second, this is all highly abstract. At some point, the AI needs to stop formulating solutions and actually do something. I imagine that they would need a set of "tools" to work from, like a library of functions.
Third, the first AI in my proposal was an essentially general-purpose one, but it was just focused on accomplishing goals either recognized from past experience or given to it by the second one, the "command center". Since this other one is continually generating new algorithms, coordinating between any one or more of the sprawling number of algorithms and the command center is bound to be a difficult problem to solve.
And is this how you would classify stimuli?
-
2014-08-28, 06:50 PM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
Well, in terms of novel stimuli, sort of. If I see something that might be new then the first question is 'Do I recognize this/is this familiar?' If yes, then that goes one way; if no, then I try to relate it to what it seems to be most similar to - possibly combining features from various previously encountered things ('this is like an armadillo, but it has cat-like ears'). But as far as the stuff that my brain does automatically on a 10-100ms timescale, I have no conscious access to that process, so I have no clue.
It's worth noting that the RBM approach automatically discovers classifications for stimuli based on the statistics of what it's shown. If you show it 20000 cat pictures and 20000 pictures of dogs, it will probably devote a feature to 'is it a cat or a dog?' (this, by the way, is the same kind of algorithm that 'spontaneously learned to recognize cats on the internet'). The tricky thing is when the input has symmetries that you have to respect, such as the fact that in an image there are spatial relationships between the pixels that allow things like 'translation', 'rotation', and 'scaling' to happen without changing the image content. So far, when you're dealing with something like that, it requires the researchers to implement a customized network topology that encodes those symmetries, because they're too expensive for those algorithms to learn in an automated way.
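To illustrate what 'a topology that encodes the symmetry' means for translation: a convolutional layer reuses one small filter at every image position, so a shifted input produces a shifted feature map instead of a pattern the network would have to relearn from scratch. A minimal version:

```python
# Hedged sketch of weight sharing: the same kernel is applied at every
# position, which is what builds translation symmetry into the model.
import numpy as np

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0                          # a vertical bar at column 4
edge = np.array([[1.0, -1.0]])             # crude horizontal-gradient filter
response = conv2d_valid(image, edge)
shifted = conv2d_valid(np.roll(image, 2, axis=1), edge)
# the response to the shifted bar is just the shifted response to the original
```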
This is why I like the idea of 'equivocation' in language so much. Being able to encode statements like 'a translated version of an image is the same image' is very powerful, especially if you can test those statements against things of interest (e.g. a sort of hypothesis-forming system). Ostensibly you could do this with things other than language as well, but I don't know how to do so.
As far as 'actually doing something', whatever the toolbox of functions is can't be static. It has to be internally generated and able to be extended or replaced.
-
2014-08-28, 07:18 PM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
I'm not too familiar with the concept of encoding in the context of artificial intelligence, but if I understand it correctly, it's like hashing a set of raw data into a more manageable form, right?
If that's the case, I think that's the wrong approach entirely. In uncontrolled conditions, even if you try modifying the input stream, there are just too many things to account for. Moreover, when you compress the data, you lose some information.
When I look around myself, I see many things: books, notes, pens, an empty soda can, etc. I recognize these things as separate objects, not as a single lump of information. In this fashion, an AI should also attempt to pick out objects from an image, which can then be incorporated into ongoing situational evaluations. It should not attempt to recognize images based on statistics alone, since that's unfeasible with modern technology outside of laboratory conditions. Even if it weren't, it would still be exceedingly easy to fool or confuse. Methods employed by facial recognition software are probably closer to what you want.
-
2014-08-28, 07:46 PM (ISO 8601)
- Join Date
- Dec 2010
Re: The Mechanical Mind - How close are we?
You actually want to lose information, but maybe that's not clear, because it's not obvious how much information is coming in compared to what you actually end up using. Think about something like a handwritten letter 'a'. When you're reading a sentence, you're going to gloss over details like whether this particular 'a' was drawn a bit slanted, or whether it's slightly larger or smaller than the last one, etc. You might note what font it's in.
But what's coming into your eye is hundreds of thousands of individual patches of light and darkness. Even things like 'is it a bit misshapen?' have already been hashed into a more manageable form. We know the brain does this, and we even know roughly what the mapping is in the human brain. Before we do any sort of higher processing on it, we've already processed our visual input into something which encodes for the presence or absence of sharp edges at particular angles, as well as the presence or absence of changes in time.
Doing this kind of dimensionality reduction is hugely important.
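As a cartoonish but concrete example of throwing information away on purpose - project the input down to a handful of components and reconstruct it; the per-sample quirks vanish, the broad strokes stay:

```python
# Hedged sketch of dimensionality reduction with PCA. The data is synthetic:
# 100-dimensional samples that secretly live on a 5-dimensional subspace.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
basis = rng.normal(size=(5, 100))
X = rng.normal(size=(500, 5)) @ basis + 0.1 * rng.normal(size=(500, 100))

pca = PCA(n_components=5).fit(X)
codes = pca.transform(X)                  # 100 numbers -> 5 numbers per sample
X_back = pca.inverse_transform(codes)     # reconstruction from the compressed code
print(pca.explained_variance_ratio_.sum())   # fraction of variance kept (near 1)
print(np.abs(X - X_back).mean())             # what was lost is mostly the noise
```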
Originally Posted by Grinner
When I look around myself, I see many things: books, notes, pens, an empty soda can, etc. I recognize these things as separate objects, not as a single lump of information. In this fashion, an AI should also attempt to pick out objects from an image, which can then be incorporated into ongoing situational evaluations. It should not attempt to recognize images based on statistics alone, since that's unfeasible with modern technology outside of laboratory conditions. Even if it weren't, it would still be exceedingly easy to fool or confuse. Methods employed by facial recognition software are probably closer to what you want.
There are lots of variations in the details - maybe you use k-means instead of RBMs, or you use different network architectures, or you just do logistic regression on the discovered features, or you do some ensemble thing on top of that - but dimensionality reduction to a small, non-image-like feature space is the key to most if not all of these things.
-
2014-08-29, 02:57 PM (ISO 8601)
- Join Date
- Jan 2012
Re: The Mechanical Mind - How close are we?
I guess what I'm trying to say is that there's a big difference between identifying a single letter and reading a whole page. Similarly, there's a difference between recognizing a single object and recognizing multiple objects in the same image, unless you have some convenient means of isolating them. There needs to be something else between receiving the image and passing the extracted information on to whatever construct is responsible for its reasoning.
Point taken.
I had another idea, though. The Xbox Kinect uses what's called a depth sensor, which works by projecting infrared light and reading how it bounces back. By doing so, it can distinguish how far away a surface is. Using the depth map produced by the depth sensor, you can isolate objects from color images captured by a camera. After doing that, run each object through an image recognition algorithm/neural network/whatever and pass the results to the reasoning module.
I suppose you could apply ultrasound in the same manner.
Either way, this method would be no good for reading, but it ought to work for navigating an environment. Additional calculations would be needed for that sort of thing...
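A hedged sketch of that stage, with a synthetic depth map standing in for the Kinect data - slice the depth map into bands and treat connected regions as candidate objects to hand to the recognizer:

```python
# Depth-band segmentation sketch: threshold the depth map into bands, label
# connected components in each band, and emit bounding boxes. The scene here
# is synthetic; scipy.ndimage.label does the connected-component pass.
import numpy as np
from scipy.ndimage import label

depth = np.full((120, 160), 3.0)          # synthetic scene: wall 3 m away
depth[40:80, 50:90] = 1.0                  # a box-shaped object 1 m away

candidates = []
for near, far in [(0.5, 2.0), (2.0, 3.5)]:     # illustrative depth bands
    mask = (depth >= near) & (depth < far)
    labeled, count = label(mask)
    for i in range(1, count + 1):
        ys, xs = np.nonzero(labeled == i)
        candidates.append((ys.min(), xs.min(), ys.max(), xs.max()))
print(candidates)   # crop these boxes from the color image for recognition
```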
-
2014-08-29, 03:04 PM (ISO 8601)
- Join Date
- Sep 2009
- Gender
Re: The Mechanical Mind - How close are we?
... doing that using ultrasound is called sonar. Both it and the infrared variation are already used in robotics.
"It's the fate of all things under the sky,
to grow old and wither and die."