Quote Originally Posted by Grey_Wolf_c View Post
I do not think it is even comparable to a bright organism. A key component of intelligence is the ability to identify one's own mistakes (feel free to insert here a cheap joke about your most disliked politician). Neural networks & other modern AIs are capable of learning, but still require an outside intelligence - i.e. a human - to tell them what category something belongs to. A spam filter will happily keep marking real emails as spam forever if you don't tell it it is making a mistake. In the end, yes, it is creating its own algorithm to follow, which is impressive, but ultimately, it is still a set of rules.

To be clear, I do not look down my nose at any of this - when it works it is impressive, and facial recognition is at this point better at it than humans are - but it is not something that I would call intelligent.

Grey Wolf
Predictive coding is basically AI built entirely out of 'identify one's own mistakes and adapt'. In fact, in a predictive-coding network the entire sensory input is constructed from the network's own mistakes: it must err in order to detect anything at all. I don't tend to do much with that kind of network because it's a bit fiddly (asking for behavior to emerge as an indirect byproduct is always harder than just asking for it directly, and often I already know what I want the AI to do).
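To make the 'it senses only its own errors' point concrete, here's a minimal sketch of a single predictive unit. All names are illustrative; this is a toy, not any particular predictive-coding library:

```python
# Toy predictive-coding unit: it never receives the raw signal, only the
# error between its prediction and the observation, and it adapts to
# cancel that error. Once the prediction converges, the error (its whole
# 'sensory input') goes quiet.

def run_predictive_unit(signal, lr=0.2):
    """Track `signal` using only prediction error as input."""
    prediction = 0.0
    errors = []
    for observation in signal:
        error = observation - prediction   # the only thing the unit 'senses'
        prediction += lr * error           # adapt to shrink future error
        errors.append(error)
    return errors

errors = run_predictive_unit([5.0] * 50)
print(abs(errors[0]), abs(errors[-1]))
```

The first error is large (the constant signal is still 'surprising'), and by the end it is near zero: the unit has stopped seeing a signal it fully predicts, which is the sense in which it must err in order to detect.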

Anyhow, AI is a vast, vast field now. There are a dozen approaches out there for any given situation, problem, or task. So blanket statements like 'AI is/isn't X' tend to be pretty far off. Each technique has its own idiosyncrasies as well.

Taking the email example:

If you wanted an AI that learned on its own to ignore a subset of emails, you'd need to give it an appropriate context in order to ground it; it can't just receive a bunch of emails and that's the end of the story (though I'd still bet on unsupervised clustering to at least distinguish the 'spam' cluster from the various other stuff). If you want that AI to behave like a person, it also has to have a life outside of the emails, one that the emails somehow make contact with. Then it can learn that the spam emails are basically uninformative about anything it cares about, whereas other emails are useful and integrated with that context. One way to do it would be to provide a stream of emails but force the AI to pull information from them via a limited attention mechanism. Then, after some time spent with the emails, you could task the AI to e.g. answer questions or act in order to obtain a reward or outcome (which could also be self-supervised, using intrinsic motivation functions like empowerment or curiosity).
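On the unsupervised-clustering bet: even with zero labels, spam tends to land in its own cluster because it looks statistically unlike mail that connects to your life. A tiny made-up sketch (the two features and the minimal 2-means below are invented for illustration, not a real spam filter):

```python
# Minimal 2-means clustering over made-up email features:
# (links per message, word overlap with your known contacts).
# No labels are ever provided; the structure alone separates the groups.

def kmeans_2(points, iters=20):
    """Crude 2-means: returns a cluster index (0 or 1) per point."""
    centers = [list(points[0]), list(points[-1])]   # naive initialisation
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min((0, 1),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        for c in (0, 1):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

emails = [(9, 0), (8, 1), (10, 0),   # spam-like: link-heavy, no shared context
          (1, 7), (0, 8), (2, 9)]    # ham-like: few links, lots of context
print(kmeans_2(emails))   # the two groups end up with different indices
```

This only gets you 'these emails are different', not 'these emails are junk'; attaching the second meaning is exactly where the outside context comes in.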

Now where it really gets fun is that, with the right kind of indexing, you can use that same attention mechanism to decide e.g. which out of a set of people to send a particular email to in order to query for the information needed to answer those future questions or take those future actions. You just ask it the same way: if we pretended we had a response from everyone, which response would be most likely to be salient?
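The 'pretend we had a response from everyone' query can be sketched as plain dot-product attention: embed the question, embed one imagined reply per person, and pick the reply the attention weights favor. The vectors below are stand-ins for learned embeddings; none of this comes from a real system:

```python
# Hedged sketch: softmax attention over imagined responses, used to choose
# who to actually query. Embeddings are invented 3-d toy vectors.
import math

def most_salient(query, candidates):
    """Scaled dot-product attention; return (top index, softmax weights)."""
    scores = [sum(q * k for q, k in zip(query, cand)) / math.sqrt(len(query))
              for cand in candidates]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    return max(range(len(weights)), key=weights.__getitem__), weights

query = [1.0, 0.0, 1.0]                 # embedding of the info we need
imagined_replies = [[0.1, 0.9, 0.0],    # person A: off-topic
                    [0.9, 0.1, 0.8],    # person B: on-topic
                    [0.4, 0.4, 0.2]]    # person C: vague
idx, weights = most_salient(query, imagined_replies)
print(idx)   # -> 1: person B's imagined reply scores highest
```

The same machinery that attends over received emails is just pointed at hypothetical ones, which is the indexing trick: salience over imagined responses tells you who is worth asking.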