6.07.2011

Strong AI

Question: Is Strong AI possible? Can a mind be replicated in a machine?

20 comments:

Jared said...

First of all, we have to get some definitions out of the way:
AI
"mind"
"machine"

If by AI you mean a program capable of making complex calculations and decisions based upon its previous incorrect conclusions, then yes. If you mean a program which acts exactly as a specific person would in a specific situation, then I would argue that isn't an AI.

"Mind" --what portions of cognition are you referring to. You, of all people, should know that the "oneness" of the human brain is an illusion. Can we have a computer which is capable of perception, analysis, and response to stimuli? Of course, and retention of these responses and their consequences, adjusting future responses based upon these? Yes, many search engines do this now. Currently, such a computer won't be small, but it can be done.

What exactly do you mean by "machine" here, are you referring exclusively to transistor based computers, or can organic/quantum components play a role? I ask only because quantum tunneling and size may be a factor. If you want a "mind" the same size as the human brain, it would take some powerful 3D processors, which are notoriously hard to manufacture (and cool) right now, or some non-transistor-based computers.

The goals of building a strong AI and of replicating the mind on a computer seem to conflict... You can have a system that replicates the human mind, or you can have a strong AI, but not both.

Pliny-the-in-Between said...

Ah, I keep forgetting our little clutch has some very technical people in it - my bad.

For this question, let's assume that strong AI refers to a nonhuman or artificial machine (of any kind or configuration) that matches general human cognition: strong AI in the sense targeted by Searle's Chinese Room argument.

Exclude any discussions about mimicking a particular person.

Yes, I know this is a straw-man setup, but humor me for a bit ;). The illusion of mind is hopefully going to be the point of this.

mac said...

Yeah, what Jared said.

I think if a machine is to fully mimic a human mind, it would have to have experiences similar to a human's (i.e., it would have to "grow up").
As you know, intelligence is important, but it isn't everything. For a machine to truly replicate a mind, it must learn pain, joy, suffering, defeat... empathy.


If not, it would just be another right-wing politician.

pboyfloyd said...

I think that a mind might be fairly easy to create considering the leaps and bounds in memory and computing power.

I think that the main thing in creating a mind, as opposed to just a computing program, is feedback loops: basically, two powerful programs that are connected, watch each other, and 'think' of the other as the 'self'.
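Something like this toy sketch, maybe (names invented; this is just the shape of the loop, not a claim about how it would really work):

    # Two programs wired to each other; each reports the *other's* state
    # as its own "thoughts."

    class Half:
        def __init__(self, name):
            self.name = name
            self.other = None        # the peer this program treats as "self"
            self.state = "idle"

        def think(self, thought):
            self.state = thought

        def introspect(self):
            return f"I am thinking about: {self.other.state}"

    a, b = Half("left"), Half("right")
    a.other, b.other = b, a          # close the feedback loop

    b.think("nothing much")
    print(a.introspect())            # -> I am thinking about: nothing much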

When we're thinking of a mind, we're thinking of awareness, yes?

Of course the first artificial mind/awareness would be obviously male.

If you asked it what it was thinking, for example, it'd likely say 'nothing', you know, unless it was actually thinking something.

Jared said...

"Yes, I know this is a straw man set up, but humor me for a bit ;). The illusion of mind is hopefully going to be the point of this."

Didn't I say that in the second sentence of the "mind" paragraph?

In any event, we're not talking about two or even ten programs with feedback loops, but hundreds of computers, each running dozens of programs, one computer for each portion of the brain. All of these would need to work together in a cluster and be forced to ignore the errors of the other computers in the cluster.
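A rough sketch of that error-tolerant clustering, with entirely made-up region names (one worker standing in for each computer):

    # Toy cluster: one worker per "brain region," errors isolated per worker.

    def run_cluster(regions, stimulus):
        results = {}
        for name, process in regions.items():
            try:
                results[name] = process(stimulus)
            except Exception:
                results[name] = None   # a misfiring region is ignored, not fatal
        return results

    regions = {
        "visual": lambda s: s.upper(),
        "motor":  lambda s: 1 / 0,          # this one errors out...
        "speech": lambda s: f"I saw {s}",
    }
    print(run_cluster(regions, "light"))    # ...and the cluster shrugs it off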

If you're going to engineer a machine to "think," why not make it think well, rather than like a human?

Pliny-the-in-Between said...

You did make that point, it's true. I think that will be one of the hang-ups that will be difficult to dislodge. People are wedded to this notion of the ghost in the machine, rather than to mind as a description of neurochemical processes observed in brains.
------------------------------

If you're going to engineer a machine to "think," why not make it think well, rather than like a human?
------------------------------------
A great question! The second one, really; the first being whether it's possible at all.

I would (and will, in fact ;)) argue that the whole point of AI is not to repeat the same cognitive-design mistakes that are foundations of our own. That sounds pretty arrogant and presumptuous, but it's a good place to start an argument... which is the whole point of this post.

Jared said...

Now Pliny, the whole idea of starting an argument implies disagreement. If you're looking to start an argument with me, just say that the human brain isn't a series of computers! (It is... and it's also in need of a serious engineering overhaul: more insulation from interference, improved cluster bridges, higher clock speeds, better memory fidelity, higher efficiency, and a less fragile matrix. I've thought about this far too much.)

This is one of the reasons the "intelligent design" idea doesn't hold much sway with me; they're saying their god is really a pathetic engineer...

Anyway, topic at hand:
If you're trying to make an AI, you don't start with the way people think; you start with the way computers process, then add layer upon layer of logic, memory, sensory, and additional "expansion" feedback mechanisms (for plasticity) until the result is a system capable of producing highly accurate results (better than most people) repeatedly (better than all people). A truly good AI still has to "learn," but that "learning" can be transferred to duplicate AIs, though it could be said this "learning" is more akin to a very special kind of "calibration."
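As a crude illustration of that layering-plus-transferable-calibration idea (the structure and numbers here are hypothetical):

    import json

    # Fixed processing layers plus a learned "calibration" blob that can be
    # exported and loaded into a duplicate system.

    class LayeredAI:
        def __init__(self):
            self.calibration = {"threshold": 0.5}   # the learned part

        def perceive(self, x):                      # sensory layer
            return float(x)

        def decide(self, signal):                   # logic layer
            return signal > self.calibration["threshold"]

        def learn(self, x, target):                 # feedback layer
            if self.decide(self.perceive(x)) != target:
                self.calibration["threshold"] += -0.1 if target else 0.1

        def export_calibration(self):
            return json.dumps(self.calibration)     # transferable "learning"

    original = LayeredAI()
    original.learn(0.6, False)                      # nudges threshold to 0.6
    clone = LayeredAI()                             # duplicate AI, zero training
    clone.calibration = json.loads(original.export_calibration())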

Pliny-the-in-Between said...

OK fine Jared - be that way! I could try to type that the brain isn't a machine somewhat akin to a digital computer, but my fingers would probably freeze.

Your comment on design reminds me of my label for creationism as 'somewhat wonky design'.

We actually approached AI from a different vantage point. For us it was based upon modelling desired outcomes from cognitive processes and reverse-engineering the inputs needed to achieve them consistently. That was the start, at least. And very effective.
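If I had to sketch that outcome-first approach in Python (with a stand-in black box; nothing like the real system):

    # Outcome-first: fix the desired output, then search for inputs that
    # reliably produce it, rather than modeling the forward process.

    def cognitive_process(x):
        return (3 * x + 1) % 10              # stand-in for the real process

    def reverse_engineer(target, candidates):
        return [x for x in candidates if cognitive_process(x) == target]

    print(reverse_engineer(7, range(100)))   # every input that yields 7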

I absolutely agree on the learning. That is the approach we use. It is a calibration exercise.

On a wild-hair note, we are also looking at a natural-selection model for competitive calibration of separate AI kernels. We're also looking at some rudimentary tagging and imprinting so that a particular AI can recognize (a metaphor at this point) data and assets that are 'its'.

The systems are already better than most people (clinically) and on their way to being better than just about anyone.

Jared said...

Ahh, evolutionarily selective programming; I love it!

For the "imprinting," you can make the kernels store "their" data with a 64bit shared key so other kernels can "learn" from it, but recognize it as foreign. This would actually be a pretty neat trick; learning from the experiences of others; already doing better than most of us meatbags.

Pliny-the-in-Between said...

For the "imprinting," you can make the kernels store "their" data with a 64bit shared key so other kernels can "learn" from it, but recognize it as foreign.

----pretty close to what we are doing.
Also, the imprinting allows us to reconstruct any version of the database and algorithms should something go off track. Plus, we can push selection points by altering input data (what meat puppets call experiences) from a common point and track the progress to see which path, if any, is better. At that point all divergent systems can be upgraded with the winner.
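A bare-bones sketch of that fork-and-select loop (the training rule and scoring here are invented stand-ins):

    import copy, random

    # Fork variants from a common point, feed each different "experiences,"
    # score them, then upgrade every branch to the winner.

    def train(calibration, experiences):
        for x in experiences:
            calibration["threshold"] += 0.01 * (x - calibration["threshold"])
        return calibration

    def score(calibration, target):
        return -abs(calibration["threshold"] - target)   # closer is better

    base = {"threshold": 0.5}
    variants = [copy.deepcopy(base) for _ in range(4)]   # common start point
    trained = [train(v, [random.random() for _ in range(20)]) for v in variants]
    winner = max(trained, key=lambda v: score(v, 0.42))
    variants = [copy.deepcopy(winner) for _ in trained]  # all upgraded to winner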

Jared said...

I'll just assume VMs, and one MySQL VM, are being heavily utilized for this, as it's difficult to do that with a physical machine (not to mention very time-consuming). Just what kind of machine is running that? (Or is your database hosted on a separate physical machine?)

Pliny-the-in-Between said...

Ahh, but that's the secret sauce! The system configuration is so compact that we can run most of this on a single machine.

pboyfloyd said...

I think I hear you saying that a mind and strong AI are two completely separate things.

Pliny-the-in-Between said...

It kind of depends on how you define it, pboy. Where I come from, strong AI is used to describe things that are artificial human minds.

Most of the arguments against it being possible tend to be philosophical rather than technical. I think it's because of the misperception that mind is somehow separate from the anatomy (the cause of some of the confusion we see over at Ste B's joint, I suspect).

Those of us who look at human cognition as more of a programmatic function (based on your writings, I'd put you in that camp as well) don't see it that way. Nature built a machine that thinks through trial and error; why can't we?

pboyfloyd said...

Yea sure. I'm still wondering how you make it intelligent without making it aware and self-aware.

I think that's where the money is: artificial friends. Although I think it'll be kind of creepy but fun to have a guest over and suddenly have Elmo in the corner start quizzing him/her while you're off to get some drinks or whatnot, especially if Elmo is answering coherently.

It'd be hilarious if you walked in with the drinks and Elmo was busy expounding, "Oh yeah, well I'm not too fond of you either, ASS-hole! Pliny, kick this deadbeat out on the street, he's a jerk!"

Jared said...

I think the problem with a truly human-like AI is current hardware: the human brain is a highly parallel system. To have the same kind of multi-threading, a computer would need dozens of processors (or lots of simultaneous threads), each with a memristor-based architecture. We're not quite there yet, but that doesn't mean we can't make some awfully near-human-like AIs; they just won't be capable of "learning" and "remembering" in the same way humans do. High-performance SSDs (on the order of GB/s) and 20 nm process architecture (for power and space reasons) may alleviate the need for memristors, provided the processor architecture is fundamentally part of the programming.

In the human brain, there is no software, only hardware with intrinsic plasticity.

Pliny-the-in-Between said...

That is pretty spot on; in AI, software has to take the place of the plasticity of the CNS.

Jared said...

I thank my biology education for my cursory understanding of the CNS, and just general nerdiness for the IT bits (or bytes).

Let's run with the idea of memristors for a moment: if we were to build a system based upon them with the intent of modeling human cognition, couldn't we take the end result and "freeze" it into regular transistors? If that's possible, why not use a supercomputer to model and design such a system?

Pliny-the-in-Between said...

Great question, Jared - we have considered having our AI kernel etched. It is possible to do so because of its structure. That would obviously increase performance, though that has not been an issue for its current duties. But if we got more into the strong AI argument, it might be useful to pick up the extra performance.

Pliny-the-in-Between said...

Biology education is a must for computer science, in my opinion. Our data structures are all designed around my past-life understanding of molecular biology and the extremely efficient ways biological systems store and use information. Our AI even uses a rudimentary form of neurotransmitter analogs, as both agonists and antagonists. These are very powerful in their effects upon the sophistication of the AI's performance. And we've only just begun to investigate the full potential of this.
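A crude sketch of what a "neurotransmitter analog" could look like in code (this is my guess at the flavor, with invented names; not the actual implementation):

    # A "transmitter" level that scales signals up (agonist) or down
    # (antagonist) before the rest of the system sees them.

    class Modulator:
        def __init__(self):
            self.level = 1.0                    # baseline transmitter level

        def agonist(self, dose):
            self.level *= 1.0 + dose            # amplify downstream signals

        def antagonist(self, dose):
            self.level *= max(0.0, 1.0 - dose)  # damp downstream signals

        def transmit(self, signal):
            return signal * self.level

    m = Modulator()
    m.agonist(0.5)
    print(m.transmit(10))   # -> 15.0: same input, stronger effect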