Monday, November 24, 2008

a long, long way to go

One of the great hurdles in the creation of true human-style artificial intelligence is intentionality-- the world of meaning. How would an artificial mind navigate the following problem, which arose during conversation with my mother as we were prepping lunch yesterday?



MOM (looking at the bowl of food I'm about to stick in the microwave): Oh, that doesn't need to go in for even a minute. Just put it in for 39 minutes.

ME: OK.




An overly literal artificial mind would miss my mother's mistake and might get confused about what she really meant. It's pretty obvious to us humans that Mom meant "39 seconds," not "39 minutes."

But what makes that obvious? How would you design an artificial mind that could handle the fuzzy logic of human discourse, a domain in which such mistakes are both routine and correctly interpreted?

My "OK" in response to Mom's utterance indicates that I "saw through" the words and immediately grasped the intended meaning-- an easy feat for human minds. What would it take to bring an artifical mind up to speed on mistaken utterances, subtletly, puns, sarcasm, slang, lying, and so on?

While I'm optimistic about the possibility of human-style AI, I don't see it happening anytime soon because of problems like the one above.


_

9 comments:

Anonymous said...

This is the most concise illustration of the problem of artificial intelligence that I have seen. However, there is still another dimension to it--the sense of self. I am not a subscriber to Daniel Dennett's "no Cartesian Theater" hypothesis. Based on my own reading and experiences, I firmly believe there is a Cartesian Theater and a Central Meaner (to borrow two phrases from his book, "Consciousness Explained"). For artificial intelligence to be truly intelligent and not a super-logic machine, it will require a sense of self.

Kevin Kim said...

Bill,

Hey! Thanks for visiting!

Well, you know me: I lean more toward the Buddhist end of the spectrum regarding whether there's a fundamental self. There's a self, all right, a conventional "I," but it's both particulate and relational, which to my mind automatically disqualifies it from being some sort of self-existent monad.

Perhaps there is a Central Meaner, but if that Meaner can be analytically divided into parts and shown to relate to everything around it, then the Meaner's existence is at best conventional, not fundamental. It arises in the processual flow and turbulence of existence, retains coherent continuity for a time, and passes away, like all physical (and mental!) phenomena.

That's my take, anyway.

By the way, I hope all's well. Happy Thanksgiving!


Kevin
(not really here)

Malcolm Pollack said...

I don't see a problem with this one, really. Based on previous experience, you immediately noticed that the recommended timespan was far out of the normal range, then tried a units substitution to see if it would get you a more reasonable answer, which it did.

Kevin Kim said...

Bill,

It occurs to me that my above comment isn't necessarily in total disagreement with your stance. We might differ on the question of the fundamental existence of a Central Meaner, but might agree that some sense of self needs to be in place for an artificial mind to be able to navigate the waters of human interaction.

I obviously need to read more Dennett, not to mention more literature on AI.


Kevin

Anonymous said...

Malcolm, the recognition of the inappropriate units is exactly the problem Kevin is pointing out. What creates the inappropriateness?

Kevin, your description of a Central Meaner is much akin to Dennett's. Dennett takes it further, and considers everything to be like you describe, even the sense of self, which he doesn't really deal with, just implies from the mechanisms he proposes.

Kevin Kim said...

Malcolm,

Probably true, but what algorithm would you need to write in order to program a computer to follow exactly that chain of reasoning and not other ones-- ones that, even to an "experienced" artificial mind, might appear equally plausible even if they weren't?

Aren't we, at this point, talking about the so-called "frame problem" in artificial intelligence? Zipping right to the unit substitution is indeed easy for me, but I'm not convinced that we can, at this point in history, program a computer to note Mom's mistake and act as decisively as I did in that moment.

There's literally an infinity of possibilities that have to be discounted as irrelevant in order for me to know that Mom wants me to set the microwave for 39 seconds. I can "crop" almost all those irrelevant possibilities in an instant and zero in on the unit substitution, but could current AI do that? Might it be able to do that in five years? Ten?


Kevin

Malcolm Pollack said...

Hi all,

Well, I certainly admit that this sort of problem can get out of hand very quickly, but this is such a simple and quantitative example that I think it's well within the capabilities of existing systems.

As a software engineer I constantly have to "validate" user input to make sure that it falls within reasonable bounds. Thirty-nine minutes is an unusually long time to put anything in the microwave; it would have to be something pretty large, like a ham, to justify it.

So a program that knew about microwaving things could easily be on the lookout for input arguments that exceeded normal bounds-- and if it were accustomed to getting input from fallible humans, checking that the input had been given in the correct units would be one of the first things it might do.
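Just to make that concrete, here's a rough Python sketch of the kind of check I mean-- the plausibility bounds, the unit table, and the function names are all made up for illustration, not anything from a real microwave controller:

    # Assumed plausibility window for a microwave run: 5 seconds to 20 minutes.
    PLAUSIBLE_RANGE = (5, 20 * 60)
    UNIT_TO_SECONDS = {"seconds": 1, "minutes": 60, "hours": 3600}

    def plausible(seconds):
        low, high = PLAUSIBLE_RANGE
        return low <= seconds <= high

    def interpret_duration(value, unit):
        """Take the instruction literally if it's plausible; otherwise try
        substituting each other unit and prefer a plausible reading."""
        literal = value * UNIT_TO_SECONDS[unit]
        if plausible(literal):
            return literal, "taken literally"
        for other, factor in UNIT_TO_SECONDS.items():
            if other != unit and plausible(value * factor):
                return value * factor, "assumed %s, not %s" % (other, unit)
        return literal, "out of bounds; ask the human"

    # "Just put it in for 39 minutes" -> (39, 'assumed seconds, not minutes')
    print(interpret_duration(39, "minutes"))

The whole trick, of course, is that those bounds had to be hard-coded by someone who already knew what the task was.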

It isn't the units mistake that's the hard part; it's getting it to know what sort of thing you want in the first place. Much, much harder would be getting it to know what to do if you just told it "Yo, bro, this needs some nukage."

Kevin Kim said...

Malcolm,

Good points, but I guess what I'm talking about isn't so much the design of a highly focused, task-specific algorithm into which only limited inputs could be funneled, but an entire AI suite that is "aware" of the existence of an extremely wide range of conversational modes and interpersonal situations, and which must somehow narrow this universe down only to what's most relevant in deciding what to do in the next few moments.

I grant that creating a program that handles a limited scope of activities-- e.g., a program that guides a robot in how to microwave something, or a program that plays chess better than Garry Kasparov-- isn't a hard thing to do, relatively speaking, and can be accomplished with today's programming acumen. But the design of such a program is possible only because human programmers will have avoided the "frame problem" entirely by creating a program whose focus is already narrow. A microwaving program has its parameters set by its manufacturers.

What I was trying to get at in my blog post wasn't so much the ease or difficulty of interpreting the language specifically related to microwave cooking; I was thinking more in terms of how humans deal with relevance (which is linked to intentionality), and how this sort of thing is still hard for today's computer programs to handle, as is evident when we look at the still-atrocious state of translation programs.

Maybe I need a better example. Let's say that the programmer's task is to design an AI program that functions as an editor and proofreader: it reads a scripted dialogue given to it, and whenever it encounters something that doesn't make sense (in human terms), it either corrects the problem itself or, lacking sufficient data, flags the offending locution for clarification by real human editors. Could a program be designed today that could easily spot my mother's mistake and replace it with the correct locution? Further, is it currently within our power to design AI that can spot that type of error, not only in microwaving contexts but in all conversational contexts? Your "Yo, bro, this needs some nukage" example falls under this umbrella, I think.


Kevin

Malcolm Pollack said...

Kevin,

Agreed.

We can do it with our biological machinery, though, so the proof-of-concept is right there in front of us.

Might be a while, though. I certainly think real AI machinery is somehow going to involve more than just writing software.