Thursday, February 19, 2004

 
When Strong AI Gets Wacky

So I am taking this course called "Philosophy of Cognitive Science" this semester. It's been good so far. The class is mainly organized around computational versus connectionist models/philosophies of cognition (with a bit of eliminative materialism thrown in at the end: I want to learn more about this!). For class today we're covering philosophically based objections to computationalism, and boy, are there some good ones, especially when strong AI is the object of critique. Strong AI includes some wacky perspectives, my friends. Consider this gem, found in John Searle's "Minds, Brains and Programs" (reprinted in Minds, Brains, and Computers, ed. R. Cummins and D. Cummins): apparently, in 1979, J. McCarthy stated that problem-solving machines, even simple ones (his example is a thermostat), have beliefs. Searle's reaction (circa 1980)? "Anyone who thinks strong AI has a chance as a theory of mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore, that 'most' of the other machines in the room - the telephone, tape recorder, adding machine, electric light switch - also have beliefs in this literal sense" (144-145).

Wha? Um, I'm with Searle on this point - and many others.

Now, contrast that with the aforementioned eliminative materialism, a perspective in which the very existence of beliefs is rejected: there are no "beliefs" in a connectionist model, and therefore they don't, and can't, exist. If nothing else, the dichotomous and extreme positions being staked out will keep my noodle busy this semester!

- posted by laurie @ 2/19/2004 12:27:00 PM