Making a Thinking Robot

Bill Grundy
Senior Honors Thesis
Symbolic Systems Program
Stanford University
Professor Fred Dretske

June 5, 1991

Introduction

Yesterday, as I was playing 3-D tic-tac-toe on my personal computer, my roommate described to me the computer's strategy: "He tries to get three corners on the top or bottom level. If he does that he guarantees a win." Undoubtedly, my roommate intended his description to be metaphorical, a sort of computational anthropomorphism. Few people would seriously entertain the possibility that my PC has the capacity to maintain beliefs or desires. Most people intuitively feel that the computer cannot truly want to win the tic-tac-toe game in the same way that its human opponent wants to win. Nor can the computer believe that a certain strategy will guarantee a win.
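
The strategy my roommate described reduces to a mechanical rule, and a few lines of code can make the point concrete. The following sketch is purely illustrative: the 4x4x4 board representation, the function names, and the move-selection details are my own assumptions, not the actual program's internals.

    # Hypothetical sketch of the "three corners" heuristic my roommate
    # described. The 4x4x4 board (nested lists, None for an empty cell)
    # and all names here are assumptions made only for illustration.

    CORNERS = [(x, y) for x in (0, 3) for y in (0, 3)]  # corners of one level

    def corners_held(board, level, player):
        """Count the corners of the given level occupied by `player`."""
        return sum(1 for (x, y) in CORNERS if board[level][y][x] == player)

    def prefer_corner_move(board, player):
        """Return a free corner cell on the top or bottom level, if any.

        The "strategy" is nothing more than this preference: claim
        corners on level 0 or level 3 until three of them are held.
        """
        for level in (0, 3):
            if corners_held(board, level, player) < 3:
                for (x, y) in CORNERS:
                    if board[level][y][x] is None:
                        return (level, x, y)
        return None  # otherwise defer to some other move-selection rule

Spelled out this way, the computer's "desire" for corners is nothing more than an ordering over candidate moves.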

However, intuition is not always a trustworthy guide. Even the outward appearance of a computer can strongly affect our judgment about its internal functioning. Take away the computer's metal casing and sharp edges. Enclose it in a soft exterior and give it a coat of fur and an adorable, expressive face. Then try to dismantle it. The task may prove difficult if the fuzzy computer shrieks when you cut its skin, if it scuttles away on motorized wheels, if it bleeds simulated blood. Much of the popular fear of "thinking computers" undoubtedly stems from a distaste for the computer's unfriendly exterior rather than from a firm understanding of why computers can or cannot think.

Flakey the Robot

This, then, is the question this paper will address: Is it possible to create an artifact that has beliefs and desires similar in kind to our own? Although the argument will be primarily philosophical, the paper stems from practical experience. In the summer of 1990 I interned at SRI International, where I helped program Flakey the Robot. Flakey is a black, wheeled octagon about three-and-a-half feet tall. It is powered by two golf-cart batteries and carries a powerful on-board computer, as well as a packet radio for communication with off-board computers. Flakey "sees" by means of a ring of twelve Polaroid sonars around its base, a laser light stripe, and a video camera.

The last day of my internship coincided with SRI Family Day, so I had a chance to test, in a dynamic environment, many of the behavior routines I'd written. As Flakey wandered around a room crowded with rambunctious children and worried parents, I'm sure that many of the observers were convinced that something profound was occurring within the robot. When Flakey said, "I am six years old. How old are you?" the children would reply truthfully and try to continue the conversation.

But I found myself entirely unimpressed. Understanding most of Flakey's internal functioning made the final product seem less like hocus-pocus and more like simple mechanics. Flakey's "artificial intelligence" existed only for those who didn't understand the internal workings of the robot. The experience forced me to reconsider my conception of human intelligence. For me, Flakey's relatively complex behavior did not appear to be cognitive because I understood it too well. Perhaps, if psychology and neurology can be combined, a similar reduction may be made for human intelligence. Or perhaps Flakey lacks some essential property, a certain physical configuration or a type of history, without which it cannot have beliefs and desires. This paper is an attempt to find that property, to determine what would be required to make Flakey think.

