Q & A about Artificial Intelligence

This is an interview with Joanna Bryson of MIT and the University of Edinburgh, conducted by Cynthia Kinnan of Golden, Colorado, in 1998.

Hi. I am a high school student from Golden, CO, and I am writing a term paper on artificial intelligence, namely the technological and social hurdles that preclude its development. AI is a topic that has interested me for many years, and I would really appreciate it if you could take the time to answer some questions that I would like to address in my paper, or if you could forward this letter to someone who might be able to do so. It would be tremendously helpful. Thanks in advance for your time.

  • Do you think a machine will ever be truly sentient? Why?
    Unfortunately, this is impossible to answer, because people don't understand what "truly sentient" means. Philosophy has been working on this for a long time. Many people, including the philosopher Dan Dennett, hope that we can use machines to help us understand what it means to be human. You might look at Dennett's web page too.

  • Do you think that the difference between the human brain and a computer is a fundamental one that can never be overcome? Or is it merely a difference of complexity, one that can be surmounted?
    It certainly is a fundamental one we're never likely to fully overcome (genetic engineering would get there first even if we tried!), but whether that matters depends on what you want from AI. Don't forget, we already have AI! If you showed people 100 years ago the calculator you own, they would have called it intelligent and frighteningly human, let alone machines playing chess! That we don't think of this as true AI now reflects what I said in my previous answer: we are redefining what we think it means to be human as we explore machines.

  • What caused you to become interested in AI?
    I was interested in psychology first, and I was good with computers. I think AI's a great way to study natural intelligence (and it's also fun).

  • What are your feelings about the concern that AI would render humans obsolete, or that intelligent machines would turn on their creators?
    This is a danger, just as atomic bombs, train crashes, and failing water and power systems are dangers. We have to be careful when we create powerful machines. But the critical thing to remember is that we are the ones who create these machines! AI is already in our society: it makes video cameras work better, helps computer companies design circuit boards, and helps credit card companies recognize when someone is using a stolen card. It would be downright silly of us to write a program to make ourselves "obsolete." Of course, AI may make certain jobs unnecessary, but that is true of industry in general. Essentially, we have to decide for ourselves whether we write music because we want to hear good music, or because we enjoy writing it (for example). We have to decide what we want to do for ourselves.

    Of course, not everyone gets to make all the decisions! Many craftsmen would have preferred that factories had never been invented. And though most people in industrial countries are now much wealthier than they were before factories existed, some people starved when they lost their jobs. So I don't mean to trivialize this question entirely. But in general I think people are more scared of AI than they need to be, because they over-identify with the computers. They think that anything intelligent must be human, which of course isn't true.

    My greatest fear, then, is that if we did find the kind of knowledge that could help us save lives, the wrong people might use it to take away life and liberty. This is the threat of any new technology: it can be used for both good and bad purposes. But right now, the threats of doing nothing seem worse.
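
    The stolen-card detection mentioned above can be sketched as a simple statistical anomaly check. This is only a toy illustration with made-up numbers, not how any real credit card company's system actually works; the function name, threshold, and data are all assumptions for the sketch:

```python
# Toy sketch: flag a transaction that deviates sharply from a
# cardholder's usual spending pattern. Illustrative data only.
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Return True if new_amount is more than `threshold` standard
    deviations above the historical mean of past purchases."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu
    return (new_amount - mu) / sigma > threshold

usual = [12.50, 8.00, 15.25, 9.99, 11.00, 14.75]  # typical purchases
print(flag_unusual(usual, 13.00))   # ordinary purchase -> False
print(flag_unusual(usual, 950.00))  # suspiciously large -> True
```

    Real systems use far richer signals (location, merchant type, timing) and learned models rather than a single z-score, but the underlying idea of flagging out-of-pattern behavior is the same.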

  • If AI is developed, will it be aware? Why? Also, would we have the right to exploit the labor of sentient, albeit artificial, beings, or would they deserve the same rights as humans?
    I really don't think it makes sense for us to create beings that it wouldn't be right to "exploit." The problem is, people feel empathy for random things, like stuffed animals, but then don't worry about homeless people or people living in other countries. In other words, many of our ethical instincts are just wrong. I think that we need to address this intellectually, and realize that basically, we have no ethical obligation to a machine beyond the extent to which that machine is helping people or is a work of art.
    Good luck, and I hope you study AI some more, either as a programmer or as a social scientist!

    For more information on AI, and a more formal treatment of AI and ethics, go here.

    Update: in 2003, Cynthia Kinnan became a Marshall Scholar. Congratulations!

    page date: February 12, 1998
    page author: Joanna Bryson
    photo credit: Laird Popkin