Newsnight 02/10/2014 10:30pm (live) – Google DeepMind
I'd never been interviewed on a major news show before, so I was
nervous to see how I'd done. I finally decided to watch but, to keep
from being overly self-critical, I made a complete transcript of all
three of us talking, since the segment was only 7.5 minutes long.
Below is a slightly tidied version.
The segment opened with a DeepMind
video (YouTube), which was apparently the first
official interview of Demis Hassabis since DeepMind
was acquired for £400,000,000 by Google. The video
emphasises what Hassabis calls Artificial
General Intelligence [AGI], which achieves "superhuman
performance" on Atari video games that it learns overnight; the
new Google ethics board; and the fact that Google has agreed not to
give DeepMind's software to the US military.
Laura Kuenssberg: Should we be excited or terrified or maybe
both? With us to discuss whether artificial intelligence is a
problem or a solution is Doctor Joanna Bryson, a Lecturer in
Artificial Intelligence and Ethics at the University of Bath [Note:
I'm really a Reader, in Computer Science] and Professor Nick
Bostrom, a Philosopher at Oxford and the author of a recent book on
the dangers posed by superintelligence. Thanks both for being
with us.
Dr Bryson, how far are we really from the kind of intelligence where
a machine can think like us and go beyond being just an inanimate
object?
Joanna Bryson: Those are almost two different questions. We already
have machines that can do arithmetic, and if you'd shown that to
somebody 300 years ago it would have been magic. It would have
been black magic; it would have been completely scary.
The "go on beyond being a mere machine", well, again, these machines
that do arithmetic, they do it faster than we can. You've just
seen [on the video] a machine that can play Atari faster than we
can. Actually chimpanzees can learn to do things faster than humans
too. So… you want to get a little more precise, don't you?
L: In terms of thinking, thinking for themselves, going
beyond just executing a simple task?
J: I think we already have machines that do that to
some extent. The real question is about– One of the things that's
been wonderful about artificial intelligence is it's helped
us think about what we can do, and how we think. How much do
we break out of habits, and how much are we "pre-programmed"?
I think we [AI] have some of that, and I don't think it's that much
further [to go]. I think if you look at, for example,
Watson: it was solving some of these kinds of problems,
cryptic-crossword sorts of things. I think the real
question, (and this is something you'll probably come back to Nick
with too), is about motivation. Who would build a machine that
would have the motivation to take actions that we wouldn't like,
rather than just to do things that we would want?
L: And indeed, Prof. Bostrom, that is the essence of this.
What happens if there are machines that we lose control of?
And they then do things that…we don't like?
Nick Bostrom: Yes, I think that the issues are quite different if
we're thinking about the near-term incremental progress, like
L: computer games…
N: yeah, and [those] have a lot of applications in many different
sectors. Overall it's just good. But there's a very distinctive
set of issues that arise if and when we figure out how to achieve
general intelligence, the thing that Demis was alluding to [on the
video]: the general-purpose learning ability that makes us humans unique and
special on this planet. So when machines start to exceed us in this
general intelligence then it's a totally different game.
L: And how far are we from that do you…
N: So nobody knows
L: Ah!
N: is the short answer. We did a survey last year – we sampled
some of the world's leading AI experts, and one of the questions we
asked was by which year do you think that there's a 50% probability
that we will have human-level machine intelligence? So the median
answer to that was 2045 or 2050 depending on precisely which group
of experts. But with a big uncertainty on either side – it could
happen much sooner or it could take a lot longer.
L: And what happens then?
N: Well, that's [all laugh] that's the 40 thousand dollar question.
J: I actually completely disagree with the idea that this
"artificial general intelligence" is the thing that we're
missing. So the most basic kind of general
intelligence is just matching action to context. It's called
associative learning. That's the most general intelligence. Animals
have it, but we tend to be limited in the situations where we can
apply it. The big thing that Deep Learning did – the big
breakthrough – was actually to restrict: to add a little bit of
extra knowledge in [hand gestures the multi-layered neural network
seen drawn in the video] to restrict the ways it would learn, so
that you could get this kind of thing. It's not… it's not magic.
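[Note: for readers who want to see what "matching action to context"
means concretely, here is a minimal Python sketch of associative
learning. The names in it (AssociativeLearner, choose, update) are my
own illustration, not anything from DeepMind's system; roughly
speaking, what DeepMind added on top of this kind of reward-driven
updating was a deep neural network for turning raw screen pixels into
a usable "context".]

  import random
  from collections import defaultdict

  class AssociativeLearner:
      """Illustrative sketch of associative learning: learn which
      action pays off in which context, from reward alone."""

      def __init__(self, actions, learning_rate=0.1, epsilon=0.1):
          self.actions = actions
          self.lr = learning_rate           # how fast values move toward reward
          self.epsilon = epsilon            # how often to try a random action
          self.values = defaultdict(float)  # (context, action) -> learned value

      def choose(self, context):
          # Mostly pick the action with the highest learned value for
          # this context, but occasionally explore at random.
          if random.random() < self.epsilon:
              return random.choice(self.actions)
          return max(self.actions, key=lambda a: self.values[(context, a)])

      def update(self, context, action, reward):
          # Nudge the stored value a fraction of the way toward the
          # reward just received: the core of associative learning.
          key = (context, action)
          self.values[key] += self.lr * (reward - self.values[key])

  # Example: learning that pressing a lever pays off when the light is on.
  learner = AssociativeLearner(actions=["press", "wait"])
  for _ in range(100):
      action = learner.choose("light on")
      reward = 1.0 if action == "press" else 0.0
      learner.update("light on", action, reward)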
L: But
N: These are baby steps yeah
L: the thing that Nick said, that there is the possibility (is there
not?) that we are going to move towards the stage where machines are
thinking for themselves, making decisions for themselves.
J: We would have to make that decision. We are the ones making
the machines; they're much more brittle than life; so it's something
where we would have to decide to give… to displace our
authorship onto the machines.
L: Well, who makes that decision? Are you reassured by that?
N: Well, so nobody really does [make that decision]. I think that the
drivers are irresistible. Maybe it would be better if we could
globally decide to slow down progress in computer research. It's not
going to happen, I think. We will continue to move forward…
L: But do…
N: towards the precipice.
L: the precipice! [J laughs] It's interesting you use those
words. I mean, in this particular case [gestures towards the
DeepMind video] Google are the people who have bought this company
and given its assurance it will set up an ethics panel [J laughs, N:
"yeah"]. Should we trust Google to be making these decisions on our
behalf?
J: I really didn't like this, this claim [from the video]
that it's OK as long as the military doesn't have it. [Note: Demis
said US military; I duck the "US" due to my accent.] You
know, the military kills… well, war kills a fifth as many
people as murder. It's a red herring to focus on just one
thing.
L: How should we control this then? I mean, you both seem to
think we do have to have some element of control?
J: yes
N: Well I think the first thing is to recognise that there's a
distinctive problem that will arise when we succeed in making
machines generally intelligent. At that point it's no longer
just another tool, just another nifty gadget. At that point, it's
potentially the end of the human era and the beginning of the
machine intelligence era. And that transition to the machine
intelligence era has to be gotten right. And that could
involve some really deep technical questions that we don't yet know
the answer to.
J: I respectfully disagree with Professor Bostrom on a lot of this.
N: [smiles] we do
J: I think we're already in the situation of many predictions you
[to Nick] make. [These] are really big issues, but we're in that
situation now with human culture. So human culture is doing
more than we realise. We don't understand the extent to which
we are taking over the planet. We're taking over all of the
available (uh) life stuff [I wasn't sure I should say "biomass"].
There's a collapse in the number of wild animals; everything is
coming down to a relatively small number of species. So we are
already changing the world into a strange thing and we've not
noticed. Artificial intelligence might actually be one of the
ways that we get a handle on ourselves, and start making a more
sustainable future. It's a tool for thinking. [Smiles at Nick]
L: We will certainly see in the years to come. It's a very, very
interesting debate. I'm afraid we've run out of time but thank you
both very much indeed for coming in and speaking to us tonight. Now,
if imitation really is the sincerest form of flattery, then Sir
Michael Caine has untold numbers of fans…