Newsnight 02/10/2014 10:30pm (live)
[Clip: a DeepMind video, which emphasises AGI that allows "superhuman
performance" on Atari video games, learned overnight; the new Google
ethics board; and that Google has agreed not to give DeepMind's
software to the US military.]
Laura Kuenssberg: Should we be excited or terrified or maybe
both? With us to discuss whether artificial intelligence is a
problem or a solution is Doctor Joanna Bryson, a Lecturer in
Artificial Intelligence and Ethics at the University of Bath [n.b.
I'm really a Reader in Computer Science] and Professor Nick Bostrom,
a Philosopher at Oxford and the author of a recent book on the
dangers posed by superintelligence; thanks both for being with us.
Um, Dr Bryson, how far are we really from the kind of intelligence
where a machine can think like us and go beyond being just a machine?
Joanna Bryson: Those are ah- almost two different questions. So ah
we already have machines that can do arithmetic, and if you'd shown
that to somebody 300 years ago it would have been magic. It
would have been black magic; it would have been completely
scary. The "go on beyond being a mere machine", well, again,
these machines that do arithmetic, they do it faster than we
can. You've just seen a machine that can play Atari faster
than we can. Um, actually chimpanzees can learn to do things faster
than humans too. So… you want to get a little more precise, don't you?
L: In terms of thinking, thinking for themselves, going
beyond just executing a simple task.
J: –[thinks] I think we already have machines that do that to some
extent. The real question is about, you know…and one of the things
that's been wonderful about artificial intelligence is it's
helped us think about what we can do too, and how we think, how much
do we break out of habits, and how much are we "pre-programmed" [air
quotes]. I think we have some of that, and I don't think it's
that much further. I think if you looked at, for example, Watson,
it was solving some of the kinds of problems that we would think of
as cryptic-crossword kinds of problems. I think the real
question, (and this is something you'll probably come back to Nick
with too), is about motivation. Who would build a machine that
would have the motivation to take actions that we wouldn't like,
rather than just to do things that we would want.
L: And indeed, Prof. Bostrom, that is the essence of this.
What happens if there are machines that we lose control of?
And they then do things that–we don't like?
Nick Bostrom: Yes, I think that the issues, the issues are quite
different if we're thinking about the near-term incremental
progress, like L: computer games… N: yeah, like that, which will
have a lot of applications in many different sectors; overall it's just good.
But there's a very distinctive set of issues that arise if
and when we figure out how to achieve general intelligence, the
thing that Demis was alluding to, the general-purpose learning
ability that makes us humans unique and special on this planet. So
when machines start to exceed us in this general intelligence then
it's a totally different game.
L: And how far are we from that, do you think?
N: So nobody knows L: Ah N: is the short answer. We did a
survey last year we sampled some of the world's leading AI experts,
and one of the questions we asked was by which year do you think
that there's a 50% probability that we will have human-level machine
intelligence? So the median answer to that was 2045 or 2050
depending on precisely which group of experts. L: So: N: Um but with
a big uncertainty on either side–it could happen much sooner or it
could take a lot longer.
L: And what happens then? N: Well, that's [all laugh] that's
the forty-thousand-dollar question.
J: I actually ah completely disagree with the idea that this
"artificial general intelligence" is the thing that we're
missing. So the most basic kind of in…general
intelligence is just matching action to context. It's called
associative learning. That's the most general intelligence. Animals
have it, but we tend to be limited in situations when we can apply
it. The big thing that Deep, that Deep Learning did, right?
The big breakthrough was actually to restrict, to add a
little bit of extra knowledge in [hand gestures the multi-layered
NN] to restrict the ways it would learn, so that you could
get this kind of thing. It's not it's not it's not magic. L: But N:
These are baby steps yeah L: the thing that Nick said, that there is
the possibility is there not that we are going to move towards the
stage where machines are thinking for themselves making decisions
N: [fails to get in] J: We would have, we would have to make that
decision. We are the ones making the machines; they're much
more brittle than life; so it's something where we would have to
decide to give–displace our authorship onto the machines
L: Well who makes that decision, are you reassured by that?
N: Well so nobody really does [makes that decision] I think that the
drivers are irresistible. Maybe it would be better if we could
globally decide to slow down progress in computer research. It's not
going to happen, I think. We will continue to move forward L:
But do N: towards the precipice. L: the precipice. [J
laughs] It's interesting you use those words I mean in this
particular case [gestures towards DeepMind video] Google are the
people who have bought this company and given its assurance it will
set up an ethics panel [J laughs, N "yeah"] Should we trust Google
to be making these J: I L: decisions on our behalf? J: I really
didn't like this ah this ah claim that it's OK as long as the
military doesn't have it. You know, the military kills, well, war
in general kills a fifth as many people as murder does.
I mean, it's a red herring to just focus on one thing.
L: How should we control this, then?
L: I mean, you both seem to think we do have to have some element of
control? J: yes
N: Well I think the first thing is to recognise that there's a
distinctive problem that will arise when we succeed in making
machines generally intelligent. At that point it's no longer
just another tool, just another nifty gadget. At that point, it's
potentially the end of the human era and the beginning of the
machine intelligence era. And that transition to the machine
intelligence era has to be gotten right. And that could
involve some really deep technical questions that we don't yet know
the answer to.
J: I respectfully disagree with professor Bostrom on a lot of this.
N: [smiles] we do J: I think a lot of the predictions you [to Nick]
make are really big issues, but we're in that situation
now with human culture. So human
culture is doing more than we realise. We don't understand the
extent to which we are taking over the planet we're taking over all
of the available uh life stuff ["uh" means "I can't say 'biomass' on
TV, can I?"] that we could eat. There's (you know, did you just see
this?) a collapse in the number of wild animals; everything is coming
down to a relatively small number of species. So we're already
ah changing the world into a strange thing and we've not
noticed. Artificial intelligence might actually be one of the
ways that we get a handle on ourselves, and start making a more
sustainable future. It's a tool for thinking. [flashes a big smile]
L: We will certainly see in the years to come. It's a very, very
interesting debate. I'm afraid we've run out of time but thank you
both very much indeed for coming in and speaking to us tonight. Now,
if imitation really is the sincerest form of flattery, well then Sir
Michael Caine has untold numbers of fans…