AI Ethics: Artificial
Intelligence, Robots, and Society
Everyone should think about the ethics of the work they do, and
the work they choose not to do. Artificial Intelligence (AI)
and robots often seem like fun science fiction, but in fact already
affect our daily lives. For example, services like
Google and Amazon help us find what we want by using AI.
Every aspect of how Facebook works is based on AI and Machine
Learning (ML). The reason your phone is so useful is that it is full of
AI – sensing, acting, and learning about you. All these tools not
only make us smarter; their intelligence is also based partly on what
they learn both from us and about us when we use
them. So we make the tools smarter too.
Since 1996 I have been writing about AI and Society, including
maintaining this web page. Originally I was worried because
some researchers got into the news by claiming that AI or
intelligent robots would one day take over the world. The
first goal of this page is to explain why that isn't going to
happen.
But by 2008 the USA had more robots in Iraq (then about 9,000)
than its allied countries had human troops (then about
8,500). Also, prominent scientists like Ron
Arkin began saying we should make the robots themselves
ethical, and even got money from the US military to do
that. I've
taken money from the US military too; my problem is not with
that but with the claim that robots should ever be considered
responsible for their actions. The problem here is not with
robots taking over the world, but with people taking over the
world (or at least corrupting large parts of it) by pretending
that robots are responsible. In fact it is people and corporations that
decide how robots act.
I wrote this page because many
people worry about the wrong things when they worry about AI.
It's not that there's nothing to worry about with AI. It's that many
people are confused about the word "intelligent" – they think it
means "like a human." Humans are intelligent, but we're also
tall, and we (mostly) walk on two legs. We don't think ostriches or
giraffes are human, and we shouldn't think robots are human
either. I hope that by writing this page, I can help us worry
about the right things.
Why Build AI?
If some people think robots might take over the world, or if
machines really are learning to predict everything we do, or
even if a president might try to put the blame on a robot for the
president's own bad military decisions, then why would anyone work
on advancing AI at all?
My personal reason for building AI is simple: I want to
help people think.
Our society faces many hard problems: finding ways to work
together yet maintain our diversity; figuring out how to avoid war,
and ending the wars that have already started; learning to live truly
sustainably – so our children consume no more space and time than
our parents did, and no more other resources than can be replaced in a
lifetime – and doing all of that while still protecting human rights, human
dignity, and human flourishing. These problems are so hard,
they might actually be impossible to solve. But building and
using AI is one way we might figure out some answers. If we
have tools to help us think, they might make us smarter. And
if we have tools that help us understand how we think, that
might help us find ways to be happier, and to treat each other and
everything else on our world better.
Of course, all knowledge and tools, including AI, can be used for
good or for bad. This is why it's important to think about
what AI is, and how we want it to be used. This page is
designed to help people (including me) think about the ethics of
AI research.
My students and I are among the many researchers who work on
building artificial
consciousness and synthetic
emotions. These aren't any more magic or deserving of
ethical obligation than artificial hands or legs. In humans, consciousness and emotions
are associated with our morality, but that is because of our
evolutionary and cultural history. In artefacts, moral
obligation is not tied by either logical or mechanical necessity
to awareness or feelings. This is one of the reasons we
shouldn't make AI responsible: we can't punish it in a meaningful
way, because good AI systems are designed to be modular, so the
"pain" of punishment could always be excised, unlike in nature.
Why We Shouldn't Fear AI or
Robots – Machines Aren't People (or Even Apes)
As I said, I think most people are worrying about the wrong things
when they worry about Robots and AI. First, here are some
reasons not to worry.
1) AI has the same
ethical problems as other, conventional artifacts.
In the mid-1990s I attended a number of talks that made me
realize that some people really expected AI to replace
humans. Some people were excited about this, and some were
afraid. Some of these people were well-known
scientists. Nevertheless, it seemed to me that they were all
making a very basic mistake. They were afraid that whatever
was smartest would "win", somehow. But we already have
calculators and phones that can do math better than us, and they
don't even take over our pockets, let alone the world.
My friend Phil
Kime agreed with me, and added that he thought the problem
was that people didn't have enough direct, personal experience of
AI to really understand whether or not it was human. So we
wrote one of my first published papers, Just Another Artifact: Ethics and the
Empirical Experience of AI. We argued that realistic
experience of AI would help us better judge what it means to be
human, and help us get over our over-identification with AI
systems. We pointed out that there are ethical
issues with AI, but they are all the same issues we have with
other artifacts we build and value or rely on, such as fine art or
sewage plants.
We first wrote it in 1996; it eventually got partially published
in 1998 in a cybernetics workshop, but that venue didn't carry much
academic credibility. So in 2010 we got back together and
rewrote it for the leading international conference in AI. The
updated version is called Just an Artifact: Why
Machines are Perceived as Moral Agents and is in the
proceedings of The Twenty-Second International Joint Conference on
Artificial Intelligence (IJCAI '11).
2) It's wrong to exploit
people's ignorance and make them think AI is human.
It is not enough for experts to understand AI. We also have
a professional obligation to communicate to everyone else.
The people who will use and buy AI should know what its risks
really are. Unfortunately, it's easier to get famous and
sell robots (or cars) if you go around pretending that your robot
really needs to be loved, or otherwise really is human – or super
human! In 2000 I wrote an article about this called A Proposal
for the Humanoid Agent-builders League (HAL) for The
Symposium on Artificial Intelligence, Ethics and (Quasi-)Human
Rights at AISB
2000. I proposed creating a league of programmers
dedicated to opposing the misuse of AI technology to exploit
people's natural emotional empathy. The slogans would be
things like "AI: Art not Soul" or "Robots Won't Rule".
In 2000 I didn't know that the US military might try to give
robots ethical obligations, so the whole paper is
written with some humor. But as we've made better AI, these
issues have gotten more serious. Fortunately, academics and
other experts are also getting serious. In 2010 I was one of
a couple dozen people invited by two United Kingdom research
councils to work on making sure robots would fit into British
society. We decided to write the Principles
of Robotics, the world's first national-level soft law
on AI ethics. So a bunch of the ideas in my HAL paper
are now at least informal UK policy. The five principles
are:
Robots should not be designed as weapons, except for national
security reasons.
Robots should be designed and operated to comply with existing
law, including privacy.
Robots are products: as with other products, they should be
designed to be safe and secure.
Robots are manufactured artefacts: the illusion of emotions
and intent should not be used to exploit vulnerable users.
It should be possible to find out who is responsible for any
robot.
Notice that the first three
correct Asimov's laws – it's not the robot that's responsible.
For the full legal versions of the principles and their
explanations, see their
EPSRC web page. For an account of how they were written, see The
Making of the EPSRC Principles of Robotics, from the AISB
Quarterly, Issue 133, Spring 2012, pp. 14-15. More
importantly, to understand why they were written the way
they were (to minimise social disruption and maximise social
utility), see The
meaning of the EPSRC Principles of Robotics, an article I
wrote in 2016 about the Principles as policy, for the fifth
anniversary of their publication by the EPSRC on its webpage
(six months after they were developed).
In October 2007, I was invited to participate in a workshop
called Artificial Companions in Society: Perspectives on the
Present and Future at the Oxford Internet Institute.
I took the chance to write my third ethics article, Robots Should Be Slaves.
In 2010, this (finally) came out as a book chapter in Close
Engagements
with Artificial Companions: Key social, psychological,
ethical and design issues, a book edited by Yorick
Wilks. The idea is not that we should abuse robots (and
of course it isn't that human
slavery was OK!). The idea is that robots, being
authored by us, will always be owned – completely. Building a robot is nothing like having a
child. Robots are more like novels than children. There's no chance
involved unless you deliberately put the chance into it. There's
no vast machinery of biology, evolved over four billion years
before you ever had the idea of having a baby, determining what
components that baby can have. Fortunately, even though we
may need robots to have and understand things like artificial emotions, it is
perfectly possible not to
make them suffer from neglect, a lack of self-actualization, or
their low social status in the way a person would. In fact I'd say
it's an obligation; what right do we have to make a person-like
thing that would be owned, not human? Robots are things we
build, and so we can pick their goals and behaviours. Both
buyers and builders ought to pick those goals responsibly.
People have trouble believing this, so I've returned to it a
number of times, trying to clarify it in different ways:
I got asked to comment on an article by Anne Foerst
called Robots and Theology. Foerst is a theologian; we
both worked on the Cog project at the
MIT AI Laboratory in the 1990s. Foerst has the interesting
perspective that robots are capable of being persons and knowing
sin, and as such are a part of the spiritual world. I
argue in my commentary, Building Persons is a
Choice, that while it is interesting to use robots to
reason about what it means to be human, calling them
"human" dehumanises real people. Worse, it gives people
the excuse to blame robots for their actions, when really
anything a robot does is entirely our own responsibility.
Sometime around 2012 I started working on a paper that was
eventually called Patiency
Is Not a Virtue: The Design of Intelligent Systems and Systems
of Ethics when it finally got published in a journal in
2018 (Ethics
and Information Technology – getting ideas right AND
communicating them can both take a long time, so scientists
often talk about them at conferences before final publication).
The idea is complicated: it's that both AI and ethical
systems are things we build together, so whether AI is a moral
subject (that is, is responsible for itself, or is something
we're responsible to) is a matter of choice, not something
scientists can discover. Since it's easier to author AI
than to totally overhaul our ethics, I recommend building AI to
minimise human social disruption.
In 2016 the European Parliament suggested that AI should sometimes
be legally responsible for itself. This is a super bad idea. Two
law professors (Mihailis E.
Diamantis and Thomas
D. Grant) and I dropped everything to explain why
in Of,
For, and By the People: The Legal Lacuna of Synthetic Persons,
which came out in the journal Artificial
Intelligence and Law in September 2017. We wanted to make
sure the European Commission didn't require or even encourage EU
national governments to do this. An organisation that
contains humans is sometimes considered "a legal person", but
that only works because real humans are held accountable if the
organisation does bad things. In fact, if anything, there's
already a problem of people not being held sufficiently to
account for organisations. When that happens the organisation is
called a "shell company", and such companies are a major source of many forms of
corruption, like money laundering. An AI legal person would be
the ultimate shell company.
Why We Should Worry About AI
Anyway
Being worried about the wrong things doesn't mean that there's
nothing to worry about. Artificial Intelligence is not as
special as many people think, but it is
further accelerating a phenomenon that has itself been accelerating
for about 10,000 years: human culture. Human
culture is changing almost every aspect of life on earth,
particularly human society.
4) Human culture is
already a superintelligent machine turning the planet into apes,
cows, and paper clips.
One of the reasons I object to AI scaremongering is that even
where the fears are realistic, such as Nick
Bostrom and colleagues' description of overwhelming,
self-modifying superintelligence, making AI into the
bogeyman displaces that fear 30-60 years into the future. In
fact, AI
is here now, and even without AI, our hyperconnected
socio-technical culture already creates radically new dynamics and
challenges for both human society and our environment.
Bostrom writes about (among
other things) how a future machine intelligence autonomously
pursuing a worthwhile goal might incidentally convert the planet
into paper clips. We might better think of our current
culture itself as the superintelligent but non-cognizant machine –
a machine that has learned to support more biomass on the planet
than ever before (by mining fossil fuels) but is changing all that
life (at least the large animals) into just a few species (humans,
dogs, cats, sheep, goats, and cows). No one ever
specifically intended to wipe out the rest of the large animals
and other biodiversity on the planet, but we're doing it.
Similarly, no one specifically decided that children weren't
sufficiently monitored by their parents up until the 1990s, but
now childhood
and parenthood have been entirely transformed in just a few
decades. These are just two consequences of our expanding
cognition, and AI is very much a part of that.
5) Big data + better
models = ever-improving prediction, even about individuals.
AI and computer science, particularly machine learning but also
HCI, are increasingly able to help out research in the social
sciences. Fields that are benefiting include political
science, economics, psychology, anthropology and business /
marketing. As I said at the top of the page,
understanding human behaviour may be the greatest benefit of
artificial intelligence if it helps us find ways to reduce
conflict and live sustainably. However, being able to know
what an individual person is likely to do in a particular
situation is obviously a very, very great power. Bad
applications of this power include the deliberate addiction of
customers to a product or service, skewing vote outcomes through
disenfranchising some classes of voters by convincing them their
votes don't matter, and even just old-fashioned stalking.
It's pretty easy to guess when someone will be somewhere these
days.
As science – and commerce, and government – learns more and more,
our models of human behaviour get better and better. As our models improve, we need less and less
data about any particular individual to predict what they are
going to do. So just practising good data
hygiene is not enough, even if that were a skill we could teach
everyone. My professional opinion is that there is no going
back on this, but that isn't to say society is doomed. What we do
matters.
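Here is a toy sketch of that point (the numbers and the simple shrinkage model are invented purely for illustration, not drawn from any real study): the better the population-level model, the less data about any particular individual is needed to predict them accurately.

```python
# Toy illustration only: invented numbers, no real data or real model.
import numpy as np

rng = np.random.default_rng(0)

def predict(prior_mean, prior_var, observations, noise_var=1.0):
    """Shrinkage estimate: blend a population-level prior with an
    individual's own (possibly very few) observations."""
    n = len(observations)
    if n == 0:
        return prior_mean
    weight = prior_var / (prior_var + noise_var / n)
    return weight * np.mean(observations) + (1 - weight) * prior_mean

# Simulate many individuals whose true tendencies cluster around 2.0.
errors_weak, errors_strong = [], []
for _ in range(10_000):
    truth = rng.normal(2.0, 0.3)           # this person's actual tendency
    obs = rng.normal(truth, 1.0, size=1)   # one noisy data point about them
    # Weak population model (vague prior) vs. a strong one learned from
    # many other people's data (tight prior centred near the truth).
    errors_weak.append(abs(predict(0.0, 100.0, obs) - truth))
    errors_strong.append(abs(predict(2.0, 0.3 ** 2, obs) - truth))

print(np.mean(errors_weak), np.mean(errors_strong))
# Given the same single observation per person, the stronger population
# model predicts each individual markedly better.
```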
Think of it this way. We all know that the police, the
military, even most of our neighbours could get into our house if
they wanted to. But we don't expect them to do that.
And, generally speaking, if anyone does get into our house, we are
able to prosecute them legally, and to claim any damages back from
insurance. I think our personal data should be like our
houses. First of all, we shouldn't ever be seen as selling
our own data, just leasing it for a particular purpose. This
is the model software companies already use for their products; we
should just apply the same legal reasoning to us humans.
Then if we have any reason to suspect our data has been used in a
way we didn't approve, we should be able to prosecute. That
is, the applications of our data should be subject to regulations
that protect ordinary citizens from the intrusions of governments,
corporations and even friends.
You might also be interested in the
courses I teach, which include AI ethics materials:
CM50230 and CM30229: Intelligent Control and Cognitive
Systems. This course, which I developed (first taught in
2011), looks at building cognitive systems, with coursework and
laboratories on Robots, Biological Simulations, and VR Avatars /
Game AI. It also emphasises research reading and writing,
particularly in the postgraduate version of the course.
Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter
Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas
Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C.
Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh,
Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle,
Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman
Yampolskiy, Dario Amodei, The Malicious Use of
Artificial Intelligence: Forecasting, Prevention, and
Mitigation, a technical report apparently published by
all seven of the Future of Humanity Institute, University of
Oxford, Centre for the Study of Existential Risk, University of
Cambridge, Center for a New American Security, Electronic
Frontier Foundation, and OpenAI, from February 2018.
Joanna J. Bryson and Alan F. T. Winfield, Standardizing
Ethical Design for Artificial Intelligence and Autonomous
Systems, IEEE Computer 50(5):116-119. What do you do
when technology like AI changes faster than the law can keep
up? One option is to have the law enforce standards maintained by
professional organisations, which are hopefully more agile and
informed. Invited commentary. Open access version, authors' final copy.
Building
Persons is a Choice, an invited commentary on an article by
Anne Foerst called Robots and Theology, in Erwägen Wissen Ethik 20(2):195-197
(2009). It isn't that AI couldn't conceivably deserve ethical
obligation; rather, it would be unethical for us to allow it to.
Newspaper interview: “Kabelbündel
fürs Grobe [A bundle of cables for the rough work],” full-page interview with art and photo,
Taz, p. 13, 26 July 2017.
Individual interview for the blog of Austrian National Radio
(ORF.at), “Maschinen
haben längst Bewusstsein [Machines are already
conscious]”, 4 September 2017.
“The real project of AI ethics” keynote, O’Reilly’s Strata
Data Conference, New York, NY, 28 September 2017. The 15-minute
video was called ‘Fascinating’
by Garry Kasparov and Tim O’Reilly.
I'm on the Guardian Tech Podcast from 8 October, Robots
are coming for your job ... and that's not all,
discussing Martin Ford's "The Rise of the Robots" with
Deloitte's chief economist, Ian Stewart, and the Guardian's
Olly Mann and Alex Hern.
I also appeared at the Global Women's Forum in France in a
plenary panel On the
cusp: The promise of breakthrough brain research (that's
a YouTube video) with Maria Livanos Cattaui as the moderator
and Stephanie Lacour as the other panel member.
I was on Newsnight again, this time with Chris Bishop!
But again they aren't keeping their videos online (I do have a
500MB recording, though I'm not sure where or whether to post
it).
January 2015: I debated James Barratt again for
the Channel 4 News feature Will
super-intelligent machines kill us all?. Unfortunately,
that page has a lot of Bostrom / Musk on it, but also our video.
We were supposed to be talking about middle-class income, but
Barratt made it about battlefield robots. Of course, I know a
lot about those too...
December 2014: My university loved the
Newsnight thing, so they talked me into taking
on Stephen Hawking and his warnings concerning AI as a
threat. The BBC also called again, so you can listen
to me debate AI risk on their World Service.
October 2014: Appeared on BBC Newsnight
discussing Google DeepMind and the Google ethics board. Transcript and pictures.
April 2011: The
EPSRC released their Principles
of Robotics. I'm one of the authors and contributed
a lot to the text there. The principles were written at a
special meeting the EPSRC held on robot ethics in September
2010.
January 2011: I've accepted an invitation to join Lifeboat,
though I don't know much about them. Apparently I am helping
safeguard humanity from robots and AI. If you send me
email with any comments, pro or con, about Lifeboat, I'd
appreciate it!
August 2008: I'm one of the experts interviewed for
the Heart Robot
project.
Thanks for linking to my 1998 paper (Just Another
Artifact: Ethics and the Empirical Experience of AI), but I
think your argument is a gross oversimplification of my and
Phil Kime's point. Of course autonomous robot weapons
can kill you, and are killing people now. But it isn't
because some AI has turned evil. AI is no more to blame
than other artifacts of our culture, like our foreign
policy. Rather than worrying about AI specifically,
people should be worrying about government, culture and
decision making in general. The threats (and promises)
of AI are real, but not as unique as people think. I
believe the "singularity" & "ethical robots" (e.g. Arkin)
debates are a distraction from the real problem of designing
and choosing appropriate governing techniques and assigning
appropriate responsibility and blame for societal-level
decisions that affect us all.
July 2000: A snippet of private email & some
off-the-record comments on robots taking over the world were reported
by The Register. I didn't correct the record until the
same text mysteriously turned up in The Guardian four
years later (and therefore on the first page of Google searches
for me). Blay & I apparently got the first ever
"correction"/apology from Bad Science, but they still
got my title & institution wrong.
On a less related note than a lot of people think, I also write
about consciousness, both machine &
not. This work came about partly because so many people
associate consciousness and ethics, but do they know why?
Now
for the tricky bit... (originally "Consciousness Is Easy,
but Learning Is Hard"), invited article for The
Philosopher's Magazine 28(4):70-72. Explains that
everything with RAM has functional self-awareness and video cameras
have perfect memory; what makes us intelligent (and is
computationally difficult) is generalising from experience,
which involves forgetting and unconsciousness. (To be
honest, I think our obsession with consciousness comes from our
lack of conscious access to so much of our own minds, but I
haven't written about that. Yet.)
Similarly to AI consciousness,
here are some of my papers on emotions, which I
also don't believe determine ethical obligation, but are clearly
involved in humans' ethical intuitions:
Swen E. Gaudl and Joanna J. Bryson, The
extended ramp model: A biomimetic model of behaviour
arbitration for lightweight cognitive architectures, in
Cognitive Systems Research, 50:1-9 (this journal seems
to count issues as volumes). Like the title says, an attempt to
simplify and improve on the systems for representing emotions
and drives I wrote with Emmanuel Tanguy and Philipp Rohlfshagen
(the Dynamic Emotion Representation (DER) and Flexible Latching
respectively, see below).
Creating Friendly AI
from the singularity
folks. One day in 1995, my friends from the MIT AI Lab and
I went over to the Media Lab to see a talk about the "coming
singularity" (when AI becomes smarter than people) by Vernor
Vinge. That talk was one of the reasons I wanted to write
"Just Another Artifact". We left the talk before it was
over because it generally seemed silly and was getting
repetitive and we needed to get back to work. But on the
way back, while I was listening to some of my fairly brilliant
friends (e.g. Charles Isbell and Carl de Marcken) belittle the
chances of their AI ever being able to take over a toaster, it
did remind me of the scientists at Los Alamos betting
facetiously on the effect size of the first atomic bomb.
Here's what Vinge
was thinking in 2008. A bit more positive than a
decade earlier, but otherwise similar. I do think it's
good to have people who really think about the long term.
Some of the arguments in the AI Companions piece were
inspired by White Dot.