My university also maintains a page of my publications with Open Access versions. It is sometimes wildly inaccurate, but does contain green Open Access versions of papers I haven't bothered to list below, either because they aren't very academic, they have errors not in the final version, or I just haven't gotten around to it. If you want the final PDF of any publication and can't find it by googling the title, email me.
My very early papers are in postscript, more recent ones are in pdf,
a few are in both. If you have trouble viewing any papers,
try these helpful
instructions, or email me.
Joanna J. Bryson, The
Artificial Intelligence of the Ethics of Artificial
Intelligence: An Introductory Overview for Law and Regulation.
Solicited and reviewed for M. Dubber, F. Pasquale, & S. Das
(Eds.), The Oxford Handbook of Ethics of Artificial Intelligence,
Oxford University Press. Author's final version, from late July
Holly Wilson, Paul Rauwolf, and Joanna J. Bryson, Evolutionary Psychology and Artificial Intelligence: The Impact of Artificial Intelligence on Human Behaviour. In The SAGE Handbook of Evolutionary Psychology, Shackelford, T. (ed.). Somewhat speculative, but reviewing a lot of science, including the modelling results out of our own group. Author's final version, from August 2019.
DRAFTS – I mostly stopped posting drafts of my work years
ago, since I stopped being a graduate student. I seldom get much
feedback on them, until I get peer review that then points out
embarrassing errors. Nevertheless...
Andreas Theodorou, Bryn Bandt-Law and Joanna J. Bryson, The
Sustainability Game: AI Technology as an Intervention for Public
Understanding of Cooperative Investment, to appear at the IEEE Conference on Games (CoG)
August 2019. This is the first paper unifying our transparency
work, our games work, and our work on cultural
variation in human cooperation and anti-social punishment.
Hopefully there will be a bunch more in 2020 and 2021. Camera
ready from July 2019 or so. Software on github, linked from the AmonI software page.
Alexandros Rotsidis, Andreas Theodorou, Joanna J Bryson, and
Robert H. Wortham, Improving
Robot Transparency: An Investigation With Mobile Augmented
Reality. To be presented at RO-MAN 2019. Authors' final
copy. Alex (with Andreas & Rob) got ABOD3 working on mobile
phones so you can point your phone at a POSH / BOD robot and see
what it's trying to do. Software is probably available on github.
The Past Decade and Future of AI's Impact on Society, a solicited
and reviewed chapter in Towards
a New Enlightenment? A Transcendent Decade, published by
BBVA OpenMind. This is a major policy document with my
perspectives on how AI has been and should be incorporated into
human society. It was originally written as a solicited white
paper for the OECD on AI policy (May 2017), which I then revised
to BBVA's title (I'm almost the only person in the book who didn't
realise you could change the title). Final version submitted
Joanna J. Bryson and Andreas Theodorou, How Society Can Maintain Human-Centric Artificial Intelligence. Solicited and reviewed chapter (title given) in the collection Human-Centered Digitalization and Services, Marja Toivonen-Noro and Eveliina Saari (eds.), Springer. Includes justification, motivation, strategies for systems engineering of AI, and strategies for regulating it. tl;dr see the bullet-point version in my blogpost, A smart bureaucrat's guide to AI regulation.
Robert H. Wortham, Swen E. Gaudl and Joanna J. Bryson, Instinct:
A biologically inspired reactive planner for intelligent
embedded systems in Cognitive
Systems Research 57 (October):207-215. Another nice
piece of work on lightweight intelligent control and systems
engineering of AI. Software is
available online, like most software
from my group.
Robot, All Too Human, in ACM XRDS 25(3):56-59. Sorry Nietzsche fans, I only borrowed a little of his structure. Originally written as a blogpost.
Joanna J. Bryson, No One Should Trust AI, an invited, reviewed, and edited blogpost by the United Nations University Centre for Policy Research, for their Artificial Intelligence and Global Governance blog series. No one should trust AI because we ought to build it for accountability. Then we would have certain knowledge of who's at fault, and trust isn't needed, see the scientific trust paper just below with Paul Rauwolf as first author. Published November 2018.
Paul Rauwolf & I, Expectations of Fairness and Trust Co-Evolve in Environments of Partial Information, in Dynamic Games and Applications 8(4):891-917. Highly relevant to information technology policy: The more you know, the less you need to trust, though if you know nothing or don't have a choice of who you work with, people have no reason to be trustworthy. Trust comes with PARTIAL information, AND at least some freedom. Open access because Bath+Springer.
Joanna J. Bryson, Patiency
Is Not a Virtue: The Design of Intelligent Systems and Systems
of Ethics, in Ethics and
Information Technology 20(1):15-26. Both AI
and Ethics are artefacts, so there is no necessary position for AI
artefacts in society, rather we need to decide what we should
build and how we should treat what we build. So why build
something to compete for the rights we already struggle to offer 8
billion people? Gold open access paid for by Bath out of our
library budget. There are also older versions of this paper, which
was a discussion paper for a long time, but this is the archival version.
Swen E. Gaudl and Joanna J. Bryson, The
extended ramp model: A biomimetic model of behaviour arbitration
for lightweight cognitive architectures, in Cognitive
Systems Research, 50:1-9 (this journal seems to count
issues as volumes). Like the title says, an attempt to simplify
and improve on the systems for representing emotions and drives I
wrote with Emmanuel Tanguy and Phil Rolphshagen (the Dynamic
Emotion Representation (DER) and Flexible Latching, respectively).
Rob Wortham and Joanna J. Bryson, Communication (open access version). In Living Machines: A Handbook of Research in Biomimetic and Biohybrid Systems, Prescott and Verschure, eds, Oxford University Press. A summary of everything biology and biological anthropology have to say on the subject, for the benefit of roboticists in particular. Open access is as of late November 2014; a lightly updated version is now available in the Handbook.

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, a technical report apparently published by all seven of the Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, and OpenAI. Apparently exactly one other author did even less than I did on this report; aside from turning up to the meeting I think my main contribution was insisting "use of" was in the title. Final (not peer reviewed except by the authors), February 2018.
Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant, Of, For, and By the People: The Legal Lacuna of Synthetic Persons. Artificial Intelligence and Law 25(3):273–291 [Sep 2017]. Two professors of law and I argue that it would be a terrible, terrible idea to make something strictly AI (in contrast to an organisation also containing humans) a legal person. In fact, the only good thing about this is that it gives us a chance to think about where legal personhood has already been overextended (we give examples). "Gold" open access, not because I think it's right to make universities or academics pay to do their work, but because Bath has some deal with Springer / has already been coerced into paying. Notice you can read below all my papers going back to 1993 (when I started academia); I don't think "green" open access is part of the war on science.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, Semantics
derived automatically from language corpora contain human biases.
Science 356 (6334):183-186 [14 Apr 2017]. Be
sure to also look at the
supplement, which gives the stimuli and shows similar
results for a different corpus and word-embedding
model. Meaning really is no more or less than how a
word is used, so AI absorbs true meaning, including
prejudice. We demonstrate this empirically. This is an
extension of my
research programme into semantics originally deriving from my
interest in the origins of human cognition, but now with
help from the awesome researchers at Princeton I've merged this
with my AI
ethics work, and also managed to pitch for cognitive systems approaches to AI.
Open access version: authors' final copy
of both the main article and the supplement.
Joanna J. Bryson and Alan F. T. Winfield, Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems, IEEE Computer 50(5):116-119. What do you do when technology like AI changes faster than the law can keep up? One thing is have law enforce standards maintained by professional organisations, which are hopefully more agile and informed. Invited commentary. Open access version, authors' final copy.
Robert H Wortham, Andreas Theodorou, Joanna J Bryson, Improving Robot Transparency: Real-Time Visualisation of Robot AI Substantially Improves Understanding in Naive Observers. In The 26th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2017. What's visualised is the system's priorities (the upper part of a POSH plan hierarchy), and which priorities are active in real time. Extends the IJCAI ethics workshop results to direct interaction with robots, and for an archival conference.
Elizabeth M. Gallagher and Joanna J. Bryson, Agent-Based
Modelling, in the "continuously updating" Encyclopedia
of Animal Cognition and Behavior, Springer, 2017. Open
access version, authors' final copy.
In honour of the EPSRC Principles of Robotics' fifth anniversary
in 2016, Tony Prescott and Michael Szollosy ran an AISB symposium
which was followed up by a special
issue of the journal Connection Science the following year.
Joanna J. Bryson, The Vital Need For The Unconscious City, in Anthologies, Conscious Cities: Bridging Neuroscience, Architecture, and Technology, Anne Fritz and Itai Palti, eds. A short popular science piece on the danger of everyone knowing so much that no one is any longer unique.
Draft: Miles Brundage and I, Smart Policies for Artificial Intelligence. Policy and regulation don't need to kill innovation; in fact there is a great deal of policy set up to help industry innovate and flourish. We review the de facto set of policies AI already operates under, and recommend a more explicit and coherent set. Comments very much welcome.
Dominic Mitchell, Joanna J. Bryson, Paul Rauwolf, and Gordon Ingram, On the reliability of unreliable information: Gossip as cultural memory, in Interaction Studies, vol. 17:1 pp. 1–25. There's a tradeoff between how fast gossip spreads vs problems with its potential for corruption: it can be a lot more useful than direct experience if it spreads faster than that experience and there isn't too much false information. Actually, in the real world gossip may give you more information than your perception, but that's not one of the things we deal with here. This work was actually done prior to (and informed) our 2015 article Value homophily benefits cooperation but motivates employing incorrect social information, but took longer to get out for a few reasons. Open access draft.
Bidan Huang, Miao Li, Ravin Luis De Souza, me, and Aude Billard, A modular approach to learning manipulation strategies from human demonstration, Autonomous Robots 40(5):903-927. Precise manipulation (e.g. opening bottle caps) on real robots using social learning via contact sensors. Published online October 2015, in print June 2016. Open access available from authors, or Bath one day I hope.
I have two short book sections in the collection Memory in the Twenty-First Century, edited by Sebastian Groes, published by Palgrave Macmillan. I made unacknowledged contributions to those part introductions too.
Me & Paul Rauwolf, Trust, Communication, and
Inequality. From The
38th Annual Meeting of the Cognitive Science Society, 10-13
August in Philadelphia, PA. Reviews Paul's awesome work on
understanding why it's adaptive to both have blind faith in others
in your community, yet refuse to cooperate with those making
unfair but mutually-beneficial offers. Unfortunately
there was a bug in the software; there are actually effects
when you add distance in! So don't cite this paper, cite the
journal version from 2017; I mean to update this but haven't yet.
Robert H Wortham, Andreas Theodorou, Joanna J Bryson, What Does the Robot Think? Transparency as a Fundamental Design Requirement for Intelligent Systems. In IJCAI-2016 Ethics for Artificial Intelligence Workshop. Our first experiments on making robot intelligence more transparent.
Robert H Wortham, Swen E Gaudl, Joanna J Bryson, Instinct:
A Biologically Inspired Reactive Planner for Embedded
Environments, in the Proceedings of ICAPS 2016 PlanRob
Workshop, and as a poster at ICAPS itself.
Andreas Theodorou, Rob Wortham and I also each had draft papers at the AISB Workshop on Principles of Robotics, on April 4th 2016, Sheffield UK, which marked the fifth anniversary of the publication of the EPSRC's Principles of Robotics. See 2017 for the final journal versions of these papers; they're better.
Robert H. Wortham and Joanna J. Bryson, A role for action selection in consciousness: an investigation of a second-order Darwinian mind, in the CEUR Workshop Proceedings, published December 2016. The title references my earlier paper, A Role for Consciousness in Action Selection in the International Journal of Machine Consciousness 4(2):471-482, which I'm not sure is well enough known to take the confusion of the joke, but this paper has a fun model of selection for metacognition, mostly by Rob.
Patiency Is Not a Virtue: AI and the Design of Ethical Systems.
Morality and AI are both artefacts of our culture, so of course we
could construct them such that AI would be a moral patient, but
who would that benefit? Neither us nor the machines.
In the proceedings of the AAAI Spring Symposium Ethical
and Moral Considerations in Nonhuman Agents; also presented
at the AAAI Workshop AI,
Ethics, and Society. Update of previous paper from
David Gunkel's & my 2012 AISB symposium. Updated again in 2018
for a journal, please see above for a link to the better version.
Artificial Intelligence and Pro-Social Behaviour, from the October 2015
Springer volume, Collective
Agency and Cooperation in Natural and Artificial Systems:
Explanation, Implementation and Simulation derived from Catrin
Misselhorn's 2013 meeting, Collective
Agency and Cooperation in Natural and Artificial Systems.
This brings together all three threads of my research: action
selection, natural cognition and collective behaviour, and the
mischaracterisation of AI as an active threat. In response
to the apocalyptic futurism typified by Bostrom's
Superintelligence, I frame AI as an ordinary part of human
culture, which for 10,000 years has included physical artefacts
that enhance our cognitive capacities, and is apocalyptic enough
in its own right. Open access: here's the post-review submitted
version from September 2014, or email me for the corrected version.
Yifei Wang, Yinghong Lan, Daniel Weinreich, Nick Priest and I, Recombination Is Surprisingly Constructive for Artificial Gene Regulatory Networks in the Context of Selection for Developmental Stability, in the proceedings of The 13th European Conference on Artificial Life, July 20-24 2015, York, UK. The title says it all, except that I think we may be onto something really significant for machine learning as well as theoretical biology here. Camera ready copy from May 2015.
Paul Rauwolf, Dominic Mitchell, and Joanna J. Bryson, Value homophily benefits cooperation but motivates employing incorrect social information, Journal of Theoretical Biology 367:246–261. Preferring to cooperate with those with a similar cooperation style supports the evolution of cooperation. Reputations spread through gossip support this strategy. But now that you are spreading two kinds of information (reputations of others, and your own style of cooperation) you can have a conundrum when these conflict. When there is such a conflict, signalling honestly about your cooperation strategy can be more beneficial to your community than telling the truth about someone else. Free open access draft is here. Software is coming soon. Draft is from October 2014, yet the work originally derived from On the reliability of unreliable information: Gossip as cultural memory, which came out in 2016. Such is academia.
Swen E. Gaudl, Joseph Carter Osborn, and I, Learning from Play: Facilitating Character Design Through Genetic Programming and Human Mimicry, from Progress in Artificial Intelligence: Proceedings of 17th Portuguese Conference on Artificial Intelligence, EPIA 2015, Coimbra, Portugal, September 8-11, 2015. Also solving AI through social learning, this time game character strategies derived from human game traces. Open access camera ready version.
Daniel J. Taylor
and Joanna J. Bryson, Replicators,
lineages, and interactors, Behavioral and Brain Sciences,
Volume 37, Issue 03, June 2014, pp 276-277, a commentary on Paul
Smaldino's The cultural evolution of emergent group-level traits. Open access version.
Joanna J. Bryson, James Mitchell, Simon T. Powers, and Karolina
Sylwester, Understanding and Addressing Cultural Variation in Costly Antisocial
Punishment, which appears in Applied
Evolutionary Anthropology: Darwinian Approaches to Contemporary
World Issues, Gibson & Lawson (eds.), Springer.
This book follows from a
workshop. Here is a free version of the
chapter, the revised draft from May 2013. See further
our Cultural Variation in Costly Punishment project page. Note:
Google Scholar managed to find a USAF white paper derived from our
final report by the same title which has a lot of irrelevant
detail and a couple theoretical errors we've since
discovered. The book chapter is 15 months more recent.
Yifei Wang, Stephen
G. Matthews and Joanna J. Bryson, Evolving
Evolvability in the Context of Environmental Change: A Gene
Regulatory Network (GRN) Approach, Artificial Life 2014.
Interesting for both biology & machine learning, looks at the
role of a potentially-hierarchical network representation in the
genome for handling semi-predictable environmental change.
Final version is open access, because computer science has
We had three non-archival but interesting 4-page extended
abstracts at Collective
Intelligence (June 10-12 2014, MIT):
Swen Gaudl has been working on improving the emotional / durative
action selection work I started with Emmanuel Tanguy (see
below.) Swen's published two papers on his new Extended Ramp
Goal (ERGo) model:
Jekaterina Novikova, Leon Watts and I, The role of emotions in inter-action selection. Interaction Studies, 15 (2), pp. 216-223. A commentary on Social behaviours in dog-owner interactions can serve as a model for designing social robots in the same issue. Submitted version is open access.
David Gunkel and Joanna J. Bryson (eds.), Introduction to the Special Issue on Machine Morality: The Machine as Moral Agent and Patient, Philosophy & Technology, 27(1):5-8, March 2014.
The Role of Stability in Cultural Evolution: Innovation and
Conformity in Implicit Knowledge Discovery, book chapter in
Perspectives on Culture and Agent-Based Simulations, Virginia and Frank
Dignum, (eds), Springer, Berlin 2014. Some simple
simulations of culture and modularity showing interesting
stability effects, inspired by a talk Dan Sperber gave in 2008. Open access draft
from 2010. Open source netlogo model described in the paper is on the AmonI software page.
Eugene Y. Bann and Joanna J. Bryson, The Conceptualisation of Emotion
Qualia: Semantic Clustering of Emotional Tweets, Proceedings of the Thirteenth
Neural Computation and Psychology Workshop (NCPW) which actually took
place in July 2012, but finally got published in 2014 (Julien
Mayor, ed.). A chapter length
description of our attempt to use social media as a source for a
more accurate portrayal of the space of human emotions.
Derived from Eugene Bann's undergraduate dissertation. A
more recent paper by the same authors on a related topic came out in 2013 (see below).
Karolina Sylwester, Benedikt Herrmann, and Joanna J. Bryson, Homo homini lupus? Explaining antisocial punishment. In Journal of Neuroscience, Psychology, and Economics, 6(3):167-188. A review article; see further our Cultural Variation in Costly Punishment project page. If you don't have access to APA, here is the revised final version submitted to the publisher (May 2013).
Bidan Huang, Joanna J. Bryson and Tetsunari Inamura, Learning Motion Primitives of Object Manipulation Using Mimesis Model, presented at Robotics and Biomimetics (IEEE-ROBIO) in December 2013. Describes a system of biases for allowing machine learning by observation of sequences of behaviour. Final version from November 2013.
Eugene Y. Bann and Joanna J. Bryson, Measuring Cultural Relativity of Emotional Valence and Arousal using Semantic Clustering and Twitter, Proceedings of Cognitive Science. Considers the most common "emotion" keywords on Twitter, and discovers that some concepts e.g. sleepiness and sadness are relatively culturally invariant, but others like "surprise" and "stressed" seem to be used quite differently in different global regions. Also, Europeans are the most positive and excited tweeters. Camera-ready from April 2013.

Swen Gaudl, Simon Davies and Joanna J. Bryson, Behaviour Oriented Design for Real-Time-Strategy Games: An Approach on Iterative Development for StarCraft AI, Foundations of Digital Games (FDG), Chania, Crete 14-17 May 2013. Describes Simon Davies' undergraduate project on building strategic game AI, as extended using Swen Gaudl's new version of ABODE.
Harvey Whitehouse, Ken Kahn, Michael E. Hochberg, and Joanna J. Bryson, The role for simulations in theory construction for the social sciences: Case studies concerning Divergent Modes of Religiosity, Religion, Brain & Behaviour 2(3):182-224 (including commentaries and response). I'm particularly pleased about this paper because it shows clearly how models can advance even well-established social-scientific theories provided that we work directly with domain experts who really understand the theory and data. There is some very pithy, quotable text about this in our response to commentaries, From the imaginary to the real: the back and forth between reality and simulation. Open access pre-proof version of the target article, and of the response to commentaries. Associated software is available from the AmonI software page, and also in the electronic appendix. Oxford Anthropology have made a web page about our simulation of religion work.
Simon T. Powers, Daniel J. Taylor and Joanna J. Bryson, Punishment can promote defection in group-structured populations, The Journal of Theoretical Biology, 311:107-116. Penultimate version on arXiv. This paper shows that punishment alone can't explain altruism; the papers that thought it could didn't take into account the well-documented behaviour of anti-social punishment. Basically, some people punish those that contribute to the public good. This is the first article of at least five we expect to publish explaining this phenomenon, and why it varies by culture. See our Cultural Variation in Costly Punishment project page.
A Role for Consciousness in Action Selection in the International Journal of Machine Consciousness 4(2):471-482. The role is not so much for immediate selection, but for updating models for future selection. In case you don't have access, here's the submitted draft from July 2012.
The Machine Question: AI, Ethics and Moral Responsibility,
David J. Gunkel,
Joanna J. Bryson and Steve
Torrance, eds. A symposium proceedings
published by The Society for the Study of Artificial
Intelligence and Simulation of Behaviour. Papers
focus on the ethics of considering artificially
intelligent artefacts as moral agents (actors responsible
for their behaviour) and / or moral patients (individuals
deserving of moral consideration / ethical
treatment.) Symposium ran 3-5 July 2012; these
papers were delivered a few weeks before that.
Patiency Is Not a Virtue: Suggestions for Co-Constructing an
Ethical Framework Including Intelligent Artefacts.
Appeared in The
Machine Question: AI, Ethics and Moral Responsibility,
(Gunkel, Bryson and Torrance, eds, see above), pp. 73-75, but
there's a nice new version now, see 2016. Argues that both
ethical systems and robots are artefacts of our society, so we
have a good deal of control over whether we choose to make our
agents moral subjects. Doing so would be a displacement of
responsibility that currently rests in us, and that displacement
probably isn't justified or advisable. There are newer, better
versions of this paper in 2018.
me, Yasushi Ando & Hagen Lehmann, Agent-based models as
scientific methodology: A case study analysing the DomWorld
theory of primate social structure and female dominance,
Natural Action Selection (Seth, Prescott & Bryson eds.),
CUP. The discussion is updated from our 2007 PTRS-B article,
though the models are not. Penultimate draft from 2010, related
(including improved) software is available here.
Structuring Intelligence: The Role of Hierarchy, Modularity and Learning in Generating Intelligent Behaviour, from McFarland, D., Stenning, K. and McGonigle-Chalmers, M. (eds.) The Complex Mind, published by Palgrave Macmillan. An invited chapter for a book written in honour of the late Brendan McGonigle. The chapter mostly takes a neuro and psychological approach, but the last section is Eco Evo Devo, with some ideas I've been working on lately on the origins of cognition. This is the draft sent to the publisher in March 2010.
References in BibTeX
Natural Action Selection (Seth, Prescott &
Bryson eds.), book published by Cambridge
University Press. 20%
discount for clicking this link. An expanded and
updated version of our 2007 PTRS-B special issue which was
a condensed version of our 2005 conference
proceedings. I think this is the last version of
this work we'll see since we've now run through all the
primary permutations for the order of the editors.
Just an Artifact: Why Machines are Perceived as Moral Agents,
with Philip P. Kime, in the proceedings of The
Twenty-Second International Joint Conference on Artificial
Intelligence (IJCAI '11). Final camera-ready version from
April 2011. This is a substantial updating & improvement
of one of my (and Phil's) very first papers "Just Another
Artifact" which we gave at a workshop in 1998.
Gideon M. Gluckmann & I, An Agent-Based Model of the Effects of a Primate Social Structure on the Speed of Natural Selection, in Evolutionary Computation and Multi-Agent Systems and Simulation (ECoMASS) at GECCO 2011 in Dublin. The paper is final from April 2011.
Jakub Gemrot, Cyril Brom, Joanna Bryson
& Michal Bida, How to
compare usability of techniques for the specification of
virtual agents behavior? An experimental pilot study with
human subjects, in the AAMAS 2011 Workshop on the uses of Agents for Education,
Games and Simulations. Draft from January
A Role for Consciousness in Action Selection, in Proceedings
of the AISB 2011 Symposium Machine
Consciousness. Post-final version with typos corrected
& a sensible citation style from April 2011. There's now
an improved journal version, see 2012 above.
John Grey and I, Procedural Quests: A Focus for Agent Interaction in Role-Playing-Games, in Proceedings of the AISB 2011 Symposium AI & Games. Final version from March 2011.
References in BibTeX
Philipp Rohlfshagen and Joanna J. Bryson, Flexible Latching: A Biologically-Inspired Mechanism for Improving the Management of Homeostatic Goals in Cognitive Computation 2(3):230-241. Discusses a simple add-on mechanism for dynamic plans to allow sensible ordering of high-level drives, and explains why this problem is different from detailed action selection. Lots of experiments, some maths and some discussion of the literature on cognitive control in natural and artificial intelligence. Associated software comes with the standard python/jython distribution of BOD.
The Need for Cognitive Systems in Medical Care, in CyberTherapy and Rehabilitation Magazine 3(3):35-36, 2010. This is not the journal of the same name by the same group, but rather their trade publication. PDF is the draft copy sent them.
Crude, Cheesy, Second-Rate Consciousness from the proceedings of Brain Inspired Cognitive Systems (BICS) 2010. This is an update of the AISB update of the Vienna Consciousness paper. The next step should be a journal article. The title is a reference to a Dennett quote well worth knowing. The paper claims we already have conscious robots and it's not that big of a deal. It also puts forward some cool ideas about the functional role of the action-selection-related process that we experience as consciousness.
Cultural Ratcheting Results Primarily from Semantic Compression. From The Proceedings of Evolution of Language 2010, Smith, Schouwstra, de Boer & Smith (eds.) pp. 50-57. Discriminates the size of a culture (how much information can be transmitted from one generation to the next) from its extent (how much useful behaviour can be generated) and argues that the vast majority of cultural ratcheting is because the size of human culture finally got large enough that cultural evolution could start increasing its extent.
Why Robot Nannies Probably Won't Do Much Psychological Damage. A commentary on an article by Sharkey and Sharkey, The Crying Shame of Robot Childcare Companions, in Interaction Studies 11(2):196-200.
Robots Should Be Slaves. In Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, published by John Benjamins in March 2010, edited by Yorick Wilks. The chapter says companion is the wrong metaphor for robots, which leads to the misallocation of both resources and responsibility to the detriment of our society. Draft from 21 May 2009.
Simplifying the Design of Human-Like Behaviour: Emotions as Durative
Dynamic State for Action Selection, with Emmanuel A. R.
Tanguy, in the International
Journal of Synthetic Emotions, 1(1):30-50, January
2010. Penultimate draft, from May 2009. Related
References in BibTeX
Building Persons is a Choice an invited commentary on an article by Anne Foerst called Robots and Theology, in Erwägen Wissen Ethik 20(2):195-197 (2009). It isn't that AI couldn't conceivably deserve ethical obligation, rather it would be unethical for us to allow it to. See further my page on ethics and AI.
Age-Related Inhibition and Learning Effects: Evidence from Transitive Performance, in Proceedings of the 31st Annual Meeting of the Cognitive Science Society (CogSci 2009) pp. 3040-3045. The paper is a scientific consequence of the ideas put forward in "Crude, Cheesy, Second-Rate Consciousness" (see below), and the work I am doing on understanding the evolution of cognition. It concerns the tradeoffs between individual and genetic learning, and whether these may be shifted on the basis of individual experience over an agent's life history. Evidence is derived from models of macaque task learning. Camera ready from April 2009. Associated software comes with the standard lisp distribution of BOD.
Crude, Cheesy, Second-Rate Consciousness. Presented at the 2nd AISB Symposium on Computing and Philosophy in April. Final draft from March 2009. An earlier, shorter version discussing Dennett more directly is listed under 2008. The scientific ideas (without mention of robots or consciousness) are supported in my CogSci09 paper, above.
Representations Underlying Social Learning and Cultural Evolution, in Interaction Studies 10(1), 2009. This is the penultimate draft from December 2008.
Embodiment vs. Memetics, in Mind & Society 7(1):77-94, May. Discusses the importance of the discovery that human-like semantics can be learned simply from observing large corpora, with ramifications for the evolution of language. The final version is from November 2007; here is a penultimate draft from August for those who do not subscribe, although it has a couple of gaffes in it.
Impact of Durative State on Action Selection, appeared in Emotion, Personality, and Social Behavior at the AAAI 2008 Spring Symposia at Stanford in March. This is a somewhat pedantic overview of the improvements we've made to BOD, POSH, and of course AI action selection in general in the last three years, with an eye to pleasing the EPSRC, since my grant with the same title just ran out. Final version from January 2008.
DRAFT: Hagen Lehman and I, Modelling Primate Social Order: Ultimate and Proximate Explanations. A peek at the new model we're building of the egalitarian / despotic variation in primate social order. (Lest you think we only criticise other people's!) DRAFT from April 2007.
Agent-based models as scientific methodology: A case study analysing primate social behaviour, with ANDO Yasushi and Hagen LEHMANN. In Philosophical Transactions of the Royal Society B (Biology) 362(1485):1685-1698. This paper talks about how ABM fits in as a part of scientific methodology, and in particular analyzes as a case study the macaque social structure simulation in Hemelrijk's DomWorld. The DomWorld link includes the associated software. An earlier version of this paper with the predictions but not the full analysis appears under 2005 below (Lehmann et al.). Green open access draft.
Two special issues I edited last year have come out this year.
Hagen Lehmann and I, Tolerance and Sexual Attraction in Despotic Societies: A Replication and Analysis of Hemelrijk (2002). In the proceedings of Modelling Natural Action Selection. Final, 18 June 2005. Extended in 2006 (printed in 2007), see above.
Edited Conference Proceedings: Joanna J. Bryson, Tony J. Prescott and Anil K. Seth, Modelling Natural Action Selection: Proceedings of an International Workshop. Published by AISB, Sussex UK. For more information, see the MNAS Home Page. (July 2005)
Samuel J. Partington and I, The Behavior Oriented Design of an Unreal Tournament Character. A case study of using BOD, presents a very complicated POSH plan. In the proceedings of Intelligent Virtual Agents 2005. Final version is copyright Springer. Updated 23 June 2005.
Paula M. Ellis and I, The Significance of Textures for Affective Interfaces. Shows that what picture is on a VR face has very significant impact on how the emotions are perceived. In the proceedings of Intelligent Virtual Agents 2005. Final version is copyright Springer. Updated 23 June 2005.
Learning Discretely: Behaviour and Organisation in Social Learning, with Mark Wood. Talks about memetics, task learning, neuroscience and VR games --- what more could you ask? Proceedings of the Third International Symposium on Imitation in Animals and Artifacts. Updated 16 January 2005.
Ivana Cace and I, Why
Information Can Be Free. Shows that the tendency to
communicate information can be adaptive even though it has
immediate costs to the communicators and there are free riders /
information hoarders around the place. Proceedings of the Second
Symposium on the Emergence and Evolution of Linguistic
Communication (EELC'05). Updated 16 January
2005. Extended in 2006, see above.
Evidence of Modularity from Primate Errors during Task Learning, in the proceedings of the Ninth Neural Computation and Psychology Workshop (NCPW '04). Relates my transitive inference work to the localist vs. modularist neural representation debate that used to be a big deal at NCPW. Final: 29 Dec 2004.
and Specialized Learning: Reexamining Behavior-Based Artificial Intelligence, in the Proceedings of The Third International Conference on Development and Learning (ICDL'04): Developing Social Brains. A slightly less informative version was presented at Adaptive Behavior in Anticipatory Learning Systems (ABiALS'02), but missed being in the proceedings due to an error on my part. Final version: 20 September 2004.
Now for the tricky bit... (originally "Consciousness Is Easy, but Learning Is Hard"), invited article for The Philosopher's Magazine 28(4):70-72. Explains that everything with RAM has functional self-awareness and video cameras have perfect memory; what makes us intelligent (and is computationally difficult) is generalising from experience, which involves forgetting / unconsciousness. PDF wanted... but you can get the first third of the article from the link.
The Role of Emotions in Modular Intelligent Control, with Emmanuel Tanguy and Phil Willis. Cover article (well, blurb!) for the 2004 Summer AISB Quarterly. Final, 17 May 2004.
Emmanuel Tanguy, Phil Willis and I, A Layered Dynamic Emotion Representation for the Creation of Complex Facial Expressions. In the proceedings of Intelligent Virtual Agents. Final version from July 2003.
The Behavior-Oriented Design of Modular Agent Intelligence (pdf). A practical guide to Behavior-Oriented Design (BOD). In the Proceedings of Agent Technology and Software Engineering (AgeS 02), edited by Jörg P. Müller. The final version is © Springer. Updated 27 November 2002
David Martin, Sheila I. McIlraith, Lynn Andrea Stein and I, Agent-Based Composite Services in DAML-S: The Behavior-Oriented Design of an Intelligent Semantic Web, in Web Intelligence, Springer 2003. Ning Zhong, Jiming Liu, and Yiyu Yao, eds. This is a longer, more detailed version of our IEEE Computer (2002) paper. Here's a nice review of the book from IEEE Distributed Systems.
Where Should Complexity Go? Cooperation in Complex Agents with Minimal Communication (in pdf). Discusses when to use communication between agents in a multi-agent system vs. when to use behavior arbitration between modules in a modular single agent. Shows code from the primate colony simulation I'm working on with Jessica Flack. Final version is © Springer-Verlag. In the proceedings of the First GSFC/JPL Workshop on Radical Agent Concepts (WRAC). Updated 3 July 2002.
Language Isn't Quite That Special (HTML). Commentary on The cognitive functions of language by Peter Carruthers, both in Behavioral & Brain Sciences (BBS). Updated 10 Dec, 2002. BBS called the issue this got printed in `Dec 2002' but it must have come out in 2003! Well, I try to keep this page in sync with the actual publication dates, maybe that's crazy...
Action Selection for an Artificial Life Model of Social Behavior in Non-Human Primates with Jessica Flack. Three-page abstract presented at Self-Organization and Evolution of Social Behaviour. Talks about exciting new research I hope to be spending more time on one day. Updated 6 June, 2002 (older, 8 page version from 30 March, 2001).
David Martin, Sheila I. McIlraith, Lynn Andrea Stein and I wrote two versions of the same paper. The shorter and somewhat cleaner one we called Semantic Web Services as Behavior-Oriented Agents. IEEE Computer rewrote this a bit into something called Toward Behavioral Intelligence in the Semantic Web for a special issue. More technical details are in a Springer chapter which can be found above under 2003. Both the book and the IEEE Computer special issue are about Web Intelligence, and edited by Ning Zhong, Jiming Liu, and Yiyu Yao. The papers recommend that the semantic web be viewed as containing intelligence, not just information. They also provide recommendations for altering the DAML-S spec. to better support this. Read about it in Russian. Updated 25 July, 12 October & 7th of July 2002, respectively.
Representing Cognitive Phenomena in Biological Systems. An invited 3-page rant (plus 1 page of references) about modularity and `cognition'. May be coming out in a book edited by Alex Meystel. Updated 22 May 2002.
What Monkeys See and Don't Do: Agent Models of Safe Learning in Primates (in pdf), with Marc D. Hauser. A position paper, describes the importance of constraints in learning in artificial and natural agents. In the proceedings of the AAAI Spring Symposium on Safe Learning Agents. Final revision, 21 January 2002.
Intelligent Control Requires More Structure than the Theory of Event Coding Provides (HTML). Commentary on The Theory of Event Coding: A Framework for Perception and Action Planning by Bernhard Hommel, Jochen Müsseler, Gisa Aschersleben and Wolfgang Prinz, both appeared in Behavioral & Brain Sciences (BBS). Updated Nov 13, 2001.
Embodiment vs. Memetics: Does Language Need a Physical Plant? from the Proceedings of the Workshop on Developmental Embodied Cognition (DECO 01). I describe my model of how language connects to modular embodied intelligence in nature, and what this implies for AI. Just a position/review paper, no novel results, but good fun. Updated October 2001. Er... my revisions were made far too late to make the proceedings, but the original isn't very clear! There is now an even more revised version, see 2007.
PhD Dissertation: Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science. Intelligence by Design in pdf (postscript version). Warning: that version is 344 pages long, due to 140 pages of lisp code. I have broken the dissertation into its main text, code appendices and bibliography (all in postscript), in the likely event you just want to read the text. You can also email me to ask for a copy of the printed Tech Report, which is paperback-like and doesn't have the code. The files above are from the TR, which is clearer than the submitted dissertation (pdf).
I also have had the Intelligence by Design Thesis Defense materials online since just after that 30 April 2001 defense.
Modularity and Design in Reactive Intelligence in pdf (postscript), with Lynn Andrea Stein. From IJCAI 2001. A 6 page summary of my dissertation, including a description of BOD, differences between reactive agent architectures, and an overly brief example of using BOD. (Final from April 10, 2001.)
HOW-TO: How to Make a Monkey Do Something Smart (13 pages of pdf) A brief document about Behavior Oriented Design (BOD). (Slightly modified April 18, 2001.)
Dragons, Bats & Evil Knights: A Three-Layer Design Approach to Character Based Creative Play (draft version from 18 Dec. 2000). With Kris Thórisson. Final version appeared in Virtual Reality, in a special issue on Intelligent Virtual Agents edited by Daniel Ballin. Article concerns the design of constructive narratives. Describes SoL (a hybrid architecture composed of Edmund and Ymir), a slightly modified form of BOD to support SoL, and our experiences developing AI for constructive narratives at LEGO.
Modularity and Specialized Learning: Mapping Between Agent Architectures and Brain Organization in pdf (postscript version), with Lynn Andrea Stein. From the proceedings of EmerNet2000 (Emerging computational neural network architectures based on neuroscience). Final version is © Springer-Verlag. Discusses the relationship between agent architectures and neuroscience, and proposes a model for an agent capable of developing its own behavior / skill modules as well as learning new patterns of behavior. Target audience is neuroscientists and computer scientists interested in expanding neural networks to exploit modularity and specialized learning. Updated 3 December, 2000.
Modularity and Specialized Learning in the Organization of Behavior in pdf (or postscript). Written with Lynn Andrea Stein. Presented at NCPW6 in September, 2000, in the proceedings (the final version is © Springer-Verlag). Summary: This chapter is similar to the EMERNET one just above, but shorter (10 vs 15 pages) and focuses on my BOD systems rather than agent architectures in general. Targeted for psychologists and cognitive scientists who use neural networks to model human behavior. Updated 30 October, 2000.
Hypothesis Testing for Complex Agents in pdf (or postscript), with Will Lowe and Lynn Andrea Stein. In the proceedings of the NIST Workshop on Performance Metrics for Intelligent Systems held in August, 2000 (original website). Talks in detail about experimental method and the development of complex (even socially competent) agents. Updated July 2000.
Architectures and Idioms: Making Progress in Agent Design in pdf (or postscript). Written with Lynn Andrea Stein. Presented at ATAL 2000, now a book chapter, the final version is © Springer-Verlag. Summary: discusses the importance of methodology and the utility of alternative architectures - among other contributions, it distinguishes between these. Also gives a good one-page summary of what reactive planning really is. We suggest that the most useful thing to do with a new architecture is to identify its contributions and then express them in terms of one or more main-stream architectures. An extended example is made of an idiom we call a Basic Reactive Plan, taken from my architecture, Edmund, among other places. Updated 29 October, 2000.
Also at ATAL, Agent Development Tools with Keith Decker, Scott DeLoach, Michael Huhns and Michael Wooldridge. This is a synopsis of a panel discussion, with a fine two-page rant from me I still stand by. There's a free version on Scott DeLoach's publications page.
Hierarchy and Sequence vs. Full Parallelism in Reactive Action Selection Architectures (that's postscript), in The Sixth International Conference on the Simulation of Adaptive Behavior (SAB2000) (Note: there's a pdf version with an extra blank page.) Summary: demonstrates that hierarchy does not necessarily lead to a reduction of performance, even in highly dynamic environments. An illustration (with statistical evaluation) of the importance of clean design approaches to creating good AI systems. A shorter, less clear version of this paper appeared in Intelligent Virtual Agents 2, 1999. Final version; published in August 2000.
I presented two papers at two different symposia for AISB 2000. In Aaron Sloman's How to Design a Functioning Mind (DAM) I have Making Modularity Work: Combining Memory Systems and Intelligent Processes in a Dialog Agent (postscript version). First published description of BOD, uses dialog systems for the examples. (Final from March 2000.)
A Proposal for the Humanoid Agent-builders League (HAL) (or postscript) also appeared in the proceedings of John Barnden's symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights. See further my AI and Society web page. (Final from March 2000.)
Cross-Paradigm Analysis of Autonomous Agent Architecture in pdf (or compressed postscript) in the Journal of Experimental and Theoretical Artificial Intelligence (JETAI) 12(2). Summary: article about trends in agent architectures and what they imply about optimal strategies for designing intelligence.
MPhil Dissertation: University of Edinburgh, Faculty of Social Sciences (Department of Psychology). The Study of Sequential and Hierarchical Organisation of Behaviour via Artificial Mechanisms of Action Selection. That is the 173-page 11-point 1.5-spaced PDF version, with a little source code my examiners asked for. There's also a 94-page 10-point single-spaced compressed postscript version with no source code. Complete source code is available at the bottom of this page. Summary: gives evidence for the need for structured control from three sources: the history of AI agent architectures, my experiments in two domains (robotics and artificial life), and a review of the neurological / behavioral literature. Also discusses the dialect differences between Psychology and AI, and AI as a research tool for Psychology. (Final corrections, January 2000.)
Creativity by Design: A Character Based Approach to Creating Creative Play, in the AISB Symposium on AI and Creativity in Entertainment in April 1999. Summary: another proto-BOD paper, talks about combining Edmund with another agent architecture, Ymir, in the context of virtual reality characters. More about SoL, Ymir and the project is in the "Dragons, Bats & Evil Knights" paper above; some of the technical details of implementing Edmund's POSH architecture in SoL are in the "Architectures and Idioms" paper, also above.
TALK: Intelligence by Design was presented to the Generator Studios artists group in Dundee Scotland, as part of the It's In Your Head art and neuroscience initiative in August, 1999.
Agent Architecture as Object Oriented Design, presented in Agent Theories, Architectures and Languages 1997, and published in Intelligent Agents IV by Springer in 1998. Summary: a proto-BOD paper, this describes developing behaviors and control scripts in a way similar to developing object hierarchies in OOD. Also mentions the way I have localized learning in the behavior libraries.
Just Another Artifact: Ethics and the Empirical Experience of AI. Coauthored with my friend Phil Kime, presented at the Fifteenth International Congress on Cybernetics, 1998, but only partially appearing in their proceedings. We kept trying to make a journal version of this paper, and finally in 2011 put a version into IJCAI which I think is a lot better. See also my AI and Society page.
Cognition without Representational Redescription, coauthored with Will Lowe. This is a commentary on Dana H. Ballard, Mary M. Hayhoe, Polly K. Pook, and Rajesh P. N. Rao, Deictic Codes for the Embodiment of Cognition; both articles appeared in Behavioral & Brain Sciences (BBS) in late 1997.
DRAFT: (from 1997): Specialized Learning and the Design of Intelligent Agents. Old PhD proposal cum journal article under revision, on the potential equivalence and trade-offs between control state and learning -- it's pretty rough in places but apparently also interesting (according to the reviewers). This has been extended into two chapters of my PhD dissertation, the ones about learning. Maybe this year I'll resubmit it...
The Design of Learning for an Artifact, from the AISB96 workshop on Learning in Robots and Animals. (There's also an older, longer version about Cog.) Learning in animals seems to be highly specialized and constrained as much as possible, primarily to things that cannot be learned on evolutionary time scales. As developers of behavior-based AI, we largely take on the role of evolutionary learning ourselves: our robots or avatars should have only the special-purpose sorts of learning built into their everyday actions.
DRAFT: (from 1996) Modular Adaptivity and Behavior Based Control A short paper which discusses the role of episodic memory in navigation. I got some of this working on my robot, see my MPhil thesis, but some of it still needs to get worked out and written up...
DRAFT: (from 1995) The Use of State in Intelligent Control. This short paper compares Shakey and Genghis, and demonstrates the necessity of using control state even in the simplest reactive system. I'm not sure anyone cares enough for me to ever get this one published! But I still think it could be useful for some people. (The original Genghis didn't actually back up and turn when it bumped into something with a feeler. It just lifted its leg higher. Oops. Oh well, the same arguments still all apply. The behavior I described was on the commercial version of Genghis then available from ISR, now iRobot.)
In Luc Steels (ed.) The Biology and Technology of Intelligent Autonomous Agents, (1994) The Reactive Accompanist: Adaptation and Behavior Decomposition in a Music System Describes my MSc project, is also the first place I suggest adaptive requirements can serve as a key to determining how to decompose intelligence into modularized behaviors (key point in BOD).
The Reactive Accompanist: Applying Subsumption Architecture to Software Design. Edinburgh University Department of AI tech report 606 (1992). This paper is temporarily (January 2003) inaccessible due to the Edinburgh fire. Since `temporary' has lasted for over two years, here's a draft version I still had the LaTeX for. Compares Subsumption Architecture with Object-Oriented Design in the context of my MSc. Draft was last modified around November 1992; it was placed here in March 2005.
MSc Dissertation, University of Edinburgh, Faculty of Science (Department of Artificial Intelligence). The Subsumption Strategy Development of a Music Modelling System. For more information, see my page on the Reactive Accompanist. (September 1992)