My principal scientific passion is understanding cognition,
particularly as it relates to explaining human culture,
but also natural
intelligence more broadly. My main methodology for doing
this is designing
intelligent systems to model and test scientific
theories. We build theories of intelligence into working AI models.
Modelling allows us to learn more than unassisted human
reasoning could about whether a theory is coherent and what its
implications are. Once we understand a theory's implications
and predictions, we can compare these to data empirical scientists
collect from the natural system we are trying to explain.
Increasingly, we are collecting our own data too.
Most (but not all) of my research has focused on the unintentional
and non- or proto-linguistic aspects of human intelligence, and how
intelligence evolves more broadly. The more we understand both
the universals of cognition and computation, and the variation we
see across species (particularly but not exclusively in animals),
the better we will understand the context of specifically human
behaviour, including human
culture. From 2000–2007 I worked primarily on
understanding non-human primate
behaviour. Since 2008 my group has been more
focused on characteristics of human cognition such as consciousness,
artificial intelligence "feelings"
(emotions), language,
religion,
and especially cultural variation in cooperation.
This doesn't mean we've lost interest in comparative cognition; my
group now studies evolutionary and learning dynamics in many
contexts, from public goods games in humans to instruction sharing
in microbes, from evolvability and epistasis in gene regulatory
networks to computer game strategies. We have looked at cognition
and information sharing in species from macaques to tortoises to
ravens to Mongolian asses.
Designing AI models of natural intelligence isn't as easy as it
should be. My research therefore has always included a great
deal of work on systems
AI (intelligence
by design) including my work in action selection and the development
methodology I initially developed at MIT, Behavior
Oriented Design (BOD). We apply this work in
a variety of domains besides science, including cognitive robotics,
computer game characters, and intelligent environments / "smart
homes".
Given that I work on both improving AI and understanding human
society, I feel obliged to also work on Robot and
AI Ethics. This work didn't initially seem
like research but rather just public understanding of science, for
example informing discussions on robot
rights (tl;dr: anthropomorphising AI is the flip side of
dehumanising humans). However, it has become clear that humans
have a lot of trouble understanding AI, partly because AI is built
without sufficient concern for accountability, partly because
humans over-identify with the computational aspects of our
intelligence and thus inject a psychological block to AI
transparency, and perhaps mostly because we just don't understand
ourselves, including our ethics.
Consequently my group now also does empirical research in AI
ethics. Much of our work concerns systems engineering and devops –
making the use of AI more transparent and accountable. We also do
work on HCI/HRI on what makes AI comprehensible to ordinary users,
and we do both experimental and theoretical psychology on
understanding what leads people to over-identify with AI.
Research Group and Students
Research is done by researchers. Many of the people I've
written papers with have been affiliated with Artificial Models of Natural
Intelligence (AmonI). These include my PhD and other dissertation
students. I have also been founder and "research
leader" for Bath's
Artificial Intelligence Group. "Research leader" is very much
a Frederick Brooks type of leadership-as-service role, and
has now been taken over by my awesome colleague Özgür
Şimşek.
If you would like to do research with me, and
- you are a first or second year undergraduate, you
should come
talk to me about possible projects, probably working with
my PhD students on game or robot AI.
- you want to do a final-year undergraduate or masters
dissertation, you should look at my teaching page, which includes
links to previous project suggestions as well as to my dissertations page.
Unfortunately I'm not on campus much right now because I'm
living in Plainsboro, so I'm not allowed to take
dissertation students myself, but again my PhD students may be
able to supervise you.
- you think you want to do a PhD, you should look at my dissertations page, the
Bath
AI project and funding page, and this
excellent advice on choosing a PhD supervisor. After
doing all that, you should email me, even if you aren't yet sure
how to fund yourself. Try to do this at least a year before you
want to start!
- you want a postdoc or lectureship (lectureship: UK
Assistant/Associate Professor, but permanent, no "up or out"
tenure case), you should check the Bath Job Vacancies to
see if you've been lucky and there's something on offer right
now. Postdocs and lectureships are both advertised briefly
in the UK (about 6 weeks) and at utterly random times (when
someone with money has decided it's a good idea) throughout the
year. If you haven't been that lucky, you should email me
and ask whether I can help you write a fellowship or research
proposal to bring you to Bath (postdoc), or if I know about any
upcoming opportunities for Bath hiring (lecturers, readers,
professors; right as I type this, I in fact do).
Other and Older Research
Projects, Funding
I have been involved in promoting European Cognitive Systems
research and education. Some time ago I used to occasionally
get around to maintaining this research-oriented list of
Related Web Sites. Even my really old Code is
online. All of my code from published projects is available
either there or from the Amoni Software
Page.
Research projects and labs prior to
Bath
- The Reactive
Accompanist — an AI music system that was one of the
first applications of Behavior-Based
AI outside of robotics. Edinburgh 1992
- Cog
— a humanoid robot that was intended to use BBAI
to simulate the staged development of human infants. MIT
1993-1995.
- I started really working on the interaction between planning
and learning during a year with Ian Horswill
using his cheap vision machine and LEGO robots. MIT
1995.
- Brendan
McGonigle's Intelligent Systems Laboratory, later the
Laboratory for Cognitive Neuroscience. I worked on hierarchical
action selection structures for robot control in a laboratory
that was also actively exploring hierarchical representations in
children and non-human primates. At that point, I called my
own architecture Edmund.
I also tested it in Tyrrell's
Artificial Life world. Edinburgh 1996-1997.
- I was hired for the LEGO Darwin
project to work with Kris Thórisson on
a humanoid agent architecture for Virtual Reality
characters. It was named SoL (for Spark of Life). LEGO
1998.
- I did a little research on reactive planning for dialog with
the Tutorial
Dialogue
Group of the Edinburgh
Human Communication
Research Centre. Edinburgh, 1999
- After developing BOD at
MIT for my PhD, I worked with my former advisor Lynn Andrea Stein
to apply it to DAML.
Olin College & Stanford, 2001.
- I also did a postdoc examining how modularity could explain
the patterns of skill learning in primates. I did this in Marc Hauser's
Primate Lab, which was rebranded the Cognitive Evolution Lab
and was then at Harvard
Psychology. Of course, now it is even
more rebranded and the web pages seem entirely gone, which
is a shame because good science happened there too. When I
started the postdoc I was hoping to switch permanently to
psychology, but my experience in that lab convinced me I needed
to spend more time in computer science in order to further
develop software tools to the point where they would be useful for
people without programming experience. Being an alumna of
Marc's lab has also helped motivate my interest in reproducible
research, particularly in open sharing of published
models. Harvard 2001-2002.
Bath Projects and Funding Acknowledgement
Please see the AmonI
Web pages for descriptions of projects and links to code.
- 2017-2020: The AXA Research Fund, The
Limits of Transparency for Humanoid Robotics.
- 2012-present: Transparency for real-time AI. Some
day we'll have a web page for this. We have now received funding
for this, thanks AXA!
- 2005-2017: Implicit
bias in large corpus semantics. Actually, we may
still do more of this. Maybe someone will give us money for it
now.
- 2010-2011: US Air Force
Office of Scientific Research. Project:
Understanding Cultural Variation in Anti-Social Punishment.
(two
research officers, one programmer).
- 2008: European Network for
the Advancement of Artificial Cognitive Systems.
Network action outreach grant. Project: Public Understanding
of European Cognitive Systems. (two Outreach
Officers).
- 2008: European Network for
the Advancement of Artificial Cognitive Systems.
Network action outreach grant. Project: Higher
Education
Curriculum
Support
for
European
Cognitive Systems. (two Outreach
Officers).
- 2007-2009, The Konrad
Lorenz Institute for Evolution and Cognition Research.
Project: Factors
Limiting
the Evolution of Cultural Evolution (A three-year
fellowship for me, of which two were taken, with sabbatical
support from Bath).
- 2006: European Network for
the Advancement of Artificial Cognitive Systems.
Network action travel grant for Mr. Cyril Brom to visit Bath
from Charles University, Prague. Project: Action
Selection for Cognitive Systems.
- 2005-2008, The Engineering and Physical Sciences Research
Council (EPSRC), Grant GR/S79299/01
(AIBACS), "The
Impact of Durative Variable State on the Design and Control of
Action Selection". Emmanuel Tanguy, co-author and named
researcher. (One PhD
student, one research assistant).
- 2005-2006 Anonymous industrial collaborator,
"Development of Graphical IDE
for pyPOSH" (contract for programming and six months
support).
- 2005 The Nuffield Foundation Undergraduate Research
Bursary, "Understanding the Adaptive Advantage to Costly
Communication". Avri Bilovich, named researcher.
- 2005 European Network for the Advancement of Artificial
Cognitive Systems, Action
Selection for Intelligent Systems, with Cyril Brom as
named exchange student.
- 2005 Biotechnology and Biological Sciences Research
Council, Conference funding for "Modeling Natural
Action Selection". Workshop co-organized with Tony
Prescott, Anil Seth.
- 2005–2006, British Council
Alliance: Franco-British Partnership Programme,
“Origins of Egalitarianism: Improving our understanding of primate
society through modelling two organizational norms for various
species of Macaque”, with Bernard Thierry, Centre d’Ecologie et
Physiologie Energétiques.
- 2001-2002 US National Science Foundation Grant
EIA-0132707, Primate-Inspired
Specialized Learning in an Agent Architecture: Safe, Robust,
Adaptive Action Selection, with Marc D. Hauser co-author
and PI.
J J Bryson
Last updated February 2019