Last updated: 3 March 2019
Lecture notes for CM30229
& CM50230
Intelligent Control
& Cognitive Systems 2019
Lecture 1: Course Introduction,
Intelligence Broadly, & Sensing
Note: You should have free access to the papers linked below
so long as you are on campus or tunneled to campus (e.g. VPN).
- Slides (pdf)
- Required Reading
- First and very key academic reading (always in the
exam!): Intelligence
without representation
- This is the most cited article that comes up when you type
"artificial intelligence" into Google Scholar. By a
long shot. (There's a book that's more cited though.)
- Suggested Reading
- Other Notes
- I used to talk about why we don't use vision by showing this
highly cited, insanely mathematically dense article from
February 2011: Bundle
adjustment—a
modern synthesis. It's also been cited 3000
times. But honestly, a lot of people (particularly at
Google) are working hard to make it easier to program with
machine learning without doing the maths yourself. They
want it to just be a tool for most of their programmers.
That doesn't get around a core topic of this course: how do
you put together the intelligent system?
- Yes, I know you really want to see Spiders
on Drugs
- You may be interested in a similar course that was taught to
graduate students at MIT in 2016, MAS.S67
Machine Learning, Society & Autonomy, and its syllabus.
Coursework 1: Wall Following
- HTML Version
- PDF Version
- LaTeX Version
- BibTeX for LaTeX Version
- Please note that CW1 is not
marked anonymously, because we need to check the group
structure. Please do include all your names on your
report. Everyone should upload the report, even if you all
upload the same zip (which needs to include the video!)
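If you are unsure where to start on CW1, the classic baseline is a proportional controller on a side-facing distance sensor. Below is a minimal sketch only; the sensor reading and motor-speed interface are hypothetical illustrations, not any real robot's API, and the gains will need tuning for your platform.

```python
# Hypothetical sketch of a proportional wall-follower.
# `side_distance` is an assumed side-sensor reading in metres;
# the returned wheel speeds are likewise illustrative.

def wall_follow_step(side_distance, target=0.5, gain=2.0, forward_speed=1.0):
    """Return (left_speed, right_speed) that steers the robot to keep
    the wall on its right at roughly `target` metres."""
    error = side_distance - target      # positive: too far from the wall
    turn = gain * error                 # steer toward the wall when too far
    return forward_speed + turn, forward_speed - turn

# Too far from the wall (0.8 m): the right wheel slows,
# so the robot curves right, back toward the wall.
left, right = wall_follow_step(0.8)
```

A pure proportional term like this will oscillate around the target distance; damping it (e.g. with a derivative term on the error) is a natural first extension to discuss in your report.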
Lecture 2: An Introduction to
Artificial Intelligence (its history) & Cognition
- Slides (pdf)
- In 2012, there was a special issue of an open access journal
on AI Heaven, although they called it Mind
Uploading.
- On a related topic, here's Jaron Lanier talking about The Myth of
AI.
- What is intelligence for? In response to a question after
a 2014 lecture, here's a link to an open-source science project
to understand all the neurons in a simple creature, OpenWorm.
Lecture 3: Action Selection
- Slides (pdf)
- Reading
- Other People's Notes
- A bunch of the slides for this lecture were taken from Jim Blythe's
course, which includes a lot of suggested
readings. Note that it is pretty old (2002, but with
updates perhaps as late as 2004); it's still fine for the
history of the approach.
- The full, original
Shakey video is on YouTube. I have no idea why I can
link to it here but can't leave it in the lecture video.
Lecture 4: Cognitive Architectures
- Slides (pdf)
- Optional reading
- Derbinsky, N., Laird, J.E.: Extending Soar with Dissociated Symbolic
Memories. Symposium on Human Memory for Artificial
Agents, AISB (2010)
- This is an example of a postgraduate student paper such as
CM50230 students should be able to write for their fourth
coursework, and towards which all the courseworks are
vaguely heading.
- It is also an example of one step in the progress of a
group working over decades to achieve human-level AI.
- Further, it talks about memory, which will be the topic of
next week's lecture(s).
- Anderson, J. R., Fincham, J. M., Qin, Y., & Stocco, A.
(2008). A central circuit of the mind. Trends in
Cognitive Sciences, 12(4), 136–143. [info]
- This is a paper arguing that ACT-R is a model of the brain,
from the highest-impact journal in Cognitive Science (higher
impact than any journal in AI).
- Architecture home pages:
- Soar
- ACT-R
- CogAff
- Shanahan has a book
about his theory of consciousness, which is in some ways an
elaborated version of Maes' ANA.
- Despite being very, very influential, Subsumption
Architecture does not have a web page. See the Brooks
paper above, but the original formulation (and the pictures
from today's lecture) was A
robust layered control system for a mobile robot (cited
6680 times as of Feb 2011).
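The layered-control idea can be caricatured in a few lines. This is a drastic simplification, not Brooks's design: real subsumption layers are augmented finite-state machines connected by suppression and inhibition wires, whereas here arbitration is just a priority list. All layer names and sensor fields are illustrative assumptions.

```python
# Toy priority-based caricature of layered control (after Brooks 1986).
# Layer names and the `sensors` dict are made up for illustration.

def avoid(sensors):
    """Safety layer: turn away from imminent obstacles."""
    if sensors["obstacle_distance"] < 0.2:
        return "turn-away"
    return None  # layer is silent, defers to the others

def seek_goal(sensors):
    """Task layer: head for a goal when one is visible."""
    if sensors.get("goal_visible"):
        return "head-to-goal"
    return None

def wander(sensors):
    """Default layer: always has something to do."""
    return "wander"

def arbitrate(sensors, layers=(avoid, seek_goal, wander)):
    """Earlier layers suppress later ones: first non-None output wins."""
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action
```

The key design point survives the simplification: each layer is a complete sensing-to-action competence on its own, and higher-priority behaviour is added without rewriting the layers below.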
Lecture 5: Perception
- Slides (pdf)
- Optional Reading
- Tom Mitchell's book Machine
Learning has a lot of information on line.
- If you are interested in natural intelligence like the vision
examples I had in lecture today, you might want to read
Michael Mann's book, which is also available entirely on line,
The Nervous
System in Action. The Carlson textbook cited in
the slides is more up to date but, less conveniently, is
only physically in the library.
- Great
book on modelling for neuroscience. I'm pretty
sure there's a copy in the library; if not, let me know &
I'll order it.
- Clarification on the nature of science writing: There
are two things a scientist has to do:
- Learn & understand.
- Communicate new learning to other people.
If you only learn & understand without communicating then
you may be living the life of the mind, but you are not doing
science because science is a social process involving the
advancement of human knowledge generally. But you will never be
able to communicate everything you have learned, so you have to
selectively work on communicating well what is most likely to be
of use to others.
Lecture 6: Learning, Neural
Networks, and Evolutionary Algorithms.
- Slides (pdf)
- Optional Reading
- Tom Mitchell's book Machine
Learning has a lot of information on line.
- John Winn and Christopher Bishop are coauthoring a new, more
accessible (and up to date!) book, Model-Based Machine
Learning. It's free online at http://www.mbmlbook.com/
- Part of the motivation is clearly to clean up the
Deep Learning hype.
- They emphasise that the goal of ML is not one magic
algorithm, but matching algorithms to problems.
- November 2012 article in the New York Times about how Deep
Learning is about to solve AI.
- Gary Marcus' New Yorker blog
post rebutting the NYT article.
- My critique from 2012: I think Marcus over-rebuts and that
AI is moving faster than he implies, though not as fast as the
NYT implies. I personally wouldn't be surprised by
human-level AI within 10 years. I have a blog post on Watson
about this, but you may want to leave reading it until after
the Watson lecture (coming up!)
- As of 2018 I think we passed human level AI in around
2012, we'll talk about this in April.
- Certainly Geoff
Hinton is someone to take seriously. He was the
first Rumelhart
Prize winner. His web page is great and links to
some basic papers on deep learning.
- Looking at the science not the hype, here's a Deep Learning Portal, and
here's a monograph (short, focused, academic book) about Learning
Deep Architectures for AI.
- One of the very few really cool examples of artificial
evolution is Karl
Sims' work. I show it in the first year so I
haven't shown it in lecture, but basically it works because
- he biased the search space – he linked sensing, morphology
& action into units, so that the search was more likely to
produce something functional.
- He "cheated" at the selection phase & kept things that
didn't work very well but were cute, thus increasing the
variation of the population.
- He used a good physics simulator for testing the agents, so
they do look like creatures.
- The results are extremely creative, and in a non-human,
unnatural way, so very engaging. Even if you've seen
them before, I recommend watching them again now that you know
more about GAs.
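For contrast with Sims's carefully biased setup, here is the unbiased baseline: a toy GA on the one-max problem (evolving an all-ones bitstring) with truncation selection and elitism. Everything here is a generic textbook sketch, vastly simpler than evolving morphology and control together.

```python
import random

# Toy genetic algorithm for one-max: maximise the number of 1s.
# Parameters and operators are illustrative, not tuned.

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05, rng=random):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if rng.random() < rate else g for g in genome]

def evolve(pop_size=20, length=16, generations=50, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # truncation selection (elitist)
        children = [mutate(rng.choice(survivors), rng=rng) for _ in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Note how little of Sims's magic is here: the genome has no structure linking sensing, morphology and action, and selection is purely greedy rather than deliberately keeping "cute" low-fitness variants to preserve diversity. That gap is exactly why most GA runs look nothing like his creatures.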
Lecture 7: Design &
Learnability
Lecture 8: Science, Agents and
Spatial Simulations
- Slides (pdf)
- Craig Reynolds is vaguely trying to maintain a Boids Web Page with
a list of all the applications (and "rip offs" :-). Here is the
first, original movie that Boids was used for, Stanley and Stella.
- Readings about simulations and replication:
- Bryson, J. J., Ando, Y., and Lehmann, H. (2007). Agent-based
models as scientific methodology: A
case study analysing primate social behaviour.
Philosophical Transactions of the Royal Society, B
— Biology, 362(1485):1685–1698. (just the first two
sections about ABM)
- King, G. (1995). Replication,
replication. PS: Political Science and Politics,
XXVIII(3):443–499.
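Reynolds's boids flock from three purely local rules: cohesion (steer toward nearby flockmates), alignment (match their heading) and separation (avoid crowding). A rough 2D sketch follows; the weights and the crude Manhattan-distance neighbourhood test are made-up simplifications, not Reynolds's actual formulation.

```python
# Minimal sketch of the three boids rules in 2D using plain tuples.
# Weights and neighbourhood handling are simplified assumptions.

def boid_step(me, neighbours, w_coh=0.01, w_ali=0.1, w_sep=0.5):
    """me = (pos, vel); neighbours = list of (pos, vel); 2D tuples."""
    (px, py), (vx, vy) = me
    n = len(neighbours)
    if n == 0:
        return (px + vx, py + vy), (vx, vy)  # no flockmates: coast
    # Cohesion: steer toward the neighbours' centre of mass.
    cx = sum(p[0] for p, _ in neighbours) / n - px
    cy = sum(p[1] for p, _ in neighbours) / n - py
    # Alignment: match the neighbours' average velocity.
    ax = sum(v[0] for _, v in neighbours) / n - vx
    ay = sum(v[1] for _, v in neighbours) / n - vy
    # Separation: push away from very close neighbours.
    close = [p for p, _ in neighbours
             if abs(px - p[0]) + abs(py - p[1]) < 1.0]
    sx = sum(px - p[0] for p in close)
    sy = sum(py - p[1] for p in close)
    vx += w_coh * cx + w_ali * ax + w_sep * sx
    vy += w_coh * cy + w_ali * ay + w_sep * sy
    return (px + vx, py + vy), (vx, vy)
```

The point for this course is that flocking is nowhere in the code: each boid only reads its neighbours, yet coherent group motion emerges, which is why boids became the canonical demonstration for agent-based modelling.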
Coursework
4: The Workshop Paper
Postgraduates only (undergraduates get
an exam). The assignment is straightforward. You should
extend one of the first three courseworks to be a conference
paper. Please talk to one of the lecturers (we will set up office
hours where we can talk individually in person or by video chat
during the consolidation week) about what project you want to do.
But the paper should be about 2-3 pages of double-column length
such as is used by IJCAI, AAAI, ACM or IEEE for all of
their conferences. The deadline is 9 AM 6 May 2019. To work to
distinction, you should actually find some recent workshop or
conference papers and use these to establish the current state of
the art / knowledge boundary, and then see if you can replicate
and / or extend one or more of them. To work to a passing
level, you should extend your coursework with more citations to
the literature, and add more methods and results (and possibly
another hypothesis) for more than one experiment. It should be on a related
topic, so share a single motivation & literature review. Each
experiment should be objectively evaluated in the results section
and its implications discussed in the discussion. This coursework
must be conducted individually, but up to 40% of the text may come
from your prior co-authored coursework. Note: as of 3 March
this coursework description is provisional and has not yet been checked.
Tutorial 1: NetLogo
This lecture is just a howto reviewing code and the NetLogo
IDE. NetLogo is very well documented and supported on line,
but you are free to use any ABM environment you choose for CW2,
including building your own. But you should provide links to
whatever you use (or all the code if you build your own) so we can
double check your code runs. Here are my lecture notes (to
myself):
- Show how to download NetLogo
- Show how the library / reference & tutorial stuff works
- ESPECIALLY the behaviour
space tutorial.
- Show the three main panes at high level
- Show how to add widgets
- Run the model, show them the parameter settings.
- Promise to talk about the model in second half; dig up that
talk.
- Show the code.
- point out that the widgets define globals not in the code,
and how I deal with that.
- in my code I comment these as defs.
- Show how monitors & plots work.
- Talk about gotchas in syntax, writing own simulators, other
platforms (esp. RePast)
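For those building their own simulator rather than using NetLogo, the essence of the BehaviorSpace tutorial above is just a systematic parameter sweep with results logged per run. A toy Python analogy follows; `run_model` is a hypothetical stand-in for an actual simulation, not NetLogo's API.

```python
import itertools

# Hand-rolled analogue of NetLogo's BehaviorSpace: run a model over
# every combination of parameter settings and record one outcome each.

def run_model(n_agents, steps):
    """Toy stand-in for a simulation run; a real model would
    set up agents, iterate, and report a measured outcome."""
    return n_agents * steps

def behaviour_space(param_grid, runs_per_setting=1):
    """param_grid maps parameter names to lists of values to sweep."""
    results = []
    for n_agents, steps in itertools.product(*param_grid.values()):
        for _ in range(runs_per_setting):
            results.append({"n_agents": n_agents, "steps": steps,
                            "outcome": run_model(n_agents, steps)})
    return results

table = behaviour_space({"n_agents": [10, 20], "steps": [100]})
```

For a stochastic model you would set `runs_per_setting` well above 1 and vary the random seed per run, which is exactly what BehaviorSpace's "repetitions" setting does for you.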
Lecture 9: Social Simulation and
Social Structure
Lecture 10: Hypothesis Testing and
Evidence
notes below this line are from 2017 and
have not been updated yet for this year.
Coursework 2: Simulations
Lecture 11: Multiple Conflicting Goals –
Intro to Game AI
Lecture 12: Chatbots, Turing Tests
& Believability
Tutorial 2: BUNG
This lecture is just a howto reviewing BUNG
code and the ABOD3
IDE.
Coursework 3: Ethical Decisions in Game
AI
- HTML
Version
- PDF
Version
- Technical Notes for Game AI are at the end of the assignment.
- Joanna J. Bryson, "Behavior-Oriented Design of Modular Agent
Intelligence" (HTML; there is also a pdf version),
Agent Technologies, Infrastructures, Tools,
and Applications for e-Services, R. Kowalczyk, J. P.
Müller, H. Tianfield and R. Unland, eds., pp. 61-76, Springer,
2003.
- or just see the BOD web
pages.
Lecture 13–14: Culture, Language &
Cognition (double-length two-day lecture)
Lecture 15: Emotions, Drives &
Complex Control
- Slides (pdf)
- 2012
news article (Wired) on USC's VR characters for veterans
with PTSD (mentions Eliza the chatbot too)
- Speaking of USC, Jon Gratch's links relating to AI
emotions research
- The emotional robot, IT.
- And the intelligent slime mold
(1.5 minutes of movies).
- Andrea Thomaz left GA Tech for UT Austin, then I think
went into industry.
- Paro
promotional video (if you don't like the footage I shot
myself that I used in the lecture!) One thing they don't
mention is that it costs $8,000.00 (last I knew).
- Optional reading
Lecture 16: Consciousness &
Cognitive Systems
- Slides (pdf)
- Note (in case you missed the lecture): the slides from my own
paper at the end are not
examinable; they were just to let you know what my biases are.
You can also find that on my Robot
Ethics page.
Lecture 17: Ethics &
Cognitive Systems
Lecture 18: Regulation &
Governance of AI
- Slides (pdf)
- WRT absolutely everyone getting hacked: I didn't want to show
something illegal on Panopto, but Slate
published the entire Snowden slide with links to Google's
reactions to it back in 2013 when it was revealed. AI is only as
trustworthy as its cybersecurity.