last modified 5 May, 2001
Here are the main slides. They are in PDF; I showed them using acroread in slide mode.
I also had some overhead slides to show the reactive plans alongside the behavior libraries for the examples. The first overhead slide demonstrates a non-real-time drive, and the second shows that it really ran (for a more convincing account, see my SAB 2000 paper). Next I showed a video of a robot (see below). The next two slides are the reactive plans for the robot (the second shows only the changes from the first); they go in parallel with the slides after "examples" after Drives in the main slides (see above). The final slides line up with the behavior libraries for the Transitive Inference behaviors in the main slides.
The video. This is 38 MB, compressed (gzipped)! The full video runs 3 minutes. It looks a little weird because it went PAL -> NTSC -> DV -> QuickTime (it's in QuickTime format now). This is old work, but it got included in the talk because it's a vivid illustration that:
This system first made me think about Behavior-Oriented Design, when I was reimplementing the sensor-fusion routines after rebuilding my action-selection architecture in C++ (it had been in Perl 5.003). Previously I had just been thinking about action selection and had taken behavior decomposition more or less for granted.
The system behind the first clip in the video is discussed in my MPhil dissertation; the second two clips scale up from that system to add short-term episodic memory and long-term learning. The first clip shows the fully autonomous robot with a persistent goal (I have to make it think it's really stuck before it changes the direction it's trying to go in). The middle and end of that clip are at 2x real time. In the second clip this behavior has been restricted so that the robot only does a "leg": it goes until it can't find a way forward. This refinement makes it easier for the robot to learn a map by instruction. The middle of the second clip is sped up 3x. The final clip shows the robot learning a very simple little map. The robot asks where to go when it is learning, but says "pick