CM10228 / Programming Ib:   Lecture 16


Applications of Search II:  Representation & Learning


-I.  AI at Bath

  1. Courses in Logic, Agents, Vision, HCI, and Cognitive Systems
    1. ICCS covers robots, scientific simulation, game AI, and cognitive science more generally (so about human and animal minds).
    2. Pattern Recognition & Vision teach more machine learning.
      1. A taster in pattern recognition, maybe next week.
    3. Talk to your SSLC rep if you think we need a dedicated machine learning course (I do!)
  2. AI Seminar series
    1. There are also AI-related talks sometimes in the HCI, Mathematical Foundations (logic, formal reasoning), MTRC (vision, learning) and general Department Seminars.
    2. Look at the Seminar link on the dept. home page.
    3. Undergraduates can go to most of these, although we don't organize your timetables around them.
  3. If you are keen, you can often work with a lecturer / help with research.
    1. Look at faculty web pages (here are the AI ones) to find out what people are interested in.
      1. Write someone specific, not the whole department or a secretary!
    2. Only do this if you have time after your coursework (i.e. if you tend to finish things both well and early).

I.  Representation and Searching in Advance I: Heuristics from Programmers

  1. Review of Intelligence as Search
    1. AI problems are those that are computationally intractable.
      1.  AI is about searching faster (improving testing), better (improving generating) or both.
      2. Action is often represented as a tree, where each node is an action, and its children are the possible actions you could do next.
        1. Draw tree with root as T0, actions as a0.. an, expand one to also have a0.. an as a subtree.
    2. Must use heuristics to prune that tree and find a reasonable action.
  2. (Search==Intelligence) & Time
    1. Where the tree is not fully described, you need to decide how far to look ahead (yes, that's really a technical term).
    2. How far do you look ahead?
      1. May be a fixed depth (breadth first). (label T1,T2 )
      2. More often a mix of depth & breadth first such as A*
        1. May search for a fixed amount of time (anytime)
        2. May search until sufficient evidence of success (normally represented in probability of success).
        3. Brains (and swarms) seem to do both: they can force a decision, or will eventually act when they have enough evidence to pass a threshold certainty.
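A fixed-depth look-ahead can be sketched in a few lines (a toy Python illustration; the `children` and `value` functions here are invented for the example, not from any real game):

```python
def lookahead(state, depth, children, value):
    """Depth-limited look-ahead: score a state by the best value
    reachable within `depth` further actions (a fixed horizon)."""
    if depth == 0:
        return value(state)
    succs = children(state)
    if not succs:          # leaf: nothing more to expand
        return value(state)
    return max(lookahead(s, depth - 1, children, value) for s in succs)

# toy tree: states are numbers, the two actions double or increment them
children = lambda n: [n * 2, n + 1] if n < 20 else []
best = lookahead(1, 3, children, lambda n: n)
```

Deepening the horizon finds better leaves, at exponential cost in branching.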
    3. Important AI Concept:  Anytime Algorithm.
      1. Have an algorithm that always gives you some solution.
      2. The longer you give the algorithm, the better the solution gets.
      3. How many chess programs work, some general planning systems.
      4. Your coursework should (towards the end) be sort of like this: iteratively improve until you run out of time, then submit whatever you have (with a bit of time to tidy up & document budgeted in).
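The anytime idea can be sketched as follows (a minimal Python illustration, not any particular chess or planning system; the objective and the stream of candidates are made up for the example):

```python
import random
import time

def anytime_minimise(f, candidates, budget_s=0.1):
    """Keep the best candidate seen so far; stop when time runs out.

    Always holds SOME valid answer, and the longer it runs, the
    better that answer can get -- the anytime property.
    """
    best = next(candidates)
    best_score = f(best)
    deadline = time.monotonic() + budget_s
    for c in candidates:
        if time.monotonic() >= deadline:
            break  # out of time: act on the best answer found so far
        score = f(c)
        if score < best_score:
            best, best_score = c, score
    return best, best_score

# Toy usage: search for a number close to 42 among random guesses.
random.seed(0)
guesses = (random.uniform(0, 100) for _ in range(10**6))
answer, error = anytime_minimise(lambda x: abs(x - 42), guesses)
```

Interrupt it early and you still get an answer; give it longer and the answer improves.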
  3. Design is search by the programmer!
    1. My PhD research area, see Designing Intelligent Systems
    2. Developing AI is basically a process of trading off search / development done by the programmer against search / planning / learning done by the machine.
      1. Want to do as little work as you can, but
      2. Want machine to reliably work.
      3. As usual, modularity can help simplify the system and therefore reduce search for everyone/thing.
    3. You can think of programming languages as sets of heuristics
      1. You are the intelligent system that other programmers have tried to optimise.
      2. Languages like Java are designed to give you a heavily-pruned search space so you are likely to succeed.
      3. This is why programmers can sometimes chance on the right answer to a question even before they understand how to solve it.
      4. Using good technique makes it likely you will find good answers!
  4. Letting AI do some of the search over known good strategies is the key to many industrial applications of AI...
    1. Production Rule Systems
      1. Production: perceptual context / precondition connected to an action,
        1. Many productions in a system
        2. action selection:
          1. go through all productions,
          2. find one whose precondition matches the current context
          3. execute / 'fire' its action
      2. Problem: many productions can be triggered (be able to fire / have their preconditions match the context) at the same time.
        1. e.g. it's day, AND it's raining
      3. Needed: a system for choosing which production to run.
        1. Many systems use rules like recency and efficacy (both of which must be learned).
        2. Soar does a search if more than one rule fires, then saves the result of the search as another production.
          1. Supposed to be like the brain.
          2. Often learns too much!
          3. Steve Payne has shown that people don't learn if a look-up is easy enough.
        3. ACT-R has Bayesian statistics on efficacy as well as learning new productions.
        4. Both use "problem space" to narrow number of possible productions tested.
      4. Expert Systems are production rule systems that encode knowledge elicited from experts.
        1. The elicitation part is hard – experts know things implicitly.
          1. becoming an expert involves learning new categories / triggers (pattern recognition, next lecture) and new actions (below & AI lecture IV)
        2. Perhaps should be called "novice systems"
          1. novices follow sets of rules
          2. experts recognize situations.
        3. They have got bad press (partly due to overblown sales pitches) but are still used in industry.
      5. Leading cognitive modelling systems are also based on productions
        1. forming productions is very similar to a basic part of animal intelligence, associative learning.
        2. ACT-R,
        3. Soar
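The production-rule loop above (match all preconditions, resolve the conflict, fire the winner) can be sketched like this (a toy Python illustration using recency as the conflict-resolution rule; this is not how Soar or ACT-R are actually implemented):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Production:
    name: str
    precondition: Callable[[set], bool]  # does it match the perceptual context?
    action: Callable[[], str]
    last_fired: int = -1                 # bookkeeping for recency

def select_and_fire(productions, context, tick):
    # 1. find all productions whose preconditions match the context
    matched = [p for p in productions if p.precondition(context)]
    if not matched:
        return None
    # 2. conflict resolution: prefer the most recently fired production
    chosen = max(matched, key=lambda p: p.last_fired)
    chosen.last_fired = tick
    # 3. fire its action
    return chosen.action()

rules = [
    Production("umbrella", lambda ctx: "raining" in ctx, lambda: "take umbrella"),
    Production("sunhat",   lambda ctx: "day" in ctx,     lambda: "wear sunhat"),
]
# it's day AND it's raining: both match, so conflict resolution must choose
act = select_and_fire(rules, {"day", "raining"}, tick=0)
```

Swapping in efficacy statistics or a Soar-style sub-search just means changing the `max` key in step 2.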
    2. Dynamic (Reactive) AI (that's my web page on the subject!)
      1. Designers replace evolution in providing the small search area for what to do next.
      2. Will hopefully show some movies.
      3. Excellent slides about this approach.

II.  Searching in Advance II:  Learning

  1. Many people think learning is "real" AI – "no one programs humans!"
    1. Wrong.  Will learn about evolution next week.
    2. Just like AI is a tradeoff between search by a designer & by a machine, human intelligence exploits search from many sources: information from genes (search by evolution), information from culture (search by society), information you figure out yourself (individual search).
      1. In humans, individual search is also split between many representations.
      2. episodic memory vs. semantic memory
        1. need to record events, but can't record everything, so use a low-bit representation based on previous semantic knowledge.
        2. improve the previous semantic knowledge from recorded experience.
        3. This is why testimony can change if a lawyer teaches a witness a new way to think about the world.
        4. Brains didn't evolve to record veridical history, they evolved to do the right thing at the right time.
      3. Perceptual memory
        1. can remember any line, but not all, of a grid of stimuli seen only briefly
        2. time & the observer -- hit something (in a car) THEN see it
      4. Physical bodies are also an example
        1. E.g. how we pick something up is limited / enabled by the number of arms we have, which way the joints work.
        2. Reduces search space => affects expressed behaviour => part of intelligence!
        3. Examples with legged robots; this line of work became a famous company recently bought by Google.
  2. In AI, learning is another kind of search
    1. search done in advance
    2. developers determine what is changeable and how it changes, i.e. the parameters.
    3. experience determines how those parameters have to be set.
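That division of labour can be made concrete with a one-parameter sketch (illustrative Python; the delta-rule update and the numbers are assumptions for the example, not any particular system):

```python
def learn_parameter(observations, learning_rate=0.1, initial=0.0):
    """Delta-rule update: the developer fixed WHAT is learnable (one
    parameter, theta) and HOW it changes (nudge toward each
    observation); experience determines where theta ends up."""
    theta = initial
    for obs in observations:
        theta += learning_rate * (obs - theta)  # move estimate toward experience
    return theta

# e.g. a robot calibrating a motor offset from noisy readings near 5.0
theta = learn_parameter([5.2, 4.9, 5.1, 5.0, 4.8] * 20)
```

The search space here is tiny (one real number) precisely because the designer did most of the work in advance.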
  3. Some people think no AI should learn!
    1. Intelligence is too complicated to be changed on the fly.
      1. Konrad Lorenz
      2. Rodney Brooks
      3. Nick Bostrom (sort of, for a different reason)
    2. But some things can't be programmed in advance
      1. maps of environment
      2. name of robot & owner
      3. perceptual aliasing (draw bug robot on board)
    3. Also, some things are more efficient to learn than to pre-program
      1. control parameters that depend on the individual specification of the motors.
      2. anything that can be learned reliably (see above about tradeoffs.)
  4. Kinds of representation:
    1. none --- environmental determinism
      1. can still get very varied behaviour, because the environment keeps changing
      2. Game of Life
      3. Ant on a beach
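The Game of Life makes the point well: the fixed rules below (standard Conway rules) hold no representation or memory at all, yet the world's state keeps changing, so behaviour stays varied. A minimal Python sketch:

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life over a set of live (x, y) cells."""
    # count how many live neighbours each cell has
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell lives next step if it has 3 neighbours,
    # or 2 neighbours and was already alive
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# the "blinker": state that oscillates forever under unchanging rules
blinker = {(0, 1), (1, 1), (2, 1)}
next_gen = life_step(blinker)
```

All the variety comes from the environment's state, none from the rules.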
    2. deictic representation
      1. minimalist reference that can change, driven just by action
        1. "the surface in front of me" instead of "floor"
        2. "the person shooting at me" instead of "Simo Häyhä"
        3. "the obstacle" instead of "elephant"
    3. specialised representations
      1. if you know what the agent needs to learn, give it a place to store it and a special-purpose memory, e.g. a map for your bots
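For instance, a bot's map could be as simple as a dictionary from coordinates to what was sensed there (a hypothetical sketch; the class and method names are invented, not from any coursework template):

```python
class GridMap:
    """Special-purpose memory for a bot: the designer decided a map is
    needed and chose its form (a grid of cells); experience fills it in."""

    def __init__(self):
        self.cells = {}  # (x, y) -> "wall" | "free"

    def observe(self, x, y, what):
        self.cells[(x, y)] = what  # record what the bot senses at (x, y)

    def is_blocked(self, x, y):
        # unknown cells are optimistically assumed free
        return self.cells.get((x, y)) == "wall"

bot_map = GridMap()
bot_map.observe(2, 3, "wall")
bot_map.observe(2, 4, "free")
```

Because the designer fixed the representation, learning reduces to filling in cells, with no general-purpose machinery needed.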
    4. general purpose representations
      1. logic
      2. neural networks (today)
      3. genetic programming (Friday)
      4. these all ultimately fail for the same reason as the rest of intelligence: combinatorics
      5. however, they have all also been quite useful in specific situations

III.  Summary

  1. Bottom line: learning is still search, it's just search in advance, and search with a lot of biases / pruning already provided.
  2. There are many kinds of machine learning; again, it could easily be a whole course.
    1. Tom Mitchell Book
    2. Chris Bishop Book
    3. These are in the library!
  3. Don't neglect simple answers: creating variables and filling in solutions when they become apparent.
    1. This approach can also be formalized: frames, case-based reasoning.
    2. My work: Behaviour Oriented Design,
      1. extends OOD to AI
      2. just use state in objects for learning.
  4. Come to seminars if you like!

page author: Joanna Bryson
26 March 2015