CM10228 / Programming Ib:   Lecture 17


Applications of Search IV:  Robots, Humans & Parallel Search

-I. Shape of unit

  1. No more lectures likely to enter into CW.
  2. Old intro:  these lectures for a) personal enrichment, b) professionalism.
    1. Education is about way more than marks!
    2. But then, we don't want weak programmers to fail programming.
    3. So now these lectures are all on the exam...
  3. Remember CW3! 
    1. Have a lot of CW due next week, so try to finish some this week.
  4. Revision lecture
    1. Week 11 or Tuesday of Revision week?

I. Robots vs. Agents

  1. Definition of a robot:  something that generates actions in the real world based on perception of the real world
    1. This actually includes internet agents that book plane tickets etc.
    2. This also includes thermostats.
  2. Intelligent behaviour is defined by interactions with the real world.
    1. A robot's morphology can be as important as its control.
  3. The Chinese Room vs the Turing Test.
  4. Should robots be given ethical consideration?
    1. Moral agents are considered responsible for their own behaviour, like adults, unlike children or dogs (mostly).
    2. Moral patients are things we owe obligations to; they may not be moral agents or even agents at all, e.g. paintings or the environment.
    3. Ethical systems are not fixed, so there's no one answer.
      1. They vary by culture.
      2. They change over time
        1. legislation
        2. shifts in norms
    4. My opinion on robot ethics: since AI is something we build, we are morally obliged to maintain our own moral agency (not pass it to robots), and to try to limit our obligations to them as well.  But this is an open question.
  5. Can robots think like humans?
    1. They face the same laws of computation: tractability.
    2. But very different processors, algorithms, sensors, effectors.
    3. Humans are smart partly because we share computation.
    4. Can AI & robots exploit our computation too?

II. Types of Search III: Parallel Search

  1. One way to attack the combinatorial explosion is to run the search in parallel.
  2. Every time you have multiple options, have one thread take each option (see the code sketch below).
  3. Think of people exploring tunnels in Scooby Doo.
    1. This is one of the classic problems of concurrency -- if you find the villain down your tunnel, how do you let the others know?
  4. If you could get enough threads, and you had some way to let all of them know when one of them found a solution, then you could solve NP problems in polynomial time.
  5. Not something you can do on one PC -- the threads share the same processor(s), so there's no big win.
  6. Even massively parallel computers usually have only around 1,000 processors, which doesn't get you that far in chess.
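
To make the thread-per-option idea concrete, here is a minimal Java sketch: every node of a toy 5-bit search space gets its own thread, and a shared AtomicReference is the flag that lets whichever thread finds the goal tell the rest to stop.  The search space and the goal test are illustrative assumptions, not from the lecture.

    import java.util.concurrent.atomic.AtomicReference;

    // One thread per option, plus a shared flag so the thread that finds
    // the goal can tell all the others to stop searching.
    public class ParallelSearch {

        static final AtomicReference<String> solution = new AtomicReference<>();

        static boolean isGoal(String candidate) {        // toy goal test
            return candidate.equals("10110");
        }

        static void search(String candidate) {
            if (solution.get() != null) return;          // another thread already won
            if (isGoal(candidate)) {
                solution.compareAndSet(null, candidate); // announce the find
                return;
            }
            if (candidate.length() >= 5) return;         // depth limit: dead end
            // Two options at this node: append '0' or '1' -- one thread each.
            Thread t0 = new Thread(() -> search(candidate + "0"));
            Thread t1 = new Thread(() -> search(candidate + "1"));
            t0.start(); t1.start();
            try { t0.join(); t1.join(); } catch (InterruptedException e) { }
        }

        public static void main(String[] args) {
            search("");
            System.out.println("Found: " + solution.get()); // Found: 10110
        }
    }

Note that on one PC all these threads still share the same few cores, so this runs no faster than a sequential search -- which is exactly point 5 above.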
  7. Solutions people are currently working on:
    1. Quantum computing :
      1. particles can potentially be in many places at one time (wave distributions).
      2. Weird!  But it has really been done for a small number of qubits.
    2. Biological computing:
      1. use genetic engineering to create AND & OR gates in bacteria.
      2. Takes maybe 48 hours for one operation, BUT can be done billions of times in a single test tube.
      3. Again, people are really working on this (for example, Ron Weiss).
    3. Evolution / memetics:
      1. If a few members of a species or a culture find a good / winning solution, the entire species or culture may eventually adopt that solution.
      2. This works much faster in memetics (ideas) than in genetics (which requires waiting for children to grow up).
      3. One consequence of increasing communications in the world is that solutions with clear impact spread much more quickly; many expect this to fundamentally change society.
        1. Of course so do other ideas, e.g. fashion.
      4. Some of the best AI works by exploiting human culture, like Google's spell checking (which learns from users' queries and corrections).
    4. Cloud computing
      1. The richest companies in the world are running on (Google) or at least inseparably from (Apple, Microsoft, Amazon, Oracle) AI.
      2. Many site their server farms on rivers, for hydroelectric power and cooling.
  8. Trying to come up with good design rules is like memetics.
    1. This is why programmers can sometimes chance on the right answer even before they understand how to solve the problem.
    2. Languages like Java are designed to give you a heavily pruned search space.
    3. Using good technique makes it likely you will find good answers!
  9. Using very limited search of known good strategies is the key to several other kinds of AI:
    1. Will talk about this Thursday.

III.  Searching in Advance III: Natural Selection (background for GAs)

  1. Natural Selection is a theory explaining the fact of evolution, which is observed both in nature and in the laboratory.
    1. The fact is based on changes in morphology over time,
      1. more recently we can also measure changes in gene frequencies, which is now the technical definition of biological evolution.
      2. Darwin guessed all this not from fossils or measurements, but just from how the animals that live on islands look – as if one happened to get there from nearby and then speciated.
    2. Darwin's theory has three parts
      1. Every population has variation between individuals.
      2. Parents have children that are more like them than like the average member of their population – heritability.
      3. They also have more children than survive to reproduce themselves – selection.
      4. Outcome: To the extent that which children survive is systematic, species will change (evolve) to follow what works well.
        1. You will also get some random change (drift) even when there's no systematic pressure; a toy simulation follows below.
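
As a toy illustration of these three parts (and as background for the GAs in section IV), here is a small Java simulation, assuming a single heritable trait, "speed", where the slower half of each generation fails to reproduce.  All the numbers are illustrative assumptions.

    import java.util.Arrays;
    import java.util.Random;

    // Variation, heritability, and selection on one trait ("speed").
    public class Selection {
        public static void main(String[] args) {
            Random rng = new Random();
            int pop = 100;
            double[] speed = new double[pop];
            for (int i = 0; i < pop; i++)
                speed[i] = rng.nextGaussian();            // 1. variation

            for (int generation = 0; generation < 20; generation++) {
                Arrays.sort(speed);                       // slowest first
                for (int i = 0; i < pop / 2; i++) {
                    // 3. selection: the slow half is replaced by children of
                    // the fast half; 2. heritability: child = parent + noise.
                    double parent = speed[pop / 2 + rng.nextInt(pop / 2)];
                    speed[i] = parent + 0.1 * rng.nextGaussian();
                }
                double mean = Arrays.stream(speed).average().orElse(0);
                System.out.printf("generation %2d: mean speed %.2f%n",
                        generation, mean);
            }
        }
    }

The mean speed climbs generation by generation; make survival random instead of systematic and the mean merely drifts, as noted in the drift point above.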
  2. Evolution doesn't necessarily discover optimal solutions.
    1. "I only have to be faster than you."
    2. Can think of it as an anytime algorithm.
    3. Can think of it as a search for heuristics.
    4. Sometimes heuristics work well in some environments and not others.
      1. Other organisms or ideas are evolving too, will find hacks around what used to be a good solution.
      2. E.g. giant birds were the top predators in South America, wiped out when big cats got across from Asia.
  3. Evolution does come up with a bunch of `good tricks' (Dennett)
    1. This is true of cultural evolution as well.
    2. Although we think we are rational agents who do things that provably make sense, in fact much of what we do and know comes from imitating authorities.
    3. Learning when you don't even know you've learned is called Implicit Learning.
      1. Adopting mannerisms of people you admire or just observe.
      2. Language -- you pick up new words and even dialects without noticing.
      3. May be most of what you learn.
  4. Intelligent behaviour is not just brains
    1. As per the first lecture, intelligent behaviour derives from the interaction between agents and their environment.
    2. Brains co-evolved with the body.
    3. Big Dog (from Boston Dynamics) works by exploiting both evolved control (captured with motion capture) and evolved shape.
      1. Video: Big Dog throwing bricks.  This isn't really to teach you anything (except maybe fear).

IV.  Biologically Inspired (& Parallel) Learning Algorithms

  1. Genetic Algorithms
    1. Evolution-like rules.
      1. Pick a representation of variables that affect the behavior of an individual agent.
      2. Start with a population of possible values for those variables.
      3. Run all the programs and evaluate how well they do.
      4. Select the ones that have done the job best,
      5. Create `children' by systematically permuting the variables of the winners
        1. crossover : choose two (or more) parents, choose some of the variable settings from each.
        2. mutation : just change a small number of variables randomly
        3. any other operator -- doesn't have to be naturally inspired.
      6. These children make the new population; now iterate! (go back to step 3 and evaluate again)
    2. Good for searching around likely solutions
      1. Need to have a good idea what the right representation / set of variables is.
      2. Need to have a good idea of how to tell if the individuals are doing well
        1. (tricky when the starting population is too far from a correct solution --
        2. what solves the first steps well may not be able to solve the last steps.)
    3. Many variations / applications:
      1. See demo from Karl Sims (you'll have to chase some links...)
    4. Just the tip of the iceberg -- a toy GA sketch follows below.
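
Here is a minimal Java sketch of steps 1-6 above.  It assumes a toy "one-max" problem -- the fitness of a bit-string is simply its count of 1-bits -- and the representation, population size, and mutation rate are all illustrative choices, not from the lecture.

    import java.util.Arrays;
    import java.util.Random;

    // A toy genetic algorithm on bit-strings (the "one-max" problem).
    public class SimpleGA {
        static final Random rng = new Random();
        static final int GENES = 20, POP = 30, GENERATIONS = 50;

        // Step 4's "how well they do": count the 1-bits.
        static int fitness(boolean[] genome) {
            int score = 0;
            for (boolean gene : genome) if (gene) score++;
            return score;
        }

        static boolean[] randomGenome() {
            boolean[] g = new boolean[GENES];
            for (int i = 0; i < GENES; i++) g[i] = rng.nextBoolean();
            return g;
        }

        // Crossover: take each variable setting from one of the two parents.
        static boolean[] crossover(boolean[] mum, boolean[] dad) {
            boolean[] child = new boolean[GENES];
            for (int i = 0; i < GENES; i++)
                child[i] = rng.nextBoolean() ? mum[i] : dad[i];
            return child;
        }

        // Mutation: flip a small number of variables at random.
        static void mutate(boolean[] genome) {
            for (int i = 0; i < GENES; i++)
                if (rng.nextDouble() < 0.05) genome[i] = !genome[i];
        }

        public static void main(String[] args) {
            boolean[][] pop = new boolean[POP][];
            for (int i = 0; i < POP; i++) pop[i] = randomGenome();  // step 2

            for (int gen = 0; gen < GENERATIONS; gen++) {
                // Steps 3-4: evaluate everyone; the fitter half are the winners.
                Arrays.sort(pop, (a, b) -> fitness(b) - fitness(a));
                // Step 5: children of the winners replace the losing half.
                for (int i = POP / 2; i < POP; i++) {
                    boolean[] mum = pop[rng.nextInt(POP / 2)];
                    boolean[] dad = pop[rng.nextInt(POP / 2)];
                    pop[i] = crossover(mum, dad);
                    mutate(pop[i]);
                }                                                   // step 6: iterate
            }
            Arrays.sort(pop, (a, b) -> fitness(b) - fitness(a));
            System.out.println("Best fitness: " + fitness(pop[0]) + "/" + GENES);
        }
    }

Truncation selection (keeping the fitter half) is only one choice here; fitness-proportional and tournament selection are common alternatives, and any of them plugs into the same loop.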
  2. Neural Networks
    1. Perceptron Learning (a code sketch follows below)
      1. start from a production (if-then) rule
      2. more inputs
      3. more outputs
      4. WTA (winner-take-all)
    2. Backpropagation
    3. Applied statistics (see Chris Bishop's Neural Networks for Pattern Recognition book)
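
To make perceptron learning concrete, here is a minimal Java sketch: a single threshold unit nudges its weights after each example until it classifies the (linearly separable) AND function correctly.  The task, learning rate, and epoch count are illustrative assumptions.

    // A single threshold unit learning logical AND by the perceptron rule.
    public class Perceptron {
        public static void main(String[] args) {
            double[][] inputs  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
            double[]   targets = {    0,      0,      0,      1   }; // AND
            double w0 = 0, w1 = 0, bias = 0, rate = 0.1;

            for (int epoch = 0; epoch < 20; epoch++) {
                for (int i = 0; i < inputs.length; i++) {
                    double sum = w0 * inputs[i][0] + w1 * inputs[i][1] + bias;
                    double out = sum > 0 ? 1 : 0;            // threshold unit
                    double err = targets[i] - out;
                    w0   += rate * err * inputs[i][0];       // perceptron rule:
                    w1   += rate * err * inputs[i][1];       // nudge each weight
                    bias += rate * err;                      // toward less error
                }
            }
            for (double[] in : inputs)
                System.out.println((int) in[0] + " AND " + (int) in[1] + " -> "
                        + (w0 * in[0] + w1 * in[1] + bias > 0 ? 1 : 0));
        }
    }

Backpropagation (2.2 above) extends this error-driven weight nudging to multi-layer networks.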
  3. Category Perception (Pattern Recognition: A second year course)
    1. importance of category perception in action selection.
    2. progression from perceptrons to formal Gaussian mixture models (GMMs)

V.  Summary

  1. Learning and intelligence are forms of search.
  2. But it is specialised search, so it is likely to succeed.
  3. Still, any bias in the search may mean that you miss a good solution.
  4. Intelligence is (NP) hard!
    1. at least, the parts we call "intelligent".
  5. Come to the seminars if you like.

page author: Joanna Bryson
14 April 2015