Projects with Daniel Richardson: Neural nets and liquid state machines

1) In a standard neural net, the inputs and outputs passed between neurons are numbers. Neurons are connected into networks, with a weight on each connection; the weights determine the behaviour of the network. A network is trained by adjusting the weights so as to move its behaviour progressively closer to some desired behaviour. There are many application areas (a web search for "neural net applications" turns up plenty): character recognition, quality control, pattern recognition in general, sunspot prediction, financial prediction, credit rating, game playing. A typical project would be to choose a problem and then design and train a neural net to solve it. The usual training method is back propagation.

2) There are other kinds of neural net which are much less well understood: spiking networks and liquid state machines, for example. In these, the inputs and outputs are time series of spikes, which is closer to biological reality. The inputs and outputs are therefore functions rather than numbers, and the networks are dynamic rather than static. A typical project would be to simulate and visualize some of these networks in action, or possibly to listen to them, since the spike trains could be expressed as sound. The object of such a project would be to gain insight into how these things behave, i.e. to observe them in action rather than to train them or use them to solve a problem.
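To give a feel for item 1), here is a minimal back-propagation sketch in Python/NumPy: a tiny 2-4-1 network trained to learn XOR. The architecture, learning rate and epoch count are arbitrary choices for the example, not part of any project brief.

```python
import numpy as np

# A minimal sketch: a 2-4-1 sigmoid network trained by back propagation on XOR.
# (Architecture, learning rate and epoch count are arbitrary example choices.)
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # backward pass: the chain rule gives the error signal at each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates (learning rate 1.0)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print('loss fell from %.3f to %.3f' % (losses[0], losses[-1]))
```

The same loop structure scales to the larger nets and real data sets a project would use; in practice one would reach for a library rather than hand-coding the gradients.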
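For item 2), here is a small illustrative sketch of a spiking network: a random pool of leaky integrate-and-fire neurons driven by a random input spike train, with the resulting spikes printed as a text raster (one row per neuron, '|' marks a spike). This is only a toy, not a full liquid state machine, and all the parameters are invented for the example.

```python
import numpy as np

# Toy spiking network: leaky integrate-and-fire neurons with random
# recurrent connections, driven by a random input spike train.
# All parameters here are invented for illustration.
rng = np.random.default_rng(1)

N = 8              # neurons in the pool
T = 200            # simulation steps
leak = 0.9         # fraction of membrane potential retained per step
v_thresh = 1.0     # firing threshold
W = rng.normal(0.0, 0.5, (N, N))   # random recurrent weights
np.fill_diagonal(W, 0.0)           # no self-connections
w_in = rng.uniform(1.0, 2.0, N)    # input weights

input_spikes = rng.random(T) < 0.1   # random input spike train
v = np.zeros(N)                      # membrane potentials
raster = np.zeros((N, T), dtype=bool)

for t in range(T):
    spikes = v >= v_thresh            # which neurons fire this step
    v[spikes] = 0.0                   # reset fired neurons
    raster[:, t] = spikes
    # leaky integration of recurrent and input spikes
    v = leak * v + W @ spikes.astype(float) + w_in * input_spikes[t]

# text raster: one row per neuron, '|' where it spiked
for i in range(N):
    print(''.join('|' if s else '.' for s in raster[i]))
```

Watching how the raster changes as the weights, leak and input rate are varied is exactly the kind of observation the project is after; a graphical raster plot or an audio rendering of the spike trains would be natural extensions.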