August 18, 1998 - August 23, 1998

UW Summer Institute 1998

Location: Seattle and San Juan Island, Washington, USA

  • Eric Horvitz

    Successful decision-making systems immersed in complex, competitive environments must grapple with uncertainty, and with potentially high-stakes, time-critical challenges. I will discuss decision-theoretic representations and inference strategies for sensing and action under uncertainty. After reviewing basic principles of decision-theoretic reasoning, I will present research on methods for deliberating about ideal actions under limited computational resources. The theoretical complexity of computing ideal actions suggests that resource limitations are likely prevalent in real-world decision making. I will describe key issues that arise in situations of limited or varying cognitive resources, including the importance of relying on compiled reflexes in lieu of deliberative processes, and of employing flexible problem-solving procedures. Flexible procedures allow systems to trade off the accuracy of decision making against allocated cognitive resources, and to exhibit a graceful degradation in performance with increasing resource limitations, rather than dramatic failures. Adding flexible procedures can be shown to increase the expected value of a system’s behavior. I will describe the importance of control systems for guiding flexible procedures and touch on the overall goal of understanding principles of bounded optimality—ideal performance in an environment conditioned on the architectural or resource constraints at hand. Finally, I will highlight opportunities for applying key concepts on decision making under limited resources to better understand biological and computational decision-making systems. My interest in establishing deeper collaborations with neurobiologists on these concepts lies behind my enthusiasm for organizing this Summer Institute.
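
    A minimal sketch of a flexible ("anytime") procedure, using a toy Monte Carlo computation rather than anything from the talk itself: accuracy improves smoothly with the resource budget, and stopping early yields a usable, coarser answer instead of a failure.

```python
import random

def anytime_pi(budget, seed=0):
    """Monte Carlo estimate of pi whose quality scales with the
    allotted sample budget -- a toy 'flexible procedure' that
    degrades gracefully rather than failing as resources shrink."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(budget):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / budget

# A metalevel controller can choose the budget to trade answer
# quality against deliberation cost.
for budget in (100, 10_000, 1_000_000):
    print(budget, anytime_pi(budget))
```

    A control system of the kind described in the talk would pick the budget to maximize the expected value of the answer net of the cost of computing it.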

  • Fred Rieke

    Spiking neurons encode continuous, time-varying sensory input signals in trains of discrete action potentials or spikes. Understanding how sensory signals are represented in spike trains is a fundamental problem in neuroscience. Relatively simple algorithms allow estimation of the sensory signal from the spike train, providing an estimate of the rate at which the spike train provides information about the input signal. In several systems this information rate is close to upper limits set by the entropy of the spike train itself; thus neural coding is remarkably precise when viewed on an absolute scale. This precision suggests that theoretical models for spike generation based on ‘optimal design’ may be relevant to real neurons.

  • John Doyle

  • Sue Becker

    Many neuroscientists take a bottom-up approach to modelling intelligence, using synaptic or cellular level biophysical data to drive the model-building process. I argue the virtues of a more top-down approach, using simplified neuronal models but more powerful cost functions for describing the systems-level behavior of collections of neurons. Where these two approaches meet, anatomical and physiological data constrain top-down models, and we begin to understand how intelligent systems can emerge from biological machinery.

    Becker, S., Meeds, E. and Chin, A. (submitted) The hippocampus as a cascaded autoencoder network.

    Becker, S. (to appear), Implicit learning in 3D object recognition: The importance of temporal context, Neural Computation.

    Becker, S. (1996), Mutual information maximization: Models of cortical self-organization. Network: Computation in Neural Systems, Vol. 7, pp. 7-31.

    Becker, S. (1995) Unsupervised learning with global objective functions. In The Handbook of Brain Theory and Neural Networks, M. Arbib (ed), MIT Press.

    Becker, S. and Hinton, G. E. (1992), A self-organizing neural network that discovers surfaces in random-dot stereograms, Nature, Vol. 355, pp. 161-163.

  • James Anderson

    Cognitive computation displays an intriguing blend of memory, discrete processes, and continuous processes. I will present:

    (1) A brief introduction to the interplay between discrete and continuous computation in the work of McCulloch and Pitts;

    (2) A worked-out example of how humans might do simple arithmetic when they don’t use logic, along with some thoughts about how one could control the direction of such a computation; and

    (3) An outline of a scalable intermediate-level neural network system called the “Network of Networks”, formed from a large array of attractor networks, that combines discrete and continuous operations in a hybrid computational architecture.

    J.A. Anderson, 1995, Introduction to Neural Networks. Cambridge, MA: MIT Press.

    J.A. Anderson, 1998, Seven times seven is about 50. Chapter 7, pp. 255-300, in An Invitation to Cognitive Science (2nd Ed.), Volume 4: Methods, Models and Conceptual Issues, D. Scarborough and S. Sternberg, eds. Cambridge, MA: MIT Press.

    J.A. Anderson and J.P. Sutton, 1997, If we compute faster, do we understand better? Behavior Research Methods, Instruments, and Computers, Vol. 29, pp. 67-77.
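
    A single attractor network of the kind tiled into the “Network of Networks” can be sketched as a small Hopfield-style module; the full hybrid architecture is not reproduced here. The discrete face of the computation is the settling into one of a finite set of attractors; the continuous face is the analog summation inside each update.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for a binary (+/-1) attractor network -- the
    kind of module the 'Network of Networks' tiles into an array."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Synchronous dynamics settle the state into a stored attractor."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                       # corrupt one unit
print(recall(W, noisy))              # settles back to the stored pattern
```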

  • Marvin Minsky

    I’ll talk about why computers don’t have common sense yet, why AI researchers don’t work on this, and what we should do about it. In my view, the trouble is that each type of representation has serious limitations, so we’ll have to learn how to use and switch between several different ones. One way would be to use the concept of ‘paranomes’ described in the final chapters of “The Society of Mind.”

  • Tom Daniel

  • Mike Dickinson

  • Blake Hannaford

    Our work is focused on understanding the lowest-level domains of motor control through design and experimentation with robotic models of the human arm. We aim to create a robotic arm that emulates the human arm in biomechanics and dynamics, and in control at the level of spinal reflexes, because we believe that intelligent manipulation behavior must be built on a substrate that can effectively modulate posture and biomechanical impedance according to task objectives. To test these ideas we are building a human arm replica, aiming for dynamic as well as kinematic accuracy, and including a real-time simulation of neural circuits in a segment of the human spinal cord. This talk will review our arm design, some of our postural control simulations, and future directions.
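
    The impedance-modulation idea can be sketched with a 1-DOF point-mass "limb" under a hypothetical spring-damper control law; the parameters below are illustrative, not values from the arm replica.

```python
def simulate_posture(k, b, q0=0.0, q_des=1.0, m=1.0, dt=0.001, t_end=3.0):
    """Euler simulation of a 1-DOF limb under an impedance control
    law, tau = k*(q_des - q) - b*dq. Adjusting stiffness k and
    damping b modulates posture and mechanical impedance, loosely
    analogous to reflex gain setting in spinal circuits."""
    q, dq = q0, 0.0
    for _ in range(int(t_end / dt)):
        tau = k * (q_des - q) - b * dq
        dq += (tau / m) * dt
        q += dq * dt
    return q

# Stiff, well-damped setting: the limb settles at the desired posture.
print(simulate_posture(k=50.0, b=15.0))
```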

  • Peter A.V. Anderson

    The earliest known nervous systems are those found in members of the phylum Cnidaria, the jellyfish, anemones and corals. These nervous systems are structurally very simple, yet they display many of the physiological properties found in higher nervous systems and are capable of controlling some remarkably elaborate behavior, including a limited degree of plasticity. The first part of this presentation will focus on the organization and capabilities of these early nervous systems.

    Nervous systems are not, however, the only means of achieving rapid and widespread communication. Many organisms, ranging from lower invertebrates through chordates, possess epithelial conduction, the production and propagation of action potentials between epithelial cells. This is a very efficient and economical means of coordinating widespread effectors (i.e. muscles, and secretory and bioluminescent tissues) and the activities of individuals in colonies, and it can provide animals with important sensory information. The latter part of this presentation will describe this phenomenon and its functional capabilities.

  • Pamela Reinagel

    Early stages of visual processing may take advantage of the characteristic low-level statistics of natural image ensembles in order to encode them efficiently. Animals actively select visual stimuli by orienting their eyes. In humans, eye positions determine which parts of a scene will fall on the fovea and thus be sampled at high resolution. We recorded eye positions of human subjects while they viewed images of natural scenes. We found that active selection changes the local statistics of the ensemble the fovea encounters. Specifically, the effective visual stimulus has higher contrast and lower spatial correlations, resulting in a higher estimated signal entropy. Thus eye movements serve to increase the information available for visual processing.

    Collaborator – Anthony Zador
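
    The direction of the entropy effect can be made concrete for a Gaussian ensemble, where differential entropy has a closed form; the variances and correlations below are illustrative numbers, not measured image statistics.

```python
import numpy as np

def gaussian_entropy_bits(cov):
    """Differential entropy (bits) of a zero-mean Gaussian ensemble:
    H = 0.5 * log2((2*pi*e)^n * det(cov))."""
    n = cov.shape[0]
    return 0.5 * np.log2((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def patch_cov(variance, correlation, n=2):
    """Covariance of an n-pixel patch with uniform pairwise correlation."""
    cov = np.full((n, n), correlation * variance)
    np.fill_diagonal(cov, variance)
    return cov

# Raising contrast (variance) or lowering spatial correlation both
# raise entropy -- the direction of the effect reported for fixated
# image regions.
print(gaussian_entropy_bits(patch_cov(1.0, 0.9)))   # low contrast, correlated
print(gaussian_entropy_bits(patch_cov(2.0, 0.5)))   # higher contrast, less correlated
```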

  • Les Atlas

    In spite of years of research effort, many pattern recognition tasks are still much more difficult for computers than for humans. Perhaps the greatest skill humans have is that of generalization, where experience from one domain maps onto another. Our working hypothesis is that instead of using complex computational schemes for recognition, humans generalize by learning those signal representations and transformations which allow us to ignore most of the irrelevant detail of the representation. These learned transformations are then used for other similar tasks and thus form the bases for generalization.

    From a signal processing perspective our point can be illustrated by using past observations to learn optimal smoothing functions of the usual representations in time and frequency. We will demonstrate this approach in several acoustic signal processing applications: sonar transient identification, condition monitoring of helicopter gearboxes, and reduced representations of distinctive features in speech. As a side-effect of these results, we observed that the information content in these acoustic signals was often manifest as a low bandwidth modulator. This observation matches some recent results in mammalian auditory cortical recordings and suggests a new signal processing approach, variational frequency, which we will define and illustrate with audio examples.
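
    The low-bandwidth-modulator observation can be sketched with a crude envelope follower on a synthetic amplitude-modulated tone. The signal parameters are invented for illustration, and this is not the variational frequency method itself.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)   # 3 Hz envelope
carrier = np.sin(2 * np.pi * 440 * t)               # 440 Hz tone
signal = modulator * carrier

# Crude envelope follower: rectify, then smooth with a moving average
# whose window is long relative to the carrier period but short
# relative to the modulator period.
win = int(fs / 50)                                  # 20 ms window
envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
envelope *= np.pi / 2                               # full-wave rectifier gain

# `envelope` now tracks `modulator`: most of the information rides
# in a low-bandwidth signal, as the observation above suggests.
```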

  • John Platt, Microsoft Research

    Classical statistical machine learning suggests that learning a category is accomplished by learning a typical member of that category and the distribution of typical variations of the category. Since 1979, Vapnik and his colleagues have shown just the opposite: learning a category can be done by learning those positive examples that are most unlike the category and those negative examples that are most like the category. The method that accomplishes this sort of learning is called a Support Vector Machine (SVM). An SVM learns a yes/no function that determines whether an example is in a category or not.

    Recently, I have invented an algorithm for learning SVMs (related to certain numerical analysis algorithms from the 1960s). This new algorithm has various interesting neuromimetic properties: resistance to input noise, synaptic weights that have a predetermined sign and a maximum strength, and, most importantly, one-shot learning and allocation of new memories. The algorithm may have interesting implications for learning of concepts in real neurobiology.
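
    For intuition about the yes/no function an SVM learns, here is a minimal linear SVM trained by subgradient descent on the regularized hinge loss, on an invented toy problem. This is deliberately not Platt's new algorithm (his method, SMO, optimizes the SVM dual analytically over pairs of examples); it only exhibits the category boundary an SVM produces.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Linear SVM via subgradient descent on the hinge loss.
    Labels y must be +/-1. A sketch for intuition, not Platt's SMO."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # example violates the margin
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

# Toy separable problem: category membership as a yes/no function.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))            # classifies all four examples correctly
```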

  • Katherine Graubard

    The stomatogastric nervous system may not be able to pass a can-he-talk-and-chew-gum-at-the-same-time intelligence test, but it can switch between the crab’s versions of chewing and swallowing motor patterns. The 1×2 mm stomatogastric ganglion contains the thirty-neuron circuit that runs two motor patterns and participates in at least three others. The activation of the motor patterns, the flexibility of each pattern, and the mode switches between patterns all require neuromodulator input from sensory neurons and from neurons in other parts of the nervous system. The big four neuromodulators in humans (norepinephrine, dopamine, serotonin, and acetylcholine) help switch us between such modes as waking, deep sleep, and dreaming sleep. In the stomatogastric ganglion, the modulator inputs act by changing the excitability of individual neurons and by changing the strength of the connections (both chemical synapses and electrical coupling) between selected parts of the circuit. Each modulator input has its own unique set of effects and sculpts a new circuit by selective modification of the baseline anatomical network. Modulator inputs do not normally work in isolation, and the effects of an input on a neuron can change depending on what other inputs are or have been active. The system is nonlinear and has complex history effects. Somehow the system is flexible, stable, and mode-switching (and crabs do eat and grow). Our challenge is to extract the essential features that allow this to be accomplished.

    Harris-Warrick, R.M., F. Nagy, and M.P. Nusbaum (1992) Neuromodulation of stomatogastric networks by identified neurons and transmitters. In: Dynamic Biological Networks: The Stomatogastric Nervous System, R.M. Harris-Warrick, E. Marder, A.I. Selverston, and M. Moulins, eds. Cambridge, MA: MIT Press, pp. 87-138.

    Marder, E., and Selverston, A.I. (1992) Modeling the stomatogastric nervous system. In: Dynamic Biological Networks: The Stomatogastric Nervous System, R.M. Harris-Warrick, E. Marder, A.I. Selverston, and M. Moulins, eds. Cambridge, MA: MIT Press, pp. 161-196.

  • Alan Gelperin

    Olfactory systems are characterized by oscillatory dynamics of local field potentials in neural centers receiving direct input from olfactory receptors. The computational role of the oscillatory dynamics is unknown. The central olfactory processing network of the terrestrial mollusc Limax maximus provides a convenient system for experiments to explore the role of oscillatory dynamics using electrical and optical recording and simulations of network dynamics. The oscillatory circuit in the central olfactory processing region is also involved in odor learning, which is highly developed in Limax. A coupled oscillator model of the Limax odor processing circuit will be discussed and its relation to current experimental data will be shown. The Limax olfactory processor also propagates waves of excitation, and the spatial dynamics of the activity waves are altered by odor stimulation. New modeling work suggests how the spatial and temporal dynamics of the Limax odor processing structure contribute to odor discrimination and odor memory storage. The relation of dynamics in the molluscan model system to the mammalian olfactory system will also be addressed.

    Ermentrout, B., Flores, J., Gelperin, A. 1998 Minimal model of oscillations and waves in the Limax olfactory lobe with tests of the model’s predictive power. J. Neurophysiol. 79:2677-2689.
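
    The generic mechanism in coupled-oscillator models of this kind, a gradient of intrinsic frequencies that locks into a travelling phase wave, can be sketched as follows. This is a schematic chain of phase oscillators, not the published Limax model.

```python
import numpy as np

def phase_wave(n=20, coupling=2.0, t_end=200.0, dt=0.01):
    """Chain of nearest-neighbour phase oscillators with a small
    intrinsic frequency gradient. Once the chain phase-locks, a fixed
    lag per oscillator remains: a travelling wave of activity."""
    omega = np.linspace(1.2, 1.0, n)          # frequency gradient
    theta = np.zeros(n)
    for _ in range(int(t_end / dt)):
        dtheta = omega.copy()
        dtheta[:-1] += coupling * np.sin(theta[1:] - theta[:-1])
        dtheta[1:] += coupling * np.sin(theta[:-1] - theta[1:])
        theta += dtheta * dt
    return np.diff(theta)                     # phase lag along the chain

# Every oscillator lags its faster neighbour: a wave travelling
# down the frequency gradient.
print(phase_wave())
```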

  • Frank Krasne

    My presentation will be in three parts:

    1. I will discuss the overall organization of tail-flip escape circuitry, emphasizing that it is mediated by two parallel systems. One operates very rapidly, produces very stereotyped responses, and is organized in a “localist” fashion. The other is slow but very flexible and seems to have a more “parallel-distributed” sort of organization. I will discuss why two systems operate in parallel to produce rather similar behavioral responses.

    2. I will discuss how crayfish decide which of the two systems to use. This decision is based on the abruptness of stimulus onset. Abruptness is detected by the system of rectifying electrical synapses that converge on the decision/command neurons of the fast system. Similar junctions would be ideal for synchrony detection in other processing tasks, such as in current models of visual perception in which the binding together of the representations of features belonging to a single object among many is based on synchrony of firing.

    3. Escape by the fast system is modulated by an inhibitory system that descends from higher centers. This inhibition is targeted to the distal dendrites of the decision/command neurons for escape rather than to more “standard” proximal sites near the spike-initiation region of the neuron, where spike initiation is more conveniently controlled. I will describe results from simple two-compartment models of distal and proximal inhibitory synapses which show that (1) distal inhibition is preferable when it is desirable for excitation provided by environmental threats to compete with modulatory inhibition on a relatively equal footing, with excitation and inhibition each able to overcome the other through sufficient increases in excitatory or inhibitory drive, and (2) proximal inhibition is preferable when it is desirable that inhibition prevent the decision/command neuron from firing no matter how strong the excitatory drive. Many neural models require that excitation and inhibition compete on a relatively equal footing; for these, distal inhibition seems preferable.
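
    The distal-versus-proximal contrast in point 3 can be reproduced in a passive steady-state two-compartment sketch. The conductances and reversal potentials below are illustrative, not crayfish measurements.

```python
import numpy as np

def soma_voltage(g_exc, g_inh, site, g_leak=1.0, g_couple=1.0, e_exc=60.0):
    """Steady-state somatic voltage of a passive two-compartment
    neuron (soma + dendrite; leak and shunting-inhibition reversal
    at 0 mV). Excitation is dendritic; inhibition sits at `site`
    ('distal' = dendrite, 'proximal' = soma)."""
    gi_d = g_inh if site == "distal" else 0.0
    gi_s = g_inh if site == "proximal" else 0.0
    # Kirchhoff's current law in each compartment:
    #   dendrite: (g_leak + g_exc + gi_d + g_couple)*Vd - g_couple*Vs = g_exc*e_exc
    #   soma:     (g_leak + gi_s + g_couple)*Vs - g_couple*Vd = 0
    A = np.array([[g_leak + g_exc + gi_d + g_couple, -g_couple],
                  [-g_couple, g_leak + gi_s + g_couple]])
    vd, vs = np.linalg.solve(A, [g_exc * e_exc, 0.0])
    return vs

for g_exc in (1.0, 10.0, 100.0):
    print(g_exc,
          soma_voltage(g_exc, g_inh=5.0, site="distal"),
          soma_voltage(g_exc, g_inh=5.0, site="proximal"))
# Strong excitation eventually overcomes distal inhibition (Vs tends
# to 30 mV here as g_exc grows), while proximal inhibition caps Vs
# (below 9 mV here) no matter how large g_exc becomes.
```

    With a firing threshold between those two limits, the model reproduces the dichotomy: distal inhibition lets excitation and inhibition compete on an equal footing, while proximal inhibition vetoes firing outright.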

  • A.O. Dennis Willows

    Briefly, we are interested in the problem of geomagnetic orientation: why marine molluscs do it in the field, and how they do it in electrophysiological and biochemical/molecular terms. We are motivated by our earlier findings (1, 2), which show that intact animals in the lab have a robust capability to orient to the earth’s magnetic field. (As you may realize, no one yet knows the physiological basis for geomagnetic field detection in any organism, except bacteria.)

    In the field, using SCUBA and individual animals (labeled and tracked), we try to answer the ecological question, “Why do they do it?” An answer is beginning to emerge, and suggests that they use their geomagnetic sense to move shoreward when they are disoriented. This makes sense because their food (and therefore their mates) is arranged patchily in a band along the shoreline. To move in any other direction puts the animal at risk of losing contact with food and mates, because it is a big ocean out there offshore. The shoreline presents a safe boundary against which to navigate.

    In electrophysiological terms, we seek specific neurons in the brain that respond when earth-strength fields around the animal change direction. We have found two pairs of re-identifiable neurons that fire impulses when the horizontal component of the geomagnetic field is changed (2). They are also sensitive to local water currents (3, 5), which provide another directional cue about shoreline orientation, since tidal currents occur reliably parallel to the shoreline. Recent experiments by my student Razvan Popescu show that these same neurons still respond to changing geomagnetic fields when their peripheral axons are intact and all other sources of input and output to/from the brain are cut off. This raises the suspicion (and exciting prospect) that we may be very close to the site of the transducer-detector: it must be in those uncut nerves or in the foot of the animal to which they are attached.

    In biochemical terms, we have dissected those identified neurons mentioned above (cited in 2 and 3), extracted and purified their contents, and discovered that they make and use a previously undescribed trio of peptides (4). The peptides have now been sequenced, synthesized and used to make antibodies suitable for immunohistochemical studies. We have found that they are released from the peripheral endings of these neurons near the ciliated cells of the foot epithelium, where they apparently promote ciliary beating (6), which in turn promotes locomotion (that is how these molluscs crawl: on their cilia, beating in a film of mucus). Perhaps these neurons are responsible for both the sensory and the motor components of the behavioral responses mentioned above. How could this be? A wonderful and difficult question! But the answer holds the key to the geomagnetic transducer, which is the prime motivator of all our work at the moment. I hope to find the solution to this question over the next 2-3 years, and am very excited by the prospect.

  • William Calvin

    To have meaningful, connected experiences — ones that we can comprehend and reason about — we must be able to discern patterns to our actions, perceptions, and conceptions. There’s a hunger for discovering hidden patterns. Most of the tasks of consciousness are coping with the novel, finding suitable patterns amid confusion or creating new choices. We know the Darwinian reputation for achieving quality on the timescale of species (millennia) and antibodies (weeks). The brain may also be able to use a Darwinian process to repeatedly improve the raw material beyond that seen in our nighttime dreams, with their jumble of people, places, and occasions that don’t fit together very well (“incoherent”). We can create coherence, i.e., quality, with enough generations of variation and selection. But we need to achieve coherence quickly, on the time scale of conversational replies.

    The newer parts of our cerebral cortex have the neural circuitry necessary for pattern copying, with variants competing for a workspace much as bluegrass and crabgrass might compete for a back yard; the multifaceted environment that makes one pattern outreproduce the other is a memorized one. This offers mechanisms for implementing both convergent and divergent thinking, even the structured types we associate with syntax.

  • Nelson Spruston

    Most neurons in the central nervous system have elaborate, branching dendritic trees that form the input structures for tens of thousands of synapses arriving from other neurons. The simplest view of the neuron is that excitatory and inhibitory potentials are summed, and funneled passively toward the cell body, where an action potential is initiated in the axon if a threshold membrane potential is reached. Recent efforts to characterize the properties of neuronal dendrites have made it clear that this is a vast oversimplification of dendritic function. Many voltage-activated channels are present in dendritic membranes, thus endowing the neuron with complex, nonlinear integrative properties. Research in my laboratory has focused on improving our understanding of the active properties of dendrites. Using simultaneous patch-clamp recordings from the somata and dendrites of hippocampal CA1 pyramidal neurons, we are able to explore how synaptic potentials and action potentials propagate within the dendritic tree. We have also studied the properties of voltage-activated sodium channels in CA1 dendrites in an attempt to understand how the properties of these channels influence dendritic integration. I will present recent results showing that although the axon has the lowest threshold for action potential initiation in CA1 neurons, spikes are generated in the dendrites in response to synchronous synaptic activation. These dendritic spikes, however, do not propagate reliably to the soma and axon, suggesting that they may serve a function different from all-or-none signalling via axonal action potentials. These results suggest that the active properties of neuronal dendrites may facilitate complex dendritic computation.

  • Glen Brown

    Experimental studies of population codes in the nervous system usually involve the collection and analysis of action potential trains from many neurons recorded simultaneously. Using a voltage-sensitive dye and a photodiode array, we have recorded action potential activity from the swimming neural network of the sea slug Tritonia, up to several hundred neurons at a time. Often, each photodiode detector recorded multiple neurons, and each neuron appeared on multiple detectors. A new technique in statistical signal processing, independent component analysis (ICA), has several applications in large multivariate data sets of this type. ICA was first used to sort action potentials from different neurons into separate channels. In the same step, artifacts were removed from the data and noise was reduced. Action potential trains were then converted into spike frequency plots and smoothed. A second ICA step revealed groups of neurons within the population. These groups corresponded to neuron types that had been classified previously by other methods, indicating that ICA can be an effective method for dimensionality reduction of population data.
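
    The blind-separation step can be sketched for two channels: whiten the mixtures, then rotate to maximize non-Gaussianity. This is a schematic stand-in for ICA, not the pipeline used on the photodiode data; the two "sources" are invented waveforms.

```python
import numpy as np

def unmix_two(X):
    """Separate two linearly mixed signals: whiten via the covariance
    eigendecomposition, then scan rotation angles for the one that
    maximizes total |kurtosis| (a simple non-Gaussianity contrast)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X         # whitened mixtures

    def kurt(u):                                  # unit variance assumed
        return np.mean(u ** 4) - 3.0

    def rotate(a):
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        return R @ Z

    angles = np.linspace(0.0, np.pi / 2, 200)
    best = max(angles, key=lambda a: sum(abs(kurt(u)) for u in rotate(a)))
    return rotate(best)

# Two sources mixed as if by cross-talk between photodetectors.
t = np.arange(0.0, 10.0, 0.001)
s1 = np.sin(2 * np.pi * 1.0 * t)                 # 'neuron 1'
s2 = ((0.73 * t) % 1.0) * 2 - 1                  # 'neuron 2' (sawtooth)
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # mixing matrix
recovered = unmix_two(A @ np.vstack([s1, s2]))   # rows ~ s1, s2 (up to sign/order)
```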

  • Christopher M. Bishop

    Probability theory provides a consistent framework for the quantification of uncertainty. Learning can be viewed as a reduction in uncertainty resulting from the acquisition of new data, and can be formalised through Bayes’ theorem. In this talk I will give a basic introduction to the Bayesian view of learning, and will illustrate the key ideas using a simple regression example. I will also highlight the distinction between the discriminative and generative paradigms as a central issue in both computational and biological learning.
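
    A regression example of the kind described can be sketched with the standard Gaussian conjugate update, in which the posterior covariance shrinks as data arrive; the prior and noise precisions below are illustrative choices, not values from the talk.

```python
import numpy as np

def posterior(Phi, t, alpha=1.0, beta=25.0):
    """Posterior over weights for Bayesian linear regression with
    Gaussian prior N(0, alpha^-1 I) and noise precision beta:
        S_N^-1 = alpha*I + beta*Phi^T Phi,   m_N = beta*S_N Phi^T t.
    Learning as uncertainty reduction: S_N shrinks with more data."""
    S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ t
    return m, S

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
t = 0.5 + 2.0 * x + rng.normal(0, 0.2, 30)    # true line: 0.5 + 2x
Phi = np.column_stack([np.ones_like(x), x])   # basis functions [1, x]

m2, S2 = posterior(Phi[:2], t[:2])            # after 2 observations
m30, S30 = posterior(Phi, t)                  # after all 30
print(m30)                                    # near [0.5, 2.0]
print(np.trace(S2), np.trace(S30))            # posterior uncertainty shrinks
```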

  • Bill Frost

    When surprised by an unexpected aversive stimulus, the marine mollusc Aplysia californica becomes sensitized for about an hour. During this period, the animal modifies its neural circuitry to enhance the amplitude and duration of its defensive gill and siphon withdrawal reflex. This presentation will first review data showing that the “memory” for this example of non-associative learning is encoded by a distributed set of nervous system modifications. The construction of a data-based computational model of the siphon-elicited siphon withdrawal circuit will then be reviewed, followed by the use of this realistic simulation to evaluate the “information content” of each component of the distributed memory for sensitization. This process involved placing physiologically recorded, learning-related synaptic modifications into the simulation, generating their effects on the firing responses of the simulated motor neurons, and then using these to drive actual siphon motor neurons and siphon contractions. This approach allowed us to dissect the contribution of each physiologically recorded circuit modification to the alterations in reflex amplitude and duration observed in the animal during sensitization.

  • Rhanor Gillette

    The behavior of all motile animals is organized along similar lines. A useful view of the complex behavior of higher animals, including humans, is that most complexity arises from increasing the number of sub-routines running under a few basic drives; these drives are shared with the simpler consciousness of sea slugs. Evidence suggests that animals with even the simplest nervous systems organize their behavior hedonically to make informed cost-benefit decisions in foraging and reproductive behavior. Observations on behavior and neural organization from Octopus and the predatory snail Pleurobranchaea will be presented to derive a simple and generalizable neural model for such decision-making in foraging behavior. The model integrates sensation, experience and internal state in terms of documented neuromodulatory gain-control of the known neural circuitry via specific ion currents, and appears amenable to computational/robotic simulation.

  • John Lewis

    In many organisms, correlating neuronal activity with sensory input and behavioral output has revealed that information is encoded using populations of neurons. We investigate a population coding algorithm and its neural implementation in a simple reflexive behavior of the medicinal leech. This reflex is elicited by a light touch to the tubular body and results in a body-bend away from the touch stimulus. The bend is achieved by the contraction and relaxation of longitudinal muscles on the same and opposite sides of the body as the touch, respectively. The underlying neuronal network solves the problem of encoding touch location and then producing the appropriately directed bend. Because of the relatively small size of this network, we are able to monitor and manipulate its complete set of sensory inputs. We show that a “population vector” formed by the spike counts of the active sensory neurons contains sufficient information to account for the behavior. This population vector is also well-correlated with bend direction. And finally, the connectivity among the identified neurons in the network is well-suited for reading out this neural population vector.

    Lewis & Kristan (1998). Nature 391:76-79
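
    The population-vector readout described above can be sketched with four cosine-tuned "touch" cells; the tuning curves and spike counts below are illustrative numbers, not leech measurements.

```python
import numpy as np

def population_vector(touch_angle, pref_angles, max_rate=10.0):
    """Cosine-tuned spike counts for cells with preferred angles
    `pref_angles`, then the population-vector readout: the vector sum
    of preferred directions weighted by each cell's spike count."""
    rates = max_rate * np.clip(np.cos(touch_angle - pref_angles), 0, None)
    counts = np.round(rates)                  # integer spike counts
    vec = counts @ np.column_stack([np.cos(pref_angles),
                                    np.sin(pref_angles)])
    return np.arctan2(vec[1], vec[0])         # decoded touch location

prefs = np.array([0.25, 0.75, 1.25, 1.75]) * np.pi   # four sensory cells
touch = 0.6 * np.pi
print(population_vector(touch, prefs))        # close to 0.6*pi
```

    Even with only four coarsely tuned cells and integer counts, the decoded angle lands within a few degrees of the stimulus, which is the sense in which the spike counts "contain sufficient information" for the directed bend.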

  • Chris Diorio

    Digital computers are adept at numerical computation but, in metrics like adaptation and intelligence, have proved woefully inadequate by comparison with animal brains. Many computer scientists (myself included) retain the expectation that we can learn how to program intelligence into our digital machines. The future will ultimately decide whether that expectation is justified. But I believe there is another question we should ask: “Does a machine’s structure, and the representation that it uses, predispose the machine to certain computations?” Put another way, will neuronal circuits always surpass digital ones at adaptation and learning? The principles underlying digital computation (Boolean algebra and low bit-error probabilities) are not the ones that neurobiology uses for its computations. Can we build artificial computing machines that use principles like local adaptation, rather than switching, to compute? And will these machines enable intelligence more naturally than digital machines do? In my talk, I will present recent work on building electronic (silicon) circuits modeled after neurobiology, and I will explore the possibilities and opportunities of artificial neuron-like computation.