
Learn Computational Neuroscience

Read the notes, then try the practice. It adapts as you go, whenever you're ready.

Session Length: ~17 min
Adaptive Checks: 15 questions
Transfer Probes: 8

Lesson Notes

Computational neuroscience is a branch of neuroscience that uses mathematical models, theoretical analysis, and computer simulations to understand the principles governing the structure, physiology, and function of the nervous system. It operates at multiple levels of analysis, from the biophysics of individual ion channels and single neurons to the dynamics of large-scale neural circuits and whole-brain networks. By translating biological observations into formal computational frameworks, the field seeks to answer fundamental questions about how neurons encode information, how networks of neurons process signals, and how these processes give rise to perception, cognition, and behavior.

The discipline emerged from foundational work spanning several decades. Warren McCulloch and Walter Pitts introduced formal models of neural logic in 1943, laying groundwork for both neuroscience and artificial intelligence. Alan Hodgkin and Andrew Huxley developed the first quantitative model of the action potential in 1952, demonstrating that neuronal electrical activity could be described by differential equations. David Marr's influential tri-level framework of analysis, published in 1982, proposed that understanding neural systems requires addressing the computational goal, the algorithmic strategy, and the physical implementation. These contributions, along with advances in Bayesian inference, information theory, and dynamical systems, established computational neuroscience as a rigorous scientific discipline.

Today, computational neuroscience is central to progress in both basic science and applied technology. It underpins the development of brain-computer interfaces, neuromorphic computing hardware, and computational psychiatry approaches that model mental disorders as disruptions in neural computation. Machine learning and deep learning architectures draw ongoing inspiration from neural circuit principles, while neuroscience in turn uses computational tools to interpret the massive datasets generated by modern recording technologies such as calcium imaging, high-density electrode arrays, and functional MRI. The field thus sits at a productive intersection of biology, mathematics, physics, and computer science.

You'll be able to:

  • Identify the mathematical frameworks used to model neural activity, including rate models and spiking networks
  • Apply differential equations and Bayesian inference to simulate neural population dynamics and sensory coding
  • Analyze how network architecture and synaptic plasticity rules give rise to learning and memory in neural circuits
  • Evaluate computational models of brain function by comparing predicted neural responses with experimental recordings

One step at a time.

Key Concepts

Hodgkin-Huxley Model

A mathematical model that describes how action potentials in neurons are initiated and propagated, using a set of nonlinear ordinary differential equations representing ionic conductances across the cell membrane.

Example: The model accurately predicts the shape, threshold, and refractory period of an action potential in the squid giant axon by tracking sodium and potassium channel gating variables over time.
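
As a minimal sketch (standard textbook form, not taken verbatim from these notes), the model's membrane equation balances capacitive, ionic, and injected currents, with three gating variables controlling the sodium and potassium conductances:

    C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}}) - \bar{g}_L\, (V - E_L) + I_{\mathrm{ext}}

    \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\}

Here \bar{g} are maximal conductances, E are reversal potentials, and \alpha_x, \beta_x are voltage-dependent channel opening and closing rates.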

Neural Coding

The study of how neurons represent and transmit information through patterns of electrical activity, including rate coding (average firing frequency) and temporal coding (precise spike timing).

Example: In the visual cortex, some neurons fire at higher rates in response to edges of a preferred orientation (rate code), while in the auditory system, neurons may lock their spike times to the phase of a sound wave (temporal code).
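
A minimal sketch of how the two codes can be quantified; the spike times and the 40 Hz tone below are hypothetical numbers chosen for illustration:

    import numpy as np

    spike_times = np.array([0.012, 0.035, 0.051, 0.078, 0.094])  # seconds, hypothetical
    duration = 0.1                                               # recording window (s)

    rate = len(spike_times) / duration        # rate code: average spikes per second

    # Temporal code: phase locking to a 40 Hz tone, summarized by vector strength
    freq = 40.0
    phases = (2 * np.pi * freq * spike_times) % (2 * np.pi)
    vector_strength = np.abs(np.mean(np.exp(1j * phases)))  # 1 = perfect locking, 0 = none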

Synaptic Plasticity and Hebbian Learning

The ability of synapses to strengthen or weaken over time in response to activity. Hebbian learning, summarized as 'neurons that fire together wire together,' describes how correlated pre- and post-synaptic activity strengthens the connection between them.

Example: Long-term potentiation (LTP) in the hippocampus, where repeated stimulation of a synapse leads to a lasting increase in signal transmission strength, is a biological implementation of Hebbian plasticity.
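
A minimal rate-based sketch of a Hebbian update; the population sizes, rates, and learning rate below are all assumed values:

    import numpy as np

    rng = np.random.default_rng(0)
    pre = rng.random(100)           # presynaptic firing rates (arbitrary units)
    post = rng.random(10)           # postsynaptic firing rates
    W = np.zeros((10, 100))         # synaptic weight matrix

    eta = 0.01                      # learning rate (assumed value)
    W += eta * np.outer(post, pre)  # Hebbian rule: weight change tracks pre/post correlation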

Attractor Networks

Recurrent neural network models in which stable patterns of activity (attractors) represent stored memories or decision states. The network dynamics cause neural activity to converge toward these stable states.

Example: Hopfield networks model associative memory: given a partial or noisy input pattern, the network settles into the nearest stored attractor, effectively completing or correcting the pattern.
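
A minimal Hopfield sketch with two made-up 8-bit patterns, using the standard outer-product storage rule and sign-threshold dynamics:

    import numpy as np

    patterns = np.array([[1, -1, 1, -1,  1, -1,  1, -1],
                         [1,  1, 1,  1, -1, -1, -1, -1]])  # stored +/-1 patterns (made up)
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n     # outer-product (Hebbian) storage rule
    np.fill_diagonal(W, 0)              # no self-connections

    x = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # noisy cue: first pattern, one bit flipped
    for _ in range(10):                 # iterate the dynamics until the state settles
        x = np.sign(W @ x)
    print(x)                            # recovers the first stored pattern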

Population Coding

The representation of information by the joint activity of a group of neurons rather than by any single neuron. The stimulus is decoded from the combined firing pattern of the population.

Example: In the motor cortex, the intended direction of an arm movement is encoded by a population vector computed from the preferred directions and firing rates of many neurons, not by any single cell.
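
A minimal sketch of population-vector decoding; the preferred directions and firing rates below are invented for illustration:

    import numpy as np

    preferred = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # preferred directions (rad)
    rates = np.array([5., 12., 30., 45., 28., 10., 4., 2.])   # firing rates (spikes/s, made up)

    # Population vector: rate-weighted sum of each neuron's preferred-direction unit vector
    px = np.sum(rates * np.cos(preferred))
    py = np.sum(rates * np.sin(preferred))
    decoded = np.arctan2(py, px)        # estimated movement direction (rad)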

Bayesian Brain Hypothesis

The theory that the brain performs approximate Bayesian inference, combining prior expectations with incoming sensory evidence to form probabilistic estimates of the state of the world.

Example: When visual information is ambiguous, such as viewing a Necker cube, the brain alternates between two perceptual interpretations, consistent with switching between two high-probability hypotheses under a Bayesian framework.
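
A minimal sketch of the underlying arithmetic for a Gaussian prior and likelihood (the means and variances are hypothetical): the posterior is a precision-weighted compromise between expectation and evidence.

    prior_mean, prior_var = 0.0, 4.0   # prior expectation (hypothetical numbers)
    obs_mean, obs_var = 2.0, 1.0       # noisy sensory measurement

    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)            # precisions add
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    print(post_mean, post_var)         # 1.6, 0.8: pulled toward the more reliable cue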

Predictive Coding

A theoretical framework proposing that the brain continuously generates predictions about incoming sensory input, and that neural processing primarily involves computing and propagating prediction errors between hierarchical levels.

Example: When a repeated auditory tone suddenly changes pitch, a large mismatch negativity signal is observed in EEG recordings, reflecting a prediction error between the expected and actual stimulus.
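
A toy sketch of the error-driven logic (the input values and learning rate are made up): the prediction error shrinks as the tone repeats, then jumps at the pitch change.

    import numpy as np

    inputs = np.array([440., 440., 440., 440., 880.])  # repeated tone, then a pitch change (Hz)
    prediction, lr = 0.0, 0.5
    for x in inputs:
        error = x - prediction      # prediction error, analogous to a mismatch signal
        prediction += lr * error    # update the internal model toward the input
        print(round(error, 1))      # 440.0, 220.0, 110.0, 55.0, then 467.5 at the change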

Integrate-and-Fire Neuron Model

A simplified neuron model in which incoming synaptic inputs are summed (integrated) over time, and when the membrane potential reaches a threshold, the neuron emits a spike and resets. It captures essential spiking dynamics while remaining computationally efficient.

Example: Large-scale simulations of cortical networks often use leaky integrate-and-fire neurons because they balance biological realism with computational tractability, enabling modeling of millions of neurons.
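
A minimal leaky integrate-and-fire sketch; all parameter values below are illustrative assumptions, not standard constants:

    dt, T = 1e-4, 0.2                  # time step and duration (s)
    tau, v_rest = 0.02, -0.065         # membrane time constant (s), resting potential (V)
    v_th, v_reset = -0.050, -0.065     # spike threshold and reset (V)
    drive = 0.020                      # effective input R*I, expressed as a voltage (V)

    v, spike_times = v_rest, []
    for step in range(int(T / dt)):
        v += dt * (-(v - v_rest) + drive) / tau   # leaky integration toward v_rest + drive
        if v >= v_th:                              # threshold crossing
            spike_times.append(step * dt)          # record the spike...
            v = v_reset                            # ...and reset the membrane
    print(len(spike_times), "spikes")              # roughly 7 spikes over 0.2 s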

More terms are available in the glossary.

Explore your way

Choose a different way to engage with this topic: no grading, just richer thinking.

Choose one:

Explore with AI →

Concept Map

See how the key ideas connect. Nodes color in as you practice.

Worked Example

Walk through a solved problem step-by-step. Try predicting each step before revealing it.

Adaptive Practice

This is guided practice, not just a quiz. Hints and pacing adjust in real time.

Small steps add up.

What you get while practicing:

  • Math Lens cues for what to look for and what to ignore.
  • Progressive hints (direction, rule, then apply).
  • Targeted feedback when a common misconception appears.

Teach It Back

The best way to know if you understand something: explain it in your own words.

Keep Practicing

More ways to strengthen what you just learned.
