
Learn Probability

Read the notes, then try the practice. It adapts as you go. Start when you're ready.

Session Length: ~17 min

Adaptive Checks: 15 questions

Transfer Probes: 8

Lesson Notes

Probability is the branch of mathematics concerned with quantifying uncertainty and analyzing random phenomena. At its core, probability assigns a numerical value between 0 and 1 to events, where 0 indicates impossibility and 1 indicates certainty. The discipline provides rigorous frameworks for reasoning about chance, randomness, and likelihood, enabling us to make informed decisions even when outcomes are uncertain. From counting favorable outcomes in simple experiments to constructing sophisticated models of complex systems, probability theory supplies the foundational language for virtually every quantitative field.

The formal study of probability traces back to the correspondence between Blaise Pascal and Pierre de Fermat in 1654, who analyzed games of chance. The field matured dramatically in the twentieth century when Andrey Kolmogorov established the axiomatic foundations in 1933, grounding probability in measure theory and enabling a unified treatment of discrete and continuous random variables. Key results such as the Law of Large Numbers and the Central Limit Theorem explain why statistical regularities emerge from randomness, providing the theoretical backbone for statistical inference, hypothesis testing, and predictive modeling.

Today, probability is indispensable across science, engineering, finance, medicine, and artificial intelligence. It underpins machine learning algorithms, actuarial science, quantum mechanics, epidemiological modeling, and risk management. Bayesian probability, in particular, has become a powerful paradigm for updating beliefs in the face of new evidence, influencing fields from spam filtering to clinical trials. A solid understanding of probability equips learners with the analytical tools to evaluate risks, interpret data, and reason rigorously about an uncertain world.

You'll be able to:

  • Apply probability axioms and counting principles to calculate the likelihood of events in discrete and continuous sample spaces
  • Analyze conditional probability and Bayes' theorem to update beliefs and solve problems involving dependent and independent events
  • Evaluate common probability distributions, including the binomial, Poisson, and normal, and their applications in modeling random phenomena
  • Distinguish between frequentist and Bayesian interpretations of probability and their implications for statistical inference methods

One step at a time.

[Image: statistical distribution curves, the mathematics of uncertainty. Photo: Pexels]

Interactive Exploration

Adjust the controls and watch the concepts respond in real time.

Key Concepts

Sample Space and Events

The sample space is the set of all possible outcomes of a random experiment, while an event is any subset of the sample space. Defining these precisely is the first step in any probability analysis, since in the general framework probabilities are assigned to events rather than to individual outcomes.

Example: When rolling a standard six-sided die, the sample space is $\{1, 2, 3, 4, 5, 6\}$. The event 'rolling an even number' is the subset $\{2, 4, 6\}$, which has a probability of $3/6 = 0.5$.
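
A minimal Python sketch (illustrative, not part of the lesson's interactive tools) makes the counting explicit by enumerating the sample space:

```python
# Enumerate the sample space of a fair die and count favorable outcomes.
sample_space = {1, 2, 3, 4, 5, 6}
event_even = {x for x in sample_space if x % 2 == 0}  # {2, 4, 6}

p_even = len(event_even) / len(sample_space)
print(p_even)  # 0.5
```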

Conditional Probability

Conditional probability measures the likelihood of an event occurring given that another event has already occurred. It is defined as $P(A|B) = \frac{P(A \cap B)}{P(B)}$, provided $P(B) > 0$. This concept is essential for updating beliefs when new information becomes available.

Example: If 5% of emails are spam and 80% of spam emails contain the word 'free,' then combining these with the rate at which non-spam emails contain 'free' lets Bayes' theorem give the probability that an email is spam given that it contains 'free.'
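
To turn the example into numbers, one more quantity is needed: the rate at which non-spam emails contain 'free.' The Python sketch below assumes 10% purely for illustration; only the 5% and 80% figures come from the example above:

```python
# Compute P(spam | 'free') from the definition of conditional probability.
p_spam = 0.05               # from the example
p_free_given_spam = 0.80    # from the example
p_free_given_ham = 0.10     # ASSUMED for illustration only

# Total probability that an email contains 'free'.
p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)

p_spam_given_free = (p_free_given_spam * p_spam) / p_free
print(round(p_spam_given_free, 3))  # ~0.296 under these assumptions
```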

Bayes' Theorem

Bayes' theorem provides a formula for reversing conditional probabilities: $P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}$. It allows us to update a prior belief about event $A$ after observing evidence $B$, forming the foundation of Bayesian inference and decision-making under uncertainty.

Example: Suppose a medical test has 99% sensitivity and 99% specificity. If 1 in 1,000 people has the disease, Bayes' theorem shows that a positive test result yields only roughly a 9% chance the person actually has the disease, because the base rate is so low.
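
The arithmetic behind the 9% figure is worth tracing in full. This sketch applies Bayes' theorem with the sensitivity and specificity stated above:

```python
# Bayes' theorem for the medical-test example.
prevalence = 0.001          # 1 in 1,000 people has the disease
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.01  # 1 - specificity = P(positive | no disease)

# Total probability of a positive test (law of total probability).
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # ~0.09, i.e. about 9%
```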

Law of Large Numbers

The Law of Large Numbers states that as the number of independent, identically distributed trials increases, the sample average converges to the expected value. This theorem explains why casinos are profitable in the long run and why polling works despite individual unpredictability.

Example: Flipping a fair coin 10 times might yield 7 heads, but flipping it 10,000 times will almost certainly produce a heads proportion very close to 50%.
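
A short simulation makes the convergence visible; the sketch below uses only the standard library, and the seed is arbitrary:

```python
# Running proportion of heads over more and more simulated coin flips.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(10_000)]  # True = heads

for n in (10, 100, 1_000, 10_000):
    proportion = sum(flips[:n]) / n
    print(n, proportion)  # drifts toward 0.5 as n grows
```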

Central Limit Theorem

The Central Limit Theorem states that the sum or average of a large number of independent, identically distributed random variables with finite variance tends toward a normal (Gaussian) distribution, regardless of their original distribution. This result justifies the widespread use of normal-distribution-based methods in statistics.

[Figure: normal distribution arising from the Central Limit Theorem]

Example: The average height of 100 randomly selected adults will be approximately normally distributed even if individual heights follow a skewed distribution, enabling construction of confidence intervals.
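
The effect is easy to reproduce by averaging draws from a deliberately skewed distribution; the exponential with mean 1 below is an arbitrary illustrative choice:

```python
# Sample means of a skewed distribution cluster symmetrically (CLT).
import random
import statistics

random.seed(0)
means = [statistics.mean(random.expovariate(1.0) for _ in range(100))
         for _ in range(5_000)]

# Individual draws are skewed, but the 5,000 sample means concentrate
# around the true mean 1.0 with spread close to sigma/sqrt(n) = 1/10.
print(round(statistics.mean(means), 2), round(statistics.stdev(means), 2))
```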

Random Variables and Distributions

A random variable is a function that assigns a numerical value to each outcome in a sample space. Its probability distribution describes the likelihood of each possible value. Distributions can be discrete (e.g., binomial, Poisson) or continuous (e.g., normal, exponential).

Example: The number of heads in 10 coin flips follows a binomial distribution with parameters $n = 10$ and $p = 0.5$, giving a specific probability to each outcome from 0 through 10 heads.
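
The distribution can be tabulated directly from the binomial formula $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$, as in this sketch:

```python
# Binomial pmf for 10 fair coin flips.
from math import comb

n, p = 10, 0.5
for k in range(n + 1):
    pmf = comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, round(pmf, 4))  # e.g. P(X = 5) = 252/1024, about 0.2461
```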

Expected Value and Variance

The expected value (mean) of a random variable is the long-run average of its outcomes, weighted by their probabilities. Variance measures how spread out values are around the mean. Together, they summarize a distribution's center and dispersion.

Example: A fair six-sided die has an expected value of 3.5 and a variance of about 2.92. A gambler who bets on the die many times can expect an average roll of 3.5 over time.
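
Both quantities follow directly from the definitions $E[X] = \sum_x x \, p(x)$ and $\mathrm{Var}(X) = E[X^2] - E[X]^2$:

```python
# Expected value and variance of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
p = 1 / 6

mean = sum(x * p for x in outcomes)                    # 3.5
variance = sum(x**2 * p for x in outcomes) - mean**2   # ~2.92
print(mean, round(variance, 2))
```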

Independence and Dependence

Two events are independent if the occurrence of one does not affect the probability of the other, meaning $P(A \cap B) = P(A) \cdot P(B)$. When events are dependent, their joint probability requires conditioning. Recognizing independence is critical for simplifying calculations and building correct models.

Example: Successive flips of a fair coin are independent: getting heads on flip one does not change the probability of heads on flip two. However, drawing two cards from a deck without replacement creates dependent events.
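
The product rule gives a concrete test for independence. This sketch compares the true joint probability of drawing two aces without replacement against what independence would predict:

```python
# Dependence of two draws without replacement from a 52-card deck.
p_first_ace = 4 / 52
p_second_ace_given_first = 3 / 51      # one ace already removed

p_both_aces = p_first_ace * p_second_ace_given_first  # true joint probability
p_if_independent = p_first_ace * (4 / 52)             # product-rule prediction

print(round(p_both_aces, 4), round(p_if_independent, 4))  # 0.0045 vs 0.0059
# The values differ, so the draws are dependent.
```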

More terms are available in the glossary.

Explore your way

Choose a different way to engage with this topic; no grading, just richer thinking. Choose one:

Explore with AI →

Concept Map

See how the key ideas connect. Nodes color in as you practice.

Worked Example

Walk through a solved problem step-by-step. Try predicting each step before revealing it.

Adaptive Practice

This is guided practice, not just a quiz. Hints and pacing adjust in real time.

Small steps add up.

What you get while practicing:

  • Math Lens cues for what to look for and what to ignore.
  • Progressive hints (direction, rule, then apply).
  • Targeted feedback when a common misconception appears.

Teach It Back

The best way to know if you understand something: explain it in your own words.

Keep Practicing

More ways to strengthen what you just learned.
