Core Statistics for Modern Machine Learning
Modern machine learning rests on a handful of core statistical ideas. Understanding them not only improves model accuracy; it also sharpens your ability to reason about uncertainty, diagnose failure modes, and communicate results to non‑technical stakeholders. This guide builds a practical statistical toolkit for machine learning practitioners: probability distributions, estimation, hypothesis testing, resampling, regularisation, and model evaluation.
1) Probability as the Language of Uncertainty
Every prediction produced by a model is uncertain to some degree. Probability distributions quantify this uncertainty. In applied settings we rarely need advanced measure theory; instead we need a mental library of common distributions (normal, binomial, Poisson, exponential, log‑normal) and when to use each one. For example, Poisson models are a natural starting point for counts per unit time, while the binomial distribution models the number of successes in a fixed number of trials. Heavy‑tailed phenomena—web session lengths, claim sizes—often fit log‑normal or Pareto laws better than the normal distribution.
A useful checklist
- Identify the outcome type: continuous, count, category, ranking.
- Match a distribution family: normal for symmetric continuous noise, Poisson/negative binomial for counts, multinomial for categories.
- Look for skew and tails: plot histograms and empirical CDFs; consider log transforms (see the sketch after this list).
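The checklist takes only a few lines of code to act on. Below is a minimal sketch, assuming synthetic heavy‑tailed "session length" data; the normal vs. log‑normal comparison and the skewness check around a log transform are illustrative choices, not a fixed recipe.

```python
# A minimal sketch of the checklist, using synthetic heavy-tailed "session
# length" data; the fitted families and the seed are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sessions = rng.lognormal(mean=3.0, sigma=1.0, size=5_000)  # skewed, heavy-tailed

# Match a family: compare a normal fit with a log-normal fit by log-likelihood.
norm_params = stats.norm.fit(sessions)
lognorm_params = stats.lognorm.fit(sessions, floc=0)
ll_norm = stats.norm.logpdf(sessions, *norm_params).sum()
ll_lognorm = stats.lognorm.logpdf(sessions, *lognorm_params).sum()
print(f"log-likelihood  normal: {ll_norm:.0f}   log-normal: {ll_lognorm:.0f}")

# Look for skew and tails: a log transform often restores rough symmetry.
print(f"skewness  raw: {stats.skew(sessions):.2f}   "
      f"log-scale: {stats.skew(np.log(sessions)):.2f}")
```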
2) Estimation and the Bias–Variance Trade‑off
Point estimates (means, proportions, coefficients) are useful, but intervals are more informative. Confidence intervals and credible intervals communicate the range of plausible values for a parameter. In machine learning the bias–variance trade‑off shows up as underfitting versus overfitting. Techniques like ridge and lasso shrink coefficients to reduce variance at the cost of a small bias, which typically improves out‑of‑sample error.
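To make the variance‑reduction claim concrete, here is a hedged sketch comparing plain least squares with ridge regression on a synthetic, wide dataset; the alpha values and dataset shape are arbitrary choices for illustration.

```python
# A hedged comparison of OLS and ridge on a synthetic "wide" problem; alpha
# values and dataset shape are arbitrary choices for illustration.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=80, n_features=60, noise=15.0, random_state=0)

for name, model in [("OLS", LinearRegression()),
                    ("ridge a=1", Ridge(alpha=1.0)),
                    ("ridge a=10", Ridge(alpha=10.0))]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name:>9}: cross-validated MSE = {mse:10.1f}")
```

On a wide, noisy problem like this the shrunken models usually post lower cross‑validated error despite their bias.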
3) Hypothesis Testing Without the Jargon
Classical tests (t‑test, chi‑square, Mann–Whitney) remain invaluable for quick experiments and A/B tests. The key is to pre‑register an analysis plan, avoid peeking, and report effect sizes with intervals rather than p‑values alone. Where assumptions fail, permutation tests and bootstrapping offer robust, largely distribution‑free alternatives.
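As a concrete example of the distribution‑free route, here is a minimal two‑sided permutation test for a difference in group means; the synthetic "control" and "treatment" arrays and the 10,000‑permutation budget are assumptions for illustration.

```python
# A minimal two-sided permutation test for a difference in group means; the
# synthetic groups and the permutation budget are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
control = rng.normal(loc=0.10, scale=0.05, size=500)
treatment = rng.normal(loc=0.11, scale=0.05, size=500)

observed = treatment.mean() - control.mean()
pooled = np.concatenate([control, treatment])

n_perm, count = 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                                # relabel groups at random
    diff = pooled[:treatment.size].mean() - pooled[treatment.size:].mean()
    count += abs(diff) >= abs(observed)                # as extreme as observed?

print(f"observed diff = {observed:.4f}, "
      f"two-sided permutation p = {count / n_perm:.4f}")
```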
4) Resampling: Cross‑Validation & Bootstrapping
Cross‑validation provides an honest estimate of generalisation error. For small datasets prefer repeated stratified k‑fold CV; for time‑series use forward‑chaining folds. Bootstrapping draws samples with replacement to approximate sampling distributions of metrics and parameters. It is a powerful way to attach uncertainty bars to ROC‑AUC, F1, or calibration curves.
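The sketch below attaches a bootstrap confidence interval to ROC‑AUC on a held‑out test set; the logistic‑regression model, the class imbalance, and the 2,000 resamples are illustrative assumptions rather than recommendations.

```python
# Bootstrap confidence interval for ROC-AUC; the model, dataset, and number of
# resamples are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

probs = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

rng = np.random.default_rng(0)
aucs = []
for _ in range(2_000):
    idx = rng.integers(0, len(y_te), len(y_te))   # resample test rows with replacement
    if len(np.unique(y_te[idx])) < 2:             # skip resamples with one class only
        continue
    aucs.append(roc_auc_score(y_te[idx], probs[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"ROC-AUC = {roc_auc_score(y_te, probs):.3f}  "
      f"(95% bootstrap CI {lo:.3f}-{hi:.3f})")
```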
5) Model Evaluation Beyond Accuracy
Accuracy is misleading on imbalanced data. Prefer metrics aligned to the decision: precision–recall AUC for rare event detection, calibration error when decisions depend on predicted probabilities, mean absolute error (MAE) for cost functions that scale linearly with deviation. Always complement a single number with diagnostic plots: residuals, lift charts, and reliability diagrams.
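A small sketch of this point on a synthetic imbalanced dataset (the model and class weights are stand‑ins): accuracy looks flattering, while PR‑AUC and a calibration‑sensitive score give a more decision‑relevant view.

```python
# Metrics beyond accuracy on an imbalanced problem; the dataset and model are
# synthetic stand-ins for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, average_precision_score,
                             brier_score_loss)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, weights=[0.97, 0.03], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

clf = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]

# Accuracy is inflated by the ~97% negative class; the other two scores are
# sensitive to ranking quality and probability calibration respectively.
print("accuracy   :", round(accuracy_score(y_te, clf.predict(X_te)), 3))
print("PR-AUC (AP):", round(average_precision_score(y_te, probs), 3))
print("Brier score:", round(brier_score_loss(y_te, probs), 3))
```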
6) Practical Regularisation
Regularisation is not merely a hyperparameter to tune; it encodes prior beliefs about plausible models. L2 discourages large coefficients smoothly, L1 induces sparsity, elastic‑net blends both, early stopping regularises iterative learners, and dropout randomises network structure to improve robustness. Choose regularisation guided by domain knowledge and feature engineering constraints.
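To see the qualitative difference between the penalties, here is a brief sketch contrasting L1 sparsity with L2 shrinkage on synthetic data; the penalty strengths are arbitrary and chosen only to make the contrast visible.

```python
# Contrast of L1 sparsity vs. L2 shrinkage; penalty strengths and data shape
# are arbitrary illustrative choices.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0, max_iter=10_000).fit(X, y)   # L1: drives coefficients to zero
ridge = Ridge(alpha=1.0).fit(X, y)                    # L2: shrinks them smoothly

print("non-zero coefficients  lasso:", int(np.sum(lasso.coef_ != 0)),
      " ridge:", int(np.sum(ridge.coef_ != 0)))
print("largest |coefficient|  lasso:", round(float(np.max(np.abs(lasso.coef_))), 1),
      " ridge:", round(float(np.max(np.abs(ridge.coef_))), 1))
```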
7) Communicating Uncertainty
Stakeholders rarely ask for a p‑value, but they do ask “how sure are we?” Provide interval estimates, scenario ranges (“pessimistic, likely, optimistic”), and decision thresholds tied to business costs. Where possible, express uncertainty visually with error bars or fan charts.
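If a chart helps, a basic error‑bar plot is often enough; the sketch below uses made‑up metric values and intervals purely as placeholders.

```python
# A basic error-bar plot for communicating uncertainty; the metric values and
# intervals are made-up placeholders.
import matplotlib.pyplot as plt

models = ["baseline", "model A", "model B"]
point = [0.71, 0.78, 0.80]   # e.g. bootstrap means of a metric
lower = [0.68, 0.75, 0.74]   # lower ends of 95% intervals
upper = [0.74, 0.81, 0.86]   # upper ends of 95% intervals

# errorbar expects distances from the point estimate, not absolute bounds.
yerr = [[p - lo for p, lo in zip(point, lower)],
        [hi - p for p, hi in zip(point, upper)]]

plt.errorbar(models, point, yerr=yerr, fmt="o", capsize=4)
plt.ylabel("metric (95% interval)")
plt.title("Model comparison with uncertainty")
plt.savefig("uncertainty.png")
```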
Quick FAQ
How many folds should I use?
Five is a good default; use ten when datasets are small and computation is cheap. For time‑series use rolling windows.
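For the time‑series case, scikit‑learn's TimeSeriesSplit implements the forward‑chaining idea; the tiny sketch below (with an arbitrary 24‑point series) shows how each fold trains only on the past.

```python
# Forward-chaining folds for time-ordered data; the 24-point series and the
# five splits are arbitrary illustrative choices.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(24).reshape(-1, 1)   # 24 time-ordered observations
for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    print(f"fold {fold}: train ends at t={train_idx[-1]}, "
          f"test covers t={test_idx[0]}..{test_idx[-1]}")
```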
When is accuracy acceptable?
Only when classes are balanced and the cost of false positives and false negatives is comparable.
Do I need Bayesian methods?
Not always, but Bayesian thinking helps: it makes assumptions explicit and yields full posterior distributions.