From MLE to Bayesian Inference: Why Your Estimate Needs a Prior
In the MLE tutorial, we estimated a coin's bias by finding the single parameter value that maximises the likelihood. Flip a coin 3 times, get 3 heads, and MLE says $\hat{\theta} = 1.0$: the coin always lands heads. That feels wrong. With only 3 flips, we shouldn't be certain of anything.

The problem isn't the likelihood; it's that MLE gives you a point estimate with no way to express doubt. Bayesian inference fixes this by computing an entire distribution over parameter values, weighted by how plausible each value is given both the data and your prior knowledge. By the end of this post, you'll implement Bayesian updating from scratch, understand conjugate priors, and see why a 99% accurate medical test can still be wrong 98% of the time.

Quick Win: A Coin Flip with a Prior

Let's revisit the coin flip from the MLE post, but this time we'll incorporate a prior belief. Suppose you think the coin is probably fair, but you're not certain.

```python
import numpy as np
import matplotlib.pyplot as plt
```
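To make the idea concrete, here is a minimal sketch of the kind of Bayesian update the post describes: a grid approximation of the posterior over the coin's bias after 3 heads in 3 flips, under a Beta(2, 2)-shaped prior that leans towards fairness. The prior shape, grid size, and variable names are illustrative assumptions, not the post's exact code.

```python
import numpy as np

# Candidate values for the coin's bias theta = P(heads)
theta = np.linspace(0, 1, 1001)

# Prior proportional to Beta(2, 2): peaked at 0.5, "probably fair"
prior = theta * (1 - theta)

# Likelihood of observing 3 heads in 3 flips for each candidate theta
likelihood = theta ** 3

# Posterior is prior times likelihood, normalised over the grid
posterior = prior * likelihood
posterior /= posterior.sum()

# Posterior mean: pulled towards 0.5 by the prior, unlike the MLE of 1.0
posterior_mean = (theta * posterior).sum()
print(round(posterior_mean, 3))  # ~0.714, vs the MLE of 1.0
```

Because the Beta prior is conjugate to the binomial likelihood, this grid result matches the exact posterior Beta(5, 2), whose mean is 5/7: the prior tempers the data's "always heads" verdict instead of letting 3 flips dictate certainty.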


