Importance Sampling (and why the name is misleading)
We call it ‘importance,’ but the algorithm never picks ‘important’ points; it re-weights ordinary ones. A better name would be something like ‘occurrence adjustment,’ grounded in the relationship between sampling and likelihood, because that is what the method actually does.
When we are averaging stuff (say $\frac{1}{N}\sum_{i=1}^{N} f(x_i)$) in real life, we are actually estimating an expectation of a function, $f$, with respect to a random variable, $X$, that is distributed according to $p$. Put differently: if you can collect or create observations, you are sampling from $p$. That literally means you will have more observations at some points than at others, in a pattern governed by $p$.
But what if you can’t sample from $p$ directly? If you sample from some other distribution $q$ instead, you’re getting the “wrong” occurrence rates. Values that should appear frequently under $p$ might appear rarely under $q$, and vice versa.
The fix is simple: adjust each sample by the ratio $w(x) = \frac{p(x)}{q(x)}$.
This ratio tells you how much more (or less) likely that value is under $p$ compared to $q$. If $\frac{p(x)}{q(x)} = 3$, that sample should “count 3 times more” to correct for it being underrepresented in your $q$-samples. This is literally occurrence adjustment: you’re fixing the mismatch between how often values actually appeared (under $q$) versus how often they should have appeared (under $p$).
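To make that concrete with made-up numbers: suppose at some point $x_0$ the densities happen to be $p(x_0) = 0.6$ and $q(x_0) = 0.2$. Then

$$w(x_0) = \frac{p(x_0)}{q(x_0)} = \frac{0.6}{0.2} = 3,$$

so an observation landing at $x_0$ is tallied with weight 3, making up for $q$ producing it a third as often as $p$ would have.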
Hidden Requirements
This method implicitly assumes two critical things that are often glossed over:
Matching support: If $q$ assigns zero probability where $p$ doesn’t, the weight blows up and the estimator is undefined. Formally, we need $q(x) > 0$ wherever $p(x) > 0$. (There’s a code sketch of this failure right after this list.)
Computable likelihoods: You need to evaluate both $p(x)$ and $q(x)$ for every sample. This seems obvious, but it’s a huge constraint: we often resort to sampling methods like MCMC precisely because we can’t compute these densities easily.
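Here is a minimal numpy sketch of the support requirement being violated (the uniform distributions are my choice purely for illustration). Note that in this case the failure shows up as silent bias rather than an infinite weight, because $q$ never even proposes points in the uncovered region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p: Uniform(0, 2).  Proposal q: Uniform(0, 1) -- q is zero on (1, 2],
# where p still has mass, so the support condition is violated.
N = 100_000
x = rng.uniform(0.0, 1.0, size=N)   # samples from q
p_density = np.full_like(x, 0.5)    # p(x) = 1/2 on [0, 2]
q_density = np.ones_like(x)         # q(x) = 1 on [0, 1]
w = p_density / q_density

# Importance-sampling estimate of E_p[X]; the true value is 1.0.
print(np.mean(w * x))  # ~0.25 -- quietly wrong, with no error raised
```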
There’s a beautiful duality here: if you can compute $p$ but can’t sample from it, importance sampling provides a backdoor. Just find ANY distribution $q$ that you can both sample from AND evaluate, and you’ve converted your “likelihood evaluation ability” into “sampling ability.” In theory, this creates an equivalence between these two capabilities.
Life is not so easy though. While you keep all samples, samples with tiny importance weights contribute essentially nothing. You’ve spent computational effort generating them, evaluating densities, computing $f(x)$, only to multiply by $w(x) \approx 0$. These “zombie samples” exist but don’t matter, making your effective sample size much smaller than your actual sample count.
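The standard way to quantify this is the Kish effective sample size, $\text{ESS} = \left(\sum_i w_i\right)^2 / \sum_i w_i^2$, computable straight from the weights; a quick sketch:

```python
import numpy as np

def effective_sample_size(w: np.ndarray) -> float:
    """Kish effective sample size: (sum w)^2 / sum(w^2).

    Equals N when all weights are equal; collapses toward 1 when a few
    weights dominate the rest (lots of zombie samples)."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

print(effective_sample_size(np.ones(1000)))             # 1000.0 -- ideal
print(effective_sample_size(np.array([1e6, 1, 1, 1])))  # ~1.0   -- degenerate
```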
In policy-gradient RL, $p$ is the new policy and $q$ is the behavior policy, so importance sampling lets us reuse old roll-outs.
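A minimal sketch of that reuse, with hypothetical names (`log_prob_new`, `log_prob_old`, `returns` stand in for whatever your RL stack provides). This treats each sample independently; trajectory-level importance sampling would multiply the ratios across timesteps instead:

```python
import numpy as np

def off_policy_value_estimate(log_prob_new: np.ndarray,
                              log_prob_old: np.ndarray,
                              returns: np.ndarray) -> float:
    """Re-weight returns collected under the old (behavior) policy so they
    estimate the expected return of the new policy."""
    # w_i = pi_new(a_i | s_i) / pi_old(a_i | s_i), computed in log space
    # for numerical stability.
    w = np.exp(log_prob_new - log_prob_old)
    return float(np.mean(w * returns))
```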
The name “importance sampling” obscures this elegant mechanism. We’re not sampling “important” regions; we’re correcting occurrence rates. “Likelihood-weighted sampling” or “distribution correction sampling” would immediately convey what’s actually happening: you sampled from the wrong distribution, so you reweight by $\frac{p(x)}{q(x)}$ to fix it.
The Math (or: How this magic actually works)
So we want $\mathbb{E}_{x \sim p}[f(x)]$ but can only sample from $q$. Here’s the trick in all its glory:

Start with what you want:

$$\mathbb{E}_{x \sim p}[f(x)] = \int f(x)\, p(x)\, dx$$

Now multiply by 1 $\left(= \frac{q(x)}{q(x)}\right)$:

$$\int f(x)\, p(x)\, \frac{q(x)}{q(x)}\, dx = \int \left( f(x)\, \frac{p(x)}{q(x)} \right) q(x)\, dx$$

The integral is just an expectation with respect to $q$:

$$\mathbb{E}_{x \sim p}[f(x)] = \mathbb{E}_{x \sim q}\!\left[ f(x)\, \frac{p(x)}{q(x)} \right]$$

In practice, with $N$ samples $x_1, \dots, x_N$ drawn from $q$:

$$\mathbb{E}_{x \sim p}[f(x)] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i)\, \frac{p(x_i)}{q(x_i)}$$
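Here is a self-contained sketch of the whole pipeline. The distributions and $f$ are my choices for illustration; scipy’s normal densities stand in for whatever $p$ and $q$ you actually have:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Target p = N(0, 1): pretend we can evaluate its density but not sample it.
# Proposal q = N(0, 2): wider than p, so it covers p's support comfortably.
p = norm(loc=0.0, scale=1.0)
q = norm(loc=0.0, scale=2.0)

f = lambda x: x ** 2    # we want E_p[f(X)] = Var_p(X) = 1.0

N = 100_000
x = q.rvs(size=N, random_state=rng)   # sample from the "wrong" distribution
w = p.pdf(x) / q.pdf(x)               # occurrence-adjustment weights

print(np.mean(w * f(x)))  # ~1.0, matching E_p[X^2] for a standard normal
```

Flipping the widths (a narrow $q$ under a wide $p$) is the classic way to reproduce the zombie-sample problem from earlier: the tails of $p$ get huge weights that $q$ almost never visits, and the effective sample size collapses.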