Several years back, I wrote a series of blog posts about probability monads inspired by this quote:

A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. "Google uses Bayesian filtering the way Microsoft uses the if statement," he said. -- Joel Spolsky

My goal was to make it easier to reason about evidence by combining Bayes' Rule and probability monads:

fluStatusGivenPositiveTest = do
  fluStatus  <- percentWithFlu 10
  testResult <- if fluStatus == Flu
                  then percentPositive 70
                  else percentPositive 10
  guard (testResult == Pos)
  return fluStatus

-- This function returns a probability distribution of
-- 44% Flu and 56% Healthy. Because only a small portion of
-- the population is actually infected, the false positives
-- outnumber the true ones.
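To make the excerpt above concrete, here is a minimal, self-contained sketch of one way such a probability monad can work: a list of weighted outcomes. The names `Dist`, `percent`, `condition`, and `normalize` are my own stand-ins, not the API from the original series; `condition` plays the role of `guard`, and `percent` replaces the `percentWithFlu`/`percentPositive` helpers.

```haskell
import Data.Ratio ((%))

data FluStatus  = Flu | Healthy deriving (Eq, Show)
data TestResult = Pos | Neg     deriving (Eq, Show)

-- A distribution is a list of (outcome, probability) pairs.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [(f x, p) | (x, p) <- xs]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

instance Monad Dist where
  -- Bind multiplies probabilities along each branch of the computation.
  Dist xs >>= f = Dist [(y, p * q) | (x, p) <- xs, (y, q) <- runDist (f x)]

-- An n-percent chance of the first outcome, otherwise the second.
percent :: Integer -> a -> a -> Dist a
percent n yes no = Dist [(yes, n % 100), (no, (100 - n) % 100)]

-- Drop impossible branches (playing the role of `guard`).
condition :: Bool -> Dist ()
condition True  = pure ()
condition False = Dist []

-- Renormalize so the surviving probabilities sum to 1.
normalize :: Dist a -> [(a, Rational)]
normalize (Dist xs) = [(x, p / total) | (x, p) <- xs]
  where total = sum (map snd xs)

fluStatusGivenPositiveTest :: Dist FluStatus
fluStatusGivenPositiveTest = do
  fluStatus  <- percent 10 Flu Healthy
  testResult <- if fluStatus == Flu
                  then percent 70 Pos Neg
                  else percent 10 Pos Neg
  condition (testResult == Pos)
  return fluStatus

main :: IO ()
main = print (normalize fluStatusGivenPositiveTest)
```

Running `main` prints `[(Flu,7 % 16),(Healthy,9 % 16)]`, i.e. 43.75% Flu and 56.25% Healthy, matching the rounded figures above: P(Flu, Pos) = 0.1 x 0.7 = 0.07 and P(Healthy, Pos) = 0.9 x 0.1 = 0.09, so after conditioning on the positive test the healthy false positives dominate.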

You can find the original series of blog posts here:

  1. Refactoring probability distributions, part 1: PerhapsT
  2. Refactoring probability distributions, part 2: Random sampling
  3. Bayes' rule in Haskell, or why drug tests don't work
  4. Smart classification using Bayesian monads in Haskell
  5. Robot localization using a particle system monad

I then turned these into a paper: Build your own probability monads.

The source code is now available on GitHub. Be warned: this code probably won't run on modern Haskell without some tweaking, but patches are welcome.