Random Hacks: Robot localization using a particle system monad
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad
"Robot localization using a particle system monad" by Christopher Tay<p>Chris, you’re right. It’s sequential Monte Carlo. Thanks for pointing it out.</p>Thu, 06 Sep 2007 10:14:14 +0000
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-493
"Robot localization using a particle system monad" by chris<p>Christopher, you are wrong. A particle filter is not a markov chain monte carlo (MCMC) algorithm.
<span class="caps">A MCMC</span> algorithm is an iterative algorithm and is used for batch tasks. Particle filters only rely on importance sampling and resampling mechanisms and allow you to address sequential problems.</p>Thu, 02 Aug 2007 06:41:03 +0000urn:uuid:189fe971-7a5a-4e52-ad40-b7d7c7edf956
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-486
"Robot localization using a particle system monad" by Eric<p>Adding <code>mzero</code> to a probability monad has tricky semantics.</p>
<p>In the particle system above, I keep the “impossible” particles and give them a weight of 0. In a real implementation, you would generally want to replace these particles by “splitting” high-probability particles in half, or by resampling the particles periodically, resetting all the weights to 1.</p>
<p>In an <a href="http://www.randomhacks.net/articles/2007/02/22/bayes-rule-and-drug-tests">earlier hack</a> I used the “missing probability” to represent the normalization factor needed by Bayes’ rule. It’s important to understand exactly how <code>guard</code> fits into your larger semantics; there are quite a few traps for the unwary.</p>
<p>At a minimum, if you’re going to build a sampling-based probability monad with an <code>mzero</code>, you’re going to have to make sure that <code>sample 200 (guard False)</code> immediately returns zero samples.</p>Fri, 25 May 2007 16:17:08 +0000
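Eric’s point about keeping zero-weight particles (rather than discarding them at an <code>mzero</code>) can be sketched with a toy enumeration-based monad. The names here (<code>WP</code>, <code>wpGuard</code>) are illustrative, not the <code>WPS</code> type from the article, and this version enumerates outcomes rather than sampling them:

```haskell
-- A minimal sketch of a weighted-particle monad, representing a
-- distribution as a list of (value, weight) pairs.
newtype WP a = WP { runWP :: [(a, Double)] }

instance Functor WP where
  fmap f (WP xs) = WP [(f x, w) | (x, w) <- xs]

instance Applicative WP where
  pure x = WP [(x, 1.0)]
  WP fs <*> WP xs = WP [(f x, wf * wx) | (f, wf) <- fs, (x, wx) <- xs]

instance Monad WP where
  WP xs >>= f = WP [(y, w * w') | (x, w) <- xs, (y, w') <- runWP (f x)]

-- Instead of dropping "impossible" particles, keep them with weight 0,
-- as the comment above describes.
wpGuard :: Bool -> WP ()
wpGuard True  = WP [((), 1.0)]
wpGuard False = WP [((), 0.0)]

-- Example: conditioning a two-point distribution.
example :: WP Int
example = do
  x <- WP [(1, 0.5), (2, 0.5)]
  wpGuard (x > 1)
  return x
-- runWP example == [(1, 0.0), (2, 0.5)]
```

Note that a sampler over this representation can see the zero weights directly, so <code>guard False</code> never forces it into a futile retry loop.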
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-461
"Robot localization using a particle system monad" by Samer Abdallah<p>I’ve been playing with sampling monads and I’ve run up against a problem which I believe will afflict this code as well. The problem comes up when we add a conditioning operator, or equivalently, if we allow an mzero into the monad. A trivial example might be something like</p>
<blockquote>
<p>y = do { u <- uniform01; guard (u >= 0.5); return u }</p>
</blockquote>
<p>or, in my code</p>
<blockquote>
<p>y = cond (>=0.5) uniform01</p>
</blockquote>
<p>This is a (not particularly good) way of sampling uniformly on [0.5,1), but you get the idea. If we now elaborate this to</p>
<blockquote>
<p>z = uniform01 >>= \u -> cond (>= 2*u) uniform01</p>
</blockquote>
<p>or</p>
<blockquote>
<p>z = do
u <- uniform01;
v <- uniform01;
guard (v >= 2*u);
return v</p>
</blockquote>
<p>we get something where, if the first sampling returns u>=0.5, the guard on the second sample
cannot succeed. A naive sampling method which tries to repeat the second sampling operation until the guard succeeds will get stuck in an infinite loop, as there is no way for it to realise that its task is hopeless and it needs to backtrack and retry the first sampling operation.</p>
<p>In your code, the same problem would occur
when calling |sample1 x| if x cannot return any
samples (due to guards being present in the definition of x).</p>
<p>As I see it, I can think of two ways around this.</p>
<p>One is to supplement the representation of the random variable to include sufficient static information about the support of the distribution so that cond can immediately reduce to mzero if the conditioning event (ie a set) does not intersect with the support.</p>
<p>The other is to have the representation be a stream of |Maybe a| and implement a kind of ‘fair’ binding
in x >>= f (or equivalently a ‘fair’ join) which <strong>interleaves</strong> the stream of samples from each application of f to samples from x. That way,
if x returns [x1,x2,...], and guards make |f x1| a stream of |Nothing|, then at some point, we will get
to try |f x2|, |f x3| and so on. This is the same as the ‘fair conjunction’ proposed by Shan et al in their
LogicT monad (see http://okmij.org/ftp/Computation/monads.html). I’m just trying it now but looks like it’s going to be very inefficient because it can waste a lot of samples and suffers
from combinatorial explosion when you start tupling up random variables.</p>
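The interleaving scheme described above can be sketched directly on lazy lists. The names (<code>interleave</code>, <code>fairBind</code>) and the <code>[Maybe a]</code> stream representation are illustrative, not Samer’s actual code:

```haskell
-- Streams of samples in which Nothing marks a rejected (guarded-out) draw.
-- interleave alternates two streams so neither can starve the other.
interleave :: [a] -> [a] -> [a]
interleave (x:xs) ys = x : interleave ys xs
interleave []     ys = ys

-- 'Fair' bind: rather than exhausting f x1 before trying f x2 (which loops
-- forever when f x1 never succeeds), alternate between the two streams.
fairBind :: [Maybe a] -> (a -> [Maybe b]) -> [Maybe b]
fairBind []             _ = []
fairBind (Nothing : xs) f = Nothing : fairBind xs f
fairBind (Just x  : xs) f = interleave (f x) (fairBind xs f)

-- Even though f 1 produces only failures forever, f 2 still gets a turn:
stuck :: Int -> [Maybe String]
stuck 1 = repeat Nothing
stuck _ = [Just "ok"]
```

Here <code>take 2 (fairBind [Just 1, Just 2] stuck)</code> yields <code>[Nothing, Just "ok"]</code>, whereas a depth-first bind would diverge inside <code>stuck 1</code>, which is exactly the hopeless-guard situation described in the comment.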
<p>Have you come up against this problem and do you have any thoughts on the matter?</p>Mon, 21 May 2007 19:44:56 +0000
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-456
"Robot localization using a particle system monad" by Eric<p>Noel & Christopher: Thanks for comments!</p>
<p>Kalman filters, <span class="caps">AFAICT</span>, are one of the few models of belief functions that don’t map cleanly to Haskell monads. So far, I haven’t been able to define a suitable version of <code>fmap</code>.</p>
<p>One especially cool thing about particle systems is that you can use a <span class="caps">GPU</span> to simulate <a href="http://www.2ld.de/gdc2004/">a million particles</a> in real-time.</p>
<p>Given the handy monadic structure of particle systems—and their ability to represent non-linear functions—it’s worth keeping particle systems in mind. With the right hardware, they’re fast, flexible and easy to program.</p>Mon, 23 Apr 2007 19:07:42 +0000
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-407
"Robot localization using a particle system monad" by Christopher Tay<p>I would like to first say that although I do not program haskell, I really enjoy your blog :) Some points I would like to add…</p>
<p>A particle filter is essentially Monte Carlo (sampling). It is also known as Markov chain Monte Carlo (MCMC). The particles are samples which represent a certain probability distribution, which in this case is over the state space of the robot. Essentially, the set of particles gives us an approximation of the probability distribution on the state of the robot. Its advantages over the Kalman filter are essentially its handling of non-linearity (of robot state transitions in this case) and its ability to represent multi-modal distributions (which is not the case for the Kalman filter, where the distribution is assumed to be Gaussian).</p>
<p>In contrast, the Viterbi algorithm, frequently associated with hidden Markov models, estimates not the probability distribution but rather the most likely solution.</p>
<p>The applicability of the two methods is not limited by whether the state space is continuous or discrete (although this often has practical implications). Both the Viterbi algorithm and particle filters can be applied to discrete and continuous spaces.</p>Mon, 23 Apr 2007 09:35:21 +0000
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-406
"Robot localization using a particle system monad" by Noel Welsh<p>Particle filters and the Viterbi algorithm are used in similar situations. The Viterbi algorithm is typically used to calculate the most likely transitions through a model given observations when the state space is discrete. Particle filters are typically used to calculate a set of high probability transitions when the state space is continuous, and not of an easily analysable form (so a Kalman filter cannot be used).</p>Mon, 23 Apr 2007 08:48:02 +0000urn:uuid:b2a7851c-e037-4332-86a5-5466c48990e4
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-405
"Robot localization using a particle system monad" by Eric<p>I’m looking forward to your blog post! I don’t know much about the Viterbi algorithm.</p>
<p>Particle systems are similar to random sampling, except that the samples are taken “in parallel” instead of one at a time. This offers two advantages:</p>
<p>1) You can maintain a series of hypotheses about the world, and update them as new evidence arrives. This is especially handy in robotics.</p>
<p>2) You can periodically reallocate your particles towards the most promising hypotheses. This is very handy in long-running computations where a large fraction of hypotheses are ruled out at each step.</p>
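The reallocation step Eric mentions in (2) is usually implemented by resampling: draw a fresh particle set in proportion to the current weights, then reset every weight to 1. A minimal sketch, taking the uniform random draws as an argument rather than wiring in a generator; all names here are illustrative:

```haskell
-- Resample a weighted particle set. 'us' supplies uniform [0,1) draws,
-- one per new particle; each draw selects an existing particle in
-- proportion to its weight, and survivors restart with weight 1.
resample :: [(a, Double)] -> [Double] -> [(a, Double)]
resample particles us = [(pick (u * total), 1.0) | u <- us]
  where
    total = sum (map snd particles)
    pick t = go t particles
    go _ [(x, _)]     = x   -- last particle absorbs rounding error
    go t ((x, w) : rest)
      | t < w         = x
      | otherwise     = go (t - w) rest
```

With particles <code>[("a", 0.0), ("b", 2.0)]</code> and draws <code>[0.25, 0.75]</code>, both new particles land on <code>"b"</code>: the zero-weight hypothesis is discarded and the promising one is duplicated.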
<p>I’ve seen nice 2D movies of particle systems, which make these ideas very intuitive. I’ll have to see if I can dig one up.</p>Sat, 21 Apr 2007 18:45:26 +0000
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-404
"Robot localization using a particle system monad" by Dan P<p>I haven’t fully read this yet but does this approach must have any relationship with the Viterbi algorithm?</p>
<p>I’m writing a blog entry on Viterbi and this stuff looks remarkably similar.</p>Fri, 20 Apr 2007 14:44:15 +0000
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad#comment-403
Robot localization using a particle system monad<p><b>Refactoring Probability Distributions:</b>
<a href="http://www.randomhacks.net/articles/2007/02/21/refactoring-probability-distributions" title="PerhapsT">part 1</a>, <a href="http://www.randomhacks.net/articles/2007/02/21/randomly-sampled-distributions" title="Random sampling">part 2</a>, <a href="http://www.randomhacks.net/articles/2007/02/22/bayes-rule-and-drug-tests" title="Bayes' rule">part 3</a>, <a href="http://www.randomhacks.net/articles/2007/03/03/smart-classification-with-haskell" title="Bayesian classification">part 4</a>, <b>part 5</b></p>
<p>Welcome to the 5th (and final) installment of <i>Refactoring Probability
Distributions!</i> Today, let’s begin with an example from
<a href="http://seattle.intel-research.net/people/jhightower/pubs/fox2003bayesian/fox2003bayesian.pdf" title="Fox and colleagues, 2003">Bayesian Filters for Location Estimation</a> (PDF), an excellent paper by Fox and colleagues.</p>
<p>In their example, we have a robot in a hallway with 3 doors. Unfortunately, we don’t know <i>where</i> in the hallway the robot is located:</p>
<p style="text-align: center"><img
src="http://www.randomhacks.net/files/robot-door-1.png" title="Robot at
first door" /></p>
<p>The vertical black lines are “particles.” Each particle represents a
possible location of our robot, chosen at random along the hallway. At
first, our particles are spread along the entire hallway (the top
row of black lines). Each particle begins life with a weight of 100%, represented by the height of the black line.</p>
<p>Now imagine that our robot has a “door sensor,” which currently tells us that
we’re in front of a door. This allows us to rule out any particle which is located <i>between</i> doors. </p>
<p>So we multiply the weight of each particle by 100% (if it’s in front of a door) or 0% (if it’s between doors), which gives us the lower row of particles. If our sensor was less accurate, we might use 90% and 10%, respectively.</p>
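Spelled out on a plain list of particles, this reweighting step is a single pointwise multiply. A sketch, where <code>doorAt</code> stands in for whatever door-sensor model you have:

```haskell
-- Multiply each particle's weight by the sensor likelihood: 1.0 in front
-- of a door, 0.0 between doors (or 0.9/0.1 for a noisier sensor).
reweight :: (Int -> Bool) -> [(Int, Double)] -> [(Int, Double)]
reweight doorAt ps =
  [ (pos, w * if doorAt pos then 1.0 else 0.0) | (pos, w) <- ps ]
```

For example, with doors at positions 10 and 20, <code>reweight (\p -> p == 10 || p == 20) [(5, 1.0), (10, 1.0)]</code> gives <code>[(5, 0.0), (10, 1.0)]</code>.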
<p>What would this example look like in Haskell? We <i>could</i> build a
giant list of particles (with weights), but that would require us to do a
lot of bookkeeping by hand. Instead, we use a <a href="http://www.randomhacks.net/articles/2007/03/12/monads-in-15-minutes" title="Monads in 15 minutes">monad</a> to hide all the
details, allowing us to work with a single particle at a time.</p>
<div class="typocode"><pre><code class="typocode_haskell "><span class='hs-definition'>localizeRobot</span> <span class='hs-keyglyph'>::</span> <span class='hs-conid'>WPS</span> <span class='hs-conid'>Int</span>
<span class='hs-definition'>localizeRobot</span> <span class='hs-keyglyph'>=</span> <span class='hs-keyword'>do</span>
<span class='hs-comment'>-- Pick a random starting location.</span>
<span class='hs-comment'>-- This will be our first particle.</span>
<span class='hs-varid'>pos1</span> <span class='hs-keyglyph'><-</span> <span class='hs-varid'>uniform</span> <span class='hs-keyglyph'>[</span><span class='hs-num'>0</span><span class='hs-keyglyph'>..</span><span class='hs-num'>299</span><span class='hs-keyglyph'>]</span>
<span class='hs-comment'>-- We know we're at a door. Particles</span>
<span class='hs-comment'>-- in front of a door get a weight of</span>
<span class='hs-comment'>-- 100%, others get 0%.</span>
<span class='hs-keyword'>if</span> <span class='hs-varid'>doorAtPosition</span> <span class='hs-varid'>pos1</span>
<span class='hs-keyword'>then</span> <span class='hs-varid'>weight</span> <span class='hs-num'>1</span>
<span class='hs-keyword'>else</span> <span class='hs-varid'>weight</span> <span class='hs-num'>0</span>
<span class='hs-comment'>-- ...</span>
</code></pre></div>
<p>What happens if our robot drives forward?</p><p><a href="http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad">Read More</a></p>Thu, 19 Apr 2007 20:43:00 +0000 Eric Kidd
http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad
Haskell, Math, Monads, Probability, Robots, ProbabilityMonads
http://www.randomhacks.net/articles/trackback/399