Our representation of $P(X)$ is now a list of $N$ particles (samples)
Generally, $N \ll |X|$
Storing a map from $X$ to counts would defeat the point
$P(x)$ is approximated by the fraction of particles with value $x$
So, many $x$ may have $P(x) = 0$
More particles, more accuracy
For now, all particles have a weight of 1
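A minimal sketch of this representation in Python; the grid of states and the value of $N$ here are hypothetical:

```python
import random

# A belief over X is a list of N particles (samples), not an explicit table.
N = 100
states = [(r, c) for r in range(3) for c in range(3)]  # hypothetical 3x3 grid

# Initialize with N particles, e.g. drawn uniformly; each has implicit weight 1.
particles = [random.choice(states) for _ in range(N)]

# P(x) is approximated by the fraction of particles with value x,
# so any x with no particles gets P(x) = 0.
def belief(x, particles):
    return particles.count(x) / len(particles)
```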
Particle Filtering: Elapse Time
Each particle is moved by sampling its next position from the transition model
\[x' = \text{sample}(P(X'|x))\]
This is like prior sampling – samples’ frequencies reflect the transition probabilities
Here, most samples move clockwise, but some move in another direction or stay in place
This captures the passage of time
If enough samples, close to exact values before and after (consistent)
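A minimal sketch of the elapse-time step, assuming a hypothetical `transition_model(x)` that returns successor states and their probabilities:

```python
import random

def elapse_time(particles, transition_model):
    # transition_model(x) -> (successors, probs): hypothetical interface for P(X'|x).
    new_particles = []
    for x in particles:
        successors, probs = transition_model(x)
        # x' = sample(P(X'|x)); sample frequencies reflect the transition probabilities
        new_particles.append(random.choices(successors, weights=probs, k=1)[0])
    return new_particles
```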
Particle Filtering: Observe
Slightly trickier:
Do not sample the observation; fix it to the evidence we actually saw
Similar to likelihood weighting, downweight samples based on the evidence
\begin{align*}
w(x) &= P(e|x) \\
B(X) &\propto P(e|X)B'(X)
\end{align*}
As before, the weights don't sum to one, since all samples have been downweighted (in fact, they now sum to $N$ times an approximation of $P(e)$)
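A minimal sketch of the weighting step, assuming a hypothetical `emission_prob(e, x)` that returns $P(e|x)$:

```python
def observe(particles, evidence, emission_prob):
    # Fix the evidence (do not sample it) and downweight each particle,
    # as in likelihood weighting: w(x) = P(e|x).
    weights = [emission_prob(evidence, x) for x in particles]
    # These weights do not sum to 1; they sum to (N times) an approximation of P(e).
    return weights
```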
Particle Filtering: Resample
Rather than tracking weighted samples, we resample
We draw $N$ times from our weighted sample distribution (i.e., sample with replacement)
This is equivalent to renormalizing the distribution
Now the update is complete for this time step, continue with the next one
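A minimal sketch of resampling as $N$ draws with replacement from the weighted sample distribution:

```python
import random

def resample(particles, weights):
    # Draw N new particles with replacement; this renormalizes the
    # distribution, and all weights implicitly return to 1.
    return random.choices(particles, weights=weights, k=len(particles))
```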
Summary: Particle Filtering
Particles: track samples of states rather than an explicit distribution
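Putting the pieces together, one complete update step (using the helper sketches above) might look like:

```python
def particle_filter_step(particles, evidence, transition_model, emission_prob):
    # One full time step: elapse time, weight by the fixed evidence, resample.
    particles = elapse_time(particles, transition_model)
    weights = observe(particles, evidence, emission_prob)
    return resample(particles, weights)
```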
Robot Localization
In robot localization:
We know the map, but not the robot’s position
Observations may be vectors of range finder readings
The state space and readings are typically continuous (works basically like a very fine grid), so we cannot store $B(X)$ exactly
Particle filtering is a main technique
Particle Filter Localization (Sonar)
Robot Mapping
SLAM: Simultaneous Localization And Mapping
We do not know the map or our location
State consists of position AND map
Main techniques: Kalman filtering (Gaussian HMMs) and particle methods
Particle Filter SLAM - Video 1
Particle Filter SLAM - Video 2
Dynamic Bayes Nets (DBNs)
We want to track multiple variables over time, using multiple sources of evidence
Idea: Repeat a fixed Bayes net structure at each time
Variables from time $t$ can condition on those from $t-1$
Dynamic Bayes nets are a generalization of HMMs
Exact Inference in DBNs
Variable elimination applies to dynamic Bayes nets
Procedure: "unroll" the network for T time steps, then eliminate variables until $P(X_T|e_{1:T})$ is computed
Online belief updates: Eliminate all variables from the previous time step; store factors for current time only
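For the special case where the DBN is an HMM, the online belief update reduces to a matrix-vector recurrence; a minimal sketch with hypothetical tabular models:

```python
import numpy as np

def online_belief_update(B, T, e_likelihood):
    # B[i] = current belief P(X_t = i | e_{1:t}); sums to 1.
    # T[i, j] = P(X_{t+1} = j | X_t = i): transition matrix.
    # e_likelihood[j] = P(e_{t+1} | X_{t+1} = j): fixed new evidence.
    B_prime = B @ T                 # eliminate the previous time step's variable
    B_new = e_likelihood * B_prime  # weight by the evidence
    return B_new / B_new.sum()      # renormalize; only current factors are stored
```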
DBN Particle Filters
A particle is a complete sample for a time step
Initialize: Generate prior samples for the $t=1$ Bayes net
Example particle: $G_1^a = (3,3)$, $G_1^b = (5,3)$
Elapse time: Sample a successor for each particle
Example successor: $G_2^a = (2,3)$, $G_2^b = (6,3)$
Observe: Weight each entire sample by the likelihood of the evidence conditioned on the sample
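A minimal sketch of one DBN particle-filter step, where each particle is a complete joint sample (e.g. the positions $G^a$ and $G^b$ of two ghosts); `sample_successor` and `joint_likelihood` are hypothetical interfaces:

```python
import random

def dbn_particle_filter_step(particles, evidence, sample_successor, joint_likelihood):
    # Elapse time: sample a successor for every particle (a complete sample,
    # e.g. (G_t^a, G_t^b) -> (G_{t+1}^a, G_{t+1}^b)).
    particles = [sample_successor(p) for p in particles]
    # Observe: weight each ENTIRE sample by P(evidence | sample).
    weights = [joint_likelihood(evidence, p) for p in particles]
    # Resample with replacement from the weighted sample distribution.
    return random.choices(particles, weights=weights, k=len(particles))
```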
In speech recognition, finding the words given the acoustics is an HMM inference problem
Which state sequence $x_{1:T}$ is most likely given the evidence $e_{1:T}$?
\[x^*_{1:T} = \underset{x_{1:T}}{\operatorname{arg max}}P(x_{1:T}|e_{1:T}) = \underset{x_{1:T}}{\operatorname{arg max}} P(x_{1:T},e_{1:T})\]
From the sequence $x^*_{1:T}$, we can simply read off the words
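This arg max can be computed by the Viterbi algorithm; a minimal sketch assuming hypothetical tabular prior, transition, and emission models, with observations given as indices:

```python
import numpy as np

def viterbi(prior, T, E, evidence):
    # prior[i] = P(X_1 = i); T[i, j] = P(X_{t+1} = j | X_t = i);
    # E[i, e] = P(e | X = i); evidence = observed indices e_1..e_T.
    m = np.log(prior) + np.log(E[:, evidence[0]])  # best log-prob ending in each state
    backpointers = []
    for e in evidence[1:]:
        scores = m[:, None] + np.log(T)        # scores[i, j]: come from i, land in j
        backpointers.append(scores.argmax(axis=0))
        m = scores.max(axis=0) + np.log(E[:, e])
    # Follow back-pointers from the best final state to recover x*_{1:T}.
    path = [int(m.argmax())]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    return list(reversed(path))
```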