
The Kalman Filter: A Complete Summary

The Kalman Filter is an optimal recursive estimator. It is designed to estimate the state of a dynamic system when the available measurements are noisy and the underlying physics are not perfectly predictable.

The Core Philosophy

The filter treats every variable as a Gaussian Distribution (a bell curve).

  • The Mean ($\hat{\mathbf{x}}$) represents our best guess of the value.
  • The Covariance ($\mathbf{P}$) represents our uncertainty (the width of the bell curve).

The filter works by multiplying these Gaussian curves together. When you multiply two bell curves (a prediction and a measurement), the resulting curve is narrower than both, meaning the filter is more certain than either the model or the sensor alone.
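This fusion of two bell curves can be sketched in a few lines. The closed-form product formulas below are standard for 1-D Gaussians; the numerical values are illustrative, not from the text:

```python
# Sketch: fusing two 1-D Gaussian beliefs (prediction and measurement).
# The renormalized product of two Gaussians is again a Gaussian with
# a variance smaller than either input's.
def fuse_gaussians(mu1, var1, mu2, var2):
    """Return mean and variance of the (renormalized) Gaussian product."""
    fused_var = var1 * var2 / (var1 + var2)
    fused_mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    return fused_mu, fused_var

# Prediction: position 10.0, variance 4.0; measurement: 12.0, variance 1.0
mu, var = fuse_gaussians(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # 11.6 0.8
```

Note that the fused variance (0.8) is smaller than both inputs (4.0 and 1.0), and the fused mean lies closer to the measurement because the measurement is the more certain of the two.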

The Universal Predict-Correct Cycle

The filter operates in two distinct phases that repeat at every time step ($k$).

Phase 1: Prediction (Moving forward in time)

The filter uses its internal mathematical model to project the state into the future.

  • State Prediction: $$\hat{\mathbf{x}}_k^- = \mathbf{F}_{k-1} \hat{\mathbf{x}}_{k-1}^+ + \mathbf{B}_{k-1} \mathbf{u}_{k-1}$$ The transition matrix $\mathbf{F}_{k-1}$ defines how the state evolves from $k-1$ to $k$; the term $\mathbf{B}_{k-1} \mathbf{u}_{k-1}$ adds the effect of a known control input $\mathbf{u}_{k-1}$.
  • Uncertainty Growth: $$\mathbf{P}_k^- = \mathbf{F}_{k-1} \mathbf{P}_{k-1}^+ \mathbf{F}_{k-1}^T + \mathbf{Q}_{k-1}$$ The uncertainty at $k-1$ is propagated through the dynamics and increased by the process noise $\mathbf{Q}_{k-1}$ encountered during the transition.

Key Insight: During prediction, uncertainty always increases because of the Process Noise ($\mathbf{Q}$). We are "guessing" what happened while we weren't looking.
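The two prediction equations translate directly into code. A minimal NumPy sketch, using an assumed constant-velocity model (state = position and velocity, time step $\Delta t = 1$) as the example dynamics:

```python
import numpy as np

def predict(x, P, F, Q, B=None, u=None):
    """Prediction phase: a priori state and covariance."""
    x_pred = F @ x
    if B is not None and u is not None:
        x_pred = x_pred + B @ u          # known control input, if any
    P_pred = F @ P @ F.T + Q             # uncertainty grows by Q
    return x_pred, P_pred

# Illustrative constant-velocity model (values assumed, not from the text)
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # position += velocity * dt, dt = 1
Q = np.eye(2) * 0.01                    # small process noise
x = np.array([0.0, 1.0])                # position, velocity
P = np.eye(2)                           # current uncertainty

x_pred, P_pred = predict(x, P, F, Q)
```

After this step the total uncertainty (e.g. the trace of $\mathbf{P}$) is larger than before, matching the key insight above.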

Phase 2: Correction (Updating with data)

This phase incorporates the new measurement $\mathbf{z}_k$ received at time $k$.

  • Innovation (Measurement Residual): $$\mathbf{y}_k = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_k^-$$ The difference between the actual observation and the observation predicted by the measurement model $\mathbf{H}_k$.

  • Kalman Gain Calculation: $$\mathbf{K}_k = \mathbf{P}_k^- \mathbf{H}_k^T (\mathbf{H}_k \mathbf{P}_k^- \mathbf{H}_k^T + \mathbf{R}_k)^{-1}$$ The gain $\mathbf{K}_k$ is optimal for time $k$, balancing the current predicted uncertainty against the current sensor noise $\mathbf{R}_k$.

  • State Update: $$\hat{\mathbf{x}}_k^+ = \hat{\mathbf{x}}_k^- + \mathbf{K}_k \mathbf{y}_k$$ The final estimate for time $k$ is the weighted sum of the prediction and the measurement surprise.

  • Covariance Update: $$\mathbf{P}_k^+ = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_k^-$$ The uncertainty is updated to reflect the gain in information from the measurement $\mathbf{z}_k$.
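The four correction equations map one-to-one onto code. A sketch continuing the assumed two-state (position, velocity) example, with a position-only sensor; all numerical values are illustrative:

```python
import numpy as np

def update(x_pred, P_pred, z, H, R):
    """Correction phase: a posteriori state and covariance."""
    y = z - H @ x_pred                    # innovation (measurement residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_post = x_pred + K @ y               # state update
    P_post = (np.eye(len(x_pred)) - K @ H) @ P_pred  # covariance update
    return x_post, P_post

# Assumed example: predicted state/covariance and one noisy position reading
x_pred = np.array([1.0, 1.0])
P_pred = np.array([[2.01, 1.0],
                   [1.0, 1.01]])
H = np.array([[1.0, 0.0]])                # we observe position only
R = np.array([[0.25]])                    # sensor noise variance
x_post, P_post = update(x_pred, P_pred, np.array([1.2]), H, R)
```

The updated estimate is pulled toward the measurement, and the trace of the covariance shrinks, reflecting the information gained.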

Why is it "Optimal"?¶

If your model is linear and the noise covariances are known, the Kalman Filter is the Best Linear Unbiased Estimator (BLUE): no linear estimator can achieve a lower mean-squared error with the information provided. If the noise is additionally Gaussian (normally distributed), it is optimal among all estimators, linear or not.

The Kalman Filter: Complete Algorithm (Time-Varying Notation)

This table summarizes the two-step recursive process used to estimate the state $\mathbf{x}$ at time $k$.

| Step | Phase | Mathematical Operation | Physical Intuition |
|------|-------|------------------------|--------------------|
| 1 | State Predict | $\hat{\mathbf{x}}_k^- = \mathbf{F}_{k-1} \hat{\mathbf{x}}_{k-1}^+ + \mathbf{B}_{k-1} \mathbf{u}_{k-1}$ | Use physics to guess the new state based on the past. |
| 2 | Covariance Predict | $\mathbf{P}_k^- = \mathbf{F}_{k-1} \mathbf{P}_{k-1}^+ \mathbf{F}_{k-1}^T + \mathbf{Q}_{k-1}$ | Account for how uncertainty grows during movement. |
| 3 | Compute Gain | $\mathbf{K}_k = \mathbf{P}_k^- \mathbf{H}_k^T (\mathbf{H}_k \mathbf{P}_k^- \mathbf{H}_k^T + \mathbf{R}_k)^{-1}$ | Determine the "trust ratio" between model and sensor. |
| 4 | State Update | $\hat{\mathbf{x}}_k^+ = \hat{\mathbf{x}}_k^- + \mathbf{K}_k (\mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_k^-)$ | Correct the guess using the new measurement $\mathbf{z}_k$. |
| 5 | Covariance Update | $\mathbf{P}_k^+ = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_k^-$ | Shrink uncertainty now that we have more information. |
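The five steps above form one recursive loop. A minimal end-to-end sketch, assuming a 1-D constant-velocity tracking problem with simulated noisy position measurements (all matrices and numerical values are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # transition matrix (constant velocity)
H = np.array([[1.0, 0.0]])              # sensor measures position only
Q = np.eye(2) * 0.001                   # process noise
R = np.array([[0.5]])                   # measurement noise variance

x = np.array([0.0, 1.0])                # true state: position 0, velocity 1
x_hat = np.array([0.0, 0.0])            # filter's initial guess (wrong velocity)
P = np.eye(2) * 10.0                    # large initial uncertainty

for k in range(20):
    # Simulate the true system and a noisy measurement z_k
    x = F @ x
    z = H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # Steps 1-2: predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # Steps 3-5: gain, state update, covariance update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print(x_hat)  # estimate converges toward the true state [20.0, 1.0]
```

Even though the filter never observes velocity directly and starts with a wrong guess, the recursion recovers it from the sequence of position measurements, which is exactly the point of estimating the full state $\mathbf{x}$ rather than the raw sensor values.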

Mathematical Conventions Used:

  • The Minus Superscript ($\,^-$): Represents the a priori (predicted) estimate. It is our best guess before incorporating the measurement at time $k$.
  • The Plus Superscript ($\,^+$): Represents the a posteriori (updated) estimate. It is our best guess after incorporating the measurement.
  • The $k-1$ Index: Refers to matrices or states associated with the previous time step or the transition from the previous step to the current one.
  • The $k$ Index: Refers to the current time step and current measurements.