Introduction to Continuous Convolution¶

Continuous convolution is the mathematical operation that plays the same role for continuous probability density functions (PDFs) as discrete convolution plays for discrete probability distributions. It is the defining operation for the prediction step of the Kalman Filter and other continuous state estimators.

Definition and Notation¶

Given two continuous functions, $f(x)$ and $g(x)$, their continuous convolution, denoted by $(f * g)$, is defined by the following integral:

$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau) \cdot g(t - \tau) \, d\tau$$

Where:

  • $f(\tau)$ and $g(t - \tau)$ are the two continuous functions being combined.
  • The integral sign ($\int$) is the continuous equivalent of the summation ($\sum$) used in discrete convolution.
  • $\tau$ (tau) is the dummy variable of integration, analogous to the index $k$ in the discrete sum.
  • $t$ is the final output point, analogous to the index $n$.
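The integral above can be approximated numerically: on a fine grid, it becomes a Riemann sum, i.e. a discrete convolution scaled by the grid spacing. A minimal sketch (the choice of two unit-area box pulses is purely illustrative, not from the text):

```python
import numpy as np

# Grid for the dummy variable tau and the output point t
dx = 0.01
x = np.arange(-5, 5, dx)

# Hypothetical example functions: two unit-area box pulses on [-0.5, 0.5]
f = np.where(np.abs(x) <= 0.5, 1.0, 0.0)
g = np.where(np.abs(x) <= 0.5, 1.0, 0.0)

# np.convolve computes the discrete sum over tau; multiplying by dx
# turns it into an approximation of the continuous integral.
conv = np.convolve(f, g, mode="same") * dx

# Convolving two unit boxes yields a triangle with peak value ~1.0,
# and the result still integrates to area(f) * area(g) ~ 1.0.
print(conv.max())
print(conv.sum() * dx)
```

The `dx` factor is the key difference from purely discrete convolution: it is what makes the sum converge to the integral as the grid is refined.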

Convolution and Sums of Random Variables¶

In the context of probability, continuous convolution is the operation used to find the PDF of the sum of two independent continuous random variables.

If you have:

  1. A random variable $X$ with PDF $f(x)$ (e.g., the robot's previous position).
  2. An independent random variable $Y$ with PDF $g(y)$ (e.g., the random motion displacement).

Then the PDF of the new random variable $Z = X + Y$ (the new position) is the convolution of their individual PDFs:

$$p_Z(z) = (f * g)(z) = \int_{-\infty}^{\infty} f(\tau) \cdot g(z - \tau) \, d\tau$$
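This can be checked numerically for the robot example: if the previous position and the motion displacement are both Gaussian, the convolution of their PDFs is again Gaussian, with the means and the variances adding. A sketch with hypothetical parameter values (the specific means and sigmas are illustrative, not from the text):

```python
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)

def gaussian(x, mu, sigma):
    """PDF of a normal distribution N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# X ~ N(1, 0.5^2): e.g. the robot's previous position (hypothetical values)
f = gaussian(x, mu=1.0, sigma=0.5)
# Y ~ N(2, 0.7^2): e.g. the random motion displacement
g = gaussian(x, mu=2.0, sigma=0.7)

# PDF of Z = X + Y via numerical convolution (dx factor approximates the integral)
p_z = np.convolve(f, g, mode="same") * dx

# Empirical mean and variance of Z from its PDF
mean_z = np.sum(x * p_z) * dx
var_z = np.sum((x - mean_z) ** 2 * p_z) * dx

# Theory predicts Z ~ N(1 + 2, 0.5^2 + 0.7^2): mean ~3.0, variance ~0.74
print(mean_z, var_z)
```

This is exactly the structure of the Kalman prediction step: the prior position uncertainty and the motion noise combine, and the predicted state is *more* uncertain than either input alone (variances add).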
