
Linear Independence, Column-Space, Null-Space and Rank of a Matrix¶

Matrix as a linear transformation of a vector¶

Matrix-Vector Multiplication as a Linear Combination¶

The matrix-vector product $\mathbf{A} \mathbf{x}$ is fundamentally a linear combination of the columns of $\mathbf{A}$, where the components of $\mathbf{x}$ serve as the scalar weights. Consider a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ and a vector $\mathbf{x} \in \mathbb{R}^n$:

$$\mathbf{A} \mathbf{x} = \begin{pmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n$$

For example, given:

$$\mathbf{A} \mathbf{x} = \begin{pmatrix} 2 & 0 \\ 1 & 1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_1 \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} + x_2 \begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2$$
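This column view can be checked numerically. A minimal sketch using NumPy, with the matrix from the example above and illustrative weights $x_1 = 2$, $x_2 = -1$:

```python
import numpy as np

# Columns of A from the example above
A = np.array([[2, 0],
              [1, 1],
              [1, 3]])
x = np.array([2, -1])  # illustrative weights x1 = 2, x2 = -1

# Matrix-vector product
product = A @ x

# The same result, written explicitly as a linear combination of the columns
combination = x[0] * A[:, 0] + x[1] * A[:, 1]

print(product)                               # [ 4  1 -1]
print(np.array_equal(product, combination))  # True
```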

Linear Independence¶

A set of vectors $\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n\}$ is linearly independent if no vector in the set can be written as a linear combination of the others. Equivalently, the equation

$$x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \dots + x_n \mathbf{a}_n = \mathbf{0}$$

has only the trivial solution: $x_1 = x_2 = \dots = x_n = 0$.

If there exists a set of weights where at least one $x_i \neq 0$ that satisfies the equation, the vectors are linearly dependent. In such a case, at least one vector is "redundant" and does not contribute a new dimension to the span.
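In practice, independence of a set of column vectors can be tested by stacking them into a matrix and comparing the rank with the number of columns (a sketch using NumPy's standard `matrix_rank` routine):

```python
import numpy as np

# Columns of A from the running example
A = np.array([[2, 0],
              [1, 1],
              [1, 3]])

# The columns are independent iff the rank equals the number of columns
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # True
```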

The Span¶

The span of a set of vectors $\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_n\}$ is the collection of all vectors that can be reached by forming linear combinations of that set. Any vector $\mathbf{b}$ in the span can be written as

$$\mathbf{b} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \dots + x_n \mathbf{a}_n$$

for some real scalars $x_i$.

Geometric Intuition

The span describes the "geometric footprint" created by the vectors:

  • One non-zero vector: Its span is a 1D line passing through the origin.
  • Two linearly independent vectors: Their span is a 2D plane passing through the origin.
  • Three linearly independent vectors: Their span is the entire 3D space ($\mathbb{R}^3$).

The Role of Redundancy

If a set of vectors is linearly dependent, adding a redundant vector does not expand the span. For example, if $\mathbf{a}_3$ is already a linear combination of $\mathbf{a}_1$ and $\mathbf{a}_2$, then the span of $\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$ is exactly the same as the span of $\{\mathbf{a}_1, \mathbf{a}_2\}$. In this case, $\mathbf{a}_3$ does not "contribute a new dimension."
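This can be seen numerically: the dimension of the span (the rank of the stacked matrix) does not grow when a redundant vector is added. A sketch with NumPy, using $\mathbf{a}_3 = 2\mathbf{a}_1 - \mathbf{a}_2$ as an illustrative dependent vector:

```python
import numpy as np

a1 = np.array([2, 1, 1])
a2 = np.array([0, 1, 3])
a3 = 2 * a1 - a2  # redundant: a linear combination of a1 and a2

# The rank (dimension of the span) is unchanged by adding a3
rank_two = np.linalg.matrix_rank(np.column_stack([a1, a2]))
rank_three = np.linalg.matrix_rank(np.column_stack([a1, a2, a3]))
print(rank_two, rank_three)  # 2 2
```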

The Null Space: $N(\mathbf{A})$¶

The null space of a matrix $\mathbf{A}$ is the set of all vectors $\mathbf{x}$ that satisfy the homogeneous equation: $$\mathbf{A}\mathbf{x} = \mathbf{0}$$

Connection between Independence and the Null Space

This is where the concepts of independence and the null space converge:

  • Linearly Independent Columns: The only way to combine the columns to reach $\mathbf{0}$ is to multiply them all by zero. Therefore, $N(\mathbf{A})$ contains only the zero vector ($\mathbf{x} = \mathbf{0}$).
  • Linearly Dependent Columns: There exist non-zero weights that can combine the columns to reach $\mathbf{0}$. Therefore, $N(\mathbf{A})$ contains non-zero vectors, forming a line, plane, or higher-dimensional subspace.
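Both cases can be observed numerically. A sketch that computes an orthonormal basis of $N(\mathbf{A})$ from the SVD (the helper `null_space` is written here for illustration; its rows-of-`Vt` construction is a standard technique):

```python
import numpy as np

def null_space(A, tol=1e-12):
    """Orthonormal basis of N(A), computed from the SVD.

    Rows of Vt whose singular value is (near) zero span the null space.
    """
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# Independent columns: the null space contains only the zero vector
A = np.array([[2, 0],
              [1, 1],
              [1, 3]])
print(null_space(A).shape)  # (2, 0) -> no basis vectors, N(A) = {0}

# Dependent columns (third column = first + second): a 1D null space
B = np.column_stack([A, A[:, 0] + A[:, 1]])
ns = null_space(B)
print(ns.shape)             # (3, 1)
print(np.allclose(B @ ns, 0))  # True
```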

The Column Space: $C(\mathbf{A})$¶

While the null space describes the "redundancy" of the input, the column space describes the "reach" of the output: it is the set of all possible linear combinations of the columns of $\mathbf{A}$.

The column space (or range) of a matrix $\mathbf{A}$ is denoted $C(\mathbf{A})$. It is always a subspace of $\mathbb{R}^m$; if the $n$ columns are linearly independent, it has dimension $n$. In the example above, $C(\mathbf{A})$ is the plane in $\mathbb{R}^3$ spanned by $\mathbf{a}_1$ and $\mathbf{a}_2$.

For the linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$ to be consistent (possess at least one solution), the vector $\mathbf{b}$ must reside within the column space:

Existence Condition: $\mathbf{A}\mathbf{x} = \mathbf{b}$ has a solution if and only if $\mathbf{b} \in C(\mathbf{A})$.

If $\mathbf{b}$ lies outside this subspace, no combination of the columns of $\mathbf{A}$ can reach it, rendering the system inconsistent.
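The existence condition can be tested numerically: $\mathbf{b} \in C(\mathbf{A})$ exactly when appending $\mathbf{b}$ as an extra column does not increase the rank. A sketch with NumPy (the two test vectors are chosen for illustration):

```python
import numpy as np

A = np.array([[2, 0],
              [1, 1],
              [1, 3]])

def is_consistent(A, b):
    """Ax = b is solvable iff rank([A | b]) == rank(A), i.e. b lies in C(A)."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_in = A @ np.array([1, 2])        # by construction inside C(A)
b_out = np.array([1.0, 0.0, 0.0])  # not a combination of the columns

print(is_consistent(A, b_in))   # True
print(is_consistent(A, b_out))  # False
```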

If there are non-zero vectors in the null space, then any solution to $\mathbf{A}\mathbf{x} = \mathbf{b}$ is not unique, as any vector $\mathbf{x}_{null} \in N(\mathbf{A})$ can be added to a particular solution $\mathbf{x}_p$ without changing the result:

$$\mathbf{A}(\mathbf{x}_p + \mathbf{x}_{null}) = \mathbf{A}\mathbf{x}_p + \mathbf{A}\mathbf{x}_{null} = \mathbf{b} + \mathbf{0} = \mathbf{b}$$
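This non-uniqueness is easy to demonstrate numerically. A sketch with NumPy, using a matrix whose third column is the sum of the first two (so $N(\mathbf{B})$ is non-trivial); the particular solution is chosen for illustration:

```python
import numpy as np

# Third column = first column + second column, so N(B) is non-trivial
B = np.array([[2, 0, 2],
              [1, 1, 2],
              [1, 3, 4]])

x_p = np.array([1.0, 1.0, 0.0])      # a particular solution
b = B @ x_p

x_null = np.array([1.0, 1.0, -1.0])  # B @ x_null = 0, since col3 = col1 + col2
print(np.allclose(B @ x_null, 0))          # True
print(np.allclose(B @ (x_p + x_null), b))  # True: x_p + x_null also solves Bx = b
```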

Relation to Span

The Column Space $C(\mathbf{A})$ of a matrix is defined as the span of its column vectors. Therefore, when we ask if the equation $\mathbf{A}\mathbf{x} = \mathbf{b}$ has a solution, we are asking a geometric question: "Is the vector $\mathbf{b}$ located within the span of the columns of $\mathbf{A}$?"

Rank of a Matrix¶

The rank of a matrix, denoted as $\text{rank}(\mathbf{A})$, is a single number that summarizes the "dimensions" of the information contained within the matrix. It is formally defined in two equivalent ways:

  • The dimension of the column space $C(\mathbf{A})$ (the number of linearly independent columns).
  • The dimension of the row space $C(\mathbf{A}^\top)$ (the number of linearly independent rows).

In our earlier example, because $\mathbf{a}_1$ and $\mathbf{a}_2$ are not multiples of one another, they are linearly independent. Therefore, $\text{rank}(\mathbf{A}) = 2$.
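This can be confirmed numerically; NumPy's `matrix_rank` also illustrates that column rank and row rank agree:

```python
import numpy as np

A = np.array([[2, 0],
              [1, 1],
              [1, 3]])

# Column rank equals row rank: both are 2 here
r_cols = np.linalg.matrix_rank(A)
r_rows = np.linalg.matrix_rank(A.T)
print(r_cols, r_rows)  # 2 2
```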

Literature¶

  • Gilbert Strang, Linear Algebra and Learning from Data, Wellesley-Cambridge Press, 2019
  • Jim Hefferon, Linear Algebra, Saint Michael's College, USA, 2017
  • Gilbert Strang, Introduction to Linear Algebra, Wellesley-Cambridge Press, 2016
  • Philip Klein, Coding the Matrix: Linear Algebra through Applications to Computer Science, Newtonian Press, 2013