
Section 4.3 Matrices over \(\CV{n}\)

We begin with the traditional definition of a matrix, but we will quickly put it to a non-traditional use to provide motivation for matrix arithmetic.

Definition 4.3.1. Matrix over \(\Comps\).

Consider a set
\begin{equation*} \set{\vec{A}_c:c\in\set{0,1,\dotsc,n-1}}\subset\CV{m}\text{.} \end{equation*}
We denote
\begin{equation*} A = \left[\vec{A}_0|\vec{A}_1|\cdots|\vec{A}_{n-1}\right] \end{equation*}
to be the \(m\times n\) matrix whose \(c\)th column is the vector \(\vec{A}_c\text{,}\) for \(c\in\set{0,1,\dotsc,n-1}\text{.}\) We extend our bracket notation from vectors as follows:
\begin{equation*} \entry{A}{r,c} = \entry{\vec{A}_c}{r}\text{.} \end{equation*}
To remove the bracket notation, we may define \(a_{r,c}=\entry{\vec{A}_c}{r}\in\Comps\text{,}\) so that
\begin{equation*} A = \begin{bmatrix} a_{0,0} \amp a_{0,1} \amp a_{0,2} \amp \cdots \amp a_{0,n-1} \\ a_{1,0} \amp a_{1,1} \amp a_{1,2} \amp \cdots \amp a_{1,n-1} \\ \vdots \amp \vdots \amp \vdots \amp \ddots \amp \vdots \\ a_{m-1,0} \amp a_{m-1,1} \amp a_{m-1,2} \amp \cdots \amp a_{m-1,n-1} \end{bmatrix}\text{.} \end{equation*}
For a matrix \(A\in\Mats{m,n}\) we define the row dimension of \(A\) to be \(m\) and the column dimension of \(A\) to be \(n\text{.}\)
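To illustrate the notation, the three columns below assemble into a matrix in \(\Mats{2,3}\text{,}\) and the bracket notation picks out individual entries:
\begin{equation*} A = \left[\vec{A}_0|\vec{A}_1|\vec{A}_2\right] = \begin{bmatrix} 1 \amp 2 \amp 3 \\ 4i \amp 5 \amp 6 \end{bmatrix}, \qquad \entry{A}{1,2} = \entry{\vec{A}_2}{1} = 6\text{.} \end{equation*}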
Our use of the bracket notation allows the following theorem.

Subsection 4.3.1 Matrix-vector products

Matrices are more than a convenient way to store rectangular data; to develop their most useful features we need another idea from vector theory.

Definition 4.3.3. Linear combination and span of a set.

Let \(S=\set{\vv_0,\vv_1,\vv_2,\dotsc,\vv_{\ell-1}}\subset\CV{n}\) be a set of \(\ell\) complex vectors each of length \(n\text{.}\) A vector \(\vec{w}\in\CV{n}\) is a linear combination from the set \(S\) if and only if there is a sequence \((z_0,z_1,z_2,\dotsc,z_{\ell-1})\) of complex numbers such that
\begin{equation*} z_0\vv_0+z_1\vv_1+\cdots+z_{\ell-1}\vv_{\ell-1} = \vec{w}\text{.} \end{equation*}
The set of all such linear combinations from \(S\) is the span of \(S\text{,}\) denoted \(\spanset{S}\text{.}\)
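As a small concrete illustration in \(\CV{2}\text{,}\) taking \(z_0=2\) and \(z_1=i\) gives
\begin{equation*} 2\begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2+i \\ i \end{bmatrix}\text{,} \end{equation*}
so the vector on the right is a linear combination from the two vectors on the left, and therefore lies in their span.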

Example 4.3.4. The standard unit basis.

An interesting example of the span of a set arises from looking at a special set of vectors \(S=\set{\vec{e}_k:k\in\set{0,1,\dotsc,n-1}}\subset \CV{n}\text{,}\) where \(\vec{e}_k\) is the unique vector which satisfies
\begin{equation*} \entry{\vec{e}_k}{r} = \begin{cases} 1 \amp r=k \\ 0 \amp r\neq k \end{cases}\text{.} \end{equation*}
With these vectors and any other vector \(\vv\in\CV{n}\) which satisfies \(\entry{\vv}{r} = z_r\in\Comps\text{,}\) we can write
\begin{align*} \entry{\vv}{r} \amp= z_r = (n-1)\cdot 0 + 1\cdot z_r\\ \amp= z_0\entry{\vec{e}_0}{r} + z_1\entry{\vec{e}_1}{r} + \cdots + z_r\entry{\vec{e}_r}{r} + \cdots + z_{n-1}\entry{\vec{e}_{n-1}}{r}\\ \amp= \entry{z_0\vec{e}_0}{r} + \entry{z_1\vec{e}_1}{r} + \cdots + \entry{z_r\vec{e}_r}{r} + \cdots + \entry{z_{n-1}\vec{e}_{n-1}}{r}\\ \amp= \entry{\sum_{k=0}^{n-1}z_k\vec{e}_k}{r}\text{.} \end{align*}
Thus \(\spanset{S} = \CV{n}\text{,}\) which means every vector can be written as a linear combination from \(S\text{.}\)

Definition 4.3.5. Matrix-vector product.

Let \(\vec{A}_0,\vec{A}_1,\dotsc,\vec{A}_{n-1}\in\CV{m}\) be vectors such that
\begin{equation*} A=\left[\vec{A}_0|\vec{A}_1|\cdots|\vec{A}_{n-1}\right]\in\Mats{m,n} \end{equation*}
and let \(\vv\in\CV{n}\text{.}\) Then the matrix-vector product \(A\vv\) is the linear combination
\begin{equation*} A\vv = \vec{A}_0\entry{\vv}{0} + \vec{A}_1\entry{\vv}{1} + \cdots + \vec{A}_{n-1}\entry{\vv}{n-1}\in\CV{m}\text{.} \end{equation*}
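For example, with the \(2\times 3\) matrix and the vector of length \(3\) below, the product is the indicated linear combination of the columns:
\begin{equation*} \begin{bmatrix} 1 \amp 2 \amp 0 \\ 0 \amp 1 \amp 3 \end{bmatrix} \begin{bmatrix} 2 \\ i \\ 1 \end{bmatrix} = 2\begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 2 \\ 1 \end{bmatrix} + 1\begin{bmatrix} 0 \\ 3 \end{bmatrix} = \begin{bmatrix} 2+2i \\ i+3 \end{bmatrix}\text{.} \end{equation*}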
Bracket notation gives us another way to think of the matrix-vector product.
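Indeed, unpacking Definition 4.3.5 one entry at a time gives, for each \(r\in\set{0,1,\dotsc,m-1}\text{,}\)
\begin{equation*} \entry{A\vv}{r} = \sum_{c=0}^{n-1}\entry{\vec{A}_c}{r}\entry{\vv}{c} = \sum_{c=0}^{n-1}\entry{A}{r,c}\entry{\vv}{c}\text{,} \end{equation*}
which is the familiar description of \(\entry{A\vv}{r}\) as row \(r\) of \(A\) times \(\vv\text{.}\)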
More interesting than the product itself is the behavior we obtain by fixing the matrix and letting the vector vary: the function \(T_A:\CV{n}\to\CV{m}\) defined by \(T_A(\vv) = A\vv\) is a linear transformation.

Proof.

Suppose \(\vv_1,\vv_2\in\CV{n}\) and \(z\in\Comps\text{,}\) and assume \(\vec{A}_0,\vec{A}_1,\dotsc,\vec{A}_{n-1}\in\CV{m}\) are vectors such that
\begin{equation*} A=\left[\vec{A}_0|\vec{A}_1|\cdots|\vec{A}_{n-1}\right]\text{.} \end{equation*}
Then
\begin{align*} T_A(\vv_1+\vv_2) \amp = A(\vv_1+\vv_2)\\ \amp = \entry{\vv_1+\vv_2}{0}\vec{A}_0 + \cdots + \entry{\vv_1+\vv_2}{n-1}\vec{A}_{n-1}\\ \amp = \left(\entry{\vv_1}{0}+\entry{\vv_2}{0}\right)\vec{A}_0 + \cdots + \left(\entry{\vv_1}{n-1}+\entry{\vv_2}{n-1}\right)\vec{A}_{n-1}\\ \amp = \entry{\vv_1}{0}\vec{A}_0+\cdots+\entry{\vv_1}{n-1}\vec{A}_{n-1} + \entry{\vv_2}{0}\vec{A}_0+\cdots+\entry{\vv_2}{n-1}\vec{A}_{n-1}\\ \amp = A\vv_1 + A\vv_2 = T_A(\vv_1) + T_A(\vv_2)\text{.} \end{align*}
Also,
\begin{align*} T_A(z\vv_1) \amp = A(z\vv_1)\\ \amp = \entry{z\vv_1}{0}\vec{A}_0 + \cdots + \entry{z\vv_1}{n-1}\vec{A}_{n-1}\\ \amp = z\entry{\vv_1}{0}\vec{A}_0 + \cdots + z\entry{\vv_1}{n-1}\vec{A}_{n-1}\\ \amp = z\left(\entry{\vv_1}{0}\vec{A}_0 + \cdots + \entry{\vv_1}{n-1}\vec{A}_{n-1}\right)\\ \amp = zA\vv_1 = zT_A(\vv_1)\text{.} \end{align*}
Hence \(T_A\) is a linear transformation.
This might not be surprising, given that we established \(\Mats{m,n}\) as a vector space. The following result is perhaps more surprising: every linear transformation \(L:\CV{n}\to\CV{m}\) is of the form \(T_A\) for some matrix \(A\in\Mats{m,n}\text{.}\)

Proof.

For each \(c\in\set{0,\dotsc,n-1}\text{,}\) let \(\vec{e}_c\in\CV{n}\) be the vector satisfying
\begin{equation*} \entry{\vec{e}_c}{r} = \begin{cases} 0 \amp \text{if }r\neq c \\ 1 \amp \text{if }r=c \end{cases}\text{.} \end{equation*}
Further, define \(\vec{A}_c = L(\vec{e}_c)\) for each \(c\text{.}\) It is left as an exercise to the reader to verify that the matrix
\begin{equation*} A=\left[\vec{A}_0|\vec{A}_1|\cdots|\vec{A}_{n-1}\right] \end{equation*}
satisfies \(T_A = L\text{.}\)

Subsection 4.3.2 Matrix-matrix products

Since we have established that left-multiplication of a vector by a matrix (the matrix-vector product) is actually the same as an application of a function to a vector, it should be unsurprising that matrix-matrix products act like function composition!

Definition 4.3.9. Matrix-matrix product.

Let \(m,n,p,q\in\Zp\text{,}\) \(A\in\Mats{m,n}\text{,}\) and
\begin{equation*} B = \left[\vec{B}_0|\vec{B}_1|\cdots|\vec{B}_{q-1}\right]\in\Mats{p,q}\text{.} \end{equation*}
Then the matrix-matrix product \(AB\) exists if and only if \(n=p\text{.}\) When \(n=p\text{,}\) then
\begin{equation*} AB = \left[A\vec{B}_0|A\vec{B}_1|\cdots|A\vec{B}_{q-1}\right]\text{.} \end{equation*}
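When \(n=p\text{,}\) this definition bears out the composition remark above: for any \(\vv\in\CV{q}\text{,}\) linearity of \(T_A\) gives
\begin{align*} (AB)\vv \amp= \left(A\vec{B}_0\right)\entry{\vv}{0} + \left(A\vec{B}_1\right)\entry{\vv}{1} + \cdots + \left(A\vec{B}_{q-1}\right)\entry{\vv}{q-1}\\ \amp= A\left(\vec{B}_0\entry{\vv}{0} + \vec{B}_1\entry{\vv}{1} + \cdots + \vec{B}_{q-1}\entry{\vv}{q-1}\right) = A(B\vv)\text{,} \end{align*}
so that \(T_{AB} = T_A\circ T_B\text{.}\)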
The more traditional way to evaluate matrix-matrix products is component-wise.
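Combining Definition 4.3.9 with the bracket description of the matrix-vector product above, the entries of the product are
\begin{equation*} \entry{AB}{r,c} = \entry{A\vec{B}_c}{r} = \sum_{k=0}^{n-1}\entry{A}{r,k}\entry{B}{k,c}\text{,} \end{equation*}
for \(r\in\set{0,1,\dotsc,m-1}\) and \(c\in\set{0,1,\dotsc,q-1}\text{,}\) which is the familiar rule of multiplying row \(r\) of \(A\) against column \(c\) of \(B\text{.}\)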