Theorems

Note: Theorem numbers typically appear in order; however, a few numbers are skipped. No theorems are missing from this file.

Theorem 1.1.1
If \(\vec{x}, \vec{y}, \vec{w} \in \mathbb{R}^n\) and \(c, d \in \mathbb{R}\), then
V1  \(\vec{x}+\vec{y} \in \mathbb{R}^n\)
V2  \((\vec{x}+\vec{y})+\vec{w}=\vec{x}+(\vec{y}+\vec{w})\)
V3  \(\vec{x}+\vec{y}=\vec{y}+\vec{x}\)
V4  There exists a vector \(\vec{0} \in \mathbb{R}^n\), called the zero vector, such that \(\vec{x}+\vec{0} = \vec{x}\)
V5  There exists a vector \(-\vec{x} \in \mathbb{R}^n\) such that \(\vec{x}+(-\vec{x})=\vec{0}\)
V6  \(c\vec{x} \in \mathbb{R}^n\)
V7  \(c(d\vec{x})=(cd)\vec{x}\)
V8  \((c+d)\vec{x}=c\vec{x}+d\vec{x}\)
V9  \(c(\vec{x}+\vec{y})=c\vec{x}+c\vec{y}\)
V10  \(1\vec{x}=\vec{x}\)
Theorem 1.1.2
If \(\vec{v}_k\) can be written as a linear combination of \(\vec{v}_1,\ldots,\vec{v}_{k-1}\), then \[\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{k-1}\}\]
Theorem 1.1.3
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in \(\mathbb{R}^n\) is linearly dependent if and only if \(\vec{v}_i\in \operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}\) for some \(i\), \(1\leq i\leq k\).
Theorem 1.1.4
If a set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) contains the zero vector then it is linearly dependent.
Theorem 1.2.1 - The Subspace Test
Let \(\mathbb{S}\) be a non-empty subset of \(\mathbb{R}^n\). If \(\vec{x}+\vec{y}\in \mathbb{S}\) and \(c\vec{x}\in \mathbb{S}\) for all \(\vec{x},\vec{y}\in \mathbb{S}\) and \(c\in \mathbb{R}\), then \(\mathbb{S}\) is a subspace of \(\mathbb{R}^n\).
Theorem 1.2.2
If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a set of vectors in \(\mathbb{R}^n\), then \(\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a subspace of \(\mathbb{R}^n\).
Theorem 1.3.1
If \(\vec{x},\vec{y}\in \mathbb{R}^2\) and \(\theta\) is the angle between \(\vec{x}\) and \(\vec{y}\), then \[\vec{x}\cdot \vec{y}=\|\vec{x}\|\thinspace \|\vec{y}\|\cos \theta\]
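For example, if \(\vec{x}=\begin{bmatrix}1\\0\end{bmatrix}\) and \(\vec{y}=\begin{bmatrix}1\\1\end{bmatrix}\), then \(\vec{x}\cdot\vec{y}=1\), \(\|\vec{x}\|=1\), and \(\|\vec{y}\|=\sqrt{2}\), so \(\cos\theta=\frac{1}{\sqrt{2}}\) and \(\theta=\frac{\pi}{4}\).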
Theorem 1.3.2
If \(\vec{x},\vec{y},\vec{z}\in\mathbb{R}^n\) and \(s,t \in \mathbb{R}\), then
  1. \(\vec{x}\cdot \vec{x}\geq 0\) and \(\vec{x}\cdot \vec{x}=0\) if and only if \(\vec{x}=\vec{0}\).
  2. \(\vec{x}\cdot \vec{y}=\vec{y}\cdot \vec{x}\)
  3. \(\vec{x}\cdot(s\vec{y}+t\vec{z})= s(\vec{x}\cdot\vec{y}) + t(\vec{x}\cdot \vec{z})\)
Theorem 1.3.3
If \(\vec{x},\vec{y}\in\mathbb{R}^n\) and \(c \in \mathbb{R}\), then
  1. \(\|\vec{x}\| \geq 0\) and \(\|\vec{x}\|=0\) if and only if \(\vec{x}=\vec{0}\).
  2. \(\|c\vec{x}\|=|c|\: \|\vec{x}\|\)
  3. \(|\vec{x}\cdot\vec{y}|\leq \|\vec{x}\| \|\vec{y}\| \quad\) (Cauchy-Schwarz-Buniakowski Inequality)
  4. \(\|\vec{x}+\vec{y}\|\leq \|\vec{x} \| + \|\vec{y}\| \quad\) (Triangle Inequality)
Theorem 1.3.4
Suppose that \(\vec{v},\vec{w}, \vec{x} \in \mathbb{R}^3\) and \(c \in \mathbb{R}\).
  1. If \(\vec{n}=\vec{v}\times \vec{w}\), then for any \(\vec{y}\in \operatorname{Span}\{\vec{v},\vec{w}\}\) we have \(\vec{y}\cdot \vec{n}=0\).
  2. \(\vec{v}\times \vec{w}=-\vec{w} \times \vec{v}\)
  3. \(\vec{v}\times \vec{v}=\vec{0}\)
  4. \(\vec{v}\times \vec{w}=\vec{0}\) if and only if either \(\vec{v}=\vec{0}\) or \(\vec{w}\) is a scalar multiple of \(\vec{v}\).
  5. \(\vec{v} \times (\vec{w} + \vec{x})=\vec{v} \times \vec{w} + \vec{v} \times \vec{x}\)
  6. \((c\vec{v})\times (\vec{w})=c(\vec{v}\times \vec{w})\).
  7. \(\|\vec{v}\times \vec{w}\|=\|\vec{v}\|\|\vec{w}\|\big|\sin \theta\big|\) where \(\theta\) is the angle between \(\vec{v}\) and \(\vec{w}\).
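For example, taking \(\vec{v}=\begin{bmatrix}1\\0\\0\end{bmatrix}\) and \(\vec{w}=\begin{bmatrix}0\\1\\0\end{bmatrix}\) in Theorem 1.3.4 gives \(\vec{v}\times\vec{w}=\begin{bmatrix}0\\0\\1\end{bmatrix}\), which is orthogonal to every vector in \(\operatorname{Span}\{\vec{v},\vec{w}\}\), and \(\|\vec{v}\times\vec{w}\|=1=\|\vec{v}\|\thinspace\|\vec{w}\|\thinspace\big|\sin\tfrac{\pi}{2}\big|\).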
Theorem 1.3.5
Let \(\vec{v},\vec{w},\vec{b}\in \mathbb{R}^3\) with \(\{\vec{v},\vec{w}\}\) being linearly independent and let \(P\) be a plane in \(\mathbb{R}^3\) with vector equation \(\vec{x}=s\vec{v}+t\vec{w}+\vec{b}\), \(s,t\in \mathbb{R}\). If \(\vec{n}=\vec{v}\times \vec{w}\), then an equation for the plane is \[(\vec{x}-\vec{b})\cdot \vec{n}=0\]
Theorem 2.1.1
If the system of linear equations \begin{align*} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n&=b_1\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n&=b_2\\ \vdots \hskip80pt \vdots \hskip10pt &= \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n&=b_m \end{align*} has two distinct solutions \(\vec{s}=\begin{bmatrix}s_1\\ \vdots \\s_n\end{bmatrix}\) and \(\vec{t}=\begin{bmatrix}t_1\\ \vdots \\t_n\end{bmatrix}\), then \(\vec{x}=\vec{s} + c(\vec{s}-\vec{t})\) is also a solution for each \(c\in\mathbb{R}\), and distinct values of \(c\) give distinct solutions.
Theorem 2.2.1
If the augmented matrix \(\left[ A_2 \mid \vec{b}_2 \right]\) can be obtained from the augmented matrix \(\left[ A_1 \mid \vec{b}_1 \right]\) by performing elementary row operations, then the corresponding systems of linear equations are equivalent.
Theorem 2.2.2
Every matrix has a unique reduced row echelon form.
Theorem 2.2.3
The solution set of a homogeneous system of \(m\) linear equations in \(n\) variables is a subspace of \(\mathbb{R}^n\).
Theorem 2.2.4
Let \(A\) be the coefficient matrix of a system \(\left[ A \mid \vec{b} \right]\) of \(m\) linear equations in \(n\) unknowns.
  1. If the rank of \(A\) is less than the rank of the augmented matrix \(\left[ A \mid \vec{b} \right]\), then the system is inconsistent.
  2. If the system \(\left[ A \mid \vec{b} \right]\) is consistent, then the system contains \((n-\operatorname{rank} A)\) free variables (parameters).
  3. \(\operatorname{rank} A=m\) if and only if the system \(\left[ A \mid \vec{b} \right]\) is consistent for every \(\vec{b}\in \mathbb{R}^m\).
Theorem 2.2.5
Let \(\left[ A \mid \vec{b} \right]\) be a consistent system of \(m\) linear equations in \(n\) variables with RREF \(\left[ R \mid \vec{c}\right]\). If \(\operatorname{rank} A=k \lt n\), then the general solution of the system of linear equations \(\left[ A \mid \vec{b} \right]\) has the form \[\vec{x}=\vec{c}+t_1\vec{v}_1 + \cdots + t_{n-k}\vec{v}_{n-k}, \qquad t_1,\ldots,t_{n-k}\in \mathbb{R}\] where \(\{\vec{v}_1,\ldots,\vec{v}_{n-k}\}\) is linearly independent.
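For example, if a consistent system in \(n=3\) variables has RREF \(\left[\begin{array}{ccc|c}1&0&2&1\\0&1&-1&2\end{array}\right]\), then \(\operatorname{rank} A=2\) and setting \(x_3=t\) gives the general solution \[\vec{x}=\begin{bmatrix}1\\2\\0\end{bmatrix}+t\begin{bmatrix}-2\\1\\1\end{bmatrix},\qquad t\in\mathbb{R}\]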
Theorem 3.1.1
For all \(A,B,C\in M_{m\times n}(\mathbb{R})\) and \(s,t\in \mathbb{R}\) we have
V1  \(A+B\in M_{m\times n}(\mathbb{R})\)
V2  \((A+B)+C=A+(B+C)\)
V3  \(A+B=B+A\)
V4  There exists a matrix, denoted by \(O_{m,n}\), such that \(A+O_{m,n}=A\). In particular, \(O_{m,n}\) is the \(m\times n\) matrix with all entries zero and is called the zero matrix
V5  There exists an \(m\times n\) matrix \((-A)\), with the property that \(A+(-A)=O_{m,n}\). \((-A)\) is called the additive inverse of \(A\).
V6  \(sA \in M_{m\times n}(\mathbb{R})\)
V7  \(s(tA)=(st)A\)
V8  \((s+t)A=sA+tA\)
V9  \(s(A+B)=sA+sB\)
V10  \(1A=A\)
Theorem 3.1.2
For any \(A,B\in M_{m\times n}(\mathbb{R})\) and scalar \(c\in \mathbb{R}\) we have
  1. \((A^T)^T=A\)
  2. \((A+B)^T=A^T+B^T\)
  3. \((cA)^T=cA^T\)
Theorem 3.1.3
If \(A\), \(B\), and \(C\) are matrices of the correct size so that the required products are defined, and \(t\in \mathbb{R}\), then
  1. \(A(B+C)=AB+AC\)
  2. \(t(AB)=(tA)B=A(tB)\)
  3. \(A(BC)=(AB)C\)
  4. \((AB)^T=B^TA^T\)
Theorem 3.1.4
If \(A\) and \(B\) are \(m\times n\) matrices such that \(A\vec{x}=B\vec{x}\) for every \(\vec{x}\in \mathbb{R}^n\), then \(A=B\).
Theorem 3.1.5
If \(I=\begin{bmatrix} \vec{e}_1 & \cdots & \vec{e}_n\end{bmatrix}\), then \(AI=A=IA\) for any \(n\times n\) matrix \(A\).
Theorem 3.1.6
The multiplicative identity for \(M_{n\times n}(\mathbb{R})\) is unique.
Theorem 3.2.1
Let \(A\) be an \(m\times n\) matrix and let \(f(\vec{x})=A\vec{x}\). Then, for any vectors \(\vec{x},\vec{y}\in \mathbb{R}^n\) and \(s,t\in \mathbb{R}\) we have \[f(s\vec{x} + t\vec{y})=sf(\vec{x}) + tf(\vec{y})\]
Theorem 3.2.2
If \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a linear mapping, then \(L\) can be represented as a matrix mapping with the corresponding \(m\) by \(n\) matrix \([L]\) given by \[[L]=\begin{bmatrix}L(\vec{e}_1) & L(\vec{e}_2) & \cdots & L(\vec{e}_n)\end{bmatrix}\]
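For example, if \(L:\mathbb{R}^2\to\mathbb{R}^2\) is the linear mapping \(L(x_1,x_2)=(x_1+2x_2,\ 3x_1)\), then \(L(\vec{e}_1)=\begin{bmatrix}1\\3\end{bmatrix}\) and \(L(\vec{e}_2)=\begin{bmatrix}2\\0\end{bmatrix}\), so \([L]=\begin{bmatrix}1&2\\3&0\end{bmatrix}\).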
Lemma 3.3.1
If \(L:\mathbb{R}^n\to \mathbb{R}^m\) is linear, then \(L(\vec{0})=\vec{0}\).
Theorem 3.3.2
If \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a linear mapping, then \(\operatorname{Range}(L)\) is a subspace of the codomain, \(\mathbb{R}^m\).
Theorem 3.3.3
If \(L:\mathbb{R}^n\to \mathbb{R}^m\) is linear, then \(\ker(L)\) is a subspace of \(\mathbb{R}^n\).
Theorem 3.3.4
Let \(L:\mathbb{R}^n\to\mathbb{R}^m\) be a linear mapping with standard matrix \(A=[L]\). Then \(\vec{x}\in \ker(L)\) if and only if \(A\vec{x}=\vec{0}\).
Theorem 3.3.5
If \([L]=\begin{bmatrix} \vec{v}_1 & \cdots & \vec{v}_n\end{bmatrix}\) is the standard matrix of a linear mapping \(L:\mathbb{R}^n\to\mathbb{R}^m\), then \[\operatorname{Range}(L)=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_n\}\]
Theorem 3.4.1
Let \(L:\mathbb{R}^n\to\mathbb{R}^m\) and \(M:\mathbb{R}^n\to \mathbb{R}^m\) both be linear mappings and let \(t\in \mathbb{R}\). Then, \(L+M\) and \(tL\) are both linear mappings. Moreover, \[\begin{align*} {[}L+M{]} & = {[}L{]}+{[}M{]}\\ {[}tL{]}&=t{[}L{]} \end{align*}\]
Theorem 3.4.2
If \(L,M,N\in \mathbb{L}\) and \(c,d\in \mathbb{R}\), then
V1  \(L+M\in \mathbb{L}\)
V2  \((L+M)+N=L+(M+N)\)
V3  \(L+M=M+L\)
V4  There exists a linear mapping \(O:\mathbb{R}^n\to\mathbb{R}^m\), such that \(L+O=L\). In particular, \(O\) is the linear mapping defined by \(O(\vec{x})=\vec{0}\) for all \(\vec{x}\in \mathbb{R}^n\)
V5  There exists a linear mapping \((-L):\mathbb{R}^n\to\mathbb{R}^m\) with the property that \(L+(-L)=O\). In particular, \((-L)\) is the linear mapping defined by \((-L)(\vec{x})=-L(\vec{x})\) for all \(\vec{x} \in \mathbb{R}^n\)
V6  \(cL\in \mathbb{L}\)
V7  \(c(dL)=(cd)L\)
V8  \((c+d)L=cL+dL\)
V9  \(c(L+M)=cL+cM\)
V10  \(1L=L\)
Theorem 3.4.3
If \(M:\mathbb{R}^m\to \mathbb{R}^p\) and \(L:\mathbb{R}^n\to\mathbb{R}^m\) are both linear mappings then \(M\circ L\) is also a linear mapping. Moreover, \[{[}M\circ L{]}={[}M{]}{[}L{]}\]
Theorem 4.1.1
If \(\mathbb{V}\) is a vector space and \(\vec{v}\in \mathbb{V}\), then
  1. \(\vec{0}=0\vec{v}\)
  2. \((-\vec{v})=(-1)\vec{v}\)
Theorem 4.1.2 - Subspace Test
A non-empty subset \(\mathbb{S}\) of a vector space \(\mathbb{V}\) is a subspace of \(\mathbb{V}\) if for all \(\vec{x},\vec{y}\in \mathbb{S}\) and \(t\in \mathbb{R}\) we have
V1  \(\vec{x}+\vec{y}\in \mathbb{S}\) (closed under addition)
V6  \(t\vec{x}\in \mathbb{S}\) (closed under scalar multiplication)
under the operations of \(\mathbb{V}\).
Theorem 4.1.3
If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a set of vectors in a vector space \(\mathbb{V}\), then \(\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a subspace of \(\mathbb{V}\).
Theorem 4.1.4
Let \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) be a set of vectors in a vector space \(\mathbb{V}\). If \(\vec{v}_i\in \operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}\), then \[\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\]
Theorem 4.1.5
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in a vector space \(\mathbb{V}\) is linearly dependent if and only if \(\vec{v}_i\in \operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}\) for some \(i\), \(1\leq i\leq k\).
Theorem 4.1.6
Any set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in a vector space \(\mathbb{V}\) which contains the zero vector is linearly dependent.
Unique Representation Theorem
If \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is a basis for a vector space \(\mathbb{V}\), then every \(\vec{v}\in \mathbb{V}\) can be written as a unique linear combination of the vectors in \(\mathcal{B}\).
Theorem 4.2.1
Suppose that \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is a basis for a vector space \(\mathbb{V}\). If \(\{\vec{w}_1,\ldots,\vec{w}_k\}\) is a linearly independent set in \(\mathbb{V}\) then \(k\leq n\).
Theorem 4.2.2
If \(\{\vec{v}_1,\ldots,\vec{v}_n\}\) and \(\{\vec{w}_1,\ldots,\vec{w}_k\}\) are both bases of a vector space \(\mathbb{V}\), then \(k=n\).
Theorem 4.2.3
If \(\mathbb{V}\) is an \(n\)-dimensional vector space with \(n \gt 0\), then
  1. no set of more than \(n\) vectors in \(\mathbb{V}\) can be linearly independent,
  2. no set of fewer than \(n\) vectors can span \(\mathbb{V}\),
  3. a set \(\mathcal{B}\) with \(n\) elements is a spanning set for \(\mathbb{V}\) if and only if it is linearly independent.
Theorem 4.2.4
If \(\mathbb{V}\) is an \(n\)-dimensional vector space and \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a linearly independent set in \(\mathbb{V}\) with \(k\lt n\), then there exist vectors \(\vec{w}_{k+1},\ldots,\vec{w}_{n}\) in \(\mathbb{V}\) such that \(\{\vec{v}_1,\ldots,\vec{v}_k,\vec{w}_{k+1},\ldots,\vec{w}_n\}\) is a basis for \(\mathbb{V}\).
Corollary 4.2.5
If \(\mathbb{S}\) is a subspace of a finite dimensional vector space \(\mathbb{V}\), then \(\dim \mathbb{S}\leq \dim \mathbb{V}\).
Theorem 4.3.2
If \(\mathbb{V}\) is a vector space with basis \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\), then for any \(\vec{v},\vec{w}\in \mathbb{V}\) and \(s,t\in \mathbb{R}\) we have \[[s\vec{v}+t\vec{w}]_{\mathcal{B}}=s[\vec{v}]_{\mathcal{B}} + t[\vec{w}]_{\mathcal{B}}\]
Theorem 4.3.3
If \(\mathcal{B}\) and \(\mathcal{C}\) are both bases of a finite dimensional vector space \(\mathbb{V}\), then the change of coordinate matrices \({}_{\mathcal{C}}P_{\mathcal{B}}\) and \({}_{\mathcal{B}}P_{\mathcal{C}}\) satisfy \[{}_{\mathcal{C}}P_{\mathcal{B}}\thinspace {}_{\mathcal{B}}P_{\mathcal{C}}=I = {}_{\mathcal{B}}P_{\mathcal{C}}\thinspace {}_{\mathcal{C}}P_{\mathcal{B}}\]
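For example, in \(\mathbb{R}^2\) with \(\mathcal{B}=\left\{\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}0\\1\end{bmatrix}\right\}\) and \(\mathcal{C}=\left\{\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}1\\1\end{bmatrix}\right\}\), the two change of coordinate matrices are \(\begin{bmatrix}1&1\\0&1\end{bmatrix}\) and \(\begin{bmatrix}1&-1\\0&1\end{bmatrix}\), and multiplying them in either order gives \(I\).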
Theorem 5.1.1
If \(A\) is an \(m\times n\) matrix with \(m \gt n\), then \(A\) cannot have a right inverse.
Corollary 5.1.2
If \(A\) is an \(m\times n\) matrix with \(m \lt n\), then \(A\) cannot have a left inverse.
Theorem 5.1.3
If \(A\) is invertible, then the inverse of \(A\) is unique.
Theorem 5.1.4
If \(A\) and \(B\) are \(n\times n\) matrices such that \(AB=I\), then \(A\) is the inverse of \(B\) and \(B\) is the inverse of \(A\). Moreover, the RREF of \(A\) and \(B\) is \(I\).
Theorem 5.1.5
If \(A\) is an \(n\times n\) matrix with RREF \(I\), then \(A\) is invertible.
Theorem 5.1.6
If \(A\) and \(B\) are invertible \(n\times n\) matrices and \(k\) is a non-zero real number, then
  1. \((kA)^{-1}=\frac{1}{k}A^{-1}\)
  2. \((AB)^{-1}=B^{-1}A^{-1}\)
  3. \((A^T)^{-1}=(A^{-1})^T\)
Theorem 5.1.7 - Invertible Matrix Theorem
For an \(n\times n\) matrix \(A\), the following are all equivalent.
  1. \(A\) is invertible
  2. The RREF of \(A\) is \(I\)
  3. \(\operatorname{rank}A=n\)
  4. The system of equations \(A\vec{x}=\vec{b}\) is consistent with a unique solution for all \(\vec{b}\in \mathbb{R}^n\)
  5. The nullspace of \(A\) is \(\{\vec{0}\}\)
  6. The columns of \(A\) form a basis for \(\mathbb{R}^n\)
  7. The rows of \(A\) form a basis for \(\mathbb{R}^n\)
  8. \(A^T\) is invertible
Theorem 5.2.1
If \(A\) is an \(m\times n\) matrix and \(E\) is the \(m\times m\) elementary matrix corresponding to the row operation \(R_i+cR_j\), for \(i\neq j\), then \(EA\) is the matrix obtained from \(A\) by performing the elementary row operation \(R_i+cR_j\) on \(A\).
Theorem 5.2.2
If \(A\) is an \(m\times n\) matrix and \(E\) is the \(m\times m\) elementary matrix corresponding to the row operation \(cR_i\), \(c\neq 0\), then \(EA\) is the matrix obtained from \(A\) by performing the elementary row operation \(cR_i\) on \(A\).
Theorem 5.2.3
If \(A\) is an \(m\times n\) matrix and \(E\) is the \(m\times m\) elementary matrix corresponding to the row operation \(R_i \leftrightarrow R_j\), for \(i\neq j\), then \(EA\) is the matrix obtained from \(A\) by performing the elementary row operation \(R_i \leftrightarrow R_j\) on \(A\).
Corollary 5.2.4
If \(A\) is an \(m\times n\) matrix and \(E\) is an \(m\times m\) elementary matrix, then \[\operatorname{rank}(EA)=\operatorname{rank} A\]
Theorem 5.2.5
If \(A\) is an \(m\times n\) matrix with reduced row echelon form \(R\), then there exists a sequence \(E_1,\ldots,E_k\) of \(m\times m\) elementary matrices such that \(E_k\cdots E_2E_1A=R\). In particular, \[A=E_1^{-1}E_2^{-1}\cdots E_k^{-1}R\]
Corollary 5.2.6
If \(A\) is an invertible matrix, then \(A\) and \(A^{-1}\) can be written as a product of elementary matrices.
Theorem 5.3.1
Let \(A\) be an \(n\times n\) matrix. For \(1\leq i\leq n\) we get \[\det A=a_{i1}C_{i1} + \cdots + a_{in}C_{in}\] called the cofactor expansion across the \(i\)-th row. Also, for \(1\leq j\leq n\) we get \[\det A=a_{1j}C_{1j} + \cdots + a_{nj}C_{nj}\] called the cofactor expansion along the \(j\)-th column.
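For example, expanding across the first row of \(A=\begin{bmatrix}1&2&0\\3&1&2\\0&4&1\end{bmatrix}\) gives \[\det A=1\begin{vmatrix}1&2\\4&1\end{vmatrix}-2\begin{vmatrix}3&2\\0&1\end{vmatrix}+0=1(-7)-2(3)=-13\] and expanding along the first column gives the same value.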
Theorem 5.3.2
If an \(n\times n\) matrix \(A\) is upper triangular or lower triangular, then \[\det A=a_{11}a_{22}\cdots a_{nn}\]
Theorem 5.3.3
If \(B\) is the matrix obtained from \(A\) by multiplying one row of \(A\) by a non-zero constant \(c\), then \(\det B=c\det A\).
Theorem 5.3.4
If \(B\) is the matrix obtained from \(A\) by swapping two rows of \(A\), then \(\det B=-\det A\).
Theorem 5.3.5
If \(B\) is the matrix obtained from \(A\) by adding \(c\) times the \(k\)-th row of \(A\) to the \(j\)-th row, then \(\det B=\det A\).
Theorem 5.3.6
If \(A\) is an \(n\times n\) matrix, then \(\det A=\det A^T\).
Corollary 5.3.7
If \(A\) is an \(n\times n\) matrix and \(E\) is an \(n\times n\) elementary matrix, then \(\det EA=\det E\det A\).
Theorem 5.3.8 - Addition to the Invertible Matrix Theorem
An \(n\times n\) matrix \(A\) is invertible if and only if \(\det A\neq 0\).
Theorem 5.3.9
If \(A\) and \(B\) are \(n\times n\) matrices then \(\det(AB)=\det A \det B\).
Corollary 5.3.10
If \(A\) is an invertible matrix, then \(\det A^{-1}=\dfrac{1}{\det A}\).
Theorem 5.4.2
If \(A\) is an invertible \(n\times n\) matrix, then \[A^{-1}=\frac{1}{\det A}\operatorname{adj} A\]
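For example, if \(A=\begin{bmatrix}2&1\\5&3\end{bmatrix}\), then \(\det A=1\) and \(\operatorname{adj} A=\begin{bmatrix}3&-1\\-5&2\end{bmatrix}\), so \(A^{-1}=\begin{bmatrix}3&-1\\-5&2\end{bmatrix}\); one can check directly that \(AA^{-1}=I\).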
Theorem 5.4.3 - Cramer's Rule
If \(A\) is an \(n\times n\) invertible matrix, then the solution \(\vec{x}\) of \(A\vec{x}=\vec{b}\) is given by \[\displaystyle x_i=\frac{\det A_i}{\det A}, \quad 1\leq i\leq n\] where \(A_i\) is the matrix obtained from \(A\) by replacing the \(i\)-th column of \(A\) by \(\vec{b}\).
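For example, for the system \(2x_1+x_2=3\), \(x_1+x_2=1\) we have \(\det A=\begin{vmatrix}2&1\\1&1\end{vmatrix}=1\), so Cramer's Rule gives \[x_1=\frac{\begin{vmatrix}3&1\\1&1\end{vmatrix}}{1}=2,\qquad x_2=\frac{\begin{vmatrix}2&3\\1&1\end{vmatrix}}{1}=-1\]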
Theorem 6.1.1
If there exists an invertible matrix \(P\) such that \(P^{-1}AP=B\), then
  1. \(\operatorname{rank} A=\operatorname{rank} B\)
  2. \(\det A= \det B\)
  3. \(\operatorname{tr} A=\operatorname{tr} B\) where \(\operatorname{tr} A\) is defined by \(\operatorname{tr} A=\sum\limits_{i=1}^n a_{ii}\) and is called the trace of a matrix.
Theorem 6.2.1
A scalar \(\lambda\) is an eigenvalue of a square matrix \(A\) if and only if \(C(\lambda)=0\), where \(C(\lambda)\) is the characteristic polynomial of \(A\).
Theorem 6.2.3
If \(\lambda\) is an eigenvalue of \(A\), then \(1\leq g_\lambda\leq a_\lambda\).
Theorem 6.2.4
If \(\lambda_1,\ldots,\lambda_n\) are the eigenvalues of an \(n\times n\) matrix \(A\), repeated according to their algebraic multiplicity, then \begin{align*} \det A&=\lambda_1\cdots\lambda_n\\ \operatorname{tr} A&=\lambda_1 + \cdots + \lambda_n \end{align*}
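For example, \(A=\begin{bmatrix}2&1\\1&2\end{bmatrix}\) has eigenvalues \(1\) and \(3\), and indeed \(\det A=3=1\cdot 3\) and \(\operatorname{tr} A=4=1+3\).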
Lemma 6.3.1
Suppose that \(A\) is \(n\times n\) and that \(\lambda_1,\ldots,\lambda_k\) are distinct eigenvalues of \(A\) with corresponding eigenvectors \(\vec{v}_1,\ldots,\vec{v}_k\). Then \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is linearly independent.
Lemma 6.3.2
If \(A\) is a matrix with distinct eigenvalues \(\lambda_1,\ldots,\lambda_k\) and \(\mathcal{B}_{i}=\{\vec{v}_{i,1},\ldots, \vec{v}_{i,g_{\lambda_i}}\}\) is a basis for the eigenspace of \(\lambda_i\) for \(1\leq i\leq k\), then \[\mathcal{B}_1 \cup \mathcal{B}_2 \cup \cdots \cup \mathcal{B}_k\] is a linearly independent set.
Theorem 6.3.3 - The Diagonalization Theorem
Let \(\lambda_1,\ldots,\lambda_k\) be the distinct eigenvalues of a matrix \(A\). Then, \(A\) is diagonalizable if and only if \(g_{\lambda_i}=a_{\lambda_i}\) for \(1\leq i\leq k\).
Corollary 6.3.4
If an \(n\times n\) matrix \(A\) has \(n\) distinct eigenvalues, then \(A\) is diagonalizable.
Lemma 6.4.1
If \(D=\operatorname{diag}(d_1,\ldots,d_n)\), then \(D^m=\operatorname{diag}(d_1^m,\ldots,d_n^m)\) for any positive integer \(m\).
Theorem 6.4.2
If there exists an invertible matrix \(P\) such that \(P^{-1}AP=D\) is diagonal, then \[A^m=PD^mP^{-1}\] for any positive integer \(m\).
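For example, \(A=\begin{bmatrix}1&2\\0&3\end{bmatrix}\) satisfies \(P^{-1}AP=\operatorname{diag}(1,3)\) with \(P=\begin{bmatrix}1&1\\0&1\end{bmatrix}\), so \[A^m=P\begin{bmatrix}1&0\\0&3^m\end{bmatrix}P^{-1}=\begin{bmatrix}1&3^m-1\\0&3^m\end{bmatrix}\] for every positive integer \(m\).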
Theorem 7.1.1
If \(A\) is an \(m\times n\) matrix, then \(\operatorname{Col}(A)\) and \(\operatorname{Null}(A^T)\) are subspaces of \(\mathbb{R}^m\), and \(\operatorname{Row}(A)\) and \(\operatorname{Null}(A)\) are subspaces of \(\mathbb{R}^n\).
Theorem 7.1.2
Let \(A\) be an \(m\times n\) matrix. The columns of \(A\) which correspond to leading ones in the reduced row echelon form of \(A\) form a basis for \(\operatorname{Col}(A)\). Moreover,\[\dim \operatorname{Col}(A)=\operatorname{rank }(A)\]
Theorem 7.1.3
Let \(A\) be an \(m\times n\) matrix. The non-zero rows in the reduced row echelon form of \(A\) form a basis for \(\operatorname{Row}(A)\). Hence, \[\dim \left( \operatorname{Row}(A) \right)=\operatorname{rank }(A)\]
Corollary 7.1.4
For any \(m\times n\) matrix \(A\) we have \(\operatorname{rank } A=\operatorname{rank } A^T\).
Theorem 7.1.5 - Dimension Theorem
If \(A\) is an \(m\times n\) matrix, then \[\operatorname{rank }(A)+\dim \left(\operatorname{Null}(A) \right)=n\]
Theorem 7.2.1
If \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a linear mapping, then \(\operatorname{Range}(L)=\operatorname{Col}([L])\).
Theorem 7.2.2
If \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a linear mapping, then \(\operatorname{Ker}(L)=\operatorname{Null}([L])\).
Theorem 7.2.3
Let \(L:\mathbb{R}^n\to\mathbb{R}^m\) be a linear mapping. Then, \[\dim \left(\operatorname{Range}(L)\right)+\dim \left(\operatorname{Ker}(L)\right)=\dim(\mathbb{R}^n)\]
Theorem 8.1.1
Let \(\mathbb{V}\) and \(\mathbb{W}\) be vector spaces and let \(L:\mathbb{V}\to\mathbb{W}\) be a linear mapping. Then, \[L(\vec{0}_\mathbb{V})=\vec{0}_\mathbb{W}\]
Theorem 8.1.2
Let \(\mathbb{V}\) and \(\mathbb{W}\) be vector spaces. The set \(\mathbb{L}\) of all linear mappings \(L:\mathbb{V}\to\mathbb{W}\) is a vector space.
Theorem 8.1.3
If \(L:\mathbb{V}\to \mathbb{W}\) and \(M:\mathbb{W}\to \mathbb{U}\) are linear mappings, then \(M\circ L\) is a linear mapping from \(\mathbb{V}\) to \(\mathbb{U}\).
Theorem 8.2.1
Let \(L:\mathbb{V}\to \mathbb{W}\) be a linear mapping. Then, \(\operatorname{Ker}(L)\) is a subspace of \(\mathbb{V}\) and \(\operatorname{Range}(L)\) is a subspace of \(\mathbb{W}\).
Theorem 8.2.2 - The Rank-Nullity Theorem
Let \(\mathbb{V}\) be an \(n\)-dimensional vector space and let \(\mathbb{W}\) be a vector space. If \(L:\mathbb{V}\to \mathbb{W}\) is linear, then \[\operatorname{rank }(L) + \operatorname{nullity}(L) = n\]
Lemma 8.4.1
Let \(L:\mathbb{V}\to \mathbb{W}\) be a linear mapping. \(L\) is one-to-one if and only if \(\operatorname{Ker}(L)=\{\vec{0}\}\).
Theorem 8.4.2
If \(\mathbb{V}\) and \(\mathbb{W}\) are finite dimensional vector spaces, then \(\mathbb{V}\) and \(\mathbb{W}\) are isomorphic if and only if they have the same dimension.
Theorem 8.4.3
If \(\mathbb{V}\) and \(\mathbb{W}\) are \(n\)-dimensional vector spaces, and \(L:\mathbb{V}\to \mathbb{W}\) is linear, then \(L\) is one-to-one if and only if \(L\) is onto.
Theorem 8.4.4
If \(\mathcal{B}\) is any basis for an \(n\)-dimensional vector space \(\mathbb{V}\), \(\mathcal{C}\) is any basis for an \(m\)-dimensional vector space \(\mathbb{W}\), and \(L:\mathbb{V}\to \mathbb{W}\) is a linear mapping, then \[\operatorname{rank}(L)=\operatorname{rank}\left({}_{\mathcal{C}}[L]_{\mathcal{B}}\right)\]
Theorem 9.1.1
If \(\mathbb{V}\) is an inner product space with inner product \(\langle \thinspace, \rangle\), then for any \(\vec{v}\in \mathbb{V}\) we have \(\langle \vec{v},\vec{0}\rangle=0\).
Theorem 9.2.1
Let \(\vec{v},\vec{w}\) be any two vectors in an inner product space \(\mathbb{V}\) and \(t\in\mathbb{R}\). Then
  1. \(\|\vec{v}\|\geq 0\) and \(\|\vec{v}\|=0\) if and only if \(\vec{v}=\vec{0}\)
  2. \(\|t\vec{v}\|=|t|\|\vec{v}\|\)
  3. \(\lvert\langle \vec{v},\vec{w}\rangle\rvert\leq \|\vec{v}\|\|\vec{w}\|\)
  4. \(\|\vec{v} + \vec{w}\|\leq \|\vec{v}\| + \|\vec{w}\|\)
Theorem 9.2.2
Let \(\mathbb{V}\) be an inner product space. If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is an orthogonal set in \(\mathbb{V}\), then \[\|\vec{v}_1 + \cdots + \vec{v}_k\|^2=\|\vec{v}_1\|^2 + \cdots + \|\vec{v}_k\|^2\]
Theorem 9.2.3
Let \(\mathbb{V}\) be an inner product space. If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is an orthogonal set of non-zero vectors in \(\mathbb{V}\), then the set \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is linearly independent.
Theorem 9.2.4
If \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is an orthogonal basis for an inner product space \(\mathbb{V}\) and \(\vec{v}\in \mathbb{V}\), then the coefficient of \(\vec{v}_i\) when \(\vec{v}\) is written as a linear combination of the vectors in \(\mathcal{B}\) is \[\frac{\langle \vec{v},\vec{v}_i\rangle }{\|\vec{v}_i\|^2}\] In particular, \[\vec{v}=\frac{\langle \vec{v},\vec{v}_1\rangle }{\|\vec{v}_1\|^2}\vec{v}_1 + \cdots + \frac{\langle \vec{v},\vec{v}_n\rangle}{\|\vec{v}_n\|^2}\vec{v}_n\]
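For example, in \(\mathbb{R}^2\) with the dot product, \(\mathcal{B}=\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}1\\-1\end{bmatrix}\right\}\) is an orthogonal basis, and for \(\vec{v}=\begin{bmatrix}3\\1\end{bmatrix}\) the coefficients are \(\frac{4}{2}=2\) and \(\frac{2}{2}=1\), so \(\vec{v}=2\begin{bmatrix}1\\1\end{bmatrix}+\begin{bmatrix}1\\-1\end{bmatrix}\).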
Corollary 9.2.5
If \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is an orthonormal basis for an inner product space \(\mathbb{V}\) and \(\vec{v}\in \mathbb{V}\), then \[\vec{v}=\langle \vec{v},\vec{v}_1\rangle \vec{v}_1 + \cdots + \langle \vec{v},\vec{v}_n\rangle \vec{v}_n\]
Theorem 9.2.6
If \(P\in M_{n\times n}(\mathbb{R})\), then the following are equivalent:
  1. The columns of \(P\) form an orthonormal basis for \(\mathbb{R}^n\).
  2. \(P^T=P^{-1}\)
  3. The rows of \(P\) form an orthonormal basis for \(\mathbb{R}^n\).
Theorem 9.2.7
If \(P\) and \(Q\) are \(n\times n\) orthogonal matrices and \(\vec{x},\vec{y}\in \mathbb{R}^n\), then:
  1. \((P\vec{x})\cdot (P\vec{y})=\vec{x}\cdot \vec{y}\)
  2. \(\|P\vec{x}\|=\|\vec{x}\|\)
  3. \(\det P=\pm 1\)
  4. All real eigenvalues of \(P\) are \(1\) or \(-1\).
  5. \(PQ\) is also an orthogonal matrix.
Theorem 9.3.1 - Gram-Schmidt Orthogonalization Theorem
Let \(\{\vec{w}_1,\ldots,\vec{w}_n\}\) be a basis for an inner product space \(\mathbb{W}\). If we define \(\vec{v}_1,\ldots,\vec{v}_n\) successively as follows: \begin{align*} \vec{v}_1 &= \vec{w}_1\\ \vec{v}_2 &= \vec{w}_2-\frac{\langle \vec{w}_2,\vec{v}_1\rangle }{\|\vec{v}_1\|^2}\vec{v}_1\\ \vec{v}_i &= \vec{w}_i-\frac{\langle \vec{w}_i,\vec{v}_1\rangle}{\|\vec{v}_1\|^2}\vec{v}_1-\frac{\langle \vec{w}_i,\vec{v}_2\rangle }{\|\vec{v}_2\|^2}\vec{v}_2 - \cdots - \frac{\langle \vec{w}_i,\vec{v}_{i-1}\rangle }{\|\vec{v}_{i-1}\|^2}\vec{v}_{i-1} \end{align*} for \(3\leq i\leq n\), then \(\{\vec{v}_1,\ldots,\vec{v}_i\}\) is an orthogonal basis for \(\operatorname{Span}\{\vec{w}_1,\ldots,\vec{w}_i\}\) for \(1\leq i\leq n\).
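For example, applying the process to \(\vec{w}_1=\begin{bmatrix}1\\1\\0\end{bmatrix}\), \(\vec{w}_2=\begin{bmatrix}1\\0\\1\end{bmatrix}\) in \(\mathbb{R}^3\) with the dot product gives \(\vec{v}_1=\begin{bmatrix}1\\1\\0\end{bmatrix}\) and \(\vec{v}_2=\begin{bmatrix}1\\0\\1\end{bmatrix}-\frac{1}{2}\begin{bmatrix}1\\1\\0\end{bmatrix}=\begin{bmatrix}1/2\\-1/2\\1\end{bmatrix}\), and indeed \(\vec{v}_1\cdot\vec{v}_2=0\).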
Theorem 9.4.1
Let \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) be a basis for a subspace \(\mathbb{W}\) of an inner product space \(\mathbb{V}\). If \(\vec{x}\) is orthogonal to \(\vec{v}_1,\ldots,\vec{v}_k\), then \(\vec{x}\in \mathbb{W}^{\perp}\).
Theorem 9.4.2
If \(\mathbb{W}\) is a finite dimensional subspace of an inner product space \(\mathbb{V}\), then
  1. \(\mathbb{W}^\perp\) is a subspace of \(\mathbb{V}\)
  2. If \(\dim \mathbb{V}=n\), then \(\dim \mathbb{W}^\perp=n-\dim \mathbb{W}\)
  3. If \(\mathbb{V}\) is finite dimensional, then \((\mathbb{W}^{\perp})^{\perp}=\mathbb{W}\)
  4. \(\mathbb{W}\cap \mathbb{W}^\perp=\{\vec{0}\}\)
  5. If \(\dim \mathbb{V}=n\), \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is an orthogonal basis for \(\mathbb{W}\), and \(\{\vec{v}_{k+1},\ldots,\vec{v}_n\}\) is an orthogonal basis for \(\mathbb{W}^{\perp}\), then \(\{\vec{v}_1,\ldots,\vec{v}_k,\vec{v}_{k+1},\ldots,\vec{v}_n\}\) is an orthogonal basis for \(\mathbb{V}\)
Theorem 9.4.3
Suppose \(\mathbb{W}\) is a \(k\)-dimensional subspace of an inner product space \(\mathbb{V}\). For any \(\vec{v}\in \mathbb{V}\), we have \[\operatorname{perp}_{\mathbb{W}} (\vec{v})=\vec{v}-\operatorname{proj}_{\mathbb{W}} (\vec{v})\in \mathbb{W}^{\perp}\]
Theorem 9.4.4
If \(\mathbb{W}\) is a \(k\)-dimensional subspace of an inner product space \(\mathbb{V}\), then \(\operatorname{proj}_\mathbb{W}\) is a linear operator on \(\mathbb{V}\) with kernel \(\mathbb{W}^\perp\).
Theorem 9.4.5
If \(\mathbb{W}\) is a subspace of a finite dimensional inner product space \(\mathbb{V}\), then for any \(\vec{v}\in \mathbb{V}\) we have \[\operatorname{proj}_{\mathbb{W}^{\perp}}(\vec{v})=\operatorname{perp}_{\mathbb{W}} (\vec{v})\]
Theorem 9.5.1
If \(\mathbb{U}\) and \(\mathbb{W}\) are subspaces of a vector space \(\mathbb{V}\), then \(\mathbb{U}\oplus \mathbb{W}\) is a subspace of \(\mathbb{V}\). Moreover, if \(\{\vec{u}_1,\ldots,\vec{u}_k\}\) is a basis for \(\mathbb{U}\) and \(\{\vec{w}_1,\ldots,\vec{w}_\ell\}\) is a basis for \(\mathbb{W}\), then \(\{\vec{u}_1,\ldots,\vec{u}_k,\vec{w}_1,\ldots,\vec{w}_{\ell}\}\) is a basis for \(\mathbb{U}\oplus \mathbb{W}\).
Theorem 9.5.2
If \(\mathbb{V}\) is a finite dimensional inner product space and \(\mathbb{W}\) is a subspace of \(\mathbb{V}\), then \[\mathbb{W}\oplus \mathbb{W}^{\perp}=\mathbb{V}\]
Theorem 9.5.3 - The Fundamental Theorem of Linear Algebra
If \(A\) is an \(m\times n\) matrix, then \(\operatorname{Col}(A)^{\perp}=\operatorname{Null}(A^T)\), \(\operatorname{Row}(A)^{\perp}=\operatorname{Null}(A)\). In particular, \[\mathbb{R}^n=\operatorname{Row}(A) \oplus \operatorname{Null}(A) \quad \text{ and } \quad \mathbb{R}^m=\operatorname{Col}(A)\oplus \operatorname{Null}(A^T)\]
Theorem 9.6.1 - The Approximation Theorem
Let \(\mathbb{W}\) be a finite dimensional subspace of an inner product space \(\mathbb{V}\). If \(\vec{v}\in \mathbb{V}\), then \[\|\vec{v} - \vec{w}\|>\|\vec{v}-\operatorname{proj}_{\mathbb{W}}(\vec{v})\|\] for all \(\vec{w}\in \mathbb{W}\), \(\vec{w}\neq \operatorname{proj}_\mathbb{W}(\vec{v})\).
Theorem 9.6.2
Let \(n\) data points \((x_1,y_1),\ldots,(x_n,y_n)\) be given, and let \[\vec{y}=\begin{bmatrix}y_1\\\vdots \\y_n\end{bmatrix}, \qquad X=\begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^m\\ 1 & x_2 & x_2^2 & \cdots & x_2^m\\ \vdots & &&&\vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^m \end{bmatrix}\] If \(\vec{a}=\begin{bmatrix} a_0\\ \vdots \\a_m\end{bmatrix}\) is any solution to the normal system \[X^TX\vec{a}=X^T\vec{y},\] then the polynomial \[p(x)=a_0 + a_1x + \cdots + a_mx^m\] is a best fitting polynomial of degree \(m\) for the given data. Moreover, if at least \(m+1\) of the numbers \(x_1,\ldots,x_n\) are distinct, then the matrix \(X^TX\) is invertible and hence \(\vec{a}\) is unique with \[\vec{a}=(X^TX)^{-1}X^T\vec{y}\]
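For example, fitting a line (\(m=1\)) to the data \((0,1),(1,2),(2,2)\) gives \(X^TX=\begin{bmatrix}3&3\\3&5\end{bmatrix}\) and \(X^T\vec{y}=\begin{bmatrix}5\\6\end{bmatrix}\); solving the normal system yields \(\vec{a}=\begin{bmatrix}7/6\\1/2\end{bmatrix}\), so the best fitting line is \(p(x)=\frac{7}{6}+\frac{1}{2}x\).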
Theorem 10.1.1 - Triangularization Theorem
If \(A\) is an \(n\times n\) matrix with all real eigenvalues, then \(A\) is orthogonally similar to an upper triangular matrix \(T\).
Theorem 10.2.1
If \(A\) is orthogonally diagonalizable, then \(A^T=A\).
Lemma 10.2.2
If \(A\) is a matrix such that \(A^T=A\), then all eigenvalues of \(A\) are real.
Theorem 10.2.3 - The Principal Axis Theorem
If \(A\) is a matrix such that \(A^T=A\), then \(A\) is orthogonally diagonalizable.
Theorem 10.2.4
A matrix \(A\) is symmetric if and only if \(\vec{x}\cdot(A\vec{y})=(A\vec{x})\cdot \vec{y}\) for all \(\vec{x},\vec{y}\in \mathbb{R}^n\).
Theorem 10.2.5
If \(A\) is a symmetric matrix with eigenvectors \(\vec{v}_1,\vec{v}_2\) corresponding to distinct eigenvalues \(\lambda_1,\lambda_2\), then \(\vec{v}_1\) and \(\vec{v}_2\) are orthogonal.
Theorem 10.3.1
If \(Q(\vec{x})=\vec{x}^TA\vec{x}\) is a quadratic form in \(n\) variables with corresponding symmetric matrix \(A\) and \(P\) is an orthogonal matrix such that \(P^TAP=D=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)\), then performing the change of variables \(\vec{y}=P^T\vec{x}\) gives \[Q(\vec{x})=\lambda_1 y_1^2 + \cdots + \lambda_n y_n^2\]
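For example, \(Q(\vec{x})=x_1^2+4x_1x_2+x_2^2\) has symmetric matrix \(A=\begin{bmatrix}1&2\\2&1\end{bmatrix}\) with eigenvalues \(3\) and \(-1\), so with \(P=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\) the change of variables \(\vec{y}=P^T\vec{x}\) gives \(Q(\vec{x})=3y_1^2-y_2^2\).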
Theorem 10.3.2
If \(A\) is a symmetric matrix, then \(A\) is
  1. positive definite \(\Leftrightarrow\) the eigenvalues of \(A\) are all positive.
  2. negative definite \(\Leftrightarrow\) the eigenvalues of \(A\) are all negative.
  3. indefinite \(\Leftrightarrow\) some of the eigenvalues of \(A\) are positive and some are negative.
  4. positive semidefinite \(\Leftrightarrow\) the eigenvalues of \(A\) are all non-negative.
  5. negative semidefinite \(\Leftrightarrow\) the eigenvalues of \(A\) are all non-positive.
Theorem 10.4.1
If \(Q(x_1,x_2)=ax_1^2 + bx_1x_2 + cx_2^2\) where \(a,b,c\) are not all zero, then there exists an orthogonal matrix \(P\), which corresponds to a rotation, such that the change of variables \(\vec{y}=P^T\vec{x}\) brings \(Q(\vec{x})\) into diagonal form.
Theorem 10.5.1
If \(Q(\vec{x})=\vec{x}^TA\vec{x}\) is a quadratic form with symmetric matrix \(A\), then the maximum of \(Q(\vec{x})\) subject to \(\|\vec{x}\|=1\) is the largest eigenvalue of \(A\) and the minimum of \(Q(\vec{x})\) subject to \(\|\vec{x}\|=1\) is the smallest eigenvalue of \(A\). Moreover, the maximum and minimum occur at unit eigenvectors of \(A\) corresponding to the respective eigenvalues.
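For example, the maximum of \(Q(\vec{x})=x_1^2+4x_1x_2+x_2^2\) subject to \(\|\vec{x}\|=1\) is \(3\), attained at \(\pm\frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}\), and the minimum is \(-1\), attained at \(\pm\frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}\).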
Theorem 10.6.1
If \(A\) is an \(m\times n\) matrix and \(\lambda_1,\ldots,\lambda_n\) are the eigenvalues of \(A^TA\) with corresponding unit eigenvectors \(\vec{v}_1,\ldots,\vec{v}_n\), then \(\lambda_1,\ldots,\lambda_n\) are all non-negative and \[\|A\vec{v}_i\|=\sqrt{\lambda_i}\]
Lemma 10.6.2
If \(A\) is an \(m\times n\) matrix, then \(\operatorname{Null}(A^TA)=\operatorname{Null}(A)\).
Theorem 10.6.3
If \(A\) is an \(m\times n\) matrix, then \(\operatorname{rank }(A^TA)=\operatorname{rank }(A)\).
Corollary 10.6.4
If \(A\) is an \(m\times n\) matrix and \(\operatorname{rank }(A)=r\), then \(A\) has \(r\) non-zero singular values.
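For example, \(A=\begin{bmatrix}1&1\\0&0\end{bmatrix}\) has \(A^TA=\begin{bmatrix}1&1\\1&1\end{bmatrix}\) with eigenvalues \(2\) and \(0\), so the singular values of \(A\) are \(\sqrt{2}\) and \(0\); since \(\operatorname{rank }(A)=1\), exactly one singular value is non-zero.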
Theorem 10.6.5
Let \(A\) be an \(m\times n\) matrix. A vector \(\vec{v}\in \mathbb{R}^n\) is a right singular vector of \(A\) if and only if \(\vec{v}\) is an eigenvector of \(A^TA\). A vector \(\vec{u}\in \mathbb{R}^m\) is a left singular vector of \(A\) if and only if \(\vec{u}\) is an eigenvector of \(AA^T\).
Lemma 10.6.6
Let \(A\) be an \(m\times n\) matrix with \(\operatorname{rank }(A)=r\). If \(\{\vec{v}_1,\ldots,\vec{v}_n\}\) is an orthonormal basis for \(\mathbb{R}^n\) consisting of the eigenvectors of \(A^TA\) arranged so that the corresponding eigenvalues \(\lambda_1,\ldots,\lambda_n\) are arranged from greatest to least and \(\sigma_1,\ldots,\sigma_n\) are the singular values of \(A\), then \(\{\frac{1}{\sigma_1}A\vec{v}_1,\ldots, \frac{1}{\sigma_r}A\vec{v}_r\}\) is an orthonormal basis for \(\operatorname{Col} A\).
Theorem 10.6.7
If \(A\) is an \(m\times n\) matrix with rank \(r\), then there exists an orthonormal basis \(\{\vec{v}_1,\ldots,\vec{v}_n\}\) for \(\mathbb{R}^n\) of right singular vectors of \(A\) and an orthonormal basis \(\{\vec{u}_1,\ldots,\vec{u}_m\}\) for \(\mathbb{R}^m\) of left singular vectors of \(A\).
Theorem 11.1.1
If \(z_1,z_2,z_3\in \mathbb{C}\), then
  1. \(z_1+z_2=z_2+z_1\) addition is commutative
  2. \(z_1z_2=z_2z_1\) multiplication is commutative
  3. \(z_1+(z_2+z_3)=(z_1+z_2)+z_3\) addition is associative
  4. \(z_1(z_2z_3)=(z_1z_2)z_3\) multiplication is associative
  5. \(z_1(z_2+z_3)=z_1z_2 + z_1z_3\) multiplication is distributive
Theorem 11.1.2
If \(z=a+bi, z_1, z_2\in \mathbb{C}\), then
  1. \(\overline{\overline{z}}=z\)
  2. \(z\) is real if and only if \(\overline{z}=z\)
  3. If \(z\neq 0\), then \(z\) is imaginary if and only if \(\overline{z}=-z\)
  4. \(\overline{z_1+z_2}=\overline{z_1}+\overline{z_2}\)
  5. \(\overline{z_1z_2}=\overline{z_1}\thinspace \overline{z_2}\)
  6. \(z+\overline{z}=2\operatorname{Re}(z)=2a\)
  7. \(z-\overline{z}=2i\operatorname{Im}(z)=2bi\)
  8. \(z\overline{z}=a^2+b^2\)
Theorem 11.1.3
If \(w,z\in \mathbb{C}\), then
  1. \(|z|\in \mathbb{R}\) and \(|z|\geq 0\)
  2. \(|z|=0\) if and only if \(z=0\)
  3. \(|wz|=|w|\thinspace |z|\)
  4. \(|w+z|\leq |w|+|z|\)
Theorem 11.2.1
If \(\vec{z}\in \mathbb{C}^n\), then there exist vectors \(\vec{x},\vec{y}\in \mathbb{R}^n\) such that \[\vec{z}=\vec{x}+i\vec{y}\]
Theorem 11.2.2
If \(A\in M_{m\times n}(\mathbb{C})\) and \(\vec{z}\in \mathbb{C}^n\), then \(\overline{A\vec{z}}=\overline{A} \ \overline{\vec{z}}\).
Theorem 11.3.1
If \(A\in M_{n\times n}(\mathbb{R})\) has a non-real eigenvalue \(\lambda\) with corresponding eigenvector \(\vec{z}\), then \(\overline{\lambda}\) is also an eigenvalue of \(A\) with corresponding eigenvector \(\overline{\vec{z}}\).
Corollary 11.3.2
If \(A\in M_{n\times n}(\mathbb{R})\) and \(n\) is odd, then \(A\) has at least one real eigenvalue.
Theorem 11.4.1
If \(\vec{v},\vec{z},\vec{w}\in \mathbb{C}^n\) and \(\alpha\in \mathbb{C}\), then
  1. \(\langle \vec{z},\vec{z}\rangle\in \mathbb{R}\), \(\langle \vec{z},\vec{z}\rangle \geq 0\), and \(\langle \vec{z},\vec{z}\rangle=0\) if and only if \(\vec{z}=\vec{0}\).
  2. \(\langle \vec{z},\vec{w}\rangle = \overline{\langle \vec{w},\vec{z}\rangle}\)
  3. \(\langle \vec{v}+\vec{z},\vec{w}\rangle = \langle \vec{v},\vec{w}\rangle + \langle \vec{z},\vec{w}\rangle\)
  4. \(\langle \alpha\vec{z},\vec{w}\rangle = \alpha\langle \vec{z},\vec{w}\rangle\)
Theorem 11.4.2
Let \(\mathbb{V}\) be a Hermitian inner product space with Hermitian inner product \(\langle \thinspace, \thinspace \rangle\). For any \(\vec{z},\vec{w}\in \mathbb{V}\) and \(\alpha\in \mathbb{C}\) we have \begin{align*} \|\alpha\vec{z}\|&=|\alpha|\|\vec{z}\|\\ \|\vec{z}+\vec{w}\|&\leq \|\vec{z}\| + \|\vec{w}\| \end{align*}
Theorem 11.4.3
If \(\{\vec{z}_1,\ldots,\vec{z}_k\}\) is an orthonormal set in a Hermitian inner product space, then \(\{\vec{z}_1,\ldots,\vec{z}_k\}\) is linearly independent and \[\|\vec{z}_1 + \cdots + \vec{z}_k\|^2=\|\vec{z}_1\|^2+\cdots+\|\vec{z}_k\|^2\]
Theorem 11.4.4
If \(U\in M_{n\times n}(\mathbb{C})\), then the following are equivalent.
  1. The columns of \(U\) form an orthonormal basis for \(\mathbb{C}^n\).
  2. \(U^{-1}=\overline{U}^T\)
  3. The rows of \(U\) form an orthonormal basis for \(\mathbb{C}^n\).
Theorem 11.4.5
If \(U_1\) and \(U_2\) are \(n\times n\) unitary matrices, then
  1. \(\|U_1\vec{z}\|=\|\vec{z}\|\) for all \(\vec{z}\in \mathbb{C}^n\)
  2. \(|\det U_1|=1\)
  3. \(U_1U_2\) is unitary.
Theorem 11.4.6
If \(A\) and \(B\) are complex matrices and \(\alpha\in \mathbb{C}\), then
  1. \(\langle A\vec{z},\vec{w}\rangle =\langle \vec{z},A^*\vec{w}\rangle\) for all \(\vec{z},\vec{w}\in \mathbb{C}^n\)
  2. \((A^{*})^{*}=A\)
  3. \((A+B)^*=A^* + B^*\)
  4. \((\alpha A)^*=\overline{\alpha}A^*\)
  5. \((AB)^*=B^*A^*\)
Theorem 11.5.1
An \(n\times n\) matrix \(A\) is Hermitian if and only if \[\langle \vec{z},A\vec{w}\rangle =\langle A\vec{z},\vec{w}\rangle \] for all \(\vec{z},\vec{w}\in \mathbb{C}^n\).
Theorem 11.5.2 - Schur's Theorem
If \(A\) is an \(n\times n\) matrix, then \(A\) is unitarily similar to an upper triangular matrix whose diagonal entries are the eigenvalues of \(A\).
Theorem 11.5.3 - Spectral Theorem for Hermitian Matrices
If \(A\) is Hermitian, then it is unitarily diagonalizable.
Theorem 11.5.4
Every eigenvalue of a Hermitian matrix is real.
Theorem 11.5.5
Every skew-Hermitian matrix \(A\) is unitarily diagonalizable.
Theorem 11.5.6
If \(\lambda\) is an eigenvalue of a skew-Hermitian matrix \(A\), then \(\lambda=ti\) for some \(t\in \mathbb{R}\).
Theorem 11.5.7
Every unitary matrix \(A\) is unitarily diagonalizable.
Theorem 11.5.8
If \(\lambda\) is an eigenvalue of a unitary matrix \(A\), then \(|\lambda|=1\).
Theorem 11.5.9 - Spectral Theorem for Normal Matrices
A matrix \(A\in M_{n\times n}(\mathbb{C})\) is unitarily diagonalizable if and only if it is normal.
Theorem 11.5.10
If \(A\) is a normal matrix, then
  1. \(\|A\vec{z}\|=\|A^*\vec{z}\|\), for all \(\vec{z}\in \mathbb{C}^n\).
  2. \(A-\lambda I\) is normal for every \(\lambda\in \mathbb{C}\).
  3. If \(A\vec{z}=\lambda \vec{z}\), then \(A^*\vec{z}=\overline{\lambda}\vec{z}\).
  4. If \(\vec{z}_1\) and \(\vec{z}_2\) are eigenvectors of \(A\) corresponding to distinct eigenvalues \(\lambda_1\) and \(\lambda_2\) of \(A\), then \(\vec{z}_1\) and \(\vec{z}_2\) are orthogonal.
Theorem 11.6.1 - Cayley-Hamilton Theorem
If \(A\) is an \(n\times n\) matrix, then \(A\) is a root of its characteristic polynomial \(C(\lambda)\).
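For example, \(A=\begin{bmatrix}1&2\\3&4\end{bmatrix}\) has characteristic polynomial \(C(\lambda)=\lambda^2-5\lambda-2\), and indeed \[A^2-5A-2I=\begin{bmatrix}7&10\\15&22\end{bmatrix}-\begin{bmatrix}5&10\\15&20\end{bmatrix}-\begin{bmatrix}2&0\\0&2\end{bmatrix}=O_{2,2}\]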