Theorems

Note: Theorems are numbered in order; occasionally a number is skipped. No theorems are missing from this file.

Theorem 1.1.1
If \(\vec{x}, \vec{y}, \vec{w} \in \mathbb{R}^n\) and \(c, d \in \mathbb{R}\), then
V1  \(\vec{x}+\vec{y} \in \mathbb{R}^n\)
V2  \((\vec{x}+\vec{y})+\vec{w}=\vec{x}+(\vec{y}+\vec{w})\)
V3  \(\vec{x}+\vec{y}=\vec{y}+\vec{x}\)
V4  There exists a vector \(\vec{0} \in \mathbb{R}^n\), called the zero vector, such that \(\vec{x}+\vec{0} = \vec{x}\) for all \(\vec{x} \in \mathbb{R}^n\)
V5  There exists a vector \((-\vec{x}) \in \mathbb{R}^n\) such that \(\vec{x}+(-\vec{x})=\vec{0}\)
V6  \(c\vec{x} \in \mathbb{R}^n\)
V7  \(c(d\vec{x})=(cd)\vec{x}\)
V8  \((c+d)\vec{x}=c\vec{x}+d\vec{x}\)
V9  \(c(\vec{x}+\vec{y})=c\vec{x}+c\vec{y}\)
V10  \(1\vec{x}=\vec{x}\)
Theorem 1.1.2
If \(\vec{v}_k\) can be written as a linear combination of \(\vec{v}_1,\ldots,\vec{v}_{k-1}\), then \[\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{k-1}\}\]
Theorem 1.1.3
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in \(\mathbb{R}^n\) is linearly dependent if and only if \(\vec{v}_i\in \operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}\) for some \(i\), \(1\leq i\leq k\).
Theorem 1.1.4
If a set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) contains the zero vector then it is linearly dependent.
Theorem 1.2.1 - The Subspace Test
Let \(\mathbb{S}\) be a non-empty subset of \(\mathbb{R}^n\). If \(\vec{x}+\vec{y}\in \mathbb{S}\) and \(c\vec{x}\in \mathbb{S}\) for all \(\vec{x},\vec{y}\in \mathbb{S}\) and \(c\in \mathbb{R}\), then \(\mathbb{S}\) is a subspace of \(\mathbb{R}^n\).
Theorem 1.2.2
If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a set of vectors in \(\mathbb{R}^n\), then \(\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a subspace of \(\mathbb{R}^n\).
Theorem 1.3.1
If \(\vec{x},\vec{y}\in \mathbb{R}^2\) and \(\theta\) is the angle between \(\vec{x}\) and \(\vec{y}\), then \[\vec{x}\cdot \vec{y}=\|\vec{x}\|\thinspace \|\vec{y}\|\cos \theta\]
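A minimal numerical check of this formula, assuming NumPy and using arbitrary example vectors (not from the text):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 2.0])

# Angle between x and y, computed from the individual direction angles.
theta = abs(np.arctan2(y[1], y[0]) - np.arctan2(x[1], x[0]))

lhs = x @ y
rhs = np.linalg.norm(x) * np.linalg.norm(y) * np.cos(theta)
print(lhs, rhs)   # both approximately 11.0
```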
Theorem 1.3.2
If \(\vec{x},\vec{y},\vec{z}\in\mathbb{R}^n\) and \(s,t \in \mathbb{R}\), then
  1. \(\vec{x}\cdot \vec{x}\geq 0\) and \(\vec{x}\cdot \vec{x}=0\) if and only if \(\vec{x}=\vec{0}\).
  2. \(\vec{x}\cdot \vec{y}=\vec{y}\cdot \vec{x}\)
  3. \(\vec{x}\cdot(s\vec{y}+t\vec{z})= s(\vec{x}\cdot\vec{y}) + t(\vec{x}\cdot \vec{z})\)
Theorem 1.3.3
If \(\vec{x},\vec{y}\in\mathbb{R}^n\) and \(c \in \mathbb{R}\), then
  1. \(\|\vec{x}\| \geq 0\) and \(\|\vec{x}\|=0\) if and only if \(\vec{x}=\vec{0}\).
  2. \(\|c\vec{x}\|=|c|\: \|\vec{x}\|\)
  3. \(|\vec{x}\cdot\vec{y}|\leq \|\vec{x}\| \|\vec{y}\| \quad\) (Cauchy-Schwarz-Buniakowski Inequality)
  4. \(\|\vec{x}+\vec{y}\|\leq \|\vec{x} \| + \|\vec{y}\| \quad\) (Triangle Inequality)
Theorem 1.3.4
Suppose that \(\vec{v},\vec{w}, \vec{x} \in \mathbb{R}^3\) and \(c \in \mathbb{R}\).
  1. If \(\vec{n}=\vec{v}\times \vec{w}\), then for any \(\vec{y}\in \operatorname{Span}\{\vec{v},\vec{w}\}\) we have \(\vec{y}\cdot \vec{n}=0\).
  2. \(\vec{v}\times \vec{w}=-\vec{w} \times \vec{v}\)
  3. \(\vec{v}\times \vec{v}=\vec{0}\)
  4. \(\vec{v}\times \vec{w}=\vec{0}\) if and only if either \(\vec{v}=\vec{0}\) or \(\vec{w}\) is a scalar multiple of \(\vec{v}\).
  5. \(\vec{v} \times (\vec{w} + \vec{x})=\vec{v} \times \vec{w} + \vec{v} \times \vec{x}\)
  6. \((c\vec{v})\times (\vec{w})=c(\vec{v}\times \vec{w})\).
  7. \(\|\vec{v}\times \vec{w}\|=\|\vec{v}\|\|\vec{w}\|\big|\sin \theta\big|\) where \(\theta\) is the angle between \(\vec{v}\) and \(\vec{w}\).
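A quick numerical spot-check of parts 1 and 7, as a sketch assuming NumPy with illustrative vectors:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 0.0, -1.0])
n = np.cross(v, w)

# Part 1: n is orthogonal to every vector in Span{v, w}.
y = 2.5 * v - 1.2 * w
print(np.isclose(y @ n, 0.0))          # True

# Part 7: ||v x w|| = ||v|| ||w|| |sin(theta)|.
cos_theta = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
sin_theta = np.sqrt(1.0 - cos_theta**2)
print(np.isclose(np.linalg.norm(n),
                 np.linalg.norm(v) * np.linalg.norm(w) * sin_theta))  # True
```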
Theorem 1.3.5
Let \(\vec{v},\vec{w},\vec{b}\in \mathbb{R}^3\) with \(\{\vec{v},\vec{w}\}\) being linearly independent and let \(P\) be a plane in \(\mathbb{R}^3\) with vector equation \(\vec{x}=s\vec{v}+t\vec{w}+\vec{b}\), \(s,t\in \mathbb{R}\). If \(\vec{n}=\vec{v}\times \vec{w}\), then an equation for the plane is \[(\vec{x}-\vec{b})\cdot \vec{n}=0\]
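The following sketch (NumPy assumed; the vectors are illustrative, not from the text) converts a vector equation of a plane into the scalar equation given by the theorem:

```python
import numpy as np

v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, -1.0])
b = np.array([3.0, 1.0, 4.0])

n = np.cross(v, w)                     # normal to the plane

# Any point x = s*v + t*w + b satisfies (x - b) . n = 0:
s, t = 1.7, -0.4
x = s * v + t * w + b
print(np.isclose((x - b) @ n, 0.0))    # True

# Scalar equation n1*x1 + n2*x2 + n3*x3 = n . b:
print(n, n @ b)
```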
Theorem 2.1.1
If the system of linear equations \begin{align*} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n&=b_1\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n&=b_2\\ \vdots \hskip80pt \vdots \hskip10pt &= \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n&=b_m \end{align*} has two distinct solutions \(\vec{s}=\begin{bmatrix}s_1\\ \vdots \\s_n\end{bmatrix}\) and \(\vec{t}=\begin{bmatrix}t_1\\ \vdots \\t_n\end{bmatrix}\), then \(\vec{x}=\vec{s} + c(\vec{s}-\vec{t})\) is a solution for each \(c\in\mathbb{R}\), and distinct values of \(c\) give distinct solutions; hence the system has infinitely many solutions.
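A concrete illustration, assuming NumPy, with a small example system (not from the text):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

s = np.array([2.0, 0.0])   # one solution
t = np.array([0.0, 2.0])   # a second, distinct solution

# For every c, s + c*(s - t) is again a solution, so there are infinitely many.
for c in [-3.0, 0.5, 10.0]:
    x = s + c * (s - t)
    print(np.allclose(A @ x, b))       # True each time
```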
Theorem 2.2.1
If the augmented matrix \(\left[ A_2 \mid \vec{b}_2 \right]\) can be obtained from the augmented matrix \(\left[ A_1 \mid \vec{b}_1 \right]\) by performing elementary row operations, then the corresponding systems of linear equations are equivalent.
Theorem 2.2.2
Every matrix has a unique reduced row echelon form.
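One way to see this in practice: row-equivalent matrices reduce to the same RREF no matter which row operations are applied first. A sketch assuming SymPy, with an arbitrary example matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, -1],
               [2, 4, 1],
               [3, 6, 0]])
R1, _ = A.rref()

# Apply some elementary row operations first, then reduce again.
B = A.copy()
B.row_swap(0, 2)                    # R1 <-> R3
B[1, :] = B[1, :] - 5 * B[0, :]     # R2 -> R2 - 5*R1
R2, _ = B.rref()

print(R1 == R2)   # True: both reduce to the same RREF
```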
Theorem 2.2.3
The solution set of a homogeneous system of \(m\) linear equations in \(n\) variables is a subspace of \(\mathbb{R}^n\).
Theorem 2.2.4
If \(A\) is a matrix with \(m\) rows and \(n\) columns, then \(\operatorname{rank}A \leq \min(m,n)\).
Theorem 2.2.5
Let \(A\) be the coefficient matrix of a system of \(m\) linear equations in \(n\) unknowns \(\left[ A \mid \vec{b} \right]\).
  1. If the rank of \(A\) is less than the rank of the augmented matrix \(\left[ A \mid \vec{b} \right]\), then the system is inconsistent.
  2. If the system \(\left[ A \mid \vec{b} \right]\) is consistent, then the system contains \((n-\operatorname{rank} A)\) free variables (parameters).
  3. \(\operatorname{rank} A=m\) if and only if the system \(\left[ A \mid \vec{b} \right]\) is consistent for every \(\vec{b}\in \mathbb{R}^m\).
Theorem 2.2.6
Let \(\left[ A \mid \vec{b} \right]\) be a consistent system of \(m\) linear equations in \(n\) variables with RREF \(\left[ R \mid \vec{c}\right]\). If \(\operatorname{rank} A=k \lt n\), then a vector equation of the solution set of \(\left[ A \mid \vec{b} \right]\) has the form \[\vec{x}=\vec{d}+t_1\vec{v}_1 + \cdots + t_{n-k}\vec{v}_{n-k}, \qquad t_1,\ldots,t_{n-k}\in \mathbb{R}\] where \(\vec{d} \in \mathbb{R}^n\) and \(\{\vec{v}_1,\ldots,\vec{v}_{n-k}\}\) is a linearly independent set in \(\mathbb{R}^n\). In particular, the solution set of \(\left[ A \mid \vec{b} \right]\) is an (\(n-k\))-flat in \(\mathbb{R}^n\).
Theorem 3.1.1
For all \(A,B,C\in M_{m\times n}(\mathbb{R})\) and \(s,t\in \mathbb{R}\) we have
V1  \(A+B\in M_{m\times n}(\mathbb{R})\)
V2  \((A+B)+C=A+(B+C)\)
V3  \(A+B=B+A\)
V4  There exists a matrix, denoted by \(O_{m,n}\), such that \(A+O_{m,n}=A\) for all \(A \in M_{m \times n}(\mathbb{R})\). In particular, \(O_{m,n}\) is the \(m\times n\) matrix with all entries zero and is called the zero matrix.
V5  There exists an \(m\times n\) matrix \((-A)\), with the property that \(A+(-A)=O_{m,n}\). \((-A)\) is called the additive inverse of \(A\).
V6  \(sA \in M_{m\times n}(\mathbb{R})\)
V7  \(s(tA)=(st)A\)
V8  \((s+t)A=sA+tA\)
V9  \(s(A+B)=sA+sB\)
V10  \(1A=A\)
Theorem 3.1.2
For any \(A,B\in M_{m\times n}(\mathbb{R})\) and scalar \(c\in \mathbb{R}\) we have
  1. \((A^T)^T=A\)
  2. \((A+B)^T=A^T+B^T\)
  3. \((cA)^T=cA^T\)
Theorem 3.1.3
If \(A\), \(B\), and \(C\) are matrices of the correct size so that the required products are defined, and \(t\in \mathbb{R}\), then
  1. \(A(B+C)=AB+AC\)
  2. \(t(AB)=(tA)B=A(tB)\)
  3. \(A(BC)=(AB)C\)
  4. \((AB)^T=B^TA^T\)
Theorem 3.1.4
If \(A\) and \(B\) are \(m\times n\) matrices such that \(A\vec{x}=B\vec{x}\) for every \(\vec{x}\in \mathbb{R}^n\), then \(A=B\).
Theorem 3.1.5
If \(I=\begin{bmatrix} \vec{e}_1 & \cdots & \vec{e}_n\end{bmatrix}\), then \(AI=A=IA\) for any \(n\times n\) matrix \(A\).
Theorem 3.1.6
The multiplicative identity for \(M_{n\times n}(\mathbb{R})\) is unique.
Theorem 3.2.1
Let \(A\) be an \(m\times n\) matrix and let \(f(\vec{x})=A\vec{x}\). Then, for any vectors \(\vec{x},\vec{y}\in \mathbb{R}^n\) and \(s,t\in \mathbb{R}\) we have \[f(s\vec{x} + t\vec{y})=sf(\vec{x}) + tf(\vec{y})\]
Theorem 3.2.2
If \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a linear mapping, then \(L\) can be represented as a matrix mapping with the corresponding \(m\) by \(n\) matrix \([L]\) given by \[[L]=\begin{bmatrix}L(\vec{e}_1) & L(\vec{e}_2) & \cdots & L(\vec{e}_n)\end{bmatrix}\]
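A sketch (assuming NumPy, with a made-up linear mapping) of how \([L]\) is assembled column by column from the images of the standard basis vectors:

```python
import numpy as np

def L(x):
    # An illustrative linear mapping L : R^3 -> R^2 (not from the text).
    return np.array([2 * x[0] - x[2], x[0] + 3 * x[1]])

n = 3
E = np.eye(n)
# [L] has columns L(e_1), ..., L(e_n).
L_matrix = np.column_stack([L(E[:, j]) for j in range(n)])

# [L] x reproduces L(x) for any x.
x = np.array([1.0, -2.0, 4.0])
print(np.allclose(L_matrix @ x, L(x)))   # True
```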
Lemma 3.3.1
If \(L:\mathbb{R}^n\to \mathbb{R}^m\) is linear, then \(L(\vec{0})=\vec{0}\).
Theorem 3.3.2
If \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a linear mapping, then \(\operatorname{Range}(L)\) is a subspace of the codomain, \(\mathbb{R}^m\).
Theorem 3.3.3
If \(L:\mathbb{R}^n\to \mathbb{R}^m\) is linear, then \(\ker(L)\) is a subspace of \(\mathbb{R}^n\).
Theorem 3.3.4
Let \(L:\mathbb{R}^n\to\mathbb{R}^m\) be a linear mapping with standard matrix \(A=[L]\). Then \(\vec{x}\in \ker(L)\) if and only if \(A\vec{x}=\vec{0}\).
Theorem 3.3.5
If \([L]=\begin{bmatrix} \vec{v}_1 & \cdots & \vec{v}_n\end{bmatrix}\) is the standard matrix of a linear mapping \(L:\mathbb{R}^n\to\mathbb{R}^m\), then \[\operatorname{Range}(L)=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_n\}\]
Theorem 3.4.1
Let \(L:\mathbb{R}^n\to\mathbb{R}^m\) and \(M:\mathbb{R}^n\to \mathbb{R}^m\) both be linear mappings and let \(t\in \mathbb{R}\). Then, \(L+M\) and \(tL\) are both linear mappings. Moreover, \[\begin{align*} {[}L+M{]} & = {[}L{]}+{[}M{]}\\ {[}tL{]}&=t{[}L{]} \end{align*}\]
Theorem 3.4.2
If \(L,M,N\in \mathbb{L}\) and \(c,d\in \mathbb{R}\), then
V1  \(L+M\in \mathbb{L}\)
V2  \((L+M)+N=L+(M+N)\)
V3  \(L+M=M+L\)
V4  There exists a linear mapping \(O:\mathbb{R}^n\to\mathbb{R}^m\), such that \(L+O=L\) for all \(L \in \mathbb{L}\). In particular, \(O\) is the linear mapping defined by \(O(\vec{x})=\vec{0}\) for all \(\vec{x}\in \mathbb{R}^n\).
V5  There exists a linear mapping \((-L):\mathbb{R}^n\to\mathbb{R}^m\) with the property that \(L+(-L)=O\). In particular, \((-L)\) is the linear mapping defined by \((-L)(\vec{x})=-L(\vec{x})\) for all \(\vec{x} \in \mathbb{R}^n\).
V6  \(cL\in \mathbb{L}\)
V7  \(c(dL)=(cd)L\)
V8  \((c+d)L=cL+dL\)
V9  \(c(L+M)=cL+cM\)
V10  \(1L=L\)
Theorem 3.4.3
If \(M:\mathbb{R}^m\to \mathbb{R}^p\) and \(L:\mathbb{R}^n\to\mathbb{R}^m\) are both linear mappings then \(M\circ L\) is also a linear mapping. Moreover, \[{[}M\circ L{]}={[}M{]}{[}L{]}\]
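A numerical check of the composition formula, assuming NumPy and using arbitrary matrices:

```python
import numpy as np

L_mat = np.array([[1.0, 2.0],
                  [0.0, -1.0],
                  [3.0, 1.0]])        # [L] : R^2 -> R^3
M_mat = np.array([[2.0, 0.0, 1.0],
                  [1.0, 1.0, -1.0]])  # [M] : R^3 -> R^2

x = np.array([4.0, -3.0])
# Applying L then M agrees with multiplying once by [M][L].
print(np.allclose(M_mat @ (L_mat @ x), (M_mat @ L_mat) @ x))   # True
```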
Theorem 4.1.1
If \(\mathbb{V}\) is a vector space and \(\vec{v}\in \mathbb{V}\), then
  1. \(\vec{0}=0\vec{v}\)
  2. \((-\vec{v})=(-1)\vec{v}\)
Theorem 4.1.2 - Subspace Test
A non-empty subset \(\mathbb{S}\) of a vector space \(\mathbb{V}\) is a subspace of \(\mathbb{V}\) if for all \(\vec{x},\vec{y}\in \mathbb{S}\) and \(t\in \mathbb{R}\) we have
V1  \(\vec{x}+\vec{y}\in \mathbb{S}\) (closed under addition)
V6  \(t\vec{x}\in \mathbb{S}\) (closed under scalar multiplication)
under the operations of \(\mathbb{V}\).
Theorem 4.1.3
If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a set of vectors in a vector space \(\mathbb{V}\), then \(\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a subspace of \(\mathbb{V}\).
Theorem 4.1.4
Let \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) be a set of vectors in a vector space \(\mathbb{V}\). If \(\vec{v}_i\in \operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}\), then \[\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\]
Theorem 4.1.5
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in a vector space \(\mathbb{V}\) is linearly dependent if and only if \(\vec{v}_i\in \operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_{i-1},\vec{v}_{i+1},\ldots,\vec{v}_k\}\) for some \(i\), \(1\leq i\leq k\).
Theorem 4.1.6
Any set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in a vector space \(\mathbb{V}\) which contains the zero vector is linearly dependent.
Unique Representation Theorem
If \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is a basis for a vector space \(\mathbb{V}\), then every \(\vec{v}\in \mathbb{V}\) can be written as a unique linear combination of the vectors in \(\mathcal{B}\).
Theorem 4.2.1
Suppose that \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is a basis for a vector space \(\mathbb{V}\). If \(\{\vec{w}_1,\ldots,\vec{w}_k\}\) is a linearly independent set in \(\mathbb{V}\) then \(k\leq n\).
Theorem 4.2.2
If \(\{\vec{v}_1,\ldots,\vec{v}_n\}\) and \(\{\vec{w}_1,\ldots,\vec{w}_k\}\) are both bases of a vector space \(\mathbb{V}\), then \(k=n\).
Theorem 4.2.3
If \(\mathbb{V}\) is an \(n\)-dimensional vector space with \(n \gt 0\), then
  1. no set of more than \(n\) vectors in \(\mathbb{V}\) can be linearly independent,
  2. no set of fewer than \(n\) vectors can span \(\mathbb{V}\),
  3. a set \(\mathcal{B}\) with \(n\) elements is a spanning set for \(\mathbb{V}\) if and only if it is linearly independent.
Theorem 4.2.4
If \(\mathbb{V}\) is an \(n\)-dimensional vector space and \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a linearly independent set in \(\mathbb{V}\) with \(k\lt n\), then there exist vectors \(\vec{w}_{k+1},\ldots,\vec{w}_{n}\) in \(\mathbb{V}\) such that \(\{\vec{v}_1,\ldots,\vec{v}_k,\vec{w}_{k+1},\ldots,\vec{w}_n\}\) is a basis for \(\mathbb{V}\).
Corollary 4.2.5
If \(\mathbb{S}\) is a subspace of a finite dimensional vector space \(\mathbb{V}\), then \(\dim \mathbb{S}\leq \dim \mathbb{V}\).
Theorem 4.3.2
If \(\mathbb{V}\) is a vector space with basis \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\), then for any \(\vec{v},\vec{w}\in \mathbb{V}\) and \(s,t\in \mathbb{R}\) we have \[[s\vec{v}+t\vec{w}]_{\mathcal{B}}=s[\vec{v}]_{\mathcal{B}} + t[\vec{w}]_{\mathcal{B}}\]
Theorem 4.3.3
If \(\mathcal{B}\) and \(\mathcal{C}\) are both bases of a finite dimensional vector space \(\mathbb{V}\), then the change of coordinate matrices \({}_{\mathcal{C}}P_{\mathcal{B}}\) and \({}_{\mathcal{B}}P_{\mathcal{C}}\) satisfy \[{}_{\mathcal{C}}P_{\mathcal{B}}\thinspace {}_{\mathcal{B}}P_{\mathcal{C}}=I = {}_{\mathcal{B}}P_{\mathcal{C}}\thinspace {}_{\mathcal{C}}P_{\mathcal{B}}\]
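For bases of \(\mathbb{R}^2\), the two change of coordinates matrices can be computed directly and checked to be inverses of each other; a sketch assuming NumPy, with illustrative bases:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns are the basis vectors of B
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])   # columns are the basis vectors of C

P_CB = np.linalg.solve(C, B)   # takes B-coordinates to C-coordinates
P_BC = np.linalg.solve(B, C)   # takes C-coordinates to B-coordinates

print(np.allclose(P_CB @ P_BC, np.eye(2)))   # True
print(np.allclose(P_BC @ P_CB, np.eye(2)))   # True
```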
Theorem 5.1.1
If \(A\) is an \(m\times n\) matrix with \(m \gt n\), then \(A\) cannot have a right inverse.
Corollary 5.1.2
If \(A\) is an \(m\times n\) matrix with \(m \lt n\), then \(A\) cannot have a left inverse.
Theorem 5.1.3
If \(A\) is invertible, then the inverse of \(A\) is unique.
Theorem 5.1.4
If \(A\) and \(B\) are \(n\times n\) matrices such that \(AB=I\), then \(A\) is the inverse of \(B\) and \(B\) is the inverse of \(A\). Moreover, the RREF of \(A\) and \(B\) is \(I\).
Theorem 5.1.5
If \(A\) is an \(n\times n\) matrix with RREF \(I\), then \(A\) is invertible.
Theorem 5.1.6
If \(A\) and \(B\) are invertible \(n\times n\) matrices and \(k\) is a non-zero real number, then
  1. \((kA)^{-1}=\frac{1}{k}A^{-1}\)
  2. \((AB)^{-1}=B^{-1}A^{-1}\)
  3. \((A^T)^{-1}=(A^{-1})^T\)
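A quick numerical check of all three identities, assuming NumPy, with arbitrary invertible matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[0.0, 1.0], [-1.0, 3.0]])
k = 4.0

print(np.allclose(np.linalg.inv(k * A), (1 / k) * np.linalg.inv(A)))          # True
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))) # True
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))                    # True
```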
Theorem 5.1.7 - Invertible Matrix Theorem
For an \(n\times n\) matrix \(A\), the following are all equivalent.
  1. \(A\) is invertible
  2. The RREF of \(A\) is \(I\)
  3. \(\operatorname{rank}A=n\)
  4. The system of equations \(A\vec{x}=\vec{b}\) is consistent with a unique solution for all \(\vec{b}\in \mathbb{R}^n\)
  5. The nullspace of \(A\) is \(\{\vec{0}\}\)
  6. The columns of \(A\) form a basis for \(\mathbb{R}^n\)
  7. The rows of \(A\) form a basis for \(\mathbb{R}^n\)
  8. \(A^T\) is invertible
Theorem 5.2.1
If \(A\) is an \(m\times n\) matrix and \(E\) is the \(m\times m\) elementary matrix corresponding to the row operation \(R_i+cR_j\), for \(i\neq j\), then \(EA\) is the matrix obtained from \(A\) by performing the elementary row operation \(R_i+cR_j\) on \(A\).
Theorem 5.2.2
If \(A\) is an \(m\times n\) matrix and \(E\) is the \(m\times m\) elementary matrix corresponding to the row operation \(cR_i\), \(c\neq 0\), then \(EA\) is the matrix obtained from \(A\) by performing the elementary row operation \(cR_i\) on \(A\).
Theorem 5.2.3
If \(A\) is an \(m\times n\) matrix and \(E\) is the \(m\times m\) elementary matrix corresponding to the row operation \(R_i \leftrightarrow R_j\), for \(i\neq j\), then \(EA\) is the matrix obtained from \(A\) by performing the elementary row operation \(R_i \leftrightarrow R_j\) on \(A\).
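A sketch (NumPy assumed, example matrix only) showing that left-multiplication by an elementary matrix carries out the corresponding row operation:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0],
              [0.0, 5.0, 2.0]])

# Elementary matrix for R1 + 2*R3 (add 2 times row 3 to row 1).
E = np.eye(3)
E[0, 2] = 2.0

B = A.copy()
B[0, :] += 2.0 * B[2, :]        # perform the same row operation directly on A

print(np.allclose(E @ A, B))    # True
```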
Corollary 5.2.4
If \(A\) is an \(m\times n\) matrix and \(E\) is an \(m\times m\) elementary matrix, then \[\operatorname{rank}(EA)=\operatorname{rank} A\]
Theorem 5.2.5
If \(A\) is an \(m\times n\) matrix with reduced row echelon form \(R\), then there exists a sequence \(E_1,\ldots,E_k\) of \(m\times m\) elementary matrices such that \(E_k\cdots E_2E_1A=R\). In particular, \[A=E_1^{-1}E_2^{-1}\cdots E_k^{-1}R\]
Corollary 5.2.6
If \(A\) is an invertible matrix, then \(A\) and \(A^{-1}\) can be written as a product of elementary matrices.
Theorem 5.3.1
Let \(A\) be an \(n\times n\) matrix. For \(1\leq i\leq n\) we get \[\det A=a_{i1}C_{i1} + \cdots + a_{in}C_{in}\] called the cofactor expansion across the \(i\)-th row. Also, for \(1\leq j\leq n\) we get \[\det A=a_{1j}C_{1j} + \cdots + a_{nj}C_{nj}\] called the cofactor expansion down the \(j\)-th column.
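A minimal recursive implementation of the cofactor expansion across the first row, compared against NumPy's built-in determinant (illustrative sketch only; the helper name det_cofactor is not from the text):

```python
import numpy as np

def det_cofactor(A):
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j; the cofactor carries sign (-1)**j
        # (i.e. (-1)^(1+j) in the 1-indexed notation of the theorem).
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, -1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, -2.0]])
print(det_cofactor(A), np.linalg.det(A))   # both approximately -85.0
```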
Theorem 5.3.2
If an \(n\times n\) matrix \(A\) is upper triangular or lower triangular, then \[\det A=a_{11}a_{22}\cdots a_{nn}\]
Theorem 5.3.3
If \(B\) is the matrix obtained from \(A\) by multiplying one row of \(A\) by a non-zero constant \(c\), then \(\det B=c\det A\).
Theorem 5.3.4
If \(B\) is the matrix obtained from \(A\) by swapping two rows of \(A\), then \(\det B=-\det A\).
Theorem 5.3.5
If \(B\) is the matrix obtained from \(A\) by adding \(c\) times the \(k\)-th row of \(A\) to the \(j\)-th row, where \(k\neq j\), then \(\det B=\det A\).
Theorem 5.3.6
If \(A\) is an \(n\times n\) matrix, then \(\det A=\det A^T\).
Corollary 5.3.7
If \(A\) is an \(n\times n\) matrix and \(E\) is an \(n\times n\) elementary matrix, then \(\det EA=\det E\det A\).
Theorem 5.3.8 - Addition to the Invertible Matrix Theorem
An \(n\times n\) matrix \(A\) is invertible if and only if \(\det A\neq 0\).
Theorem 5.3.9
If \(A\) and \(B\) are \(n\times n\) matrices then \(\det(AB)=\det A \det B\).
Corollary 5.3.10
If \(A\) is an invertible matrix, then \(\det A^{-1}=\dfrac{1}{\det A}\).
Theorem 5.4.2
If \(A\) is an invertible \(n\times n\) matrix, then \[A^{-1}=\frac{1}{\det A}\operatorname{adj} A\]
Theorem 5.4.3 - Cramer's Rule
If \(A\) is an \(n\times n\) invertible matrix, then the solution \(\vec{x}\) of \(A\vec{x}=\vec{b}\) is given by \[\displaystyle x_i=\frac{\det A_i}{\det A}, \quad 1\leq i\leq n\] where \(A_i\) is the matrix obtained from \(A\) by replacing the \(i\)-th column of \(A\) by \(\vec{b}\).
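A direct implementation of Cramer's Rule, checked against NumPy's solver (a sketch; the helper name cramer_solve is an assumption, and the method is not efficient for large systems):

```python
import numpy as np

def cramer_solve(A, b):
    det_A = np.linalg.det(A)
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b          # replace the i-th column of A by b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [1.0, -1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

print(np.allclose(cramer_solve(A, b), np.linalg.solve(A, b)))   # True
```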
Theorem 6.1.1
If there exists an invertible matrix \(P\) such that \(P^{-1}AP=B\), then
  1. \(\operatorname{rank} A=\operatorname{rank} B\)
  2. \(\det A= \det B\)
  3. \(\operatorname{tr} A=\operatorname{tr} B\) where \(\operatorname{tr} A\) is defined by \(\operatorname{tr} A=\sum\limits_{i=1}^n a_{ii}\) and is called the trace of a matrix.
Theorem 6.2.1
A scalar \(\lambda\) is an eigenvalue of a square matrix \(A\) if and only if \(C(\lambda)=0\), where \(C(\lambda)=\det(A-\lambda I)\) is the characteristic polynomial of \(A\).
Theorem 6.2.3
If \(\lambda\) is an eigenvalue of \(A\) with geometric multiplicity \(g_\lambda\) and algebraic multiplicity \(a_\lambda\), then \(1\leq g_\lambda\leq a_\lambda\).
Theorem 6.2.4
If \(\lambda_1,\ldots,\lambda_n\) are the \(n\) eigenvalues of an \(n\times n\) matrix \(A\), repeated according to algebraic multiplicity, then \begin{align*} \det A&=\lambda_1\cdots\lambda_n\\ \operatorname{tr} A&=\lambda_1 + \cdots + \lambda_n \end{align*}
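A numerical check, assuming NumPy, with an arbitrary symmetric matrix (chosen so the eigenvalues are real):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])

eigvals = np.linalg.eigvals(A)
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))   # True
print(np.isclose(np.sum(eigvals), np.trace(A)))         # True
```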
Lemma 6.3.1
Suppose that \(A\) is an \(n\times n\) matrix and that \(\lambda_1,\ldots,\lambda_k\) are distinct eigenvalues of \(A\) with corresponding eigenvectors \(\vec{v}_1,\ldots,\vec{v}_k\). Then \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is linearly independent.
Lemma 6.3.2
If \(A\) is a matrix with distinct eigenvalues \(\lambda_1,\ldots,\lambda_k\) and \(\mathcal{B}_{i}=\{\vec{v}_{i,1},\ldots, \vec{v}_{i,g_{\lambda_i}}\}\) is a basis for the eigenspace of \(\lambda_i\) for \(1\leq i\leq k\), then \[\mathcal{B}_1 \cup \mathcal{B}_2 \cup \cdots \cup \mathcal{B}_k\] is a linearly independent set.
Theorem 6.3.3 - The Diagonalization Theorem
Let \(\lambda_1,\ldots,\lambda_k\) be the distinct eigenvalues of a matrix \(A\). Then, \(A\) is diagonalizable if and only if \(g_{\lambda_i}=a_{\lambda_i}\) for \(1\leq i\leq k\).
Corollary 6.3.4
If an \(n\times n\) matrix \(A\) has \(n\) distinct eigenvalues, then \(A\) is diagonalizable.
Lemma 6.4.1
If \(D=\operatorname{diag}(d_1,\ldots,d_n)\), then \(D^m=\operatorname{diag}(d_1^m,\ldots,d_n^m)\) for any positive integer \(m\).
Theorem 6.4.2
If there exists an invertible matrix \(P\) such that \(P^{-1}AP=D\) is diagonal, then \[A^m=PD^mP^{-1}\] for any positive integer \(m\).
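A sketch, assuming NumPy, of computing a matrix power through diagonalization; the example matrix has distinct eigenvalues, so it is diagonalizable:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)     # columns of P are eigenvectors, so A = P D P^{-1}
m = 5

# A^m = P D^m P^{-1}, where D^m just raises the diagonal entries to the m-th power.
A_m = P @ np.diag(eigvals ** m) @ np.linalg.inv(P)
print(np.allclose(A_m, np.linalg.matrix_power(A, m)))   # True
```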