(Scroll down to see terms from Math 106)
Addition of Matrices
Let \(A\) and \(B\) be \(m\times n\) matrices. We define addition of matrices by \[ (A+B)_{ij}=(A)_{ij}+(B)_{ij} \] That is, the \(ij\)-th entry of \(A+B\) is the sum of the \(ij\)-th entry of \(A\) with the \(ij\)-th entry of \(B\).
Addition of Complex Numbers
Addition of complex numbers \(z_1=x_1+y_1i\) and \(z_2=x_2+y_2i\) is defined by \[ z_1+z_2=(x_1+x_2)+(y_1+y_2)i \]
Addition of Vectors, Scalar Multiplication
If \(\vec{x}=\left[ \begin{array}{c} x_1\\ \vdots \\ x_n \end{array} \right],\ \vec{y}=\left[ \begin{array}{c} y_1 \\ \vdots \\ y_n\end{array} \right]\ \in\mathbb{R}^n\) and \(t\in\mathbb{R}\), then we define addition of vectors componentwise by \[ \vec{x}+\vec{y}=\left[ \begin{array}{c} x_1\\ \vdots \\ x_n \end{array} \right] + \left[ \begin{array}{c} y_1\\ \vdots \\ y_n \end{array} \right] = \left[ \begin{array}{c} x_1+y_1\\ \vdots \\ x_n+y_n \end{array} \right] \]We define scalar multiplication componentwise by \[ t\vec{x}=t\left[ \begin{array}{c} x_1\\ \vdots \\ x_n \end{array} \right] = \left[ \begin{array}{c} tx_1\\ \vdots \\ tx_n \end{array} \right] \]
Argument
If \(|z|\neq 0\), let \(\theta\) be an angle measured counterclockwise from the positive \(x\)-axis such that \(x=r\cos\theta\) and \(y=r\sin\theta\), where \(r=|z|\). The angle \(\theta\) is called an argument of \(z\).
Basis
A set \(\mathcal{B}\) of vectors in a vector space \(\mathbb{V}\) is a basis for \(\mathbb{V}\) if it is a linearly independent spanning set for \(\mathbb{V}\).
\(\mathbb{C}^n\)
The vector space \(\mathbb{C}^n\) is defined to be the set \[ \mathbb{C}^n=\left\{\left[\begin{array}{c} z_1\\ \vdots \\ z_n \end{array}\right]\ |\ z_1,\ldots,z_n\in\mathbb{C} \right\} \] with addition of vectors defined by \[ \left[\begin{array}{c} z_1\\ \vdots \\ z_n \end{array}\right] + \left[\begin{array}{c} w_1\\ \vdots \\ w_n \end{array}\right] = \left[\begin{array}{c} z_1+w_1\\ \vdots \\ z_n+w_n \end{array}\right] \] and scalar multiplication of vectors defined by \[ s\left[\begin{array}{c} z_1\\ \vdots \\ z_n \end{array}\right] = \left[\begin{array}{c} sz_1\\ \vdots \\ sz_n \end{array}\right] \] for all \(s\in\mathbb{C}\). (Notice that our scalars are now from \(\mathbb{C}\), not \(\mathbb{R}\).)
Change of Coordinates Matrix
Let \(\mathcal{B}\) and \(\mathcal{C}=\{\mathbf{w}_1,\ldots,\mathbf{w}_n\}\) both be bases for a vector space \(\mathbb{V}\). The matrix \(P= \left[\begin{array}{ccc} [\mathbf{w}_1]_{\mathcal{B}} & \cdots & [\mathbf{w}_n]_{\mathcal{B}} \end{array}\right]\) is called the change of coordinates matrix from \(\mathcal{C}\)-coordinates to \(\mathcal{B}\)-coordinates, and satisfies \[ [\mathbf{x}]_{\mathcal{B}} = P [\mathbf{x}]_{\mathcal{C}} \]
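As a concrete illustration of this relationship, here is a minimal NumPy sketch (assuming NumPy is available; the bases \(\mathcal{B}\) and \(\mathcal{C}\) for \(\mathbb{R}^2\) below are made-up examples): each column of \(P\) is found by solving a linear system for a \(\mathcal{B}\)-coordinate vector, and the identity \([\mathbf{x}]_{\mathcal{B}} = P [\mathbf{x}]_{\mathcal{C}}\) is then checked numerically.

```python
import numpy as np

# Hypothetical bases of R^2, stored as the columns of these matrices.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # B = {b_1, b_2}
C = np.array([[2.0, 0.0],
              [1.0, 3.0]])        # C = {w_1, w_2}

# Column j of P is [w_j]_B, i.e. the solution of (columns of B) @ coords = w_j.
P = np.linalg.solve(B, C)         # change of coordinates matrix from C to B

# Check [x]_B = P [x]_C for an arbitrary vector x.
x_C = np.array([4.0, -2.0])       # coordinates of some x relative to C
x = C @ x_C                       # x itself, in standard coordinates
x_B = np.linalg.solve(B, x)       # coordinates of x relative to B

print(np.allclose(x_B, P @ x_C))  # True
```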
Columnspace
Let \(A\) be an \(m\times n\) matrix, and let \(\vec{z}_1,\vec{z}_2,\ldots,\vec{z}_n\in\mathbb{C}^m\) be the columns of \(A\). Then the columnspace of \(A\), written \(\mbox{Col}(A)\), is \(\mbox{Span}\{\vec{z}_1,\vec{z}_2,\ldots,\vec{z}_n\}\).
Complex Conjugate
The complex conjugate of \(\vec{z}=\left[\begin{array}{c} z_1\\ \vdots \\ z_n \end{array}\right]\in\mathbb{C}^n\) is defined to be \(\overline{\vec{z}}=\left[\begin{array}{c} \overline{z_1}\\ \vdots \\ \overline{z_n} \end{array}\right]\).
The complex conjugate of the complex number \(z=x+yi\) is \(x-yi\), and is denoted \(\overline{z}\).
Let \(A\) be an \(m\times n\) matrix. We define the complex conjugate of \(A\), \(\overline{A}\), by \[ \left(\overline{A}\right)_{ij}=\overline{(A)_{ij}} \]
Complex Inner Product
Let \(\mathbb{V}\) be a vector space over \(\mathbb{C}\). A complex inner product on \(\mathbb{V}\) is a function \(\langle\ ,\ \rangle:\mathbb{V}\times\mathbb{V}\rightarrow\mathbb{C}\) such that
- For all \(\mathbf{z}\in\mathbb{V}\), we have that \(\langle \mathbf{z},\mathbf{z} \rangle\) is a non-negative real number, and \(\langle \mathbf{z},\mathbf{z} \rangle=0\) if and only if \(\mathbf{z}=\mathbf{0}\).
- For all \(\mathbf{w},\mathbf{z}\in\mathbb{V}\), \(\langle \mathbf{z},\mathbf{w} \rangle=\overline{\langle \mathbf{w},\mathbf{z} \rangle}\)
- For all \(\mathbf{u},\mathbf{v},\mathbf{w},\mathbf{z}\in\mathbb{V}\) and all \(\alpha\in\mathbb{C}\) we have
- \(\langle \mathbf{v}+\mathbf{z},\mathbf{w} \rangle=\langle \mathbf{v},\mathbf{w} \rangle+\langle\mathbf{z},\mathbf{w} \rangle\)
- \(\langle \mathbf{z},\mathbf{w}+\mathbf{u} \rangle = \langle \mathbf{z},\mathbf{w} \rangle + \langle \mathbf{z},\mathbf{u} \rangle\)
- \(\langle \alpha\mathbf{z},\mathbf{w} \rangle=\alpha \langle \mathbf{z},\mathbf{w} \rangle\)
- \(\langle \mathbf{z},\alpha\mathbf{w} \rangle=\overline{\alpha}\langle \mathbf{z},\mathbf{w} \rangle\)
Complex Number
A complex number is a number of the form \(z=x+yi\), where \(x,y\in\mathbb{R}\) and \(i\) is an element such that \(i^2=-1\). The set of all complex numbers is denoted by \(\mathbb{C}\).
Conjugate Transpose
Let \(A\) be an \(n\times n\) matrix with complex entries. We define the conjugate transpose \(A^*\) of \(A\) to be \[ A^*=\overline{A}^T \]
Coordinate Vector
Suppose that \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}\) is a basis for the vector space \(\mathbb{V}\). If \(\mathbf{x}\in\mathbb{V}\) with \(\mathbf{x}=x_1\mathbf{v}_1+x_2\mathbf{v}_2+\cdots+x_n\mathbf{v}_n\), then the coordinate vector of \(\mathbf{x}\) with respect to the basis \(\mathcal{B}\) is \[ [\mathbf{x}]_{\mathcal{B}} = \left[\begin{array}{c} x_1\\x_2 \\ \vdots \\ x_n \end{array}\right] \]
Dependent Member
When one element of a set can be written as a linear combination of the other members of the set, we say that this element is a dependent member of the set.
Dimension
If a vector space \(\mathbb{V}\) has a basis with \(n\) vectors, then we say that the dimension of \(\mathbb{V}\) is \(n\) and write \[ \mbox{dim }\mathbb{V}=n \]
Distance
For any vectors \(\mathbf{v},\mathbf{w}\in\mathbb{V}\), the distance between \(\mathbf{v}\) and \(\mathbf{w}\) is \(||\mathbf{v}-\mathbf{w}||\).
Dot Product
Let \(\vec{x}=\left[\begin{array}{c} x_1\\ \vdots \\ x_n \end{array}\right]\) and \(\vec{y}=\left[\begin{array}{c} y_1\\ \vdots \\ y_n \end{array}\right]\) be vectors in \(\mathbb{R}^n\) (\(\mathbb{C}^n\)). Then the dot product of \(\vec{x}\) and \(\vec{y}\) is \[ \vec{x}\cdot\vec{y}=x_1y_1+\cdots+x_ny_n \]
\(e^z\)
For any complex number \(z=x+iy\), we define \(e^z=e^{x+iy}=e^xe^{iy}\).
Eigenvalue, Eigenvector
Let \(L:\mathbb{C}^n\rightarrow\mathbb{C}^n\) be a linear mapping. If for some \(\lambda\in\mathbb{C}\) there exists a non-zero vector \(\vec{z}\in\mathbb{C}^n\) such that \(L(\vec{z})=\lambda\vec{z}\), then \(\lambda\) is an eigenvalue of \(L\) and \(\vec{z}\) is called an eigenvector of \(L\) that corresponds to \(\lambda\). Similarly, a complex number \(\lambda\) is an eigenvalue of an \(n\times n\) matrix \(A\) with complex entries with corresponding eigenvector \(\vec{z}\in\mathbb{C}^n\), \(\vec{z}\neq\vec{0}\), if \(A\vec{z}=\lambda\vec{z}\).
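For a numerical illustration (assuming NumPy; the matrix below is a made-up example), np.linalg.eig returns the eigenvalues of a matrix along with a matrix whose columns are corresponding eigenvectors, and each pair can be checked against \(A\vec{z}=\lambda\vec{z}\):

```python
import numpy as np

# A made-up real 2x2 matrix whose eigenvalues form a complex-conjugate pair.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors

for k, lam in enumerate(eigenvalues):
    z = eigenvectors[:, k]
    print(lam, np.allclose(A @ z, lam * z))    # prints each eigenvalue (here ±i) and True
```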
Equal
Two matrices \(A\) and \(B\) are equal if and only if they have the same size (that is, the same number of rows and the same number of columns) and their corresponding entries are equal. That is, if \(a_{ij}=b_{ij}\) for all \(1\leq i \leq m\) and \(1\leq j \leq n\).
Euler's Formula
Euler's Formula says that \(e^{i\theta}=\cos \theta + i\sin \theta\).
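A quick numerical check of the formula, and of the definition of \(e^z\) above, using Python's cmath module (the values of \(\theta\) and \(z\) are arbitrary):

```python
import cmath

theta = 0.7   # an arbitrary angle, in radians
print(abs(cmath.exp(1j * theta) - (cmath.cos(theta) + 1j * cmath.sin(theta))) < 1e-12)  # True

# The same identity gives e^z = e^x * e^{iy} for z = x + iy.
z = 1.3 + 0.7j
print(abs(cmath.exp(z) - cmath.exp(z.real) * cmath.exp(1j * z.imag)) < 1e-12)           # True
```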
Hermitian
An \(n\times n\) matrix \(A\) with complex entries is called Hermitian if \(A^*=A\) or, equivalently, if \(\overline{A}=A^T\).
Inner Product
Let \(\mathbb{V}\) be a vector space over \(\mathbb{R}\). An inner product on \(\mathbb{V}\) is a function \(\langle\ ,\ \rangle\ :\mathbb{V}\times\mathbb{V} \rightarrow \mathbb{R}\) such that
1. \(\langle\mathbf{v},\mathbf{v}\rangle\geq 0\) for all \(\mathbf{v}\in\mathbb{V}\), and \(\langle\mathbf{v},\mathbf{v}\rangle = 0\) if and only if \(\mathbf{v}=\mathbf{0}\) (positive definite)
2. \(\langle\mathbf{v},\mathbf{w}\rangle=\langle\mathbf{w},\mathbf{v}\rangle\) for all \(\mathbf{v},\mathbf{w}\in\mathbb{V}\) (symmetric)
3. \(\langle\mathbf{v},s\mathbf{w}+t\mathbf{z}\rangle= s\langle\mathbf{v},\mathbf{w}\rangle +t\langle\mathbf{v},\mathbf{z}\rangle\) for all \(s,t\in\mathbb{R}\) and \(\mathbf{v},\mathbf{w},\mathbf{z}\in\mathbb{V}\) (bilinear)
Inner Product Space
A vector space \(\mathbb{V}\) with an inner product is called an inner product space.
Invariant Subspace
If \(T:\mathbb{V}\rightarrow\mathbb{V}\) is a linear operator and \(\mathbb{U}\) is a subspace of \(\mathbb{V}\) such that \(T(\mathbf{u})\in\mathbb{U}\) for all \(\mathbf{u}\in\mathbb{U}\), then \(\mathbb{U}\) is called an invariant subspace of \(T\).
Isomorphism, Isomorphic
If \(\mathbb{U}\) and \(\mathbb{V}\) are vector spaces over \(\mathbb{R}\), and if \(L:\mathbb{U}\rightarrow\mathbb{V}\) is a linear, one-to-one, and onto mapping, then \(L\) is called an isomorphism (or a vector space isomorphism), and \(\mathbb{U}\) and \(\mathbb{V}\) are said to be isomorphic.
Length
Let \(\vec{z}\) be a vector in \(\mathbb{C}^n\). Then we define the length of \(\vec{z}\) by \(||\vec{z}||=\sqrt{\langle \vec{z},\vec{z} \rangle}=\sqrt{\vec{z}\cdot\overline{\vec{z}}}\).
Linearly Independent, Linearly Dependent (in \(\mathbb{C}^n\))
If \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}\) is a set of vectors in a vector space \(\mathbb{V}\) over \(\mathbb{C}\), then \(\mathcal{B}\) is said to be linearly independent if the only solution to the equation \[ \alpha_1\mathbf{v}_1+\cdots+\alpha_k\mathbf{v}_k=\mathbf{0} \] is \(\alpha_1=\cdots=\alpha_k=0\); otherwise, \(\mathcal{B}\) is said to be linearly dependent.
Linearly Independent, Linearly Dependent (in matrices)
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices. Then \(\mathcal{B}\) is said to be linearly independent if the only solution to the equation \[ t_1A_1+\cdots+t_kA_k=0_{m,n} \] is the trivial solution \(t_1=\cdots = t_k=0\). Otherwise, \(\mathcal{B}\) is said to be linearly dependent.
Linearly Independent, Linearly Dependent (in polynomials)
The set \(\mathcal{B}=\{p_1(x),\ldots,p_k(x)\}\) is said to be linearly independent if the only solution to the equation \[ t_1p_1(x)+\cdots+t_kp_k(x)=0 \] is the trivial solution \(t_1=\cdots=t_k=0\). Otherwise, \(\mathcal{B}\) is said to be linearly dependent.
Linearly Independent, Linearly Dependent, Trivial Solution (in \(\mathbb{R}^n\))
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly independent if the only solution to \[ \vec{0}=t_1\vec{v}_1+\cdots+t_k\vec{v}_k \] is \(t_1=\cdots=t_k=0\). This is called the trivial solution. A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly dependent if there exist coefficients \(t_1,\ldots,t_k\), not all zero, such that \[ \vec{0}=t_1\vec{v}_1+\cdots+t_k\vec{v}_k \]
Linearly Independent, Linearly Dependent (in \(\mathbb{V}\))
If \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}\) is a set of vectors in a vector space \(\mathbb{V}\), then \(\mathcal{B}\) is said to be linearly independent if the only solution to the equation \[ t_1\mathbf{v}_1+\cdots+t_k\mathbf{v}_k=\mathbf{0} \] is \(t_1=\cdots=t_k=0\); otherwise, \(\mathcal{B}\) is said to be linearly dependent.
Linear Mapping
If \(\mathbb{V}\) and \(\mathbb{W}\) are vector spaces over the complex numbers, then a mapping \(L:\mathbb{V}\rightarrow\mathbb{W}\) is a linear mapping if for any \(\alpha\in\mathbb{C}\) and \(\mathbf{v}_1,\mathbf{v}_2\in\mathbb{V}\) we have \[ L(\alpha\mathbf{v}_1+\mathbf{v}_2)=\alpha L(\mathbf{v}_1)+L(\mathbf{v}_2) \]
Linear Mapping, Linear Operator
If \(\mathbb{V}\) and \(\mathbb{W}\) are vector spaces over \(\mathbb{R}\), a function \(L: \mathbb{V} \rightarrow \mathbb{W}\) is a linear mapping if it satisfies the linearity properties
L1. \(L(\mathbf{x}+\mathbf{y})=L(\mathbf{x})+L(\mathbf{y})\)
L2. \(L(t\mathbf{x})=tL(\mathbf{x})\)
for all \(\mathbf{x},\mathbf{y}\in\mathbb{V}\) and \(t\in\mathbb{R}\). If \(\mathbb{W}=\mathbb{V}\), then \(L\) may be called a linear operator.
Matrix
A matrix is a rectangular array of numbers. We say that \(A\) is an \(m\times n\) matrix when \(A\) has \(m\) rows and \(n\) columns, such as \[ A=\left[\begin{array}{cccccc} a_{11} &a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} &a_{22} & \cdots & a_{2j} & \cdots& a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} &a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{m1} &a_{m2} & \cdots & a_{mj} & \cdots& a_{mn} \end{array}\right] \]
Matrix of \(L\) with Respect to the Bases \(\mathcal{B}\) and \(\mathcal{C}\)
Let \(\mathbb{V}\) be a vector space with basis \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}\), let \(\mathbb{W}\) be a vector space with basis \(\mathcal{C}\), and let \(L:\mathbb{V}\rightarrow\mathbb{W}\) be a linear mapping. We define the matrix of \(L\) with respect to the bases \(\mathcal{B}\) and \(\mathcal{C}\) to be the matrix\[ _{\mathcal{C}}[L]_{\mathcal{B}}=\left[\begin{array}{ccc} [L(\mathbf{v}_1)]_{\mathcal{C}} & \cdots & [L(\mathbf{v}_n)]_{\mathcal{C}} \end{array}\right] \]
Matrix Multiplication
Let \(B\) be an \(m\times n\) matrix with rows \(\vec{b}_1^T,\ldots,\vec{b}_m^T\) and \(A\) be an \(n\times p\) matrix with columns \(\vec{a}_1,\ldots,\vec{a}_p\). Then we define \(BA\) to be the matrix whose \(ij\)-th entry is \((BA)_{ij}=\vec{b}_i\cdot\vec{a}_j\).
Modulus
Given a complex number \(z=x+yi\), we define the modulus of \(z\) (denoted \(|z|\)) to be the real number \[ |z|=r=\sqrt{x^2+y^2} \]
Multiplication
Multiplication of complex numbers \(z_1=x_1+y_1i\) and \(z_2=x_2+y_2i\) is defined by \[\begin{array}{rl} z_1z_2 & = (x_1+y_1i)(x_2+y_2i) \\ & = x_1x_2+x_1y_2i+x_2y_1i+y_1y_2i^2 \\ & = x_1x_2 + (x_1y_2+x_2y_1)i + (y_1y_2)(-1) \\ & = (x_1x_2-y_1y_2)+(x_1y_2+x_2y_1)i \end{array} \]
Norm (Length)
Let \(\mathbb{V}\) be an inner product space. Then, for any \(\mathbf{v}\in\mathbb{V}\), we define the norm (or length) of \(\mathbf{v}\) to be \[ ||\mathbf{v}||=\sqrt{\langle \mathbf{v},\mathbf{v} \rangle} \]
Nullity of a Linear Mapping
Let \(\mathbb{V}\) and \(\mathbb{W}\) be vector spaces over \(\mathbb{R}\). The nullity of a linear mapping \(L:\mathbb{V}\rightarrow\mathbb{W}\) is the dimension of the nullspace of \(L\): \[ \mbox{nullity}(L)=\mbox{dim}(\mbox{Null}(L)) \]
Nullspace
The nullspace of \(L\) is the set of all vectors in \(\mathbb{V}\) whose image under \(L\) is the zero vector \(\mathbf{0}_{\mathbb{W}}\). We write
\[ \mbox{Null}(L)=\{\mathbf{x}\in\mathbb{V}\ |\ L(\mathbf{x})=\mathbf{0}_{\mathbb{W}} \} \]
Nullspace
The nullspace of an \(m\times n\) matrix \(A\) is \[ \mbox{Null}(A)=\{\vec{z}\in\mathbb{C}^n\ |\ A\vec{z}=\vec{0} \} \]
One-to-One
A linear mapping \(L:\mathbb{U}\rightarrow\mathbb{V}\) is said to be one-to-one if \(L(\mathbf{u}_1)=L(\mathbf{u}_2)\) implies \(\mathbf{u}_1=\mathbf{u}_2\).
Onto
\(L:\mathbb{U}\rightarrow\mathbb{V}\) is said to be onto if for every \(\mathbf{v}\in\mathbb{V}\), there exists some \(\mathbf{u}\in\mathbb{U}\) such that \(L(\mathbf{u})=\mathbf{v}\). That is, \(\mbox{Range}(L)=\mathbb{V}\).
Orthogonal, Orthonormal (general)
Let \(\mathbb{V}\) be an inner product space over \(\mathbb{R}\) (\(\mathbb{C}\)), with inner product \(\langle\ ,\ \rangle\). Then two vectors \(\mathbf{v},\mathbf{w}\in\mathbb{V}\) are said to be orthogonal if \(\langle \mathbf{v}, \mathbf{w} \rangle=0\). The set of vectors \(\{ \mathbf{v}_1,\ldots,\mathbf{v}_n\}\) in \(\mathbb{V}\) is said to be orthogonal if \(\langle \mathbf{v}_j , \mathbf{v}_k \rangle = 0 \) for all \(j\neq k\). If we also have \(\langle \mathbf{v}_j , \mathbf{v}_j \rangle = 1\) for all \(1\leq j \leq n\), then the set is said to be orthonormal.
Orthogonal, Orthonormal (in \(\mathbb{R}^n\))
Two vectors \(\vec{x}\) and \(\vec{y}\) in \(\mathbb{R}^n\) are said to be orthogonal if \(\vec{x}\cdot\vec{y}=0\).
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in \(\mathbb{R}^n\) is orthogonal if \(\vec{v}_i\cdot\vec{v}_j=0\) whenever \(i\neq j\). If we also have that every vector \(\vec{v}_i\) in the set is a unit vector, then the set is said to be orthonormal.
We say that a vector \(\vec{x}\) is orthogonal to a subspace \(\mathbb{S}\) of \(\mathbb{R}^n\) if \(\vec{x}\cdot\vec{s}=0\) for all \(\vec{s}\in\mathbb{S}\).
Orthogonal Complement
Let \(\mathbb{S}\) be a subspace of \(\mathbb{R}^n\). We call the set of all vectors orthogonal to \(\mathbb{S}\) the orthogonal complement of \(\mathbb{S}\) and denote it \(\mathbb{S}^\perp\). That is, \[ \mathbb{S}^\perp=\{\vec{x}\in\mathbb{R}^n\ |\ \vec{x}\cdot\vec{s}=0 \mbox{ for all }\vec{s}\in\mathbb{S}\} \]
Orthogonal Matrix
An \(n\times n\) matrix \(P\) such that \(P^TP=I\) is called an orthogonal matrix. It follows that \(P^{-1}=P^T\) and that \(PP^T=I=P^TP\).
Orthogonally Diagonalizable
A matrix \(A\) is said to be orthogonally diagonalizable if there exists an orthogonal matrix \(P\) and a diagonal matrix \(D\) such that \(P^TAP=D\).
Overdetermined System
An overdetermined system of linear equations is a system that has more equations than variables.
Perpendicular
The projection of \(\vec{x}\) perpendicular to \(\mathbb{S}\) is defined to be\[ \mbox{perp}_{\mathbb{S}}\vec{x}=\vec{x}- \mbox{proj}_{\mathbb{S}}\vec{x} \]
Polar Form
If \(r=|z|\) and if \(\theta\) is an argument for \(z\), then the polar form for \(z\) is \(r(\cos \theta + i \sin \theta)\).
Projection
Let \(\mathbb{S}\) be a \(k\)-dimensional subspace of \(\mathbb{R}^n\) and let \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_k\}\) be an orthogonal basis of \(\mathbb{S}\). If \(\vec{x}\) is any vector in \(\mathbb{R}^n\), the projection of \(\vec{x}\) onto \(\mathbb{S}\) is defined to be\[ \mbox{proj}_{\mathbb{S}}\vec{x}=\frac{\vec{x}\cdot\vec{v}_1}{||\vec{v}_1||^2}\vec{v}_1+\frac{\vec{x}\cdot\vec{v}_2}{||\vec{v}_2||^2}\vec{v}_2 + \cdots + \frac{\vec{x}\cdot\vec{v}_k}{||\vec{v}_k||^2}\vec{v}_k \]
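A minimal NumPy sketch of this formula, using a made-up orthogonal basis for a \(2\)-dimensional subspace \(\mathbb{S}\) of \(\mathbb{R}^3\); it also computes \(\mbox{perp}_{\mathbb{S}}\vec{x}\) from the Perpendicular entry above and checks that it is orthogonal to both basis vectors:

```python
import numpy as np

# Made-up orthogonal basis for a 2-dimensional subspace S of R^3.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 1.0])
assert np.isclose(v1 @ v2, 0.0)   # the basis really is orthogonal

x = np.array([3.0, 2.0, -1.0])

proj = (x @ v1) / (v1 @ v1) * v1 + (x @ v2) / (v2 @ v2) * v2   # proj_S(x)
perp = x - proj                                                # perp_S(x)

# perp_S(x) is orthogonal to S, so it is orthogonal to both basis vectors.
print(np.isclose(perp @ v1, 0.0), np.isclose(perp @ v2, 0.0))  # True True
```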
Projection of \(\mathbf{x}\) Onto \(\mathbb{S}\), Projection of \(\mathbf{x}\) Perpendicular to \(\mathbb{S}\)
If \(\mathbb{V}\) is an inner product space, and if \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}\) is an orthogonal basis for a subspace \(\mathbb{S}\), then for any \(\mathbf{x}\in\mathbb{V}\) the projection of \(\mathbf{x}\) onto \(\mathbb{S}\) is given by \[ \mbox{proj}_{\mathbb{S}}\mathbf{x}=\frac{\langle \mathbf{x},\mathbf{v}_1 \rangle}{\langle \mathbf{v}_1,\mathbf{v}_1 \rangle}\mathbf{v}_1+\cdots+\frac{\langle \mathbf{x},\mathbf{v}_k \rangle}{\langle \mathbf{v}_k,\mathbf{v}_k \rangle}\mathbf{v}_k \] and the projection of \(\mathbf{x}\) perpendicular to \(\mathbb{S}\) is \[ \mbox{perp}_{\mathbb{S}}\mathbf{x}=\mathbf{x}-\mbox{proj}_{\mathbb{S}}\mathbf{x} \]
Projection of \(\vec{z}\in\mathbb{C}^n\) Onto \(\mathbb{S}\)
Let \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_k\}\) be an orthogonal basis for a subspace \(\mathbb{S}\) of \(\mathbb{C}^n\). Then the projection of \(\vec{z}\in\mathbb{C}^n\) onto \(\mathbb{S}\) is given by \[ \mbox{proj}_{\mathbb{S}}\vec{z}=\frac{\langle \vec{z},\vec{v}_1 \rangle}{\langle \vec{v}_1,\vec{v}_1 \rangle}\vec{v}_1 + \cdots + \frac{\langle \vec{z},\vec{v}_k \rangle}{\langle \vec{v}_k,\vec{v}_k\rangle}\vec{v}_k \]
Polynomial Addition, Polynomial Scalar Multiplication
If \(p(x)=a_0+a_1x+\cdots+a_nx^n\) and \(q(x)=b_0+b_1x+\cdots+b_nx^n\) are both polynomials of degree less than or equal to \(n\), and if \(t\) is a scalar (that is, \(t\in\mathbb{R}\)), then there are polynomials \((p+q)(x)\) and \((tp)(x)\) of degree less than or equal to \(n\) defined as follows: \[ (p+q)(x)=(a_0+b_0)+(a_1+b_1)x+\cdots+(a_n+b_n)x^n \] and \[ (tp)(x)=(ta_0)+(ta_1)x+\cdots+(ta_n)x^n \]
Quotient
The quotient of two complex numbers \(z_1=x_1+y_1i\) and \(z_2=x_2+y_2i\) is \[ \frac{z_1}{z_2} = \frac{z_1\overline{z_2}}{z_2\overline{z_2}} = \frac{x_1x_2+y_1y_2}{x_2^2+y_2^2} + \frac{y_1x_2-x_1y_2}{x_2^2+y_2^2}i \]
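The formula amounts to multiplying the numerator and denominator by \(\overline{z_2}\); a short Python check (the values of \(z_1\) and \(z_2\) are arbitrary, with \(z_2\neq 0\)):

```python
z1, z2 = 3 + 2j, 1 - 4j           # arbitrary complex numbers, z2 != 0

x1, y1 = z1.real, z1.imag
x2, y2 = z2.real, z2.imag

# Quotient computed from the formula in the entry above.
quotient = (x1*x2 + y1*y2) / (x2**2 + y2**2) + ((y1*x2 - x1*y2) / (x2**2 + y2**2)) * 1j

print(abs(quotient - z1 / z2) < 1e-12)   # True: agrees with Python's built-in division
```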
\(\mathbb{R}^n\)
\(\mathbb{R}^n\) is the set of all vectors of the form \(\left[ \begin{array}{c} x_1\\ \vdots \\ x_n \end{array} \right]\), where \(x_i\in\mathbb{R}\). That is \[ \mathbb{R}^n = \left\{ \left[ \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right] \ \mid \ x_1,\ldots,x_n \in \mathbb{R} \right\} \]
Range
Let \(\mathbb{V}\) and \(\mathbb{W}\) be vector spaces over \(\mathbb{R}\). The range of a linear mapping \(L:\mathbb{V}\rightarrow\mathbb{W}\) is defined to be the set \[ \mbox{Range}(L)=\{L(\mathbf{x})\in\mathbb{W}\ |\ \mathbf{x}\in\mathbb{V} \} \]
Rank of a Linear Mapping
Let \(\mathbb{V}\) and \(\mathbb{W}\) be vector spaces over \(\mathbb{R}\). The rank of a linear mapping \(L:\mathbb{V}\rightarrow\mathbb{W}\) is the dimension of the range of \(L\):\[ \mbox{rank}(L)=\mbox{dim}(\mbox{Range}(L)) \]
Real Canonical Form
Let \(A\) be a \(2\times 2\) real matrix with eigenvalue \(\lambda=a+ib\), \(b\neq 0\). The matrix \(\left[\begin{array}{rr} a&b \\ -b&a \end{array}\right]\) is called a real canonical form for \(A\).
Real Part, Imaginary Part, Purely Real, Purely Imaginary
If \(z=x+yi\), we say that the real part of \(z\) is \(x\), and we write \(\mbox{Re}(z)=x\), and we say that the imaginary part of \(z\) is \(y\), and we write \(\mbox{Im}(z)=y\). If \(\mbox{Im}(z)=0\), then \(z\) is a real number, and we sometimes say that \(z\) is purely real. If \(\mbox{Re}(z)=0\), then we say that \(z\) is purely imaginary.
Rowspace
Given an \(m\times n\) matrix \(A\), the rowspace of \(A\) is the subspace spanned by the rows of \(A\) (regarded as vectors) and is denoted \(\mbox{Row}(A)\).
Scalar Multiplication
Let \(A\) be an \(m\times n\) matrix, and \(t\in\mathbb{R}\) a scalar. We define the scalar multiplication of matrices by \[ (tA)_{ij}=t(A)_{ij} \] That is, the \(ij\)-th entry of \(tA\) is \(t\) times the \(ij\)-th entry of \(A\).
Span (in matrices)
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices. Then the span of \(\mathcal{B}\) is defined as \[ \mbox{Span }\mathcal{B}=\{t_1A_1+\cdots+t_kA_k\ \mid \ t_1,\ldots,t_k\in\mathbb{R} \} \] That is, \(\mbox{Span }\mathcal{B}\) is the set of all linear combinations of the matrices in \(\mathcal{B}\).
Span (in polynomials)
Let \(\mathcal{B}=\{p_1(x),\ldots,p_k(x)\}\) be a set of polynomials of degree at most \(n\). Then the span of \(\mathcal{B}\) is defined as \[ \mbox{Span }\mathcal{B}=\{t_1p_1(x)+\cdots+t_kp_k(x)\ |\ t_1,\ldots,t_k\in\mathbb{R} \} \]
Spanned, Spans, Spanning Set
If \(\mathbb{S}\) is the subspace of the vector space \(\mathbb{V}\) consisting of all linear combinations of the vectors \(\mathbf{v}_1,\ldots,\mathbf{v}_k\in\mathbb{V}\), then \(\mathbb{S}\) is called the subspace spanned by \(\mathcal{B}=\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}\), and we say that the set \(\mathcal{B}\) spans \(\mathbb{S}\). The set \(\mathcal{B}\) is called a spanning set for the subspace \(\mathbb{S}\). We denote \(\mathbb{S}\) by \[ \mathbb{S}=\mbox{Span }\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}=\mbox{Span }\mathcal{B} \]
Standard Inner Product
In \(\mathbb{C}^n\) the standard inner product \(\langle\ ,\ \rangle\) is defined by \[ \langle\vec{z},\vec{w}\rangle=\vec{z}\cdot\overline{\vec{w}} = z_1\overline{w}_1+\cdots+z_n\overline{w}_n \quad \mbox{for } \vec{w},\vec{z}\in\mathbb{C}^n \]
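A short NumPy check of this definition (the vectors are made-up examples). Since the conjugate sits on the second argument here, the formula corresponds to np.vdot(w, z), because np.vdot conjugates its first argument:

```python
import numpy as np

# Made-up vectors in C^2.
z = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])

inner = np.sum(z * np.conj(w))            # <z, w> = z . conj(w), as defined above

# np.vdot conjugates its *first* argument, so <z, w> corresponds to np.vdot(w, z).
print(np.isclose(inner, np.vdot(w, z)))   # True
```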
Subspace
Suppose that \(\mathbb{V}\) is a vector space, and that \(\mathbb{U}\) is a subset of \(\mathbb{V}\). If \(\mathbb{U}\) is a vector space using the same definition of addition and scalar multiplication as \(\mathbb{V}\), then \(\mathbb{U}\) is called a subspace of \(\mathbb{V}\).
Alternate Definition: Suppose that \(\mathbb{V}\) is a vector space. Then \(\mathbb{U}\) is a subspace of \(\mathbb{V}\) if it satisfies the following three properties:
S0: \(\mathbb{U}\) is a non-empty subset of \(\mathbb{V}\)
S1: \(\mathbf{x}+\mathbf{y}\in\mathbb{U}\) for all \(\mathbf{x},\mathbf{y}\in\mathbb{U}\) (\(\mathbb{U}\) is closed under addition)
S2: \(t\mathbf{x}\in\mathbb{U}\) for all \(\mathbf{x}\in\mathbb{U}\) and \(t\in\mathbb{R}\) (\(\mathbb{U}\) is closed under scalar multiplication)
Symmetric
A matrix \(A\) is symmetric if \(A^T=A\) or, equivalently, if \(a_{ij}=a_{ji}\) for all \(i\) and \(j\).
Unit Vector
A vector \(\mathbf{v}\) in an inner product space \(\mathbb{V}\) is called a unit vector if \(||\mathbf{v}||=1\).
Unitary
An \(n\times n\) matrix with complex entries is said to be unitary if its columns form an orthonormal basis for \(\mathbb{C}^n\).
Vector Space Over \(\mathbb{R}\) (\(\mathbb{C}\)) (Addition, Scalar Multiplication, Zero Vector)
A vector space over \(\mathbb{R}\) (\(\mathbb{C}\)) is a set \(\mathbb{V}\) together with an operation of addition, usually denoted \(\mathbf{x}+\mathbf{y}\) for any \(\mathbf{x}, \mathbf{y} \in \mathbb{V}\), and an operation of scalar multiplication, usually denoted \(s\mathbf{x}\) for any \(\mathbf{x} \in \mathbb{V}\) and \(s \in \mathbb{R}\) (\(\mathbb{C}\)), such that for any \(\mathbf{x}, \mathbf{y}, \mathbf{z} \in \mathbb{V}\) and \(s, t \in \mathbb{R}\) (\(\mathbb{C}\)) we have the following properties:
V1. \(\mathbf{x}+\mathbf{y} \in \mathbb{V}\) (closed under addition)
V2. \((\mathbf{x}+\mathbf{y})+\mathbf{z}=\mathbf{x}+(\mathbf{y}+\mathbf{z})\) (addition is associative)
V3. There is an element \(\mathbf{0} \in \mathbb{V}\) (called the zero vector) such that \(\mathbf{x}+\mathbf{0} = \mathbf{x}=\mathbf{0}+\mathbf{x}\) (additive identity)
V4. For each \(\mathbf{x} \in \mathbb{V}\), there exists an element \(-\mathbf{x}\) such that \(\mathbf{x}+(-\mathbf{x})=\mathbf{0}\) (additive inverse)
V5. \(\mathbf{x}+\mathbf{y}=\mathbf{y}+\mathbf{x}\) (addition is commutative)
V6. \(s\mathbf{x} \in \mathbb{V}\) (closed under scalar multiplication)
V7. \(s(t\mathbf{x})=(st)\mathbf{x}\) (scalar multiplication is associative)
V8. \((s+t)\mathbf{x}=s\mathbf{x}+t\mathbf{x}\) (scalar addition is distributive)
V9. \(s(\mathbf{x}+\mathbf{y})=s\mathbf{x}+s\mathbf{y}\) (scalar multiplication is distributive)
V10. \(1\mathbf{x}=\mathbf{x}\) (scalar multiplicative identity)
Math 106
\(\mathbb{R}^n\)
\(\mathbb{R}^n\) is the set of all vectors of the form \(\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix}\), where \(x_i\in\mathbb{R}\). In set notation, we write \[ \mathbb{R}^n = \left\{ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \mid x_1, \ldots ,x_n \in \mathbb{R} \right\} \]
Algebraic Multiplicity
Let \(A\) be an \(n\times n\) matrix with eigenvalue \(\lambda\). The algebraic multiplicity of \(\lambda\) is the number of times \(\lambda\) is repeated as a root of the characteristic polynomial.
Basis
If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a spanning set for a subspace \(S\) of \(\mathbb{R}^n\) and \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is linearly independent, then \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is called a basis for \(S\).
Characteristic Polynomial
Let \(A\) be an \(n\times n\) matrix. Then \(C(\lambda)=\det(A-\lambda I)\) is called the characteristic polynomial of \(A\).
Cofactor
Let \(A\) be an \(n\times n\) matrix, and let \(a_{ij}\) be an entry of \(A\). Then the cofactor of \(a_{ij}\) is defined to be \[ C_{ij}=(-1)^{i+j}\det A(i,j) \]
Cofactor Matrix
Let \(A\) be an \(n\times n\) matrix. We define the cofactor matrix of \(A\), denoted \(\mbox{cof }A\), by \[ (\mbox{cof }A)_{ij} = C_{ij} \] That is, the \(ij\) entry of \(\mbox{cof }A\) is the cofactor of the \(ij\) entry of \(A\).
Columnspace
Let \(A\) be an \(m\times n\) matrix, and let \(\vec{c}_1,\vec{c}_2,\ldots,\vec{c}_n\in\mathbb{R}^m\) be the columns of \(A\). Then the columnspace of \(A\), written \(\mbox{Col}(A)\), is \(\mbox{Span}\{\vec{c}_1,\vec{c}_2,\ldots,\vec{c}_n\}\).
Columnspace (Textbook Definition)
The columnspace of an \(m\times n\) matrix \(A\) is the set \(\mbox{Col}(A)\) defined by \[ \mbox{Col}(A)=\{A\vec{x}\in\mathbb{R}^m \ |\ \vec{x}\in\mathbb{R}^n \} \]
Consistent
A system of linear equations that has at least one solution is called consistent.
Cross-Product
The cross-product of vectors \(\vec{u}=\begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix}\) and \(\vec{v}= \begin{bmatrix} v_1\\v_2\\v_3 \end{bmatrix}\) is defined by \[\vec{u}\times\vec{v}=\begin{bmatrix} u_2v_3-u_3v_2 \\ u_3v_1-u_1v_3 \\ u_1v_2-u_2v_1 \end{bmatrix} \]
Determinant (\(2\times2\))
The determinant of a \(2\times 2\) matrix \(A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\) is defined by \[ \det A = \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11}a_{22}-a_{12}a_{21} \]
Determinant (\(n\times n\))
The determinant of an \(n\times n\) matrix \(A\) is defined by \[ \det A = a_{11}C_{11}+a_{12}C_{12}+\cdots+a_{1n}C_{1n} \]
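A minimal recursive sketch of this cofactor expansion along the first row (assuming NumPy; it is meant only to illustrate the definition, not as an efficient algorithm), compared against np.linalg.det on a made-up matrix:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A(1, j+1): delete row 1 and column j+1 (row 0 and column j in 0-based indexing).
        submatrix = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * det_cofactor(submatrix)   # (-1)^{1+(j+1)} = (-1)^j
        total += A[0, j] * cofactor
    return total

A = np.array([[2.0, 1.0, 3.0],
              [0.0, -1.0, 4.0],
              [5.0, 2.0, 1.0]])
print(np.isclose(det_cofactor(A), np.linalg.det(A)))   # True
```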
Diagonal Matrix
A matrix \(D\) that is both upper and lower triangular is called a diagonal matrix (that is, \(d_{ij}=0\) for all \(i\neq j\)). In this case, the non-zero entries are only on the “main diagonal” of the matrix.
Diagonalizable
If there exists an invertible matrix \(P\) and diagonal matrix \(D\) such that \(P^{-1}AP=D\), then we say \(A\) is diagonalizable and that the matrix \(P\) diagonalizes \(A\) to its diagonal form \(D\).
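When \(A\) has \(n\) linearly independent eigenvectors, a diagonalizing \(P\) can be assembled from them; a minimal NumPy sketch on a made-up \(2\times 2\) matrix (assuming its eigenvector matrix is invertible, which holds here since the eigenvalues are distinct):

```python
import numpy as np

# A made-up matrix with distinct eigenvalues (5 and 2), hence diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)   # the columns of P are eigenvectors of A
D = np.diag(eigenvalues)

print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True: P diagonalizes A to D
```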
Dilation, Contraction
For \(t\in\mathbb{R}\), \(t>1\), the dilation of \(\vec{x}\) by a factor of \(t\) is the function \(T(\vec{x})=t\vec{x}\). If \(0 \lt t \lt 1\), the function \(T(\vec{x})=t\vec{x}\) is called the contraction of \(\vec{x}\) by a factor of \(t\). In either case, the standard matrix of \(T\) is obtained by multiplying the identity matrix by \(t\).
Dimension
If \(S\) is a non-trivial subspace of \(\mathbb{R}^n\) with a basis containing \(k\) vectors, then we say that the dimension of \(S\) is \(k\) and write \(\operatorname{dim}S=k\).
Directed Line Segment
The directed line segment from a point \(P\) in \(\mathbb{R}^2\) to a point \(Q\) in \(\mathbb{R}^2\) is drawn as an arrow with starting point \(P\) and tip \(Q\). It is denoted by \(\vec{PQ}\).
Directed Line Segment - Equivalence
We define two directed line segments \(\vec{PQ}\) and \(\vec{RS}\) to be equivalent if \(\vec{q}-\vec{p}=\vec{s}-\vec{r}\), in which case we shall write \(\vec{PQ}=\vec{RS}\). In the case where \(R=O\), we get that \(\vec{PQ}\) is equivalent to \(\vec{OS}\) if \(\vec{q}-\vec{p}=\vec{s}\).
Distance From a Line to a Point
The distance from the line \(\vec{x}=\vec{p}+t\vec{d}\) to the point \(Q\) is the minimum distance from the point \(Q\) to any point on the line, which equals \(||\mbox{perp}_{\vec{d}}\vec{PQ}||\), where \(P\) is any point on the line.
Dot Product
Let \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix}\) and \(\vec{y}=\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}\) be vectors in \(\mathbb{R}^n\). Then the dot product of \(\vec{x}\) and \(\vec{y}\) is \[ \vec{x} \cdot \vec{y} = x_1y_1+x_2y_2+\cdots + x_ny_n \]
Eigenspace
Let \(\lambda\) be an eigenvalue of an \(n\times n\) matrix \(A\). Then the set containing the zero vector and all eigenvectors of \(A\) corresponding to \(\lambda\) is called the eigenspace of \(\lambda\).
Elementary Matrix
A matrix that can be obtained from the identity matrix by a single elementary row operation is called an elementary matrix.
Elementary Row Operations
There are three types of elementary row operations (EROs), which correspond to the three steps of Gaussian elimination (each is illustrated in the sketch after this list):
- Multiply one row by a non-zero constant.
- Interchange two rows.
- Add a scalar multiple of one row to another row.
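Following the note above, here is a minimal NumPy sketch (the matrix \(A\) is a made-up example) showing each ERO carried out by left-multiplying by the corresponding elementary matrix, i.e. the matrix obtained by applying that ERO to the identity:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
I = np.eye(2)

# ERO 1: multiply row 0 by the non-zero constant 5.
E1 = I.copy(); E1[0, 0] = 5.0
# ERO 2: interchange rows 0 and 1.
E2 = I[[1, 0], :]
# ERO 3: add 2 times row 0 to row 1.
E3 = I.copy(); E3[1, 0] = 2.0

print(E1 @ A)   # [[5, 10], [3, 4]]   (row 0 scaled by 5)
print(E2 @ A)   # [[3, 4], [1, 2]]    (rows interchanged)
print(E3 @ A)   # [[1, 2], [5, 8]]    (2*(row 0) added to row 1)
```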
Function, Domain, Codomain
A function \(f\) is a rule that assigns to every element \(x\) of a set called the domain a unique value \(y\) in another set called the codomain.
Geometric Multiplicity
Let \(A\) be an \(n\times n\) matrix with eigenvalue \(\lambda\). The geometric multiplicity of \(\lambda\) is the dimension of the eigenspace of \(\lambda\).
Homogeneous
A linear equation is homogeneous if the right-hand side is zero. A system of linear equations is homogeneous if all of the equations of the system are homogeneous.
Hyperplane
Let \(\vec{v}_1,\ldots,\vec{v}_{n-1},\vec{p}\in\mathbb{R}^n\), with \(\{\vec{v}_1,\ldots,\vec{v}_{n-1}\}\) being a linearly independent set. Then the set with vector equation \[\vec{x}=\vec{p}+t_1\vec{v}_1+\cdots+t_{n-1}\vec{v}_{n-1}, \quad t_1,\ldots,t_{n-1}\in\mathbb{R}\] is called a hyperplane in \(\mathbb{R}^n\) that passes through \(\vec{p}\).
Identity Matrix
The \(n\times n\) matrix \(I_n=\mbox{diag}(1,1,\ldots,1)\) is called the identity matrix. That is, the identity matrix is a diagonal matrix, with all the diagonal entries equal to \(1\).
Inconsistent
A system of linear equations that does not have any solutions is called inconsistent.
Length/Norm
Let \(\vec{x}= \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}\). Then we define the norm or length of \(\vec{x}\) by \[ ||\vec{x}|| = \sqrt{\vec{x} \cdot\vec{x}}=\sqrt{x_1^2+\cdots +x_n^2} \]
Line
Let \(\vec{p},\vec{v}\in\mathbb{R}^n\) with \(\vec{v}\neq\vec{0}\). Then we call the set with vector equation \(\vec{x}=\vec{p}+t\vec{v},\ t\in\mathbb{R}\) a line in \(\mathbb{R}^n\) that passes through \(\vec{p}\), with direction vector \(\vec{v}\).
Linear Equation
A linear equation in \(n\) variables \(x_1,\ldots,x_n\) is an equation that can be written in the form \[ a_1x_1+a_2x_2+\cdots+a_nx_n=b\]
Linear Mapping/Transformation
A function \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is called a linear mapping or linear transformation if for every \(\vec{x},\vec{y}\in\mathbb{R}^n\) and \(t\in\mathbb{R}\) it satisfies the following properties:
L1\(\quad L(\vec{x}+\vec{y})=L(\vec{x})+L(\vec{y})\)
L2\(\quad L(t\vec{x})=tL(\vec{x})\)
Linear Mapping - Addition
Let \(L\) and \(M\) be linear mappings from \(\mathbb{R}^n\) to \(\mathbb{R}^m\). We define \((L+M)\) to be the mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) such that \[ (L+M)(\vec{x})=L(\vec{x})+M(\vec{x}) \]
Linear Mapping - Composition
Let \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) and \(N:\mathbb{R}^m\rightarrow\mathbb{R}^p\) be linear mappings. The composition \(N\circ L:\mathbb{R}^n\rightarrow\mathbb{R}^p\) is defined by \[ (N\circ L)(\vec{x})=N(L(\vec{x})) \qquad \mbox{for all } \vec{x}\in\mathbb{R}^n \]
Linear Mapping - Eigenvector, Eigenvalue, Eigenpair
Suppose that \(L:\mathbb{R}^n\rightarrow \mathbb{R}^n\) is a linear mapping. A non-zero vector \(\vec{v}\in\mathbb{R}^n\) such that \(L(\vec{v})=\lambda\vec{v}\) (for some real number \(\lambda\)) is called an eigenvector of \(L\), and the scalar \(\lambda\) is called an eigenvalue of \(L\). The pair \(\lambda,\vec{v}\) is called an eigenpair.
Linear Mapping - Inverse
If \(L:\mathbb{R}^n\rightarrow\mathbb{R}^n\) is a linear mapping and there exists another linear mapping \(M:\mathbb{R}^n\rightarrow\mathbb{R}^n\) such that \(M\circ L=\mbox{Id}=L\circ M\), then \(L\) is said to be invertible, and \(M\) is called the inverse of \(L\), usually denoted \(L^{-1}\).
Linear Mapping - Nullspace
The nullspace of a linear mapping \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is the set of all vectors in \(\mathbb{R}^n\) whose image under \(L\) is the zero vector, \(\vec{0}\). We write \[ \mbox{Null}(L)=\{ \vec{x} \in \mathbb{R}^n \mid L(\vec{x})=\vec{0} \} \]
Linear Mapping - Scalar Multiplication
Let \(L\) be a linear mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\), and let \(t\in\mathbb{R}\) be a scalar. We define \((tL)\) to be the mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) such that \[ (tL)(\vec{x})=t(L(\vec{x})) \]
Linear Operator
A linear operator is a linear mapping whose domain and codomain are the same.
Lower Triangular
A square matrix \(L\) is said to be lower triangular if the entries above the main diagonal are all zero (that is, \(l_{ij}=0\) whenever \(i \lt j\)). This means that the only non-zero entries are in the “lower” part of the matrix.
Matrix
A matrix is a rectangular array of numbers. We say that \(A\) is an \(m\times n\) matrix when \(A\) has \(m\) rows and \(n\) columns, such as \[ A=\left[\begin{array}{cccccc} a_{11} &a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} &a_{22} & \cdots & a_{2j} & \cdots& a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} &a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{m1} &a_{m2} & \cdots & a_{mj} & \cdots& a_{mn} \end{array}\right] \]
Matrix - Addition
Let \(A\) and \(B\) be \(m\times n\) matrices. We define addition of matrices by \[ (A+B)_{ij}=(A)_{ij}+(B)_{ij} \] That is, the \(ij\)-th entry of \(A+B\) is the sum of the \(ij\)-th entry of \(A\) with the \(ij\)-th entry of \(B\).
Matrix - Eigenvector, Eigenvalue, Eigenpair
Suppose that \(A\) is an \(n\times n\) matrix. A non-zero vector \(\vec{v}\in\mathbb{R}^n\) such that \(A\vec{v}=\lambda\vec{v}\) is called an eigenvector of \(A\), and the scalar \(\lambda\) is called an eigenvalue of \(A\). The pair \(\lambda,\vec{v}\) is called an eigenpair.
Matrix - Equivalence
Two matrices \(A\) and \(B\) are equal if and only if they are the same size (i.e., \(A\) and \(B\) have the same number of rows and the same number of columns) and their corresponding entries are equal. That is, if \(a_{ij}=b_{ij}\) for all \(1\leq i \leq m\) and \(1\leq j \leq n\).
Matrix - Inverse
Let \(A\) be an \(n\times n\) matrix. If there exists an \(n\times n\) matrix \(B\) such that \(AB=I=BA\), then \(A\) is said to be invertible, and \(B\) is called the inverse of \(A\) (and \(A\) is the inverse of \(B\)). The inverse of \(A\) is denoted \(A^{-1}\).
Matrix - Linear Combination
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices, and let \(t_1,\ldots,t_k\) be real scalars. Then \(t_1A_1+t_2A_2+\cdots+t_kA_k\) is a linear combination of the matrices in \(\mathcal{B}\).
Matrix - Linear Independence/Dependence
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices. Then \(\mathcal{B}\) is said to be linearly independent if the only solution to the equation \[t_1A_1+\cdots+t_kA_k=O_{m,n}\] is the trivial solution \(t_1=\cdots = t_k=0\). Otherwise, \(\mathcal{B}\) is said to be linearly dependent.
Matrix - Nullspace
The nullspace of an \(m\times n\) matrix \(A\) is \[ \mbox{Null}(A)=\{\vec{x}\in\mathbb{R}^n \mid A\vec{x}=\vec{0} \} \]
Matrix - Product
Let \(B\) be an \(m\times n\) matrix with rows \(\vec{b}^T_1,\ldots,\vec{b}^T_m\), and let \(A\) be an \(n\times p\) matrix with columns \(\vec{a}_1,\ldots,\vec{a}_p\). Then we define the matrix product \(BA\) to be the matrix whose \(ij\)-th entry is \((BA)_{ij}=\vec{b}_i \cdot \vec{a}_j\).
That is, \[BA= \left[ \begin{array}{c} {\vec{b}_1}^T \\ {\vec{b}_2}^T \\ \vdots \\ {\vec{b}_i}^T \\ \vdots \\ {\vec{b}_m}^T \end{array} \right] \left[ \begin{array}{cccccc} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_j & \cdots & \vec{a}_p \end{array} \right]= \left[ \begin{array}{cccccc} {\vec{b}_1} \cdot \vec{a}_1 & {\vec{b}_1} \cdot \vec{a}_2 & \cdots & {\vec{b}_1} \cdot \vec{a}_j & \cdots & {\vec{b}_1} \cdot \vec{a}_p \\ {\vec{b}_2} \cdot \vec{a}_1 & {\vec{b}_2} \cdot \vec{a}_2 & \cdots & {\vec{b}_2} \cdot \vec{a}_j & \cdots & {\vec{b}_2} \cdot \vec{a}_p \\ \vdots & \vdots & & \vdots & & \vdots \\ {\vec{b}_i} \cdot \vec{a}_1 & {\vec{b}_i} \cdot \vec{a}_2 & \cdots & {\vec{b}_i} \cdot \vec{a}_j & \cdots & {\vec{b}_i} \cdot \vec{a}_p \\ \vdots & \vdots & & \vdots & & \vdots \\ {\vec{b}_m} \cdot \vec{a}_1 & {\vec{b}_m} \cdot \vec{a}_2 & \cdots & {\vec{b}_m} \cdot \vec{a}_j & \cdots & {\vec{b}_m} \cdot \vec{a}_p \end{array} \right]\]
Matrix - Product (Alternate Definition)
Let \(B\) be an \(m\times n\) matrix and let \(A\) be an \(n\times p\) matrix. Then the \(ij\)-th entry of \(BA\) is \[(BA)_{ij} = \sum_{k=1}^{n} b_{ik}a_{kj} = \sum_{k=1}^{n} (B)_{ik}(A)_{kj}\]
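A short NumPy sketch (on made-up matrices) comparing this entrywise sum, computed with explicit loops, against NumPy's built-in matrix product:

```python
import numpy as np

# Made-up matrices: B is 2x3 and A is 3x2, so BA is 2x2.
B = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])
A = np.array([[2.0, 1.0],
              [0.0, 5.0],
              [1.0, -2.0]])

m, n = B.shape
p = A.shape[1]
product = np.zeros((m, p))
for i in range(m):
    for j in range(p):
        product[i, j] = sum(B[i, k] * A[k, j] for k in range(n))   # (BA)_{ij}

print(np.allclose(product, B @ A))   # True
```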
Matrix - Scalar Multiplication
Let \(A\) be an \(m\times n\) matrix, and \(t\in\mathbb{R}\) a scalar. We define the scalar multiplication of matrices by \[ (tA)_{ij}=t(A)_{ij} \] That is, the \(ij\)-th entry of \(tA\) is \(t\) times the \(ij\)-th entry of \(A\).
Matrix - Span
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices. Then the span of \(\mathcal{B}\) is defined as \[\operatorname{Span}\mathcal{B}=\{t_1A_1+\cdots+t_kA_k\ |\ t_1,\ldots,t_k\in\mathbb{R} \}\]That is, \(\operatorname{Span}\mathcal{B}\) is the set of all linear combinations of the matrices in \(\mathcal{B}\).
Matrix Mapping
For any \(m\times n\) matrix \(A\), we define a function \(f_A:\mathbb{R}^n\rightarrow\mathbb{R}^m\) called the matrix mapping corresponding to \(A\) by \[ f_A(\vec{x})=A\vec{x} \qquad \mbox{for any \(\vec{x}\in\mathbb{R}^n\)} \]
Nullity
Let \(A\) be an \(m\times n\) matrix. We call the dimension of \(\mbox{Null}(A)\) the nullity of \(A\) and denote it by \(\mbox{nullity}(A)\).
Orthogonal
Two vectors \(\vec{x}\) and \(\vec{y}\) in \(\mathbb{R}^n\) are orthogonal to each other if and only if \(\vec{x} \cdot\vec{y}=0\).
Parametric Equation
The parametric equation of the line \(\vec{x}=\vec{p}+t\vec{d}\) in \(\mathbb{R}^2\) is the collection of equations \[ \begin{array}{rl} \begin{array}{l} x_1=p_1+td_1 \\ x_2=p_2+td_2 \end{array} & t\in\mathbb{R} \end{array} \]
Perpendicular
For any vectors \(\vec{x},\vec{y}\in\mathbb{R}^n\), with \(\vec{x}\neq\vec{0}\), we define the projection of \(\vec{y}\) perpendicular to \(\vec{x}\) to be \[ \mbox{perp}_{\vec{x}}\vec{y}=\vec{y}-\mbox{proj}_{\vec{x}}\vec{y} \]
Pivot
The leading entry in a non-zero row of a matrix in row echelon form is known as a pivot.
Plane
Let \(\vec{v}_1,\vec{v}_2,\vec{p}\in\mathbb{R}^n\), with \(\{\vec{v}_1,\vec{v}_2\}\) being a linearly independent set. Then the set with vector equation \[\vec{x}=\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2,\ t_1,t_2\in\mathbb{R}\] is called a plane in \(\mathbb{R}^n\) that passes through \(\vec{p}\).
Plane - Orthogonal
We say that two planes are orthogonal to each other if their normal vectors are orthogonal to each other.
Plane - Parallel
Two planes in \(\mathbb{R}^3\) are defined to be parallel if the normal vector to one plane is a non-zero scalar multiple of the normal vector of the other plane.
Position Vector
A directed line segment that starts at the origin and ends at a point \(P\) is called the position vector for \(P\).
Projection
The part of \(\vec{y}\) that is in the direction of \(\vec{x}\) is called the projection of \(\vec{y}\) onto \(\vec{x}\), and is denoted by \(\mbox{proj}_{\vec{x}}(\vec{y})\). For \(\vec{x}\neq\vec{0}\), it is given by \[ \mbox{proj}_{\vec{x}}(\vec{y})=\frac{\vec{x}\cdot\vec{y}}{||\vec{x}||^2}\vec{x} \]
Range
The range of a linear mapping \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is defined to be the set \[ \mbox{Range}(L)=\{L(\vec{x})\in\mathbb{R}^m\ |\ \vec{x}\in\mathbb{R}^n\} \]
Rank
The rank of a matrix \(A\) is the number of leading \(1\)s in its reduced row echelon form, and is denoted by \(\operatorname{rank}(A)\).
Reduced Row Echelon Form, Leading \(1\)
A matrix \(R\) is said to be in reduced row echelon form (RREF) if
- It is in row echelon form.
- The leading entry of every non-zero row is a \(1\) (called a leading \(1\)).
- In a column containing a leading \(1\), all the other entries are zeros.
Reflection
Let \(\vec{n}\cdot\vec{x}=0\) define a line (or a plane) through the origin in \(\mathbb{R}^2\) (or \(\mathbb{R}^3\)). A reflection in the line/plane with normal vector \(\vec{n}\) will be denoted \(\operatorname{refl}_{\vec{n}}\), and we have that \[\operatorname{refl}_{\vec{n}}(\vec{p})=\vec{p}-2\operatorname{proj}_{\vec{n}}(\vec{p})\]
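A quick NumPy check of this formula in \(\mathbb{R}^2\) (the vectors are made-up examples): reflecting twice returns the original vector, and a vector lying on the line (orthogonal to \(\vec{n}\)) is fixed.

```python
import numpy as np

def refl(n, p):
    """Reflection of p in the line/plane through the origin with normal vector n."""
    return p - 2 * (p @ n) / (n @ n) * n

n = np.array([1.0, 2.0])   # normal vector of the line n . x = 0 in R^2
p = np.array([3.0, -1.0])  # an arbitrary point

print(np.allclose(refl(n, refl(n, p)), p))   # True: reflecting twice is the identity
print(np.allclose(refl(n, np.array([2.0, -1.0])),
                  np.array([2.0, -1.0])))    # True: [2, -1] lies on the line, so it is fixed
```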
Rotation Mapping
\(R_{\theta}:\mathbb{R}^2\rightarrow\mathbb{R}^2\) is defined to be the transformation that rotates \(\vec{x}\) counterclockwise through angle \(\theta\) to the image \(R_{\theta}(\vec{x})\). The standard matrix for \(R_{\theta}\) is \(\left[\begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right]\).
Row Echelon Form
A matrix is in row echelon form (REF) if
- When all entries in a row are zeros, this row appears below all rows that contain a non-zero entry.
- When two non-zero rows are compared, the first non-zero entry (called the leading entry), in the upper row is to the left of the leading entry in the lower row.
Row Equivalence
If a matrix \(M\) is row reduced into a matrix \(N\) by a sequence of elementary row operations, then we say that \(M\) is row equivalent to \(N\), and we write \(M\sim N\). Note that this is the same as saying that the corresponding systems of equations are equivalent.
Row Reduction
The process of performing elementary row operations on a matrix is called row reduction.
Rowspace
Given an \(m\times n\) matrix \(A\), the rowspace of \(A\) is the subspace spanned by the rows of \(A\) (regarded as vectors) and is denoted \(\operatorname{Row}(A)\).
Scalar Form
The scalar form of the equation of a line is \(x_2=p_2+\dfrac{d_2}{d_1}(x_1-p_1)\), where \(\begin{bmatrix} p_1 \\ p_2 \end{bmatrix}\) is a point on the line, and \(\begin{bmatrix} d_1 \\ d_2 \end{bmatrix}\) is a direction vector for the line.
Shear
For \(s\in\mathbb{R}\), a shear in the \(x_i\) direction by a factor of \(sx_j\) means to “push” \(\vec{x}\) in the \(x_i\) direction by \(sx_j\) (where \(j\neq i\)). Thus, the amount of shear applied to \(\vec{x}\) depends both on \(s\) and on the \(x_j\)-coordinate of \(\vec{x}\). The matrix for a shear is obtained by replacing the \(0\) in the \(ij\)-th entry of the identity matrix with \(s\).
Solution
A vector \(\begin{bmatrix} s_1 \\ \vdots \\ s_n \end{bmatrix}\) in \(\mathbb{R}^n\) is called a solution of a linear equation if the equation is satisfied when we make the substitution \(x_1=s_1\), \(x_2=s_2\), \(\ldots,\ x_n=s_n\).
Solution Set
The solution set to a system of linear equations is the collection of all vectors that are solutions to all the equations in the system. This set will be a subset of \(\mathbb{R}^n\), but it may be the empty set.
Square Matrix
We say that a matrix that has the same number of columns and rows (that is, an \(n \times n\) matrix for some \(n\)) is a square matrix.
Standard Basis for \(\mathbb{R}^n\)
In \(\mathbb{R}^n\), let \(\vec{e}_i\) represent the vector whose \(i\)-th component is \(1\) and all other components are \(0\). The set \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) is called the standard basis for \(\mathbb{R}^n\).
Stretch, Shrink
For \(t\in\mathbb{R}\), \(t \gt 0\), a stretch by a factor of \(t\) in the \(x_i\) direction means to multiply the \(x_i\) term by \(t\), but leave all other terms unchanged. Visually, we are pulling \(\vec{x}\) in the \(x_i\) direction, and the amount of pulling depends on the \(x_i\)-coordinate of \(\vec{x}\). If \(t \lt 1\), this is sometimes referred to as a shrink instead of a stretch. The matrix for a stretch is obtained by replacing the \(1\) in the \(ii\)-th entry of the identity matrix with \(t\).
Submatrix
Let \(A\) be an \(n\times n\) matrix. Let \(A(i,j)\) denote the \((n-1)\times (n-1)\) submatrix obtained from \(A\) by deleting the \(i\)-th row and the \(j\)-th column.
Subspace
A subset \(S\) of \(\mathbb{R}^n\) is called a subspace of \(\mathbb{R}^n\) if the following conditions hold:
- \(S\) is non-empty
- \(S\) is closed under addition (that is, for \(\vec{x},\vec{y}\in S\) we have \(\vec{x}+\vec{y}\in S\))
- \(S\) is closed under scalar multiplication (that is, for \(t\in\mathbb{R}\) and \(\vec{x}\in S\), we have \(t\vec{x}\in S\))
System of Equations - Equivalence
We say that two systems of equations are equivalent if they have the same solution set.
System of Linear Equations
A general system of \(m\) linear equations in \(n\) variables is written in the form \begin{align*} a_{11}x_1+a_{12}x_2+&\cdots+a_{1n}x_n=b_1 \\ a_{21}x_1+a_{22}x_2+&\cdots+a_{2n}x_n=b_2 \\ \vdots \\ a_{m1}x_1+a_{m2}x_2+&\cdots+a_{mn}x_n=b_m \\ \end{align*}
Transpose
Let \(A\) be an \(m\times n\) matrix. Then the transpose of \(A\) is the \(n\times m\) matrix \(A^T\) whose \(ij\)-th entry is the \(ji\)-th entry of \(A\). That is, \[ (A^T)_{ij}=(A)_{ji} \]
Trivial Solution
If the only solution to a vector equation\[ \vec{0}=t_1\vec{v}_1+\cdots+t_k\vec{v}_k \] is \(t_1=\cdots=t_k=0\), then we say that the vector equation has the trivial solution.
Unit Vector
A vector \(\vec{x} \in\mathbb{R}^n\) such that \(||\vec{x}||=1\) is called a unit vector.
Upper Triangular
A square matrix \(U\) is said to be upper triangular if the entries beneath the main diagonal are all zero (that is, \(u_{ij}=0\) whenever \(i \gt j\)). This means that the only non-zero entries are in the “upper” part of the matrix.
Vector - Addition
If \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix},\ \vec{y}=\begin{bmatrix} y_1 \\ \vdots \\ y_n\end{bmatrix} \in\mathbb{R}^n\), then we define addition of vectors by \[ \vec{x}+\vec{y}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} y_1\\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} x_1+y_1\\ \vdots \\ x_n+y_n \end{bmatrix} \]
Vector - Linear Independence/Dependence
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly independent if the only solution to \[ \vec{0}=t_1\vec{v}_1+\cdots+t_k\vec{v}_k \] is \(t_1=\cdots=t_k=0\). Otherwise, \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly dependent.
Vector - Scalar Multiplication
If \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix} \in\mathbb{R}^n\), and \(t\in\mathbb{R}\), then we define scalar multiplication by \[ t\vec{x}=t\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} tx_1\\ \vdots \\ tx_n \end{bmatrix} \]
Vector - Span, Spanning Set
If \(S\) is the subspace of \(\mathbb{R}^n\) consisting of all possible linear combinations of the vectors \(\vec{v}_1,\ldots,\vec{v}_k\in\mathbb{R}^n\), then \(S\) is called the subspace spanned by the set of vectors \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_k\}\), and we say that the set \(\mathcal{B}\) spans \(S\). The set \(\mathcal{B}\) is called a spanning set for the subspace \(S\). We write\[ S=\mbox{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}=\mbox{Span}\mathcal{B} \]
Volume of a Parallelepiped
The volume of the parallelepiped determined by linearly independent vectors \(\vec{u}\), \(\vec{v}\) and \(\vec{w}\) is \[|\vec{w}\cdot(\vec{u}\times\vec{v})|\]
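A one-line NumPy check of this scalar triple product formula on three made-up vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
w = np.array([2.0, 1.0, 1.0])

volume = abs(w @ np.cross(u, v))   # |w . (u x v)|
print(volume)                      # 10.0
```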