\(\mathbb{R}^n\)
\(\mathbb{R}^n\) is the set of all vectors of the form \(\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix}\), where \(x_i\in\mathbb{R}\). In set notation, we write \[ \mathbb{R}^n = \left\{ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \mid x_1, \ldots ,x_n \in \mathbb{R} \right\} \]
Algebraic Multiplicity
Let \(A\) be an \(n\times n\) matrix with eigenvalue \(\lambda\). The algebraic multiplicity of \(\lambda\) is the number of times \(\lambda\) is repeated as a root of the characteristic polynomial.
Augmented Matrix
Given the system of equations \[ \begin{array}{cccc} a_{11}x_1 & + \cdots & + a_{1n}x_n & = b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1}x_1 & + \cdots& + a_{mn}x_n & = b_m \end{array} \] the augmented matrix for the system is the matrix \[ \left[\begin{array}{ccc|c} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{array}\right] \]
Back-Substitution
Solving one equation of a system for a variable and substituting the result into the other equations of the system is called back-substitution.
Bad Row
A bad row in an augmented matrix is a row of the form \(\begin{array}{ccc|c} 0 & \cdots & 0 & c \end{array}\), where \(c\neq 0\).
Basis
If \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is a spanning set for a subspace \(S\) of \(\mathbb{R}^n\) and \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is linearly independent, then \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is called a basis for \(S\).
Block Multiplication
A method for multiplying matrices in which the matrices are partitioned into submatrices (blocks), and the product is computed by multiplying and adding these blocks as if they were scalar entries.
Characteristic Polynomial
Let \(A\) be an \(n\times n\) matrix. Then \(C(\lambda)=\det(A-\lambda I)\) is called the characteristic polynomial of \(A\).
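For example, if \(A=\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}\), then \[ C(\lambda)=\det\begin{bmatrix} 2-\lambda & 1 \\ 0 & 3-\lambda \end{bmatrix}=(2-\lambda)(3-\lambda) \] so the roots of the characteristic polynomial are \(\lambda=2\) and \(\lambda=3\).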
Coefficient Matrix
Given the system of equations \[ \begin{array}{cccc} a_{11}x_1 & + \cdots & + a_{1n}x_n & = b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1}x_1 & + \cdots & + a_{mn}x_n & = b_m \end{array} \] the coefficient matrix for the system is the matrix \[ \left[\begin{array}{ccc} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{array}\right] \]
Cofactor
Let \(A\) be an \(n\times n\) matrix, and let \(a_{ij}\) be an entry of \(A\). Then the cofactor of \(a_{ij}\) is defined to be \[ C_{ij}=(-1)^{i+j}\det A(i,j) \]
Cofactor Matrix
Let \(A\) be an \(n\times n\) matrix. We define the cofactor matrix of \(A\), denoted \(\mbox{cof }A\), by \[ (\mbox{cof }A)_{ij} = C_{ij} \] That is, the \(ij\) entry of \(\mbox{cof }A\) is the cofactor of the \(ij\) entry of \(A\).
Columnspace
Let \(A\) be an \(m\times n\) matrix, and let \(\vec{c}_1,\vec{c}_2,\ldots,\vec{c}_n\in\mathbb{R}^m\) be the columns of \(A\). Then the columnspace of \(A\), written \(\mbox{Col}(A)\), is \(\mbox{Span}\{\vec{c}_1,\vec{c}_2,\ldots,\vec{c}_n\}\).
Columnspace (Textbook Definition)
The columnspace of an \(m\times n\) matrix \(A\) is the set \(\mbox{Col}(A)\) defined by \[ \mbox{Col}(A)=\{A\vec{x}\in\mathbb{R}^m \ |\ \vec{x}\in\mathbb{R}^n \} \]
Consistent
A system of linear equations that has at least one solution is called consistent.
Cramer's Rule
Let \([ \, A \mid \vec{b} \, ]\) be the augmented matrix for a system of linear equations, with \(\det(A)\neq 0\). Let \(N_i\) be the matrix obtained by replacing the \(i\)-th column of \(A\) with \(\vec{b}\). Then, if \(\vec{x}\) is the solution to \(A\vec{x}=\vec{b}\), we have that \(x_i=(\det N_i)/(\det A)\).
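For example, for the system \(x_1+2x_2=5\), \(3x_1+4x_2=6\), we have \(\det A=-2\), \(\det N_1=\det\begin{bmatrix} 5 & 2 \\ 6 & 4 \end{bmatrix}=8\), and \(\det N_2=\det\begin{bmatrix} 1 & 5 \\ 3 & 6 \end{bmatrix}=-9\), so \(x_1=8/(-2)=-4\) and \(x_2=-9/(-2)=9/2\).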
Cross-Product
The cross-product of vectors \(\vec{u}=\begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix}\) and \(\vec{v}= \begin{bmatrix} v_1\\v_2\\v_3 \end{bmatrix}\) is defined by \[\vec{u}\times\vec{v}=\begin{bmatrix} u_2v_3-u_3v_2 \\ u_3v_1-u_1v_3 \\ u_1v_2-u_2v_1 \end{bmatrix} \]
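For example, \[ \begin{bmatrix} 1\\2\\3 \end{bmatrix}\times\begin{bmatrix} 4\\5\\6 \end{bmatrix}=\begin{bmatrix} (2)(6)-(3)(5) \\ (3)(4)-(1)(6) \\ (1)(5)-(2)(4) \end{bmatrix}=\begin{bmatrix} -3\\6\\-3 \end{bmatrix} \]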
Determinant (\(2\times2\))
The determinant of a \(2\times 2\) matrix \(A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\) is defined by \[ \det A = \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11}a_{22}-a_{12}a_{21} \]
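For example, \[ \det\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}=(1)(4)-(2)(3)=-2 \]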
Determinant (\(n\times n\))
The determinant of an \(n\times n\) matrix \(A\) is defined by \[ \det A = a_{11}C_{11}+a_{12}C_{12}+\cdots+a_{1n}C_{1n} \] where \(C_{1j}\) is the cofactor of \(a_{1j}\) (a cofactor expansion along the first row).
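For example, expanding a \(3\times 3\) determinant along the first row gives \[ \det\begin{bmatrix} 1 & 2 & 0 \\ 3 & 1 & 2 \\ 0 & 1 & 1 \end{bmatrix} = 1\det\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} - 2\det\begin{bmatrix} 3 & 2 \\ 0 & 1 \end{bmatrix} + 0\det\begin{bmatrix} 3 & 1 \\ 0 & 1 \end{bmatrix} = 1(-1)-2(3)+0(3)=-7 \]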
Diagonal Matrix
A matrix \(D\) that is both upper and lower triangular is called a diagonal matrix (that is, \(d_{ij}=0\) for all \(i\neq j\)). In this case, the non-zero entries are only on the “main diagonal” of the matrix.
Diagonalizable
Let \(A\) be an \(n\times n\) matrix. If there exists an invertible matrix \(P\) and diagonal matrix \(D\) such that \(P^{-1}AP=D\), then we say \(A\) is diagonalizable and that the matrix \(P\) diagonalizes \(A\) to its diagonal form \(D\).
Dilation, Contraction
For \(t\in\mathbb{R}\), \(t>1\), the dilation of \(\vec{x}\) by a factor of \(t\) is the function \(T(\vec{x})=t\vec{x}\). If \(0 \lt t \lt 1\), the function \(T(\vec{x})=t\vec{x}\) is called the contraction of \(\vec{x}\) by a factor of \(t\). Since both are given by the same formula, the standard matrix in either case is obtained by multiplying the identity matrix by \(t\).
Dimension
If \(S\) is a non-trivial subspace of \(\mathbb{R}^n\) with a basis containing \(k\) vectors, then we say that the dimension of \(S\) is \(k\) and write \(\operatorname{dim}S=k\).
Directed Line Segment
The directed line segment from a point \(P\) in \(\mathbb{R}^2\) to a point \(Q\) in \(\mathbb{R}^2\) is drawn as an arrow with starting point \(P\) and tip \(Q\). It is denoted by \(\vec{PQ}\).
Directed Line Segment - Equivalence
We define two directed line segments \(\vec{PQ}\) and \(\vec{RS}\) to be equivalent if \(\vec{q}-\vec{p}=\vec{s}-\vec{r}\), where \(\vec{p},\vec{q},\vec{r},\vec{s}\) are the position vectors of the points \(P,Q,R,S\); in this case we shall write \(\vec{PQ}=\vec{RS}\). In the case where \(R=O\) is the origin, we get that \(\vec{PQ}\) is equivalent to \(\vec{OS}\) if \(\vec{q}-\vec{p}=\vec{s}\).
Distance From a Line to a Point
The distance from the line \(\vec{x}=\vec{p}+t\vec{d}\) to the point \(Q\) is the minimum distance from the point \(Q\) to any point on the line, which equals \(||\mbox{perp}_{\vec{d}}\vec{PQ}||\), where \(P\) is the point with position vector \(\vec{p}\).
Dot Product
Let \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix}\) and \(\vec{y}=\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}\) be vectors in \(\mathbb{R}^n\). Then the dot product of \(\vec{x}\) and \(\vec{y}\) is \[ \vec{x} \cdot \vec{y} = x_1y_1+x_2y_2+\cdots + x_ny_n \]
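For example, \(\begin{bmatrix} 1\\2\\3 \end{bmatrix}\cdot\begin{bmatrix} 4\\-1\\2 \end{bmatrix}=(1)(4)+(2)(-1)+(3)(2)=8\).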
Eigenspace
Let \(\lambda\) be an eigenvalue of an \(n\times n\) matrix \(A\). Then the set containing the zero vector and all eigenvectors of \(A\) corresponding to \(\lambda\) is called the eigenspace of \(\lambda\).
Eigenvector, Eigenvalue, Eigenpair
Suppose that \(A\) is an \(n\times n\) matrix. A non-zero vector \(\vec{v}\in\mathbb{R}^n\) such that \(A\vec{v}=\lambda\vec{v}\) (for some scalar \(\lambda\in\mathbb{R}\)) is called an eigenvector of \(A\), and the scalar \(\lambda\) is called an eigenvalue of \(A\). The pair \(\lambda,\vec{v}\) is called an eigenpair.
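For example, if \(A=\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}\) and \(\vec{v}=\begin{bmatrix} 1\\1 \end{bmatrix}\), then \(A\vec{v}=\begin{bmatrix} 3\\3 \end{bmatrix}=3\vec{v}\), so \(\vec{v}\) is an eigenvector of \(A\) with eigenvalue \(\lambda=3\), and \(3,\begin{bmatrix} 1\\1 \end{bmatrix}\) is an eigenpair.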
Elementary Matrix
A matrix that can be obtained from the identity matrix by a single elementary row operation is called an elementary matrix.
Elementary Row Operations
There are three types of elementary row operations (EROs) which correspond to the three steps of Gaussian elimination:
  1. Multiply one row by a non-zero constant.
  2. Interchange two rows.
  3. Add a scalar multiple of one row to another row.
Function, Domain, Codomain
A function \(f\) is a rule that assigns to every element \(x\) of a set called the domain a unique value \(y\) in another set called the codomain.
Geometric Multiplicity
Let \(A\) be an \(n\times n\) matrix with eigenvalue \(\lambda\). The geometric multiplicity of \(\lambda\) is the dimension of the eigenspace of \(\lambda\).
Homogeneous
A linear equation is homogeneous if the right-hand side is zero. A system of linear equations is homogeneous if all of the equations of the system are homogeneous.
Hyperplane
Let \(\vec{v}_1,\ldots,\vec{v}_{n-1},\vec{p}\in\mathbb{R}^n\), with \(\{\vec{v}_1,\ldots,\vec{v}_{n-1}\}\) being a linearly independent set. Then the set with vector equation \(\vec{x}=\vec{p}+t_1\vec{v}_1+\cdots+t_{n-1}\vec{v}_{n-1}\), \(\ t_1,\ldots,t_{n-1}\in\mathbb{R}\) is called a hyperplane in \(\mathbb{R}^n\) that passes through \(\vec{p}\).
Identity Matrix
The \(n\times n\) matrix \(I_n=\mbox{diag}(1,1,\ldots,1)\) is called the identity matrix. That is, the identity matrix is a diagonal matrix, with all the diagonal entries equal to \(1\).
Inconsistent
A system of linear equations that does not have any solutions is called inconsistent.
Length/Norm
Let \(\vec{x}= \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}\). Then we define the norm or length of \(\vec{x}\) by \[ ||\vec{x}|| = \sqrt{\vec{x} \cdot\vec{x}}=\sqrt{x_1^2+\cdots +x_n^2} \]
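For example, if \(\vec{x}=\begin{bmatrix} 1\\2\\2 \end{bmatrix}\), then \(||\vec{x}||=\sqrt{1^2+2^2+2^2}=\sqrt{9}=3\).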
Line
Let \(\vec{p},\vec{v}\in\mathbb{R}^n\) with \(\vec{v}\neq\vec{0}\). Then we call the set with vector equation \(\vec{x}=\vec{p}+t\vec{v},\ t\in\mathbb{R}\) a line in \(\mathbb{R}^n\) that passes through \(\vec{p}\), with direction vector \(\vec{v}\).
Linear Equation
A linear equation in \(n\) variables \(x_1,\ldots,x_n\) is an equation that can be written in the form \[ a_1x_1+a_2x_2+\cdots+a_nx_n=b\] \(a_1, \ldots, a_n\) are called the coefficients, \(x_1, \ldots, x_n\) are called the variables or unknowns, and \(b\) is called the constant term of the linear equation.
Linear Mapping/Transformation
A function \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is called a linear mapping or linear transformation if for every \(\vec{x},\vec{y}\in\mathbb{R}^n\) and \(t\in\mathbb{R}\) it satisfies the following properties:
        L1  \(L(\vec{x}+\vec{y})=L(\vec{x})+L(\vec{y})\)
        L2  \(L(t\vec{x})=tL(\vec{x})\)
Linear Mapping - Addition
Let \(L\) and \(M\) be linear mappings from \(\mathbb{R}^n\) to \(\mathbb{R}^m\). We define \((L+M)\) to be the mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) such that \[ (L+M)(\vec{x})=L(\vec{x})+M(\vec{x}) \]
Linear Mapping - Composition
Let \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) and \(N:\mathbb{R}^m\rightarrow\mathbb{R}^p\) be linear mappings. The composition \(N\circ L:\mathbb{R}^n\rightarrow\mathbb{R}^p\) is defined by \[ (N\circ L)(\vec{x})=N(L(\vec{x})) \qquad \mbox{for all } \vec{x}\in\mathbb{R}^n. \]
Linear Mapping - Eigenvector, Eigenvalue, Eigenpair
Suppose that \(L:\mathbb{R}^n\rightarrow \mathbb{R}^n\) is a linear mapping. A non-zero vector \(\vec{v}\in\mathbb{R}^n\) such that \(L(\vec{v})=\lambda\vec{v}\) (for some real number \(\lambda\)) is called an eigenvector of \(L\), and the scalar \(\lambda\) is called an eigenvalue of \(L\). The pair \(\lambda,\vec{v}\) is called an eigenpair.
Linear Mapping - Inverse
If \(L:\mathbb{R}^n\rightarrow\mathbb{R}^n\) is a linear mapping and there exists another linear mapping \(M:\mathbb{R}^n\rightarrow\mathbb{R}^n\) such that \(M\circ L=\mbox{Id}=L\circ M\), then \(L\) is said to be invertible, and \(M\) is called the inverse of \(L\), usually denoted \(L^{-1}\).
Linear Mapping - Nullspace
The nullspace of a linear mapping \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is the set of all vectors in \(\mathbb{R}^n\) whose image under \(L\) is the zero vector, \(\vec{0}\). We write \[ \mbox{Null}(L)=\{ \vec{x} \in \mathbb{R}^n \mid L(\vec{x})=\vec{0} \} \]
Linear Mapping - Scalar Multiplication
Let \(L\) be a linear mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\), and let \(t\in\mathbb{R}\) be a scalar. We define \((tL)\) to be the mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) such that \[ (tL)(\vec{x})=t(L(\vec{x})) \]
Linear Operator
A linear operator is a linear mapping whose domain and codomain are the same.
Lower Triangular
A square matrix \(L\) is said to be lower triangular if the entries above the main diagonal are all zero (that is, \(l_{ij}=0\) whenever \(i \lt j\)). This means that the only non-zero entries are in the “lower” part of the matrix.
Matrix
A matrix is a rectangular array of numbers. We say that \(A\) is an \(m\times n\) matrix when \(A\) has \(m\) rows and \(n\) columns, such as \[ A=\left[\begin{array}{cccccc} a_{11} &a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} &a_{22} & \cdots & a_{2j} & \cdots& a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} &a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{m1} &a_{m2} & \cdots & a_{mj} & \cdots& a_{mn} \end{array}\right] \]
Matrix - Addition
Let \(A\) and \(B\) be \(m\times n\) matrices. We define addition of matrices by \[ (A+B)_{ij}=(A)_{ij}+(B)_{ij} \] That is, the \(ij\)-th entry of \(A+B\) is the sum of the \(ij\)-th entry of \(A\) with the \(ij\)-th entry of \(B\).
Matrix - Equivalence
Two matrices \(A\) and \(B\) are equal if and only if they are the same size (i.e., \(A\) and \(B\) have the same number of rows and the same number of columns) and their corresponding entries are equal. That is, \(a_{ij}=b_{ij}\) for all \(1\leq i \leq m\) and \(1\leq j \leq n\).
Matrix - Inverse
Let \(A\) be an \(n\times n\) matrix. If there exists an \(n\times n\) matrix \(B\) such that \(AB=I=BA\), then \(A\) is said to be invertible, and \(B\) is called the inverse of \(A\) (and \(A\) is the inverse of \(B\)). The inverse of \(A\) is denoted \(A^{-1}\).
Matrix - Linear Combination
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices, and let \(t_1,\ldots,t_k\) be real scalars. Then \(t_1A_1+t_2A_2+\cdots+t_kA_k\) is a linear combination of the matrices in \(\mathcal{B}\).
Matrix - Linear Independence/Dependence
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices. Then \(\mathcal{B}\) is said to be linearly independent if the only solution to the equation \[t_1A_1+\cdots+t_kA_k=O_{m,n}\] is the trivial solution \(t_1=\cdots = t_k=0\). Otherwise, \(\mathcal{B}\) is said to be linearly dependent.
Matrix - Nullspace
The nullspace of an \(m\times n\) matrix \(A\) is \[ \mbox{Null}(A)=\{\vec{x}\in\mathbb{R}^n \mid A\vec{x}=\vec{0} \} \]
Matrix - Product
Let \(B\) be an \(m\times n\) matrix with rows \(\vec{b}^T_1,\ldots,\vec{b}^T_m\), and let \(A\) be an \(n\times p\) matrix with columns \(\vec{a}_1,\ldots,\vec{a}_p\). Then we define the matrix product \(BA\) to be the matrix whose \(ij\)-th entry is \((BA)_{ij}=\vec{b}_i \cdot \vec{a}_j\).
That is, \[BA= \left[ \begin{array}{c} {\vec{b}_1}^T \\ {\vec{b}_2}^T \\ \vdots \\ {\vec{b}_i}^T \\ \vdots \\ {\vec{b}_m}^T \end{array} \right] \left[ \begin{array}{cccccc} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_j & \cdots & \vec{a}_p \end{array} \right]= \left[ \begin{array}{cccccc} {\vec{b}_1} \cdot \vec{a}_1 & {\vec{b}_1} \cdot \vec{a}_2 & \cdots & {\vec{b}_1} \cdot \vec{a}_j & \cdots & {\vec{b}_1} \cdot \vec{a}_p \\ {\vec{b}_2} \cdot \vec{a}_1 & {\vec{b}_2} \cdot \vec{a}_2 & \cdots & {\vec{b}_2} \cdot \vec{a}_j & \cdots & {\vec{b}_2} \cdot \vec{a}_p \\ \vdots & \vdots & & \vdots & & \vdots \\ {\vec{b}_i} \cdot \vec{a}_1 & {\vec{b}_i} \cdot \vec{a}_2 & \cdots & {\vec{b}_i} \cdot \vec{a}_j & \cdots & {\vec{b}_i} \cdot \vec{a}_p \\ \vdots & \vdots & & \vdots & & \vdots \\ {\vec{b}_m} \cdot \vec{a}_1 & {\vec{b}_m} \cdot \vec{a}_2 & \cdots & {\vec{b}_m} \cdot \vec{a}_j & \cdots & {\vec{b}_m} \cdot \vec{a}_p \end{array} \right]\]
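For example, \[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}=\begin{bmatrix} (1)(0)+(2)(1) & (1)(1)+(2)(1) \\ (3)(0)+(4)(1) & (3)(1)+(4)(1) \end{bmatrix}=\begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix} \]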
Matrix - Product (Alternate Definition)
Let \(B\) be an \(m\times n\) matrix and let \(A\) be an \(n\times p\) matrix. Then the \(ij\)-th entry of \(BA\) is \[(BA)_{ij} = \sum_{k=1}^{n} b_{ik}a_{kj} = \sum_{k=1}^{n} (B)_{ik}(A)_{kj}\]
Matrix - Scalar Multiplication
Let \(A\) be an \(m\times n\) matrix, and \(t\in\mathbb{R}\) a scalar. We define the scalar multiplication of matrices by \[ (tA)_{ij}=t(A)_{ij} \] That is, the \(ij\)-th entry of \(tA\) is \(t\) times the \(ij\)-th entry of \(A\).
Matrix - Span
Let \(\mathcal{B}=\{A_1,\ldots, A_k\}\) be a set of \(m\times n\) matrices. Then the span of \(\mathcal{B}\) is defined as \[\operatorname{Span}\mathcal{B}=\{t_1A_1+\cdots+t_kA_k\ |\ t_1,\ldots,t_k\in\mathbb{R} \}\]That is, \(\operatorname{Span}\mathcal{B}\) is the set of all linear combinations of the matrices in \(\mathcal{B}\).
Matrix Mapping
For any \(m\times n\) matrix \(A\), we define a function \(f_A:\mathbb{R}^n\rightarrow\mathbb{R}^m\) called the matrix mapping corresponding to \(A\) by \[ f_A(\vec{x})=A\vec{x} \qquad \mbox{for any \(\vec{x}\in\mathbb{R}^n\).} \]
Normal Vector to a Line
A normal vector to a line is a vector that is orthogonal to every vector parallel to the line. That is, if \(\vec{n}\) is a normal vector to a line \(l\), and \(\vec{x}\) is a vector parallel to \(l\), then \(\vec{n}\cdot\vec{x}=0\).
Normal Vector to a Plane
A normal vector to a plane is a vector that is orthogonal to every vector parallel to the plane. That is, if \(\vec{n}\) is a normal vector to a plane \(P\), and \(\vec{x}\) is a vector parallel to \(P\), then \(\vec{n}\cdot\vec{x}=0\).
Nullity
Let \(A\) be an \(m\times n\) matrix. We call the dimension of \(\mbox{Null}(A)\) the nullity of \(A\) and denote it by \(\mbox{nullity}(A)\).
Orthogonal
Two vectors \(\vec{x}\) and \(\vec{y}\) in \(\mathbb{R}^n\) are orthogonal to each other if and only if \(\vec{x} \cdot\vec{y}=0\).
Parametric Equation
The parametric equation of the line \(\vec{x}=\vec{p}+t\vec{d}\) in \(\mathbb{R}^2\) is the collection of equations \[ \begin{array}{rl} \begin{array}{l} x_1=p_1+td_1 \\ x_2=p_2+td_2 \end{array} & t\in\mathbb{R} \end{array} \]
Perpendicular
For any vectors \(\vec{x},\vec{y}\in\mathbb{R}^n\), with \(\vec{x}\neq\vec{0}\), we define the projection of \(\vec{y}\) perpendicular to \(\vec{x}\) to be \[ \mbox{perp}_{\vec{x}}\vec{y}=\vec{y}-\mbox{proj}_{\vec{x}}\vec{y} \]
Pivot
The leading entry in a non-zero row of a matrix in row echelon form is known as a pivot.
Plane
Let \(\vec{v}_1,\vec{v}_2,\vec{p}\in\mathbb{R}^n\), with \(\{\vec{v}_1,\vec{v}_2\}\) being a linearly independent set. Then the set with vector equation \(\vec{x}=\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2,\ t_1,t_2\in\mathbb{R}\) is called a plane in \(\mathbb{R}^n\) that passes through \(\vec{p}\).
Plane - Orthogonal
We say that two planes are orthogonal to each other if their normal vectors are orthogonal to each other.
Plane - Parallel
Two planes in \(\mathbb{R}^3\) are defined to be parallel if the normal vector to one plane is a non-zero scalar multiple of the normal vector of the other plane.
Position Vector
A directed line segment that starts at the origin and ends at a point \(P\) is called the position vector for \(P\).
Preserves Addition
If a function \(f:\mathbb{R}^n\rightarrow\mathbb{R}^m\) satisfies \(f(\vec{x}+\vec{y})=f(\vec{x})+f(\vec{y})\) for all \(\vec{x},\vec{y}\in\mathbb{R}^n\), we say that \(f\) preserves addition.
Preserves Scalar Multiplication
If a function \(f:\mathbb{R}^n\rightarrow\mathbb{R}^m\) satisfies \(f(s\vec{x})=sf(\vec{x})\) for all \(\vec{x}\in\mathbb{R}^n\) and all \(s\in\mathbb{R}\), we say that \(f\) preserves scalar multiplication.
Projection
The part of \(\vec{y}\) that is in the direction of \(\vec{x}\) is called the projection of \(\vec{y}\) onto \(\vec{x}\), and is denoted by \(\mbox{proj}_{\vec{x}}(\vec{y})\). For \(\vec{x}\neq\vec{0}\), it is given by \[ \mbox{proj}_{\vec{x}}\vec{y}=\left(\frac{\vec{x}\cdot\vec{y}}{||\vec{x}||^2}\right)\vec{x} \]
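For example, if \(\vec{x}=\begin{bmatrix} 1\\1 \end{bmatrix}\) and \(\vec{y}=\begin{bmatrix} 3\\1 \end{bmatrix}\), then \(\mbox{proj}_{\vec{x}}\vec{y}=\dfrac{4}{2}\begin{bmatrix} 1\\1 \end{bmatrix}=\begin{bmatrix} 2\\2 \end{bmatrix}\), and \(\mbox{perp}_{\vec{x}}\vec{y}=\vec{y}-\mbox{proj}_{\vec{x}}\vec{y}=\begin{bmatrix} 1\\-1 \end{bmatrix}\), which is orthogonal to \(\vec{x}\).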
Range
The range of a linear mapping \(L:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is defined to be the set \[ \mbox{Range}(L)=\{L(\vec{x})\in\mathbb{R}^m\ |\ \vec{x}\in\mathbb{R}^n\} \]
Rank
The rank of a matrix \(A\) is the number of leading \(1\)s in its reduced row echelon form, and is denoted by \(\operatorname{rank}(A)\).
Reduced Row Echelon Form, Leading \(1\)
A matrix \(R\) is said to be in reduced row echelon form (RREF) if
  1. It is in row echelon form.
  2. The leading entry of every non-zero row is a \(1\) (called a leading \(1\)).
  3. In a column containing a leading \(1\), all the other entries are zeros.
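For example, the matrix \(\begin{bmatrix} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\) is in reduced row echelon form: it is in row echelon form, each leading entry is a \(1\), and each column containing a leading \(1\) (columns \(1\) and \(3\)) has zeros in all of its other entries.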
Reflection
Let \(\vec{n}\cdot\vec{x}=0\) define a line (or a plane) through the origin in \(\mathbb{R}^2\) (or \(\mathbb{R}^3\)). A reflection in the line/plane with normal vector \(\vec{n}\) will be denoted \(\operatorname{refl}_{\vec{n}}\), and we have that \[\operatorname{refl}_{\vec{n}}(\vec{p})=\vec{p}-2\operatorname{proj}_{\vec{n}}(\vec{p})\]
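For example, in \(\mathbb{R}^2\), take \(\vec{n}=\begin{bmatrix} 0\\1 \end{bmatrix}\) (so the line \(\vec{n}\cdot\vec{x}=0\) is the \(x_1\)-axis) and \(\vec{p}=\begin{bmatrix} 3\\2 \end{bmatrix}\). Then \(\operatorname{proj}_{\vec{n}}(\vec{p})=\begin{bmatrix} 0\\2 \end{bmatrix}\), so \(\operatorname{refl}_{\vec{n}}(\vec{p})=\begin{bmatrix} 3\\2 \end{bmatrix}-2\begin{bmatrix} 0\\2 \end{bmatrix}=\begin{bmatrix} 3\\-2 \end{bmatrix}\).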
Rotation Mapping
\(R_{\theta}:\mathbb{R}^2\rightarrow\mathbb{R}^2\) is defined to be the transformation that rotates \(\vec{x}\) counterclockwise through angle \(\theta\) to the image \(R_{\theta}(\vec{x})\). The standard matrix for \(R_{\theta}\) is \(\left[\begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right]\).
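For example, with \(\theta=\pi/2\) the standard matrix is \(\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\), so \(R_{\pi/2}\left(\begin{bmatrix} 1\\0 \end{bmatrix}\right)=\begin{bmatrix} 0\\1 \end{bmatrix}\): the vector \(\begin{bmatrix} 1\\0 \end{bmatrix}\) is rotated counterclockwise by \(90^{\circ}\).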
Row Echelon Form
A matrix is in row echelon form (REF) if
  1. When all entries in a row are zeros, this row appears below all rows that contain a non-zero entry.
  2. When two non-zero rows are compared, the first non-zero entry (called the leading entry) in the upper row is to the left of the leading entry in the lower row.
Row Equivalence
If a matrix \(M\) is row reduced into a matrix \(N\) by a sequence of elementary row operations, then we say that \(M\) is row equivalent to \(N\), and we write \(M\sim N\). Note that this is the same as saying that the corresponding systems of equations are equivalent.
Row Reduction
The process of performing elementary row operations on a matrix is called row reduction.
Rowspace
Given an \(m\times n\) matrix \(A\), the rowspace of \(A\) is the subspace spanned by the rows of \(A\) (regarded as vectors) and is denoted \(\operatorname{Row}(A)\).
Scalar Form
The scalar form of the equation of a line in \(\mathbb{R}^2\) is \(x_2=p_2+\dfrac{d_2}{d_1}(x_1-p_1)\) (provided \(d_1\neq 0\)), where \(\begin{bmatrix} p_1 \\ p_2 \end{bmatrix}\) is a point on the line, and \(\begin{bmatrix} d_1 \\ d_2 \end{bmatrix}\) is a direction vector for the line.
Shear
For \(s\in\mathbb{R}\), a shear of \(x_i\) by a factor of \(s\) in the \(x_j\) direction means to “push” \(\vec{x}\) in the \(x_i\) direction by \(sx_j\) (where \(j\neq i\)). Thus, the amount of shear applied to \(\vec{x}\) depends both on \(s\) and on how far \(\vec{x}\) is from the origin (as measured by \(x_j\)). The matrix for a shear is obtained by replacing the \(0\) in the \(ij\)-th entry of the identity matrix with \(s\).
Solution
A vector \(\begin{bmatrix} s_1 \\ \vdots \\ s_n \end{bmatrix}\) in \(\mathbb{R}^n\) is called a solution of a linear equation if the equation is satisfied when we make the substitution \(x_1=s_1\), \(x_2=s_2\), \(\ldots,\ x_n=s_n\).
Solution Set
The solution set to a system of linear equations is the collection of all vectors that are solutions to all the equations in the system. This set will be a subset of \(\mathbb{R}^n\), but it may be the empty set.
Solution Set - Standard Form
The standard form for a solution set is \[ \vec{x}=s_1\vec{y}_1+\cdots+s_k\vec{y}_k, \qquad s_1,\ldots,s_k\in\mathbb{R} \]
Solution Space
The set \(S=\{\vec{x}\in\mathbb{R}^n\ |\ A\vec{x}=\vec{0}\}\) of all solutions to a homogeneous system \(A\vec{x}=\vec{0}\) is called the solution space of the system \(A\vec{x}=\vec{0}\).
Square Matrix
We say that a matrix that has the same number of columns and rows (that is, an \(n \times n\) matrix for some \(n\)) is a square matrix.
Standard Basis for \(\mathbb{R}^n\)
In \(\mathbb{R}^n\), let \(\vec{e}_i\) represent the vector whose \(i\)-th component is \(1\) and all other components are \(0\). The set \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) is called the standard basis for \(\mathbb{R}^n\).
Stretch, Shrink
For \(t\in\mathbb{R}\), \(t \gt 0\), a stretch by a factor of \(t\) in the \(x_i\) direction means to multiply the \(x_i\) term by \(t\), but leave all other terms unchanged. Visually, we are pulling \(\vec{x}\) in the \(x_i\) direction, but the amount of pulling depends on the distance of \(\vec{x}\) from the origin (as measured by the \(x_i\) term). If \(t \lt 1\), this is sometimes referred to as a shrink instead of a stretch. The matrix for a stretch is obtained by replacing the \(1\) in the \(ii\)-th entry of the identity matrix with \(t\).
Submatrix
Let \(A\) be an \(n\times n\) matrix. Let \(A(i,j)\) denote the \((n-1)\times (n-1)\) submatrix obtained from \(A\) by deleting the \(i\)-th row and the \(j\)-th column.
Subspace
A subset \(S\) of \(\mathbb{R}^n\) is called a subspace of \(\mathbb{R}^n\) if the following conditions hold:
  1. \(S\) is non-empty
  2. \(S\) is closed under addition (that is, for \(\vec{x},\vec{y}\in S\) we have \(\vec{x}+\vec{y}\in S\))
  3. \(S\) is closed under scalar multiplication (that is, for \(t\in\mathbb{R}\) and \(\vec{x}\in S\), we have \(t\vec{x}\in S\))
System of Equations - Equivalence
We say that two systems of equations are equivalent if they have the same solution set.
System of Linear Equations
A general system of \(m\) linear equations in \(n\) variables is written in the form \begin{align*} a_{11}x_1+a_{12}x_2+&\cdots+a_{1n}x_n=b_1 \\ a_{21}x_1+a_{22}x_2+&\cdots+a_{2n}x_n=b_2 \\ \vdots \\ a_{m1}x_1+a_{m2}x_2+&\cdots+a_{mn}x_n=b_m \\ \end{align*}
Transpose
Let \(A\) be an \(m\times n\) matrix. Then the transpose of \(A\) is the \(n\times m\) matrix \(A^T\) whose \(ij\)-th entry is the \(ji\)-th entry of \(A\). That is, \[ (A^T)_{ij}=(A)_{ji} \]
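For example, \[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^T=\begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix} \]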
Trivial Solution
The solution \(t_1=\cdots=t_k=0\) to a vector equation \[ \vec{0}=t_1\vec{v}_1+\cdots+t_k\vec{v}_k \] is called the trivial solution. If it is the only solution, we say that the vector equation has only the trivial solution.
Unit Vector
A vector \(\vec{x} \in\mathbb{R}^n\) such that \(||\vec{x}||=1\) is called a unit vector.
Upper Triangular
A square matrix \(U\) is said to be upper triangular if the entries beneath the main diagonal are all zero (that is, \(u_{ij}=0\) whenever \(i \gt j\)). This means that the only non-zero entries are in the “upper” part of the matrix.
Vector - Addition
If \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix},\ \vec{y}=\begin{bmatrix} y_1 \\ \vdots \\ y_n\end{bmatrix} \in\mathbb{R}^n\), then we define addition of vectors by \[ \vec{x}+\vec{y}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} y_1\\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} x_1+y_1\\ \vdots \\ x_n+y_n \end{bmatrix} \]
Vector - Equivalence
Two vectors in \(\mathbb{R}^n\) are equivalent if they have the same length and direction.
Vector - Linear Combination
A linear combination of the vectors \(\vec{v}_1,\ldots,\vec{v}_k\in\mathbb{R}^n\) is any vector of the form \[ a_1\vec{v}_1+\cdots+a_k\vec{v}_k \] where \(a_1,\ldots,a_k\in\mathbb{R}\).
Vector - Linear Independence/Dependence
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly independent if the only solution to \[ \vec{0}=t_1\vec{v}_1+\cdots+t_k\vec{v}_k \] is \(t_1=\cdots=t_k=0\). Otherwise, \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly dependent.
Vector - Scalar Multiplication
If \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix} \in\mathbb{R}^n\), and \(t\in\mathbb{R}\), then we define scalar multiplication by \[ t\vec{x}=t\begin{bmatrix} x_1\\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} tx_1\\ \vdots \\ tx_n \end{bmatrix} \]
Vector - Span, Spanning Set
If \(S\) is the subspace of \(\mathbb{R}^n\) consisting of all possible linear combinations of the vectors \(\vec{v}_1,\ldots,\vec{v}_k\in\mathbb{R}^n\), then \(S\) is called the subspace spanned by the set of vectors \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_k\}\), and we say that the set \(\mathcal{B}\) spans \(S\). The set \(\mathcal{B}\) is called a spanning set for the subspace \(S\). We write\[ S=\mbox{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}=\mbox{Span}\mathcal{B} \]
Volume of a Parallelepiped
The volume of the parallelepiped determined by linearly independent vectors \(\vec{u},\vec{v},\vec{w}\in\mathbb{R}^3\) is \[|\vec{w}\cdot(\vec{u}\times\vec{v})|\]
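For example, with \(\vec{u}=\begin{bmatrix} 1\\0\\0 \end{bmatrix}\), \(\vec{v}=\begin{bmatrix} 0\\1\\0 \end{bmatrix}\), and \(\vec{w}=\begin{bmatrix} 0\\0\\2 \end{bmatrix}\), we get \(\vec{u}\times\vec{v}=\begin{bmatrix} 0\\0\\1 \end{bmatrix}\) and \(|\vec{w}\cdot(\vec{u}\times\vec{v})|=2\), the volume of the \(1\times 1\times 2\) box.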
Zero Matrix
The zero matrix is a matrix whose entries are all zero. The \(m\times n\) zero matrix is denoted \(O_{m,n}\), or simply \(O\) if the size of the matrix is clear from context.
Zero Vector
The zero vector is the vector whose entries are all zero, and is denoted \(\vec{0}\). Note: Context will determine how many components are in the zero vector.