Resources

LIST OF COMMONLY USED VECTOR SPACES

\(\mathbb{R}^n\)
For any positive integer \(n\), \(\mathbb{R}^n\) is the \(n\)-dimensional vector space \[\mathbb{R}^n=\left\{\begin{bmatrix}x_1\\ \vdots \\ x_n\end{bmatrix} \mid x_1,\ldots,x_n\in \mathbb{R}\right\}\] where addition is defined by \[\begin{bmatrix}x_1\\ \vdots \\x_n\end{bmatrix}+\begin{bmatrix}y_1\\ \vdots \\y_n\end{bmatrix}=\begin{bmatrix}x_1 + y_1\\ \vdots \\ x_n+y_n\end{bmatrix}\] and scalar multiplication by \(t\in \mathbb{R}\) is defined by \[t\begin{bmatrix}x_1\\ \vdots \\x_n\end{bmatrix}=\begin{bmatrix}tx_1\\ \vdots \\tx_n\end{bmatrix}\]
\(M_{m\times n}(\mathbb{R})\)
For any positive integers \(m\) and \(n\), \(M_{m\times n}(\mathbb{R})\) is the \(mn\)-dimensional vector space of \(m\times n\) matrices with real entries, where addition is defined by \[(A+B)_{ij}=(A)_{ij} + (B)_{ij}\] and scalar multiplication is defined by \[(tA)_{ij}=t(A)_{ij}\] for any \(A,B\in M_{m\times n}(\mathbb{R})\), \(t\in \mathbb{R}\).
\(\mathbb{L}\)
For any positive integers \(m\) and \(n\), the set \(\mathbb{L}\) of all linear mappings \(L:\mathbb{R}^n\to\mathbb{R}^m\) is a vector space where, for any \(L,M\in \mathbb{L}\) and \(t\in \mathbb{R}\), addition is defined by \[(L+M)(\vec{x})=L(\vec{x}) + M(\vec{x}), \qquad \text{ for all } \vec{x}\in \mathbb{R}^n\] and scalar multiplication is defined by \[(tL)(\vec{x})=tL(\vec{x}), \qquad \text{ for all } \vec{x}\in \mathbb{R}^n\]
\(P_n(\mathbb{R})\)
For any positive integer \(n\), \(P_n(\mathbb{R})\) is the \((n+1)\)-dimensional vector space \[P_n(\mathbb{R})=\{a_0 + a_1x + \cdots + a_nx^n \mid a_0,\ldots,a_n\in \mathbb{R}\}\] where addition is defined by \[(a_0 + a_1x + \cdots + a_nx^n) + (b_0 + b_1x + \cdots + b_nx^n) = (a_0+b_0) + (a_1+b_1)x + \cdots + (a_n+b_n)x^n\] and scalar multiplication by \(t\in \mathbb{R}\) is defined by \[t(a_0 + a_1x + \cdots + a_nx^n)=ta_0 + ta_1x + \cdots + ta_nx^n\]
Trivial Vector Space
The set \(\{\vec{0}\}\) with addition defined by \(\vec{0} + \vec{0} =\vec{0}\) and scalar multiplication defined by \(t\vec{0}=\vec{0}\) for all \(t\in \mathbb{R}\) is called the trivial vector space. The empty set is the basis for the trivial vector space, and hence the trivial vector space is \(0\)-dimensional.
\(C[a,b]\)
\(C[a,b]\) is the infinite-dimensional vector space of all functions that are continuous on the interval \([a,b]\), with the standard addition and scalar multiplication of functions.

GLOSSARY

Adjugate
Let \(A\) be an \(n\times n\) matrix. The adjugate of \(A\) is the matrix defined by \[\big(\operatorname{adj} A \big)_{ij}=C_{ji}\] Equivalently, \(\operatorname{adj} A=(\operatorname{cof} A)^T\).
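For example, if \(A=\begin{bmatrix}a & b\\c & d\end{bmatrix}\), then \(C_{11}=d\), \(C_{12}=-c\), \(C_{21}=-b\), \(C_{22}=a\), so \(\operatorname{adj} A=\begin{bmatrix}d & -b\\-c & a\end{bmatrix}\), and one can verify that \(A(\operatorname{adj} A)=(\det A)I\).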
Algebraic Multiplicity
The algebraic multiplicity, \(a_{\lambda}\), of an eigenvalue \(\lambda\) is the number of times \(\lambda\) appears as a root of the characteristic polynomial \(C(\lambda)\).
Angle in \(\mathbb{R}^n\)
Let \(\vec{x},\vec{y}\in \mathbb{R}^n\). We define the angle between \(\vec{x}\) and \(\vec{y}\) to be an angle \(\theta\) such that\[\vec{x}\cdot\vec{y} =\|\vec{x}\|\|\vec{y}\|\cos \theta\]
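For example, if \(\vec{x}=\begin{bmatrix}1\\0\end{bmatrix}\) and \(\vec{y}=\begin{bmatrix}1\\1\end{bmatrix}\) in \(\mathbb{R}^2\), then \(\vec{x}\cdot\vec{y}=1\) and \(\|\vec{x}\|\|\vec{y}\|=\sqrt{2}\), so \(\cos\theta=\frac{1}{\sqrt{2}}\) and \(\theta=\frac{\pi}{4}\).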
Augmented Matrix
For a system of linear equations \begin{align*} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n&=b_1\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n&=b_2\\ \vdots \hskip75pt &= \hskip3pt \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n&=b_m \end{align*} the augmented matrix of the system is \[\left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1\\ a_{21} & a_{22} & \cdots & a_{2n} & b_2\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m\end{array}\right]\]
\(\mathcal{B}\)-coordinates
See Coordinates/Coordinate vector
\(\mathcal{B}\)-matrix
See Matrix of a linear operator.
Basis of a subset of \(\mathbb{R}^n\)
If a subset \(S\) of \(\mathbb{R}^n\) can be written as a span of vectors \(\vec{v}_1,\ldots,\vec{v}_k\) where \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is linearly independent, then \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is called a basis for \(S\).
We define a basis for the set \(\{\vec{0}\}\) to be the empty set.
Basis
If \(\mathcal{B}\) is a linearly independent spanning set for a vector space \(\mathbb{V}\), then \(\mathcal{B}\) is called a basis for \(\mathbb{V}\).
We define a basis for the vector space \(\{\vec{0}\}\) to be the empty set.
Block Matrix
If \(A\) is an \(m\times n\) matrix, then we can write \(A\) as the \(k\times \ell\) block matrix\[A=\begin{bmatrix} A_{11} & \cdots & A_{1\ell}\\ \vdots & \ddots & \vdots \\A_{k1} & \cdots & A_{k\ell}\end{bmatrix}\] where \(A_{ij}\) is a block such that all blocks in the \(i\)-th row have the same number of rows and all blocks in the \(j\)-th column have the same number of columns.
Cartesian product
Let \(\mathbb{V}\) and \(\mathbb{W}\) be vector spaces. We define the Cartesian product of \(\mathbb{V}\) and \(\mathbb{W}\) by\[\mathbb{V}\times \mathbb{W}=\{(\vec{v},\vec{w}) \mid \vec{v}\in \mathbb{V}, \vec{w}\in \mathbb{W}\}\]
Change of coordinates matrix
Let \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) and \(\mathcal{C}\) both be bases for a vector space \(\mathbb{V}\). The matrix \[\thinspace _\mathcal{C} P_\mathcal{B}=\begin{bmatrix} [\vec{v}_1]_\mathcal{C} & \cdots & [\vec{v}_n]_\mathcal{C}\end{bmatrix}\] is called the change of coordinates matrix from \(\mathcal{B}\)-coordinates to \(\mathcal{C}\)-coordinates. It satisfies\[[\vec{x}]_{\mathcal{C}}=\thinspace _\mathcal{C} P_\mathcal{B}[\vec{x}]_{\mathcal{B}}\]
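For example, if \(\mathcal{B}=\left\{\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}0\\1\end{bmatrix}\right\}\) and \(\mathcal{C}=\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}1\\-1\end{bmatrix}\right\}\) are bases for \(\mathbb{R}^2\), then \(\begin{bmatrix}1\\0\end{bmatrix}=\frac12\begin{bmatrix}1\\1\end{bmatrix}+\frac12\begin{bmatrix}1\\-1\end{bmatrix}\) and \(\begin{bmatrix}0\\1\end{bmatrix}=\frac12\begin{bmatrix}1\\1\end{bmatrix}-\frac12\begin{bmatrix}1\\-1\end{bmatrix}\), so \[\thinspace _\mathcal{C} P_\mathcal{B}=\begin{bmatrix}1/2 & 1/2\\1/2 & -1/2\end{bmatrix}\]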
Characteristic Polynomial
If \(A\) is an \(n\times n\) matrix, then we call the \(n\)-th degree polynomial given by\[C(\lambda)=\det(A-\lambda I)\] the characteristic polynomial of \(A\).
Closed under addition
A set \(S\) is said to be closed under addition if \(\vec{x}+\vec{y}\in S\) for all \(\vec{x},\vec{y}\in S\).
Closed under scalar multiplication
A set \(S\) is said to be closed under scalar multiplication if \(t\vec{x}\in S\) for all \(\vec{x}\in S\) and all scalars \(t\).
Coefficient Matrix
For a system of linear equations \begin{align*} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n&=b_1\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n&=b_2\\ \vdots \hskip75pt &= \hskip3pt \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n&=b_m \end{align*} the coefficient matrix is defined to be the rectangular array\[\left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array}\right]\]
Cofactor
Let \(A\) be an \(n\times n\) matrix with \(n\geq 2\). Let \(A(i,j)\) be the \((n-1)\times (n-1)\) matrix obtained from \(A\) by deleting the \(i\)-th row and the \(j\)-th column. The cofactor of \(a_{ij}\) is \[C_{ij}=(-1)^{i+j}\det A(i,j)\]
Cofactor Matrix
Let \(A\) be an \(n\times n\) matrix. The cofactor matrix of \(A\) is the matrix defined by\[\big(\operatorname{cof} A \big)_{ij}=C_{ij}\]
Columnspace
Let \(A=\begin{bmatrix} \vec{a}_1 & \cdots & \vec{a}_n\end{bmatrix}\) be an \(m\times n\) matrix. The columnspace of \(A\) is\[\operatorname{Col}(A)=\operatorname{Span}\{\vec{a}_1,\ldots,\vec{a}_n\}=\{A\vec{x} \mid \vec{x}\in \mathbb{R}^n\}\]
Composition of Linear Mappings from \(\mathbb{R}^n\) to \(\mathbb{R}^m\)
If \(L:\mathbb{R}^n \to \mathbb{R}^m\) and \(M:\mathbb{R}^m\to \mathbb{R}^p\) are linear mappings, then we define the composition \(M\circ L:\mathbb{R}^n\to \mathbb{R}^p\) by \[(M\circ L)(\vec{x})=M(L(\vec{x}))\] for every \(\vec{x}\in \mathbb{R}^n\).
Consistent system of linear equations
A system of linear equations is said to be consistent if it has at least one solution.
Coordinates/Coordinate vector
Let \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) be any basis for a vector space \(\mathbb{V}\). If \(\vec{v}=c_1\vec{v}_1 + \cdots + c_n\vec{v}_n\in \mathbb{V}\), then we call \(c_1,\ldots,c_n\) the coordinates of \(\vec{v}\) with respect to the basis \(\mathcal{B}\). We define the coordinate vector of \(\vec{v}\) with respect to the basis \(\mathcal{B}\) by\[[\vec{v}]_\mathcal{B}=\begin{bmatrix}c_1\\ \vdots \\c_n\end{bmatrix}\]
Cross Product in \(\mathbb{R}^3\)
Let \(\vec{v},\vec{w}\in \mathbb{R}^3\). Then, the cross product of \(\vec{v}=\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}\) and \(\vec{w}=\begin{bmatrix}w_1\\w_2\\w_3\end{bmatrix}\) is \[\vec{v} \times \vec{w}=\begin{bmatrix} v_2w_3 - v_3w_2\\v_3w_1 - v_1w_3\\v_1w_2 -v_2w_1\end{bmatrix}\]
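For example, \(\begin{bmatrix}1\\0\\0\end{bmatrix}\times \begin{bmatrix}0\\1\\0\end{bmatrix}=\begin{bmatrix}0(0)-0(1)\\0(0)-1(0)\\1(1)-0(0)\end{bmatrix}=\begin{bmatrix}0\\0\\1\end{bmatrix}\); that is, \(\vec{e}_1\times\vec{e}_2=\vec{e}_3\).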
Determinant
Let \(A\) be an \(n \times n\) matrix. If \(n=1\), we define the determinant of \(A\) by \(\det A = \det [a]=a\). If \(n \geq 2\), then we define the determinant of \(A\) by the cofactor expansion along the first row,\[\det A=a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}\]
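For example, when \(n=2\) this expansion gives the familiar formula \[\det\begin{bmatrix} a & b\\ c & d\end{bmatrix}=aC_{11}+bC_{12}=a(d)+b(-c)=ad-bc\]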
Diagonal Matrix
An \(n\times n\) matrix \(D\) is said to be a diagonal matrix if \(d_{ij}=0\) for all \(i\neq j\). We denote a diagonal matrix by\[D=\operatorname{diag}(d_{11},d_{22},\ldots,d_{nn})\]
Diagonalizable
An \(n\times n\) matrix \(A\) is said to be diagonalizable if there exists an invertible matrix \(P\) such that \(P^{-1}AP=D\) is diagonal. We say that \(P\) diagonalizes \(A\).
Dimension
If a vector space \(\mathbb{V}\) has a basis with \(n\) elements, then we define the dimension of \(\mathbb{V}\) to be \(n\), write \(\dim \mathbb{V}=n\), and say that \(\mathbb{V}\) is an \(n\)-dimensional vector space.
The vector space \(\{\vec{0}\}\) is said to have dimension \(0\).
If \(\mathbb{V}\) has no basis with finitely many elements, then \(\mathbb{V}\) is called infinite-dimensional.
Dot Product
Let \(\vec{x}=\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix},\vec{y}=\begin{bmatrix}y_1\\\vdots\\y_n\end{bmatrix}\in \mathbb{R}^n\), then the dot product \(\vec{x}\cdot \vec{y}\) is defined by \[\vec{x}\cdot \vec{y}=\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}\cdot \begin{bmatrix}y_1\\\vdots\\y_n\end{bmatrix}=x_1y_1 + x_2y_2+\cdots+x_ny_n=\sum_{i=1}^n x_iy_i\]
Eigenspace
If \(\lambda\) is an eigenvalue of \(A\), then \(\operatorname{Null}(A-\lambda I)\) is called the eigenspace of \(\lambda\) and is denoted \(E_{\lambda}\).
Eigenvalue/Eigenvector of a linear mapping
Let \(L\) be a linear operator on \(\mathbb{R}^n\). If there is a non-zero vector \(\vec{v}\) such that \(L(\vec{v})=\lambda\vec{v}\) for some scalar \(\lambda\), then \(\lambda\) is called an eigenvalue of \(L\) and \(\vec{v}\) is called an eigenvector of \(L\) corresponding to \(\lambda\).
Eigenvalue/Eigenvector of a matrix
Let \(A\) be an \(n\times n\) matrix. If there is a non-zero vector \(\vec{v}\) such that \(A\vec{v}=\lambda\vec{v}\) for some scalar \(\lambda\), then \(\lambda\) is called an eigenvalue of \(A\) and \(\vec{v}\) is called an eigenvector of \(A\) corresponding to \(\lambda\).
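For example, if \(A=\begin{bmatrix}2 & 1\\0 & 3\end{bmatrix}\), then \(A\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}2\\0\end{bmatrix}=2\begin{bmatrix}1\\0\end{bmatrix}\), so \(\lambda=2\) is an eigenvalue of \(A\) and \(\begin{bmatrix}1\\0\end{bmatrix}\) is a corresponding eigenvector.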
Elementary Matrix
An elementary matrix is an \(n\times n\) matrix that is created by performing a single elementary row-operation on the \(n\times n\) identity matrix.
Elementary Row Operations
The three elementary row operations are:
  1. Multiply row \(i\) by a non-zero scalar \(t\). Denoted \(tR_i\).
  2. Add a multiple of row \(i\) to row \(j\). Denoted \(R_j+tR_i\).
  3. Swap row \(i\) and row \(j\). Denoted \(R_i \leftrightarrow R_j\).
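For example, applying \(R_2-2R_1\) to \(\begin{bmatrix}1 & 2\\2 & 5\end{bmatrix}\) produces \(\begin{bmatrix}1 & 2\\0 & 1\end{bmatrix}\).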
Equivalent systems of linear equations
Two systems of linear equations are said to be equivalent if they have the same solution set.
Free Variable
Any variable whose column does not contain a leading one in the RREF of the coefficient matrix of a system of linear equations is called a free variable.
Fundamental Subspaces
The four fundamental subspaces of a matrix are the rowspace, columnspace, nullspace and left nullspace.
Geometric Multiplicity
The geometric multiplicity \(g_{\lambda}\) of an eigenvalue \(\lambda\) is the dimension of its eigenspace. That is, \(g_{\lambda}=\dim \operatorname{Null}(A-\lambda I)\).
Homogeneous system of linear equations
A system of linear equations is said to be a homogeneous system if the right-hand side contains only zeros. That is, its augmented matrix has the form \([A \mid \vec{0}]\).
Hyperplane in \(\mathbb{R}^n\)
Let \(\vec{v}_1,\ldots,\vec{v}_{n-1},\vec{b} \in \mathbb{R}^n\) with \(\{\vec{v}_1,\ldots,\vec{v}_{n-1}\}\) being linearly independent. Then the set with vector equation \(\vec{x}=c_1\vec{v}_1 + \cdots + c_{n-1}\vec{v}_{n-1} + \vec{b}\), \(c_i\in \mathbb{R}\) is called a hyperplane in \(\mathbb{R}^n\) which passes through \(\vec{b}\).
Identity Mapping for \(\mathbb{R}^n\)
The linear mapping \(I:\mathbb{R}^n\to\mathbb{R}^n\) such that \(I(\vec{x})=\vec{x}\) for all \(\vec{x}\in \mathbb{R}^n\) is called the identity mapping.
Identity Matrix
The \(n\times n\) matrix \(I\) (or \(I_n\)) such that \((I)_{ii}=1\) for \(1\leq i\leq n\), and \((I)_{ij}=0\) whenever \(i\neq j\) is called the identity matrix.
Inconsistent system of linear equations
A system of linear equations is said to be inconsistent if it does not have any solutions.
Inverse
Let \(A\) be a square matrix. If \(B\) is a matrix such that \(AB=I=BA\), then \(B\) is called an inverse of \(A\) and is denoted \(B=A^{-1}\). If \(A\) has an inverse, then \(A\) is said to be invertible.
Invertible Mapping on \(\mathbb{R}^n\)
Let \(L:\mathbb{R}^n\to\mathbb{R}^n\) and \(M:\mathbb{R}^n\to\mathbb{R}^n\) be linear mappings. If \((L\circ M)(\vec{x})=\vec{x}\) and \((M\circ L)(\vec{x})=\vec{x}\) for all \(\vec{x}\in \mathbb{R}^n\), then \(L\) and \(M\) are said to be invertible. We write \(M=L^{-1}\) and \(L=M^{-1}\).
\(k\)-flat
Let \(\vec{v}_1,\ldots,\vec{v}_{k},\vec{b} \in \mathbb{R}^n\) with \(\{\vec{v}_1,\ldots,\vec{v}_{k}\}\) linearly independent. We call the set with vector equation \(\vec{x}=c_1\vec{v}_1 + \cdots + c_{k}\vec{v}_{k} + \vec{b}\), \(c_i\in \mathbb{R}\) a \(k\)-flat in \(\mathbb{R}^n\) through \(\vec{b}\).
Kernel of a linear mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\)
Let \(L:\mathbb{R}^n\to \mathbb{R}^m\) be a linear mapping. The kernel of \(L\) is the set of all vectors in the domain that have image \(\vec{0}\) under \(L\). In set notation,\[\ker(L)=\{\vec{x} \in \mathbb{R}^n \mid L(\vec{x})=\vec{0}\}\]
Left Inverse
Let \(A\) be an \(m\times n\) matrix. If \(C\) is an \(n\times m\) matrix such that \(CA=I_n\), then \(C\) is called a left inverse of \(A\).
Left Nullspace
Let \(A\) be an \(m\times n\) matrix. The nullspace of \(A^T\) is called the left nullspace of \(A\).\[\operatorname{Null}(A^T)=\{\vec{x}\in \mathbb{R}^m \mid A^T\vec{x}=\vec{0}\}\]
Length in \(\mathbb{R}^n\)
The length \(\|\vec{x}\|\) of \(\vec{x}\in \mathbb{R}^n\) is defined by \[\|\vec{x}\|=\sqrt{\vec{x}\cdot \vec{x}}\]
Line in \(\mathbb{R}^n\)
Let \(\vec{v}_1,\vec{b} \in \mathbb{R}^n\) with \(\vec{v}_1\neq \vec{0}\). Then we call the set with vector equation \(\vec{x}=c_1\vec{v}_1+\vec{b}\), \(c_1\in \mathbb{R}\) a line in \(\mathbb{R}^n\) which passes through \(\vec{b}\).
Linear Combination of vectors in \(\mathbb{R}^n\)
Let \(\vec{v}_1,\ldots,\vec{v}_k\) be vectors in \(\mathbb{R}^n\). A sum of scalar multiples \[t_1\vec{v}_1 + t_2\vec{v}_2 + \cdots + t_k\vec{v}_k\] where \(t_1,t_2,\ldots,t_k\) are real constants, is called a linear combination.
Linear Combination
Let \(\vec{v}_1,\ldots,\vec{v}_k\) be vectors in a vector space. A sum of scalar multiples \[t_1\vec{v}_1 + t_2\vec{v}_2 + \cdots + t_k\vec{v}_k\] where \(t_1,t_2,\ldots,t_k\) are real constants, is called a linear combination.
Linear Dependence/Independence in \(\mathbb{R}^n\)
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in \(\mathbb{R}^n\) is said to be linearly dependent if there exist coefficients \(c_1,\ldots,c_k\) not all zero such that\[\vec{0}=c_1\vec{v}_1 + \cdots + c_k\vec{v}_k\] Conversely, a set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly independent if the only solution to\[\vec{0}=c_1\vec{v}_1 + \cdots + c_k\vec{v}_k\] is \(c_1=c_2=\cdots=c_k=0\).
Linear Dependence/Independence
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in a vector space is said to be linearly dependent if there exist coefficients \(c_1,\ldots,c_k\) not all zero such that\[\vec{0}=c_1\vec{v}_1 + \cdots + c_k\vec{v}_k\] Conversely, a set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) is said to be linearly independent if the only solution to\[\vec{0}=c_1\vec{v}_1 + \cdots + c_k\vec{v}_k\] is \(c_1=c_2=\cdots=c_k=0\).
Linear Equation
An equation in \(n\) variables \(x_1,\ldots,x_n\) that can be written in the form\[a_1x_1 + \cdots + a_nx_n=b\] where \(a_1,\ldots,a_n,b\) are constants is called a linear equation. The constants \(a_i\) are called the coefficients of the equation and \(b\) is called the right-hand side.
Linear Mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\)
A function \(L:\mathbb{R}^n \to \mathbb{R}^m\) is called a linear mapping (or linear transformation) if it has the property that\[L(s\vec{x} + t\vec{y})=sL(\vec{x}) + tL(\vec{y})\] for every \(\vec{x},\vec{y}\in \mathbb{R}^n\) and \(s,t\in\mathbb{R}\).
Lower Triangular
An \(n\times n\) matrix \(L\) is said to be lower triangular if \(\ell_{ij}=0\) whenever \(i \lt j\).
Matrix
For any positive integers \(m\) and \(n\), an \(m\times n\) matrix \(A\) is a rectangular array with \(m\) rows and \(n\) columns. We denote the entry in the \(i\)-th row and \(j\)-th column by \(a_{ij}\) or \((A)_{ij}\). That is, \[A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix}\]
Matrix-Matrix Multiplication
For an \(m\times n\) matrix \(A\) and an \(n\times p\) matrix \(B=\begin{bmatrix} \vec{b}_1 & \cdots & \vec{b}_p\end{bmatrix}\) we define \(AB\) to be the \(m\times p\) matrix \[AB=A\begin{bmatrix} \vec{b}_1 & \cdots & \vec{b}_p\end{bmatrix}=\begin{bmatrix} A\vec{b}_1 & \cdots & A\vec{b}_p\end{bmatrix}\]
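For example, \[\begin{bmatrix}1 & 2\\3 & 4\end{bmatrix}\begin{bmatrix}5 & 6\\7 & 8\end{bmatrix}=\begin{bmatrix}1(5)+2(7) & 1(6)+2(8)\\3(5)+4(7) & 3(6)+4(8)\end{bmatrix}=\begin{bmatrix}19 & 22\\43 & 50\end{bmatrix}\]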
Matrix-Vector Multiplication
Definition 1: Let \(A\) be an \(m\times n\) matrix whose rows are denoted \(\vec{a}_i^T\) for \(1\leq i\leq m\). Then, for any \(\vec{x}\in \mathbb{R}^n\), we define\[A\vec{x}=\begin{bmatrix} \vec{a}_1\cdot \vec{x}\\ \vdots \\ \vec{a}_m\cdot \vec{x}\end{bmatrix}\]
Definition 2: Let \(A\) be an \(m\times n\) matrix whose columns are denoted \(\vec{a}_i\) for \(1\leq i\leq n\). Then, for any \(\vec{x}=\begin{bmatrix}x_1\\ \vdots \\x_n\end{bmatrix}\in \mathbb{R}^n\), we define\[A\vec{x}=x_1\vec{a}_1+\cdots + x_n\vec{a}_n\]
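The two definitions agree. For example, with \(A=\begin{bmatrix}1 & 2\\3 & 4\end{bmatrix}\) and \(\vec{x}=\begin{bmatrix}5\\6\end{bmatrix}\), Definition 1 gives \(A\vec{x}=\begin{bmatrix}1(5)+2(6)\\3(5)+4(6)\end{bmatrix}=\begin{bmatrix}17\\39\end{bmatrix}\), while Definition 2 gives \(A\vec{x}=5\begin{bmatrix}1\\3\end{bmatrix}+6\begin{bmatrix}2\\4\end{bmatrix}=\begin{bmatrix}17\\39\end{bmatrix}\).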
Matrix of a linear operator on \(\mathbb{R}^n\) with respect to a basis \(\mathcal{B}\)
If \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_n\}\) is a basis for \(\mathbb{R}^n\) and \(L:\mathbb{R}^n\to\mathbb{R}^n\) is a linear operator, then the matrix of \(L\) with respect to the basis \(\mathcal{B}\) is defined to be\[[L]_{\mathcal{B}}=\begin{bmatrix} [L(\vec{v}_1)]_{\mathcal{B}} & \cdots & [L(\vec{v}_n)]_{\mathcal{B}} \end{bmatrix}\] It satisfies\[[L(\vec{x})]_{\mathcal{B}}=[L]_{\mathcal{B}}[\vec{x}]_{\mathcal{B}}\]
Matrix Mapping
For any \(A\in M_{m\times n}(\mathbb{R})\), the function \(f_A:\mathbb{R}^n\to\mathbb{R}^m\) defined by \(f_A(\vec{x})=A\vec{x}\) is called a matrix mapping.
Nullspace of a matrix
Let \(A\) be an \(m\times n\) matrix. The set of all \(\vec{x}\in \mathbb{R}^n\) such that \(A\vec{x}=\vec{0}\) is called the nullspace of \(A\) and is denoted\[\operatorname{Null}(A)=\{\vec{x}\in \mathbb{R}^n \mid A\vec{x}=\vec{0}\}\]
Nullspace of a linear mapping
See Kernel.
Orthogonal set in \(\mathbb{R}^n\)
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in \(\mathbb{R}^n\) is called an orthogonal set if \(\vec{v}_i\cdot \vec{v}_j=0\) for all \(i\neq j\).
Orthogonal vectors in \(\mathbb{R}^n\)
Let \(\vec{x},\vec{y}\in \mathbb{R}^n\). If \(\vec{x}\cdot \vec{y}=0\), then we say that \(\vec{x}\) and \(\vec{y}\) are orthogonal.
Orthonormal set in \(\mathbb{R}^n\)
A set of vectors \(\{\vec{v}_1,\ldots,\vec{v}_k\}\) in \(\mathbb{R}^n\) is called an orthonormal set if \(\vec{v}_i\cdot \vec{v}_j=0\) for all \(i\neq j\) and \(\vec{v}_i\cdot \vec{v}_i=1\) for all \(1\leq i\leq k\).
Perpendicular onto a vector in \(\mathbb{R}^n\)
Let \(\vec{a},\vec{b}\in\mathbb{R}^n\) with \(\vec{a}\neq\vec{0}\). The perpendicular of \(\vec{b}\) onto \(\vec{a}\) is defined by\[\operatorname{perp}_{\vec{a}} \vec{b}=\vec{b} - \operatorname{proj}_{\vec{a}}\vec{b}\]
Plane in \(\mathbb{R}^n\)
Let \(\vec{v}_1,\vec{v}_2,\vec{b}\in \mathbb{R}^n\) with \(\{\vec{v}_1,\vec{v}_2\}\) being a linearly independent set. Then we call the set with vector equation \(\vec{x}=c_1\vec{v}_1 + c_2\vec{v}_2 + \vec{b}\), \(c_1,c_2\in \mathbb{R}\) a plane in \(\mathbb{R}^n\) which passes through \(\vec{b}\).
Projection onto a plane in \(\mathbb{R}^3\)
The projection of \(\vec{u}\in \mathbb{R}^3\) onto a plane \(P\) with normal vector \(\vec{n}\) is defined by\[\operatorname{proj}_{P}\vec{u}=\operatorname{perp}_{\vec{n}} \vec{u}\]
Projection onto a vector in \(\mathbb{R}^n\)
Let \(\vec{a},\vec{b}\in\mathbb{R}^n\) with \(\vec{a}\neq \vec{0}\). The projection of \(\vec{b}\) onto \(\vec{a}\) is defined by\[\operatorname{proj}_{\vec{a}}\vec{b}= \frac{\vec{b}\cdot \vec{a}}{\|\vec{a}\|^2}\vec{a}\]
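For example, if \(\vec{a}=\begin{bmatrix}1\\1\end{bmatrix}\) and \(\vec{b}=\begin{bmatrix}2\\0\end{bmatrix}\), then \(\operatorname{proj}_{\vec{a}}\vec{b}=\frac{2}{2}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\1\end{bmatrix}\), and \(\operatorname{perp}_{\vec{a}}\vec{b}=\begin{bmatrix}2\\0\end{bmatrix}-\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\-1\end{bmatrix}\), which is orthogonal to \(\vec{a}\).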
Range of a linear mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\)
Let \(L:\mathbb{R}^n\to \mathbb{R}^m\) be a linear mapping. We define the range of \(L\) to be the set of all images that \(L\) produces for any \(\vec{x}\in \mathbb{R}^n\). That is, \[\operatorname{Range}(L)=\{L(\vec{x}) \mid \vec{x}\in \mathbb{R}^n\}\]
Rank
The rank of a matrix \(A\) is the number of leading ones in the RREF of the matrix and is denoted \(\operatorname{rank} A\).
Reduced Row Echelon Form (RREF)
A matrix \(R\) is said to be in reduced row echelon form if
  1. All rows containing a non-zero entry are above any rows that contain only zeros.
  2. The first non-zero entry in each non-zero row is 1, called a leading one (or a pivot).
  3. The leading one in each non-zero row is to the right of the leading one in any row above it.
  4. A leading one is the only non-zero entry in its column.
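For example, \(\begin{bmatrix}1 & 2 & 0 & 3\\0 & 0 & 1 & -1\\0 & 0 & 0 & 0\end{bmatrix}\) is in reduced row echelon form, with leading ones in columns 1 and 3.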
Reflection
The reflection of \(\vec{x}\in \mathbb{R}^n\) over a hyperplane \(P\) with normal vector \(\vec{n}\) is defined by \[\operatorname{refl}_P(\vec{x})=\vec{x}-2\operatorname{proj}_{\vec{n}} \vec{x}\]
Right Inverse
Let \(A\) be an \(m\times n\) matrix. If \(B\) is an \(n\times m\) matrix such that \(AB=I_m\), then \(B\) is called a right inverse of \(A\).
Rotation
A rotation by an angle \(\theta\) is the linear mapping \(R_{\theta}:\mathbb{R}^2\to\mathbb{R}^2\) defined by\[R_{\theta}(x_1,x_2)=(x_1\cos \theta-x_2\sin \theta,x_1\sin\theta + x_2\cos\theta)\]
Rotation Matrix
The matrix \(\begin{bmatrix} \cos \theta & -\sin \theta\\\sin \theta & \cos\theta\end{bmatrix}\) is called a rotation matrix. It is the standard matrix of a rotation by an angle \(\theta\) in \(\mathbb{R}^2\).
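For example, with \(\theta=\frac{\pi}{2}\) the rotation matrix is \(\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\), which maps \(\vec{e}_1\) to \(\vec{e}_2\) and \(\vec{e}_2\) to \(-\vec{e}_1\).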
Row Echelon Form (REF)
A matrix \(R\) is said to be in row echelon form if
  1. All rows containing a non-zero entry are above any rows that contain only zeros.
  2. The first non-zero entry, called a leading entry, in each non-zero row is to the right of the first non-zero entry in any row above it.
  3. In a column containing a leading entry, all entries beneath the leading entry are zero.
Row Equivalent
Two matrices \(A\) and \(B\) are said to be row equivalent if there exists a sequence of elementary row operations that transform \(A\) into \(B\). We write \(A\sim B\).
Row Reducing
The procedure of applying elementary row operations on a matrix \(A\) is called row reducing.
Rowspace
Let \(A\) be an \(m\times n\) matrix with rows \(\vec{a}_i^T\) for \(1\leq i\leq m\). The span of the rows of \(A\) is called the rowspace of \(A\) and is denoted \[\operatorname{Row}(A)=\operatorname{Span}\{\vec{a}_1,\ldots,\vec{a}_m\}=\{A^T\vec{x} \mid \vec{x}\in \mathbb{R}^m\}\]
Similar matrices
Let \(A\) and \(B\) be \(n\times n\) matrices. If there exists an invertible matrix \(P\) such that \(P^{-1}AP=B\), then \(A\) and \(B\) are said to be similar.
Skew-symmetric
A matrix \(A\) is called skew-symmetric if \(A^T=-A\).
Solution of a system of linear equations
A solution to a system of \(m\) linear equations in \(n\) variables is a vector \(\begin{bmatrix}s_1\\\vdots\\s_n\end{bmatrix}\) in \(\mathbb{R}^n\) such that all \(m\) equations are satisfied when we set \(x_1=s_1\), \(x_2=s_2\), \(\ldots\), \(x_n=s_n\).
Solution set of a system of linear equations
The solution set of a system of linear equations is the set of all solutions of the system.
Solution Space
The solution set of a homogeneous system is called the solution space of the system.
Span of vectors in \(\mathbb{R}^n\)
Let \(B=\{\vec{v}_1,\ldots,\vec{v}_k\}\) be a set of vectors in \(\mathbb{R}^n\). We define the span \(S\) of the set \(B\) by \[S=\operatorname{Span} B=\operatorname{Span} \{\vec{v}_1,\ldots,\vec{v}_k\}=\{t_1\vec{v}_1 + \cdots + t_k\vec{v}_k \mid t_1,\ldots,t_k\in \mathbb{R}\}\] We also say that \(S\) is spanned by \(B\) and that \(B\) is a spanning set for \(S\).
Span
Let \(\mathcal{B}=\{\vec{v}_1,\ldots,\vec{v}_k\}\) be a set of vectors in a vector space \(\mathbb{V}\). The span of \(\mathcal{B}\) is defined by\[\mathbb{S}=\operatorname{Span} \mathcal{B}=\{c_1\vec{v}_1+\cdots +c_k\vec{v}_k \mid c_1,\ldots,c_k\in \mathbb{R}\}\] We also say that \(\mathbb{S}\) is spanned by \(\mathcal{B}\) and that \(\mathcal{B}\) is a spanning set for \(\mathbb{S}\).
Square Matrix
An \(n\times n\) matrix is called a square matrix.
Standard basis for \(\mathbb{R}^n\)
The set \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) of vectors in \(\mathbb{R}^n\) such that \(\vec{e}_i\) represents the vector whose \(i\)-th component is 1 and all other components are 0 is called the standard basis for \(\mathbb{R}^n\).
Standard Matrix
Let \(L:\mathbb{R}^n\to\mathbb{R}^m\) be a linear mapping. The matrix \([L]=\begin{bmatrix}L(\vec{e}_1) & L(\vec{e}_2) & \cdots & L(\vec{e}_n)\end{bmatrix}\) is called the standard matrix of \(L\) and has the property that\[L(\vec{x})=[L]\vec{x}\]
Subspace of \(\mathbb{R}^n\)
A subset \(\mathbb{S}\) of \(\mathbb{R}^n\) is called a subspace of \(\mathbb{R}^n\) if for every \(\vec{x},\vec{y},\vec{w}\in \mathbb{S}\) and \(c,d\in \mathbb{R}\) we have
S1 \(\vec{x}+\vec{y}\in \mathbb{S}\)
S2 \((\vec{x}+\vec{y})+\vec{w}=\vec{x}+(\vec{y}+\vec{w})\)
S3 \(\vec{x}+\vec{y}=\vec{y}+\vec{x}\)
S4 There exists a vector \(\vec{0}\in \mathbb{S}\) such that \(\vec{x}+\vec{0}=\vec{x}\)
S5 For each \(\vec{x}\in \mathbb{S}\) there exists a vector \((-\vec{x})\in \mathbb{S}\) such that \(\vec{x}+(-\vec{x})=\vec{0}\)
S6 \(c\vec{x}\in \mathbb{S}\)
S7 \(c(d\vec{x})=(cd)\vec{x}\)
S8 \((c+d)\vec{x}=c\vec{x}+d\vec{x}\)
S9 \(c(\vec{x}+\vec{y})=c\vec{x}+c\vec{y}\)
S10 \(1\vec{x}=\vec{x}\)
Subspace of a vector space
If \(\mathbb{S}\) is a non-empty subset of a vector space \(\mathbb{V}\), and \(\mathbb{S}\) is also a vector space using the same operations as \(\mathbb{V}\), then \(\mathbb{S}\) is called a subspace of \(\mathbb{V}\).
Symmetric
A matrix \(A\) is called symmetric if \(A^T=A\).
System of Linear Equations
A set of \(m\) linear equations in the same variables \(x_1,\ldots,x_n\) is called a system of \(m\) linear equations in \(n\) variables. A general system of \(m\) linear equations in \(n\) variables has the form \begin{align*} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n&=b_1\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n&=b_2\\ \vdots \hskip75pt &= \hskip3pt \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n&=b_m \end{align*}
Transpose
Let \(A\in M_{m\times n}(\mathbb{R})\). We define the transpose of \(A\), denoted \(A^T\), to be the \(n\times m\) matrix such that\[(A^T)_{ij}=(A)_{ji}\]
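For example, \(\begin{bmatrix}1 & 2 & 3\\4 & 5 & 6\end{bmatrix}^T=\begin{bmatrix}1 & 4\\2 & 5\\3 & 6\end{bmatrix}\).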
Unit Vector in \(\mathbb{R}^n\)
A vector \(\vec{x}\in \mathbb{R}^n\) for which \(\|\vec{x}\|=1\) is called a unit vector.
Upper Triangular
An \(n\times n\) matrix \(U\) is said to be upper triangular if \(u_{ij}=0\) whenever \(i \gt j\).
Vector Equation
If \(S=\operatorname{Span}\{\vec{v}_1,\ldots,\vec{v}_k\}\), then a vector equation for \(S\) is\[\vec{x}=t_1\vec{v}_1 + \cdots + t_k\vec{v}_k, \qquad t_1,\ldots,t_k\in \mathbb{R}\]
Vector Space (Real Vector Space)
A real vector space is a set \(\mathbb{V}\) together with an operation of addition, denoted \(\vec{x}+\vec{y}\), and an operation of scalar multiplication, denoted \(t\vec{x}\) for any \(t\in \mathbb{R}\), such that for any \(\vec{x},\vec{y},\vec{z}\in \mathbb{V}\) and \(a,b\in \mathbb{R}\) we have all of the following properties:
V1 \(\vec{x}+\vec{y}\in \mathbb{V}\);
V2 \((\vec{x}+\vec{y})+\vec{z}=\vec{x}+(\vec{y}+\vec{z})\);
V3 \(\vec{x}+\vec{y}=\vec{y}+\vec{x}\);
V4 There exists a vector \(\vec{0}\in \mathbb{V}\), called the zero vector, such that \(\vec{x}+\vec{0}=\vec{x}\) for all \(\vec{x} \in \mathbb{V}\);
V5 There exists a vector \((-\vec{x})\in \mathbb{V}\) such that \(\vec{x}+(-\vec{x})=\vec{0}\);
V6 \(a\vec{x}\in \mathbb{V}\);
V7 \(a(b\vec{x})=(ab)\vec{x}\);
V8 \((a+b)\vec{x}=a\vec{x}+b\vec{x}\);
V9 \(a(\vec{x}+\vec{y})=a\vec{x}+a\vec{y}\);
V10 \(1\vec{x}=\vec{x}\).
Zero Vector
The unique element, usually denoted \(\vec{0}\), in a vector space \(\mathbb{V}\) such that \(\vec{x}+\vec{0}=\vec{x}\) for all \(\vec{x}\in \mathbb{V}\) is called the zero vector of \(\mathbb{V}\).