Preliminary Mathematics for online MSc programmes in Data Analytics
Unit 6: Vectors and Matrices
Vectors
Introduction to vectors
A vector is an ordered set of numbers. These could be expressed as a row or as a column.
The number of elements in a vector is referred to as its dimension. An $n$-dimensional vector can be represented as a row vector $(x_1, x_2, \ldots, x_n)$ or as the corresponding column vector.
We use subscripts to denote the individual entries of a vector: $x_i$ denotes the $i$-th entry of the vector $\mathbf{x}$. If we had called the above vector $\mathbf{x}$, then $x_3 = 5$.
Operations on vectors
As a matter of definition, when we add two vectors, we add them element by element. If
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{pmatrix},$$
we then have that
$$\mathbf{x} + \mathbf{y} = \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \\ x_3 + y_3 \\ \vdots \\ x_n + y_n \end{pmatrix}.$$
Scalar multiplication is an operation that takes a number (or scalar) $\gamma$ and a vector $\mathbf{x}$ and produces
$$\gamma \cdot \mathbf{x} = \begin{pmatrix} \gamma x_1 \\ \gamma x_2 \\ \gamma x_3 \\ \vdots \\ \gamma x_n \end{pmatrix}.$$
The difference $\mathbf{x} - \mathbf{y}$ can be written as $\mathbf{x} + (-1) \cdot \mathbf{y}$. Thus we need to multiply the second vector by $-1$ and then add the two vectors.
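These element-wise operations can be sketched in a few lines of NumPy (the example vectors here are our own, chosen only for illustration):

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

s = x + y          # element-by-element sum
g = 2 * x          # scalar multiplication acts element-wise
d = x + (-1) * y   # the difference written as x + (-1)*y
```

NumPy's arithmetic operators apply element by element, which is exactly the definition above.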
A null vector is a vector whose elements are all zero. The difference between any vector and itself yields the null vector.
A unit vector is a vector whose length or modulus is 1, i.e. $\sqrt{\sum_{i=1}^{n} x_i^2} = 1$.
Linear combination of vectors
Given $n$-vectors $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{z}$, as well as scalars $\gamma$ and $\delta$, we say that $\mathbf{z}$ is a linear combination of $\mathbf{x}$ and $\mathbf{y}$ if $\mathbf{z} = \gamma \mathbf{x} + \delta \mathbf{y}$.
Specifically, for column vectors we have
$$\gamma \cdot \mathbf{x} + \delta \cdot \mathbf{y} = \begin{pmatrix} \gamma x_1 + \delta y_1 \\ \gamma x_2 + \delta y_2 \\ \gamma x_3 + \delta y_3 \\ \vdots \\ \gamma x_n + \delta y_n \end{pmatrix}.$$
Inner product of vectors
Given two $n$-vectors $\mathbf{x}$ and $\mathbf{y}$, their inner product (sometimes called the dot product) is given by
$$\mathbf{x} \cdot \mathbf{y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^{n} x_i y_i.$$
Orthogonality of vectors
Two vectors are said to be orthogonal if their inner product is zero.
Norm of a vector
The square root of the inner product of a vector $\mathbf{x}$ with itself is called the norm of the vector:
$$\|\mathbf{x}\| = \sqrt{\mathbf{x} \cdot \mathbf{x}} = \sqrt{\sum_{i=1}^{n} x_i^2}.$$
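The inner product, orthogonality and the norm can be illustrated together in NumPy (a minimal sketch; the vectors are hypothetical examples):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([-4.0, 3.0])

ip = np.dot(x, y)               # inner product: 3*(-4) + 4*3 = 0, so x and y are orthogonal
norm_x = np.sqrt(np.dot(x, x))  # norm from the definition: sqrt(9 + 16) = 5
u = x / norm_x                  # rescaling by the norm yields a unit vector
```

Dividing a non-zero vector by its own norm always produces a unit vector pointing in the same direction.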
Linear (in)dependence of vectors
A set of vectors is linearly independent if no vector in the set is
(a) a scalar multiple of another vector in the set or
(b) a linear combination of other vectors in the set.
A set of vectors is linearly dependent if any vector in the set is
(a) a scalar multiple of another vector in the set or
(b) a linear combination of other vectors in the set.
Vectors $\mathbf{a}$ and $\mathbf{b}$ are linearly independent, because neither vector is a scalar multiple of the other.
Vectors $\mathbf{a}$ and $\mathbf{d}$ are linearly dependent, because $\mathbf{d}$ is a scalar multiple of $\mathbf{a}$: $\mathbf{d} = 2\mathbf{a}$.
Vector $\mathbf{c}$ is a linear combination of vectors $\mathbf{a}$ and $\mathbf{b}$, because $\mathbf{c} = \mathbf{a} + \mathbf{b}$. Therefore, the set of vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ is linearly dependent.
Vectors $\mathbf{d}$, $\mathbf{e}$ and $\mathbf{f}$ are linearly independent, since no vector in the set can be derived as a scalar multiple or a linear combination of the other vectors in the set.
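Linear (in)dependence can be checked numerically via the rank of a matrix whose columns are the vectors. A sketch with hypothetical vectors chosen to mimic the relations above ($\mathbf{d} = 2\mathbf{a}$, $\mathbf{c} = \mathbf{a} + \mathbf{b}$):

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 1])
c = a + b      # a linear combination of a and b
d = 2 * a      # a scalar multiple of a

# Full column rank means the columns are linearly independent
rank_ab = np.linalg.matrix_rank(np.column_stack([a, b]))    # 2 -> independent
rank_ad = np.linalg.matrix_rank(np.column_stack([a, d]))    # 1 -> dependent
rank_abc = np.linalg.matrix_rank(np.column_stack([a, b, c]))  # 2 -> dependent set
```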
Tasks
Task 1
Find the vector $2\mathbf{u} - \mathbf{v}$ when $\mathbf{u} = \begin{pmatrix} -2 \\ 3 \\ 5 \end{pmatrix}$ and $\mathbf{v} = \begin{pmatrix} 0 \\ -4 \\ 7 \end{pmatrix}$.
$$2\mathbf{u} - \mathbf{v} = \begin{pmatrix} 2 \times (-2) - 0 \\ 2 \times 3 - (-4) \\ 2 \times 5 - 7 \end{pmatrix} = \begin{pmatrix} -4 \\ 10 \\ 3 \end{pmatrix}$$
Task 2
Are the following vectors orthogonal?
(a) $\mathbf{u} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\mathbf{v} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$
(b) $\mathbf{u} = \begin{pmatrix} 3 \\ -1 \end{pmatrix}$ and $\mathbf{v} = \begin{pmatrix} 7 \\ 5 \end{pmatrix}$
(a) $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 = 1 \times 2 + 2 \times (-1) = 0$, therefore orthogonal.
(b) $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 = 3 \times 7 + (-1) \times 5 = 16 \neq 0$, therefore not orthogonal.
Task 3
Find the value of $n$ such that the vectors $\mathbf{u} = \begin{pmatrix} 2 \\ 4 \\ 1 \end{pmatrix}$ and $\mathbf{v} = \begin{pmatrix} n \\ 1 \\ -8 \end{pmatrix}$ are orthogonal.
$$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + u_3 v_3 = 2n + 4 \times 1 + 1 \times (-8) = 2n - 4 = 0$$
Hence we have to set $n = \frac{4}{2} = 2$.
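The three task solutions can be verified numerically; a quick NumPy check (our own, not part of the original tasks):

```python
import numpy as np

# Task 1
u = np.array([-2, 3, 5])
v = np.array([0, -4, 7])
t1 = 2 * u - v                      # expected (-4, 10, 3)

# Task 2: inner products decide orthogonality
t2a = np.dot([1, 2], [2, -1])       # 0  -> orthogonal
t2b = np.dot([3, -1], [7, 5])       # 16 -> not orthogonal

# Task 3: with n = 2 the inner product should vanish
n = 2
t3 = np.dot([2, 4, 1], [n, 1, -8])
```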
Matrices
A matrix is a rectangular array of numbers
$$\mathbf{A} = \begin{pmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \cdots & x_{mn} \end{pmatrix}.$$
The notational subscripts in the typical element $x_{ij}$ refer to its row and column location in the array: specifically, $x_{ij}$ is the element in the $i$-th row and the $j$-th column.
This matrix has $m$ rows and $n$ columns, so it is said to be of dimension $m \times n$. A matrix can be viewed as a set of column vectors, or alternatively as a set of row vectors. A vector can be viewed as a matrix with only one row or column.
Some special matrices
A matrix with the same number of rows as columns is said to be a square matrix.
Matrices that are not square are said to be rectangular matrices.
A null matrix is composed of all 0's and can be of any dimension.
An identity matrix is a square matrix with 1's on the main diagonal and all other elements equal to 0. Formally, we have $x_{ii} = 1$ for all $i$ and $x_{ij} = 0$ for all $i \neq j$. Identity matrices are often denoted by the symbol $\mathbf{I}$ (or sometimes $\mathbf{I}_n$, where $n$ denotes the dimension).
The three-dimensional identity matrix is
$$\mathbf{I}_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
A square matrix is said to be symmetric if $x_{ij} = x_{ji}$.
A diagonal matrix is a square matrix whose non-diagonal entries are all zero, that is $x_{ij} = 0$ for $i \neq j$.
An upper-triangular matrix is a square matrix in which all entries below the diagonal are 0, that is $x_{ij} = 0$ for $i > j$.
A lower-triangular matrix is a square matrix in which all entries above the diagonal are 0, that is $x_{ij} = 0$ for $i < j$.
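NumPy has built-in constructors for most of these special matrices; a brief sketch (the example entries are arbitrary):

```python
import numpy as np

I3 = np.eye(3)                  # 3x3 identity matrix
Z = np.zeros((2, 3))            # null matrix (can be of any dimension)
D = np.diag([1.0, 2.0, 3.0])    # diagonal matrix

A = np.arange(1, 10).reshape(3, 3)
U = np.triu(A)                  # upper-triangular: zeros below the diagonal
L = np.tril(A)                  # lower-triangular: zeros above the diagonal
S = A + A.T                     # A + A^T is always symmetric
```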
Matrix operations
Matrices $\mathbf{A}$ and $\mathbf{B}$ are equal if and only if they have the same dimensions and each element of $\mathbf{A}$ equals the corresponding element of $\mathbf{B}$.
For any matrix $\mathbf{A}$, the transpose, denoted by $\mathbf{A}^\top$ (or $\mathbf{A}'$), is obtained by interchanging rows and columns. That is, the $i$-th row of the original matrix forms the $i$-th column of the transpose matrix. Note that if $\mathbf{A}$ is of dimension $m \times n$, its transpose is of dimension $n \times m$. Finally, the transpose of the transpose of a matrix yields the original matrix, i.e. $(\mathbf{A}^\top)^\top = \mathbf{A}$.
We can add two matrices as long as they are of the same dimension. Let's assume that we have matrices $\mathbf{A}$ and $\mathbf{B}$ of dimension $m \times n$; their sum is defined as the $m \times n$ matrix
$$\mathbf{C} = \mathbf{A} + \mathbf{B}.$$
For instance,
$$\mathbf{A} + \mathbf{B} = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} + \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{pmatrix} = \begin{pmatrix} x_{11} + y_{11} & x_{12} + y_{12} \\ x_{21} + y_{21} & x_{22} + y_{22} \end{pmatrix}.$$
The transpose of a sum of matrices is the sum of the transposed matrices, i.e. $(\mathbf{A} + \mathbf{B})^\top = \mathbf{A}^\top + \mathbf{B}^\top$.
Multiplying a matrix by a scalar involves multiplying each element of the matrix by that scalar.
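These addition, transpose and scalar-multiplication rules can be checked numerically; a minimal NumPy sketch with example matrices of our own:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B                                # element-wise matrix addition
assert ((A + B).T == A.T + B.T).all()    # (A + B)^T = A^T + B^T
assert (A.T.T == A).all()                # (A^T)^T = A
G = 3 * A                                # scalar multiplication acts element-wise
```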
Example 2
Transpose of a matrix
The transpose of the matrix $\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ is
$$\mathbf{A}^\top = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}.$$
Matrix multiplication is an operation on pairs of matrices that satisfy certain restrictions.
The restriction is that the first matrix must have the same number of columns as the number of rows in the second matrix. When this condition holds, the matrices are said to be conformable under multiplication. Let $\mathbf{A}$ be an $m \times n$ matrix and $\mathbf{B}$ an $n \times p$ matrix. As the number of columns in the first matrix and the number of rows in the second both equal $n$, the matrices are conformable.
The product matrix $\mathbf{C} = \mathbf{A} \cdot \mathbf{B}$ is an $m \times p$ matrix whose $ij$-th element equals the inner product of the $i$-th row vector of the matrix $\mathbf{A}$ and the $j$-th column of matrix $\mathbf{B}$.
Suppose we want to calculate $\mathbf{A}\mathbf{B}$. The matrices are conformable, as the number of columns of $\mathbf{A}$ (2) matches the number of rows of $\mathbf{B}$ (also 2).
Note that matrix multiplication is not commutative. The matrix product $\mathbf{B}\mathbf{A}$ is also conformable (the number of columns of $\mathbf{B}$ (3) matches the number of rows of $\mathbf{A}$ (also 3)), but the product $\mathbf{B}\mathbf{A}$ not only contains different numbers from the product $\mathbf{A}\mathbf{B}$, it even has a different dimension!
Even when matrices $\mathbf{A}$ and $\mathbf{B}$ are conformable so that $\mathbf{A} \cdot \mathbf{B}$ exists, $\mathbf{B} \cdot \mathbf{A}$ may not exist.
Even when both product matrices exist, they may not have the same dimensions.
Even when both product matrices are of the same dimension, they may not be equal.
If $\mathbf{A} \cdot \mathbf{B} = \mathbf{0}$, that does not imply either $\mathbf{A} = \mathbf{0}$ or $\mathbf{B} = \mathbf{0}$ (as it would in the case of multiplying scalars).
If $\mathbf{A} \cdot \mathbf{B} = \mathbf{A} \cdot \mathbf{C}$ and $\mathbf{A} \neq \mathbf{0}$, that does not imply $\mathbf{B} = \mathbf{C}$.
Matrix multiplication is associative: $(\mathbf{A} \cdot \mathbf{B}) \cdot \mathbf{C} = \mathbf{A} \cdot (\mathbf{B} \cdot \mathbf{C})$.
Matrix multiplication is distributive across sums:
$$\begin{aligned} \mathbf{A} \cdot (\mathbf{B} + \mathbf{C}) &= \mathbf{A} \cdot \mathbf{B} + \mathbf{A} \cdot \mathbf{C} \\ (\mathbf{B} + \mathbf{C}) \cdot \mathbf{A} &= \mathbf{B} \cdot \mathbf{A} + \mathbf{C} \cdot \mathbf{A}. \end{aligned}$$
Multiplication with the identity matrix yields the original matrix.
Multiplication with the null matrix yields a null matrix.
If $\mathbf{A}$ is a square matrix, we can multiply the matrix by itself and get $\mathbf{A}^2 = \mathbf{A} \cdot \mathbf{A}$. Similarly, we get $\mathbf{A}^n = \underbrace{\mathbf{A} \cdot \mathbf{A} \cdots \mathbf{A}}_{n \text{ times}}$.
A square matrix $\mathbf{A}$ is said to be idempotent if $\mathbf{A} \cdot \mathbf{A} = \mathbf{A}$.
From now on, we will refer to the product of two matrices $\mathbf{A}$ and $\mathbf{B}$ as $\mathbf{A}\mathbf{B}$ and drop the dot notation.
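The cautionary facts above can all be demonstrated with small concrete matrices; a NumPy sketch (the matrices are our own examples):

```python
import numpy as np

A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 0], [0, 1]])

AB = A @ B   # the zero matrix, even though neither A nor B is zero
BA = B @ A

C = np.array([[1, 2], [3, 4]])
# Associativity and distributivity hold:
assert ((A @ B) @ C == A @ (B @ C)).all()
assert (A @ (B + C) == A @ B + A @ C).all()

# But commutativity fails in general:
D = np.array([[0, 1], [1, 0]])
assert not (C @ D == D @ C).all()
```

The `@` operator is NumPy's matrix-multiplication operator; `*` would instead multiply element-wise.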
Linear dependence and rank
The rank of a matrix is the number of linearly independent rows (or columns) in the matrix. It doesn't matter whether you work with the rows or the columns.
The rank cannot be larger than the smaller of the number of rows and columns of a matrix. In other words, for an $m \times n$ matrix, i.e. a matrix with $m$ rows and $n$ columns, the maximum rank of the matrix is $m$ if $m \leq n$, or $n$ if $n < m$.
If the rank is equal to the smaller of the number of rows and columns, the matrix is said to be of full rank.
The rank of a matrix can be interpreted as the dimension of the linear space spanned by the columns (or rows) of the matrix.
Example 4
Find the rank of the matrix by looking at both rows and columns.
The second row is a copy of the first row, and the fourth row is a scalar ($-1$) multiple of the third row. The first and third rows, however, are linearly independent; hence the rank of the matrix must be 2.
We obtain the same answer by looking at the columns. The first two columns are linearly independent, but the first column is the sum of the second and the third column, so the full set of columns is linearly dependent. Thus, the rank of the matrix must be 2.
If we define the matrix $\mathbf{C} = \mathbf{A}\mathbf{B}$ as the product of the matrices $\mathbf{A}$ and $\mathbf{B}$, then
(a) the rank of $\mathbf{C}$ cannot be larger than the rank of $\mathbf{A}$ or the rank of $\mathbf{B}$;
(b) if $\mathbf{A}$ and $\mathbf{B}$ are of full rank, then the rank of $\mathbf{C}$ is the smaller of the rank of $\mathbf{A}$ and the rank of $\mathbf{B}$.
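The rank can be computed numerically with `np.linalg.matrix_rank`. A sketch using a hypothetical matrix that mirrors Example 4 (row 2 copies row 1, row 4 is $-1$ times row 3):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 2, 3],     # copy of row 1
              [4, 5, 6],
              [-4, -5, -6]]) # -1 times row 3

r = np.linalg.matrix_rank(A)   # only two linearly independent rows

# rank(AB) <= min(rank(A), rank(B))
B = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
r_prod = np.linalg.matrix_rank(A @ B)
```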
Determinant
Calculating determinants of 2x2 matrices
The determinant of a square $n \times n$ matrix $\mathbf{A}$ is a single number that measures the "volume" associated with the matrix.
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$$
Note that there are two ways to denote the determinant of a matrix $\mathbf{A}$: $\det(\mathbf{A})$ or $|\mathbf{A}|$.
Example 5
The determinant of the matrix $\mathbf{A} = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}$ is
$$\det\begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix} = \begin{vmatrix} 2 & 0 \\ 1 & 1 \end{vmatrix} = 2 \times 1 - 0 \times 1 = 2.$$
In general, one can show that the absolute value of the determinant of a matrix $\mathbf{A}$ is the volume of the parallelepiped spanned by its columns. In our example the (absolute value of the) determinant is the area of the parallelogram spanned by the vectors $\binom{2}{1}$ and $\binom{0}{1}$, i.e. the parallelogram with vertices at the points $\binom{0}{0}$, $\binom{2}{1}$, $\binom{2}{1} + \binom{0}{1} = \binom{2}{2}$ and $\binom{0}{1}$.
There is also a closed-form formula for the determinant of a 3 times 3 matrix. Beyond dimension three there are no simple formulae. However if the matrix bold italic upper A is diagonal, then the determinant is simply the product of the diagonal elements of the matrix.
Calculating determinants of 3x3 matrices
The determinant of a $3 \times 3$ matrix $\mathbf{A}$ is defined as
$$|\mathbf{A}| = \begin{vmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{vmatrix} = A_{11}(A_{22}A_{33} - A_{23}A_{32}) - A_{12}(A_{21}A_{33} - A_{23}A_{31}) + A_{13}(A_{21}A_{32} - A_{22}A_{31}).$$
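The $3 \times 3$ formula can be checked against NumPy's determinant routine; a sketch with an example matrix of our own:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])

# Expansion along the first row, term by term as in the formula above
det_manual = (A[0, 0] * (A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1])
              - A[0, 1] * (A[1, 0] * A[2, 2] - A[1, 2] * A[2, 0])
              + A[0, 2] * (A[1, 0] * A[2, 1] - A[1, 1] * A[2, 0]))

det_numpy = np.linalg.det(A)
```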
Supplement 1
Higher-order determinants
These operations can be represented more conveniently using the notion of minors.
For any square matrix $\mathbf{A}$, consider the sub-matrix $\mathbf{A}_{(ij)}$ formed by deleting the $i$-th row and $j$-th column of $\mathbf{A}$. The determinant of the sub-matrix $\mathbf{A}_{(ij)}$ is called the $(i,j)$-th minor of the matrix (or sometimes the minor of element $A_{ij}$). We denote this as $M_{ij}$.
For instance, the minors associated with the first row of a $3 \times 3$ matrix are
$$M_{11} = \begin{vmatrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{vmatrix}, \quad M_{12} = \begin{vmatrix} A_{21} & A_{23} \\ A_{31} & A_{33} \end{vmatrix}, \quad M_{13} = \begin{vmatrix} A_{21} & A_{22} \\ A_{31} & A_{32} \end{vmatrix}.$$
Recalling how we specified determinants of order 2 and 3, we see that
$$\det \mathbf{A} = A_{11} M_{11} - A_{12} M_{12} + A_{13} M_{13}.$$
Note the alternating positive and negative signs.
Supplement 2
Cofactor matrix
For any element $A_{ij}$ of a square matrix $\mathbf{A}$, the cofactor is given by $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the $(i,j)$-th minor of the matrix. Thus, if we want to calculate $C_{12}$ we have that
$$\begin{aligned} C_{12} &= (-1)^{1+2} M_{12} \\ &= (-1)^3 M_{12} \\ &= -M_{12}. \end{aligned}$$
The cofactor matrix $\mathbf{C}$ is obtained by replacing each element of the matrix $\mathbf{A}$ by its corresponding cofactor $C_{ij}$.
Let's assume that
$$\mathbf{A} = \begin{pmatrix} 3 & 2 \\ 4 & -1 \end{pmatrix}.$$
The cofactors are
$$\begin{aligned} C_{11} &= (-1)^{1+1} M_{11} = -1 \\ C_{12} &= (-1)^{1+2} M_{12} = -4 \\ C_{21} &= (-1)^{2+1} M_{21} = -2 \\ C_{22} &= (-1)^{2+2} M_{22} = 3 \end{aligned}$$
since in this case $M_{11} = -1$, $M_{12} = 4$, $M_{21} = 2$, $M_{22} = 3$.
Thus, we have that
$$\mathbf{C} = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} = \begin{pmatrix} -1 & -4 \\ -2 & 3 \end{pmatrix}.$$
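The worked cofactor calculation can be reproduced in code. For a $2 \times 2$ matrix the minors are single entries, so the cofactor matrix has a simple closed form (a sketch, following the example above):

```python
import numpy as np

A = np.array([[3.0, 2.0], [4.0, -1.0]])

# For a 2x2 matrix: minors are the opposite diagonal entries,
# and C_ij = (-1)^(i+j) * M_ij gives the sign pattern below.
C = np.array([[A[1, 1], -A[1, 0]],
              [-A[0, 1], A[0, 0]]])
```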
Properties of determinants
One can show that the determinant of a product of square matrices is the product of the determinants:
$$\det(\mathbf{A}\mathbf{B}) = \det(\mathbf{A})\det(\mathbf{B}).$$
This implies that we can exchange the order of multiplication inside the determinant, as long as we end up with conformable matrix multiplications; for example $\det(\mathbf{A}\mathbf{B}\mathbf{C}) = \det(\mathbf{A})\det(\mathbf{B})\det(\mathbf{C}) = \det(\mathbf{B})\det(\mathbf{A})\det(\mathbf{C}) = \det(\mathbf{B}\mathbf{A}\mathbf{C})$.
The determinant of $\mathbf{A}^\top$ is the same as the determinant of $\mathbf{A}$, i.e. $\det(\mathbf{A}) = \det(\mathbf{A}^\top)$.
The determinant of a matrix is non-zero if and only if the matrix is of full rank.
Trace of a matrix
The trace of a square matrix is the sum of its diagonal elements, i.e. the trace of the $n \times n$ matrix $\mathbf{A}$ is
$$\operatorname{trace}(\mathbf{A}) = \sum_{i=1}^{n} A_{ii}.$$
Example 6
The trace of the matrix $\begin{pmatrix} 4 & 3 \\ 3 & 1 \end{pmatrix}$ is $4 + 1 = 5$.
One can show that for conformable matrices $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$,
$$\operatorname{trace}(\mathbf{A}\mathbf{B}\mathbf{C}) = \operatorname{trace}(\mathbf{C}\mathbf{A}\mathbf{B}) = \operatorname{trace}(\mathbf{B}\mathbf{C}\mathbf{A}),$$
as long as we end up with conformable matrix multiplications. Note however that in general $\operatorname{trace}(\mathbf{A}\mathbf{B}\mathbf{C}) \neq \operatorname{trace}(\mathbf{A}\mathbf{C}\mathbf{B})$ and $\operatorname{trace}(\mathbf{A}\mathbf{B}\mathbf{C}) \neq \operatorname{trace}(\mathbf{B}\mathbf{A}\mathbf{C})$. In other words, when computing the trace of a product of matrices we can move the first matrix to the end and vice versa, but cannot swap the order of terms as freely as we can for the determinant.
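The cyclic property of the trace is easy to verify numerically on random square matrices; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

t = np.trace(A @ B @ C)
t_cyc1 = np.trace(C @ A @ B)   # cyclic permutation: same trace
t_cyc2 = np.trace(B @ C @ A)   # cyclic permutation: same trace
```

Swapping two adjacent factors (e.g. `np.trace(A @ C @ B)`) will in general give a different value.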
The inverse of a matrix
Definition
For a square matrix $\mathbf{A}$, there may exist a matrix $\mathbf{B}$ such that
$$\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} = \mathbf{I}.$$
An inverse, if it exists, is denoted $\mathbf{A}^{-1}$, so the above definition can be written as
$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}.$$
If an inverse does not exist for a matrix, the matrix is said to be singular. If an inverse exists, the matrix is said to be non-singular. One can show that square matrices are non-singular if and only if they are of full rank.
Properties of inverse matrices
The inverse of an inverse yields the original matrix: $(\mathbf{A}^{-1})^{-1} = \mathbf{A}$.
The inverse of a product is the product of the inverses with the order switched: $(\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$.
The inverse of a transpose is the transpose of the inverse: $(\mathbf{A}^\top)^{-1} = (\mathbf{A}^{-1})^\top$.
Calculating inverses of 2x2 matrices
Calculating inverses of matrices can be time-consuming, but a simple formula exists for $2 \times 2$ matrices. If $\mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then
$$\mathbf{A}^{-1} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
Note that the denominator in the multiplicative constant is just the determinant of $\mathbf{A}$.
Example 7
$$\begin{pmatrix} 3 & -5 \\ -4 & 7 \end{pmatrix}^{-1} = \underbrace{\frac{1}{3 \times 7 - (-5) \times (-4)}}_{= 1} \begin{pmatrix} 7 & 5 \\ 4 & 3 \end{pmatrix} = \begin{pmatrix} 7 & 5 \\ 4 & 3 \end{pmatrix}$$
Calculating inverses of diagonal matrices
If $\mathbf{A}$ is a diagonal matrix, then $\mathbf{A}^{-1}$ is also diagonal, with diagonal elements $\frac{1}{A_{ii}}$.
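Both the $2 \times 2$ formula and the diagonal rule can be verified against `np.linalg.inv`; a sketch using the matrix from Example 7:

```python
import numpy as np

A = np.array([[3.0, -5.0], [-4.0, 7.0]])

# 2x2 formula: (1 / (ad - bc)) * [[d, -b], [-c, a]]
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
A_inv = (1.0 / det) * np.array([[A[1, 1], -A[0, 1]],
                                [-A[1, 0], A[0, 0]]])

# Diagonal matrices invert element-wise on the diagonal
D = np.diag([2.0, 4.0, 5.0])
D_inv = np.diag(1.0 / np.array([2.0, 4.0, 5.0]))
```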
Supplement 3
Inverses of matrices of arbitrary dimension
For any square matrix $\mathbf{A}$, the adjoint of $\mathbf{A}$ is given by the transpose of the cofactor matrix. Denoting the associated cofactor matrix as $\mathbf{C}$, we have $\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\top$.
For any square matrix $\mathbf{A}$, the inverse $\mathbf{A}^{-1}$ is given by
$$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \operatorname{adj}(\mathbf{A}),$$
which is defined as long as $\det(\mathbf{A}) \neq 0$.
A matrix $\mathbf{A}$ which satisfies the condition $\mathbf{A}^{-1} = \mathbf{A}^\top$ is said to be orthogonal.
In an orthogonal matrix the columns (or rows) are orthogonal (i.e. their inner product is 0) and the columns (or rows) have unit length (i.e. the squares of their entries sum to 1).
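A rotation matrix is a standard example of an orthogonal matrix; a NumPy sketch checking the three properties above (the angle is an arbitrary choice of ours):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])

ok_inverse = np.allclose(Q.T @ Q, np.eye(2))            # Q^T equals Q^{-1}
col_norms = np.linalg.norm(Q, axis=0)                   # columns have unit length
col_ip = Q[:, 0] @ Q[:, 1]                              # columns are orthogonal
```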
Quadratic forms
Quadratic forms play an important role in determining key properties of matrices.
Consider a quadratic function on $\mathbb{R}^2$:
$$f(x_1, x_2) = a_{11} x_1^2 + (a_{12} + a_{21}) x_1 x_2 + a_{22} x_2^2.$$