Deep Dive into Matrix Determinants

Travis Cooper
6 min read · Jun 26, 2022


If your linear algebra class was anything like mine, you learned how to describe matrices by computing determinants. However, I found that most courses stop short of fully defining exactly what a matrix determinant is. In this discussion, I hope to provide a deep dive into how to compute determinants, but also describe what you are computing.

Definition

A determinant is a scalar value associated with a square matrix. Let’s look at the general case to see how we compute the determinant of an n x n matrix A:
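
$$\det(A) = \sum_{j=1}^{n} (-1)^{i+j}\, a_{ij} \det(A_{ij})$$

Here i is any fixed row, a_ij is the entry of A in row i and column j, and A_ij is the (n-1) x (n-1) sub-matrix obtained by deleting row i and column j from A.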

We refer to this expression as the Laplace expansion. There is quite a bit to unpack here, but it’s rather simple once you get past the symbolic representation. It basically says to divide the matrix A into smaller and smaller sub-matrices until you can compute the determinant with relative ease. We also add a “weight” to each sub-matrix: the entry of A in row i and column j, together with an alternating sign. We will look at this generalization in the 3x3 case. But first let’s discuss the simplest computation: the 2x2 case.

The 2x2 case simply involves finding the difference between the product along the main diagonal and the product along the counterdiagonal. The sub-matrices in the Laplace expansion can be reduced all the way down to the 2x2 case, although you can also stop the reduction at 3x3 sub-matrices if you can compute those directly.
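
In symbols, for a general 2x2 matrix:

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$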

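Expanding a 3x3 matrix along its first row gives

$$\det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg)$$
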
Notice that the 3x3 case is a direct application of the Laplace expansion. We hold the top row fixed (but it could be any row or column), its entries act as the multipliers, and we reduce the primary matrix down into 2x2 sub-matrices. Since we can easily compute the determinant of a 2x2 sub-matrix, the computation becomes much simpler. It is also important to notice the alternating plus and minus signs. These come from the (-1)^(i+j) term in the Laplace expansion: the sign alternates as we move along the row creating sub-matrices.

Manual computation of determinants can quickly become infeasible. Imagine a 100x100 matrix. We would need to apply the Laplace expansion recursively many times, leading to a factorial explosion of terms. Even for a smaller matrix, such as a 5x5 matrix, it quickly becomes tedious. However, numerical methods in software can compute a determinant quickly, so manual computation is not necessary in most cases.
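
To see why the terms explode, here is a minimal sketch of the recursion in Python (laplace_det is just an illustrative name, not a library routine); each call on an n x n matrix spawns n calls on (n-1) x (n-1) sub-matrices, so the work grows factorially with the size:

import numpy as np

def laplace_det(A):
    # Determinant via recursive Laplace expansion along the first row.
    # Illustrative only: the number of terms grows factorially with the size of A.
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 0 and column j
        total += (-1) ** j * A[0, j] * laplace_det(minor)       # alternating signs across row 0
    return total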

Geometric Interpretation

The algebraic interpretation of a determinant is great at providing a recipe for computing it. It does not, however, provide an intuitive understanding of determinants. It makes it seem like the determinant is merely a product of numbers.

The geometric interpretation does a much better job of explaining determinants. Let’s explore the 2x2 case.

Consider the parallelogram formed by a 2x2 matrix A. It is constructed by recognizing that A defines a linear map sending the standard basis vectors to the rows of A (or, equivalently for this purpose, to its columns). Put another way, we can plot the rows of A as vectors from the origin and take the parallelogram they span.

This makes the determinant the area of the parallelogram! The area, then, represents the scale factor by which A transforms other areas. Now I know what you are thinking:

But determinants can be negative. How can there be negative area?

Great question! This leads us to our next topic of discussion: orientation. When we apply the sign to the area, we have an oriented area. The only difference between an oriented area and the area is the sign: a negative oriented area implies that the turn from the first vector to the second is clockwise rather than counterclockwise. However, the magnitude of the area is the same in both cases. It is analogous to a vector having a magnitude and a direction. A vector does not have a negative magnitude; it simply has a direction associated with the magnitude.

Okay, that describes the 2x2 case. What about the 3x3 case? Or the NxN case? The answer is that it is the same concept! The only thing that changes is the shape. In the 3x3 case, it forms a parallelepiped, and in the NxN case, it forms a parallelotope. In other words, the 3x3 case calculates a volume, the 4x4 case calculates a hypervolume, etc. It becomes a lot more abstract above three dimensions, but the fundamentals remain the same: we are calculating the scale at which A transforms other objects.

Properties

Let’s review some properties of determinants. Many of these properties can be derived from the algebraic formulas above. I will not derive them directly here, but it is a great exercise to verify each of them in the 2x2 case; a quick numerical check follows the list.

  1. The determinant changes sign when two rows are exchanged.
  2. If two rows are equal, the determinant is zero.
  3. Subtracting a multiple of one row from another row leaves the same determinant.
  4. If a matrix is singular, the determinant is zero.
  5. The determinant of AB is det(A) * det(B).
  6. The transpose of A has the same determinant as A.
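
These are not proofs, but a quick numerical sanity check in Python (using random 2x2 matrices purely for illustration) might look like this:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 2))
B = rng.random((2, 2))

# Property 1: exchanging the two rows flips the sign of the determinant
print(np.isclose(np.linalg.det(A[::-1]), -np.linalg.det(A)))
# Property 5: det(AB) = det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
# Property 6: det(A^T) = det(A)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))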

Applications

There are several applications of determinants, but there are three that warrant a discussion.

1. Singular Matrices

One of the easier ways to determine whether a matrix is non-singular (i.e. invertible) is to calculate its determinant: a matrix is singular, and therefore not invertible, if and only if its determinant is zero. It’s often extremely beneficial to know whether a matrix is invertible when working in linear algebra.
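
A quick check in Python might look like the following. The matrix here is deliberately chosen so that its second row is twice its first, which makes it singular; with floating point arithmetic you compare the determinant to zero with a tolerance rather than exactly.

import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])  # second row = 2 x first row, so A is singular
print(np.isclose(np.linalg.det(A), 0.0))  # True, so A is not invertible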

2. Cramer’s Rule

One can argue that a primary goal of linear algebra is to solve
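
$$Ax = b$$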

Cramer’s rule is one method of computing x. We define the jth component of x as
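
$$x_j = \frac{\det(B_j)}{\det(A)}$$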

where B_j is identical to A except that its jth column is replaced by the vector b. While it is an interesting approach to solving the problem, Cramer’s Rule is highly inefficient from a computational complexity perspective. There are much more efficient algorithms for computing x, such as Gaussian elimination, that can be utilized.
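
As a rough sketch (cramer_solve is just an illustrative name; in practice you would use a routine like np.linalg.solve), Cramer’s rule could be coded as:

import numpy as np

def cramer_solve(A, b):
    # Cramer's rule: x_j = det(B_j) / det(A), where B_j is A with column j replaced by b
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        B_j = A.copy()
        B_j[:, j] = b  # replace the jth column of A with b
        x[j] = np.linalg.det(B_j) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))      # Cramer's rule
print(np.linalg.solve(A, b))   # standard solver, should agree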

3. Characteristic Equation

The characteristic equation in linear algebra is
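
$$\det(A - \lambda I) = 0$$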

which is used to determine the eigenvalues and eigenvectors of a matrix A. While eigenvalues and eigenvectors are a bit outside the scope of this discussion, they play a huge role in many areas of mathematics beyond linear algebra. In relation to determinants, notice that we directly compute the determinant of the expression above: we look for the values of λ that make A - λI a singular matrix, which is exactly the condition for λ to be an eigenvalue.
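
For example, one could verify numerically that each eigenvalue returned by NumPy really does make A - λI singular (the matrix here is just an arbitrary example):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
for lam in np.linalg.eigvals(A):
    # each eigenvalue makes A - lam*I singular, so its determinant is (numerically) zero
    print(np.linalg.det(A - lam * np.eye(2)))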

Determinants in Python

As discussed earlier, manual computation is tedious and quickly becomes error prone and inefficient. Programming languages such as Python and MATLAB are capable of computing determinants efficiently and accurately. Let’s look at an example.

We could use the Laplace expansion to divide the 4x4 matrix below into 3x3 sub-matrices and divide those into 2x2 sub-matrices, but notice how many terms we would need to track. We would also need to ensure the signs of each term remained consistent with the expansion. However, there is a much easier way if you just need to find the determinant: Python!

import numpy as np
import scipy.linalg as la

# 4x4 example matrix
A = np.array([[2, 4, 2, 5], [1, 5, 2, 6], [8, 5, 3, 2], [0, 1, 3, 6]])
det = la.det(A)  # SciPy computes this via an LU factorization, not a Laplace expansion
print(det)

Quick and simple. We were able to compute the determinant of a matrix efficiently and accurately. A 4x4 matrix is relatively small, but can you imagine if we had a 10x10 matrix? A 100x100 matrix? A 1000x1000 matrix? Software becomes the only practical option in the majority of applications.

Conclusion

Determinants are hard to explain. It is much easier to learn how to compute them and use their properties to describe a matrix than it is to gain an intuitive understanding of what you are computing. Hopefully, this discussion helps give you that intuitive understanding.
