Linear Systems of Equations



A linear equation system is a collection of coefficients (a_ij), variables (x_i) and constants (d_i) of the following kind:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= d_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= d_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= d_n
\end{aligned}
$$

This can be rewritten in matrix form:

Ax = d

where:

$$
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
$$

is the matrix of coefficients and:

$$
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
$$

is the vector of variables or unknowns, while:

$$
d = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}
$$

is the vector of constants.
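
For concreteness, here is a small 2x2 instance of this notation (the numbers are an illustrative example of ours, not taken from the original text):

$$
\begin{aligned}
2x_1 + x_2 &= 5 \\
x_1 + 3x_2 &= 10
\end{aligned}
\qquad\Longleftrightarrow\qquad
\begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} 5 \\ 10 \end{pmatrix}
$$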

Defining the "augmented" matrix [A, d] as the n × (n+1) matrix formed by appending d to A as a last column, it can be shown that there exists a solution x to the system Ax = d if and only if r(A) = r([A, d]). The system has no solution if r(A) < r([A, d]). This is intuitive, as d must be a linear combination of the columns of A if a solution x is to exist.

If r(A) = r([A, d]) = n, then the solution is unique (thus if A is non-singular, then Ax = d has a unique solution x). If r(A) = r([A, d]) < n, then the solution is not unique; in fact, there will be an infinite number of solutions. Consider the following:

Theorem: If Ax = d has more than one solution, then it has an infinite number of solutions.

Proof: Suppose x* and x** are two distinct solutions to the system Ax = d. Then, for any scalar a, we know that aAx* = ad and (1-a)Ax** = (1-a)d, so A[ax* + (1-a)x**] = ad + (1-a)d = d; thus the constructed vector ax* + (1-a)x** is also a solution. As this is true for every scalar a, and different values of a yield different vectors when x* and x** differ, there are an infinite number of such solutions.

Thus a system Ax = d has either no solution, a unique solution, or an infinite number of solutions. If A is a square matrix and non-singular (has no linearly dependent rows/columns, i.e. |A| ≠ 0), then we can solve for the unique solution vector x*.
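
As a quick numerical check of these rank conditions, here is a minimal sketch in Python with NumPy (the function classify_system and the example matrices are our own illustration, not part of the original text):

```python
import numpy as np

def classify_system(A, d):
    """Classify Ax = d by comparing r(A) with r([A, d]).
    A sketch only: ranks are computed numerically via np.linalg.matrix_rank."""
    n = A.shape[1]                        # number of unknowns
    aug = np.column_stack([A, d])         # the augmented matrix [A, d]
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    if r_A < r_aug:
        return "no solution"
    if r_A == n:
        return "unique solution"
    return "infinitely many solutions"

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # second row is twice the first, so r(A) = 1
print(classify_system(A, np.array([3.0, 6.0])))   # infinitely many solutions
print(classify_system(A, np.array([3.0, 7.0])))   # no solution
```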

(1) Inverting

The straightforward procedure is to invert the matrix:

$$
Ax = d \quad\Longrightarrow\quad x^* = A^{-1}d
$$

where A^{-1} is the inverse of A and is expressed as A^{-1} = (adj A)/|A|, where adj A is the adjoint matrix of A (defined as the transpose of the matrix whose elements are the cofactors of the corresponding elements of A), while |A| is the determinant of A.
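
A minimal sketch of this procedure in Python with NumPy, using the illustrative 2x2 system from above (the numbers are our own, not from the original text):

```python
import numpy as np

# Illustrative system:
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
d = np.array([5.0, 10.0])

A_inv = np.linalg.inv(A)     # valid only because |A| = 5 != 0
x_star = A_inv @ d           # x* = A^{-1} d
print(x_star)                # [1. 3.]

# In numerical practice, np.linalg.solve(A, d) is preferred to explicit inversion.
print(np.linalg.solve(A, d))
```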

(2) Cramer's Rule

A more practical procedure is to consider the following:

$$
x_i^* = \frac{|A_i|}{|A|}
$$

where |A_i| is the determinant of the matrix A with the ith column replaced by the vector of constants d. So, for x_1:

$$
A_1 = \begin{pmatrix}
d_1 & a_{12} & \cdots & a_{1n} \\
d_2 & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
d_n & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
$$

We can do this for all x_i, i = 1, ..., n, and thus obtain the solution vector x*.
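
Cramer's rule is straightforward to implement directly. The sketch below (again Python with NumPy; cramer_solve is our own illustrative helper, not from the original text) replaces one column at a time and takes the ratio of determinants:

```python
import numpy as np

def cramer_solve(A, d):
    """Solve Ax = d by Cramer's rule: x_i* = |A_i| / |A|, where A_i is A
    with its i-th column replaced by d.  Assumes A is square and |A| != 0."""
    det_A = np.linalg.det(A)
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        A_i = A.copy()
        A_i[:, i] = d                     # replace the i-th column by d
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
d = np.array([5.0, 10.0])
print(cramer_solve(A, d))                 # [1. 3.] -- same answer as by inversion
```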

(3) Homogeneous Systems

If it turns out that d = 0 for our earlier system of equations, i.e. Ax = 0, then we have a homogeneous system. This causes a problem in solving for x. In fact, there are only two alternatives: either we have the trivial solution x = 0 or else x is indeterminate.

We can see the triviality immediately from Cramer's rule. As x_i = |A_i|/|A| and the system is homogeneous, the zero vector occupies the ith column of A_i. Thus, expanding the determinant by that column, |A_i| = 0, so x_i = 0/|A| = 0, provided |A| ≠ 0. This is true for all i = 1, ..., n, so every x_i = 0. The only alternative to the trivial case arises if it also happens that |A| = 0, for then x_i = 0/0, which is indeterminate, i.e. there are an infinite number of solutions for x_i.

However, we can still say something about x in the homogeneous case. Namely, if we have indeterminacy (i.e. if |A| = 0), we can still sometimes obtain the proportions x_i/x_j for every i, j = 1, ..., n, even though we cannot obtain x_i and x_j directly. But for this we need to analyze eigenvalues.
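
As a sketch of what can still be recovered when |A| = 0, the following Python/NumPy fragment (our own illustration; it uses the singular value decomposition as a stand-in for the eigenvalue analysis mentioned above) extracts a non-trivial solution of Ax = 0, which pins down the ratios x_i/x_j up to a common scale factor:

```python
import numpy as np

# Illustrative singular matrix (our own example): the rows are proportional,
# so |A| = 0 and Ax = 0 has non-trivial solutions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# The right-singular vector belonging to the zero singular value spans the
# null space of A; any scalar multiple of it solves Ax = 0.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]
print(np.allclose(A @ x, 0))   # True
print(x[0] / x[1])             # the ratio x1/x2 is pinned down: -2.0
```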



