Matrices

A matrix A is a rectangular array of numbers, coefficients or variables. An (n × m)-dimensional matrix has n rows and m columns. A vector x is a particular type of matrix with only one column, i.e. if x is an (n × 1) matrix, then x is an n-dimensional vector. Thus:

    [ a11  a12  ...  a1m ]
A = [ a21  a22  ...  a2m ]
    [ ...  ...  ...  ... ]
    [ an1  an2  ...  anm ]

and:

    [ x1 ]
x = [ x2 ]
    [ ...]
    [ xn ]

are examples of a matrix (A) and a vector (x). We can denote a matrix A by its typical element, thus A = [aij], where i = 1, ..., n; j = 1, ..., m. We now turn to some definitions and properties of matrices and operations on matrices.

Square matrix: If A has the same number of rows as columns, we refer to it as a square matrix.

Matrix addition: A + B - elements are added correspondingly, so A + B = [aij + bij]. This requires that A and B have the same dimensions. It is clear that the following properties hold:

(i) A + B = B + A
(ii) (A + B) + C = A + (B + C).

Scalar multiplication: αA - every element is multiplied by the scalar α, thus αA = [αaij].
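
As an illustration (a minimal Python/NumPy sketch; the matrices and the scalar are arbitrary), addition and scalar multiplication behave exactly as the rules above state:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
alpha = 3.0

# Addition is element by element, and is commutative and associative.
print(A + B)                      # [[ 6.  8.] [10. 12.]]
print(np.allclose(A + B, B + A))  # True

# Scalar multiplication scales every element.
print(alpha * A)                  # [[ 3.  6.] [ 9. 12.]]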

Matrix multiplication: AB - uses a procedure whereby a typical element of AB, call it cij, is obtained by summing the products of the elements of the ith row of A (call it ai) and the jth column of B (call it bj). Thus, cij is the inner product of ai and bj, i.e. cij = ⟨ai, bj⟩ = Σk aikbkj, where k runs over the columns of A (equivalently, the rows of B). For conformability to multiplication, the number of columns in A must be equal to the number of rows in B. The resulting matrix AB has the same number of rows as A and the same number of columns as B. In particular, note that if x is an (m × 1) vector and A is an (n × m) matrix, then Ax is an (n × 1) vector. Note that, in general, AB ≠ BA even if both products are defined. Assuming conformability, the following properties hold:

(i) A(x + y) = Ax + Ay
(ii) A(αx) = αAx.
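
A quick Python/NumPy sketch (arbitrary values) showing conformability, the shape of the matrix-vector product Ax, and the fact that AB and BA generally differ:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # 2 x 3
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 4.0]])        # 3 x 2
x = np.array([1.0, 2.0, 3.0])     # 3-vector

print((A @ B).shape)   # (2, 2): as many rows as A, as many columns as B
print(A @ x)           # a 2-vector, i.e. Ax is (n x 1)

# AB and BA are both defined here, but they are not equal
# (they do not even have the same dimensions: 2 x 2 versus 3 x 3).
print((B @ A).shape)   # (3, 3)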

Transpose: if we interchange the rows and columns of A, the resulting matrix is called the transpose of A and we denote it A'. Properties:

(i) (AB)' = B'A'
(ii) (A + B)' = A' + B'.
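
These transpose rules are easy to verify numerically (a minimal Python/NumPy sketch with arbitrary matrices):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])

# (AB)' = B'A'  and  (A + B)' = A' + B'
print(np.allclose((A @ B).T, B.T @ A.T))   # True
print(np.allclose((A + B).T, A.T + B.T))   # True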

Symmetric: a matrix A is symmetric if A = A'.

Skew-symmetric: a matrix A is skew-symmetric if A = -A'.

Idempotent: A is idempotent if A² = A.
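
For example (an illustrative sketch; the particular X is arbitrary), the projection matrix P = X(X'X)⁻¹X' built from any X of full column rank is both symmetric and idempotent:

import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])              # arbitrary 3 x 2 matrix of full column rank

P = X @ np.linalg.inv(X.T @ X) @ X.T    # projection onto the column space of X

print(np.allclose(P, P.T))     # True: P is symmetric
print(np.allclose(P @ P, P))   # True: P is idempotent (P^2 = P)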

Null: O is a null matrix if all its elements are zero, i.e. aij = 0 for all i, j. Its main property is that AO = O for any A conformable for multiplication.

Diagonal matrix: A is a diagonal matrix if it is all zeroes except for the principal diagonal, i.e. A = [aij] where aij = 0 for all i ≠ j.

Identity matrix: I is an identity matrix if it is a diagonal matrix where the principal diagonal is composed entirely of ones, i.e. I = [δij] where δii = 1 for all i and δij = 0 for all i ≠ j. Its main property is that AI = IA = A for any A, where I has dimension conformable for multiplication with A.
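
In NumPy (a small illustrative sketch) diagonal and identity matrices can be built with np.diag and np.eye, and the identity acts as the multiplicative unit:

import numpy as np

D = np.diag([2.0, 5.0, 7.0])       # diagonal matrix with 2, 5, 7 on the principal diagonal
I = np.eye(3)                      # 3 x 3 identity matrix
A = np.arange(9.0).reshape(3, 3)   # an arbitrary 3 x 3 matrix

print(D)                                                  # zeros everywhere off the diagonal
print(np.allclose(A @ I, A) and np.allclose(I @ A, A))    # True: AI = IA = A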

Linear independence: a matrix A is linearly independent if λ1a1 + λ2a2 + ... + λmam = 0 implies λi = 0 for all i, where ai is the ith column (row) of A. A is linearly dependent if it is not linearly independent. Any matrix which contains a null vector 0 as one of its rows or columns will be linearly dependent.

Rank: the rank of a matrix A, often denoted r(A), is the number of linearly independent rows or columns of that matrix. It is easily shown that the row rank of a matrix is equal to the column rank of a matrix. Properties:

(i) r(AB) ≤ min[r(A), r(B)]
(ii) r(A) = r(A')
(iii) r(A + B) ≤ r(A) + r(B).
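
A brief numerical check (illustrative sketch; np.linalg.matrix_rank computes the rank) of linear independence and the rank properties above:

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first: rows are linearly dependent
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.matrix_rank(A))                                    # 1, i.e. not full rank
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))      # True: row rank = column rank
print(np.linalg.matrix_rank(A @ B) <= min(np.linalg.matrix_rank(A),
                                          np.linalg.matrix_rank(B)))  # True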

Inverse: if A is a square matrix (i.e. an n × n matrix), then A⁻¹ is an "inverse matrix" of A if A⁻¹A = I or AA⁻¹ = I. The inverse matrix is unique for any A. A matrix A is "singular" if it has no inverse; it is "non-singular" if it has an inverse. A is invertible if and only if r(A) = n, thus A must be a linearly independent matrix. To find an inverse, it can be shown that A⁻¹ = adjA/|A|, where adjA is the adjoint matrix, defined as adjA = [|Cij|]', the transpose of the matrix of cofactors of A (see below). Properties:

(i) (A⁻¹)⁻¹ = A
(ii) (A')⁻¹ = (A⁻¹)'
(iii) (AB)⁻¹ = B⁻¹A⁻¹
(iv) |A⁻¹| = 1/|A|.
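
The adjoint formula and the inverse properties can be checked directly for a small 2 × 2 case (a hedged sketch; the adjoint is written out by hand here rather than computed by a library routine):

import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# Adjoint (transpose of the cofactor matrix) for the 2 x 2 case.
adjA = np.array([[ A[1, 1], -A[0, 1]],
                 [-A[1, 0],  A[0, 0]]])
A_inv = adjA / np.linalg.det(A)          # A^-1 = adjA / |A|

print(np.allclose(A_inv, np.linalg.inv(A)))                     # True
print(np.allclose(A_inv @ A, np.eye(2)))                        # True: A^-1 A = I
print(np.isclose(np.linalg.det(A_inv), 1 / np.linalg.det(A)))   # True: |A^-1| = 1/|A|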

Orthogonality: A is orthogonal if A'A = I. Property: A⁻¹ = A'.
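
For example (an illustrative sketch), a rotation matrix is orthogonal, so its inverse is just its transpose:

import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a 2 x 2 rotation matrix

print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q'Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))   # True: Q^-1 = Q'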

Minor: The ij minor of an n × n matrix A, denoted |Mij|, is the determinant of the (n-1) × (n-1) matrix obtained by deleting the ith row and the jth column of A.

Cofactor: The ij cofactor of A is denoted |Cij| and is the ij minor of A if i+j is even and the negative of the ij minor of A if i+j is odd, i.e. |Cij| = (-1)^(i+j)|Mij|.

Determinant: if A is a square matrix, then a "determinant" |A| is a scalar associated with that matrix which can be obtained by a Laplace cofactor-expansion process on the elements of the matrix. Expanding by the ith row: |A| = Σj aij|Cij|, summing over j = 1, ..., n; equivalently, expanding by the jth column: |A| = Σi aij|Cij|, summing over i = 1, ..., n. Some properties follow:

(i) in a 2 × 2 case, |A| is merely the product of the principal diagonal minus the product of the off-diagonal;

(ii) if A is singular, then |A| = 0; if A is non-singular, then |A| ≠ 0 (thus |A| ≠ 0 if all columns and rows of A are linearly independent of each other);

(iii) |A'| = |A|

(iv) |AB| = |A||B|

(v) if A is a diagonal matrix, then |A| = a11a22...ann, the product of the elements on the principal diagonal;

(vi) multiplication of any one column (or row) by a scalar α will change the value of the determinant α-fold;

(vii) addition of a multiple of any column (or row) to any other column (or row) will leave the determinant unchanged.

(viii) |A| = 0 if any of its columns (or rows) is linearly dependent on any other of its columns (or rows);

(ix) |A⁻¹| = |A|⁻¹.
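
To make the cofactor expansion concrete, here is a short recursive sketch (the helper names minor_matrix and det_laplace are my own, not standard; np.linalg.det is used only as a cross-check):

import numpy as np

def minor_matrix(A, i, j):
    """Delete row i and column j of A (the submatrix whose determinant is the ij minor)."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det_laplace(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    # |A| = sum over j of a_0j * C_0j, where C_0j = (-1)^(0+j) * |M_0j|
    return sum(((-1) ** j) * A[0, j] * det_laplace(minor_matrix(A, 0, j))
               for j in range(n))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 1.0, 5.0]])
print(det_laplace(A))                                  # 17.0
print(np.isclose(det_laplace(A), np.linalg.det(A)))    # True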

Trace: the trace of a matrix A (denoted trA) is the sum of the elements on the principal diagonal, i.e. trA = a11 + a22 + ... + ann.

Principal Leading Minor: the kth order principal leading minor of an n × n matrix A, denoted |Mk|, is the determinant of the submatrix formed by the first k rows and k columns of A.
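
A short illustrative sketch of the leading principal minors of a 3 × 3 matrix, using NumPy slicing for the top-left k × k blocks:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 1.0, 5.0]])

# |M_k| is the determinant of the top-left k x k block of A.
for k in range(1, 4):
    print(k, np.linalg.det(A[:k, :k]))
# 1 -> 2.0, 2 -> 5.0, 3 -> 17.0 (up to floating-point rounding)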

Permutation: P is a permutation matrix if each row and each column of P contains exactly one element equal to 1, with all remaining elements equal to 0.

Decomposability: A is decomposable if its rows and columns can be renumbered such that A is transformed to:

[ A1  A12 ]
[ 0   A2  ]

where A1 and A2 are square submatrices. A is decomposable if and only if there is a permutation matrix P such that P⁻¹AP yields the transformed matrix noted above.
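
As a small illustration (a sketch; the matrices are arbitrary), a permutation matrix is simply the identity with its columns reordered, it is orthogonal, and P⁻¹AP renumbers the rows and columns of A simultaneously:

import numpy as np

P = np.eye(3)[:, [2, 0, 1]]        # permutation matrix: columns of the identity reordered
A = np.arange(9.0).reshape(3, 3)   # an arbitrary 3 x 3 matrix

print(np.allclose(np.linalg.inv(P), P.T))   # True: a permutation matrix is orthogonal
print(np.linalg.inv(P) @ A @ P)             # A with its rows and columns renumbered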

Diagonalization: A is "diagonalizable" if there exists a matrix Z such that Z⁻¹AZ is a diagonal matrix.
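
A minimal sketch, assuming A has distinct eigenvalues so that the matrix of eigenvectors returned by np.linalg.eig is invertible:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Z = np.linalg.eig(A)    # columns of Z are eigenvectors of A
D = np.linalg.inv(Z) @ A @ Z         # Z^-1 A Z

print(np.round(D, 10))               # diagonal matrix with the eigenvalues (5 and 2) on the diagonal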

P-Matrix: (Gale and Nikaido, 1965) A is a P-matrix if all principal minors of A are positive.

N-Matrix: (Inada, 1966) A is an N-matrix if all the principal minors are negative.

N-P Matrix: (Nikaido, 1968) A is an N-P matrix if it has all the principal minors of odd orders negative and all those of even orders positive (and a P-N matrix if this is reversed).
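
One way to check these definitions numerically is to enumerate every principal minor, i.e. the determinant of the submatrix obtained by keeping the same subset of rows and columns (principal_minors and is_P_matrix below are names of my own choosing):

import numpy as np
from itertools import combinations

def principal_minors(A):
    """Yield (order, value) for every principal minor of the square matrix A."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            yield k, np.linalg.det(A[np.ix_(idx, idx)])

def is_P_matrix(A):
    return all(value > 0 for _, value in principal_minors(A))

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
print(is_P_matrix(A))   # True: the principal minors are 2, 2 and |A| = 3, all positive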
