Consider the system of ordinary differential equations **x**′(t) = **Ax**(t) + **b**. Let **x**(t) = Φ(t)**c** + **x**_{p} where **x**_{p} is a
particular solution to the system and Φ(t)**c** is the
general solution of the homogeneous case **x**′ = **Ax**.
We define a matrix **A** as "*stable*" iff **x**(t) is stable, or:

Stable Matrix: The matrix **A** is *stable* if all the solutions of **x**′(t) = **Ax**(t) converge toward zero as t converges toward infinity; that is, Φ(t)**c** → 0 as t → ∞.

We now prove a quite standard theorem:

Theorem: (Negative Eigenvalues) The matrix **A** is stable if and only if all its eigenvalues have negative real parts.

Proof: Suppose **A** is stable and let λ_{i} = u_{i} + iw_{i} be an eigenvalue (complex or real) of **A**. Let **v**_{i} be the associated eigenvector, thus **Av**_{i} = λ_{i}**v**_{i}. Now, we know that (assuming **A** has n linearly independent eigenvectors) Φ(t) = [**v**_{1}e^{λ1t}, **v**_{2}e^{λ2t}, .., **v**_{n}e^{λnt}], thus, defining φ_{i}(t) = **v**_{i}e^{λit}, then:

Φ(t) = [φ_{1}(t), φ_{2}(t), .., φ_{n}(t)]

Now, as φ_{i}(t) = **v**_{i}e^{λit}, taking the first derivative gives φ_{i}′(t) = λ_{i}**v**_{i}e^{λit} = **Av**_{i}e^{λit} = **A**φ_{i}(t) by definition. Thus, φ_{i}(t) is a solution to the homogeneous system **x**′ = **Ax**. Now, by definition, **A** is stable iff Φ(t)**c** → 0 as t → ∞, thus it must be that φ_{i}(t) → 0 for all i = 1, .., n for **A** to be stable. We now show that this implies that the real parts of the λ_{i} must be negative for all i, i.e. u_{i} < 0 for all i. Consider the kth coordinate of **v**_{i} and thus of φ_{i}(t), i.e. φ_{ik}(t) = v_{ik}e^{λit}. Since lim_{t→∞} φ_{i}(t) = 0, it must be that lim_{t→∞} φ_{ik}(t) = 0, or lim_{t→∞} |φ_{ik}(t)| = 0, or lim_{t→∞} |v_{ik}e^{λit}| = 0. Dropping the subscript i for cleaner notation, so that λ = u + iw is a representative eigenvalue, then lim_{t→∞} |v_{k}e^{λt}| = lim_{t→∞} |v_{k}||e^{λt}| = lim_{t→∞} |v_{k}||e^{(u + iw)t}| = lim_{t→∞} |v_{k}||e^{ut}||e^{iwt}| = 0. As |e^{iwt}| has a finite upper bound (indeed |e^{iwt}| = 1), i.e. |e^{iwt}| ≤ M for some M < ∞, and e^{ut} > 0, then necessarily u < 0, i.e. Re(λ) < 0. Since this is true for any λ (i.e. for all λ_{i}, i = 1, .., n), it follows that if **A** is stable, then Re(λ_{i}) = u_{i} < 0 for all i = 1, .., n, i.e. the real parts of all eigenvalues must be negative.

Conversely, suppose that the real parts of all eigenvalues λ_{i} = u_{i} + iw_{i} are negative, so Re(λ_{i}) = u_{i} < 0 for all i = 1, .., n. Let **x**(t) be a solution to **x**′ = **Ax** and let x_{k}(t) be the kth coordinate of **x**(t). Then x_{k}(t) = ∑_{j=1}^{n} α_{kj}e^{λjt} where the α_{kj} are coefficients and λ_{j} = u_{j} + iw_{j}. Thus, |x_{k}(t)| = |∑_{j=1}^{n} α_{kj}e^{λjt}| = |∑_{j=1}^{n} α_{kj}e^{(uj + iwj)t}| = |∑_{j=1}^{n} α_{kj}e^{ujt}e^{iwjt}|. This implies, by the triangle inequality, that |x_{k}(t)| ≤ ∑_{j=1}^{n} |α_{kj}e^{ujt}e^{iwjt}| = ∑_{j=1}^{n} |α_{kj}||e^{ujt}||e^{iwjt}|. Now, again, by upper boundedness |e^{iwjt}| ≤ M and e^{ujt} > 0, then |x_{k}(t)| ≤ ∑_{j=1}^{n} |α_{kj}|e^{ujt}M. Thus, lim_{t→∞} |x_{k}(t)| ≤ lim_{t→∞} ∑_{j=1}^{n} |α_{kj}|e^{ujt}M. Since u_{j} < 0 for all j = 1, 2, .., n, then lim_{t→∞} ∑_{j=1}^{n} |α_{kj}|e^{ujt}M = 0, thus it follows that lim_{t→∞} |x_{k}(t)| ≤ 0, i.e. lim_{t→∞} |x_{k}(t)| = 0 and hence lim_{t→∞} x_{k}(t) = 0. Since this is true for every k = 1, 2, .., n, the solution **x**(t) converges toward zero as t converges toward infinity. Thus, the matrix **A** is stable.∎
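As a numerical sketch of the theorem (assuming NumPy is available; the helper name `is_stable` is our own, not from the text), stability can be tested by inspecting the real parts of the eigenvalues directly:

```python
import numpy as np

def is_stable(A):
    """A matrix is stable iff every eigenvalue has a negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# A has eigenvalues -1 and -3: stable.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
print(is_stable(A))   # True

# B has an eigenvalue with a positive real part: unstable.
B = np.array([[0.5, 0.0],
              [0.0, -1.0]])
print(is_stable(B))   # False
```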

Under what conditions do all eigenvalues have negative real parts? The Routh-Hurwitz conditions on the n-degree characteristic polynomial are necessary and sufficient for all eigenvalues to have negative real parts. Murata's (1977: p. 92) "modified Routh-Hurwitz" conditions can be simpler to use.

Nonetheless, we can still look for properties of **A** that will yield a stable **A**.
The following are straightforward:

Theorem: (Symmetric Case) The symmetric matrix **A** is stable iff (-1)^{k}|**A**_{k}| > 0 for all k = 1, 2, .., n, where |**A**_{k}| is the kth order leading principal minor of **A** (the determinant of the submatrix of **A** obtained by deleting the last n − k rows and columns).

Proof: From the theory of quadratic forms on real symmetric matrices, we know that **A** is negative definite if and only if all its eigenvalues are negative. We also know that a symmetric matrix **A** is negative definite if and only if (-1)^{k}|**A**_{k}| > 0 for all k = 1, 2, .., n.∎
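A quick numerical illustration of the symmetric case (a sketch assuming NumPy; `alternating_minors` is a hypothetical helper name of our own):

```python
import numpy as np

def alternating_minors(A):
    """Check (-1)^k |A_k| > 0 for k = 1..n, where A_k is the k-th leading principal submatrix."""
    n = A.shape[0]
    return all((-1) ** k * np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])                  # symmetric, eigenvalues -1 and -3
print(alternating_minors(A))                 # True
print(bool(np.all(np.linalg.eigvalsh(A) < 0)))  # True: negative definite, hence stable
```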

Theorem: (Necessity) If an n × n matrix **A** with real elements is stable, then its trace is negative and its determinant has the same sign as (-1)^{n}; that is, **A** stable implies tr**A** = ∑_{i=1}^{n} a_{ii} < 0 and (-1)^{n}|**A**| > 0.

Proof: This is elementary. Recall that tr**A** = ∑_{i=1}^{n} λ_{i} and |**A**| = ∏_{i=1}^{n} λ_{i}. Thus, if Re(λ_{i}) < 0 for all i, then tr**A**, which equals the sum of the real parts (the imaginary parts cancel in conjugate pairs), is necessarily negative. Similarly, in the product ∏_{i=1}^{n} λ_{i}, each conjugate pair of complex eigenvalues contributes u_{i}^{2} + w_{i}^{2} > 0 while each real eigenvalue is negative, so the sign of |**A**| is (-1)^{r}, where r is the number of real eigenvalues. Since n − r is even, (-1)^{n}|**A**| = (-1)^{r}|**A**| > 0.∎
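A sketch (assuming NumPy; the sample matrix is an illustrative choice of our own) verifying the necessary conditions on a stable matrix:

```python
import numpy as np

A = np.array([[-1.0, 2.0, 0.0],
              [0.0, -3.0, 1.0],
              [1.0, 0.0, -2.0]])
eig = np.linalg.eigvals(A)
assert np.all(eig.real < 0)                  # A is stable (checked numerically)
print(np.trace(A) < 0)                       # True: the trace is negative
print((-1) ** 3 * np.linalg.det(A) > 0)      # True: (-1)^n |A| > 0 for n = 3
```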

The above theorem's conditions are actually sufficient as well if **A** is a 2 ×
2 matrix. However, for higher dimensions, this is no longer the case. As a result, we must
look elsewhere for sufficient conditions. The following are a few.
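To see that the trace and determinant conditions stop being sufficient beyond the 2 × 2 case, here is a sketch of a hypothetical 3 × 3 counterexample (assuming NumPy; the matrix is our own construction, not from the text):

```python
import numpy as np

# Trace and determinant have the "right" signs, yet two eigenvalues are
# positive, so A is not stable.
A = np.diag([1.0, 1.0, -3.0])
print(np.trace(A) < 0)                            # True: trace = -1 < 0
print((-1) ** 3 * np.linalg.det(A) > 0)           # True: (-1)^3 |A| = 3 > 0
print(bool(np.all(np.linalg.eigvals(A).real < 0)))  # False: A is unstable
```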

Theorem: (Lyapunov) A real n × n matrix **A** is a stable matrix if and only if there exists a positive definite matrix **H** such that **A**′**H** + **HA** is negative definite.

Proof: Given elsewhere via Lyapunov's method.∎
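Although the proof is omitted, the Lyapunov condition can be checked numerically. A sketch (assuming NumPy; the helper `lyapunov_H` and its Kronecker-product construction are our own): solve **A**′**H** + **HA** = −**Q** for a chosen positive definite **Q** and verify that **H** is positive definite when **A** is stable.

```python
import numpy as np

def lyapunov_H(A, Q=None):
    """Solve A'H + HA = -Q for H via the vectorized (Kronecker product) linear system."""
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    # vec(A'H) + vec(HA) = (kron(I, A') + kron(A', I)) vec(H)
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, -Q.flatten()).reshape(n, n)

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])                       # eigenvalues -1 and -3: stable
H = lyapunov_H(A)
print(bool(np.all(np.linalg.eigvalsh(H) > 0)))    # True: H is positive definite
print(np.allclose(A.T @ H + H @ A, -np.eye(2)))   # True: A'H + HA = -I, negative definite
```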

Theorem: (Quasi-Negative Case) A real n × n matrix **A** is stable if **A** is quasi-negative definite.

Proof: This follows from Lyapunov's theorem. Namely, if **A**
is quasi-negative definite, then **A** + **A**′ is
negative definite. Thus, **A**′**I** + **IA** is
negative definite. Obviously, letting **H** = **I**, we can see this is merely
a special case of the previous theorem.∎

Theorem: (Symmetric Negative Case) A real n × n matrix **A** is stable if **A** is symmetric and negative definite.

Proof: If **A** is symmetric and negative definite, then **A** + **A**′ = 2**A** is negative definite, so **A** is in particular quasi-negative definite and Lyapunov's theorem applies with **H** = **I**.∎

A more interesting set of sufficiency conditions has been derived in the course of the
analysis of the local stability of a Walrasian *tatonnement
process* in general equilibrium. In short, we have the following:

D-Stability: A matrix **A** is "*D-stable*" if **BA** is stable for *any* positive diagonal matrix **B**.

It is obvious that if **A** is D-stable, then **A** is stable (just set **B**
= **I**). Sufficient conditions for D-stability are the following:

Theorem: (Arrow-McManus) A matrix **A** is D-stable if there exists a positive diagonal matrix **C** such that **A**′**C** + **CA** is negative definite.

Proof: (Arrow and McManus, 1958). We must prove that **BA** is stable for any positive diagonal **B**. From the earlier theorem, it suffices to find a positive definite **H** such that (**BA**)′**H** + **H**(**BA**) is negative definite. Let **H** = **CB**^{-1}. As **B** is a positive diagonal matrix, (**B**^{-1})′ = **B**^{-1} and **B** = **B**′. If **C** is a positive diagonal matrix, then **H** = **CB**^{-1} is also a positive diagonal (hence positive definite) matrix. Substituting in for **H**, and using the fact that diagonal matrices commute, the condition becomes (**A**′**B**′)**CB**^{-1} + **CB**^{-1}(**BA**) = **A**′**C** + **CA**. Since we assumed that **A**′**C** + **CA** is negative definite, it must be that (**BA**)′**H** + **H**(**BA**) is negative definite, thus **BA** is stable and thus **A** is D-stable.∎
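The Arrow-McManus condition lends itself to a numerical sketch (assuming NumPy; the matrices **A** and **C** below are illustrative choices of our own):

```python
import numpy as np

def negative_definite(M):
    """Negative definiteness via the eigenvalues of the symmetric part of M."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))

A = np.array([[-2.0, 1.0],
              [-1.0, -1.0]])
C = np.diag([1.0, 2.0])               # a positive diagonal C
assert negative_definite(A.T @ C + C @ A)

# Then BA should be stable for any positive diagonal B; check a random sample.
rng = np.random.default_rng(0)
for _ in range(100):
    B = np.diag(rng.uniform(0.1, 10.0, size=2))
    assert np.all(np.linalg.eigvals(B @ A).real < 0)
print("BA stable for every sampled positive diagonal B")
```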

It is easy to notice that if we let **C** = **I**, then we obtain the condition
that **A**′ + **A** is negative definite. We know
that if (i) **A** is quasi-negative definite or if (ii) **A**
is symmetric and negative definite, then **A**′ + **A** is negative definite. Thus (i) and (ii) are also
sufficient conditions for D-stability, not merely stability.

A more interesting set of results is derived from the property of "diagonal dominance" introduced into economics by Lionel McKenzie (1960).

Diagonal Dominance: An n × n matrix **A** with real elements is *dominant diagonal* (dd) if there are n real numbers d_{j} > 0, j = 1, 2, .., n such that

d_{j}|a_{jj}| > ∑_{i≠j} d_{i}|a_{ij}|

for j = 1, 2, .., n.
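The definition translates directly into a check. A sketch (assuming NumPy; the helper name `column_dd` is our own):

```python
import numpy as np

def column_dd(A, d):
    """Column diagonal dominance: d_j|a_jj| > sum over i != j of d_i|a_ij|, every column j."""
    d = np.asarray(d, dtype=float)
    W = d[:, None] * np.abs(A)        # entry (i, j) is d_i |a_ij|
    diag = np.diag(W)                 # d_j |a_jj|
    off = W.sum(axis=0) - diag        # sum over i != j of d_i |a_ij|
    return bool(np.all(diag > off))

A = np.array([[-4.0, 1.0],
              [2.0, -5.0]])
print(column_dd(A, [1.0, 1.0]))       # True
```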

This is also known as "column" diagonal dominance. "Row" diagonal
dominance is defined analogously as the existence of d_{i} > 0 such that
d_{i}|a_{ii}| > ∑_{j≠i} d_{j}|a_{ij}| for all i = 1, 2, .., n. It can be shown that a
matrix which is column dd will also be row dd. This follows from the Hawkins-Simon condition:
namely, if **B** is the matrix with ii-th entry |a_{ii}| and ij-th entry −|a_{ij}| for i ≠ j,
then the existence of **y** > 0 such that **yB** > 0
implies the existence of **x** > 0 such that **Bx** > 0.

Hadamard: If an n × n matrix **A** is dominant diagonal with d_{j} = 1 for all j = 1, 2, .., n, i.e.

|a_{jj}| > ∑_{i≠j} |a_{ij}|

for j = 1, 2, .., n, then **A** is a "*Hadamard*" matrix.

Alternatively, if we consider a diagonal matrix **D** with diagonal elements d_{j} as
defined in the dd case, and **A** is dd, then obviously **DA** is a Hadamard matrix.

Theorem: (McKenzie) If **A** is dominant diagonal, then |**A**| ≠ 0.

Proof: (McKenzie, 1960) Suppose not: suppose
|**A**| = 0. Then |**DA**| = 0 as well, so there is an **x** ≠ **0** such that
**x**′**DA** = **0**′, thus d_{j}a_{jj}x_{j} + ∑_{i≠j} d_{i}a_{ij}x_{i} = 0
for all j = 1, 2, .., n. In absolute value terms, then, using the triangle inequality,
this implies for all j:

d_{j}|a_{jj}||x_{j}| = |∑_{i≠j} d_{i}a_{ij}x_{i}| ≤ ∑_{i≠j} d_{i}|a_{ij}||x_{i}|

Let J be an index set such that if j ∈ J, then |x_{j}|
≥ |x_{i}| for all i = 1, .., n; note that |x_{j}| > 0 for j ∈ J since **x** ≠ **0**. Then, for j ∈ J:

d_{j}|a_{jj}||x_{j}| ≤ ∑_{i≠j} d_{i}|a_{ij}||x_{i}| ≤ ∑_{i≠j} d_{i}|a_{ij}||x_{j}|

Dividing through by |x_{j}| > 0, this implies:

d_{j}|a_{jj}| ≤ ∑_{i≠j} d_{i}|a_{ij}|

for all j ∈ J, which contradicts diagonal dominance.∎
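A sketch check of McKenzie's non-singularity result on a sample dd matrix (assuming NumPy; the matrix is an illustrative choice of our own):

```python
import numpy as np

A = np.array([[5.0, 1.0, 2.0],
              [1.0, -4.0, 1.0],
              [1.0, 2.0, 6.0]])       # column dd with d = (1, 1, 1)
W = np.abs(A)
assert np.all(np.diag(W) > W.sum(axis=0) - np.diag(W))  # confirm dominant diagonal
print(abs(np.linalg.det(A)) > 1e-12)  # True: |A| != 0
```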

The following theorem, due to McKenzie (1960), establishes that if **A** has a
negative dominant diagonal, then **A** is stable:

Theorem: (Sufficiency) If an n × n matrix **A** is dominant diagonal and the diagonal is composed of negative elements (a_{ii} < 0 for all i = 1, .., n), then the real parts of all its eigenvalues are negative, i.e. **A** is stable.

Proof: (McKenzie, 1960) Suppose that **A**
has an eigenvalue λ = u + iv with a non-negative real part,
i.e. u ≥ 0. As a_{jj} < 0, then for each j:

|λ − a_{jj}| = √((u − a_{jj})^{2} + v^{2}) ≥ u − a_{jj} ≥ |a_{jj}|

Since **A** has a dominant diagonal, there exist d_{j} > 0 (j = 1, .., n)
such that d_{j}|a_{jj}| > ∑_{i≠j} d_{i}|a_{ij}|, thus:

d_{j}|λ − a_{jj}| ≥ d_{j}|a_{jj}| > ∑_{i≠j} d_{i}|a_{ij}|

But notice that this implies that the matrix (λ**I** − **A**)
has a dominant diagonal as well, since its off-diagonal entries are −a_{ij}, with the same absolute values. By our previous theorem, we know that if a matrix is dd,
then it is non-singular, i.e. it must be that |λ**I** − **A**|
≠ 0. But then that contradicts the postulate that λ is an eigenvalue of **A**. Thus, there is no eigenvalue λ with a non-negative real part, i.e. the real parts of all
eigenvalues of **A** are negative (u < 0 for each of them), thus **A** is
stable.∎
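A sketch of the sufficiency result (assuming NumPy; the matrix is an illustrative choice of our own): a matrix with a negative dominant diagonal has eigenvalues with negative real parts.

```python
import numpy as np

A = np.array([[-5.0, 1.0, 2.0],
              [1.0, -4.0, 1.0],
              [1.0, 2.0, -6.0]])      # column dd with d = (1, 1, 1), negative diagonal
eig = np.linalg.eigvals(A)
print(bool(np.all(eig.real < 0)))     # True: A is stable
```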

Finally, note the following corollary:

Corollary: If **A** has a negative dominant diagonal, then it is D-stable.

Proof: We have to prove that **BA** is stable, where **B** is any positive diagonal
matrix (thus b_{ii} > 0 for all i). If **A** has a negative dominant diagonal, there are d_{j} > 0 (j = 1, .., n) such that d_{j}|a_{jj}| > ∑_{i≠j} d_{i}|a_{ij}|. Noting that the ij-th entry of **BA** is b_{ii}a_{ij}, and dividing
and multiplying each term by the relevant b_{jj} > 0 or b_{ii} > 0, we obtain:

(d_{j}/b_{jj})|b_{jj}a_{jj}| > ∑_{i≠j} (d_{i}/b_{ii})|b_{ii}a_{ij}| for all j = 1, 2, .., n.

Thus, the weights d_{j}/b_{jj} > 0 satisfy diagonal dominance for the matrix **BA**.
Finally, as a_{jj} < 0 and b_{jj} > 0 for all j, then b_{jj}a_{jj}
< 0, thus **BA** has a negative diagonal. Thus, **BA** has a negative dominant
diagonal and, by our previous theorem, is stable. Thus, **A** is D-stable.∎
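The corollary can also be illustrated numerically (a sketch assuming NumPy; the matrix is the same illustrative choice as above):

```python
import numpy as np

# A has a negative dominant diagonal (with d_j = 1), so by the corollary it is D-stable.
A = np.array([[-5.0, 1.0, 2.0],
              [1.0, -4.0, 1.0],
              [1.0, 2.0, -6.0]])
rng = np.random.default_rng(1)
for _ in range(100):
    B = np.diag(rng.uniform(0.1, 10.0, size=3))
    assert np.all(np.linalg.eigvals(B @ A).real < 0)  # BA stable
print("BA stable for every sampled positive diagonal B")
```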
