and $x_p(t)$ satisfies the nonhomogeneous system
\[
\dot{x} = A x + b(t) , \qquad x(0) = 0 .
\]
The two solution components $x_h$ and $x_p$ can be written by means of the matrix exponential, introduced in the following.
For the scalar exponential $e^t$ we can write a Taylor series expansion
\[
e^t = 1 + \frac{t}{1!} + \frac{t^2}{2!} + \cdots = \sum_{j=0}^{\infty} \frac{t^j}{j!} .
\]
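As a quick numerical aside (not part of the notes), the truncated series can be summed term by term, with each term obtained from the previous one by a single multiplication; the function name and term count below are our own choices.

```python
import math

def exp_taylor(t, n_terms=30):
    """Approximate e^t by the truncated Taylor series: sum of t^j / j! for j < n_terms."""
    total = 0.0
    term = 1.0  # t^0 / 0! = 1
    for j in range(n_terms):
        total += term
        term *= t / (j + 1)  # turn t^j/j! into t^{j+1}/(j+1)!
    return total

print(exp_taylor(1.0))  # agrees with math.e to machine precision
```

For moderate $|t|$, thirty terms already reproduce `math.exp` to machine precision.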
Usually¹, in calculus classes, the exponential is introduced by other means, and the Taylor series expansion above is
proven as a property.
For matrices, the exponential of a matrix $Z \in \mathbb{R}^{n \times n}$ is instead defined by the infinite series expansion
\[
e^Z = I + \frac{Z}{1!} + \frac{Z^2}{2!} + \cdots = \sum_{j=0}^{\infty} \frac{Z^j}{j!} .
\]
¹ Not always. In some treatments, the exponential is defined through its Taylor series.
Here $I$ is the $n \times n$ identity matrix, and the general term $Z^j / j!$ is simply the matrix $Z$ raised to the $j$th power divided by the scalar $j!$. It turns out that, as in the scalar case, this infinite sum converges (to an $n \times n$ matrix which we write as $e^Z$) for every matrix $Z$. Substituting $Z = At$ gives
\[
e^{At} = I + \frac{At}{1!} + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots = \sum_{j=0}^{\infty} \frac{A^j t^j}{j!} \qquad (6.5)
\]
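The series (6.5) can be summed term by term just as in the scalar case, each term being the previous one multiplied by $At/j$. A minimal sketch (function name and test matrix are our own; this is an illustration, not a recommended algorithm, as discussed later in this section):

```python
import numpy as np

def expm_taylor(A, t, n_terms=30):
    """Approximate e^{At} by summing the series (6.5) up to n_terms terms."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)  # (At)^0 / 0! = I
    for j in range(1, n_terms):
        term = term @ (A * t) / j   # (At)^j / j! from (At)^{j-1} / (j-1)!
        result = result + term
    return result

# For the nilpotent matrix A = [[0, 1], [0, 0]] the series terminates:
# A^2 = 0, so e^{At} = I + At = [[1, t], [0, 1]] exactly.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(expm_taylor(A, 3.0))
```

The nilpotent example is convenient because the exact answer is available for comparison without any other expm routine.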
Differentiating both sides of (6.5) gives
\[
\frac{d\,e^{At}}{dt} = A + \frac{A^2 t}{1!} + \frac{A^3 t^2}{2!} + \cdots
= A \left( I + \frac{At}{1!} + \frac{A^2 t^2}{2!} + \cdots \right) ,
\]
that is,
\[
\frac{d\,e^{At}}{dt} = A e^{At} .
\]
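This identity is easy to check numerically with a central difference. The sketch below (our own example) uses $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$, for which $e^{At}$ is the standard rotation matrix by angle $t$, so no expm routine is needed:

```python
import numpy as np

# Central-difference check of d/dt e^{At} = A e^{At}, using the known
# closed form e^{At} = [[cos t, -sin t], [sin t, cos t]] for this A.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def expAt(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

t, h = 0.7, 1e-5
lhs = (expAt(t + h) - expAt(t - h)) / (2 * h)  # numerical derivative, O(h^2) error
rhs = A @ expAt(t)
print(np.max(np.abs(lhs - rhs)))  # tiny: the two sides agree
```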
Thus, for any vector $w$, the function $x_h(t) = e^{At} w$ satisfies the homogeneous differential system
\[
\dot{x}_h = A x_h .
\]
By using the initial values (6.2) we obtain $w = x_0$, and
\[
x_h(t) = e^{At} x(0) \qquad (6.6)
\]
is a solution to the differential system (6.1) with $b(t) = 0$ and initial values (6.2). It can be shown that this solution is unique.
From the elementary theory of differential equations, we also know that a particular solution to the nonhomogeneous ($b(t) \neq 0$) equation (6.1) is given by
\[
x_p(t) = \int_0^t e^{A(t-s)} b(s) \, ds .
\]
This is easily verified, since by differentiating this expression for $x_p$ we obtain
\[
\dot{x}_p = A e^{At} \int_0^t e^{-As} b(s) \, ds + e^{At} e^{-At} b(t) = A x_p + b(t) ,
\]
so $x_p$ satisfies equation (6.1).
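The same verification can be done numerically in a scalar instance (all names and numbers below are our own illustration): for $\dot{x} = a x + 1$ with $a = -2$, the integral gives $x_p(t) = \int_0^t e^{a(t-s)}\,ds$, which we evaluate by the trapezoid rule and then differentiate by central differences.

```python
import numpy as np

a = -2.0  # scalar "A"; forcing term b(t) = 1

def xp(t, n=20001):
    """Evaluate x_p(t) = integral of e^{a(t-s)} * 1 over [0, t] by the trapezoid rule."""
    s = np.linspace(0.0, t, n)
    vals = np.exp(a * (t - s))
    h_s = s[1] - s[0]
    return h_s * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])

t, h = 1.5, 1e-4
deriv = (xp(t + h) - xp(t - h)) / (2 * h)  # numerical d x_p / dt
print(deriv, a * xp(t) + 1.0)              # the two sides of the ODE agree
```

For this constant forcing the exact answer is $x_p(t) = (e^{at} - 1)/a$, which the quadrature reproduces closely.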
In summary, we have the following result.
The solution to
\[
\dot{x} = A x + b(t) \qquad (6.7)
\]
with initial value
\[
x(0) = x_0 \qquad (6.8)
\]
is
\[
x(t) = x_h(t) + x_p(t) \qquad (6.9)
\]
where
\[
x_h(t) = e^{At} x(0) \qquad (6.10)
\]
and
\[
x_p(t) = \int_0^t e^{A(t-s)} b(s) \, ds . \qquad (6.11)
\]
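The recipe (6.9)-(6.11) can be exercised end to end on a small example. In the sketch below (matrix, data, and function names are our own choices), $A$ is diagonal so that $e^{At}$ is available in closed form, $b$ is constant so that (6.11) reduces to $A^{-1}(e^{At} - I)\,b$, and the result is cross-checked by direct Runge-Kutta integration of $\dot{x} = Ax + b$:

```python
import numpy as np

A = np.diag([-1.0, -3.0])          # diagonal, so e^{At} = diag(e^{a_ii t})
b = np.array([1.0, 2.0])           # constant forcing term
x0 = np.array([2.0, -1.0])         # initial value (6.8)

def expAt(t):
    return np.diag(np.exp(np.diag(A) * t))

def x_formula(t):
    xh = expAt(t) @ x0                                    # (6.10)
    xp = np.linalg.solve(A, (expAt(t) - np.eye(2)) @ b)   # (6.11) for constant b
    return xh + xp                                        # (6.9)

def x_rk4(t, steps=2000):
    """Classical 4th-order Runge-Kutta integration of xdot = A x + b."""
    f = lambda x: A @ x + b
    x, h = x0.copy(), t / steps
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

print(x_formula(1.0), x_rk4(1.0))  # the two should agree closely
```

The agreement between the closed-form solution and the numerical integrator is the point of the check; neither is taken on faith.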
Since we now have a formula for the general solution to a linear differential system, we seem to have all we need. However, we do not know how to compute the matrix exponential. The naive solution of using the definition (6.5) requires too many terms for a good approximation. As we have done for the SVD and the Schur decomposition, we will only point out that several methods exist for computing a matrix exponential, but we will not discuss how this is done². In a fundamental paper on the subject, Nineteen dubious ways to compute the exponential of a matrix (SIAM Review, vol. 20, no. 4, pp. 801-836), Cleve Moler and Charles Van Loan discuss a large number of different methods, pointing out that none of them is appropriate for all situations. A full discussion of this matter is beyond the scope of these notes.
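The trouble with the naive series is already visible in the scalar case: for a negative argument of large magnitude, the alternating terms grow huge before they shrink, and floating-point cancellation destroys the answer. A small demonstration (our own example, in the spirit of Moler and Van Loan's discussion):

```python
import math

def exp_series(t, n_terms=120):
    """Sum the Taylor series of e^t term by term: the 'naive' approach."""
    total, term = 0.0, 1.0
    for j in range(n_terms):
        total += term
        term *= t / (j + 1)
    return total

exact = math.exp(-20.0)
naive = exp_series(-20.0)         # cancellation among terms as large as ~4e7
better = 1.0 / exp_series(20.0)   # all-positive series, then take reciprocal
print(abs(naive - exact) / exact, abs(better - exact) / exact)
```

The naive sum for $e^{-20}$ is wrong by a large relative error even though the series converges mathematically, while summing the all-positive series for $e^{20}$ and inverting is accurate; matrix analogues of this phenomenon (large $\|At\|$) are among the pitfalls the cited paper catalogs.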
When the matrix $A$ is constant, as we currently assume, we can be much more specific about the structure of the solution (6.9) of system (6.7), and particularly so about the solution $x_h(t)$ to the homogeneous part. Specifically,
the matrix exponential (6.10) can be written as a linear combination, with constant vector coefficients, of scalar
exponentials multiplied by polynomials. In the general theory of linear differential systems, this is shown via the
Jordan canonical form. However, in the paper cited above, Moler and Van Loan point out that the Jordan form cannot
be computed reliably, and small perturbations in the data can change the results dramatically. Fortunately, a similar
result can be found through the Schur decomposition introduced in chapter 5. The next section shows how to do this.
6.3 Structure of the Solution
For the homogeneous case $b(t) = 0$, consider the first order system of linear differential equations
\[
\dot{x} = A x \qquad (6.12)
\]
\[
x(0) = x_0 . \qquad (6.13)
\]
Two cases arise: either $A$ admits $n$ distinct eigenvalues, or it does not. In chapter 5, we have seen that if (but not only if) $A$ has $n$ distinct eigenvalues then it has $n$ linearly independent eigenvectors (theorem 5.1.1), and we have shown how to find $x_h(t)$ by solving an eigenvalue problem. In section 6.3.1, we briefly review this solution. Then, in section 6.3.2, we show how to compute the homogeneous solution $x_h(t)$ in the extreme case of an $n \times n$ matrix $A$ with $n$ coincident eigenvalues.
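For the nondefective case, the eigenvalue-based solution amounts to expanding $x_0$ in the eigenvector basis and letting each component evolve by its own scalar exponential: if $Q$ holds the eigenvectors and $\lambda_i$ the eigenvalues, then $x_h(t) = Q \operatorname{diag}(e^{\lambda_i t}) Q^{-1} x_0$. A sketch of this computation (the test matrix is our own; its eigenvalues are $-1$ and $-2$):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])

lam, Q = np.linalg.eig(A)   # columns of Q are eigenvectors

def xh(t):
    c = np.linalg.solve(Q, x0)          # expansion coefficients of x0 in eigenbasis
    return (Q * np.exp(lam * t)) @ c    # sum_i c_i e^{lam_i t} q_i

# For this 2x2 system the exact solution is
# x1(t) = 2 e^{-t} - e^{-2t},  x2(t) = -2 e^{-t} + 2 e^{-2t}.
print(xh(1.0))
```

Note that `Q * np.exp(lam * t)` scales column $i$ of $Q$ by $e^{\lambda_i t}$, which is the same as multiplying by the diagonal matrix of exponentials.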
To be sure, we have seen that matrices with coincident eigenvalues can still have a full set of linearly independent eigenvectors (see for instance the identity matrix). However, the solution procedure we introduce in section 6.3.2 for the case of $n$ coincident eigenvalues can be applied regardless of how many linearly independent eigenvectors exist. If the matrix has a full complement of eigenvectors, the solution obtained in section 6.3.2 is the same as would be obtained with the method of section 6.3.1.
Once these two extreme cases (nondefective matrix or all-coincident eigenvalues) have been handled, we show a