Matrix Equations
In all the equations below, x, y, z, X, Y
and Z are the unknown vectors or matrices.
The discrete-time Lyapunov equation is AXA^{H} - X + Q = 0 where Q is Hermitian. This is a special case of the Stein equation.
 There is a unique solution X iff (eig(A)eig(A)^{H} - 1) has no zero elements, i.e. iff no eigenvalue of A is the reciprocal of an eigenvalue of A^{H}. If this condition is satisfied, the unique X is Hermitian.
 If A is convergent then X is unique and Hermitian and X = SUM(A^{k}QB^{k}, k=0..infinity) where B = A^{H}.
 If A is convergent and Q is positive definite (or semidefinite) then X is unique, Hermitian and positive definite (or semidefinite).
The equivalent equation for continuous-time systems is the Lyapunov equation.
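As an illustration (not part of the original manual), the statements above can be checked numerically. SciPy's `solve_discrete_lyapunov` uses the same sign convention, AXA^{H} - X + Q = 0; the matrices below are arbitrary test data, with A scaled to be convergent.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))   # make A convergent (spectral radius 0.5)
M = rng.standard_normal((n, n))
Q = M @ M.T                                       # Hermitian positive semidefinite

X = solve_discrete_lyapunov(A, Q)                 # solves A X A^H - X + Q = 0
residual = A @ X @ A.conj().T - X + Q
hermitian = np.allclose(X, X.conj().T)

# The truncated series SUM(A^k Q (A^H)^k, k=0..K) converges to the same X.
S, Ak = np.zeros_like(Q), np.eye(n)
for _ in range(200):
    S += Ak @ Q @ Ak.conj().T
    Ak = Ak @ A
series_ok = np.allclose(S, X)
print(np.allclose(residual, 0), hermitian, series_ok)
```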
The discrete Riccati equation is the quadratic equation [A, X: n#n; B: n#m; C: m#n; R, Q: Hermitian] X = A^{H}XA - (C+B^{H}XA)^{H}(R+B^{H}XB)^{-1}(C+B^{H}XA) + Q
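A numerical sketch (not from the original page) using SciPy's `solve_discrete_are`, which solves exactly this equation in the special case C = 0; A, B, Q and R below are arbitrary test matrices chosen so that a stabilizing solution exists.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)
n, m = 4, 2
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q = np.eye(n)          # Hermitian
R = np.eye(m)          # Hermitian positive definite

X = solve_discrete_are(A, B, Q, R)   # the C = 0 case of the equation above
F = B.conj().T @ X @ A               # this is C + B^H X A with C = 0
residual = (A.conj().T @ X @ A
            - F.conj().T @ np.linalg.solve(R + B.conj().T @ X @ B, F)
            + Q - X)
print(np.allclose(residual, 0, atol=1e-6))
```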
Suppose H_{[n#n]} = UDU^{H} is Hermitian, U is unitary and D = diag(d) = diag(eig(H)) contains the eigenvalues in decreasing order. Then the corresponding quadratic form is the real-valued expression x^{H}Hx.
 Courant-Fischer Theorem: max_{W} min_{x} (x^{H}Hx | x^{H}x=1 and W_{[n#k]}^{H}x=0) = max_{W} min_{x} (x^{H}Hx(x^{H}x)^{-1} | W_{[n#k]}^{H}x=0) = d_{n-k} and this bound is attained by W=U_{:,n-k+1:n} and x=u_{n-k} [4.7].
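A quick numerical check of the attained bound (illustrative code, not from the manual; the Hermitian matrix is random test data):

```python
import numpy as np

rng = np.random.default_rng(11)
n, k = 6, 2
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M + M.conj().T                # Hermitian
d, U = np.linalg.eigh(H)          # eigh returns eigenvalues in increasing order
d, U = d[::-1], U[:, ::-1]        # reorder to decreasing, matching the text

# Optimal W: the k eigenvectors for the smallest eigenvalues, so x is
# constrained to span(u_1, ..., u_{n-k}).
W = U[:, n - k:]
x = U[:, n - k - 1]               # u_{n-k}, the claimed attainer
rq = (x.conj() @ H @ x).real / (x.conj() @ x).real
print(np.isclose(rq, d[n - k - 1]))   # the Rayleigh quotient attains d_{n-k}
```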
 Rayleigh-Ritz Theorem: max_{x} (x^{H}Hx | x^{H}x=1) = max_{x} (x^{H}Hx(x^{H}x)^{-1} | x!=0) = d_{1} and min_{x} (x^{H}Hx | x^{H}x=1) = min_{x} (x^{H}Hx(x^{H}x)^{-1} | x!=0) = d_{n} and these bounds are attained by x=u_{1} and x=u_{n} respectively [4.8].
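The bounds can be verified numerically; a sketch (not from the manual) with a random Hermitian test matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M + M.conj().T                       # Hermitian
d, U = np.linalg.eigh(H)                 # eigenvalues in increasing order
d, U = d[::-1], U[:, ::-1]               # reorder to decreasing, matching the text

# Rayleigh quotient; at the extreme eigenvectors it attains d_1 and d_n.
rq = lambda x: (x.conj() @ H @ x).real / (x.conj() @ x).real
print(np.isclose(rq(U[:, 0]), d[0]))     # max is d_1, attained at u_1
print(np.isclose(rq(U[:, -1]), d[-1]))   # min is d_n, attained at u_n

# For any other nonzero x the quotient lies between d_n and d_1.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(d[-1] - 1e-9 <= rq(x) <= d[0] + 1e-9)
```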
We can generalize the RayleighRitz theorem to multiple dimensions in either
of two ways which surprisingly turn out to be equivalent. If W is +ve
definite Hermitian and B is Hermitian, then
 max_{X} (tr((X^{H}WX)^{-1}X^{H}BX) | rank(X_{[n#k]})=k) = sum(d_{1:k}) [4.11]
 max_{X} (det((X^{H}WX)^{-1}X^{H}BX) | rank(X_{[n#k]})=k) = prod(d_{1:k}) [4.12]
where d are the eigenvalues of W^{-1}B sorted into decreasing order and these bounds are attained by taking the columns of X to be the corresponding eigenvectors.
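Both bounds can be checked with SciPy's generalized Hermitian eigensolver, whose eigenvalues are those of W^{-1}B; a sketch with random test matrices (not from the manual):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n, k = 5, 2
Mw = rng.standard_normal((n, n))
W = Mw @ Mw.T + n * np.eye(n)     # positive definite Hermitian
Mb = rng.standard_normal((n, n))
B = Mb + Mb.T                     # Hermitian

d, V = eigh(B, W)                 # generalized problem B v = lambda W v, ascending
d, V = d[::-1], V[:, ::-1]        # decreasing order, as in the text
X = V[:, :k]                      # eigenvectors for the k largest eigenvalues

val_tr = np.trace(np.linalg.solve(X.T @ W @ X, X.T @ B @ X))
val_det = np.linalg.det(np.linalg.solve(X.T @ W @ X, X.T @ B @ X))
print(np.isclose(val_tr, d[:k].sum()))    # attains sum(d_{1:k})  [4.11]
print(np.isclose(val_det, d[:k].prod()))  # attains prod(d_{1:k}) [4.12]
```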
Linear Discriminant Analysis (LDA):
If vectors x are randomly generated from a number of classes with B the covariance of the class means and W the average covariance within each class, then tr((X^{H}WX)^{-1}X^{H}BX) and det((X^{H}WX)^{-1}X^{H}BX) are two alternative measures of class separability. We can find a dimension-reducing transformation that maximizes separability by taking y = A^{T}x where the columns of A_{[n#k]} are the eigenvectors of W^{-1}B corresponding to the k largest eigenvalues. This choice maximizes both separability measures for any given k.
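A minimal LDA sketch (illustrative only; the class means, sample counts and dimensions are arbitrary choices, not from the manual):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n, k = 4, 2                              # input and reduced dimensions (illustrative)
means = rng.standard_normal((3, n)) * 3  # three class means
data = [m + rng.standard_normal((50, n)) for m in means]

W = sum(np.cov(c, rowvar=False) for c in data) / len(data)  # average within-class covariance
B = np.cov(means, rowvar=False)                             # covariance of the class means

d, V = eigh(B, W)                        # eigenvalues of W^{-1}B, ascending
A = V[:, ::-1][:, :k]                    # eigenvectors for the k largest eigenvalues
y = [c @ A for c in data]                # dimension-reducing transform y = A^T x per sample

# Separability in the reduced space equals the sum of the k largest eigenvalues.
sep = np.trace(np.linalg.solve(A.T @ W @ A, A.T @ B @ A))
print(np.isclose(sep, d[::-1][:k].sum()))
```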
 If W is +ve definite Hermitian and B is Hermitian and A_{[n#m]} is a given matrix, then max_{X} (tr(([A X]^{H}W[A X])^{-1}[A X]^{H}B[A X]) | rank([A X_{[n#k]}])=m+k) = tr((A^{H}WA)^{-1}A^{H}BA) + sum(d_{1:k}) where d are
 the eigenvalues of (I-A(A^{H}WA)^{-1}A^{H}W)W^{-1}B sorted into decreasing order and this maximum may be attained by taking the columns of X to be the corresponding eigenvectors [4.13].
 the eigenvalues of V^{H}F^{-H}BF^{-1}V sorted into decreasing order where W=F^{H}F and the columns of V are an orthonormal basis for the null space of A^{H}F^{H}. This maximum may be attained by taking the columns of X to be the corresponding eigenvectors premultiplied by F^{-1}V [4.14].
 If W is +ve definite Hermitian and B is Hermitian and A_{[n#m]} is a given matrix, then max_{X} (det(([A X]^{H}W[A X])^{-1}[A X]^{H}B[A X]) | rank([A X_{[n#k]}])=m+k) = det((A^{H}WA)^{-1}A^{H}BA) × prod(l_{1:k}) where l are the eigenvalues of W^{-1}B(I - A(A^{H}BA)^{-1}A^{H}B) sorted into decreasing order and this maximum may be attained by taking the columns of X to be the corresponding eigenvectors [4.15].
A linear equation has the form Ax - b = 0.
Exact Solution
 [A_{[m#n]}] The linear equation has a unique exact solution iff rank([A b]) = rank(A) = n. The solution is x = (A^{H}A)^{-1}A^{H}b, which reduces to x = A^{-1}b when A is square.
 [A_{[m#n]}] The linear equation has infinitely many exact solutions iff rank([A b]) = rank(A) < n.
 The complete set of solutions is x = x_{0}+y
where x_{0} is any solution and y ranges over the null
space of A.
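These cases can be explored numerically; a sketch (not from the manual) with a small hand-picked underdetermined system:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])          # m=2 < n=3, so a unique solution is impossible
b = np.array([6., 15.])               # chosen consistent: b = A @ [1, 1, 1]

# rank([A b]) == rank(A) < n, so there are infinitely many exact solutions.
Ab = np.column_stack([A, b])
print(np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A) < A.shape[1])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
N = null_space(A)                           # basis for the null space of A
t = np.array([2.5])                         # any coefficients give another solution
print(np.allclose(A @ (x0 + N @ t), b))     # x0 + y solves the equation too
```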
Least Squares solutions
If there is no exact solution, we can find the x that minimizes
d = ||Ax - b||^{2} = (Ax - b)^{H}(Ax - b).
 The x that minimizes d is given by
x=A^{#}b where A^{#} is any
generalized inverse of A.
 Of all the x that attain the minimum d, the one with least ||x|| is given by x=A^{+}b where A^{+} is the pseudoinverse of A.
 [rank(A_{[m#n]})=n] The unique x that minimizes d is given by x = (A^{H}A)^{-1}A^{H}b. This x gives d = b^{H}(I_{m#m}-A(A^{H}A)^{-1}A^{H})b.
 d is zero iff rank([A b]) = n.
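A sketch (not from the manual) verifying the full-rank formulas against NumPy's pseudoinverse, with an arbitrary overdetermined test system:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 6, 3
A = rng.standard_normal((m, n))     # full column rank (generically)
b = rng.standard_normal(m)

# Normal-equations solution x = (A^H A)^{-1} A^H b ...
x = np.linalg.solve(A.T @ A, A.T @ b)
# ... agrees with the pseudoinverse solution x = A^+ b.
print(np.allclose(x, np.linalg.pinv(A) @ b))

# The minimized d = b^H (I - A (A^H A)^{-1} A^H) b equals ||Ax - b||^2.
P = A @ np.linalg.solve(A.T @ A, A.T)
d = b @ (np.eye(m) - P) @ b
r = A @ x - b
print(np.isclose(d, r @ r))
```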
Recursive Least Squares
We can express the least squares solution to the augmented equation [A; U]y - [b; v] = 0 in terms of the least squares solution to Ax - b = 0.
[rank(A_{[m#n]})=n] The least squares solution to the augmented equation is y = x + K(v-Ux) where x is the least squares solution to Ax-b=0 and K = (A^{H}A)^{-1}U^{H}(I+U(A^{H}A)^{-1}U^{H})^{-1}. The inverse of the augmented grammian is given by ([A; U]^{H}[A; U])^{-1} = (A^{H}A)^{-1} - KU(A^{H}A)^{-1}.
Thus finding the least squares solution of the augmented equation requires the inversion of a matrix, (I+U(A^{H}A)^{-1}U^{H}), whose dimension equals the number of rows of U instead of the number of rows of [A; U]. The process is particularly simple if U has only one row. The computation may be reduced at the expense of numerical stability by calculating (A^{H}A)^{-1}U^{H} as (U(A^{H}A)^{-1})^{H}.
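The recursive update can be checked against a batch solve of the stacked system; a sketch (not from the manual) with arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, p = 8, 3, 2
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
U = rng.standard_normal((p, n)); v = rng.standard_normal(p)

G = np.linalg.inv(A.T @ A)          # inverse grammian (A^H A)^{-1}
x = G @ A.T @ b                     # least squares solution of Ax - b = 0

# Recursive update: only a p x p matrix is inverted (p = number of rows of U).
K = G @ U.T @ np.linalg.inv(np.eye(p) + U @ G @ U.T)
y = x + K @ (v - U @ x)

# Agrees with the batch least squares solution of the augmented system.
y_batch = np.linalg.lstsq(np.vstack([A, U]), np.concatenate([b, v]), rcond=None)[0]
print(np.allclose(y, y_batch))

# Updated inverse grammian: ([A;U]^H [A;U])^{-1} = G - K U G.
print(np.allclose(G - K @ U @ G, np.linalg.inv(A.T @ A + U.T @ U)))
```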
The (continuous) Lyapunov equation is AX +
XA^{H} + Q = 0 where Q is hermitian.
This is a special case of the Sylvester
equation.
 There is a unique solution for X iff no eigenvalue of A has a
zero real part and no two eigenvalues are negative complex conjugates of each
other. If this condition is satisfied then the unique X is hermitian.
 If A is stable then X is
unique and Hermitian and equals INTEGRAL(EXP(At)
Q EXP(A^{H}t),t=0..infinity)
 If A is stable and Q is
positive definite (or semidefinite) then X is unique, hermitian
and positive definite (or semidefinite).
The equivalent equation for discrete-time systems is the Stein equation.
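A numerical sketch (not from the manual): SciPy's `solve_continuous_lyapunov` solves AX + XA^{H} = Q, so -Q is passed to match the sign convention above; A below is an arbitrary stable test matrix, and the integral formula is approximated by a trapezoidal sum.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

rng = np.random.default_rng(7)
n = 4
# Lower triangular with diagonal in (-2, -1): eigenvalues all negative, so A is stable.
A = np.tril(rng.standard_normal((n, n)), -1) - np.diag(1 + rng.random(n))
M = rng.standard_normal((n, n))
Q = M @ M.T                                   # Hermitian positive semidefinite

# SciPy solves A X + X A^H = Q, so pass -Q for A X + X A^H + Q = 0.
X = solve_continuous_lyapunov(A, -Q)
residual = A @ X + X @ A.conj().T + Q
hermitian = np.allclose(X, X.conj().T)

# Compare with a trapezoidal approximation of INTEGRAL(EXP(At) Q EXP(A^H t)).
dt, T = 0.01, 30.0
ts = np.arange(0.0, T + dt, dt)
vals = np.stack([expm(A * t) @ Q @ expm(A.conj().T * t) for t in ts])
w = np.full(len(ts), dt); w[0] = w[-1] = dt / 2
X_int = np.tensordot(w, vals, axes=1)
print(np.allclose(residual, 0), hermitian, np.allclose(X, X_int, atol=1e-3))
```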
The (continuous) Riccati equation is the quadratic equation [A, X, C, D: n#n; C, D: Hermitian] XDX + XA + A^{H}X - C = 0
A Stein equation has the form AXB - X + Q = 0.
 There is a unique solution for X iff (eig(A)eig(B)^{T} - 1) has no zero elements, i.e. iff no eigenvalue of A is the reciprocal of an eigenvalue of B.
 AXB - X + Q = 0 is equivalent to the linear equation (I-KRON(B^{T},A))x: = q: where x: and q: contain the concatenated columns of X and Q. This is a numerically poor way to determine X.
 The discrete-time Lyapunov equation is a special case of the Stein equation with B=A^{H} and Q Hermitian.
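The Kronecker form can be demonstrated directly (fine at this tiny size, though numerically poor in general, as noted above); a sketch with arbitrary test matrices, not from the manual:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(9)
n = 3
A = 0.4 * rng.standard_normal((n, n))   # small spectral radii so a unique solution exists
B = 0.4 * rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))

# vec(AXB) = (B^T kron A) vec(X), with vec stacking columns (Fortran order).
K = np.eye(n * n) - np.kron(B.T, A)
x = np.linalg.solve(K, Q.ravel(order="F"))
X = x.reshape((n, n), order="F")
print(np.allclose(A @ X @ B - X + Q, 0))

# With B = A^H and Q Hermitian this reduces to the discrete-time Lyapunov equation.
Qh = Q + Q.T
Xh = solve_discrete_lyapunov(A, Qh)
x2 = np.linalg.solve(np.eye(n * n) - np.kron(A.conj(), A), Qh.ravel(order="F"))
print(np.allclose(Xh, x2.reshape((n, n), order="F")))
```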
The Sylvester equation is AX + XB + Q =
0
 There is a unique solution for X iff no eigenvalue of A is the
negative of an eigenvalue of B.
 AX + XB + Q = 0 is equivalent to the linear equation (KRON(I,A)+KRON(B^{T},I))x: = -q: where x: and q: contain the concatenated columns of X and Q. This is a numerically poor way to determine X.
 The Lyapunov equation is a special case of the Sylvester equation with B=A^{H} and Q Hermitian.
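A sketch (not from the manual) with arbitrary test matrices: SciPy's `solve_sylvester` solves AX + XB = Q, so -Q is passed to match the sign convention above, and the Kronecker form is verified alongside it.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(10)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4)) + 5 * np.eye(4)  # shifted so no eig(A) = -eig(B)
Q = rng.standard_normal((3, 4))

# SciPy solves A X + X B = Q, so pass -Q for A X + X B + Q = 0.
X = solve_sylvester(A, B, -Q)
print(np.allclose(A @ X + X @ B + Q, 0))

# Equivalent (numerically poorer) Kronecker form: (I kron A + B^T kron I) x: = -q:
Kmat = np.kron(np.eye(4), A) + np.kron(B.T, np.eye(3))
x = np.linalg.solve(Kmat, -Q.ravel(order="F"))
print(np.allclose(X, x.reshape((3, 4), order="F")))
```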
This page is part of The Matrix Reference
Manual. Copyright © 1998-2019 Mike Brookes, Imperial
College, London, UK. See the file gfl.html for copying
instructions. Please send any comments or suggestions to "mike.brookes" at
"imperial.ac.uk".
Updated: $Id: equation.html 11011 2018-12-03 09:59:00Z dmb $