Matrix Properties
Adjoint
The adjoint of A, ADJ(A), is the transpose of the matrix formed by taking the cofactor of each element of A.
 ADJ(A) A = det(A) I
 If det(A) != 0, then A^{-1} = ADJ(A) / det(A), but this is a numerically and computationally poor way of calculating the inverse.
 ADJ(A^{T}) = ADJ(A)^{T}
 ADJ(A^{H}) = ADJ(A)^{H}
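The identity ADJ(A) A = det(A) I is easy to check numerically; here is a minimal numpy sketch (the helper name `adjugate` and the test matrix are illustrative, not part of the manual):

```python
import numpy as np

def adjugate(A):
    """Adjugate: transpose of the matrix of cofactors of A."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            # delete row i and column j, then take the signed determinant
            B = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(B)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
# ADJ(A) A = det(A) I, and A^{-1} = ADJ(A)/det(A) when det(A) != 0
print(np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3)))
```

As the text notes, this is a poor way to compute an inverse in practice; it is shown only to verify the algebraic identity.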
Characteristic Equation
The characteristic equation of a matrix A_{[n#n]} is det(tI - A) = 0. It is a polynomial equation of degree n in t.
The properties of the characteristic equation are described in the section on eigenvalues.
Characteristic Matrix
The characteristic matrix of A_{[n#n]} is (tI - A) and is a function of the scalar t.
The properties of the characteristic matrix are described in the section on eigenvalues.
Characteristic Polynomial
The characteristic polynomial, p(t), of a matrix A_{[n#n]} is p(t) = det(tI - A).
The properties of the characteristic polynomial are described in the section on eigenvalues.
Cofactor
The cofactor of a minor of A:n#n is equal to the product of (i) the determinant of the submatrix consisting of all the rows and columns that are not in the minor and (ii) -1 raised to the power of the sum of all the row and column indices that are in the minor.
 The cofactor of the element a(i,j) equals (-1)^{i+j} det(B) where B is the matrix formed by deleting row i and column j from A.
See Minor, Adjoint
Compound Matrix
The k^{th} compound matrix of A_{[m#n]} is the m!(k!(m-k)!)^{-1} # n!(k!(n-k)!)^{-1} matrix formed from the determinants of all k#k submatrices of A arranged with the submatrix index sets in lexicographic order. Within this section, we denote this matrix by C_{k}(A).
 C_{1}(A) = A
 C_{n}(A_{[n#n]}) = det(A)
 C_{k}(AB) = C_{k}(A)C_{k}(B)
 C_{k}(aX) = a^{k}C_{k}(X)
 C_{k}(I) = I
 C_{k}(A^{H}) = C_{k}(A)^{H}
 C_{k}(A^{T}) = C_{k}(A)^{T}
 C_{k}(A^{-1}) = C_{k}(A)^{-1}
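The multiplicative property C_{k}(AB) = C_{k}(A)C_{k}(B) is the Cauchy-Binet theorem in disguise; a small numpy sketch (the helper name `compound` is mine) checks it directly:

```python
import numpy as np
from itertools import combinations

def compound(A, k):
    """k-th compound matrix: determinants of all k#k submatrices of A,
    with row and column index sets in lexicographic order."""
    m, n = A.shape
    rows = list(combinations(range(m), k))
    cols = list(combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
# C_k(AB) = C_k(A) C_k(B); also C_n(A) = det(A) and C_k(I) = I
print(np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2)))
```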
Condition Number
The condition number of a matrix is its largest singular value divided by its smallest singular value.
 If Ax=y and A(x+p)=y+q then ||p||/||x|| <= k ||q||/||y|| where k is the condition number of A. Thus it provides a sensitivity bound for the solution of a linear equation.
 If A_{[2#2]} is hermitian positive definite then its condition number, r, satisfies 4 <= tr(A)^{2}/det(A) = (r+1)^{2}/r. This expression is symmetric between r and r^{-1} and is monotonically increasing for r>1. It therefore provides an easy way to check on the range of r.
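The 2#2 identity tr(A)^{2}/det(A) = (r+1)^{2}/r can be verified numerically (the test matrix is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
r = s[0] / s[-1]                         # condition number
lhs = np.trace(A) ** 2 / np.linalg.det(A)
rhs = (r + 1) ** 2 / r
print(lhs, rhs)
```

For a hermitian positive definite matrix the singular values equal the eigenvalues, which is why the ratio of extreme singular values can be substituted into the eigenvalue identity.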
Conjugate Transpose
X=Y^{H} is the Hermitian transpose or Conjugate
transpose of Y iff
x_{i,j}=y_{j,i}^{C}.
See Hermitian Transpose.
Constructible
The pair of matrices {A_{[n#n]}, C_{[m#n]}} are constructible iff
{A^{H}, C^{H}} are controllable.
 If {A, C} are observable then
they are constructible.
 If det(A)!=0 and {A, C} are constructible then they
are observable.
 If {A, C} are constructible then they are detectable.
Controllable
The pair of matrices {A_{[n#n]}, B_{[n#m]}} are controllable iff any of the following equivalent conditions are true
 There exists a G_{[mn#n]} such that A^{n} = CG where C = [B AB A^{2}B ... A^{n-1}B]_{[n#mn]} is the controllability matrix.
 If x^{T}A^{r}B =
0 for 0<=r<n then
x^{T}A^{n} = 0.
 If x^{T}B = 0 and
x^{T}A = kx^{T} then
either k=0 or else x = 0.
 If {A, B} are reachable then they
are controllable.
 If det(A)!=0 and {A, B} are controllable then they are
reachable.
 If {A, B} are controllable then they are stabilizable.
 {DIAG(a), b} are controllable iff all nonzero elements of
a are distinct and all the corresponding elements of b are
nonzero.
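A sketch of the A^{n} = CG test in numpy, using the diagonal case from the last bullet (the pair below is chosen so that it is controllable but, because A is singular, not reachable; the helper name is mine):

```python
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, r) @ B for r in range(n)])

# {DIAG(a), b} with a = [2, 3, 0]: the nonzero elements of a are distinct
# and the corresponding elements of b are nonzero, so the pair is
# controllable (A^n = CG is solvable) even though rank(C) < n,
# i.e. the pair is not reachable since A is singular.
A = np.diag([2.0, 3.0, 0.0])
b = np.array([[1.0], [1.0], [0.0]])
C = controllability_matrix(A, b)
G, *_ = np.linalg.lstsq(C, np.linalg.matrix_power(A, 3), rcond=None)
print(np.linalg.matrix_rank(C), np.allclose(C @ G, np.linalg.matrix_power(A, 3)))
```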
Definiteness
A Hermitian square matrix A is
 positive (+ve) definite if x^{H}Ax > 0 for all nonzero x
 positive (+ve) semidefinite if x^{H}Ax >= 0 for all x
 negative (-ve) definite or semidefinite if -A is +ve definite or semidefinite respectively
 indefinite otherwise.
This definition only applies to Hermitian and real-symmetric matrices; if A is non-real and non-Hermitian then x^{H}Ax is complex for some values of x and so the concept of definiteness does not make sense. Some authors also call a real non-symmetric matrix positive definite if x^{H}Ax > 0 for all nonzero real x; this is true iff its symmetric part is positive definite (see below).
 A (not necessarily symmetric) real matrix A satisfies x^{H}Ax > 0 for all nonzero real x iff its symmetric part B=(A+A^{T})/2 is positive definite. Indeed x^{T}Ax = x^{T}Bx for all x.
 The following are equivalent
 A is Hermitian and +ve semidefinite
 A=B^{H}B for some B (not
necessarily square)
 A=C^{2} for some Hermitian C.
 D^{H}AD is Hermitian and +ve
semidefinite for any D
 If A is +ve definite then A^{1} exists and is +ve
definite.
 If A is +ve semidefinite, then for any integer k>0 there
exists a unique +ve semidefinite B with
A=B^{k}. This B also satisfies:
 AB=BA
 B=p(A) for some polynomial p()
 rank(B) = rank(A)
 if A is real then so is B.
 A is +ve definite iff all its eigenvalues are > 0.
 If A is +ve definite then det(A) > 0 and tr(A) >
0.
 A Hermitian matrix A_{[2#2]} is +ve definite iff
det(A) >0 and tr(A) > 0.
 The columns of B_{[m#n]} are linearly
independent iff B^{H}B is +ve definite.
 If S is +ve semidefinite, then |a^{H}Sb|^{2} <= a^{H}Sa × b^{H}Sb for any a, b [3.6]
 |s_{i,j}| <= sqrt(s_{i,i}s_{j,j}) [3.6]
 If B is +ve definite and A is +ve semidefinite then:
 B^{-1}A is diagonalizable and has nonnegative eigenvalues [3.7]
 tr(B^{-1}A) = 0 iff A=0
Detectable
The pair of matrices {A_{[n#n]}, C_{[m#n]}} are detectable iff {A^{H}, C^{H}} are stabilizable.
 If {A, C} are observable or constructible then they are detectable.
Determinant
For an n#n matrix A, det(A) is a scalar number defined by
det(A)=sgn(PERM(n))'*prod(A(1:n,PERM(n)))
This is the sum of n! terms each involving the product of n matrix elements of which exactly one comes from each row and each column. Each term is multiplied by the signature (+1 or -1) of the column-order permutation. See the notation section for definitions of sgn(), prod() and PERM().
The determinant is important because INV(A) exists iff det(A) != 0.
Geometric Interpretation
The determinant of a matrix equals the ±volume of the parallelepiped that has the matrix columns as n of its sides. If a vector space is transformed by multiplying by a matrix A, then all ±volumes will be multiplied by det(A).
 det(A^{T}) = det(A)
 det(A^{H}) = conj(det(A))
 det(cA) = c^{n} det(A)
 det(A^{k}) = (det(A))^{k} ,
k must be positive if det(A)=0.
 Interchanging any pair of columns of a matrix multiplies its determinant by -1 (likewise rows).
 Multiplying any column of a matrix by c multiplies its determinant
by c (likewise rows).
 Adding any multiple of one column onto another column leaves the
determinant unaltered (likewise rows).
 det(A) != 0 iff INV(A) exists.
 [A,B:n#m ; m>=n]: If Q = CHOOSE(m,n) and d(k) = det(A(:,Q(k,:))) det(B(:,Q(k,:))) for k=1:rows(Q) then det(AB^{T}) = sum(d). This is the Binet-Cauchy theorem.
 Suppose that for some r, P = CHOOSE(n,r) and Q = CHOOSE(n,n-r) with the rows of Q ordered so that P(k,:) and Q(k,:) have no elements in common. If we define D(m,k) = (-1)^{sum([P(m,:) P(k,:)])} det(A(P(m,:)^{T},P(k,:))) det(A(Q(m,:)^{T},Q(k,:))) for m,k=1:rows(P) then det(A) = sum(D(m,:)) = sum(D(:,k)) for any k or m. This is the Laplace expansion theorem.
 If we set k=r=1 then P(m,:)=[m] and we obtain the familiar expansion by the first column: d(m)=(-1)^{m+1} A(m,1) det(A([1:m-1 m+1:n]^{T},2:n)) and det(A)=sum(d).
 det(A) = 0 iff the columns of A are linearly dependent
(likewise rows).
 det(A) = 0 if two columns are identical (likewise rows).
 det(A) = 0 if any column consists entirely of zeros (likewise
rows).
 [A:3#3]: If A = [a b c] then
det(A) = det([a b c]) =
a^{T} SKEW(b) c =
b^{T} SKEW(c) a =
c^{T} SKEW(a) b
 det([a b; c d]) = ad - bc
 det([a b c]) = a_{1}b_{2}c_{3} - a_{1}b_{3}c_{2} - a_{2}b_{1}c_{3} + a_{2}c_{1}b_{3} + a_{3}b_{1}c_{2} - a_{3}c_{1}b_{2}
 The determinant of a diagonal or
triangular matrix is the product of its
diagonal elements.
 The determinant of a unitary matrix has
an absolute value of 1.
 The determinant of a permutation
matrix equals the signature of the column permutation.
 [A,B:n#n ]:det(AB)
= det(A) det(B)
 [A,B:m#n ]:det(I +
A^{T}B) = det(I +
AB^{T}) = det(I +
B^{T}A) = det(I +
BA^{T}) [3.2]
 [A:n#n]: det(A+xy^{T}) = (1+y^{T}A^{-1}x) det(A) [3.4]
 det(I+xy^{T}) = 1+y^{T}x = 1+x^{T}y [3.3]
 det(kI+xy^{T}) = k^{n}+k^{n-1}y^{T}x = k^{n}+k^{n-1}x^{T}y
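The rank-one update formula det(A+xy^{T}) = (1+y^{T}A^{-1}x) det(A) (the matrix determinant lemma) can be checked with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
x = rng.standard_normal((n, 1))
y = rng.standard_normal((n, 1))

# det(A + x y^T) = (1 + y^T A^{-1} x) det(A)
lhs = np.linalg.det(A + x @ y.T)
rhs = (1 + (y.T @ np.linalg.solve(A, x)).item()) * np.linalg.det(A)
print(lhs, rhs)
```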
In this section we have A_{[m#m]},
B_{[m#n]}, C_{[n#m]}
and D_{[n#n]}.
 det([A, B; C, D]) = det([D, C; B, A]) = det(A)*det(D-CA^{-1}B) = det(D)*det(A-BD^{-1}C) [3.1]
 det([a, b^{T}; c, D]) = (a - b^{T}D^{-1}c) det(D)
 det([I, B; C, I]) = det(I_{[m#m]} - BC) = det(I_{[n#n]} - CB)
 det([A, B; 0, D]) = det([A, 0; C, D]) = det(A)
det(D)
 det([a, b^{T}; 0, D]) =
det([a, 0; c, D]) = a det(D)
 For the special case when m=n (i.e. A, B,
C, D all n#n):
 det([A, B; C, 0]) = det(-BC^{T})
 [AB=BA]: det([A, B; C, D]) = det(DA-CB)
 [AC=CA]: det([A, B; C, D]) = det(AD-CB)
 [BD=DB]: det([A, B; C, D]) = det(DA-BC)
 [CD=DC]: det([A, B; C, D]) = det(AD-BC)
See also Grammian, Schur
Complement
Displacement Rank
The displacement rank of X_{[m#n]} is given by dis_rank(X) = rank(X - ZXZ^{T}) where the Z are shift matrices of size m#m and n#n respectively.
 dis_rank(X+Y) <= dis_rank(X) + dis_rank(Y)
 dis_rank(XY) <= dis_rank(X) + dis_rank(Y)
 dis_rank(X^{-1}) = dis_rank(JXJ) where J is the exchange matrix.
 [X: Toeplitz] dis_rank(X) = 2 unless X is upper or lower triangular in which case dis_rank(X)=1 unless X = 0, in which case dis_rank(X)=0.
 [X_{[n#n]}: Toeplitz] If a = X_{1,1} and b = (X^{2})_{1,1}, then the characteristic polynomial of X - ZXZ^{T} is (t^{2} - at + a^{2} - b) t^{n-2}
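The Toeplitz cases can be checked directly (the helper names `shift` and `dis_rank` are mine; Z is the lower shift matrix):

```python
import numpy as np

def shift(n):
    """Lower shift matrix Z: ones on the first subdiagonal."""
    return np.diag(np.ones(n - 1), -1)

def dis_rank(X):
    m, n = X.shape
    return np.linalg.matrix_rank(X - shift(m) @ X @ shift(n).T)

# A (non-triangular) 4#4 Toeplitz matrix: constant along each diagonal
c = [5.0, 1.0, 2.0, 3.0]   # first column
r = [5.0, 4.0, 6.0, 7.0]   # first row
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(4)]
              for i in range(4)])
# general Toeplitz: 2; triangular Toeplitz: 1; zero matrix: 0
print(dis_rank(T), dis_rank(np.tril(T)), dis_rank(np.zeros((4, 4))))
```

X - ZXZ^{T} shifts X one step down the diagonal and subtracts, so for a Toeplitz matrix everything cancels except the first row and column; that is why the displacement rank is so small.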
Eigenvalues
The eigenvalues of A are the roots of its characteristic equation: det(tI - A) = 0.
The properties of the eigenvalues are
described in the section on eigenvalues.
Field of Values
The field of values of a square matrix A is the set of complex numbers x^{H}Ax for all x with ||x||=1.
 The field of values is a closed convex set.
 The field of values contains the convex hull of the eigenvalues of
A.
 If A is normal then the field of
values equals the convex hull of its eigenvalues.
 [n<5]
A_{[n#n]} is normal iff its field of values is the convex hull of
its eigenvalues.
 A is hermitian iff its field of
values is a real interval.
 If A and B are unitarily similar, they have the same
field of values.
Generalized Inverse
A generalized inverse of X:m#n is any matrix X^{#}:n#m satisfying XX^{#}X=X. Note that if X is singular or non-square, then X^{#} is not unique. This is also called a weak generalized inverse to distinguish it from the pseudoinverse.
 If X is square and non-singular, X^{#} is unique and equal to X^{-1}.
 (X^{#})^{H} is a generalized inverse of
X^{H}.
 [k!=0] X^{#}/k is a generalized inverse of kX.
 [A,B nonsingular] B^{-1}X^{#}A^{-1} is a generalized inverse of AXB
 rank(X^{#}) >= rank(X).
 rank(X)=rank(X^{#}) iff X is also the generalized inverse of X^{#} (i.e. X^{#}XX^{#}=X^{#}).
 XX^{#} and X^{#}X are idempotent and have the same rank as X.
 If Ax=b has any solutions, then x=A^{#}b is a solution.
 If AA^{#} is hermitian, a value of x that minimizes ||Ax-b|| is given by x=A^{#}b. With this value of x, the error Ax-b is orthogonal to the columns of A. If we define the projection matrix P=AA^{#}, then Ax=Pb and Ax-b=(P-I)b.
 If X:m#n has rank r, we can find A:n#(n-r), B:n#r and C:m#(m-r) whose columns form bases for the null space of X, the range of X^{+}X and the null space of X^{H} respectively.

 The set of generalized inverses of X is precisely given by X^{#}=X^{+}+AY+BZC^{H} for arbitrary Y:(n-r)#m and Z:r#(m-r) where X^{+} is the pseudoinverse.
 For a given choice of A, B and C, each X^{#} corresponds to a unique Y and Z.
 XX^{#} is hermitian iff Z=0.
 If X:m#n has rank r, we can find A:n#(n-r), F:n#r and C:m#(m-r) whose columns form bases for the null space of X, the range of X^{+} and the null space of X^{H} respectively. We can also find G:m#r such that X^{+}=FG^{H}.

 The set of generalized inverses X^{#} of X, for which X is also the generalised inverse of X^{#}, is precisely given by X^{#}=(F+AV)(G+CW)^{H} for arbitrary V:(n-r)#r and W:(m-r)#r.
 For a given choice of A, C, F and G each
X^{#} corresponds to a unique V and W.
See also: Pseudoinverse
Gram Matrix
The gram matrix of X, GRAM(X), is the matrix X^{H}X.
If X is m#n, the elements of
GRAM(X) are the n^{2} possible inner
products between pairs of its columns. We can form such a matrix from
n vectors in any vector space having an inner product.
See also: Grammian
Grammian
The grammian of a matrix X, gram(X), equals det(GRAM(X)) = det(X^{H}X).
 gram(X) is real and >= 0.
 gram(X) > 0 iff the columns of X are
linearly independent, i.e. iff Xy = 0 implies
y = 0
 [X_{[m#n]}]: gram(X)=0 if m<n.
 gram(X) = 0 iff a principal minor of GRAM(X) is zero.
 [X_{[n#n]}]: gram(X) = gram(X^{H}) = |det(X)|^{2}
 gram(x) = x^{H}x
 gram([X Y]) = gram([Y X]) = gram(X)*det(Y^{H}Y - Y^{H}X(X^{H}X)^{-1}X^{H}Y) = gram(X)*det(Y^{H}(I - X(X^{H}X)^{-1}X^{H})Y)
 gram([X y]) = gram([y X]) = gram(X)*(y^{H}y - y^{H}X(X^{H}X)^{-1}X^{H}y) = gram(X)*y^{H}(I - X(X^{H}X)^{-1}X^{H})y
 gram([X y]) = gram(X) ||XX^{#}y - y||^{2} where X^{#} is the generalized inverse so that ||XX^{#}y - y|| equals the distance between y and its orthogonal projection onto the space spanned by the columns of X.
 gram([X Y]) <= gram(X)
gram(Y); this is the generalised Hadamard inequality.
 gram([X Y]) = gram(X)
gram(Y) iff either
X^{H}Y = 0
or gram(X) gram(Y) = 0
 If X = [x_{1} x_{2} ... x_{n}] then gram(X) <= prod(||x_{i}||^{2}) = prod(diag(X^{H}X))
 [X_{[n#n]}]: |det(X)|^{2} <= prod(||x_{i}||^{2}) = prod(diag(X^{H}X)); this is the Hadamard inequality.
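The Hadamard inequality is easy to confirm numerically (the helper name `gram` is mine, matching the manual's gram() notation):

```python
import numpy as np

def gram(X):
    """Grammian: det(X^H X), real and nonnegative."""
    return np.linalg.det(X.conj().T @ X).real

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 3))
# gram(X) <= prod of the squared column norms (Hadamard inequality)
col_norms_sq = np.sum(X ** 2, axis=0)
print(gram(X), np.prod(col_norms_sq))
```

Geometrically this says the parallelepiped spanned by the columns has the largest volume when the columns are mutually orthogonal, in which case equality holds.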
Geometric Interpretation
The grammian of X_{m#n} is the
squared "volume" of the ndimensional parallelepiped spanned by the
columns of X.
See also: Gram Matrix
Hermitian Transpose
X=Y^{H} is the Hermitian transpose or Conjugate transpose of Y iff x(i,j)=conj(y(j,i)).
Inertia
The inertia of an m#m square matrix is the triple
(p,n,z) where p+n+z=m and
p, n and z are respectively the number of eigenvalues,
counting multiplicities, with positive, negative and zero real parts.
Inverse
B is a left inverse of A if BA=I. B is a right inverse of A if AB=I.
If BA=AB=I then B is the inverse of A and we write B=A^{-1}.
 [A:n#n] AB=I iff BA=I, hence
inverse, left inverse and right inverse are all
equivalent for square matrices.
 [A,B:n#n] (AB)^{-1}=B^{-1}A^{-1}
 [A:m#n] A has a left inverse iff
rank(A)=n and a right inverse iff
rank(A)=m.
 [A:n#m, B:m#n] AB=I implies that
n<=m and that
rank(A)=rank(B)=n.
Inverse of Block Matrices
 [A, B; C, D]^{-1} = [Q^{-1}, -Q^{-1}BD^{-1}; -D^{-1}CQ^{-1}, D^{-1}(I+CQ^{-1}BD^{-1})] where Q = (A-BD^{-1}C) is the Schur Complement of D [3.5]
= [A^{-1}(I+BP^{-1}CA^{-1}), -A^{-1}BP^{-1}; -P^{-1}CA^{-1}, P^{-1}] where P = (D-CA^{-1}B) is the Schur Complement of A [3.5]
= [I, -A^{-1}B; -D^{-1}C, I] DIAG((A-BD^{-1}C)^{-1}, (D-CA^{-1}B)^{-1})
= DIAG((A-BD^{-1}C)^{-1}, (D-CA^{-1}B)^{-1}) [I, -BD^{-1}; -CA^{-1}, I]
= DIAG(A^{-1}, 0) + [-A^{-1}B; I] (D-CA^{-1}B)^{-1} [-CA^{-1}, I]
= DIAG(0, D^{-1}) + [I; -D^{-1}C] (A-BD^{-1}C)^{-1} [I, -BD^{-1}]
 [A, 0; C, D]^{-1} = [A^{-1}, 0; -D^{-1}CA^{-1}, D^{-1}]
= [I, 0; -D^{-1}C, I] DIAG(A^{-1}, D^{-1})
= DIAG(A^{-1}, D^{-1}) [I, 0; -CA^{-1}, I]
 [A, B; C, 0]^{-1} = DIAG(A^{-1}, 0) - [A^{-1}B; -I] (CA^{-1}B)^{-1} [CA^{-1}, -I]
 [A, b; c^{T}, d]^{-1} = [Q^{-1}, -d^{-1}Q^{-1}b; -d^{-1}c^{T}Q^{-1}, d^{-1}(1+d^{-1}c^{T}Q^{-1}b)] where Q = (A-d^{-1}bc^{T})
= [A^{-1}(I+p^{-1}bc^{T}A^{-1}), -p^{-1}A^{-1}b; -p^{-1}c^{T}A^{-1}, p^{-1}] where p = (d-c^{T}A^{-1}b)
= [I, -A^{-1}b; -d^{-1}c^{T}, 1] DIAG((A-d^{-1}bc^{T})^{-1}, (d-c^{T}A^{-1}b)^{-1})
= DIAG((A-d^{-1}bc^{T})^{-1}, (d-c^{T}A^{-1}b)^{-1}) [I, -bd^{-1}; -c^{T}A^{-1}, 1]
= DIAG(A^{-1}, 0) + (d-c^{T}A^{-1}b)^{-1} [-A^{-1}b; 1] [-c^{T}A^{-1}, 1]
= DIAG(0, d^{-1}) + [I; -d^{-1}c^{T}] (A-d^{-1}bc^{T})^{-1} [I, -d^{-1}b]
 [A, 0; c^{T}, d]^{-1} = [A^{-1}, 0; -d^{-1}c^{T}A^{-1}, d^{-1}]
= [I, 0; -d^{-1}c^{T}, 1] DIAG(A^{-1}, d^{-1})
= DIAG(A^{-1}, d^{-1}) [I, 0; -c^{T}A^{-1}, 1]
 [A, b; c^{T}, 0]^{-1} = DIAG(A^{-1}, 0) - (c^{T}A^{-1}b)^{-1} [A^{-1}b; -1] [c^{T}A^{-1}, -1]
See also: Generalized Inverse, Pseudoinverse, Inversion
Lemma
Kernel
The kernel (or null space) of A is the subspace of vectors x for which Ax = 0. The dimension of this subspace is the nullity of A.
 The kernel of A is the orthogonal complement of the range of
A^{H}
Linear Independence
The columns of A are linearly independent iff the only solution to Ax=0 is x=0.
 rank(A_{[m#n]}) = n
iff its columns are linearly independent. [1.5]
 If the columns of A_{[m#n]} are linearly
independent then m >= n [1.3, 1.5]
 If A has linearly independent columns and
A=F_{[m#r]}G_{[r#n]}
then r>=n. [1.1]
Matrix Norm
A matrix norm is a real-valued function of a square matrix satisfying the four axioms listed below. A generalized matrix norm satisfies only the first three.
 Positive: ||X||=0 iff X=0 else ||X||>0
 Homogeneous: ||cX||=|c| ||X|| for any real or complex scalar c
 Triangle Inequality: ||X+Y|| <= ||X||+||Y||
 Submultiplicative: ||XY|| <= ||X|| ||Y||
If ||y|| is a vector norm, then we define the induced matrix norm to be ||X|| = max(||Xy|| for ||y||=1)
Frobenius Norm
The Euclidean or Frobenius norm of a matrix A equals sqrt(sum(ABS(A).^2)) and is written ||A||_{F}. It is always a real number.
 ||A||_{F} = ||A^{T}||_{F} = ||A^{H}||_{F}
 ||A||_{F}^{2} = tr(A^{H}A) = sum(CONJ(A).*A)
 [Q: orthogonal]: ||A||_{F} = ||QA||_{F} = ||AQ||_{F}
||A||_{p} = max(||Ax||_{p}) where the max() is taken over all x with ||x||_{p} = 1 where ||x||_{p} denotes the vector p-norm for p>=1.
 ||AB||_{p} <= ||A||_{p} ||B||_{p}
 ||Ax||_{p} <= ||A||_{p} ||x||_{p}
 [A:m#n]: ||A||_{2} <= ||A||_{F} <= sqrt(n) ||A||_{2}
 [A:m#n]: max(ABS(A)) <= ||A||_{2} <= sqrt(mn) max(ABS(A))
 ||A||_{2} <= sqrt(||A||_{1} ||A||_{inf})
 ||A||_{1} = max(sum(ABS(A^{T})))
 ||A||_{inf} = max(sum(ABS(A)))
 [A:m#n]: ||A||_{inf} <= sqrt(n) ||A||_{2} <= sqrt(mn) ||A||_{inf}
 [A:m#n]: ||A||_{1} <= sqrt(m) ||A||_{2} <= sqrt(mn) ||A||_{1}
 [Q: orthogonal]: ||A||_{2} = ||QA||_{2} = ||AQ||_{2}
Minor
A k^{th}-order minor of A is the determinant of a k#k submatrix of A.
A principal minor is the determinant of a submatrix whose diagonal elements lie on the principal diagonal of A.
Null Space
The null space (or kernel) of A is the subspace of vectors x
for which Ax = 0.
 The null space of A is the orthogonal complement of the range of
A^{H}
 The dimension of the null space of A is the nullity of A.
 Given a vector x, we can choose a Householder matrix P=I-2vv^{H} with v = (x + ke_{1})/||x + ke_{1}|| where k=sgn(x(1))*||x|| and e_{1} is the first column of the identity matrix. The first row of P equals -k^{-1}x^{T} and the remaining rows form an orthonormal basis for the null space of x^{T}.
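A numpy sketch of this Householder construction (the helper name `householder` is mine):

```python
import numpy as np

def householder(x):
    """P = I - 2 v v^T with v = (x + k e1)/|x + k e1|, k = sgn(x[0]) |x|."""
    n = x.size
    k = np.copysign(np.linalg.norm(x), x[0])
    v = x.astype(float).copy()
    v[0] += k                      # v = x + k e1
    v /= np.linalg.norm(v)
    return np.eye(n) - 2.0 * np.outer(v, v)

x = np.array([3.0, 4.0, 0.0, 12.0])   # |x| = 13
P = householder(x)
# P is orthogonal and symmetric, P x = -k e1, and rows 2..n of P
# form an orthonormal basis for the null space of x^T
print(P @ x)
```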
Nullity
The nullity of a matrix A is the dimension of the null space of
A.
Observable
The pair of matrices {A_{[n#n]},
C_{[m#n]}} are observable iff
{A^{H}, C^{H}} are reachable.
Permanent
For an n#n matrix A, pet(A) is a scalar number defined by
pet(A)=sum(prod(A(1:n,PERM(n))))
This is the same as the determinant except that the individual terms within the sum are not multiplied by the signatures of the column permutations.
Properties of Permanents
 pet(A.') = pet(A)
 pet(A') = conj(pet(A))
 pet(cA) = c^{n} pet(A)
 [P: permutation matrix]: pet(PA)
= pet(AP) = pet(A)
 [D: diagonal matrix]: pet(DA) =
pet(AD) = pet(A) pet(D) = pet(A)
prod(diag(D))
Permanents of simple matrices
 pet([a b; c d]) = ad + bc
 The permanent of a diagonal or triangular matrix is the product of its
diagonal elements.
 The permanent of a permutation matrix equals 1.
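A direct implementation of the defining sum (the helper name `pet` follows the manual's notation; fine for small n only, since the sum has n! terms):

```python
import numpy as np
from itertools import permutations

def pet(A):
    """Permanent: like the determinant but without the signatures."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# pet([a b; c d]) = ad + bc
print(pet(A))
```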
Potency
The potency of a nonnegative matrix
A is the smallest n>0 such that
diag(A^{n}) > 0 i.e. all diagonal elements of
A^{n} are strictly positive. If no such n exists
then A is impotent.
Pseudoinverse
The pseudoinverse (also called the Natural Inverse or Moore-Penrose Pseudoinverse) of X_{m#n} is the unique [1.19] n#m matrix X^{+} that satisfies:
 XX^{+}X=X (i.e.
X^{+} is a generalized inverse of
X).

X^{+}XX^{+}=X^{+}
(i.e. X is a generalized inverse of
X^{+}).
 (XX^{+})^{H}=XX^{+}

(X^{+}X)^{H}=X^{+}X
 If X is square and non-singular then X^{+}=X^{-1}.
 If X=UDV^{H} is the singular value decomposition of X, then
X^{+}=VD^{+}U^{H} where
D^{+} is formed by inverting all the nonzero elements of
D^{T}.
 If D is a (not necessarily square) diagonal matrix, then
D^{+} is formed by inverting all the nonzero elements of
D^{T}.
 The pseudoinverse of X is the generalized
inverse having the lowest Frobenius norm.
 If X is real then so is X^{+}.
 (X^{+})^{+}=X

(X^{T})^{+}=(X^{+})^{T}

(X^{H})^{+}=(X^{+})^{H}
 (cX)^{+}=c^{-1}X^{+} for any nonzero real or complex scalar c.

X^{+}=X^{H}(XX^{H})^{+}=(X^{H}X)^{+}X^{H}.
 If X_{m#n} = F_{m#r} G_{r#n} has rank r then X^{+} = G^{+}F^{+} = G^{H}(F^{H}XG^{H})^{-1}F^{H}.
 If X_{m#n} has rank n (i.e. the columns are linearly independent) then X^{+}=(X^{H}X)^{-1}X^{H} and X^{+}X=I.
 If X_{m#n} has rank m (i.e. the rows are linearly independent) then X^{+}=X^{H}(XX^{H})^{-1} and XX^{+}=I.
 If X has orthonormal rows or orthonormal columns then
X^{+}= X^{H} .
 XX^{+} is a projection onto the column space of
X.
 [rank(X)=1]: X^{+}=X^{H}/||X||_{F}^{2}=X^{H}/tr(X^{H}X) where ||X||_{F} is the Frobenius Norm
See also: Inverse, Generalized
Inverse
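The four defining conditions can be verified against numpy's built-in pseudoinverse (the rank-deficient test matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5#4, rank 3
P = np.linalg.pinv(X)

# the four Moore-Penrose conditions
print(np.allclose(X @ P @ X, X),          # X X+ X = X
      np.allclose(P @ X @ P, P),          # X+ X X+ = X+
      np.allclose((X @ P).T, X @ P),      # X X+ hermitian
      np.allclose((P @ X).T, P @ X))      # X+ X hermitian
```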
Rank
The rank of an m#n matrix A is the smallest r for which there exist F_{[m#r]} and G_{[r#n]} such that A=FG. Such a decomposition is a full-rank decomposition. As a special case, the rank of 0 is 0.

A=F_{[m#r]}G_{[r#n]}
implies that rank(A) <= r .
 rank(A)=1 iff A = xy^{T} for some
x and y.
 rank(A_{[m#n]}) <= min(m,n).
[1.3]
 rank(A_{[m#n]}) = n iff its columns are
linearly independent.
[1.5]
 rank(A) = rank(A^{T}) =
rank(A^{H})
 rank(A) = maximum number of linearly independent columns (or rows)
of A.
 rank(A) is the dimension of the range of
A.
 rank(A_{[n#n]}) + nullity(A_{[n#n]}) = n
 det(A_{[n#n]})=0 iff rank(A_{[n#n]})<n.
 rank(A + B) <=
rank(A) + rank(B)
 rank([A B]) = rank(A) + rank(B - AA^{#}B) where A^{#} is a generalized inverse of A.
 rank([A; C]) = rank(A) + rank(C - CA^{#}A)
 rank([A B; C 0]) = rank(B) + rank(C) + rank((I - BB^{#})A(I - CC^{#}))
 rank(AA^{H}) = rank(A^{H}A) = rank(A) [see grammian]
 rank(AB) + rank(BC) <= rank(B) + rank(ABC)
 rank(A_{[m#n]}) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B))
 [X: nonsingular]:
rank(XA) = rank(AX) =
rank(A)
 rank(KRON(A,B)) =
rank(A)rank(B)

rank(DIAG(A,B,...,Z))
= sum(rank(A), rank(B), ...,
rank(Z))
Range
The range (or image) of A is the subspace of vectors that equal
Ax for some x. The dimension of this subspace is the rank of
A.
 [A:m#n] The range of A is
the orthogonal complement of the null space of
A^{H}.
Reachable
The pair of matrices {A_{[n#n]}, B_{[n#m]}} are reachable iff any of the following equivalent conditions are true
 rank(C)=n where C = [B AB A^{2}B ... A^{n-1}B]_{[n#mn]} is the controllability matrix.
 If x^{H}A^{r}B =
0 for 0<=r<n then x = 0.
 If x^{H}B = 0 and
x^{H}A = kx^{H} then
x = 0.
 For any v, it is possible to choose
L_{[n#m]} such that
eig(A+BL^{H})=v.
 If {A, B} are reachable then they are controllable and stabilizable.
 If det(A)!=0 and {A, B} are controllable then they are reachable.
 {DIAG(a), b} are reachable iff all elements of a are
distinct and all elements of b are nonzero.
Schur Complement
Given a block matrix M = [A_{[m#m]}, B; C, D_{[n#n]}], then P_{[n#n]}=D-CA^{-1}B and Q_{[m#m]}=A-BD^{-1}C are respectively the Schur Complements of A and D in M.
 det([A, B; C, D]) = det([D, C; B, A]) = det(A)*det(P) = det(Q)*det(D) [3.1]
 [A, B; C, D]^{-1} = [Q^{-1}, -Q^{-1}BD^{-1}; -D^{-1}CQ^{-1}, D^{-1}(I+CQ^{-1}BD^{-1})] = [A^{-1}(I+BP^{-1}CA^{-1}), -A^{-1}BP^{-1}; -P^{-1}CA^{-1}, P^{-1}] [3.5]
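The determinant factorizations through the Schur complements can be checked with random blocks (using `solve` in place of an explicit inverse):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))
M = np.block([[A, B], [C, D]])

P = D - C @ np.linalg.solve(A, B)   # Schur complement of A
Q = A - B @ np.linalg.solve(D, C)   # Schur complement of D
# det(M) = det(A) det(P) = det(Q) det(D)
print(np.linalg.det(M),
      np.linalg.det(A) * np.linalg.det(P),
      np.linalg.det(Q) * np.linalg.det(D))
```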
Spectral Radius
The spectral radius, rho(A), of A_{[n#n]} is the maximum modulus of any of its eigenvalues.
 rho(A) <= ||A|| where ||A|| is any matrix norm.
 For any a>0, there exists a matrix norm such that ||A|| - a <= rho(A) <= ||A||.
 If ABS(A)<=B then
rho(A)<=rho(ABS(A))<=rho(B)
 [A,B: real] If
B>=A>=0 then rho(B)>=rho(A)
 [A: real] If A>=0 then
rho(A)>=a_{ij} for all i,j
 [A,B: Hermitian] abs(eig(A+B)-eig(A)) <= rho(B)
where eig(A) contains the eigenvalues of A sorted into ascending
order. This shows that perturbing a hermitian matrix slightly doesn't have too
big an effect on its eigenvalues.
Spectrum
The spectrum of A_{[n#n]} is the set of all its
eigenvalues.
Stabilizable
The pair of matrices {A_{[n#n]}, B_{[n#m]}} are stabilizable iff either of the following equivalent conditions are true
 If x^{T}B = 0 and x^{T}A = kx^{T} then either |k| < 1 or else x = 0.
 It is possible to choose L_{[n#m]} such that
all elements of eig(A+BL^{H}) have absolute
value < 1.
 If {A, B} are reachable or
controllable then they are stabilizable.
 {DIAG(a), b} are stabilizable iff all elements of a
with modulus >=1 are distinct and all the corresponding elements of b
are nonzero.
Submatrix
A submatrix of A is a matrix formed by the elements
a(i,j) where i ranges over a subset of the rows and
j ranges over a subset of the columns.
Trace
The trace of a square matrix is the sum of its diagonal elements: tr(A)=sum(diag(A))
In the formulae below, we assume that matrix dimensions ensure that the
argument of tr() is square.
 tr(aA) = a * tr(A)
 tr(A^{T}) = tr(A)
 tr(A+B) = tr(A) + tr(B)
 tr(AB) = tr(BA) [1.17]
 tr((AB)^{k})
=tr((BA)^{k})
 a^{T}b = tr(ab^{T})
 a^{T}Xb =
tr(Xba^{T})
 tr(ab^{H}) =
conj(a^{H}b)
 tr(ABCD) = tr(BCDA) = tr(CDAB) = tr(DABC)
 Similar matrices have the same
trace: tr(X^{1}AX) = tr(A)
 tr(A^{T}B) = sum(A: •
B:) = A:^{T} B:
 tr(A^{H}B) =
sum(A^{C}: • B:) =
A:^{H} B:
 tr([A B]^{T} [C D]) =
tr(A^{T}C) +
tr(B^{T}D) [1.18]
 tr([A b]^{T} [C d]) =
tr(A^{T}C) +
b^{T}d
 tr([A B]^{T} X[C D]) =
tr(A^{T}XC) +
tr(B^{T}XD)
 tr([A b]^{T} X[C d]) =
tr(A^{T}XC) +
b^{T}Xd
 [A,B: n#n]: tr(A ¤ B) = tr(A) tr(B) where ¤ denotes the Kronecker product.
 If D is diagonal then tr(XDX^{T}) =
sum_{i}(d_{i} ×
x_{i}^{T}x_{i}) and
tr(XDX^{H}) = sum_{i}(d_{i}
×
x_{i}^{H}x_{i}) =
sum_{i}(d_{i} ×
x_{i}^{2}) [1.16]
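Two of the identities above, checked numerically (note that AB and BA may have different sizes yet share the same trace):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))
a = rng.standard_normal(3)
b = rng.standard_normal(3)

# tr(AB) = tr(BA) even though AB is 3#3 and BA is 4#4,
# and a^T b = tr(a b^T)
print(np.trace(A @ B), np.trace(B @ A),
      a @ b, np.trace(np.outer(a, b)))
```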
Transpose
X=Y^{T} is the transpose of Y iff x(i,j)=y(j,i).
Vectorize
The vector formed by concatenating all the columns of X is written vec(X) or, in this website, X:. If y = X_{[m#n]}: then y_{i+m(j-1)} = x_{i,j}.
 a ¤ b=(ba^{T}): where ¤ denotes the Kronecker product.
 sum((A • B):) =
tr(A^{T}B) = sum(A: • B:) =
A:^{T} B: =
(A^{T}:)^{T}
B^{T}: where A • B denotes the
Hadamard or elementwise product.
 tr(A^{H}B) =
sum(A^{C}: • B:) =
A:^{H} B:
 (ABC): = (C^{T} ¤ A)
B:
 (AB): = (I ¤ A) B: =
(B^{T} ¤ I) A:=
(B^{T} ¤ A) I:
 (Abc^{T}): = (c ¤ A) b
= c ¤ Ab
 ABc = (c^{T} ¤ A) B:
 a^{T}Bc = (c ¤
a)^{T} B: = (c^{T}
¤ a^{T}) B: = B:^{T}
(a ¤ c)
 ab^{H} ¤ cd^{H} =
(a ¤ c)(b ¤ d)^{H} =
(ca^{T}):(db^{T}):^{H}
 a^{H}bc^{H}d =
a^{H}b ¤
c^{H}d = (a ¤
c)^{H}(b ¤ d) =
(ca^{T}):^{H}(db^{T}):
 (ABC):^{T} =
B:^{T} (C ¤ A^{T})
 (AB):^{T} =
B:^{T} (I ¤ A^{T})
= A:^{T} (B ¤ I)
= I:^{T} (B ¤
A^{T})
 (Abc^{T}):^{T} =
b^{T}(c^{T} ¤
A^{T}) = c^{T} ¤
b^{T}A^{T}
 a^{T}B^{T}C =
B:^{T} (a ¤ C)
 If Y=AXB+CXD+... then X: = (B^{T} ¤ A + D^{T} ¤ C+...)^{-1} Y: however this is a slow and often ill-conditioned way of solving such equations.
 (A_{[m#n]}^{T}): = TVEC(m,n) (A:) [see vectorized transpose]
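The workhorse identity (ABC): = (C^{T} ¤ A) B: is easy to verify with numpy (the helper name `vec` is mine; numpy's default order is row-major, so column-major vectorization needs `order='F'` or a transpose):

```python
import numpy as np

def vec(X):
    """Stack the columns of X into one long vector (column-major)."""
    return X.T.reshape(-1)   # same as X.flatten(order='F')

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))
a = rng.standard_normal(3)
b = rng.standard_normal(2)

# (ABC): = (C^T kron A) B:,  and  a kron b = (b a^T):
print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)),
      np.allclose(np.kron(a, b), vec(np.outer(b, a))))
```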
Vector Norm
A vector norm is a real-valued function of a vector satisfying the three axioms listed below.
 Positive: ||x||=0 iff x=0 else ||x||>0
 Homogeneous: ||cx||=|c| ||x|| for any real or complex scalar c
 Triangle Inequality: ||x+y|| <= ||x||+||y||
If <x, y> is an inner product then ||x|| = sqrt(<x, x>) is a vector norm.
 A vector norm may be derived from an inner product iff it satisfies the parallelogram identity: ||x+y||^{2}+||x-y||^{2}=2||x||^{2}+2||y||^{2}
 If ||x|| is derived from <x, y> then 4Re(<x, y>) = ||x+y||^{2}-||x-y||^{2} = 2(||x+y||^{2}-||x||^{2}-||y||^{2})
The Euclidean norm of a vector x equals the square root of the sum of the squares of the absolute values of all its elements and is written ||x||. It is always a real number and corresponds to the normal notion of the vector's length.
The p-norm of a vector x is defined by ||x||_{p} = sum(abs(x).^p)^(1/p) for p>=1. The most common values of p are 1, 2 and infinity.
 City-Block Norm: ||x||_{1} = sum(abs(x))
 Euclidean Norm: ||x|| = ||x||_{2} = sqrt(x'x)
 Infinity Norm: ||x||_{inf} = max(abs(x))
 Hölder inequality: |x'y| <= ||x||_{p} ||y||_{q} where 1/p + 1/q = 1
 ||x||_{inf} <= ||x||_{2} <= ||x||_{1} <= sqrt(n) ||x||_{2} <= n ||x||_{inf}
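The Hölder inequality (Cauchy-Schwarz for p=q=2) and the chain of p-norm bounds can be checked with random vectors:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
x = rng.standard_normal(n)
y = rng.standard_normal(n)

n1, n2, ninf = (np.linalg.norm(x, p) for p in (1, 2, np.inf))
# Hölder with p = q = 2, then the norm equivalence chain
print(abs(x @ y) <= np.linalg.norm(x, 2) * np.linalg.norm(y, 2))
print(ninf <= n2 <= n1 <= np.sqrt(n) * n2 <= n * ninf)
```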
This page is part of The Matrix Reference
Manual. Copyright © 19982005 Mike Brookes, Imperial
College, London, UK. See the file gfl.html for copying
instructions. Please send any comments or suggestions to "mike.brookes" at
"imperial.ac.uk".
Updated: $Id: property.html 4283 20140310 10:31:41Z dmb $