Matrix Relations
Square matrices A and B are congruent if there
exists a non-singular X such that B=
XTAX . Congruence is an equivalence
relation.
For Hermitian congruence, see Conjunctivity.
Congruence implies equivalence.
- Congruence preserves symmetry, skew-symmetry and definiteness.
- A is congruent to a diagonal matrix iff it is symmetric.
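As a quick numerical check of the congruence properties above (a sketch assuming NumPy, which this page does not itself use):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric and positive definite
X = rng.standard_normal((n, n))  # non-singular with probability 1

B = X.T @ A @ X                  # B is congruent to A

sym_preserved = bool(np.allclose(B, B.T))                  # symmetry survives
pd_preserved = bool(np.all(np.linalg.eigvalsh(B) > 0))     # definiteness survives
print(sym_preserved, pd_preserved)
```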
Square matrices A and B are conjunctive or
Hermitian congruent or star-congruent if there exists a non-singular X such that
B= XHAX. Conjunctivity is an
equivalence relation.
- If A is hermitian, it is conjunctive to a diagonal matrix of the form
D = DIAG(I[p#p], -I[n#n], 0[z#z]).
D is the inertia matrix of A and the inertia of
A is the scalar triple (p,n,z).
- Two Hermitian matrices are conjunctive iff they have the same inertia.
- If A is skew-hermitian, it is conjunctive to a matrix of the form
DIAG(jI,-jI,0).
- A is conjunctive to I iff it is positive definite hermitian
in which case A=UHIU for some non-singular upper
triangular U.
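Sylvester's law of inertia (conjunctive Hermitian matrices share their inertia) can be checked numerically; the helper inertia below is illustrative, not a library function, and NumPy is assumed:

```python
import numpy as np

def inertia(A, tol=1e-9):
    """Return the triple (p, n, z): the numbers of positive, negative
    and zero eigenvalues of the Hermitian matrix A."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(1)
A = np.diag([2.0, 1.0, -3.0, 0.0])  # Hermitian with inertia (2, 1, 1)
X = rng.standard_normal((4, 4))     # non-singular with probability 1
B = X.conj().T @ A @ X              # conjunctive (star-congruent) to A

print(inertia(A), inertia(B))       # both should be (2, 1, 1)
```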
Conjugate
Two matrices are conjugate iff they are similar.
The Direct sum of matrices A, B,
... is written A⊕B⊕... =
DIAG(A, B, ...).
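A minimal sketch of the direct sum in NumPy (the helper direct_sum is illustrative, not a library routine):

```python
import numpy as np

def direct_sum(*mats):
    """A ⊕ B ⊕ ... : the block-diagonal matrix DIAG(A, B, ...)."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols), dtype=np.result_type(*mats))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

A = np.array([[1, 2], [3, 4]])
B = np.array([[5]])
S = direct_sum(A, B)
print(S)
# [[1 2 0]
#  [3 4 0]
#  [0 0 5]]
```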
Two m#n matrices, A and B, are equivalent
iff there exists a non-singular m#m matrix, M, and a non-singular
n#n matrix, N, with B=MAN.
Equivalence is an equivalence relation.
- A and B, are equivalent iff they have the same rank.
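The rank criterion is easy to confirm numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 4, 5, 2

# Build A with rank 2, then an equivalent B = MAN.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
M = rng.standard_normal((m, m))   # non-singular with probability 1
N = rng.standard_normal((n, n))
B = M @ A @ N

rank_A = np.linalg.matrix_rank(A)
rank_B = np.linalg.matrix_rank(B)
print(rank_A, rank_B)             # equivalent matrices share the same rank
```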
The Hadamard product of two m#n matrices A and
B, written on this website A • B, is formed by
elementwise multiplication of their elements. The matrices must be the same
size. The functions DIAG(x) and diag(X) respectively
convert a vector into a diagonal matrix and the diagonal of a square matrix into a
vector. The function sum(X) sums the rows of X to produce a
vector.
- If A and B are +ve definite then A • B is
+ve definite.
- If A and B are +ve semi-definite then A •
B is +ve semi-definite and rank(A • B)
<= rank(A) rank(B)
- A • B = B • A
- AT • BT = (A
• B)T
- (a • b)(c • d)T =
acT • bdT =
adT • bcT
- a • b = DIAG(a) b
- DIAG(a • b) = DIAG(a) DIAG(b)
- (X • abT) = DIAG(a)
X DIAG(b)
- (X • abT)(Y •
cdT ) = adT •
(X DIAG(b • c) Y)
- (X • abT)y = a •
( X(b • y) )
- diag(XY) = sum(X •
YT)
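A few of the Hadamard identities above can be spot-checked numerically; in NumPy, • is the `*` operator, DIAG is np.diag and sum(X) is a row sum. A sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 4
X = rng.standard_normal((m, n))
Y = rng.standard_normal((n, m))
a = rng.standard_normal(m)
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# a • b = DIAG(a) b
ok1 = bool(np.allclose(a * b, np.diag(a) @ b))
# diag(XY) = sum(X • Y^T)   (row sums of the elementwise product)
ok2 = bool(np.allclose(np.diag(X @ Y), np.sum(X * Y.T, axis=1)))
# X • ab^T = DIAG(a) X DIAG(b)   (here with a of length m, c of length n)
ok3 = bool(np.allclose(X * np.outer(a, c), np.diag(a) @ X @ np.diag(c)))
print(ok1, ok2, ok3)
```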
The Kronecker product of A[m#n] and
B[p#q], written A ⊗ B or
KRON(A,B), is equal to the mp#nq matrix
[a(1,1)B … a(1,n)B ; … ;
a(m,1)B … a(m,n)B ]. It
is also known as the direct product or tensor product of A
and B. The Kronecker Product operation is often denoted by a multiplication
sign enclosed in a circle, which we approximate with ⊗. Note that in general
A ⊗ B != B ⊗ A. In the expressions
below a : suffix denotes vectorization.
- Associative: A ⊗ B ⊗ C = A
⊗ (B ⊗ C) = (A ⊗ B) ⊗
C
- Distributive: A ⊗ (B+C) = A ⊗
B + A ⊗ C
- Not Commutative: A ⊗ B = B ⊗
A iff A = cB for some scalar c
- det(A[m#m] ⊗
B[n#n]) =
det(A)n det(B)m.
- tr(A[n#n] ⊗
B[n#n]) = tr(A) tr(B)
- AT ⊗ BT =
(A ⊗ B)T
- AH ⊗ BH =
(A ⊗ B)H
- A-1 ⊗ B-1 = (A ⊗
B)-1
- A ⊗ B is singular iff A or B is
singular.
- A ⊗ B = I iff A = cI and
B = c-1I for some scalar c.
- I[m#m] ⊗ I[n#n]
= I[mn#mn]
- A ⊗ B is orthogonal iff cA and
c-1B are orthogonal for some scalar c.
- A ⊗ B is diagonal iff A and B are
diagonal.
- If Aa=pa and Bb=qb then (A
⊗ B)(a ⊗ b)=pq(a ⊗
b). The algebraic multiplicity of the
eigenvalue pq is the product of the corresponding multiplicities of
p and q.
- If Aa=pa and Bb=qb then (A
⊗ I + I ⊗ B)(a ⊗
b)=(p+q)(a ⊗ b)
- (baT): = (aT
⊗ b): =a ⊗ b where the :
suffix denotes vectorization.
- a ⊗ (BC)= (a ⊗ B)C
- aT ⊗ (BC)=
B(aT ⊗ C)
- (AB) ⊗ c= (A ⊗ c)B
- a ⊗ bT =
abT
- aT ⊗ BCT ⊗
d = (B ⊗ d)(a ⊗
C)T
- a ⊗ BCT ⊗
dT = (a ⊗ B)(C ⊗
d)T
- (a ⊗ b)(c ⊗
d)T = (baT):
(dcT):T = a ⊗
bcT ⊗ dT=
cT ⊗ adT ⊗ b
= acT ⊗
bdT
- AB ⊗ CD = (A ⊗ C)(B
⊗ D)
- A[m#n] ⊗
B[p#q] = (A ⊗
I[p#p])(I[n#n]
⊗ B) = (I[m#m] ⊗
B)(A ⊗ I[q#q])
- a[m] ⊗
B[p#q] = (a ⊗
I[p#p])B
- A[m#n] ⊗
b[p] = (I[m#m]
⊗ b)A
- a[m] ⊗ b[p] =
(a ⊗ I[p#p])b =
(I[m#m] ⊗ b)a
- I[n#n] ⊗ AB =
(I[n#n] ⊗
A)(I[n#n] ⊗ B)
- AB ⊗ I[n#n] = (A
⊗ I[n#n])(B ⊗
I[n#n])
- abH ⊗ cdH =
(a ⊗ c)(b ⊗ d)H =
(caT):(dbT):H
- aHbcHd =
aHb ⊗
cHd = (a ⊗
c)H(b ⊗ d) =
(caT):H(dbT):
- (A ⊗ B)H(A ⊗ B)
= AHA ⊗
BHB
- (ABC): = (CT ⊗ A)
B:
- (AB): = (I ⊗ A) B: =
(BT ⊗ I) A:=
(BT ⊗ A) I:
- (AbcT): = (c ⊗ A) b
= c ⊗ Ab
- ABc = (cT ⊗ A) B:
- aTBc = (c ⊗
a)T B: =
(acT):T B: = (a
⊗ c)T BT: =
B:T (c ⊗ a) =
B:T (acT):
- (ABC):T =
B:T (C ⊗ AT)
- (AB):T =
B:T (I ⊗ AT)
= A:T (B ⊗ I)
= I:T (B ⊗
AT)
- (AbcT):T =
bT(cT ⊗
AT) = cT ⊗
bTAT
- aTBTC =
B:T (a ⊗ C)
- ((ABC)T):
=(CTBTAT):
= (A ⊗ CT)
BT:
- ABc = (A ⊗ cT)
BT:
- B[m#n]c =
(I[m#m] ⊗ cT)
BT:
- If Y=AXB+CXD+... then Y: =
(BT ⊗ A + DT
⊗ C+...) X:; however, this is a slow and often
ill-conditioned way of solving such equations for X.
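The central vectorization identity (ABC): = (CT ⊗ A) B:, and the Y=AXB+CXD construction above, can be verified numerically. Here ':' is column-stacking vectorization, which in NumPy is reshape with order='F'; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)

def vec(X):
    """Column-stacking vectorization, the ':' suffix in the text."""
    return X.reshape(-1, order='F')

A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

# (ABC): = (C^T ⊗ A) B:
ok_vec = bool(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))

# Solving Y = AXB + CXD via Y: = (B^T ⊗ A + D^T ⊗ C) X:
A2, B2, C2, D2 = (rng.standard_normal((3, 3)) for _ in range(4))
X = rng.standard_normal((3, 3))
Y = A2 @ X @ B2 + C2 @ X @ D2
K = np.kron(B2.T, A2) + np.kron(D2.T, C2)     # generically non-singular
X_rec = np.linalg.solve(K, vec(Y)).reshape(3, 3, order='F')
ok_solve = bool(np.allclose(X_rec, X))
print(ok_vec, ok_solve)
```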
In the identities below, In =
I[n#n] and Tm,n =
TVEC(m,n) [see vectorized
transpose]
- B[p#q] ⊗
A[m#n] = Tp,m
(A ⊗ B) Tn,q
- (A[m#n] ⊗
B[p#q]) Tn,q =
Tm,p (B ⊗ A)
- a[m] ⊗
B[p#q] = (a ⊗
Ip)B = Tm,p
(B ⊗ a)
- A[m#n] ⊗
b[p] = (Im ⊗
b)A = Tm,p (b ⊗
A)
- a[m] ⊗ b[p] =
(a ⊗ Ip)b =
(Im ⊗ b)a
- (A ⊗ b): = A: ⊗ b
- (a[m] ⊗ B[p#q]): =
(Tq,m ⊗
Ip)(a ⊗ B:) =
(Iq ⊗ a ⊗
Ip)B:
- (A ⊗ B): = (In ⊗
Tq,m ⊗ Ip)(A:
⊗ B:) = (In ⊗
Tq,m ⊗ Ip)(A
⊗ B:): = (Tn,q
⊗ Imp)(A: ⊗
B): = (Inq ⊗
Tm,p)(B: ⊗ A):
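TVEC(m,n) can be built explicitly to check the swap identity B ⊗ A = Tp,m (A ⊗ B) Tn,q; the helper tvec below is illustrative and NumPy is assumed:

```python
import numpy as np

def vec(X):
    """Column-stacking vectorization (the ':' suffix)."""
    return X.reshape(-1, order='F')

def tvec(m, n):
    """Vectorized-transpose matrix T(m,n) with
    T(m,n) @ vec(X) = vec(X.T) for any m-by-n X."""
    T = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry X[i,j] sits at i + j*m in vec(X) and j + i*n in vec(X^T)
            T[j + i * n, i + j * m] = 1.0
    return T

rng = np.random.default_rng(5)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

# T(m,n) vec(A) = vec(A^T)
ok_t = bool(np.allclose(tvec(m, n) @ vec(A), vec(A.T)))
# B ⊗ A = T(p,m) (A ⊗ B) T(n,q)
ok_swap = bool(np.allclose(np.kron(B, A),
                           tvec(p, m) @ np.kron(A, B) @ tvec(n, q)))
print(ok_t, ok_swap)
```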
The Kronecker sum of two square matrices,
A[m#m] and B[n#n], is equal
to (A ⊗ In) +
(Im ⊗ B). It is sometimes written
A⊕B but in these pages, this notation
is reserved for the direct sum.
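The eigenvalue property of the Kronecker sum (stated in the Kronecker product section above: (A ⊗ I + I ⊗ B)(a ⊗ b) = (p+q)(a ⊗ b)) checks out numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 3, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))

# Kronecker sum of A and B (distinct from the direct sum DIAG(A, B))
K = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

# If Aa = pa and Bb = qb then K(a ⊗ b) = (p+q)(a ⊗ b)
pA, VA = np.linalg.eig(A)
qB, VB = np.linalg.eig(B)
a, b = VA[:, 0], VB[:, 0]
ok = bool(np.allclose(K @ np.kron(a, b), (pA[0] + qB[0]) * np.kron(a, b)))
print(ok)
```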
We can define a partial order on the set of Hermitian matrices by writing
A>=B iff A-B is positive semidefinite and A>B iff
A-B is positive
definite.
- The partial order is:
- reflexive: A>=A for all A.
- antisymmetric: A>=B and B>=A are
both true iff A=B.
- transitive: If A>=B and B>=C then
A>=C.
- Any pair of hermitian matrices, A and B, satisfy precisely
one of the following:
- None of the relations A<B, A<=B,
A=B, A>=B, A>B is true.
- A<B and A<=B only are true.
- A<=B only is true.
- A=B, A<=B and A>=B only are
true.
- A>=B only is true.
- A>B and A>=B only are true.
- A>=B iff xHAx >=
xHBx for all x where >= has its
normal scalar meaning (likewise for >)
- A>=B iff DHAD >=
DHBD for any, not necessarily square, D.
(not true for >).
- A>B iff DHAD >
DHBD for any non-singular D.
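The order and its invariance under DHAD transformations can be spot-checked numerically (real symmetric case for simplicity); a sketch assuming NumPy, with psd an illustrative helper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

def psd(M, tol=1e-10):
    """True if the Hermitian matrix M is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Build A >= B by adding a positive semidefinite gap to B.
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
G = rng.standard_normal((n, 2))
A = B + G @ G.T                    # A - B = GG^T >= 0

ok_order = psd(A - B)

# A >= B implies D^H A D >= D^H B D for any, not necessarily square, D.
D = rng.standard_normal((n, 3))
ok_transform = psd(D.T @ A @ D - D.T @ B @ D)
print(ok_order, ok_transform)
```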
Real square matrices A and B are orthogonally
similar if there exists an orthogonal Q such that B=
QTAQ .
Orthogonal similarity implies both similarity and
congruence.
See also: Unitary similarity
Square matrices A and B are similar (also called
conjugate) if there exists a non-singular X such that
B=X-1AX . Similarity is an equivalence relation,
i.e. it is reflexive, symmetric and transitive.
Similar matrices represent the same linear transformation in a different
basis. Similarity implies equivalence.
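Because similar matrices represent the same transformation, they share their eigenvalues; a quick numerical check, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4

# Upper triangular A with eigenvalues exactly 1, 2, 3, 4 on the diagonal
A = np.triu(rng.standard_normal((n, n)), 1) + np.diag([1.0, 2.0, 3.0, 4.0])
X = rng.standard_normal((n, n))    # non-singular with probability 1
B = np.linalg.solve(X, A @ X)      # B = X^{-1} A X, similar to A

eA = np.sort_complex(np.linalg.eigvals(A))
eB = np.sort_complex(np.linalg.eigvals(B))
ok = bool(np.allclose(eA, eB))     # the spectra coincide
print(ok)
```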
Square matrices A and B are unitarily similar if
there exists a unitary Q such that B=
QHAQ . Unitary similarity is an equivalence
relation and implies both similarity and
conjunctivity.
This page is part of The Matrix Reference
Manual. Copyright © 1998-2022 Mike Brookes, Imperial
College, London, UK. See the file gfl.html for copying
instructions. Please send any comments or suggestions to "mike.brookes" at
"imperial.ac.uk".
Updated: $Id: relation.html 11291 2021-01-05 18:26:10Z dmb $