# Matrix Relations


### Congruence

Square matrices A and B are congruent if there exists a non-singular X such that B = XᵀAX. Congruence is an equivalence relation.

For Hermitian congruence, see Conjunctivity.

Congruence implies equivalence.

• Congruence preserves symmetry, skew-symmetry and definiteness.
• A is congruent to a diagonal matrix iff it is symmetric.
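As a quick numerical illustration (a NumPy sketch, not part of the manual itself): congruence by a random non-singular X preserves both symmetry and positive definiteness.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))           # almost surely non-singular
S = rng.standard_normal((n, n))

A = S + S.T                               # symmetric
B = X.T @ A @ X                           # congruent to A
assert np.allclose(B, B.T)                # symmetry is preserved

P = S @ S.T + n * np.eye(n)               # positive definite
Q = X.T @ P @ X                           # congruent to P
assert np.all(np.linalg.eigvalsh(Q) > 0)  # definiteness is preserved
```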

### Conjunctivity

Square matrices A and B are conjunctive or Hermitian congruent or star-congruent if there exists a non-singular X such that B = XᴴAX. Conjunctivity is an equivalence relation.

• If A is Hermitian, it is conjunctive to a diagonal matrix of the form D = DIAG(I[p#p], -I[n#n], 0[z#z]). D is the inertia matrix of A and the inertia of A is the scalar triple (p, n, z).
• Two Hermitian matrices are conjunctive iff they have the same inertia.
• If A is skew-hermitian, it is conjunctive to a matrix of the form DIAG(jI,-jI,0).
• A is conjunctive to I iff it is positive definite Hermitian, in which case A = UᴴIU = UᴴU for some non-singular upper triangular U.
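Sylvester's law of inertia (the second bullet above) can be checked numerically; the helper name `inertia` below is ours, not the manual's.

```python
import numpy as np

def inertia(A, tol=1e-9):
    """Inertia (p, n, z) of a Hermitian A: counts of positive, negative and zero eigenvalues."""
    e = np.linalg.eigvalsh(A)
    return (int((e > tol).sum()), int((e < -tol).sum()), int((np.abs(e) <= tol).sum()))

rng = np.random.default_rng(0)
A = np.diag([3.0, 1.0, -2.0, 0.0])   # Hermitian with inertia (2, 1, 1)
X = rng.standard_normal((4, 4))      # almost surely non-singular
B = X.conj().T @ A @ X               # conjunctive (star-congruent) to A

# conjunctive Hermitian matrices share the same inertia
assert inertia(A) == inertia(B) == (2, 1, 1)
```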

### Conjugate

Two matrices are conjugate iff they are similar.

### Direct Sum

The direct sum of matrices A, B, ... is written A ⊕ B ⊕ ... = DIAG(A, B, ...).
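A small sketch of the operation (the helper name `direct_sum` is ours; `scipy.linalg.block_diag` provides the same thing):

```python
import numpy as np

def direct_sum(*mats):
    """DIAG(A, B, ...): stack the arguments along the diagonal, zeros elsewhere."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

A = np.array([[1.0, 2.0]])        # 1#2
B = np.array([[3.0], [4.0]])      # 2#1
S = direct_sum(A, B)              # 3#3 block diagonal
assert np.allclose(S, np.array([[1.0, 2.0, 0.0],
                                [0.0, 0.0, 3.0],
                                [0.0, 0.0, 4.0]]))
```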

### Equivalence

Two m#n matrices A and B are equivalent iff there exist a non-singular m#m matrix M and a non-singular n#n matrix N with B = MAN. Equivalence is an equivalence relation.

• A and B are equivalent iff they have the same rank.
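A numerical check of the rank property (an illustrative NumPy sketch, not from the manual):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
M = rng.standard_normal((3, 3))   # almost surely non-singular
N = rng.standard_normal((5, 5))   # almost surely non-singular
B = M @ A @ N                     # equivalent to A

# equivalent matrices have the same rank
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B) == 3
```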

### Hadamard Product

The Hadamard product of two m#n matrices A and B, written in this website A • B, is formed by the elementwise multiplication of their elements. The matrices must be the same size.

• If A and B are +ve definite then A • B is +ve definite.
• If A and B are +ve semi-definite then A • B is +ve semi-definite and rank(A • B) <= rank(A) rank(B).
• A • B = B • A
• Aᵀ • Bᵀ = (A • B)ᵀ
• (a • b)(c • d)ᵀ = acᵀ • bdᵀ = adᵀ • bcᵀ
• a • b = DIAG(a) b
• DIAG(a • b) = DIAG(a) DIAG(b)
• X • abᵀ = DIAG(a) X DIAG(b)
• (X • abᵀ)(Y • cdᵀ) = adᵀ • (X DIAG(b • c) Y)
• (X • abᵀ)y = a • (X(b • y))
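Two of the identities above can be verified numerically (NumPy's `*` operator is the Hadamard product; this sketch is ours, not the manual's):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4
X = rng.standard_normal((m, n))
a = rng.standard_normal(m)
b = rng.standard_normal(n)
y = rng.standard_normal(n)

# X • abᵀ = DIAG(a) X DIAG(b)
assert np.allclose(X * np.outer(a, b), np.diag(a) @ X @ np.diag(b))

# (X • abᵀ)y = a • (X(b • y))
assert np.allclose((X * np.outer(a, b)) @ y, a * (X @ (b * y)))
```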

### Kronecker Product

The Kronecker product of A[m#n] and B[p#q], written A ⊗ B or KRON(A,B), is equal to the mp#nq matrix [a(1,1)B ... a(1,n)B ; ... ; a(m,1)B ... a(m,n)B]. It is also known as the direct product or tensor product of A and B. The Kronecker product operation is often denoted by a × sign enclosed in a circle, written here as ⊗. Note that in general A ⊗ B != B ⊗ A. In the expressions below a : suffix denotes vectorization.

• Associative: A ⊗ B ⊗ C = A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C
• Distributive: A ⊗ (B+C) = A ⊗ B + A ⊗ C
• Not Commutative: A ⊗ B = B ⊗ A iff A = cB for some scalar c
• det(A[m#m] ⊗ B[n#n]) = det(A)ⁿ det(B)ᵐ
• tr(A[n#n] ⊗ B[n#n]) = tr(A) tr(B)
• Aᵀ ⊗ Bᵀ = (A ⊗ B)ᵀ
• Aᴴ ⊗ Bᴴ = (A ⊗ B)ᴴ
• A⁻¹ ⊗ B⁻¹ = (A ⊗ B)⁻¹
• A ⊗ B is singular iff A or B is singular.
• A ⊗ B = I iff A = cI and B = c⁻¹I for some scalar c.
• I[m#m] ⊗ I[n#n] = I[mn#mn]
• A ⊗ B is orthogonal iff cA and c⁻¹B are orthogonal for some scalar c.
• A ⊗ B is diagonal iff A and B are diagonal.
• If Aa=pa and Bb=qb then (A ⊗ B)(a ⊗ b) = pq(a ⊗ b). The algebraic multiplicity of the eigenvalue pq is the product of the corresponding multiplicities of p and q.
• If Aa=pa and Bb=qb then (A ⊗ I + I ⊗ B)(a ⊗ b) = (p+q)(a ⊗ b)
• (baᵀ): = (aᵀ ⊗ b): = a ⊗ b, where the : suffix denotes vectorization.
• a ⊗ (BC) = (a ⊗ B)C
• aᵀ ⊗ (BC) = B(aᵀ ⊗ C)
• (AB) ⊗ c = (A ⊗ c)B
• (AB) ⊗ cᵀ = A(B ⊗ cᵀ)
• a ⊗ bᵀ = bᵀ ⊗ a = abᵀ
• aᵀ ⊗ b = b ⊗ aᵀ = baᵀ
• (aᵀ ⊗ B)(Cᵀ ⊗ d) = (aᵀCᵀ) ⊗ (Bd) = (Bd)(Ca)ᵀ
• (a ⊗ B)(Cᵀ ⊗ dᵀ) = (a ⊗ B)(C ⊗ d)ᵀ
• (a ⊗ b)(c ⊗ d)ᵀ = (baᵀ):(dcᵀ):ᵀ = a ⊗ b ⊗ cᵀ ⊗ dᵀ = cᵀ ⊗ a ⊗ dᵀ ⊗ b = acᵀ ⊗ bdᵀ
• (A ⊗ B)(C ⊗ D) = AC ⊗ BD
• A[m#n] ⊗ B[p#q] = (A ⊗ I[p#p])(I[n#n] ⊗ B) = (I[m#m] ⊗ B)(A ⊗ I[q#q])
• a[m] ⊗ B[p#q] = (a ⊗ I[p#p])B
• A[m#n] ⊗ b[p] = (I[m#m] ⊗ b)A
• a[m] ⊗ b[p] = (a ⊗ I[p#p])b = (I[m#m] ⊗ b)a
• I[n#n] ⊗ AB = (I[n#n] ⊗ A)(I[n#n] ⊗ B)
• AB ⊗ I[n#n] = (A ⊗ I[n#n])(B ⊗ I[n#n])
• abᴴ ⊗ cdᴴ = (a ⊗ c)(b ⊗ d)ᴴ = (caᵀ):(dbᵀ):ᴴ
• aᴴb cᴴd = (aᴴ ⊗ cᴴ)(b ⊗ d) = (a ⊗ c)ᴴ(b ⊗ d) = (caᵀ):ᴴ(dbᵀ):
• (A ⊗ B)ᴴ(A ⊗ B) = AᴴA ⊗ BᴴB
• (ABC): = (Cᵀ ⊗ A) B:
• (AB): = (I ⊗ A) B: = (Bᵀ ⊗ I) A: = (Bᵀ ⊗ A) I:
• (Abcᵀ): = (c ⊗ A) b = c ⊗ Ab
• ABc = (cᵀ ⊗ A) B:
• aᵀBc = (c ⊗ a)ᵀ B: = (acᵀ):ᵀ B: = (a ⊗ c)ᵀ Bᵀ: = B:ᵀ (c ⊗ a) = B:ᵀ (acᵀ):
• (ABC):ᵀ = B:ᵀ (C ⊗ Aᵀ)
• (AB):ᵀ = B:ᵀ (I ⊗ Aᵀ) = A:ᵀ (B ⊗ I) = I:ᵀ (B ⊗ Aᵀ)
• (Abcᵀ):ᵀ = bᵀ(cᵀ ⊗ Aᵀ) = cᵀ ⊗ bᵀAᵀ
• aᵀBᵀC = B:ᵀ (a ⊗ C)
• ((ABC)ᵀ): = (CᵀBᵀAᵀ): = (A ⊗ Cᵀ) Bᵀ:
• ABc = (A ⊗ cᵀ) Bᵀ:
• B[m#n]c = (I[m#m] ⊗ cᵀ) Bᵀ:
• If Y = AXB + CXD + ... then Y: = (Bᵀ ⊗ A + Dᵀ ⊗ C + ...) X:; however, this is a slow and often ill-conditioned way of solving such equations for X.
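The mixed-product rule and the vectorization identity (ABC): = (Cᵀ ⊗ A) B: can be checked with `numpy.kron`; note that the column-stacking vectorization corresponds to Fortran-order flattening (this sketch is ours, not the manual's):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

def vec(M):
    """Column-stacking vectorization, written M: in the text."""
    return M.flatten(order="F")

# (ABC): = (Cᵀ ⊗ A) B:
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# mixed product rule: (A ⊗ B)(P ⊗ Q) = AP ⊗ BQ
P = rng.standard_normal((3, 2))
Q = rng.standard_normal((4, 5))
assert np.allclose(np.kron(A, B) @ np.kron(P, Q), np.kron(A @ P, B @ Q))
```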

In the identities below, In = I[n#n] and Tm,n = TVEC(m,n) [see vectorized transpose]

• B[p#q] ⊗ A[m#n] = Tp,m (A ⊗ B) Tn,q
• (A[m#n] ⊗ B[p#q]) Tn,q = Tm,p (B ⊗ A)
• a[m] ⊗ B[p#q] = (a ⊗ Ip)B = Tm,p (B ⊗ a)
• A[m#n] ⊗ b[p] = (Im ⊗ b)A = Tm,p (b ⊗ A)
• a[m] ⊗ b[p] = (a ⊗ Ip)b = (Im ⊗ b)a
• (A ⊗ b): = A: ⊗ b
• (a[m] ⊗ B[p#q]): = (Tq,m ⊗ Ip)(a ⊗ B:) = (Iq ⊗ a ⊗ Ip) B:
• (A[m#n] ⊗ B[p#q]): = (In ⊗ Tq,m ⊗ Ip)(A: ⊗ B:) = (In ⊗ Tq,m ⊗ Ip)(A ⊗ B:): = (Tn,q ⊗ Imp)(A: ⊗ B): = (Inq ⊗ Tm,p)(B: ⊗ A):
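The first identity can be checked by building TVEC explicitly; the helper `tvec` below is our own construction of the permutation matrix satisfying (Aᵀ): = TVEC(m,n) A: for m#n matrices A.

```python
import numpy as np

def tvec(m, n):
    """Permutation matrix T with vec(Aᵀ) = T vec(A) for any m#n matrix A."""
    T = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry A[i,j] sits at position i + j*m in vec(A)
            # and at position j + i*n in vec(Aᵀ)
            T[j + i * n, i + j * m] = 1.0
    return T

rng = np.random.default_rng(4)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
vec = lambda M: M.flatten(order="F")

assert np.allclose(tvec(m, n) @ vec(A), vec(A.T))

# B ⊗ A = Tp,m (A ⊗ B) Tn,q
assert np.allclose(np.kron(B, A), tvec(p, m) @ np.kron(A, B) @ tvec(n, q))
```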

### Kronecker Sum

The Kronecker sum of two square matrices A[m#m] and B[n#n] is equal to (A ⊗ In) + (Im ⊗ B). It is sometimes written A ⊕ B, but in these pages this notation is reserved for the direct sum.
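A useful consequence of the eigenvector identity above is that the eigenvalues of a Kronecker sum are all sums p + q of an eigenvalue p of A and an eigenvalue q of B; a small numerical check (ours, using symmetric matrices so the eigenvalues are real):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # eigenvalues (5 ± √5)/2
B = np.array([[1.0, 0.0], [0.0, 5.0]])   # eigenvalues 1, 5
m, n = 2, 2

# Kronecker sum: (A ⊗ In) + (Im ⊗ B)
K = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

# its eigenvalues are all pairwise sums of the eigenvalues of A and B
sums = np.add.outer(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)).ravel()
assert np.allclose(np.sort(np.linalg.eigvalsh(K)), np.sort(sums))
```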

### Loewner Partial Order

We can define a partial order on the set of Hermitian matrices by writing A>=B iff A-B is positive semidefinite and A>B iff A-B is positive definite.

• The partial order is:
• reflexive: A>=A for all A.
• antisymmetric: A>=B and B>=A are both true iff A=B.
• transitive: If A>=B and B>=C then A>=C.
• Any pair of hermitian matrices, A and B, satisfy precisely one of the following:
1. None of the relations A<B, A<=B, A=B, A>=B, A>B is true.
2. A<B and A<=B only are true.
3. A<=B only is true.
4. A=B, A<=B and A>=B only are true.
5. A>=B only is true.
6. A>B  and A>=B only are true.
• A>=B iff xᴴAx >= xᴴBx for all x, where >= has its normal scalar meaning (likewise for >).
• A>=B iff DᴴAD >= DᴴBD for any, not necessarily square, D (not true for >).
• A>B iff DᴴAD > DᴴBD for any non-singular D.
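Since the order is only partial, some pairs are incomparable; a sketch testing the order via eigenvalues of the difference (the helper name `loewner_geq` is ours):

```python
import numpy as np

def loewner_geq(A, B, tol=1e-12):
    """A >= B in the Loewner order iff A - B is positive semi-definite."""
    return bool(np.min(np.linalg.eigvalsh(A - B)) >= -tol)

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[3.0, 0.0], [0.0, 0.0]])

assert loewner_geq(A, B)                                # A - B = I is PSD
assert not loewner_geq(A, C) and not loewner_geq(C, A)  # incomparable pair
```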

### Orthogonal Similarity

Real square matrices A and B are orthogonally similar if there exists an orthogonal Q such that B = QᵀAQ.

Orthogonal similarity implies both similarity and congruence.

### Similarity

Square matrices A and B are similar (also called conjugate) if there exists a non-singular X such that B = X⁻¹AX.

Similar matrices represent the same linear transformation in a different basis. Similarity implies equivalence.
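Because similar matrices share a characteristic polynomial, invariants such as the trace and determinant are preserved; a quick NumPy check (ours, not the manual's):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))     # almost surely non-singular
B = np.linalg.solve(X, A @ X)       # B = X⁻¹AX, similar to A

# similar matrices share the characteristic polynomial, hence trace and determinant
assert np.allclose(np.trace(A), np.trace(B))
assert np.allclose(np.linalg.det(A), np.linalg.det(B))
```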

### Unitary Similarity or Unitary Equivalence

Square matrices A and B are unitarily similar if there exists a unitary Q such that B = QᴴAQ. Unitary similarity is an equivalence relation and implies both similarity and conjunctivity.
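Unlike general similarity, unitary similarity also preserves the Frobenius norm; a short illustrative check (ours) using a unitary Q obtained from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(M)              # Q is unitary
B = Q.conj().T @ A @ Q              # B = QᴴAQ, unitarily similar to A

# unitary similarity preserves the Frobenius norm; general similarity does not
assert np.allclose(np.linalg.norm(A), np.linalg.norm(B))
```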

This page is part of The Matrix Reference Manual. Copyright © 1998-2021 Mike Brookes, Imperial College, London, UK. See the file gfl.html for copying instructions. Please send any comments or suggestions to "mike.brookes" at "imperial.ac.uk".
Updated: $Id: relation.html 11291 2021-01-05 18:26:10Z dmb $