Matrix Properties




Adjoint or Adjugate

The adjoint (or adjugate) of A, ADJ(A), is the transpose of the matrix formed by taking the cofactor of each element of A.
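As a numerical sketch (the helper name `adjugate` and the matrix values are chosen for illustration), the definition can be checked against the identity A ADJ(A) = det(A) I:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # submatrix with row i and column j deleted
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C.T  # adjugate = transpose of the cofactor matrix

A = np.array([[2.0, 1.0], [5.0, 3.0]])
# Key identity: A @ ADJ(A) = det(A) * I
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(2))
```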


Characteristic Equation

The characteristic equation of a matrix A[n#n] is |tI-A| = 0. It is a polynomial equation in t.

The properties of the characteristic equation are described in the section on eigenvalues.


Characteristic Matrix

The characteristic matrix of A[n#n] is (tI-A) and is a function of the scalar t.

The properties of the characteristic matrix are described in the section on eigenvalues.


Characteristic Polynomial

The characteristic polynomial, p(t), of a matrix A[n#n] is p(t) = |tI - A|.
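A quick numpy illustration (matrix values chosen for illustration): `np.poly` of a square matrix returns the coefficients of |tI - A| in descending powers of t, and its roots are the eigenvalues of A:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
p = np.poly(A)   # coefficients of |tI - A|; here t^2 - 4t + 3
# The roots of the characteristic polynomial are the eigenvalues of A
assert np.allclose(np.sort(np.roots(p)), np.sort(np.linalg.eigvals(A)))
```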

The properties of the characteristic polynomial are described in the section on eigenvalues.


Cofactor

The cofactor of a minor of A:n#n is equal to the product of (i) the determinant of the submatrix consisting of all the rows and columns that are not in the minor and (ii) -1 raised to the power of the sum of all the row and column indices that are in the minor.

See Minor, Adjoint


Compound Matrix

The kth compound matrix of A[m#n] is the m!/(k!(m-k)!) # n!/(k!(n-k)!) matrix formed from the determinants of all k#k submatrices of A, arranged with the submatrix index sets in lexicographic order. Within this section, we denote this matrix by Ck(A).
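A sketch of the construction in numpy (the helper name `compound` is chosen for illustration); the shape assertion checks the binomial-coefficient dimensions stated above:

```python
import numpy as np
from itertools import combinations
from math import comb

def compound(A, k):
    """k-th compound matrix: determinants of all k#k submatrices of A,
    with row/column index sets taken in lexicographic order."""
    m, n = A.shape
    rows = list(combinations(range(m), k))
    cols = list(combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

A = np.arange(12.0).reshape(4, 3)
C2 = compound(A, 2)
# Size is m!/(k!(m-k)!) # n!/(k!(n-k)!) = C(4,2) # C(3,2)
assert C2.shape == (comb(4, 2), comb(3, 2))
```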


Condition Number

The condition number of a matrix is its largest singular value divided by its smallest singular value.
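Numerically (matrix values chosen for illustration), the ratio of extreme singular values agrees with numpy's 2-norm condition number:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa = s[0] / s[-1]                     # largest / smallest
assert np.isclose(kappa, np.linalg.cond(A, 2))
```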


Conjugate Transpose

X=Y^H is the Hermitian transpose or Conjugate transpose of Y iff x(i,j)=conj(y(j,i)).

See Hermitian Transpose.


Constructibility

The pair of matrices {A[n#n], C[m#n]} are constructible iff {A^H, C^H} are controllable.


Controllability

The pair of matrices {A[n#n], B[n#m]} are controllable iff any of the following equivalent conditions are true

  1. There exists a G[mn#n] such that A^n = CG where C = [B, AB, A^2B, ..., A^(n-1)B][n#mn] is the controllability matrix.
  2. If x^T A^r B = 0 for 0<=r<n then x^T A^n = 0.
  3. If x^T B = 0 and x^T A = kx^T then either k=0 or else x = 0.
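Condition 1 can be checked numerically by solving CG = A^n by least squares and verifying the residual is zero (a numpy sketch; the pair A, B is a hypothetical example):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# Controllability matrix C = [B, AB] (n = 2 here)
C = np.hstack([B, A @ B])

# Condition 1: a G[mn#n] with A^n = CG exists
An = np.linalg.matrix_power(A, n)
G, *_ = np.linalg.lstsq(C, An, rcond=None)
assert np.allclose(C @ G, An)
```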

Definiteness

A Hermitian square matrix A is

  1. positive definite if x^H A x > 0 for all non-zero x,
  2. positive semidefinite if x^H A x >= 0 for all x,
  3. negative definite if x^H A x < 0 for all non-zero x,
  4. negative semidefinite if x^H A x <= 0 for all x,
  5. indefinite if x^H A x takes both positive and negative values.

This definition only applies to Hermitian and real-symmetric matrices; if A is non-real and non-Hermitian then x^H A x is complex for some values of x and so the concept of definiteness does not make sense. Some authors also call a real non-symmetric matrix positive definite if x^H A x > 0 for all non-zero real x; this is true iff its symmetric part is positive definite (see below). We abbreviate positive as +ve below.


Detectability

The pair of matrices {A[n#n], C[m#n]} are detectable iff {A^H, C^H} are stabilizable.

If {A, C} are observable or constructible then they are detectable.


Determinant

For an n#n matrix A, det(A) is a scalar number defined by det(A)=sgn(PERM(n))'*prod(A(1:n,PERM(n)))

This is the sum of n! terms, each involving the product of n matrix elements of which exactly one comes from each row and each column. Each term is multiplied by the signature (+1 or -1) of the column-order permutation. See the notation section for definitions of sgn(), prod() and PERM().

The determinant is important because INV(A) exists iff det(A) != 0.
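The permutation-sum definition can be spelled out directly (a numpy sketch; the helper name and matrix values are chosen for illustration) and compared against numpy's determinant:

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Sum over all n! column permutations of the signed product of entries."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # signature: parity of the number of inversions in the permutation
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inversions % 2 else 1.0
        total += sign * np.prod([A[i, perm[i]] for i in range(n)])
    return total

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
```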

Geometric Interpretation

The determinant of a matrix equals the signed volume of the n-dimensional parallelepiped that has the matrix columns as n of its sides. If a vector space is transformed by multiplying by a matrix A, then all volumes will be multiplied by det(A).

Properties of Determinants

Determinants of simple matrices

Determinants of sums and products

Determinants of block matrices

In this section we have A[m#m], B[m#n], C[n#m] and D[n#n].

See also Grammian, Schur Complement


Displacement Rank

The displacement rank of X[m#n] is given by dis_rank(X) = rank(X - Z X Z^T) where the two Z are shift matrices of size m#m and n#n respectively.
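A numpy sketch (assuming the Z are lower shift matrices, i.e. ones on the first subdiagonal; the Toeplitz example is chosen for illustration). A Toeplitz matrix has displacement rank at most 2, since Z X Z^T shifts X one step down and right, so the difference is non-zero only in the first row and column:

```python
import numpy as np

def displacement_rank(X):
    """rank(X - Zm X Zn^T) with Zk the k#k lower shift matrix."""
    m, n = X.shape
    Zm = np.eye(m, k=-1)   # ones on the first subdiagonal
    Zn = np.eye(n, k=-1)
    return np.linalg.matrix_rank(X - Zm @ X @ Zn.T)

T = np.array([[4.0, 1.0, 2.0],
              [3.0, 4.0, 1.0],
              [5.0, 3.0, 4.0]])   # Toeplitz: constant diagonals
assert displacement_rank(T) <= 2
```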


Eigenvalues

The eigenvalues of A are the roots of its characteristic equation: |tI-A| = 0.

The properties of the eigenvalues are described in the section on eigenvalues.


Field of Values

The field of values of a square matrix A is the set of complex numbers x^H A x for all x with ||x||=1.


Generalized Inverse

A generalized inverse of X[m#n] is any matrix X#[n#m] satisfying XX#X=X. Note that if X is singular or non-square, then X# is not unique. This is also called a weak generalized inverse to distinguish it from the pseudoinverse.

See also: Pseudoinverse


Gram Matrix

The gram matrix of X, GRAM(X), is the matrix X^H X.

If X is m#n, the elements of GRAM(X) are the n^2 possible inner products between pairs of its columns. We can form such a matrix from n vectors in any vector space having an inner product.

See also: Grammian


Grammian

The grammian of a matrix X, gram(X), equals det(GRAM(X)) = det(X^H X).

Geometric Interpretation

The grammian of X[m#n] is the squared "volume" of the n-dimensional parallelepiped spanned by the columns of X.
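For example (values chosen for illustration), two orthogonal columns of lengths 1 and 2 span a rectangle of area 2, so the grammian is 4:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
G = X.conj().T @ X              # Gram matrix X^H X
grammian = np.linalg.det(G)
# Columns span a 1-by-2 rectangle in 3-space: area 2, squared area 4
assert np.isclose(grammian, 4.0)
```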

See also: Gram Matrix


Hermitian Transpose or Conjugate Transpose

X=Y^H is the Hermitian transpose or Conjugate transpose of Y iff x(i,j)=conj(y(j,i)).


Inertia

The inertia of an m#m square matrix is the scalar triple (p,n,z) where p+n+z=m and p, n and z are respectively the number of eigenvalues, counting multiplicities, with positive, negative and zero real parts. Some authors call this the signature rather than the inertia.
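Counting eigenvalue real-part signs directly (a numpy sketch; the diagonal matrix and tolerance are chosen for illustration):

```python
import numpy as np

A = np.diag([2.0, -1.0, 0.0])
w = np.linalg.eigvals(A)
tol = 1e-12
p = int(np.sum(w.real > tol))           # positive real parts
n = int(np.sum(w.real < -tol))          # negative real parts
z = int(np.sum(np.abs(w.real) <= tol))  # zero real parts
assert (p, n, z) == (1, 1, 1) and p + n + z == A.shape[0]
```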


Inverse

B is a left inverse of A if BA=I. B is a right inverse of A if AB=I.

If BA=AB=I then B is the inverse of A and we write B=A^-1.

Inverse of Block Matrices

See also: Generalized Inverse, Pseudoinverse, Inversion Lemma


Kernel

The kernel (or null space) of A is the subspace of vectors x for which Ax = 0. The dimension of this subspace is the nullity of A.


Linear Independence

The columns of A are linearly independent iff the only solution to Ax=0 is x=0.


Matrix Norms

A matrix norm is a real-valued function of a square matrix satisfying the four axioms listed below. A generalized matrix norm satisfies only the first three.

  1. Positive: ||X||=0 iff X=0 else ||X||>0
  2. Homogeneous: ||cX||=|c| ||X|| for any real or complex scalar c
  3. Triangle Inequality: ||X+Y||<=||X||+||Y||
  4. Submultiplicative: ||XY||<=||X|| ||Y||

Induced Matrix Norm

If ||y|| is a vector norm, then we define the induced matrix norm to be ||X||=max(||Xy|| for ||y||=1)

Euclidean or Frobenius Norm

The Euclidean or Frobenius norm of a matrix A is given by ||A||F = sqrt(sum(abs(A)^2)), the square root of the sum of the squared absolute values of all its elements. It is always a real number. The closely related Hilbert-Schmidt norm of a square matrix A[n#n] is given by ||A||HS = n^(-1/2) ||A||F.
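The elementwise formula agrees with numpy's built-in Frobenius norm (values chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])
fro = np.sqrt(np.sum(np.abs(A) ** 2))   # sqrt of sum of squared elements
assert np.isclose(fro, np.linalg.norm(A, 'fro'))
```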

p-Norms

||A||p = max(||Ax||p) where the max() is taken over all x with ||x||p = 1, and ||x||p = (sum(abs(x)^p))^(1/p) denotes the vector p-norm for p>=1.
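For p=1 the induced norm has a closed form, the maximum absolute column sum, which can be checked against numpy (values chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])
col_sums = np.abs(A).sum(axis=0)        # [4, 6]
# Induced 1-norm equals the maximum absolute column sum
assert np.isclose(col_sums.max(), np.linalg.norm(A, 1))
```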


Minor

A kth-order minor of A is the determinant of a k#k submatrix of A.

A principal minor is the determinant of a submatrix whose diagonal elements lie on the principal diagonal of A.


Null Space

The null space (or kernel) of A is the subspace of vectors x for which Ax = 0.


Nullity

The nullity of a matrix A is the dimension of the null space of A.


Observability

The pair of matrices {A[n#n], C[m#n]} are observable iff {A^H, C^H} are reachable.


Permanent

For an n#n matrix A, per(A) is a scalar number defined by per(A)=sum(prod(A(1:n,PERM(n))))

This is the same as the determinant except that the individual terms within the sum are not multiplied by the signatures of the column permutations.

Properties of Permanents

Permanents of simple matrices


Potency

The potency of a non-negative matrix A is the smallest n>0 such that diag(An) > 0 i.e. all diagonal elements of An are strictly positive. If no such n exists then A is impotent.
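A small numpy check (matrix chosen for illustration): for the 2#2 permutation matrix below, diag(A) is not strictly positive but diag(A^2) is, so its potency is 2:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # non-negative permutation matrix
# n = 1 fails: diag(A) = [0, 0]
assert not np.all(np.diag(A) > 0)
# n = 2 succeeds: A^2 = I, so diag(A^2) = [1, 1] > 0
assert np.all(np.diag(np.linalg.matrix_power(A, 2)) > 0)
```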


Pseudoinverse

The pseudoinverse (also called the Natural Inverse or Moore-Penrose Pseudoinverse) of X[m#n] is the unique [1.20] n#m matrix X+ that satisfies:

  1. XX+X=X    (i.e. X+ is a generalized inverse of X).
  2. X+XX+=X+    (i.e. X is a generalized inverse of X+).
  3. (XX+)H=XX+
  4. (X+X)H=X+X
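All four conditions can be verified numerically with numpy's pseudoinverse (matrix values chosen for illustration):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
Xp = np.linalg.pinv(X)

# The four Moore-Penrose conditions
assert np.allclose(X @ Xp @ X, X)
assert np.allclose(Xp @ X @ Xp, Xp)
assert np.allclose((X @ Xp).conj().T, X @ Xp)
assert np.allclose((Xp @ X).conj().T, Xp @ X)
```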

See also: Inverse, Generalized Inverse


Rank

The rank of an m#n matrix A is the smallest r for which there exist F[m#r] and G[r#n] such that A=FG. Such a decomposition is a full-rank decomposition. As a special case, the rank of 0 is 0.
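For example (values chosen for illustration), an outer product F G with F[m#1] and G[1#n] is a full-rank decomposition with r = 1:

```python
import numpy as np

F = np.array([[1.0], [2.0], [3.0]])   # m#r with r = 1
G = np.array([[4.0, 5.0]])            # r#n
A = F @ G                             # full-rank decomposition A = FG
assert np.linalg.matrix_rank(A) == 1
```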


Range

The range (or image) of A is the subspace of vectors that equal Ax for some x. The dimension of this subspace is the rank of A.


Reachability

The pair of matrices {A[n#n], B[n#m]} are reachable iff any of the following equivalent conditions are true

  1. rank(C)=n where C = [B, AB, A^2B, ..., A^(n-1)B][n#mn] is the controllability matrix.
  2. If x^H A^r B = 0 for 0<=r<n then x = 0.
  3. If x^H B = 0 and x^H A = kx^H then x = 0.
  4. For any v, it is possible to choose L[n#m] such that eig(A+BL^H)=v.
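Condition 1 is the standard rank test, sketched here in numpy for a hypothetical single-input pair:

```python
import numpy as np

A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]
C = np.hstack([B, A @ B])                 # [B, AB] for n = 2
assert np.linalg.matrix_rank(C) == n      # full rank n: {A, B} reachable
```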

Schur Complement

Given a block matrix M = [A[m#m], B; C, D[n#n]], then P[n#n]=D-CA^-1B and Q[m#m]=A-BD^-1C are respectively the Schur Complements of A and D in M.
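A numpy sketch (block values chosen for illustration) checking the familiar determinant factorization det(M) = det(A) det(D - C A^-1 B):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[5.0]])
M = np.block([[A, B], [C, D]])

P = D - C @ np.linalg.inv(A) @ B     # Schur complement of A in M
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(P))
```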


Spectral Radius

The spectral radius, rho(A), of A[n#n] is the maximum modulus of any of its eigenvalues.
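Numerically (matrix chosen for illustration), with eigenvalues 2 and -1 the spectral radius is 2:

```python
import numpy as np

A = np.array([[0.0, 2.0], [1.0, 1.0]])   # eigenvalues 2 and -1
rho = np.max(np.abs(np.linalg.eigvals(A)))
assert np.isclose(rho, 2.0)
```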


Spectrum

The spectrum of A[n#n] is the set of all its eigenvalues.


Stabilizability

The pair of matrices {A[n#n], B[n#m]} are stabilizable iff either of the following equivalent conditions are true

  1. If xTB = 0 and xTA = kxT then either |k|< 1 or else x = 0.
  2. It is possible to choose L[n#m] such that all elements of eig(A+BLH) have absolute value < 1.

Submatrix

A submatrix of A is a matrix formed by the elements a(i,j) where i ranges over a subset of the rows and j ranges over a subset of the columns.


Trace

The trace of a square matrix is the sum of its diagonal elements: tr(A)=sum(diag(A))

In the formulae below, we assume that matrix dimensions ensure that the argument of tr() is square.
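One frequently used trace identity, tr(AB) = tr(BA), holds even when AB and BA have different sizes; a quick numpy check:

```python
import numpy as np

A = np.random.rand(3, 4)
B = np.random.rand(4, 3)
# tr(AB) = tr(BA) even though AB is 3#3 and BA is 4#4
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# trace = sum of diagonal elements
assert np.isclose(np.trace(A @ B), np.sum(np.diag(A @ B)))
```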


Transpose

X=YT is the transpose of Y iff x(i,j)=y(j,i).


Vectorization

The vector formed by concatenating all the columns of X is written vec(X) or, in this website, X:. If y = X[m#n]: then y(i+m(j-1)) = x(i,j).
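In numpy, vectorization is column-major ('F' order) flattening; the 0-based analogue of the index identity is y[i + m*j] = x[i, j]:

```python
import numpy as np

X = np.array([[1, 4], [2, 5], [3, 6]])   # m = 3, n = 2
y = X.flatten(order='F')                  # stack the columns
assert (y == np.array([1, 2, 3, 4, 5, 6])).all()
# 0-based index identity: y[i + m*j] == X[i, j]
m = X.shape[0]
assert all(y[i + m * j] == X[i, j] for i in range(3) for j in range(2))
```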


Vector Norms

A vector norm is a real-valued function of a vector satisfying the three axioms listed below.

  1. Positive: ||x||=0 iff x=0 else ||x||>0
  2. Homogeneous: ||cx||=|c| ||x|| for any real or complex scalar c
  3. Triangle Inequality: ||x+y||<=||x||+||y||

Inner Product Norm

If <x, y> is an inner product then ||x|| = sqrt(<x, x>) is a vector norm.

Euclidean Norm

The Euclidean norm of a vector x equals the square root of the sum of the squares of the absolute values of all its elements and is written ||x||. It is always a real number and corresponds to the normal notion of the vector's length.

Hölder Norms or p-Norms

The p-norm of a vector x is defined by ||x||p = (sum(abs(x)^p))^(1/p) for p>=1. The most common values of p are 1, 2 and infinity.
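The three common cases can be checked against numpy's built-in vector norms (the infinity-norm is the limit max(abs(x)); vector values chosen for illustration):

```python
import numpy as np

x = np.array([3.0, -4.0])
for p in (1, 2, np.inf):
    if p == np.inf:
        mine = np.max(np.abs(x))                    # limit as p -> infinity
    else:
        mine = np.sum(np.abs(x) ** p) ** (1.0 / p)  # (sum |x_i|^p)^(1/p)
    assert np.isclose(mine, np.linalg.norm(x, p))
```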


This page is part of The Matrix Reference Manual. Copyright © 1998-2022 Mike Brookes, Imperial College, London, UK. See the file gfl.html for copying instructions. Please send any comments or suggestions to "mike.brookes" at "imperial.ac.uk".
Updated: $Id: property.html 11291 2021-01-05 18:26:10Z dmb $