Rank (linear algebra)

In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.

The rank is commonly denoted rank(A) or rk(A); sometimes the parentheses are not written, as in rank A.


Main definitions

In this section we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these.

The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A.

A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Two proofs of this result are given in Proofs that column rank = row rank below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A.

A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank deficient if it does not have full rank.

The rank is also the dimension of the image of the linear transformation that is given by multiplication by A. More generally, if a linear operator on a vector space (possibly infinite-dimensional) has finite-dimensional image (e.g., a finite-rank operator), then the rank of the operator is defined as the dimension of the image.
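
For readers who want to experiment, this characterization is easy to check numerically. A minimal sketch in Python (assuming NumPy and SciPy are available; `scipy.linalg.orth` returns an orthonormal basis of the image of x -> Ax):

```python
import numpy as np
from scipy.linalg import orth

# A rank-deficient 4 x 3 matrix: its third column is the sum of the first two.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 2.0],
              [1.0, 1.0, 2.0]])

# Dimension of the image of x -> Ax: number of orthonormal basis vectors.
print(orth(A).shape[1])            # 2
print(np.linalg.matrix_rank(A))    # 2, the same number
```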


Examples

The matrix

$$\begin{bmatrix} 1 & 0 & 1 \\ -2 & -3 & 1 \\ 3 & 3 & 0 \end{bmatrix}$$

has rank 2: the first two rows are linearly independent, so the rank is at least 2, but all three rows together are linearly dependent (the third row equals the first minus the second), so the rank must be less than 3.

The matrix

$$A = \begin{bmatrix} 1 & 1 & 0 & 2 \\ -1 & -1 & 0 & -2 \end{bmatrix}$$

has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose

$$A^{\mathrm{T}} = \begin{bmatrix} 1 & -1 \\ 1 & -1 \\ 0 & 0 \\ 2 & -2 \end{bmatrix}$$

of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., $\operatorname{rk}(A) = \operatorname{rk}(A^{\mathrm{T}})$.
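
These claims can be verified numerically; a minimal sketch with NumPy (one tool among many):

```python
import numpy as np

# The two example matrices above.
A1 = np.array([[1, 0, 1],
               [-2, -3, 1],
               [3, 3, 0]])
A2 = np.array([[1, 1, 0, 2],
               [-1, -1, 0, -2]])

print(np.linalg.matrix_rank(A1))      # 2
print(np.linalg.matrix_rank(A2))      # 1
print(np.linalg.matrix_rank(A2.T))    # 1: rk(A) == rk(A^T)
```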


Computing the rank of a matrix

Rank from row echelon forms

A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows.

For example, the matrix A given by

$$A = \begin{bmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{bmatrix}$$

can be put in reduced row-echelon form by using the following elementary row operations:

$$\begin{bmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{bmatrix}
\xrightarrow{R_2 \to 2R_1 + R_2}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 3 \\ 3 & 5 & 0 \end{bmatrix}
\xrightarrow{R_3 \to -3R_1 + R_3}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 3 \\ 0 & -1 & -3 \end{bmatrix}
\xrightarrow{R_3 \to R_2 + R_3}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}
\xrightarrow{R_1 \to -2R_2 + R_1}
\begin{bmatrix} 1 & 0 & -5 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}.$$

The final matrix (in reduced row echelon form) has two non-zero rows, and thus the rank of matrix A is 2.
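
For an exact symbolic check of this reduction, SymPy's `rref` method works over the rationals, so there are no rounding concerns (a sketch, assuming SymPy is available):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [-2, -3, 1],
            [3, 5, 0]])

R, pivots = A.rref()   # reduced row echelon form and pivot column indices
print(R)               # Matrix([[1, 0, -5], [0, 1, 3], [0, 0, 0]])
print(pivots)          # (0, 1) -> two pivots, so rank(A) == 2
print(A.rank())        # 2
```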

Computation

When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
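
A minimal sketch of the SVD-based approach; the cutoff below follows the common `max(m, n) * eps * sigma_max` convention, but as noted above the right choice is application-dependent:

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Rank = number of singular values above a cutoff."""
    s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    if tol is None:
        # Common default: max dimension * machine epsilon * largest singular value.
        tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    return int((s > tol).sum())

# Exact rank 2 (third row = first + second), but the floating-point SVD may
# return a tiny nonzero third singular value; the tolerance absorbs it.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(numerical_rank(A))                     # 2
```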


Proofs that column rank = row rank

The fact that the column and row ranks of any matrix are equal forms an important part of the fundamental theorem of linear algebra. We present two proofs of this result. The first is short, uses only basic properties of linear combinations of vectors, and is valid over any field; it is based upon Wardlaw (2005). The second is an elegant argument using orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014).

First proof

Let A be a matrix of size m × n (with m rows and n columns). Let the column rank of A be r and let c1,...,cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose i-th column is formed from the coefficients giving the i-th column of A as a linear combination of the r columns of C. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A. This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see rank factorization.)
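
The factorization A = CR constructed in this proof can be reproduced numerically. A sketch, assuming we already know a column basis (here, the first two columns of the rank-2 example matrix from above):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [3.0, 5.0, 0.0]])    # rank 2; columns 0 and 1 are independent

C = A[:, :2]                       # m x r: a basis of the column space
# Solve C @ R = A for R (r x n): each column of A expressed as a
# linear combination of the columns of C. The solution is exact here
# because every column of A lies in the column space of C.
R, *_ = np.linalg.lstsq(C, A, rcond=None)

print(np.allclose(C @ R, A))       # True: A factors as C R
print(C.shape, R.shape)            # (3, 2) (2, 3)
```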

Second proof

Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let $x_1, x_2, \ldots, x_r$ be a basis of the row space of A. We claim that the vectors $Ax_1, Ax_2, \ldots, Ax_r$ are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients $c_1, c_2, \ldots, c_r$:

$$0 = c_1 A x_1 + c_2 A x_2 + \cdots + c_r A x_r = A(c_1 x_1 + c_2 x_2 + \cdots + c_r x_r) = Av,$$

where $v = c_1 x_1 + c_2 x_2 + \cdots + c_r x_r$. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v,

$$c_1 x_1 + c_2 x_2 + \cdots + c_r x_r = 0.$$

But recall that the $x_i$ were chosen as a basis of the row space of A and so are linearly independent. This implies that $c_1 = c_2 = \cdots = c_r = 0$. It follows that $Ax_1, Ax_2, \ldots, Ax_r$ are linearly independent.

Now, each $Ax_i$ is obviously a vector in the column space of A. So, $Ax_1, Ax_2, \ldots, Ax_r$ is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least r. This proves that the row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.
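
A small numerical illustration of this argument, using the rank-2 example matrix from above (its first two rows form a basis of its row space):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [3.0, 5.0, 0.0]])          # row rank 2; row 3 = row 1 - row 2

x1, x2 = A[0], A[1]                      # first two rows: a row-space basis
images = np.column_stack([A @ x1, A @ x2])

# A x1 and A x2 are linearly independent, as the proof predicts.
print(np.linalg.matrix_rank(images))     # 2
```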


Alternative definitions

In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F.

Dimension of image

Given the matrix A, there is an associated linear mapping

$$f \colon F^n \to F^m$$

defined by

$$f(x) = Ax.$$

The rank of A is the dimension of the image of f. This definition has the advantage that it can be applied to any linear map without need for a specific matrix.

Rank in terms of nullity

Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The rank-nullity theorem states that this definition is equivalent to the preceding one.
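
A quick numerical check of the relation rank = n - nullity (a sketch using `scipy.linalg.null_space` for the kernel):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 0.0, 2.0],
              [-1.0, -1.0, 0.0, -2.0]])   # the rank-1 example, n = 4 columns

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]          # dimension of the kernel of f
print(rank, nullity, rank + nullity)      # 1 3 4 -> rank = n - nullity
```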

Column rank - dimension of column space

The rank of A is the maximal number of linearly independent columns $c_1, c_2, \ldots, c_k$ of A; this is the dimension of the column space of A (the column space being the subspace of $F^m$ generated by the columns of A, which is in fact just the image of the linear map f associated to A).

Row rank - dimension of row space

The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A.

Decomposition rank

The rank of A is the smallest integer k such that A can be factored as $A = CR$, where C is an m × k matrix and R is a k × n matrix. In fact, for all integers k, the following are equivalent:

  1. the column rank of A is less than or equal to k,
  2. there exist k columns $c_1, \ldots, c_k$ of size m such that every column of A is a linear combination of $c_1, \ldots, c_k$,
  3. there exist an m × k matrix C and a k × n matrix R such that $A = CR$ (when k is the rank, this is a rank factorization of A),
  4. there exist k rows $r_1, \ldots, r_k$ of size n such that every row of A is a linear combination of $r_1, \ldots, r_k$,
  5. the row rank of A is less than or equal to k.

Indeed, the following equivalences are obvious: $(1) \Leftrightarrow (2) \Leftrightarrow (3) \Leftrightarrow (4) \Leftrightarrow (5)$. For example, to prove (3) from (2), take C to be the matrix whose columns are $c_1, \ldots, c_k$ from (2). To prove (2) from (3), take $c_1, \ldots, c_k$ to be the columns of C.

It follows from the equivalence $(1) \Leftrightarrow (5)$ that the row rank is equal to the column rank.

As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map $f \colon V \to W$ is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map $V \to X$ and a map $X \to W$. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details.

Determinantal rank - size of largest non-vanishing minor

The rank of A is the largest order of any non-zero minor in A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.

A non-vanishing p-minor (a p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent within the full matrix, so the row and column rank are at least as large as the determinantal rank; the converse, however, is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent).
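
Determinantal rank can be computed directly, if very inefficiently, by enumerating square submatrices. A brute-force sketch for small matrices (the `tol` cutoff is an illustrative choice):

```python
import numpy as np
from itertools import combinations

def determinantal_rank(A, tol=1e-10):
    """Largest p such that some p x p submatrix has non-zero determinant."""
    m, n = A.shape
    for p in range(min(m, n), 0, -1):            # try the largest minors first
        for rows in combinations(range(m), p):
            for cols in combinations(range(n), p):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return p
    return 0

A = np.array([[1.0, 0.0, 1.0],
              [-2.0, -3.0, 1.0],
              [3.0, 3.0, 0.0]])
print(determinantal_rank(A))             # 2: det(A) = 0, but a 2x2 minor is non-zero
print(np.linalg.matrix_rank(A))          # 2, in agreement
```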

Tensor rank - minimum number of simple tensors

The rank of A is the smallest number k such that A can be written as a sum of k rank-1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product $c \cdot r$ of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition.
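
For real matrices, the singular value decomposition makes such a sum concrete: writing $A = \sum_i \sigma_i u_i v_i^{\mathrm{T}}$ and keeping the terms with $\sigma_i \neq 0$ expresses A as a sum of rank(A) rank-1 matrices. A sketch:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [-2.0, -3.0, 1.0],
              [3.0, 3.0, 0.0]])            # rank 2

U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)

# Sum of r rank-1 matrices sigma_i * u_i v_i^T reconstructs A.
terms = [s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r)]
print(np.allclose(sum(terms), A))          # True
```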


Properties

We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above.

  • The rank of an m × n matrix is a nonnegative integer and cannot be greater than either m or n. That is,
$$\operatorname{rank}(A) \leq \min(m, n).$$
A matrix that has rank min(m, n) is said to have full rank; otherwise, the matrix is rank deficient.
  • Only a zero matrix has rank zero.
  • f is injective (or "one-to-one") if and only if A has rank n (in this case, we say that A has full column rank).
  • f is surjective (or "onto") if and only if A has rank m (in this case, we say that A has full row rank).
  • If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
  • If B is any n × k matrix, then
$$\operatorname{rank}(AB) \leq \min(\operatorname{rank}(A), \operatorname{rank}(B)).$$
  • If B is an n × k matrix of rank n, then
$$\operatorname{rank}(AB) = \operatorname{rank}(A).$$
  • If C is an l × m matrix of rank m, then
$$\operatorname{rank}(CA) = \operatorname{rank}(A).$$
  • The rank of A is equal to r if and only if there exist an invertible m × m matrix X and an invertible n × n matrix Y such that
$$XAY = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix},$$
where $I_r$ denotes the r × r identity matrix.
  • Sylvester's rank inequality: if A is an m × n matrix and B is n × k, then
$$\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB).$$
This is a special case of the next inequality.
  • The inequality due to Frobenius: if AB, ABC and BC are defined, then
$$\operatorname{rank}(AB) + \operatorname{rank}(BC) \leq \operatorname{rank}(B) + \operatorname{rank}(ABC).$$
  • Subadditivity:
$$\operatorname{rank}(A + B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)$$
when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer.
  • The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank-nullity theorem.)
  • If A is a matrix over the real numbers then the rank of A and the rank of its corresponding Gram matrix are equal. Thus, for real matrices
$$\operatorname{rank}(A^{\mathrm{T}}A) = \operatorname{rank}(AA^{\mathrm{T}}) = \operatorname{rank}(A) = \operatorname{rank}(A^{\mathrm{T}}).$$
This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by the vectors x for which $A^{\mathrm{T}}Ax = 0$. If this condition is fulfilled, we also have $0 = x^{\mathrm{T}}A^{\mathrm{T}}Ax = \left\|Ax\right\|^2$, hence $Ax = 0$; the reverse inclusion is immediate, so the two null spaces coincide.
  • If A is a matrix over the complex numbers, $\overline{A}$ denotes the complex conjugate of A, and $A^{*}$ the conjugate transpose of A (i.e., the adjoint of A), then
$$\operatorname{rank}(A) = \operatorname{rank}(\overline{A}) = \operatorname{rank}(A^{\mathrm{T}}) = \operatorname{rank}(A^{*}) = \operatorname{rank}(A^{*}A) = \operatorname{rank}(AA^{*}).$$
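
Several of these properties are easy to spot-check numerically. A randomized sanity check (not a proof), using generic random matrices, which have full rank with probability 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))
rk = np.linalg.matrix_rank

n = A.shape[1]
print(rk(A @ B) <= min(rk(A), rk(B)))          # True (product bound)
print(rk(A) + rk(B) - n <= rk(A @ B))          # True (Sylvester's inequality)
print(rk(A.T @ A) == rk(A) == rk(A @ A.T))     # True (Gram matrix property)
```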

Applications

One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché-Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions.
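
As a concrete illustration of the Rouché-Capelli criterion, the following sketch classifies a system Ax = b by comparing the two ranks (the helper `classify_system` and the example data are ours, for illustration only):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b via the Rouche-Capelli theorem."""
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix
    n = A.shape[1]                                            # number of variables
    if rank_Ab > rank_A:
        return "inconsistent"
    if rank_A == n:
        return "unique solution"
    return f"infinitely many solutions ({n - rank_A} free parameters)"

A = np.array([[1.0, 2.0], [2.0, 4.0]])                # rank 1
print(classify_system(A, np.array([3.0, 6.0])))       # infinitely many (1 free parameter)
print(classify_system(A, np.array([3.0, 7.0])))       # inconsistent
```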

In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.

In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.


Generalization

There are different generalizations of the concept of rank to matrices over arbitrary rings. In those generalizations, the column rank, row rank, dimension of the column space, and dimension of the row space of a matrix may differ from one another or may not exist.

Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; note that for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices.

There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.


Matrices as tensors

Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.

Note that the tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and that this definition does agree with matrix rank as here discussed.


See also

  • Matroid rank
  • Nonnegative rank (linear algebra)
  • Rank (differential topology)
  • Multicollinearity
  • Linear dependence



Further reading

  • Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
  • Kaw, Autar K. Two chapters from the book Introduction to Matrix Algebra: 1. Vectors [1] and 2. Systems of Equations [2].
  • Brookes, Mike. Matrix Reference Manual. [3]
