Linear Algebra Methods

Matrix Addition or Subtraction

To find [C]=[A] ± [B] we obtain each element Cij of  [C] by

Cij = Aij ± Bij
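As an illustration, the element-by-element rule can be written out directly; the following is a minimal Python/NumPy sketch (NumPy and the function name mat_add are my own choices for the illustration, not part of the software described here):

import numpy as np

def mat_add(A, B, sign=+1):
    """Return C with C[i, j] = A[i, j] + sign * B[i, j]; use sign=-1 for subtraction."""
    n, m = A.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = A[i, j] + sign * B[i, j]
    return C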

Matrix Multiplication

To find [C]=[A][B] we obtain each element Cij  of  [C] by

        n
Cij  =  Σ  Aik Bkj
       k=1

where n = # of columns of [A] = # of rows of [B].
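To make the index bookkeeping concrete, here is a plain triple-loop sketch in Python/NumPy (the name mat_mul and the dimension check are my own; a production routine would of course use an optimized library call):

import numpy as np

def mat_mul(A, B):
    """Return C with C[i, j] = sum over k of A[i, k] * B[k, j]."""
    m, n = A.shape            # n = number of columns of [A]
    n2, p = B.shape           # n2 = number of rows of [B]
    assert n == n2, "columns of A must equal rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C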

Determinant

To find the determinant of matrix [A], we triangularize it using elementary row operations (adding a multiple of one row to another, and exchanging rows when a zero pivot is encountered).  The determinant is then the product of the diagonal elements, with the sign reversed once for each row exchange.
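A sketch of that procedure in Python/NumPy (partial pivoting is added for stability; the function name determinant is mine, and this is an illustration rather than the site's own routine):

import numpy as np

def determinant(A):
    """Triangularize a copy of A by row operations; det = product of the diagonal."""
    U = A.astype(float).copy()
    n = U.shape[0]
    det = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # pick the largest pivot in column k
        if U[p, k] == 0.0:
            return 0.0                        # no pivot available: singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]             # a row exchange reverses the sign
            det = -det
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]   # row addition leaves det unchanged
        det *= U[k, k]
    return det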

Trace

The trace of matrix [A] is the sum of its main diagonal elements.

Inverse

The inverse of [A] is computed using the Gauss-Jordan method.  It starts by augmenting [A] on the right by an identity matrix [I] of the same order.  Then, by a series of elementary row operations, we transform [A] into an identity matrix, making sure to extend each operation to include the entire augmented matrix.  When the process is finished, the original [I] with which we flanked [A] will have been transformed into the inverse of [A].
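A compact Gauss-Jordan sketch along those lines in Python/NumPy (the function name and the pivoting strategy are my own additions for the illustration):

import numpy as np

def gauss_jordan_inverse(A):
    """Reduce the augmented matrix [A | I] until the left half is I; the right half is then the inverse."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])     # augment [A] with the identity
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))         # partial pivoting
        if M[p, k] == 0.0:
            raise ValueError("matrix is singular")
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                             # make the pivot equal to 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]              # clear the rest of column k
    return M[:, n:]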

Adjoint

The adjoint of a matrix is the transpose of the matrix of the cofactors of its elements.  If [B] is the adjoint of [A], then the elements of [B] are

Bji = (-1)^(i+j) det [Mij]

where [Mij] is a submatrix of [A] obtained by deleting from [A] the ith row and the jth column.

If [A] is not singular, its adjoint may be found using the relation

adj [A] = det [A] * [A]^-1

where adj [A] is the adjoint of [A], det [A] is the determinant of [A], and [A]^-1 is the inverse of [A].
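Both routes can be sketched in a few lines of Python/NumPy (the name adjoint is my own, and NumPy's det is used for the minors purely as a convenience):

import numpy as np

def adjoint(A):
    """B[j, i] = (-1)**(i+j) * det(M_ij): the transpose of the matrix of cofactors."""
    n = A.shape[0]
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # drop row i and column j
            B[j, i] = (-1) ** (i + j) * np.linalg.det(minor)        # 0-based i+j has the same parity
    return B

# For a nonsingular [A] the relation above gives the same result:
#     adjoint(A)  is approximately  np.linalg.det(A) * np.linalg.inv(A)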

Transpose

To find the transpose of a matrix [A] we form a matrix [A]T whose nth row is equal to the nth column of [A].

LU Factors

To find the [L][U] factors of matrix [A], we essentially carry out the process of Gaussian Elimination, in which a system of linear equations

[A][X]=[B]

is solved by upper-triangularizing [A] by the process of elimination, to get

[U][X]=[C]

and solving this system by back-substitution.

The [U] that appears in the last system above is the [U] factor we seek, while the [L] factor has ones on its main diagonal and, below the diagonal, the multipliers Fij by which row j is multiplied before subtracting it from row i during the elimination process.
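A Doolittle-style sketch of this bookkeeping in Python/NumPy (no row exchanges are performed here, so it assumes the pivots are nonzero; the function name is mine):

import numpy as np

def lu_factors(A):
    """Gaussian elimination that records the multipliers: returns L (unit diagonal) and U with A = L U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n):
        for i in range(k + 1, n):
            f = U[i, k] / U[k, k]        # the multiplier F_ik used in the elimination step
            L[i, k] = f                  # store it below the diagonal of L
            U[i, k:] -= f * U[k, k:]     # row_i <- row_i - f * row_k
    return L, U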

QR Factors

The Gram-Schmidt orthogonalization process is used to find the QR factors.

If we have a set of vectors {A1 , A2 ,...An} we can generate an orthonormal set {Q1,Q2,...Qn} as follows:

Choose the first vector in the direction of A1:

V1 = A1       then normalize it to get      Q1 = V1 / |V1|

then continue thus:

V2 = A2 - (A2•Q1)Q1                  ;      Q2 = V2 / |V2|

V3 = A3 - (A3•Q1)Q1 - (A3•Q2)Q2      ;      Q3 = V3 / |V3|

 .                  .                        .
 .                  .                        .
 .                  .                        .

Now, since we want [R] such that [Q][R]=[A], we pre-multiply both sides by [Q]T, which is the transpose of [Q]:

[Q]T[Q][R]=[Q]T[A]

but, since [Q] is orthonormal, then [Q]T[Q] = [I], so

[R]=[Q]T[A]
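The whole construction fits in a short Python/NumPy sketch (classical Gram-Schmidt, following the formulas above; the function name is my own and no safeguards against nearly dependent columns are included):

import numpy as np

def gram_schmidt_qr(A):
    """Orthonormalize the columns of A; returns Q and R = Q^T A so that A = Q R."""
    A = A.astype(float)
    n, m = A.shape
    Q = np.zeros((n, m))
    for j in range(m):
        v = A[:, j].copy()
        for i in range(j):
            v -= (A[:, j] @ Q[:, i]) * Q[:, i]   # subtract the projections onto the earlier Q's
        Q[:, j] = v / np.linalg.norm(v)          # normalize V_j to get Q_j
    R = Q.T @ A                                  # R = Q^T A, as derived above
    return Q, R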

Test for Definiteness

For illustration purposes, consider the 4th order symmetric matrix:

       | A11  A12  A13  A14 |
[A] =  | A21  A22  A23  A24 |
       | A31  A32  A33  A34 |
       | A41  A42  A43  A44 |

where Aij = Aji.  [A] is positive or negative definite if the product

[x y z t] | A11  A12  A13  A14 | |x|
          | A21  A22  A23  A24 | |y|
          | A31  A32  A33  A34 | |z|
          | A41  A42  A43  A44 | |t|

is positive or negative, respectively, for all nontrivial [x y z t].  Clearly, we can't test that product for all possible [x y z t], but there are other equivalent tests.  One is to check the signs of the eigenvalues: if they are all positive or all negative, the matrix is positive or negative definite, respectively.  Another test is to compute the determinants A1, A2, A3 and A4 of the leading principal submatrices:

A1 = det [ A11 ]

A2 = det | A11  A12 |
         | A21  A22 |

A3 = det | A11  A12  A13 |
         | A21  A22  A23 |
         | A31  A32  A33 |

A4 = det | A11  A12  A13  A14 |
         | A21  A22  A23  A24 |
         | A31  A32  A33  A34 |
         | A41  A42  A43  A44 |

[A] is positive definite if   Ai > 0    for  i = 1, 2, 3, 4   (all i)

[A] is negative definite if   Ai < 0    for  i = 1, 3   (i odd)
                        and   Ai > 0    for  i = 2, 4   (i even)

The matrix may also be positive semidefinite, negative semidefinite, or indefinite.
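The two tests can be combined in a short Python/NumPy sketch (the function name, the tolerance, and the eigenvalue fallback for the semidefinite cases are my own choices for the illustration):

import numpy as np

def classify_definiteness(A, tol=1e-12):
    """Classify a symmetric matrix from its leading principal minors, falling back to eigenvalue signs."""
    n = A.shape[0]
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    if all(d > tol for d in minors):
        return "positive definite"
    if all((d < -tol) if k % 2 == 1 else (d > tol) for k, d in enumerate(minors, start=1)):
        return "negative definite"
    eig = np.linalg.eigvalsh(A)          # the strict tests failed: inspect the eigenvalue signs
    if all(e >= -tol for e in eig):
        return "positive semidefinite"
    if all(e <= tol for e in eig):
        return "negative semidefinite"
    return "indefinite"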

Simultaneous Linear Equations

The method used is called Gaussian Elimination.  Suppose the system is

[A][X]=[B]

where [A] is n-by-n, [X] is n-by-1 and [B] is n-by-1.  Here [A] is the matrix of coefficients, [X] the solution sought and [B] the right-hand-side of the system.

We start by augmenting matrix [A] by attaching to it the column [B].  Then, by a series of elementary row operations, we reduce the extended matrix to echelon form, that is, [A] is made upper triangular, making sure to extend each operation to include column [B].  If we call [T] the triangularized [A] and [B'] the transformed [B], then we simply solve the system

[T][X]=[B']

by back substitution.
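A compact sketch of elimination followed by back-substitution in Python/NumPy (partial pivoting is added, and the function name gauss_solve is mine):

import numpy as np

def gauss_solve(A, b):
    """Reduce the augmented matrix [A | B] to upper-triangular [T | B'], then back-substitute."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))     # partial pivoting
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            M[i, k:] -= (M[i, k] / M[k, k]) * M[k, k:]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # back substitution on [T | B']
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x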

Simultaneous Linear Equations (Solution by [L][U] Factors)

This option solves the equations using the LU factorization of the matrix of coefficients.  This method is equivalent to the method of Gaussian Elimination, which actually consists first of elimination and then back-substitution, but it facilitates the solution of many systems sharing the same matrix of coefficients by avoiding the duplication of many operations.  Once the LU factors are known, the solution is reduced to a forward-substitution and then a back-substitution.

Suppose the system is

[A][X]=[B]

where [A] is n-by-n, [X] is n-by-1 and [B] is n-by-1.  Here [A] is the matrix of coefficients, [X] the solution sought and [B] the right-hand-side of the system. We start by factoring [A] into its LU factors.  The above equation becomes

[L][U][X]=[B]

Each time we are given a set of independent terms [B], we first solve for [C] the equation

[L][C]=[B]

which, since [L] is lower-triangular, is easily solved by forward-substitution, then we solve for [X] the equation

[U][X]=[C]

which, since [U] is upper-triangular, is easily solved by back-substitution.  Of course, the factorization [A]=[L][U] may really be [A]=[P][L][U], where [P] is a permutation matrix, but this may be remedied by reorganizing the independent terms using the map of row exchanges.
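The two triangular solves can be sketched as follows in Python/NumPy (the names forward_sub and back_sub are mine; the factors would come from an LU routine such as the lu_factors sketch shown earlier):

import numpy as np

def forward_sub(L, b):
    """Solve [L][C] = [B] for a lower-triangular L by forward substitution."""
    n = L.shape[0]
    c = np.zeros(n)
    for i in range(n):
        c[i] = (b[i] - L[i, :i] @ c[:i]) / L[i, i]
    return c

def back_sub(U, c):
    """Solve [U][X] = [C] for an upper-triangular U by back substitution."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Each new right-hand side b then costs only two triangular solves:
#     x = back_sub(U, forward_sub(L, b))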

Overdetermined or Inconsistent Systems (Least-Squares Solution)

If we have a system with more equations than unknowns,

[A][X]=[B]

where [A] is the matrix of coefficients, [X] the solution sought, and [B] the system's right-hand-side, we may not be able to find a solution which satisfies all the equations, but we can find the least-squares solution by solving

[A]T[A][X']=[A]T[B]

where [A]T is the transpose of [A] and [X'] is the least-squares solution.
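A short Python/NumPy sketch of the normal-equations approach (the example data and the function name are mine; note that forming [A]T[A] can worsen the conditioning, which is why QR-based solvers are often preferred in practice):

import numpy as np

def least_squares(A, b):
    """Solve the normal equations A^T A x' = A^T b."""
    return np.linalg.solve(A.T @ A, A.T @ b)

# Example: fit a line y = c0 + c1*x to three points (an inconsistent 3-by-2 system).
x = np.array([0.0, 1.0, 2.0])
A = np.column_stack([np.ones_like(x), x])
b = np.array([1.0, 2.0, 2.5])
c0, c1 = least_squares(A, b)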

Eigenvalues and Eigenvectors

In principle, to obtain the eigenvalues of a matrix its characteristic equation may be solved:

det( [A] - λ[I] ) = 0

where [I] is the identity matrix.  For an n-by-n [A], this is an equation of nth degree in λ.

Although numerically extracting the roots of an nth-degree equation may be relatively easy, expanding the determinant is impractical even for a 4-by-4 matrix and prohibitive for large ones, hence the need for dedicated algorithms to extract eigenvalues.

The shifted QR algorithm generally converges relatively fast.

If we could make [A] upper-triangular while preserving its eigenvalues, we would find those eigenvalues in its main diagonal.  Elementary row operations destroy the eigenvalues, so they can't be used, and generally [A] can't be triangularized in a finite number of steps.  What is done is to make [A] as nearly upper triangular as possible while preserving its eigenvalues.  This is accomplished by the QR method, except for 2-by-2 blocks which correspond to complex conjugate pairs of eigenvalues.

Basically, at each step we factor [A] into [Q][R] using the Gram-Schmidt orthogonalization process.  The factors are then reversed to obtain a more nearly triangular [A'] with the same eigenvalues.

In practice, the shifted QR algorithm is the one used, which provides faster convergence.  Furthermore, [A] is first transformed to produce as many zeroes as possible.  It is made upper-Hessenberg, which is nearly upper-triangular except for the elements of the diagonal below its main diagonal.  This is done using Householder transformations.  This preliminary step ensures faster QR factorization.  It should be said that the upper-Hessenberg form is preserved at each step of the QR process.

To illustrate, call [A0] the original matrix already made upper-Hessenberg. Then the QR factorization will give us

[A0] = [Q0][R0]

The next [A] is, reversing the factors:

[A1] = [R0][Q0]

Again we factor [A1], then reverse the factors:

[A1] = [Q1][R1]   ;   [A2] = [R1][Q1]

and so on.  However, in the shifted algorithm, if λk is close to an eigenvalue at step k, we shift [Ak] before factoring, so

[Ak] - λk[I] = [Qk][Rk]   ;   [Ak+1] = [Rk][Qk] + λk[I]

where [I] is the identity matrix and the shift is added back after the factors are reversed.

At each step [A] is made more and more upper triangular as the elements below its main diagonal approach zero (except, as was said before, for those which correspond to complex eigenvalues).

The process is stopped when all the eigenvalues have converged, or when it has gone through the number of cycles specified.  An eigenvalue is assumed to have converged when its value differs from its previous value by less than the specified error. In practice, the first one to converge is frequently the smallest one.  If this is the one sought, a moderate number of cycles may produce the desired result.
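To show the skeleton of the idea, here is an unshifted QR iteration in Python/NumPy; it leaves out the Hessenberg reduction and the shifts described above, and it settles to a triangular form only when all the eigenvalues are real, so it is a teaching sketch rather than the method used by the software (the function name, cycle limit, and tolerance are my own):

import numpy as np

def qr_eigenvalues(A, cycles=200, tol=1e-10):
    """Repeatedly factor A_k = Q_k R_k and form A_{k+1} = R_k Q_k, which preserves the eigenvalues."""
    Ak = A.astype(float).copy()
    for _ in range(cycles):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
        if np.max(np.abs(np.tril(Ak, -1))) < tol:
            break                     # the below-diagonal part has essentially vanished
    return np.diag(Ak)                # diagonal of the (nearly) triangular limit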

The eigenvectors can be found using the Shifted Inverse Power algorithm.  It starts by making [A] as nearly singular as possible by shifting it using the eigenvalue for which the eigenvector is sought.  The iteration for λk starts with:

[ [A] - λk[I] ][X] = [RND]

where [RND] is a column vector with random components.  The system is solved and the solution is then used in place of [RND] to obtain the next, better [X].  The process cycles until the vector solution changes very little, that is, until each component of the normalized eigenvector changes by less than the specified error.  This computation of eigenvectors allows for the existence of several distinct eigenvectors corresponding to an eigenvalue that is a multiple root of the characteristic equation.  If the sequence of successive approximations were always started with [1] or any other fixed column vector instead of [RND], it would always converge to the same eigenvector.
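A sketch of that iteration in Python/NumPy (the function name, the fixed cycle limit, and the tiny perturbation added to avoid an exactly singular solve are my own choices):

import numpy as np

def inverse_power_eigenvector(A, eigenvalue, cycles=100, tol=1e-10, seed=0):
    """Shifted inverse power iteration for the eigenvector belonging to a known eigenvalue."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                 # the random starting vector [RND]
    x /= np.linalg.norm(x)
    shifted = A - eigenvalue * np.eye(n)       # nearly singular by construction
    shifted += 1e-12 * np.eye(n)               # tiny safeguard so the solve does not fail outright
    for _ in range(cycles):
        y = np.linalg.solve(shifted, x)        # solve [[A] - lambda[I]][X] = previous X
        y /= np.linalg.norm(y)
        if min(np.linalg.norm(y - x), np.linalg.norm(y + x)) < tol:
            return y                           # converged (up to an overall sign)
        x = y
    return x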

Copyright © 2001-2010 Numerical Mathematics. All rights reserved.