Coefficient matrix

For a system of linear equations written in matrix form as Ax = b, the matrix A is called the coefficient matrix of the system.




Transpose

If A is any m x n matrix, then the transpose of A is defined to be the n x m matrix that results from interchanging the rows and columns of A.
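As a quick sketch in plain Python (matrices as nested lists; the function name is illustrative):

```python
def transpose(A):
    # Entry (i, j) of A becomes entry (j, i) of the transpose,
    # so an m x n matrix becomes an n x m matrix.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
At = transpose(A)        # 3 x 2: [[1, 4], [2, 5], [3, 6]]
```

Transposing twice returns the original matrix.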





Trace

If A is a square matrix, then the trace of A is defined to be the sum of the entries on the main diagonal of A.
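A minimal Python sketch of this definition:

```python
def trace(A):
    # Sum of the main-diagonal entries a_11 + a_22 + ... + a_nn
    return sum(A[i][i] for i in range(len(A)))

# trace of [[1, 2], [3, 4]] is 1 + 4 = 5
```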




Identity matrix

The n × n identity matrix, denoted by I_n, is the square matrix with 1's on the main diagonal and 0's off the main diagonal.
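A short Python sketch; the matrix-product helper is included only to show the defining property I_n A = A:

```python
def identity(n):
    # 1 on the main diagonal, 0 everywhere else
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 7],
     [1, 8]]
assert matmul(identity(2), A) == A   # multiplying by I leaves A unchanged
```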




Singular

If A is a square matrix, and if a matrix B of the same size can be found such that AB = BA = I, then A is said to be invertible and B is called an inverse of A.

If no such matrix B can be found, then A is said to be singular.
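For the 2 x 2 case this can be made concrete with the standard inverse formula; a matrix is singular exactly when its determinant is zero (function name is illustrative):

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix, or None when A is singular (det = 0)."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        return None          # singular: no B with AB = BA = I exists
    return [[ d / det, -b / det],
            [-c / det,  a / det]]
```

For example, [[1, 2], [2, 4]] is singular because its second row is twice its first.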



Unitary matrix

In mathematics, a complex square matrix U is unitary if

U*U = UU* = I

where I is the identity matrix and U* is the conjugate transpose of U.
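A sketch using Python's built-in complex numbers (the example matrix is just one convenient unitary matrix):

```python
def conjugate_transpose(U):
    # U*: transpose U, then replace each entry by its complex conjugate
    return [[U[i][j].conjugate() for i in range(len(U))]
            for j in range(len(U[0]))]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Example unitary matrix: both U* U and U U* equal the identity
U = [[0j, 1j],
     [1j, 0j]]
```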



Elementary matrix

(http://en.wikipedia.org/wiki/Elementary_matrix)

An n x n matrix is called an elementary matrix if it can be obtained from the n x n identity matrix In by performing a single elementary row operation.



Row operation

  1. Multiply row i by a nonzero constant c
  2. Interchange rows i and j
  3. Add c times row i to row j
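The three operations above can be sketched in plain Python; each returns a new matrix rather than modifying its input:

```python
def scale_row(A, i, c):
    # 1. Multiply row i by a nonzero constant c
    assert c != 0
    A = [row[:] for row in A]
    A[i] = [c * x for x in A[i]]
    return A

def swap_rows(A, i, j):
    # 2. Interchange rows i and j
    A = [row[:] for row in A]
    A[i], A[j] = A[j], A[i]
    return A

def add_multiple(A, i, j, c):
    # 3. Add c times row i to row j
    A = [row[:] for row in A]
    A[j] = [x + c * y for x, y in zip(A[j], A[i])]
    return A
```

Applying any one of these to the identity matrix I_n yields an elementary matrix.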



Row-echelon form

  1. If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1 (called a leading 1).
  2. If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
  3. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.

    To be in reduced row-echelon form, a fourth condition must also hold:

  4. Each column that contains a leading 1 has zeros everywhere else in that column.


     

Row-Echelon Form and Reduced Row-Echelon Form



Orthogonal Matrices

(http://en.wikipedia.org/wiki/Orthogonal_matrix)

A square matrix is orthogonal if its transpose is equal to its inverse.
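Rotation matrices are the standard example; a numerical Python check that Q^T Q comes out as the identity (up to floating-point error):

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

t = math.pi / 6
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # rotation by t: an orthogonal matrix

P = matmul(transpose(Q), Q)          # should be (numerically) the identity
```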




Diagonal matrices

(http://en.wikipedia.org/wiki/Diagonal_matrix)

A square matrix in which all the entries off the main diagonal are zero.




Triangular matrices

A square matrix in which all the entries above the main diagonal are zero is called lower triangular, and a square matrix in which all the entries below the main diagonal are zero is called upper triangular.


     

3 × 3 upper triangular matrix and 3 × 3 lower triangular matrix



Symmetric matrices

A square matrix A is called symmetric if A = A^T, that is, if a_ij = a_ji for all i and j.
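A direct Python check of the entrywise condition:

```python
def is_symmetric(A):
    # A = A^T  <=>  a_ij == a_ji for every pair i, j
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
```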




Minors and Cofactors

If A is a square matrix, then the minor of entry a_ij is denoted by M_ij and is defined to be the determinant of the submatrix that remains after the ith row and jth column are deleted from A.

The number (-1)^(i+j) M_ij is denoted by C_ij and is called the cofactor of entry a_ij.
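A Python sketch of both definitions, with the determinant computed by cofactor expansion along the first row (which is also how the determinant in the next entry can be defined):

```python
def minor_matrix(A, i, j):
    # Submatrix of A with row i and column j deleted
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    # Determinant via cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor_matrix(A, 0, j))
               for j in range(len(A)))

def cofactor(A, i, j):
    # C_ij = (-1)^(i+j) * M_ij
    return (-1) ** (i + j) * det(minor_matrix(A, i, j))
```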


     



Determinant

(http://sens.tistory.com/267)



Adjoint of a matrix

The transpose of the matrix of cofactors from A is called the adjoint of A and is denoted by adj(A).
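A Python sketch building adj(A) exactly as defined, as the transpose of the cofactor matrix:

```python
def minor_matrix(A, i, j):
    # Submatrix with row i and column j deleted
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    # Cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor_matrix(A, 0, j))
               for j in range(len(A)))

def adj(A):
    # Transpose of the matrix of cofactors C_ij = (-1)^(i+j) M_ij
    n = len(A)
    C = [[(-1) ** (i + j) * det(minor_matrix(A, i, j)) for j in range(n)]
         for i in range(n)]
    return [list(row) for row in zip(*C)]
```

The key identity is A adj(A) = det(A) I, which gives the inverse A^(-1) = adj(A) / det(A) whenever det(A) is nonzero.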




Norm

(http://sens.tistory.com/271)

The norm (or length) of a vector; for v = (v1, v2, ..., vn) in R^n, ||v|| = sqrt(v1^2 + v2^2 + ... + vn^2).
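The Euclidean norm in a line of Python:

```python
import math

def norm(v):
    # Euclidean norm: sqrt(v1^2 + v2^2 + ... + vn^2)
    return math.sqrt(sum(x * x for x in v))

# norm([3, 4]) is 5.0, the classic 3-4-5 right triangle
```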




Linearly independent

If S = {v1, v2, ..., vr} is a nonempty set of vectors, then the vector equation

k1v1 + k2v2 + ... + krvr = 0

has at least one solution, namely the trivial solution

k1 = 0, k2 = 0, ..., kr = 0

If this is the only solution, then S is called a linearly independent set.

If there are other solutions, then S is called a linearly dependent set.
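For the special case of n vectors in R^n there is a simple test, sketched here in Python: the vectors are linearly independent exactly when the matrix having them as columns has nonzero determinant (the function name is illustrative):

```python
def minor_matrix(A, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor_matrix(A, 0, j))
               for j in range(len(A)))

def linearly_independent(vectors):
    # n vectors in R^n: independent  <=>  det of the matrix
    # whose columns are the vectors is nonzero
    A = [list(col) for col in zip(*vectors)]
    return det(A) != 0
```

For example, [1, 2] and [2, 4] are dependent because the second is twice the first.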



Vector Space

A set of objects (vectors) with two operations, addition and scalar multiplication, satisfying the vector space axioms; R^n, the set of all ordered n-tuples of real numbers, is the basic example of an n-dimensional vector space.




Basis

If V is any vector space and S = {v1, v2, ..., vn} is a set of vectors in V, then S is called a basis for V if the following two conditions hold:

  1. S is linearly independent.
  2. S spans V.

Every vector v in V can then be expressed in exactly one way in the form v = c1v1 + c2v2 + ... + cnvn; the scalars c1, c2, ..., cn are the coordinates of v relative to the basis S.
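A Python sketch for R^2: finding the coordinates of v relative to a basis {b1, b2} by solving c1*b1 + c2*b2 = v with Cramer's rule (names are illustrative; this only handles the 2-dimensional case):

```python
def coordinates_2d(v, b1, b2):
    # Solve c1*b1 + c2*b2 = v via Cramer's rule
    det = b1[0] * b2[1] - b2[0] * b1[1]
    assert det != 0, "b1, b2 do not form a basis of R^2"
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return c1, c2

# Relative to the basis {[1, 1], [1, -1]}, the vector [3, 1]
# has coordinates (2, 1): 2*[1, 1] + 1*[1, -1] = [3, 1].
```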




Teaching materials

00-07. Elementary Linear Algebra, 6th edition.pdf

08.1. Complex Numbers.pdf

08.2. Conjugates and Division of Complex Numbers.pdf

08.3. Polar Form and Demoivre's Theorem.pdf

08.4. Complex Vector Spaces And Inner Products.pdf

09.1. Systems of Linear Inequalities.pdf

09.2. Linear Programming Involving Two Variables.pdf

09.3. The Simplex Method - Maximization.pdf

09.4. The Simplex Method - Minimization.pdf

10.1. Gaussian Elimination with Partial Pivoting.pdf

10.2. Iteration Method for Solving Linear Systems.pdf

10.3. Power Method for Approximating Eigenvalues.pdf

10.4. Applications of Numerical Methods.pdf