Dybuk Soft

A matrix is a rectangular array of numbers. For example, a 3x5 matrix might look like this:

9 | -1 | 3 | -8 | -2 |

1 | 5 | 4 | -5 | 12 |

-6 | -7 | -4 | 4 | 7 |

You can access individual elements by row and column index, for example m_{i,j}.

Usually "i" (the first index) is the row index and "j" (the second index) is the column index. Each index starts from 1, so for our example matrix this means:

* m_{1, 1} = 9

* m_{3, 5} = 7

* m_{2, 4} = -5

Matrices are widely used in computer science and game programming because they provide a simple representation of mathematical formulas that is easy and natural to work with on a computer.

Let's assume we have the following two equations:

3x + 5y = 6

4x - 3y = -9

We can represent this as a matrix equation:

3 | 5 |

4 | -3 |

*

x |

y |

=

6 |

-9 |

Now we can simplify our two formulas into one:

Matrix * VectorXY = VectorConst

To calculate VectorXY we only have to "divide" both sides by Matrix, i.e. multiply both sides by the inverse of Matrix:

VectorXY = Matrix^{-1} * VectorConst
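As a sketch of what this form buys us in code, here is the 2x2 case in plain Python (the function name solve_2x2 and the closed-form 2x2 inverse inside it are my own illustration, not part of the article):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the system  a*x + b*y = e,  c*x + d*y = f
    using the closed-form inverse of the 2x2 matrix [a b; c d]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular, no unique solution")
    # inverse of [a b; c d] is (1/det) * [d -b; -c a]
    x = (d * e - b * f) / det
    y = (-c * e + a * f) / det
    return x, y

# our example: 3x + 5y = 6, 4x - 3y = -9
x, y = solve_2x2(3, 5, 4, -3, 6, -9)
```

Plugging the solution back into both equations recovers 6 and -9.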

!Important

Many matrix operations are noncommutative; this means A * B is not equal to B * A.

In linear algebra, the identity matrix of size NxN is the **square** matrix with ones on the main diagonal and zeros elsewhere.
Usually we name it "I". For example, the 4x4 identity:

1 | 0 | 0 | 0 |

0 | 1 | 0 | 0 |

0 | 0 | 1 | 0 |

0 | 0 | 0 | 1 |
The same pattern holds for any size NxN:

1 | 0 | ... | 0 |

0 | 1 | ... | 0 |

... | ... | ... | ... |

0 | 0 | ... | 1 |

When two matrices A and B multiplied by each other return the identity matrix I as a result, then matrix B is the inverse of matrix A.

Matrix addition is commutative: A + B equals B + A (subtraction is not). Both operations require the matrices to have the same dimensions. Addition or subtraction is accomplished by adding or subtracting corresponding elements.

a_{1,1} | a_{1,2} | ... | a_{1,N} |

a_{2,1} | a_{2,2} | ... | a_{2,N} |

... | ... | ... | ... |

a_{M,1} | a_{M,2} | ... | a_{M,N} |

+

b_{1,1} | b_{1,2} | ... | b_{1,N} |

b_{2,1} | b_{2,2} | ... | b_{2,N} |

... | ... | ... | ... |

b_{M,1} | b_{M,2} | ... | b_{M,N} |

=

a_{1,1} + b_{1,1} | a_{1,2} + b_{1,2} | ... | a_{1,N} + b_{1,N} |

a_{2,1} + b_{2,1} | a_{2,2} + b_{2,2} | ... | a_{2,N} + b_{2,N} |

... | ... | ... | ... |

a_{M,1} + b_{M,1} | a_{M,2} + b_{M,2} | ... | a_{M,N} + b_{M,N} |

Each element of the destination matrix is calculated by adding the elements of matrices A and B that have the same row and column index:

c_{i, j} = a_{i, j} + b_{i, j}

The destination matrix element for subtraction:
c_{i, j} = a_{i, j} - b_{i, j}

Example of a 2x2 add operation:

3 | 5 |

4 | -3 |

+

6 | -1 |

-7 | 8 |

=>

3 + 6 | 5 + (-1) |

4 + (-7) | -3 + 8 |

=>

9 | 4 |

-3 | 5 |
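A minimal sketch of element-wise addition in plain Python (matrices as lists of rows; the helper name mat_add is mine):

```python
def mat_add(A, B):
    # both matrices must have the same dimensions
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

print(mat_add([[3, 5], [4, -3]], [[6, -1], [-7, 8]]))  # [[9, 4], [-3, 5]]
```

Subtraction is identical with `+` replaced by `-`.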

A matrix can be multiplied by a scalar value. This is accomplished by multiplying each matrix element by the scalar:

(c * A)_{i, j} = c * A_{i, j}

c

*

a_{1,1} | a_{1,2} | ... | a_{1,N} |

a_{2,1} | a_{2,2} | ... | a_{2,N} |

... | ... | ... | ... |

a_{M,1} | a_{M,2} | ... | a_{M,N} |

=

c * a_{1,1} | c * a_{1,2} | ... | c * a_{1,N} |

c * a_{2,1} | c * a_{2,2} | ... | c * a_{2,N} |

... | ... | ... | ... |

c * a_{M,1} | c * a_{M,2} | ... | c * a_{M,N} |

Example: multiply a 2x3 matrix by 5

5

*

-2 | 1 | 3 |

7 | -2 | -1 |

=

5 * (-2) | 5 * 1 | 5 * 3 |

5 * 7 | 5 * (-2) | 5 * (-1) |

=>

-10 | 5 | 15 |

35 | -10 | -5 |
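The scalar case is the simplest to code. A plain-Python sketch (the helper name mat_scale is mine):

```python
def mat_scale(c, A):
    # multiply every element by the scalar c
    return [[c * x for x in row] for row in A]

print(mat_scale(5, [[-2, 1, 3], [7, -2, -1]]))  # [[-10, 5, 15], [35, -10, -5]]
```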

Matrix multiplication is noncommutative, A * B is NOT EQUAL to B * A.

Each element of the destination matrix C is computed as the dot product of a row of matrix A and a column of matrix B.

The mathematical formula for that is:

C_{i, j} = A_{i, 1} * B_{1, j} + A_{i, 2} * B_{2, j} + ... + A_{i, N} * B_{N, j}

a_{1,1} | a_{1,2} | ... | a_{1,N} |

a_{2,1} | a_{2,2} | ... | a_{2,N} |

... | ... | ... | ... |

a_{M,1} | a_{M,2} | ... | a_{M,N} |

*

b_{1,1} | b_{1,2} | ... | b_{1,P} |

b_{2,1} | b_{2,2} | ... | b_{2,P} |

... | ... | ... | ... |

b_{N,1} | b_{N,2} | ... | b_{N,P} |

=

a_{1,1} * b_{1,1} + a_{1,2} * b_{2,1} + ... + a_{1,N} * b_{N,1} | a_{1,1} * b_{1,2} + a_{1,2} * b_{2,2} + ... + a_{1,N} * b_{N,2} | ... | a_{1,1} * b_{1,P} + a_{1,2} * b_{2,P} + ... + a_{1,N} * b_{N,P} |

a_{2,1} * b_{1,1} + a_{2,2} * b_{2,1} + ... + a_{2,N} * b_{N,1} | a_{2,1} * b_{1,2} + a_{2,2} * b_{2,2} + ... + a_{2,N} * b_{N,2} | ... | a_{2,1} * b_{1,P} + a_{2,2} * b_{2,P} + ... + a_{2,N} * b_{N,P} |

... | ... | ... | ... |

a_{M,1} * b_{1,1} + a_{M,2} * b_{2,1} + ... + a_{M,N} * b_{N,1} | a_{M,1} * b_{1,2} + a_{M,2} * b_{2,2} + ... + a_{M,N} * b_{N,2} | ... | a_{M,1} * b_{1,P} + a_{M,2} * b_{2,P} + ... + a_{M,N} * b_{N,P} |

This formula shows the requirement: matrix A must have exactly as many columns as matrix B has rows.

A (MxN) can be multiplied with matrix B (NxP); the result is matrix C (MxP).

Example of multiplying two matrices:

6 | 2 | 1 |

-3 | 4 | -5 |

*

-1 | 5 |

9 | -2 |

3 | 9 |

=

6 * (-1) + 2 * 9 + 1 * 3 | 6 * 5 + 2 * (-2) + 1 * 9 |

-3 * (-1) + 4 * 9 + (- 5) * 3 | -3 * 5 + 4 * (-2) + (-5) * 9 |

=>

15 | 35 |

24 | -68 |
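The triple loop behind the formula can be sketched in plain Python (the helper name mat_mul is mine; it reproduces the example above):

```python
def mat_mul(A, B):
    # columns of A must equal rows of B
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[6, 2, 1], [-3, 4, -5]]
B = [[-1, 5], [9, -2], [3, 9]]
print(mat_mul(A, B))  # [[15, 35], [24, -68]]
```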

Matrix transposition is an operation that flips a matrix over its diagonal. It can be represented by the formula:

B_{i, j} = A_{j, i}

a_{1,1} | a_{1,2} | ... | a_{1,N} |

a_{2,1} | a_{2,2} | ... | a_{2,N} |

... | ... | ... | ... |

a_{M,1} | a_{M,2} | ... | a_{M,N} |

T

=

a_{1,1} | a_{2,1} | ... | a_{M,1} |

a_{1,2} | a_{2,2} | ... | a_{M,2} |

... | ... | ... | ... |

a_{1,N} | a_{2,N} | ... | a_{M,N} |

Example of a 2x3 matrix transposition:

6 | 2 | 1 |

-3 | 4 | -5 |

T

=

6 | -3 |

2 | 4 |

1 | -5 |
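Transposition is nearly a one-liner in plain Python (the helper name transpose is mine):

```python
def transpose(A):
    # B[j][i] = A[i][j]: rows become columns
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

print(transpose([[6, 2, 1], [-3, 4, -5]]))  # [[6, -3], [2, 4], [1, -5]]
```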

The determinant is a scalar value computed from all matrix elements. It is used in some matrix operations, such as matrix inversion. This operation requires a square matrix (NxN).

Wikipedia definition:

In linear algebra, the determinant is a value that can be computed from the elements of a square matrix and encodes certain properties of the linear transformation described by the matrix. The determinant of a matrix A is denoted det(A), det A, or |A|. Geometrically, it can be viewed as the volume scaling factor of the linear transformation described by the matrix. This is also the signed volume of the n-dimensional parallelepiped spanned by the column or row vectors of the matrix. The determinant is positive or negative according to whether the linear mapping preserves or reverses the orientation of n-space.

Computing the determinant of a 2x2 or 3x3 matrix is straightforward:

det

a | b |

c | d |

=

a * d - b * c

det

a_{1,1} | a_{1,2} | a_{1,3} |

a_{2,1} | a_{2,2} | a_{2,3} |

a_{3,1} | a_{3,2} | a_{3,3} |

=

a_{1,1} * a_{2,2} * a_{3,3} + a_{1,2} * a_{2,3} * a_{3,1} + a_{1,3} * a_{2,1} * a_{3,2} - a_{1,3} * a_{2,2} * a_{3,1} - a_{1,1} * a_{2,3} * a_{3,2} - a_{1,2} * a_{2,1} * a_{3,3}
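Both closed-form cases translate directly to code. A plain-Python sketch (helper names det2/det3 and the sample matrices are mine):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    # Sarrus' rule: three "down" diagonals minus three "up" diagonals
    return (m[0][0] * m[1][1] * m[2][2]
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1]
            - m[0][2] * m[1][1] * m[2][0]
            - m[0][0] * m[1][2] * m[2][1]
            - m[0][1] * m[1][0] * m[2][2])

print(det2([[3, 5], [4, -3]]))                   # -29
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```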

Reducing rows or columns first is not strictly required for computation, but it helps reduce the number of operations in the Laplace expansion. The key property: you can take any row of the matrix, multiply it by a scalar value, and add it to another row without changing the determinant; the same operation can be done on columns. Let's have a look at a simple example:

-2 | 1 | 3 | -2 |

9 | 3 | -3 | 15 |

10 | -2 | 1 | 12 |

-20 | 4 | 7 | 9 |

We will reduce the second row: (9, 3, -3, 15). To do that we will add multiples of the second column to the other columns. First we need to define a scalar value for each target column. We take the second-row values (excluding the pivot in the second column): (9, -3, 15), and divide them by the negated pivot value -3 (the pivot is the 3 at position (2,2)). The result is:

(-3, 1, -5)

Now we can add second column, multiplied by scalar values:

-2 | 1 | 3 | -2 |

9 | 3 | -3 | 15 |

10 | -2 | 1 | 12 |

-20 | 4 | 7 | 9 |

+

-3

*

1 | 0 | 0 | 0 |

3 | 0 | 0 | 0 |

-2 | 0 | 0 | 0 |

4 | 0 | 0 | 0 |

+

1

*

0 | 0 | 1 | 0 |

0 | 0 | 3 | 0 |

0 | 0 | -2 | 0 |

0 | 0 | 4 | 0 |

+

-5

*

0 | 0 | 0 | 1 |

0 | 0 | 0 | 3 |

0 | 0 | 0 | -2 |

0 | 0 | 0 | 4 |

=>

-2 - 3 * 1 | 1 | 3 + 1 * 1 | -2 - 5 * 1 |

9 - 3 * 3 | 3 | -3 + 1 * 3 | 15 - 5 * 3 |

10 - 3 * (-2) | -2 | 1 + 1 * (-2) | 12 - 5 * (-2) |

-20 - 3 * 4 | 4 | 7 + 1 * 4 | 9 - 5 * 4 |

=>

-5 | 1 | 4 | -7 |

0 | 3 | 0 | 0 |

16 | -2 | -1 | 22 |

-32 | 4 | 11 | -11 |

The determinants of both matrices (before and after the reduction) are equal.
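We can verify the column operations numerically. A plain-Python sketch (variable names are mine):

```python
# the matrix before reduction and the multipliers of column 2
M = [[-2, 1, 3, -2], [9, 3, -3, 15], [10, -2, 1, 12], [-20, 4, 7, 9]]
scalars = {0: -3, 2: 1, 3: -5}   # target column index -> multiplier

R = [row[:] for row in M]        # work on a copy
for j, s in scalars.items():
    for i in range(4):
        R[i][j] += s * M[i][1]   # add s * (column 2) to column j

print(R[1])  # [0, 3, 0, 0] -- the second row now has a single non-zero value
```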

**Laplace expansion computes the matrix determinant by rewriting it as a weighted sum of the determinants of sub-matrices, each of size (n−1) × (n−1).
The determinant of each sub-matrix is called a minor.**

This may sound intimidating, but it is actually very simple. We will split it into a few steps:

**a) Create all sub-matrices**

Each sub-matrix is created by crossing out one row (the i-th) and one column (the j-th) of the source matrix; let's call it S_{i,j}.
Example of calculating a sub-matrix:

Source Matrix

9 | 1 | 5 | 2 |

3 | -8 | 8 | 4 |

-3 | 7 | -3 | -2 |

0 | -4 | -1 | 6 |

Sub-matrix S_{2,3} is created by removing the second row and the third column:

9 | 1 | * | 2 |

* | * | * | * |

-3 | 7 | * | -2 |

0 | -4 | * | 6 |

=>

9 | 1 | 2 |

-3 | 7 | -2 |

0 | -4 | 6 |

Repeat this operation to create the other sub-matrices.
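The crossing-out step is easy to code. A plain-Python sketch (the helper name submatrix is mine; indices are 1-based to match the text):

```python
def submatrix(M, i, j):
    # remove the i-th row and the j-th column (1-based, as in the text)
    return [[v for cj, v in enumerate(row, start=1) if cj != j]
            for ri, row in enumerate(M, start=1) if ri != i]

S = [[9, 1, 5, 2], [3, -8, 8, 4], [-3, 7, -3, -2], [0, -4, -1, 6]]
print(submatrix(S, 2, 3))  # [[9, 1, 2], [-3, 7, -2], [0, -4, 6]]
```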

**b) Calculate the minor matrix**

The minor M_{i,j} is calculated as the determinant of the corresponding sub-matrix S_{i,j}:

M_{i,j}= det(S_{i,j});

For our sample S_{2,3} this is calculated as:

det

9 | 1 | 2 |

-3 | 7 | -2 |

0 | -4 | 6 |

=

9*7*6+1*(-2)*0+2*(-3)*(-4)-2*7*0-1*(-3)*6-9*(-2)*(-4)

=

348

**c) Calculate the cofactor matrix**

This matrix is created from the minor matrix by multiplying each value by (-1)^{i + j}:

C_{i,j}= (-1)^{i + j}M_{i,j}

Again for our sample we will calculate it:

C_{2,3} = (-1)^{2 + 3} M_{2,3} = (-1)^{5} * 348 = -348

This operation reduces the matrix size by one in each dimension; we can keep reducing until we reach size 2x2 or 3x3 and then use the formulas we already know.

Now we can go back to the **Laplace expansion**.

First we pick either a row or a column to expand along. The best option is the row/column with the most zero values, because each value will be used as a weight; when a weight equals 0 we can skip that sub-matrix entirely.

For expanding along the i-th row we use:
- weights a_{i,1}, a_{i,2} ... a_{i,N}
- each weight is multiplied by the cofactor obtained by removing the i-th row and j-th column

For expanding along the j-th column we use:
- weights a_{1,j}, a_{2,j} ... a_{N,j}
- each weight is multiplied by the cofactor obtained by removing the i-th row and j-th column

This is an example for our previous matrix (before the reduction):

-2 | 1 | 3 | -2 |

9 | 3 | -3 | 15 |

10 | -2 | 1 | 12 |

-20 | 4 | 7 | 9 |

We will expand along the second row (9, 3, -3, 15); we need four cofactors C_{2,1}, C_{2,2}, C_{2,3}, C_{2,4}:

9*(-1)^{2+1}

det

* | 1 | 3 | -2 |

* | * | * | * |

* | -2 | 1 | 12 |

* | 4 | 7 | 9 |

+

3*(-1)^{2+2}

det

-2 | * | 3 | -2 |

* | * | * | * |

10 | * | 1 | 12 |

-20 | * | 7 | 9 |

+

-3*(-1)^{2+3}

det

-2 | 1 | * | -2 |

* | * | * | * |

10 | -2 | * | 12 |

-20 | 4 | * | 9 |

+

15*(-1)^{2+4}

det

-2 | 1 | 3 | * |

* | * | * | * |

10 | -2 | 1 | * |

-20 | 4 | 7 | * |

At this point we have four 3x3 matrices; let's compute the det for each one:

det

1 | 3 | -2 |

-2 | 1 | 12 |

4 | 7 | 9 |

=

1 * 1 * 9 + 3 * 12 * 4 + (-2) * (-2) * 7 - (-2) * 1 * 4 - 3 * (-2) * 9 - 1 * 12 * 7

=

159;

det

-2 | 3 | -2 |

10 | 1 | 12 |

-20 | 7 | 9 |

=

-1020;

det

-2 | 1 | -2 |

10 | -2 | 12 |

-20 | 4 | 9 |

=

-198;

det

-2 | 1 | 3 |

10 | -2 | 1 |

-20 | 4 | 7 |

=

-54;

Now we go back to our formula and insert the det values:

det A = -9 * 159 + 3 * (-1020) + 3 * (-198) + 15 * (-54) = -5895

That's it. Let's try to do the same for the matrix after the reduction:

-5 | 1 | 4 | -7 |

0 | 3 | 0 | 0 |

16 | -2 | -1 | 22 |

-32 | 4 | 11 | -11 |

We will expand along the same row. This time it has only one non-zero value, which means we have only one sub-matrix, because the others would be multiplied by 0:

3*(-1)^{2+2}

det

-5 | * | 4 | -7 |

* | * | * | * |

16 | * | -1 | 22 |

-32 | * | 11 | -11 |

The det for this matrix is calculated by Sarrus' rule and is equal to -1965. The result for our formula:

det A = 3 * (-1965) = -5895

This confirms that both methods lead to the same result.
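The whole procedure collapses into a short recursive function. A plain-Python sketch (always expanding along the first row for simplicity; the function name determinant is mine):

```python
def determinant(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: remove the first row and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # weight * cofactor sign * minor determinant
        total += (-1) ** j * M[0][j] * determinant(minor)
    return total

M = [[-2, 1, 3, -2], [9, 3, -3, 15], [10, -2, 1, 12], [-20, 4, 7, 9]]
print(determinant(M))  # -5895
```

A production version would pick the row or column with the most zeros, as described above, instead of always using the first row.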

There is no division by a matrix; instead we multiply by the inverse matrix. For ordinary numbers, a / b means the solution to the equation x * b = a.
This is the same as b * x = a for numbers, but not for matrices, since matrix multiplication is noncommutative.

For matrices we have to distinguish: X * B = A gives X = A * B^{-1}, while B * X = A gives X = B^{-1} * A.

To calculate the inverse A^{-1} of a matrix A we can use the formula:

A^{-1} =

C^{T}

/

det(A)

"C" is the cofactor matrix that we already know from "7. Matrix Determinant". The transpose of this matrix is called the **adjugate** matrix.

We will try to calculate the inverse of:

-2 | 1 | -3 |

-1 | 4 | 2 |

-4 | 2 | -5 |

C_{1,1}

=

(-1)^{1+1}

*

det

* | * | * |

* | 4 | 2 |

* | 2 | -5 |

=

det

4 | 2 |

2 | -5 |

=

4*(-5)-2*2

=

-24

C_{1,2}

=

(-1)^{1+2}

*

det

* | * | * |

-1 | * | 2 |

-4 | * | -5 |

=

-1

*

det

-1 | 2 |

-4 | -5 |

=

(-1)

*

((-1)*(-5)-2*(-4))

=

-13

C_{1,3}

=

(-1)^{1+3}

*

det

* | * | * |

-1 | 4 | * |

-4 | 2 | * |

=

det

-1 | 4 |

-4 | 2 |

=

14

C_{2,1}

=

(-1)^{2+1}

*

det

* | 1 | -3 |

* | * | * |

* | 2 | -5 |

=

(-1)

*

det

1 | -3 |

2 | -5 |

=

-1

C_{2,2}

=

(-1)^{2+2}

*

det

-2 | * | -3 |

* | * | * |

-4 | * | -5 |

=

det

-2 | -3 |

-4 | -5 |

=

-2

C_{2,3}

=

(-1)^{2+3}

*

det

-2 | 1 | * |

* | * | * |

-4 | 2 | * |

=

(-1)

*

det

-2 | 1 |

-4 | 2 |

=

0

C_{3,1}

=

(-1)^{3+1}

*

det

* | 1 | -3 |

* | 4 | 2 |

* | * | * |

=

det

1 | -3 |

4 | 2 |

=

14

C_{3,2}

=

(-1)^{3+2}

*

det

-2 | * | -3 |

-1 | * | 2 |

* | * | * |

=

(-1)

*

det

-2 | -3 |

-1 | 2 |

=

7

C_{3,3}

=

(-1)^{3+3}

*

det

-2 | 1 | * |

-1 | 4 | * |

* | * | * |

=

-7

This is our cofactor matrix:

-24 | -13 | 14 |

-1 | -2 | 0 |

14 | 7 | -7 |

The transpose of this matrix (the adjugate):

-24 | -1 | 14 |

-13 | -2 | 7 |

14 | 0 | -7 |

The last component is the determinant of our initial matrix (computed with Sarrus' rule):

det

-2 | 1 | -3 |

-1 | 4 | 2 |

-4 | 2 | -5 |

=

-7

Our inverse matrix is computed by dividing each element of the adjugate matrix by det(A). The result:

3.4285 | 0.1428 | -2 |

1.8571 | 0.2857 | -1 |

-2 | 0 | 1 |

To validate the result we can multiply the source matrix by our result matrix; the product should be the identity matrix:

C_{1,1} = -2 * 3.4285 + 1 * 1.8571 -3 * (-2) = 1.0001

C_{1,2} = -2 * 0.1428 + 1 * 0.2857 -3 * 0 = 0.0001

C_{1,3} = -2 * (-2) + 1 * (-1) -3 * 1 = 0.0

C_{2,1} = -1 * 3.4285 + 4 * 1.8571 +2 * (-2) = -0.0001

C_{2,2} = -1 * 0.1428 + 4 * 0.2857 +2 * 0 = 1

C_{2,3} = -1 * (-2) + 4 * (-1) +2 * 1 = 0.0

C_{3,1} = -4 * 3.4285 + 2 * 1.8571 -5 * (-2) = 0.0002

C_{3,2} = -4 * 0.1428 + 2 * 0.2857 -5 * 0 = 0.0002

C_{3,3} = -4 * (-2) + 2 * (-1) -5 * 1 = 1

Output matrix:

1.0001 | 0.0001 | 0.0 |

-0.0001 | 1 | 0.0 |

0.0002 | 0.0002 | 1 |

This is the identity matrix, up to rounding error.
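The full inversion (cofactors, adjugate, division by the determinant) can be sketched in plain Python. Using Fraction from the standard library avoids the rounding error seen above (the helper names det and inverse are mine):

```python
from fractions import Fraction

def det(M):
    # recursive Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def inverse(M):
    n = len(M)
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular, no inverse exists")
    # cofactor matrix: C[i][j] = (-1)^(i+j) * det(sub-matrix without row i, column j)
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                   for k, r in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    # adjugate = transpose of the cofactor matrix; inverse = adjugate / det
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[-2, 1, -3], [-1, 4, 2], [-4, 2, -5]]
Ainv = inverse(A)
print(Ainv[0])  # [Fraction(24, 7), Fraction(1, 7), Fraction(-2, 1)]
```

Multiplying A by the result gives the exact identity matrix, with no rounding error.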