
Elementary transformations of matrices and their properties

Elementary matrix transformations are transformations of a matrix that preserve matrix equivalence. In particular, elementary transformations do not change the solution set of the system of linear algebraic equations that the matrix represents.

Elementary transformations are used in the Gauss method to bring a matrix to triangular or step (echelon) form.

Definition

The following operations are called elementary row transformations:
  • permutation of two rows;
  • multiplication of a row by a non-zero constant;
  • addition to one row of another row multiplied by a constant.

In some linear algebra courses, the permutation of matrix rows is not singled out as a separate elementary transformation, because a permutation of any two rows of a matrix can be obtained by combining multiplications of a row by a constant k, k ≠ 0, with additions to a row of another row multiplied by a constant k.

Elementary column transformations are defined analogously.

Elementary transformations are invertible.

The notation A ∼ B indicates that the matrix A can be obtained from B by elementary transformations (and vice versa).

Properties

Rank invariance under elementary transformations

Theorem (on rank invariance under elementary transformations).
If A ∼ B, then rank A = rank B.

Equivalence of SLAE under elementary transformations

We call the following operations elementary transformations of a system of linear algebraic equations:
  • permutation of equations;
  • multiplying an equation by a non-zero constant;
  • addition of one equation to another, multiplied by some constant.
That is, they are elementary transformations of the system's augmented matrix. Then the following assertion is true: elementary transformations take a system to an equivalent one. Recall that two systems are said to be equivalent if their solution sets coincide.
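The correspondence between transformations of a system and row transformations of its augmented matrix can be sketched as follows (the system x + 2y = 5, 3x + 4y = 11 is a hypothetical example; exact fractions are used to avoid rounding):

```python
from fractions import Fraction

# Hypothetical 2x2 system: x + 2y = 5, 3x + 4y = 11 (solution x = 1, y = 2).
# Augmented matrix [A | b]; each elementary transformation below leaves
# the solution set unchanged.
aug = [[Fraction(1), Fraction(2), Fraction(5)],
       [Fraction(3), Fraction(4), Fraction(11)]]

# Add (-3) * row 0 to row 1: eliminates x from the second equation.
aug[1] = [aug[1][j] + Fraction(-3) * aug[0][j] for j in range(3)]
# Multiply row 1 by -1/2: scales the second equation.
aug[1] = [Fraction(-1, 2) * v for v in aug[1]]

# Back-substitute: the transformed (triangular) system has the same solution.
y = aug[1][2]
x = aug[0][2] - aug[0][1] * y
print(x, y)  # 1 2
```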

Finding inverse matrices

Theorem (on finding the inverse matrix).
Let the determinant of the matrix A (n × n) be non-zero, and let the matrix B be defined by the expression B = [A | E] (n × 2n). Then, when elementary row transformations reduce the block A to the identity matrix E within B, the block E is simultaneously transformed into A^(-1).
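The theorem suggests a concrete procedure: form B = [A | E], row-reduce the left block to E, and read A^(-1) off the right block. A minimal sketch with exact fractions (the 2×2 matrix is a hypothetical example):

```python
from fractions import Fraction

def inverse_via_gauss_jordan(a):
    """Form B = [A | E] and row-reduce A to E; the E block then becomes A^(-1)."""
    n = len(a)
    b = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Find a pivot row (det != 0 guarantees one exists).
        piv = next(r for r in range(col, n) if b[r][col] != 0)
        b[col], b[piv] = b[piv], b[col]
        # Scale the pivot row so the pivot equals 1.
        p = b[col][col]
        b[col] = [v / p for v in b[col]]
        # Eliminate the pivot column in all other rows.
        for r in range(n):
            if r != col:
                f = b[r][col]
                b[r] = [v - f * w for v, w in zip(b[r], b[col])]
    return [row[n:] for row in b]

inv = inverse_via_gauss_jordan([[2, 1], [1, 1]])
print([[int(v) for v in row] for row in inv])  # [[1, -1], [-1, 2]]
```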

Elementary Matrix Transformations

Elementary matrix transformations are widely used in various mathematical problems. For example, they form the basis of the well-known Gauss method (method of eliminating unknowns) for solving a system of linear equations.

The elementary transformations are:
1) permutation of two rows (columns);
2) multiplication of all elements of the row (column) of the matrix by some number that is not equal to zero;
3) addition to one row (column) of the matrix of another row (column) multiplied by some number.
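The three transformations can be written as small routines operating on a matrix stored as a list of rows (the matrix values are hypothetical examples):

```python
def swap_rows(m, i, j):
    """Transformation 1: interchange rows i and j."""
    m[i], m[j] = m[j], m[i]

def scale_row(m, i, k):
    """Transformation 2: multiply every element of row i by k != 0."""
    m[i] = [k * v for v in m[i]]

def add_row_multiple(m, i, j, k):
    """Transformation 3: add k times row j to row i."""
    m[i] = [a + k * b for a, b in zip(m[i], m[j])]

m = [[1, 2], [3, 4]]
swap_rows(m, 0, 1)             # [[3, 4], [1, 2]]
scale_row(m, 0, 2)             # [[6, 8], [1, 2]]
add_row_multiple(m, 1, 0, -1)  # [[6, 8], [-5, -6]]
print(m)  # [[6, 8], [-5, -6]]
```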

Two matrices are called equivalent if one of them can be obtained from the other by a finite number of elementary transformations. In general, equivalent matrices are not equal, but they have the same rank.

Calculation of determinants using elementary transformations

Using elementary transformations, it is easy to calculate a determinant. For example, suppose we need to calculate the determinant of the matrix:

where a11 ≠ 0.
Then the multiplier a11 can be taken out:

now, subtracting from the elements of the j-th column the corresponding elements of the first column multiplied by a suitable factor, we get the determinant:

which is of order n - 1. We then repeat the same steps for it, and if all the successive upper-left elements are non-zero, we finally obtain the determinant as the product of the factored-out elements.

If for some intermediate determinant it turns out that its upper-left element is zero, then the rows or columns must be rearranged so that the new upper-left element is not equal to zero. If ∆ ≠ 0, this can always be done. It should be kept in mind that each such rearrangement of two rows or columns changes the sign of the determinant, so the sign of the final result must be adjusted accordingly.
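The procedure just described can be sketched in code: eliminate below each pivot, multiply the pivots into the determinant, and flip the sign on every row swap. A sketch using exact rational arithmetic (the test matrices are hypothetical examples):

```python
from fractions import Fraction

def det_by_elimination(a):
    """Determinant via elementary row transformations (Gaussian elimination)."""
    m = [[Fraction(x) for x in row] for row in a]
    n, det = len(m), Fraction(1)
    for col in range(n):
        # Find a row with a non-zero element in this column.
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return Fraction(0)      # no pivot: the determinant is zero
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det              # each row swap flips the sign
        det *= m[col][col]          # accumulate the pivot
        # Subtract multiples of the pivot row from the rows below it
        # (this transformation does not change the determinant).
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return det

print(det_by_elimination([[0, 2], [3, 4]]))  # -6
```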

Example. Using elementary transformations, reduce the matrix

Let us introduce the concept of an elementary matrix.

DEFINITION. The square matrix obtained from the identity matrix as a result of a non-singular elementary transformation over rows (columns) is called the elementary matrix corresponding to this transformation.

For example, the second-order elementary matrices are the matrices

where A is any nonzero scalar.

The elementary matrix is ​​obtained from the identity matrix E as a result of one of the following non-singular transformations:

1) multiplication of a row (column) of the matrix E by a non-zero scalar;

2) addition (or subtraction) to any row (column) of the matrix E of another row (column), multiplied by a scalar.

Denote by the matrix obtained from the matrix E as a result of multiplying a row by a non-zero scalar A:

Denote by the matrix obtained from the matrix E as a result of adding (subtracting) to the row of the row multiplied by A;

We will also denote by the matrix obtained from the identity matrix E as a result of applying an elementary row transformation; it is thus the matrix corresponding to that transformation.

Consider some properties of elementary matrices.

PROPERTY 2.1. Any elementary matrix is ​​invertible. A matrix inverse to an elementary one is elementary.

Proof. A direct verification shows that, for any non-zero scalar A and arbitrary indices, the following equalities hold:

Based on these equalities, we conclude that property 2.1 holds.

PROPERTY 2.2. The product of elementary matrices is an invertible matrix.

This property follows directly from Property 2.1 and Corollary 2.3.

PROPERTY 2.3. If a non-singular elementary row transformation transforms the matrix A into the matrix B, then . The converse statement is also true.

Proof. If the transformation is a multiplication of a row by a non-zero scalar A, then

If , then

It is easy to verify that the converse is also true.
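Property 2.3 can be illustrated numerically: applying a row transformation to A gives the same result as multiplying A on the left by the elementary matrix obtained by applying that transformation to E. A minimal sketch (the matrices are hypothetical examples):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Elementary matrix for the row transformation "row 1 += 5 * row 0":
# apply that transformation to the identity matrix E.
E = identity(3)
E[1][0] = 5

A = [[1, 2], [3, 4], [5, 6]]
B = matmul(E, A)
# B equals A with the same row transformation applied directly.
print(B)  # [[1, 2], [8, 14], [5, 6]]
```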

PROPERTY 2.4. If the matrix C is obtained from the matrix A using a chain of nonsingular row elementary transformations , then . The converse is also true.

Proof. By property 2.3, the transformation transforms the matrix A into a matrix, transforms the matrix into a matrix, etc. Finally, transforms the matrix into a matrix Therefore, .

It is easy to verify that the converse is also true. Matrix invertibility conditions. The following three lemmas are needed to prove Theorem 2.8.

LEMMA 2.4. A square matrix with a zero row (column) is not invertible.

Proof. Let A be a square matrix with a zero row, and let B be any matrix. Let the i-th row of the matrix A be zero; then

i.e., the i-th row of the matrix AB is zero. Therefore, the matrix A is not invertible.

LEMMA 2.5. If the rows of a square matrix are linearly dependent, then the matrix is ​​not invertible.

Proof. Let A be a square matrix with linearly dependent rows. Then there is a chain of non-singular elementary row transformations that transforms A into a step matrix with a zero row. By Property 2.4 of elementary matrices, we have

where C is a matrix with a zero row.

Therefore, by Lemma 2.4, the matrix C is not invertible. On the other hand, if the matrix A were invertible, then the product on the left in equality (1) would be an invertible matrix, as a product of invertible matrices (see Corollary 2.3), which is impossible. Therefore, the matrix A is not invertible.


Reduction of matrices to stepped form

Let us introduce the concept of a step matrix: a matrix is in step form if its zero rows (if any) are at the bottom and the first non-zero element of each non-zero row stands strictly to the right of the first non-zero element of the preceding row. Then the following statement is true: any matrix can be reduced to step form by elementary transformations.

Related definitions

Elementary matrix. A matrix is called elementary if multiplying an arbitrary matrix B by it on the left performs an elementary row transformation of B.





Definition 5.8. The following transformations are called elementary transformations of matrix rows:

1) multiplication of a matrix row by a non-zero real number;

2) adding to one row of the matrix its other row, multiplied by an arbitrary real number.

Lemma 5.1. With the help of elementary transformations of matrix rows, any two rows can be swapped.

Proof. Let the rows to be interchanged be the i-th and the j-th. Add the j-th row to the i-th; then subtract the new i-th row from the j-th; then add the new j-th row to the i-th again; finally, multiply the j-th row by -1. Writing out the pair of rows after each step gives

(ri, rj) → (ri + rj, rj) → (ri + rj, -ri) → (rj, -ri) → (rj, ri),

so the two rows are interchanged.
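The chain of transformations in this proof can be traced in code (a hypothetical 2×2 example):

```python
def add_row_multiple(m, i, j, k):
    """Elementary transformation 2): row i += k * row j."""
    m[i] = [a + k * b for a, b in zip(m[i], m[j])]

def scale_row(m, i, k):
    """Elementary transformation 1): multiply row i by k != 0."""
    m[i] = [k * v for v in m[i]]

m = [[1, 2], [3, 4]]
add_row_multiple(m, 0, 1, 1)   # row0 = row0 + row1  -> [4, 6]
add_row_multiple(m, 1, 0, -1)  # row1 = row1 - row0  -> [-1, -2]
add_row_multiple(m, 0, 1, 1)   # row0 = row0 + row1  -> [3, 4]
scale_row(m, 1, -1)            # row1 *= -1          -> [1, 2]
print(m)  # [[3, 4], [1, 2]] -- the two rows are swapped
```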

Step matrices. Matrix rank

Definition 5.9. We will call a matrix a step matrix if it has the following properties:

1) if the i-th row is zero, then the (i + 1)-th row is also zero;

2) if the first non-zero elements of the i-th and (i + 1)-th rows are located in columns with numbers k and r, respectively, then k < r.

Condition 2) requires a strict increase in the number of leading zeros when passing from the i-th row to the (i + 1)-th row. For example, the matrices

A1 = , A2 = , A3 =

are step matrices, and the matrices

B1 = , B2 = , B3 =

are not.

Theorem 5.1. Any matrix can be reduced to a step matrix using elementary row transformations.

Let's illustrate this theorem with an example.

A =

.

The resulting matrix is ​​a step matrix.

Definition 5.10. The rank of a matrix is the number of non-zero rows in a step form of this matrix.

For example, the rank of matrix A in the previous example is 3.
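Theorem 5.1 and Definition 5.10 together give an algorithm for the rank: reduce the matrix to step form by elementary row transformations and count the non-zero rows. A sketch with exact fractions (the test matrix is a hypothetical example):

```python
from fractions import Fraction

def rank(a):
    """Reduce to step (echelon) form and count the non-zero rows."""
    m = [[Fraction(x) for x in row] for row in a]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # Find a row at or below r with a non-zero entry in column c.
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # Zero out column c below the pivot row.
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [v - f * w for v, w in zip(m[i], m[r])]
        r += 1
    return r  # number of non-zero rows in the step form

print(rank([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))  # 2
```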

Questions for self-control

1. What is called a matrix?

2. How are matrices added and subtracted, and how is a matrix multiplied by a number?

3. Define matrix multiplication.

4. What matrix is ​​called transposed?

5. What transformations of matrix rows are called elementary?

6. Define step matrix.

7. What is called the rank of a matrix?

Determinants

Calculation of determinants

Second order determinants

Consider a second-order square matrix

Definition 6.1. The second-order determinant corresponding to the matrix A is the number calculated by the formula

|A| = a11 a22 - a12 a21.

The elements aij are called the elements of the determinant |A|; the elements a11, a22 form the main diagonal, and the elements a12, a21 the secondary diagonal.

Example. = –28 + 6 = –22.
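The formula translates into a one-line function. The matrix below is a hypothetical one chosen so the arithmetic matches the example in the text (-28 + 6 = -22); the original matrix is not shown in the source:

```python
def det2(a):
    # |A| = a11*a22 - a12*a21: main-diagonal product minus secondary-diagonal product
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

# Hypothetical matrix consistent with the text's arithmetic: 7*(-4) - 2*(-3) = -28 + 6
print(det2([[7, 2], [-3, -4]]))  # -22
```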

Third order determinants

Consider a square matrix of the third order

A = .

Definition 6.2. The third-order determinant corresponding to the matrix A is the number calculated by the formula

|A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 - a11 a23 a32 - a12 a21 a33.

To remember which products on the right-hand side are taken with a plus sign and which with a minus sign, it is useful to use the so-called triangle rule:

Example.

1) = –4 + 0 + 4 – 0 + 2 + 6 = 8.

2) = 1, i.e. |E3| = 1.

Consider another way to calculate the third-order determinant.

Definition 6.3. The minor Mij of the element aij of a determinant is the determinant obtained from the given one by deleting the i-th row and the j-th column. The algebraic complement Aij of the element aij of the determinant is its minor Mij taken with the sign (-1)^(i+j).

Example. Calculate the minor M23 and the algebraic complement A23 of the element a23 in the matrix

Calculate the minor M23:

M23 = -6 + 4 = -2.

Then A23 = (-1)^(2+3) M23 = 2.
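Definition 6.3 maps directly onto code. The 3×3 matrix below is a hypothetical one consistent with the computed values M23 = -2 and A23 = 2 (the source does not show the matrix itself); note the indices in the code are 0-based:

```python
def minor_det(a, i, j):
    """Minor M_ij of a 3x3 determinant: delete row i and column j, take the 2x2 determinant."""
    sub = [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(a, i, j):
    """Algebraic complement A_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * minor_det(a, i, j)

# Hypothetical matrix: deleting row 1 and column 2 leaves [[2, 2], [-2, -3]],
# whose determinant is -6 + 4 = -2, matching the text.
A = [[2, 2, 1], [0, 1, 0], [-2, -3, 5]]
print(minor_det(A, 1, 2), cofactor(A, 1, 2))  # -2 2
```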

Theorem 6.1. The third-order determinant is equal to the sum of the products of the elements of any row (column) and their algebraic complements.

Proof. By definition,

= . (6.1)

Let us choose, for example, the second row and find the algebraic complements A21, A22, A23:

A21 = (-1)^(2+1) M21 = -M21,

A22 = (-1)^(2+2) M22 = M22,

A23 = (-1)^(2+3) M23 = -M23.

We now transform formula (6.1):

|A| = a21 A21 + a22 A22 + a23 A23.

The formula |A| = a21 A21 + a22 A22 + a23 A23 is called the expansion of the determinant |A| along the elements of the second row. A similar expansion can be obtained along the elements of any other row or of any column.

Example.

= (by the elements of the second column) = 1 × (-1)^(1+2) + 2 × (-1)^(2+2) + (-1) × (-1)^(3+2) = -(0 + 15) + 2(-2 + 20) + (-6 + 0) = -15 + 36 - 6 = 15.

6.1.3 Determinants of the n-th order (n ∈ N)

Definition 6.4. The determinant of the n-th order corresponding to the n-th order matrix

A =

is the number equal to the sum of the products of the elements of any row (column) and their algebraic complements, i.e.

|A| = ai1 Ai1 + ai2 Ai2 + … + ain Ain = a1j A1j + a2j A2j + … + anj Anj.

It is easy to see that for n = 2 we obtain the formula for the second-order determinant. If n = 1, then by definition we set |A| = |a| = a.

Example. = (by the elements of the 4th row) = 3 × (-1)^(4+2) + 2 × (-1)^(4+4) = 3(-6 + 20 - 2 - 32) + 2(-6 + 16 + 60 + 2) = 3(-20) + 2 × 72 = -60 + 144 = 84.

Note that if in the determinant all elements of any row (column), except for one, are equal to zero, then when calculating the determinant, it is convenient to expand it over the elements of this row (column).
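Definition 6.4 translates directly into a recursive procedure; following the remark above, the sketch below expands along the row containing the most zeros (the test matrices are hypothetical examples):

```python
def minor(a, i, j):
    """Delete row i and column j."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """n-th order determinant via cofactor expansion along a row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    # Expand along the row with the most zeros: each zero term can be skipped.
    i = max(range(n), key=lambda r: a[r].count(0))
    return sum(a[i][j] * (-1) ** (i + j) * det(minor(a, i, j))
               for j in range(n) if a[i][j] != 0)

print(det([[2, 0, 0], [1, 3, 0], [4, 5, 6]]))  # 36
```

For a triangular matrix the result is the product of the diagonal elements, in agreement with Property 6.1 below.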

Example.

|En| = 1 × |En-1| = … = |E3| = 1.

Properties of determinants

Definition 6.5. A matrix in which all elements below the main diagonal are zero, or all elements above the main diagonal are zero, is called a triangular matrix.

Property 6.1. The determinant of a triangular matrix is equal to the product of the elements of the main diagonal, i.e. |A| = a11 a22 … ann.

Property 6.2. The determinant of a matrix with a zero row or a zero column is equal to zero.

Property 6.3. The determinant of a matrix does not change under transposition, i.e. |A| = |A^T|.

Property 6.4. If the matrix B is obtained from the matrix A by multiplying each element of some row by a number k, then |B| = k|A|.

Property 6.5. If each element of some row of a determinant is the sum of two terms, then the determinant is equal to the sum of two determinants, in which that row is replaced by the first terms and by the second terms, respectively.

Property 6.6. If the matrix B is obtained from the matrix A by interchanging two rows, then |B| = -|A|.

Property 6.7. The determinant of a matrix with proportional rows is equal to zero, in particular, the determinant of a matrix with two identical rows is equal to zero.

Property 6.8. The determinant of the matrix does not change if the elements of one row are added to the elements of another row of the matrix, multiplied by some number.

Comment 6.1. Since, by Property 6.3, the determinant of a matrix does not change under transposition, all the properties stated above for rows also hold for columns.

Property 6.9. If A and B are square matrices of order n, then |AB| = |A||B|.
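Properties 6.3, 6.6, and 6.9 can be spot-checked for second-order matrices (the matrices A and B are hypothetical examples):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[a[i][0] * b[0][j] + a[i][1] * b[1][j] for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # |A| = -2
B = [[0, 1], [5, 6]]   # |B| = -5

assert det2(matmul2(A, B)) == det2(A) * det2(B)           # Property 6.9
assert det2([B[1], B[0]]) == -det2(B)                     # Property 6.6: row swap flips the sign
T = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]              # transpose of A
assert det2(T) == det2(A)                                 # Property 6.3
```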

Inverse matrix

Definition 6.6. A square matrix A of order n is called invertible if there exists a matrix B such that AB = BA = En. In this case the matrix B is called the inverse of A and is denoted A^(-1).

Theorem 6.2. The following statements are true:

1) if the matrix A is invertible, then there is exactly one matrix inverse to it;

2) an invertible matrix has a non-zero determinant;

3) if A and B are invertible matrices of order n, then the matrix AB is invertible and (AB)^(-1) = B^(-1) A^(-1).

Proof.

1. Let B and C be matrices inverse to the matrix A, i.e. AB = BA = En and AC = CA = En. Then B = B En = B(AC) = (BA)C = En C = C.

2. Let the matrix A be invertible. Then there exists its inverse A^(-1), and

A A^(-1) = En.

By Property 6.9 of determinants, |A A^(-1)| = |A||A^(-1)|. Then |A||A^(-1)| = |En|, whence |A||A^(-1)| = 1. Therefore, |A| ≠ 0.

3. Indeed,

(AB)(B^(-1) A^(-1)) = (A(B B^(-1))) A^(-1) = (A En) A^(-1) = A A^(-1) = En,

(B^(-1) A^(-1))(AB) = (B^(-1)(A^(-1) A))B = (B^(-1) En)B = B^(-1) B = En.
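Statement 3) of Theorem 6.2 can be verified numerically using the explicit second-order inverse formula (the matrices A and B are hypothetical examples; this is a sketch, not part of the proof):

```python
from fractions import Fraction

def inv2(m):
    # Explicit 2x2 inverse: A^(-1) = (1/|A|) * [[a22, -a12], [-a21, a11]]
    d = Fraction(m[0][0] * m[1][1] - m[0][1] * m[1][0])
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

def matmul2(a, b):
    return [[a[i][0] * b[0][j] + a[i][1] * b[1][j] for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]
B = [[1, 1], [0, 1]]
lhs = inv2(matmul2(A, B))      # (AB)^(-1)
rhs = matmul2(inv2(B), inv2(A))  # B^(-1) A^(-1) -- note the reversed order
assert lhs == rhs
```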

Questions for self-control

1. What is called a determinant?

2. What are its main properties?

3. What is called a minor and an algebraic complement?

4. What are the ways to calculate determinants (of the second, third, and n-th orders)?

5. What matrix is ​​called square?

