Invertible matrices. Higher mathematics

This topic is one of the most hated among students. Worse, probably, are only quantifiers.

The trick is that the very concept of an inverse element (and I am not just talking about matrices) takes us back to the operation of multiplication. Even in the school curriculum multiplication is considered a tricky operation, and matrix multiplication is a separate topic altogether, to which I have a whole section and a video lesson dedicated.

Today we will not go into the details of matrix calculations. Let's just recall how matrices are denoted, how they are multiplied, and what follows from this.

Review: Matrix Multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[A=\underbrace{\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \\ \end{matrix} \right]}_{n}\]

To avoid accidentally mixing up rows and columns (believe me, in an exam you can confuse a one with a two, let alone a row or a column), just look at the picture:

Determining indices for matrix cells

What's happening? If you place the standard coordinate system $OXY$ in the upper left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with coordinates $\left(x;y \right)$ - this will be the row number and column number.

Why is the coordinate system placed in the upper left corner? Yes, because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis directed downwards and not to the right? Again, it's simple: take a standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it covers the matrix. This is a 90 degree clockwise rotation - we see the result in the picture.

In general, we have figured out how to determine the indices of matrix elements. Now let's look at multiplication.

Definition. Matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, i.e. such that the number of columns of the first coincides with the number of rows of the second, are called consistent.

Exactly in that order. To put it another way: the matrices $A$ and $B$ form an ordered pair $\left( A;B \right)$; if they are consistent in this order, it is not at all necessary that the reverse pair $\left( B;A \right)$ is consistent too.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$, whose elements $c_{ij}$ are calculated according to the formula:

\[c_{ij}=\sum\limits_{s=1}^{n}{a_{is}\cdot b_{sj}}\]

In other words: to get the element $c_{ij}$ of the matrix $C=A\cdot B$, take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply their elements pairwise, and then add up the results.

Yes, that’s such a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication, generally speaking, is non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left( A\cdot B \right)\cdot C=A\cdot \left( B\cdot C \right)$;
  3. It is also distributive on the right: $\left( A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And distributive on the left: $A\cdot \left( B+C \right)=A\cdot B+A\cdot C$.
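A quick sketch of this multiplication rule in code may help. This is an illustrative snippet, not part of the lesson: the name `matmul` and the list-of-lists representation are my own choices.

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x k matrix B (lists of lists)."""
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "matrices must be consistent"
    # c_ij is the pairwise product of the i-th row of A and j-th column of B, summed
    return [[sum(A[i][s] * B[s][j] for s in range(n)) for j in range(k)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

# Non-commutativity in action: A*B swaps the columns of A, B*A swaps its rows.
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]
```

Note how the same pair of matrices gives two different products depending on the order, exactly as fact 1 above warns.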

Distributivity had to be stated separately for the left and right factors precisely because multiplication is non-commutative.

If it does turn out that $A\cdot B=B\cdot A$, such matrices are said to commute.

Among all matrices, there are special ones that, when multiplied by any matrix $A$, give $A$ back again:

Definition. A matrix $E$ is called an identity matrix if $A\cdot E=A$ and $E\cdot A=A$. In the case of a square matrix $A$ we can write:

\[A\cdot E=E\cdot A=A\]

The identity matrix is a frequent guest when solving matrix equations. And in general, a frequent guest in the world of matrices. :)

And because of this $E$, someone came up with all the nonsense that will be written next.

What is an inverse matrix

Since matrix multiplication is a rather labor-intensive operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix also turns out to be non-trivial and requires some explanation.

Key Definition

Well, it's time to know the truth.

Definition. A matrix $B$ is called the inverse of a matrix $A$ if

\[A\cdot B=B\cdot A=E\]

The inverse matrix is denoted by $A^{-1}$ (not to be confused with a power!), so the definition can be rewritten as follows:

\[A\cdot A^{-1}=A^{-1}\cdot A=E\]

It would seem that everything is extremely simple and clear. But when analyzing this definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, then how do we determine when it exists and when it does not?
  2. And who said that there is exactly one such matrix? What if for some initial matrix $A$ there is a whole crowd of inverses?
  3. What do all these “inverses” look like? And how, exactly, should we compute them?

As for calculation algorithms, we will talk about this a little later. But we will answer the remaining questions right now. Let us formulate them in the form of separate statements-lemmas.

Basic properties

Let's start with how the matrix $A$ must look, in principle, for $A^{-1}$ to exist for it. We will now make sure that both of these matrices must be square, and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $A^{-1}$. Then both of these matrices are square, and of the same order $n$.

Proof. It's simple. Let the matrix $A=\left[ m\times n \right]$ and $A^{-1}=\left[ a\times b \right]$. Since the product $A\cdot A^{-1}=E$ exists by definition, the matrices $A$ and $A^{-1}$ are consistent in this order:

\[\begin{align} & \left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right] \\ & n=a \end{align}\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are “transit” and must be equal.

At the same time, the reverse product $A^{-1}\cdot A=E$ is also defined, therefore the matrices $A^{-1}$ and $A$ are consistent in that order as well:

\[\begin{align} & \left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right] \\ & b=m \end{align}\]

Thus, without loss of generality we may assume that $A=\left[ m\times n \right]$ and $A^{-1}=\left[ n\times m \right]$. However, by definition $A\cdot A^{-1}=A^{-1}\cdot A$, therefore the sizes of the matrices coincide exactly:

\[\begin{align} & \left[ m\times n \right]=\left[ n\times m \right] \\ & m=n \end{align}\]

So it turns out that all three matrices $A$, $A^{-1}$ and $E$ are square matrices of size $\left[ n\times n \right]$. The lemma is proven.

Well, that's already good. We see that only square matrices can be invertible. Now let's make sure that the inverse matrix is always unique.

Lemma 2. Given a matrix $A$ and its inverse $A^{-1}$. Then this inverse matrix is unique.

Proof. By contradiction: let the matrix $A$ have at least two inverses, $B$ and $C$. Then, by definition, the following equalities hold:

\[\begin{align} & A\cdot B=B\cdot A=E; \\ & A\cdot C=C\cdot A=E. \\ \end{align}\]

From Lemma 1 we conclude that all four matrices $A$, $B$, $C$ and $E$ are square of the same order $\left[ n\times n \right]$, so the product $B\cdot A\cdot C$ is defined. Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin{align} & B\cdot A\cdot C=\left( B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left( A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\Rightarrow B=C. \\ \end{align}\]

We got the only possible option: the two copies of the inverse matrix are equal. The lemma is proven.

The above argument repeats almost verbatim the proof of the uniqueness of the inverse element for all real numbers $b\ne 0$. The only significant addition is taking the dimensions of the matrices into account.

However, we still know nothing about whether every square matrix is invertible. Here the determinant comes to our aid: it is a key characteristic of all square matrices.

Lemma 3. Given a matrix $A$. If its inverse $A^{-1}$ exists, then the determinant of the original matrix is nonzero:

\[\left| A \right|\ne 0\]

Proof. We already know that $A$ and $A^{-1}$ are square matrices of size $\left[ n\times n \right]$. Therefore, for each of them we can calculate the determinant: $\left| A \right|$ and $\left| A^{-1} \right|$. But the determinant of a product is equal to the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\Rightarrow \left| A\cdot A^{-1} \right|=\left| A \right|\cdot \left| A^{-1} \right|\]

And by definition $A\cdot A^{-1}=E$, while the determinant of $E$ is always equal to 1, so

\[\begin{align} & A\cdot A^{-1}=E; \\ & \left| A\cdot A^{-1} \right|=\left| E \right|; \\ & \left| A \right|\cdot \left| A^{-1} \right|=1. \\ \end{align}\]

The product of two numbers equals one only if both of these numbers are nonzero:

\[\left| A \right|\ne 0;\quad \left| A^{-1} \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proven.

In fact, this requirement is quite logical. In a moment we will analyze the algorithm for finding the inverse matrix, and it will become completely clear why, with a zero determinant, no inverse matrix can exist in principle.

But first, let’s formulate an “auxiliary” definition:

Definition. A singular matrix is a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can claim that every invertible matrix is non-singular.

How to find the inverse of a matrix

Now let's consider a universal algorithm for finding inverse matrices. In fact, there are two generally accepted algorithms, and we will consider both of them today.

The one discussed first is very effective for matrices of size $\left[ 2\times 2 \right]$ and, partially, of size $\left[ 3\times 3 \right]$. But starting from size $\left[ 4\times 4 \right]$ it is better not to use it. Why: in a moment you will see for yourself.

Algebraic complements

Get ready: now there will be pain. No, don't worry: a beautiful nurse in a skirt and lace stockings will not come to give you an injection. Everything is much more prosaic: algebraic complements and Her Majesty the cofactor matrix are coming for you.

Let's start with the main thing. Let there be a square matrix $A=\left[ n\times n \right]$ whose elements are denoted $a_{ij}$. Then for each such element we can define an algebraic complement:

Definition. The algebraic complement $A_{ij}$ to the element $a_{ij}$, located in the $i$-th row and $j$-th column of the matrix $A=\left[ n\times n \right]$, is a construction of the form

\[A_{ij}={{\left( -1 \right)}^{i+j}}\cdot M_{ij}^{*}\]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting that same $i$-th row and $j$-th column.

Once again: the algebraic complement to the matrix element with coordinates $\left( i;j \right)$ is denoted $A_{ij}$ and is calculated according to the scheme:

  1. First, delete the $i$-th row and the $j$-th column from the original matrix. We obtain a new square matrix, and we denote its determinant by $M_{ij}^{*}$.
  2. Then multiply this determinant by ${{\left( -1 \right)}^{i+j}}$; at first this expression may seem mind-blowing, but in essence we are simply working out the sign in front of $M_{ij}^{*}$.
  3. Count and get a specific number. That is, the algebraic complement is precisely a number, not some new matrix, etc.

The determinant $M_{ij}^{*}$ itself is called the complementary minor to the element $a_{ij}$. And in this sense, the above definition of an algebraic complement is a special case of the more complex definition, the one we looked at in the lesson about the determinant.
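The three-step scheme can be sketched as a small recursive program. This is a hedged illustration: the helper names (`det`, `minor_det`, `cofactor`) are mine, and the indices here are 0-based, unlike the 1-based indices in the text.

```python
def minor_det(M, i, j):
    """Determinant of M with row i and column j deleted (the complementary minor)."""
    sub = [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]
    return det(sub)

def det(M):
    """Determinant by expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * minor_det(M, 0, j) for j in range(len(M)))

def cofactor(M, i, j):
    """Algebraic complement: sign (-1)^(i+j) times the complementary minor."""
    return (-1) ** (i + j) * minor_det(M, i, j)

A = [[1, -1, 2],
     [0,  2, -1],
     [1,  0, 1]]

print(cofactor(A, 0, 0))  # 2  (this is A_11 of the 3x3 example later on)
print(cofactor(A, 0, 1))  # -1
```

The recursion makes the cost explode for large matrices, which is exactly why the lesson warns against this route beyond $\left[ 3\times 3 \right]$.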

Important note. Actually, in “adult” mathematics, algebraic complements are defined as follows:

  1. Take $k$ rows and $k$ columns in a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$; its determinant is called a minor of order $k$ and is denoted $M_{k}$.
  2. Then cross out these “selected” $k$ rows and $k$ columns. Once again we get a square matrix; its determinant is called the complementary minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by ${{\left( -1 \right)}^{t}}$, where $t$ is (attention now!) the sum of the numbers of all selected rows and columns. This will be the algebraic complement.

Look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k=1$ we get only 2 terms: these are the same $i+j$, the “coordinates” of the element $a_{ij}$ for which we are looking for the algebraic complement.

So today we're using a slightly simplified definition. But as we will see later, it will be more than enough. The following thing is much more important:

Definition. The cofactor matrix $S$ (in some textbooks, the “allied matrix”) of a square matrix $A=\left[ n\times n \right]$ is a new matrix of size $\left[ n\times n \right]$, obtained from $A$ by replacing each element $a_{ij}$ with its algebraic complement $A_{ij}$:

\[A=\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \\ \end{matrix} \right]\Rightarrow S=\left[ \begin{matrix} A_{11} & A_{12} & \ldots & A_{1n} \\ A_{21} & A_{22} & \ldots & A_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ A_{n1} & A_{n2} & \ldots & A_{nn} \\ \end{matrix} \right]\]

The first thought that arises at the moment of realizing this definition is “how much will have to be counted!” Relax: you will have to count, but not that much. :)

Well, all this is very nice, but why is it necessary? But why.

Main theorem

Let's step back a little. Remember, Lemma 3 stated that an invertible matrix $A$ is always non-singular (that is, its determinant is nonzero: $\left| A \right|\ne 0$).

So, the converse is also true: if the matrix $A$ is non-singular, then it is always invertible. And there is even a scheme for finding $A^{-1}$. Check it out:

Inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given, and let its determinant be nonzero: $\left| A \right|\ne 0$. Then the inverse matrix $A^{-1}$ exists and is calculated by the formula:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}\]

And now - everything is the same, but in legible handwriting. To find the inverse matrix, you need:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Construct the cofactor matrix $S$, i.e. compute a whole pile of algebraic complements $A_{ij}$ and put each one in place of $a_{ij}$.
  3. Transpose this matrix $S$, and then multiply it by the number $q=1/\left| A \right|$.

That's all! The inverse matrix $A^{-1}$ has been found. Let's look at examples.

Task. Find the inverse matrix:

\[\left[ \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right]\]

Solution. Let's check invertibility by calculating the determinant:

\[\left| A \right|=\left| \begin{matrix} 3 & 1 \\ 5 & 2 \\ \end{matrix} \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is nonzero, so the matrix is invertible. Let's build the cofactor matrix by calculating the algebraic complements:

\[\begin{align} & A_{11}={{\left( -1 \right)}^{1+1}}\cdot \left| 2 \right|=2; \\ & A_{12}={{\left( -1 \right)}^{1+2}}\cdot \left| 5 \right|=-5; \\ & A_{21}={{\left( -1 \right)}^{2+1}}\cdot \left| 1 \right|=-1; \\ & A_{22}={{\left( -1 \right)}^{2+2}}\cdot \left| 3 \right|=3. \\ \end{align}\]

Please note: the determinants $\left| 2 \right|$, $\left| 5 \right|$, $\left| 1 \right|$ and $\left| 3 \right|$ are determinants of $\left[ 1\times 1 \right]$ matrices, not absolute values. That is, if there had been negative numbers in these determinants, there would be no need to drop the “minus”.

Assembling the cofactor matrix, transposing it and dividing by the determinant, we get:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}=\frac{1}{1}\cdot {{\left[ \begin{array}{*{35}{r}} 2 & -5 \\ -1 & 3 \\ \end{array} \right]}^{T}}=\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\ \end{array} \right]\]

And that's it. The problem is solved.

Answer. $\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\ \end{array} \right]$
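For the $\left[ 2\times 2 \right]$ case the theorem collapses into one short recipe: swap the diagonal, negate the off-diagonal, divide by the determinant. A minimal sketch under that assumption (the name `inverse_2x2` is mine; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "a singular matrix has no inverse"
    q = Fraction(1, det)  # the factor 1/|A| from the theorem
    # S^T for a 2x2 matrix: swap the diagonal, negate the off-diagonal
    return [[ q * d, -q * b],
            [-q * c,  q * a]]

A = [[3, 1],
     [5, 2]]
inv = inverse_2x2(A)
print(inv == [[2, -1], [-5, 3]])  # True -- matches the worked example

# multiply back: A * A^{-1} must give E
prod = [[sum(A[i][s] * inv[s][j] for s in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[1, 0], [0, 1]])  # True
```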

Task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right]\]

Solution. We calculate the determinant again:

\[\begin{align} \left| \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ \end{array} \right| & =\left( 1\cdot 2\cdot 1+\left( -1 \right)\cdot \left( -1 \right)\cdot 1+2\cdot 0\cdot 0 \right)- \\ & -\left( 2\cdot 2\cdot 1+\left( -1 \right)\cdot 0\cdot 1+1\cdot \left( -1 \right)\cdot 0 \right)= \\ & =\left( 2+1+0 \right)-\left( 4+0+0 \right)=-1\ne 0. \\ \end{align}\]

The determinant is nonzero, so the matrix is invertible. But now it's going to be really tough: we need to compute as many as 9 (nine!) algebraic complements, and each of them will contain a $\left[ 2\times 2 \right]$ determinant. Off we go:

\[\begin{matrix} A_{11}={{\left( -1 \right)}^{1+1}}\cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \\ \end{matrix} \right|=2; \\ A_{12}={{\left( -1 \right)}^{1+2}}\cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \\ \end{matrix} \right|=-1; \\ A_{13}={{\left( -1 \right)}^{1+3}}\cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \\ \end{matrix} \right|=-2; \\ \ldots \\ A_{33}={{\left( -1 \right)}^{3+3}}\cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \\ \end{matrix} \right|=2; \\ \end{matrix}\]

In short, the cofactor matrix will look like this:

\[S=\left[ \begin{array}{*{35}{r}} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\ \end{array} \right]\]

Therefore, the inverse matrix will be:

\[A^{-1}=\frac{1}{-1}\cdot {{\left[ \begin{array}{*{35}{r}} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\ \end{array} \right]}^{T}}=\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{array} \right]\]

That's it. Here is the answer.

Answer. $\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\ \end{array} \right]$

At the end of each example, you should check your result. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse matrix - you should get $E$.

Performing this check is much easier and faster than looking for an error in further calculations when, for example, you are solving a matrix equation.
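This check is easy to automate. A small sketch (the helper name `is_inverse` is mine), applied to the $\left[ 3\times 3 \right]$ answer above:

```python
def is_inverse(A, B):
    """Return True if A * B equals the identity matrix E."""
    n = len(A)
    E = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    prod = [[sum(A[i][s] * B[s][j] for s in range(n)) for j in range(n)]
            for i in range(n)]
    return prod == E

A    = [[1, -1, 2], [0, 2, -1], [1, 0, 1]]
Ainv = [[-2, -1, 3], [1, 1, -1], [2, 1, -2]]
print(is_inverse(A, Ainv))  # True -- the answer above passes the check
```

A few multiplications like this cost far less than re-deriving nine algebraic complements while hunting for a sign error.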

Alternative way

As I said, the inverse matrix theorem works great for sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case it is already not so “great”), but for larger matrices the sadness begins.

But don't worry: there is an alternative algorithm with which you can calmly find the inverse even of a $\left[ 10\times 10 \right]$ matrix. But, as often happens, considering this algorithm requires a little theoretical background.

Elementary transformations

Among all possible matrix transformations, there are several special ones - they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$-th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$-th row (column) any other $j$-th row (column) multiplied by any number $k\ne 0$ (you could, of course, allow $k=0$, but what would be the point? Nothing would change);
  3. Rearrangement. Take the $i$-th and $j$-th rows (columns) and swap them.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Another thing is important: we will have to perform all these manipulations on the augmented matrix. Yes, yes: you heard right. Now there will be one more definition - the last one in today's lesson.

Augmented matrix

Surely at school you solved systems of equations using the addition method: subtract one row from another, multiply some row by a number, and so on.

So: now everything will be the same, but in an “adult” way. Ready?

Definition. Let a matrix $A=\left[ n\times n \right]$ and an identity matrix $E$ of the same size $n$ be given. Then the augmented matrix $\left[ A\left| E \right. \right]$ is a new matrix of size $\left[ n\times 2n \right]$ that looks like this:

\[\left[ A\left| E \right. \right]=\left[ \begin{array}{rrrr|rrrr} a_{11} & a_{12} & \ldots & a_{1n} & 1 & 0 & \ldots & 0 \\ a_{21} & a_{22} & \ldots & a_{2n} & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} & 0 & 0 & \ldots & 1 \\ \end{array} \right]\]

In short, we take the matrix $A$, append the identity matrix $E$ of the required size to it on the right, and separate them with a vertical bar for beauty - there's your augmented matrix. :)

What's the catch? Here's what:

Theorem. Let the matrix $A$ be invertible. Consider the augmented matrix $\left[ A\left| E \right. \right]$. If elementary row operations bring it to the form $\left[ E\left| B \right. \right]$, i.e. by multiplying, subtracting and rearranging rows we obtain the matrix $E$ from $A$ on the left, then the matrix $B$ that appears on the right is the inverse of $A$:

\[\left[ A\left| E \right. \right]\to \left[ E\left| B \right. \right]\Rightarrow B=A^{-1}\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the augmented matrix $\left[ A\left| E \right. \right]$;
  2. Perform elementary row operations until $E$ appears in place of $A$;
  3. Of course, something will also appear on the right: a certain matrix $B$. This will be the inverse;
  4. PROFIT! :)
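These steps can be sketched with exact `Fraction` arithmetic. This is an illustrative implementation rather than the author's: it adds a row swap whenever a pivot happens to be zero, whereas a hand computation simply picks convenient rows on the fly.

```python
from fractions import Fraction

def inverse(A):
    n = len(A)
    # step 1: the augmented matrix [A | E]
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # rearrangement: find a row with a nonzero pivot and swap it up
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        assert pivot is not None, "singular matrix"
        M[col], M[pivot] = M[pivot], M[col]
        # multiplication: scale the pivot row so the pivot becomes 1
        M[col] = [x / M[col][col] for x in M[col]]
        # addition: zero out the rest of the column
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # step 2 is done: E stands on the left, so A^{-1} is on the right
    return [row[n:] for row in M]

A = [[1, 5, 1], [3, 2, 1], [6, -2, 1]]
print(inverse(A))  # equals [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]
```

Exact fractions avoid the rounding errors that plague floating-point elimination, at the cost of speed.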

Of course, this is much easier said than done. So let's look at a couple of examples: for sizes $\left[ 3\times 3 \right]$ and $\left[ 4\times 4 \right]$.

Task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\ \end{array} \right]\]

Solution. We form the augmented matrix:

\[\left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\ \end{array} \right]\]

Since the last column of the original matrix is filled with ones, subtract the first row from the rest:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\ \end{array} \right] \\ \end{align}\]

There are no ones left anywhere except the first row. But we don't touch it, otherwise the freshly removed ones would start to “multiply” in the third column.

But we can subtract the second row twice from the last one; this gives us a one in the lower left corner:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \downarrow \\ -2 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right] \\ \end{align}\]

Now we can subtract the last row from the first, and twice from the second; this “zeroes out” the first column:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} -1 \\ -2 \\ \uparrow \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right] \\ \end{align}\]

Multiply the second row by −1, then subtract it 6 times from the first and add it once to the last:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \left| \cdot \left( -1 \right) \right. \\ \ \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\ \end{array} \right]\begin{matrix} -6 \\ \updownarrow \\ +1 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\ \end{array} \right] \\ \end{align}\]

All that remains is to swap rows 1 and 3:

\[\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \\ \end{array} \right]\]

Ready! On the right is the required inverse matrix.

Answer. $\left[ \begin{array}{*{35}{r}} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\ \end{array} \right]$

Task. Find the inverse matrix:

\[\left[ \begin{matrix} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\ \end{matrix} \right]\]

Solution. We compose the augmented matrix again:

\[\left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\]

Let's cry a little about how much we now have to count... and start counting. First, “zero out” the first column by subtracting row 1 from rows 2 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right] \\ \end{align}\]

We see too many “minuses” in rows 2-4. Multiply all three rows by −1, and then “burn out” the third column by subtracting row 3 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\ \end{array} \right]\begin{matrix} \ \\ \left| \cdot \left( -1 \right) \right. \\ \left| \cdot \left( -1 \right) \right. \\ \left| \cdot \left( -1 \right) \right. \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\ \end{array} \right]\begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right] \\ \end{align}\]

Now it's time to “fry” the last column of the original matrix: subtract row 4 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right] \\ \end{align}\]

The final push: “burn out” the second column by adding row 2 six times to the first and subtracting it five times from the third:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right]\begin{matrix} +6 \\ \updownarrow \\ -5 \\ \ \\ \end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\ \end{array} \right] \\ \end{align}\]

And again the identity matrix is ​​on the left, which means the inverse is on the right. :)

Answer. $\left[ \begin{matrix} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \\ \end{matrix} \right]$


Properties of an inverse matrix

  • $\det A^{-1}=\frac{1}{\det A}$, where $\det$ denotes the determinant.
  • $\left( AB \right)^{-1}=B^{-1}A^{-1}$ for two square invertible matrices $A$ and $B$.
  • $\left( A^{T} \right)^{-1}=\left( A^{-1} \right)^{T}$, where $\left( \ldots \right)^{T}$ denotes a transposed matrix.
  • $\left( kA \right)^{-1}=k^{-1}A^{-1}$ for any coefficient $k\ne 0$.
  • $E^{-1}=E$.
  • If it is necessary to solve a system of linear equations $Ax=b$ ($b$ is a nonzero vector), where $x$ is the desired vector, and if $A^{-1}$ exists, then $x=A^{-1}b$. Otherwise, either the dimension of the solution space is greater than zero, or there are no solutions at all.
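A quick spot-check of the first three properties on concrete $2\times 2$ integer matrices, with exact `Fraction` arithmetic. The helpers `mul`, `inv2`, `det2`, `transpose` are illustrative names of my own, not library functions.

```python
from fractions import Fraction

def mul(A, B):
    n = len(A)
    return [[sum(A[i][s] * B[s][j] for s in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(A):
    # 2x2 inverse: swap the diagonal, negate the off-diagonal, divide by det
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[3, 1], [5, 2]]
B = [[1, 2], [3, 7]]

print(det2(inv2(A)) == Fraction(1, det2(A)))     # det A^{-1} = 1/det A
print(inv2(mul(A, B)) == mul(inv2(B), inv2(A)))  # (AB)^{-1} = B^{-1} A^{-1}
print(inv2(transpose(A)) == transpose(inv2(A)))  # (A^T)^{-1} = (A^{-1})^T
```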

Methods for finding the inverse matrix

If the matrix is ​​invertible, then to find the inverse matrix you can use one of the following methods:

Exact (direct) methods

Gauss-Jordan method

Take two matrices: $A$ itself and the identity $E$. Reduce $A$ to the identity matrix by the Gauss-Jordan method, applying transformations along the rows (one can also apply transformations along the columns, but not mixed). After applying each operation to the first matrix, apply the same operation to the second. When the reduction of the first matrix to the identity is complete, the second matrix will be equal to $A^{-1}$.

With the Gaussian method, the first matrix is multiplied on the left by one of the elementary matrices $\Lambda_{i}$ (a transvection or a diagonal matrix with ones on the main diagonal, except for one position):

\[\Lambda_{1}\cdot \dots \cdot \Lambda_{n}\cdot A=\Lambda A=E\Rightarrow \Lambda=A^{-1}\]

\[\Lambda_{m}=\begin{bmatrix} 1 & \dots & 0 & -a_{1m}/a_{mm} & 0 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 1 & -a_{m-1,m}/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & 1/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & -a_{m+1,m}/a_{mm} & 1 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 0 & -a_{nm}/a_{mm} & 0 & \dots & 1 \end{bmatrix}\]

The second matrix after applying all the operations will be equal to $\Lambda$, that is, it will be the desired inverse. Algorithm complexity: $O(n^{3})$.

Using the matrix of algebraic complements

The inverse of the matrix $A$ can be represented in the form

\[A^{-1}=\frac{\operatorname{adj}(A)}{\det(A)}\]

where $\operatorname{adj}(A)$ is the adjugate matrix.

The complexity of the algorithm depends on the complexity $O_{\det}$ of the algorithm for calculating the determinant and is equal to $O(n^{2})\cdot O_{\det}$.

Using LU/LUP Decomposition

Matrix equation A X = I n (\displaystyle AX=I_(n)) for the inverse matrix X (\displaystyle X) can be considered as a collection n (\displaystyle n) systems of the form A x = b (\displaystyle Ax=b). Let's denote i (\displaystyle i) th column of the matrix X (\displaystyle X) through X i (\displaystyle X_(i)); Then A X i = e i (\displaystyle AX_(i)=e_(i)), i = 1 , … , n (\displaystyle i=1,\ldots ,n),because the i (\displaystyle i) th column of the matrix I n (\displaystyle I_(n)) is the unit vector e i (\displaystyle e_(i)). in other words, finding the inverse matrix comes down to solving n equations with the same matrix and different right-hand sides. After performing the LUP decomposition (O(n³) time), solving each of the n equations takes O(n²) time, so this part of the work also requires O(n³) time.

If the matrix $A$ is non-singular, the LUP decomposition $PA = LU$ can be computed for it. Let $PA = B$ and $B^{-1} = D$. Then from the properties of the inverse matrix we can write $D = U^{-1}L^{-1}$. Multiplying this equality by $U$ and by $L$ yields two equalities, $UD = L^{-1}$ and $DL = U^{-1}$. The first of these is a system of $n^2$ linear equations, for $\frac{n(n+1)}{2}$ of which the right-hand sides are known (from the properties of triangular matrices). The second is also a system of $n^2$ linear equations, for $\frac{n(n-1)}{2}$ of which the right-hand sides are known (likewise from the properties of triangular matrices). Together they form a system of $n^2$ equalities, from which all $n^2$ elements of the matrix $D$ can be determined recursively. Then from the equality $(PA)^{-1} = A^{-1}P^{-1} = B^{-1} = D$ we obtain $A^{-1} = DP$.

If the plain LU decomposition is used, no permutation of the columns of the matrix $D$ is required, but the solution may diverge even if the matrix $A$ is non-singular.

The complexity of the algorithm is O(n³).
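A sketch of this scheme in pure Python, under the simplifying assumption that after the LUP factorization we obtain each column of $X$ by ordinary forward/back substitution rather than by the recursive $UD = L^{-1}$, $DL = U^{-1}$ construction described above (all function names are our own):

```python
def lup_decompose(a):
    """LU decomposition with partial pivoting: returns (lu, perm).

    lu stores L below the diagonal (unit diagonal implied) and U on and
    above the diagonal; perm records the row permutation P.  O(n^3).
    """
    n = len(a)
    lu = [row[:] for row in a]
    perm = list(range(n))
    for k in range(n):
        pivot = max(range(k, n), key=lambda r: abs(lu[r][k]))
        if abs(lu[pivot][k]) < 1e-12:
            raise ValueError("matrix is singular")
        lu[k], lu[pivot] = lu[pivot], lu[k]
        perm[k], perm[pivot] = perm[pivot], perm[k]
        for i in range(k + 1, n):
            lu[i][k] /= lu[k][k]
            for j in range(k + 1, n):
                lu[i][j] -= lu[i][k] * lu[k][j]
    return lu, perm

def lup_solve(lu, perm, b):
    """Solve Ax = b in O(n^2) given the precomputed decomposition."""
    n = len(lu)
    # forward substitution: L y = P b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[perm[i]] - sum(lu[i][j] * y[j] for j in range(i))
    # back substitution: U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(lu[i][j] * x[j] for j in range(i + 1, n))) / lu[i][i]
    return x

def invert_lup(a):
    """A^(-1) column by column: the j-th column solves A x = e_j."""
    n = len(a)
    lu, perm = lup_decompose(a)            # O(n^3), done once
    cols = [lup_solve(lu, perm, [float(i == j) for i in range(n)])
            for j in range(n)]             # n solves, O(n^2) each
    return [[cols[j][i] for j in range(n)] for i in range(n)]
```

Calling `invert_lup([[2.0, 1.0], [5.0, 3.0]])` reproduces the inverse `[[3, -1], [-5, 2]]` up to rounding, and the total cost is $O(n^3)$ as stated.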

Iterative methods

Schultz methods

\[\begin{cases} \Psi_k = E - AU_k, \\ U_{k+1} = U_k \sum\limits_{i=0}^{n} \Psi_k^i \end{cases}\]

Error estimate

Selecting an Initial Approximation

The problem of choosing the initial approximation in the iterative matrix-inversion processes considered here does not allow us to treat them as independent universal methods competing with direct inversion methods based, for example, on the LU decomposition. There are some recommendations for choosing $U_0$ that ensure the condition $\rho(\Psi_0) < 1$ (the spectral radius of the matrix is less than unity), which is necessary and sufficient for convergence of the process. However, firstly, one then needs an upper bound on the spectrum of the matrix $A$ being inverted, or of the matrix $AA^T$. Namely, if $A$ is a symmetric positive definite matrix and $\rho(A) \leq \beta$, one can take $U_0 = \alpha E$, where $\alpha \in \left(0, \frac{2}{\beta}\right)$; if $A$ is an arbitrary non-singular matrix and $\rho(AA^T) \leq \beta$, one takes $U_0 = \alpha A^T$, where again $\alpha \in \left(0, \frac{2}{\beta}\right)$. One can, of course, simplify the situation and, using the fact that $\rho(AA^T) \leq \|AA^T\|$, put $U_0 = \frac{A^T}{\|AA^T\|}$. Secondly, with the initial matrix specified this way there is no guarantee that $\|\Psi_0\|$ will be small (it may even turn out that $\|\Psi_0\| > 1$), and a high order of convergence will not show up immediately.
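For illustration, here is a pure-Python sketch of the second-order Schultz process, $U_{k+1} = U_k(2E - AU_k)$ (the sum truncated at $n = 1$), started from the "simplified" guess $U_0 = A^T/\|AA^T\|$ discussed above. The use of the Frobenius norm and the fixed iteration count are our own choices:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def newton_schulz_inverse(a, iterations=60):
    """Second-order Schultz iteration U_{k+1} = U_k (2E - A U_k).

    The residual satisfies Psi_{k+1} = Psi_k^2, so once ||Psi_k|| drops
    below 1 the convergence is quadratic.  The starting guess
    U_0 = A^T / ||A A^T|| (Frobenius norm here) guarantees the spectral
    radius of Psi_0 is below 1, but it may start very close to 1, which
    is why a generous iteration count is used.
    """
    n = len(a)
    aat = matmul(a, transpose(a))
    norm = sum(x * x for row in aat for x in row) ** 0.5   # Frobenius norm
    u = [[x / norm for x in row] for row in transpose(a)]
    two_e = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iterations):
        au = matmul(a, u)
        correction = [[two_e[i][j] - au[i][j] for j in range(n)]
                      for i in range(n)]
        u = matmul(u, correction)
    return u
```

On the test matrix `[[2.0, 1.0], [5.0, 3.0]]` the iteration recovers the inverse `[[3, -1], [-5, 2]]` to machine precision, but only after roughly fifteen iterations: exactly the "slow start" effect described in the paragraph above.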

Examples

Matrix 2x2

\[A^{-1} = \left[ \begin{matrix} a & b \\ c & d \\\end{matrix} \right]^{-1} = \frac{1}{\det(A)} \left[ \begin{matrix} d & -b \\ -c & a \\\end{matrix} \right] = \frac{1}{ad - bc} \left[ \begin{matrix} d & -b \\ -c & a \\\end{matrix} \right].\]

Inversion of a 2x2 matrix is possible only under the condition that $ad - bc = \det A \neq 0$.
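In code, the 2x2 formula is a one-liner; a small sketch (the function name is hypothetical):

```python
def invert_2x2(a, b, c, d):
    """Closed-form inverse of [[a, b], [c, d]]: swap the diagonal entries,
    negate the off-diagonal ones, and divide by the determinant ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

# [[2, 1], [5, 3]] has determinant 1, so its inverse is [[3, -1], [-5, 2]]
```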

A matrix $A^{-1}$ is called the inverse of a matrix $A$ if $A \cdot A^{-1} = E$, where $E$ is the identity matrix of order $n$. An inverse matrix can exist only for square matrices.

See also: the inverse matrix via the Jordan–Gauss method.

Algorithm for finding the inverse matrix

  1. Find the transposed matrix A T .
  2. Determine the algebraic complements: replace each element of the matrix with its algebraic complement (cofactor).
  3. Compile the inverse matrix from the algebraic complements: divide each element of the resulting matrix by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
The next algorithm for finding the inverse matrix is similar to the previous one except for some steps: first the algebraic complements are calculated, and then the adjugate matrix C is determined.
  1. Determine whether the matrix is square. If it is not, there is no inverse matrix for it.
  2. Calculate the determinant of the matrix A. If it is not equal to zero, we continue the solution; otherwise the inverse matrix does not exist.
  3. Determine the algebraic complements.
  4. Fill in the adjugate (also called adjoint) matrix C .
  5. Compile the inverse matrix from the algebraic complements: divide each element of the adjugate matrix C by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
  6. Check: multiply the original and the resulting matrices. The result should be the identity matrix.

Example No. 1. Let's write the matrix and its transpose in the form:

\[A = \left[ \begin{matrix} -1 & 2 & -2 \\ 2 & -1 & 5 \\ 3 & -2 & 4 \\\end{matrix} \right], \qquad A^{T} = \left[ \begin{matrix} -1 & 2 & 3 \\ 2 & -1 & -2 \\ -2 & 5 & 4 \\\end{matrix} \right]\]
The algebraic complements (computed from the transposed matrix):

\[{{A}_{11}} = (-1)^{1+1}\left| \begin{matrix} -1 & -2 \\ 5 & 4 \\\end{matrix} \right| = (-1 \cdot 4 - 5 \cdot (-2)) = 6\]

\[{{A}_{12}} = (-1)^{1+2}\left| \begin{matrix} 2 & -2 \\ -2 & 4 \\\end{matrix} \right| = -(2 \cdot 4 - (-2) \cdot (-2)) = -4\]

\[{{A}_{13}} = (-1)^{1+3}\left| \begin{matrix} 2 & -1 \\ -2 & 5 \\\end{matrix} \right| = (2 \cdot 5 - (-2) \cdot (-1)) = 8\]

\[{{A}_{21}} = (-1)^{2+1}\left| \begin{matrix} 2 & 3 \\ 5 & 4 \\\end{matrix} \right| = -(2 \cdot 4 - 5 \cdot 3) = 7\]

\[{{A}_{22}} = (-1)^{2+2}\left| \begin{matrix} -1 & 3 \\ -2 & 4 \\\end{matrix} \right| = (-1 \cdot 4 - (-2) \cdot 3) = 2\]

\[{{A}_{23}} = (-1)^{2+3}\left| \begin{matrix} -1 & 2 \\ -2 & 5 \\\end{matrix} \right| = -(-1 \cdot 5 - (-2) \cdot 2) = 1\]

\[{{A}_{31}} = (-1)^{3+1}\left| \begin{matrix} 2 & 3 \\ -1 & -2 \\\end{matrix} \right| = (2 \cdot (-2) - (-1) \cdot 3) = -1\]

\[{{A}_{32}} = (-1)^{3+2}\left| \begin{matrix} -1 & 3 \\ 2 & -2 \\\end{matrix} \right| = -(-1 \cdot (-2) - 2 \cdot 3) = 4\]

\[{{A}_{33}} = (-1)^{3+3}\left| \begin{matrix} -1 & 2 \\ 2 & -1 \\\end{matrix} \right| = (-1 \cdot (-1) - 2 \cdot 2) = -3\]
Then the inverse matrix can be written as:

\[A^{-1} = \frac{1}{10}\left[ \begin{matrix} 6 & -4 & 8 \\ 7 & 2 & 1 \\ -1 & 4 & -3 \\\end{matrix} \right] = \left[ \begin{matrix} 0.6 & -0.4 & 0.8 \\ 0.7 & 0.2 & 0.1 \\ -0.1 & 0.4 & -0.3 \\\end{matrix} \right]\]
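Assuming the original matrix of Example No. 1 is $A = \left[ \begin{matrix} -1 & 2 & -2 \\ 2 & -1 & 5 \\ 3 & -2 & 4 \end{matrix} \right]$ (reconstructed from the minors shown in the example, since the source displays it as a picture), the answer can be checked mechanically with exact fractions:

```python
from fractions import Fraction

# matrix reconstructed from the minors of Example No. 1 (an assumption:
# the source shows the matrix only as an image)
A = [[-1, 2, -2], [2, -1, 5], [3, -2, 4]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cofactor(m, i, j):
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det2(minor)

d = 10  # the determinant of A, as found in the example
# A^(-1) = adj(A)/det(A); adj(A)[i][j] is the cofactor of the (j, i) element
A_inv = [[Fraction(cofactor(A, j, i), d) for j in range(3)] for i in range(3)]

# compare with the answer given in the text ...
expected = [[Fraction(6, 10), Fraction(-4, 10), Fraction(8, 10)],
            [Fraction(7, 10), Fraction(2, 10), Fraction(1, 10)],
            [Fraction(-1, 10), Fraction(4, 10), Fraction(-3, 10)]]

# ... and verify directly that A * A_inv is the identity matrix
product = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
```

Both checks pass: `A_inv` matches the printed answer, and the product is the identity matrix.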

Another algorithm for finding the inverse matrix

Let us present another scheme for finding the inverse matrix.
  1. Find the determinant of the given square matrix A.
  2. Find the algebraic complements of all elements of the matrix A.
  3. Write the algebraic complements of the row elements into columns (transposition).
  4. Divide each element of the resulting matrix by the determinant of the matrix A.
As we can see, the transposition operation can be applied either at the beginning, to the original matrix, or at the end, to the resulting matrix of algebraic complements.

A special case: The inverse of the identity matrix E is the identity matrix E.

Let's continue the conversation about operations with matrices. Namely, over the course of this lecture you will learn how to find the inverse matrix. You will learn it even if math is hard for you.

What is an inverse matrix? Here we can draw an analogy with reciprocal numbers: consider, for example, the optimistic number 5 and its reciprocal $\frac{1}{5}$. The product of these numbers equals one: $5 \cdot \frac{1}{5} = 1$. Everything is similar with matrices! The product of a matrix $A$ and its inverse $A^{-1}$ equals $E$, the identity matrix, which is the matrix analogue of the numerical unit. However, first things first: let's begin with an important practical question, namely, learning how to find this very inverse matrix.

What do you need to know and be able to do in order to find the inverse matrix? You must be able to evaluate determinants. You must understand what a matrix is and be able to perform certain actions with matrices.

There are two main methods for finding the inverse matrix:
using algebraic complements, and using elementary transformations.

Today we will study the first, simpler method.

Let's start with the most terrible and incomprehensible part. Consider a square matrix $A$. The inverse matrix can be found using the following formula:

\[A^{-1} = \frac{1}{\det A} \cdot (A^{*})^{T}\]

where $\det A$ is the determinant of the matrix, and $(A^{*})^{T}$ is the transposed matrix of algebraic complements of the corresponding elements of the matrix.

The concept of an inverse matrix exists only for square matrices: "two by two", "three by three", etc.

Notation: as you may have already noticed, the inverse matrix is denoted by the superscript $-1$: $A^{-1}$.

Let's start with the simplest case - a two-by-two matrix. Most often, of course, “three by three” is required, but, nevertheless, I strongly recommend studying a simpler task in order to understand the general principle of the solution.

Example:

Find the inverse of a matrix

Let's solve it. It is convenient to break the solution down into steps.

1) First we find the determinant of the matrix.

If your understanding of this action is not good, read the material How to calculate the determinant?

Important! If the determinant of the matrix is equal to ZERO, the inverse matrix DOES NOT EXIST.

In the example under consideration, the determinant turned out to be nonzero, which means everything is in order.

2) Find the matrix of minors.

To solve our problem, it is not necessary to know what a minor is, however, it is advisable to read the article How to calculate the determinant.

The matrix of minors has the same dimensions as the matrix itself, in this case "two by two".
All that is left to do is find four numbers and put them in place of the asterisks.

Let's return to our matrix
Let's look at the top left element first:

How to find it minor?
And this is done like this: MENTALLY cross out the row and column in which this element is located:

The remaining number is the minor of this element, which we write in our matrix of minors:

Consider the following matrix element:

Mentally cross out the row and column in which this element appears:

What remains is the minor of this element, which we write in our matrix:

Similarly, we consider the elements of the second row and find their minors:


Done.

3) Find the matrix of algebraic complements. It's simple: in the matrix of minors you need to CHANGE THE SIGNS of two numbers:

These are the numbers that I circled!

– the matrix of algebraic complements of the corresponding elements of the matrix.

And that's all there is to it...

4) Find the transposed matrix of algebraic complements.

– transposed matrix of algebraic complements of the corresponding elements of the matrix.

5) Answer.

Let's remember our formula
Everything has been found!

So the inverse matrix is:

It is better to leave the answer as is: there is NO NEED to divide each element of the matrix by 2, since this produces fractional numbers. This nuance is discussed in more detail in the same article, Actions with matrices.

How to check the solution?

You need to perform the matrix multiplication $A \cdot A^{-1}$ or $A^{-1} \cdot A$.

Examination:

The result is the already mentioned identity matrix: a matrix with ones on the main diagonal and zeros elsewhere.

Thus, the inverse matrix is ​​found correctly.

If you carry out the multiplication in the other order, the result will also be the identity matrix. This is one of the few cases where matrix multiplication is commutative; more details can be found in the article Properties of operations on matrices. Matrix Expressions. Also note that during the check the constant (the fraction) is brought out in front and processed at the very end, after the matrix multiplication. This is a standard technique.
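Since the matrix of this example lives in a picture, here is the same check on a hypothetical "two by two" matrix of our own choosing; both orders of multiplication give the identity matrix:

```python
# hypothetical 2x2 example (the article's own matrix is shown as an image)
A = [[3.0, 1.0], [4.0, 2.0]]          # det = 2
A_inv = [[1.0, -0.5], [-2.0, 1.5]]    # = (1/2) * [[2, -1], [-4, 3]]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# both orders give the identity matrix -- inversion is one of the rare
# cases where matrix multiplication commutes
print(matmul(A, A_inv))   # [[1.0, 0.0], [0.0, 1.0]]
print(matmul(A_inv, A))   # [[1.0, 0.0], [0.0, 1.0]]
```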

Let's move on to a more common case in practice - the three-by-three matrix:

Example:

Find the inverse of a matrix

The algorithm is exactly the same as for the “two by two” case.

We find the inverse matrix using the formula $A^{-1} = \frac{1}{\det A} \cdot (A^{*})^{T}$, where $(A^{*})^{T}$ is the transposed matrix of algebraic complements of the corresponding elements of the matrix.

1) Find the determinant of the matrix.


Here the determinant is expanded along the first row.

Also, don't forget that the determinant is nonzero, which means everything is fine: the inverse matrix exists.

2) Find the matrix of minors.

The matrix of minors has dimension "three by three", and we need to find nine numbers.

I'll look at a couple of minors in detail:

Consider the following matrix element:

MENTALLY cross out the row and column in which this element is located:

We write the remaining four numbers in the “two by two” determinant.

This "two by two" determinant is the minor of this element. It needs to be calculated:


That’s it, the minor has been found, we write it in our matrix of minors:

As you probably guessed, you need to calculate nine two-by-two determinants. The process, of course, is tedious, but the case is not the most severe, it can be worse.

Well, to consolidate – finding another minor in the pictures:

Try to calculate the remaining minors yourself.

Final result:
– matrix of minors of the corresponding elements of the matrix.

The fact that all the minors turned out to be negative is pure coincidence.

3) Find the matrix of algebraic complements.

In the matrix of minors it is necessary to CHANGE THE SIGNS of exactly the following elements:

In this case:

We do not consider finding the inverse matrix for a “four by four” matrix, since such a task can only be given by a sadistic teacher (for the student to calculate one “four by four” determinant and 16 “three by three” determinants). In my practice, there was only one such case, and the customer of the test paid quite dearly for my torment =).

In a number of textbooks and manuals you can find a slightly different approach to finding the inverse matrix, but I recommend using the solution algorithm outlined above. Why? Because the likelihood of getting confused in calculations and signs is much less.

Typically, inverse operations are used to simplify complex algebraic expressions. For example, if a problem involves division by a fraction, you can replace it with multiplication by the reciprocal of that fraction, which is the inverse operation. Matrices, moreover, cannot be divided at all, so instead one multiplies by the inverse matrix. Calculating the inverse of a 3x3 matrix by hand is quite tedious, but you need to be able to do it. You can also find the inverse using a good graphing calculator.

Steps

Using the adjoint matrix

Transpose the original matrix. Transposition is the replacement of rows with columns relative to the main diagonal of the matrix, that is, you need to swap the elements (i,j) and (j,i). In this case, the elements of the main diagonal (starts in the upper left corner and ends in the lower right corner) do not change.

  • To change rows to columns, write the elements of the first row in the first column, the elements of the second row in the second column, and the elements of the third row in the third column. The order of changing the position of the elements is shown in the figure, in which the corresponding elements are circled with colored circles.
  • Find the 2x2 matrix corresponding to each element. Every element of any matrix, including a transposed one, is associated with a corresponding 2x2 matrix. To find the 2x2 matrix that corresponds to a specific element, cross out the row and column in which the given element is located; that is, you need to cross out five elements of the original 3x3 matrix. Four elements will remain uncrossed, and these are the elements of the corresponding 2x2 matrix.

    • For example, to find a 2x2 matrix for the element that is located at the intersection of the second row and the first column, cross out the five elements that are in the second row and first column. The remaining four elements are elements of the corresponding 2x2 matrix.
    • Find the determinant of each 2x2 matrix. To do this, subtract the product of the elements of the secondary diagonal from the product of the elements of the main diagonal (see figure).
    • Detailed information about 2x2 matrices corresponding to specific elements of a 3x3 matrix can be found on the Internet.
  • Create a cofactor matrix. Write the results obtained earlier in the form of a new cofactor matrix. To do this, write the found determinant of each 2x2 matrix where the corresponding element of the 3x3 matrix was located. For example, if you are considering a 2x2 matrix for element (1,1), write its determinant in position (1,1). Then change the signs of the corresponding elements according to a certain scheme, which is shown in the figure.

    • Scheme for changing signs: the sign of the first element of the first line does not change; the sign of the second element of the first line is reversed; the sign of the third element of the first line does not change, and so on line by line. Please note that the “+” and “-” signs that are shown in the diagram (see figure) do not indicate that the corresponding element will be positive or negative. In this case, the “+” sign indicates that the sign of the element does not change, and the “-” sign indicates a change in the sign of the element.
    • Detailed information about cofactor matrices can be found on the Internet.
    • This way you will find the adjugate matrix of the original matrix. It is sometimes called the adjoint matrix. Such a matrix is denoted adj(M).
  • Divide each element of the adjugate matrix by the determinant of the original matrix. The determinant of the matrix M was calculated at the very beginning to check that the inverse matrix exists. Now divide each element of the adjugate matrix by this determinant. Write the result of each division operation where the corresponding element is located. This way you will find the matrix inverse to the original one.

    • The determinant of the matrix which is shown in the figure is 1. Thus, here the adjoint matrix is ​​the inverse matrix (because when any number is divided by 1, it does not change).
    • In some sources, the division operation is replaced by the operation of multiplication by 1/det(M). However, the final result does not change.
  • Write the inverse matrix. Write the elements located on the right half of the large matrix as a separate matrix, which is the inverse matrix.

Using a graphing calculator

    Enter the original matrix into the calculator's memory. To do this, click the Matrix button, if available. For a Texas Instruments calculator, you may need to press the 2nd and Matrix buttons.

    Select the Edit menu. Do this using the arrow buttons or the appropriate function button located at the top of the calculator's keyboard (the location of the button varies depending on the calculator model).

    Enter the matrix notation. Most graphic calculators can work with 3-10 matrices, which can be designated by the letters A-J. Typically, just select [A] to designate the original matrix. Then press the Enter button.

    Enter the matrix size. This article talks about 3x3 matrices. But graphic calculators can work with large matrices. Enter the number of rows, press Enter, then enter the number of columns and press Enter again.

    Enter each matrix element. A matrix will be displayed on the calculator screen. If you have previously entered a matrix into the calculator, it will appear on the screen. The cursor will highlight the first element of the matrix. Enter the value for the first element and press Enter. The cursor will automatically move to the next matrix element.