Dimension and basis of a vector space. Finding a basis of a system of vectors and expanding the remaining vectors over that basis


When we examined the concepts of an n-dimensional vector and introduced operations on vectors, we found out that the set of all n-dimensional vectors generates a linear space. In this article we will talk about the most important related concepts - the dimension and basis of a vector space. We will also consider the theorem on the expansion of an arbitrary vector into a basis and the connection between various bases of n-dimensional space. Let us examine in detail the solutions to typical examples.


The concept of the dimension and basis of a vector space.

The concepts of dimension and basis of a vector space are directly related to the concept of a linearly independent system of vectors, so if necessary, we recommend that you refer to the article linear dependence of a system of vectors, properties of linear dependence and independence.

Definition.

The dimension of a vector space is the maximum number of linearly independent vectors in this space.

Definition.

A basis of a vector space is an ordered set of linearly independent vectors of this space whose number equals the dimension of the space.

Let us give some reasoning based on these definitions.

Consider the space of n-dimensional vectors.

Let us show that the dimension of this space is n.

Let us take the system of n unit vectors

e(1) = (1, 0, …, 0), e(2) = (0, 1, …, 0), …, e(n) = (0, 0, …, 1).

Let's take these vectors as the rows of a matrix A. In this case A is the identity matrix of dimension n by n. The rank of this matrix is n (see the article on the rank of a matrix if necessary). Therefore, the system of vectors e(1), e(2), …, e(n) is linearly independent, and not a single vector can be added to it without violating its linear independence. Since the number of vectors in the system equals n, the dimension of the space of n-dimensional vectors is n, and the unit vectors form a basis of this space.
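For readers who like to check such claims by machine, here is a minimal sketch (assuming NumPy is available; n = 4 is just an illustration):

```python
import numpy as np

n = 4                                # any n works the same way
E = np.eye(n)                        # rows are the unit vectors e(1), ..., e(n)
print(np.linalg.matrix_rank(E))      # 4: the n unit vectors are linearly independent
```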

From the last statement and the definition of a basis we conclude that any system of n-dimensional vectors containing fewer than n vectors is not a basis.

Now let's swap the first and second vectors of the system e(1), e(2), …, e(n). It is easy to show that the resulting system of vectors e(2), e(1), …, e(n) is also a basis of the n-dimensional vector space. Let's create a matrix by taking the vectors of this system as its rows. This matrix can be obtained from the identity matrix by swapping the first and second rows, hence its rank is n. Thus the system of n vectors is linearly independent and is a basis of the n-dimensional vector space.

If we rearrange other vectors of the system, we get yet another basis.

If we take a linearly independent system of n non-unit vectors, it is also a basis of the n-dimensional vector space.

Thus, a vector space of dimension n has as many bases as there are linearly independent systems of n n-dimensional vectors.

If we talk about a two-dimensional vector space (that is, about a plane), then its basis is any pair of non-collinear vectors. A basis of three-dimensional space is any three non-coplanar vectors.

Let's look at a few examples.

Example.

Are the vectors a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2) a basis of a three-dimensional vector space?

Solution.

Let us examine this system of vectors for linear dependence. To do this, let's create a matrix whose rows are the coordinates of the vectors and find its rank:

A = (3 -2 1; 2 1 2; 3 -1 -2), det A = -25 ≠ 0 ⇒ Rank(A) = 3.

Thus, the vectors a, b and c are linearly independent and their number is equal to the dimension of the vector space; therefore, they form a basis of this space.

Answer:

Yes, they are.
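As a sketch of how such a check can be automated (assuming NumPy), here is the same computation for this example:

```python
import numpy as np

A = np.array([[3, -2, 1],
              [2,  1, 2],
              [3, -1, -2]])            # rows are the vectors a, b, c
print(np.linalg.det(A))                # -25.0, nonzero
print(np.linalg.matrix_rank(A))        # 3: the vectors form a basis of R^3
```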

Example.

Can the system of vectors a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2), d = (0, 1, 2) be a basis of a three-dimensional vector space?

Solution.

This system of vectors is linearly dependent, since the maximum number of linearly independent three-dimensional vectors is three. Consequently, this system of four vectors cannot be a basis of a three-dimensional vector space (although a subsystem of it is a basis).

Answer:

No, it cannot.

Example.

Verify that the vectors

a = (1, 2, 3, 3), b = (2, 5, 6, 8), c = (1, 3, 2, 4), d = (2, 5, 4, 7)

can be a basis of a four-dimensional vector space.

Solution.

Let's create a matrix by taking the original vectors as its rows:

A = (1 2 3 3; 2 5 6 8; 1 3 2 4; 2 5 4 7)

Gaussian elimination gives a triangular matrix with four nonzero rows, so Rank(A) = 4.

Thus, the system of vectors a, b, c, d is linearly independent and their number equals the dimension of the vector space; therefore a, b, c, d form its basis.

Answer:

The original vectors are indeed the basis of four-dimensional space.

Example.

Do the vectors a(1) = (1, 2, -1, -2), a(2) = (0, 2, 1, -3), a(3) = (1, 0, 0, 5) form a basis of a vector space of dimension 4?

Solution.

Even if the original system of vectors is linearly independent, the number of vectors in it is not enough to be a basis of a four-dimensional space (a basis of such a space consists of 4 vectors).

Answer:

No, they do not.

Decomposition of a vector in a basis of the vector space.

Let arbitrary vectors e(1), e(2), …, e(n) be a basis of an n-dimensional vector space. If we add some n-dimensional vector x to them, then the resulting system of vectors will be linearly dependent. From the properties of linear dependence we know that at least one vector of a linearly dependent system is linearly expressed through the others. In other words, at least one of the vectors of a linearly dependent system can be expanded over the remaining vectors.

This brings us to a very important theorem.

Theorem.

Any vector of an n-dimensional vector space can be decomposed in a basis in a unique way.

Proof.

Let e(1), e(2), …, e(n) be a basis of the n-dimensional vector space. Let's add an n-dimensional vector x to these vectors. Then the resulting system of vectors is linearly dependent, and the vector x can be linearly expressed in terms of the vectors e(1), e(2), …, e(n): x = x1·e(1) + x2·e(2) + … + xn·e(n), where x1, x2, …, xn are some numbers. This is how we obtain the expansion of the vector x with respect to the basis. It remains to prove that this decomposition is unique.

Let us assume that there is another decomposition x = x̃1·e(1) + x̃2·e(2) + … + x̃n·e(n), where x̃1, x̃2, …, x̃n are some numbers. Subtracting the two equalities side by side, we get:

0 = (x̃1 - x1)·e(1) + (x̃2 - x2)·e(2) + … + (x̃n - xn)·e(n)

Since the system of basis vectors is linearly independent, by the definition of linear independence of a system of vectors the resulting equality is possible only when all the coefficients are equal to zero. Therefore x̃1 = x1, x̃2 = x2, …, x̃n = xn, which proves the uniqueness of the decomposition of a vector with respect to the basis.

Definition.

The coefficients x1, x2, …, xn are called the coordinates of the vector x in the basis e(1), e(2), …, e(n).

After becoming familiar with the theorem about the decomposition of a vector in a basis, we begin to understand the essence of the expression "we are given an n-dimensional vector x = (x1, x2, …, xn)". This expression means that we are considering a vector x of an n-dimensional vector space whose coordinates are specified in some basis. At the same time, we understand that the same vector x in another basis of the n-dimensional vector space will have coordinates different from x1, x2, …, xn.

Let's consider the following problem.

Suppose that in some basis of an n-dimensional vector space we are given a system of n linearly independent vectors

e(1) = (e1(1), e2(1), …, en(1)), e(2) = (e1(2), e2(2), …, en(2)), …, e(n) = (e1(n), e2(n), …, en(n))

and a vector x = (x1, x2, …, xn). Then the vectors e(1), e(2), …, e(n) are also a basis of this vector space.

Suppose we need to find the coordinates of the vector x in the basis e(1), e(2), …, e(n). Let us denote these coordinates by x̃1, x̃2, …, x̃n.

The vector x in the basis e(1), e(2), …, e(n) has the representation x = x̃1·e(1) + x̃2·e(2) + … + x̃n·e(n). Let us write this equality in coordinate form:

x1 = x̃1·e1(1) + x̃2·e1(2) + … + x̃n·e1(n)
x2 = x̃1·e2(1) + x̃2·e2(2) + … + x̃n·e2(n)
⋮
xn = x̃1·en(1) + x̃2·en(2) + … + x̃n·en(n)

This equality is equivalent to a system of n linear algebraic equations with n unknowns x̃1, x̃2, …, x̃n. The main matrix of this system has the form

e1(1) e1(2) ⋯ e1(n)
e2(1) e2(2) ⋯ e2(n)
⋮
en(1) en(2) ⋯ en(n)

Let's denote it by the letter A. The columns of the matrix A are the vectors of the linearly independent system e(1), e(2), …, e(n), so the rank of this matrix is n and hence its determinant is nonzero. This fact means that the system of equations has a unique solution, which can be found by any method, for example by Cramer's rule or by the matrix method.

This is how the required coordinates x̃1, x̃2, …, x̃n of the vector x in the basis e(1), e(2), …, e(n) are found.
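As a computational sketch of this procedure (assuming NumPy is available; the numbers preview the data of the example that follows):

```python
import numpy as np

# columns of A are the new basis vectors written in the old coordinates
A = np.array([[ 1,  3,  2],
              [-1,  2,  1],
              [ 1, -5, -3]])
x = np.array([6, 2, -7])          # coordinates of x in the old basis

x_new = np.linalg.solve(A, x)     # coordinates of x in the basis e(1), e(2), e(3)
print(x_new)                      # [1. 1. 1.]
```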

Let's look at the theory using examples.

Example.

In some basis of a three-dimensional vector space the vectors

e(1) = (1, -1, 1), e(2) = (3, 2, -5), e(3) = (2, 1, -3), x = (6, 2, -7)

are given. Show that the system of vectors e(1), e(2), e(3) is also a basis of this space and find the coordinates of the vector x in this basis.

Solution.

For a system of vectors to be a basis of a three-dimensional vector space it must be linearly independent. Let's check this by determining the rank of the matrix A whose rows are the given vectors, using the Gaussian method:

A = (1 -1 1; 3 2 -5; 2 1 -3) ~ (1 -1 1; 0 5 -8; 0 3 -5) ~ (1 -1 1; 0 5 -8; 0 0 -1/5)

therefore Rank(A) = 3, which shows that the system of vectors is linearly independent.

So the vectors e(1), e(2), e(3) form a basis. Let the vector x have coordinates x̃1, x̃2, x̃3 in this basis. Then, as we showed above, the connection between the coordinates of this vector is given by the system of equations

x̃1 + 3·x̃2 + 2·x̃3 = 6
-x̃1 + 2·x̃2 + x̃3 = 2
x̃1 - 5·x̃2 - 3·x̃3 = -7

Solving it by Cramer's method (the main determinant and all three auxiliary determinants equal -1), we obtain

x̃1 = 1, x̃2 = 1, x̃3 = 1.

Thus, the vector x in the basis e(1), e(2), e(3) has coordinates (1, 1, 1).

Answer: x = (1, 1, 1) in the basis e(1), e(2), e(3).

Example.

In some basis of a four-dimensional vector space a linearly independent system of vectors e(1), e(2), e(3), e(4) is given.

The expansion of a vector x over this system is known: x = x̃1·e(1) + x̃2·e(2) + x̃3·e(3) + x̃4·e(4). Find the coordinates of the vector x in the original basis.

Solution.

Since the system of vectors e(1), e(2), e(3), e(4) is linearly independent by hypothesis, it is a basis of the four-dimensional space. The given equality then means that in the basis e(1), e(2), e(3), e(4) the vector x has coordinates x̃1, x̃2, x̃3, x̃4. Denote the coordinates of the vector x in the original basis by x1, x2, x3, x4.

The system of equations relating the coordinates of the vector x in the two bases has the form written above. Substituting the known values into it, we find the required coordinates.

Answer:

the coordinates x1, x2, x3, x4 found by substitution.

Relationship between bases.

Let two linearly independent systems of vectors be given in some basis of an n-dimensional vector space:

c(1), c(2), …, c(n)

and

e(1), e(2), …, e(n),

that is, each of them is also a basis of this space.

If c̃1(1), c̃2(1), …, c̃n(1) are the coordinates of the vector c(1) in the basis e(1), e(2), …, e(n), then this coordinate connection is given by a system of linear equations (we talked about this in the previous paragraph), which in matrix form can be written as

(c1(1), c2(1), …, cn(1)) = (c̃1(1), c̃2(1), …, c̃n(1)) · E,

where E denotes the matrix whose rows are the coordinates of the vectors e(1), e(2), …, e(n). Similarly, for each of the vectors c(2), …, c(n) we can write an analogous equality. These matrix equalities can be combined into one, C = C̃ · E, where C is the matrix whose rows are the coordinates of the vectors c(1), c(2), …, c(n) and C̃ is the matrix composed of their coordinates in the basis e(1), e(2), …, e(n); this equality essentially defines the relationship between the vectors of the two different bases.

Similarly, we can express all the vectors of the basis e(1), e(2), …, e(n) through the basis c(1), c(2), …, c(n): E = Ẽ · C.

Definition.

The matrix C̃ is called the transition matrix from the basis e(1), …, e(n) to the basis c(1), …, c(n); for it the equality C = C̃ · E holds.

Multiplying both sides of this equality on the right by E⁻¹,

we get C · E⁻¹ = C̃.

The transition matrix can thus be found without dwelling in detail on finding the inverse matrix and multiplying matrices (see the corresponding articles if necessary).

It remains to find out the relationship between the coordinates of the vector x in the given bases.

Let the vector x have coordinates x1, …, xn in the basis c(1), …, c(n); then

x = (x1, …, xn) · C,

and in the basis e(1), …, e(n) let the vector x have coordinates x̃1, …, x̃n; then

x = (x̃1, …, x̃n) · E.

Since the left sides of the last two equalities are the same, we can equate the right sides:

(x1, …, xn) · C = (x̃1, …, x̃n) · E.

If we multiply both sides on the right by E⁻¹,

then we get

(x1, …, xn) · C · E⁻¹ = (x̃1, …, x̃n), that is, (x̃1, …, x̃n) = (x1, …, xn) · C̃.

On the other hand,

(x1, …, xn) = (x̃1, …, x̃n) · C̃⁻¹ (find the inverse matrix yourself).

The last two equalities give us the required relationship between the coordinates of the vector x in the bases c(1), …, c(n) and e(1), …, e(n).

Answer:

The transition matrix from the basis e(1), …, e(n) to the basis c(1), …, c(n) has the form

C̃ = C · E⁻¹;

the coordinates of the vector x in the two bases are related by

(x̃1, …, x̃n) = (x1, …, xn) · C̃

or

(x1, …, xn) = (x̃1, …, x̃n) · C̃⁻¹.

We examined the concepts of the dimension and basis of a vector space, learned to decompose a vector in a basis, and discovered the connection between different bases of an n-dimensional vector space through the transition matrix.

Find a basis of the system of vectors and expand the vectors not included in the basis over that basis:

A1 = {5, 2, -3, 1}, A2 = {4, 1, -2, 3}, A3 = {1, 1, -1, -2}, A4 = {3, 4, -1, 2}, A5 = {13, 8, -7, 4}.

Solution. Consider a homogeneous system of linear equations

A1·x1 + A2·x2 + A3·x3 + A4·x4 + A5·x5 = 0,

or, in expanded form,

5x1 + 4x2 + x3 + 3x4 + 13x5 = 0
2x1 + x2 + x3 + 4x4 + 8x5 = 0
-3x1 - 2x2 - x3 - x4 - 7x5 = 0
x1 + 3x2 - 2x3 + 2x4 + 4x5 = 0.

We will solve this system by the Gaussian method, without swapping rows and columns, and, in addition, choosing the pivot element not in the upper left corner but along the entire row. The goal is to single out a diagonal part of the transformed system of vectors.

(a chain of elementary row transformations by the Gaussian method brings the matrix of the system to the resolved form described below)

The resolved system of vectors, equivalent to the original one, has the form

A1¹·x1 + A2¹·x2 + A3¹·x3 + A4¹·x4 + A5¹·x5 = 0,

where A1¹, A2¹, A3¹, A4¹, A5¹ are the transformed vectors read off the final matrix. (1)

The vectors A1¹, A3¹, A4¹ form a diagonal system. Therefore, the vectors A1, A3, A4 form a basis of the system of vectors A1, A2, A3, A4, A5.

Let us now expand the vectors A2 and A5 in the basis A1, A3, A4. To do this, we first expand the corresponding vectors A2¹ and A5¹ over the diagonal system A1¹, A3¹, A4¹, bearing in mind that the coefficients of the expansion of a vector over the diagonal system are its coordinates xi.

From (1) we have:

A2¹ = A3¹·(-1) + A4¹·0 + A1¹·1 ⇒ A2¹ = A1¹ - A3¹;

A5¹ = A3¹·0 + A4¹·1 + A1¹·2 ⇒ A5¹ = 2A1¹ + A4¹.

The vectors A2 and A5 are expanded in the basis A1, A3, A4 with the same coefficients as the vectors A2¹ and A5¹ over the diagonal system A1¹, A3¹, A4¹ (those coefficients xi). Hence,

A2 = A1 - A3, A5 = 2A1 + A4.
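A quick machine check of both the choice of the basis and these expansions; a minimal sketch assuming NumPy is available:

```python
import numpy as np

A1 = np.array([5, 2, -3, 1]);  A2 = np.array([4, 1, -2, 3])
A3 = np.array([1, 1, -1, -2]); A4 = np.array([3, 4, -1, 2])
A5 = np.array([13, 8, -7, 4])

B = np.column_stack([A1, A3, A4])      # candidate basis as columns
print(np.linalg.matrix_rank(B))        # 3: A1, A3, A4 are linearly independent

# expansion coefficients of A2 and A5 over A1, A3, A4 (lstsq returns the
# exact solution here because both systems are consistent)
for v in (A2, A5):
    coeffs, *_ = np.linalg.lstsq(B, v, rcond=None)
    print(np.round(coeffs, 6))         # [1 -1 0] and [2 0 1]
```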

Tasks. 1. Find a basis of the system of vectors and expand the vectors not included in the basis over that basis:

1. a1 = {1, 2, 1}, a2 = {2, 1, 3}, a3 = {1, 5, 0}, a4 = {2, -2, 4}.

2. a1 = {1, 1, 2}, a2 = {0, 1, 2}, a3 = {2, 1, -4}, a4 = {1, 1, 0}.

3. a1 = {1, -2, 3}, a2 = {0, 1, -1}, a3 = {1, 3, 0}, a4 = {0, -7, 3}, a5 = {1, 1, 1}.

4. a1 = {1, 2, -2}, a2 = {0, -1, 4}, a3 = {2, -3, 3}.

2. Find all bases of the system of vectors:

1. a1 = {1, 1, 2}, a2 = {3, 1, 2}, a3 = {1, 2, 1}, a4 = {2, 1, 2}.

2. a1 = {1, 1, 1}, a2 = {-3, -5, 5}, a3 = {3, 4, -1}, a4 = {1, -1, 4}.

In the article about n-dimensional vectors we came to the concept of a linear space generated by a set of n-dimensional vectors. Now we have to consider equally important concepts: the dimension and basis of a vector space. They are directly related to the concept of a linearly independent system of vectors, so it is additionally recommended to remind yourself of the basics of this topic.

Let us introduce some definitions.

Definition 1

Dimension of a vector space – the maximum number of linearly independent vectors in this space.

Definition 2

Basis of a vector space – an ordered set of linearly independent vectors whose number equals the dimension of the space.

Let's consider the space of n-dimensional vectors. Its dimension is correspondingly equal to n. Take the system of n unit vectors:

e(1) = (1, 0, …, 0), e(2) = (0, 1, …, 0), …, e(n) = (0, 0, …, 1)

We use these vectors as the rows of a matrix A: it will be the identity matrix of dimension n by n. The rank of this matrix is n. Therefore the system of vectors e(1), e(2), …, e(n) is linearly independent. In this case, it is impossible to add a single vector to the system without violating its linear independence.

Since the number of vectors in the system is n, the dimension of the space of n-dimensional vectors is n, and the unit vectors e(1), e(2), …, e(n) form a basis of this space.

From the above we can conclude: any system of n-dimensional vectors containing fewer than n vectors is not a basis of the space.

If we swap the first and second vectors, we get the system of vectors e(2), e(1), …, e(n). It will also be a basis of the n-dimensional vector space. Let's create a matrix by taking the vectors of the resulting system as its rows. The matrix can be obtained from the identity matrix by swapping the first two rows, so its rank is n. The system e(2), e(1), …, e(n) is linearly independent and is a basis of the n-dimensional vector space.

By rearranging other vectors in the original system, we obtain another basis.

We can take a linearly independent system of non-unit vectors, and it will also be a basis of the n-dimensional vector space.

Definition 3

A vector space of dimension n has as many bases as there are linearly independent systems of n n-dimensional vectors.

The plane is a two-dimensional space - its basis will be any two non-collinear vectors. The basis of three-dimensional space will be any three non-coplanar vectors.

Let's consider the application of this theory using specific examples.

Example 1

Initial data: vectors

a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2)

It is necessary to determine whether the specified vectors are the basis of a three-dimensional vector space.

Solution

To solve the problem, we study the given system of vectors for linear dependence. Let's create a matrix, where the rows are the coordinates of the vectors. Let's determine the rank of the matrix.

A = (3 -2 1; 2 1 2; 3 -1 -2)

det A = 3·1·(-2) + (-2)·2·3 + 1·2·(-1) - 1·1·3 - (-2)·2·(-2) - 3·2·(-1) = -25 ≠ 0 ⇒ Rank(A) = 3

Consequently, the vectors specified by the condition of the problem are linearly independent, and their number is equal to the dimension of the vector space - they are the basis of the vector space.

Answer: the indicated vectors are the basis of the vector space.

Example 2

Initial data: vectors

a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2), d = (0, 1, 2)

It is necessary to determine whether the specified system of vectors can be the basis of three-dimensional space.

Solution

The system of vectors specified in the problem statement is linearly dependent, because the maximum number of linearly independent vectors is 3. Thus, the indicated system of vectors cannot serve as a basis for a three-dimensional vector space. But it is worth noting that the subsystem of the original system a = (3, - 2, 1), b = (2, 1, 2), c = (3, - 1, - 2) is a basis.

Answer: the indicated system of vectors is not a basis.

Example 3

Initial data: vectors

a = (1, 2, 3, 3), b = (2, 5, 6, 8), c = (1, 3, 2, 4), d = (2, 5, 4, 7)

Can they be the basis of four-dimensional space?

Solution

Let's create a matrix using the coordinates of the given vectors as rows

A = (1 2 3 3; 2 5 6 8; 1 3 2 4; 2 5 4 7)

Using the Gaussian method, we determine the rank of the matrix:

A = (1 2 3 3; 2 5 6 8; 1 3 2 4; 2 5 4 7) ~ (1 2 3 3; 0 1 0 2; 0 1 -1 1; 0 1 -2 1) ~ (1 2 3 3; 0 1 0 2; 0 0 -1 -1; 0 0 -2 -1) ~ (1 2 3 3; 0 1 0 2; 0 0 -1 -1; 0 0 0 1) ⇒ Rank(A) = 4

Consequently, the system of given vectors is linearly independent and their number is equal to the dimension of the vector space - they are the basis of a four-dimensional vector space.

Answer: the given vectors are the basis of four-dimensional space.
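If you want to double-check the elimination above, a one-line rank computation (assuming NumPy) agrees with it:

```python
import numpy as np

A = np.array([[1, 2, 3, 3],
              [2, 5, 6, 8],
              [1, 3, 2, 4],
              [2, 5, 4, 7]])
print(np.linalg.matrix_rank(A))   # 4: the four vectors form a basis
```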

Example 4

Initial data: vectors

a(1) = (1, 2, -1, -2), a(2) = (0, 2, 1, -3), a(3) = (1, 0, 0, 5)

Do they form the basis of a space of dimension 4?

Solution

The original system of vectors is linearly independent, but the number of vectors in it is not sufficient to become the basis of a four-dimensional space.

Answer: no, they don’t.

Decomposition of a vector into a basis

Let us assume that arbitrary vectors e(1), e(2), …, e(n) form a basis of an n-dimensional vector space. Let's add to them a certain n-dimensional vector x: the resulting system of vectors becomes linearly dependent. The properties of linear dependence state that at least one of the vectors of such a system can be linearly expressed through the others. Reformulating this statement, we can say that at least one of the vectors of a linearly dependent system can be expanded over the remaining vectors.

Thus, we came to the formulation of the most important theorem:

Theorem 1

Any vector of an n-dimensional vector space can be decomposed over a basis in a unique way.

Proof

Let's prove this theorem:

let e(1), e(2), …, e(n) be a basis of the n-dimensional vector space. Let's make the system linearly dependent by adding an n-dimensional vector x to it. This vector can be linearly expressed in terms of the original vectors e:

x = x1·e(1) + x2·e(2) + … + xn·e(n), where x1, x2, …, xn are some numbers.

Now we prove that such a decomposition is unique. Let's assume that this is not the case and there is another similar decomposition:

x = x̃1·e(1) + x̃2·e(2) + … + x̃n·e(n), where x̃1, x̃2, …, x̃n are some numbers.

Let us subtract from the left and right sides of this equality, respectively, the left and right sides of the equality x = x1·e(1) + x2·e(2) + … + xn·e(n). We get:

0 = (x̃1 - x1)·e(1) + (x̃2 - x2)·e(2) + … + (x̃n - xn)·e(n)

The system of basis vectors e(1), e(2), …, e(n) is linearly independent; by the definition of linear independence of a system of vectors, the equality above is possible only when all the coefficients (x̃1 - x1), (x̃2 - x2), …, (x̃n - xn) equal zero. From this it follows that x1 = x̃1, x2 = x̃2, …, xn = x̃n. And this proves that there is only one way to decompose a vector over a basis.

In this case the coefficients x1, x2, …, xn are called the coordinates of the vector x in the basis e(1), e(2), …, e(n).

The proven theorem makes the expression "given an n-dimensional vector x = (x1, x2, …, xn)" clear: a vector x of an n-dimensional vector space is considered, and its coordinates are specified in a certain basis. It is also clear that the same vector in another basis of the n-dimensional space will have different coordinates.

Consider the following example: suppose that in some basis of an n-dimensional vector space a system of n linearly independent vectors

e(1), e(2), …, e(n)

is given, and also the vector x = (x1, x2, …, xn) is given.

The vectors e(1), e(2), …, e(n) in this case are also a basis of this vector space.

Suppose that it is necessary to determine the coordinates of the vector x in the basis e(1), e(2), …, e(n), denoted as x̃1, x̃2, …, x̃n.

The vector x will be represented as follows:

x = x̃1·e(1) + x̃2·e(2) + … + x̃n·e(n)

Let's write this expression in coordinate form:

(x1, x2, …, xn) = x̃1·(e1(1), e2(1), …, en(1)) + x̃2·(e1(2), e2(2), …, en(2)) + … + x̃n·(e1(n), e2(n), …, en(n)) =
= (x̃1·e1(1) + x̃2·e1(2) + … + x̃n·e1(n), x̃1·e2(1) + x̃2·e2(2) + … + x̃n·e2(n), …, x̃1·en(1) + x̃2·en(2) + … + x̃n·en(n))

The resulting equality is equivalent to a system of n linear algebraic equations with n unknowns x̃1, x̃2, …, x̃n:

x1 = x̃1·e1(1) + x̃2·e1(2) + … + x̃n·e1(n)
x2 = x̃1·e2(1) + x̃2·e2(2) + … + x̃n·e2(n)
⋮
xn = x̃1·en(1) + x̃2·en(2) + … + x̃n·en(n)

The matrix of this system has the following form:

e1(1) e1(2) ⋯ e1(n)
e2(1) e2(2) ⋯ e2(n)
⋮
en(1) en(2) ⋯ en(n)

Let this be a matrix A; its columns are the vectors of the linearly independent system e(1), e(2), …, e(n). The rank of the matrix is n, and its determinant is nonzero. This indicates that the system of equations has a unique solution, determined by any convenient method: for example, Cramer's method or the matrix method. This is how we can determine the coordinates x̃1, x̃2, …, x̃n of the vector x in the basis e(1), e(2), …, e(n).

Let's apply the considered theory to a specific example.

Example 6

Initial data: vectors are specified in the basis of three-dimensional space

e(1) = (1, -1, 1), e(2) = (3, 2, -5), e(3) = (2, 1, -3), x = (6, 2, -7)

It is necessary to confirm that the system of vectors e(1), e(2), e(3) also serves as a basis of the given space, and to determine the coordinates of the vector x in that basis.

Solution

The system of vectors e(1), e(2), e(3) will be a basis of three-dimensional space if it is linearly independent. We check this by determining the rank of the matrix A whose rows are the given vectors e(1), e(2), e(3).

We use the Gaussian method:

A = (1 -1 1; 3 2 -5; 2 1 -3) ~ (1 -1 1; 0 5 -8; 0 3 -5) ~ (1 -1 1; 0 5 -8; 0 0 -1/5)

Rank(A) = 3. Thus, the system of vectors e(1), e(2), e(3) is linearly independent and is a basis.

Let the vector x have coordinates x̃1, x̃2, x̃3 in this basis. The connection between these coordinates is given by the system of equations:

x1 = x̃1·e1(1) + x̃2·e1(2) + x̃3·e1(3)
x2 = x̃1·e2(1) + x̃2·e2(2) + x̃3·e2(3)
x3 = x̃1·e3(1) + x̃2·e3(2) + x̃3·e3(3)

Substituting the values from the problem statement, we obtain:

x̃1 + 3x̃2 + 2x̃3 = 6
-x̃1 + 2x̃2 + x̃3 = 2
x̃1 - 5x̃2 - 3x̃3 = -7

Let's solve the system of equations using Cramer's method:

Δ = |1 3 2; -1 2 1; 1 -5 -3| = -1
Δx̃1 = |6 3 2; 2 2 1; -7 -5 -3| = -1, x̃1 = Δx̃1/Δ = -1/-1 = 1
Δx̃2 = |1 6 2; -1 2 1; 1 -7 -3| = -1, x̃2 = Δx̃2/Δ = -1/-1 = 1
Δx̃3 = |1 3 6; -1 2 2; 1 -5 -7| = -1, x̃3 = Δx̃3/Δ = -1/-1 = 1

Thus, the vector x in the basis e(1), e(2), e(3) has coordinates x̃1 = 1, x̃2 = 1, x̃3 = 1.

Answer: x = (1, 1, 1)
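For readers who want to verify the arithmetic, here is a minimal sketch of Cramer's rule for this system (assuming NumPy; the helper loop is illustrative, not part of the lesson):

```python
import numpy as np

M = np.array([[ 1,  3,  2],
              [-1,  2,  1],
              [ 1, -5, -3]], dtype=float)   # coefficient matrix of the system
b = np.array([6, 2, -7], dtype=float)       # right-hand side

d = np.linalg.det(M)                        # main determinant, -1 here
solution = []
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = b                            # replace i-th column with the right side
    solution.append(np.linalg.det(Mi) / d)
print(np.round(solution, 6))                # [1. 1. 1.]
```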

Relationship between bases

Let us assume that in some basis of n-dimensional vector space two linearly independent systems of vectors are given:

c(1) = (c1(1), c2(1), …, cn(1))
c(2) = (c1(2), c2(2), …, cn(2))
⋮
c(n) = (c1(n), c2(n), …, cn(n))

e(1) = (e1(1), e2(1), …, en(1))
e(2) = (e1(2), e2(2), …, en(2))
⋮
e(n) = (e1(n), e2(n), …, en(n))

These systems are also bases of a given space.

Let c̃1(1), c̃2(1), …, c̃n(1) be the coordinates of the vector c(1) in the basis e(1), e(2), …, e(n); then the connection between the coordinates is given by the system of linear equations:

c1(1) = c̃1(1)·e1(1) + c̃2(1)·e1(2) + … + c̃n(1)·e1(n)
c2(1) = c̃1(1)·e2(1) + c̃2(1)·e2(2) + … + c̃n(1)·e2(n)
⋮
cn(1) = c̃1(1)·en(1) + c̃2(1)·en(2) + … + c̃n(1)·en(n)

The system can be written in matrix form as follows:

(c1(1), c2(1), …, cn(1)) = (c̃1(1), c̃2(1), …, c̃n(1)) · (e1(1) e2(1) … en(1); e1(2) e2(2) … en(2); ⋮; e1(n) e2(n) … en(n))

Let us write the same for the vector c(2) by analogy, and so on for the remaining vectors:

(c1(2), c2(2), …, cn(2)) = (c̃1(2), c̃2(2), …, c̃n(2)) · (e1(1) e2(1) … en(1); e1(2) e2(2) … en(2); ⋮; e1(n) e2(n) … en(n))

⋮

(c1(n), c2(n), …, cn(n)) = (c̃1(n), c̃2(n), …, c̃n(n)) · (e1(1) e2(1) … en(1); e1(2) e2(2) … en(2); ⋮; e1(n) e2(n) … en(n))

Let's combine the matrix equalities into one expression:

(c1(1) c2(1) ⋯ cn(1); c1(2) c2(2) ⋯ cn(2); ⋮; c1(n) c2(n) ⋯ cn(n)) = (c̃1(1) c̃2(1) ⋯ c̃n(1); c̃1(2) c̃2(2) ⋯ c̃n(2); ⋮; c̃1(n) c̃2(n) ⋯ c̃n(n)) · (e1(1) e2(1) ⋯ en(1); e1(2) e2(2) ⋯ en(2); ⋮; e1(n) e2(n) ⋯ en(n))

This equality defines the connection between the vectors of the two different bases.

Using the same principle, we can express all the basis vectors e(1), e(2), …, e(n) through the basis c(1), c(2), …, c(n):

(e1(1) e2(1) ⋯ en(1); e1(2) e2(2) ⋯ en(2); ⋮; e1(n) e2(n) ⋯ en(n)) = (ẽ1(1) ẽ2(1) ⋯ ẽn(1); ẽ1(2) ẽ2(2) ⋯ ẽn(2); ⋮; ẽ1(n) ẽ2(n) ⋯ ẽn(n)) · (c1(1) c2(1) ⋯ cn(1); c1(2) c2(2) ⋯ cn(2); ⋮; c1(n) c2(n) ⋯ cn(n))

Let us give the following definitions:

Definition 5

The matrix (c̃1(1) c̃2(1) ⋯ c̃n(1); c̃1(2) c̃2(2) ⋯ c̃n(2); ⋮; c̃1(n) c̃2(n) ⋯ c̃n(n)) is the transition matrix from the basis e(1), e(2), …, e(n)

to the basis c(1), c(2), …, c(n).

Definition 6

The matrix (ẽ1(1) ẽ2(1) ⋯ ẽn(1); ẽ1(2) ẽ2(2) ⋯ ẽn(2); ⋮; ẽ1(n) ẽ2(n) ⋯ ẽn(n)) is the transition matrix from the basis c(1), c(2), …, c(n)

to the basis e(1), e(2), …, e(n).

From these equalities it is obvious that

(c̃1(1) c̃2(1) ⋯ c̃n(1); ⋮; c̃1(n) c̃2(n) ⋯ c̃n(n)) · (ẽ1(1) ẽ2(1) ⋯ ẽn(1); ⋮; ẽ1(n) ẽ2(n) ⋯ ẽn(n)) = (1 0 ⋯ 0; 0 1 ⋯ 0; ⋮; 0 0 ⋯ 1)

and

(ẽ1(1) ẽ2(1) ⋯ ẽn(1); ⋮; ẽ1(n) ẽ2(n) ⋯ ẽn(n)) · (c̃1(1) c̃2(1) ⋯ c̃n(1); ⋮; c̃1(n) c̃2(n) ⋯ c̃n(n)) = (1 0 ⋯ 0; 0 1 ⋯ 0; ⋮; 0 0 ⋯ 1),

i.e., the two transition matrices are mutually inverse.
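A small numerical illustration of this fact, with two hypothetical bases of the plane (a sketch assuming NumPy; the matrices are made up for the demonstration):

```python
import numpy as np

C = np.array([[1.0, 2.0], [3.0, 5.0]])   # rows: a basis c(1), c(2)
E = np.array([[2.0, 1.0], [1.0, 1.0]])   # rows: a basis e(1), e(2)

T_ec = C @ np.linalg.inv(E)   # transition matrix from e to c:  C = T_ec · E
T_ce = E @ np.linalg.inv(C)   # transition matrix from c to e:  E = T_ce · C
print(np.round(T_ec @ T_ce))  # the identity matrix: the two are mutually inverse
```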

Let's look at the theory using a specific example.

Example 7

Initial data: it is necessary to find the transition matrix from the basis

c(1) = (1, 2, 1), c(2) = (2, 3, 3), c(3) = (3, 7, 1)

to the basis

e(1) = (3, 1, 4), e(2) = (5, 2, 1), e(3) = (1, 1, -6)

You also need to indicate the relationship between the coordinates of an arbitrary vector x in the given bases.

Solution

1. Let T be the transition matrix; then the equality holds:

(3 1 4; 5 2 1; 1 1 -6) = T · (1 2 1; 2 3 3; 3 7 1)

Multiply both sides of the equality on the right by

(1 2 1; 2 3 3; 3 7 1)⁻¹

and we get:

T = (3 1 4; 5 2 1; 1 1 -6) · (1 2 1; 2 3 3; 3 7 1)⁻¹

2. Define the transition matrix:

T = (3 1 4; 5 2 1; 1 1 -6) · (1 2 1; 2 3 3; 3 7 1)⁻¹ = (3 1 4; 5 2 1; 1 1 -6) · (-18 5 3; 7 -2 -1; 5 -1 -1) = (-27 9 4; -71 20 12; -41 9 8)
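The same computation can be sketched numerically (assuming NumPy):

```python
import numpy as np

C = np.array([[1, 2, 1],
              [2, 3, 3],
              [3, 7, 1]])    # rows are c(1), c(2), c(3)
E = np.array([[3, 1, 4],
              [5, 2, 1],
              [1, 1, -6]])   # rows are e(1), e(2), e(3)

T = E @ np.linalg.inv(C)     # transition matrix satisfying E = T · C
print(np.round(T))           # [[-27 9 4], [-71 20 12], [-41 9 8]]
```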

3. Let us define the relationship between the coordinates of the vector x.

Suppose that in the basis c(1), c(2), c(3) the vector x has coordinates x1, x2, x3; then

x = (x1, x2, x3) · (1 2 1; 2 3 3; 3 7 1),

and in the basis e(1), e(2), e(3) it has coordinates x̃1, x̃2, x̃3; then

x = (x̃1, x̃2, x̃3) · (3 1 4; 5 2 1; 1 1 -6)

Since the left-hand sides of these equalities are equal, we can equate the right-hand sides as well:

(x1, x2, x3) · (1 2 1; 2 3 3; 3 7 1) = (x̃1, x̃2, x̃3) · (3 1 4; 5 2 1; 1 1 -6)

Multiply both sides on the right by

(1 2 1; 2 3 3; 3 7 1)⁻¹

and we get:

(x1, x2, x3) = (x̃1, x̃2, x̃3) · (3 1 4; 5 2 1; 1 1 -6) · (1 2 1; 2 3 3; 3 7 1)⁻¹ ⇔ (x1, x2, x3) = (x̃1, x̃2, x̃3) · T ⇔ (x1, x2, x3) = (x̃1, x̃2, x̃3) · (-27 9 4; -71 20 12; -41 9 8)

On the other hand,

(x̃1, x̃2, x̃3) = (x1, x2, x3) · (-27 9 4; -71 20 12; -41 9 8)⁻¹

The last two equalities show the relationship between the coordinates of the vector x in both bases.

Answer: transition matrix

(-27 9 4; -71 20 12; -41 9 8)

The coordinates of the vector x in the given bases are related by:

(x1, x2, x3) = (x̃1, x̃2, x̃3) · (-27 9 4; -71 20 12; -41 9 8)

or

(x̃1, x̃2, x̃3) = (x1, x2, x3) · (-27 9 4; -71 20 12; -41 9 8)⁻¹


Example 8

Vectors are given. Show that the vectors form a basis of three-dimensional space and find the coordinates of the vector in this basis.

Solution: First, let's deal with the condition. By condition, four vectors are given and, as you can see, they already have coordinates in some basis. What this basis is does not interest us. And the following is of interest: three vectors may well form a new basis. The first stage completely coincides with the solution of Example 6: it is necessary to check whether the vectors are truly linearly independent:

Let's calculate the determinant made up of vector coordinates:

The determinant is nonzero, which means that the vectors are linearly independent and form a basis of three-dimensional space.

! Important: be sure to write the vector coordinates into the columns of the determinant, not into the rows. Otherwise there will be confusion in the subsequent solution algorithm.

Now let's recall the theoretical part: if vectors form a basis, then any vector can be expanded over this basis in a unique way, the expansion coefficients being the coordinates of the vector in that basis.

Since our vectors form a basis of three-dimensional space (this has already been proven), the vector can be expanded over this basis in a unique way, with coefficients that are its coordinates in the basis.

According to the condition, it is required to find these coordinates.

For ease of explanation, I'll swap the two sides of the equality. To find the coordinates, you should write this equality out coordinate by coordinate:

Where do the coefficients come from? All the coefficients on the left side are transferred exactly from the determinant, and the coordinates of the vector are written on the right side.

The result is a system of three linear equations with three unknowns. It is usually solved by Cramer's formulas; often the problem statement itself contains such a requirement.

The main determinant of the system has already been found and is nonzero, which means the system has a unique solution.

What follows is a matter of technique:

Thus:
– the decomposition of the vector over the basis.

Answer:

As I already noted, the problem is algebraic in nature. The vectors that were considered are not necessarily those vectors that can be drawn in space, but, first of all, abstract vectors of the linear algebra course. For the case of two-dimensional vectors, a similar problem can be formulated and solved; the solution will be much simpler. However, in practice I have never encountered such a task, which is why I skipped it in the previous section.

The same problem with three-dimensional vectors for independent solution:

Example 9

Vectors are given. Show that the vectors form a basis and find the coordinates of the vector in this basis. Solve the system of linear equations using Cramer's method.

A complete solution and an approximate sample of the final write-up are at the end of the lesson.

Similarly, we can consider four-dimensional, five-dimensional, etc. vector spaces, where vectors have 4, 5 or more coordinates, respectively. For these vector spaces there is also the concept of linear dependence and linear independence of vectors; there is a basis, including an orthonormal one, and an expansion of a vector with respect to a basis. Yes, such spaces cannot be drawn geometrically, but all the rules, properties and theorems of the two- and three-dimensional cases work in them - pure algebra. Actually, I was already tempted to talk about philosophical issues in the article Partial derivatives of a function of three variables, which appeared earlier than this lesson.

Love vectors, and vectors will love you!

Solutions and answers:

Example 2: Solution: let’s make a proportion from the corresponding coordinates of the vectors:

Answer: at

Example 4: Proof: A trapezoid is a quadrilateral in which two sides are parallel and the other two sides are not parallel.
1) Let's check the parallelism of opposite sides and .
Let's find the vectors:


, which means that these vectors are not collinear and the sides are not parallel.
2) Check the parallelism of opposite sides and .
Let's find the vectors:

Let's calculate the determinant made up of vector coordinates:
, which means that these vectors are collinear, and .
Conclusion: Two sides of a quadrilateral are parallel, but the other two sides are not parallel, which means it is a trapezoid by definition. Q.E.D.

Example 5: Solution:
b) Let’s check whether there is a coefficient of proportionality for the corresponding coordinates of the vectors:

The system has no solution, which means the vectors are not collinear.
A simpler write-up:
– the second and third coordinates are not proportional, which means the vectors are not collinear.
Answer: the vectors are not collinear.
c) We examine the vectors for collinearity. Let's create a system:

The corresponding coordinates of the vectors are proportional, which means the vectors are collinear.
This is where the "foppish" method of writing up fails.
Answer:

Example 6: Solution: b) Let's calculate the determinant made up of the vector coordinates (the determinant is expanded along the first row):

, which means that the vectors are linearly dependent and do not form the basis of three-dimensional space.
Answer: these vectors do not form a basis.

Example 9: Solution: Let's calculate the determinant made up of vector coordinates:


Thus, the vectors are linearly independent and form a basis.
Let's represent the vector as a linear combination of the basis vectors:

Coordinatewise:

Let's solve the system using Cramer's formulas: the main determinant is nonzero, which means the system has a unique solution.



Answer: The vectors form a basis,


Vector product of vectors.
Mixed product of vectors

In this lesson we will look at two more operations with vectors: the vector product of vectors and the mixed product of vectors. It's okay; it sometimes happens that for complete happiness, in addition to the scalar product of vectors, more and more is required. Such is vector addiction. It may seem that we are getting into the wilds of analytical geometry. This is wrong. In this section of higher mathematics there is generally little firewood, except perhaps enough for Pinocchio. In fact, the material is very common and simple - hardly more complicated than the same scalar product; there will even be fewer typical tasks. The main thing in analytical geometry, as many will be convinced or have already been convinced, is NOT TO MAKE MISTAKES IN CALCULATIONS. Repeat it like a spell and you will be happy =)

If vectors sparkle somewhere far away, like lightning on the horizon, it doesn't matter: start with the lesson Vectors for dummies to restore or reacquire basic knowledge about vectors. More prepared readers can get acquainted with the information selectively; I tried to collect the most complete possible collection of examples that are often found in practical work.

What will make you happy right away? When I was little, I could juggle two and even three balls. It worked out well. Now you won't have to juggle at all, since we will consider only spatial vectors; flat vectors with two coordinates will be left out. Why? That is how these operations were born: the vector and mixed products of vectors are defined and work in three-dimensional space. It's already easier!

Linear dependence and linear independence of vectors.
Basis of vectors. Affine coordinate system

There is a cart with chocolates in the auditorium, and every visitor today will get a sweet couple - analytical geometry with linear algebra. This article will touch upon two sections of higher mathematics at once, and we will see how they coexist in one wrapper. Take a break, eat a Twix! ...damn, what a bunch of nonsense. Although, okay, I won't grumble; in the end, you should have a positive attitude towards studying.

Linear dependence of vectors, linear independence of vectors, a basis of vectors and other terms have not only a geometric interpretation but, above all, an algebraic meaning. The very concept of a "vector" from the point of view of linear algebra is not always the "ordinary" vector that we can depict on a plane or in space. You don't need to look far for proof: try drawing a vector of five-dimensional space. Or the weather vector I just looked up on Gismeteo: temperature and atmospheric pressure, respectively. The example is, of course, incorrect from the point of view of the properties of a vector space, but, nevertheless, no one forbids formalizing these parameters as a vector. The breath of autumn...

No, I'm not going to bore you with the theory of linear vector spaces; the task is to understand the definitions and theorems. The new terms (linear dependence, independence, linear combination, basis, etc.) apply to all vectors from the algebraic point of view, but geometric examples will be given. Thus, everything is simple, accessible and clear. In addition to problems of analytical geometry, we will also consider some typical algebra tasks. To master the material, it is advisable to familiarize yourself with the lessons Vectors for dummies and How to calculate the determinant?

Linear dependence and independence of plane vectors.
Plane basis and affine coordinate system

Let's consider the plane of your computer desk (just a table, bedside table, floor, ceiling, whatever you like). The task will consist of the following actions:

1) Select a basis of the plane. Roughly speaking, a tabletop has a length and a width, so it is intuitive that two vectors will be required to construct the basis. One vector is clearly not enough; three vectors are too many.

2) Based on the selected basis, set a coordinate system (a coordinate grid) in order to assign coordinates to all objects on the table.

Don't be surprised: at first the explanations will be on the fingers. Moreover, on yours. Please place your left index finger on the edge of the tabletop so that it points at the monitor. This will be a vector. Now place the little finger of your right hand on the edge of the table in the same way, so that it is directed at the monitor screen. This will be another vector. Smile, you look great! What can we say about these vectors? These vectors are collinear, which means each is linearly expressed through the other:
one equals the other multiplied by some nonzero number, or vice versa.

You can see a picture of this action in the lesson Vectors for dummies, where I explained the rule for multiplying a vector by a number.

Will your fingers set a basis on the plane of the computer desk? Obviously not. Collinear vectors travel back and forth along a single direction, while a plane has both length and width.

Such vectors are called linearly dependent.

Reference: The words “linear”, “linearly” denote the fact that in mathematical equations, expressions do not contain squares, cubes, other powers, logarithms, sines, etc. There are only linear (1st degree) expressions and dependencies.

Two plane vectors are linearly dependent if and only if they are collinear.

Cross your fingers on the table so that there is any angle between them other than 0 or 180 degrees. Two plane vectors are linearly independent if and only if they are not collinear. So, a basis is obtained. There is no need to be embarrassed that the basis turned out "skewed", with non-perpendicular vectors of different lengths. Very soon we will see that not only an angle of 90 degrees is suitable for its construction, and not only unit vectors of equal length.

Any plane vector can be expanded over the basis in a unique way as a linear combination of the basis vectors with real coefficients. These numbers are called the coordinates of the vector in this basis.

It is also said that the vector is represented as a linear combination of the basis vectors. That is, the expression is called the decomposition of the vector over the basis, or a linear combination of the basis vectors.

For example, we can say that the vector is decomposed along an orthonormal basis of the plane, or we can say that it is represented as a linear combination of vectors.

Let's formulate the definition of a basis formally: a basis of the plane is a pair of linearly independent (non-collinear) vectors taken in a certain order, such that any plane vector is a linear combination of the basis vectors.

An essential point of the definition is the fact that the vectors are taken in a certain order. Two bases with the vectors taken in opposite orders are completely different bases! As they say, you cannot put the little finger of your left hand in place of the little finger of your right hand.

We have figured out the basis, but it is not enough to set a coordinate grid and assign coordinates to each item on your computer desk. Why isn't it enough? The vectors are free and wander throughout the entire plane. So how do you assign coordinates to those little dirty spots on the table left over from a wild weekend? A starting point is needed. And such a landmark is a point familiar to everyone - the origin of coordinates. Let's understand the coordinate system:

I'll start with the “school” system. Already in the introductory lesson Vectors for dummies I highlighted some differences between the rectangular coordinate system and the orthonormal basis. Here's the standard picture:

When they talk about rectangular coordinate system, then most often they mean the origin, coordinate axes and scale along the axes. Try typing “rectangular coordinate system” into a search engine, and you will see that many sources will tell you about coordinate axes familiar from the 5th-6th grade and how to plot points on a plane.

On the other hand, it seems that a rectangular coordinate system can be completely defined in terms of an orthonormal basis. And that's almost true. The wording is as follows:

The origin and an orthonormal basis define a Cartesian rectangular coordinate system of the plane. That is, a rectangular coordinate system is completely defined by a single point and two unit orthogonal vectors. That is why you see the drawing that I gave above: in geometric problems both the vectors and the coordinate axes are often (but not always) drawn.

I think everyone understands that using a point (the origin) and an orthonormal basis, ANY POINT of the plane and ANY VECTOR of the plane can be assigned coordinates. Figuratively speaking, "everything on the plane can be numbered".

Are coordinate vectors required to be unit? No, they can have an arbitrary non-zero length. Consider a point and two orthogonal vectors of arbitrary non-zero length:


Such a basis is called orthogonal. The origin of coordinates together with these vectors defines a coordinate grid, and any point of the plane, any vector, has its coordinates in the given basis. The obvious inconvenience is that the coordinate vectors in the general case have different lengths other than unity. If the lengths equal unity, the usual orthonormal basis is obtained.

! Note: in an orthogonal basis, as well as below in the affine bases of the plane and space, the units along the axes are considered CONDITIONAL. For example, one unit along the x-axis may contain 4 cm, and one unit along the ordinate axis 2 cm. This information is enough, if necessary, to convert "non-standard" coordinates into "our usual centimeters".

And the second question, which has actually already been answered: must the angle between the basis vectors equal 90 degrees? No! As the definition states, the basis vectors must only be non-collinear. Accordingly, the angle can be anything except 0 and 180 degrees.

A point of the plane called the origin, together with non-collinear vectors taken in a certain order, defines an affine coordinate system of the plane:


Sometimes such a coordinate system is called an oblique system. As examples, the drawing shows points and vectors:

As you understand, an affine coordinate system is even less convenient: the formulas for the lengths of vectors and segments, which we discussed in the second part of the lesson Vectors for dummies, do not work in it, nor do many delicious formulas related to the scalar product of vectors. But the rules for adding vectors and multiplying a vector by a number, the formulas for dividing a segment in a given ratio, as well as some other types of problems that we will consider soon, remain valid.

And the conclusion is that the most convenient special case of an affine coordinate system is the Cartesian rectangular system. That's why you most often see it, my dear. ...However, everything in this life is relative: there are many situations in which an oblique (or some other, for example, polar) coordinate system is appropriate. And humanoids might like such systems =)

Let's move on to the practical part. All problems in this lesson are valid both for the rectangular coordinate system and for the general affine case. There is nothing complicated here; all the material is accessible even to a schoolchild.

How to determine collinearity of plane vectors?

A typical thing. For two plane vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional. Essentially, this is a coordinate-by-coordinate detailing of the obvious relation that one vector is the other multiplied by a number.

Example 1

a) Check whether the vectors are collinear.
b) Do the vectors form a basis?

Solution:
a) Let us find out whether there is a proportionality coefficient for the vectors such that the equalities are satisfied:

I’ll definitely tell you about the “foppish” version of applying this rule, which works quite well in practice. The idea is to immediately make up the proportion and see if it is correct:

Let's make a proportion from the ratios of the corresponding coordinates of the vectors:

Let's shorten:
, thus the corresponding coordinates are proportional, therefore,

The relationship could be made the other way around; this is an equivalent option:

For a self-check you can use the fact that collinear vectors are linearly expressed through each other. In this case the corresponding equalities hold; their validity is easily verified through elementary operations with vectors:

b) Two plane vectors form a basis if they are not collinear (linearly independent). We examine the vectors for collinearity. Let's create a system:

From the first equation one value of the coefficient follows, and from the second a different one, which means the system is inconsistent (has no solutions). Thus, the corresponding coordinates of the vectors are not proportional.

Conclusion: the vectors are linearly independent and form a basis.

A simplified version of the solution looks like this:

Let's make a proportion from the corresponding coordinates of the vectors:
, which means that these vectors are linearly independent and form a basis.

Usually this option is not rejected by reviewers, but a problem arises in cases where some coordinates are equal to zero. How do you work through a proportion here? (Indeed, you cannot divide by zero.) It is for this reason that I called the simplified solution "foppish".

Answer: a) , b) form.

A small creative example for your own solution:

Example 2

At what value of the parameter will the vectors be collinear?

In the sample solution, the parameter is found through the proportion.

There is an elegant algebraic way to check vectors for collinearity. Let’s systematize our knowledge and add it as the fifth point:

For two plane vectors the following statements are equivalent:

1) the vectors are linearly independent;
2) the vectors form a basis;
3) the vectors are not collinear;
4) the vectors cannot be linearly expressed through each other;
+ 5) the determinant composed of the coordinates of these vectors is nonzero.

Accordingly, the following opposite statements are equivalent:
1) vectors are linearly dependent;
2) vectors do not form a basis;
3) the vectors are collinear;
4) vectors can be linearly expressed through each other;
+ 5) the determinant composed of the coordinates of these vectors is equal to zero.

I really, really hope that by this moment you already understand all the terms and statements encountered.

Let's take a closer look at the new, fifth point: two plane vectors are collinear if and only if the determinant composed of the coordinates of the given vectors is equal to zero. To apply this criterion, of course, you need to be able to find determinants.
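A tiny sketch of this criterion in code (plain Python; the function name is just illustrative):

```python
def collinear_2d(a, b, eps=1e-12):
    # two plane vectors are collinear iff the determinant
    # | a1 a2 |
    # | b1 b2 |
    # equals zero
    return abs(a[0] * b[1] - a[1] * b[0]) < eps

print(collinear_2d((2, 3), (4, 6)))    # True: (4, 6) = 2 * (2, 3)
print(collinear_2d((2, 3), (1, -1)))   # False: these two vectors form a basis
```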

Let's solve Example 1 in the second way:

a) Let us calculate the determinant made up of the coordinates of the vectors:
, which means that these vectors are collinear.

b) Two plane vectors form a basis if they are not collinear (linearly independent). Let's calculate the determinant made up of the vector coordinates:
, which means the vectors are linearly independent and form a basis.

Answer: a) , b) form.

It looks much more compact and prettier than a solution with proportions.

With the help of the material considered, it is possible to establish not only the collinearity of vectors, but also to prove the parallelism of segments and straight lines. Let's consider a couple of problems with specific geometric shapes.

Example 3

The vertices of a quadrilateral are given. Prove that the quadrilateral is a parallelogram.

Proof: There is no need to create a drawing for the problem, since the solution will be purely analytical. Let's recall the definition of a parallelogram:
A parallelogram is a quadrilateral whose opposite sides are pairwise parallel.

Thus, it is necessary to prove:
1) the parallelism of one pair of opposite sides;
2) the parallelism of the other pair of opposite sides.

We prove:

1) Find the vectors:


2) Find the vectors:

The result is the same vector ("according to school" - equal vectors). Collinearity is quite obvious, but it is better to write the solution up properly, with the full arrangement. Let's calculate the determinant made up of the vector coordinates:
, which means that these vectors are collinear, and .

Conclusion: The opposite sides of the quadrilateral are pairwise parallel, which means it is a parallelogram by definition. Q.E.D.

More good and different figures:

Example 4

The vertices of a quadrilateral are given. Prove that a quadrilateral is a trapezoid.

For a more rigorous formulation of the proof, it is better, of course, to get the definition of a trapezoid, but it is enough to simply remember what it looks like.

This is a task for you to solve on your own. Full solution at the end of the lesson.

And now it’s time to slowly move from the plane into space:

How to determine collinearity of space vectors?

The rule is very similar. In order for two space vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional.

Example 5

Find out whether the following space vectors are collinear:

a) ;
b) ;
c)

Solution:
a) Let’s check whether there is a coefficient of proportionality for the corresponding coordinates of the vectors:

The system has no solution, which means the vectors are not collinear.

The "simplified" version is written up by checking the proportion. In this case:
– the corresponding coordinates are not proportional, which means the vectors are not collinear.

Answer: the vectors are not collinear.

b-c) These are items to solve on your own. Try both methods.

There is a method for checking spatial vectors for collinearity through a third-order determinant; this method is covered in the article Vector product of vectors.

Similar to the plane case, the considered tools can be used to study the parallelism of spatial segments and straight lines.

Welcome to the second section:

Linear dependence and independence of vectors in three-dimensional space.
Spatial basis and affine coordinate system

Many of the patterns that we examined on the plane will be valid for space. I tried to minimize the theory notes, since the lion's share of the information has already been chewed. However, I recommend that you read the introductory part carefully, as new terms and concepts will appear.

Now, instead of the plane of the computer desk, we explore three-dimensional space. First, let's create its basis. Someone is now indoors, someone is outdoors, but in any case, we cannot escape three dimensions: width, length and height. Therefore, to construct a basis, three spatial vectors will be required. One or two vectors are not enough, the fourth is superfluous.

And again we warm up on our fingers. Please raise your hand up and spread the thumb, index and middle fingers in different directions. These will be vectors: they point in different directions, have different lengths and make different angles with each other. Congratulations, a basis of three-dimensional space is ready! By the way, there is no need to demonstrate this to teachers: no matter how hard you twist your fingers, there is no escape from definitions =)

Next, let's ask ourselves an important question: do any three vectors form a basis of three-dimensional space? Please press three fingers firmly onto the top of the computer desk. What happened? Three vectors ended up in the same plane and, roughly speaking, we have lost one of the dimensions - height. Such vectors are coplanar, and it is quite obvious that they do not create a basis of three-dimensional space.

It should be noted that coplanar vectors do not have to lie in the same plane; they can be in parallel planes (just don't do this with your fingers - only Salvador Dali could pull that off =)).

Definition: vectors are called coplanar, if there is a plane to which they are parallel. It is logical to add here that if such a plane does not exist, then the vectors will not be coplanar.

Three coplanar vectors are always linearly dependent, that is, they are linearly expressed through each other. For simplicity, let us again imagine that they lie in the same plane. Firstly, the vectors may be not only coplanar but also collinear; then any vector can be expressed through any other. In the second case, if, say, two of the vectors are not collinear, then the third vector is expressed through them in a unique way (and why is easy to guess from the materials of the previous section).

The converse is also true: three non-coplanar vectors are always linearly independent, that is, they are in no way expressed through each other. And, obviously, only such vectors can form the basis of three-dimensional space.

Definition: a basis of three-dimensional space is a triple of linearly independent (non-coplanar) vectors taken in a certain order; any vector of the space is decomposed over the given basis in a unique way, the coefficients of the decomposition being the coordinates of the vector in this basis.

Let me remind you that we can also say that the vector is represented as a linear combination of the basis vectors.

The concept of a coordinate system is introduced in exactly the same way as for the plane case; one point and any three linearly independent vectors are enough:

A point called the origin, together with non-coplanar vectors taken in a certain order, defines an affine coordinate system of three-dimensional space:

Of course, the coordinate grid is "oblique" and inconvenient, but, nevertheless, the constructed coordinate system allows us to unambiguously determine the coordinates of any vector and the coordinates of any point in space. Similarly to the plane, some formulas that I have already mentioned will not work in an affine coordinate system of space.

The most familiar and convenient special case of an affine coordinate system, as everyone can guess, is the rectangular coordinate system of space:

A point of space called the origin, together with an orthonormal basis, defines a Cartesian rectangular coordinate system of space. A familiar picture:

Before moving on to practical tasks, let’s again systematize the information:

For three space vectors the following statements are equivalent:
1) the vectors are linearly independent;
2) the vectors form a basis;
3) the vectors are not coplanar;
4) vectors cannot be linearly expressed through each other;
5) the determinant composed of the coordinates of these vectors is different from zero.

I think the opposite statements are understandable.

Linear dependence/independence of space vectors is traditionally checked using a determinant (point 5). The remaining practical tasks will have a pronounced algebraic character. It's time to hang the geometry stick on a nail and wield the baseball bat of linear algebra:

Three space vectors are coplanar if and only if the determinant composed of the coordinates of the given vectors is equal to zero.

I would like to draw your attention to a small technical nuance: the coordinates of vectors can be written not only in columns, but also in rows (the value of the determinant will not change because of this - see properties of determinants). But it is much better in columns, since it is more beneficial for solving some practical problems.

For those readers who have a little forgotten the methods of calculating determinants, or maybe have little understanding of them at all, I recommend one of my oldest lessons: How to calculate the determinant?
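The spatial criterion can be sketched in code the same way (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def coplanar_3d(a, b, c, eps=1e-12):
    # three space vectors are coplanar iff the determinant of their coordinates is zero
    return abs(np.linalg.det(np.array([a, b, c]))) < eps

print(coplanar_3d((1, 0, 0), (0, 1, 0), (1, 1, 0)))  # True: all three lie in the plane z = 0
print(coplanar_3d((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False: they form a basis of space
```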

Example 6

Check whether the following vectors form a basis of three-dimensional space:

Solution: In fact, the entire solution comes down to calculating the determinant.

a) Let's calculate the determinant made up of the vector coordinates (the determinant is expanded along the first row):

, which means that the vectors are linearly independent (not coplanar) and form a basis of three-dimensional space.

Answer: these vectors form a basis

b) This is an item to solve on your own. Full solution and answer at the end of the lesson.

There are also creative tasks:

Example 7

At what value of the parameter will the vectors be coplanar?

Solution: Vectors are coplanar if and only if the determinant composed of the coordinates of these vectors is equal to zero:

Essentially, you need to solve an equation with a determinant. We swoop down on zeros like kites on jerboas: it's best to expand the determinant along the second row and immediately get rid of the minuses:

We carry out further simplifications and reduce the matter to the simplest linear equation:

Answer: at

It's easy to run a check here: substitute the resulting value into the original determinant and, expanding it again, make sure that it equals zero.

In conclusion, let's look at one more typical task, which is more algebraic in nature and is traditionally included in the course of linear algebra. It is so common that it deserves its own topic:

Prove that 3 vectors form a basis of three-dimensional space
and find the coordinates of a 4th vector in this basis

This is exactly the problem of Example 8, which was stated and solved in full above.