Linear dependence and linear independence of vectors. Basis of a system of vectors. Affine coordinate system

In the article on n-dimensional vectors, we came to the concept of a linear space generated by a set of n-dimensional vectors. Now we have to consider equally important concepts, such as the dimension and basis of a vector space. They are directly related to the concept of a linearly independent system of vectors, so it is additionally recommended to remind yourself of the basics of this topic.

Let us introduce some definitions.

Definition 1

The dimension of a vector space is the maximum number of linearly independent vectors in that space.

Definition 2

A basis of a vector space is an ordered set of linearly independent vectors whose number equals the dimension of the space.

Consider the space of n-dimensional vectors. Its dimension is accordingly equal to n. Take the system of n unit vectors:

e(1) = (1, 0, …, 0)
e(2) = (0, 1, …, 0)
⋮
e(n) = (0, 0, …, 1)

We use these vectors as the rows of a matrix A: it is the identity matrix of size n by n. The rank of this matrix is n. Therefore, the system of vectors e(1), e(2), …, e(n) is linearly independent. Moreover, it is impossible to add a single vector to this system without destroying its linear independence.

Since the number of vectors in the system is n, the dimension of the space of n-dimensional vectors equals n, and the unit vectors e(1), e(2), …, e(n) form a basis of this space.
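As a quick numerical illustration (my own addition, using Python with NumPy, which the article itself does not rely on), one can build the matrix of unit vectors and confirm that its rank equals n:

import numpy as np

n = 4                                  # the dimension; any n works the same way
E = np.eye(n)                          # rows are the unit vectors e(1), ..., e(n)
print(np.linalg.matrix_rank(E))        # prints 4, i.e. n, so the system is linearly independent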

From the resulting definition we can conclude: any system of n-dimensional vectors in which the number of vectors is less than n is not a basis of space.

If we swap the first and second vectors, we get a system of vectors e (2) , e (1) , . . . , e (n) . It will also be the basis of an n-dimensional vector space. Let's create a matrix by taking the vectors of the resulting system as its rows. The matrix can be obtained from the identity matrix by swapping the first two rows, its rank will be n. System e (2) , e (1) , . . . , e(n) is linearly independent and is the basis of an n-dimensional vector space.

By rearranging other vectors in the original system, we obtain another basis.

We can take a linearly independent system of non-unit vectors, and it will also represent the basis of an n-dimensional vector space.

Definition 3

A vector space of dimension n has as many bases as there are linearly independent systems of n n-dimensional vectors.

The plane is a two-dimensional space - its basis will be any two non-collinear vectors. The basis of three-dimensional space will be any three non-coplanar vectors.

Let's consider the application of this theory using specific examples.

Example 1

Initial data: vectors

a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2)

It is necessary to determine whether the specified vectors are the basis of a three-dimensional vector space.

Solution

To solve the problem, we study the given system of vectors for linear dependence. Let's create a matrix, where the rows are the coordinates of the vectors. Let's determine the rank of the matrix.

$$A = \begin{pmatrix} 3 & -2 & 1 \\ 2 & 1 & 2 \\ 3 & -1 & -2 \end{pmatrix}$$

$$\det A = 3 \cdot 1 \cdot (-2) + (-2) \cdot 2 \cdot 3 + 1 \cdot 2 \cdot (-1) - 1 \cdot 1 \cdot 3 - (-2) \cdot 2 \cdot (-2) - 3 \cdot 2 \cdot (-1) = -25 \neq 0 \;\Rightarrow\; \mathrm{Rank}(A) = 3$$

Consequently, the vectors specified by the condition of the problem are linearly independent, and their number is equal to the dimension of the vector space - they are the basis of the vector space.

Answer: the indicated vectors are the basis of the vector space.
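The same check can be reproduced numerically; the following sketch (a NumPy illustration added here, not part of the original solution) computes the determinant and rank of the matrix whose rows are a, b, c:

import numpy as np

A = np.array([[3, -2, 1],
              [2, 1, 2],
              [3, -1, -2]])            # rows are the vectors a, b, c
print(np.linalg.det(A))                # approximately -25.0, nonzero
print(np.linalg.matrix_rank(A))        # 3, so the vectors form a basis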

Example 2

Initial data: vectors

a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2), d = (0, 1, 2)

It is necessary to determine whether the specified system of vectors can be the basis of three-dimensional space.

Solution

The system of vectors specified in the problem statement is linearly dependent, because the maximum number of linearly independent three-dimensional vectors is 3, while four vectors are given. Thus, the indicated system of vectors cannot serve as a basis of a three-dimensional vector space. It is worth noting, however, that the subsystem a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2) of the original system is a basis.

Answer: the indicated system of vectors is not a basis.

Example 3

Initial data: vectors

a = (1, 2, 3, 3), b = (2, 5, 6, 8), c = (1, 3, 2, 4), d = (2, 5, 4, 7)

Can they be the basis of four-dimensional space?

Solution

Let's create a matrix using the coordinates of the given vectors as rows

$$A = \begin{pmatrix} 1 & 2 & 3 & 3 \\ 2 & 5 & 6 & 8 \\ 1 & 3 & 2 & 4 \\ 2 & 5 & 4 & 7 \end{pmatrix}$$

Using the Gaussian method, we determine the rank of the matrix:

$$A = \begin{pmatrix} 1 & 2 & 3 & 3 \\ 2 & 5 & 6 & 8 \\ 1 & 3 & 2 & 4 \\ 2 & 5 & 4 & 7 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 1 & -1 & 1 \\ 0 & 1 & -2 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & -1 & -1 \\ 0 & 0 & -2 & -1 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & -1 & -1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \;\Rightarrow\; \mathrm{Rank}(A) = 4$$

Consequently, the system of given vectors is linearly independent and their number is equal to the dimension of the vector space - they are the basis of a four-dimensional vector space.

Answer: the given vectors are the basis of four-dimensional space.
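For those who prefer to double-check the elimination, a short NumPy sketch (an addition of mine, not part of the original text) gives the same rank:

import numpy as np

A = np.array([[1, 2, 3, 3],
              [2, 5, 6, 8],
              [1, 3, 2, 4],
              [2, 5, 4, 7]])           # rows are the vectors a, b, c, d
print(np.linalg.matrix_rank(A))        # 4, so the vectors form a basis of four-dimensional space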

Example 4

Initial data: vectors

a(1) = (1, 2, -1, -2), a(2) = (0, 2, 1, -3), a(3) = (1, 0, 0, 5)

Do they form the basis of a space of dimension 4?

Solution

The original system of vectors is linearly independent, but the number of vectors in it is not sufficient to become the basis of a four-dimensional space.

Answer: no, they don’t.

Decomposition of a vector into a basis

Let us assume that arbitrary vectors e (1) , e (2) , . . . , e (n) are the basis of an n-dimensional vector space. Let's add to them a certain n-dimensional vector x →: the resulting system of vectors will become linearly dependent. The properties of linear dependence state that at least one of the vectors of such a system can be linearly expressed through the others. Reformulating this statement, we can say that at least one of the vectors of a linearly dependent system can be expanded into the remaining vectors.

Thus, we came to the formulation of the most important theorem:

Theorem

Any vector of an n-dimensional vector space can be decomposed over a basis in a unique way.

Proof

Let's prove this theorem:

Let e(1), e(2), …, e(n) be a basis of the n-dimensional vector space. Let's make the system linearly dependent by adding an n-dimensional vector x to it. Then this vector can be linearly expressed in terms of the vectors e:

$$x = x_1 e^{(1)} + x_2 e^{(2)} + \dots + x_n e^{(n)},$$ where $x_1, x_2, \dots, x_n$ are some numbers.

Now we prove that such a decomposition is unique. Let's assume that this is not the case and there is another similar decomposition:

$$x = \tilde{x}_1 e^{(1)} + \tilde{x}_2 e^{(2)} + \dots + \tilde{x}_n e^{(n)},$$ where $\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n$ are some numbers.

Let us subtract from the left and right sides of this equality, respectively, the left and right sides of the equality $x = x_1 e^{(1)} + x_2 e^{(2)} + \dots + x_n e^{(n)}$. We get:

$$0 = (\tilde{x}_1 - x_1) e^{(1)} + (\tilde{x}_2 - x_2) e^{(2)} + \dots + (\tilde{x}_n - x_n) e^{(n)}$$

The system of basis vectors e(1), e(2), …, e(n) is linearly independent; by the definition of linear independence of a system of vectors, the equality above is possible only if all the coefficients $(\tilde{x}_1 - x_1), (\tilde{x}_2 - x_2), \dots, (\tilde{x}_n - x_n)$ are equal to zero. Hence $x_1 = \tilde{x}_1, x_2 = \tilde{x}_2, \dots, x_n = \tilde{x}_n$, which proves that the decomposition of a vector over a basis is unique.

In this case, the coefficients $x_1, x_2, \dots, x_n$ are called the coordinates of the vector x in the basis e(1), e(2), …, e(n).

The proven theory makes clear the expression “given an n-dimensional vector x = (x 1 , x 2 , . . . , x n)”: a vector x → n-dimensional vector space is considered, and its coordinates are specified in a certain basis. It is also clear that the same vector in another basis of n-dimensional space will have different coordinates.

Consider the following example: suppose that in some basis of an n-dimensional vector space a system of n linearly independent vectors

$$e^{(1)} = (e_1^{(1)}, e_2^{(1)}, \dots, e_n^{(1)}), \quad e^{(2)} = (e_1^{(2)}, e_2^{(2)}, \dots, e_n^{(2)}), \quad \dots, \quad e^{(n)} = (e_1^{(n)}, e_2^{(n)}, \dots, e_n^{(n)})$$

is given, and also a vector $x = (x_1, x_2, \dots, x_n)$ is given.

The vectors e(1), e(2), …, e(n) in this case are themselves a basis of this vector space.

Suppose that it is necessary to determine the coordinates of the vector x in the basis e(1), e(2), …, e(n); denote them $\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n$.

The vector x is then represented as follows:

$$x = \tilde{x}_1 e^{(1)} + \tilde{x}_2 e^{(2)} + \dots + \tilde{x}_n e^{(n)}$$

Let's write this expression in coordinate form:

$$(x_1, x_2, \dots, x_n) = \tilde{x}_1 (e_1^{(1)}, e_2^{(1)}, \dots, e_n^{(1)}) + \tilde{x}_2 (e_1^{(2)}, e_2^{(2)}, \dots, e_n^{(2)}) + \dots + \tilde{x}_n (e_1^{(n)}, e_2^{(n)}, \dots, e_n^{(n)}) =$$
$$= (\tilde{x}_1 e_1^{(1)} + \tilde{x}_2 e_1^{(2)} + \dots + \tilde{x}_n e_1^{(n)}, \; \tilde{x}_1 e_2^{(1)} + \tilde{x}_2 e_2^{(2)} + \dots + \tilde{x}_n e_2^{(n)}, \; \dots, \; \tilde{x}_1 e_n^{(1)} + \tilde{x}_2 e_n^{(2)} + \dots + \tilde{x}_n e_n^{(n)})$$

The resulting equality is equivalent to a system of n linear algebraic equations with n unknowns $\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n$:

$$\begin{cases} x_1 = \tilde{x}_1 e_1^{(1)} + \tilde{x}_2 e_1^{(2)} + \dots + \tilde{x}_n e_1^{(n)} \\ x_2 = \tilde{x}_1 e_2^{(1)} + \tilde{x}_2 e_2^{(2)} + \dots + \tilde{x}_n e_2^{(n)} \\ \quad \vdots \\ x_n = \tilde{x}_1 e_n^{(1)} + \tilde{x}_2 e_n^{(2)} + \dots + \tilde{x}_n e_n^{(n)} \end{cases}$$

The matrix of this system will have the following form:

$$\begin{pmatrix} e_1^{(1)} & e_1^{(2)} & \cdots & e_1^{(n)} \\ e_2^{(1)} & e_2^{(2)} & \cdots & e_2^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ e_n^{(1)} & e_n^{(2)} & \cdots & e_n^{(n)} \end{pmatrix}$$

Let this matrix be A; its columns are the vectors of the linearly independent system e(1), e(2), …, e(n). The rank of the matrix is n and its determinant is nonzero. This means that the system of equations has a unique solution, which can be found by any convenient method: for example, Cramer's method or the matrix method. In this way we determine the coordinates $\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n$ of the vector x in the basis e(1), e(2), …, e(n).
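In computational terms this amounts to solving a single linear system. Here is a minimal sketch (my own Python/NumPy illustration with made-up numbers, not data taken from the article):

import numpy as np

# columns of A are the new basis vectors written in the old basis (hypothetical values)
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
x = np.array([2.0, 3.0, 5.0])          # coordinates of x in the old basis
print(np.linalg.solve(A, x))           # [2. 3. 0.]: coordinates of x in the new basis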

Let's apply the considered theory to a specific example.

Example 6

Initial data: vectors are specified in the basis of three-dimensional space

e(1) = (1, -1, 1), e(2) = (3, 2, -5), e(3) = (2, 1, -3), x = (6, 2, -7)

It is necessary to confirm the fact that the system of vectors e (1), e (2), e (3) also serves as the basis of a given space, and also to determine the coordinates of vector x in a given basis.

Solution

The system of vectors e (1), e (2), e (3) will be the basis of three-dimensional space if it is linearly independent. Let's find out this possibility by determining the rank of the matrix A, the rows of which are the given vectors e (1), e (2), e (3).

We use the Gaussian method:

$$A = \begin{pmatrix} 1 & -1 & 1 \\ 3 & 2 & -5 \\ 2 & 1 & -3 \end{pmatrix} \sim \begin{pmatrix} 1 & -1 & 1 \\ 0 & 5 & -8 \\ 0 & 3 & -5 \end{pmatrix} \sim \begin{pmatrix} 1 & -1 & 1 \\ 0 & 5 & -8 \\ 0 & 0 & -\frac{1}{5} \end{pmatrix}$$

Rank(A) = 3. Thus, the system of vectors e(1), e(2), e(3) is linearly independent and is a basis.

Let the vector x have coordinates $\tilde{x}_1, \tilde{x}_2, \tilde{x}_3$ in this basis. The relationship between the coordinates is given by the equations:

$$\begin{cases} x_1 = \tilde{x}_1 e_1^{(1)} + \tilde{x}_2 e_1^{(2)} + \tilde{x}_3 e_1^{(3)} \\ x_2 = \tilde{x}_1 e_2^{(1)} + \tilde{x}_2 e_2^{(2)} + \tilde{x}_3 e_2^{(3)} \\ x_3 = \tilde{x}_1 e_3^{(1)} + \tilde{x}_2 e_3^{(2)} + \tilde{x}_3 e_3^{(3)} \end{cases}$$

Substituting the values from the problem statement:

$$\begin{cases} \tilde{x}_1 + 3\tilde{x}_2 + 2\tilde{x}_3 = 6 \\ -\tilde{x}_1 + 2\tilde{x}_2 + \tilde{x}_3 = 2 \\ \tilde{x}_1 - 5\tilde{x}_2 - 3\tilde{x}_3 = -7 \end{cases}$$

Let's solve the system of equations using Cramer's method:

$$\Delta = \begin{vmatrix} 1 & 3 & 2 \\ -1 & 2 & 1 \\ 1 & -5 & -3 \end{vmatrix} = -1$$

$$\Delta_{\tilde{x}_1} = \begin{vmatrix} 6 & 3 & 2 \\ 2 & 2 & 1 \\ -7 & -5 & -3 \end{vmatrix} = -1, \quad \tilde{x}_1 = \frac{\Delta_{\tilde{x}_1}}{\Delta} = \frac{-1}{-1} = 1$$

$$\Delta_{\tilde{x}_2} = \begin{vmatrix} 1 & 6 & 2 \\ -1 & 2 & 1 \\ 1 & -7 & -3 \end{vmatrix} = -1, \quad \tilde{x}_2 = \frac{\Delta_{\tilde{x}_2}}{\Delta} = \frac{-1}{-1} = 1$$

$$\Delta_{\tilde{x}_3} = \begin{vmatrix} 1 & 3 & 6 \\ -1 & 2 & 2 \\ 1 & -5 & -7 \end{vmatrix} = -1, \quad \tilde{x}_3 = \frac{\Delta_{\tilde{x}_3}}{\Delta} = \frac{-1}{-1} = 1$$

Thus, the vector x in the basis e(1), e(2), e(3) has coordinates $\tilde{x}_1 = 1, \tilde{x}_2 = 1, \tilde{x}_3 = 1$.

Answer: x = (1 , 1 , 1)
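The answer is easy to verify numerically; this NumPy sketch (my addition, not part of the original solution) solves the same system directly instead of using Cramer's rule:

import numpy as np

E = np.array([[1, 3, 2],
              [-1, 2, 1],
              [1, -5, -3]])            # columns are the basis vectors e(1), e(2), e(3)
x = np.array([6, 2, -7])
print(np.linalg.solve(E, x))           # [1. 1. 1.], matching x~1 = x~2 = x~3 = 1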

Relationship between bases

Suppose that in some basis of an n-dimensional vector space two linearly independent systems of vectors are given:

$$c^{(1)} = (c_1^{(1)}, c_2^{(1)}, \dots, c_n^{(1)}), \quad c^{(2)} = (c_1^{(2)}, c_2^{(2)}, \dots, c_n^{(2)}), \quad \dots, \quad c^{(n)} = (c_1^{(n)}, c_2^{(n)}, \dots, c_n^{(n)})$$

$$e^{(1)} = (e_1^{(1)}, e_2^{(1)}, \dots, e_n^{(1)}), \quad e^{(2)} = (e_1^{(2)}, e_2^{(2)}, \dots, e_n^{(2)}), \quad \dots, \quad e^{(n)} = (e_1^{(n)}, e_2^{(n)}, \dots, e_n^{(n)})$$

These systems are also bases of a given space.

Let $\tilde{c}_1^{(1)}, \tilde{c}_2^{(1)}, \dots, \tilde{c}_n^{(1)}$ be the coordinates of the vector c(1) in the basis e(1), e(2), …, e(n); then the relationship between the coordinates is given by a system of linear equations:

$$\begin{cases} c_1^{(1)} = \tilde{c}_1^{(1)} e_1^{(1)} + \tilde{c}_2^{(1)} e_1^{(2)} + \dots + \tilde{c}_n^{(1)} e_1^{(n)} \\ c_2^{(1)} = \tilde{c}_1^{(1)} e_2^{(1)} + \tilde{c}_2^{(1)} e_2^{(2)} + \dots + \tilde{c}_n^{(1)} e_2^{(n)} \\ \quad \vdots \\ c_n^{(1)} = \tilde{c}_1^{(1)} e_n^{(1)} + \tilde{c}_2^{(1)} e_n^{(2)} + \dots + \tilde{c}_n^{(1)} e_n^{(n)} \end{cases}$$

The system can be represented as a matrix as follows:

$$(c_1^{(1)}, c_2^{(1)}, \dots, c_n^{(1)}) = (\tilde{c}_1^{(1)}, \tilde{c}_2^{(1)}, \dots, \tilde{c}_n^{(1)}) \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$$

Let us make the same entry for the vector c (2) by analogy:

$$(c_1^{(2)}, c_2^{(2)}, \dots, c_n^{(2)}) = (\tilde{c}_1^{(2)}, \tilde{c}_2^{(2)}, \dots, \tilde{c}_n^{(2)}) \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$$

$$\vdots$$

$$(c_1^{(n)}, c_2^{(n)}, \dots, c_n^{(n)}) = (\tilde{c}_1^{(n)}, \tilde{c}_2^{(n)}, \dots, \tilde{c}_n^{(n)}) \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$$

Let's combine the matrix equalities into one expression:

$$\begin{pmatrix} c_1^{(1)} & c_2^{(1)} & \cdots & c_n^{(1)} \\ c_1^{(2)} & c_2^{(2)} & \cdots & c_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ c_1^{(n)} & c_2^{(n)} & \cdots & c_n^{(n)} \end{pmatrix} = \begin{pmatrix} \tilde{c}_1^{(1)} & \tilde{c}_2^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \tilde{c}_1^{(2)} & \tilde{c}_2^{(2)} & \cdots & \tilde{c}_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \tilde{c}_2^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix} \cdot \begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix}$$

It will determine the connection between the vectors of two different bases.

Using the same principle, it is possible to express all the basis vectors e(1), e(2), …, e(n) through the basis c(1), c(2), …, c(n):

$$\begin{pmatrix} e_1^{(1)} & e_2^{(1)} & \cdots & e_n^{(1)} \\ e_1^{(2)} & e_2^{(2)} & \cdots & e_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ e_1^{(n)} & e_2^{(n)} & \cdots & e_n^{(n)} \end{pmatrix} = \begin{pmatrix} \tilde{e}_1^{(1)} & \tilde{e}_2^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \tilde{e}_1^{(2)} & \tilde{e}_2^{(2)} & \cdots & \tilde{e}_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \tilde{e}_2^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix} \cdot \begin{pmatrix} c_1^{(1)} & c_2^{(1)} & \cdots & c_n^{(1)} \\ c_1^{(2)} & c_2^{(2)} & \cdots & c_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ c_1^{(n)} & c_2^{(n)} & \cdots & c_n^{(n)} \end{pmatrix}$$

Let us give the following definitions:

Definition 5

The matrix $\begin{pmatrix} \tilde{c}_1^{(1)} & \tilde{c}_2^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \tilde{c}_1^{(2)} & \tilde{c}_2^{(2)} & \cdots & \tilde{c}_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \tilde{c}_2^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix}$ is called the transition matrix from the basis e(1), e(2), …, e(n) to the basis c(1), c(2), …, c(n).

Definition 6

The matrix $\begin{pmatrix} \tilde{e}_1^{(1)} & \tilde{e}_2^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \tilde{e}_1^{(2)} & \tilde{e}_2^{(2)} & \cdots & \tilde{e}_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \tilde{e}_2^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix}$ is called the transition matrix from the basis c(1), c(2), …, c(n) to the basis e(1), e(2), …, e(n).

From these equalities it is obvious that

$$\begin{pmatrix} \tilde{c}_1^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix} \cdot \begin{pmatrix} \tilde{e}_1^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}, \qquad \begin{pmatrix} \tilde{e}_1^{(1)} & \cdots & \tilde{e}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{e}_1^{(n)} & \cdots & \tilde{e}_n^{(n)} \end{pmatrix} \cdot \begin{pmatrix} \tilde{c}_1^{(1)} & \cdots & \tilde{c}_n^{(1)} \\ \vdots & \ddots & \vdots \\ \tilde{c}_1^{(n)} & \cdots & \tilde{c}_n^{(n)} \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

i.e., the transition matrices are mutually inverse.

Let's look at the theory using a specific example.

Example 7

Initial data: it is necessary to find the transition matrix from the basis

c(1) = (1, 2, 1), c(2) = (2, 3, 3), c(3) = (3, 7, 1)

to the basis

e(1) = (3, 1, 4), e(2) = (5, 2, 1), e(3) = (1, 1, -6)

You also need to indicate the relationship between the coordinates of an arbitrary vector x → in the given bases.

Solution

1. Let T be the transition matrix; then the following equality holds:

$$\begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} = T \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}$$

Multiply both sides of the equality on the right by

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1}$$

and we get:

$$T = \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1}$$

2. Define the transition matrix:

$$T = \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \cdot \begin{pmatrix} -18 & 5 & 3 \\ 7 & -2 & -1 \\ 5 & -1 & -1 \end{pmatrix} = \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$$

3. Let us establish the relationship between the coordinates of the vector x.

Suppose that in the basis c(1), c(2), c(3) the vector x has coordinates x1, x2, x3; then:

$$x = (x_1, x_2, x_3) \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix},$$

and in the basis e(1), e(2), e(3) it has coordinates $\tilde{x}_1, \tilde{x}_2, \tilde{x}_3$; then:

$$x = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \cdot \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix}$$

Since the left-hand sides of these equalities are equal, we can equate the right-hand sides as well:

$$(x_1, x_2, x_3) \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix} = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \cdot \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix}$$

Multiply both sides on the right by

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1}$$

and we get:

$$(x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \cdot \begin{pmatrix} 3 & 1 & 4 \\ 5 & 2 & 1 \\ 1 & 1 & -6 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 3 \\ 3 & 7 & 1 \end{pmatrix}^{-1} \;\Leftrightarrow\; (x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \cdot T \;\Leftrightarrow\; (x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \cdot \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$$

On the other hand,

$$(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) = (x_1, x_2, x_3) \cdot \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}^{-1}$$

The last equalities show the relationship between the coordinates of the vector x → in both bases.

Answer: the transition matrix is

$$T = \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$$

The coordinates of the vector x → in the given bases are related by the relation:

$$(x_1, x_2, x_3) = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \cdot \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}$$

$$(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) = (x_1, x_2, x_3) \cdot \begin{pmatrix} -27 & 9 & 4 \\ -71 & 20 & 12 \\ -41 & 9 & 8 \end{pmatrix}^{-1}$$
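A numerical cross-check of this example (a NumPy sketch of my own, not part of the original solution) reproduces both the transition matrix and the coordinate relations:

import numpy as np

C = np.array([[1, 2, 1],
              [2, 3, 3],
              [3, 7, 1]])              # rows are c(1), c(2), c(3)
E = np.array([[3, 1, 4],
              [5, 2, 1],
              [1, 1, -6]])             # rows are e(1), e(2), e(3)

T = E @ np.linalg.inv(C)               # E = T * C  =>  T = E * C^(-1)
print(np.round(T))                     # [[-27, 9, 4], [-71, 20, 12], [-41, 9, 8]]

x_new = np.array([1.0, 2.0, 3.0])      # hypothetical coordinates x~ in the basis e
x_old = x_new @ T                      # coordinates of the same vector in the basis c
print(np.allclose(x_new @ E, x_old @ C))   # True: both coordinate rows describe the same vector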


Linear dependence and linear independence of vectors.
Basis of vectors. Affine coordinate system

There is a cart with chocolates in the auditorium, and every visitor today will get a sweet couple: analytical geometry with linear algebra. This article covers two branches of higher mathematics at once, and we'll see how they get along in one wrapper. Take a break, eat a Twix! ...damn, what a bunch of nonsense. Oh well, I won't grumble; in the end, you should have a positive attitude towards studying.

Linear dependence of vectors, linear independence of vectors, basis of vectors and other terms have not only a geometric interpretation but, above all, an algebraic meaning. The very concept of a "vector" from the point of view of linear algebra is by no means always the "ordinary" vector that we can draw on a plane or in space. You don't need to look far for proof: try drawing a vector of five-dimensional space. Or the weather vector I just looked up on Gismeteo: temperature and atmospheric pressure, respectively. The example is, of course, incorrect from the point of view of the properties of a vector space, but nevertheless nobody forbids formalizing these parameters as a vector. Breath of autumn...

No, I'm not going to bore you with the theory of linear vector spaces; the task is to understand the definitions and theorems. The new terms (linear dependence, independence, linear combination, basis, etc.) apply to all vectors from the algebraic point of view, but the examples given will be geometric. Thus, everything is simple, accessible and clear. In addition to problems of analytical geometry, we will also consider some typical algebra problems. To master the material, it is advisable to be familiar with the lessons Vectors for dummies and How to calculate the determinant?

Linear dependence and independence of plane vectors.
Plane basis and affine coordinate system

Let's consider the plane of your computer desk (just a table, bedside table, floor, ceiling, whatever you like). The task will consist of the following actions:

1) Select plane basis. Roughly speaking, a tabletop has a length and a width, so it is intuitive that two vectors will be required to construct the basis. One vector is clearly not enough, three vectors are too much.

2) Based on the selected basis, define a coordinate system (a coordinate grid) in order to assign coordinates to all objects on the table.

Don't be surprised; at first the explanations will be on the fingers. Moreover, on yours. Please place the index finger of your left hand on the edge of the tabletop so that it points at the monitor. This will be one vector. Now place the little finger of your right hand on the edge of the table in the same way, so that it points at the monitor screen. This will be another vector. Smile, you look great! What can we say about these vectors? They are collinear, which means that each is linearly expressed through the other: one of them equals the other multiplied by some number different from zero (and vice versa).

You can see a picture of this in the lesson Vectors for dummies, where I explained the rule for multiplying a vector by a number.

Will your fingers set a basis on the plane of the computer desk? Obviously not. Collinear vectors run back and forth along a single direction, while a plane has both length and width.

Such vectors are called linearly dependent.

Reference: The words “linear”, “linearly” denote the fact that in mathematical equations and expressions there are no squares, cubes, other powers, logarithms, sines, etc. There are only linear (1st degree) expressions and dependencies.

Two plane vectors are linearly dependent if and only if they are collinear.

Cross your fingers on the table so that the angle between them is anything other than 0 or 180 degrees. Two plane vectors are linearly independent if and only if they are not collinear. So, the basis is obtained. There is no need to be embarrassed that the basis turned out to be "skewed", with non-perpendicular vectors of different lengths. Very soon we will see that not only an angle of 90 degrees is suitable for its construction, and not only unit vectors of equal length.

Any plane vector is expanded over the basis in a unique way as a linear combination of the basis vectors with real coefficients. These coefficients are called the coordinates of the vector in this basis.

It is also said that the vector is represented as a linear combination of the basis vectors. That is, this expression is called the decomposition of the vector over the basis, or a linear combination of the basis vectors.

For example, we can say that the vector is decomposed along an orthonormal basis of the plane, or we can say that it is represented as a linear combination of vectors.

Let's formulate the definition of a basis formally: a basis of the plane is a pair of linearly independent (non-collinear) vectors taken in a certain order, such that any plane vector is a linear combination of the basis vectors.

An essential point of the definition is the fact that the vectors are taken in a certain order: two bases that differ only in the order of their vectors are completely different bases! As they say, you cannot substitute the little finger of your left hand for the little finger of your right hand.

We have figured out the basis, but it is not enough to set a coordinate grid and assign coordinates to each item on your computer desk. Why isn't it enough? The vectors are free and wander throughout the entire plane. So how do you assign coordinates to those little dirty spots on the table left over from a wild weekend? A starting point is needed. And such a landmark is a point familiar to everyone - the origin of coordinates. Let's understand the coordinate system:

I'll start with the “school” system. Already in the introductory lesson Vectors for dummies I highlighted some differences between the rectangular coordinate system and the orthonormal basis. Here's the standard picture:

When they talk about rectangular coordinate system, then most often they mean the origin, coordinate axes and scale along the axes. Try typing “rectangular coordinate system” into a search engine, and you will see that many sources will tell you about coordinate axes familiar from the 5th-6th grade and how to plot points on a plane.

On the other hand, it seems that a rectangular coordinate system can be completely defined in terms of an orthonormal basis. And that's almost true. The wording is as follows:

An origin together with an orthonormal basis defines a Cartesian rectangular coordinate system of the plane. That is, a rectangular coordinate system is completely defined by a single point and two orthogonal unit vectors. That is why you see the drawing that I gave above: in geometric problems, both the vectors and the coordinate axes are often (but not always) drawn.

I think everyone understands that, using a point (the origin) and an orthonormal basis, ANY POINT of the plane and ANY VECTOR of the plane can be assigned coordinates. Figuratively speaking, "everything on the plane can be numbered."

Are coordinate vectors required to be unit? No, they can have an arbitrary non-zero length. Consider a point and two orthogonal vectors of arbitrary non-zero length:


Such a basis is called orthogonal. The origin together with these vectors defines a coordinate grid, and any point of the plane and any vector has its coordinates in the given basis. The obvious inconvenience is that the coordinate vectors in the general case have different lengths other than unity. If the lengths are equal to unity, the usual orthonormal basis is obtained.

! Note: in an orthogonal basis, as well as in the affine bases of the plane and space considered below, the units along the axes are CONDITIONAL. For example, one unit along the abscissa axis may contain 4 cm, and one unit along the ordinate axis 2 cm. This information is enough, if necessary, to convert "non-standard" coordinates into "our usual centimeters".

And the second question, which has actually already been answered, is whether the angle between the basis vectors must be equal to 90 degrees? No! As the definition states, the basis vectors must be only non-collinear. Accordingly, the angle can be anything except 0 and 180 degrees.

A point of the plane called the origin, together with non-collinear vectors taken in a certain order, defines an affine coordinate system of the plane:


Sometimes such a coordinate system is called oblique system. As examples, the drawing shows points and vectors:

As you understand, the affine coordinate system is even less convenient: the formulas for the lengths of vectors and segments that we discussed in the second part of the lesson Vectors for dummies do not work in it, and neither do many delicious formulas related to the scalar product of vectors. But the rules for adding vectors and multiplying a vector by a number, the formulas for dividing a segment in a given ratio, as well as some other types of problems that we will consider soon, remain valid.

And the conclusion is that the most convenient special case of an affine coordinate system is the Cartesian rectangular system. That's why you see it most often, my dear one. ...However, everything in this life is relative: there are many situations in which an oblique (or some other, for example polar) coordinate system is more appropriate. And humanoids might like such systems =)

Let's move on to the practical part. All problems in this lesson are valid both for the rectangular coordinate system and for the general affine case. There is nothing complicated here; all the material is accessible even to a schoolchild.

How to determine collinearity of plane vectors?

A typical task. For two plane vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional. Essentially, this is a coordinate-by-coordinate detailing of the obvious relationship.

Example 1

a) Check whether the vectors are collinear.
b) Do the vectors form a basis?

Solution:
a) Let us find out whether there is a proportionality coefficient for the vectors such that the equalities are satisfied:

I’ll definitely tell you about the “foppish” version of applying this rule, which works quite well in practice. The idea is to immediately make up the proportion and see if it is correct:

Let's make a proportion from the ratios of the corresponding coordinates of the vectors:

Let's shorten:
, thus the corresponding coordinates are proportional, and therefore the vectors are collinear.

The relationship could be made the other way around; this is an equivalent option:

For self-test, you can use the fact that collinear vectors are linearly expressed through each other. In this case, the equalities take place . Their validity can be easily verified through elementary operations with vectors:

b) Two plane vectors form a basis if they are not collinear (linearly independent). We examine vectors for collinearity . Let's create a system:

From the first equation it follows that , from the second equation it follows that , which means the system is inconsistent (it has no solutions). Thus, the corresponding coordinates of the vectors are not proportional.

Conclusion: the vectors are linearly independent and form a basis.

A simplified version of the solution looks like this:

Let's make a proportion from the corresponding coordinates of the vectors :
, which means that these vectors are linearly independent and form a basis.

Usually this version is not rejected by reviewers, but a problem arises in cases where some coordinates are equal to zero. How do you set up the proportion then? (Indeed, you cannot divide by zero.) It is for this reason that I called the simplified solution "foppish".

Answer: a) , b) form.

A small creative example for your own solution:

Example 2

At what value of the parameter will the vectors be collinear?

In the sample solution, the parameter is found through the proportion.

There is an elegant algebraic way to check vectors for collinearity. Let’s systematize our knowledge and add it as the fifth point:

For two plane vectors the following statements are equivalent:

1) the vectors are linearly independent;
2) the vectors form a basis;
3) the vectors are not collinear;
4) the vectors cannot be linearly expressed through each other;
+ 5) the determinant composed of the coordinates of these vectors is nonzero.

Respectively, the following opposite statements are equivalent:
1) vectors are linearly dependent;
2) vectors do not form a basis;
3) the vectors are collinear;
4) vectors can be linearly expressed through each other;
+ 5) the determinant composed of the coordinates of these vectors is equal to zero.

I really, really hope that by now you already understand all the terms and statements you have encountered.

Let's take a closer look at the new, fifth point: two plane vectors are collinear if and only if the determinant composed of the coordinates of these vectors is equal to zero. To use this criterion, you naturally need to be able to find determinants.
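In code this criterion is a one-liner; here is a tiny sketch (my own Python addition, not part of the lesson) that checks the collinearity of two plane vectors via the 2x2 determinant:

import numpy as np

def collinear(a, b, eps=1e-12):
    # two plane vectors are collinear exactly when the determinant of their coordinates is zero
    return abs(np.linalg.det(np.array([a, b], dtype=float))) < eps

print(collinear((2, 3), (4, 6)))       # True: (4, 6) = 2*(2, 3)
print(collinear((2, 3), (4, 5)))       # False: not proportional, these two form a basis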

Let's solve Example 1 in the second way:

a) Let us calculate the determinant made up of the coordinates of the vectors :
, which means that these vectors are collinear.

b) Two plane vectors form a basis if they are not collinear (linearly independent). Let's calculate the determinant made up of vector coordinates :
, which means the vectors are linearly independent and form a basis.

Answer: a) , b) form.

It looks much more compact and prettier than a solution with proportions.

With the help of the material considered, it is possible to establish not only the collinearity of vectors, but also to prove the parallelism of segments and straight lines. Let's consider a couple of problems with specific geometric shapes.

Example 3

The vertices of a quadrilateral are given. Prove that a quadrilateral is a parallelogram.

Proof: There is no need to create a drawing in the problem, since the solution will be purely analytical. Let's remember the definition of a parallelogram:
A parallelogram is a quadrilateral whose opposite sides are pairwise parallel.

Thus, it is necessary to prove:
1) parallelism of opposite sides and;
2) parallelism of opposite sides and.

We prove:

1) Find the vectors:


2) Find the vectors:

The result is the same vector ("according to school" - equal vectors). Collinearity is quite obvious, but it is better to formalize the solution properly. Let's calculate the determinant made up of the vector coordinates:
, which means that these vectors are collinear, and .

Conclusion: the opposite sides of the quadrilateral are pairwise parallel, which means it is a parallelogram by definition. Q.E.D.
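If you want to run this kind of proof numerically, here is a sketch with hypothetical vertex coordinates (the actual coordinates of Example 3 are not reproduced in this text, so the points below are my own illustration):

import numpy as np

# hypothetical vertices of a quadrilateral ABCD
A = np.array([0.0, 0.0])
B = np.array([4.0, 1.0])
C = np.array([6.0, 4.0])
D = np.array([2.0, 3.0])

AB, DC = B - A, C - D                  # one pair of opposite sides as vectors
BC, AD = C - B, D - A                  # the other pair
print(np.linalg.det(np.array([AB, DC])))   # 0.0 -> AB is parallel to DC
print(np.linalg.det(np.array([BC, AD])))   # 0.0 -> BC is parallel to AD, hence a parallelogram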

More good and different figures:

Example 4

The vertices of a quadrilateral are given. Prove that a quadrilateral is a trapezoid.

For a more rigorous formulation of the proof it is better, of course, to state the definition of a trapezoid, but it is enough to simply remember what it looks like.

This is a task for you to solve on your own. Full solution at the end of the lesson.

And now it’s time to slowly move from the plane into space:

How to determine collinearity of space vectors?

The rule is very similar. In order for two space vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional.

Example 5

Find out whether the following space vectors are collinear:

a)
b)
c)

Solution:
a) Let’s check whether there is a coefficient of proportionality for the corresponding coordinates of the vectors:

The system has no solution, which means the vectors are not collinear.

The "simplified" version is written as a check of the proportion. In this case:
– the corresponding coordinates are not proportional, which means the vectors are not collinear.

Answer: the vectors are not collinear.

b-c) These are for you to solve on your own. Try both methods.

There is a method for checking spatial vectors for collinearity through a third-order determinant; this method is covered in the article Vector product of vectors.

Similar to the plane case, the considered tools can be used to study the parallelism of spatial segments and straight lines.

Welcome to the second section:

Linear dependence and independence of vectors in three-dimensional space.
Spatial basis and affine coordinate system

Many of the patterns that we examined on the plane will be valid for space. I tried to minimize the theory notes, since the lion's share of the information has already been chewed. However, I recommend that you read the introductory part carefully, as new terms and concepts will appear.

Now, instead of the plane of the computer desk, we explore three-dimensional space. First, let's create its basis. Someone is now indoors, someone is outdoors, but in any case, we cannot escape three dimensions: width, length and height. Therefore, to construct a basis, three spatial vectors will be required. One or two vectors are not enough, the fourth is superfluous.

And again we warm up on our fingers. Please raise a hand and spread the thumb, index and middle finger in different directions. These will be our vectors: they point in different directions, have different lengths and make different angles with each other. Congratulations, a basis of three-dimensional space is ready! By the way, there is no need to demonstrate this to your teachers: no matter how hard you twist your fingers, there is no escape from the definitions =)

Next, let's ask an important question: do any three vectors form a basis of three-dimensional space? Please press three fingers firmly onto the top of the computer desk. What happened? The three vectors now lie in the same plane and, roughly speaking, we have lost one of the dimensions: height. Such vectors are coplanar, and it is quite obvious that they do not create a basis of three-dimensional space.

It should be noted that coplanar vectors do not have to lie in the same plane, they can be in parallel planes (just don’t do this with your fingers, only Salvador Dali did this =)).

Definition: vectors are called coplanar if there is a plane to which they are all parallel. It is logical to add that if no such plane exists, then the vectors are not coplanar.

Three coplanar vectors are always linearly dependent, that is, they are linearly expressed through each other. For simplicity, let us again imagine that they lie in the same plane. First, the vectors may be not only coplanar but also collinear; then any of them can be expressed through any other. In the second case, if, for example, two of the vectors are not collinear, then the third vector is expressed through them in a unique way (why is easy to guess from the materials of the previous section).

The converse is also true: three non-coplanar vectors are always linearly independent, that is, they are in no way expressed through each other. And, obviously, only such vectors can form the basis of three-dimensional space.

Definition: a basis of three-dimensional space is a triple of linearly independent (non-coplanar) vectors taken in a certain order; any vector of the space is decomposed over such a basis in a unique way, and the coefficients of that decomposition are the coordinates of the vector in this basis.

Let me remind you: we can also say that the vector is represented as a linear combination of the basis vectors.

The concept of a coordinate system is introduced in exactly the same way as for the plane case; one point and any three linearly independent vectors are sufficient:

A point called the origin, together with non-coplanar vectors taken in a certain order, defines an affine coordinate system of three-dimensional space:

Of course, the coordinate grid is "oblique" and inconvenient, but, nevertheless, the constructed coordinate system allows us to determine unambiguously the coordinates of any vector and of any point in space. As in the plane case, some formulas that I have already mentioned will not work in an affine coordinate system of space.

The most familiar and convenient special case of an affine coordinate system, as everyone guesses, is the rectangular coordinate system of space:

A point of space called the origin, together with an orthonormal basis, defines a Cartesian rectangular coordinate system of space. A familiar picture:

Before moving on to practical tasks, let’s again systematize the information:

For three space vectors the following statements are equivalent:
1) the vectors are linearly independent;
2) the vectors form a basis;
3) the vectors are not coplanar;
4) vectors cannot be linearly expressed through each other;
5) the determinant, composed of the coordinates of these vectors, is different from zero.

I think the opposite statements are understandable.

Linear dependence/independence of space vectors is traditionally checked using a determinant (point 5). The remaining practical tasks will be of a pronounced algebraic nature. It's time to hang up the geometry stick and wield the baseball bat of linear algebra:

Three vectors of space are coplanar if and only if the determinant composed of the coordinates of the given vectors is equal to zero: .

I would like to draw your attention to a small technical nuance: the coordinates of vectors can be written not only in columns, but also in rows (the value of the determinant will not change because of this - see properties of determinants). But it is much better in columns, since it is more beneficial for solving some practical problems.

For those readers who have a little forgotten the methods of calculating determinants, or maybe have little understanding of them at all, I recommend one of my oldest lessons: How to calculate the determinant?
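By the way, the coplanarity criterion is just as easy to check by machine; a small NumPy sketch (my own addition, not part of the lesson) might look like this:

import numpy as np

def coplanar(a, b, c, eps=1e-12):
    # three space vectors are coplanar exactly when the determinant of their coordinates is zero
    return abs(np.linalg.det(np.column_stack([a, b, c]))) < eps

print(coplanar((1, 0, 0), (0, 1, 0), (1, 1, 0)))   # True: all three lie in the plane z = 0
print(coplanar((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # False: they form a basis of space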

Example 6

Check whether the following vectors form the basis of three-dimensional space:

Solution: In fact, the entire solution comes down to calculating the determinant.

a) Let's calculate the determinant made up of the vector coordinates (the determinant is expanded along the first row):

, which means that the vectors are linearly independent (not coplanar) and form the basis of three-dimensional space.

Answer: these vectors form a basis

b) This one is for you to solve on your own. Full solution and answer at the end of the lesson.

There are also creative tasks:

Example 7

At what value of the parameter will the vectors be coplanar?

Solution: Vectors are coplanar if and only if the determinant composed of the coordinates of these vectors is equal to zero:

Essentially, you need to solve an equation with a determinant. We swoop down on zeros like kites on jerboas: it's best to expand the determinant along the second row and immediately get rid of the minuses:

We carry out further simplifications and reduce the matter to the simplest linear equation:

Answer: at

It's easy to check: substitute the resulting value into the original determinant, expand it again, and make sure that it equals zero.

In conclusion, we will consider another typical problem, which is more algebraic in nature and is traditionally included in a linear algebra course. It is so common that it deserves its own topic:

Prove that 3 vectors form the basis of three-dimensional space
and find the coordinates of the 4th vector in this basis

Example 8

Vectors are given. Show that vectors form a basis in three-dimensional space and find the coordinates of the vector in this basis.

Solution: First, let's deal with the condition. Four vectors are given and, as you can see, they already have coordinates in some basis. What that basis is does not interest us. What does interest us is the following: three of the vectors may well form a new basis. The first stage completely coincides with the solution of Example 6: we need to check whether the vectors are really linearly independent:

Let's calculate the determinant made up of vector coordinates:

, which means that the vectors are linearly independent and form the basis of three-dimensional space.

! Important: be sure to write the vector coordinates into the columns of the determinant, not into the rows. Otherwise there will be confusion in the further solution algorithm.
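The remaining step of Example 8 (writing the coordinates in columns and solving the resulting system) looks like this in code; since the actual vectors are not reproduced in this text, the numbers below are hypothetical and only show the procedure:

import numpy as np

# hypothetical basis vectors e1, e2, e3 and a vector d, all given in the old basis
e1, e2, e3 = (1, 1, 1), (1, 2, 3), (1, 4, 9)
d = (2, 4, 8)

M = np.column_stack([e1, e2, e3]).astype(float)   # coordinates go into the COLUMNS
print(np.linalg.det(M))                           # nonzero -> e1, e2, e3 form a basis
print(np.linalg.solve(M, d))                      # [ 2. -1.  1.]: coordinates of d in that basis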

In geometry, a vector is understood as a directed segment, and vectors obtained from one another by parallel translation are considered equal. All equal vectors are treated as the same vector. The origin of the vector can be placed at any point in space or plane.

If the coordinates of the ends of the vector are given in space: A(x 1 , y 1 , z 1), B(x 2 , y 2 , z 2), then

AB = (x2 – x1, y2 – y1, z2 – z1). (1)

A similar formula holds on the plane. This means that a vector can be written as a row of coordinates. Operations on vectors, such as addition and multiplication by a number, are performed on rows componentwise. This makes it possible to broaden the concept of a vector, understanding a vector as any row of numbers. For example, the solution to a system of linear equations, as well as any set of values of the system's variables, can be viewed as a vector.

On rows of the same length, the addition operation is performed according to the rule

(a1, a2, …, an) + (b1, b2, …, bn) = (a1 + b1, a2 + b2, …, an + bn). (2)

Multiplying a row by a number follows the rule

λ(a1, a2, …, an) = (λa1, λa2, …, λan). (3)

A set of row vectors of a given length n with the indicated operations of addition of vectors and multiplication by a number forms an algebraic structure called n-dimensional linear space.

A linear combination of vectors a1, …, am is a vector λ1a1 + λ2a2 + … + λmam, where λ1, …, λm are arbitrary coefficients.

A system of vectors is called linearly dependent if there is a linear combination of these vectors equal to the zero vector in which at least one coefficient is nonzero.

A system of vectors is called linearly independent if every linear combination of these vectors that equals the zero vector has all coefficients equal to zero.

Thus, solving the question of the linear dependence of a system of vectors is reduced to solving the equation

x1a1 + x2a2 + … + xmam = 0. (4)

If this equation has non-zero solutions, then the system of vectors is linearly dependent. If the zero solution is unique, then the system of vectors is linearly independent.

To solve system (4), for clarity, the vectors can be written not as rows, but as columns.

Then, having performed the transformations on the left-hand side, we arrive at a system of linear equations equivalent to equation (4). The main matrix of this system is formed by the coordinates of the original vectors arranged in columns. A column of free terms is not needed here, since the system is homogeneous.
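In practice this check reduces to computing the rank of that matrix: if the rank is smaller than the number of vectors, the homogeneous system (4) has nonzero solutions and the system of vectors is linearly dependent. A short sketch (a NumPy illustration of my own, with made-up vectors):

import numpy as np

vectors = [np.array([1, 2, 3]),
           np.array([2, 4, 6]),         # equals 2 times the first vector
           np.array([0, 1, 1])]
M = np.column_stack(vectors)            # vectors written as columns, as in system (4)
print(np.linalg.matrix_rank(M) < len(vectors))   # True -> the system is linearly dependent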

A basis of a system of vectors (finite or infinite; in particular, of the entire linear space) is a non-empty linearly independent subsystem of it through which any vector of the system can be expressed.

Example 1.5.2. Find a basis of the system of vectors a1 = (1, 2, 2, 4), a2 = (2, 3, 5, 1), a3 = (3, 4, 8, –2), a4 = (2, 5, 0, 3) and express the remaining vectors through the basis.

Solution. We build a matrix in which the coordinates of these vectors are arranged in columns. This is the matrix of the system x1a1 + x2a2 + x3a3 + x4a4 = 0. We reduce the matrix to stepped form:

$$\begin{pmatrix} 1 & 2 & 3 & 2 \\ 2 & 3 & 4 & 5 \\ 2 & 5 & 8 & 0 \\ 4 & 1 & -2 & 3 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & -1 & -2 & 1 \\ 0 & 1 & 2 & -4 \\ 0 & -7 & -14 & -5 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & -1 & -2 & 1 \\ 0 & 0 & 0 & -3 \\ 0 & 0 & 0 & -12 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 3 & 2 \\ 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

The basis of this system of vectors is formed by the vectors a1, a2, a4, to which the leading elements of the rows correspond (columns 1, 2 and 4 of the stepped matrix). To express the vector a3, we solve the equation x1a1 + x2a2 + x4a4 = a3. It reduces to a system of linear equations whose matrix is obtained from the original one by moving the column corresponding to a3 into the place of the column of free terms. Therefore, when reducing to stepped form, the same transformations as above are performed on the matrix. This means that we can use the stepped matrix already obtained, making the necessary rearrangement of its columns: the columns with the leading elements are placed to the left of the vertical bar, and the column corresponding to the vector a3 is placed to the right of the bar.

We successively find:

x4 = 0;

x2 = 2;

x1 + 4 = 3, x1 = –1;

hence a3 = –a1 + 2a2 + 0·a4 = –a1 + 2a2.

Comment. If it is necessary to express several vectors through the basis, then a corresponding system of linear equations is constructed for each of them. These systems differ only in the columns of free terms, and each system is solved independently of the others.
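The whole of Example 1.5.2 can be reproduced with a computer algebra system; the sketch below (my own addition, using SymPy) finds the pivot columns and reads the expansion of a3 directly from the reduced row echelon form:

from sympy import Matrix

# columns are the vectors a1, a2, a3, a4 of Example 1.5.2
A = Matrix([[1, 2, 3, 2],
            [2, 3, 4, 5],
            [2, 5, 8, 0],
            [4, 1, -2, 3]])

R, pivots = A.rref()
print(pivots)            # (0, 1, 3): the pivot columns a1, a2, a4 form the basis
print(R.col(2).T)        # Matrix([[-1, 2, 0, 0]]): a3 = -1*a1 + 2*a2 + 0*a4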

Exercise 1.4. Find the basis of the system of vectors and express the remaining vectors through the basis:

a) = (1, 3, 2, 0), = (3, 4, 2, 1), = (1, –2, –2, 1), = (3, 5, 1, 2);

b) = (2, 1, 2, 3), = (1, 2, 2, 3), = (3, –1, 2, 2), = (4, –2, 2, 2);

c) = (1, 2, 3), = (2, 4, 3), = (3, 6, 6), = (4, –2, 1); = (2, –6, –2).

In a given system of vectors a basis can usually be chosen in different ways, but all bases contain the same number of vectors. The number of vectors in a basis of a linear space is called the dimension of the space. For the n-dimensional linear space, n is precisely this dimension, since the space has the standard basis e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1). Through this basis any vector a = (a1, a2, …, an) is expressed as follows:

a = (a1, 0, …, 0) + (0, a2, …, 0) + … + (0, 0, …, an) = a1(1, 0, …, 0) + a2(0, 1, …, 0) + … + an(0, 0, …, 1) = a1e1 + a2e2 + … + anen.

Thus, the components of the row vector a = (a1, a2, …, an) are its coefficients in the expansion through the standard basis.

Straight lines on a plane

The task of analytical geometry is the application of the coordinate method to geometric problems. Thus, the problem is translated into algebraic form and solved using algebra.

Find a basis of the system of vectors; expand the vectors not included in the basis over that basis:

A1 = {5, 2, -3, 1}, A2 = {4, 1, -2, 3}, A3 = {1, 1, -1, -2}, A4 = {3, 4, -1, 2}, A5 = {13, 8, -7, 4}.

Solution. Consider a homogeneous system of linear equations

A1·x1 + A2·x2 + A3·x3 + A4·x4 + A5·x5 = 0

or, in expanded form,

$$\begin{cases} 5x_1 + 4x_2 + x_3 + 3x_4 + 13x_5 = 0 \\ 2x_1 + x_2 + x_3 + 4x_4 + 8x_5 = 0 \\ -3x_1 - 2x_2 - x_3 - x_4 - 7x_5 = 0 \\ x_1 + 3x_2 - 2x_3 + 2x_4 + 4x_5 = 0 \end{cases}$$

We will solve this system by the Gaussian method, without swapping rows and columns and, moreover, choosing the pivot element not in the upper left corner but anywhere in the row. The goal is to single out a diagonal part of the transformed system of vectors.


The allowed system of vectors, equivalent to the original one, has the form

A1′·x1 + A2′·x2 + A3′·x3 + A4′·x4 + A5′·x5 = 0,

where A1′ = …, A2′ = …, A3′ = …, A4′ = …, A5′ = … . (1)

The vectors A1′, A3′, A4′ form a diagonal system. Therefore, the vectors A1, A3, A4 form a basis of the vector system A1, A2, A3, A4, A5.

Let us now expand the vectors A2 and A5 over the basis A1, A3, A4. To do this, we first expand the corresponding vectors A2′ and A5′ over the diagonal system A1′, A3′, A4′, bearing in mind that the coefficients of the expansion of a vector over the diagonal system are its coordinates xi.

From (1) we have:

A2′ = A3′·(–1) + A4′·0 + A1′·1, hence A2′ = A1′ – A3′.

A5′ = A3′·0 + A4′·1 + A1′·2, hence A5′ = 2A1′ + A4′.

The vectors A2 and A5 are expanded over the basis A1, A3, A4 with the same coefficients as the vectors A2′ and A5′ over the diagonal system A1′, A3′, A4′ (namely, the coefficients xi). Hence,

A2 = A1 – A3, A5 = 2A1 + A4.
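These two expansions are easy to verify directly; a short NumPy check (my own addition, not part of the original solution) confirms them:

import numpy as np

A1 = np.array([5, 2, -3, 1])
A2 = np.array([4, 1, -2, 3])
A3 = np.array([1, 1, -1, -2])
A4 = np.array([3, 4, -1, 2])
A5 = np.array([13, 8, -7, 4])

print(np.array_equal(A2, A1 - A3))      # True
print(np.array_equal(A5, 2*A1 + A4))    # True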

Tasks. 1. Find a basis of the system of vectors; expand the vectors not included in the basis over that basis:

1. a 1 = { 1, 2, 1 }, a 2 = { 2, 1, 3 }, a 3 = { 1, 5, 0 }, a 4 = { 2, -2, 4 }.

2. a 1 = { 1, 1, 2 }, a 2 = { 0, 1, 2 }, a 3 = { 2, 1, -4 }, a 4 = { 1, 1, 0 }.

3. a 1 = { 1, -2, 3 }, a 2 = { 0, 1, -1 }, a 3 = { 1, 3, 0 }, a 4 = { 0, -7, 3 }, a 5 = { 1, 1, 1 }.

4. a 1 = { 1, 2, -2 }, a 2 = { 0, -1, 4 }, a 3 = { 2, -3, 3 }.

2. Find all bases of the vector system:

1. a 1 = { 1, 1, 2 }, a 2 = { 3, 1, 2 }, a 3 = { 1, 2, 1 }, a 4 = { 2, 1, 2 }.

2. a 1 = { 1, 1, 1 }, a 2 = { -3, -5, 5 }, a 3 = { 3, 4, -1 }, a 4 = { 1, -1, 4 }.