Numerical methods for solving nonlinear equations: the bisection method, Newton's (tangent) method, the secant method, and the simple iteration method

All men by nature desire to know. (Aristotle, Metaphysics)

Numerical Methods: Solving Nonlinear Equations

Problems of solving equations constantly arise in practice. For example, in economics, when developing a business, you want to know when the profit reaches a certain value; in medicine, when studying the effect of a drug, it is important to know when the concentration of the substance reaches a given level; and so on.

In optimization problems, it is often necessary to determine the points at which the derivative of a function becomes 0, which is a necessary condition for a local extremum.

In statistics, when constructing estimates using the least squares method or the maximum likelihood method, one also has to solve nonlinear equations and systems of equations.

So, there is a whole class of problems related to finding the roots of nonlinear equations, both algebraic and transcendental.

In the simplest case, we have a function f defined on the segment [a, b] and taking certain values on it.

Each value x from this segment can be matched with the number f(x); this is a functional dependency, a key concept of mathematics.

We need to find the values x* at which f(x*) = 0; such values are called the roots of the function f.

Visually, we need to determine the intersection points of the graph of the function with the abscissa axis.

Bisection Method

The simplest method for finding the roots of an equation is the bisection method, or dichotomy.

This method is intuitive and everyone would act in a similar way when solving a problem.

The algorithm is as follows.

Suppose we have found two points a and b such that f(a) and f(b) have different signs; then between these points there is at least one root of the function f.

Divide the segment [a, b] in half and take the middle point c = (a + b)/2.

Then either f(a)·f(c) ≤ 0, or f(c)·f(b) ≤ 0.

Let's keep the half of the segment for which the values at the ends have different signs. Now we again divide this segment in half, keep the part on whose boundaries the function has different signs, and so on until the required accuracy is achieved.

Obviously, we will gradually narrow the area where the root of the function is located, and, therefore, we will determine it with a certain degree of accuracy.

Note that the described algorithm is applicable for any continuous function.

The advantages of the bisection method include its high reliability and simplicity.

The disadvantage of the method is the fact that, before applying it, one must find two points at which the values of the function have different signs. It is obvious that the method is not applicable to roots of even multiplicity, and it also cannot be generalized to the case of complex roots and systems of equations.

The order of convergence of the method is linear: at each step the bracketing interval is halved, so the more iterations are made, the more accurately the root is determined.

Newton's method: theoretical foundations

Newton's classical method (the tangent method) consists in the following: if x_n is some approximation to the root of the equation f(x) = 0, then the next approximation x_{n+1} is defined as the root of the tangent to the function f(x) drawn at the point x_n.

The equation of the tangent to the function f at the point x_n has the form:

y = f(x_n) + f'(x_n)·(x - x_n).

In the tangent equation, let's put y = 0 and x = x_{n+1}.

Then the algorithm of sequential calculations in Newton's method is as follows:

x_{n+1} = x_n - f(x_n) / f'(x_n).

The convergence of the tangent method is quadratic, i.e. the order of convergence is 2.

Thus, the convergence of Newton's tangent method is very fast.

Remember this wonderful fact!

Without any changes, the method is generalized to the complex case.

If the root is a root of the second multiplicity or higher, then the order of convergence drops and becomes linear.

Exercise 1. Using the method of tangents, find the solution of the equation on the segment (0, 2).

Exercise 2. Using the method of tangents, find the solution of the equation on the segment (1, 3).

The disadvantages of Newton's method include its locality: it is guaranteed to converge for an arbitrary starting approximation only under additional conditions (for example, when f(x)·f''(x) > 0 on the whole segment); otherwise there is convergence only in some neighborhood of the root.

The disadvantage of Newton's method is the need to calculate derivatives at each step.

Visualization of Newton's method

Newton's method (the tangent method) is applied if the equation f(x) = 0 has a root, and the following conditions are met:

1) the function y = f(x) is defined and continuous for x ∈ [a; b];

2) f(a)·f(b) < 0 (the function takes values of different signs at the ends of the segment [a; b]);

3) the derivatives f'(x) and f''(x) keep their signs on the segment [a; b] (i.e. the function f(x) either increases or decreases on the segment [a; b], while maintaining the direction of its convexity).

The main idea of the method is as follows: on the segment [a; b] a number x_0 is chosen for which f(x_0) has the same sign as f''(x_0), i.e. the condition f(x_0)·f''(x_0) > 0 holds. Thus, a point with abscissa x_0 is chosen at which the tangent to the curve y = f(x) on the segment [a; b] crosses the axis Ox. For the point x_0, it is convenient first to choose one of the ends of the segment.

Consider Newton's method on a specific example.

Let us be given an increasing function y = f(x) = x^2 - 2, continuous on the segment (0; 2), having f'(x) = 2x > 0 and f''(x) = 2 > 0.

Picture 1. f(x) = x^2 - 2

The tangent equation in general form has the representation:

y - y_0 = f'(x_0)·(x - x_0).

In our case: y - y_0 = 2·x_0·(x - x_0). As the point x_0 we choose the point B_1(b; f(b)) = (2; 2). We draw a tangent to the function y = f(x) at the point B_1 and denote the point of intersection of the tangent with the axis Ox by x_1. We get the equation of the first tangent: y - 2 = 2·2·(x - 2), i.e. y = 4x - 6.

The intersection point of the tangent with the axis Ox: x_1 = 1.5.

Picture 2. Result of the first iteration

We find the point of intersection of the function y = f(x) with the perpendicular to the axis Ox through the point x_1 and get the point B_2 = (1.5; 0.25). We again draw a tangent to the function y = f(x) at the point B_2 and denote the point of intersection of the tangent with the axis Ox by x_2.

Equation of the second tangent: y - 0.25 = 2·1.5·(x - 1.5), i.e. y = 3x - 4.25.

The intersection point of the tangent with the axis Ox: x_2 = 4.25/3 ≈ 1.4167.

Picture 3. Second iteration of Newton's method

Then we find the point of intersection of the function y = f(x) with the perpendicular to the axis Ox through the point x_2, obtain the point B_3, and so on.

Picture 4. The third step of the tangent method

The first approximation of the root is determined by the formula:

x_1 = x_0 - f(x_0)/f'(x_0) = 2 - 2/4 = 1.5.

The second approximation of the root is determined by the formula:

x_2 = x_1 - f(x_1)/f'(x_1) = 1.5 - 0.25/3 ≈ 1.4167.

The third approximation of the root is determined by the formula:

x_3 = x_2 - f(x_2)/f'(x_2) ≈ 1.4142.

Thus, the i-th approximation of the root is determined by the formula:

x_i = x_{i-1} - f(x_{i-1}) / f'(x_{i-1}).

Calculations are carried out until the required decimal places of successive approximations coincide, or until the specified accuracy e is reached, i.e. until the inequality |x_i - x_{i-1}| < e is fulfilled.

In our case, let's compare the approximation obtained in the third step with the real answer calculated on the calculator:

Figure 5. Root of 2 calculated on a calculator

As you can see, already at the third step we got an error of about 0.000002.

Thus, it is possible to calculate the value of the "square root of 2" with any degree of accuracy. This wonderful method was invented by Newton and allows finding the roots of very complex equations.

Newton's Method: C++ Application

In this article, we automate the process of calculating the roots of equations by writing a console application in C++. We will develop it in Visual C++ 2010 Express, which is a free and very convenient C++ development environment.

Let's launch Visual C++ 2010 Express. The start window of the program will appear. In the left corner, click "Create Project".

Fig. 1. Visual C++ 2010 Express Start Page

In the menu that appears, select "Win32 Console Application", enter the name of the application "Newton_Method".

Fig. 2. Create a project

// Newton_method.cpp: defines the entry point for the console application

#include "stdafx.h"
#include <iostream>
#include <cmath>

using namespace std;

double f(double x) { return x*x - 2; }   // returns the value of the function f(x) = x^2 - 2

double df(double x) { return 2*x; }      // returns the value of the derivative

double d2f(double x) { return 2; }       // second derivative value

int _tmain(int argc, _TCHAR* argv[])
{
    int exit = 0;
    do {
        int i = 0;            // iteration counter
        double x0, xn;        // computed approximations for the root
        double a, b, eps;     // segment boundaries and required accuracy
        cout << "Please input a, b\n=>";
        cin >> a >> b;        // enter the boundaries of the segment on which we will look for the root
        cout << "\nPlease input epsilon\n=>";
        cin >> eps;           // enter the desired calculation accuracy
        if (a > b) { double t = a; a = b; b = t; } // if the user mixed up the boundaries of the segment, swap them
        if (f(a)*f(b) > 0)    // if the signs of the function on the edges of the segment are the same, then there is no root
        {
            cout << "\nError! No roots in this interval\n";
        }
        else
        {
            if (f(a)*d2f(a) > 0) x0 = a; else x0 = b; // to select a starting point, check f(x0)*d2f(x0) > 0
            xn = x0 - f(x0)/df(x0);                   // count the first approximation
            cout << ++i << "-th iteration = " << xn << "\n";
            while (fabs(x0 - xn) > eps)               // until we reach the required accuracy, we continue to calculate
            {
                x0 = xn;
                xn = x0 - f(x0)/df(x0);               // directly Newton's formula
                cout << ++i << "-th iteration = " << xn << "\n";
            }
            cout << "\nRoot = " << xn << "\n";
        }
        cout << "\nExit?=>";
        cin >> exit;
    } while (exit != 1); // until the user enters exit = 1
    return 0;
}

Let's see how it works. Click on the green triangle in the upper left corner of the screen, or press F5.

If the compilation error "Error LNK1123: failure to convert to COFF: file is invalid or corrupted" occurs, it is fixed either by installing Service Pack 1, or by disabling incremental linking in the project settings (Properties -> Linker).

Fig. 4. Solving the project compilation error

We will look for the roots of the function f(x) = x^2 - 2.

First, let's test the application on "wrong" input data: on a segment with no roots, our program should give an error message.

The application window appears:

Fig. 5. Entering input data

We enter the segment boundaries 3 and 5, and an accuracy of 0.05. The program, as it should, gave an error message that there are no roots on this segment.

Fig. 6. Error "There are no roots on this segment!"

We are not going to exit yet, so at the prompt "Exit?" we enter "0".

Now let's test the application on correct input data. Let's enter a segment containing the root and an accuracy of 0.0001.

Fig. 7. Calculating the root with the required accuracy

As we can see, the required accuracy was already achieved at the 4th iteration.

To exit the application, enter "Exit?" => 1.

The secant method

To avoid calculating the derivative, Newton's method can be simplified by replacing the derivative with an approximate value calculated from the two previous points:

f'(x_n) ≈ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}).

The iterative process looks like:

x_{n+1} = x_n - f(x_n)·(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).

This is a two-step iterative process, since it uses the previous two approximations to find the next one.

The order of convergence of the secant method is lower than that of the tangent method and, in the case of a simple root, is equal to (1 + √5)/2 ≈ 1.618.

This wonderful value is called the golden ratio: φ = (1 + √5)/2.

We verify this by assuming, for convenience, that the root x* is simple and writing e_n = x_n - x* for the error of the n-th approximation.

Thus, up to infinitesimals of higher order,

e_{n+1} ≈ C·e_n·e_{n-1}, where C = f''(x*) / (2·f'(x*)).

Discarding the remainder term, we obtain a recurrence relation whose solution is naturally sought in the form e_{n+1} = K·e_n^α.

After substitution, we have: α = 1 + 1/α, i.e. α² - α - 1 = 0.

For convergence it is necessary that α be positive, therefore α = (1 + √5)/2.

Since knowledge of the derivative is not required, with the same amount of computation the secant method (despite its lower order of convergence) can achieve greater accuracy than the tangent method.

Note that near the root one has to divide by a small number, and this leads to a loss of accuracy (especially in the case of multiple roots); therefore, choosing a relatively small e, one performs the calculations until |x_n - x_{n-1}| < e and continues them while the modulus of the difference between neighboring approximations decreases.

As soon as it begins to grow, the calculations are stopped and the last iteration is not used.

This procedure for deciding when to stop the iterations is called Garwick's technique.

Parabola method

Consider a three-step method in which the approximation x_{n+1} is determined by the three previous points x_n, x_{n-1} and x_{n-2}.

To do this, we replace, similarly to the secant method, the function f with an interpolation parabola passing through the points x_n, x_{n-1} and x_{n-2}.

In Newton's form, it looks like:

f(x) ≈ f(x_n) + f[x_n, x_{n-1}]·(x - x_n) + f[x_n, x_{n-1}, x_{n-2}]·(x - x_n)·(x - x_{n-1}).

The point x_{n+1} is defined as that root of this polynomial which is closer in modulus to the point x_n.

The order of convergence of the parabola method is higher than that of the secant method (approximately 1.84) but lower than that of Newton's method.

An important difference from the previously considered methods is the fact that even if f is real for real x and the starting approximations are chosen to be real, the parabola method can lead to a complex root of the original problem.

This method is very useful for finding roots of high degree polynomials.

Simple iteration method

The problem of finding solutions to equations can be formulated as a problem of finding roots, f(x) = 0, or as a problem of finding a fixed point, x = φ(x).

Let φ map the segment [a, b] into itself and be a contraction: |φ(x) - φ(y)| ≤ q·|x - y| with some q < 1 (in particular, the fact that φ is a contraction, as is easy to see, means that φ is continuous).

By Banach's theorem, there is a unique fixed point x* = φ(x*).

It can be found as the limit of a simple iterative procedure

x_{n+1} = φ(x_n),

where the initial approximation x_0 is an arbitrary point of the segment [a, b].

If the function φ is differentiable, then a convenient contraction criterion is the number q = max |φ'(x)| on [a, b]. Indeed, by Lagrange's mean value theorem,

|φ(x) - φ(y)| = |φ'(ξ)|·|x - y| ≤ q·|x - y|.

Thus, if the derivative is less than one in absolute value, then φ is a contraction.

The condition that φ maps the segment into itself is essential: if it fails, the iterates can leave the segment and there may be no fixed point on it at all, even when the derivative is small. The rate of convergence depends on the value of q: the smaller q, the faster the convergence.

Let the root of the equation f(x) = 0 be separated on the segment [a, b], and let the first and second derivatives f'(x) and f''(x) be continuous and of constant sign for x ∈ [a, b].

Let at some step of root refinement the next approximation to the root, x_n, be obtained (chosen). Then suppose that the approximation obtained with the help of a correction h_n gives the exact value of the root:

x = x_n + h_n. (1.2.3-6)

Considering h_n to be a small value, we represent f(x_n + h_n) as a Taylor series, limiting ourselves to linear terms:

f(x_n + h_n) ≈ f(x_n) + h_n·f'(x_n). (1.2.3-7)

Considering that f(x) = f(x_n + h_n) = 0, we get f(x_n) + h_n·f'(x_n) ≈ 0.

Hence h_n ≈ -f(x_n) / f'(x_n). Substituting this value of h_n into (1.2.3-6), instead of the exact value of the root x we obtain the next approximation:

x_{n+1} = x_n - f(x_n) / f'(x_n). (1.2.3-8)

Formula (1.2.3-8) allows us to obtain the sequence of approximations x_1, x_2, x_3, ..., which, under certain conditions, converges to the exact value of the root x, i.e. x_n → x as n → ∞.

The geometric interpretation of Newton's method is as follows (Fig. 1.2.3-6). We take the right end of the segment, b, as the initial approximation x_0 and construct a tangent at the corresponding point B_0 on the graph of the function y = f(x). The point of intersection of the tangent with the x-axis is taken as the new, more accurate approximation x_1. Repeating this procedure many times allows us to obtain the sequence of approximations x_0, x_1, x_2, ..., which tends to the exact value of the root x.

The calculation formula of Newton's method (1.2.3-8) can also be obtained from a geometric construction. In the right triangle x_0 B_0 x_1, the leg x_0 x_1 = x_0 B_0 / tan α. Considering that the point B_0 lies on the graph of the function f(x), and the hypotenuse is formed by the tangent to the graph of f(x) at the point B_0, i.e. x_0 B_0 = f(x_0) and tan α = f'(x_0), we get

x_1 = x_0 - f(x_0) / f'(x_0), (1.2.3-9)

and in general

x_{n+1} = x_n - f(x_n) / f'(x_n). (1.2.3-10)

This formula coincides with (1.2.3-8) for the n-th approximation.

From Fig. 1.2.3-6 it can be seen that choosing the point a as the initial approximation can lead to the next approximation x_1 lying outside the segment on which the root x is separated. In this case, the convergence of the process is not guaranteed. In the general case, the choice of the initial approximation is made in accordance with the following rule: for the initial approximation, one should take a point x_0 ∈ [a, b] at which f(x_0)·f''(x_0) > 0, that is, the signs of the function and its second derivative coincide.

The convergence conditions for Newton's method are formulated in the following theorem.

If the root of the equation f(x) = 0 is separated on the segment [a, b], and f'(x) and f''(x) are different from zero and retain their signs for x ∈ [a, b], then, choosing as the initial approximation a point x_0 ∈ [a, b] for which f(x_0)·f''(x_0) > 0, the root of the equation f(x) = 0 can be calculated with any degree of accuracy.

The error estimate of Newton's method is determined by the following expression:

|x - x_n| ≤ M_2·(x_n - x_{n-1})^2 / (2·m_1), (1.2.3-11)

where m_1 is the smallest value of |f'(x)| on [a, b],

and M_2 is the largest value of |f''(x)| on [a, b].

The calculation process is terminated if |x_n - x_{n-1}| ≤ ε,

where ε is the specified accuracy.

In addition, the conditions |f(x_n)| ≤ ε or |f(x_n) / f'(x_n)| ≤ ε can serve as criteria for achieving the given accuracy when refining the root by Newton's method.

The scheme of the Newton method algorithm is shown in fig. 1.2.3-7.

The left side of the original equation f(x) and its derivative f’(x) in the algorithm are designed as separate software modules.

Fig. 1.2.3-7. Algorithm diagram of Newton's method

Example 1.2.3-3. Refine the roots of the equation x - ln(x+2) = 0 using Newton's method, provided that the roots of this equation are separated on the segments x_1 ∈ [-1.9; -1.1] and x_2 ∈ [-0.9; 2].

The first derivative f'(x) = 1 - 1/(x+2) retains its sign on each of the segments:

f'(x) < 0 for x ∈ [-1.9; -1.1],

f'(x) > 0 for x ∈ [-0.9; 2].

The second derivative f''(x) = 1/(x+2)^2 > 0 for any x.

Thus, the convergence conditions are satisfied. Since f''(x) > 0 over the entire range of permissible values, to refine the root x_1 we choose the initial approximation x_0 = -1.9 (since f(-1.9)·f''(-1.9) > 0). We get a sequence of approximations:

Continuing the calculations, we obtain the sequence of the first four approximations: -1.9; -1.8552; -1.8421; -1.8414. The value of the function f(x) at the point x = -1.8414 is f(-1.8414) = -0.00003.

To refine the root x_2 ∈ [-0.9; 2], we choose the initial approximation x_0 = 2 (f(2)·f''(2) > 0). Starting from x_0 = 2, we obtain the sequence of approximations: 2.0; 1.1817; 1.1462; 1.1461. The value of the function f(x) at the point x = 1.1461 is f(1.1461) = -0.00006.

Newton's method has a high rate of convergence, but at each step it requires the calculation of not only the value of the function, but also its derivative.

Chord method

The geometric interpretation of the chord method is as follows (Fig. 1.2.3-8).

Let's draw a straight line through the points A(a, f(a)) and B(b, f(b)). The next approximation x_1 is the abscissa of the point of intersection of this chord with the axis 0x. Let's construct the equation of the straight line:

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

Let's put y = 0 and find the value x = x_1 (the next approximation):

x_1 = a - f(a)·(b - a) / (f(b) - f(a)).

We repeat the calculation process to obtain the next approximation to the root, x_2:

x_2 = x_1 - f(x_1)·(b - x_1) / (f(b) - f(x_1)).

In our case (Fig. 1.2.11), the calculation formula of the chord method will look like

x_{n+1} = x_n - f(x_n)·(b - x_n) / (f(b) - f(x_n)).

This formula is valid when the point b is taken as the fixed point, and the point a acts as the initial approximation.

Consider another case (Fig. 1.2.3-9), when the fixed point is a.

The straight line equation for this case has the form

(y - f(x_0)) / (f(a) - f(x_0)) = (x - x_0) / (a - x_0).

The next approximation x_1 at y = 0:

x_1 = x_0 - f(x_0)·(x_0 - a) / (f(x_0) - f(a)).

Then the recurrent formula of the chord method for this case has the form

x_{n+1} = x_n - f(x_n)·(x_n - a) / (f(x_n) - f(a)).

It should be noted that in the chord method the end of the segment for which the condition f(x)·f''(x) > 0 is satisfied is chosen as the fixed point.

Thus, if the point a is taken as the fixed point, then x_0 = b acts as the initial approximation, and vice versa.

The sufficient conditions that ensure the calculation of the root of the equation f(x) = 0 by the chord formula are the same as for the tangent method (Newton's method), except that a fixed point is chosen instead of the initial approximation. The chord method is a modification of Newton's method. The difference is that the next approximation in Newton's method is the point of intersection of the tangent with the 0X axis, while in the chord method it is the point of intersection of the chord with the 0X axis; the approximations converge to the root from different sides.

The error estimate of the chord method is determined by the expression

|x - x_n| ≤ ((M_1 - m_1) / m_1)·|x_n - x_{n-1}|, (1.2.3-15)

where m_1 is the smallest and M_1 the largest value of |f'(x)| on the segment [a, b].

The termination condition of the iteration process in the chord method is

((M_1 - m_1) / m_1)·|x_n - x_{n-1}| ≤ ε. (1.2.3-16)

If M_1 < 2·m_1, then the error of the method can be estimated using the simpler formula |x_n - x_{n-1}| ≤ ε.

Example 1.2.3-4. Refine the root of the equation e^x - 3x = 0, separated on the segment [0; 1], with an accuracy of 10^-4.

Let's check the convergence conditions: f(0) = 1 > 0, f(1) = e - 3 < 0, and f''(x) = e^x > 0 for all x.

Therefore, a = 0 should be chosen as the fixed point, and x_0 = 1 should be taken as the initial approximation, since f(0) = 1 > 0 and f(0)·f''(0) > 0.

Newton's (tangent) method for finding roots

This is an iterative method invented by Isaac Newton around 1664. Sometimes this method is called the Newton-Raphson method, since Raphson invented the same algorithm a few years later than Newton, but his paper was published much earlier.

The task is as follows. Given the equation:

f(x) = 0.

It is required to solve this equation, more precisely, to find one of its roots (it is assumed that the root exists). It is assumed that f is continuous and differentiable on the segment [a, b].

Algorithm

The input parameters of the algorithm are, in addition to the function f(x), the initial approximation: some x_0 from which the algorithm starts.

Suppose x_i has already been calculated; we calculate x_{i+1} as follows. Draw a tangent to the graph of the function f(x) at the point x = x_i and find the point of intersection of this tangent with the x-axis. Set x_{i+1} equal to the found point, and repeat the whole process from the beginning.

It is easy to obtain the following formula:

x_{i+1} = x_i - f(x_i) / f'(x_i).

It is intuitively clear that if the function f is "good" (smooth) enough and x_i is close enough to the root, then x_{i+1} will be even closer to the desired root.

The rate of convergence is quadratic, which, relatively speaking, means that the number of exact bits in the approximate value doubles with each iteration.

Application for calculating the square root

Consider Newton's method using the example of calculating the square root.

If we substitute f(x) = x^2 - n, then after simplifying the expression we get:

x_{i+1} = (x_i + n / x_i) / 2.

The first typical version of the problem is when a fractional number is given, and you need to calculate its root with some accuracy:

double n;
cin >> n;
const double EPS = 1E-15;
double x = 1;
for (;;) {
    double nx = (x + n / x) / 2;
    if (abs(x - nx) < EPS) break;
    x = nx;
}
printf("%.15lf", x);

Another common version of the problem is when you need to calculate the integer root: for a given n, find the largest x such that x^2 ≤ n. Here we have to slightly change the stopping condition of the algorithm, since it may happen that x starts to "jump" near the answer. Therefore, we add the condition that if the value decreased at the previous step, and at the current step it tries to increase, then the algorithm must be stopped.

int n;
cin >> n;
int x = 1;
bool decreased = false;
for (;;) {
    int nx = (x + n / x) >> 1;
    if (x == nx || (nx > x && decreased)) break;
    decreased = nx < x;
    x = nx;
}
cout << x;

Finally, we give a third option, for the case of long arithmetic. Since the number can be quite large, it makes sense to pay attention to the initial approximation. Obviously, the closer it is to the root, the faster the result will be achieved. It is simple and effective to take as the initial approximation the number 2^(b/2), where b is the number of bits in the number n. Here is the Java code demonstrating this option:

BigInteger n; // input data
BigInteger a = BigInteger.ONE.shiftLeft(n.bitLength() / 2);
boolean p_dec = false;
for (;;) {
    BigInteger b = n.divide(a).add(a).shiftRight(1);
    if (a.compareTo(b) == 0 || (a.compareTo(b) < 0 && p_dec)) break;
    p_dec = a.compareTo(b) > 0;
    a = b;
}

For example, this variant of the code runs noticeably faster on large numbers than the same code without the improved choice of the initial approximation (i.e. simply starting from 1).

Toiling at school over solving equations in mathematics lessons, many students are often sure that they are wasting their time; meanwhile, such a skill will come in handy in life not only for those who decide to follow in the footsteps of Descartes, Euler or Lobachevsky.

In practice, for example, in medicine or economics, there are often situations when a specialist needs to find out when the concentration of the active substance of a particular drug reaches the required level in the patient's blood, or it is necessary to calculate the time required for a particular business to become profitable.

Most often, we are talking about solving nonlinear equations of various types. Numerical methods make it possible to do this as quickly as possible, especially with the use of computers. They are well studied and have long proven their effectiveness. Among them is Newton's tangent method, which is the subject of this article.

Formulation of the problem

In this case, there is a function g, which is given on the segment (a, b) and takes certain values ​​on it, i.e., it is possible to associate a specific number g (x) with each x belonging to (a, b).

It is required to establish all the roots of the equation on the interval between the points a and b (including the ends), i.e. the points at which the function vanishes. Obviously, these are the points of intersection of y = g(x) with OX.

In some cases, it is more convenient to replace g(x)=0 with a similar one, g 1 (x) = g 2 (x). In this case, the abscissas (x value) of the intersection points of the graphs g 1 (x) and g 2 (x) act as roots.

The solution of a nonlinear equation is also important for optimization problems, where the local extremum condition is the vanishing of the derivative of the function. In other words, such a problem can be reduced to finding the roots of the equation p(x) = 0, where p(x) is identical to g'(x).

Solution Methods

For some types of nonlinear equations, such as quadratic or simple trigonometric equations, roots can be found in fairly simple ways. In particular, every student knows the formulas with which one can easily find the values of the argument at which a quadratic trinomial vanishes.

Methods for extracting the roots of nonlinear equations are usually divided into analytical (direct) and iterative. In the first case, the desired solution has the form of a formula by which, in a certain number of arithmetic operations, one can find the values of the desired roots. Similar methods have been developed for exponential, trigonometric, logarithmic and the simplest algebraic equations. For the rest, one has to use special numerical methods. They are easy to implement with the help of computers, which make it possible to find roots with the required accuracy.

Among them is the so-called numerical method of tangents. The latter was proposed by the great scientist Isaac Newton at the end of the 17th century. In the following centuries, the method was repeatedly improved.

Localization

Numerical methods for solving complex equations that do not have analytical solutions are usually carried out in 2 stages. First you need to localize them. This operation consists in finding such segments on OX on which there is one root of the equation being solved.

Let's consider a segment [a, b]. If g(x) has no discontinuities on it and takes values of different signs at the end points, then between a and b (or at these points themselves) there is at least one root of the equation g(x) = 0. For it to be unique, it is required that g(x) be monotonic. As is known, it will have this property under the condition that g'(x) is of constant sign.

In other words, if g(x) has no discontinuities and monotonically increases or decreases, and its values at the end points have different signs, then there is one and only one root of g(x) = 0.

In this case, you should know that this criterion will not work for multiple roots of equations.

Solving the equation by dividing in half

Before considering more complex numerical methods (the tangent method and its varieties), it is worth getting acquainted with the simplest way to identify roots. It is called dichotomy and relies on intuitive root finding, based on the theorem that if g(x) is continuous on [x_0, x_1] and the condition of different signs at the ends is satisfied, then on the segment under consideration there is at least one root of g(x) = 0.

To find it, you need to divide the segment in half and designate the midpoint as x_2. Then two options are possible: either g(x_0)·g(x_2) ≤ 0, or g(x_2)·g(x_1) ≤ 0. We choose the half for which the inequality holds. We repeat the procedure described above until the length of the segment becomes less than a certain pre-selected value that determines the accuracy of determining the root of the equation.

The advantages of the method include its reliability and simplicity, and the disadvantage is the need to initially identify the points at which g (x) takes on different signs, so it cannot be used for roots with even multiplicity. In addition, it does not generalize to the case of a system of equations or when it comes to complex roots.

Example 1

Suppose we want to solve the equation g(x) = 2x^5 + x - 1 = 0. In order not to look for a suitable segment for a long time, we build a graph using, for example, the well-known Excel program. We see that it is better to take values from the interval [0, 1] as a segment for localizing the root. We can be sure that at least one root of the desired equation exists on it.

g'(x) = 10x^4 + 1 > 0, i.e. this is a monotonically increasing function, therefore there is only one root on the selected segment.

Substitute the endpoints of the segment into the equation: we get -1 at 0 and 2 at 1, respectively. At the first step, we take the point 0.5 as a candidate solution. Then g(0.5) = -0.4375, so the next segment for dividing in half is [0.5, 1]. Its midpoint is 0.75, where the value of the function is 0.2246. We take for consideration the segment [0.5, 0.75] and its midpoint, which is at the point 0.625. Here g(0.625) ≈ -0.18, i.e. negative. Based on this result, we choose the segment [0.625, 0.75] and its midpoint x = 0.6875. Then g(x) = -0.00532. If the accuracy of the solution is 0.01, then we can assume that the desired result is approximately 0.6875.

Theoretical base

Newton's tangent method is popular because of its very fast convergence.

It is based on the fact that if x_n is an approximation to a root of f(x) = 0, where f ∈ C^1, then the next approximation is taken at the point where the tangent to f(x) drawn at the point x_n vanishes.

In the tangent equation y = f(x_n) + f'(x_n)·(x - x_n), substitute x = x_{n+1} and set y to zero.

Then the tangent step looks like this:

x_{n+1} = x_n - f(x_n) / f'(x_n).

Example 2

Let's try to use the classical Newton's tangent method and find a solution to some non-linear equation that is difficult or impossible to find analytically.

Let it be required to find the roots of x^3 + 4x - 3 = 0 with some accuracy, for example 0.001. As you know, the graph of any polynomial of odd degree must cross the OX axis at least once, so there is no reason to doubt the existence of roots.

Before solving our example using the tangent method, we plot f(x) = x^3 + 4x - 3 point by point. This is very easy to do, for example, using an Excel spreadsheet. The resulting graph shows that the function intersects the OX axis and that y = x^3 + 4x - 3 monotonically increases. We can be sure that the equation x^3 + 4x - 3 = 0 has a solution and that it is unique.

Algorithm

Any solution of equations by the tangent method begins with the calculation of f'(x). We have:

f'(x) = 3x^2 + 4.

Then the second derivative is f''(x) = 6x.

Using these expressions, we can write the formula for identifying the roots of the equation by the tangent method in the form:

x_{n+1} = x_n - (x_n^3 + 4·x_n - 3) / (3·x_n^2 + 4).

Next, we must choose an initial approximation, i.e., determine which point to take as the starting point (denote it x 0) of the iterative process. We consider the ends of the segment [0, 1]. The suitable end is the one where the function and its second derivative have the same sign at x 0, i.e., f(x 0) * f''(x 0) > 0. As is easy to check, this condition is violated for x 0 = 0, but x 0 = 1 is quite suitable.

If we seek the solution by the tangent method with accuracy e, then the value x n can be considered to satisfy the requirements of the problem provided that |f(x n) / f'(x n)| < e.

At the first steps of the tangent method we have:

  • x 1 = x 0 - (x 0 3 + 4x 0 - 3) / (3x 0 2 + 4) = 1 - 0.2857 = 0.71429;
  • since the stopping condition is not met, we continue;
  • we get a new value x 2 = 0.674;
  • we notice that the ratio of the function value to its derivative at x 2 is about 0.00063, which is less than the required 0.001, so we stop the process.
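The steps above can be sketched as a loop (an illustrative Python version; the function and variable names are mine, not the article's):

```python
def newton(f, df, x, eps=0.001):
    """Newton's tangent method: iterate x - f(x)/f'(x) until |f(x)/f'(x)| < eps."""
    step = f(x) / df(x)
    while abs(step) >= eps:
        x -= step
        step = f(x) / df(x)
    return x - step  # apply the final, already-small correction

f  = lambda x: x**3 + 4*x - 3
df = lambda x: 3*x**2 + 4
root = newton(f, df, 1.0)   # start from x0 = 1, as chosen above
print(root)  # ≈ 0.6736
```

Starting from x 0 = 1, the loop reproduces the hand computation: 0.71429, then 0.674, after which the correction drops below the tolerance.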

Tangent Method in Excel

You can solve the previous example much easier and faster if you do not make calculations manually (on a calculator), but use the capabilities of a spreadsheet processor from Microsoft.

To do this, in Excel, you need to create a new page and fill its cells with the following formulas:

  • in B7 we enter the initial approximation 1;
  • in C7 we write "=POWER(B7;3) + 4*B7 - 3" (the function value);
  • in D7 we enter "=4 + 3*POWER(B7;2)" (the derivative);
  • in E7 we write "=(POWER(B7;3) - 3 + 4*B7) / (3*POWER(B7;2) + 4)" (the correction f/f');
  • in F7 we enter the expression "=B7 - E7" (the next approximation);
  • in B8 we enter the conditional formula "=IF(E7 < 0.001; "Iterations complete"; F7)".

In this particular task, the label "Iterations complete" will appear as early as cell B10, and the number to take as the solution is the one written in the cell one line above. You can also set aside a separate "stretchable" column for it, with a conditional formula that writes the result there whenever a cell in column B takes the value "Iterations complete".

Implementation in Pascal

Let's try to solve the non-linear equation x 4 - 2x - 4 = 0 by the tangent method in Pascal.

We use an auxiliary function that computes the derivative approximately: f'(x) ≈ (f(x + delta) - f(x)) / delta. As the condition for ending the iterative process we take the inequality |x 0 - x 1 | < some small number; in Pascal we write it as abs(x0 - x1) <= epsilon.

The program is remarkable in that it does not require manual calculation of the derivative.
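The Pascal listing itself did not survive in this copy of the article, so here is a sketch of the same idea, a Newton iteration with a finite-difference derivative, written in Python for brevity (function names and the starting point 2.0 are my choices):

```python
def newton_numeric(f, x0, eps=1e-6, delta=1e-6):
    """Newton's method with f'(x) approximated by (f(x + delta) - f(x)) / delta,
    stopping when abs(x0 - x1) <= eps, as in the Pascal version described above."""
    while True:
        dfdx = (f(x0 + delta) - f(x0)) / delta  # no manual derivative needed
        x1 = x0 - f(x0) / dfdx
        if abs(x0 - x1) <= eps:
            return x1
        x0 = x1

f = lambda x: x**4 - 2*x - 4
root = newton_numeric(f, 2.0)
print(root)  # ≈ 1.643
```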

Chord Method

Consider another way to find the roots of nonlinear equations. The iterative process consists in taking, as successive approximations x 1 , ..., x n to the desired root of f(x) = 0, the points where the chord drawn through the endpoints (a, f(a)) and (b, f(b)) intersects the OX axis.

For the point where the chord crosses the OX axis, the expression is written as:

x = a - f(a) * (b - a) / (f(b) - f(a))

Let the second derivative be positive for x ∈ [a, b] (the opposite case reduces to this one if we write -f(x) = 0). In this case the graph of y = f(x) is a curve convex downward, lying below the chord AB. Two cases are possible: either f(a) > 0 > f(b), or f(a) < 0 < f(b).

In the first case, we choose the end a as the fixed one, and take the point b for x 0. Then successive approximations according to the formula presented above form a sequence that decreases monotonically.

In the second case, the end b is fixed and x 0 = a. The x values obtained at each iteration step form a monotonically increasing sequence.

Thus, we can state that:

  • fixed in the method of chords is the end of the segment where the signs of the function and its second derivative coincide;
  • the approximations x m to the root lie on the side where f(x) has a sign opposite to the sign of f''(x).

Iterations can be continued until the approximations at the current and previous steps agree to within the required accuracy: abs(x m - x m-1) < e.
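A minimal sketch of the chord iteration with a fixed end, again in illustrative Python on the cubic from Example 2 (there f'' = 6x > 0 and f(1) > 0 on [0, 1], so the end b = 1 is the fixed one):

```python
def chord(f, a, b, eps=0.001):
    """Chord (false position) method on [a, b] with f(a) * f(b) < 0.
    The end b stays fixed; the approximations move up from a toward the root."""
    x_prev, x = a, a - f(a) * (b - a) / (f(b) - f(a))
    while abs(x - x_prev) >= eps:
        x_prev = x
        x = x - f(x) * (b - x) / (f(b) - f(x))  # intersect the chord with OX
    return x

f = lambda x: x**3 + 4*x - 3
root = chord(f, 0.0, 1.0)
print(root)  # ≈ 0.6736, approaching the root from below
```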

Modified method

The combined method of chords and tangents makes it possible to approach the roots of an equation from both sides at once, which refines the value at which the graph of f(x) crosses OX much faster than either method used separately.

Suppose we need to find the roots of f(x) = 0, if they exist on a segment [a, b]. You can use any of the methods described above. However, it is better to try a combination of them, which significantly improves the accuracy of the root.

We consider the case of an initial approximation satisfying the condition that the first and second derivatives have the same sign on the segment.

Under such conditions, solving the nonlinear equation by the tangent method with x 0 = b finds the root with excess (from above), while the chord method with fixed end b finds an approximation with deficiency (from below).

Formulas used:

a 1 = a - f(a) * (b - a) / (f(b) - f(a)), b 1 = b - f(b) / f'(b)

Now the desired root x must be sought in the interval (a 1 , b 1 ). At the next step, the combined method is applied to this narrower segment. Proceeding in the same way, we get recurrent formulas of the form:

a n+1 = a n - f(a n) * (b n - a n) / (f(b n) - f(a n)), b n+1 = b n - f(b n) / f'(b n)

If the first and second derivatives differ in sign, then, reasoning similarly, we obtain the following recurrent formulas for refining the root:

a n+1 = a n - f(a n) / f'(a n), b n+1 = b n - f(b n) * (b n - a n) / (f(b n) - f(a n))

As the stopping condition we take the inequality |b n+1 - a n+1 | < e. In other words, in practice one finds the solution by the two methods simultaneously, checking at each step how close the results obtained are to each other.

If the above inequality holds, then the root of the nonlinear equation on the given interval is taken to be the point exactly midway between the solutions found at that iteration step: x ≈ (a n+1 + b n+1 ) / 2.
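The two-sided refinement described above can be sketched as follows (an illustrative Python version for the case f'' > 0 with f(a) < 0 < f(b), again using the cubic from Example 2):

```python
def combined(f, df, a, b, eps=0.001):
    """Combined chords-and-tangents method for f'' > 0, f(a) < 0 < f(b):
    the chord moves a up from below, the tangent moves b down from above."""
    while abs(b - a) >= eps:
        a = a - f(a) * (b - a) / (f(b) - f(a))  # chord step: root with deficiency
        b = b - f(b) / df(b)                    # tangent step: root with excess
    return (a + b) / 2                          # midpoint of the final bracket

f  = lambda x: x**3 + 4*x - 3
df = lambda x: 3*x**2 + 4
root = combined(f, df, 0.0, 1.0)
print(root)  # ≈ 0.6736
```

Because the root stays bracketed between a and b, the quantity |b - a| directly bounds the error, which is the practical advantage over running either method alone.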

The combined method is easily implemented in the TURBO PASCAL environment. If desired, all the calculations can also be carried out with tables in Excel.

In the latter case, several columns are selected for solving the problem using chords and separately for the method proposed by Isaac Newton.

In this case, each row records the calculations of one iteration step for the two methods. Then, in a column to the left of the solution area on the active worksheet, the modulus of the difference between the two methods' values at the current step is computed. One more column can hold the result of an "IF" formula that checks whether the stopping condition has been met.

Now you know how to solve complex equations. The tangent method, as you have already seen, is implemented quite simply, both in Pascal and in Excel. Therefore, you can always establish the roots of an equation that is difficult or impossible to solve using formulas.
