Total differential. The geometric meaning of the total differential. Tangent plane and surface normal. Total increment and total differential. Examples of problem solving.

DIFFERENTIAL CALCULUS OF FUNCTIONS OF SEVERAL VARIABLES.

Basic concepts and definitions.

When considering functions of several variables, we confine ourselves to a detailed description of functions of two variables, since all the results obtained will be valid for functions of an arbitrary number of variables.

If each pair of mutually independent numbers (x, y) from a certain set is assigned, according to some rule, one or more values of the variable z, then the variable z is called a function of two variables.

If each pair of numbers (x, y) corresponds to exactly one value of z, the function is called single-valued; if it corresponds to more than one value, the function is called multi-valued.

The domain of definition of the function z is the set of pairs (x, y) for which the function z exists.

A neighborhood of the point M0(x0, y0) of radius r is the set of all points (x, y) that satisfy the condition $\sqrt{(x - x_0)^2 + (y - y_0)^2} < r$.

The number A is called the limit of the function f(x, y) as the point M(x, y) tends to the point M0(x0, y0) if for every number ε > 0 there exists a number r > 0 such that for any point M(x, y) satisfying the condition

$$0 < \sqrt{(x - x_0)^2 + (y - y_0)^2} < r,$$

the inequality $|f(x, y) - A| < \varepsilon$ also holds.

We write: $\lim\limits_{M \to M_0} f(x, y) = A$.

Let the point M0(x0, y0) belong to the domain of definition of the function f(x, y). Then the function z = f(x, y) is called continuous at the point M0(x0, y0) if

$$\lim_{M \to M_0} f(x, y) = f(x_0, y_0), \qquad (1)$$

where the point M(x, y) tends to the point M0(x0, y0) in an arbitrary way.

If condition (1) is not satisfied at some point, this point is called a point of discontinuity of the function f(x, y). This may happen in the following cases:

1) The function z = f(x, y) is not defined at the point M0(x0, y0).

2) The limit $\lim\limits_{M \to M_0} f(x, y)$ does not exist.

3) This limit exists, but it is not equal to f(x0, y0).

Properties of functions of several variables related to their continuity.

Property. If the function f(x, y, …) is defined and continuous in a closed and bounded domain D, then there is at least one point N(x0, y0, …) in this domain such that for all other points the inequality

f(x0, y0, …) ≥ f(x, y, …)

holds, as well as a point N1(x01, y01, …) such that for all other points the inequality

f(x01, y01, …) ≤ f(x, y, …)

holds. Then f(x0, y0, …) = M is the greatest value and f(x01, y01, …) = m is the smallest value of the function f(x, y, …) in the domain D.

A function continuous in a closed and bounded domain D attains its greatest value at least once and its smallest value at least once.

Property. If the function f(x, y, …) is defined and continuous in a closed and bounded domain D, and M and m are respectively the greatest and smallest values of the function in this domain, then for any number μ ∈ [m, M] there is a point N0(x0, y0, …) in the domain such that f(x0, y0, …) = μ.

Simply put, a continuous function takes in the domain D all intermediate values ​​between M and m. A consequence of this property may be the conclusion that if the numbers M and m have different signs, then in the domain D the function vanishes at least once.

Property. A function f(x, y, …) that is continuous in a closed and bounded domain D is bounded in this domain, i.e. there exists a number K such that for all points of the domain the inequality |f(x, y, …)| ≤ K holds.

Property. If a function f(x, y, …) is defined and continuous in a closed and bounded domain D, then it is uniformly continuous in this domain, i.e. for any positive number ε there is a number δ > 0 such that for any two points (x1, y1) and (x2, y2) of the domain at a distance less than δ from each other, the inequality |f(x1, y1) − f(x2, y2)| < ε holds.

2. Partial derivatives. Partial derivatives of higher orders.

Let a function z = f(x, y) be given in some domain. Take an arbitrary point M(x, y) and give the variable x an increment Δx. Then the quantity $\Delta_x z = f(x + \Delta x, y) - f(x, y)$ is called the partial increment of the function with respect to x.

We can write

$$\frac{\Delta_x z}{\Delta x} = \frac{f(x + \Delta x, y) - f(x, y)}{\Delta x}.$$

Then the limit $\lim\limits_{\Delta x \to 0} \dfrac{f(x + \Delta x, y) - f(x, y)}{\Delta x}$, if it exists, is called the partial derivative of the function z = f(x, y) with respect to x.

Notation: $\dfrac{\partial z}{\partial x}$, $z'_x$, $f'_x(x, y)$.

The partial derivative of a function with respect to y is defined similarly.

The geometric meaning of the partial derivative (say, $\frac{\partial z}{\partial x}$) is the tangent of the angle of inclination of the tangent line drawn at the point N0(x0, y0, z0) to the section of the surface by the plane y = y0.

If the function f(x, y) is defined in some domain D, then its partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ will also be defined in the same domain or in a part of it.

We will call these derivatives partial derivatives of the first order.

The derivatives of these functions will be partial derivatives of the second order.

Continuing to differentiate the obtained equalities, we obtain partial derivatives of higher orders.

Partial derivatives of the form $\frac{\partial^2 f}{\partial x\, \partial y}$, $\frac{\partial^2 f}{\partial y\, \partial x}$, etc. are called mixed derivatives.

Theorem. If the function f(x, y) and its mixed partial derivatives $f''_{xy}$ and $f''_{yx}$ are defined and continuous at the point M(x, y) and in its neighborhood, then the relation

$$\frac{\partial^2 f}{\partial x\, \partial y} = \frac{\partial^2 f}{\partial y\, \partial x}$$

holds, i.e. mixed partial derivatives of higher orders do not depend on the order of differentiation.
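As a quick check of the theorem, the following sketch (assuming Python with the sympy library; the function is an arbitrary smooth example chosen here only for illustration) computes both mixed second-order derivatives and verifies that they coincide:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical smooth function, chosen only to illustrate the theorem
f = x**3 * sp.sin(y) + sp.exp(x * y)

f_xy = sp.diff(f, x, y)   # differentiate in x first, then in y
f_yx = sp.diff(f, y, x)   # differentiate in y first, then in x

print(sp.simplify(f_xy - f_yx))   # 0, so the mixed derivatives are equal
```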

Higher-order differentials are defined similarly:

$$d^2 z = d(dz), \quad d^3 z = d(d^2 z), \quad \ldots, \quad d^n z = d(d^{n-1} z),$$

which can be written symbolically as

$$d^n z = \left( \frac{\partial}{\partial x}\,dx + \frac{\partial}{\partial y}\,dy \right)^{n} z.$$

Here n is the symbolic power of the derivative, which is replaced by the actual power after the expression in parentheses is raised to it.

Total differential. The geometric meaning of the total differential. Tangent plane and surface normal.

The expression $\Delta z = f(x + \Delta x, y + \Delta y) - f(x, y)$ is called the total increment of the function f(x, y) at the point (x, y); for a differentiable function it can be written as

$$\Delta z = \frac{\partial z}{\partial x}\,\Delta x + \frac{\partial z}{\partial y}\,\Delta y + \alpha_1 \Delta x + \alpha_2 \Delta y,$$

where α1 and α2 are infinitesimal functions as Δx → 0 and Δy → 0, respectively.

The total differential of the function z = f(x, y) is the principal part of the increment Δz that is linear in Δx and Δy at the point (x, y):

$$dz = \frac{\partial z}{\partial x}\,dx + \frac{\partial z}{\partial y}\,dy.$$

For a function of an arbitrary number of variables u = f(x1, x2, …, xn):

$$du = \frac{\partial u}{\partial x_1}\,dx_1 + \frac{\partial u}{\partial x_2}\,dx_2 + \ldots + \frac{\partial u}{\partial x_n}\,dx_n.$$

Example 3.1. Find the total differential of the function.
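The statement of Example 3.1 did not survive in this copy, so the function itself is unknown. As a sketch of the procedure only, assuming Python with sympy and a hypothetical function $z = x^2 y + \sin(xy)$:

```python
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
# Hypothetical function, standing in for the lost statement of Example 3.1
z = x**2 * y + sp.sin(x * y)

# Total differential dz = z'_x dx + z'_y dy
dz = sp.diff(z, x) * dx + sp.diff(z, y) * dy
print(dz)   # mathematically: (2xy + y*cos(xy))*dx + (x**2 + x*cos(xy))*dy
```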

The geometric meaning of the total differential of a function of two variables f(x, y) at the point (x0, y0) is the increment of the applicate (z-coordinate) of the tangent plane to the surface when passing from the point (x0, y0) to the point (x0 + Δx, y0 + Δy).


14. The equation of the tangent plane and of the normal to the surface.

Let N and N0 be points of a given surface. Draw the straight line NN0. The plane passing through the point N0 is called the tangent plane to the surface if the angle between the secant NN0 and this plane tends to zero as the distance NN0 tends to zero.

Definition. The normal to the surface at the point N0 is the straight line passing through the point N0 perpendicular to the tangent plane to this surface.

At any given point, the surface either has a unique tangent plane or has none at all.

If the surface is given by the equation z = f(x, y), where f(x, y) is a function differentiable at the point M0(x0, y0), the tangent plane at the point N0(x0, y0, f(x0, y0)) exists and has the equation:

$$z - z_0 = f'_x(x_0, y_0)(x - x_0) + f'_y(x_0, y_0)(y - y_0).$$

The equation of the normal to the surface at this point is:

$$\frac{x - x_0}{f'_x(x_0, y_0)} = \frac{y - y_0}{f'_y(x_0, y_0)} = \frac{z - z_0}{-1}.$$

The geometric meaning of the total differential of a function of two variables f(x, y) at the point (x0, y0) is the increment of the applicate (z-coordinate) of the tangent plane to the surface when passing from the point (x0, y0) to the point (x0 + Δx, y0 + Δy).

As you can see, the geometric meaning of the total differential of a function of two variables is a spatial analogue of the geometric meaning of the differential of a function of one variable.

16. Scalar field and its characteristics. Level lines, directional derivatives, gradient of a scalar field.

If to each point M of space a scalar quantity u(M) is assigned, then a scalar field arises (for example, a temperature field or an electric potential field). If Cartesian coordinates are introduced, it is also written u = u(x, y, z) or u = u(M). The field is called plane if u = u(x, y); central (spherical) if u depends only on the distance to a fixed point, $u = u(r)$, $r = \sqrt{x^2 + y^2 + z^2}$; cylindrical if it depends only on the distance to a fixed axis, $u = u(\rho)$, $\rho = \sqrt{x^2 + y^2}$.



Level surfaces and lines: The properties of scalar fields can be visualized using level surfaces. These are surfaces in space on which u takes a constant value; their equation is u(x, y, z) = const. In a plane scalar field, the level lines are the curves on which the field takes a constant value: u(x, y) = const. In some cases level lines can degenerate into points, and level surfaces into points and curves.

Directional derivative and gradient of the scalar field:

Let u = u(x, y, z) be a scalar field and $\vec{l} = (\cos\alpha, \cos\beta, \cos\gamma)$ a unit vector. The directional derivative characterizes the change of the field in the given direction and is calculated by the formula

$$\frac{\partial u}{\partial l} = \frac{\partial u}{\partial x}\cos\alpha + \frac{\partial u}{\partial y}\cos\beta + \frac{\partial u}{\partial z}\cos\gamma.$$

The directional derivative is the scalar product of the vector $\vec{l}$ and the vector with coordinates $\left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial u}{\partial z}\right)$, which is called the gradient of the function and is denoted $\operatorname{grad} u$. Since $\frac{\partial u}{\partial l} = (\vec{l}, \operatorname{grad} u) = |\operatorname{grad} u| \cos\varphi$, where φ is the angle between $\vec{l}$ and $\operatorname{grad} u$, the vector $\operatorname{grad} u$ indicates the direction of the fastest increase of the field, and its modulus is equal to the derivative in this direction. Since the components of the gradient are partial derivatives, it is easy to obtain the following properties of the gradient:
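A minimal computational sketch of this formula (assuming Python with sympy; the field u and the direction are illustrative choices, not taken from the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = x**2 + y**2 + z**2                               # illustrative scalar field

grad_u = [sp.diff(u, v) for v in (x, y, z)]          # gradient components
cosines = (sp.Rational(1, 3), sp.Rational(2, 3), sp.Rational(2, 3))  # direction cosines of a unit vector

du_dl = sum(g * c for g, c in zip(grad_u, cosines))  # (grad u, l)
print(grad_u)               # [2*x, 2*y, 2*z]
print(sp.expand(du_dl))     # 2*x/3 + 4*y/3 + 4*z/3
```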

17. Extrema of functions of several variables. Local extremum, necessary and sufficient conditions for its existence. The greatest and smallest values of a function of several variables in a bounded closed domain.

Let the function z = ƒ(x; y) be defined in some domain D and let the point N(x0; y0) belong to D.

A point (x0; y0) is called a maximum point of the function z = ƒ(x; y) if there exists a δ-neighbourhood of the point (x0; y0) such that for every point (x; y) of this neighbourhood other than (x0; y0) the inequality ƒ(x; y) < ƒ(x0; y0) holds. A minimum point of the function is defined similarly: for all points (x; y) other than (x0; y0) from a δ-neighbourhood of the point (x0; y0), the inequality ƒ(x; y) > ƒ(x0; y0) holds. The value of the function at a maximum (minimum) point is called a maximum (minimum) of the function. The maxima and minima of a function are called its extrema. Note that, by virtue of the definition, an extremum point lies inside the domain of definition of the function; the maximum and minimum have a local character: the value of the function at the point (x0; y0) is compared with its values at points sufficiently close to (x0; y0). In the domain D the function may have several extrema or none at all.



Necessary(1) and sufficient(2) conditions for existence:

(1) If the differentiable function z = ƒ(x; y) has an extremum at the point N(x0; y0), then its partial derivatives at this point are equal to zero: ƒ'x(x0; y0) = 0, ƒ'y(x0; y0) = 0. Remark. A function can have an extremum at points where at least one of the partial derivatives does not exist. A point at which the first-order partial derivatives of the function z = ƒ(x; y) are equal to zero, i.e. ƒ'x = 0, ƒ'y = 0, is called a stationary point of the function z.

Stationary points and points where at least one partial derivative does not exist are called critical points.

(2) Let the function ƒ(x; y) have continuous partial derivatives up to the second order inclusive at a stationary point (x0; y0) and in some neighbourhood of it. Calculate at the point (x0; y0) the values A = ƒ''xx(x0; y0), B = ƒ''xy(x0; y0), C = ƒ''yy(x0; y0) and denote Δ = AC − B². Then:

1. if Δ > 0, then the function ƒ(x; y) has an extremum at the point (x0; y0): a maximum if A < 0, a minimum if A > 0;

2. if Δ < 0, then the function ƒ(x; y) has no extremum at the point (x0; y0);

3. if Δ = 0, there may or may not be an extremum at the point (x0; y0); further investigation is required.

Tangent plane and surface normal.

As defined above, if the surface is given by the equation z = f(x, y), where f(x, y) is differentiable at the point M0(x0, y0), then the tangent plane at the point N0(x0, y0, f(x0, y0)) exists and has the equation

$$z - z_0 = f'_x(x_0, y_0)(x - x_0) + f'_y(x_0, y_0)(y - y_0),$$

and the normal to the surface at this point is the line

$$\frac{x - x_0}{f'_x(x_0, y_0)} = \frac{y - y_0}{f'_y(x_0, y_0)} = \frac{z - z_0}{-1}.$$

The geometric meaning of the total differential of f(x, y) at the point (x0, y0) is the increment of the applicate (z-coordinate) of this tangent plane when passing from the point (x0, y0) to the point (x0 + Δx, y0 + Δy); this is the spatial analogue of the geometric meaning of the differential of a function of one variable.

Example. Find the equations of the tangent plane and normal to the surface

at the point M(1, 1, 1).

Tangent plane equation:

Normal Equation:
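The surface and the resulting equations in this example were lost in extraction. To illustrate how they are obtained, here is a sketch (Python with sympy) for a hypothetical surface z = xy, on which the point M(1, 1, 1) does lie; this is not necessarily the surface the author used:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Hypothetical surface standing in for the lost one; M(1, 1, 1) lies on it
f = x * y
x0, y0 = 1, 1
z0 = f.subs({x: x0, y: y0})                    # applicate of the point of tangency

fx = sp.diff(f, x).subs({x: x0, y: y0})        # f'_x(x0, y0) = 1
fy = sp.diff(f, y).subs({x: x0, y: y0})        # f'_y(x0, y0) = 1

tangent_plane = sp.Eq(z - z0, fx * (x - x0) + fy * (y - y0))
print(tangent_plane)   # tangent plane: z - 1 = (x - 1) + (y - 1), i.e. x + y - z - 1 = 0

# Normal line: (x - x0)/f'_x = (y - y0)/f'_y = (z - z0)/(-1)
print(f"(x - {x0})/{fx} = (y - {y0})/{fy} = (z - {z0})/(-1)")
```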

20.4. Approximate calculations using the total differential.

Let the function f(x, y) be differentiable at the point (x, y). Its total increment is

$$\Delta z = f(x + \Delta x, y + \Delta y) - f(x, y).$$

If we substitute into this formula the expression

$$\Delta z \approx dz = \frac{\partial z}{\partial x}\,\Delta x + \frac{\partial z}{\partial y}\,\Delta y,$$

then we get the approximate formula:

$$f(x + \Delta x, y + \Delta y) \approx f(x, y) + \frac{\partial f}{\partial x}\,\Delta x + \frac{\partial f}{\partial y}\,\Delta y.$$

Example. Calculate approximately the value of $\sqrt{1.04^{1.99} + \ln 1.02}$, based on the value of the function $u(x, y, z) = \sqrt{x^y + \ln z}$ at x = 1, y = 2, z = 1.

From the given expression we determine Δx = 1.04 − 1 = 0.04, Δy = 1.99 − 2 = −0.01, Δz = 1.02 − 1 = 0.02.

Find the value of the function: $u(1, 2, 1) = \sqrt{1^2 + \ln 1} = 1$.

We find the partial derivatives:

$$\frac{\partial u}{\partial x} = \frac{y\, x^{y-1}}{2\sqrt{x^y + \ln z}}, \qquad \frac{\partial u}{\partial y} = \frac{x^y \ln x}{2\sqrt{x^y + \ln z}}, \qquad \frac{\partial u}{\partial z} = \frac{1}{2 z \sqrt{x^y + \ln z}};$$

at the point (1, 2, 1) they are equal to 1, 0 and 1/2, respectively.

The total differential of the function u is:

$$du = 1 \cdot 0.04 + 0 \cdot (-0.01) + \frac{1}{2} \cdot 0.02 = 0.05, \qquad \text{so} \qquad \sqrt{1.04^{1.99} + \ln 1.02} \approx u(1, 2, 1) + du = 1.05.$$

The exact value of this expression is 1.049275225687319176.
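A small numerical sketch (plain Python) of the same calculation; it relies on the expression $u = \sqrt{x^y + \ln z}$ reconstructed above, which matches the increments and the quoted exact value:

```python
import math

def u(x, y, z):
    return math.sqrt(x**y + math.log(z))

x0, y0, z0 = 1.0, 2.0, 1.0
dx, dy, dz = 0.04, -0.01, 0.02

u0 = u(x0, y0, z0)                        # = 1
# Partial derivatives at (1, 2, 1), from the formulas above
ux = y0 * x0**(y0 - 1) / (2 * u0)         # = 1
uy = x0**y0 * math.log(x0) / (2 * u0)     # = 0
uz = 1 / (2 * z0 * u0)                    # = 0.5

print(u0 + ux * dx + uy * dy + uz * dz)   # 1.05, the linear approximation
print(u(1.04, 1.99, 1.02))                # about 1.0492752, cf. the exact value above
```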

20.5. Partial derivatives of higher orders.

If the function f(x, y) is defined in some domain D, then its partial derivatives will also be defined in the same domain or part of it.

We will call these derivatives partial derivatives of the first order.

The derivatives of these functions will be partial derivatives of the second order.

Continuing to differentiate the obtained equalities, we obtain partial derivatives of higher orders.

Definition. Partial derivatives of the form $\frac{\partial^2 f}{\partial x\, \partial y}$, $\frac{\partial^2 f}{\partial y\, \partial x}$, etc. are called mixed derivatives.

Theorem. If the function f(x, y) and its mixed partial derivatives $f''_{xy}$ and $f''_{yx}$ are defined and continuous at the point M(x, y) and in its neighborhood, then the relation

$$\frac{\partial^2 f}{\partial x\, \partial y} = \frac{\partial^2 f}{\partial y\, \partial x}$$

holds, i.e. mixed partial derivatives of higher orders do not depend on the order of differentiation.

Higher-order differentials are defined similarly:

$$d^2 z = d(dz), \quad \ldots, \quad d^n z = \left( \frac{\partial}{\partial x}\,dx + \frac{\partial}{\partial y}\,dy \right)^{n} z.$$

Here n is the symbolic power of the derivative, which is replaced by the actual power after the expression in parentheses is raised to it.

For a function of one variable y = f(x), the geometric meaning of the differential at the point x0 is the increment of the ordinate of the tangent drawn to the graph of the function at the point with abscissa x0 when passing to the point x0 + Δx. The differential of a function of two variables, in the same sense, is the increment of the applicate of the tangent plane drawn to the surface given by the equation z = f(x, y) at the point M0(x0, y0) when passing to the point M(x0 + Δx, y0 + Δy). We give the definition of the tangent plane to a surface:

Definition. A plane passing through a point P0 of the surface S is called the tangent plane to the surface at this point if the angle between this plane and a secant passing through the point P0 and any point P of the surface S tends to zero as the point P tends along the surface to the point P0.

Let the surface S be given by the equation z = f(x, y). Then it can be shown that this surface has a tangent plane at the point P0(x0, y0, z0) if and only if the function z = f(x, y) is differentiable at this point. In this case, the tangent plane is given by the equation:

$$z - z_0 = \frac{\partial f}{\partial x}(x_0, y_0)\,(x - x_0) + \frac{\partial f}{\partial y}(x_0, y_0)\,(y - y_0). \qquad (6)$$

§5. Directional derivative, function gradient.

The partial derivatives of a function y = f(x1, x2, …, xn) with respect to the variables x1, x2, …, xn express the rate of change of the function in the directions of the coordinate axes. For example, $\frac{\partial f}{\partial x_1}$ is the rate of change of the function with respect to x1; it is assumed that a point of the domain of definition of the function moves only parallel to the axis Ox1, while all the other coordinates stay unchanged. However, the function can also change in some other direction that does not coincide with the direction of any of the axes.

Consider a function of three variables: u= f(x, y, z).

Fix a point M0(x0, y0, z0) and some directed straight line (axis) l passing through this point. Let M(x, y, z) be an arbitrary point of this line and |M0M| the distance from M0 to M.

Δu = f(x, y, z) − f(x0, y0, z0) is the increment of the function at the point M0.

Find the ratio of the increment of the function to the length of the vector $\overrightarrow{M_0 M}$:

$$\frac{\Delta u}{\left| M_0 M \right|}.$$

Definition. The derivative of the function u = f(x, y, z) in the direction l at the point M0 is the limit of the ratio of the increment of the function to the length of the vector |M0M| as the latter tends to 0 (or, equivalently, as M approaches M0 without bound):

$$\frac{\partial u}{\partial l} = \lim_{\left| M_0 M \right| \to 0} \frac{\Delta u}{\left| M_0 M \right|}. \qquad (1)$$

This derivative characterizes the rate of change of the function at the point M 0 in the direction l.

Let the axis l (the vector $\overrightarrow{M_0 M}$) form the angles α, β, γ with the axes OX, OY, OZ respectively.

Denote x − x0 = Δx, y − y0 = Δy, z − z0 = Δz. Then the vector $\overrightarrow{M_0 M} = (x - x_0,\; y - y_0,\; z - z_0) = (\Delta x, \Delta y, \Delta z)$, and its direction cosines are:

$$\cos\alpha = \frac{\Delta x}{\left| M_0 M \right|}, \qquad \cos\beta = \frac{\Delta y}{\left| M_0 M \right|}, \qquad \cos\gamma = \frac{\Delta z}{\left| M_0 M \right|}.$$

$$\frac{\partial u}{\partial l} = \frac{\partial u}{\partial x}\cos\alpha + \frac{\partial u}{\partial y}\cos\beta + \frac{\partial u}{\partial z}\cos\gamma. \qquad (4)$$

(4) is the formula for calculating the directional derivative.

Consider a vector whose coordinates are the partial derivatives of the function u = f(x, y, z) at the point M0:

$$\operatorname{grad} u = \left( \frac{\partial u}{\partial x},\; \frac{\partial u}{\partial y},\; \frac{\partial u}{\partial z} \right)$$

is the gradient of the function u = f(x, y, z) at the point M(x, y, z).

Gradient properties:


Conclusion: the length of the gradient of the function u = f(x, y, z) is the greatest possible value of the directional derivative at the given point M(x, y, z), and the direction of the vector grad u coincides with the direction of the vector issuing from the point M along which the function changes fastest. That is, the direction of the gradient grad u is the direction of the fastest increase of the function.
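To make this conclusion concrete, a small numeric sketch (plain Python; the field and the point are illustrative choices) compares the derivative along the gradient direction with the derivative along a coordinate axis:

```python
import math

def u(x, y, z):                     # illustrative scalar field
    return x**2 * y + z**3

def grad(f, p, h=1e-6):             # numerical gradient via central differences
    x, y, z = p
    return [(f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h)]

def directional_derivative(f, p, d):
    n = math.sqrt(sum(c * c for c in d))
    g = grad(f, p)
    return sum(gi * ci / n for gi, ci in zip(g, d))   # (grad u, d/|d|)

p = (1.0, 2.0, 1.0)
g = grad(u, p)
print(g)                                          # approximately [4, 1, 3]
print(math.sqrt(sum(c * c for c in g)))           # |grad u|, about 5.099
print(directional_derivative(u, p, g))            # about 5.099: the maximal rate, along grad u
print(directional_derivative(u, p, (1, 0, 0)))    # about 4.0: smaller, along the x axis
```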

Definition
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. It is said that $f$ has a local maximum at the point $x_{0} \in E$ if there exists a neighborhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \leqslant f\left(x_{0}\right)$ holds.

The local maximum is called strict if the neighborhood $U$ can be chosen in such a way that for all $x \in U$ different from $x_{0}$ we have $f\left(x\right) < f\left(x_{0}\right)$.

Definition
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. It is said that $f$ has a local minimum at the point $x_{0} \in E$ if there exists a neighborhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \geqslant f\left(x_{0}\right)$ holds.

A local minimum is called strict if the neighborhood $U$ can be chosen so that for all $x \in U$ different from $x_{0}$ we have $f\left(x\right) > f\left(x_{0}\right)$.

A local extremum combines the concepts of a local minimum and a local maximum.

Theorem (necessary condition for an extremum of a differentiable function)
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. If $f$ is differentiable at the point $x_{0} \in E$ and has a local extremum at this point, then $$\text{d}f\left(x_{0}\right)=0.$$ The vanishing of the differential is equivalent to all partial derivatives being equal to zero, i.e. $$\frac{\partial f}{\partial x_{i}}\left(x_{0}\right)=0, \quad i = 1, \ldots, n.$$

In the one-dimensional case this is Fermat's theorem. Denote $\phi \left(t\right) = f \left(x_{0}+th\right)$, where $h$ is an arbitrary vector. The function $\phi$ is defined for values of $t$ sufficiently small in modulus. Moreover, with respect to $t$ it is differentiable, and $\phi' \left(t\right) = \text{d}f \left(x_{0}+th\right)h$.
Let $f$ have a local maximum at $x_{0}$. Then the function $\phi$ has a local maximum at $t = 0$ and, by Fermat's theorem, $\phi' \left(0\right)=0$.
So we have obtained that $df \left(x_{0}\right) = 0$, i.e. the differential of the function $f$ at the point $x_{0}$ vanishes on every vector $h$.

Definition
The points at which the differential is equal to zero, i.e. those at which all partial derivatives vanish, are called stationary points. The critical points of a function $f$ are the points at which $f$ is not differentiable or at which its differential is equal to zero. If a point is stationary, it does not yet follow that the function has an extremum at this point.

Example 1
Let $f \left(x,y\right)=x^{3}+y^{3}$. Then $\frac{\partial f}{\partial x} = 3x^{2}$, $\frac{\partial f}{\partial y} = 3y^{2}$, so $\left(0,0\right)$ is a stationary point, but the function has no extremum at this point. Indeed, $f \left(0,0\right) = 0$, but it is easy to see that in any neighborhood of the point $\left(0,0\right)$ the function takes both positive and negative values.

Example 2
The function $f \left(x,y\right) = x^{2} - y^{2}$ has the origin of coordinates as a stationary point, but it is clear that there is no extremum at this point.

Theorem (sufficient condition for an extremum).
Let a function $f$ be twice continuously differentiable on an open set $E \subset \mathbb{R}^{n}$. Let $x_{0} \in E$ be a stationary point and $$Q_{x_{0}} \left(h\right) \equiv \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}\right)h^{i}h^{j}.$$ Then

  1. if $Q_{x_{0}}$ is sign-definite, then the function $f$ has a local extremum at the point $x_{0}$, namely, a minimum if the form is positive definite and a maximum if the form is negative definite;
  2. if the quadratic form $Q_{x_{0}}$ is indefinite, then the function $f$ has no extremum at the point $x_{0}$.

Let us use the expansion by Taylor's formula (12.7, p. 292). Taking into account that the first-order partial derivatives at the point $x_{0}$ are equal to zero, we get $$f \left(x_{0}+h\right)-f \left(x_{0}\right) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}+\theta h\right)h^{i}h^{j},$$ where $0<\theta<1$. Denote $a_{ij}=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}\right)$. By Schwarz's theorem (12.6, pp. 289-290), $a_{ij}=a_{ji}$. Denote $$\alpha_{ij} \left(h\right)=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}+\theta h\right)-\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}\right).$$ By assumption, all second-order partial derivatives are continuous, and therefore $$\lim_{h \rightarrow 0} \alpha_{ij} \left(h\right)=0. \quad \left(1\right)$$ We obtain $$f \left(x_{0}+h\right)-f \left(x_{0}\right)=\frac{1}{2}\left[Q_{x_{0}} \left(h\right)+\sum_{i=1}^n \sum_{j=1}^n \alpha_{ij} \left(h\right)h^{i}h^{j}\right].$$ Denote $$\epsilon \left(h\right)=\frac{1}{|h|^{2}}\sum_{i=1}^n \sum_{j=1}^n \alpha_{ij} \left(h\right)h^{i}h^{j}.$$ Then $$|\epsilon \left(h\right)| \leq \sum_{i=1}^n \sum_{j=1}^n |\alpha_{ij} \left(h\right)|$$ and, by relation $\left(1\right)$, we have $\epsilon \left(h\right) \rightarrow 0$ as $h \rightarrow 0$. Finally we obtain $$f \left(x_{0}+h\right)-f \left(x_{0}\right)=\frac{1}{2}\left[Q_{x_{0}} \left(h\right)+\epsilon \left(h\right)|h|^{2}\right]. \quad \left(2\right)$$ Suppose that $Q_{x_{0}}$ is a positive definite form. By the lemma on positive definite quadratic forms (12.8.1, p. 295, Lemma 1), there exists a positive number $\lambda$ such that $Q_{x_{0}} \left(h\right) \geqslant \lambda|h|^{2}$ for any $h$. Therefore $$f \left(x_{0}+h\right)-f \left(x_{0}\right) \geq \frac{1}{2}|h|^{2} \left(\lambda+\epsilon \left(h\right)\right).$$ Since $\lambda>0$ and $\epsilon \left(h\right) \rightarrow 0$ as $h \rightarrow 0$, the right-hand side is positive for any vector $h$ of sufficiently small length.
Thus we have come to the conclusion that in some neighborhood of the point $x_{0}$ the inequality $f \left(x\right) > f \left(x_{0}\right)$ holds whenever $x \neq x_{0}$ (we put $x=x_{0}+h$). This means that at the point $x_{0}$ the function has a strict local minimum, and the first part of the theorem is proved.
Suppose now that $Q_{x_{0}}$ is an indefinite form. Then there are vectors $h_{1}$, $h_{2}$ such that $Q_{x_{0}} \left(h_{1}\right)=\lambda_{1}>0$, $Q_{x_{0}} \left(h_{2}\right)= \lambda_{2}<0$. In relation $\left(2\right)$ put $h=th_{1}$, $t>0$. Then we get $$f \left(x_{0}+th_{1}\right)-f \left(x_{0}\right) = \frac{1}{2} \left[ t^{2} \lambda_{1} + t^{2} |h_{1}|^{2} \epsilon \left(th_{1}\right) \right] = \frac{1}{2} t^{2} \left[ \lambda_{1} + |h_{1}|^{2} \epsilon \left(th_{1}\right) \right].$$ For sufficiently small $t>0$ the right-hand side is positive. This means that in any neighborhood of the point $x_{0}$ the function $f$ takes values $f \left(x\right)$ greater than $f \left(x_{0}\right)$.
Similarly, putting $h=th_{2}$, we obtain that in any neighborhood of the point $x_{0}$ the function $f$ takes values less than $f \left(x_{0}\right)$. This, together with the previous conclusion, means that the function $f$ does not have an extremum at the point $x_{0}$.

Let us consider a particular case of this theorem for a function $f \left(x,y\right)$ of two variables defined in some neighborhood of the point $\left(x_{0},y_{0}\right)$ and having continuous partial derivatives of the first and second orders there. Let $\left(x_{0},y_{0}\right)$ be a stationary point and let $$a_{11}= \frac{\partial^{2} f}{\partial x^{2}} \left(x_{0},y_{0}\right), \quad a_{12}=\frac{\partial^{2} f}{\partial x \partial y} \left(x_{0},y_{0}\right), \quad a_{22}=\frac{\partial^{2} f}{\partial y^{2}} \left(x_{0},y_{0}\right).$$ Then the previous theorem takes the following form.

Theorem
Let $\Delta=a_{11} \cdot a_{22} - a_{12}^2$. Then:

  1. if $\Delta>0$, then the function $f$ has a local extremum at the point $\left(x_{0},y_{0}\right)$, namely, a minimum if $a_{11}>0$ and a maximum if $a_{11}<0$;
  2. if $\Delta<0$, then there is no extremum at the point $\left(x_{0},y_{0}\right)$. As in the one-dimensional case, when $\Delta=0$ an extremum may or may not exist.

Examples of problem solving

Algorithm for finding the extrema of a function of several variables (a sympy-based check of the worked examples below is sketched after them):

  1. We find stationary points;
  2. We find the differential of the 2nd order at all stationary points
  3. Using the sufficient condition for the extremum of a function of several variables, we consider the second-order differential at each stationary point
  1. Investigate the function $f \left(x,y\right) = x^{3} + 8y^{3} - 6xy$ for extrema.
    Solution

    Find the first-order partial derivatives: $$\frac{\partial f}{\partial x}=3x^{2} - 6y; \qquad \frac{\partial f}{\partial y}=24y^{2} - 6x.$$ Compose and solve the system: $$\begin{cases}\frac{\partial f}{\partial x}= 0\\\frac{\partial f}{\partial y}= 0\end{cases} \Rightarrow \begin{cases}3x^{2} - 6y= 0\\24y^{2} - 6x = 0\end{cases} \Rightarrow \begin{cases}x^{2} - 2y= 0\\4y^{2} - x = 0 \end{cases}$$ From the 2nd equation we express $x=4y^{2}$ and substitute it into the 1st equation: $$\left(4y^{2}\right)^{2}-2y=0, \quad 16y^{4} - 2y = 0, \quad 8y^{4} - y = 0, \quad y \left(8y^{3} -1\right)=0.$$ As a result, 2 stationary points are obtained:
    1) $y=0 \Rightarrow x = 0$, $M_{1} = \left(0, 0\right)$;
    2) $8y^{3} -1=0 \Rightarrow y^{3}=\frac{1}{8} \Rightarrow y = \frac{1}{2} \Rightarrow x=4y^{2}=1$, $M_{2} = \left(1, \frac{1}{2}\right)$.
    Let us check the fulfillment of the sufficient extremum condition:
    $$\frac{\partial^{2} f}{\partial x^{2}}=6x; \qquad \frac{\partial^{2} f}{\partial x \partial y}=-6; \qquad \frac{\partial^{2} f}{\partial y^{2}}=48y.$$
    1) For the point $M_{1}= \left(0,0\right)$:
    $$A_{1}=\frac{\partial^{2} f}{\partial x^{2}} \left(0,0\right)=0; \quad B_{1}=\frac{\partial^{2} f}{\partial x \partial y} \left(0,0\right)=-6; \quad C_{1}=\frac{\partial^{2} f}{\partial y^{2}} \left(0,0\right)=0;$$
    $A_{1} \cdot C_{1} - B_{1}^{2} = -36<0$, hence there is no extremum at the point $M_{1}$.
    2) For the point $M_{2} = \left(1, \frac{1}{2}\right)$:
    $$A_{2}=\frac{\partial^{2} f}{\partial x^{2}} \left(1,\frac{1}{2}\right)=6; \quad B_{2}=\frac{\partial^{2} f}{\partial x \partial y} \left(1,\frac{1}{2}\right)=-6; \quad C_{2}=\frac{\partial^{2} f}{\partial y^{2}} \left(1,\frac{1}{2}\right)=24;$$
    $A_{2} \cdot C_{2} - B_{2}^{2} = 108>0$, so there is an extremum at $M_{2}$, and since $A_{2}>0$, it is a minimum.
    Answer: the point $M_{2} \left(1,\frac{1}{2}\right)$ is a minimum point of the function $f$.

  2. Investigate the function $f=y^{2} + 2xy - 4x - 2y - 3$ for extrema.
    Solution

    Find the stationary points: $$\frac{\partial f}{\partial x}=2y - 4; \qquad \frac{\partial f}{\partial y}=2y + 2x - 2.$$
    Compose and solve the system: $$\begin{cases}\frac{\partial f}{\partial x}= 0\\\frac{\partial f}{\partial y}= 0\end{cases} \Rightarrow \begin{cases}2y - 4= 0\\2y + 2x - 2 = 0\end{cases} \Rightarrow \begin{cases} y = 2\\y + x = 1\end{cases} \Rightarrow x = -1.$$
    $M_{0} \left(-1, 2\right)$ is a stationary point.
    Let us check the fulfillment of the sufficient extremum condition: $$A=\frac{\partial^{2} f}{\partial x^{2}} \left(-1,2\right)=0; \quad B=\frac{\partial^{2} f}{\partial x \partial y} \left(-1,2\right)=2; \quad C=\frac{\partial^{2} f}{\partial y^{2}} \left(-1,2\right)=2;$$
    $A \cdot C - B^{2} = -4<0$, hence there is no extremum at the point $M_{0}$.
    Answer: there are no extrema.
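The algorithm above can be automated; the following sketch (Python with sympy) re-checks both worked examples by solving for the stationary points and evaluating Δ = AC − B² at each of them:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def classify(f):
    fx, fy = sp.diff(f, x), sp.diff(f, y)
    for pt in sp.solve([fx, fy], [x, y], dict=True):    # candidate stationary points
        if not all(v.is_real for v in pt.values()):     # keep only real solutions
            continue
        A = sp.diff(f, x, 2).subs(pt)
        B = sp.diff(f, x, y).subs(pt)
        C = sp.diff(f, y, 2).subs(pt)
        delta = A * C - B**2
        if delta > 0:
            kind = 'minimum' if A > 0 else 'maximum'
        elif delta < 0:
            kind = 'no extremum'
        else:
            kind = 'inconclusive'
        print(pt, 'delta =', delta, '->', kind)

classify(x**3 + 8*y**3 - 6*x*y)           # (0, 0): no extremum; (1, 1/2): minimum
classify(y**2 + 2*x*y - 4*x - 2*y - 3)    # (-1, 2): no extremum
```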

