Finding a conditional extremum. Extremum of a function of several variables: the concept of an extremum of a function of several variables, necessary and sufficient conditions for an extremum, the conditional extremum, and the largest and smallest values of continuous functions

Example

Find the extremum of the function z = f(x, y) provided that x and y are related by the relation φ(x, y) = 0.

Geometrically the problem means the following: on the ellipse obtained by intersecting the cylinder φ(x, y) = 0 with the plane z = f(x, y), one has to find the largest or the smallest value of the applicate z (Fig. 9). The problem can be solved this way: from the equation φ(x, y) = 0 we express one variable through the other and substitute the found value into the equation of the plane, obtaining a function of one variable x.

Thus, the problem of finding the extremum of the function z = f(x, y) provided that φ(x, y) = 0 is reduced to the problem of finding the extremum of a function of one variable on an interval.

So, the problem of finding a conditional extremum is the problem of finding an extremum of the objective function z = f(x, y) provided that the variables x and y satisfy a restriction φ(x, y) = 0, called the connection equation.

We say that a point M₀(x₀, y₀) satisfying the connection equation is a point of local conditional maximum (minimum) if there is a neighbourhood of M₀ such that for all points M(x, y) of this neighbourhood whose coordinates satisfy the connection equation the inequality f(M) ≤ f(M₀) (respectively f(M) ≥ f(M₀)) holds.

If from the connection equation one can find an expression for y, then by substituting this expression into the original function we turn the latter into a composite function of the single variable x.

The general method for solving the conditional extremum problem is the Lagrange multiplier method. Form the auxiliary function F(x, y) = f(x, y) + λφ(x, y), where λ is some number. This function is called the Lagrange function, and λ the Lagrange multiplier. The task of finding a conditional extremum is thus reduced to finding the local extremum points of the Lagrange function: to find the possible extremum points one has to solve a system of three equations in the three unknowns x, y and λ.
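To make the three-equation system concrete, here is a minimal sketch in Python/SymPy; the objective f = xy and the connection equation x + y − 2 = 0 are hypothetical illustrations, not functions from the text.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x * y                  # hypothetical objective function
phi = x + y - 2            # hypothetical connection equation, phi(x, y) = 0

F = f + lam * phi          # Lagrange function F = f + lambda*phi
# necessary conditions: F_x = 0, F_y = 0 together with the connection equation
system = [sp.diff(F, x), sp.diff(F, y), phi]

print(sp.solve(system, [x, y, lam], dict=True))
# expected: [{x: 1, y: 1, lam: -1}] -- the only candidate point
```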

Then you should use the following sufficient condition for an extremum.

THEOREM. Let a point (x₀, y₀, λ₀) be a possible extremum point of the Lagrange function, and assume that in a neighbourhood of this point the functions f and φ have continuous second-order partial derivatives. Denote by Δ the determinant composed of these second-order derivatives of the Lagrange function and the first-order derivatives of φ at the point in question. Then, if Δ ≠ 0, the point (x₀, y₀) is a conditional extremum point of the function f with the connection equation φ(x, y) = 0; moreover, for one sign of Δ it is a conditional minimum point, and for the opposite sign a conditional maximum point.

§8. Gradient and directional derivative

Let the function z = f(x, y) be defined in some (open) region. Consider any point M₀ of this region and any directed straight line (axis) l passing through this point (Fig. 1). Let M be some other point of this axis and let Δl be the length of the segment between M₀ and M, taken with a plus sign if the direction M₀M coincides with the direction of the axis l, and with a minus sign if their directions are opposite.

Let M approach M₀ indefinitely. The limit

∂z/∂l = lim (Δl → 0) [ f(M) − f(M₀) ] / Δl

is called the derivative of the function in the direction l (or along the axis l).

This derivative characterizes the "rate of change" of the function at the point M₀ in the direction l. In particular, the ordinary partial derivatives ∂z/∂x and ∂z/∂y can also be regarded as derivatives "with respect to a direction".

Let us now assume that the function has continuous partial derivatives in the region under consideration, and let the axis l form the angles α and β with the coordinate axes Ox and Oy. Under the assumptions made, the directional derivative exists and is expressed by the formula

∂z/∂l = (∂z/∂x)·cos α + (∂z/∂y)·cos β.

If a vector a is given by its coordinates (a_x, a_y), then the derivative of the function z = f(x, y) in the direction of the vector a can be calculated by the formula

∂z/∂a = (∂z/∂x)·a_x/|a| + (∂z/∂y)·a_y/|a|,  where |a| = √(a_x² + a_y²).

The vector with coordinates (∂z/∂x, ∂z/∂y) is called the gradient vector of the function z = f(x, y) at the point in question and is denoted grad z. The gradient vector indicates the direction of the fastest increase of the function at the given point.
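As a quick numerical illustration (the function f(x, y) = x²y below is a made-up example, not the one from the text), the sketch computes the gradient by finite differences and the derivative in the direction of a vector a as grad f · a/|a|:

```python
import numpy as np

def f(x, y):
    return x**2 * y        # hypothetical example function

def grad(x, y, h=1e-6):
    # central finite differences for the two partial derivatives
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return np.array([fx, fy])

def directional_derivative(x, y, a):
    a = np.asarray(a, dtype=float)
    return grad(x, y) @ (a / np.linalg.norm(a))   # a/|a| = (cos alpha, cos beta)

print(grad(1.0, 1.0))                              # ~ [2. 1.]  (2xy and x^2 at (1,1))
print(directional_derivative(1.0, 1.0, [3, 4]))    # ~ (2*3 + 1*4)/5 = 2.0
```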

Example

Given a function z = f(x, y), the point A(1, 1) and a vector a. Find: 1) grad z at the point A; 2) the derivative at the point A in the direction of the vector a.

We first compute the partial derivatives of the given function at the point A. The gradient vector of the function at this point is the vector with these partial derivatives as coordinates; it can also be written using its decomposition in the unit vectors i and j. Finally, the derivative of the function at A in the direction of the vector a is obtained from the formula above.◄

Let the function z = f(x, y) be defined in some domain D and let M₀(x₀, y₀) be an interior point of this domain.

Definition. The point M₀(x₀, y₀) is called a local maximum point of the function f(x, y) if there is a δ-neighbourhood of M₀ such that for all points M(x, y) of this neighbourhood the increment of the function is non-positive, f(x, y) − f(x₀, y₀) ≤ 0; if the increment is non-negative, the point M₀(x₀, y₀) is called a local minimum point. In other words, M₀(x₀, y₀) is a maximum or minimum point of f(x, y) if there exists a δ-neighbourhood of M₀ in which the increment of the function keeps its sign. We shall consider only strict maxima and minima, when the strict inequality holds for all points M(x, y) of some punctured δ-neighbourhood of M₀. The value of the function at a maximum point is called a maximum, and its value at a minimum point a minimum, of this function. Maximum and minimum points are called extremum points of the function, and the maxima and minima themselves are called its extrema.

Theorem 11 (necessary condition for an extremum). If a function z = f(x, y) has an extremum at a point M₀(x₀, y₀), then at this point each of its partial derivatives either vanishes or does not exist. Indeed, give the variable y the value y₀; then z = f(x, y₀) is a function of the single variable x which has an extremum at x = x₀, so its derivative with respect to x at x₀ is either zero or does not exist. In the same way one checks that the derivative with respect to y at M₀ is either zero or does not exist. Points at which the partial derivatives vanish or do not exist are called critical points of the function; points at which both partial derivatives vanish are also called stationary points.

Theorem 11 expresses only necessary conditions, which are not sufficient: a function whose partial derivatives vanish at O(0, 0) may still have no extremum there. It may equal zero at O(0, 0) while taking positive values at points (x, 0) and negative values at points (0, y) arbitrarily close to O(0, 0) (this is the case, e.g., for z = x² − y²). A point of this kind is called a minimax (saddle) point (Fig. 21).

Sufficient conditions for an extremum of a function of two variables are given by the following theorem.

Theorem 12 (sufficient conditions for an extremum in two variables). Let M₀(x₀, y₀) be a stationary point of the function f(x, y), and suppose that in some neighbourhood of M₀, including the point M₀ itself, f has continuous partial derivatives up to the second order inclusive. Put A = ∂²f/∂x², B = ∂²f/∂x∂y, C = ∂²f/∂y² (all taken at M₀) and D = AC − B². Then: 1) if D(x₀, y₀) > 0, the function has an extremum at M₀, namely a minimum if A > 0 and a maximum if A < 0; 2) if D(x₀, y₀) < 0, the function f(x, y) has no extremum at M₀; 3) if D(x₀, y₀) = 0, an extremum may or may not exist, and further investigation is required.

Proof sketch of 1) and 2). Write the second-order Taylor formula for f(x, y) at M₀. Since the first-order terms vanish at a stationary point, the sign of the increment Δf is determined by the sign of the trinomial A Δx² + 2B ΔxΔy + C Δy², i.e. by the sign of the second differential d²f. Because the second-order partial derivatives are continuous, the inequalities assumed at M₀ continue to hold in some neighbourhood of M₀. If AC − B² > 0 in this neighbourhood, the sign of the trinomial coincides with the sign of A (and with the sign of C, since for AC − B² > 0 the numbers A and C cannot have opposite signs). Hence, if A < 0 at the stationary point, then for all sufficiently small |Δx| and |Δy| the increment is negative and f has a maximum at (x₀, y₀); if A > 0, the increment is positive and f has a minimum there.

Examples. To investigate a function for an extremum, one first finds its stationary points from the necessary conditions, i.e. from the system f'_x = 0, f'_y = 0, and then applies Theorem 12 at each stationary point; when D = 0 the theorem gives no answer and the sign of the increment has to be studied directly from the definition (in this way one distinguishes the cases of an absolute minimum, of a maximum, and of the absence of an extremum).

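A small SymPy sketch of this procedure (the function below is a hypothetical example): find the stationary points from f'_x = f'_y = 0 and classify each one by the sign of D = AC − B² and of A, as in Theorem 12.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + x*y + y**2 - 3*x - 6*y       # hypothetical example function

stationary = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
for pt in stationary:
    A = sp.diff(f, x, 2).subs(pt)       # f_xx at the point
    B = sp.diff(f, x, y).subs(pt)       # f_xy at the point
    C = sp.diff(f, y, 2).subs(pt)       # f_yy at the point
    D = A * C - B**2
    if D > 0:
        kind = 'minimum' if A > 0 else 'maximum'
    elif D < 0:
        kind = 'no extremum (saddle point)'
    else:
        kind = 'D = 0: further investigation required'
    print(pt, kind)
# expected: {x: 0, y: 3} minimum   (A = 2 > 0, D = 3 > 0)
```
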
Now let a function of n independent variables, f(x₁, ..., xₙ), be differentiable at a point M₀(x₁⁰, ..., xₙ⁰). The point M₀ is called a stationary point of the function if all of its first-order partial derivatives vanish there.

Theorem 13 (sufficient conditions for an extremum). Let the function f be defined and have continuous second-order partial derivatives in some neighbourhood of a point M₀(x₁⁰, ..., xₙ⁰) which is a stationary point of f. If the quadratic form (4) — the second differential of f at M₀ — is positive definite (negative definite), then M₀ is a minimum point (respectively, a maximum point) of f. If the quadratic form (4) is indefinite (takes values of both signs), then there is no extremum at M₀. To establish whether the quadratic form (4) is positive or negative definite one can use, for example, the Sylvester criterion for positive (negative) definiteness of a quadratic form.

15.2. Conditional extrema. Until now we have been looking for local extrema of a function over its whole domain of definition, with the arguments of the function not bound by any additional conditions. Such extrema are called unconditional. However, problems of finding so-called conditional extrema are encountered frequently. Let the function z = f(x, y) be defined in a domain D and let a curve L be given in this domain; we need to find the extrema of f(x, y) only among those of its values that correspond to points of the curve L. Such extrema are called conditional extrema of the function z = f(x, y) on the curve L.

Definition. One says that at a point M₀(x₀, y₀) lying on the curve L the function f(x, y) has a conditional maximum (minimum) if the inequality f(M) ≤ f(M₀) (respectively f(M) ≥ f(M₀)) is satisfied at all points M(x, y) of the curve L belonging to some neighbourhood of M₀ and different from M₀. If the curve L is given by an equation φ(x, y) = 0, the problem of finding a conditional extremum of the function z = f(x, y) on the curve L can be formulated as follows: find the extrema of z = f(x, y) in the domain D provided that φ(x, y) = 0. Thus, when finding conditional extrema of z = f(x, y), the arguments can no longer be treated as independent variables: they are related to each other by the relation φ(x, y) = 0, which is called the connection equation.

To clarify the distinction between an unconditional and a conditional extremum, consider an example: the unconditional maximum of the function z = 1 − x² − y² (Fig. 23) equals one and is attained at the point (0, 0); it corresponds to the point M, the vertex of the paraboloid. Add the connection equation y = 1/2. Then the conditional maximum obviously equals 3/4; it is attained at the point (0, 1/2) and corresponds to the vertex M₁ of the parabola in which the paraboloid is cut by the plane y = 1/2. In the case of the unconditional maximum we look for the largest applicate among all applicates of the surface z = 1 − x² − y²; in the case of the conditional one, only among the applicates of the points of the paraboloid corresponding to the points of the straight line y = 1/2 in the xOy plane.

One method of finding a conditional extremum of a function in the presence of a connection is the following. Suppose the connection equation φ(x, y) = 0 defines y as a unique differentiable function of the argument x: y = ψ(x). Substituting ψ(x) for y in the function, we obtain a function of one argument in which the connection condition is already taken into account; its (unconditional) extremum is the desired conditional extremum.

Example. Find the extremum of the function z = x² + y² under the condition x + y = 1.

From the connection equation we find y = 1 − x. Substituting this value of y into z, we obtain a function of one argument x: z = x² + (1 − x)² = 2x² − 2x + 1. Examining it for an extremum, we get z' = 4x − 2, whence x = 1/2 is the critical point; z'' = 4 > 0, so this point gives a conditional minimum of the function z (Fig. 24).

Let us indicate another way of solving the conditional extremum problem, called the Lagrange multiplier method. Let there be a conditional extremum point of the function f(x, y) in the presence of the connection φ(x, y) = 0, and suppose the connection equation defines a unique continuously differentiable function y = ψ(x) in some neighbourhood of the point x₀. The derivative with respect to x of the function f(x, ψ(x)) at the point x₀ must be zero or, equivalently, the differential of f(x, y) at M₀(x₀, y₀) must vanish:

f'_x dx + f'_y dy = 0.   (4)

From the connection equation we have

φ'_x dx + φ'_y dy = 0.   (5)

Multiplying the last equality by a yet undetermined numerical factor λ and adding it term by term to equality (4), we get

(f'_x + λφ'_x) dx + (f'_y + λφ'_y) dy = 0.

Choosing λ so that f'_y + λφ'_y = 0 (we assume φ'_y ≠ 0) and using the arbitrariness of dx, we obtain

f'_x + λφ'_x = 0,  (6)   f'_y + λφ'_y = 0.  (7)

Equalities (6) and (7) express the necessary conditions for an unconditional extremum, at the point in question, of the function

F(x, y) = f(x, y) + λφ(x, y),

which is called the Lagrange function. Thus a conditional extremum point of the function f(x, y), if φ'_y ≠ 0 there, is necessarily a stationary point of the Lagrange function, where λ is a certain numerical coefficient. From this we obtain a rule for finding conditional extrema: in order to find points that may be conditional extremum points of a function in the presence of a connection, 1) compose the Lagrange function; 2) equate the derivatives of this function to zero and, adding the connection equation to the resulting equations, obtain a system of three equations from which the value of λ and the coordinates x, y of the possible extremum points are found.

The question of the existence and the nature of a conditional extremum is settled by studying the sign of the second differential of the Lagrange function for the obtained system of values x₀, y₀, λ, subject to the condition φ'_x dx + φ'_y dy = 0, dx² + dy² ≠ 0: if d²F < 0, then the function f(x, y) has a conditional maximum at (x₀, y₀); if d²F > 0, a conditional minimum. In particular, if at the stationary point (x₀, y₀) a certain determinant composed for the function F(x, y) is positive, then (x₀, y₀) is a conditional maximum point of f(x, y), and if it is negative, a conditional minimum point.

Example. Let us return to the conditions of the previous example: find the extremum of the function z = x² + y² under the condition x + y = 1. We solve the problem by the Lagrange multiplier method. The Lagrange function here has the form F(x, y) = x² + y² + λ(x + y − 1). To find the stationary points we compose the system

F'_x = 2x + λ = 0,  F'_y = 2y + λ = 0,  x + y = 1.

From the first two equations of the system we obtain x = y; then from the third equation (the connection equation) we find x = y = 1/2, the coordinates of the possible extremum point, with λ = −1. Here d²F = 2dx² + 2dy² > 0, so (1/2, 1/2) is a conditional minimum point of the function z = x² + y² under the condition x + y = 1.

The absence of an unconditional extremum of the Lagrange function F(x, y) does not yet mean the absence of a conditional extremum of the function f(x, y) in the presence of the connection.

Example. Find the extremum of the function z = xy under the condition y − x = 0. We compose the Lagrange function F(x, y) = xy + λ(y − x) and write out the system for determining λ and the coordinates of the possible extremum points:

F'_x = y − λ = 0,  F'_y = x + λ = 0,  y − x = 0.

From the first two equations we obtain x + y = 0, and together with the connection equation this gives x = y = λ = 0. The corresponding Lagrange function thus has the form F(x, y; 0) = xy. At the point (0, 0) the function F(x, y; 0) has no unconditional extremum; nevertheless the function z = xy does have a conditional extremum there. Indeed, with y = x we have z = x², from which it is clear that at the point (0, 0) there is a conditional minimum.

The method of Lagrange multipliers carries over to functions of any number of arguments. Suppose we look for an extremum of the function f(x₁, x₂, ..., xₙ) in the presence of the connection equations

φ₁(x₁, ..., xₙ) = 0, ..., φₘ(x₁, ..., xₙ) = 0.   (9)

Compose the Lagrange function

F = f + λ₁φ₁ + λ₂φ₂ + ... + λₘφₘ,

where λ₁, λ₂, ..., λₘ are undetermined constant factors. Equating to zero all first-order partial derivatives of F and adding the connection equations (9) to the resulting equations, we obtain a system of n + m equations from which we determine λ₁, ..., λₘ and the coordinates x₁, x₂, ..., xₙ of the possible conditional extremum points. The question of whether the points found by the Lagrange method really are conditional extremum points can often be resolved from considerations of a physical or geometric nature.

15.3. The largest and smallest values of continuous functions. Suppose we need to find the largest (smallest) value of a function z = f(x, y) continuous in some closed bounded domain D. By Theorem 3 there is a point (x₀, y₀) in this domain at which the function takes its largest (smallest) value. If the point (x₀, y₀) lies inside the domain D, the function f has a maximum (minimum) there, so in this case the point of interest is contained among the critical points of f(x, y). However, the function f(x, y) may also attain its largest (smallest) value on the boundary of the domain. Therefore, to find the largest (smallest) value taken by z = f(x, y) in a bounded closed domain D, one has to find all maxima (minima) of the function attained inside the domain, as well as its largest (smallest) value on the boundary of the domain; the largest (smallest) of all these numbers is the desired largest (smallest) value of z = f(x, y) in the domain D. Let us show how this is done for a differentiable function.

Example. Find the largest and smallest values of the function z = x² + y² in the square |x| ≤ 1, |y| ≤ 1. First we find the critical points of the function inside D: from the system z'_x = 2x = 0, z'_y = 2y = 0 we get x = y = 0, so the point O(0, 0) is the critical point of z, with z(0, 0) = 0. Now let us find the largest and smallest values of the function on the boundary Γ of the domain D. On the part of the boundary where x = 1 we have z = 1 + y², so y = 0 is a critical point and at this point the function z = 1 + y² has a minimum equal to one; at the ends of this segment of Γ, at the points (1, ±1), we have z = 2. By symmetry, the same results are obtained for the other parts of the boundary. We finally obtain: the smallest value of the function z = x² + y² in the domain D equals zero and is attained at the interior point O(0, 0) of the domain, while the largest value of this function, equal to two, is attained at four points of the boundary (Fig. 25).

Exercises

(Exercises on domains of definition, level lines and level surfaces, limits, partial derivatives and total differentials, derivatives of composite and implicit functions, tangent planes and normals, Taylor expansions, and the investigation of extrema; the concluding problems are:)

84. Find the largest and smallest values of the function z = x² − y² in a closed circle.

85. Find the largest and smallest values of the function z = x²y(4 − x − y) in the triangle bounded by the straight lines x = 0, y = 0, x + y = b.

86. Determine the dimensions of a rectangular open pool having the smallest surface area, given that its volume equals V.

87. Find the dimensions of the rectangular parallelepiped having the largest volume for a given total surface area S.

CONDITIONAL EXTREMUM

The minimum or maximum value achieved by a given function (or functional) provided that certain other functions (functionals) take values ​​from a given admissible set. If there are no conditions limiting changes in independent variables (functions) in the indicated sense, then we speak of an unconditional extremum.
The classical problem on a conditional extremum is the problem of determining the minimum of a function of several variables

f(x₁, ..., xₙ)   (1)

provided that certain other functions take prescribed values:

gᵢ(x₁, ..., xₙ) = cᵢ,  i = 1, ..., m.   (2)

In this problem the set G, to which the values of the vector function g = (g₁, ..., gₘ) entering the additional conditions (2) must belong, is the single fixed point c = (c₁, ..., cₘ) in m-dimensional Euclidean space.

If in (2), along with equality signs, inequality signs are allowed,

gᵢ(x₁, ..., xₙ) ≤ cᵢ or gᵢ(x₁, ..., xₙ) ≥ cᵢ,   (3)

this leads to a problem of nonlinear programming. In problem (1), (3) the set G of admissible values of the vector function g is a certain curvilinear polyhedron belonging to the (n − m₁)-dimensional hypersurface defined by the m₁ (m₁ < m) conditions of equality type in (3). The boundaries of this curvilinear polyhedron are constructed taking into account the m − m₁ inequalities entering (3).

A special case of problem (1), (3) on a conditional extremum is the problem of linear programming, in which all the functions f and gᵢ are linear in x₁, ..., xₙ. In a linear programming problem the set G of admissible values of the vector function g, entering the conditions restricting the range of the variables x₁, ..., xₙ, is a convex polyhedron belonging to the (n − m₁)-dimensional hyperplane defined by the m₁ conditions of equality type in (3).

Similarly, most problems of optimizing functionals that are of practical interest reduce to problems on a conditional extremum (see Isoperimetric problem, Ring problem, Lagrange problem, Mayer problem). As in mathematical programming, the main problems of the calculus of variations and of the theory of optimal control are problems on conditional extrema.

When solving problems on a conditional extremum, especially when considering theoretical questions connected with them, one uses undetermined Lagrange multipliers, which allow a problem on a conditional extremum to be reduced to a problem on an unconditional one and simplify the necessary optimality conditions. The use of Lagrange multipliers underlies most of the classical methods for solving problems on conditional extrema.

Lit.: Hadley G., Nonlinear and Dynamic Programming, trans. from English, Moscow, 1967; Bliss G. A., Lectures on the Calculus of Variations, trans. from English, Moscow, 1950; Pontryagin L. S. [et al.], The Mathematical Theory of Optimal Processes, 2nd ed., Moscow, 1969.
I. B. Vapnyarsky.

Mathematical Encyclopedia. Moscow: Soviet Encyclopedia. Ed. I. M. Vinogradov. 1977-1985.

See what "CONDITIONAL EXTREME" is in other dictionaries:

    Relative extremum, extremum of a function f (x1,..., xn + m) from n + m variables under the assumption that these variables are also subject to m connection equations (conditions): φk (x1,..., xn + m) = 0, 1≤ k ≤ m (*) (see Extremum).… …

    Let the set be open and the functions given. Let be. These equations are called constraint equations (the terminology is borrowed from mechanics). Let a function be defined on G... Wikipedia

    - (from the Latin extremum extreme) the value of a continuous function f (x), which is either a maximum or a minimum. More precisely: a function f (x) continuous at a point x0 has a maximum (minimum) at x0 if there is a neighborhood (x0 + δ, x0 δ) of this point,... ... Great Soviet Encyclopedia

    This term has other meanings, see Extremum (meanings). Extremum (lat. extremum extreme) in mathematics is the maximum or minimum value of a function on a given set. The point at which the extremum is reached... ... Wikipedia

    A function used in solving problems on the conditional extremum of functions of many variables and functionals. With the help of L. f. the necessary conditions for optimality in problems on a conditional extremum are written down. In this case, it is not necessary to express only variables... Mathematical Encyclopedia

    A mathematical discipline devoted to finding extreme (largest and smallest) values ​​of functionals of variables that depend on the choice of one or more functions. In and. is a natural development of that chapter... ... Great Soviet Encyclopedia

    Variables, with the help of which the Lagrange function is constructed when studying problems on a conditional extremum. The use of linear methods and the Lagrange function allows us to obtain the necessary optimality conditions in problems involving a conditional extremum in a uniform way... Mathematical Encyclopedia

    Calculus of variations is a branch of functional analysis that studies variations of functionals. The most typical problem in the calculus of variations is to find a function on which a given functional achieves... ... Wikipedia

    A branch of mathematics devoted to the study of methods for finding extrema of functionals that depend on the choice of one or several functions under various kinds of restrictions (phase, differential, integral, etc.) imposed on these... ... Mathematical Encyclopedia

    Calculus of variations is a branch of mathematics that studies variations of functionals. The most typical problem in the calculus of variations is to find the function at which the functional reaches an extreme value. Methods... ...Wikipedia

Books

  • Lectures on control theory. Volume 2. Optimal control, V. Boss. The classical problems of optimal control theory are considered. The presentation begins with the basic concepts of optimization in finite-dimensional spaces: conditional and unconditional extremum,...

Definition 1. A function is said to have a local maximum at a point M₀ if there is a neighbourhood of M₀ such that for any point M(x, y) of this neighbourhood the inequality f(M) < f(M₀) holds; in this case the increment of the function is negative, Δf < 0.

Definition 2. A function is said to have a local minimum at a point M₀ if there is a neighbourhood of M₀ such that for any point M(x, y) of this neighbourhood the inequality f(M) > f(M₀) holds; in this case the increment of the function is positive, Δf > 0.

Definition 3. Local minimum and local maximum points are called extremum points.

Conditional Extrema

When searching for extrema of a function of many variables, problems often arise related to the so-called conditional extremum. This concept can be explained using the example of a function of two variables.

Let a function z = f(x, y) and a line L in the plane Oxy be given. The task is to find on the line L a point P(x, y) at which the value of the function is the largest or the smallest compared with its values at points of the line L lying near P. Such points P are called conditional extremum points of the function on the line L. Unlike an ordinary extremum point, the value of the function at a conditional extremum point is compared with the values of the function not at all points of a neighbourhood of it, but only at those lying on the line L.

It is clear that a point of ordinary extremum (an unconditional extremum, as one also says) is also a conditional extremum point for any line passing through this point. The converse, of course, is not true: a conditional extremum point need not be an ordinary extremum point. Let me illustrate this with a simple example. The graph of the function is the upper hemisphere (Appendix 3, Fig. 3).

This function has a maximum at the origin, which corresponds to the vertex M of the hemisphere. If L is the line passing through the points A and B (its equation is x + y − 1 = 0), then it is geometrically clear that among the points of this line the largest value of the function is attained at the point lying midway between A and B. This is the conditional extremum (maximum) point of the function on this line; it corresponds to the point M₁ on the hemisphere, and the figure makes it clear that there can be no question of an ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values ​​of a function in a closed region, we have to find the extreme values ​​of the function on the boundary of this region, i.e. on some line, and thereby solve the conditional extremum problem.

Let us now proceed to the practical search for conditional extremum points of the function z = f(x, y) provided that the variables x and y are related by the equation φ(x, y) = 0. We shall call this relation the connection equation. If from the connection equation y can be expressed explicitly in terms of x, y = ψ(x), we obtain a function of one variable z = f(x, ψ(x)) = Φ(x).

Having found the values of x at which this function reaches an extremum and then determined the corresponding values of y from the connection equation, we obtain the desired conditional extremum points.

So, in the above example, from the connection equation x + y − 1 = 0 we have y = 1 − x, which turns z into a function of the single variable x.

It is easy to check that z reaches its maximum at x = 0.5; but then from the connection equation y = 0.5, and we get exactly the point P, found from geometric considerations.
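This check is easy to reproduce symbolically. Assuming, as in the figure described above, that the surface is the upper hemisphere z = √(1 − x² − y²) and the line is x + y − 1 = 0, substituting y = 1 − x reduces the problem to one variable:

```python
import sympy as sp

x = sp.symbols('x', real=True)
# y replaced by 1 - x from the connection equation x + y - 1 = 0
z = sp.sqrt(1 - x**2 - (1 - x)**2)

critical = sp.solve(sp.diff(z, x), x)
print(critical)                          # expected: [1/2] -> x = 0.5, hence y = 0.5 (point P)
print(z.subs(x, sp.Rational(1, 2)))      # expected: sqrt(2)/2, the conditional maximum value
```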

The problem of a conditional extremum can be solved very simply when the connection equation can be represented by parametric equations x=x(t), y=y(t). Substituting expressions for x and y into this function, we again come to the problem of finding the extremum of a function of one variable.

If the connection equation has a more complicated form and we can neither express one variable explicitly through the other nor replace the equation by parametric ones, the task of finding a conditional extremum becomes more difficult. We shall continue to assume that in the expression of the function z = f(x, y) the variable y is regarded as a function of x defined implicitly by the connection equation φ(x, y) = 0. The total derivative of the function z = f(x, y) with respect to x is then

dz/dx = f'_x(x, y) + f'_y(x, y)·y',

where the derivative y' is found by the rule for differentiating an implicit function. At the conditional extremum points this total derivative must vanish; this gives one equation relating x and y. Since x and y must also satisfy the connection equation, we obtain a system of two equations in two unknowns:

f'_x + f'_y·y' = 0,  φ(x, y) = 0.

Let us transform this system into a much more convenient one by writing the first equation as a proportion and introducing a new auxiliary unknown λ:

f'_x / φ'_x = f'_y / φ'_y = −λ

(the minus sign is taken for convenience). From these equalities it is easy to pass to the following system:

f'_x(x, y) + λφ'_x(x, y) = 0,  f'_y(x, y) + λφ'_y(x, y) = 0,   (*)

which, together with the connection equation φ(x, y) = 0, forms a system of three equations in the unknowns x, y and λ.

These equations (*) are easiest to remember by the following rule: in order to find the points that may be conditional extremum points of the function

z = f(x, y) with the connection equation φ(x, y) = 0, you need to form the auxiliary function

F(x, y) = f(x, y) + λφ(x, y),

where λ is some constant, and write down the equations for finding the extremum points of this function.

The indicated system of equations provides, as a rule, only the necessary conditions, i.e. not every pair of values ​​x and y that satisfies this system is necessarily a conditional extremum point. I will not give sufficient conditions for the points of conditional extremum; very often the specific content of the problem itself suggests what the found point is. The described technique for solving problems on a conditional extremum is called the Lagrange multiplier method.
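Alongside the Lagrange technique just described, a purely numerical route is sometimes convenient: hand the objective and the connection equation to a constrained optimizer. The sketch below (an illustration, not part of the method above) reproduces the earlier example z = x² + y² with the connection x + y = 1 using SciPy's SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[1]**2        # objective z = x^2 + y^2
phi = lambda v: v[0] + v[1] - 1        # connection equation x + y - 1 = 0

res = minimize(f, x0=np.array([0.0, 0.0]),
               method='SLSQP',
               constraints=[{'type': 'eq', 'fun': phi}])
print(res.x)   # expected: ~ [0.5, 0.5], the conditional minimum point found earlier
```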

First, let's consider the case of a function of two variables. The conditional extremum of a function $z=f(x,y)$ at the point $M_0(x_0;y_0)$ is the extremum of this function, achieved under the condition that the variables $x$ and $y$ in the vicinity of this point satisfy the connection equation $\varphi(x,y)=0$.

The name “conditional” extremum is due to the fact that an additional condition $\varphi(x,y)=0$ is imposed on the variables. If one variable can be expressed from the connection equation through another, then the problem of determining the conditional extremum is reduced to the problem of determining the usual extremum of a function of one variable. For example, if the connection equation implies $y=\psi(x)$, then substituting $y=\psi(x)$ into $z=f(x,y)$, we obtain a function of one variable $z=f\left (x,\psi(x)\right)$. In the general case, however, this method is of little use, so the introduction of a new algorithm is required.

Lagrange multiplier method for functions of two variables.

The Lagrange multiplier method consists of constructing a Lagrange function to find a conditional extremum: $F(x,y)=f(x,y)+\lambda\varphi(x,y)$ (the $\lambda$ parameter is called the Lagrange multiplier ). The necessary conditions for the extremum are specified by a system of equations from which the stationary points are determined:

$$ \left\{ \begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0. \end{aligned} \right. $$

A sufficient condition from which one can determine the nature of the extremum is the sign of $d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2$. If at a stationary point $d^2F > 0$, then the function $z=f(x,y)$ has a conditional minimum at this point, and if $d^2F < 0$, a conditional maximum.

There is another way to determine the nature of the extremum. From the connection equation we obtain $\varphi_{x}^{'}dx+\varphi_{y}^{'}dy=0$, $dy=-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx$, therefore at any stationary point we have:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=F_{xx}^{''}dx^2+2F_{xy}^{''}dx\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)+F_{yy}^{''}\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)^2=\\ =-\frac{dx^2}{\left(\varphi_{y}^{'}\right)^2}\cdot\left(-\left(\varphi_{y}^{'}\right)^2 F_{xx}^{''}+2\varphi_{x}^{'}\varphi_{y}^{'}F_{xy}^{''}-\left(\varphi_{x}^{'}\right)^2 F_{yy}^{''}\right)$$

The second factor (the expression in brackets) can be represented in the form

$$H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|,$$

whose lower-right block $\left| \begin{array}{cc} F_{xx}^{''} & F_{xy}^{''} \\ F_{xy}^{''} & F_{yy}^{''} \end{array}\right|$ is the Hessian of the Lagrange function. If $H > 0$, then $d^2F < 0$, which indicates a conditional maximum. Similarly, for $H < 0$ we have $d^2F > 0$, i.e. a conditional minimum of the function $z=f(x,y)$.

A note regarding the notation of the determinant $H$: in some presentations it is defined with a leading minus sign,

$$ H=-\left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right| $$

In this situation the rule formulated above changes as follows: if $H > 0$, the function has a conditional minimum, and if $H < 0$, a conditional maximum of the function $z=f(x,y)$. Such nuances should be taken into account when solving problems.

Algorithm for studying a function of two variables for a conditional extremum

  1. Compose the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$
  2. Solve the system $\left\{ \begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0. \end{aligned} \right.$
  3. Determine the nature of the extremum at each of the stationary points found in the previous step. To do this, use either of the following methods (a sketch follows this list):
    • Compose the determinant $H$ and find out its sign
    • Taking into account the connection equation, calculate the sign of $d^2F$
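A sketch of this algorithm in SymPy follows; the objective f = x² + y² and the connection equation xy − 1 = 0 are hypothetical illustrations, and $H$ is the bordered determinant defined above (the sketch assumes $H \neq 0$ at each stationary point).

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2              # hypothetical objective
phi = x*y - 1                # hypothetical connection equation

# 1. Lagrange function
F = f + lam * phi
# 2. stationary points of F together with the connection equation
system = [sp.diff(F, x), sp.diff(F, y), phi]
# 3. bordered determinant H (H > 0 -> conditional maximum, H < 0 -> conditional minimum)
H = sp.Matrix([[0,               sp.diff(phi, x),   sp.diff(phi, y)],
               [sp.diff(phi, x), sp.diff(F, x, 2),  sp.diff(F, x, y)],
               [sp.diff(phi, y), sp.diff(F, x, y),  sp.diff(F, y, 2)]]).det()

for sol in sp.solve(system, [x, y, lam], dict=True):
    kind = 'maximum' if H.subs(sol) > 0 else 'minimum'
    print(sol, 'conditional', kind)
# expected: conditional minima at (1, 1) and (-1, -1) with lam = -2 (H = -8 < 0)
```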

Lagrange multiplier method for functions of n variables

Let's say we have a function of $n$ variables $z=f(x_1,x_2,\ldots,x_n)$ and $m$ coupling equations ($n > m$):

$$\varphi_1(x_1,x_2,\ldots,x_n)=0; \; \varphi_2(x_1,x_2,\ldots,x_n)=0,\ldots,\varphi_m(x_1,x_2,\ldots,x_n)=0.$$

Denoting the Lagrange multipliers as $\lambda_1,\lambda_2,\ldots,\lambda_m$, we compose the Lagrange function:

$$F(x_1,x_2,\ldots,x_n,\lambda_1,\lambda_2,\ldots,\lambda_m)=f+\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_m\varphi_m$$

The necessary conditions for the presence of a conditional extremum are given by a system of equations from which the coordinates of stationary points and the values ​​of the Lagrange multipliers are found:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x_i}=0; \; (i=\overline{1,n})\\ & \varphi_j=0; \; (j=\overline{1,m}) \end{aligned} \right.$$

You can find out whether the function has a conditional minimum or a conditional maximum at a found point, as before, from the sign of $d^2F$: if at the found point $d^2F > 0$, the function has a conditional minimum, and if $d^2F < 0$, a conditional maximum. One can also proceed differently, by considering the matrix $L$ of order $m+n$ obtained by bordering the second derivatives of the Lagrange function with the first derivatives of the constraints: its upper-left $m\times m$ block is zero, its borders are formed by the derivatives $\frac{\partial\varphi_j}{\partial x_i}$, and its lower-right $n\times n$ block consists of the second derivatives $\frac{\partial^2F}{\partial x_i\partial x_j}$.

The determinant of this lower-right $n\times n$ block of second-order derivatives $\frac{\partial^2F}{\partial x_i\partial x_j}$ is the Hessian of the Lagrange function. We use the following rule:

  • If the signs of the angular minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ of the matrix $L$ coincide with the sign of $(-1)^m$, then the stationary point under study is a conditional minimum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.
  • If the signs of the angular minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ alternate, and the sign of the minor $H_{2m+1}$ coincides with the sign of the number $(-1)^{m+1}$, then the stationary point is a conditional maximum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.

Example No. 1

Find the conditional extremum of the function $z(x,y)=x+3y$ under the condition $x^2+y^2=10$.

The geometric interpretation of this problem is as follows: it is required to find the largest and smallest values ​​of the applicate of the plane $z=x+3y$ for the points of its intersection with the cylinder $x^2+y^2=10$.

It is somewhat difficult to express one variable through another from the coupling equation and substitute it into the function $z(x,y)=x+3y$, so we will use the Lagrange method.

Denoting $\varphi(x,y)=x^2+y^2-10$, we compose the Lagrange function:

$$ F(x,y)=z(x,y)+\lambda \varphi(x,y)=x+3y+\lambda(x^2+y^2-10);\\ \frac{\partial F}{\partial x}=1+2\lambda x; \; \frac{\partial F}{\partial y}=3+2\lambda y. $$

Let us write a system of equations to determine the stationary points of the Lagrange function:

$$ \left\{ \begin{aligned} & 1+2\lambda x=0;\\ & 3+2\lambda y=0;\\ & x^2+y^2-10=0. \end{aligned}\right.$$

If we assume $\lambda=0$, then the first equation becomes: $1=0$. The resulting contradiction indicates that $\lambda\neq 0$. Under the condition $\lambda\neq 0$, from the first and second equations we have: $x=-\frac{1}{2\lambda}$, $y=-\frac{3}{2\lambda}$. Substituting the obtained values into the third equation, we get:

$$ \left(-\frac{1}{2\lambda} \right)^2+\left(-\frac{3}{2\lambda} \right)^2-10=0;\\ \frac{1}{4\lambda^2}+\frac{9}{4\lambda^2}=10; \; \lambda^2=\frac{1}{4}; \; \left[ \begin{aligned} & \lambda_1=-\frac{1}{2};\\ & \lambda_2=\frac{1}{2}. \end{aligned} \right.\\ \begin{aligned} & \lambda_1=-\frac{1}{2}; \; x_1=-\frac{1}{2\lambda_1}=1; \; y_1=-\frac{3}{2\lambda_1}=3;\\ & \lambda_2=\frac{1}{2}; \; x_2=-\frac{1}{2\lambda_2}=-1; \; y_2=-\frac{3}{2\lambda_2}=-3.\end{aligned} $$

So, the system has two solutions: $x_1=1;\; y_1=3;\; \lambda_1=-\frac{1}{2}$ and $x_2=-1;\; y_2=-3;\; \lambda_2=\frac{1}{2}$. Let us find out the nature of the extremum at each stationary point: $M_1(1;3)$ and $M_2(-1;-3)$. To do this, we calculate the determinant $H$ at each point.

$$ \varphi_{x}^{'}=2x;\; \varphi_{y}^{'}=2y;\; F_{xx}^{''}=2\lambda;\; F_{xy}^{''}=0;\; F_{yy}^{''}=2\lambda.\\ H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|= \left| \begin{array}{ccc} 0 & 2x & 2y\\ 2x & 2\lambda & 0 \\ 2y & 0 & 2\lambda \end{array} \right|= 8\cdot\left| \begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array} \right| $$

At the point $M_1(1;3)$ we get: $H=8\cdot\left| \begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array} \right|= 8\cdot\left| \begin{array}{ccc} 0 & 1 & 3\\ 1 & -1/2 & 0 \\ 3 & 0 & -1/2 \end{array} \right|=40 > 0$, so at the point $M_1(1;3)$ the function $z(x,y)=x+3y$ has a conditional maximum, $z_{\max}=z(1;3)=10$.

Similarly, at the point $M_2(-1;-3)$ we find: $H=8\cdot\left| \begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array} \right|= 8\cdot\left| \begin{array}{ccc} 0 & -1 & -3\\ -1 & 1/2 & 0 \\ -3 & 0 & 1/2 \end{array} \right|=-40$. Since $H < 0$, at the point $M_2(-1;-3)$ we have a conditional minimum of the function $z(x,y)=x+3y$, namely $z_{\min}=z(-1;-3)=-10$.

I note that instead of calculating the value of the determinant $H$ at each point, it is much more convenient to expand it in general form. In order not to clutter the text with details, I will set this calculation apart as a note.

Writing the determinant $H$ in general form:

$$ H=8\cdot\left|\begin{array}{ccc}0&x&y\\x&\lambda&0\\y&0&\lambda\end{array}\right| =8\cdot\left(-\lambda y^2-\lambda x^2\right) =-8\lambda\cdot\left(y^2+x^2\right). $$

In principle, it is already obvious what sign $H$ has. Since none of the points $M_1$ or $M_2$ coincides with the origin, then $y^2+x^2>0$. Therefore, the sign of $H$ is opposite to the sign of $\lambda$. You can complete the calculations:

$$ \begin{aligned} &H(M_1)=-8\cdot\left(-\frac{1}{2}\right)\cdot\left(3^2+1^2\right)=40;\\ &H(M_2)=-8\cdot\frac{1}{2}\cdot\left((-3)^2+(-1)^2\right)=-40. \end{aligned} $$

The question about the nature of the extremum at the stationary points $M_1(1;3)$ and $M_2(-1;-3)$ can be solved without using the determinant $H$. Let's find the sign of $d^2F$ at each stationary point:

$$ d^2 F=F_(xx)^("")dx^2+2F_(xy)^("")dxdy+F_(yy)^("")dy^2=2\lambda \left( dx^2+dy^2\right) $$

Let me note that the notation $dx^2$ means exactly $dx$ raised to the second power, i.e. $\left(dx \right)^2$. Hence we have: $dx^2+dy^2>0$, therefore, with $\lambda_1=-\frac(1)(2)$ we get $d^2F< 0$. Следовательно, функция имеет в точке $M_1(1;3)$ условный максимум. Аналогично, в точке $M_2(-1;-3)$ получим условный минимум функции $z(x,y)=x+3y$. Отметим, что для определения знака $d^2F$ не пришлось учитывать связь между $dx$ и $dy$, ибо знак $d^2F$ очевиден без дополнительных преобразований. В следующем примере для определения знака $d^2F$ уже будет необходимо учесть связь между $dx$ и $dy$.

Answer: at point $(-1;-3)$ the function has a conditional minimum, $z_(\min)=-10$. At point $(1;3)$ the function has a conditional maximum, $z_(\max)=10$
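A quick numerical cross-check of this answer (a sketch): parametrize the circle $x^2+y^2=10$ as x = √10 cos t, y = √10 sin t and scan z = x + 3y.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 100001)
x, y = np.sqrt(10) * np.cos(t), np.sqrt(10) * np.sin(t)
z = x + 3 * y

i_max, i_min = z.argmax(), z.argmin()
print(z[i_max], x[i_max], y[i_max])   # expected: ~ 10,  1,  3  (conditional maximum)
print(z[i_min], x[i_min], y[i_min])   # expected: ~ -10, -1, -3 (conditional minimum)
```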

Example No. 2

Find the conditional extremum of the function $z(x,y)=3y^3+4x^2-xy$ under the condition $x+y=0$.

First method (Lagrange multiplier method)

Denoting $\varphi(x,y)=x+y$, we compose the Lagrange function: $F(x,y)=z(x,y)+\lambda \varphi(x,y)=3y^3+4x^2 -xy+\lambda(x+y)$.

$$ \frac{\partial F}{\partial x}=8x-y+\lambda; \; \frac{\partial F}{\partial y}=9y^2-x+\lambda.\\ \left\{ \begin{aligned} & 8x-y+\lambda=0;\\ & 9y^2-x+\lambda=0; \\ & x+y=0. \end{aligned} \right. $$

Having solved the system, we get: $x_1=0$, $y_1=0$, $\lambda_1=0$ and $x_2=\frac{10}{9}$, $y_2=-\frac{10}{9}$, $\lambda_2=-10$. We have two stationary points: $M_1(0;0)$ and $M_2 \left(\frac{10}{9};-\frac{10}{9} \right)$. Let us find out the nature of the extremum at each stationary point using the determinant $H$.

$$H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|= \left| \begin{array}{ccc} 0 & 1 & 1\\ 1 & 8 & -1 \\ 1 & -1 & 18y \end{array} \right|=-10-18y $$

At the point $M_1(0;0)$ we have $H=-10-18\cdot 0=-10 < 0$, therefore $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$, $z_{\min}=0$. At the point $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ we have $H=10 > 0$, therefore at this point the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.

We investigate the nature of the extremum at each point using a different method, based on the sign of $d^2F$:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=8dx^2-2dxdy+18ydy^2 $$

From the connection equation $x+y=0$ we have: $d(x+y)=0$, $dx+dy=0$, $dy=-dx$.

$$ d^2 F=8dx^2-2dxdy+18ydy^2=8dx^2-2dx(-dx)+18y(-dx)^2=(10+18y)dx^2 $$

Since $d^2F \Bigr|_{M_1}=10\, dx^2 > 0$, the point $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$. Similarly, $d^2F \Bigr|_{M_2}=-10\, dx^2 < 0$, i.e. $M_2\left(\frac{10}{9}; -\frac{10}{9} \right)$ is a conditional maximum point.

Second way

From the connection equation $x+y=0$ we get: $y=-x$. Substituting $y=-x$ into the function $z(x,y)=3y^3+4x^2-xy$, we obtain some function of the variable $x$. Let's denote this function as $u(x)$:

$$ u(x)=z(x,-x)=3\cdot(-x)^3+4x^2-x\cdot(-x)=-3x^3+5x^2. $$

Thus, we reduced the problem of finding the conditional extremum of a function of two variables to the problem of determining the extremum of a function of one variable.

$$ u_(x)^(")=-9x^2+10x;\\ -9x^2+10x=0; \; x\cdot(-9x+10)=0;\\ x_1=0; \ ; y_1=-x_1=0;\\ x_2=\frac(10)(9);\; y_2=-x_2=-\frac(10)(9). $$

We obtained points $M_1(0;0)$ and $M_2\left(\frac(10)(9); -\frac(10)(9)\right)$. Further research is known from the course of differential calculus of functions of one variable. By examining the sign of $u_(xx)^("")$ at each stationary point or checking the change in the sign of $u_(x)^(")$ at the found points, we obtain the same conclusions as when solving the first method. For example, we will check sign $u_(xx)^("")$:

$$u_(xx)^("")=-18x+10;\\ u_(xx)^("")(M_1)=10;\;u_(xx)^("")(M_2)=- 10.$$

Since $u_(xx)^("")(M_1)>0$, then $M_1$ is the minimum point of the function $u(x)$, and $u_(\min)=u(0)=0$ . Since $u_(xx)^("")(M_2)<0$, то $M_2$ - точка максимума функции $u(x)$, причём $u_{\max}=u\left(\frac{10}{9}\right)=\frac{500}{243}$.

The values ​​of the function $u(x)$ for a given connection condition coincide with the values ​​of the function $z(x,y)$, i.e. the found extrema of the function $u(x)$ are the sought conditional extrema of the function $z(x,y)$.

Answer: at the point $(0;0)$ the function has a conditional minimum, $z_(\min)=0$. At the point $\left(\frac(10)(9); -\frac(10)(9) \right)$ the function has a conditional maximum, $z_(\max)=\frac(500)(243)$.
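For completeness, the substitution route of the second way is easy to script (a sketch): with y = −x the problem reduces to the one-variable function u(x) = −3x³ + 5x².

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = -3 * x**3 + 5 * x**2               # z(x, -x) after the substitution y = -x

for c in sp.solve(sp.diff(u, x), x):   # critical points: 0 and 10/9
    u2 = sp.diff(u, x, 2).subs(x, c)
    kind = 'minimum' if u2 > 0 else 'maximum'
    print(c, kind, u.subs(x, c))
# expected: 0     minimum  0
#           10/9  maximum  500/243
```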

Let's consider another example in which we will clarify the nature of the extremum by determining the sign of $d^2F$.

Example No. 3

Find the largest and smallest values of the function $z=5xy-4$ if the variables $x$ and $y$ are positive and satisfy the connection equation $\frac{x^2}{8}+\frac{y^2}{2}-1=0$.

Let's compose the Lagrange function: $F=5xy-4+\lambda \left(\frac{x^2}{8}+\frac{y^2}{2}-1 \right)$. Let's find the stationary points of the Lagrange function:

$$ F_{x}^{'}=5y+\frac{\lambda x}{4}; \; F_{y}^{'}=5x+\lambda y.\\ \left\{ \begin{aligned} & 5y+\frac{\lambda x}{4}=0;\\ & 5x+\lambda y=0;\\ & \frac{x^2}{8}+\frac{y^2}{2}-1=0;\\ & x > 0; \; y > 0. \end{aligned} \right. $$

All further transformations are carried out taking into account $x > 0$, $y > 0$ (this is specified in the problem statement). From the second equation we express $\lambda=-\frac{5x}{y}$ and substitute the found value into the first equation: $5y-\frac{5x}{y}\cdot \frac{x}{4}=0$, $4y^2-x^2=0$, $x=2y$. Substituting $x=2y$ into the third equation, we get: $\frac{4y^2}{8}+\frac{y^2}{2}-1=0$, $y^2=1$, $y=1$.

Since $y=1$, then $x=2$, $\lambda=-10$. We determine the nature of the extremum at the point $(2;1)$ based on the sign of $d^2F$.

$$ F_(xx)^("")=\frac(\lambda)(4); \; F_(xy)^("")=5; \; F_(yy)^("")=\lambda. $$

Since $\frac(x^2)(8)+\frac(y^2)(2)-1=0$, then:

$$ d\left(\frac(x^2)(8)+\frac(y^2)(2)-1\right)=0; \; d\left(\frac(x^2)(8) \right)+d\left(\frac(y^2)(2) \right)=0; \; \frac(x)(4)dx+ydy=0; \; dy=-\frac(xdx)(4y). $$

In principle, here you can immediately substitute the coordinates of the stationary point $x=2$, $y=1$ and the parameter $\lambda=-10$, obtaining:

$$ F_(xx)^("")=\frac(-5)(2); \; F_(xy)^("")=-10; \; dy=-\frac(dx)(2).\\ d^2 F=F_(xx)^("")dx^2+2F_(xy)^("")dxdy+F_(yy)^(" ")dy^2=-\frac(5)(2)dx^2+10dx\cdot \left(-\frac(dx)(2) \right)-10\cdot \left(-\frac(dx) (2) \right)^2=\\ =-\frac(5)(2)dx^2-5dx^2-\frac(5)(2)dx^2=-10dx^2. $$

However, in other problems on a conditional extremum there may be several stationary points. In such cases, it is better to represent $d^2F$ in general form, and then substitute the coordinates of each of the found stationary points into the resulting expression:

$$ d^2 F=F_(xx)^("")dx^2+2F_(xy)^("")dxdy+F_(yy)^("")dy^2=\frac(\lambda) (4)dx^2+10\cdot dx\cdot \frac(-xdx)(4y) +\lambda\cdot \left(-\frac(xdx)(4y) \right)^2=\\ =\frac (\lambda)(4)dx^2-\frac(5x)(2y)dx^2+\lambda \cdot \frac(x^2dx^2)(16y^2)=\left(\frac(\lambda )(4)-\frac(5x)(2y)+\frac(\lambda \cdot x^2)(16y^2) \right)\cdot dx^2 $$

Substituting $x=2$, $y=1$, $\lambda=-10$, we get:

$$ d^2 F=\left(\frac{-10}{4}-\frac{10}{2}-\frac{10 \cdot 4}{16} \right)\cdot dx^2=-10dx^2. $$

Since $d^2F=-10\cdot dx^2 < 0$, the point $(2;1)$ is a conditional maximum point of the function $z=5xy-4$, with $z_{\max}=10-4=6$.

Answer: at the point $(2;1)$ the function has a conditional maximum, $z_{\max}=6$.
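A numerical cross-check of this example (a sketch): parametrize the quarter of the ellipse $\frac{x^2}{8}+\frac{y^2}{2}=1$ lying in the first quadrant and scan z = 5xy − 4.

```python
import numpy as np

t = np.linspace(1e-6, np.pi / 2 - 1e-6, 100001)
x, y = 2 * np.sqrt(2) * np.cos(t), np.sqrt(2) * np.sin(t)   # ellipse points with x > 0, y > 0
z = 5 * x * y - 4

i = z.argmax()
print(z[i], x[i], y[i])   # expected: ~ 6, 2, 1 -- the conditional maximum found above
```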

In the next part we will consider the application of the Lagrange method for functions of a larger number of variables.
