The classical definition of the probability of a random event. Classical probability. Types of random events. Basic theorems of probability theory

Probability theory is a mathematical science that studies the patterns of random events. A probabilistic experiment (trial, observation) is an experiment whose result cannot be predicted in advance. Any result (outcome) of such an experiment is called an event.

An event may be certain (reliable), meaning it always occurs as a result of the trial; impossible, meaning it obviously cannot occur in the trial; or random, meaning it may or may not happen under the conditions of the experiment.

An event that cannot be broken down into simpler events is called elementary. An event represented as a combination of several elementary events is called complex (for example, "the company did not suffer losses": the profit can be positive or equal to zero).

Two events that cannot occur simultaneously (an increase in taxes and an increase in disposable income; an increase in investment and a decrease in risk) are called incompatible.

In other words, two events are incompatible if the occurrence of one of them excludes the occurrence of the other. Otherwise they are joint (an increase in sales volume and an increase in profit). Two events are called opposite if one of them occurs if and only if the other does not occur (the product is sold; the product is not sold).

The probability of an event is a numerical measure introduced to compare events according to the degree of possibility of their occurrence.

Classical definition of probability. The probability P(A) of an event A is the ratio of the number m of equally possible elementary events (outcomes) favorable to the occurrence of A to the total number n of all possible elementary outcomes of the experiment: P(A) = m/n.

The following basic properties of probability follow from the above:

1. 0 ≤ P(A) ≤ 1.

2. The probability of a certain event A equals 1: P(A) = 1.

3. The probability of an impossible event A is 0: P(A) = 0.

4. If events A and B are incompatible, then P(A + B) = P(A) + P(B); if events A and B are joint, then P(A + B) = P(A) + P(B) − P(A∙B), where P(A∙B) is the probability of the joint occurrence of these events.

5. If A and Ā are opposite events, then P(Ā) = 1 − P(A).

If the probability of one event occurring does not change the probability of another occurring, then such events are called independent.

When directly calculating the probabilities of events characterized by a large number of outcomes, the formulas of combinatorics should be used. To study a group of events (hypotheses), the total probability formula, Bayes' formula and Bernoulli's formula (n independent trials, i.e. repetition of experiments) are applied.

In the statistical determination of the probability of event A, n is the total number of trials actually performed, in which event A occurred exactly m times. In this case, the ratio m/n is called the relative frequency Wn(A) of occurrence of event A in the n trials performed.


When the probability is determined by the method of expert assessments, n is the number of experts (specialists in the given field) interviewed about the possibility of event A occurring, and m is the number of them who claim that event A will happen.

The concept of a random event is not enough to describe the results of observations of quantities that have a numerical expression. For example, when analyzing the financial result of an enterprise, one is primarily interested in its size. Therefore, the concept of a random event is complemented by the concept of a random variable.

A random variable (RV) is a quantity that, as a result of observation (trial), takes one of a set of its possible values, not known in advance and depending on random circumstances. For each elementary event, the RV has a single value.

There are discrete and continuous RVs. For a discrete RV, the set of its possible values is finite or countable, i.e. the RV takes individual, isolated values, which can be listed in advance, with certain probabilities. For a continuous RV, the set of its possible values is infinite and uncountable, for example, all numbers of a given interval; the possible values of such an RV cannot be listed in advance and continuously fill a certain interval.

Examples of random variables: X, the daily number of customers in a supermarket (discrete RV); Y, the number of children born during a day in a certain administrative center (discrete RV); Z, the coordinate of the point of impact of an artillery shell (continuous RV).

Many RVs considered in economics have such a large number of possible values that it is more convenient to represent them as continuous RVs, for example, exchange rates, household income, etc.

To describe an RV, it is necessary to establish a correspondence between all possible values of the RV and their probabilities. This correspondence is called the distribution law of the RV. For a discrete RV it can be specified in tabular form, analytically (as a formula), or graphically, for example, in tabular form for the RV X above.

Chapter I. RANDOM EVENTS. PROBABILITY

1.1. Regularity and randomness, random variability in the exact sciences, biology and medicine

Probability theory is a branch of mathematics that studies patterns in random phenomena. A random phenomenon is a phenomenon that, when the same experiment is repeated several times, can proceed somewhat differently each time.

Obviously, there is not a single phenomenon in nature in which elements of randomness are not present to one degree or another, but in different situations we take them into account in different ways. Thus, in a number of practical problems they can be neglected and, instead of the real phenomenon, its simplified scheme, a "model", can be considered, assuming that under the given experimental conditions the phenomenon proceeds in a quite definite way. In doing so, the most important, decisive factors characterizing the phenomenon are singled out. It is this scheme for studying phenomena that is most often used in physics, technology, and mechanics; this is how the main regularity characteristic of a given phenomenon is revealed, making it possible to predict the result of an experiment from given initial conditions. The influence of random, minor factors on the result of the experiment is taken into account here as random measurement errors (we will consider the method of calculating them below).

However, the described classical scheme of the so-called exact sciences is poorly suited to many problems in which numerous, closely intertwined random factors play a noticeable (often decisive) role. Here the random nature of the phenomenon comes to the fore and can no longer be neglected. Such a phenomenon must be studied precisely from the point of view of the patterns inherent in it as a random phenomenon. In physics, examples of such phenomena are Brownian motion, radioactive decay, a number of quantum mechanical processes, etc.

The subject of study of biologists and physicians is a living organism, the origin, development and existence of which is determined by many and varied, often random external and internal factors. That is why the phenomena and events of the living world are in many ways also random in nature.

The elements of uncertainty, complexity, and multiple causation inherent in random phenomena necessitate the creation of special mathematical methods for studying these phenomena. The development of such methods and the establishment of the specific patterns inherent in random phenomena are the main tasks of probability theory. It is characteristic that these patterns hold only for mass random phenomena: the individual features of particular cases seem to cancel each other out, and the averaged result for a mass of random phenomena turns out to be no longer random but entirely regular. To a large extent, this circumstance explains the widespread use of probabilistic research methods in biology and medicine.

Let's consider the basic concepts of probability theory.

1.2. Probability of a random event

Each science that develops a general theory of any range of phenomena is based on a number of basic concepts. For example, in geometry these are the concepts of a point, a straight line; in mechanics - the concepts of force, mass, speed, etc. Basic concepts also exist in probability theory, one of them is a random event.

A random event is any phenomenon (fact) that may or may not occur as a result of experience (test).

Random events are indicated by letters A, B, C... etc. Here are some examples of random events:

A – the appearance of the coat of arms (heads) when tossing a standard coin;

B – the birth of a girl in a given family;

C – the birth of a child with a predetermined body weight;

D – the occurrence of an epidemic disease in a given region during a certain period of time, etc.

The main quantitative characteristic of a random event is its probability. Let A be some random event. The probability of the random event A is a mathematical quantity that characterizes the possibility of its occurrence. It is designated P(A).

Let's consider two main methods for determining this value.

The classical definition of the probability of a random event is usually based on the analysis of conceptual experiments (trials), the essence of which is determined by the conditions of the problem. In this case, the probability of the random event A is equal to

P(A) = m/n,      (1)

where m is the number of cases favorable to the occurrence of event A, and n is the total number of equally possible cases.

Example 1: A laboratory rat is placed in a maze in which only one of four possible paths leads to a food reward. Determine the probability of the rat choosing this path.

Solution: according to the conditions of the problem, out of four equally possible cases (n = 4), only one is favorable to event A (the rat finds food), i.e. m = 1. Then P(A) = P(rat finds food) = 1/4 = 0.25 = 25%.

Example 2. There are 20 black and 80 white balls in an urn. One ball is drawn from it at random. Determine the probability that this ball will be black.

Solution: the number of all balls in the urn is the total number of equally possible cases, i.e. n = 20 + 80 = 100, of which only 20 favor event A (drawing a black ball), i.e. m = 20. Then P(A) = P(black ball) = 20/100 = 0.2 = 20%.
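
As a quick illustration (not part of the original text), the classical formula (1) from Examples 1 and 2 can be evaluated directly; the sketch below is in Python, with illustrative names.

```python
from fractions import Fraction

def classical_probability(favorable: int, total: int) -> Fraction:
    """Classical definition (1): P(A) = m / n, returned as an exact fraction."""
    if total <= 0 or not 0 <= favorable <= total:
        raise ValueError("need n > 0 and 0 <= m <= n")
    return Fraction(favorable, total)

# Example 1: one favorable path out of four.
print(float(classical_probability(1, 4)))      # 0.25
# Example 2: 20 black balls out of 20 + 80.
print(float(classical_probability(20, 100)))   # 0.2
```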

Let us list the properties of probability that follow from its classical definition (formula (1)):

1. The probability of a random event is a dimensionless quantity.

2. The probability of a random event is always positive and less than one, i.e. 0 < P(A) < 1.

3. The probability of a certain event, i.e. an event that will definitely happen as a result of the experiment (m = n), is equal to one.

4. The probability of an impossible event (m = 0) is equal to zero.

5. The probability of any event is non-negative and does not exceed one:
0 ≤ P(A) ≤ 1.

The statistical determination of the probability of a random event is used when it is impossible to apply the classical definition (1). This is often the case in biology and medicine. In this case, the probability P(A) is determined by summarizing the results of actually conducted series of trials (experiments).

Let us introduce the concept of the relative frequency of occurrence of a random event. Let a series of N experiments be carried out (the number N can be chosen in advance); the event of interest A occurred in M of them (M ≤ N). The ratio of the number of experiments M in which this event occurred to the total number N of experiments performed is called the relative frequency of occurrence of the random event A in this series of experiments, P*(A):

P*(A) = M/N.

It has been experimentally established that if a series of trials (experiments) is carried out under the same conditions and the number N in each of them is sufficiently large, then the relative frequency exhibits the property of stability: it changes little from series to series, approaching some constant value as the number of experiments increases. This value is taken as the statistical probability of the random event A:

P(A) = lim P*(A) = lim M/N as N → ∞.      (2)

Thus, the statistical probability P(A) of a random event A is the limit to which the relative frequency of occurrence of this event tends as the number of trials increases without bound (N → ∞).

Approximately the statistical probability of a random event is equal to the relative frequency of occurrence of this event over a large number of trials:

P(A) ≈ P*(A) = M/N (for large N).      (3)

For example, in coin-tossing experiments the relative frequency of the coat of arms (heads) turned out to be 0.5016 for 12,000 tosses and 0.5005 for 24,000 tosses. In accordance with formula (1):

P(coat of arms) = 1/2 = 0.5 = 50%.
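
The stability of the relative frequency can also be seen in a simple simulation; the following sketch (Python, assuming a fair coin, so it is an illustration rather than a reproduction of the historical experiments) shows P*(A) approaching 0.5 as N grows.

```python
import random

def heads_frequency(n_tosses: int, seed: int = 0) -> float:
    """Relative frequency P*(A) = M / N of heads in n_tosses simulated fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (100, 1_000, 12_000, 24_000):
    print(n, round(heads_frequency(n), 4))  # the frequency settles near 0.5 as N increases
```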

Example. During a medical examination of 500 people, a lung tumor was diagnosed in 5 of them. Determine the relative frequency and the probability of this disease.

Solution: according to the conditions of the problem, M = 5 and N = 500, so the relative frequency is P*(lung tumor) = M/N = 5/500 = 0.01; since N is sufficiently large, we can assume with good accuracy that the probability of having a lung tumor is equal to the relative frequency of this event:

P(lung tumor) = P*(lung tumor) = 0.01 = 1%.

The previously listed properties of the probability of a random event are preserved in the statistical determination of this quantity.

1.3. Types of random events. Basic theorems of probability theory

All random events can be divided into:

– incompatible;

– independent;

– dependent.

Each type of event has its own characteristics and theorems of probability theory.

1.3.1. Incompatible random events. Probability addition theorem

Random events (A, B, C, D, ...) are called incompatible if the occurrence of one of them excludes the occurrence of the other events in the same trial.

Example 1. A coin is tossed. When it falls, the appearance of the "coat of arms" (heads) excludes the appearance of "tails" (the side bearing the coin's denomination). The events "heads fell" and "tails fell" are incompatible.

Example 2. A student receiving a grade of "2", "3", "4", or "5" on a single exam are incompatible events, since receiving one of these grades excludes the others on the same exam.

For incompatible random events, the probability addition theorem holds: the probability of the occurrence of one (no matter which) of several incompatible events A1, A2, A3, ..., Ak is equal to the sum of their probabilities:

P(A1 or A2...or Ak) = P(A1) + P(A2) + …+ P(Ak). (4)

Example 3. An urn contains 50 balls: 20 white, 20 black and 10 red. Find the probability of drawing a white ball (event A) or a red ball (event B) when one ball is taken from the urn at random.

Solution: P(A or B) = P(A) + P(B);

P(A) = 20/50 = 0.4;

P(B) = 10/50 = 0.2;

P(A or B) = P(white or red ball) = 0.4 + 0.2 = 0.6 = 60%.

Example 4. There are 40 children in a class. Of these, 8 boys (event A) and 10 girls (event B) are aged 7 to 7.5 years. Find the probability that a child chosen at random from the class is of this age.

Solution: P(A) = 8/40 = 0.2; P(B) = 10/40 = 0.25.

P(A or B) = 0.2 + 0.25 = 0.45 = 45%.
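
A small numerical check of the addition theorem (4) for Examples 3 and 4; the helper below is a sketch in Python and assumes the events passed to it are incompatible.

```python
def prob_any_incompatible(*probabilities: float) -> float:
    """Addition theorem (4): P(A1 or ... or Ak) for pairwise incompatible events."""
    total = sum(probabilities)
    if not 0.0 <= total <= 1.0 + 1e-12:
        raise ValueError("incompatible events cannot have probabilities summing above 1")
    return total

# Example 3: a white (20/50) or a red (10/50) ball from the urn.
print(prob_any_incompatible(20 / 50, 10 / 50))   # 0.6
# Example 4: a boy (8/40) or a girl (10/40) aged 7 to 7.5 years.
print(prob_any_incompatible(8 / 40, 10 / 40))    # 0.45
```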

The following is an important concept, the complete group of events: several incompatible events form a complete group if, as a result of each trial, one and only one event of this group (and no other) must occur.

Example 5. A shooter fires a shot at a target. One of the following events will definitely happen: hitting the "ten", the "nine", the "eight", ..., the "one", or a miss. These 11 incompatible events form a complete group.

Example 6 . In a university exam, a student can receive one of the following four grades: 2, 3, 4 or 5. These four incompatible events also form a complete group.

If incompatible events A1, A2...Ak form a complete group, then the sum of the probabilities of these events is always equal to one:

P(A1) + P(A2) + … + P(Ak) = 1.      (5)

This statement is often used in solving many applied problems.

If two events are the only possible ones and are incompatible, they are called opposite and denoted A and Ā. Such events form a complete group, so the sum of their probabilities is always equal to one:

P(A) + P(Ā) = 1.      (6)

Example 7. Let P(A) be the probability of death from a certain disease; it is known and equal to 2%. Then the probability of a successful outcome of this disease is 98% (P(Ā) = 1 − P(A) = 0.98), since P(A) + P(Ā) = 1.

1.3.2. Independent random events. Probability multiplication theorem

Random events are called independent if the occurrence of one of them does not in any way affect the probability of the occurrence of other events.

Example 1 . If there are two or more urns with colored balls, then drawing any ball from one urn will not affect the probability of drawing other balls from the remaining urns.

For independent events, the probability multiplication theorem holds: the probability of the joint (simultaneous) occurrence of several independent random events is equal to the product of their probabilities:

P(A1 and A2 and A3 ... and Ak) = P(A1) ∙P(A2) ∙…∙P(Ak). (7)

The joint (simultaneous) occurrence of events means that the events A1, and A2, and A3, ..., and Ak all occur.

Example 2. There are two urns. One contains 2 black and 8 white balls, the other contains 6 black and 4 white balls. Let event A be drawing a white ball at random from the first urn, and B from the second. What is the probability of simultaneously drawing a white ball from each of these urns, i.e. what is P(A and B)?

Solution: the probability of drawing a white ball from the first urn is P(A) = 8/10 = 0.8, and from the second P(B) = 4/10 = 0.4. The probability of simultaneously drawing a white ball from both urns is
P(A and B) = P(A) ∙ P(B) = 0.8 ∙ 0.4 = 0.32 = 32%.

Example 3: A diet low in iodine causes enlargement of the thyroid gland in 60% of animals in a large population. For the experiment, 4 enlarged glands are needed. Find the probability that 4 randomly selected animals will have an enlarged thyroid gland.

Solution: the random event A is the random selection of an animal with an enlarged thyroid gland. According to the conditions of the problem, the probability of this event is P(A) = 0.6 = 60%. Then the probability of the joint occurrence of four independent events (the random selection of 4 animals with an enlarged thyroid gland) is equal to:

P(A1 and A2 and A3 and A4) = 0.6 ∙ 0.6 ∙ 0.6 ∙ 0.6 = (0.6)⁴ ≈ 0.13 = 13%.
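
The multiplication theorem (7) for Example 3 can be checked both directly and by simulation; the sketch below (Python, with an illustrative seed and trial count) assumes the four selections are independent, as in the text.

```python
import random

p_enlarged = 0.6                      # P(A) from the problem statement
print(round(p_enlarged ** 4, 4))      # multiplication theorem (7): 0.1296, i.e. about 13%

# Optional Monte Carlo check of the same probability.
rng = random.Random(1)
trials = 100_000
hits = sum(all(rng.random() < p_enlarged for _ in range(4)) for _ in range(trials))
print(round(hits / trials, 3))        # close to 0.13
```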

1.3.3. Dependent events. Probability multiplication theorem for dependent events

Random events A and B are called dependent if the occurrence of one of them, for example A, changes the probability of the occurrence of the other event, B. Therefore, two probability values are used for dependent events: unconditional and conditional probabilities.

If A and B are dependent events, then the probability of event B occurring first (i.e. before event A) is called the unconditional probability of this event and is designated P(B). The probability of event B occurring given that event A has already happened is called the conditional probability of event B and is designated P(B/A) or PA(B).

The unconditional probability P(A) and the conditional probability P(A/B) are defined similarly for event A.

Probability multiplication theorem for two dependent events: the probability of the simultaneous occurrence of two dependent events A and B is equal to the product of the unconditional probability of the first event by the conditional probability of the second:

P(A and B) = P(A) ∙ P(B/A),      (8)

if event A occurs first, or

P(A and B) = P(B) ∙ P(A/B),      (9)

if event B occurs first.

Example 1. There are 3 black balls and 7 white balls in an urn. Find the probability that 2 white balls will be drawn from this urn one after the other (without the first ball being returned to the urn).

Solution: the probability of drawing the first white ball (event A) is 7/10. After it is removed, 9 balls remain in the urn, 6 of which are white. Then the probability of the second white ball appearing (event B) is P(B/A) = 6/9, and the probability of drawing two white balls in a row is

P(A and B) = P(A) ∙ P(B/A) = (7/10) ∙ (6/9) ≈ 0.47 = 47%.
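
For the dependent-events Example 1, formula (8) can be verified exactly and by simulating draws without replacement; a sketch in Python, with an illustrative seed and trial count.

```python
from fractions import Fraction
import random

# Multiplication theorem (8): P(A and B) = P(A) * P(B/A).
p_two_white = Fraction(7, 10) * Fraction(6, 9)
print(p_two_white, float(p_two_white))      # 7/15 ≈ 0.4667, i.e. about 47%

# Monte Carlo check: draw two balls without replacement.
rng = random.Random(2)
urn = ["white"] * 7 + ["black"] * 3
trials = 100_000
hits = sum(rng.sample(urn, 2) == ["white", "white"] for _ in range(trials))
print(round(hits / trials, 3))              # close to 0.47
```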

The multiplication theorem for dependent events can be generalized to any number of events. In particular, for three events related to one another:

P(A and B and C) = P(A) ∙ P(B/A) ∙ P(C/AB).      (10)

Example 2. An outbreak of an infectious disease occurred in two kindergartens, each attended by 100 children. The proportions of sick children are 1/5 and 1/4, respectively; in the first institution 70% of the sick are children under 3 years of age, and in the second 60%. One child is selected at random. Determine the probability that:

1) the selected child attends the first kindergarten (event A) and is sick (event B);

2) the selected child attends the second kindergarten (event C), is sick (event D), and is older than 3 years (event E).

Solution. 1) The required probability is

P(A and B) = P(A) ∙ P(B/A) = (100/200) ∙ (1/5) = 0.1 = 10%.

2) The required probability is

P(C and D and E) = P(C) ∙ P(D/C) ∙ P(E/CD) = (100/200) ∙ (1/4) ∙ (2/5) = 0.05 = 5%.
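
Part 2 of Example 2 is a direct application of the chain rule (10); the short sketch below (Python) redoes the arithmetic with exact fractions.

```python
from fractions import Fraction

p_c = Fraction(100, 200)        # P(C): the child attends the second kindergarten
p_d_given_c = Fraction(1, 4)    # P(D/C): the child is sick
p_e_given_cd = Fraction(2, 5)   # P(E/CD): older than 3 years (1 - 0.6)
print(float(p_c * p_d_given_c * p_e_given_cd))  # 0.05 = 5%
```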

1.4. Bayes formula

If the probability of the joint occurrence of dependent events A and B does not depend on the order in which they occur, then P(A and B) = P(A) ∙ P(B/A) = P(B) ∙ P(A/B). In this case, the conditional probability of one of the events can be found by knowing the probabilities of both events and the conditional probability of the other:

P(B/A) = P(B) ∙ P(A/B) / P(A).      (11)

A generalization of this formula for the case of many events is the Bayes formula.

Let " n» incompatible random events Н1, Н2, …, Нn, form a complete group of events. The probabilities of these events are R(H1), R(H2), …, R(Nn) are known and since they form a complete group, then = 1.

Some random event A is related to the events H1, H2, …, Hn, and the conditional probabilities of event A occurring with each of the events Hi are known, i.e. P(A/H1), P(A/H2), …, P(A/Hn) are known. In this case, the sum of the conditional probabilities P(A/Hi) need not be equal to unity.

Then the conditional probability of the event Hi occurring when event A is realized (i.e. given that event A has happened) is determined by Bayes' formula:

P(Hi/A) = P(Hi) ∙ P(A/Hi) / [P(H1) ∙ P(A/H1) + P(H2) ∙ P(A/H2) + … + P(Hn) ∙ P(A/Hn)].      (12)

Moreover, for these conditional probabilities P(H1/A) + P(H2/A) + … + P(Hn/A) = 1.

Bayes' formula has found wide application not only in mathematics but also in medicine. For example, it is used to calculate the probabilities of certain diseases. Thus, if H1, …, Hn are the diagnoses assumed for a given patient, A is some sign related to them (a symptom, a certain indicator of a blood or urine test, a detail of an X-ray image, etc.), and the conditional probabilities P(A/Hi) of this sign appearing with each diagnosis Hi (i = 1, 2, 3, …, n) are known in advance, then Bayes' formula (12) allows us to calculate the conditional probabilities of the diseases (diagnoses) P(Hi/A) after it has been established that the characteristic sign A is present in the patient.

Example 1. During the initial examination of a patient, 3 diagnoses H1, H2, H3 are assumed. Their probabilities, according to the doctor, are distributed as follows: P(H1) = 0.5; P(H2) = 0.17; P(H3) = 0.33. The first diagnosis therefore seems tentatively the most likely. To clarify it, a blood test is prescribed, for example, in which an increase in ESR may be observed (event A). It is known in advance (from research results) that the probabilities of an increase in ESR for the suspected diseases are:

P(A/H1) = 0.1; P(A/H2) = 0.2; P(A/H3) = 0.9.

The analysis recorded an increase in ESR (event A occurred). Then calculation using Bayes' formula (12) gives the probabilities of the suspected diseases given the increased ESR value: P(H1/A) = 0.13; P(H2/A) = 0.09; P(H3/A) = 0.78. These figures show that, taking the laboratory data into account, the most realistic is not the first but the third diagnosis, whose probability has now turned out to be quite high.
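
Example 1 is easy to reproduce programmatically; the sketch below (Python, illustrative function name) implements Bayes' formula (12) and returns the posterior probabilities quoted in the text.

```python
def bayes_posteriors(priors, likelihoods):
    """Bayes' formula (12): P(Hi/A) = P(Hi)*P(A/Hi) / sum over j of P(Hj)*P(A/Hj)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(joint)
    return [j / evidence for j in joint]

priors = [0.5, 0.17, 0.33]        # P(H1), P(H2), P(H3) before the blood test
likelihoods = [0.1, 0.2, 0.9]     # P(A/H1), P(A/H2), P(A/H3) for an increased ESR
print([round(p, 2) for p in bayes_posteriors(priors, likelihoods)])  # [0.13, 0.09, 0.78]
```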

The above example is the simplest illustration of how, using the Bayes formula, you can formalize a doctor’s logic when making a diagnosis and, thanks to this, create computer diagnostic methods.

Example 2. Determine the probability that estimates the degree of risk of perinatal child mortality in women with an anatomically narrow pelvis.

Solution: let H1 be the event of a successful birth. According to clinical reports, P(H1) = 0.975 = 97.5%; then, if H2 is the event of perinatal mortality, P(H2) = 1 − 0.975 = 0.025 = 2.5%.

Let us denote by A the fact that a woman in labor has a narrow pelvis. From the studies carried out we know: a) P(A/H1), the probability of a narrow pelvis during a favorable birth, P(A/H1) = 0.029; b) P(A/H2), the probability of a narrow pelvis with perinatal mortality, P(A/H2) = 0.051. Then the desired probability of perinatal mortality for a woman in labor with a narrow pelvis is calculated using Bayes' formula (12) and is equal to:

P(H2/A) = P(H2) ∙ P(A/H2) / [P(H1) ∙ P(A/H1) + P(H2) ∙ P(A/H2)] ≈ 4.4%.

Thus, the risk of perinatal mortality in an anatomically narrow pelvis is significantly higher (almost twice) than the average risk (4.4% versus 2.5%).

Such calculations, usually performed using a computer, underlie methods for forming groups of patients at increased risk associated with the presence of a particular aggravating factor.

The Bayes formula is very useful for assessing many other medical and biological situations, which will become obvious when solving the problems given in the manual.

1.5. About random events with probabilities close to 0 or 1

When solving many practical problems, one has to deal with events whose probability is very small, that is, close to zero. Based on experience regarding such events, the following principle has been adopted. If a random event has a very low probability, then we can practically assume that it will not occur in a single test, in other words, the possibility of its occurrence can be neglected. The answer to the question of how small this probability should be is determined by the essence of the problems being solved and by how important the result of the prediction is for us. For example, if the probability that a parachute will not open during a jump is 0.01, then the use of such parachutes is unacceptable. However, the same 0.01 probability that a long-distance train will arrive late makes us almost certain that it will arrive on time.

A sufficiently small probability at which (in a given specific problem) an event can be considered practically impossible is called the level of significance. In practice, the significance level is usually taken equal to 0.01 (the one percent significance level) or 0.05 (the five percent significance level); much less often it is taken equal to 0.001.

The introduction of a significance level allows us to state that if some event A is practically impossible, then the opposite event Ā is practically certain, i.e. for it P(Ā) ≈ 1.

Chapter II. RANDOM VARIABLES

2.1. Random variables, their types

In mathematics, a quantity is the common name for various quantitative characteristics of objects and phenomena. Length, area, temperature, pressure, etc. are examples of different quantities.

A quantity that takes on different numerical values under the influence of random circumstances is called a random variable. Examples of random variables: the number of patients at a doctor's appointment; the exact dimensions of people's internal organs, etc.

We distinguish between discrete and continuous random variables.

A random variable is called discrete if it takes only certain distinct values that can be identified and enumerated.

Examples of a discrete random variable are:

– the number of students in the auditorium: it can only be a non-negative integer 0, 1, 2, 3, 4, …, 20, …;

– the number that appears on the top face when throwing a die: it can only take integer values from 1 to 6;

– the relative frequency of hitting the target in 10 shots: its possible values are 0; 0.1; 0.2; 0.3; …; 1;

– the number of events occurring over the same periods of time: heart rate, number of ambulance calls per hour, number of operations per month with a fatal outcome, etc.

A random variable is called continuous if it can take any value within a certain interval, which sometimes has clearly defined boundaries and sometimes does not. Continuous random variables include, for example, the body weight and height of adults, body weight and brain volume, the quantitative content of enzymes in healthy people, the size of blood cells, the pH of blood, etc.

The concept of a random variable plays a decisive role in modern probability theory, which has developed special techniques for passing from random events to random variables.

If a random variable depends on time, then we can talk about a random process.

2.2. Distribution law of a discrete random variable

To give a full description of a discrete random variable, it is necessary to indicate all its possible values and their probabilities.

The correspondence between the possible values ​​of a discrete random variable and their probabilities is called the distribution law of this variable.

Let us denote the possible values of the random variable X by xi and the corresponding probabilities by pi. Then the distribution law of a discrete random variable can be specified in three ways: in the form of a table, a graph, or a formula.

A table called a distribution series lists all possible values of the discrete random variable X and the corresponding probabilities P(X):

X:      x1    x2    …    xn
P(X):   p1    p2    …    pn

In this case, the sum of all probabilities pi must be equal to unity (the normalization condition):

Σpi = p1 + p2 + … + pn = 1.      (13)

Graphically, the law is represented by a broken line, usually called the distribution polygon (Fig. 1). Here all possible values xi of the random variable are plotted along the horizontal axis, and the corresponding probabilities pi along the vertical axis.

Analytically, the law is expressed by a formula. For example, if the probability of hitting the target with one shot is p, then the probability of hitting the target exactly once in n shots is given by the formula P(n) = n ∙ q^(n−1) ∙ p, where q = 1 − p is the probability of a miss with one shot.
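
As an illustration of a tabular distribution law and the normalization condition (13), the sketch below (Python; the target-shooting setting and the parameter values are assumed for the example, not taken from the text) tabulates the number of hits in n independent shots; its k = 1 entry matches the formula P = n ∙ q^(n−1) ∙ p quoted above.

```python
from math import comb

def hit_count_distribution(n: int, p: float) -> dict[int, float]:
    """Tabular distribution law for the number of hits X in n independent shots."""
    q = 1.0 - p
    return {k: comb(n, k) * p**k * q**(n - k) for k in range(n + 1)}

series = hit_count_distribution(n=4, p=0.3)
for k, p_k in series.items():
    print(f"X = {k}: P = {p_k:.4f}")
print("sum =", round(sum(series.values()), 10))   # normalization condition (13): 1.0
print(4 * 0.7**3 * 0.3)                           # n*q**(n-1)*p equals the k = 1 entry
```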

2.3. Distribution law of a continuous random variable. Probability density function

For continuous random variables, it is impossible to specify the distribution law in the forms given above, since such a variable has an uncountable set of possible values that completely fill a certain interval. Therefore, it is impossible to create a table listing all its possible values, or to construct a distribution polygon. In addition, the probability of any particular value is very small (close to 0). At the same time, different regions (intervals) of possible values of a continuous random variable are not equally probable. Thus, in this case too a certain distribution law operates, although not in the previous sense.

Consider a continuous random variable X whose possible values completely fill a certain interval (a, b). The law of probability distribution of such a variable should allow us to find the probability that its value falls into any specified interval (x1, x2) lying inside (a, b), Fig. 2.

This probability is denoted P(x1 < X < x2), or P(x1 ≤ X ≤ x2).

Let us first consider a very small range of values of X, from x to (x + dx); see Fig. 2. The small probability dP that the random variable X takes some value from the interval (x, x + dx) is proportional to the length dx of this interval: dP ~ dx, or, introducing a proportionality coefficient f, which may itself depend on x, we get:

dP = f(x) ∙ dx.      (14)

The function f(x) introduced here is called the probability distribution density of the random variable X or, in short, the probability density (distribution density). Relation (14) is a differential relation whose integration gives the probability that the value of X falls in the interval (x1, x2):

P(x1 < X < x2) = ∫[x1, x2] f(x) dx.      (15)

Graphically, the probability P(x1 < X < x2) is equal to the area of the curvilinear trapezoid bounded by the x-axis, the curve f(x), and the straight lines x = x1 and x = x2 (Fig. 3). This follows from the geometric meaning of the definite integral (15). The curve f(x) is called the distribution curve.

From (15) it follows that if the function f(x) is known then, by changing the limits of integration, we can find the probability for any interval of interest. Therefore, specifying the function f(x) completely determines the distribution law of a continuous random variable.

For the probability density f(x), the normalization condition must be satisfied in the form:

∫[a, b] f(x) dx = 1,      (16)

if it is known that all values of X lie in the interval (a, b), or in the form:

∫[−∞, +∞] f(x) dx = 1,      (17)

if the boundaries of the interval of values of X are not definitely known. The normalization conditions for the probability density, (16) or (17), are a consequence of the fact that the values of the random variable X certainly lie within (a, b) or (−∞, +∞). From (16) and (17) it follows that the area of the figure bounded by the distribution curve and the x-axis is always equal to 1.
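
Formulas (15) and (17) can be illustrated numerically; the sketch below (Python) uses an exponential density f(x) = λe^(−λx) on (0, ∞) as an assumed example (it is not a distribution discussed in the text) together with a simple trapezoidal integration.

```python
import math

lam = 2.0  # rate parameter of the assumed exponential density

def density(x: float) -> float:
    """Probability density f(x) = lam * exp(-lam * x) for x >= 0, else 0."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def integrate(f, a: float, b: float, steps: int = 100_000) -> float:
    """Trapezoidal approximation of the definite integral used in formula (15)."""
    h = (b - a) / steps
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps)))

print(round(integrate(density, 0.5, 1.5), 4))   # P(0.5 < X < 1.5) ≈ 0.3181 by formula (15)
print(round(integrate(density, 0.0, 50.0), 4))  # normalization (17), upper limit truncated: ≈ 1.0
```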

2.4. Basic numerical characteristics of random variables

The results presented in paragraphs 2.2 and 2.3 show that a complete description of discrete and continuous random variables can be obtained by knowing the laws of their distribution. However, in many practically significant situations, the so-called numerical characteristics of random variables are used; the main purpose of these characteristics is to express in a concise form the most significant features of the distribution of random variables. It is important that these parameters represent specific (constant) values that can be assessed using data obtained in experiments. These estimates are dealt with by "Descriptive Statistics".

In probability theory and mathematical statistics, quite a lot of different characteristics are used, but we will consider only the most used ones. Moreover, only for some of them will we present the formulas by which their values are calculated; in other cases, we will leave the calculations to the computer.

Let's consider position characteristics – mathematical expectation, mode, median.

They characterize the position of a random variable on the number axis, i.e. they indicate some approximate value around which all possible values of the random variable are grouped. Among them, the most important role is played by the mathematical expectation M(X).

Brief theory

To quantitatively compare events according to the degree of possibility of their occurrence, a numerical measure is introduced, which is called the probability of an event. The probability of a random event is a number that expresses the measure of the objective possibility of an event occurring.

How significant the objective grounds are for expecting the occurrence of an event is characterized by the probability of the event. It must be emphasized that probability is an objective quantity that exists independently of the observer and is determined by the entire set of conditions contributing to the occurrence of the event.

The explanations we have given for the concept of probability are not a mathematical definition, since they do not quantify the concept. There are several definitions of the probability of a random event, which are widely used in solving specific problems (classical, geometric definition of probability, statistical, etc.).

The classical definition of the probability of an event reduces this concept to the more elementary concept of equally possible events, which is not itself defined and is assumed to be intuitively clear. For example, if a die is a homogeneous cube, then the appearance of any face of this cube is an equally possible event.

Let a reliable (certain) event be divided into equally possible cases, the sum of some of which gives the event of interest. The cases into which the event breaks down are called favorable to it, since the appearance of any one of them ensures its occurrence.

The probability of the event A will be denoted by the symbol P(A).

The probability of an event is equal to the ratio of the number m of cases favorable to it to the total number n of uniquely possible, equally possible, and incompatible cases, i.e. P(A) = m/n.

This is the classic definition of probability. Thus, to find the probability of an event, it is necessary, having considered the various outcomes of the test, to find a set of uniquely possible, equally possible and incompatible cases, calculate their total number n, the number of cases m favorable for a given event, and then perform the calculation using the above formula.

The probability of an event, equal to the ratio of the number of experimental outcomes favorable to the event to the total number of experimental outcomes, is called the classical probability of a random event.

The following properties of probability follow from the definition:

Property 1. The probability of a reliable event is equal to one.

Property 2. The probability of an impossible event is zero.

Property 3. The probability of a random event is a positive number between zero and one.

Property 4. The probability of the occurrence of events that form a complete group is equal to one.

Property 5. The probability of the occurrence of the opposite event is determined in the same way as the probability of the occurrence of event A.

The number of cases favoring the occurrence of the opposite event is n − m. Hence, the probability of the occurrence of the opposite event is equal to the difference between unity and the probability of the occurrence of event A: P(Ā) = (n − m)/n = 1 − m/n = 1 − P(A).

An important advantage of the classical definition of the probability of an event is that with its help the probability of an event can be determined without resorting to experience, but based on logical reasoning.

When a set of conditions is met, a reliable event will definitely happen, but an impossible event will definitely not happen. Among the events that may or may not occur when a set of conditions is created, the occurrence of some can be counted on with good reason, and the occurrence of others with less reason. If, for example, there are more white balls in an urn than black balls, then there is more reason to hope for the appearance of a white ball when drawn from the urn at random than for the appearance of a black ball.


Example of problem solution

Example 1

A box contains 8 white, 4 black and 7 red balls. 3 balls are drawn at random. Find the probabilities of the following events: A – at least 1 red ball is drawn; C – there are at least 2 balls of the same color; D – there is at least 1 red and at least 1 white ball.

The solution of the problem

We find the total number of test outcomes as the number of combinations of 19 (8 + 4 + 7) elements taken 3 at a time: C(19, 3) = 969.

Let us find the probability of the event A – at least 1 red ball is drawn (1, 2 or 3 red balls). It is easier to count the outcomes of the opposite event, in which none of the 3 drawn balls is red: C(12, 3) = 220.

Required probability: P(A) = 1 − 220/969 ≈ 0.773.

Let the event C be that there are at least 2 balls of the same color (2 or 3 white balls, 2 or 3 black balls, or 2 or 3 red balls). It is again easier to count the opposite outcomes, in which all three balls are of different colors: 8 ∙ 4 ∙ 7 = 224.

Number of outcomes favorable to the event: 969 − 224 = 745.

Required probability: P(C) = 745/969 ≈ 0.7688.

Let the event D be that there is at least 1 red and at least 1 white ball

(1 red, 1 white, 1 black; or 1 red, 2 white; or 2 red, 1 white).

Number of outcomes favorable to the event: 7 ∙ 8 ∙ 4 + 7 ∙ C(8, 2) + C(7, 2) ∙ 8 = 224 + 196 + 168 = 588.

Required probability: P(D) = 588/969 ≈ 0.6068.

Answer: P(A) = 0.773; P(C) = 0.7688; P(D) = 0.6068.
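
The combinatorial counts in Example 1 can be checked mechanically; the sketch below (Python) reproduces the three probabilities with math.comb.

```python
from math import comb

total = comb(19, 3)                                   # all ways to draw 3 balls out of 19

p_a = 1 - comb(12, 3) / total                         # at least 1 red = 1 - P(no red balls)
p_c = 1 - 8 * 4 * 7 / total                           # at least 2 same color = 1 - P(all colors differ)
favorable_d = 7 * 8 * 4 + 7 * comb(8, 2) + comb(7, 2) * 8
p_d = favorable_d / total                             # at least 1 red and at least 1 white

print(round(p_a, 4), round(p_c, 4), round(p_d, 4))    # 0.773 0.7688 0.6068
```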

Example 2

Two dice are thrown. Find the probability that the sum of points is at least 5.

Solution

Let the event A be that the sum of points is at least 5.

Let us use the classical definition of probability: P(A) = m/n, where n is the total number of possible test outcomes and m is the number of outcomes favoring the event of interest.

On the upturned face of the first die, one point, two points, ..., or six points may appear; similarly, six outcomes are possible when rolling the second die. Each of the outcomes of throwing the first die can be combined with each of the outcomes of the second. Thus, the total number of possible elementary test outcomes is equal to the number of placements with repetition (arrangements of 2 elements from a set of size 6): n = 6² = 36.

Let's find the probability of the opposite event - the sum of points is less than 5

The following combinations of dropped points will favor the event:

1st die:  1  1  1  2  2  3
2nd die:  1  2  3  1  2  1

There are 6 such combinations, so the probability of the opposite event is 6/36 = 1/6, and the required probability is P(A) = 1 − 1/6 = 5/6 ≈ 0.833.
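
Because the outcome space here is tiny, it can simply be enumerated; a sketch in Python confirming the count of favorable outcomes.

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))            # all 36 equally possible rolls
favorable = [roll for roll in outcomes if sum(roll) >= 5]  # sum of points at least 5
print(len(outcomes), len(favorable), round(len(favorable) / len(outcomes), 4))  # 36 30 0.8333
```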


Probability is one of the basic concepts of probability theory. There are several definitions of this concept. Let us give a definition that is called classical.

The probability of an event is the ratio of the number of elementary outcomes favorable to the given event to the number of all equally possible outcomes of the experiment in which this event may appear.

The probability of event A is denoted by P(A) (here P is the first letter of the French word probabilité, meaning probability).

According to the definition,

P(A) = m/n,

where m is the number of elementary test outcomes favorable to the occurrence of the event, and n is the total number of possible elementary test outcomes.

This definition of probability is called classic. It arose at the initial stage of the development of probability theory.

The ratio m/n is often called the relative frequency of occurrence of the event A in the experiment.

The greater the probability of an event, the more often it occurs; conversely, the lower the probability of an event, the less often it occurs. When the probability of an event is close to or equal to one, it occurs in almost all trials. Such an event is said to be almost certain, i.e. one can certainly count on its occurrence.

On the contrary, when the probability is zero or very small, then the event occurs extremely rarely; such an event is said to be almost impossible.

Sometimes the probability is expressed as a percentage: P(A) ∙ 100% is the average percentage of occurrences of the event A.

Example 2.13. While dialing a phone number, the subscriber forgot one digit and dialed it at random. Find the probability that the correct number is dialed.

Solution.

Let us denote by A the event "the required number has been dialed".

The subscriber could dial any of the 10 digits, so the total number of possible elementary outcomes is 10. These outcomes are incompatible, equally possible and form a complete group. Only one outcome favors the event A (there is only one correct digit).

The required probability is equal to the ratio of the number of outcomes favorable to the event to the number of all elementary outcomes: P(A) = 1/10 = 0.1.

The classical probability formula provides a very simple, experiment-free way to calculate probabilities. However, the simplicity of this formula is very deceptive. The fact is that when using it, two very difficult questions usually arise:

1. How to choose a system of experimental outcomes so that they are equally possible, and is it possible to do this at all?

2. How to find the numbers m and n?

If several objects are involved in an experiment, it is not always easy to see equally possible outcomes.

The great French philosopher and mathematician d'Alembert entered the history of probability theory with his famous mistake, the essence of which was that he incorrectly determined the equipossibility of outcomes in an experiment with only two coins!

Example 2.14 (d'Alembert's error). Two identical coins are tossed. What is the probability that they will fall on the same side?

D'Alembert's solution.

The experiment has three equally possible outcomes:

1. Both coins will land on heads;

2. Both coins will land on tails;

3. One of the coins will land on heads, the other on tails.

Correct solution.

The experiment has four equally possible outcomes:

1. The first coin will fall on heads, the second will also fall on heads;

2. The first coin will land on tails, the second will also land on tails;

3. The first coin will fall on heads, and the second on tails;

4. The first coin will land on tails, and the second on heads.

Of these, two outcomes are favorable to our event, so the required probability is equal to 2/4 = 1/2.

D'Alembert made one of the most common mistakes made when calculating probability: he combined two elementary outcomes into one, thereby making it unequal in probability to the remaining outcomes of the experiment.
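
Enumerating the ordered outcomes makes the correct answer, and d'Alembert's error, immediately visible; a short sketch in Python.

```python
from itertools import product

outcomes = list(product(["heads", "tails"], repeat=2))   # 4 equally possible ordered outcomes
same_side = [o for o in outcomes if o[0] == o[1]]
# 2 favorable outcomes out of 4; treating the three unordered outcomes as
# equally possible would wrongly give 2/3 instead of 1/2.
print(len(same_side) / len(outcomes))                    # 0.5
```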

The probability of an event is understood as a certain numerical characteristic of the possibility of the occurrence of this event. There are several approaches to determining probability.

The probability of the event A is the ratio of the number of outcomes favorable to this event to the total number of all equally possible, incompatible elementary outcomes that form a complete group. Thus, the probability of the event A is determined by the formula

P(A) = m/n,

where m is the number of elementary outcomes favorable to A, and n is the number of all possible elementary test outcomes.

Example 3.1. In an experiment involving the throw of a die, the number of all outcomes n equals 6, and they are all equally possible. Let the event A mean the appearance of an even number. Then the outcomes favorable to this event are the appearance of the numbers 2, 4, 6; their number is 3. Therefore, the probability of the event A is equal to P(A) = 3/6 = 1/2.

Example 3.2. What is the probability that a two-digit number chosen at random has the same digits?

Two-digit numbers are the numbers from 10 to 99; there are 90 such numbers in total. 9 of them have identical digits (the numbers 11, 22, ..., 99). Since in this case m = 9 and n = 90,

P(A) = 9/90 = 0.1,

where A is the event "a number with identical digits is chosen".

Example 3.3. In a batch of 10 parts, 7 are standard. Find the probability that among six parts taken at random, 4 are standard.

The total number of possible elementary test outcomes is equal to the number of ways in which 6 parts can be taken from 10, i.e. the number of combinations of 10 elements taken 6 at a time: C(10, 6) = 210. Let us determine the number of outcomes favorable to the event of interest A (among the six parts taken, 4 are standard). Four standard parts can be taken from the seven standard parts in C(7, 4) = 35 ways; the remaining 6 − 4 = 2 parts must be non-standard, and two non-standard parts can be taken from the 10 − 7 = 3 non-standard parts in C(3, 2) = 3 ways. Therefore, the number of favorable outcomes is 35 ∙ 3 = 105.

Then the required probability is P(A) = 105/210 = 0.5.
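
Example 3.3 is a standard combinations calculation; the sketch below (Python) verifies the counts with math.comb.

```python
from math import comb

total = comb(10, 6)                      # ways to take 6 parts out of 10
favorable = comb(7, 4) * comb(3, 2)      # 4 standard out of 7, and 2 non-standard out of 3
print(total, favorable, favorable / total)   # 210 105 0.5
```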

The following properties follow from the definition of probability:

1. The probability of a reliable event is equal to one.

Indeed, if the event is reliable (certain), then every elementary outcome of the test favors the event. In this case m = n, and therefore P(A) = m/n = n/n = 1.

2. The probability of an impossible event is zero.

Indeed, if an event is impossible, then none of the elementary outcomes of the test favor the event. In this case m = 0, and therefore P(A) = m/n = 0/n = 0.

3. The probability of a random event is a positive number between zero and one.

Indeed, only a part of the total number of elementary outcomes of the test favors a random event. In this case 0 < m < n, so 0 < m/n < 1, i.e. 0 < P(A) < 1. Thus, the probability of any event satisfies the double inequality 0 ≤ P(A) ≤ 1.


The construction of a logically complete theory of probability is based on the axiomatic definition of a random event and its probability. In the system of axioms proposed by A. N. Kolmogorov, the undefined concepts are an elementary event and probability. Here are the axioms that define probability:

1. Every event A is assigned a non-negative real number P(A). This number is called the probability of the event A.

2. The probability of a reliable event is equal to one.

3. The probability of the occurrence of at least one of the pairwise incompatible events is equal to the sum of the probabilities of these events.

Based on these axioms, the properties of probabilities and the dependencies between them are derived as theorems.

Self-test questions

1. What is the name of the numerical characteristic of the possibility of an event occurring?

2. What is the probability of an event?

3. What is the probability of a reliable event?

4. What is the probability of an impossible event?

5. What are the limits of the probability of a random event?

6. What are the limits of the probability of any event?

7. What definition of probability is called classical?
