
3.1. Primary and secondary quantitative processing

The quantitative data processing process has two phases: primary and secondary.

Primary quantitative processing is aimed at organizing the information about the object and subject of study obtained at the empirical stage. Its main methods include tabulation and the construction of bar charts, histograms (step charts), frequency polygons (the upper midpoints of all histogram bars connected by straight segments), and distribution curves (a frequency polygon smoothed into a curved line). Bar charts represent discrete distributions; the other graphic forms are continuous.

Secondary quantitative processing consists mainly of statistical analysis of the results of primary processing. It is important to get answers to three main questions here.

1. Which value is most typical for the sample?

To answer this question, the so-called measures of central tendency are used. These are generalizing quantities and include the arithmetic mean, median, mode, geometric mean, and harmonic mean. In psychology the first three are usually used.

The arithmetic mean (an estimate of the mathematical expectation) is calculated using the formula:

M = ∑x_i / n,

where x_i is each observed value of the attribute, i is an index indicating the serial number of this attribute value, and n is the number of observations.

Median (Me) is the point on the measurement scale above and below which exactly 50% of the values of the series (observations) lie. It is determined via the median rank using the formula:

R_Me = (n + 1) / 2.

That is, to calculate the median, the series of values (observations) must first be ranked. The resulting median rank may not correspond to an actual member of the series but fall between two adjacent values; in that case the arithmetic mean of those two values is taken as the median.

For example, we have the series 3-5-6-7-9-10-11-12. Ranking it gives the ranks 1-2-3-4-5-6-7-8. The median rank of this series is R_Me = (8 + 1) / 2 = 4.5. This rank corresponds to the midpoint between the members of the original series holding ranks 4 and 5, so the median of the series is 8, i.e. (7 + 9) / 2. Note that the value 8 does not occur in the series, yet it is the median of this series.

Mode (Mo) is the value that occurs most frequently in the sample. Example: 2, 6, 6, 8, 9, 9, 9, 10; Mo = 9.

If all values in a group occur equally often, the mode is considered absent. If two adjacent values have the same frequency and it is greater than the frequency of any other value, the mode is the average of these two values (example: 1, 2, 2, 2, 4, 4, 4, 5, 5, 7; Mo = 3). If the same holds for two non-adjacent values, there are two modes and the group of attribute values is bimodal (example: 0, 1, 1, 1, 2, 3, 4, 4, 4, 7; Mo = 1 and 4).

Usually the arithmetic mean is used when the greatest accuracy is sought or when the standard deviation will later have to be calculated; the median, when the attribute values contain atypical data (for example: 1, 3, 5, 7, 9, 26, 13); the mode, when high accuracy is not needed but speed in determining the measure of central tendency matters.
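These three measures can be checked directly; a minimal sketch in Python (standard library only), using the mode example above:

```python
# Illustrative sketch: the three measures of central tendency computed
# with Python's standard library (data from the mode example above).
import statistics

data = [2, 6, 6, 8, 9, 9, 9, 10]

mean = statistics.mean(data)      # arithmetic mean: 59 / 8 = 7.375
median = statistics.median(data)  # mean of the two central values: (8 + 9) / 2 = 8.5
mode = statistics.mode(data)      # most frequent value: 9
```

Note that the median (8.5) falls between two members of the series, exactly as described for even-sized samples.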

2. Is there a large spread of data around the mean?

To answer this question, measures of variability (dispersion, spread) are used. They allow one to judge the degree of homogeneity of the resulting set and its compactness, and, indirectly, the reliability of the results obtained. The measures most used in psychological research are the range, mean deviation, variance, standard deviation, and quartile deviation.

Range (P) is the interval between the maximum and minimum values ​​of a characteristic. Easily determined, but sensitive to randomness, especially with a small number of data. Example: (0, 2, 3, 5, 8; P=8); (-0.2, 1.0, 1.4, 2.0; P=2.2)

Mean deviation (MD) is the arithmetic mean of the absolute differences between each value in the sample and the sample mean:

MD = ∑|d| / N,

where d = X − M; M is the sample mean; X is a specific value; N is the number of values.

The set of all specific deviations from the average characterizes the variability of the data, but if they are not taken modulo, then their sum will be equal to zero, and we will not receive information about their variability. MD shows the degree of crowding of data around the average (sometimes Me or Mo is taken instead of M).

Variance (D) (from Latin dispersio, "scattering") is calculated as:

D = ∑d² / (N − 1) or σ_x² = ∑(x_i − x̄)² · m_i / (N − 1),

where m_i is the number of occurrences of the value x_i among the N observations.

For large samples (N≥30) the denominator is simply N.

Standard deviation, or root-mean-square deviation. In psychology it is customary to denote this value by σ (sigma):

σ = √(∑(x_i − x̄)² / (n − 1))

The coefficient of variation (V) is a relative characteristic of dispersion and is calculated by the formula:

V = (σ_x / x̄) · 100%
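The scatter measures introduced above can be computed in a few lines; a minimal Python sketch on the ranked series used in the median example (variable names follow the text's notation):

```python
# Sketch of the scatter measures from this section (pure Python,
# no external libraries); data is the series from the median example.
from math import sqrt

data = [3, 5, 6, 7, 9, 10, 11, 12]
N = len(data)
M = sum(data) / N                              # sample mean

P = max(data) - min(data)                      # range
MD = sum(abs(x - M) for x in data) / N         # mean (absolute) deviation
D = sum((x - M) ** 2 for x in data) / (N - 1)  # variance (small sample: N - 1)
sigma = sqrt(D)                                # standard deviation
V = sigma / M * 100                            # coefficient of variation, %
```

For a sample of N ≥ 30, the variance denominator would be simply N, as noted above.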

Quartile deviation (Q). In practice it is often important to find not a point but an interval of values, so the cumulative-frequency axis (with all values placed on the axis) is divided into an equal number of intervals. The cumulative-frequency curve is S-shaped, with M the overall mean; for a normal distribution its function can be written symbolically as:

F(X) = (1 / (σ√(2π))) ∫ exp(−(t − µ)² / (2σ²)) dt

The points on the cumulative-frequency axis that divide it in a specified proportion are called quantiles (hence the name "quantile standardization" of tests). Quantiles include quartiles, quintiles, deciles, and percentiles. For example, the 3 quartiles (Q1, Q2, Q3) divide the sample into 4 equal parts (quarters) so that 25% of subjects fall below Q1, 50% below Q2, and 75% below Q3; the 99 percentiles divide the sample into 100 equal parts (percents), and so on.

The first quartile is calculated using the formula Q1 = (R_1 + R_(n/2)) / 2, i.e. the half-sum of the first and last ranks of the first (left of the median) half of the series;

the third quartile, Q3 = (R_(n/2+1) + R_n) / 2, i.e. the half-sum of the first and last ranks of the second (right of the median) half of the series.

The rank values obtained correspond to particular values of the original data series. To characterize the distribution, the mean quartile deviation is calculated:

Q = (X(Q3) − X(Q1)) / 2,

where X(Q3) and X(Q1) are the values of the series corresponding to the third and first quartiles.

It is clear that with a symmetric distribution, Q 2 and Me will coincide. In general, the point on the axis corresponding to Q 2 is determined after separating 50% of all sample values.
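The quartile computation can be sketched on the same example series; the code below follows the rank half-sum rule described above (taking the first rank of the right half for Q3, as the verbal description states):

```python
# Sketch of the quartile calculation on the example series used earlier
# (3-5-6-7-9-10-11-12); follows the rank half-sum rule from the text.
series = [3, 5, 6, 7, 9, 10, 11, 12]   # already ordered (ranks 1..8)
n = len(series)

q1_rank = (1 + n // 2) / 2             # (1 + 4) / 2 = 2.5
q3_rank = (n // 2 + 1 + n) / 2         # (5 + 8) / 2 = 6.5

def value_at(rank):
    """Value of the series at a (possibly fractional) rank."""
    if rank == int(rank):
        return series[int(rank) - 1]
    lo, hi = series[int(rank) - 1], series[int(rank)]
    return (lo + hi) / 2               # midpoint between neighbouring values

Q1, Q3 = value_at(q1_rank), value_at(q3_rank)
Q = (Q3 - Q1) / 2                      # quartile deviation
```

Here Q1 = 5.5 and Q3 = 10.5, giving Q = 2.5; as with the median, neither quartile value need occur in the series itself.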

3. Is there a relationship between individual data in the existing population and what is the nature and strength of these relationships?

To solve this issue, it is necessary to calculate measures of connection (correlation). Relationship measures reveal relationships between two variables. These relationships are calculated using correlation coefficients.

The Pearson correlation coefficient is calculated by normalizing the covariance of the variables by the product of their standard deviations:

r_xy = ∑(x_i − x̄)(y_i − ȳ) / √(∑(x_i − x̄)² · ∑(y_i − ȳ)²).

The coefficient value can vary from -1 to +1.

The rank correlation coefficient of Charles Edward Spearman:

r_s = 1 − 6∑d² / (N(N² − 1)),

where d is the difference between the ranks of paired values and N is the number of compared pairs.

Its obtained value must be compared with the tabular one (in reference books, statistics textbooks, special publications, etc.).
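Both coefficients can be illustrated on a small, made-up data set (the numbers below are illustrative, not from the text):

```python
# Sketch: both correlation coefficients from this section, computed on a
# toy data set with no tied ranks.
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
N = len(x)

mx, my = sum(x) / N, sum(y) / N

# Pearson: covariance normalized by the product of standard deviations.
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
r_xy = num / den

# Spearman: r_s = 1 - 6*sum(d^2) / (N*(N^2 - 1)), d = rank difference.
def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
r_s = 1 - 6 * d2 / (N * (N ** 2 - 1))
```

On this toy data both coefficients come out to 0.8; in general they differ, since Spearman uses only the ordinal information.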

3.2. Types of quantitative data analysis

Statistical analysis of data, included in the procedure for processing research results, also comprises, in addition to what has been indicated, the following methods.

1. Analysis of variance (ANOVA). Unlike correlation analysis, it can reveal dependence among two, three, or more variables. Changes in the trait under study may be caused by several variables or by their interaction, and it is ANOVA that can reveal this.

2. Factor analysis. It allows one to reduce the dimensionality of the data space, i.e. to reduce in a justified way the number of measured characteristics by combining them into aggregates (factors). The analysis is based on the correlation matrix, i.e. the table of correlation coefficients of each characteristic with all the others. Depending on the number of factors in the correlation matrix, one distinguishes:

single-factor analysis (after Spearman);

two-factor analysis (after Holzinger);

multiple-factor analysis (after Thurstone).

The very complex mathematical and logical apparatus of factor analysis often makes it difficult to choose a method option that is adequate to the research tasks.

3. Regression analysis. The method allows one to study the dependence of the mean value of one quantity on variation in another quantity (or quantities). The specificity of the method is that at least one of the quantities under consideration is random. The description of the dependence then breaks into two tasks: 1) identifying the general form of the dependence and 2) refining it by calculating estimates of the dependence parameters. There are no standard methods for solving the first task; it is a matter of the researcher's skill and intuition. The second task is essentially the search for an approximating curve, most often by the mathematical method of least squares.

The idea of ​​this method belongs to Francis Galton, who noticed that very tall parents had children that were somewhat shorter, while very short parents had taller children. He called this pattern regression.
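The second regression task, fitting by least squares, has a closed-form solution for a straight line y = a + b·x; a minimal Python sketch with illustrative data:

```python
# Sketch: ordinary least-squares fit of a straight line y = a + b*x
# (closed-form solution; the data below are illustrative).
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(x)

mx, my = sum(x) / n, sum(y) / n

# Slope: covariance of x and y divided by the variance of x.
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))
a = my - b * mx   # intercept: the fitted line passes through (mx, my)
```

The fitted line always passes through the point of means (mx, my), which is why the intercept follows directly once the slope is known.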

4. Taxonomic analysis. This is a mathematical technique for grouping data into classes (taxa, clusters) in such a way that objects included in one class are more homogeneous in some way compared to objects included in other classes. As a result, it becomes possible to determine the distance between the objects under study in one metric or another and give an orderly description of their relationships at a quantitative level. Due to the insufficient development of criteria for the effectiveness and admissibility of cluster procedures, this method is considered as additional or complemented by other methods, in particular, factor analysis.

Processing of psychological research data is a separate branch of experimental psychology, closely related to mathematical statistics and logic. Data processing is aimed at solving the following tasks:

Organizing the received material;

Detection and elimination of errors, shortcomings, gaps in information;

Identification of trends, patterns and connections hidden from direct perception;

Discovery of new facts that were not expected or noticed during the empirical process;

Determining the level of reliability, reliability and accuracy of the collected data and obtaining scientifically based results based on them.

There are quantitative and qualitative data processing. Quantitative processing is work with the measured characteristics of the object under study, its "objectified" properties. Qualitative processing is a way of penetrating to the essence of an object by identifying its non-measurable properties.

Quantitative processing is aimed mainly at a formal, external study of an object, while qualitative processing is mainly aimed at a meaningful, internal study of it. In quantitative research, the analytical component of cognition dominates, which is reflected in the names of quantitative methods for processing empirical material: correlation analysis, factor analysis, etc. Quantitative processing is carried out using mathematical and statistical methods.

In qualitative processing, synthetic methods of cognition predominate. Generalization is carried out at the next stage of the research process, interpretation. In qualitative data processing the main thing is the appropriate presentation of information about the phenomenon under study, ensuring its further theoretical study. The result of qualitative processing is typically an integrated representation of the set of properties of an object, or of a set of objects, in the form of classifications and typologies. Qualitative processing largely appeals to the methods of logic.

The contrast between qualitative and quantitative processing is rather arbitrary. Quantitative analysis without subsequent qualitative processing is meaningless, since by itself it does not increase knowledge, while qualitative study of an object without basic quantitative data is impossible in scientific cognition. Without quantitative data, scientific knowledge would be a purely speculative procedure.

The unity of quantitative and qualitative processing is clearly seen in many data-processing methods: factor and taxonomic analysis, scaling, classification, etc. The most common methods of qualitative processing are classification, typology, systematization, periodization, and casuistry.

Qualitative processing naturally results in a description and explanation of the phenomena being studied, which constitutes the next level of their study, carried out at the stage of interpretation of the results. Quantitative processing refers entirely to the data processing stage.

7.2. Primary statistical data processing

All methods of quantitative processing are usually divided into primary and secondary.

Primary statistical processing is aimed at organizing information about the object and subject of study. At this stage, “raw” information is grouped according to certain criteria and entered into summary tables. Primary processed data, presented in a convenient form, gives the researcher, as a first approximation, an idea of ​​the nature of the entire data set as a whole: their homogeneity - heterogeneity, compactness - scatteredness, clarity - blurriness, etc. This information is well read from visual forms of data presentation and provides information about their distribution.

During the application of primary methods of statistical processing, indicators are obtained that are directly related to the measurements made in the study.

The main methods of primary statistical processing include: calculating measures of central tendency and measures of data dispersion (variability).

Primary statistical analysis of the entire set of data obtained in a study makes it possible to characterize it in an extremely concise form, answering two main questions: 1) which value is most typical for the sample; 2) whether the spread of data around this characteristic value is large, i.e. what the "fuzziness" of the data is. To answer the first question, measures of central tendency are calculated; to answer the second, measures of variability (or dispersion). These statistics are used with quantitative data presented on an ordinal, interval, or ratio scale.

Measures of central tendency are the quantities around which the rest of the data are grouped. They are, as it were, indicators generalizing the entire sample, which, first, makes it possible to judge the whole sample by them and, second, to compare different samples and different series with one another. The measures of central tendency used in processing the results of psychological research are the sample mean, the median, and the mode.

Sample mean (M) is the result of dividing the sum of all values X by their number N: M = ∑X / N.

Median (Me) is a value above and below which the number of different values ​​is the same, i.e. it is the central value in a sequential series of data. The median does not have to coincide with a specific value. A match occurs in the case of an odd number of values ​​(answers), a mismatch occurs in the case of an even number. In the latter case, the median is calculated as the arithmetic mean of the two central values ​​in the ordered series.

Mode (Mo) is the value that occurs most frequently in the sample, i.e. the value with the highest frequency. If all values in a group occur equally often, the mode is considered absent. If two adjacent values have the same frequency and it is greater than the frequency of any other value, the mode is the average of the two values. If the same holds for two non-adjacent values, there are two modes and the group of scores is bimodal.

Typically, a sample mean is used when seeking the greatest accuracy in determining central tendency. The median is calculated when there are “atypical” data in the series that sharply affect the average. The mode is used in situations where high accuracy is not needed, but the speed of determining the measure of central tendency is important.

All three indicators are also calculated to evaluate the shape of the data distribution. In a normal distribution, the values of the sample mean, median, and mode are the same or very close.

Measures of scatter (variability) are statistical indicators characterizing the differences between the individual values of a sample. They make it possible to judge the degree of homogeneity of the resulting set and its compactness, and, indirectly, the reliability of the data obtained and of the results following from them. The indicators most used in psychological research are the mean deviation, variance, and standard deviation.

Range (P) is the interval between the maximum and minimum values of the characteristic. It is determined easily and quickly but is sensitive to randomness, especially with a small amount of data.

Mean deviation (MD) is the arithmetic mean of the absolute differences between each value in the sample and the sample mean:

MD = ∑|d| / N,

where d = |X − M|, M is the sample mean, X a specific value, and N the number of values.

The set of all specific deviations from the mean characterizes the variability of the data, but if they are not taken in absolute value their sum equals zero and we obtain no information about variability. The mean deviation shows the degree of crowding of the data around the sample mean. Incidentally, when determining this sample characteristic, another measure of central tendency, the mode or the median, is sometimes taken instead of the mean (M).

Variance (D) characterizes deviations from the mean value in the given sample. Calculating the variance avoids the zero sum of specific deviations (d = X − M) not through their absolute values but through their squares:

D = ∑d² / (N − 1),

where d = X − M, M is the sample mean, X a specific value, and N the number of values.

Standard deviation (σ). Because of the squaring of the individual deviations d, the variance obtained is far removed from the initial deviations and therefore does not give a clear idea of them. To avoid this and obtain a characteristic comparable with the mean deviation, the inverse mathematical operation is performed: the square root is extracted from the variance. Its positive value is taken as the measure of variability called the root-mean-square, or standard, deviation:

σ = √(∑d² / (N − 1)),

where d = |X − M|, M is the sample mean, X a specific value, and N the number of values.

MD, D, and σ are applicable to interval and ratio data. For ordinal data, the measure of variability usually taken is the semiquartile deviation (Q), also called the semiquartile coefficient. This indicator is calculated as follows. The entire distribution area is divided into four equal parts. If observations are counted from the minimum value on the measuring scale, the first quarter of the scale is called the first quartile, and the point separating it from the rest of the scale is denoted Q1. The second 25% of the distribution is the second quartile, with the corresponding point Q2 on the scale. Between the third and fourth quarters of the distribution lies the point Q3. The semiquartile coefficient is defined as half the interval between the first and third quartiles:

Q = (Q3 − Q1) / 2.

With a symmetric distribution, the point Q2 coincides with the median (and hence with the mean), and the coefficient Q can then be calculated to characterize the spread of the data relative to the middle of the distribution. With an asymmetric distribution this is not sufficient, and the coefficients for the left and right sections are then calculated additionally.

7.3. Secondary statistical data processing

Secondary methods include those methods of statistical processing, with the help of which, on the basis of primary data, statistical patterns hidden in them are revealed. Secondary methods can be divided into methods for assessing the significance of differences and methods for establishing statistical relationships.

Methods for assessing the significance of differences. To compare the sample means of two data sets and to decide whether the means differ from each other statistically significantly, Student's t-test is used. Its formula is as follows:

t = (M1 − M2) / √(m1² + m2²),

where M1, M2 are the means of the compared samples and m1, m2 are integrated indicators of the deviations of individual values from the means of the two compared samples, calculated by the formulas:

m1 = √(D1 / N1), m2 = √(D2 / N2),

where D1, D2 are the variances of the first and second samples and N1, N2 the numbers of values in them.

After the value of t has been calculated, the table value t_cr is found from the table of critical values (see Statistical Appendix 1) for the given number of degrees of freedom (N1 + N2 − 2) and the chosen probability of acceptable error (0.05, 0.02, 0.01, 0.001, etc.). If the calculated value of t is greater than or equal to the table value, it is concluded that the compared means of the two samples differ statistically significantly, with a probability of acceptable error less than or equal to the chosen one.
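A minimal sketch of the t computation for two small illustrative samples, following the formulas in this section:

```python
# Sketch of the Student t computation for two independent samples
# (illustrative data; the critical value would still be looked up
# in a table for df degrees of freedom).
from math import sqrt

s1 = [5, 7, 8, 6, 9, 7]
s2 = [4, 5, 6, 5, 4, 6]
N1, N2 = len(s1), len(s2)

M1, M2 = sum(s1) / N1, sum(s2) / N2
D1 = sum((x - M1) ** 2 for x in s1) / (N1 - 1)   # sample variances
D2 = sum((x - M2) ** 2 for x in s2) / (N2 - 1)

t = (M1 - M2) / sqrt(D1 / N1 + D2 / N2)
df = N1 + N2 - 2   # degrees of freedom for the critical-value table
```

Here M1 = 7, M2 = 5 and t ≈ 2.93 with df = 10; whether that is significant depends on the chosen error probability and the table value.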

If in the course of research the task arises of comparing not absolute mean values but frequency distributions of data, the χ² criterion is used (see Appendix 2). Its formula is as follows:

χ² = ∑ (Pk − Vk)² / Vk,

where Pk are the distribution frequencies in the first measurement, Vk the distribution frequencies in the second measurement, and m the total number of groups into which the measurement results were divided.

After the value of χ² has been calculated, the table value χ²_cr is found from the table of critical values (see Statistical Appendix 2) for the given number of degrees of freedom (m − 1) and the chosen probability of acceptable error (0.05, 0.01, etc.). If the calculated value of χ² is greater than or equal to the table value, it is concluded that the compared data distributions in the two samples differ statistically significantly, with a probability of acceptable error less than or equal to the chosen one.
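The χ² formula itself is lost in this copy of the text; assuming the standard Pearson form (squared frequency differences divided by the comparison frequencies), a minimal sketch:

```python
# Sketch: chi-square statistic for comparing two frequency distributions.
# The standard Pearson form is assumed here, since the text's own formula
# did not survive; the frequencies are illustrative.
observed = [18, 22, 10]   # frequencies in the first measurement (Pk)
expected = [15, 20, 15]   # frequencies in the second measurement (Vk)
m = len(observed)

chi2 = sum((p - v) ** 2 / v for p, v in zip(observed, expected))
df = m - 1   # degrees of freedom for the critical-value table
```

With these frequencies chi2 ≈ 2.47 at df = 2, well below the usual 0.05 critical value, so the two distributions would not be judged significantly different.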

To compare the variances of two samples, Fisher's F-test is used. Its formula is as follows:

F = D1 / D2,

where D1, D2 are the variances of the first and second samples and N1, N2 the numbers of values in them.

After the value of F has been calculated, F_cr is found from the table of critical values (see Statistical Appendix 3) for the given numbers of degrees of freedom (N1 − 1, N2 − 1). If the calculated value of F is greater than or equal to the table value, it is concluded that the difference between the variances of the two samples is statistically significant.
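A minimal sketch of the F computation (the larger variance is conventionally placed in the numerator so that F ≥ 1; the data are illustrative):

```python
# Sketch of the Fisher F statistic: ratio of two sample variances,
# larger over smaller by convention (illustrative data).
s1 = [2, 4, 6, 8, 10]
s2 = [5, 6, 7, 6, 5]

def variance(sample):
    n = len(sample)
    m = sum(sample) / n
    return sum((x - m) ** 2 for x in sample) / (n - 1)

D1, D2 = variance(s1), variance(s2)
F = max(D1, D2) / min(D1, D2)
df = (len(s1) - 1, len(s2) - 1)   # degrees-of-freedom pair for the table
```

Here D1 = 10 and D2 = 0.7, so F ≈ 14.3; the comparison with the table value proceeds exactly as described above.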

Methods for establishing statistical relationships. The previous indicators characterize a set of data with respect to some one characteristic. This changing characteristic is called a variable quantity, or simply a variable. Measures of connection reveal relationships between two variables or between two samples. These connections, or correlations, are determined by calculating correlation coefficients. However, the presence of a correlation does not mean that a causal (or functional) relationship exists between the variables; functional dependence is a special case of correlation. Even when a relationship is causal, correlation measures cannot indicate which of the two variables is the cause and which the effect. Moreover, any relationship found in psychological research is usually due to variables other than just the two in question: the interrelations of psychological characteristics are so complex that they are hardly ever produced by a single cause; they are determined by many causes.

By the closeness of the connection, the following types of correlation can be distinguished: complete, high, pronounced, partial, and the absence of correlation. These types are determined by the value of the correlation coefficient.

With complete correlation, the absolute value of the coefficient is equal to or very close to 1. In this case an obligatory interdependence between the variables is established; a functional dependence is likely here.

High correlation is established at an absolute coefficient value of 0.8–0.9; pronounced correlation at 0.6–0.7; partial correlation at 0.4–0.5.

Absolute values of the correlation coefficient below 0.4 indicate a very weak correlation and, as a rule, are not taken into account. The absence of correlation is stated at a coefficient value of 0.

In addition, in psychology, when assessing the closeness of a connection, the so-called "private" classification of correlations is used. It is oriented not toward the absolute value of the correlation coefficient but toward the significance level of that value for a given sample size. This classification is used in the statistical evaluation of hypotheses. With this approach it is assumed that the larger the sample, the lower the value of the correlation coefficient that can be accepted as evidence of reliable relationships, while for small samples even an absolutely large coefficient value may prove unreliable.

By direction, the following types of correlation are distinguished: positive (direct) and negative (inverse). A positive (direct) correlation is registered when the coefficient carries a plus sign: as the value of one variable increases, the other increases as well. A negative (inverse) correlation holds when the coefficient carries a minus sign; this means an inverse relationship: an increase in the value of one variable entails a decrease in the other.

By form, linear and curvilinear correlations are distinguished. With a linear connection, uniform changes in one variable correspond to uniform changes in the other. Speaking not only of correlations but also of functional dependences, such forms of dependence are called proportional. Strictly linear connections are a rare phenomenon in psychology. With a curvilinear connection, a uniform change in one characteristic is combined with a non-uniform change in the other; this situation is typical for psychology.

The linear correlation coefficient according to K. Pearson (r) is calculated by the formula:

r = ∑(x · y) / (N · σ_x · σ_y),

where x is the deviation of an individual value of X from the sample mean (M_x), y the deviation of an individual value of Y from the sample mean (M_y), σ_x the standard deviation for X, σ_y the standard deviation for Y, and N the number of pairs of values of X and Y.

The significance of the correlation coefficient is assessed using the table (see Statistical Appendix 4).

When comparing ordinal data, the rank correlation coefficient according to Ch. Spearman (R) is used:

R = 1 − 6∑d² / (N(N² − 1)),

where d is the difference between the ranks (ordinal places) of two quantities and N the number of compared pairs of values of the two variables (X and Y).

The significance of the correlation coefficient is assessed using the table (see Statistical Appendix 5).

The introduction of automated data-processing tools into scientific research makes it possible to determine quickly and accurately any quantitative characteristics of any data arrays. Various computer programs have been developed with which appropriate statistical analysis of practically any sample can be carried out. Of the mass of statistical techniques, the following are most widely used in psychology: 1) complex calculation of statistics; 2) correlation analysis; 3) analysis of variance; 4) regression analysis; 5) factor analysis; 6) taxonomic (cluster) analysis; 7) scaling. The characteristics of these methods can be found in the specialized literature (Statistical Methods in Pedagogy and Psychology by Glass J. and Stanley J. (Moscow, 1976), Mathematical Psychology by G. V. Sukhodolsky (St. Petersburg, 1997), Mathematical Methods of Psychological Research by A. D. Nasledov (St. Petersburg, 2005), etc.).


V. V. NIKANDROV

NON-EMPIRICAL METHODS OF PSYCHOLOGY

RECH

St. Petersburg 2003

BBK 88.5 N62

Printed by decision of the editorial and publishing council of St. Petersburg State University

Reviewers: Doctor of Psychology L. V. Kulikov, Candidate of Psychological Sciences Yu. I. Filimonenko. Nikandrov V. V. N62 Non-Empirical Methods of Psychology: A textbook. St. Petersburg: Rech, 2003. 53 p. The manual contains basic information about the methods of organizing psychological research, processing empirical material, and interpreting results, united under the name "non-empirical methods of psychology." It is addressed to students, graduate students, and other categories of students in psychological fields. BBK 88.5 ISBN 5-9268-0174-5 © V. V. Nikandrov, 2003 © Rech Publishing House, 2003 © P. V. Borozenets, cover design, 2003

Introduction 7
1. Organizational methods 11
1.1. Comparative method 11
1.2. Longitudinal method 12
1.3. Complex method 15
2. Data processing methods 16
2.1. Quantitative methods 18
2.1.1. Primary processing methods 18
2.1.2. Secondary processing methods 19
2.1.2.1. General understanding of secondary processing 19
2.1.2.2. Complex calculation of statistics 25
2.1.2.3. Correlation analysis 25
2.1.2.4. Analysis of variance 26
2.1.2.5. Factor analysis 26
2.1.2.6. Regression analysis 27
2.1.2.7. Taxonomic analysis 28
2.1.2.8. Scaling 28
2.2. Qualitative methods 38
2.2.1. Classification 38
2.2.2. Typology 40
2.2.3. Systematization 43
2.2.4. Periodization 43
2.2.5. Psychological casuistry 44

3. Interpretive methods 45

3.1. Genetic method 45
3.2. Structural method 46
3.3. Functional method 47
3.4. Complex method 48
3.5. System method 49
Literature 52

Introduction

Non-empirical methods of psychology are scientific research techniques of psychological work that lie outside the framework of the researcher's contact (direct or indirect) with the object of research. These techniques, first, help organize the obtaining of psychological information by empirical methods and, second, make it possible to transform that information into reliable scientific knowledge. As is known, to a first approximation any scientific research, including psychological research, passes through three stages: 1) preparatory; 2) main; 3) final. At the preparatory stage, the goals and objectives of the research are formulated, an orientation is made to the body of knowledge in the area, a program of action is drawn up, and organizational, material, and financial questions are resolved. At the main stage, the actual research process is carried out: the scientist, using special methods, comes into contact (direct or indirect) with the object under study and collects data about it. It is this stage that usually best reflects the specifics of the research: the reality being studied in the form of the object and subject under study, the area of knowledge, the type of research, and the methodological equipment. At the final stage, the data obtained are processed and converted into the desired result. The results are correlated with the stated goals, explained, and included in the existing system of knowledge in the area. These stages can be subdivided further, giving a more detailed scheme, analogues of which are given in one form or another in the scientific literature:

I. Preparatory stage:

1. Statement of the problem. 2. Advancing a hypothesis. 3. Planning the study.

II. Main (empirical) stage:

4. Data collection.

III. Final stage:

5. Data processing. 6. Interpretation of the results. 7. Conclusions and inclusion of the results in the system of knowledge.

Non-empirical methods are used at the first and third stages of a study, empirical methods at the second. In science there are many classifications of psychological methods, but most of them concern empirical methods. Non-empirical methods appear in only a few classifications, of which the most convenient are those built on the criterion of the stages of the psychological research process. Among them, the most successful and widely recognized is the classification of psychological methods proposed by B. G. Ananyev, who in turn relied on the classification of the Bulgarian scientist G. Pirov. It is believed that B. G. Ananyev "developed a classification corresponding to the modern level of science, which stimulated further research on this problem, central to the methodology of psychology." Ananyev's breakdown of the course of psychological research into stages, although it does not completely coincide with the one given above, is very close to it: A) organizational stage (planning); B) empirical stage (data collection); C) data processing; D) interpretation of the results. Slightly changing and supplementing B. G. Ananyev's classification, we obtain a detailed system of methods, which we recommend as a reference point when studying the toolkit of psychology:

I. Organizational methods (approaches).

1. Comparative. 2. Longitudinal. 3. Comprehensive.

II. Empirical methods.

1. Observational (observation): a) objective observation; b) self-observation (introspection). 2. Verbal-communicative methods: a) conversation; b) survey (interview and questionnaire). 3. Experimental methods: a) laboratory experiment; b) natural experiment; c) formative experiment. 4. Psychodiagnostic methods: a) psychodiagnostic tests; b) psychosemantic methods; c) psychomotor methods; d) methods of socio-psychological diagnostics of personality. 5. Psychotherapeutic methods. 6. Methods of studying the products of activity: a) reconstruction method; b) study of documents (archival method); c) graphology. 7. Biographical methods. 8. Psychophysiological methods: a) methods of studying the work of the autonomic nervous system; b) methods of studying the work of the somatic nervous system; c) methods of studying the work of the central nervous system. 9. Praximetric methods: a) general methods of studying individual movements and actions; b) special methods of studying labor operations and activities. 10. Modeling. 11. Specific methods of the branch psychological sciences.

III. Data processing methods:

1. Quantitative methods; 2. Qualitative methods.

IV. Interpretive methods (approaches):

1. Genetic; 2. Structural; 3. Functional; 4. Comprehensive; 5. Systemic.

The above classification does not claim to be exhaustive or strictly systematic. Following B. G. Ananyev, we can say that "the contradictions of modern methodology, methods and techniques of psychology are reflected quite deeply in the proposed classification." Nevertheless, it gives a general idea of the system of methods used in psychology, with designations and names well established in practice. So, based on the proposed classification, we have three groups of non-empirical methods: organizational, data-processing and interpretive. Let us look at them in turn.

    ORGANIZATIONAL METHODS

These methods should rather be called approaches, since they represent not so much a specific method of research as a procedural strategy. The choice of one or another method of organizing research is predetermined by its objectives. And the chosen approach, in turn, determines the set and order of application of specific methods for collecting data about the object and subject of study.

1.1. Comparative method

The comparative method consists in comparing different objects, or different aspects of a single object of study, at some point in time. The data obtained from these objects are compared with one another, which makes it possible to identify relationships between them. The approach allows one to study the spatial diversity, interrelations and evolution of mental phenomena. Diversity and interrelations are studied either by comparing different manifestations of the psyche in one object (a person, animal or group) at a certain moment in time, or by simultaneously comparing different people (animals, groups) with respect to some one type (or complex) of mental manifestations. For example, the dependence of reaction speed on the modality of the signal is studied on a single individual, while its dependence on gender, ethnic or age characteristics is studied on several individuals. Clearly, "simultaneity," like "a certain moment in time," is a relative concept here. Both are defined by the duration of the study, which may be measured in hours, days or even weeks, but which remains negligible compared with the life cycle of the object under study. The comparative method shows itself especially vividly in the evolutionary study of the psyche. Objects (and their indicators) corresponding to particular stages of phylogenesis are compared: primates, archanthropes and paleoanthropes are compared with modern humans, the data being supplied by zoopsychology, anthropology, paleopsychology, archeology, ethology and other sciences concerned with animals and the origin of man. The science that deals with such analysis and generalization is called comparative psychology. Outside the comparative method, the entire psychology of differences (differential psychology) is unthinkable. An interesting modification of the comparative method, common in developmental psychology, is the "cross-sectional method."
Cross-sections are a collection of data about a person at particular stages of his ontogenesis (infancy, childhood, old age, etc.), obtained in studies of the relevant populations. In generalized form, such data can serve as standards of the level of a person's mental development for a given age in a given population. The comparative method allows the use of any empirical method when collecting data about the object of study.

1.2. Longitudinal method

The longitudinal method (from Lat. longus - long) is the long-term and systematic study of one and the same object. Such prolonged tracking of an object (usually according to a program compiled in advance) makes it possible to identify the dynamics of its existence and to predict its further development. In psychology, longitudinal studies are widely used in the study of age dynamics, mainly in childhood. A specific form of implementation is the method of "longitudinal sections." Longitudinal sections are a collection of data about an individual over a certain period of his life. These periods may be measured in months, years, even decades. The result of the longitudinal method as a way of organizing a multi-year research cycle "is an individual monograph or a set of such monographs describing the course of mental development, covering a number of phases of periods of human life. A comparison of such individual monographs makes it possible to present quite fully the range of fluctuations of age norms and the moments of transition from one phase of development to another." However, constructing a series of functional tests and experimental methods, periodically repeated on the same person, is extremely difficult, since the subject's adaptation to the experimental conditions and special training can influence the picture of development. In addition, the narrow base of such a study, limited to a small number of selected objects, gives no grounds for constructing age-related syndromes, which is successfully achieved through the comparative method of "cross-sections." It is therefore advisable to combine the longitudinal and comparative methods whenever possible. J. Shvantsara and V. Smekal offer the following classification of types of longitudinal research: A. By the duration of the study: 1. Short-term observation; 2. Long-term observation; 3. Accelerated observation. B. By the direction of the study: 1. Retrospective observation; 2.
Prospective observation; 3. Combined observation. C. By the methods used: 1. True longitudinal observation; 2. Mixed observation; 3. Pseudo-longitudinal observation. Short-term observation is recommended for studying stages of ontogenesis that are rich in changes and developmental leaps, for example infancy, or the period of maturation in adolescence and youth. If the purpose of the study is to examine the dynamics of large-scale periods of development and the relationships between individual periods and individual changes, long-term longitudinal observation is recommended. The accelerated variant is intended for studying long periods of development in a short time. It is used mainly in child psychology: several age groups are observed at once. The age range of each group depends on the purpose of the study; in the practice of observing children it is usually 3-4 years. Adjacent groups overlap one another by one or two years. Parallel observation of a number of such groups makes it possible to link the data of all the groups into a single cycle covering the whole set of groups from the youngest to the oldest. Thus a study conducted over, say, 2-3 years can provide a longitudinal slice covering 10-20 years of ontogenesis. The retrospective form allows one to trace the development of a person, or of his individual qualities, in the past. It is carried out by collecting biographical information and analyzing the products of activity. For children these are primarily autobiographical conversations, the testimony of parents, and anamnesis data. The prospective method consists of ongoing observation of the development of a person (animal, group) up to a certain age. A combined study assumes the inclusion of retrospective elements in a prospective longitudinal study. True longitudinal observation is the classic long-term observation of a single object.
Mixed observation is longitudinal research in which true longitudinal observation is supplemented at certain stages by cross-sections providing comparative information about other objects of the same type as the one being studied. This method is advantageous when observing groups that "melt away" over time, i.e. whose composition shrinks from period to period. Pseudo-longitudinal research consists of obtaining "norms" for different age groups and ordering these indicators chronologically. A norm is obtained through cross-sections of a group, i.e. through averaged data for each group. Here the inadmissibility of opposing transverse and longitudinal sections is clearly demonstrated, since the latter, as we see, can be obtained through a sequential (chronological) series of transverse sections. Incidentally, it is in precisely this way that "most of the hitherto known norms of ontogenetic psychology were obtained."

1.3. Complex method

The complex method (approach) involves organizing a comprehensive study of an object. In essence, this is, as a rule, an interdisciplinary study devoted to an object common to several sciences: the object is one, but the subjects of research are different.

    DATA PROCESSING METHODS

Data processing is aimed at solving the following problems: 1) organizing the source material, transforming a set of data into a coherent system of information on the basis of which further description and explanation of the object and subject under study is possible; 2) detecting and eliminating errors, shortcomings and gaps in the information; 3) revealing trends, patterns and connections hidden from direct perception; 4) discovering new facts that were not expected and were not noticed during the empirical stage; 5) determining the reliability and accuracy of the collected data and obtaining scientifically grounded results on their basis. Data processing has quantitative and qualitative aspects. Quantitative processing is the manipulation of the measured characteristics of the object (objects) under study, of its properties "objectified" in external manifestation. Qualitative processing is a method of preliminary penetration into the essence of an object by identifying its non-measurable properties on the basis of quantitative data. Quantitative processing is aimed mainly at a formal, external study of an object, qualitative processing mainly at a meaningful, internal study of it. In quantitative research the analytical component of cognition dominates, which is reflected in the names of the quantitative methods for processing empirical material, which contain the category "analysis": correlation analysis, factor analysis, etc. The main result of quantitative processing is an ordered set of "external" indicators of the object (objects). Quantitative processing is carried out using mathematical and statistical methods. In qualitative processing the synthetic component of cognition dominates, and in this synthesis unification prevails, while generalization is present to a lesser extent. Generalization is the prerogative of the next, interpretive stage of the research process.
In the qualitative data-processing phase the main task is not yet to reveal the essence of the phenomenon being studied, but only to present the information about it appropriately, ensuring its further theoretical analysis. Typically, the result of qualitative processing is an integrated representation of the set of properties of an object, or of a set of objects, in the form of classifications and typologies. Qualitative processing appeals largely to the methods of logic. The contrast between qualitative and quantitative processing (and, consequently, between the corresponding methods) is rather arbitrary: they form an organic whole. Quantitative analysis without subsequent qualitative processing is meaningless, since by itself it cannot transform empirical data into a system of knowledge. And a qualitative study of an object without basic quantitative data is unthinkable in scientific cognition; without quantitative data, qualitative cognition is a purely speculative procedure uncharacteristic of modern science. In philosophy, as is well known, the categories "quality" and "quantity" are combined in the category "measure." The unity of the quantitative and qualitative understanding of empirical material appears clearly in many data-processing methods: factor and taxonomic analysis, scaling, classification, etc. But since the division into quantitative and qualitative characteristics, quantitative and qualitative methods, and quantitative and qualitative descriptions is traditional in science, we shall treat the quantitative and qualitative aspects of data processing as independent phases of a single research stage, to which particular quantitative and qualitative methods correspond. Qualitative processing leads naturally to the description and explanation of the phenomena under study, which constitutes the next level of their investigation, carried out at the stage of interpreting the results. Quantitative processing belongs entirely to the data-processing stage.

2.1. Quantitative methods

The quantitative data processing process has two phases: primary and secondary.

2.1.1. Primary processing methods

Primary processing aims at organizing the information about the object and subject of study obtained at the empirical stage. At this stage the "raw" information is grouped according to particular criteria, entered into summary tables, and presented graphically for clarity. These manipulations make it possible, firstly, to detect and eliminate errors made when recording the data and, secondly, to identify and remove from the general array absurd values arising from violations of the examination procedure, subjects' failure to follow instructions, and so on. In addition, the initially processed data, presented in a form convenient for review, give the researcher a first approximation of the character of the whole data set: its homogeneity or heterogeneity, compactness or scatter, clarity or blurriness, etc. This information is easily read from visual forms of data presentation and is associated with the concept of "data distribution." The main methods of primary processing include tabulation, i.e. the presentation of quantitative information in tabular form, and the construction of diagrams (Fig. 1), histograms (Fig. 2), distribution polygons (Fig. 3) and distribution curves (Fig. 4). Diagrams reflect the distribution of discrete data; the other graphic forms represent the distribution of continuous data. It is easy to pass from a histogram to a frequency distribution polygon, and from the latter to a distribution curve. A frequency polygon is constructed by connecting the upper points of the central axes of all the sections of the histogram with straight segments. If instead the vertices of the sections are connected by smooth curved lines, a distribution curve of the primary results is obtained. The transition from a histogram to a distribution curve makes it possible, by interpolation, to find values of the variable under study that were not obtained in the experiment.
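Tabulation and a text histogram of this kind can be sketched in a few lines of Python. The scores below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical "raw" scores from an empirical study (illustrative data only)
scores = [3, 5, 5, 6, 6, 6, 7, 7, 8, 9]

# Tabulation: frequency of each observed value
freq = Counter(scores)

# A minimal text histogram: one '#' per observation
for value in sorted(freq):
    print(f"{value:>2} | {'#' * freq[value]}")
```

Such a table already reveals the shape of the distribution (here, a single peak at 6) before any statistics are computed.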

2.1.2. Secondary processing methods

2.1.2.1. The concept of secondary processing

Secondary processing consists mainly in the statistical analysis of the results of primary processing. Strictly speaking, tabulation and the plotting of graphs are also statistical processing, which, together with the calculation of measures of central tendency and dispersion, belongs to one branch of statistics, descriptive statistics. Another branch, inductive statistics, checks how well the sample data correspond to the whole population, i.e. it solves the problem of the representativeness of the results and of the possibility of passing from particular knowledge to general knowledge. A third large branch, correlation statistics, identifies connections between phenomena. In general, one must understand that "statistics is not mathematics, but, first of all, a way of thinking, and to apply it one need only have a little common sense and know the basics of mathematics." Statistical analysis of the whole set of data obtained in a study makes it possible to characterize it in an extremely compressed form, since it answers three main questions: 1) what value is most typical for the sample? 2) is the spread of the data relative to this typical value large, i.e. what is the "fuzziness" of the data? 3) is there a connection between individual data in the existing population, and what are the nature and strength of these connections? The answers to these questions are given by certain statistical indicators of the sample under study. To solve the first question, measures of central tendency (or localization) are calculated; the second, measures of variability (or dispersion, scatter); the third, measures of connection (or correlation). These statistical indicators are applicable to quantitative data (ordinal, interval, proportional). Measures of central tendency (m.c.t.) are the values around which the rest of the data are grouped.
These values are, as it were, indicators generalizing the whole sample, which, firstly, makes it possible to judge the whole sample by them and, secondly, makes it possible to compare different samples and different series with one another. Measures of central tendency include: the arithmetic mean, the median, the mode, the geometric mean and the harmonic mean. In psychology the first three are usually used. The arithmetic mean (M) is the result of dividing the sum of all values (X) by their number (N): M = ΣX / N. The median (Me) is the value above and below which the number of values is the same, i.e. the central value in a sequentially ordered series of data. Examples: 3, 5, 7, 9, 11, 13, 15; Me = 9. 3, 5, 7, 9, 11, 13, 15, 17; Me = 10. It is clear from the examples that the median need not coincide with an actual measurement; it is a point on the scale. Coincidence occurs with an odd number of values on the scale, non-coincidence with an even number. The mode (Mo) is the value that occurs most frequently in the sample, i.e. the value with the highest frequency. Example: 2, 6, 6, 8, 9, 9, 9, 10; Mo = 9. If all values in a group occur equally often, the group is considered to have no mode (for example: 1, 1, 5, 5, 8, 8). If two adjacent values have the same frequency and it is greater than the frequency of any other value, the mode is the average of these two values (for example: 1, 2, 2, 2, 4, 4, 4, 5, 5, 7; Mo = 3). If the same holds for two non-adjacent values, there are two modes and the group of scores is bimodal (for example: 0, 1, 1, 1, 2, 3, 4, 4, 4, 7; Mo = 1 and 4). The arithmetic mean is usually used when the greatest accuracy is sought and when the standard deviation will later have to be calculated; the median, when the series contains "atypical" data that sharply affect the mean (for example: 1, 3, 5, 7, 9, 26, 13); the mode, when high accuracy is not needed but speed of determining the m.c.t. is important.
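For readers who want to check the examples above, the three measures can be sketched in Python as follows (the function names are ours; `mode` handles only the single-mode case):

```python
from collections import Counter

def mean(xs):
    # Arithmetic mean: M = sum of all values divided by their number
    return sum(xs) / len(xs)

def median(xs):
    # Me: central value of the ordered series; the average of the two
    # central values when the number of values is even
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

def mode(xs):
    # Mo: the most frequent value (single-mode case only)
    counts = Counter(xs)
    return max(counts, key=counts.get)

# The examples from the text
print(mean([3, 5, 7, 9, 11, 13, 15]))       # M = 9
print(median([3, 5, 7, 9, 11, 13, 15]))     # Me = 9
print(median([3, 5, 7, 9, 11, 13, 15, 17])) # Me = 10
print(mode([2, 6, 6, 8, 9, 9, 9, 10]))      # Mo = 9
```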
Measures of variability (dispersion, scatter) are statistical indicators characterizing the differences between the individual values of a sample. They make it possible to judge the degree of homogeneity of the resulting set and its compactness, and indirectly the reliability of the data obtained and of the results that follow from them. The indicators most used in psychological research are: the range, the mean deviation, the variance, the standard deviation and the semiquartile deviation. The range (R) is the interval between the maximum and minimum values of the characteristic. It is determined easily and quickly, but is sensitive to chance, especially with a small number of data. Examples: (0, 2, 3, 5, 8; R = 8); (-0.2, 1.0, 1.4, 2.0; R = 2.2). The mean deviation (MD) is the arithmetic mean of the differences (in absolute value) between each value in the sample and its mean: MD = Σ|d| / N, where d = X - M, M is the sample mean, X is a specific value, and N is the number of values. The set of all the specific deviations from the mean characterizes the variability of the data, but if they are not taken in absolute value their sum equals zero and we receive no information about their variability. MD shows the degree of crowding of the data around the mean. Incidentally, when determining this characteristic of a sample, other measures of central tendency (the mode or the median) are sometimes taken instead of the mean (M). Variance (D) (from Lat. dispersus - scattered). Another way of measuring the degree of crowding of the data avoids the zero sum of the specific differences (d = X - M) not through their absolute values but through their squares. In this case the so-called variance is obtained: D = Σd² / N for large samples (N > 30); D = Σd² / (N - 1) for small samples (N < 30). Standard deviation (δ).
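These four indicators can be computed together. The sketch below follows the formulas just given (the function name and the `small_sample` switch are ours):

```python
import math

def variability(xs, small_sample=None):
    """Return range, mean deviation, variance and standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    # Range: interval between the maximum and minimum values
    r = max(xs) - min(xs)
    # Mean deviation: MD = sum of |X - M| over N
    md = sum(abs(x - m) for x in xs) / n
    # Variance: sum of d^2 over N for large samples, over N - 1 for small (N < 30)
    if small_sample is None:
        small_sample = n < 30
    denom = (n - 1) if small_sample else n
    var = sum((x - m) ** 2 for x in xs) / denom
    # Standard deviation: positive square root of the variance
    return r, md, var, math.sqrt(var)

r, md, var, sd = variability([0, 2, 3, 5, 8])
print(r)  # 8, as in the example from the text
```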
Because the individual deviations d are squared when calculating the variance, the resulting value is far removed from the initial deviations and therefore does not give a clear idea of them. To avoid this and obtain a characteristic comparable with the mean deviation, the inverse mathematical operation is performed: the square root is extracted from the variance. Its positive value is taken as the measure of variability called the root-mean-square, or standard, deviation. MD, D and δ are applicable to interval and proportional data. For ordinal data the measure of variability usually taken is the semiquartile deviation (Q), also called the semiquartile coefficient or the semi-interquartile range. This indicator is calculated as follows. The whole area of the data distribution is divided into four equal parts. If observations are counted starting from the minimum value on the measuring scale (on graphs, polygons and histograms the count usually runs from left to right), then the first quarter of the scale is called the first quartile, and the point separating it from the rest of the scale is denoted Q1. The second 25% of the distribution is the second quartile, and the corresponding point on the scale is Q2. The point Q3 separates the third quarter of the distribution from the fourth. The semiquartile coefficient is defined as half the interval between the first and third quartiles: Q = (Q3 - Q1) / 2. It is clear that with a symmetric distribution the point Q2 coincides with the median (and hence with the mean), and then the coefficient Q can be calculated to characterize the spread of the data relative to the middle of the distribution. With an asymmetric distribution this is not enough, and coefficients for the left and right sections are calculated additionally: Qleft = (Q2 - Q1) / 2; Qright = (Q3 - Q2) / 2.
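A sketch of the semiquartile deviation in Python is given below. Note that several conventions exist for locating quartile points on a discrete series; linear interpolation, used here, is one common choice, and the function names are ours:

```python
def quartile_point(sorted_xs, fraction):
    # Locate a quartile boundary by linear interpolation between
    # neighbouring ordered values (one of several common conventions)
    idx = fraction * (len(sorted_xs) - 1)
    lo = int(idx)
    frac = idx - lo
    if lo + 1 < len(sorted_xs):
        return sorted_xs[lo] * (1 - frac) + sorted_xs[lo + 1] * frac
    return sorted_xs[lo]

def semiquartile(xs):
    s = sorted(xs)
    q1 = quartile_point(s, 0.25)
    q3 = quartile_point(s, 0.75)
    # Q = (Q3 - Q1) / 2
    return (q3 - q1) / 2

print(semiquartile([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # 2.0 for this symmetric series
```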
Measures of connection. The preceding indicators, called statistics, characterize a set of data with respect to some one changing characteristic, called a variable quantity or simply a "variable." Measures of connection reveal the relations between two variables or between two samples. These connections, or correlations (from Lat. correlatio - correlation, relationship), are determined by calculating correlation coefficients (R), provided the variables are linearly related to each other. It is believed that most mental phenomena are subject to linear dependencies, which has predetermined the widespread use of the methods of correlation analysis. However, the presence of a correlation does not mean that a causal (or functional) connection exists between the variables. Functional dependence is a special case of correlation. Even if a connection is causal, correlation indicators cannot show which of the two variables is the cause and which the effect. In addition, any connection discovered in psychology exists, as a rule, thanks to other variables as well, and not only the two under consideration. Moreover, the interrelations of psychological characteristics are so complex that their determination by a single cause is hardly tenable; they are determined by many causes. Types of correlation. I. By the closeness of the connection: 1) complete (perfect), R = 1: an obligatory interdependence between the variables is stated, and here one can already speak of functional dependence; 2) no connection identified: R = 0; 3) partial: 0 < R < 1. II. By the form of the relationship: 1) Linear; 2) Curvilinear.

This is a relationship in which a uniform change in one characteristic is combined with a non-uniform change in the other. This situation is typical for psychology. Correlation coefficient formulas. When comparing ordinal data, the rank correlation coefficient of Ch. Spearman (ρ) is applied: ρ = 1 - 6Σd² / (N(N² - 1)), where d is the difference between the ranks (ordinal places) of two quantities and N is the number of compared pairs of values of the two variables (X and Y). When comparing metric data, the product-moment correlation coefficient of K. Pearson (r) is used: r = Σxy / (N·σx·σy), where x is the deviation of an individual value of X from the sample mean (Mx), y is the same for Y, σx is the standard deviation for X, σy is the same for Y, and N is the number of pairs of values of X and Y. The introduction of computer technology into scientific research makes it possible to determine any quantitative characteristics of any data array quickly and accurately. Various computer programs have been developed with which appropriate statistical analysis of practically any sample can be carried out. Of the mass of statistical techniques, the following are most widely used in psychology: 1) complex calculation of statistics; 2) correlation analysis; 3) analysis of variance; 4) regression analysis; 5) factor analysis; 6) taxonomic (cluster) analysis; 7) scaling.
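Both coefficients translate directly into code. The sketch below follows the two formulas just given (function names are ours; the Spearman variant assumes no tied ranks):

```python
import math

def pearson_r(X, Y):
    # r = Σxy / (N·σx·σy), where x and y are deviations from the means
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in X) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in Y) / n)
    return sum((a - mx) * (b - my) for a, b in zip(X, Y)) / (n * sx * sy)

def spearman_rho(X, Y):
    # ρ = 1 - 6Σd² / (N(N² - 1)), d = rank difference (no ties assumed)
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(X), ranks(Y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(X)
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman_rho([1, 2, 3], [3, 2, 1]))  # -1.0: perfect negative rank correlation
```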

2.1.2.2. Comprehensive statistics calculation

Using standard programs, both the main sets of statistics presented above and additional ones not included in our review are calculated. Sometimes the researcher stops at obtaining these characteristics, but more often this set of statistics is only one block within a wider set of indicators of the sample under study obtained with more complex programs, including programs that implement the methods of statistical analysis described below.

2.1.2.3. Correlation analysis

Correlation analysis reduces to calculating correlation coefficients for a wide variety of relationships between variables. The relationships are specified by the researcher, and the variables are treated as equivalent, i.e. what is cause and what is effect cannot be established through correlation. In addition to the closeness and direction of connections, the method makes it possible to establish the form of a connection (linearity, non-linearity). It should be noted that non-linear connections cannot be analyzed by the mathematical and statistical methods generally accepted in psychology. Data belonging to non-linear zones (for example, at points where connections break down, or at places of abrupt change) are characterized through meaningful descriptions, refraining from formal quantitative presentation. Sometimes non-parametric mathematical and statistical methods and models can be used to describe non-linear phenomena in psychology; for example, mathematical catastrophe theory is applied.

2.1.2.4. Analysis of variance

Unlike correlation analysis, this method makes it possible to identify not only relationships but also dependencies between variables, i.e. the influence of various factors on the characteristic under study. This influence is assessed through ratios of variances. Changes in the characteristic under study (its variability) can be caused by the action of individual factors known to the researcher, by their interaction, and by the effects of unknown factors. Analysis of variance makes it possible to detect and evaluate the contribution of each of these influences to the overall variability of the trait under study. The method allows one quickly to narrow the field of conditions influencing the phenomenon under study, highlighting the most significant of them. Thus, analysis of variance is "the study of the influence of variable factors on the variable under study by means of variances." Depending on the number of influencing variables, one-way, two-way and multivariate analysis are distinguished, and depending on the nature of those variables, analysis with fixed, random or mixed effects. Analysis of variance is widely used in experimental design.
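The core idea, partitioning total variability into between-group and within-group components, can be illustrated by a minimal one-way case (a sketch only; the function name and data layout are ours, and the groups must have within-group variability):

```python
def one_way_anova_F(groups):
    # Partition of variability: between-group vs within-group sums of squares
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    # F is the ratio of the two variance estimates; a large F suggests
    # the factor distinguishing the groups matters
    return (ss_between / df_between) / (ss_within / df_within)

print(one_way_anova_F([[1, 2], [3, 4]]))  # 8.0
```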

2.1.2.5. Factor analysis

The method makes it possible to reduce the dimensionality of the data space, i.e. to reduce in a well-grounded way the number of measured characteristics (variables) by combining them into aggregates that act as integral units characterizing the object under study. These composite units are called factors, and they must be distinguished from the factors of analysis of variance, which are individual characteristics (variables). It is believed that it is the totality of characteristics in certain combinations that can characterize a mental phenomenon or the pattern of its development, whereas individually, or in other combinations, these characteristics provide no information. As a rule, factors are not visible to the eye and are hidden from direct observation. Factor analysis is especially productive in preliminary research, when hidden patterns in the area under study need to be identified to a first approximation. The basis of the analysis is the correlation matrix, i.e. the table of correlation coefficients of each characteristic with all the others (the "all with all" principle). Depending on the number of factors in the correlation matrix, single-factor (after Spearman), bi-factor (after Holzinger) and multifactor (after Thurstone) analyses are distinguished. By the nature of the relations between factors, the method is divided into analysis with orthogonal (independent) and with oblique (dependent) factors. There are other varieties of the method as well. The very complex mathematical and logical apparatus of factor analysis often makes it difficult to choose a variant of the method adequate to the research tasks. Nevertheless, its popularity in the scientific world grows every year.

2.1.2.6. Regression analysis

The method makes it possible to study the dependence of the mean value of one quantity on variations in another quantity (or other quantities). The specificity of the method lies in the fact that the quantities considered (or at least one of them) are random in nature. The description of the dependence then divides into two tasks: 1) identifying the general form of the dependence and 2) refining that form by calculating estimates of its parameters. There are no standard methods for solving the first task; here a visual analysis of the correlation matrix is carried out in combination with a qualitative analysis of the nature of the quantities (variables) under study, which demands high qualifications and erudition of the researcher. The second task is essentially the finding of an approximating curve; most often this approximation is carried out by the mathematical method of least squares. The idea of the method belongs to F. Galton, who noticed that very tall parents had somewhat shorter children, while very short parents had taller children. He called this pattern regression.
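For the simplest case of a straight line, the least-squares fit has a closed form. The sketch below (function name ours) finds the intercept a and slope b of y = a + b·x that minimize the sum of squared residuals:

```python
def least_squares_line(X, Y):
    # Fit y = a + b*x by minimizing the sum of squared residuals
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    # Slope: covariance of X and Y over the variance of X
    b = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
    # Intercept: the line passes through the point of means (mx, my)
    a = my - b * mx
    return a, b

a, b = least_squares_line([1, 2, 3], [2, 4, 6])
print(a, b)  # intercept 0.0, slope 2.0
```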

2.1.2.7. Taxonomic analysis

The method is a mathematical technique for grouping data into classes (taxa, clusters) in such a way that objects within one class are more homogeneous in some respect than objects belonging to different classes. As a result, it becomes possible to determine, in one metric or another, the distance between the objects under study and to give an ordered description of their relationships at a quantitative level. Because criteria for the effectiveness and admissibility of cluster procedures are insufficiently developed, this method is usually used in combination with other methods of quantitative data analysis. On the other hand, taxonomic analysis itself is used as additional insurance of the reliability of results obtained by other quantitative methods, in particular factor analysis. The essence of cluster analysis allows us to consider it a method that explicitly combines the quantitative processing of data with their qualitative analysis. Therefore, it is apparently not legitimate to classify it unambiguously as a quantitative method. But since the procedure of the method is predominantly mathematical and its results can be presented numerically, we shall class the method as a whole as quantitative.
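As an illustration of the grouping idea (a minimal greedy single-linkage sketch on invented two-score data, not any particular published algorithm): each object joins the first taxon containing a sufficiently close object, otherwise it founds a new taxon.

```python
def euclid(p, q):
    """Euclidean distance between two points of equal dimension."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def cluster(points, threshold):
    """Greedy single-linkage grouping: a point joins the first cluster
    containing a point closer than `threshold`, else starts a new taxon."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(euclid(p, q) < threshold for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Hypothetical subjects described by two test scores each
subjects = [(1.0, 1.1), (1.2, 0.9), (5.0, 5.2), (5.1, 4.8), (1.1, 1.0)]
taxa = cluster(subjects, threshold=1.0)
# Two taxa emerge: low scorers and high scorers
```

The choice of threshold here plays the role of the admissibility criterion the text mentions as underdeveloped: different thresholds yield different taxonomies of the same data.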

2.1.2.8. Scaling

Scaling, to an even greater extent than taxonomic analysis, combines the features of quantitative and qualitative study of reality. The quantitative aspect of scaling is that its procedure in the vast majority of cases includes measurement and the numerical representation of data. The qualitative aspect is expressed in the fact that, first, it allows one to manipulate not only quantitative data but also data without common units of measurement and, second, it includes elements of qualitative methods (classification, typologization, systematization). Another fundamental feature of scaling, which makes it difficult to determine its place in the general system of scientific methods, is the combination of data collection and data processing procedures. We can even speak of the unity of empirical and analytical procedures in scaling. Not only is it difficult in a specific study to indicate the sequence and separation of these procedures (they are often performed simultaneously and jointly), but in theoretical terms as well no stage hierarchy can be detected (it is impossible to say what is primary and what is secondary). A third point that does not allow scaling to be unambiguously assigned to one or another group of methods is its organic "growth" into specific areas of knowledge and its acquisition, alongside the features of a general scientific method, of highly specific features. If other methods of general scientific significance (for example, observation or experiment) can be presented quite easily both in general form and in specific modifications, then scaling is very difficult to characterize at the general level without losing essential information. The reason is obvious: the combination of empirical procedures with data processing in scaling. Empirical work is concrete, mathematics is abstract; therefore the fusion of the general principles of mathematical analysis with specific methods of data collection produces the indicated effect.
For the same reason, the scientific origins of scaling have not been precisely determined: several sciences lay claim to the title of its "parent." Among them is psychology, where such outstanding scientists as L. Thurstone, S. Stevens, W. Torgerson, and H. Piéron worked on the theory and practice of scaling. Taking all these factors into account, we nevertheless place scaling in the category of quantitative methods of data processing, since in the practice of psychological research scaling occurs in two situations: the first is the construction of scales, the second their use. In the case of construction, all the mentioned features of scaling are fully manifested. In the case of use, they fade into the background, since using ready-made scales (for example, "standard" scales for testing) simply involves comparing with them the indicators obtained at the data collection stage. Thus, here the psychologist merely uses the fruits of scaling, and only at the stages following data collection. This situation is a common phenomenon in psychology. In addition, the formal construction of scales is, as a rule, carried out outside the scope of direct measurement and data collection about an object; that is, the main scale-forming actions of a mathematical nature are performed after the data have been collected, which makes them comparable to the stage of data processing. In the most general sense, scaling is a way of understanding the world through the modeling of reality by means of formal (primarily numerical) systems. This method is used in almost all areas of scientific knowledge (the natural, exact, human, social, and technical sciences) and has wide applied significance. The most rigorous definition seems to be the following: scaling is the process of mapping empirical sets onto formal ones according to given rules. An empirical set is understood as any set of real objects (people, animals, phenomena, properties, processes, events) that stand in certain relations to one another.
These relations can be represented by four types (empirical operations): 1) equality (equal - not equal); 2) rank order (more - less); 3) equality of intervals; 4) equality of ratios. By the nature of the empirical set, scaling is divided into two types: physical and psychological. In the first case, the objective (physical) characteristics of objects are subject to scaling; in the second, the subjective (psychological) ones. A formal set is understood as an arbitrary set of symbols (signs, numbers) interconnected by certain relations which, corresponding to the empirical relations, are described by four types of formal (mathematical) operations: 1) "equal - not equal" (= ≠); 2) "more - less" (> <); 3) "addition - subtraction" (+ -); 4) "multiplication - division" (× :). A mandatory condition of scaling is a one-to-one correspondence between the elements of the empirical and formal sets. This means that each element of the first set must correspond to exactly one element of the second, and vice versa. A one-to-one correspondence of the types of relations between the elements of the two sets (isomorphism of structures) is not required. If the structures are isomorphic, so-called direct (subjective) scaling is performed; in the absence of isomorphism, indirect (objective) scaling. The result of scaling is the construction of scales (Latin scala, "ladder"), i.e., sign (numerical) models of the reality under study with the help of which this reality can be measured. Thus, scales are measuring instruments. A general idea of the whole variety of scales can be obtained from works that give their classification system and brief descriptions of each type of scale. The relations between the elements of the empirical set and the corresponding admissible mathematical operations (admissible transformations) determine the level of scaling and the type of the resulting scale (according to the classification of S. Stevens).
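The idea of admissible transformations behind Stevens' classification can be sketched in a few lines of code (a minimal illustration with helper names of our own invention, not a standard implementation): an ordinal scale survives any strictly monotone transformation, while a ratio scale survives only multiplication by a positive constant.

```python
def preserves_order(values, transform):
    """Ordinal level: admissible transformations are strictly monotone."""
    t = [transform(v) for v in values]
    # Check adjacent pairs of an already ordered series
    return all((a < b) == (x < y)
               for a, b, x, y in zip(values, values[1:], t, t[1:]))

def preserves_ratios(values, transform):
    """Ratio level: admissible transformations multiply by a positive constant."""
    t = [transform(v) for v in values]
    base = t[0] / values[0]
    return all(abs(tv / v - base) < 1e-9 for v, tv in zip(values, t))

scores = [1.0, 2.0, 4.0, 8.0]   # hypothetical ratio-scale values
squared = lambda v: v ** 2      # monotone: keeps order, breaks intervals
shifted = lambda v: v + 1.0     # affine shift: keeps order, breaks ratios
scaled = lambda v: 3.0 * v      # similarity transform: keeps ratios
```

An interval-level check is omitted here; it would test invariance under affine transformations v → a·v + b with a > 0.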
The first, simplest type of relation (= ≠) corresponds to the least informative nominal scales; the second (> <) to ordinal scales; the third (+ -) to interval scales; the fourth (× :) to the most informative ratio scales. The process of psychological scaling can be conditionally divided into two main stages: the empirical stage, at which data are collected about the empirical set (in this case, about the set of psychological characteristics of the objects or phenomena under study), and the formalization stage, i.e., the mathematical and statistical processing of the data obtained at the first stage. The features of each stage determine the methodological techniques for the specific implementation of scaling. Depending on the objects of study, psychological scaling comes in two varieties: psychophysical and psychometric. Psychophysical scaling consists in constructing scales for measuring the subjective (psychological) characteristics of objects (phenomena) that have physical correlates with corresponding physical units of measurement. For example, the subjective characteristics of sound (loudness, pitch, timbre) correspond to physical parameters of sound vibrations: amplitude (in decibels), frequency (in hertz), spectrum (in terms of component tones and the envelope). Thus, psychophysical scaling makes it possible to identify the relationship between the magnitude of physical stimulation and the mental reaction, as well as to express this reaction in objective units of measurement. As a result, indirect and direct scales of all levels of measurement are obtained: nominal, ordinal, interval, and ratio scales. Psychometric scaling consists in constructing scales for measuring the subjective characteristics of objects (phenomena) that have no physical correlates: for example, personality characteristics, the popularity of artists, team cohesion, the expressiveness of images, etc. Psychometric scaling is implemented using indirect (objective) scaling methods.
As a result, judgment scales are obtained which, according to the typology of admissible transformations, usually belong to ordinal scales, less often to interval scales. In the latter case, the units of measurement are indicators of the variability of the respondents' judgments (answers, assessments). The most characteristic and common psychometric scales are rating scales and the attitude scales based on them. Psychometric scaling underlies the development of most psychological tests, as well as measurement methods in social psychology (sociometric methods) and in applied psychological disciplines. Since the judgments underlying the psychometric scaling procedure can also be applied to physical sensory stimulation, these procedures are applicable to identifying psychophysical dependencies as well, but in this case the resulting scales will not have objective units of measurement. Both physical and psychological scaling can be one-dimensional or multidimensional. One-dimensional scaling is the process of mapping an empirical set onto a formal set according to a single criterion. The resulting one-dimensional scales reflect either relations between one-dimensional empirical objects (or identical properties of multidimensional objects) or changes in one property of a multidimensional object. One-dimensional scaling is implemented using both direct (subjective) and indirect (objective) scaling methods. Multidimensional scaling is understood as the process of mapping an empirical set onto a formal set according to several criteria simultaneously. Multidimensional scales reflect either relations between multidimensional objects or simultaneous changes in several characteristics of one object. The process of multidimensional scaling, in contrast to one-dimensional scaling, is characterized by the greater labor intensity of the second stage, i.e., data formalization.
In this regard, a powerful statistical and mathematical apparatus is used, for example, cluster or factor analysis, which is an integral part of multidimensional scaling methods. The study of multidimensional scaling problems is associated with the names of Richardson and Torgerson, who proposed its first models. Shepard initiated the development of non-metric multidimensional scaling methods. The most widespread and first theoretically substantiated multidimensional scaling algorithm was proposed by Kruskal. M. Davison summarized the information on multidimensional scaling. The specifics of multidimensional scaling in psychology are reflected in the work of G. V. Paramei. Let us expand on the previously mentioned concepts of "indirect" and "direct" scaling. Indirect, or objective, scaling is the process of mapping an empirical set onto a formal one when the structures of these sets do not correspond (lack isomorphism). In psychology, this discrepancy rests on Fechner's first postulate about the impossibility of a direct subjective assessment of the magnitude of one's sensations. To quantify sensations, external (indirect) units of measurement are used, based on various assessments by the subjects: just noticeable differences, reaction time (RT), discrimination variance, the spread of categorical assessments. Indirect psychological scales, according to the methods of their construction, initial assumptions, and units of measurement, form several groups, the main ones being: 1) accumulation scales, or logarithmic scales; 2) scales based on the measurement of reaction time; 3) judgment scales (comparative and categorical). The analytical expressions of these scales are given the status of laws, whose names are associated with their authors: 1) the Weber-Fechner logarithmic law; 2) Piéron's law (for a simple sensorimotor reaction); 3) Thurstone's law of comparative judgment; and 4) Torgerson's law of categorical judgment.
Judgment scales have the greatest applied potential. They allow any mental phenomena to be measured, implement both psychophysical and psychometric scaling, and provide the possibility of multidimensional scaling. According to the typology of admissible transformations, indirect scales are mainly represented by ordinal and interval scales. Direct, or subjective, scaling is the process of mapping an empirical set onto a formal one with a one-to-one correspondence (isomorphism) between the structures of these sets. In psychology, this correspondence rests on the assumption that a direct subjective assessment of the magnitude of one's sensations is possible (the denial of Fechner's first postulate). Subjective scaling is implemented using procedures that determine how many times (or by how much) the sensation caused by one stimulus is greater or less than the sensation caused by another. If such a comparison is made for sensations of different modalities, we speak of cross-modal subjective scaling. Direct scales, by the method of their construction, form two main groups: 1) scales based on the determination of sensory ratios; 2) scales based on the determination of stimulus magnitudes. The second option opens the way to multidimensional scaling. A significant portion of direct scales is well approximated by a power function, which S. Stevens proved on a large amount of empirical material; the analytical expression of direct scales is named after him - Stevens' power law. To quantify sensations in subjective scaling, psychological units of measurement are used, specialized for particular modalities and experimental conditions. Many of these units have generally accepted names: "sones" for loudness, "brils" for brightness, "gusts" for taste, "vegs" for heaviness, etc. According to the typology of admissible transformations, direct scales are represented mainly by interval and ratio scales.
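Stevens' power law S = k·I^n can be recovered from magnitude-estimation data by a least-squares line fit in log-log coordinates. The sketch below uses invented, noiseless data generated by S = 2·I^0.5, so the fit recovers the generating parameters; real magnitude estimates would, of course, be noisy.

```python
from math import log, exp

def fit_power_law(stimuli, sensations):
    """Estimate k and n in S = k * I**n by least squares on log-log data."""
    xs = [log(i) for i in stimuli]
    ys = [log(s) for s in sensations]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    k = exp(my - n * mx)
    return k, n

# Hypothetical magnitude estimates generated by S = 2 * I**0.5
intensity = [1, 4, 9, 16, 25]
sensation = [2.0, 4.0, 6.0, 8.0, 10.0]
k, n = fit_power_law(intensity, sensation)
```

An exponent n below 1 (as here) corresponds to compressive modalities such as loudness; exponents above 1 occur for modalities such as electric shock.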
To conclude the review of the scaling method, it is necessary to point out the problem of its relationship with measurement. In our opinion, this problem stems from the scaling features noted above: 1) the combination of empirical procedures for data collection with analytical procedures for data processing; 2) the unity of the quantitative and qualitative aspects of the scaling process; 3) the combination of general scientific and narrowly specialized character, i.e., the "fusion" of the general principles of scaling with the specific procedures of particular techniques. Some researchers explicitly or implicitly equate the concepts of "scaling" and "measurement." This point of view is supported especially strongly by the authority of S. Stevens, who defined measurement as "the attribution of numerical forms to objects or events in accordance with certain rules" and immediately pointed out that such a procedure leads to the construction of scales. But since the process of developing a scale is the process of scaling, we arrive at the conclusion that measurement and scaling are one and the same. The opposite position is that only metric scaling, associated with the construction of interval and ratio scales, is equated with measurement. The second position seems the stricter one, since measurement presupposes a quantitative expression of what is measured and, consequently, the presence of a metric. The sharpness of the discussion can be removed if measurement is understood not as a research method but as the instrumental support of one method or another, including scaling. Incidentally, metrology (the science of measurement) includes the measuring instrument in the concept of "measurement" as a mandatory attribute. For scaling (at least non-metric scaling), measuring instruments are not necessary. True, metrology is interested mainly in the physical parameters of objects, not the psychological ones.
Psychology, on the contrary, is primarily concerned with subjective characteristics (large, heavy, bright, pleasant, etc.). This allows some authors to take the human being as the means of measurement. What is meant is not so much the use of parts of the human body as units of measurement (cubit, arshin, fathom, stade, foot, inch, etc.) as the human capacity to quantify any phenomena subjectively. But the infinite variability of individual differences among people, including the variability of evaluative abilities, cannot provide commonly used units of measurement at the stage of collecting data about an object. In other words, in the empirical part of scaling the subject cannot be considered a measuring instrument. This role can, with a great stretch, be attributed to the subject only after manipulations no longer with empirical but with formal sets, when a subjective metric is artificially obtained, most often in the form of interval values. G. V. Sukhodolsky points to these facts when he says that ordering (and this is what the subject does at the stage of "evaluating" empirical objects) "is a preparatory, but not a measuring operation," and only later, at the stage of processing the primary subjective data, do the corresponding scale-forming actions (for Sukhodolsky, ranking) "metrize the one-dimensional topological space of ordered objects and, therefore, measure the 'magnitude' of objects." The ambiguity of the relationship between the concepts of "scaling" and "measurement" in psychology increases when they are compared with the concepts of "test" and "testing." There is no doubt that tests are classified as measuring instruments; however, their application in psychology has two aspects. The first is the use of the test in the testing process, i.e., the examination (psychodiagnostics) of specific psychological objects. The second is the development, or construction, of the test.
In the first case we can speak of measurement with some justification, since a reference measure - a standard scale - is "applied" to the object under examination (the test subject). In the second case it is obviously more correct to speak of scaling, since the quintessence of test construction is the process of building a standard scale and the associated operations of defining the empirical and formal sets, whose reliability and isomorphism are ensured not least by the standardization of the procedure for collecting empirical data and by the accumulation of reliable "statistics." Another aspect of the problem arises from the fact that a test as a measuring instrument consists of two parts: 1) a set of tasks (questions) with which the subject deals directly at the stage of data collection and 2) a standard scale with which the empirical data are compared at the interpretation stage. Where, then, should we speak of measurement and where of scaling, if they are not the same thing? It seems to us that the empirical part of the testing process, i.e., the subject's performance of the test task, is not a purely measurement procedure, but it is necessary for scaling. The argument is as follows: the actions performed by the subject are not in themselves a measure of the severity of the qualities being diagnosed. Only the result of these actions (time spent, number of errors, type of answers, etc.), determined not by the test subject but by the diagnostician, represents a "raw" scale value, which is subsequently compared with standard values. The indicators of the results of the subject's actions are called "raw" here for two reasons. First, as a rule, they must be converted into other units of expression, often into "faceless," abstract points, stens, etc.
Second, a common occurrence in testing is the multidimensionality of the mental phenomenon under study, which presupposes, for its assessment, the registration of several changing parameters that are subsequently synthesized into a single indicator. Thus, only the stages of data processing and interpretation of test results, where "raw" empirical data are translated into comparable form and the latter are applied to a "measuring ruler," i.e., a standard scale, can be called measurement without reservation. This problematic knot is tightened still further by the separation and development into independent disciplines of such scientific branches as psychometry and mathematical psychology. Each of them treats the concepts we are discussing as its own key categories. Psychometry can be considered a psychological metrology covering "the whole range of issues related to measurement in psychology." It is therefore not surprising that scaling is included in this "range of issues." But psychometry does not clarify its relationship with measurement. Moreover, the matter is confused by the variety of interpretations of psychometric science itself and its subject. For example, psychometry is considered in the context of psychodiagnostics: "Often the terms 'psychometry' and 'psychological experiment' are used as synonyms... A very popular opinion holds that psychometry is mathematical statistics taking into account the specifics of psychology... A stable understanding of psychometry: the mathematical apparatus of psychodiagnostics... Psychometry is the science of the use of mathematical models in the study of mental phenomena." As for mathematical psychology, its status is even more vague.
“The content and structure of mathematical psychology have not yet acquired a generally accepted form; the choice and systematization of mathematical-psychological models and methods are to some extent arbitrary.” Nevertheless, there is already a tendency to absorb psychometry into mathematical psychology. It is still difficult to say whether this will affect the discussed problem of the relationship between scaling and measurement and whether their place in the general system of psychological methods will become clearer.

2.2. Qualitative methods

Qualitative methods (QMs) make it possible to identify the most essential aspects of the objects under study, which makes it possible to generalize and systematize knowledge about them and to comprehend their essence. Very often QMs rely on quantitative information. The most common techniques are classification, typologization, systematization, periodization, and casuistry.

2.2.1. Classification

Classification (Lat. classis - rank, class; facere - to make) is the distribution of a set of objects into groups (classes) depending on their common characteristics. Assignment to classes can be made both by the presence of a generalizing characteristic and by its absence. The result of such a procedure is a set of classes which, like the grouping process itself, is called a classification. The classification procedure is essentially a deductive division operation (decomposition): a known set of elements is divided into subsets (classes) according to some criterion. Classes are built by defining the boundaries of subsets and including certain elements within these boundaries. Elements with characteristics that go beyond the boundaries of a given class are placed in other classes or drop out of the classification altogether. The opinion found in science that there are two possible ways of implementing the classification procedure, deductive and inductive, seems to us incorrect. Only a known set of objects, i.e., a "closed" set, can be subject to classification, since the classification criterion is chosen in advance and is the same for all elements of the set. Consequently, one can only divide into classes. It is impossible to "add" one class to another, since in such a procedure it is not known in advance whether subsequent objects will have characteristics corresponding to the chosen criterion, and the process of such group formation becomes impractical and meaningless. But if in such a procedure it is possible to change the criteria for combining (or separating) elements, then we obtain a process of specific group formation based not on induction (and certainly not on deduction) but on traduction. That is why such a procedure yields "adjacent groupings," while the deductive procedure yields predominantly "hierarchical classifications." According to G. Selye, "classification is the most ancient and simplest scientific method.
It serves as a prerequisite for all types of theoretical constructions, including the complex procedure of establishing the cause-and-effect relationships that connect classified objects. Without classification we would not even be able to talk. In fact, the basis of any common noun (man, kidney, star) is the recognition of the class of objects standing behind it. To define a certain class of objects (for example, vertebrates) means to establish those essential characteristics (a spine) that are common to all the elements making up this class. Thus, classification involves identifying those smaller elements that are part of a larger element (the class itself). All classifications are based on the detection of one order or another. Science deals not with individual objects as such but with generalizations, i.e., with classes and with the laws according to which the objects forming a class are ordered. This is why classification is a fundamental mental process. It is, as a rule, the first step in the development of a science." If a classification is based on a characteristic essential to the objects themselves, it is called a natural classification: for example, a subject catalog in a library or the classification of sensations by modality. If the criterion is not essential to the objects themselves but merely convenient for some ordering of them, we obtain an artificial classification: for example, an alphabetical library catalog or the classification of sensations by the location of the receptors.
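The deductive division described above - one criterion, fixed in advance, partitioning a closed set - can be sketched as follows (the object names and the criterion are invented for illustration):

```python
def classify(objects, criterion):
    """Deductive division: partition a closed set of objects into classes
    by a single criterion chosen in advance."""
    classes = {}
    for obj in objects:
        classes.setdefault(criterion(obj), []).append(obj)
    return classes

# A "closed" set of objects and one generalizing characteristic (habitat)
animals = ["sparrow", "shark", "eagle", "trout", "owl"]
habitat = {"sparrow": "air", "eagle": "air", "owl": "air",
           "shark": "water", "trout": "water"}

by_habitat = classify(animals, habitat.get)
```

Note that every object must carry a value of the criterion before classification begins; an object whose characteristic fell outside the chosen criterion would simply drop out, as the text describes.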

2.2.2. Typology

Typologization is the grouping of objects according to the systems of characteristics most significant for them. It is based on the understanding of a type as a unit into which the reality under study is divided and as a specific ideal model of real objects. As a result of typologization we obtain a typology, i.e., a set of types. The process of typologization, as opposed to classification, is an inductive (compositional) operation: elements of a set are grouped around one or several elements possessing reference characteristics. In identifying types, boundaries between them are not established; instead, the structure of the type is specified, and other elements are correlated with it on the basis of equality or similarity. Thus, if classification is grouping based on differences, typologization is grouping based on similarity. There are two principal approaches to understanding and describing a type: 1) the type as the average (the maximally generalized) and 2) the type as the extreme (the maximally peculiar). In the first case, a typical object is one whose properties are close in their expression to the average value for the sample; in the second, one with maximally pronounced properties. In the first case we speak of a typical representative of a particular group (subset); in the second, of a vivid representative of the group, a representative in whom the qualities specific to the group are strongly manifested. Thus, the description "a typical representative of the intelligentsia" belongs to the first option, and "a refined intellectual" to the second. The first understanding of type is characteristic of fiction and art, where types are drawn; the second is inherent in scientific descriptions of types. Both approaches are found in everyday practice. Either one leads to the formation of a holistic image - a standard with which real objects are compared.
Both varieties of type are identical in composition, since they manifest themselves in ideas about the structure of the type's leading characteristics. The differences between them arise at the stage of correlating real objects with them. The type as average (the artistic type) acts as a model against which the degree of similarity and proximity of a particular object must be established. The "similarity" of the latter can be determined both from the side of insufficient expression of a quality (falling short of the standard) and from the side of excess expression (exceeding the standard). The type as extreme (the scientific type) serves as a standard against which the difference of a particular object from it, the degree to which the object falls short of it, is determined. Thus, the scientific type is an ideal, something like a role model. An artistic type, then, is a maximally generalized example for combining objects on the basis of the degree of similarity of the systems of their essential characteristics. A scientific type is a maximally peculiar standard for combining objects on the basis of the degree of difference of the systems of their essential characteristics, which formally (but not in essence!) brings typologization closer to classification. Analysis of psychological typologies shows that scientific psychological types have a number of specific features. They have no metric, i.e., no measure of the severity of their characteristics - all such descriptions are qualitative. There is no hierarchy of characteristics, no indication of leading and subordinate, basic and additional qualities. The image is amorphous and subjective, so it is very difficult to assign a real object to any one type. Such descriptions are characterized by terminological ambiguity. The so-called "halo" is common, when it is not the type's qualities but the consequences arising from them that are taken as its characteristics.
For example, when describing temperament types, the spheres of effective activity of people with a given temperament are cited. Four kinds of typologies are known in psychological science: 1) constitutional (the typologies of E. Kretschmer and W. Sheldon); 2) psychological (the typologies of C. Jung, K. Leonhard, A. E. Lichko, G. Shmishek, and H. Eysenck); 3) social (types of management and leadership); 4) astropsychological (horoscopes). Understanding a psychological type as a set of maximally expressed properties "allows us to represent the psychological status of any specific person as the result of the intersection of the properties of universal human types." As we can see, classification and typologization are two different ways of qualitative processing of empirical data, leading to two entirely different kinds of representation of research results: a classification as a set of groups (classes), and a typology as a set of types. It is therefore impossible to agree with the rather widespread confusion of these concepts, much less with their identification. A class is a certain set of similar real objects, whereas a type is an ideal model which real objects resemble to one degree or another. The fundamental difference between a class and a type predetermines the fundamental separation of the procedures of typologization and classification and the categorical distinction between their results - typologies and classifications. In this regard, the position of some sociologists is unclear: on the one hand, they are skeptical of the failure to distinguish classification from typologization; on the other, they consider it possible to regard classification as a way of constructing a typology: "if the term 'typology' is closely connected with the meaningful nature of the corresponding division of the population into groups, with a certain level of knowledge, then the term 'classification' has no such property. We put no epistemological meaning into it.
We need it only for convenience, so that we can speak of the correspondence of formal methods of dividing a population into groups with a meaningful idea of the types of objects." However, such "convenience" leads to the actual identification of two completely different and oppositely directed processes: the classification procedure is defined "as the division of the original set of objects into classes", while the typologization process is defined as the division of a genus into species, of concepts into their corresponding elements. The only difference here is that classes apparently mean single-level groups, while genera and species mean multi-level groups. The essence of both processes is the same: partitioning a set into subsets. It is therefore not surprising that these researchers complain that "when solving typology problems using formal classification methods, it does not always turn out that the resulting classes correspond to types in the meaningful sense of interest to the sociologist."

2.2.3. Systematization

Systematization is the ordering of objects within classes, of classes among themselves, and of sets of classes relative to other sets of classes. It is the structuring of elements within systems of different levels (objects within classes, classes within their set, etc.) and the coupling of these systems with other systems of the same level, which makes it possible to obtain systems of a higher level of organization and generality. At its limit, systematization is the identification and visual representation of the maximum possible number of connections of all levels within a set of objects. In practice, this results in a multi-level classification. Examples: the taxonomy of flora and fauna; the systematics of the sciences (in particular, the human sciences); the taxonomy of psychological methods; the taxonomy of mental processes; the taxonomy of personality properties; the taxonomy of mental states.
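A multi-level classification of this kind can be sketched as a simple tree structure in code. The tiny taxonomy below (a split of psychological methods into groups) is purely illustrative and not a scheme from the text; the helper functions are my own names.

```python
# A minimal sketch of a multi-level classification (systematization):
# each node groups its children, and deeper levels are more specific.
# The example taxonomy is hypothetical, for illustration only.
taxonomy = {
    "psychological methods": {
        "empirical": {"observation": {}, "experiment": {}, "survey": {}},
        "processing": {"quantitative": {}, "qualitative": {}},
    }
}

def depth(tree):
    """Number of levels in the classification."""
    if not tree:
        return 0
    return 1 + max(depth(child) for child in tree.values())

def leaves(tree):
    """Most specific classes (terminal nodes) of the hierarchy."""
    result = []
    for name, child in tree.items():
        if child:
            result.extend(leaves(child))
        else:
            result.append(name)
    return result

print(depth(taxonomy))          # → 3 (levels of organization)
print(sorted(leaves(taxonomy))) # the terminal classes
```

The point of the sketch is only that systematization yields a hierarchy whose levels and terminal classes can be enumerated mechanically.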

2.2.4. Periodization

Periodization is the chronological ordering of the existence of the object (phenomenon) under study. It consists in dividing the life cycle of the object into significant stages (periods). Each stage usually corresponds to significant changes (quantitative or qualitative) in the object, which can be correlated with the philosophical category of the "leap". Examples of periodization in psychology: the periodization of human ontogenesis; the stages of personality socialization; the periodization of anthropogenesis; the stages and phases of group development (group dynamics), etc.

2.2.5. Psychological casuistry

Psychological casuistry is the description and analysis of both the most typical and the exceptional cases for the reality under study. This technique is characteristic of research in differential psychology. The individual approach in psychological work with people also predetermines the wide use of casuistry in practical psychology. A clear example of the use of psychological casuistry is the incident method employed in professiographic studies.

3. INTERPRETATION METHODS

Even more than the organizational methods, these methods deserve to be called approaches, since they are first of all explanatory principles that predetermine the direction of interpretation of research results. In scientific practice, the genetic, structural, functional, complex, and systems approaches have developed. Using one of them does not rule out the others; on the contrary, a combination of approaches is common in psychology, and this applies not only to research practice but also to psychodiagnostics, psychological counseling, and psychocorrection.

3.1. Genetic method

The genetic method is a way of studying and explaining phenomena (including mental ones) based on the analysis of their development in both the ontogenetic and the phylogenetic plane. It requires establishing: 1) the initial conditions for the emergence of the phenomenon, 2) the main stages of its development, and 3) the main trends of that development. The purpose of the method is to reveal the connection of the phenomena under study over time, to trace the transition from lower to higher forms. Thus, wherever it is necessary to reveal the temporal dynamics of mental phenomena, the genetic method is an indispensable research tool for the psychologist. Even when research is aimed at studying the structural and functional characteristics of a phenomenon, effective use of the method cannot be ruled out. Thus, the developers of the well-known theory of perceptual actions noted that within the microstructural analysis of perception "the genetic research method turned out to be the most suitable." Naturally, the genetic method is especially characteristic of the various branches of developmental psychology: comparative, age-related, and historical psychology. Clearly, any longitudinal study presupposes the use of the method in question. The genetic approach can in general be considered a methodological implementation of one of the basic principles of psychology, the principle of development. From this point of view, other ways of implementing the principle of development can be considered modifications of the genetic approach, for example, the historical and evolutionary approaches.

3.2. Structural method

The structural approach is a direction focused on identifying and describing the structure of objects (phenomena). It is characterized by: close attention to the description of the current state of objects; the clarification of their inherent timeless properties; an interest not in isolated facts but in the relationships between them. As a result, a system of relationships between the elements of the object is built at various levels of its organization. Usually, with the structural approach, neither the relationship between parts and the whole in the object nor the dynamics of the identified structures is emphasized. The decomposition of the whole into parts can be carried out according to various schemes. An important advantage of the structural method is the relative ease of presenting results visually in the form of various models, which can be given as descriptions, lists of elements, graphic diagrams, classifications, etc. An inexhaustible source of examples of such modeling is the representation of the structure and types of personality: the three-element model of S. Freud; Jung's personality types; the "Eysenck circle"; the multifactorial model of R. Assagioli. Our domestic science has not lagged behind foreign psychology in this respect: the endo- and exopsyche of A. F. Lazursky and the development of his views by V. D. Balin; the personality structure of four complex complexes according to B. G. Ananyev; the individual-individuality scheme of V. S. Merlin; the lists of A. G. Kovalev and P. I. Ivanov; the dynamic functional structure of personality according to K. K. Platonov; the scheme of A. I. Shcherbakov, and others. The structural approach is an attribute of any research devoted to the constitutional organization of the psyche and the structure of its material substrate, the nervous system. Here we can mention the typology of higher nervous activity by I. P. Pavlov and its development by B. M. Teplov, V. D. Nebylitsyn, and others.
The models of V. M. Rusalov, reflecting the morphological, neuro- and psychodynamic constitution of a person, have received wide recognition. Structural models of the human psyche in spatial and functional aspects are presented in a number of works. Classic examples of the approach under consideration are the associative psychology of D. Hartley and its consequences (in particular, the psychophysics of "pure sensations" of the 19th century), as well as the structural psychology of W. Wundt and E. Titchener. A specific concretization of the approach is the method of microstructural analysis, which includes elements of the genetic, functional, and systems approaches.

3.3. Functional method

The functional approach, naturally, is focused on identifying and studying the functions of objects (phenomena). The ambiguity of the concept of "function" in science makes it difficult both to define this approach and to identify particular areas of psychological research with it. We will hold to the view that a function is the manifestation of the properties of objects in a certain system of relations, while properties are the manifestation of an object's quality in its interaction with other objects. A function is thus the realization of the relationship between an object and its environment, and also "the correspondence between the environment and the system." The functional approach is therefore mainly interested in the connections between the object under study and its environment. It proceeds from the principle of self-regulation and the maintenance of equilibrium of objects of reality (including the psyche and its carriers). Examples of the implementation of the functional approach in the history of science are such well-known movements as "functional psychology" and "behaviorism." A classic example of the embodiment of the functional idea in psychology is the famous dynamic field theory of K. Lewin. In modern psychology, the functional approach is enriched with components of structural and genetic analysis. Thus, the idea of the multi-level and multi-phase nature of all human mental functions, operating simultaneously at all levels as a single whole, has become firmly established. The above examples of structures of personality, the nervous system, and the psyche may rightfully be taken as illustrations of the functional approach as well, since most authors of the corresponding models also treat the elements of these structures as functional units embodying certain connections between a person and reality.

3.4. Complex method

The complex approach is a direction that considers the object of research as a set of components to be studied with an appropriate set of methods. The components can be either relatively homogeneous parts of the whole or its heterogeneous sides characterizing the object under study in different aspects. Often the complex approach involves studying a complex object with the methods of a complex of sciences, i.e., organizing interdisciplinary research. Obviously, the complex approach presupposes the use, to one degree or another, of all the previous interpretation methods. A striking example of the implementation of the complex approach in science is the concept of human science, according to which man, as the most complex object of study, is subject to the coordinated study of a large complex of sciences. In psychology, this idea of the complex study of man was clearly formulated by B. G. Ananyev. A person is considered simultaneously as a representative of the biological species Homo sapiens (an individual), as a carrier of consciousness and an active agent of cognitive and reality-transforming activity (a subject), as a subject of social relations (a personality), and as a unique unity of socially significant biological, social, and psychological characteristics (an individuality). This view allows a person's psychological content to be studied in two planes: subordination (hierarchy) and coordination. In the first case, mental phenomena are considered as subordinate systems: more complex and general ones subordinate and include simpler and more elementary ones. In the second, mental phenomena are considered as relatively autonomous formations that are nevertheless closely connected with and interacting with one another. Such a comprehensive and balanced study of man and his psyche, in fact, already merges with the systems approach.

3.5. System method

The systems approach is a methodological direction in the study of reality that considers any fragment of it as a system. The most tangible impetus toward recognizing the systems approach as an integral methodological component of scientific knowledge, and toward its rigorous scientific formulation, was the work of the Austro-American scientist L. von Bertalanffy (1901-1972), in which he developed a general theory of systems.

A system is a certain integrity that interacts with the environment and consists of many elements standing in certain relationships and connections with one another. The organization of these connections between the elements is called the structure. Sometimes structure is interpreted broadly, extending its understanding to the scope of the whole system. Such an interpretation is typical of everyday practice: "commercial structures", "state structures", "political structures", etc. Occasionally this view of structure is also found in science, albeit with certain reservations. An element is the smallest part of a system that retains its properties within the given system; further division of this part leads to the loss of the corresponding properties. Thus an atom is an element with certain physical properties, a molecule an element with chemical properties, a cell an element with the properties of life, a person (personality) an element of social relations. The properties of elements are determined by their position in the structure and, in turn, determine the properties of the system. But the properties of the system are not reduced to the sum of the properties of its elements: the system as a whole synthesizes (combines and generalizes) the properties of its parts and elements, as a result of which it possesses properties of a higher level of organization, which, in interaction with other systems, may appear as its functions.

Any system can be considered, on the one hand, as a union of simpler (smaller) subsystems with their properties and functions and, on the other, as a subsystem of more complex (larger) systems. For example, any living organism is a system of organs, tissues, and cells; it is also an element of the corresponding population, which in turn is a subsystem of the animal or plant world, and so on.

Systemic research is carried out by means of systems analysis and synthesis. In the process of analysis, the system is isolated from the environment; its composition (the set of elements), structure, functions, integral properties and characteristics, system-forming factors, and relationships with the environment are determined. In the process of synthesis, a model of the real system is created, the level of generalization and abstraction of the system's description is raised, and the completeness of its composition and structures and the patterns of its development and behavior are determined.

The description of objects as systems, i.e., system descriptions, performs the same functions as any other scientific description: explanatory and predictive. But more importantly, system descriptions perform the function of integrating knowledge about objects. The systems approach in psychology makes it possible to reveal what mental phenomena have in common with other phenomena of reality. This makes it possible to enrich psychology with the ideas, facts, and methods of other sciences and, conversely, allows psychological data to penetrate other areas of knowledge. The approach allows psychological knowledge to be integrated and systematized, redundancy in the accumulated information to be eliminated, descriptions to be made shorter and clearer, and subjectivity in the interpretation of mental phenomena to be reduced. It helps to see gaps in knowledge about specific objects, to detect its incompleteness, to define the tasks of further research, and sometimes to predict the properties of objects about which there is no information by extrapolating and interpolating the available information.

In educational activity, systemic methods of description make it possible to present educational information in a form that is more visual and better suited to perception and memorization, to give a more holistic picture of the objects and phenomena covered, and, finally, to move from an inductive presentation of psychology to a deductive-inductive one.

The previous approaches are in fact organic components of the systems approach. Sometimes they are even considered its varieties. Some authors correlate these approaches with the corresponding levels of human qualities that constitute the subject of psychological research. At present, most scientific research is carried out in line with the systems approach. The most complete coverage of the systems approach in relation to psychology is found in the following works.

Literature

Ananyev B. G. On the problems of modern human science. M., 1977.
Ananyev B. G. On the methods of modern psychology // Psychological methods in a comprehensive longitudinal study of students. L., 1976.
Ananyev B. G. Man as an object of knowledge. L., 1968.
Balin V. D. Mental reflection: Elements of theoretical psychology. St. Petersburg, 2001.
Balin V. D. Theory and methodology of psychological research. L., 1989.
Bendat J., Piersol A. Application of correlation and spectral analysis. M., 1983.
Bertalanffy L. History and status of general systems theory // System research. M., 1973.
Bertalanffy L. General systems theory: review of problems and results // System research. M., 1969.
Blagush P. Factor analysis with generalizations. M., 1989.
Borovkov A. A. Mathematical statistics: Estimation of parameters. Testing hypotheses. M., 1984.
Braverman E. M., Muchnik I. B. Structural methods for processing empirical data. M., 1983.
Burdun G. V., Markov S. M. Fundamentals of metrology. M., 1972.
Ganzen V. A. Guidelines for the course "System methods in psychology". L., 1987.
Ganzen V. A. System descriptions in psychology. L., 1984.
Ganzen V. A. Systematic approach in psychology. L., 1983.
Ganzen V. A., Fomin A. A. On the concept of type in psychology // Bulletin of SPbSU, ser. 6, 1993, issue 1 (No. 6).
Ganzen V. A., Khoroshilov B. M. The problem of systematic description of qualitative changes in psychological objects. Dep. VINITI, 1984, No. 6174-84.
Glass J., Stanley J. Statistical methods in pedagogy and psychology. M., 1976.
Godefroy J. What is psychology? Vols. 1-2. M., 1992.
Gordon V. M., Zinchenko V. P. System-structural analysis of cognitive activity // Ergonomics, vol. 8. M., 1974.
Gusev E. K., Nikandrov V. V. Psychophysics. L., 1987.
Gusev E. K., Nikandrov V. V. Psychophysics. Part II. Psychological scaling. L., 1985.
Draper N., Smith H. Applied regression analysis. In 2 books. 2nd ed. M., 1987.
Druzhinin V. N. Experimental psychology. M., 1997.
Davison M. Multidimensional scaling: Methods for visual presentation of data. M., 1988.
Duran B., Odell P. Cluster analysis. M., 1977.
Ezekiel M., Fox K. A. Methods for analyzing correlations and regressions. M., 1966.
Zarochentsev K. D., Khudyakov A. I. Basics of psychometrics. St. Petersburg, 1996.
Zinchenko V. P. On the microstructural method of studying cognitive activity // Ergonomics, vol. 3. M., 1972.
Zinchenko V. P., Zinchenko T. P. Perception // General psychology / Ed. A. V. Petrovsky. 2nd ed. M., 1976.
Iberla K. Factor analysis. M., 1980.
Itelson L. B. Mathematical and cybernetic methods in pedagogy. M., 1964.
Kagan M. S. Systematic approach and humanitarian knowledge. L., 1991.
Kolkot E. Significance check. M., 1978.
Kornilova T. V. Introduction to psychological experiment. M., 1997.
Koryukin V. I. Concepts of levels in modern scientific knowledge. Sverdlovsk, 1991.
Krylov A. A. Systematic approach as the basis for research in engineering psychology and labor psychology // Methodology of research in engineering psychology and labor psychology, part 1. L., 1974.
Kuzmin V. P. Systematic principles in the theory and methodology of K. Marx. 2nd ed. M., 1980.
Kuzmin V. P. Various directions in the development of a systems approach and their epistemological foundations // Questions of Philosophy, 1983, No. 3.
Kulikov L. V. Psychological research: Methodological recommendations for carrying out. 6th ed. St. Petersburg, 2001.
Kyun Yu. Descriptive and inductive statistics. M., 1981.
Lehmann E. L. Testing statistical hypotheses. 2nd ed. M., 1979.
Lomov B. F. Methodological and theoretical problems of psychology. M., 1984.
Lomov B. F. On the systems approach in psychology // Questions of Psychology, 1975, No. 2.
Lomov B. F. On the ways of development of psychology // Questions of Psychology, 1978, No. 5.
Lawley D., Maxwell A. Factor analysis as a statistical method. M., 1967.
Mazilov V. A. On the relationship between theory and method in psychology // Ananyev readings - 98: Materials of the scientific and practical conference. St. Petersburg, 1998.
Malikov S. F., Tyurin N. I. Introduction to metrology. M., 1965.
Mathematical psychology: theory, methods, models. M., 1985.
Mirkin B. G. Analysis of qualitative features and structures. M., 1980.
Miroshnikov S. A. Study of the levels of organization of human mental activity // Theoretical and applied issues of psychology, vol. 1, part II. St. Petersburg, 1995.
Mandel I. D. Cluster analysis. M., 1988.
Nikandrov V. V. On a systematic description of the functional structure of the psyche // Theoretical and applied issues of psychology, vol. 1. St. Petersburg, 1995.
Nikandrov V. V. Historical psychology as an independent scientific discipline // Bulletin of Leningrad State University, ser. 6, 1991, issue 1 (No. 6).
Nikandrov V. V. On the relationship between psychological macrocharacteristics of a person // Bulletin of St. Petersburg State University, vol. 3, 1998.
Nikandrov V. V. Spatial model of the functional structure of the human psyche // Bulletin of St. Petersburg State University, 1999, vol. 3, No. 20.
Okun Ya. Factor analysis. M., 1974.
Paramey G. V. Application of multidimensional scaling in psychological research // Bulletin of Moscow State University, ser. 14, 1983, No. 2.
Pirov G. D. Experimental psychology. Sofia, 1968.
Pirov G. D. Classification of methods in psychology // Psychodiagnostics in socialist countries. Bratislava, 1985.
Plokhinsky N. A. Biometrics. 2nd ed. M., 1970.
Poston T., Stewart I. Catastrophe theory and its applications. M., 1980.
Workshop on psychodiagnostics: Differential psychometrics / Ed. V. V. Stolin, A. G. Shmelev. M., 1984.
The principle of development in psychology / Ed. L. I. Antsyferova. M., 1978.
The problem of levels and systems in scientific knowledge. Minsk, 1970.
Pfanzagl J. Theory of measurement. M., 1976.
Piéron A. Psychophysics // Experimental psychology, vols. 1-2. M., 1966.
Rapoport A. Systematic approach in psychology // Psychological Journal, 1994, No. 3.
Rogovin M. S. Structural-level theories in psychology. Yaroslavl, 1977.
Rudestam K. Group psychotherapy. M., 1980.
Rusalov V. M. Biological bases of individual psychological differences. M., 1979.
Selye G. From dream to discovery: How to become a scientist. M., 1987.
Serzhantov V. F. Introduction to the methodology of modern biology. L., 1972.
Serzhantov V. F. Man, his nature and the meaning of existence. L., 1990.
Sidorenko E. V. Methods of mathematical processing in psychology. St. Petersburg, 2001.
Systematic approach to the psychophysiological problem / Ed. V. B. Shvyrkov. M., 1982.
Stevens S. S. Mathematics, measurement and psychophysics // Experimental psychology / Ed. S. S. Stevens. Vol. 1. M., 1960.
Stevens S. S. On the psychophysical law // Problems and methods of psychophysics. M., 1974.
Sukhodolsky G. V. Mathematical psychology. St. Petersburg, 1997.
Sukhodolsky G. V. Fundamentals of mathematical statistics for psychologists. L., 1972.
Thurstone L. L. Psychological analysis // Problems and methods of psychophysics. M., 1974.
Typology and classification in sociological research / Ed. V. G. Andreenkov, Yu. N. Tolstova. M., 1982.
Uemov A. I. Systems approach and general systems theory. M., 1978.
Factorial, discriminant and cluster analysis / Ed. I. S. Enyukov. M., 1989.
Harman G. G. Modern factor analysis. M., 1972.
Shvantsara J. et al. Diagnostics of mental development. Prague, 1978.
Sheffe G. Analysis of variance. M., 1963.
Schreiber D. Problems of scaling // The process of social research. M., 1975.
Bertalanffy L. General system theory: Foundations, development, applications. N.Y., 1968.
Choynowski M. Die Messung in der Psychologie // Die Probleme der mathematischen Psychologie. Warschau, 1971.
Gutjahr W. Die Messung psychischer Eigenschaften. Berlin, 1971.
Leinfellner W. Einführung in die Erkenntnis- und Wissenschaftstheorie. Mannheim, 1965.
Lewin K. A dynamic theory of personality. N.Y., 1935.
Lewin K. Principles of topological psychology. N.Y., 1936.
Sixtl F. Meßmethoden der Psychologie. Weinheim, 1966, 1967.
Stevens S. S. Sensory scales of taste intensity // Perception and Psychophysics, 1969, vol. 6.
Torgerson W. S. Theory and methods of scaling. N.Y., 1958.
Tutorial. St. Petersburg: Rech Publishing House, 2003. 480 p.

In the textbook, experimental psychology is considered as an independent scientific discipline that develops the theory and practice of psychological research and has a system of psychological methods as its main subject of study.


Data processing is aimed at solving the following tasks:

1) organizing the source material, transforming the set of data into a holistic system of information on the basis of which further description and explanation of the object and subject under study is possible;

2) detecting and eliminating errors, shortcomings, and gaps in the information;

3) identifying trends, patterns, and connections hidden from direct perception;

4) discovering new facts that were not expected and went unnoticed during the empirical stage;

5) determining the reliability, validity, and accuracy of the collected data and obtaining scientifically grounded results on their basis.

Data processing has quantitative and qualitative aspects. Quantitative processing is the manipulation of the measured characteristics of the object (objects) under study, of its properties "objectified" in external manifestations. Qualitative processing is a method of preliminary penetration into the essence of an object by identifying its non-measurable properties on the basis of quantitative data.

Quantitative processing is aimed mainly at a formal, external study of the object, while qualitative processing is aimed mainly at a meaningful, internal study of it. In quantitative research, the analytical component of cognition dominates, which is reflected in the names of quantitative methods for processing empirical material that contain the category "analysis": correlation analysis, factor analysis, etc. The main result of quantitative processing is an ordered set of "external" indicators of the object (objects). Quantitative processing is carried out using mathematical and statistical methods.

In qualitative processing, the synthetic component of cognition dominates, and in this synthesis the component of unification prevails, while the component of generalization is present to a lesser extent: generalization is the prerogative of the next, interpretive stage of the research process. At the stage of qualitative data processing, the main task is not yet to reveal the essence of the phenomenon under study but only to present the information about it appropriately, ensuring its further theoretical study. Typically, the result of qualitative processing is an integrated representation of the set of properties of an object, or of a set of objects, in the form of classifications and typologies. Qualitative processing appeals largely to the methods of logic.

The contrast between qualitative and quantitative processing (and, consequently, between the corresponding methods) is rather arbitrary; they form an organic whole. Quantitative analysis without subsequent qualitative processing is meaningless, since by itself it cannot transform empirical data into a system of knowledge. In turn, a qualitative study of an object without basic quantitative data is unthinkable in scientific cognition: without quantitative data, qualitative knowledge is a purely speculative procedure, uncharacteristic of modern science. In philosophy, as is well known, the categories of "quality" and "quantity" are combined in the category of "measure". The unity of the quantitative and qualitative understanding of empirical material is clearly seen in many data-processing methods: factor and taxonomic analysis, scaling, classification, etc. But since science traditionally divides characteristics, methods, and descriptions into quantitative and qualitative, let us accept the quantitative and qualitative aspects of data processing as independent phases of a single research stage, to which particular quantitative and qualitative methods correspond.

Qualitative processing naturally leads to the description and explanation of the phenomena under study, which constitutes the next level of their investigation, carried out at the stage of interpreting the results. Quantitative processing belongs entirely to the data-processing stage.

Quantitative and qualitative data in experiment and other research methods.

Qualitative data are text, a description in natural language. They can be obtained through qualitative methods (observation, surveys, etc.).

Quantitative data– the next step in organizing qualitative data.

A distinction is made between the quantitative processing of results and the measurement of variables.

Qualitative data come, for example, from observation. The postulate of the immediacy of observational data holds that psychological reality is directly presented to observation. At the same time, the observer is active in organizing the observation process and is involved in interpreting the facts obtained.

Different approaches to the essence of psychological measurement:

1. The problem is posed as assigning numbers on a scale to a psychological variable for the purpose of ordering psychological objects and the psychological properties being assessed. It is assumed that the properties of the measurement scale correspond to the empirically obtained measurement results, and that the statistical criteria used for data processing are adequate to the researchers' understanding of the different types of scales.

2. The second approach goes back to the traditions of the psychophysical experiment, where the measurement procedure has the ultimate goal of describing phenomenal properties in terms of changes in objective stimulus characteristics (the achievement of S. S. Stevens).

He introduced a distinction between types of scales:

nominal (names); ordinal (the monotonicity condition holds, so ranking is possible); interval (e.g., IQ scores; here the question "by how much more?" can be answered); ratio (here the question "how many times more?" can be answered; there is an absolute zero and a unit of measurement, as in psychophysics).
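The distinction matters in practice because each scale type licenses different statistics. The sketch below follows Stevens's classic mapping of scale type to permissible descriptive statistics; the helper name and data are my own illustrative assumptions.

```python
from statistics import mean, median

# Permissible descriptive statistics per scale type (after S. S. Stevens).
# Each stronger scale also inherits the statistics of the weaker ones.
PERMISSIBLE = {
    "nominal":  {"mode"},
    "ordinal":  {"mode", "median"},
    "interval": {"mode", "median", "mean"},
    "ratio":    {"mode", "median", "mean", "ratio"},
}

def allowed(scale, statistic):
    """True if the statistic is meaningful on data of the given scale."""
    return statistic in PERMISSIBLE[scale]

# IQ scores are interval data: the mean is meaningful...
iq = [95, 100, 105, 120]
assert allowed("interval", "mean") and mean(iq) == 105
# ...but ranks are ordinal: only the median (or mode) should be reported.
ranks = [1, 2, 3, 4, 5, 6, 7, 8]
assert allowed("ordinal", "median") and not allowed("ordinal", "mean")
print(median(ranks))  # → 4.5
```

The median of the ranked series 1..8 falling between ranks 4 and 5 (4.5) is exactly the median-rank computation described earlier in the chapter.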

Thanks to this, psychological measurement began to act not only as the establishment of quantitative psychophysical dependencies but also within the broader context of measuring psychological variables.

Qualitative description comes in two types: description in the dictionary of a natural language, and the development of systems of symbols, signs, and units of observation. Categorized observation reduces units of description to categories, i.e., involves generalization. An example is Bales's standardized observation procedure for describing the interaction of small-group members when solving a problem. A category system (in the narrow sense) is a set of categories covering all theoretically permissible manifestations of the process under study.

Quantitative assessment: 1) event sampling, a complete verbal description of behavioral events with their subsequent reading and psychological reconstruction (in the narrow sense of the term, the observer's exact temporal or frequency recording of the "units" of description); 2) time sampling, in which the observer records certain time intervals, i.e., determines the duration of events (the time-sampling technique). Specially developed subjective scales are also used for quantitative assessment (for example, Sheldon's scales for temperament components in somatotyping).
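The two sampling techniques can be sketched as simple tallies over a coded observation record. The event codes and timings below are invented for illustration only; they do not come from Bales's or any other published category system.

```python
from collections import Counter

# Observation protocol: (event code, start time in s, end time in s).
# Codes and timings are hypothetical.
protocol = [
    ("question", 0, 4),
    ("answer", 4, 10),
    ("question", 10, 12),
    ("silence", 12, 20),
]

# Event sampling: how often each behavioural "unit" of description occurs.
frequencies = Counter(code for code, _, _ in protocol)

# Time sampling: total duration of each kind of event.
durations = Counter()
for code, start, end in protocol:
    durations[code] += end - start

print(frequencies["question"])  # → 2 (frequency count)
print(durations["silence"])     # → 8 (seconds of recorded silence)
```

Frequency counts answer "how often did the unit occur?", while duration tallies answer "for how long?", which is the essential difference between the two techniques.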
