Processing the results of measurements of physical quantities. The order of processing the results of direct measurements. Calculation of direct measurement errors

To reduce the influence of random errors, it is necessary to measure this value several times. Suppose we are measuring some value x. As a result of the measurements, we obtained the following values:

x1, x2, x3, ... xn. (2)

This series of values is called a sample. Having such a sample, we can estimate the measurement result; we will denote this estimate by x̄. But since this estimate will not coincide with the true value of the measured quantity, it is necessary to estimate its error as well. Let us assume that we can determine an estimate of the error Δx. In this case we can write the measurement result in the form

x = x̄ ± Δx. (3)

Since the estimates of the measurement result x̄ and of the error Δx are not exact, the record (3) of the measurement result must be accompanied by an indication of its reliability P. The reliability, or confidence probability, is the probability that the true value of the measured quantity lies in the interval indicated by record (3). This interval itself is called the confidence interval.

For example, when measuring the length of a certain segment, we wrote the final result as

l = (8.34 ± 0.02) mm, (P = 0.95)

This means that with 95 chances out of 100 the true value of the length of the segment lies in the range from 8.32 to 8.36 mm.

Thus, the task is, given the sample (2), to find an estimate of the measurement result x̄, its error Δx, and the reliability P.

This problem can be solved with the help of probability theory and mathematical statistics.

In most cases random errors follow the normal distribution law established by Gauss. The normal distribution of errors is expressed by the formula

f(Δx) = (1 / (σ·sqrt(2π))) · exp(−(Δx)² / (2σ²)), (4)

where Δx is the deviation from the true value;

σ is the true root-mean-square error;

σ² is the variance, whose value characterizes the spread of the random variable.

As can be seen from (4), the function has its maximum at Δx = 0; in addition, it is even.

Figure 16 shows a graph of this function. The meaning of function (4) is that the area of the figure enclosed between the curve, the Δx axis, and the two ordinates at the points Δx1 and Δx2 (the shaded area in Fig. 16) is numerically equal to the probability that an individual result falls into the interval (Δx1, Δx2).
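This statement can be checked numerically: the probability of an error falling into (Δx1, Δx2) is the area under the Gaussian density over that interval. A minimal Python sketch (the value of σ and the interval bounds are illustrative assumptions, not taken from the text):

```python
import math

def gauss_density(dx, sigma):
    # Normal (Gaussian) error density, formula (4)
    return math.exp(-dx ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def probability(dx1, dx2, sigma, steps=100_000):
    # Area under the curve between dx1 and dx2 (trapezoidal rule)
    h = (dx2 - dx1) / steps
    area = 0.5 * (gauss_density(dx1, sigma) + gauss_density(dx2, sigma))
    area += sum(gauss_density(dx1 + i * h, sigma) for i in range(1, steps))
    return area * h

# Errors within +/- one sigma occur with probability ~0.68
p = probability(-1.0, 1.0, sigma=1.0)
print(round(p, 3))  # 0.683
```

The same computation with (−2σ, 2σ) gives about 0.95, which matches the confidence levels discussed below.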

Since the curve is symmetric about the ordinate axis, it can be argued that errors equal in magnitude but opposite in sign are equally likely. This makes it possible to take, as the estimate of the measurement result, the arithmetic mean of all elements of the sample (2):

x̄ = (1/n) Σ xi, (5)

where n is the number of measurements.

So, if n measurements are made under the same conditions, then the most probable value of the measured quantity is its arithmetic mean. The value x̄ tends to the true value μ of the measured quantity as n → ∞.

The mean square error of a single measurement result is the value

S = sqrt( Σ (xi − x̄)² / (n − 1) ). (6)

It characterizes the error of each individual measurement. As n → ∞, S tends to a constant limit σ:

σ = lim (n→∞) S. (7)

With an increase in σ, the scatter of the readings increases, i.e. the measurement accuracy becomes lower.

The root-mean-square error of the arithmetic mean is the value

S_x̄ = S / sqrt(n). (8)

This is the fundamental law of increasing accuracy with an increasing number of measurements.

The error S_x̄ characterizes the accuracy with which the mean value of the measured quantity is obtained. The result is written as:

x = x̄ ± S_x̄. (9)

This error-calculation technique gives good results (with a reliability of 0.68) only when the same quantity is measured at least 30–50 times.

In 1908, Student showed that the statistical approach is also valid for a small number of measurements. For n → ∞ Student's distribution goes over into the Gaussian distribution; for a small number of measurements it differs from it.

To calculate the absolute error for a small number of measurements, a special coefficient is introduced, called Student's coefficient t; it depends on the reliability P and the number of measurements n.

Omitting the theoretical justifications for its introduction, we note that

Δx = t · S_x̄, (10)

where Δx is the absolute error for a given confidence probability;

S_x̄ is the mean square error of the arithmetic mean.

Student's coefficients are given in the table.
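For quick calculations, the Student coefficients can be kept in a small lookup table. The values below are read from Table 1 later in this text for P = 0.95, assuming its columns begin at n = 2 (consistent with the worked examples there, n = 3 → 4.3 and n = 6 → 2.6):

```python
# Student's coefficients t for reliability P = 0.95,
# indexed by the number of measurements n (values from Table 1)
T_095 = {2: 12.7, 3: 4.3, 4: 3.2, 5: 2.8, 6: 2.6,
         7: 2.4, 8: 2.4, 9: 2.3, 10: 2.3}

def absolute_error(s_mean, n, table=T_095):
    # Formula (10): Delta_x = t * S_mean
    return table[n] * s_mean

# e.g. three measurements with S_mean = 0.06
print(absolute_error(0.06, 3))
```

For other reliabilities one would add the corresponding rows of the table as further dictionaries.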

It follows from what has been said:

The value of the root-mean-square error makes it possible to calculate the probability that the true value of the measured quantity falls into any given interval near the arithmetic mean.

As n → ∞, Δx → 0, i.e. the interval in which the true value μ is found with a given probability shrinks as the number of measurements grows. It might seem that by increasing n one could obtain a result of any desired accuracy. However, the accuracy increases significantly only until the random error becomes comparable with the systematic one. A further increase in the number of measurements is pointless, because the final accuracy of the result will then depend only on the systematic error. Knowing the value of the systematic error, it is easy to set an admissible value of the random error, taking it, for example, equal to 10% of the systematic error. By specifying a certain value of P for the confidence interval chosen in this way (for example, P = 0.95), one can easily find the required number of measurements that guarantees a small effect of the random error on the accuracy of the result.

To do this, it is convenient to use the table of Student's coefficients, in which the intervals are given in fractions of σ, which is a measure of the accuracy of the given experiment with respect to random errors.

When processing the results of direct measurements, the following order of operations is proposed:

1. Record the result of each measurement in a table.

2. Calculate the mean of the n measurements: x̄ = (1/n) Σ xi.

3. Find the error of each individual measurement: Δxi = xi − x̄.

4. Calculate the squared errors of the individual measurements: (Δx1)², (Δx2)², ..., (Δxn)².

5. Determine the standard error of the arithmetic mean: S_x̄ = sqrt( Σ (Δxi)² / (n(n − 1)) ).

6. Specify the reliability value (usually P = 0.95 is taken).

7. Determine Student's coefficient t for the given reliability P and the number of measurements n.

8. Find the confidence interval (measurement error): Δx = t · S_x̄.

9. If the error of the measurement result Δx turns out to be comparable with the instrument error d, take as the half-width of the confidence interval Δx = sqrt( (t · S_x̄)² + d² ). If one of the errors is three or more times smaller than the other, discard the smaller one.

10. Write the final result as x = x̄ ± Δx, P = …
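The steps above can be sketched end to end in Python. The data and the Student coefficient t = 4.3 are those of the worked example later in this text; the instrument-error combination is included as an option:

```python
import math

def process_direct(samples, t, d=0.0):
    """Direct-measurement processing.
    samples: list of readings; t: Student's coefficient for the chosen P;
    d: instrument error (0 if negligible)."""
    n = len(samples)
    mean = sum(samples) / n                           # arithmetic mean
    dev2 = [(x - mean) ** 2 for x in samples]         # squared individual errors
    s_mean = math.sqrt(sum(dev2) / (n * (n - 1)))     # standard error of the mean
    dx = t * s_mean                                   # confidence half-width
    if d > 0 and max(dx, d) < 3 * min(dx, d):         # errors comparable: combine
        dx = math.sqrt(dx ** 2 + d ** 2)
    return mean, dx

mean, dx = process_direct([13.4, 13.2, 13.3], t=4.3)
print(f"x = ({mean:.2f} +/- {dx:.2f}), P = 0.95")  # x = (13.30 +/- 0.25), P = 0.95
```

If one of the two errors were three or more times smaller than the other, the function simply keeps the larger one, as step 9 prescribes.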

Random errors have the following properties.

    With a large number of measurements, errors of the same magnitude but opposite in sign occur equally often.

    Large errors are less likely to occur than small ones. From relations (1), rewriting them in the form

X = x1 + Δx1,

X = x2 + Δx2,

. . .

X = xn + Δxn,

and adding them up column by column, one can determine the true value of the measured quantity as follows:

nX = Σ xi + Σ Δxi,

or

X = (1/n) Σ xi + (1/n) Σ Δxi. (2)

By the first property of random errors, (1/n) Σ Δxi → 0 as n → ∞, i.e. the true value of the measured quantity is equal to the arithmetic mean of the measurement results if there are an infinite number of them. With a limited, and even more so with a small, number of measurements, which we usually deal with in practice, equality (2) is approximate.

Let the following values of the measured quantity X be obtained as a result of several measurements: 13.4; 13.2; 13.3; 13.4; 13.3; 13.2; 13.1; 13.3; 13.3; 13.2; 13.3; 13.1. Let us build a diagram of the distribution of these results, plotting the instrument readings along the abscissa axis in ascending order. The distances between adjacent points along the abscissa axis are equal to twice the maximum reading error of the instrument. In our case the readings are made to 0.1, which equals one scale division marked on the x-axis. On the ordinate axis we plot values proportional to the relative number of results corresponding to a particular instrument reading. The relative number, or relative frequency, of results equal to xk will be denoted by W(xk). In our case W(13.1) = 2/12, W(13.2) = 3/12, W(13.3) = 5/12, W(13.4) = 2/12.

We assign to each xk the ordinate

y(xk) = A·W(xk), (3)

where A is the coefficient of proportionality.




The diagram, which is called a histogram, differs from the usual graph in that the points are not connected by a smooth curve; instead, steps are drawn through them. Obviously, the area of the step over some value xk is proportional to the relative frequency of occurrence of that result. By choosing the proportionality coefficient in expression (3) appropriately, this area can be made equal to the relative frequency of the result xk. Then the sum of the areas of all the steps, being the sum of the relative frequencies of all results, must equal one:

Σk A·W(xk)·0.1 = 1. (4)

From here we find A = 10. Condition (4) is called the normalization condition for function (3).
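The relative frequencies and the normalization constant A can be checked directly from the twelve readings given above (a small Python sketch; the step width 0.1 is one scale division):

```python
from collections import Counter

readings = [13.4, 13.2, 13.3, 13.4, 13.3, 13.2,
            13.1, 13.3, 13.3, 13.2, 13.3, 13.1]
counts = Counter(readings)
n = len(readings)

# Relative frequencies W(x_k)
W = {x: c / n for x, c in counts.items()}
print(W[13.3])   # 5/12 ~ 0.417

# Normalization (4): the total step area A * sum(W) * 0.1 must equal 1
width = 0.1
A = 1 / (width * sum(W.values()))
print(A)         # ~10
```

Since the frequencies W(xk) already sum to one, the constant reduces to A = 1/0.1 = 10, exactly as in the text.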

If one makes several series of n measurements each, then for small n the relative frequencies of the same value xk found from different series can differ significantly from each other. As the number of measurements in a series increases, the fluctuations of the values W(xk) decrease, and these values approach a certain constant number, which is called the probability of the result xk and is denoted by P(xk).

Let us assume that, in making the experiment, we do not read the result to whole scale divisions or their fractions, but can fix the exact point where the pointer stopped. Then, for an infinitely large number of measurements, the pointer will visit every point of the scale. The distribution of measurement results in this case becomes continuous and is described by a continuous curve y = f(x) instead of a stepped histogram. Based on the properties of random errors, it can be concluded that the curve must be symmetric and, therefore, that its maximum falls on the arithmetic mean of the measurement results, which is equal to the true value of the measured quantity. In the case of a continuous distribution of measurement results, it makes no sense to speak of the probability of any particular value, because there are values arbitrarily close to the one under consideration. We should instead ask for the probability of meeting, during the measurements, a result in a certain interval around the value xk, say (xk − Δx/2, xk + Δx/2). Just as on the histogram the relative frequency of the result xk was equal to the area of the step built over it, on the graph of a continuous distribution the probability of finding the result in the interval (xk − Δx/2, xk + Δx/2) is equal to the area of the curvilinear trapezoid constructed over this interval and bounded from above by the curve f(x). The mathematical notation of this result is

P(xk − Δx/2 < x < xk + Δx/2) ≈ f(xk)·Δx,

if Δx is small, i.e. the area of the hatched curvilinear trapezoid is replaced by the approximate area of a rectangle with the same base and a height equal to f(xk). The function f(x) is called the probability density of the distribution of measurement results. The probability of finding x in some interval is equal to the probability density for the given interval multiplied by its length.

The distribution curve of the measurement results obtained experimentally on a certain section of the instrument scale, if continued so that it asymptotically approaches the abscissa axis on the left and right, is well described analytically by a function of the form

f(x) = (1 / (σ·sqrt(2π))) · exp(−(x − X)² / (2σ²)). (5)

Just as the total area of all the steps on the histogram was equal to one, the entire area between the curve f(x) and the abscissa axis, which has the meaning of the probability of meeting at least some value of x during the measurements, is also equal to one. The distribution described by this function is called the normal distribution. The main parameter of the normal distribution is the variance σ². The approximate value of the dispersion can be found from the measurement results using the formula

σ² ≈ Sn² = (1/(n − 1)) Σ (xi − x̄)². (6)

This formula gives a value of the dispersion close to the real one only for a large number of measurements. For example, σ² found from the results of 100 measurements may deviate from the actual value by 15%, and found from 10 measurements, by as much as 40%. The variance determines the shape of the normal distribution curve. When the random errors are small, the dispersion, as follows from (6), is small; the curve f(x) is then narrower and sharper near the true value X and tends to zero faster, when moving away from it, than for large errors. The figure below shows how the shape of the curve f(x) for a normal distribution changes depending on σ.

In probability theory it is proved that if we consider not the distribution of the measurement results but the distribution of the arithmetic means found from series of n measurements each, then it also obeys the normal law, but with a dispersion that is n times smaller.

The probability of finding the measurement result in a certain interval (X − ΔX, X + ΔX) near the true value of the measured quantity is equal to the area of the curvilinear trapezoid built over this interval and bounded from above by the curve f(x). The width of the interval is usually measured in units proportional to the square root of the variance, ΔX = kσ. Depending on the value of k, the interval cuts off a curvilinear trapezoid of larger or smaller area, i.e.

P(X − kσ < x < X + kσ) = F(k), (7)

where F(k) is some function of k. Calculations show that

k = 1, F(k) = 0.68;
k = 2, F(k) = 0.95;
k = 3, F(k) = 0.997.

This shows that the interval (X − 2σ, X + 2σ) accounts for approximately 95% of the area under the curve f(x). This fact is in full agreement with the second property of random errors, which states that large errors are unlikely: errors greater than 2σ occur with a probability of less than 5%. Expression (7), rewritten for the distribution of the arithmetic mean of n measurements, takes the form

P(X − kσ/sqrt(n) < x̄ < X + kσ/sqrt(n)) = F(k). (8)
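The values of F(k) quoted above follow from the Gaussian integral; in Python they can be obtained with the error function, using the standard identity F(k) = erf(k/sqrt(2)) (the identity itself is not stated in the text):

```python
import math

def F(k):
    # Probability that the result falls within k sigma of the true value
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(F(k), 3))
# 1 0.683
# 2 0.954
# 3 0.997
```

Rounded to two figures these are the 0.68, 0.95, and 0.997 levels used throughout the text.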

The value of σ in (7) and (8) can be determined from the measurement results only approximately, by formula (6); denote this estimate Sn. Substituting it into expression (8), we obtain on the right not F(k) but some new function S(tn, n), which depends not only on the size of the considered interval of values of X but also on the number of measurements n, because only for a very large number of measurements does formula (6) become sufficiently accurate.

Having solved the system of two inequalities in the brackets on the left-hand side of this expression with respect to the true value of X, we can rewrite it in the form

P(x̄ − tn·Sn/sqrt(n) < X < x̄ + tn·Sn/sqrt(n)) = S(tn, n). (9)

Expression (9) determines the probability with which the true value of X lies in an interval of length 2·tn·Sn/sqrt(n) about the value x̄. In the theory of errors this probability is called the reliability, and the corresponding interval for the true value is called the confidence interval. The function S(tn, n) has been calculated as a function of tn and n, and a detailed table has been compiled for it. The table has two inputs: P and n. With its help, for a given number of measurements n and a specified reliability P, one can find the value tn, called Student's coefficient.

An analysis of the table shows that, for a given number of measurements, the requirement of higher reliability leads to growing values of tn, i.e. to a wider confidence interval. A reliability equal to one would correspond to a confidence interval equal to infinity. For a given reliability, we can make the confidence interval for the true value narrower by increasing the number of measurements, since Sn does not change much, while the half-width tn·Sn/sqrt(n) decreases both through the decreasing numerator and through the increasing denominator. By making a sufficient number of experiments, one can make the confidence interval arbitrarily small. However, for large n a further increase in the number of experiments reduces the confidence interval very slowly, while the amount of computational work grows considerably. Sometimes in practical work it is convenient to use an approximate rule: to reduce the confidence interval found from a small number of measurements by several times, the number of measurements must be increased by the same factor.

EXAMPLE OF DIRECT MEASUREMENT RESULTS PROCESSING

Let us take as experimental data the first three of the 12 results from which the histogram of X was built: 13.4; 13.2; 13.3.

Let us specify the reliability usually accepted in the educational laboratory, P = 95%. From the table for P = 0.95 and n = 3 we find tn = 4.3.

For these data x̄ = 13.30 and Sn/sqrt(n) ≈ 0.06, so ΔX = 4.3 · 0.06 ≈ 0.25,

or

13.05 < X < 13.55

with 95% reliability. The last result is usually written as the equality

X = (13.30 ± 0.25), P = 0.95.

If a confidence interval of this size is not satisfactory (for example, when the instrumental error is 0.1) and we want to halve it, we should double the number of measurements.

If we take, for example, the last six values of the same 12 results (for the first six it is proposed to do the calculation yourself),

X: 13.1; 13.3; 13.3; 13.2; 13.3; 13.1,

then x̄ = 13.22 and Sn/sqrt(n) ≈ 0.04.

The value of the coefficient tn is found from the table for P = 0.95 and n = 6: tn = 2.6.

In this case ΔX = 2.6 · 0.04 ≈ 0.10, and X = (13.22 ± 0.10), P = 0.95.

Let us plot the confidence intervals for the true value in the first and second cases on the numerical axis.







The interval calculated from 6 measurements is, as expected, within the interval found from three measurements.
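The two confidence intervals can be reproduced numerically and the nesting checked (a Python sketch; the data and t-coefficients are those of the example above):

```python
import math

def conf_interval(samples, t):
    # Returns (lower, upper) bounds of the confidence interval
    n = len(samples)
    mean = sum(samples) / n
    s_mean = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n * (n - 1)))
    dx = t * s_mean
    return mean - dx, mean + dx

lo3, hi3 = conf_interval([13.4, 13.2, 13.3], t=4.3)                     # n = 3
lo6, hi6 = conf_interval([13.1, 13.3, 13.3, 13.2, 13.3, 13.1], t=2.6)   # n = 6

# The 6-measurement interval lies inside the 3-measurement one
print(lo3 <= lo6 and hi6 <= hi3)  # True
```

Doubling the number of measurements here more than halves the half-width, because at small n the Student coefficient itself drops quickly (4.3 → 2.6).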

The instrumental error introduces a systematic error into the results, which expands the confidence intervals depicted on the axis by 0.1. Therefore, the results written taking into account the instrumental error have the form

1) X = (13.30 ± 0.35), P = 0.95;
2) X = (13.22 ± 0.20), P = 0.95.

The order of processing the results of direct measurements

1. Before processing the measurement results, set the value of the confidence probability α (usually 0.9 or 0.95).

2. Analyze the table of recorded results and identify possible misses. Such results should be discarded.

3. Calculate the arithmetic mean of the series of measurements:

‹А› = (1/n) Σ Аi, (1)

where n is the number of measurements and Аi is the result of the i-th measurement.

4. Find the errors of individual measurements:

ΔАi = Аi − ‹А›. (2)

5. Calculate the root-mean-square error of the arithmetic mean of the result of a series of measurements:

S(А) = sqrt( Σ (ΔАi)² / (n(n − 1)) ). (3)

6. Estimate the contribution of random errors to the half-width of the confidence interval:

ΔАc = t(n,α)·S(А), (4)

where t(n,α) - Student's coefficient (Table 1).

Table 1 - Student's coefficients for different values of the confidence probability α and different numbers of experiments n

α \ n (columns begin at n = 2)
0.90:  6.3  2.9  2.4  2.1  2.0  1.9  1.9  1.9  1.8  1.8  1.8  1.7  1.7  1.7  1.7
0.95: 12.7  4.3  3.2  2.8  2.6  2.4  2.4  2.3  2.3  2.2  2.2  2.1  2.1  2.0  2.0
0.99: 63.7  9.9  5.8  4.6  4.0  3.7  3.5  3.4  3.3  3.2  3.1  2.9  2.8  2.8  2.7

7. Determine the instrument error ΔАpr (the absolute error of the device is indicated in the device passport or is calculated from the accuracy class of the device).

8. Find the half-width of the confidence interval (absolute error) of the measured value using the approximate formula:

ΔА = sqrt( (ΔАc)² + (ΔАpr)² ). (5)

(More precise formulas for processing the results of direct measurements are given, for example, in).

9. Record the measurement result as a confidence interval:

А = (‹А› ± ΔА) units, α = … (6)

10. Determine the relative error:

Е = (ΔА / ‹А›) · 100%. (7)


Estimation of errors in measurement results

Measurement errors and their types

Any measurements are always made with some errors associated with the limited accuracy of measuring instruments, a wrong choice or error of the measurement method, the physiology of the experimenter, the features of the measured objects, changes in the measurement conditions, etc. Therefore, the measurement task includes finding not only the quantity itself but also the measurement error, i.e. the interval in which the true value of the measured quantity is most likely to be found. For example, when measuring a time interval t with a stopwatch with a division value of 0.2 s, we can say that its true value lies in the interval from t − 0.1 s to t + 0.1 s. Let X₀ and X be the true and measured values of the investigated quantity, respectively. The value ΔX = |X − X₀| is called the absolute error of the measurement, and the expression ΔX/X₀, characterizing the measurement accuracy, is called the relative error.

It is quite natural for the experimenter to strive to make every measurement with the greatest attainable accuracy, but such an approach is not always expedient. The more accurately we want to measure this or that quantity, the more complex the instruments we must use, the more time these measurements will require. Therefore, the accuracy of the final result should correspond to the purpose of the experiment. The theory of errors gives recommendations on how measurements should be taken and how results should be processed so that the margin of error is as small as possible.

All errors arising during measurements are usually divided into three types - systematic, random and misses, or gross errors.

Systematic errors are due to the limited accuracy of manufacture of the instruments (instrument errors), shortcomings of the chosen measurement method, inaccuracy of the calculation formula, incorrect installation of the device, etc. Thus, systematic errors are caused by factors that act in the same way when the same measurements are repeated many times. The value of this error is systematically repeated or changes according to a certain law. Some systematic errors can be eliminated (in practice this is not always easy to achieve) by changing the measurement method, introducing corrections to the instrument readings, and taking into account the constant influence of external factors.

Although the systematic (instrumental) error in repeated measurements gives a deviation of the measured value from the true value in one direction, we never know in which. Therefore the instrumental error is written with a double sign, ±δ.

Random errors are caused by a large number of random causes (changes in temperature, pressure, shaking of the building, etc.) whose effect on each measurement is different and cannot be taken into account in advance. Random errors also occur because of the imperfection of the experimenter's sense organs, and because of the properties of the measured object itself.

It is impossible to exclude the random errors of individual measurements, but the influence of these errors on the final result can be reduced by carrying out multiple measurements. If the random error turns out to be significantly smaller than the instrumental (systematic) error, there is no point in further reducing it by increasing the number of measurements. If the random error is greater than the instrumental one, the number of measurements should be increased in order to make the random error smaller than, or of the same order as, the instrumental error.

Mistakes, or blunders, are incorrect readings on the device, incorrect recording of a reading, etc. As a rule, misses due to these causes are clearly visible, since the corresponding readings differ sharply from the others. Misses must be eliminated by control measurements. Thus, the width of the interval in which the true values of the measured quantities lie is determined only by random and systematic errors.

2. Estimation of the systematic (instrumental) error

For direct measurements, the value of the measured quantity is read directly from the scale of the measuring instrument. The reading error can reach several tenths of a scale division. Usually in such measurements the magnitude of the systematic error is taken equal to half the scale division of the instrument. For example, when measuring with a caliper whose division value is 0.05 mm, the instrumental measurement error is taken equal to 0.025 mm.

Digital measuring instruments give the value of the quantities they measure with an error equal to one unit of the last digit on the instrument's scale. So, if a digital voltmeter shows a value of 20.45 mV, the absolute error of the measurement is 0.01 mV.

Systematic errors also arise when constant values determined from tables are used. In such cases the error is taken equal to half of the last significant digit. For example, if the table gives the density of steel as 7.9·10³ kg/m³, the absolute error is taken as 0.05·10³ kg/m³.

For indirect measurements, when the sought quantity is a function f(x1, x2, …, xn) of directly measured quantities, the systematic error is estimated with the formula

Δf = sqrt( Σi (∂f/∂xi)² (Δxi)² ), (1)

where ∂f/∂xi are the partial derivatives of the function with respect to the variable xi, and Δxi are the instrumental errors of the direct measurements.

As an example, consider the error of an indirect measurement of the volume of a cylinder, V = (π d²/4)·h.

The partial derivatives with respect to the variables d and h are equal to

∂V/∂d = (π/2)·d·h,   ∂V/∂h = (π/4)·d².

Thus, the formula for the absolute systematic error in measuring the volume of a cylinder in accordance with (1) has the form

ΔV = sqrt( ((π/2)·d·h)²·(Δd)² + ((π/4)·d²)²·(Δh)² ),

where Δd and Δh are the instrumental errors in measuring the diameter and height of the cylinder.
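The error formula for the cylinder volume can be evaluated directly. In the sketch below the diameter and height readings are illustrative assumptions; the instrument errors are half a division of the caliper (0.025 mm) and, by the same half-division rule, half a division of the micrometer (0.005 mm):

```python
import math

def cylinder_volume_error(d, h, dd, dh):
    """Systematic error of V = pi*d^2*h/4 by formula (1).
    d, h: measured diameter and height; dd, dh: their instrument errors."""
    dV_dd = math.pi * d * h / 2    # partial derivative with respect to d
    dV_dh = math.pi * d ** 2 / 4   # partial derivative with respect to h
    return math.sqrt((dV_dd * dd) ** 2 + (dV_dh * dh) ** 2)

# Assumed readings: d = 10.00 mm, h = 50.00 mm
dV = cylinder_volume_error(10.0, 50.0, 0.025, 0.005)
V = math.pi * 10.0 ** 2 * 50.0 / 4
print(round(V, 1), round(dV, 1))
```

Note that here almost all of the error comes from the diameter term, since d enters the volume squared and the caliper is the coarser instrument.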

3. Random error estimation.

Confidence Interval and Confidence Probability

As noted above, random errors of individual measurements obey the normal (Gaussian) law. The density of the normal distribution,

f(Δx) = (1 / (σ·sqrt(2π))) · exp(−(Δx)² / (2σ²)), (2)

is the distribution function of random errors; it characterizes the probability of the occurrence of an error Δx, and σ is the root-mean-square error.

The value σ is not a random variable; it characterizes the measurement process. If the measurement conditions do not change, σ remains constant. The square of this quantity is called the dispersion of the measurements. The smaller the dispersion, the smaller the spread of the individual values and the higher the measurement accuracy.

The exact value of the root-mean-square error σ, like the true value of the measured quantity, is unknown. There is a so-called statistical estimate of this parameter, according to which the mean square error is taken equal to the mean square error of the arithmetic mean,

S⟨x⟩ = sqrt( Σ (xi − ‹x›)² / (n(n − 1)) ), (3)

where ‹x› is the arithmetic mean of the obtained values xi, and n is the number of measurements.

The greater the number of measurements, the smaller S⟨x⟩ and the closer the mean is to the true value. If we denote the random absolute error by Δx, the measurement result will be written in the form

x = ‹x› ± Δx, α = …,

where α is the confidence probability. The interval from ‹x› − Δx to ‹x› + Δx, which contains the true value of the measured quantity μ, is called the confidence interval. To find the confidence interval and the confidence probability for a small number of measurements, which is what we deal with in laboratory work, Student's probability distribution is used. This is the probability distribution of a random variable called Student's coefficient; it gives the value of the confidence interval in fractions of the root-mean-square error of the arithmetic mean:

Δx = t(α, n)·S⟨x⟩.

The probability distribution of this quantity does not depend on σ², but depends essentially on the number of experiments n. With an increase in the number of experiments n, Student's distribution tends to the Gaussian distribution.

The distribution function is tabulated (Table 1). The value of Student's coefficient is at the intersection of the row corresponding to the number of measurements n and the column corresponding to the confidence probability α.

Table 1.

Using the data in the table, you can:

1) determine the confidence interval, given a certain probability;

2) choose a confidence interval and determine the confidence level.

For indirect measurements, the root-mean-square error of the arithmetic mean of the function y = f(x1, x2, …, xn) is calculated by the formula

S⟨y⟩ = sqrt( Σi (∂f/∂xi)² · S²⟨xi⟩ ). (5)

Confidence interval and confidence probability are determined in the same way as in the case of direct measurements.

Estimation of the total measurement error. Recording the final result.

The total error of the measurement result of the X value will be defined as the root mean square value of the systematic and random errors

Δ = sqrt( (δx)² + (ΔX)² ), (6)

where δx is the instrumental error and ΔX is the random error.

X can be either a directly or indirectly measured quantity.

X = ‹X› ± Δ, α = …, Е = … (7)

It should be borne in mind that the formulas of the theory of errors are themselves valid for a large number of measurements. Therefore, for small n, the value of the random, and consequently of the total, error is determined with a large error itself. When calculating ΔX with a small number of measurements, it is recommended to keep one significant figure if it is greater than 3, and two if the first significant figure is less than 3. For example, if ΔX = 0.042, we discard the 2 and write ΔX = 0.04; and if ΔX = 0.123, we write ΔX = 0.12.

The number of decimal places of the result and of the total error must be the same. Therefore, the arithmetic mean is first calculated with one digit more than the measurement, and when recording the result its value is refined to the number of digits of the total error.
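The rounding rule for the error can be written as a small helper (a sketch; the rule "one significant figure if it is greater than 3, otherwise two" is taken verbatim from the text):

```python
import math

def round_error(dx):
    # Keep one significant figure if the first figure is > 3, else two
    exponent = math.floor(math.log10(abs(dx)))
    first_digit = int(abs(dx) / 10 ** exponent)
    digits = 1 if first_digit > 3 else 2
    return round(dx, -exponent + digits - 1)

print(round_error(0.042))  # 0.04
print(round_error(0.123))  # 0.12
```

The mean should then be rounded to the same decimal place as the error before writing the final record.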

4. Methodology for calculating measurement errors.

Errors of direct measurements

When processing the results of direct measurements, it is recommended to adopt the following order of operations.

Measurements of a given physical parameter are carried out n times under the same conditions, and the results are recorded in a table. If the results of some measurements differ sharply in value from the rest, they are discarded as misses unless they are confirmed on verification. The arithmetic mean of the n measurements is calculated; it is taken as the most probable value of the measured quantity:

‹X› = (1/n) Σ Xi.

The absolute errors of the individual measurements are found: ΔXi = Xi − ‹X›. The squares of the absolute errors of the individual measurements, (ΔXi)², are calculated. The root-mean-square error of the arithmetic mean is determined:

S⟨X⟩ = sqrt( Σ (ΔXi)² / (n(n − 1)) ).

The value of the confidence probability α is set. In the laboratories of the workshop it is customary to set α = 0.95. Student's coefficient is found for the given confidence probability α and the number of measurements n (see the table). The random error is determined: ΔX = t(α, n)·S⟨X⟩.

The total error is determined by formula (6).

The relative error of the measurement result is estimated:

Е = (Δ / ‹X›) · 100%.

The final result is written as

X = ‹X› ± Δ, with α = …, Е = …%.

5. Error of indirect measurements

When evaluating the true value of an indirectly measured quantity y = f(x1, x2, …, xn), two methods can be used.

The first way is used if the quantity y is determined under various experimental conditions. In this case y is calculated for each set of values, and then the arithmetic mean of all values yi is determined:

‹y› = (1/n) Σ yi.

The systematic (instrumental) error is found from the known instrumental errors of all measurements by formula (10). The random error in this case is defined as the error of a direct measurement.

The second way applies if the function y is determined several times from the same measurements. In this case the value ‹y› is calculated from the mean values ‹xi›. The systematic (instrumental) error, as in the first method, is found on the basis of the known instrumental errors of all measurements according to the formula

Δy = sqrt( Σi (∂y/∂xi)² (δxi)² ). (10)

To find the random error of an indirect measurement, the root-mean-square errors of the arithmetic means of the individual measurements are first calculated; then the root-mean-square error of y is found. Setting the confidence probability α and finding Student's coefficient, the final result is written as

y = ‹y› ± Δy, with α = …, Е = …%.

6. An example of designing a laboratory work

Lab #1

CYLINDER VOLUME DETERMINATION

Accessories: vernier caliper with a division value of 0.05 mm, a micrometer with a division value of 0.01 mm, a cylindrical body.

Objective: familiarization with the simplest physical measurements, determining the volume of a cylinder, calculating the errors of direct and indirect measurements.

Take at least 5 measurements of the cylinder diameter with a caliper, and its height with a micrometer.

Calculation formula for the volume of a cylinder:

V = (π d² / 4) h,

where d is the diameter of the cylinder and h is its height.

Measurement results

Table 2

Measurement No.    d, mm    h, mm

4. Calculation of the total error

Absolute error

ΔV = V · sqrt( (2Δd/d)² + (Δh/h)² ).

5. Relative error, or measurement accuracy

E = (ΔV / V) · 100%; E = 0.5%.

6. Recording the final result

The final result for the quantity under study is written as

V = (V̄ ± ΔV) mm³, with α = 0.95, E = 0.5%.

Note. In the final record, the number of decimal places in the result and in the absolute error must be the same.
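The whole calculation for this laboratory work can be sketched as follows; the mean values of d and h and their total errors are hypothetical stand-ins for the entries of Table 2:

```python
import math

# Hypothetical mean values and total errors of the direct measurements
d, delta_d = 10.00, 0.02   # diameter, mm (caliper)
h, delta_h = 40.00, 0.01   # height, mm (micrometer)

V = math.pi * d ** 2 / 4 * h                                # V = (pi d^2 / 4) h
E = math.sqrt((2 * delta_d / d) ** 2 + (delta_h / h) ** 2)  # relative error of V
delta_V = V * E                                             # absolute error of V

print(f"V = ({V:.0f} ± {delta_V:.0f}) mm^3, E = {E * 100:.1f}%")
# → V = (3142 ± 13) mm^3, E = 0.4%
```

Note that the diameter enters the relative error with a factor of 2 because V depends on d², which is why the diameter should be measured with particular care.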

7. Graphical representation of measurement results

The results of physical measurements are very often presented in graphical form. Graphs have a number of important advantages and valuable properties:

a) make it possible to determine the type of functional dependence and the limits in which it is valid;

b) make it possible to visually compare the experimental data with the theoretical curve;

c) when a graph is constructed, jumps in the course of the function caused by random errors are smoothed out;

d) make it possible to determine certain quantities or carry out graphical differentiation, integration, solution of an equation, etc.

Graphs, as a rule, are drawn on special paper (millimetric, logarithmic, semi-logarithmic). It is customary to plot the independent variable, i.e., the quantity whose value is set by the experimenter, along the horizontal axis, and the quantity that is determined in the experiment along the vertical axis. It should be borne in mind that the intersection of the coordinate axes does not have to coincide with the zero values of x and y. When choosing the origin of coordinates, one should see to it that the entire area of the drawing is fully used (Fig. 2).

On the coordinate axes of the graph, not only the names or symbols of the quantities are indicated, but also the units of their measurement. The scale along the coordinate axes should be chosen so that the measured points are located over the entire area of ​​the sheet. At the same time, the scale should be simple, so that when plotting points on a graph, one does not perform arithmetic calculations in the mind.

Experimental points should be plotted on the graph accurately and clearly. It is useful to plot points obtained under different experimental conditions (for example, on heating and on cooling) in different colors or with different symbols. If the error of the experiment is known, then instead of a point it is better to draw a cross or a rectangle whose dimensions along the axes correspond to this error. It is not recommended to connect the experimental points with a broken line. The curve on the graph should be drawn smoothly, making sure that the experimental points lie both above and below the curve, as shown in Fig. 3.

When plotting graphs, in addition to a coordinate system with a uniform scale, the so-called functional scales are used. By choosing the appropriate x and y functions, you can get a simpler line on the graph than with the usual construction. Often this is necessary when selecting a formula for a given graph to determine its parameters. Functional scales are also used in cases where it is necessary to stretch or shorten any part of the curve on the graph. Most often, from the functional scales, the logarithmic scale is used (Fig. 4).

1. Objective: study of methods for measuring physical quantities, practical methods of processing and analysis of measurement results, and the study of verniers.

2. Brief theory

Methods for measuring physical quantities. Measurement errors

Measurement in the broad sense of the word is an operation by which a numerical ratio is established between the measured value and a pre-selected measure. We will consider the measurement of physical quantities.

A physical quantity is a property that is qualitatively common to many objects (physical systems, their states and processes occurring in them), but quantitatively - individual for each physical object.

To measure a physical quantity means to compare it with another, homogeneous quantity, taken as a unit of measurement.

To measure physical quantities, various technical means are used, specially designed for this purpose and having normalized metrological properties.

Let us explain some of these measuring instruments.

A measure is a measuring instrument in the form of a body or a device designed to reproduce one or more quantities whose values are known with the accuracy necessary for the measurements. Examples of measures are a weight, a measuring flask, and a scale ruler.

Unlike a measure, a measuring instrument does not reproduce a known value of a quantity. Instead, the measured quantity is converted into an indication or a signal proportional to it, in a form suitable for direct perception. Examples of measuring instruments are an ammeter, a voltmeter, a thermocouple, etc.



Measurements of physical quantities may differ from each other by features of a technical or methodological nature. From a methodological point of view, measurements of physical quantities lend themselves to a certain systematization. They can, for example, be divided into direct and indirect.

If the measured value is directly compared with the corresponding unit of its measurement or is determined by reading the readings of the measuring instrument, graduated in the appropriate units, then such a measurement is called direct. For example, measurements of wire thickness with a micrometer, time interval with a stopwatch, current strength with an ammeter are direct.

Most physical quantities are measured indirectly. An indirect measurement is such a measurement in which the desired physical quantity is not directly measured, but is calculated from the results of direct measurements of some auxiliary quantities associated with the desired quantity by a certain functional dependence.

Any measurement of a physical quantity yields results that inevitably contain errors. These errors are due to a variety of causes (imperfection of measures and measuring instruments, imperfection of our sense organs). The measurement results are therefore only approximate, more or less close to the true values of the measured quantities.

The difference between the true value X of the measured quantity and the actually measured value x is called the true absolute error, or measurement error:

ΔX = X − x.
The ratio of the true absolute error to the true value X of the measured quantity is called the true relative measurement error:

δ = ΔX / X.
Relative error is a dimensionless quantity; it is expressed in fractions of a unit or in percent and therefore makes it possible to compare the accuracy of measurements of quantities that are independent of each other (for example, the accuracy of measuring the diameter and the height of a cylinder).
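A two-line illustration of why relative error allows such comparisons; the numerical values are hypothetical:

```python
# Equal absolute errors on two different quantities (hypothetical values, mm)
d, delta_d = 10.0, 0.05
h, delta_h = 40.0, 0.05

E_d = delta_d / d * 100   # relative error of the diameter, %
E_h = delta_h / h * 100   # relative error of the height, %
# Same absolute error, but the diameter is measured four times less accurately
print(E_d, E_h)           # → 0.5 0.125
```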

Since no measurement can give the true value of the measured quantity, the task of measuring any physical quantity is to find the approximate most probable value of this quantity, as well as to determine and evaluate the error made in this case.

Errors that occur when measuring physical quantities are divided into three groups: gross, systematic, and random. Gross errors (misses) are errors that clearly distort the measurement results. Their causes may be malfunctions of the experimental setup or of a measuring instrument, but most often they result from the experimenter's own mistakes: an incorrectly determined division value of the measuring device, a misreading of the divisions on the instrument scale, an erroneous record of the result of a direct measurement, and so on. In what follows we will assume that the measurements contain no gross errors (misses).

Systematic errors are due to the action of factors constant in magnitude and direction: for example, inaccuracy in the manufacture of measures, incorrect graduation of scales or incorrect installation of measuring instruments, as well as a constant, one-sided influence of some external factor on the measured quantity or on the measuring setup.

With repeated measurements of a given value under the same conditions, the systematic error is repeated each time, having the same value and sign, or changes according to a certain law. With a careful analysis of the principle of operation of the instruments used, the measurement technique and the surrounding conditions, systematic errors can either be eliminated in the measurement process itself, or taken into account in the final measurement result, making an appropriate correction.

Random errors are caused by the action of a large number of diverse and, as a rule, variable factors, which for the most part cannot be taken into account or controlled and which manifest themselves differently in each individual measurement. Because the cumulative action of these factors is disordered, it is impossible to foresee the appearance of a random error or to predict its magnitude and sign. An error of this kind is called random because its appearance is a matter of chance: it does not follow from the given conditions of the experiment and may or may not occur.

Random errors manifest themselves in the fact that, under unchanged experimental conditions and with systematic errors completely eliminated, the results of repeated measurements of the same quantity turn out to differ somewhat from each other. For the reasons given above, random errors cannot be excluded from the measurement results in the way systematic errors can.

Distribution law of random errors

It is impossible to completely avoid or eliminate completely random errors, since the factors that cause them cannot be taken into account and are of a random nature. The question arises: how to reduce the influence of random errors on the final measurement result and how to evaluate the accuracy and reliability of the latter? The answer to this question is given by the theory of probability. Probability theory is a mathematical science that explains the patterns of random events (phenomena) that manifest themselves under the action of a large number of random factors.

Random measurement errors belong to the class of continuous quantities, which are characterized by an uncountable set of possible values. The probability of any single value of a continuous random variable is infinitesimal. Therefore, to reveal the probability distribution of a continuous random variable, for example the error Δx, one considers a number of intervals of its values and calculates the frequency with which the values fall into each interval. A table that lists the intervals in the order of their position along the x-axis together with the corresponding frequencies is called a statistical series (Table 1).

Table 1

Intervals       . . . . . . . . . . . . . .
Frequencies p*  . . . . . . . . . . . . . .

A statistical series is graphically represented as a step curve, which is called a histogram. When constructing a histogram, the intervals of possible values ​​of a random variable are plotted along the abscissa axis, and the frequencies or the number of cases when the value of a random variable falls within this interval are plotted along the ordinate axis. For most random errors of interest to us, the histogram has the form shown in Fig. 1. In this figure, the height and, consequently, the area of ​​the rectangle for each error interval are proportional to the number of experiments in which this error was observed.

With an increase in the number of experiments (measurements) and a decrease in the interval of splitting the abscissa axis, the histogram loses its stepped character and tends (transitions) to a smooth curve (Fig. 2). Such a curve is called the distribution density curve for a given random variable, and the equation describing this curve is called the distribution law of the random variable.
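The passage from a histogram to a smooth density curve can be simulated; everything here (the sample size, the interval grid, and the underlying Gaussian errors) is an illustrative assumption:

```python
import math
import random

random.seed(0)

# Simulated random errors, normally distributed with mean 0 and sigma 1
errors = [random.gauss(0.0, 1.0) for _ in range(10_000)]

lo, hi, k = -4.0, 4.0, 16          # 16 equal intervals covering [-4, 4]
width = (hi - lo) / k
freq = [0] * k                     # the "statistical series": hits per interval
for e in errors:
    i = int((e - lo) / width)
    if 0 <= i < k:                 # discard the rare values outside [-4, 4]
        freq[i] += 1

# Normalized histogram heights approximate the distribution density curve
rel = [f / len(errors) / width for f in freq]

# Compare one bar with the Gaussian density at that bin's centre
centre = lo + width * (k // 2 + 0.5)
density = math.exp(-centre ** 2 / 2) / math.sqrt(2 * math.pi)
print(round(rel[k // 2], 2), round(density, 2))
```

Increasing the sample size and shrinking the interval width brings the normalized bar heights ever closer to the smooth density curve, which is exactly the limiting process described above.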

It is considered that a random variable is completely determined if the law of its distribution is known. This law can be represented (specified) in integral or differential form. The integral distribution law of a random variable is denoted by F(x) and is called the distribution function. The derivative of F(x) is called the probability density of the random variable X, or the differential distribution law:

f(x) = dF(x)/dx.

When solving many practical problems, there is no need to characterize a random variable exhaustively. It is enough to indicate only some of its numerical characteristics, for example, its mathematical expectation (written M(X)) and its variance (written D(X)).

For a continuous random variable X with probability density f(x), the mathematical expectation is calculated by the formula

M(X) = ∫ x f(x) dx, (3)

where the integral is taken from −∞ to +∞.

For a continuous random variable X, the variance is determined by the formula

D(X) = ∫ (x − M(X))² f(x) dx, (4)

with the integral again taken from −∞ to +∞.

The positive square root of the variance is denoted by σ and is called the standard deviation (root-mean-square deviation):

σ = sqrt(D(X)). (5)
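Formulas (3)-(5) can be checked numerically for a concrete density; here a normal density with assumed parameters a = 1 (expectation) and s = 2 (standard deviation) is integrated with a simple midpoint rule:

```python
import math

a, s = 1.0, 2.0   # assumed parameters of the normal density

def f(x):
    """Normal (Gaussian) probability density with parameters a, s."""
    return math.exp(-(x - a) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

def integrate(g, lo=-50.0, hi=50.0, n=200_000):
    """Midpoint-rule integral of g over [lo, hi] (wide enough for the tails)."""
    w = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * w) for i in range(n)) * w

M = integrate(lambda x: x * f(x))              # formula (3): expectation
D = integrate(lambda x: (x - M) ** 2 * f(x))   # formula (4): variance
sigma = math.sqrt(D)                           # formula (5): standard deviation
print(round(M, 3), round(sigma, 3))            # → 1.0 2.0
```

The numerical integrals recover the parameters a and s, confirming that for the normal law the expectation and the standard deviation coincide with its two parameters.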

With a finite number of experiments, the arithmetic mean of the observed (measured) values is taken as an estimate of the mathematical expectation, and the sample root-mean-square deviation as an estimate of σ. The mathematical expectation and the standard deviation are the parameters of the normal distribution, whose physical meaning and method of calculation were explained above.

When considering the properties and characteristics of the distribution of random errors, we will limit ourselves to the normal law, since random measurement errors are most often distributed normally (according to the Gauss law). This means that:

1) a random measurement error can take any value in the interval (−∞, +∞);

2) random errors equal in absolute value, but opposite in sign, are equally likely, that is, they occur equally often;

3) the larger the absolute value of random errors, the less likely they are, that is, they are less common.