Using Uncertainties with Graphs
In our previous lab, we were able to use graphical methods to choose between two proposed functional relations, because one plot of points clearly landed closer to a common line than the other. In this lab, we will be asked to test just a single "theory" (functional relation). In such a case, an experiment can never conclusively prove a theory, because there are always small errors in the measurements, which means the proposed function will never run exactly through all the data points. But if the function misses the data points badly, we certainly want to be able to discard the theory. So the question we need to answer is, "How do we know when the data points come 'close enough' to the theoretical function that we conclude the theory is worth keeping?"
When we draw a best-fit line through a set of data points in a graph, the line generally misses most or all of the points. This line presumably represents the theoretical linear relationship, and while it does not match the data points exactly, it doesn't have to – it only needs to come close to every point, and as we saw in the first lab, "close enough" is defined as being within the uncertainty. So if the best-fit line misses a data point by 0.12, and the uncertainty in the measurement of that data point is 0.23, then as far as that data point is concerned, the best-fit line is appropriate. Of course, there are many data points, each with its own uncertainty, and the line must pass within the allowed range of every data point in order to be acceptable.
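The single-point test described above can be sketched in a few lines of Python (the numbers are the ones from the example; the variable names are our own):

```python
# Acceptability check for one data point: the best-fit line is consistent
# with the point when its miss distance is within the measurement uncertainty.
residual = 0.12   # amount by which the line misses the data point
sigma = 0.23      # uncertainty in that measurement
acceptable = residual <= sigma
print(acceptable)  # True: the line comes "close enough" to this point
```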
It is common to display this graphically by plotting not only the measured data points, but also the points that represent one standard deviation above and below the data value – i.e. the range of "acceptable" values. So if a data point has an \(x\)-value of 1.4, with an uncertainty of \(\sigma_x=0.3\), then not only will the graph include the data point, but also two additional points with the same \(y\)-value that have \(x\)-values of 1.1 and 1.7. Usually a line segment (called an error bar) is drawn connecting these two outer points, to indicate the acceptable range for the graph of the function to pass through.
Figure 3.1.1 – Error Bars
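The endpoints of an error bar are simply the measured value shifted down and up by the uncertainty. A minimal Python sketch using the numbers from the example above (the variable names are ours):

```python
# Horizontal error bar for the example in the text:
# measured value x = 1.4, uncertainty sigma_x = 0.3
x, sigma_x = 1.4, 0.3
left_end = x - sigma_x    # ~1.1
right_end = x + sigma_x   # ~1.7
print(left_end, right_end)
```

In practice, a plotting library such as matplotlib draws these segments automatically via its `errorbar` routine, given the measured values and their uncertainties.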
Naturally the values measured for the vertical axis also have uncertainties, which result in vertical error bars as well. When one of these error bars is much smaller than the other, it is often left off the graph entirely; the range over which the data point can fluctuate is still represented well under the assumption that the measurement on the axis with the small uncertainty is essentially exact.
So given that a graph of the theoretical function must lie within the error bars of all the data points in order not to be rejected, one can think of the set of error bars produced by the data as a sort of "channel" through which the graph must pass. Assuming we have used our method of creating a straight line from our function, we are allowed to move and rotate the line all we want in an effort to fit it within this channel. If no amount of movement or rotation will do the trick, then the experiment refutes the theory. This also explains why we seek to reduce the uncertainties as much as possible: if the error bars are huge, then the channel they create will accommodate a very large number of graphs, and we are not able to narrow down the set of viable theories.
Figure 3.1.2 – Using Error Bar "Channel" to Confirm or Refute Theory
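The channel test above amounts to asking whether some choice of slope and intercept threads every error bar at once. A minimal sketch in Python, with made-up data values and uncertainties chosen purely for illustration:

```python
# "Channel" test: a candidate line y = m*x + b is acceptable only if it
# passes within the vertical error bar of every data point.

def line_fits(m, b, xs, ys, sigmas):
    """True if the line passes within one uncertainty of every point."""
    return all(abs((m * x + b) - y) <= s for x, y, s in zip(xs, ys, sigmas))

# Hypothetical measurements, roughly following y = 2x:
xs     = [1.0, 2.0, 3.0, 4.0]
ys     = [2.1, 3.9, 6.2, 7.8]
sigmas = [0.3, 0.3, 0.3, 0.3]   # vertical uncertainties (error-bar half-lengths)

print(line_fits(2.0, 0.0, xs, ys, sigmas))  # a slope-2 line threads the channel
print(line_fits(1.0, 0.0, xs, ys, sigmas))  # a slope-1 line misses badly
```

Moving and rotating the line corresponds to varying \(b\) and \(m\); the theory survives if any \((m, b)\) pair makes the check succeed for all points.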