# 4.1: Background Material


## Estimated Uncertainty

In Lab #1, we noted the importance of measuring and accounting for uncertainty in experimental results. In that case, we calculated uncertainty from a range of measurements of a single quantity (the landing positions of a marble relative to its average landing point). We called this a statistical uncertainty, which we said arises in cases where there is randomness in repeating the process (often due to human involvement). In this lab, the uncertainty will not come from this source, but rather from the limitations of our measuring devices (e.g. we can't measure distances to within microns using a meter stick). Rather than taking several runs, we will *estimate* uncertainties that are introduced into the experiment in this way, and we will use them to appropriately define the limitations of our experimental results.

## Percentage Uncertainty

Usually the uncertainty in a quantity has little meaning out of context. For example, if we are measuring the speed of an object, and compute the uncertainty in that speed to be \(\pm1.0\frac{cm}{s}\), then the level of our knowledge about this object's speed is quite impressive if we are talking about a bullet fired from a gun, and not so impressive if we are talking about a strolling tortoise. It is therefore useful to define *percentage uncertainty*, which is the ratio of the *absolute uncertainty* (whether it is statistical or estimated) to the quantity in question:

\[\text{percentage uncertainty in measured quantity } x = e_x = \dfrac{\sigma_x}{x}\]
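This definition can be sketched numerically. The speeds below are hypothetical round numbers chosen only to illustrate the bullet-versus-tortoise comparison; they are not data from the lab.

```python
# A sketch of percentage uncertainty: the same absolute uncertainty
# means very different things depending on the size of the quantity.

def percentage_uncertainty(value, absolute_uncertainty):
    """Return e_x = sigma_x / x, expressed as a percent."""
    return 100 * absolute_uncertainty / value

# The same +/- 1.0 cm/s uncertainty, applied to two very different speeds
# (hypothetical values):
bullet = percentage_uncertainty(40000.0, 1.0)  # bullet: ~40000 cm/s
tortoise = percentage_uncertainty(8.0, 1.0)    # tortoise: ~8 cm/s
print(f"bullet: {bullet:.4f}%  tortoise: {tortoise:.1f}%")
```

The bullet's speed is known to a few ten-thousandths of a percent, while the tortoise's speed is uncertain by over ten percent, even though the absolute uncertainty is identical.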

## Uncertainty Propagation

In this lab, we will not be measuring the physical quantity in question directly. Instead, we will measure multiple quantities, and put them together mathematically to compute what we are looking for. This presents us with a new problem – there will be uncertainties in all of our measurements, so how do we use these to determine the uncertainty of their combination? We will virtually never be adding or subtracting quantities, so we really only have to worry about how we deal with multiplying/dividing uncertain numbers and raising uncertain numbers to powers.

Without going into the mathematical details behind it, we will simply state that whenever two uncertain quantities are multiplied or divided, the percentage uncertainty in the product or ratio is found by computing the *quadrature* (a fancy word that means "treat them like the legs of a right triangle and use the Pythagorean theorem") of their individual percentage uncertainties:

\[\left.\begin{array}{c} z=x\cdot y \\ or \\ z=\dfrac{x}{y}\end{array}\right\} \;\;\;\Rightarrow\;\;\; e_z = \sqrt{e_x^2+e_y^2}\]

If the quantity we are calculating instead involves a power, then the rule is a little different. For example, if we have \(z=x^2\), it is *not* correct to simply use the quadrature formula above with \(x\) replacing the \(y\) (this would result in an uncertainty for \(z\) that is \(\sqrt{2}\) times the uncertainty of \(x\)). Instead, the rule is to *multiply the percentage uncertainty of the measured quantity by the power*:

\[z=x^n \;\;\;\Rightarrow\;\;\; e_z = n\cdot e_x\]
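The two propagation rules above can be sketched as a pair of short functions (the sample inputs are hypothetical percentages, not lab data):

```python
import math

def product_uncertainty(e_x, e_y):
    """Percentage uncertainty of z = x*y or z = x/y:
    the quadrature of the individual percentage uncertainties."""
    return math.sqrt(e_x**2 + e_y**2)

def power_uncertainty(e_x, n):
    """Percentage uncertainty of z = x**n:
    the power times the percentage uncertainty of x."""
    return n * e_x

print(product_uncertainty(1.0, 4.0))  # sqrt(17), about 4.1
print(power_uncertainty(3.0, 2))      # 6.0
```

Note that the power rule gives \(2e_x\) for \(z=x^2\), not the \(\sqrt{2}\,e_x\) that naively applying quadrature would give.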

## Weakest Link Rule

Given that this is a physics lab, we don't want to be spending all of our time doing uncertainty calculations, so we will employ a shortcut that will reduce our workload somewhat. For just about every case where we will need to propagate uncertainty associated with multiple measurements, one of the measurements will have a significantly larger percentage uncertainty than the others. Say for example that we make measurements of two quantities that are multiplied, where one of the percentage uncertainties is 1% and the other is 4%. Putting these together gives:

\[\left.\begin{array}{c} z=x\cdot y \\ e_x = 1\% \\ e_y = 4\%\end{array}\right\} \;\;\;\Rightarrow\;\;\; e_z = \sqrt{\left(1\%\right)^2+\left(4\%\right)^2} = \sqrt{17}\%=4.1\%\]

As you can see, the resulting percentage uncertainty differs very little from the larger of the two percentage uncertainties. We will therefore use a shortcut we call the *weakest link rule*, which consists of simply finding the component that has the largest percentage uncertainty, and using that as the total percentage uncertainty, without ever computing the quadrature. Note that we still need to include the power rule shown above, however. For example, if the quantity we are computing looks like \(z=xy^2\) and \(x\) has a 4% uncertainty, while the uncertainty of \(y\) is 3%, the square of \(y\) in the computation of \(z\) makes its 6% contribution the weakest link.
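The weakest link rule can be sketched in a few lines, using the \(z=xy^2\) example above:

```python
# Weakest link rule: apply the power rule to each measured quantity's
# percentage uncertainty, then simply take the largest contribution.

def weakest_link(contributions):
    """contributions: percentage uncertainties, each already multiplied
    by its power. The largest one approximates the total uncertainty."""
    return max(contributions)

# z = x * y**2, with e_x = 4% and e_y = 3%:
e_x = 4.0
e_y = 3.0
contributions = [1 * e_x, 2 * e_y]   # power rule: y**2 contributes 2*e_y
print(weakest_link(contributions))   # 6.0 -- y's contribution dominates
```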

## Comparing Two Uncertain Results

We know how to determine whether an experimental result agrees with an "exact" (theoretical) number – we just check to see if the exact value lands within the absolute uncertainty of the experimental result. But something we will do in several labs is perform two different experiments to find the same value (this is most common when we don't actually have a theoretical number to check against). We will want to know if these two experiments confirm each other's results, but how do we do this, when both provide inexact answers? The answer to this (again, without going into details) is to compare the two results (which are of course both averages of the data), and determine whether the amount that they differ lies within a certain range, which is defined by the quadrature of the *absolute* uncertainties generated for each of the results:

\[range = \sqrt{\sigma_1^2+\sigma_2^2}\]

Let's look at a quick example. One experiment yields a (unitless) result of \(7.40\pm 3\%\), while the result of the other experiment is \(7.63\pm 2\%\) (perhaps these percentages were found for each experiment using the weakest link method). Do these two experiments agree to within uncertainty? Well, if we add 3% to the first result, we get 7.622, so the second result does not land within the uncertainty of the first. Conversely, the first result does not lie within the uncertainty of the second result. But the real question is whether their difference of 0.23 lands within the range:

\[\left. \begin{array}{l} 0.03\cdot 7.40 = 0.222 \\ 0.02\cdot 7.63 = 0.1526\end{array} \right\} \;\;\; range = \sqrt{0.222^2+0.1526^2} = 0.269 > 0.23\]

So these experimental results are consistent with each other to within uncertainty. Notice that if one of the results is "exact" (whether it is a theoretical answer or an experiment with very small errors), its uncertainty is zero, and the range is just the uncertainty of the other experiment.
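The comparison procedure above can be sketched as a single function; the test values are the ones from the worked example in this section:

```python
import math

def agree_within_uncertainty(x1, e1, x2, e2):
    """Check whether two uncertain results agree: their difference must
    lie within the quadrature of the absolute uncertainties, where each
    sigma_i = e_i * x_i converts a fractional uncertainty to absolute."""
    sigma1 = e1 * x1
    sigma2 = e2 * x2
    allowed_range = math.sqrt(sigma1**2 + sigma2**2)
    return abs(x1 - x2) <= allowed_range

# The example from the text: 7.40 +/- 3% vs 7.63 +/- 2%
print(agree_within_uncertainty(7.40, 0.03, 7.63, 0.02))  # True
```

Note that the function takes fractional uncertainties (0.03 for 3%), matching how the absolute uncertainties 0.222 and 0.1526 were computed above.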