
4.3: The Uncertainty of Random Outcomes


    Quantifying What We Don't Know

    When dealing with the outcomes of random events, we can compute expectation values, which accurately predict the average result over a large number of repeated attempts. For a single attempt, however, this number is nothing more than an educated guess. So while we are confident about the average over a large (strictly speaking, infinite) number of trials, we have no such confidence about a single trial, and it would be good to know just how close we expect our guess to come to the actual result. We can quantify this level of uncertainty mathematically.

    The expectation value gives us an average result for many trials, but we can also look at how spread out the results are. If the results are spread over a wide range, then any single result could land very far from the mean, which means we are quite uncertain of our estimate. If the results fall within a narrow range, then any given result is likely to be quite close to our estimate, and our uncertainty is low. We can compute a value for this uncertainty by measuring how far each possible outcome is from our expectation value guess and combining these "deviations" into a single number. This measure is called the standard deviation.

    The computation of the standard deviation goes as follows. We start with the case of discrete results, and then generalize to the case where a probability density is needed. First, we need to know how far each possible individual result, \(\omega_i\), is from the mean of all the results, \(\left<\omega\right>\):

    \[\text{separation of the }i^{th}\text{ result from the mean} = \omega_i-\left<\omega\right> \]

    We would like to know the "average separation" over all the results, but how do we define such an average? If we simply average the differences given above, the answer always comes out to be exactly zero. The proof is short:

    \[ \left< \omega_i-\left<\omega\right>\right> = \dfrac{1}{N}\sum \limits_{i=1}^N\left[\omega_i-\left<\omega\right>\right] = \dfrac{1}{N}\sum \limits_{i=1}^N\omega_i - \dfrac{\left<\omega\right>}{N}\sum \limits_{i=1}^N \left(1\right) = \left<\omega\right>- \left<\omega\right>=0 \]
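    This cancellation is easy to confirm numerically. Below is a minimal Python sketch (not part of the original text; the list of outcomes is arbitrary) showing that the signed deviations from the mean always average to zero:

    ```python
    # Numerical check: the mean of the signed deviations from the mean is zero.
    outcomes = [1.0, 2.0, 2.0, 5.0, 10.0]  # any arbitrary set of results

    N = len(outcomes)
    mean = sum(outcomes) / N

    # Average of (omega_i - <omega>) over all results
    avg_deviation = sum(w - mean for w in outcomes) / N

    print(avg_deviation)  # 0.0 (up to floating-point rounding)
    ```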

    The problem is that these separations are both positive and negative, and for measuring spread we don't care in which direction a result deviates from the mean. We could define the average deviation of the results from the mean as the average of the absolute values of the separations, but for rather technical mathematical reasons, it turns out that this is not the best definition. We won't go into the details here, except to say that it is more useful to weight deviations more heavily as they get farther from the mean (the absolute value method weights all deviations equally).

    The "standard" deviation that we calculate also removes the problem of negative deviations, but also weights separation from the mean more as it becomes greater. It does this by squaring the separation of every result from the mean, averaging those squares, and then taking the square root of the sum. In other contexts where the mean is clearly zero (such as the current in an AC circuit), this is a measurement of the average magnitude of the value, and is often referred to as the root-mean-square, or rms value, for reasons that are obvious now that we know how it is calculated.

    Let's summarize the calculation of standard deviation before writing out the formula.

    • Start with the full set of outcomes, \(\omega_i\), and their accompanying probabilities, \(P_i\).
    • Calculate the mean (expectation value) of the outcomes, using Equation 4.1.3.
    • Calculate the separation of every outcome from the mean, using Equation 4.3.1.
    • Square all of these separations.
    • Find the mean of all these squares (for equally likely outcomes, add them together and divide by the total number \(N\); more generally, weight each square by its probability \(P_i\)).
    • Take the square-root of this mean.

    \[ \Delta\omega =\sqrt{\dfrac{1}{N}\sum\limits_{i=1}^N \left[\omega_i - \left<\omega\right>\right]^2} \]
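    The recipe above can be followed line-by-line in code. Here is a minimal Python sketch, assuming equally likely outcomes (the sample values are arbitrary):

    ```python
    import math

    # Discrete outcomes with equal probability (the 1/N weighting used above).
    outcomes = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    N = len(outcomes)

    # Step 2: the mean (expectation value) of the outcomes
    mean = sum(outcomes) / N                       # <omega> = 5.0

    # Steps 3-4: separation of each outcome from the mean, squared
    squared_separations = [(w - mean) ** 2 for w in outcomes]

    # Steps 5-6: average the squares, then take the square root
    delta = math.sqrt(sum(squared_separations) / N)
    print(mean, delta)  # 5.0, 2.0
    ```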

    This formula can actually be cast into another extremely useful form – so useful that we will end up using this alternative form pretty much exclusively. Getting to it requires only a little bit of algebra: Expand the square inside the sum, and use the facts that \(\left<\omega\right>\) is a constant value (not dependent on the index \(i\)), and that \(\dfrac{1}{N}\sum\limits_{i=1}^N\omega_i=\left<\omega\right>\). The result is:

    \[ \Delta\omega =\sqrt{\left<\omega^2\right>-\left<\omega\right>^2} \]

    The description of this process is easy to put into words: "Compute the average of the squares of the outcomes, subtract the square of the average outcome, and take the square root." We will see that this form is especially useful when we go to the case of a continuum of possible outcomes, which we do now...
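    The two forms can also be checked against each other numerically. A short Python sketch (again with an arbitrary outcome list) shows they agree:

    ```python
    import math

    outcomes = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    N = len(outcomes)

    mean = sum(outcomes) / N                    # <omega>
    mean_sq = sum(w * w for w in outcomes) / N  # <omega^2>

    # Original form: root of the mean squared separation
    form1 = math.sqrt(sum((w - mean) ** 2 for w in outcomes) / N)

    # Alternative form: sqrt(<omega^2> - <omega>^2)
    form2 = math.sqrt(mean_sq - mean ** 2)

    print(form1, form2)  # identical: 2.0, 2.0
    ```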

    Uncertainty for a Continuum of Outcomes

    We already know how to compute a mean using a probability density (Equation 4.2.5). All we have to do to calculate the uncertainty is compute two of these expectation value integrals (one for the value itself, and one for the square of the value) and then plug the results into Equation 4.3.4.

    \[ \left. \begin{array}{l} \left<\omega\right>=\int \limits_{-\infty}^{+\infty}\mathcal P\left(x\right)\omega\left(x\right)dx \\ \left<\omega^2\right>=\int \limits_{-\infty}^{+\infty}\mathcal P\left(x\right)\left[\omega\left(x\right)\right]^2dx \end{array} \right\} \;\;\; \Rightarrow\;\;\; \Delta \omega = \sqrt{\left<\omega^2\right>-\left<\omega\right>^2}\]
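    As an illustration of the continuum recipe, here is a minimal Python sketch using an arbitrarily chosen uniform density \(\mathcal P(x) = 1/a\) on \([0,a]\) (an assumption for this example only), for which the exact answer is \(\Delta x = a/\sqrt{12}\):

    ```python
    import math

    # Uncertainty of x for a uniform density P(x) = 1/a on [0, a] (illustrative choice).
    # Exact answer: <x> = a/2, <x^2> = a^2/3, so Delta x = a/sqrt(12).
    a = 2.0
    n = 100000
    dx = a / n

    mean = 0.0
    mean_sq = 0.0
    for k in range(n):
        x = (k + 0.5) * dx          # midpoint rule for the integrals
        P = 1.0 / a                 # probability density
        mean += P * x * dx          # <x>   = integral of P(x) * x dx
        mean_sq += P * x * x * dx   # <x^2> = integral of P(x) * x^2 dx

    delta_x = math.sqrt(mean_sq - mean ** 2)
    print(delta_x, a / math.sqrt(12))  # both ≈ 0.57735
    ```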

    Example 4.3.2

    In Example 4.2.1, symmetry demands that the average position of the block is the origin. Find the uncertainty in the block's position.

    Solution

    With an average position of \(\left<x\right>=0\), Equation 4.3.5 tells us that the uncertainty in the position of the block is:

    \[\Delta x = \sqrt{\left<x^2\right>} \nonumber \]

    We could now plug into the integral using the density function found in Example 4.2.1, but that would be reinventing the wheel. It's simpler to use a result we already derived in that example:

    \[ \left<PE\right> = \frac{1}{4}kx_o^2 \;\;\; \Rightarrow \;\;\; \left<\frac{1}{2}kx^2\right> = \frac{1}{4}kx_o^2 \;\;\; \Rightarrow \;\;\; \left<x^2\right>=\frac{1}{2}x_o^2 \nonumber \]

    This gives us the uncertainty of \(x\):

    \[\Delta x = \frac{1}{\sqrt{2}} x_o \nonumber \]
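    As a check (assuming the oscillating-block density \(\mathcal P(x)=1/\left(\pi\sqrt{x_o^2-x^2}\right)\) on \((-x_o, x_o)\) found in the referenced example), a short numerical integration reproduces this result. The substitution \(x = x_o\sin\theta\) removes the endpoint singularities:

    ```python
    import math

    # Check of Delta x = x_o/sqrt(2), assuming the classical-oscillator density
    # P(x) = 1/(pi * sqrt(x_o^2 - x^2)) on (-x_o, x_o).
    # With x = x_o*sin(theta):
    #   <x^2> = (1/pi) * integral of (x_o*sin(theta))^2 dtheta over (-pi/2, pi/2)
    x_o = 1.0
    n = 100000
    dtheta = math.pi / n

    mean_sq = 0.0
    for k in range(n):
        theta = -math.pi / 2 + (k + 0.5) * dtheta   # midpoint rule
        mean_sq += (x_o * math.sin(theta)) ** 2 / math.pi * dtheta

    # Since <x> = 0 by symmetry, Delta x = sqrt(<x^2>)
    print(math.sqrt(mean_sq), x_o / math.sqrt(2))  # both ≈ 0.70711
    ```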

     


    This page titled 4.3: The Uncertainty of Random Outcomes is shared under a CC BY-SA license and was authored, remixed, and/or curated by Tom Weideman.
