2.4: Combining Probabilities

    Consider two distinct possible outcomes, $X$ and $Y$, of an observation made on the system $S$, with probabilities of occurrence $P(X)$ and $P(Y)$, respectively. Let us determine the probability of obtaining the outcome $X$ or $Y$, which we shall denote $P(X \mid Y)$. From the basic definition of probability,

    \begin{equation}P(X \mid Y)=\lim _{\Omega(\Sigma) \rightarrow \infty} \frac{\Omega(X \mid Y)}{\Omega(\Sigma)}\end{equation}

    where $\Omega(X \mid Y)$ is the number of systems in the ensemble which exhibit either the outcome $X$ or the outcome $Y$. Now,

    \begin{equation}\Omega(X \mid Y)=\Omega(X)+\Omega(Y)\end{equation}

    if the outcomes $X$ and $Y$ are mutually exclusive (which must be the case if they are two distinct outcomes). Thus,

    \begin{equation}P(X \mid Y)=P(X)+P(Y)\end{equation}

    So, the probability of the outcome $X$ or the outcome $Y$ is just the sum of the individual probabilities of $X$ and $Y$. For instance, with a six-sided die the probability of throwing any particular number (one to six) is $1/6$, because all of the possible outcomes are considered to be equally likely. It follows, from what has just been said, that the probability of throwing either a one or a two is simply $1/6+1/6$, which equals $1/3$.
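
    This frequency interpretation is easy to check numerically. The following minimal Python sketch (the trial count, the seed, and the use of the standard random module are illustrative choices, not part of the text) simulates a large ensemble of die throws and estimates the probability of throwing a one or a two:

    ```python
    import random

    random.seed(0)        # fixed seed so runs are reproducible
    trials = 1_000_000    # size of the simulated ensemble (arbitrary)

    # Count the throws whose outcome is 1 or 2; these outcomes are
    # mutually exclusive, so their frequencies simply add.
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) in (1, 2))

    print(hits / trials)  # ~0.333, in agreement with 1/6 + 1/6 = 1/3
    ```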

    Let us denote all of the $M$, say, possible outcomes of an observation made on the system $S$ by $X_{i}$, where $i$ runs from $1$ to $M$. Let us determine the probability of obtaining any of these outcomes. This quantity is unity, from the basic definition of probability, because each of the systems in the ensemble must exhibit one of the possible outcomes. But this quantity is also equal to the sum of the probabilities of all the individual outcomes, by the sum rule derived above, so we conclude that this sum is equal to unity: i.e.,

    \begin{equation}\sum_{i=1}^{M} P\left(X_{i}\right)=1\end{equation}

    The above expression is called the normalization condition, and must be satisfied by any complete set of probabilities. This condition is equivalent to the self-evident statement that an observation of a system must definitely result in one of its possible outcomes.
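
    As a sanity check on the normalization condition, we can estimate each $P(X_{i})$ for the die from a simulated ensemble (again a Python sketch, with an arbitrary trial count and seed) and confirm that the estimates sum to unity:

    ```python
    from collections import Counter
    import random

    random.seed(0)
    trials = 1_000_000

    # Tally how many simulated throws give each face of the die.
    counts = Counter(random.randint(1, 6) for _ in range(trials))
    probs = {face: n / trials for face, n in sorted(counts.items())}

    print(probs)                # each estimate is close to 1/6
    print(sum(probs.values()))  # 1.0, up to floating-point rounding
    ```

    Note that the estimated probabilities sum to unity not merely approximately but by construction: every throw lands on some face, so the empirical frequencies are automatically normalized, which is precisely the content of the normalization condition.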

    There is another way in which we can combine probabilities. Suppose that we make an observation on a system picked at random from the ensemble, and then pick a second system completely independently and make another observation. We are assuming here that the first observation does not influence the second observation in any way. The fancy mathematical way of saying this is that the two observations are statistically independent. Let us determine the probability of obtaining the outcome $X$ in the first system and the outcome $Y$ in the second system, which we shall denote $P(X \otimes Y)$. In order to determine this probability, we have to form an ensemble of all of the possible pairs of systems which we could choose from the ensemble $\Sigma$. Let us denote this ensemble $\Sigma \otimes \Sigma$. The number of pairs of systems in this new ensemble is just the square of the number of systems in the original ensemble, so

    \begin{equation}\Omega(\Sigma \otimes \Sigma)=\Omega(\Sigma) \Omega(\Sigma)\end{equation}

    Furthermore, the number of pairs of systems in the ensemble $\Sigma \otimes \Sigma$ which exhibit the outcome $X$ in the first system and $Y$ in the second system is simply the product of the number of systems which exhibit the outcome $X$ and the number of systems which exhibit the outcome $Y$ in the original ensemble, so that

    \begin{equation}\Omega(X \otimes Y)=\Omega(X) \Omega(Y)\end{equation}

     

    It follows from the basic definition of probability that
    \begin{equation}P(X \otimes Y)=\lim _{\Omega(\Sigma) \rightarrow \infty} \frac{\Omega(X \otimes Y)}{\Omega(\Sigma \otimes \Sigma)}=P(X) P(Y)\end{equation}

    Thus, the probability of obtaining the outcomes $X$ and $Y$ in two statistically independent observations is the product of the individual probabilities of $X$ and $Y$. For instance, the probability of throwing a one and then a two on a six-sided die is $1/6 \times 1/6$, which equals $1/36$.
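
    Again, this is straightforward to verify numerically. The sketch below (Python, with an arbitrary trial count and seed) draws two statistically independent throws per trial, mimicking the ensemble of pairs $\Sigma \otimes \Sigma$, and counts how often the first throw is a one and the second a two:

    ```python
    import random

    random.seed(0)
    trials = 1_000_000

    hits = 0
    for _ in range(trials):
        first = random.randint(1, 6)   # observation on the first system
        second = random.randint(1, 6)  # independent observation on the second
        if first == 1 and second == 2:
            hits += 1

    print(hits / trials)  # ~0.0278, in agreement with 1/6 x 1/6 = 1/36
    ```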

    Contributors

    • Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)
