
1.2: Combining Probabilities


    Consider two distinct possible outcomes, \(X\) and \(Y\), of an observation made on the system \(S\), with probabilities of occurrence \(P(X)\) and \(P(Y)\), respectively. Let us determine the probability of obtaining either the outcome \(X\) or the outcome \(Y\), which we shall denote \(P(X\mid Y)\). From the basic definition of probability, \[P(X\mid Y) =\lim_{{\mit\Omega}({\mit\Sigma})\rightarrow\infty} \frac{ {\mit\Omega}(X \mid Y)}{{\mit\Omega}({\mit\Sigma})},\] where \({\mit\Omega}(X \mid Y)\) is the number of systems in the ensemble that exhibit either the outcome \(X\) or the outcome \(Y\). Now,

    \[{\mit\Omega}(X\mid Y) = {\mit\Omega}(X) + {\mit\Omega}(Y)\] if the outcomes \(X\) and \(Y\) are mutually exclusive (which must be the case if they are two distinct outcomes). Thus, \[P(X\mid Y) = P(X) + P(Y), \label{x2.4}\] which means that the probability of obtaining either the outcome \(X\) or the outcome \(Y\) is the sum of the individual probabilities of \(X\) and \(Y\). For instance, with a six-sided die the probability of throwing any particular number (one to six) is \(1/6\), because all of the possible outcomes are considered to be equally likely. It follows, from what has just been said, that the probability of throwing either a one or a two is simply \(1/6+1/6\), which equals \(1/3\).
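    As a quick numerical illustration of the sum rule, here is a minimal Monte Carlo sketch in Python; the sample size n_trials is an arbitrary choice.

        import random

        # Estimate P(1), P(2), and P(1 or 2) for a fair six-sided die by
        # relative frequency; n_trials is an arbitrary sample size.
        n_trials = 1_000_000
        rolls = [random.randint(1, 6) for _ in range(n_trials)]

        p_one = rolls.count(1) / n_trials
        p_two = rolls.count(2) / n_trials
        p_one_or_two = sum(1 for r in rolls if r in (1, 2)) / n_trials

        # The sum rule predicts P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.
        print(f"P(1) = {p_one:.4f}  P(2) = {p_two:.4f}")
        print(f"P(1 or 2) = {p_one_or_two:.4f}  sum rule: {p_one + p_two:.4f}")

    Because the two outcomes are mutually exclusive, every roll counted in the combined event is counted in exactly one of the individual events, which is why the frequencies add as in Equation ([x2.4]).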

    Suppose that an observation made on the system \(S\) has \(M\) possible outcomes, which we denote \(X_i\), where \(i\) runs from \(1\) to \(M\). Let us determine the probability of obtaining any one of these outcomes. This quantity is unity, from the basic definition of probability, because each of the systems in the ensemble must exhibit one of the possible outcomes. But this quantity is also equal to the sum of the probabilities of all the individual outcomes, by Equation ([x2.4]), so we conclude that this sum is equal to unity: that is, \[\sum_{i=1}^{M} P(X_i) =1.\label{x2.5}\] The preceding expression is called the normalization condition, and must be satisfied by any complete set of probabilities.
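    The normalization condition can be illustrated in the same sketch style (again in Python, with an arbitrary sample size): every roll of a die exhibits exactly one of the six possible outcomes, so the estimated probabilities of the faces necessarily sum to unity.

        import random
        from collections import Counter

        # Tally the relative frequency of each face of a fair die. Every roll
        # exhibits exactly one of the six possible outcomes, so the estimated
        # probabilities must sum to one (up to floating-point rounding), as
        # the normalization condition requires.
        n_trials = 1_000_000
        counts = Counter(random.randint(1, 6) for _ in range(n_trials))

        probabilities = {face: counts[face] / n_trials for face in range(1, 7)}
        print(probabilities)
        print("sum =", sum(probabilities.values()))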

    There is another way in which we can combine probabilities. Suppose that we make an observation on a system picked at random from the ensemble, and then pick a second similar system, completely independently, and make another observation. We are assuming that the first observation does not influence the second observation in any way. In other words, the two observations are statistically independent of one another. Let us determine the probability of obtaining the outcome \(X\) in the first system and obtaining the outcome \(Y\) in the second system, which we shall denote \(P(X\otimes Y)\). In order to determine this probability, we have to form an ensemble of all the possible pairs of systems that we could choose from the ensemble \({\mit\Sigma}\). Let us denote this ensemble \({\mit\Sigma}\otimes {\mit\Sigma}\). The number of pairs of systems in this new ensemble is just the square of the number of systems in the original ensemble, so

    \[{\mit\Omega}({\mit\Sigma}\otimes{\mit\Sigma}) = {\mit\Omega}({\mit\Sigma})\, {\mit\Omega}({\mit\Sigma}).\]

    Furthermore, the number of pairs of systems in the ensemble \({\mit\Sigma}\otimes {\mit\Sigma}\) that exhibit the outcome \(X\) in the first system and the outcome \(Y\) in the second system is simply the product of the number of systems that exhibit the outcome \(X\) and the number of systems that exhibit the outcome \(Y\) in the original ensemble, so that \[{\mit\Omega}(X\otimes Y) = {\mit\Omega}(X) \,{\mit\Omega}(Y).\] It follows from the basic definition of probability that

    \[P(X\otimes Y) =\lim_{{\mit\Omega}({\mit\Sigma})\rightarrow\infty} \frac{{\mit\Omega}(X\otimes Y)}{{\mit\Omega}({\mit\Sigma}\otimes {\mit\Sigma})}= P(X) \,P(Y).\]

    Thus, the probability of obtaining the outcomes \(X\) and \(Y\) in two statistically independent observations is the product of the individual probabilities of \(X\) and \(Y\). For instance, the probability of throwing a one and then a two on a six-sided die is \(1/6 \times 1/6\), which equals \(1/36\).
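    The product rule can be checked with a similar sketch, making two statistically independent rolls per trial (again, n_trials is an arbitrary choice).

        import random

        # Estimate the probability of throwing a one on a first die and a two
        # on a second, statistically independent die; the product rule
        # predicts (1/6) * (1/6) = 1/36. n_trials is an arbitrary sample size.
        n_trials = 1_000_000
        hits = 0
        for _ in range(n_trials):
            first = random.randint(1, 6)   # first observation
            second = random.randint(1, 6)  # second, independent observation
            if first == 1 and second == 2:
                hits += 1

        print(f"estimate = {hits / n_trials:.4f}  product rule: {1 / 36:.4f}")

    The key requirement is that the two rolls are independent: if the second roll depended on the outcome of the first, the product rule would not apply.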

    Contributors and Attributions

    • Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)


