# 1.6: Index Notation


You may be familiar with something called a dot product, which is a way of multiplying two vectors together. The dot product of two vectors $$\vec{a}$$ and $$\vec{b}$$ is defined as

$\vec{a}\cdot\vec{b}=a_xb_x+a_yb_y+a_zb_z.\nonumber$

If you are familiar with linear algebra, you may know that the previous expression can also be written as

$\vec{a}\cdot\vec{b}=\begin{pmatrix}a_x &a_y & a_z\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}b_x \\ b_y \\ b_z\end{pmatrix}.\nonumber$

If you aren't familiar with matrix multiplication in linear algebra, you may instead think of it as

$\vec{a}\cdot\vec{b}=\sum\limits_{i,j=1}^{3}\eta_{ij}a^ib^j,\nonumber$

where

$\eta_{ij}=\begin{cases}1, &i=j\\ 0, &i\ne j\end{cases}\nonumber$

is called the metric of the space and where the indices 1-3 represent the spatial components. If you take the dot product of a vector with itself, then you end up with the Pythagorean Theorem, which means that the metric essentially tells you how to find the length of a line. More generally, the metric defines the rules of geometry.
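As a quick numerical illustration (a sketch using NumPy, with arbitrarily chosen components), the Euclidean metric really does reproduce the ordinary dot product:

```python
import numpy as np

# Euclidean metric for 3D space: the identity matrix (eta_ij = 1 if i == j, else 0)
eta = np.eye(3)

# two example vectors; the components are arbitrary
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

# sum over i and j of eta_ij a^i b^j, written as a matrix sandwich
dot_via_metric = a @ eta @ b

# agrees with the component formula a_x b_x + a_y b_y + a_z b_z
print(dot_via_metric)  # 12.0
```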

## Definition: Metric

The metric is a function or matrix that can be used to determine the distance between two points. It can be thought of as defining the rules of geometry.

What is the relevance of all this? Recall that the spacetime interval is defined as

$\Delta \tau^2=\Delta t^2-\Delta x^2-\Delta y^2-\Delta z^2,\nonumber$

which suggests that it can also be written as

$\Delta \tau^2=\begin{pmatrix}\Delta t & \Delta x & \Delta y & \Delta z\end{pmatrix}\begin{pmatrix}1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\end{pmatrix}\begin{pmatrix}\Delta t\\ \Delta x \\ \Delta y \\ \Delta z\end{pmatrix}=\sum\limits_{\mu,\nu=0}^3\eta_{\mu\nu}\Delta x^\mu \Delta x^\nu,$

where

$\eta_{\mu\nu}=\begin{cases}1, &\mu=\nu=0\\ -1, &\mu=\nu=1,2,3\\ 0, &\mu\ne \nu\end{cases}$

and where $$x^0=t$$, $$x^1=x$$, $$x^2=y$$, and $$x^3=z$$ (note that these are superscripts, not exponents).
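To make this concrete, here is a small numerical check (a sketch in NumPy; the displacement components are arbitrary example values in natural units):

```python
import numpy as np

# Minkowski metric with the (+, -, -, -) signature used in the text
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# an example displacement (Delta t, Delta x, Delta y, Delta z)
dx = np.array([5.0, 3.0, 0.0, 0.0])

# Delta tau^2 = sum over mu, nu of eta_{mu nu} dx^mu dx^nu
tau_sq = dx @ eta @ dx

# matches Delta t^2 - Delta x^2 - Delta y^2 - Delta z^2 = 25 - 9 = 16
print(tau_sq)  # 16.0
```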

## Note

Some texts use the opposite sign convention for the metric, where the time component is negative and the spatial components are positive. Both sign conventions work, but some equations involving the metric will look different depending on which sign convention you are using.

For the remainder of this book, we will assume the following conventions:

1. If an index appears both "downstairs" and "upstairs," we can drop the summation symbol and assume the summation.
2. Roman letters such as i and j are for spatial components only.
3. Greek letters such as $$\mu$$ and $$\nu$$ are for all four spacetime components.

With this convention, the spacetime interval is

$\Delta \tau^2=\eta_{\mu\nu}\Delta x^\mu \Delta x^\nu.$
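NumPy's `einsum` function mirrors this convention directly: repeated subscripts in its index string are summed over automatically, just like repeated upstairs/downstairs indices. A sketch, with arbitrary example values:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+, -, -, -)
dx = np.array([5.0, 3.0, 0.0, 0.0])     # arbitrary example displacement

# The repeated subscripts m and n are summed over automatically,
# just as the repeated indices mu and nu are in eta_{mu nu} dx^mu dx^nu
tau_sq = np.einsum("mn,m,n->", eta, dx, dx)
print(tau_sq)  # 16.0
```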

The following rules and definitions will also be useful to us.

1. Any index that is not summed over is called a free index.
2. The free indices on both sides of an equation must be the same.
3. Indices can be renamed.
4. The same index can't appear downstairs more than once or upstairs more than once.
5. Any index can be lowered using the metric (by definition). For example, $$x_\mu=\eta_{\mu\nu}x^\nu$$.
6. $$u_\mu u^\mu=1$$, where $$u^\mu$$ is the four-velocity. (See Box 1.6.1)
7. The inverse metric $$\eta^{\mu\nu}$$ is defined by $$\eta_{\alpha\mu}\eta^{\mu\nu}=\mathbf{I}$$, where $$\mathbf{I}$$ is the identity matrix (1's on the diagonal and 0's everywhere else). (See Box 1.6.2)

## Exercise $$\PageIndex{1}$$

How is a vector with downstairs index different from its upstairs counterpart?

One of our rules is that $$x_\mu=\eta_{\mu\nu}x^\nu$$. Note that the sum only occurs over $$\nu$$, since that is the only index that appears both downstairs and upstairs. The index $$\mu$$ is called a free index because it can take on any value 0-3 (i.e. t, x, y, or z).

$\begin{equation*}x_\mu=\eta_{\mu t}x^t+\eta_{\mu x}x^x+\eta_{\mu y}x^y+\eta_{\mu z}x^z\end{equation*}$

The result depends on the value of $$\mu$$. Let's check each one.

\begin{align*}x_t&=\eta_{tt}x^t+\eta_{tx}x^x+\eta_{ty}x^y+\eta_{tz}x^z&=x^t\\x_x&=\eta_{xt}x^t+\eta_{xx}x^x+\eta_{xy}x^y+\eta_{xz}x^z&=-x^x\\ x_y&=\eta_{yt}x^t+\eta_{yx}x^x+\eta_{yy}x^y+\eta_{yz}x^z&=-x^y\\ x_z&=\eta_{zt}x^t+\eta_{zx}x^x+\eta_{zy}x^y+\eta_{zz}x^z&=-x^z\end{align*}

Therefore $$x_\mu=\begin{pmatrix} t & -x & -y & -z\end{pmatrix}$$.

Note that $$x_\mu$$ is exactly the same as $$x^\mu$$ except that the signs of the spatial indices have been reversed. There is also nothing special about $$x^\mu$$; we could replace $$x^\mu$$ with any four-vector and the result would be that the vector with downstairs index has the same components but with the signs of the spatial components reversed.
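This sign flip on the spatial components is easy to verify numerically (a sketch in NumPy; the four-vector components are arbitrary):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+, -, -, -)

# an upstairs four-vector x^mu = (t, x, y, z); components chosen arbitrarily
x_up = np.array([2.0, 1.0, -3.0, 5.0])

# lower the index: x_mu = eta_{mu nu} x^nu
x_down = eta @ x_up

# time component unchanged, spatial signs flipped
print(x_down)  # [ 2. -1.  3. -5.]
```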

## Exercise $$\PageIndex{2}$$

For each part below, indicate the indices that are free indices (or say "none").

a) $$a_{\mu\nu}b^\mu c^\nu$$

b) $$a_{\mu\nu}b^\nu_\alpha$$

c) $$a_{\alpha\mu}a_{\beta\nu} b^{\mu\nu}c_\sigma+a_{\sigma\mu}a_{\alpha\nu}b^{\mu\nu}c_\beta+a_{\beta\mu}a_{\sigma\nu}b^{\mu\nu}c_\alpha$$

d) $$a^{\mu\nu}b_{\mu\alpha}b_{\nu\beta}c^\alpha c^\beta$$

a) none

b) $$\mu, \alpha$$

c) $$\alpha, \beta$$, and $$\sigma$$

d) none

## Exercise $$\PageIndex{3}$$

Which of the following violate the index rules?

a) $$a^\mu=b^\alpha c^\mu_\beta$$

b) $$a^\mu b_{\mu\nu}c^\nu+m^2=0$$

c) $$a^\mu b_{\mu\nu}=m^2$$

d) $$a_{\alpha\beta}=a_{\mu\nu}b^\mu_\alpha b^{\nu\beta}$$

a) Violates. The left side has one free index while the right side has three.

b) No violation. The first term has no free indices, so it can be added to a scalar.

c) Violates. The left side has a free index while the right side does not.

d) Violates. $$\alpha$$ and $$\beta$$ are free indices on both sides, but $$\beta$$ is downstairs on one side and upstairs on the other.

## Box $$\PageIndex{1}$$

Prove that $$u_\mu u^\mu=1$$, where $$u^\mu$$ is the four-velocity.

## Box $$\PageIndex{2}$$

Show that the inverse metric $$\eta^{\mu\nu}=\begin{pmatrix}1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\end{pmatrix}$$. (Note that both indices are upstairs.)