
# 2.3: The n-dimensional vector space V(n)


The manipulation of directed quantities, such as velocities, accelerations, forces and the like is of considerable importance in classical mechanics and electrodynamics. The need to simplify the rather complex operations led to the development of an abstraction: the concept of a vector.

The precise meaning of this concept is implicit in the rules governing its manipulations. These rules fall into three main categories: they pertain to

1. the addition of vectors,

2. the multiplication of vectors by numbers (scalars),

3. the multiplication of vectors by vectors (inner product and vector product).

While the subtle problems involved in (3) will be taken up in the next chapter, we proceed here to show that the rules falling under (1) and (2) find their precise expression in the abstract theory of finite-dimensional vector spaces.

The rules related to the addition of vectors can be concisely expressed as follows: vectors are elements of a set V that forms an Abelian group under the operation of addition, briefly an additive group.

The inverse of a vector is its negative; the zero vector plays the role of the identity element.

The numbers, or “scalars,” mentioned under (2) are usually taken to be the real or the complex numbers. For many considerations involving vector spaces there is no need to specify which of these alternatives is chosen; in fact, all we need is that the scalars form a field. More explicitly, they are elements of a set which is closed with respect to two binary operations, addition and multiplication, which satisfy the usual commutative, associative and distributive laws; both operations are invertible, provided division by zero is excluded.

A vector space V (F) over a field F is formally defined as a set of elements forming an additive group that can be multiplied by the elements of the field F.

In particular, we shall consider real and complex vector spaces V (R) and V (C) respectively.

I note in passing that the use of the field concept opens the way for a much greater variety of interpretations, but this is of no interest in the present context. In contrast, the fact that we have been considering “vector” as an undefined concept will enable us to propose in the sequel interpretations that go beyond the classical one as directed quantities. Thus the above definition is consistent with the interpretation of a vector as a pair of numbers indicating the amounts of two chemical species present in a mixture, or alternatively, as a point in phase space spanned by the coordinates and momenta of a system of mass points.

We shall now summarize a number of standard results of the theory of vector spaces.

Suppose we have a set of non-zero vectors $$\left\{\vec{x}_{1}, \vec{x}_{2}, \ldots, \vec{x}_{n}\right\}$$ in $$V$$ which satisfy the relation

$\sum_{k} a_{k} \vec{x}_{k}=0\label{1}$

where the scalars $$a_{k} \in F$$, and not all of them vanish. In this case the vectors are said to be linearly dependent. If, in contrast, the relation \ref{1} implies that all $$a_{k}=0$$, then we say that the vectors are linearly independent.

In the former case, there is at least one vector of the set, say $$\vec{x}_{m}$$ after relabeling, that can be written as a linear combination of the rest:

$\vec{x}_{m}=\sum_{k=1}^{m-1} b_{k} \vec{x}_{k}\label{2}$
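The dependence criterion can be sketched numerically; the vectors below are a hypothetical choice for illustration, not taken from the text. A set of vectors is linearly independent exactly when the matrix built from them has full rank.

```python
import numpy as np

# Illustrative vectors in R^3 (hypothetical choice, not from the text).
x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = np.array([2.0, 1.0, 5.0])   # x3 = 2*x1 + x2, so the set is dependent

# Stack the vectors as rows; the set is linearly independent
# exactly when this matrix has full rank.
A = np.vstack([x1, x2, x3])
print(np.linalg.matrix_rank(A))  # prints 2 (< 3): linearly dependent
```

Here the relation $$2 \vec{x}_{1}+\vec{x}_{2}-\vec{x}_{3}=0$$ exhibits the nonvanishing coefficients required by the definition of linear dependence.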

Definition 2.1. A (linear) basis in a vector space $$V$$ is a set $$E=\left\{\vec{e}_{1}, \vec{e}_{2}, \ldots, \vec{e}_{n}\right\}$$ of linearly independent vectors such that every vector in $$V$$ is a linear combination of the $$\vec{e}_{k}$$. The basis is said to span or generate the space.

A vector space is finite dimensional if it has a finite basis. It is a fundamental theorem of linear algebra that any two bases of a finite dimensional space contain the same number of elements. This number $$n$$ is the basis-independent dimension of $$V$$; we include it in the designation of the vector space: $$V(n, F)$$.

Given a particular basis we can express any $$\vec{x} \in V$$ as a linear combination

$\vec{x}=\sum_{k=1}^{n} x^{k} \vec{e}_{k}\label{3}$

where the coordinates $$x^{k}$$ are uniquely determined by $$E$$. The $$x^{k} \vec{e}_{k}(k=1,2, \ldots, n)$$ are called the components of $$\vec{x}$$. The use of superscripts is to suggest a contrast between the transformation properties of coordinates and basis to be derived shortly.
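A minimal sketch of how the coordinates in Equation \ref{3} are obtained in practice; the basis and the vector below are hypothetical choices for illustration. With the basis vectors written as columns of a matrix, the coordinates solve a linear system.

```python
import numpy as np

# Hypothetical basis of R^2, one basis vector per column.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # e1 = (1, 0), e2 = (1, 2)
x = np.array([3.0, 4.0])

# The coordinates x^k satisfy  sum_k x^k e_k = x,  i.e.  E @ c = x.
c = np.linalg.solve(E, x)
print(c)  # [1. 2.]  so  x = 1*e1 + 2*e2
```

Uniqueness of the coordinates corresponds to the nonsingularity of the matrix of basis vectors.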

Using bases, also called coordinate systems or frames, is convenient for handling vectors: addition, for instance, is performed by adding coordinates. However, the choice of a particular basis introduces an element of arbitrariness into the formalism, and this calls for countermeasures.

Suppose we introduce a new basis by means of a nonsingular linear transformation:

$\vec{e}_{i}^{\prime}=\sum_{k} S_{i}^{k} \vec{e}_{k}\label{4}$

where the matrix of the transformation has a nonvanishing determinant

$\left|S_{i}^{k}\right| \neq 0\label{5}$

ensuring that the $$\vec{e}_{i}^{\prime}$$ form a linearly independent set, i.e., an acceptable basis. Within the context of the linear theory this is the most general transformation we have to consider.

We ensure the equivalence of the different bases by requiring that

$\vec{x}=\sum_{k} x^{k} \vec{e}_{k}=\sum_{i} x^{i^{\prime}} \vec{e}_{i}^{\prime}\label{6}$

Inserting Equation \ref{4} into Equation \ref{6} we get

$\begin{aligned} \vec{x} &=\sum_{i} x^{i^{\prime}}\left(\sum_{k} S_{i}^{k} \vec{e}_{k}\right) \\ &=\sum_{k}\left(\sum_{i} x^{i^{\prime}} S_{i}^{k}\right) \vec{e}_{k} \end{aligned}\label{7}$

and hence in conjunction with Equation \ref{5}

$x^{k}=\sum_{i} S_{i}^{k} x^{i^{\prime}}\label{8}$

Note the characteristic “turning around” of the indices as we pass from Equation \ref{4} to Equation \ref{8} with a simultaneous interchange of the roles of the old and the new frame. The underlying reason can be better appreciated if the foregoing calculation is carried out in symbolic form.

Let us write the coordinates and the basis vectors as $$n \times 1$$ column matrices

$X=\left(\begin{array}{c} x^{1} \\ \vdots \\ x^{n} \end{array}\right) \quad E=\left(\begin{array}{c} \vec{e}_{1} \\ \vdots \\ \vec{e}_{n} \end{array}\right)\label{9}$

Equation \ref{6} appears then as a matrix product

$\vec{x}=X^{T} E=X^{T} S^{-1} S E=X^{\prime T} E^{\prime}\label{10}$

where the superscript stands for “transpose.”

We ensure consistency by setting

$E^{\prime}=S E\label{11}$

$X^{\prime T}=X^{T} S^{-1}\label{12}$

$X^{\prime}=\left(S^{-1}\right)^{T} X\label{13}$

Thus we arrive in a lucid fashion at the results contained in Equations \ref{4} and \ref{8}. We see that the “objective” or “invariant” representations of vectors are based on the procedure of transforming bases and coordinates in what is called a contragredient way.
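The contragredient bookkeeping of Equations \ref{11}–\ref{13} can be checked numerically; the basis, coordinates, and transformation matrix below are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical data: a basis written as rows of E (as in Equation (9)),
# old coordinates X, and a transformation matrix S.
E = np.eye(3)                       # standard basis of R^3, one vector per row
X = np.array([1.0, 2.0, 3.0])
S = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.det(S) != 0        # Equation (5): S must be nonsingular

E_new = S @ E                       # Equation (11): E' = S E
X_new = np.linalg.inv(S).T @ X      # Equation (13): X' = (S^{-1})^T X

# Equation (10): the vector itself is invariant, X^T E = X'^T E'.
print(np.allclose(X @ E, X_new @ E_new))  # True
```

The bases transform with $$S$$ while the coordinates transform with $$\left(S^{-1}\right)^{T}$$, so their contraction is left unchanged.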

The vector $$\vec{x}$$ itself is sometimes called a contravariant vector, to be distinguished by its transformation properties from covariant vectors to be introduced later.

There is a further point to be noted in connection with the factorization of a vector into basis and coordinates.

The vectors we will be dealing with usually have a physical dimension, such as length, velocity, momentum, force and the like. It is important, in such cases, that the dimension be absorbed in the basis vectors $$\vec{e}_{k}$$. In contrast, the coordinates $$x^{k}$$ are elements of the field F; their products are still in F, and they are simply numbers. It is not surprising that the multiplication of vectors with other vectors constitutes a subtle problem. Vector spaces in which there is provision for such an operation are called algebras; they deserve a careful examination.

It should be finally pointed out that there are interesting cases in which vectors have a dimensionless character. They can be built up from the elements of the field $$F$$, which are arranged as n-tuples, or as $$m \times n$$ matrices.

The $$n \times n$$ case is particularly interesting, because matrix multiplication makes these vector spaces into algebras in the sense just defined.
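A small sketch of the closure property that makes the $$n \times n$$ matrices an algebra; the matrices below are arbitrary illustrative choices. The vector-space operations and the matrix product all return matrices of the same shape.

```python
import numpy as np

# Arbitrary illustrative 2 x 2 matrices over R.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Vector-space operations: the result is again a 2 x 2 matrix ...
C = 2.0 * A + B
# ... and so is the matrix product, which is the extra operation
# that turns this vector space into an algebra.
P = A @ B
print(C.shape, P.shape)  # (2, 2) (2, 2)
```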

2.3: The n-dimensional vector space V(n) is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by László Tisza (MIT OpenCourseWare).