# Vectors and bases

A *vector* is an arrow with a length and a
direction. Just like positions, vectors exist before we
measure or describe them. Unlike positions, vectors can
mean many different things, such as position vectors,
velocities, etc. Vectors are not anchored to particular
positions in space, so we can slide a vector around and
locate it at any position.

Two vectors, which may or may not be the same vector. Moving a vector around does not change it: it is still the same vector.

Some textbooks differentiate between *free
vectors*, which are free to slide around, and
*bound vectors*, which are anchored in space. We
will only use free vectors.

We will use the over-arrow notation $\vec{a}$ for vector quantities. Other common notations include bold $\boldsymbol{a}$ and under-bars $\underline{a}$. For unit (length one) vectors we will use an over-hat $\hat{a}$.

## Adding and scaling vectors

Vectors can be multiplied by a scalar number, which
multiplies their length. Vectors can also be added
together, using the *parallelogram law of
addition* or the *head-to-tail* rule.

Adding and scaling vectors to give new vectors.
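Both operations are easy to check numerically. A minimal sketch using NumPy (the particular component values are just for illustration):

```python
import numpy as np

# Store a vector's components (in some orthonormal basis) as an array.
a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])

# Scaling multiplies the length; addition is componentwise,
# matching the head-to-tail rule.
c = 2 * a + b
print(c)   # [5. 3.]
```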

## Vector bases

To describe vectors mathematically, we write them as a
combination of *basis vectors*. An *orthonormal
basis* is a set of two (in 2D) or three (in 3D) basis
vectors which are *orthogonal* (have 90°
angles between them) and *normal* (have length equal
to one). We will not be using non-orthogonal or non-normal
bases.

Any other vector can be written as a *linear
combination* of the basis vectors:

Components of a vector.

\[\vec{a} = a_1 \,\hat{\imath} + a_2 \,\hat{\jmath} + a_3 \,\hat{k}\]

The numbers $a_1, a_2, a_3$ are called the
*components* of $\vec{a}$ in the $\,\hat{\imath},
\hat{\jmath}, \hat{k}$ basis. If we are in 2D then we will
only have two components for a vector.

Writing a vector as the sum of scaled basis vectors. The scale factors are the components of the vector. Here $\vec{a} = 3\hat\imath + 2\hat\jmath$, so the components of $\vec{a}$ are $a_1 = 3$ and $a_2 = 2$.

We draw the symbol $\odot$ (arrow tip) to indicate a vector coming out of the page, and $\otimes$ (arrow fletching) to indicate an arrow going into the page.

Two standard arrangements of the basis vectors when working in 2D. Either $\hat\jmath$ is the vertical and $\hat{k}$ is out of the page, or $\hat{k}$ is the vertical and $\hat\jmath$ is into the page. In both cases $\hat\imath$ is horizontal.

Just as for position coordinates, we can write the vector components $3\hat\imath + 2\hat\jmath$ as the ordered list $(3, 2)$ if we know which basis we are using. Because we often will be using several bases simultaneously, we will generally write the components explicitly in the $3\hat\imath + 2\hat\jmath$ form.

The use of the letters $i,j,k$ for basis vectors is due to
William
Hamilton, who was motivated by thinking of basis
vectors as extensions of the complex number $i$. This
notation was popularized by the book *Vector
Analysis: A Text Book for the Use of Students of
Mathematics and Physics Founded upon the Lectures of
J. Willard Gibbs* (1901), by E. B. Wilson. This
book also introduced the use of bold letters to represent
vectors.

## Length of vectors

The length of a vector $\vec{a}$ is written
either $\|\vec{a}\|$ or just plain $a$. The
length can be computed using *Pythagoras'
theorem*:

Pythagoras' length formula.

\[a = \|\vec{a}\| = \sqrt{a_1^2 + a_2^2 + a_3^2}\]

First we prove Pythagoras' theorem for right triangles. For side lengths $a$ and $b$ and hypotenuse $c$, the fact that $a^2 + b^2 = c^2$ can be seen graphically below: the gray area is the same before and after the triangles are rearranged.

Pythagoras' theorem immediately gives us vector lengths in 2D. To find the length of a vector in 3D we can use Pythagoras' theorem twice, as shown below. This gives the two right-triangle calculations: \[\begin{aligned} \ell^2 &= a_1^2 + a_2^2 \\ a^2 &= \ell^2 + a_3^2 = a_1^2 + a_2^2 + a_3^2. \end{aligned}\]
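As a numerical check of the length formula, here is a NumPy sketch with illustrative components:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])

# Pythagorean length: sqrt(a1^2 + a2^2 + a3^2) = sqrt(1 + 4 + 4) = 3.
length = np.sqrt(np.sum(a**2))
print(length)   # 3.0

# np.linalg.norm computes the same Euclidean length.
assert np.isclose(length, np.linalg.norm(a))
```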

Warning: Length must be computed in a single basis.

The Pythagorean length formula can only be used if all the components are written in a single orthonormal basis.

Computing the length of a vector using Pythagoras' theorem.

Some common integer vector lengths are $\vec{a} = 4\hat\imath + 3\hat\jmath$ (length $a = 5$) and $\vec{b} = 12\hat\imath + 5\hat\jmath$ (length $b = 13$).

Warning: Adding vectors does not add lengths.

If $\vec{c} = \vec{a} + \vec{b}$, then $\|\vec{c}\| \ne \|\vec{a}\| + \|\vec{b}\|$ unless $\vec{a}$ and $\vec{b}$ are parallel and in the same direction.

It will always be true, however, that $\|\vec{c}\| \le
\|\vec{a}\| + \|\vec{b}\|$. This fact is known as the
*triangle inequality*, because $\vec{a}$, $\vec{b}$, and
$\vec{c} = \vec{a} + \vec{b}$ form the sides of a triangle.
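The triangle inequality is easy to verify numerically; a quick sketch using the integer-length vectors from above:

```python
import numpy as np

a = np.array([4.0, 3.0])    # length 5
b = np.array([12.0, 5.0])   # length 13
c = a + b                   # (16, 8)

len_c = np.linalg.norm(c)                        # about 17.89
len_sum = np.linalg.norm(a) + np.linalg.norm(b)  # 5 + 13 = 18

# Triangle inequality: ||a + b|| <= ||a|| + ||b||.
print(len_c <= len_sum)   # True
```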

Sets of three integers $a,b,c$ where $a^2 + b^2 = c^2$ are
called *Pythagorean
triples*. A long list of such triples is given on
the Plimpton
322 clay tablet written by the ancient Babylonians
around 1800 BCE, although it is unclear how they generated
these numbers. Pythagorean triples lead to complex
mathematics, including the curious patterns shown below
and Fermat's
Last Theorem.

## Unit vectors

A *unit vector* is any vector with a length of
one. We use the special over-hat notation $\hat{a}$ to
indicate when a vector is a unit vector. Any non-zero vector
$\vec{a}$ gives a unit vector $\hat{a}$ that specifies the
direction of $\vec{a}$.

Normalization to unit vector.

\[\begin{aligned} \hat{a} = \frac{\vec{a}}{a}\end{aligned}\]

If we compute the length of $\hat{a}$ then we find: \[ \| \hat{a} \| = \left\| \frac{\vec{a}}{a} \right\| = \frac{\|\vec{a}\|}{a} = \frac{a}{a} = 1, \] so $\hat{a}$ really is a unit vector, and it is in the same direction as $\vec{a}$, since they differ only by a positive scalar factor.
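Numerically, normalization is a single division by the length. A short NumPy sketch, using the length-5 vector $4\hat\imath + 3\hat\jmath$ from earlier:

```python
import numpy as np

a = np.array([4.0, 3.0])
a_hat = a / np.linalg.norm(a)   # divide by the length a = 5

print(a_hat)                                    # [0.8 0.6]
print(np.isclose(np.linalg.norm(a_hat), 1.0))   # True
```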

Any vector can be written as the product of its length and direction:

Vector decomposition into length and direction.

\[\begin{aligned} \vec{a} = a \hat{a}\end{aligned}\]

This follows from rearranging #rvv-eu.

Three vectors and their decompositions into lengths and directional unit vectors.

## Vectors and units

When using vectors to describe physical quantities we need to have the correct units associated with them, just as for position coordinates. Basis vectors such as $\hat\imath,\hat\jmath,\hat{k}$ have no units (they are also dimensionless), so the components must have units. For example, a velocity vector $\vec{v} = (4\hat\imath + 3\hat\jmath){\rm\ m/s}$ has components $v_1 = 4{\rm\ m/s}$ and $v_2 = 3{\rm\ m/s}$, which then multiply the dimensionless (and unit-less) basis vectors $\hat\imath$ and $\hat\jmath$. For convenience, when all components of a vector have the same units, we will often write the units just once at the end, so that all of the following expressions are equivalent: \[\begin{aligned} \vec{v} &= (4{\rm\ m/s})\,\hat\imath + (3{\rm\ m/s})\,\hat\jmath \\ &= (4\hat\imath + 3\hat\jmath){\rm\ m/s} \\ &= 4\hat\imath + 3\hat\jmath {\rm\ m/s}. \end{aligned}\]

When a vector is expressed as $\vec{a} = a \hat{a}$, decomposed into a length and direction, the length $a$ has units but the direction unit vector $\hat{a}$ has no units. The terminology here is slightly confusing, as unit vectors have no units. For example, if we calculate the length of $\vec{v}$ above we obtain: \[ v = \sqrt{v_1^2 + v_2^2} = \sqrt{(4{\rm\ m/s})^2 + (3{\rm\ m/s})^2} = \sqrt{25{\rm\ m^2/s^2}} = 5{\rm\ m/s}. \] Then the unit vector is \[ \hat{v} = \frac{\vec{v}}{v} = \frac{(4{\rm\ m/s})\,\hat\imath + (3{\rm\ m/s})\,\hat\jmath}{5{\rm\ m/s}} = 0.8\,\hat\imath + 0.6\,\hat\jmath. \]

## Dot Product

The *dot product* (also called the *inner
product* or *scalar product*) is defined by

Dot product from components.

\[\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3\]

An alternative expression for the dot product can be given in terms of the lengths of the vectors and the angle between them:

Dot product from length/angle.

\[\vec{a} \cdot \vec{b} = a b \cos\theta\]

We will present a simple 2D proof here. A more complete proof in 3D uses the law of cosines.

Start with two vectors $\vec{a}$ and $\vec{b}$ with an angle $\theta$ between them, as shown below.

Observe that the angle $\theta$ between vectors $\vec{a}$ and $\vec{b}$ is the difference between the angles $\theta_a$ and $\theta_b$ that the vectors make with the horizontal.

If we use the angle sum formula for cosine, we have

\[\begin{aligned} a b \cos\theta &= a b \cos(\theta_b - \theta_a) \\ &= a b (\cos\theta_b \cos\theta_a + \sin\theta_b \sin\theta_a) \end{aligned}\]

We now want to express the sine and cosine of $\theta_a$ and $\theta_b$ in terms of the components of $\vec{a}$ and $\vec{b}$.

We re-arrange the expression so that we can use the fact that $a_1 = a \cos\theta_a$ and $a_2 = a \sin\theta_a$, and similarly for $\vec{b}$. This gives:

\[\begin{aligned} a b \cos\theta &= (a \cos\theta_a) (b \cos\theta_b) + (a \sin\theta_a) (b \sin\theta_b) \\ &= a_1 b_1 + a_2 b_2 \\ &= \vec{a} \cdot \vec{b} \end{aligned}\]

The fact that we can write the dot product in terms of components as well as in terms of lengths and angle is very helpful for calculating the length and angles of vectors from the component representations.

Length and angle from dot product.

\[\begin{aligned} a &= \sqrt{\vec{a} \cdot \vec{a}} \\ \cos\theta &= \frac{\vec{b} \cdot \vec{a}}{b a}\end{aligned}\]

The angle between $\vec{a}$ and itself is $\theta = 0$, so $\vec{a} \cdot \vec{a} = a^2 \cos 0 = a^2$, which gives the first equation for the length in terms of the dot product.

The second equation is a rearrangement of #rvv-ed.
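A numerical sketch of these two formulas using NumPy (the vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

# Length from the dot product of a vector with itself.
a_len = np.sqrt(np.dot(a, a))   # 1.0
b_len = np.sqrt(np.dot(b, b))   # sqrt(2)

# Angle from cos(theta) = (b . a) / (b a).
theta = np.arccos(np.dot(b, a) / (b_len * a_len))
print(np.degrees(theta))   # 45 degrees (up to rounding)
```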

If two vectors have zero dot product $\vec{a} \cdot \vec{b}
= 0$ then they have an angle of $\theta = 90^\circ =
\frac{\pi}{2}\rm\ rad$ between them and we say that the
vectors are *perpendicular*, *orthogonal*, or
*normal* to each other.

In 2D we can easily find a perpendicular vector by rotating $\vec{a}$ counterclockwise with the following equation.

Counterclockwise perpendicular vector in 2D.

\[\vec{a}^\perp = -a_2\,\hat\imath + a_1\,\hat\jmath\]

It is easy to check that $\vec{a}^\perp$ is always perpendicular to $\vec{a}$: \[\vec{a} \cdot \vec{a}^\perp = (a_1\,\hat\imath + a_2\,\hat\jmath) \cdot (-a_2\,\hat\imath + a_1\hat\jmath) = -a_1 a_2 + a_2 a_1 = 0.\] The fact that $\vec{a}^\perp$ is a $+90^\circ$ rotation of $\vec{a}$ is apparent from Figure #rvv-fn.
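A short NumPy helper for the 2D perpendicular, with the orthogonality check done numerically (the component values are illustrative):

```python
import numpy as np

def perp(a):
    """Rotate a 2D vector by +90 degrees: (a1, a2) -> (-a2, a1)."""
    return np.array([-a[1], a[0]])

a = np.array([3.0, 2.0])
a_perp = perp(a)

print(a_perp)             # [-2.  3.]
print(np.dot(a, a_perp))  # 0.0, so the two vectors are perpendicular
```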

In 2D there are two perpendicular directions to a given vector $\vec{a}$, given by $\vec{a}^\perp$ and $-\vec{a}^\perp$. In 3D there are infinitely many perpendicular directions, and there is no simple formula like #rvv-en.

The perpendicular vector $\vec{a}^\perp$ is always a $+90^\circ$ rotation of $\vec{a}$.

## Cross Product

The cross product can be defined in terms of components by:

Cross product in components.

\[ \vec{a} \times \vec{b} = (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \]

It is sometimes more convenient to work with cross products of individual basis vectors, which are related as follows.

Cross products of basis vectors.

\[\begin{aligned} \hat\imath \times \hat\jmath &= \hat{k} & \hat\jmath \times \hat{k} &= \hat\imath & \hat{k} \times \hat\imath &= \hat\jmath \\ \hat\jmath \times \hat\imath &= -\hat{k} & \hat{k} \times \hat\jmath &= -\hat\imath & \hat\imath \times \hat{k} &= -\hat\jmath \\ \end{aligned}\]

Writing the basis vectors in terms of themselves gives the components: \[\begin{aligned} i_1 &= 1 & i_2 &= 0 & i_3 &= 0 \\ j_1 &= 0 & j_2 &= 1 & j_3 &= 0 \\ k_1 &= 0 & k_2 &= 0 & k_3 &= 1. \end{aligned}\] These values can now be substituted into the definition #rvv-ex. For example, \[\begin{aligned} \hat\imath \times \hat\jmath &= (i_2 j_3 - i_3 j_2) \,\hat{\imath} + (i_3 j_1 - i_1 j_3) \,\hat{\jmath} + (i_1 j_2 - i_2 j_1) \,\hat{k} \\ &= (0 \times 0 - 0 \times 1) \,\hat{\imath} + (0 \times 0 - 1 \times 0) \,\hat{\jmath} + (1 \times 1 - 0 \times 0) \,\hat{k} \\ &= \hat{k} \end{aligned}\] The other combinations can be computed similarly.
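These basis relations can be checked directly with NumPy's `np.cross`:

```python
import numpy as np

i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])

# The cyclic pattern: i x j = k, j x k = i, k x i = j.
assert np.allclose(np.cross(i_hat, j_hat), k_hat)
assert np.allclose(np.cross(j_hat, k_hat), i_hat)
assert np.allclose(np.cross(k_hat, i_hat), j_hat)

# Reversing the order flips the sign.
assert np.allclose(np.cross(j_hat, i_hat), -k_hat)
print("all basis cross products check out")
```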

Warning: The cross product is not associative.

The cross product is not associative, meaning that in general \[\vec{a} \times (\vec{b} \times \vec{c}) \ne (\vec{a} \times \vec{b}) \times \vec{c}.\] For example, \[\begin{aligned} \hat{\imath} \times (\hat{\imath} \times \hat{\jmath}) &= \hat{\imath} \times \hat{k} = - \hat{\jmath} \\ (\hat{\imath} \times \hat{\imath}) \times \hat{\jmath} &= \vec{0} \times \hat{\jmath} = \vec{0}. \end{aligned}\] This means that we should never write an expression like \[\vec{a} \times \vec{b} \times \vec{c}\] because it is not clear in which order we should perform the cross products. Instead, if we have more than one cross product, we should always use parentheses to indicate the order.

Rather than using components, the cross product can be defined by specifying the length and direction of the resulting vector. The direction of $\vec{a} \times \vec{b}$ is orthogonal to both $\vec{a}$ and $\vec{b}$, with the direction given by the right-hand rule. The magnitude of the cross product is given by:

Cross product length.

\[\| \vec{a} \times \vec{b} \| = a b \sin\theta\]

Using Lagrange's identity we can calculate: \[\begin{aligned} \| \vec{a} \times \vec{b} \|^2 &= \|\vec{a}\|^2 \|\vec{b}\|^2 - (\vec{a} \cdot \vec{b})^2 \\ &= a^2 b^2 - (a b \cos\theta)^2 \\ &= a^2 b^2 (1 - \cos^2\theta) \\ &= a^2 b^2 \sin^2\theta. \end{aligned}\] Taking the square root of this expression gives the desired cross-product length formula.
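A numerical check of this length formula, recovering $\theta$ from the dot product (the vectors are illustrative):

```python
import numpy as np

a = np.array([3.0, 2.0, 0.0])
b = np.array([3.0, -1.0, 0.0])

cross_len = np.linalg.norm(np.cross(a, b))   # ||a x b|| = 9 here

# Recover theta from the dot product, then compare with a b sin(theta).
a_len = np.linalg.norm(a)
b_len = np.linalg.norm(b)
theta = np.arccos(np.dot(a, b) / (a_len * b_len))
print(np.isclose(cross_len, a_len * b_len * np.sin(theta)))   # True
```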

This second form of the cross product definition can also be related to the area of a parallelogram.

The area of a parallelogram is the length of the base multiplied by the perpendicular height, which is also the magnitude of the cross product of the side vectors.

A useful special case of the cross product occurs when vector $\vec{a}$ is in the 2D $\hat\imath,\hat\jmath$ plane and the other vector is in the orthogonal $\hat{k}$ direction. In this case the cross product rotates $\vec{a}$ by $90^\circ$ counterclockwise to give the perpendicular vector $\vec{a}^\perp$, as follows.

Cross product of out-of-plane vector $\hat{k}$ with 2D vector $\vec{a} = a_1\,\hat\imath + a_2\,\hat\jmath$.

\[\hat{k} \times \vec{a} = \vec{a}^\perp\]

Using #rvv-eo we can compute: \[\begin{aligned} \hat{k} \times \vec{a} &= \hat{k} \times (a_1\,\hat\imath + a_2\,\hat\jmath) \\ &= a_1 (\hat{k} \times \hat\imath) + a_2 (\hat{k} \times \hat\jmath) \\ &= a_1\,\hat\jmath - a_2\,\hat\imath \\ &= \vec{a}^\perp. \end{aligned}\]
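Checking this rotation property numerically (embedding the 2D vector in 3D so that `np.cross` applies):

```python
import numpy as np

k_hat = np.array([0.0, 0.0, 1.0])
a = np.array([3.0, 2.0, 0.0])   # a vector lying in the i,j plane

# k x a rotates a by +90 degrees within the plane: (3, 2) -> (-2, 3).
a_perp = np.cross(k_hat, a)
print(a_perp)   # [-2.  3.  0.]
```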

## Projection and complementary projection

The projection and complementary projection are:

Projection of $\vec{a}$ onto $\vec{b}$.

\[\operatorname{Proj}(\vec{a},\vec{b}) = (\vec{a} \cdot \hat{b}) \hat{b} = (a \cos\theta) \, \hat{b} \]

Complementary projection of $\vec{a}$ with respect to $\vec{b}$.

\[\begin{aligned} \operatorname{Comp}(\vec{a}, \vec{b}) &= \vec{a} - \operatorname{Proj}(\vec{a}, \vec{b}) = \vec{a} - (\vec{a} \cdot \hat{b}) \hat{b} \\ \left\| \operatorname{Comp}(\vec{a}, \vec{b}) \right\| &= a \sin\theta \end{aligned}\]

Adding the projection and the complementary projection of a vector gives back the original vector, as we can see in the figure below.

Projection of $\vec{a}$ onto $\vec{b}$ and the complementary projection.

As we see in the diagram above, the complementary projection is orthogonal to the reference vector:

Complementary projection is orthogonal to the reference.

\[\operatorname{Comp}(\vec{a}, \vec{b}) \cdot \vec{b} = 0\]

Using the definitions of the complementary projection #rvv-em and the projection #rvv-ep, we compute: \[\begin{aligned} \operatorname{Comp}(\vec{a}, \vec{b}) \cdot \vec{b} &= \Big(\vec{a} - (\vec{a} \cdot \hat{b}) \hat{b}\Big) \cdot \vec{b} \\ &= \vec{a} \cdot \vec{b} - (\vec{a} \cdot \hat{b}) (\hat{b} \cdot \vec{b}) \\ &= a b \cos\theta - (a\cos\theta) b \\ &= 0. \end{aligned}\]
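The two projection formulas translate directly into short NumPy helpers (the `proj` and `comp` names are ours, chosen to match the notation above):

```python
import numpy as np

def proj(a, b):
    """Projection of a onto b: (a . b_hat) b_hat."""
    b_hat = b / np.linalg.norm(b)
    return np.dot(a, b_hat) * b_hat

def comp(a, b):
    """Complementary projection: the part of a orthogonal to b."""
    return a - proj(a, b)

a = np.array([3.0, 2.0])
b = np.array([1.0, 0.0])

print(proj(a, b))             # [3. 0.]
print(comp(a, b))             # [0. 2.]
print(np.dot(comp(a, b), b))  # 0.0: orthogonal to the reference vector
```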

## Changing bases

To change the basis that a vector is written in, we need to know how the basis vectors are related. We do this by writing one set of basis vectors in terms of the other basis vectors. If we want to change from $\hat\imath,\hat\jmath$ to $\hat{u},\hat{v}$, then we need to write $\hat\imath,\hat\jmath$ in terms of $\hat{u},\hat{v}$ and then substitute the expressions.

Example: Basis change.

For example, if we have $\vec{a} = 3\,\hat{\imath} + 2\,\hat{\jmath}$ and we want to write this in the $\,\hat{u}, \,\hat{v}$ basis, then we need to know $\,\hat{\imath}, \,\hat{\jmath}$ in terms of $\,\hat{u}, \,\hat{v}$.

From above we see that: \[\begin{aligned} \hat{\imath} &= \cos\theta \, \hat{u} - \sin\theta \, \hat{v} = \frac{1}{\sqrt{2}} \,\hat{u} - \frac{1}{\sqrt{2}} \,\hat{v} \\ \hat{\jmath} &= \sin\theta \, \hat{u} + \cos\theta \, \hat{v} = \frac{1}{\sqrt{2}} \,\hat{u} + \frac{1}{\sqrt{2}} \,\hat{v}.\end{aligned}\]

Then we can substitute and re-arrange:

\[\begin{aligned} \vec{a} &= 3\,\hat{\imath} + 2\,\hat{\jmath} \\ &= 3\left(\frac{1}{\sqrt{2}} \,\hat{u} - \frac{1}{\sqrt{2}} \,\hat{v}\right) + 2\left(\frac{1}{\sqrt{2}} \,\hat{u} + \frac{1}{\sqrt{2}} \,\hat{v}\right) \\ &= \left(\frac{3}{\sqrt{2}} + \frac{2}{\sqrt{2}} \right) \,\hat{u} + \left(-\frac{3}{\sqrt{2}} + \frac{2}{\sqrt{2}} \right) \,\hat{v} \\ &= \frac{5}{\sqrt{2}} \,\hat{u} - \frac{1}{\sqrt{2}} \,\hat{v}.\end{aligned}\]

If we want to convert back the other way then we would need to know $\,\hat{u}, \,\hat{v}$ in terms of $\,\hat{\imath}, \,\hat{\jmath}$. We can find this by solving for $\,\hat{u}, \,\hat{v}$ above, giving: \[\begin{aligned} \hat{u} &= \cos\theta \, \hat\imath + \sin\theta \, \hat\jmath = \frac{1}{\sqrt{2}} \,\hat\imath + \frac{1}{\sqrt{2}} \,\hat\jmath \\ \hat{v} &= -\sin\theta \, \hat\imath + \cos\theta \, \hat\jmath = -\frac{1}{\sqrt{2}} \,\hat\imath + \frac{1}{\sqrt{2}} \,\hat\jmath.\end{aligned}\]
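The worked example can be reproduced numerically. A sketch assuming, as in the example, a $45^\circ$ rotation between the two bases:

```python
import numpy as np

theta = np.radians(45)        # rotation angle between the bases
a_ij = np.array([3.0, 2.0])   # components (a_i, a_j)

# a_u =  cos(theta) a_i + sin(theta) a_j
# a_v = -sin(theta) a_i + cos(theta) a_j
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
a_uv = R @ a_ij
print(a_uv)   # approximately [3.536, -0.707] = (5/sqrt(2), -1/sqrt(2))

# The length is the same in either orthonormal basis.
assert np.isclose(np.linalg.norm(a_ij), np.linalg.norm(a_uv))
```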

We can also write the general expressions for basis change, as below.

Change of basis formulas.

\[\begin{aligned} \vec{a} &= a_i \, \hat\imath + a_j \, \hat\jmath + a_k \, \hat{k} & \vec{a} &= a_u \, \hat{u} + a_v \, \hat{v} + a_w \, \hat{w} \\[1em] a_i &= a_u u_i + a_v v_i + a_w w_i & a_u &= a_i i_u + a_j j_u + a_k k_u \\ a_j &= a_u u_j + a_v v_j + a_w w_j & a_v &= a_i i_v + a_j j_v + a_k k_v \\ a_k &= a_u u_k + a_v v_k + a_w w_k & a_w &= a_i i_w + a_j j_w + a_k k_w \end{aligned}\]

We will derive the first set of equations (the second set are derived similarly). The vector $\vec{a}$ can be written in both the $\hat\imath,\hat\jmath,\hat{k}$ and $\hat{u},\hat{v},\hat{w}$ bases: \[\begin{aligned} \vec{a} &= a_i \hat\imath + a_j \hat\jmath + a_k \hat{k} & \vec{a} &= a_u \hat{u} + a_v \hat{v} + a_w \hat{w}. \end{aligned}\]

We can write each $\hat{u},\hat{v},\hat{w}$ basis vector in terms of the $\hat\imath,\hat\jmath,\hat{k}$ basis: \[\begin{aligned} \hat{u} &= u_i \hat\imath + u_j \hat\jmath + u_k \hat{k} \\ \hat{v} &= v_i \hat\imath + v_j \hat\jmath + v_k \hat{k} \\ \hat{w} &= w_i \hat\imath + w_j \hat\jmath + w_k \hat{k}. \end{aligned}\] Substituting these expressions into $\vec{a}$ gives: \[\begin{aligned} \vec{a} &= a_u \hat{u} + a_v \hat{v} + a_w \hat{w} \\ &= a_u (u_i \hat\imath + u_j \hat\jmath + u_k \hat{k}) + a_v (v_i \hat\imath + v_j \hat\jmath + v_k \hat{k}) + a_w (w_i \hat\imath + w_j \hat\jmath + w_k \hat{k}) \\ &= (a_u u_i + a_v v_i + a_w w_i) \hat\imath + (a_u u_j + a_v v_j + a_w w_j) \hat\jmath + (a_u u_k + a_v v_k + a_w w_k) \hat{k} \\ &= a_i \hat\imath + a_j \hat\jmath + a_k \hat{k}. \end{aligned}\] Comparing the last two lines gives the component formulas.

In 2D the change between two orthonormal bases is a rotation by an angle $\theta$, resulting in the change of basis expression below.

Change of basis formula in 2D.

\[\begin{aligned} \vec{a} &= a_i \, \hat\imath + a_j \, \hat\jmath & \vec{a} &= a_u \, \hat{u} + a_v \, \hat{v} \\[1em] a_i &= \cos\theta \, a_u - \sin\theta \, a_v & a_u &= \cos\theta \, a_i + \sin\theta \, a_j \\ a_j &= \sin\theta \, a_u + \cos\theta \, a_v & a_v &= -\sin\theta \, a_i + \cos\theta \, a_j \end{aligned}\]

Elementary geometry gives the relationships between the basis vectors: \[\begin{aligned} \hat\imath &= \cos\theta \, \hat{u} - \sin\theta \, \hat{v} & \hat{u} &= \cos\theta \, \hat\imath + \sin\theta \, \hat\jmath \\ \hat\jmath &= \sin\theta \, \hat{u} + \cos\theta \, \hat{v} & \hat{v} &= -\sin\theta \, \hat\imath + \cos\theta \, \hat\jmath. \end{aligned}\] Thus we have the components: \[\begin{aligned} i_u &= \cos\theta & i_v &= -\sin\theta & u_i &= \cos\theta & u_j &= \sin\theta \\ j_u &= \sin\theta & j_v &= \cos\theta & v_i &= -\sin\theta & v_j &= \cos\theta. \end{aligned}\] Substituting these into #rvv-eg and ignoring the third components gives the desired expressions.

Vector expressions are true no matter which basis we write the vectors in; different vectors in the same expression can even be written in different bases.

Example: Vector addition in different bases.

Adding $\vec{a}$ and $\vec{b}$ to get the result $\vec{c}$ is a well-defined operation even before any basis is used, so it cannot depend on the basis chosen. As we see below, we can do the calculation in either the $\hat\imath,\hat\jmath$ or $\hat{u},\hat{v}$ basis.

\[\begin{aligned} \vec{c} &= \vec{a} + \vec{b} \\ &= (3\hat\imath + 2\hat\jmath) + (3\hat\imath - \hat\jmath) \\ &= 6\hat\imath + \hat\jmath \\ \vec{c} &= \vec{a} + \vec{b} \\ &= (3.5\hat{u} - 0.7\hat{v}) + (1.4\hat{u} - 2.8\hat{v}) \\ &= 4.9\hat{u} - 3.5\hat{v} \\ \vec{c} &= \vec{a} + \vec{b} \\ &= (3\hat\imath + 2\hat\jmath) + (1.4\hat{u} - 2.8\hat{v}) \\ &= 3\hat\imath - 2.8\hat{v} + 2\hat\jmath + 1.4\hat{u}. \end{aligned}\] The component order in the mixed expression is arbitrary.

Example Problem: Cross product in different bases.

\[\begin{aligned} \vec{a} &= 3\hat\imath + 2\hat\jmath = 3.5\hat{u} - 0.7\hat{v} & a &= \sqrt{3^2 + 2^2} = 3.6 \\ \vec{b} &= 3\hat\imath - \hat\jmath = 1.4\hat{u} - 2.8\hat{v} & b &= \sqrt{3^2 + 1^2} = 3.2 \end{aligned}\]

Compute the cross product $\vec{a} \times \vec{b}$ using: (1) the angle formula #rvv-el; (2) the component formula #rvv-ex with $\vec{a},\vec{b}$ both in the $\hat\imath,\hat\jmath$ basis, both in the $\hat{u},\hat{v}$ basis, and with $\vec{a}$ in the $\hat\imath,\hat\jmath$ basis and $\vec{b}$ in the $\hat{u},\hat{v}$ basis.

(1) The dot product is $\vec{a} \cdot \vec{b} = 7$ and the vector lengths are $a = 3.6$ and $b = 3.2$, so $\cos\theta = 7 / (ab)$ and $\theta \approx 53^\circ$. Now using #rvv-el gives: \[\begin{aligned} \vec{a} \times \vec{b} &= a b \sin\theta ( -\hat{k}) \\ &\approx -9 \hat{k}. \end{aligned}\] (2) Using the component formula #rvv-ex gives: \[\begin{aligned} (3\hat\imath + 2\hat\jmath) \times (3\hat\imath - \hat\jmath) &= -3 \hat\imath \times \hat\jmath + 6 \hat\jmath \times \hat\imath \\ &= -3 \hat{k} - 6 \hat{k} \\ &= -9 \hat{k} \\ (3.5 \hat{u} - 0.7 \hat{v}) \times (1.4 \hat{u} - 2.8 \hat{v}) &= - (3.5 \times 2.8) \hat{u} \times \hat{v} - (0.7 \times 1.4) \hat{v} \times \hat{u} \\ &= -10 \hat{k} + \hat{k} \\ &= -9 \hat{k} \\ (3\hat\imath + 2\hat\jmath) \times (1.4\hat{u} - 2.8\hat{v}) &= (3 \times 1.4) \hat\imath \times \hat{u} - (3 \times 2.8) \hat\imath \times \hat{v} \\ &\quad + (2 \times 1.4) \hat\jmath \times \hat{u} - (2 \times 2.8) \hat\jmath \times \hat{v} \\ &= 4.2 \sin 45^\circ \, \hat{k} - 8.5 \sin 135^\circ \, \hat{k} \\ &\quad - 2.8 \sin 45^\circ \, \hat{k} - 5.7 \sin 45^\circ \, \hat{k} \\ &= - 9 \hat{k}, \end{aligned}\] where we used the fact that the two bases differ by a $45^\circ$ rotation, so, for example, $\hat\imath \times \hat{u} = \sin 45^\circ \, \hat{k}$ and $\hat\jmath \times \hat{u} = -\sin 45^\circ \, \hat{k}$.

Example: Dot product is independent of basis.

Equation #rvv-ed makes it clear that the dot product does not depend on which basis we use to write $\vec{a}$ and $\vec{b}$, so long as we use the same orthonormal basis for both of them. This is because the dot product only depends on the lengths and angle between the vectors, which are real physical quantities that don’t change just because we use a different basis.

However, we can also verify directly that the component equation #rvv-es for the dot product does not depend on which basis we use. To keep the algebra short, we will only do this in 2D.

We compute the dot product using #rvv-es in the $\hat\imath,\hat\jmath$ basis and substitute in the change-of-basis expressions #rvv-eg, giving:

\[\begin{aligned} \vec{a} \cdot \vec{b} &= a_i b_i + a_j b_j \\ &= (u_i a_u + v_i a_v ) (u_i b_u + v_i b_v) + (u_j a_u + v_j a_v) (u_j b_u + v_j b_v) \\ &= (u_i^2 + u_j^2) a_u b_u + (v_i^2 + v_j^2) a_v b_v + (u_i v_i + u_j v_j) (a_u b_v + a_v b_u) \\ &= \| \hat{u} \|^2 a_u b_u + \| \hat{v} \|^2 a_v b_v + (\hat{u} \cdot \hat{v}) (a_u b_v + a_v b_u) \\ &= a_u b_u + a_v b_v. \end{aligned}\]

To get the last line we used the fact that $\hat{u}$ and $\hat{v}$ form an orthonormal basis, so that they each have length 1 (that is, $\|\hat{u}\| = \|\hat{v}\| = 1$) and they are orthogonal (that is, $\hat{u} \cdot \hat{v} = 0$).

This then shows that

\[\begin{aligned} a_i b_i + a_j b_j &= a_u b_u + a_v b_v \end{aligned}\]

and so it doesn’t matter which basis we use to compute $\vec{a} \cdot \vec{b}$, so long as we use an orthonormal basis.
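This invariance can also be checked numerically for an arbitrary rotation between two orthonormal bases ($30^\circ$ here, chosen arbitrarily):

```python
import numpy as np

theta = np.radians(30)   # any rotation between two orthonormal bases works
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

a_ij = np.array([3.0, 2.0])
b_ij = np.array([3.0, -1.0])

# Components of the same vectors in the rotated u,v basis.
a_uv = R @ a_ij
b_uv = R @ b_ij

# The dot product is the same in either orthonormal basis.
print(np.dot(a_ij, b_ij))                                   # 7.0
print(np.isclose(np.dot(a_uv, b_uv), np.dot(a_ij, b_ij)))   # True
```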

## Time-dependent vectors and bases

We can have dynamic vectors which change over time, so that their components also change. Alternatively, we can have a fixed vector but a dynamic basis. Because we are using orthonormal bases, the only type of basis change that can occur is rigid rotational motion of the basis.

A time-varying vector or basis. Whether it is the vector or the basis that varies with time, in either case the components also change with time.