
Vectors

A vector is usually described as a magnitude or length together with a direction. Both length and direction should be understood broadly. Geometrically, vectors are directed line segments. Like all line segments, vectors have length, but because vectors are directed, two vectors of the same length that point opposite ways are distinct.

The vector from point A to point B is sometimes (though not very often) represented as {$ \overrightarrow {A B} $}. Because vectors have direction:

{$$ \Large \overrightarrow {A B} \ne \overrightarrow {BA} $$}

A vector from the origin to the point A is sometimes represented as {$ \overrightarrow {O A} $}.

More commonly, vectors are represented as a boldface letter or, especially in blackboard or handwritten matter, as a letter with an arrow above:

{$$ \Large \eqalign { &\bf A \\ &\bf a \\ &\overrightarrow a }\tag {vector notation 1}$$}

Vectors can be generalized, but since "so much, that way" is fairly meaningless in itself, there must be some frame of reference for both magnitude and direction.

The most familiar systems are those of the Cartesian (co-ordinate) plane {$ \mathbb R^2 $} as represented by ordinary graph paper and its 3D relative {$ \mathbb R^3 $}. Clearly the idea of a co-ordinate system can be extended to any number of dimensions ({$ \mathbb R^n $}) and to complex numbers ({$\mathbb C^n $}), although complex numbers play by their own rules.

In these systems, vectors are represented by an ordered set of numbers:

{$$ \Large {\bf a} = ( a_1, a_2, a_3, \ldots , a_n )$$}

The ordered set of numbers is exactly the same as is used for points, but in a vector context it means the vector from the origin to the point. To some extent the direction of vectors expressed this way is also implicit. Clearly {$ \bf a = (1, 3, 5)$} and {$ \bf b = (-5, 7, 4)$} have different directions, but how those directions differ depends upon assumptions about the relationship between the ordered numbers.

In {$ \mathbb R^2 $} and {$ \mathbb R^3 $} the numbers are units on mutually perpendicular axes and questions of direction can be answered with the applicable trigonometry.

In general,

{$$ \Large {\bf a} = ( a_1, a_2, a_3, \ldots , a_n ) = \sum_{i=1}^n a_i {\bf e_i} $$}

where the {$ \bf e_i $} are vectors of magnitude 1, one in the direction of each of the dimensions. The {$ \bf e_i $} are chosen so that all vectors can be expressed, but no {$ \bf e_i $} can be expressed in terms of the others.
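
For instance, in {$ \mathbb R^3 $} the vector {$ \bf a = (1, 3, 5) $} mentioned above expands on such a set of unit vectors as:

{$$ (1, 3, 5) = 1 {\bf e_1} + 3 {\bf e_2} + 5 {\bf e_3} $$}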

Vectors in {$ \large {\mathbb R^3} $}

In {$ \mathbb R^3 $} the minimal set of unit vectors which can express all the vectors but cannot be expressed by combinations of one another is usually represented by {$ ( \hat x, \hat y, \hat z)$} or {$ ( \hat i, \hat j, \hat k)$}. The caret above (read "hat") indicates a vector of magnitude 1, often called a unit vector. Occasionally the hat is omitted, especially with the i, j, k notation. The x, y, z come from the familiar x-, y-, and z-axes, and this notation is used in much analytical geometry and classical physics; i, j, k are often used in engineering and electrical contexts; and {$ \bf e_1, \bf e_2, \bf e_3 $} are used where the extensibility of results to more dimensions is given special emphasis. The notations do not mean different things but are entirely synonymous.

Thus

{$$ \Large \eqalign{ {\bf a} &= a_1 {\bf \hat x} + a_2 {\bf \hat y} + a_3 {\bf \hat z} \\ &= a_1 {\bf i} + a_2 {\bf j} + a_3 {\bf k }\\&= a_1 \bf e_1 + a_2 \bf e_2 + a_3 \bf e_3 \\&= ( a_1, a_2, a_3) \\&= \left[ a_1, a_2, a_3 \right] \\ &= {\left[ \begin{matrix} a_1 \\ a_2 \\ a_3 \end{matrix} \right] }} \tag{vector notation 2}$$}

the latter two being row and column matrices, and all but the first two having obvious extensions into n-space. These are just the ways vectors are commonly represented in co-ordinates based on mutually perpendicular (orthogonal) axes. There are other co-ordinate systems which will be taken up later.

The relationship of the axes is entirely arbitrary. A standard (convention) is necessary to ensure computations are intelligible to others (and to ourselves over time). The question usually is "Which is the positive direction of the z-axis?" since the conventional relationship of the x-axis to the y-axis is widely taught: the x-axis is a horizontal line on graph paper with the positive direction to the right, and the y-axis is a vertical line with the positive direction being up (on the graph paper). The positive direction of the z-axis is out of the paper (toward the viewer of the paper). It might be easy enough to remember that, but the world is not graph paper. So the convention applicable on graph paper, in physical situations (so far as physical space approximates 3-space), and in diagrams attempting to render three dimensions is the right-hand rule: when the fingers of the right hand are curled from the x-axis toward the y-axis (that is, counter-clockwise), the thumb points in the positive direction of the z-axis. The right-hand rule is sometimes stated differently, but the statements come to the same thing.

Fortunately, screws (the real-world mechanical fasteners), corkscrews, bolts, and many valves work by the same rule: turning them counter-clockwise causes them to move out (toward you) and turning them clockwise causes them to move in (away from you). On paper and blackboards the direction out of the paper (toward you) is sometimes indicated by {$ \ \odot \ $}, supposedly representing an arrowhead as it comes toward you, and the direction into the paper (away from you) by {$ \ \otimes \ $}, supposedly representing the fletching on an arrow moving away from you.

In flat representations of 3D graphs there may be some confusion about the x- and y-axes. The right-hand rule can be applied in reverse: point the thumb in the direction of the positive z-axis, and the fingers will curl from the x-axis to the y-axis.

The notation I favor here is like this:

{$$ \Large {\bf A} = A_x {\bf \hat x} + A_y {\bf \hat y} + A_z {\bf \hat z} $$}

Because such vectors are in a Cartesian co-ordinate system and are a linear combination of orthogonal vectors, each of length 1, they can be approached with ordinary Euclidean geometry and trigonometry. In addition, in this setting the magnitude of a vector can sensibly be called its length.

Length of a vector

In Euclidean 3-space the length of a vector can be calculated via the Pythagorean theorem.

{$$ \lVert {\bf A} \rVert = \sqrt{A_x^2 + A_y^2 + A_z^2} \tag{length of a vector}$$}

{$ \lVert \bf A \rVert $} is more generally called the norm of {$ {\bf A} $}. It is not the absolute value, although occasionally single-bar brackets are used to denote it, but, like an absolute value, it is always non-negative because the components are real numbers.
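
For example, with the (made-up) vector {$ {\bf A} = 1 {\bf \hat x} + 2 {\bf \hat y} + 2 {\bf \hat z} $}:

{$$ \lVert {\bf A} \rVert = \sqrt{1^2 + 2^2 + 2^2} = \sqrt 9 = 3 $$}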

Null vector

There is a zero or null vector: {$ {\bf \overrightarrow 0 } $}. The null vector requires finessing the definition of vector because it either has no direction or all directions at once. For computational purposes:

{$$ {\bf\overrightarrow 0} = 0 {\bf \hat x} + 0 {\bf \hat y} + 0 {\bf \hat z} \tag {Null vector} $$}

Scalar Multiplication

Scalar multiplication is implicit in the notation we are using. It must be possible to multiply a vector by a scalar because we have been expressing vectors as sums of scalar multiples of unit vectors. The scalars are real numbers in {$ \mathbb R^3 $}.

Do not confuse scalar multiplication with the scalar product (aka dot product, inner product), which will be covered later. Scalar multiplication has two operands, a scalar and a vector, and from these it produces a vector.

{$$ k {\bf A} = kA_x {\bf \hat x} + kA_y {\bf \hat y} + kA_z {\bf \hat z} \tag{scalar multiplication} $$}

Scalar multiplication may be thought of as adding a vector to itself some number of times (we are coming to vector addition). Scalar multiplication stretches a vector when the absolute value of the scalar is greater than 1 and shrinks a vector when the absolute value of the scalar is between 0 and 1. If the scalar is 0, the result is the null vector. When the scalar is negative, the direction of the vector is reversed. It should be obvious (but by all means do the math if you are not convinced) that all scalar multiples of a vector (except the null vector) are collinear with the vector operand, so the result can only be in one of two possible directions: the same direction as the vector operand or the opposite direction.
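
As an illustration, scaling the example vector {$ {\bf A} = 1 {\bf \hat x} + 2 {\bf \hat y} + 2 {\bf \hat z} $} used above by 3 and by -1:

{$$ \eqalign { 3 {\bf A} &= 3 {\bf \hat x} + 6 {\bf \hat y} + 6 {\bf \hat z}, \qquad \lVert 3 {\bf A} \rVert = \sqrt{9 + 36 + 36} = 9 = 3 \lVert {\bf A} \rVert \\ (-1) {\bf A} &= -1 {\bf \hat x} - 2 {\bf \hat y} - 2 {\bf \hat z}, \qquad \lVert (-1) {\bf A} \rVert = 3 = \lVert {\bf A} \rVert } $$}

The first result is stretched to three times the length; the second has the same length but points the opposite way.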

Vector Addition

Geometrically, vectors may be added by the tail-to-tail (parallelogram) method or the tail-to-head method. As should be obvious, the methods are equivalent.

Algebraically, vectors may be added by adding the coefficients on each of the basis vectors.

Given:

{$$ {\bf A} = A_x {\bf \hat x} + A_y {\bf \hat y} + A_z {\bf \hat z} $$}

and

{$$ {\bf B} = B_x {\bf \hat x} + B_y {\bf \hat y} + B_z {\bf \hat z} $$}

{$$ {\bf A} + {\bf B} = (A_x + B_x) {\bf \hat x} + (A_y + B_y) {\bf \hat y} + (A_z + B_z) {\bf \hat z} \tag{Vector Addition} $$}

Vector addition

In Euclidean 3-space, when only two vectors are added at a time there is always a plane for the geometric construction, because when the vectors are joined tail-to-tail or head-to-tail only three points are involved.
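
For a concrete (made-up) example, adding {$ {\bf A} = 1 {\bf \hat x} + 2 {\bf \hat y} + 2 {\bf \hat z} $} and {$ {\bf B} = 3 {\bf \hat x} - 1 {\bf \hat y} + 4 {\bf \hat z} $}:

{$$ {\bf A} + {\bf B} = (1 + 3) {\bf \hat x} + (2 - 1) {\bf \hat y} + (2 + 4) {\bf \hat z} = 4 {\bf \hat x} + 1 {\bf \hat y} + 6 {\bf \hat z} $$}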

Vector addition is commutative:

{$$ {\bf A} + {\bf B} = {\bf B} + {\bf A} $$}

and associative:

{$$ ( {\bf A} + {\bf B} ) + {\bf C} = {\bf A} + ( {\bf B} + {\bf C} ) $$}

There is an additive identity, namely the null vector:

{$$ {\bf A} + {\bf \overrightarrow 0} = {\bf A} $$}

Scalar multiplication is distributive:

{$$ k ( {\bf A} + {\bf B} ) = k {\bf A} + k {\bf B} $$}

And when the scalar is -1, the additive inverse is produced:

{$$ {\bf A} + (-1) {\bf A} = {\bf \overrightarrow 0} $$}
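
As a quick numerical check of the distributive property, using the example vectors above and {$ k = 2 $}:

{$$ \eqalign { 2 \left( (1, 2, 2) + (3, -1, 4) \right) &= 2 (4, 1, 6) = (8, 2, 12) \\ 2 (1, 2, 2) + 2 (3, -1, 4) &= (2, 4, 4) + (6, -2, 8) = (8, 2, 12) } $$}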

Vector Dot Product

The dot product is also called the inner product in Euclidean contexts and the scalar product because it yields a scalar. Do not confuse the scalar product with scalar multiplication, which yields a vector.

The dot product of two vectors is the sum of the arithmetic products of the corresponding coefficients on the basis vectors.

{$$ {\bf A} \cdot {\bf B} = A_x B_x + A_y B_y + A_z B_z \tag{Dot product}$$}
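
Continuing with the same example vectors {$ {\bf A} = (1, 2, 2) $} and {$ {\bf B} = (3, -1, 4) $}:

{$$ {\bf A} \cdot {\bf B} = (1)(3) + (2)(-1) + (2)(4) = 3 - 2 + 8 = 9 $$}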

An immediate consequence of this definition is that the dot product of a vector and itself is the square of the norm (length) of the vector:

{$$ \eqalign { {\bf A} \cdot {\bf A} &= A_x^2 + A_y^2 + A_z^2 \\ &= \sqrt{A_x^2 + A_y^2 + A_z^2}^2 \\ &= {\lVert {\bf A} \rVert}^2 } $$}
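
With the example vector {$ {\bf A} = (1, 2, 2) $}, whose length was found above to be 3:

{$$ {\bf A} \cdot {\bf A} = 1 + 4 + 4 = 9 = {\lVert {\bf A} \rVert}^2 $$}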

Via the Law of Cosines this leads to an alternate method of calculating the dot product where {$ \theta $} is the angle between the vectors:

{$$ {\bf A} \cdot {\bf B} = \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta $$}

This is especially convenient in practical problems when the frame can be chosen so that one of the dimensions can be neglected, as is often the case in classical mechanics.
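For instance, with the example vectors {$ {\bf A} = (1, 2, 2) $} and {$ {\bf B} = (3, -1, 4) $}, we have {$ \lVert {\bf A} \rVert = 3 $}, {$ \lVert {\bf B} \rVert = \sqrt{26} $}, and {$ {\bf A} \cdot {\bf B} = 9 $}, so the angle between them is:

{$$ \cos \theta = \frac{ {\bf A} \cdot {\bf B} }{ \lVert {\bf A} \rVert \lVert {\bf B} \rVert } = \frac{9}{3 \sqrt{26}} = \frac{3}{\sqrt{26}} \approx 0.588, \qquad \theta \approx 54^\circ $$}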

To demonstrate the above where the vectors are not collinear and not perpendicular, start with the Law of Cosines:

{$$ c^2 = a^2 + b^2 - 2 a b \cos \theta \text{, where } \theta \text{ is the angle opposite } c. $$}


Vector Triangle

So for the vector triangle with sides {$ {\bf A}, {\bf B}, \text{ and } {\bf C} $}:

{$$ \lVert {\bf C} \rVert^2 = \lVert {\bf A} \rVert^2 + \lVert {\bf B} \rVert^2 - 2 \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta $$}

So far this is just plane trigonometry as the norms of the vectors are simply lengths. But when we consider the vector triangle as composed of vectors, we see:

{$$ {\bf A} + {\bf C} = {\bf B} \Leftrightarrow {\bf C} = {\bf B} - {\bf A} $$}

(If you do not see this last step, complete the vector parallelogram and reverse the direction of {$ {\bf A} $}.)


C = B - A

So,

{$$ \lVert {\bf B} - {\bf A} \rVert^2 = \lVert {\bf A} \rVert^2 + \lVert {\bf B} \rVert^2 - 2 \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta \\ (B_x - A_x)^2 + (B_y - A_y)^2 + (B_z - A_z)^2 =\\ A_x^2 + A_y^2 + A_z^2 + B_x^2 + B_y^2 + B_z^2 - 2 \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta \\ B_x^2 - B_x^2 - 2 A_x B_x + A_x^2 - A_x^2 + \\ B_y^2 - B_y^2 - 2 A_y B_y + A_y^2 - A_y^2 + \\ B_z^2 - B_z^2 - 2 A_z B_z + A_z^2 - A_z^2 = - 2 \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta \\ - 2 A_x B_x - 2 A_y B_y - 2 A_z B_z = - 2 \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta \\ A_x B_x + A_y B_y + A_z B_z = \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta \\ {\bf A} \cdot {\bf B} = \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta $$}

When the vectors are collinear {$ \theta $} is either 0 or a straight angle and {$ \cos \theta $} is either 1 or -1 (but not both at once, as with a square root).

When {$ {\bf A} \perp {\bf B} $} we do not have to use the Law of Cosines, but can appeal to Pythagoras directly:

{$$ \eqalign { \lVert {\bf B} - {\bf A} \rVert^2 &= \lVert {\bf A} \rVert^2 + \lVert {\bf B} \rVert^2 \\ \lVert {\bf B} - {\bf A} \rVert^2 - \lVert {\bf A} \rVert^2 - \lVert {\bf B} \rVert^2 &= 0 \\ -2 A_x B_x - 2 A_y B_y - 2 A_z B_z &= 0 \\ A_x B_x + A_y B_y + A_z B_z &= 0 \\ {\bf A} \cdot {\bf B} &= 0 }$$}

Of course, since {$ \cos \theta = 0 $} for perpendicular vectors, it is easy to see that {$ {\bf A} \cdot {\bf B} = \lVert {\bf A} \rVert \lVert {\bf B} \rVert \cos \theta $} holds in this case as well. The argument can be run in reverse just as easily, so:

{$$ {\bf A} \cdot {\bf B} = 0 \Leftrightarrow {\bf A} \perp {\bf B} $$}

Sort of. This leads us to conclude that the null vector is perpendicular to every vector including itself. Live with it or exclude the null vector from these arguments.
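
For a concrete instance with nonzero vectors, the made-up pair {$ (1, 2, 2) $} and {$ (2, -1, 0) $} are perpendicular:

{$$ (1, 2, 2) \cdot (2, -1, 0) = 2 - 2 + 0 = 0 $$}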



