Magnitudes and Directions

I have been rereading one of the introductory chapters of Misner, Thorne, and Wheeler and decided to try to come up with my own notion of a 1-form based on the concept of a differential. I suspect the result is equivalent to the standard definition, but it comes from a new perspective that treats "magnitudes" and "directions" as separate entities that combine to form a vector.

Let us start with the most bare (but relevant) mathematical entity I can think of: a set of points. Picture a set of points scattered throughout space. We next need to introduce a set of directions. At each point, imagine there is an associated set of directions. What is a direction you ask? I would define it as something you can take the dot product of, but you cannot add together. You cannot add north and east because you do not know "how much north" and "how much east," but you can certainly say north dot east = 0 because they are perpendicular. You could define the dot product of two directions as the cosine of the angle between them. The difference between dotting two directions and dotting two vectors is that the former gives you a number with no units. You could then think of a dot product as a mapping from pairs of directions to numbers on the interval [-1,1]. So we have a set of unique directions at each point. The number of directions associated with each point does not have to be the same. Some points could have eight neighbors while others could have two.
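To make the idea concrete, here is a small numerical sketch. Representing directions as unit vectors is purely an illustration device here (the whole point above is that directions are not vectors); "north" and "east" are hypothetical names chosen to match the example in the text.

```python
import numpy as np

# Hypothetical illustration: represent "north" and "east" as unit vectors
# only so that we can compute their dot product, which is the dimensionless
# cosine of the angle between them, a number in [-1, 1].
north = np.array([0.0, 1.0])
east = np.array([1.0, 0.0])
northeast = np.array([1.0, 1.0]) / np.sqrt(2.0)

print(north @ east)       # perpendicular directions: cos(90 deg) = 0
print(north @ northeast)  # directions 45 deg apart: cos(45 deg)
```

Note that nothing here assigns "how much north" to anything; only angles between directions are used.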

I am going to define the basis as the set of directions associated with each point. Normally, the basis is thought of as being composed of vectors, but in this context, I am going to think of them as directions without magnitude. We can make vectors by arranging a set of directions in a row and a set of "magnitudes" (numbers with units) in a column and doing matrix multiplication. Call your magnitudes v^\alpha and your directions \omega_\alpha . We can represent a vector as v^\alpha \omega_\alpha by using the Einstein summation convention.
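The row-of-directions times column-of-magnitudes picture can be sketched directly as a matrix product. Again, the directions are represented as unit vectors only for illustration; the specific basis and magnitudes are made up for the example.

```python
import numpy as np

# A minimal sketch, assuming a 2-D Euclidean basis of directions
# represented as unit-vector rows (for illustration only).
directions = np.array([[1.0, 0.0],    # omega_1
                       [0.0, 1.0]])   # omega_2
magnitudes = np.array([3.0, 4.0])     # v^1, v^2 (numbers with units)

# The Einstein sum v^alpha omega_alpha as a matrix product:
v = magnitudes @ directions
print(v)  # the assembled vector
```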

From here, we may define the metric as the pairwise dot products of the basis directions:

g_{\alpha\beta} = \omega_\alpha \cdot \omega_\beta
The diagonal entries of this metric will always be 1 (the cosine of zero), just as in flat spacetime. If the basis is not orthogonal, you will get off-diagonal entries. Notice how the metric naturally shows up when taking the dot product of two vectors:

\boldsymbol{u}\cdot\boldsymbol{v} = (u^\alpha \omega_\alpha)\cdot(v^\beta \omega_\beta) = u^\alpha v^\beta (\omega_\alpha \cdot \omega_\beta) = g_{\alpha\beta}\, u^\alpha v^\beta
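A quick numerical sketch of this, assuming a non-orthogonal pair of directions 60 degrees apart (represented as unit vectors for illustration only):

```python
import numpy as np

# Two basis directions 60 degrees apart, as unit-vector rows.
theta = np.pi / 3
omega = np.array([[1.0, 0.0],
                  [np.cos(theta), np.sin(theta)]])

# Metric: pairwise dot products of the basis directions.
# Diagonal entries are 1; off-diagonal entries are cos(60 deg) = 0.5.
g = omega @ omega.T

# The metric reproduces the dot product of two vectors:
u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
lhs = (u @ omega) @ (v @ omega)  # dot the assembled vectors directly
rhs = u @ g @ v                  # g_{ab} u^a v^b
print(np.isclose(lhs, rhs))
```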
Imagine that each basis direction at a given point points toward a unique neighboring point. Consider a parameter, x , that varies from point to point. We can define the gradient (the exterior derivative of a scalar), \boldsymbol{d} , of that parameter as:

\boldsymbol{d}x = \left[ x(P_i) - x(P) \right] \omega^i(P)
, where \omega^i(P) is the dual of the direction, \omega_i(P) , that points from point P to its ith neighbor, P_i . The dual direction, \omega^i , is defined such that:

\omega^i \cdot \omega_j = \delta^i_j
, where \delta^i_j is the Kronecker delta. In this sense, a 1-form is just a vector of infinitesimal magnitude spanned by the duals of the basis directions at a point. Since coordinates are just a set of parameters that vary from point to point, these differentials are completely compatible with the usual differentials used in calculus. Since all but one of the N coordinates are held constant in this definition, one can imagine (or at least imagine the existence of) N-1 dimensional sheets on which all other coordinates are held constant. The differentials point in the direction normal to these sheets. The depth of the sheets approaches zero as the points approach a continuum. This is why Misner, Thorne, and Wheeler describe 1-forms as sheets. They do not mean infinitesimal sheets but "isosurfaces" on which all but one coordinate is held constant.
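The dual directions and the discrete gradient can be sketched numerically. This assumes a square (invertible) basis with the directions stored as unit-vector rows of a matrix W, and a linear parameter x(p) = a . p chosen so the answer is easy to check; all of these names are illustration choices, not part of the formalism.

```python
import numpy as np

# A non-orthogonal 2-D basis of directions, as unit-vector rows of W.
theta = np.pi / 3
W = np.array([[1.0, 0.0],
              [np.cos(theta), np.sin(theta)]])

# The duals omega^i satisfy omega^i . omega_j = delta^i_j,
# i.e. W_dual @ W.T = I, so W_dual = inv(W.T).
W_dual = np.linalg.inv(W.T)
print(np.allclose(W_dual @ W.T, np.eye(2)))

# Discrete gradient of a linear parameter x(p) = a . p, where the ith
# neighbor of P sits a small step h along omega_i:
a = np.array([2.0, -1.0])
P = np.array([0.0, 0.0])
h = 1e-3
dx = sum((a @ (P + h * W[i]) - a @ P) * W_dual[i] for i in range(2))
print(np.allclose(dx / h, a))  # recovers the usual gradient of x
```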

It is worth noting that unless we let our set of points approach a continuum, the product rule works like this:

\boldsymbol{d}(xy) = x\,\boldsymbol{d}y + y\,\boldsymbol{d}x + \boldsymbol{d}x\,\boldsymbol{d}y
This does not generalize to arbitrary numbers of products in a simple way; one must iteratively apply the product rule.
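A one-line numerical check of the discrete product rule, with made-up values for the parameters and their finite changes between neighboring points:

```python
import math

# On a discrete set of points the differential is a finite difference,
# d f = f(P') - f(P), so the product rule picks up a second-order term:
# d(xy) = x dy + y dx + dx dy.
x, y = 3.0, 5.0      # values at P (hypothetical parameters)
dx, dy = 0.2, -0.1   # finite changes from P to a neighbor P'

d_xy = (x + dx) * (y + dy) - x * y
print(math.isclose(d_xy, x * dy + y * dx + dx * dy))  # True
```

The extra dx dy term is exactly what vanishes in the continuum limit, recovering the ordinary Leibniz rule.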

Given a set of basis vectors, \boldsymbol{e}_j , we may derive a set of basis 1-forms and vice versa. Consider the quantity, \boldsymbol{g}_{ij}=\omega_i\cdot\boldsymbol{e}_j , its inverse, \tilde{\boldsymbol{g}}^{ij} , and the following quantities: \boldsymbol{e}^i=\tilde{\boldsymbol{g}}^{ij}\omega_j . Since \tilde{\boldsymbol{g}}^{ik}\boldsymbol{g}_{kj}=\delta^i_j (where \delta^i_j is the Kronecker delta),

\boldsymbol{e}^i \cdot \boldsymbol{e}_j = \tilde{\boldsymbol{g}}^{ik}\,\omega_k \cdot \boldsymbol{e}_j = \tilde{\boldsymbol{g}}^{ik}\boldsymbol{g}_{kj} = \delta^i_j
The quantities, \boldsymbol{e}^i , form the basis for what are called covectors and can be considered dual to the basis \boldsymbol{e}_i . Whatever units you give the vectors, covectors have the inverse of those units. These covectors are like 1-forms that do not have infinitesimal magnitude.
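This construction can be checked numerically. The sketch below assumes basis vectors e_j built by giving each direction a magnitude of 2 (an arbitrary illustrative choice), with the directions again represented as unit-vector rows:

```python
import numpy as np

# Basis directions (unit-vector rows, for illustration) ...
theta = np.pi / 3
omega = np.array([[1.0, 0.0],
                  [np.cos(theta), np.sin(theta)]])
# ... and basis vectors with magnitude 2 along each direction.
e = 2.0 * omega

g = omega @ e.T            # g_ij = omega_i . e_j
g_inv = np.linalg.inv(g)   # the inverse g~^{ij}
e_dual = g_inv @ omega     # covector basis e^i = g~^{ij} omega_j

# Verify duality: e^i . e_j = delta^i_j.
print(np.allclose(e_dual @ e.T, np.eye(2)))
```

Note that the rows of e have magnitude 2, while the rows of e_dual have magnitude 1/2 times a geometric factor, matching the claim that covectors carry the inverse units of the vectors.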

I later came to the conclusion that what I call "directions" are analogous to what most texts call "unit vectors," except that I say you cannot add or subtract directions. In practice, I cannot think of a single scenario in which you would add or subtract unit vectors without first assigning them a magnitude of some sort. So I think, in practice, it is generally safe to think of unit vectors as directions and apply the formalism I outlined in this post to tensor analysis.

5 thoughts on “Magnitudes and Directions”

  1. Pingback: The Exterior Covariant Derivative | The Path Integral

  2. Pingback: The Covariant Codifferential | The Path Integral

  3. Pingback: Notes on Directions | The Path Integral

  4. I like the idea of scaling back your axioms to the bare minimum, but I suspect that if you are going to insist on having both direction and distance, you might as well just make it an inner product space. I think you're going to lose too much structure without an inner product and proving anything in this 'Alex-space' is going to be a nightmare.

    • I guess the nice thing about separating magnitudes from directions is you can easily see what structure of a more complicated space is attributed to direction and what structure is attributed to assigning magnitudes. I have generally found that you can pretty much set the stage for tensor calculus by starting with a set of points with directions and no magnitudes. You can then introduce the magnitudes by sticking tensors on your set of points. From this perspective, the directions are something inherent to the space you are working with, while the magnitudes are inherent to the objects that live on the space. Something about separating the two is quite satisfying to me. It may turn out to be completely useless, but I think it leads to a nice perspective that could work for discrete and continuous spaces--even curved ones or ones with irregular grids (composed of different polygons/polyhedra at different points). I guess the generality of this perspective is why I am drawn to it. I imagine if I can derive interesting results from this, they will be all the more powerful because they would apply to such a wide variety of more specific cases.
