Tensors

A representation of a Tensor. Image courtesy of https://en.wikipedia.org/wiki/Tensor

One thing you will notice right away is these funny-looking things called tensors. They are often written as a letter with two subscripts, Gµν, for example.

The left-hand side of this equation contains two tensors (note the subscripts), while the right-hand side has just one. But here is the kicker: the entire left-hand side can be condensed into a single tensor! Image courtesy of www.scienceblogs.com
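
For reference (the image itself is not reproduced here), the equation being described is almost certainly the Einstein field equation, which in one standard form reads:

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
\qquad\Longleftrightarrow\qquad
G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
```

The two tensors on the left, Rµν and gµν, condense into the single Einstein tensor Gµν, with the stress-energy tensor Tµν standing alone on the right.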

Before we get any further, let me stop to cover some more jargon.

A tensor can be characterized by something called its rank. Wolfram MathWorld (Click here for their page!) refers to a tensor as a physical object that can be characterized by a single magnitude and multiple directions simultaneously. The multiple-directions portion is where the term rank comes into play. Higher-level mathematics aside, it can be easily represented as such:

  • Rank 0 tensors are simply scalars.
  • Rank 1 tensors are vectors.
  • Rank 2 tensors are NxN (square) matrices.
  • Rank 3 and above are, well, tensors.
  • A tensor of order p has N^p components, where N represents the dimensionality. So a rank-3 tensor in three dimensions has 3^3 = 27 components (the short sketch below makes this concrete).
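
To make the rank-and-components counting concrete, here is a minimal NumPy sketch (my own illustration, not taken from the sources above):

```python
import numpy as np

scalar = np.float64(5.0)              # rank 0: a single magnitude
vector = np.array([1.0, 2.0, 3.0])    # rank 1: 3 components (3^1)
matrix = np.eye(3)                    # rank 2: 3 x 3 = 9 components (3^2)
tensor = np.zeros((3, 3, 3))          # rank 3: 27 components (3^3)

# The rank is the number of axes (indices), and a tensor of order p
# in N dimensions has N**p components.
for name, t in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("tensor", tensor)]:
    print(f"{name}: rank {np.ndim(t)}, {np.size(t)} components")
```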

A wonderful visual representation of this concept of tensor rank is in Dan Fleisch’s video “What’s a Tensor?”, as seen below:

(PLEASE VIEW “MATH IMAGE RESOURCES” PAGE FOR IMAGE SOURCE)

So a tensor is a mathematical object that lets us operate in dimensions higher than three (but works just as well in lower ones). But what about those subscripts? They are called indices, and we must cover some rules for them before continuing on.


Indices:

This notation is believed to have been introduced by Einstein. Was there anything this man couldn't do? For one thing, it makes writing out these equations much easier. Secondly, it is highly compatible with programming languages.

An index is denoted by a subscript, for example A_ij, where the letters i and j are the indices of the tensor A. A given index letter can appear twice in a term, but no more: so no A_iiij. When an index is repeated, it is understood to run over the values 1 to N (N depending on the space considered).

When an index appears twice in a term, as in A_kk, we sum over that index. So A_kk = A_11 + A_22 + A_33 + … + A_NN, with the sum running up to the dimensionality N under consideration. Any index that is not repeated is free to take on any single value in the range from 1 to N.
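
NumPy's einsum function implements this convention almost verbatim, which is one reason index notation plays so nicely with programming languages. A small sketch of the A_kk sum above (my own example):

```python
import numpy as np

N = 3
A = np.arange(1, N * N + 1).reshape(N, N)  # a rank-2 tensor A_ij, N = 3

# A_kk: the index k is repeated, so we sum A_11 + A_22 + ... + A_NN.
explicit = sum(A[k, k] for k in range(N))

# np.einsum applies the summation convention directly: any index
# repeated in the subscript string is summed over.
via_einsum = np.einsum('kk->', A)

print(explicit, via_einsum)  # both print 15 (1 + 5 + 9)
```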

Something interesting we see is that the rank of an expression equals its number of free indices. So A_iij has a rank of one (j is the only free index; the repeated i is summed away), as the snippet below demonstrates.
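
Continuing the sketch above, the repeated index is summed away and only the free index survives:

```python
import numpy as np

A = np.arange(27).reshape(3, 3, 3)  # a rank-3 tensor A_ijk

# A_iij: i is repeated (summed over) and j is the only free index,
# so the result is a rank-1 tensor -- a vector.
B = np.einsum('iij->j', A)
print(B.shape)  # (3,): one free index, one remaining axis
```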


Raising and Lowering Indices

Here I will briefly discuss the raising and lowering of indices. Why would anyone want to do this, and what does it tell us? For a simple definition, Wikipedia comes in handy: “one can raise or lower indices to change a type (a, b) tensor to a (a + 1, b − 1) tensor (raise index) or to a (a − 1, b + 1) tensor (lower index)”. Check the page here! What is being said is that raising or lowering an index changes the type of the tensor.

By manipulating the indices, we can change the type of a tensor, for example turning the covariant components of a vector into contravariant ones. How do we accomplish this? It looks complicated, but Wikipedia has an excellent (though not entirely mathematically rigorous) explanation.

A wonderful Wikipedia page describes the next bit very well; feel free to go here at any point during my explanation.

First, let's consider two concepts: contravariance and covariance.

Contravariance is normally denoted by upper (superscript) indices, g^ij, for example. A familiar instance of a vector with contravariant components is the position of an object relative to an observer, but this is not limited to position: the components of any ordinary vector behave this way.

Covariance is denoted with lower (subscript) indices, g_ij. Covariant components are the components of dual vectors (also known as covectors). What are dual vectors? Let's not worry about that for now.

Now, to manipulate indices.

To Raise: We must multiply our tensor by the contravariant metric tensor (g^ij). Here is what it looks like: g^ij A_j = B^i

To Lower: We must multiply our tensor by the covariant metric tensor (g_ij). Here: g_ij A^j = B_i
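
Here is a minimal sketch of both operations, assuming the flat-space Minkowski metric discussed further down this page (sign conventions vary between texts):

```python
import numpy as np

g_lower = np.diag([1.0, -1.0, -1.0, -1.0])  # covariant metric g_ij
g_upper = np.linalg.inv(g_lower)            # contravariant metric g^ij

A_lower = np.array([2.0, 1.0, 0.0, -1.0])   # covariant components A_j

# Raise: B^i = g^ij A_j (the spatial components flip sign).
B_upper = np.einsum('ij,j->i', g_upper, A_lower)

# Lowering B^i with g_ij recovers the original covariant components.
A_again = np.einsum('ij,j->i', g_lower, B_upper)

print(B_upper)
print(np.allclose(A_again, A_lower))  # True
```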



Some great (though more advanced) reading can be found here: Tensor Calculus MIT


Tensors in General Relativity

In the coming pages you will see lots of tensors in their shorthand versions. But in this last portion of the page, I would like to show you what they actually look like. The metric tensor has already been mentioned on this page: the gµν from the Einstein equation.

In flat Minkowski space, in Cartesian coordinates, this tensor actually appears as the square matrix below:

Image courtesy of http://hepweb.ucsd.edu/ph110b/110b_notes/node74.html
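
For readers without the image: in one common sign convention (signature +, −, −, −; the source's convention may differ) the flat-space metric is

```latex
g_{\mu\nu} = \eta_{\mu\nu} =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix}
```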

You may surmise that the indices µν come in handy here, as the only nonzero components of this matrix are on the diagonal. The indices help us by taking advantage of such symmetries in order to simplify calculations.


In spherical coordinates (which we use a great deal when talking about objects in space), the tensor becomes:

Image courtesy of http://hepweb.ucsd.edu/ph110b/110b_notes/node74.html
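
Again for readers without the image, and in the same assumed sign convention, the flat metric in spherical coordinates (t, r, θ, φ) is

```latex
g_{\mu\nu} =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -r^2 & 0 \\
0 & 0 & 0 & -r^2 \sin^2\theta
\end{pmatrix}
```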

The r in this tensor is the radial distance from the origin, and θ is the angle swept out at that radius. As you can see, things begin to get more complicated when we deal with other coordinate systems. Throw in curved space as well, and things get even more complicated.



WANT TO MOVE ON TO HOW THESE THINGS APPEAR IN GENERAL RELATIVITY?

Space-time is curved!