Addendum 1998 by Chris Hillman

Original 1995 by Michael Weiss.

People sometimes argue over whether the Lorentz–Fitzgerald contraction is "real" or not. That's a topic for another FAQ entry, but here's a short answer: the contraction can be measured, but the measurement is frame dependent. Whether that makes it "real" or not has more to do with your choice of words than the physics.

Here we ask a subtly different question. If you take a snapshot of a rapidly
moving object, will it *look* flattened when you develop the film? What is
the difference between measuring and photographing? Isn't seeing believing?
Not always! When you take a snapshot, you capture the light rays that hit the
*film* at one instant (in the reference frame of the film). These rays may
have left the *object* at different instants; if the object is moving with respect
to the film, then the photograph may give a distorted picture. (Strictly speaking,
snapshots aren't instantaneous, but we're idealizing.)

Oddly enough, though Einstein published his famous relativity paper in 1905, and
FitzGerald proposed his contraction more than a decade earlier, no one seems to have asked this
question until the late '50s. Then Roger Penrose and James Terrell independently
discovered that the object will *not* appear flattened [1,2]. People
sometimes say that the object appears rotated, so this effect is called the
Penrose-Terrell rotation.

Calling it a rotation can be a bit confusing though. Rotating an object brings its backside into view, but it's hard to see how a contraction could do that. Among other things, this entry will try to explain in just what sense the Penrose-Terrell effect is a "rotation".

It will clarify matters to imagine *two* snapshots of the same object, taken by
two cameras moving uniformly with respect to each other. We'll call them
*his* camera and *her* camera. The cameras pass through each other at
the origin at t=0, when they take their two snapshots. Say that the object is at
rest with respect to his camera, and moving with respect to hers. By analysing the
process of taking a snapshot, the meaning of "rotation" will become clearer.

How should we think of a snapshot? Here's one way: consider a pinhole camera. (Just one camera, for the moment.) The pinhole is located at the origin, and the film occupies a patch on a sphere surrounding the origin. We'll ignore all technical difficulties (!), and pretend that the camera takes full spherical pictures: the film occupies the entire sphere.

We need more than just a pinhole and film, though: we also need a shutter. At t=0, the shutter snaps open for an instant to let the light rays through the pinhole; these spread out in all directions, and at t=1 (in the rest frame of the camera) paint a picture on the spherical film.

Let's call points in the snapshot *pixels*. Each pixel gets its color due
to an event, namely a light ray hitting the sphere at t=1. Now let's consider his &
her cameras, as we said before. We'll use t for his time, and t' for hers. At
t=t'=0, the two pinholes coincide at the origin, the two shutters snap simultaneously, and
the light rays spread out. At t=1 for *his* camera, they paint *his*
pixels; at t'=1 for *her* camera, they paint *hers*. So the definition
of a snapshot is frame dependent. But you already knew that. (Pop quiz: what
shape does *he* think *her* film has? Not spherical!) (More
technical difficulties: the rays have to pass right through one film to hit the
other.)

So there's a one-to-one correspondence between pixels in the two snapshots. Two pixels correspond if they are painted by the same light ray. You can see now that her snapshot is just a distortion of his (and vice versa). You could take his snapshot, scan it into a computer, run an algorithm to move the pixels around, and print out hers.

So what does the pixel mapping look like? Simple: if we put the usual latitude/longitude grid on the spheres, chosen so that the relative motion is along the north-south axis, then each pixel slides up towards the north pole along a line of longitude. (Or down towards the south pole, depending on various choices I haven't specified.) This should ring a bell if you know about the aberration of light: if our snapshots portray the night sky, then the stars are white pixels, and aberration changes their apparent positions.
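For the computationally inclined, the pixel-sliding map is just the standard relativistic aberration formula for the polar angle measured from the motion axis. Here is a minimal Python sketch (my own illustration, not part of the original entry; which pole the pixels slide toward depends on the sign conventions left unspecified above):

```python
import math

def aberrate(theta, beta):
    """Polar angle (measured from the pole along the motion axis) of a pixel
    in her snapshot, given its angle theta in his snapshot and the relative
    speed beta (in units of c).  Standard aberration; longitude is unchanged."""
    c = math.cos(theta)
    return math.acos((c + beta) / (1 + beta * c))

# A pixel at 90 degrees slides toward the pole as beta grows:
for beta in (0.0, 0.5, 0.9):
    print(round(math.degrees(aberrate(math.pi / 2, beta)), 1))  # 90.0, 60.0, 25.8
```

Only the polar angle changes; the longitude is untouched, which is exactly the "sliding along lines of longitude" described above.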

Now let's consider the object: say, a galaxy. In passing from his snapshot to hers, the image of the galaxy slides up the sphere, keeping the same face to us. In this sense, it has rotated. Its apparent size will also change, but not its shape (to a first approximation).

The mathematical details are beautiful, but best left to the textbooks [3,4].
Just to entice you if you have the background: if we regard the two spheres as Riemann
spheres, then the pixel mapping is given by a fractional linear transformation.
Well-known facts from complex analysis now tell us two things. First, circles go to
circles under the pixel mapping, so a sphere will *always* photograph as a
sphere. Second, shapes of objects are preserved in the infinitesimally small
limit. (If you know about the double covering of the Lorentz group by SL(2,C),
that also comes into play. [3] is a good reference.)
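If you want to see the circles-to-circles fact without wading through the complex analysis, here is a quick numeric check (my own sketch, not from the cited textbooks). It uses a standard criterion: four points lie on a common circle (or line) on the Riemann sphere exactly when their cross-ratio is real, and a fractional linear transformation preserves cross-ratios.

```python
import cmath

def moebius(w, a, b, c, d):
    # fractional linear transformation w -> (aw + b)/(cw + d)
    return (a * w + b) / (c * w + d)

def cross_ratio(p, q, r, s):
    # real iff p, q, r, s lie on a common circle (or line)
    return ((p - r) * (q - s)) / ((p - s) * (q - r))

# four points on the circle |w - 2| = 1 ...
pts = [2 + cmath.exp(1j * t) for t in (0.1, 1.3, 2.9, 4.2)]
# ... pushed through an arbitrary Moebius map with ad - bc != 0
imgs = [moebius(w, 2, 1j, 1, 3) for w in pts]
print(abs(cross_ratio(*pts).imag) < 1e-9)    # True: the points are concyclic
print(abs(cross_ratio(*imgs).imag) < 1e-9)   # True: so are their images
```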

Although the above description is one way of describing how a moving object appears, we
do stress that the object does *not* look completely rotated. For example,
consider a train carriage that moves with constant speed along a straight track, past
us as we stand on the platform watching it. Light that leaves the back of the carriage
arrives at our eyes together with light from the side of the carriage; and in addition,
the carriage is Lorentz contracted. At first glance this seems to ensure that the
carriage will look rotated, because we are used to seeing just such an effect from
rotations in everyday (nonrelativistic) life. But clearly, on second glance, we will
realise the carriage is *not* rotated, because we notice that its wheels are still
attached to the track, which has not changed its aspect (since it's at rest relative to
us). So there will be a psychological effect of the carriage looking somewhat
rotated, but we will quickly notice that this is only an imperfect optical illusion.

[1] and [2] are the original articles. [3] and [4] are textbook treatments. [5] has beautiful computer-generated pictures of the Penrose-Terrell rotation. The authors of [5] later made a video [6] of this and other effects of "SR photography".

[1] R. Penrose, "The Apparent Shape of a Relativistically Moving Sphere", Proc. Camb. Phil. Soc., vol 55, pp 137-139 (1959).
[2] J. Terrell, "Invisibility of the Lorentz Contraction", Phys. Rev., vol 116, no. 4, pp 1041-1045 (1959).
[3] R. Penrose and W. Rindler, "Spinors and Space-Time", vol I, chapter 1.
[4] J. B. Marion, "Classical Dynamics", Section 10.5.
[5] Hsiung, Ping-Kang, Robert H. Thibadeau, and Robert H. P. Dunn, "Ray-Tracing Relativity", Pixel, vol 1, no. 1 (Jan/Feb 1990).
[6] Hsiung, Ping-Kang, and Robert H. Thibadeau, "Spacetime Visualizations", a video, Imaging Systems Lab, Robotics Institute, Carnegie Mellon University.

The above article on Penrose-Terrell rotations mentions in passing the fact that every Lorentz transformation acts on the celestial sphere in the same way that the corresponding Moebius transformation acts on the Riemann sphere, but it is not very explicit. This addendum is an expanded entry on the following fascinating facts, which to my mind are the most interesting things about the Lorentz group:

1. Vectors in R^{4} with Minkowski inner product, i.e. E^{(1,3)}, are
in bijection with two-by-two hermitian matrices according to the prescription

(t,x,y,z)  <--->  [ t+z    x+iy ]
                  [ x-iy   t-z  ]
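The reason this prescription is so useful: the determinant of the hermitian matrix is exactly the Minkowski norm t^{2} - x^{2} - y^{2} - z^{2}, which is why conjugation by a determinant-one matrix will give a Lorentz transformation. A quick Python check (my illustration, not part of the original addendum):

```python
def herm(t, x, y, z):
    # the hermitian matrix associated to (t, x, y, z)
    return [[t + z, x + 1j * y], [x - 1j * y, t - z]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

t, x, y, z = 2.0, 1.0, 0.5, -0.5
print(det2(herm(t, x, y, z)).real)     # (t+z)(t-z) - (x+iy)(x-iy) = 2.5
print(t**2 - x**2 - y**2 - z**2)       # the Minkowski norm: also 2.5
```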

Lorentz transformations can be represented by SL_{2}(C) matrices

Q = [ a  b ]
    [ c  d ],      ad - bc = 1

where you can read off the Lorentz transformation L(Q) corresponding to Q by looking at X -> QXQ* where * is conjugate transpose. Going in the other direction, L(Q) can be represented by either Q or -Q but we have a bijection

+/-Q <---> L(Q) = L(-Q)

which gives an isomorphism between the (proper, orthochronous) Lorentz group
SO^{+}(1,3) and PSL_{2}(C), the group SL_{2}(C) modulo its "center", +/- the
identity matrix.
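If you want to see L(Q) concretely, you can read off its 4x4 matrix by letting X -> QXQ* act on the four basis vectors. A Python sketch (mine, for illustration) that also checks L(Q) = L(-Q) and that L(Q) preserves the Minkowski interval:

```python
def herm(t, x, y, z):
    return [[t + z, x + 1j * y], [x - 1j * y, t - z]]

def unherm(m):
    # invert herm: recover (t, x, y, z) from a hermitian matrix
    return [(m[0][0] + m[1][1]).real / 2, m[0][1].real,
            m[0][1].imag, (m[0][0] - m[1][1]).real / 2]

def mat2(p, q):
    # 2x2 complex matrix product
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_t(q):
    return [[q[j][i].conjugate() for j in range(2)] for i in range(2)]

def lorentz(q):
    # column j of L(Q) is the image of the j-th basis vector under X -> QXQ*
    cols = [unherm(mat2(mat2(q, herm(*e)), conj_t(q)))
            for e in ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1])]
    return [[cols[j][i] for j in range(4)] for i in range(4)]

L1 = lorentz([[1, 1j], [0, 1]])            # det Q = 1
L2 = lorentz([[-1, -1j], [0, -1]])         # -Q gives the same L
print(all(abs(L1[i][j] - L2[i][j]) < 1e-12 for i in range(4) for j in range(4)))

# and L(Q) preserves the Minkowski interval t^2 - x^2 - y^2 - z^2:
v = [2.0, 1.0, 0.5, -0.5]                  # interval 2.5
Lv = [sum(L1[i][j] * v[j] for j in range(4)) for i in range(4)]
print(abs((Lv[0]**2 - Lv[1]**2 - Lv[2]**2 - Lv[3]**2) - 2.5) < 1e-12)
```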

In particular,

exp p/2 [ 1   0 ]   =   [ exp(p/2)      0      ]   =  boost by p along z axis
        [ 0  -1 ]       [    0      exp(-p/2)  ]

exp p/2 [ i   0 ]   =   [ exp(ip/2)     0       ]   =  rotation by p about z axis
        [ 0  -i ]       [    0      exp(-ip/2)  ]

The appearance of "half angles" p/2 is characteristic of the two-fold "spinorial
covering" of the Lorentz group by SL_{2}(C). For instance, matrices of the
second form above yield a "one parameter subgroup" of SL_{2}(C) as we let p vary,
which "covers" the one parameter subgroup SO(2) of PSL_{2}(C) or SO(1,3) the same
way that the circular edge of a Moebius band covers its central circle.
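You can verify the boost formula above directly: Q = diag(exp(p/2), exp(-p/2)) conjugates X into a matrix with t+z scaled by exp(p) and t-z scaled by exp(-p), which is precisely a boost with rapidity p along z. A small check (my own, in Python):

```python
import math

p = 0.7                                  # rapidity
t, x, y, z = 2.0, 0.3, -0.4, 1.0
# Q = diag(exp(p/2), exp(-p/2)) acting on X = [[t+z, x+iy], [x-iy, t-z]]
# by X -> QXQ* sends t+z -> exp(p)(t+z) and t-z -> exp(-p)(t-z):
plus = math.exp(p) * (t + z)
minus = math.exp(-p) * (t - z)
t2, z2 = (plus + minus) / 2, (plus - minus) / 2
# which is exactly the textbook boost along z with rapidity p:
print(abs(t2 - (t * math.cosh(p) + z * math.sinh(p))) < 1e-12)   # True
print(abs(z2 - (t * math.sinh(p) + z * math.cosh(p))) < 1e-12)   # True
```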

2. A Moebius transformation (also called a linear fractional transformation) of the Riemann sphere, that is, of C augmented by a "point at infinity", is given by w -> (aw+b)/(cw+d). Moreover, the bijection

+/-Q <---> M(Q) = M(-Q)

where M(Q) is the Moebius transformation

       a w + b
w ->  --------- ,      ad - bc = 1
       c w + d

and

Q = [ a  b ]
    [ c  d ],      ad - bc = 1

gives an isomorphism between PSL_{2}(C) and the Moebius group. So,
PSL_{2}(C), the Lorentz group, and the Moebius group are all isomorphic as
abstract groups.
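The heart of the isomorphism is that composing Moebius transformations corresponds to multiplying the matrices: M(Q1 Q2) = M(Q1) composed with M(Q2). A quick numeric confirmation (my own sketch):

```python
def moebius(q):
    # the Moebius transformation M(Q) as a Python function of w
    (a, b), (c, d) = q
    return lambda w: (a * w + b) / (c * w + d)

def mat2(p, q):
    # 2x2 complex matrix product
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

q1 = [[2, 1j], [0, 0.5]]                 # ad - bc = 1
q2 = [[1, 3], [0, 1]]                    # ad - bc = 1
w = 0.4 - 1.2j
lhs = moebius(mat2(q1, q2))(w)           # M(Q1 Q2)
rhs = moebius(q1)(moebius(q2)(w))        # M(Q1) o M(Q2)
print(abs(lhs - rhs) < 1e-12)            # True
```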

3. To understand how the appearance of the night sky (the apparent positions of stars
and galaxies on the celestial sphere) is altered by a Lorentz transformation, we must look
at null lines (one dimensional subspaces spanned by null vectors). Such null lines
represent the world lines of photons coming from distant stars and striking the viewer's
retina "here and now". Every such null line passes through a unique point on the
sphere t = -1, x^{2} + y^{2} + z^{2} = 1, an "equator" of the light
cone, and thus corresponds to a unique point on the celestial sphere, i.e. on the Riemann
sphere. With one exception (the null line corresponding to the point at infinity),
every null line can be represented by a unique hermitian matrix of the form

N = [ |w|^{2}   w ]
    [   w*      1 ]

where w is a complex number which can be associated with a unique location on the Riemann sphere by inverse stereographic projection. A simple computation then shows that X -> QXQ* carries N to a positive multiple of the matrix of the same form built from M(Q)(w); that is, it acts on the label w as the Moebius transformation

       a w + b
w ->  ---------
       c w + d

In other words, L(Q) transforms the celestial sphere (changes the appearance of the
night sky) in *exactly* the same way that M(Q) transforms the Riemann sphere.
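Here is that computation carried out numerically (my own sketch): QNQ* equals N applied to M(Q)(w), up to an overall positive factor, namely |cw + d|^2, which lands in the bottom-right entry.

```python
def N(w):
    # the hermitian matrix labelling the null line through w
    return [[abs(w) ** 2, w], [w.conjugate(), 1]]

def mat2(p, q):
    # 2x2 complex matrix product
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_t(q):
    return [[q[j][i].conjugate() for j in range(2)] for i in range(2)]

a, b, c, d = 2, 1j, 1, (1 + 1j) / 2        # ad - bc = 1
w = 0.5 + 0.5j
img = mat2(mat2([[a, b], [c, d]], N(w)), conj_t([[a, b], [c, d]]))
w2 = (a * w + b) / (c * w + d)             # M(Q) applied to w
scale = img[1][1].real                     # = |c w + d|^2
print(abs(img[0][1] / scale - w2) < 1e-12)            # True
print(abs(img[0][0] / scale - abs(w2) ** 2) < 1e-12)  # True
```

So, up to the irrelevant scale (a null vector and its positive multiples span the same null line), QNQ* is exactly N(M(Q)(w)).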

4. Lorentz transformations may be classified into four types according to their geometric effect on the night sky:

- (i) elliptic: two fixed points; stars appear to have moved along circles "about" the fixed points. If the fixed points are antipodal, we have a pure rotation.
- (ii) hyperbolic: two fixed points; stars appear to have moved along circles through the fixed points. If the fixed points are antipodal, we have a pure boost in the direction of the "attracting" fixed point.
- (iii) loxodromic: two fixed points; stars appear to have moved along loxodromic spirals from one fixed point to the other. If the fixed points are antipodal, we have a pure boost in the direction of the attracting fixed point, composed with a commuting rotation about the same axis.
- (iv) parabolic: one fixed point; stars appear to have moved along circles through this fixed point. An example of a parabolic Lorentz transformation is given by translation in C (writing s for the real parameter):

    Q = exp s/2 [ 0  1 ]   =   [ 1  s/2 ]
                [ 0  0 ]       [ 0   1  ]

which corresponds via X -> QXQ* to

    t -> t + s(s t + 4x - s z)/8
    x -> x + s(t - z)/2
    y -> y
    z -> z + s(s t + 4x - s z)/8

Physically speaking, such a parabolic transformation may be realized by following a boost along, say, the x axis with a rotation about, say, the z axis: if we "mostly boost" we get a hyperbolic transformation; if we "mostly rotate" we get an elliptic transformation; and if we balance the amounts of boosting and rotation just right, we obtain the desired parabolic transformation.
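To check the parabolic example concretely, write s for the real parameter in Q (keeping it distinct from the time coordinate t); then working out X -> QXQ* gives t -> t + s(st + 4x - sz)/8, x -> x + s(t - z)/2, y -> y, and z picks up the same shift as t. A numeric verification (my own sketch):

```python
def herm(t, x, y, z):
    return [[t + z, x + 1j * y], [x - 1j * y, t - z]]

def mat2(p, q):
    # 2x2 complex matrix product
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_t(q):
    return [[q[j][i].conjugate() for j in range(2)] for i in range(2)]

s = 1.6                                  # parameter of the parabolic map
q = [[1, s / 2], [0, 1]]                 # exp of s/2 [[0,1],[0,0]]
t, x, y, z = 1.2, -0.7, 0.4, 2.0
m = mat2(mat2(q, herm(t, x, y, z)), conj_t(q))
t2 = (m[0][0] + m[1][1]).real / 2
x2, y2 = m[0][1].real, m[0][1].imag
z2 = (m[0][0] - m[1][1]).real / 2
shift = s * (s * t + 4 * x - s * z) / 8  # the common shift in t and z
print(abs(t2 - (t + shift)) < 1e-12)             # True
print(abs(x2 - (x + s * (t - z) / 2)) < 1e-12)   # True
print(abs(y2 - y) < 1e-12)                       # True
print(abs(z2 - (z + shift)) < 1e-12)             # True
```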

5. An excellent, very readable, quite elementary, and widely available reference for the Moebius group is Konrad Knopp, Elements of the Theory of Functions, Dover, 1952.