An Acceleration Interval?



Resum.

Statistics and Relativity.

A coincidence of geometric mean velocities.

Mean expansion for the Interval.

The Equivalence principle in the Michelson-Morley experiment.

Mach principle as statistics by another name, etc.


Resum

I'd better mention again where my ideas come from. I once studied social science, including scientific method, which I saw could be used to determine voting method, one of those limited, precise problems that scientific method does well. When I was young, I also noticed, in several ways, that special relativity was akin to an electoral method. But not till I was already old, could I give a fairly consistent analogy.

Anyway, in the course of the demoralising attempts at comparisons, I decided that I would have to put special relativity (SR) in statistical terms. Because an election counts a distribution of choices. And I was thinking that SR could be considered as a distribution of the choices that all observers are in an equal position to make in measuring an event.

At that stage, I didnt see the deeper senses in which elections and special relativity are intrinsicly statistical procedures. As to SR, I noticed that the contraction factor has the same form as a geometric mean, and to cut a long story short, it turned out that the Lorentz transformations and the Interval can be reformulated as geometric means. In particular, the Interval can be understood as the commonly observed geometric mean.

The geometric mean is a sort of average, a typical or representative item of a range of items or values, that forms a geometric series. And it is also possible to measure deviations from that average. These dispersion formulas were applicable to the Lorentz transformations for time and space, and to the Interval. (They also apply to the dynamic as well as the kinematic versions of SR.)

The interesting result of all these very simple dispersion formulas was that they showed for situations of classical mechanics with low velocities, that there is no significant dispersion about the geometric mean. In other words, if your geometric mean has no apparent range of values to represent, it is not apparent that it is a geometric mean. And that is why classical mechanics appears to be about uniquely determined values. In reality, it is implicitly statistical, the statistics only becoming apparent for high energy physics.

SR postulates a constant maximum speed of light. A statistical interpretation has a new slant on this. A geometric mean is suitable for measuring a range of values subject to diminishing returns, such as when energising increases in the velocity of an object make it more massive. But light has no rest mass, never being at rest, and is not subject to such diminishing returns. So, you would not consider an average speed of light from the point of view of a geometric mean but of the more familiar average known as the arithmetic mean.

And the arithmetic mean has the property, which the geometric mean does not have, that velocity deviations equally above and below light speed exactly cancel. In other words, light speed considered as an arithmetic mean is a constant speed. So, this statistical interpretation of light speed gives the same result as SR but as an average speed, without necessarily postulating that light always moves at the same rate.

This statistical interpretation is consistent with Quantum Electro-Dynamics. Feynman, in popular lectures on QED, says at extremely short ranges, light can move at slower or faster rates but that these very soon average out to the generally observed constant speed.

If SR can be explicitly reformulated as a statistical theory, then so might general relativity in a comparable way. Once again, relative observations might translate into statistical ranges of observations. And, at a first guess, classical gravity would involve the reduction to a mean without a significant range, which therefore was not apparent as a statistical average.


Statistics and Relativity.


In the nineteenth century, Laplace produced his treatise to estimate ranges of error in astronomy and so get nearer to some definitive measurement. Likewise, the Gaussian curve is also called the error curve. Clerk-Maxwell statistical theory, of molecular motion in the mass, was designed as a large scale approximation of molecular collisions, assumed, on their microscopic scale, to obey Newtonian laws. Einstein belonged to this tradition of assuming these laws were a definitive kind of law, tho in need of revision. Whereas statistical laws were supposed to be second-best approximations. Hence, in his famous debates with Niels Bohr, he challenged (unsuccessfully) the claim that statistics gives a complete explanation of quantum mechanics.

Two of Einsteins three famous papers, of 1905, are avowedly statistical: the study of Brownian motion, explained by random molecular motion, and the quantum statistics of the photo-electric effect. So, it is especially ironic that his third paper on Special Relativity also seems to have an underlying statistical explanation.

Previously, I have described the Fitzgerald-Lorentz contraction factor as a geometric mean. More than that, it is a geometric mean of a range, which, if averaged by the arithmetic mean, would make the speed of light appear constant.

The contraction factor is the square root of the following: one minus the ratio of relative velocity (between observers of a given event), v, squared, over light speed, c, squared. Or:

(1 - v^2/c^2)^1/2.

The contraction factor can be considered a geometric mean, because the terms in the brackets factorise to: (1 - v/c)(1 + v/c). These factors may represent a range of v/c below and above one. The contraction factor may be defined as the geometric mean velocity, a representative average of this range of velocities.

Suppose you take the arithmetic mean of the two factors of the contraction factor. That means adding (c-v)/c to (c+v)/c, which gives 2c/c, and dividing by two. The arithmetic mean is c/c. Essentially, the contraction factor involves a positive and negative range of velocity, v, about light velocity, c, such that their arithmetic mean is c, whatever the value of v. This makes a constant of c, even if it were a kind of average.
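The cancellation above can be sketched numerically; a minimal check with an illustrative set of velocities, not physical data:

```python
# Numeric sketch of the claim above: for any relative velocity v, the
# arithmetic mean of the two factors (c - v)/c and (c + v)/c is c/c = 1,
# whatever the value of v. The velocities sampled are illustrative only.
c = 299_792_458.0  # light speed in metres per second

def arithmetic_mean_of_factors(v):
    """Average the below-c and above-c factors of the contraction factor."""
    return ((c - v) / c + (c + v) / c) / 2

for v in (0.0, 0.1 * c, 0.5 * c, 0.9 * c):
    # The +v and -v deviations cancel exactly, up to rounding error.
    assert abs(arithmetic_mean_of_factors(v) - 1.0) < 1e-12
```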

The situation recalls that mentioned above, in QED, of variations in light speed, at extremely short ranges, that average out over distance to the perceived constant speed.

When the velocity of any normally massive object significantly approaches light speed, its path describes a geometric series, of diminishing returns of increased speed for increased energy of propulsion. That is represented by the contraction factor as a geometric mean which shows that the velocity can never quite reach that of light.

The contraction factor seems a geometric mean, with arithmetic mean implications, which appear in different degrees of combination, when the factor is expanded by the binomial theorem, as shown in later sections.

A coincidence of geometric mean velocities.


Galileo relativity principle derived from the experience of someone at rest on a moving boat relative to the bank. In the heyday of the train, we experienced this in the surprise we felt, when sitting in a railway carriage, at the start of a journey, that the station platform appeared to lurch away in the opposite direction to the way we were going.

The point of this odd feeling is that there is no way we can decide whether the train or the platform is the one that is really moving or really at rest. Neither situation is a privileged frame of reference.

The contraction factor was first introduced to explain the unexpected result of the Michelson-Morley experiment that light always measured at the same speed.
Consider two observers moving uniformly, at a significant fraction of light speed, in opposite directions, like passing trains. "Einstein proposed ...no measurement could determine which train was stationary and which was moving. That being the case, the equations of electricity and magnetism would have to appear the same on the two trains, and thus the speed of light must also be the same." (Robert Laughlin, "A Different Universe.")

The contraction factor weighted observers distance and time measurements of an event, to bring the Michelson-Morley calculation into line with evidence of light measuring the same speed. This factor was the first adjustment of classical mechanics to what was to become the theory of special relativity.

Previously, my "Statistical prediction of the Michelson-Morley experiment," gave a geometric mean time (and an analogous geometric mean mass). Following on from that approach, the geometric mean velocity is the square root of the product of light speed, c, minus a given velocity, v (if only of a supposed absolute motion of "the universal ether") and of light speed, c, plus velocity, v. That is, the geometric mean velocity is: {(c - v)(c + v)}^1/2 = (c^2 - v^2)^1/2.

This geometric mean velocity averaged the up and down "ether stream" journeys. In practise, Earth velocity was used supposedly as the "bank" moving relative to the "ether stream." The point was that the same geometric mean time was found for the cross-stream journeys as for the up and down stream journeys.

Similarly, the same geometric mean velocity can be found for the cross-stream journeys, as for the up and down stream journeys. The cross-stream geometric mean is calculated using Pythagoras theorem that the triangle hypotenuse squared equals the sum of the other two squared sides.
The Michelson-Morley experiment sends part of a light beam at right angles or cross-ways to the up and down stream direction. Mirrors send both beams back on their return journey.

With regard to the cross journey, a light beam, sent perpendicular to the flow of the stream, is imagined to drift, like a boat, with the flow. This drift across stream is in effect the hypotenuse of a triangle traversed with light speed, c. By the time the beam has reached the far "bank," it is no longer directly opposite its point of origin on the near "bank." The far bank has moved on by a distance covered thru earth velocity, v.

The third side of the triangle is the perpendicular distance across stream. The velocity with which the other two sides were covered can tell us, by Pythagoras theorem, the velocity with which the perpendicular crossing is covered. This is the square root of c squared minus v squared, or (c^2 - v^2)^1/2.

This perpendicular velocity is the same both ways back and forth. It is its own geometric mean velocity, as multiplying the same two values and taking their square root brings us back to where we started.
But this geometric mean velocity for the perpendicular crossing is the same as for the up and down stream crossing.
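This equality can be checked with illustrative figures (the Earth velocity here is of the order once assumed for the "ether stream"):

```python
# Sketch with illustrative figures: the geometric mean of the downstream
# velocity (c + v) and upstream velocity (c - v) equals the Pythagorean
# cross-stream velocity (c^2 - v^2)^1/2.
import math

c = 3.0e8  # light speed, rounded, m/s
v = 3.0e4  # of the order of Earth's orbital velocity, m/s

geometric_mean = math.sqrt((c - v) * (c + v))  # up and down stream average
cross_stream = math.sqrt(c**2 - v**2)          # perpendicular crossing velocity

assert math.isclose(geometric_mean, cross_stream, rel_tol=1e-12)
```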

The contraction factor is defined as the geometric mean velocity divided by the light velocity, c, which just turns the form into a ratio, and is sometimes a more convenient way to do any algebraic working:
{(c^2 - v^2)^1/2}/c = (1 - v^2/c^2)^1/2.

Mean expansion for the Interval.


The contraction factor, as a geometric mean, is only the most summary of geometric means. If special relativity is really amenable to a statistical treatment, then one might expect a fuller version of the geometric mean to apply. The contraction factor is in effect a geometric mean that works by multiplying the two end points of a range of values and taking their square root.

Tho I started out with the contraction factor as geometric mean, it turned out that a similar form, that can be found in the Minkowski Interval, is more central to my argument. And this should be borne in mind, as I shall shift from the contraction factor to it.
The Interval is in terms of a vector, of the three dimensions of space, subtracted from the so-called fourth dimension, in terms of light speed multiplied by observers clocks or time measures of an event.

A geometric mean may be taken not only of the extreme values but also of any number of inter-mediate values, on that one dimension. Tho this range of values is on only one dimension, each one of these range values may be a composite of the three spatial dimensions, that is to say, a three-vector.
Whatever the number of such values, they are all multiplied together. Then a root of the multiple gives the geometric mean. For two values multiplied, the root taken is the square root. For three items, the cubic root of the multiple gives the geometric mean. The number of the root is the number of values (on the range) multiplied.

The geometric mean of four values in a range multiplies them and takes the quartic root, or takes the multiple to the power of one quarter.

Instead of assuming a two-valued range of velocities, say, from (1 - v/c) to (1 + v/c), we assume a four-valued range of velocities.

Where f and g are new velocity terms, let the four-valued range of velocities from least to greatest be: (1 - f/c), (1 - g/c), (1 + g/c), (1 + f/c). Their geometric mean is their multiple to the power of one-quarter, or:

{(1 - f/c)(1 - g/c)(1 + g/c)(1 + f/c)}^1/4.

This relates to a form like the contraction factor (but can be more generally applied, as part of the Interval): (1 - r^2/c^2)^1/2.

Why might we do this?

Taking a geometric mean of more values may give a more detailed result. Nevertheless, when we only calculate with the two end values, we still hope the result is not far out. And in equating the simple end-valued geometric mean to the geometric mean of a fuller range of values, we may assume that the simpler calculation happens to be in accord with fuller calculations.

Hence, equation (1):

{(1 - f/c)(1 - g/c)(1 + g/c)(1 + f/c)}^1/4 = (1 - r^2/c^2)^1/2.

Raising both sides of the equation to the power of four, and multiplying out the paired factors on the left, we get (2):

(1 - f^2/c^2)(1 - g^2/c^2) = (1 - r^2/c^2)^2.

Therefore (3),

1 - (f^2 + g^2)/c^2 + f^2g^2/c^4 = 1 - 2r^2/c^2 + r^4/c^4.

Cancel the ones and multiply thru by c^2. Then (4):

f^2g^2/c^2 - f^2 - g^2 = r^4/c^2 - 2r^2.

Suppose r^4 = f^2g^2. Then, r^2 = fg, and r = (fg)^1/2.

Thus, for velocities significantly approaching light speed, the velocity, r, is the geometric mean velocity of a given observers velocities, considered as a velocity range from f to g.

For this to hold, then the square of the velocity, r, must also equal an arithmetic mean of the squares of the observers velocities, f and g.

That is:

r^2 = (f^2 + g^2)/2.
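The term-by-term matching can be sketched numerically, writing the squares explicitly, with arbitrary sample velocities:

```python
# Sketch of the term-by-term matching, with squares written explicitly and
# arbitrary sample velocities f, g (as fractions of light speed c = 1).
# Expanding (1 - f^2/c^2)(1 - g^2/c^2) = (1 - r^2/c^2)^2, the last terms
# agree when r^2 = fg, and the middle terms agree when r^2 = (f^2 + g^2)/2.
import math

c = 1.0
f, g = 0.6, 0.8

# Last terms: f^2 g^2 / c^4 versus r^4 / c^4, taking r^2 as the geometric mean fg.
r_sq_geo = f * g
assert math.isclose(f**2 * g**2 / c**4, r_sq_geo**2 / c**4)

# Middle terms: (f^2 + g^2)/c^2 versus 2 r^2 / c^2, taking r^2 as the
# arithmetic mean of the squared velocities.
r_sq_ari = (f**2 + g**2) / 2
assert math.isclose((f**2 + g**2) / c**2, 2 * r_sq_ari / c**2)
```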

An average, that is a geometric mean, with an arithmetic mean element in it, expands into a distribution of terms that are themselves averages, including the geometric mean in one term and an element of arithmetic mean in another term. The contraction factor type geometric mean is an average of averages.
Expansions involving more than two velocities show a refinement of this basic feature.

Taking the geometric mean, of a range of three velocities, would involve three multiplied factors. Two new range values, (1 - h/c) and (1 + h/c), would be multiplied by those on the left side of equation (1). This time the geometric mean of the six multiplied values would require a root to the power of one-sixth, for equation (5):

{(1 - f/c)(1 - g/c)(1 - h/c)(1 + h/c)(1 + g/c)(1 + f/c)}^1/6 = (1 - r^2/c^2)^1/2.

This simplifies to (6):

(1 - f^2/c^2)(1 - g^2/c^2)(1 - h^2/c^2) = (1 - r^2/c^2)^3.

Using the binomial theorem to expand both sides, (7):

1 - (f^2 + g^2 + h^2)/c^2 + (f^2g^2 + f^2h^2 + g^2h^2)/c^4 - f^2g^2h^2/c^6

= 1 - 3(r^2/c^2) + 3(r^4/c^4) - r^6/c^6.
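The expansion here is an algebraic identity, which a quick numeric check confirms, writing a, b, d for the squared velocity ratios and using arbitrary sample velocities:

```python
# Numeric confirmation that expanding the three-factor product into the
# four-term sum is an identity, with a, b, d standing for the squared
# velocity ratios f^2/c^2, g^2/c^2, h^2/c^2. Sample values are arbitrary.
import math

f, g, h, c = 0.3, 0.5, 0.7, 1.0
a, b, d = (f / c)**2, (g / c)**2, (h / c)**2

lhs = (1 - a) * (1 - b) * (1 - d)
rhs = 1 - (a + b + d) + (a*b + a*d + b*d) - a*b*d
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```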

As for the example of two velocities, for the three velocities, we assume the terms, on each side of the equation, correspond. Taking from the last term on each side, r^6 = f^2g^2h^2. So, r = (fgh)^1/3. The velocity, r, is the geometric mean of the three range velocities.

That assumes the other terms correspond. So, 3(r^2/c^2) = (f^2 + g^2 + h^2)/c^2. Here, the square of the velocity, r, is the arithmetic mean of the squares of the three range velocities.
It seems significant that the first term, on either side, namely one, is the arithmetic mean of the two factors that each observer contributes, such as (1 - f/c) and (1 + f/c), in the case of velocity measure f. Likewise, for the other two velocities g and h.

The term corresponding to 3(r^4/c^4), that is (f^2g^2 + f^2h^2 + g^2h^2)/c^4, combines arithmetic mean and geometric mean elements of averaging. Three terms added and divided by three constitutes an arithmetic mean: r^4 is their arithmetic mean. But those three terms are multiples. And the square root of a multiplied pair is their geometric mean.

Anyone looking at 3(r^2/c^2) = (f^2 + g^2 + h^2)/c^2 might think that it involves Pythagoras theorem in three dimensions of space, with f, g and h being velocities in the direction of x, y and z co-ordinates. And 3r^2 appears as the hypotenuse squared, moving in a sphere, like the radius vector outcome of a tug of war between the three velocities at right angles to each other.

(The term "geometric" mean as a kind of average of a "geometric" series is not to be confused with "geometric" as in the geometry of space that yields the likes of Pythagoras theorem, with its use of an arithmetic sum of squares.)

When I first wrote on-line about these ideas of the Interval as a geometric mean, nearly a decade before, I didnt appreciate what I was, in effect, doing at this point. By introducing a second observed velocity, into a geometric mean-interpreted Interval, I was describing the Interval undergoing a change in observed velocity, in other words, an "acceleration Interval."

This takes a decisive step away from special relativity, for relative velocities between observers, towards a new formulation for a general relativity of relating accelerated motion between observers with different frames of reference.

Thus, it would be possible to have two different observers, relating two different velocities. If one observer measures velocities, f and g, the other measures, say, velocities, f' and g' (distinguished by indices).

The two velocities would have their own times. For more than two velocities, some notation, like t#0, t#1, t#2, etc would have to be used, with corresponding times for the other observer: t'#0, t'#1, t'#2 etc. (Apologies for the clumsy notation.)

Then a simple (two-velocities) acceleration Interval would look like equation 8:

(I/c)^4 = (t#0)^2(1 - f^2/c^2).(t#1)^2(1 - g^2/c^2) = (t'#0)^2(1 - f'^2/c^2).(t'#1)^2(1 - g'^2/c^2).

And: (I/c)^4 = [T{1 - (r^2/c^2)}]^2 = [T'{1 - (r'^2/c^2)}]^2,

where (t#0)(t#1) = T and (t'#0)(t'#1) = T'.

Let us remind ourselves what the conventional (Special Relativity) Interval (of one velocity and time for each observer) looks like in this notation:

(I/c)^2 = (t#0)^2(1 - f^2/c^2) = (t'#0)^2(1 - f'^2/c^2).

This is identical to:

(I/c)^2 = T^2{1 - (r^2/c^2)} = T'^2{1 - (r'^2/c^2)}.

By putting times into equation 6, consider the Interval for three velocities and times per observer (equation 9):

(I/c)^6 = (t#0)^2(1 - f^2/c^2).(t#1)^2(1 - g^2/c^2).(t#2)^2(1 - h^2/c^2) = (t'#0)^2(1 - f'^2/c^2).(t'#1)^2(1 - g'^2/c^2).(t'#2)^2(1 - h'^2/c^2).

And: (I/c)^6 = [T^2{1 - (r^2/c^2)}]^3 = [T'^2{1 - (r'^2/c^2)}]^3,

where (t#0)(t#1)(t#2) = T^3 and (t'#0)(t'#1)(t'#2) = T'^3.

Consider the binomial expansion of equation 6, to equation 7, with the times inserted, to complete the Interval formula, for one observer (for brevity leaving out the other side of the Interval equation to a second observers related values with indexed times and velocities). Equation 10:

(I/c)^6 = (t#0)^2(t#1)^2(t#2)^2{1 - (f^2 + g^2 + h^2)/c^2 + (f^2g^2 + f^2h^2 + g^2h^2)/c^4 - f^2g^2h^2/c^6}

= T^6{1 - 3(r^2/c^2) + 3(r^4/c^4) - r^6/c^6}.

There is nothing special about using three velocities, f, g, h, to mark out a range of velocities. One could use any number. This means that change in velocity, or acceleration, could be measured to any degree of accuracy. So, this feature might be of practical value to the general theory of relativity for accelerated frames of reference.

In equation 10, it happens that the three velocities, f, g, h, which form a geometric series on a one-dimensional range that can be averaged by the geometric mean, also partly resemble velocities, say u, v, w, on the three dimensions of space. To approximate the conventional Interval, expressed in terms of u, v, w (rather than a three-vector, say, r), the two last terms of the expansion have to be small enough to be ignored.

Looking at the composite term from equation (10), (f^2g^2 + f^2h^2 + g^2h^2)/c^4, we note that if only one of the velocities, f, g, h, is significant compared to light speed, c, then none of the three sub-terms in the brackets can be more than of the order of 1/c^2. In that case, we assume that this whole term can be ignored.
A similar argument applies to the fourth term, f^2g^2h^2/c^6.
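The negligibility argument can be illustrated with sample figures, writing the squares explicitly, where only the velocity f is a significant fraction of light speed:

```python
# Illustration of the approximation: when only one of the three range
# velocities (here f) is a significant fraction of light speed, the third
# and fourth terms of the expansion are negligible next to the second.
c = 1.0
f, g, h = 0.5, 1e-4, 1e-4  # one relativistic velocity, two everyday ones

second = (f**2 + g**2 + h**2) / c**2
third = (f**2 * g**2 + f**2 * h**2 + g**2 * h**2) / c**4
fourth = (f**2 * g**2 * h**2) / c**6

# Each successive term is many orders of magnitude smaller.
assert third < 1e-7 * second
assert fourth < 1e-7 * third
```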

The remaining first two terms (in the curly brackets) resemble the conventional Interval. There are still the two times, t#1 and t#2, associated with the two velocity terms negligible under these conditions. It might be contrived that the two redundant times cancel the Interval term down from (I/c)^6 to the conventional (I/c)^2.

Thus, supposing (t#1)^2(t#2)^2 = (I/c)^4, then approximately:

(I/c)^2 = (t#0)^2{1 - (f^2 + g^2 + h^2)/c^2},

where the one-dimensional range velocities, f, g, h, are approximately equivalent to the three dimensions of velocities, u, v, w, in the conventional Interval.

Altho f, g, h, are, at intervals on a (geometric or non-linear) scale of one dimension, each term can itself be a three-vector resultant of velocities in three (linear) dimensions, like u, v, w.
Thus velocity f could have co-ordinates u#1, v#1, w#1.
Velocity g has co-ods. u#2, v#2, w#2.
Velocity h has co-ods. u#3, v#3, w#3.

This could be continued indefinitely. Say the next term in the geometric series scale is velocity j with 3-D co-ods. u#4, v#4, w#4.

An interesting feature of this statistical derivation of an acceleration Interval is that relativistic time appears in a probability distribution.
In thermo-dynamics, the direction of time, or times arrow, is recognised by the fact that if you saw a broken bottle re-assembling itself, you would know it was a film being played in reverse and not actually happening. The reversed film shows how the re-assembly might happen in principle. But in all probability, you may safely assume it never happened.

Classical physics only accounts for time being reversible in principle, without explaining the probability of time having one direction.

Also, an explicit statistical treatment of relativity should be more compatible with quantum theory.
It makes sense to consider the probabilities of different space-times occuring, as exemplified in a binomial or other statistical distribution of geometries. The break-down of continuous space-time, at the Planck scale, into a quantum foam of abruptly changing curvatures, or indeed massive astronomic distortions of the fabric of space-time, might so be represented as probabilities of different space-time curvatures occuring.

The famous equation, E = mc^2, is essentially a binomial expansion of the contraction factor, for two locally observed masses of a body, taken to the first two terms, as later terms are insignificant.
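That expansion can be sketched numerically: the first two binomial terms of the moving mass give the rest energy plus the classical kinetic energy, the later terms being insignificant at everyday speeds. The figures are illustrative only:

```python
# Sketch of the binomial expansion behind E = mc^2: the moving mass
# m0/(1 - v^2/c^2)^1/2 expands to m0(1 + v^2/2c^2 + ...), so mc^2 is
# approximately the rest energy m0 c^2 plus the classical kinetic energy
# m0 v^2/2, later terms being insignificant at low speed.
import math

c = 299_792_458.0  # light speed, m/s
m0 = 1.0           # rest mass, kg (illustrative)
v = 3.0e4          # a low velocity, m/s

exact = m0 / math.sqrt(1 - v**2 / c**2) * c**2   # full relativistic energy
approx = m0 * c**2 + 0.5 * m0 * v**2             # first two binomial terms

assert math.isclose(exact, approx, rel_tol=1e-12)
```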

Mass is proportional to time, respectively in the dynamic and kinematic versions of the formulas for special relativity. It may be that the conventional Interval is also an approximation of a statistical expansion, such as equation 10.

The binomial distribution here is a distribution of logicly possible combinations of variables. These form kinds of geometrical structure or "geometries," such as Euclid geometry. Also, another term added to the Euclidean term forms the Minkowski Interval, a further geometry. The distribution clusters about an average geometry, such that its structure is about half arithmetic mean and half geometric mean, considered as an average as well as a geometry.

Different geometries can be considered as different averages. And the binomial theorem, on which the expansion is based, can be considered as an average of averages, which is also a geometry of geometries. In other words, geometries may be considered as averages, which can be expanded from second order averages or second order geometries.

Special relativity postulates no privileged frames of reference within the context of Minkowski space-time. Minkowski geometry is itself to some extent a confining frame of reference to observers only in relative velocities to each other, excluding consideration of accelerated reference frames. Minkowski Interval is a privileged reference frame, in Special Relativity, which only works for the conditions, it exacts for observers, of uniform motion in a straight line of flat space-time.

The conventional Minkowski Interval may be an approximation of some statistical expansion, that gives both an Euclidean treatment for uniform motion and a non-Euclidean or geometric mean treatment for non-uniform or accelerated motion.

This treatment may also imply no privileged classes of geometry frames of reference, (whether of Euclid, Minkowski, or presumably others). Different geometries (different classes of reference frame) may appear as terms in a random distribution represented by, and expanded from, a geometry of geometries, as an average of averages.


Suppose the binomial distribution that derives the Interval in its first two terms has its conditions changed. Suppose we multiply equation (10), by c^2, for equation (11):

I^6/c^4 = (t#0)^2(t#1)^2(t#2)^2{c^2 - (f^2 + g^2 + h^2) + (f^2g^2 + f^2h^2 + g^2h^2)/c^2 - f^2g^2h^2/c^4}.

Suppose also that (f^2 + g^2 + h^2) is as close as we like to c^2. This still allows a great deal of possible variation in the three velocities individually. But it effectively cancels out the first two terms, eliminating the Interval, as we know it. This still leaves the subsequent two terms, which might be considered as an Interval, whose more or less geometric mean terms measure more or less curved space-times.

This final term, in f^2g^2h^2, is a function of a geometric mean. This is an average of a series associated with curvature, the graphic description in geometry of acceleration or deceleration, characteristic of a geometric series. General Relativity, dealing with observers in accelerated reference frames, uses the geometry of curvature.
The third term in the distribution, before the geometric mean term, is the next closest to a geometric mean, but is not merely a multiple of all the velocities. Different velocities are partly multiplied and partly added.
So, the binomial distribution of terms progresses from arithmetic mean to geometric mean, with more or less one or the other mixed in, as the series progresses. When the binomial theorem is expanded, with more than three velocities, further inter-mediate terms are introduced, suggesting a more and more refined distinction between the original contrast of a conventional Interval of flat space-time and an acceleration Interval of curved space-time.

This statistical approach to an acceleration Interval might have the advantage of applying finite mathematics to problems in General Relativity, whose laws break down when its equations produce infinities, encountered as singularities in its solutions, such as at the origin of the Big Bang and at the destination of Black Holes.

Instead of a singularity, an infinitely dense point of zero spatial dimensions, statistics may come up with alternative, discrete formulations.

Classical mechanics is not the master science, it was once thought to be, not even with regard to the progress of mechanics itself. But it does seem a bit surprising that special relativity does not follow classical mechanics, in that the concept of acceleration follows in a straight-forward manner from space, time and velocity.

Special relativity deals in space, time and velocity, but acceleration seems to come out of no-where, based on a completely new theory, the general theory of relativity, on the basis of a principle of equivalence.
The next section attempts to relate this founding principle of general relativity to special relativity.

The Equivalence principle in the Michelson-Morley experiment.


The Michelson and Morley calculation didnt agree with their experimental result of equal times for the split light beams on the two journeys. This was despite certain "ether wind resistance" considerations with respect to a back and forth journey directly into the wind and a back and forth journey across the wind.

From Einsteins point of view, there was no universal ether wind but just two differing local times for the two observations of the split light beam at right angles to itself. In terms of the Lorentz transformations, that worked out as meaning that the observation of the cross-wind beam has an observers velocity of zero (say u = 0). Whereas the observation of the light beam with a head and tail (ether) wind has an observers velocity equal to the relative velocity between the two observations ( say u' = v).

In terms of Galileo relativity principle, the relative motion of two observers velocities is straight-forward addition or subtraction. That wont do for the Michelson-Morley experiment, even when one of the observers measures zero velocity, and the other observers velocity is the same as the relative velocities between the two observers.

The point may be that the Michelson and Morley calculation was based on Galileo relativity principle when it should have been based on Galileo force principle, which became Newton second law of motion. Light is a force.

Einsteins photo-electric effect demonstrates this. Light of shorter wave-lengths has more energy and acts like faster bullets which are powerful enough to knock electrons out of the surface of a metal. Only a few of them are enough to do this trick, whereas it doesnt matter how much of the lower energy light is played on the metal, its photons dont have the energy to dislodge the electrons.

Force is mass multiplied by acceleration, or more generally force is change of momentum over (change of) time. Change of momentum leaves the door open to special relativity recognising a possible change of mass as well as change of velocity. Einstein and Infeld, in The Evolution of Physics, say that Galileos great innovation, that opened the door to modern physics, was to recognise that force is not a function of motion but of change in motion, or acceleration.

When air-craft approach the speed of sound, they encounter a sound barrier. When bodies approach the speed of light they encounter a light barrier. This becomes an ever increasing deterrent force. Therefore, when motion significantly approaches light speed, it should be measured not in terms of Galilean relative motion, by adding (or from the other point of view, subtracting) observers velocities with respect to each other, but in terms of Galilean law of force, in this case light force, which is measured not in terms of velocity but change in velocity, which is acceleration or deceleration.

For a steadily increasing input of energy into a bodys motion approaching light speed, there is a deceleration in the extra velocity the body achieves. The energy input increases the bodys mass, which in theory would have to become infinite for the body to reach light speed.

The Michelson-Morley calculation takes an average time for a light beams back and forth journey but it takes the wrong average, an arithmetic mean, which deals in terms appropriate to the adding and subtracting of velocities. The average should be the geometric mean dealing with change in velocity, in this case deceleration with respect to a body significantly approaching light speed.
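The two averages can be compared with illustrative figures (an arm length of the order used in the experiment): the geometric mean of the up and down stream times equals the cross-stream time, while the arithmetic mean is slightly greater.

```python
# Comparison, with illustrative figures, of the two averages of the one-way
# times L/(c - v) and L/(c + v): their geometric mean equals the cross-stream
# time L/(c^2 - v^2)^1/2, while their arithmetic mean is slightly greater.
import math

c = 3.0e8  # light speed, rounded, m/s
v = 3.0e4  # supposed "ether stream" (Earth) velocity, m/s
L = 11.0   # arm length in metres, of the order used in the experiment

t_up, t_down = L / (c - v), L / (c + v)
geo_mean = math.sqrt(t_up * t_down)
ari_mean = (t_up + t_down) / 2
t_cross = L / math.sqrt(c**2 - v**2)

assert math.isclose(geo_mean, t_cross, rel_tol=1e-12)
assert ari_mean > geo_mean
```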

If the Michelson-Morley averaging calculation is based on Galileos acceleration principle of force, the geometric mean applies, instead of the arithmetic mean of Galileos velocity principle of relative motion.
In that case, we can apply Einsteins principle of equivalence to the experiment.

Using his famous thought experiment of the accelerating lift in outer space, the man in the lift feels his feet hit the floor as if rooted there under the force of gravity. A chain to the roof is really accelerating him away. A beam of light comes thru the window. To an outside observer, it is going in a straight line. To the man in the lift, he theoreticly sees the light beam slightly dip from its entrance. He assumes it too feels the force of gravity bending it down. But to the out-sider, it's just a case of the lift accelerating so fast that it is leaving the entering light beam behind, rather as it would leave the man in the lift further behind if he were not caught by the floor.

Hence, Einstein derived his principle of the equivalence of acceleration and gravity and predicted that light would bend under strong gravitational attraction. We may predict an analogous situation with regard to the Michelson-Morley experiments light beam, split at right angles. The cross-wind or cross-stream light beam observation, for zero velocity, may be compared to the outside observer of Einsteins lift. The cross-stream journey was calculated by Michelson and Morley to take the same time each way. They used the arithmetic mean to average the time taken both ways.

In fact, the same result is obtained by using the geometric mean as an average, precisely because both trips do take the same time, and their average must be equal to both times.

The inside observer sees the light bend, because the lift, which is his reference frame, has lifted somewhat before the beam reaches the other side of the lift. The lifts acceleration effect, of appearing to curve the light beam, may be compared to the acceleration implicit in a geometric series of velocities, made apparent in the geometric averaging of the Michelson-Morley experiments up and down stream journeys.

For the prediction to conform to the experimental finding that the light beams times are equal, the geometric mean is the average that must be taken of the up and down stream journeys. Thus, this beam should appear curved, because acceleration is equivalent to gravity in its effect on light, which has mass in motion and is therefore subject to gravitational attraction. And the reference frame of the man inside the lift is comparable to the reference frame of the observation of the up and down stream (or head and tail wind) light beam journeys.

Suppose, when the bending light beam reaches the other side of Einsteins lift, that it (or part of it) is reflected and the acceleration instantly reversed, as if the lift was pulled by an oscillating spring, so that the light beam repeats its path in the other direction, forming a loop. Thus, reflecting the beam, for a head and tail wind journey, implies a reverse gravitational effect, in the Michelson-Morley experiment.

The Einstein thought experiment may differ from the M-M experiment but it seems reasonable to compare Einsteins lift with the head-wind part of the M-M experiments head and tail wind journey. Einsteins lift in reverse would compare with the tail wind part of the head and tail wind journey. The outside observer, to Einsteins lift in reverse, sees the same straight line of light passing thru the lift only this time in the opposite direction. That is like the cross-stream beam in the M-M experiment.

The comparison, between Michelson-Morley experiment and Einstein thought experiment, is striking if perhaps not exact. Consider a possible difference involved between transverse and longitudinal waves. Take a string tied to a post or wall and hold the other end. Jerking the string end up and down will produce up and down waves along the string. These are called transverse waves. The more energy put into the vertical shaking, the more wave vibrations formed along the string. Since the string is fixed at both ends, the vibrations tend to form a definite number of up and down vibrating loops (refered to as harmonics). Just one loop is called the fundamental. This is like the reversed lift scenario, where the observer under-goes a single up and down oscillation with regard to the light beam.

Longitudinal waves are produced when the holder of the string pushes and pulls the string along the line of the string. This looks more like the Michelson-Morley situation of the head and tail wind journey, when the wind pushes you back or forward. (I avoid the expression, "up and down" stream, here to avoid any suggestion of vertical action.)

The more energeticly oscillatory the situations, such as those in Einsteins lift or Michelson-Morley experiment, the more oscillations to the gravitational wave.

More complicated oscillations presumably could result in further standing waves in the harmonic series. But to the observer in the lift, these light beam oscillations imply gravitational waves rather than reverse accelerations. He might assume he had been bobbed up and down on a ripple in the fabric of space-time caused by a super-nova explosion. To the outside observer, tho, this is not a gravitational ripple but just the lift being yanked up and down, with the light passing back and forth in a straight line.

I've talked about light beams perhaps from reading the old popular accounts of physics. I am not well up on these matters but lasers have long since taken over as the light beams of choice in the M-M experiment. White light is light mixed at different wavelengths. Sound, at all different wavelengths, is noise. A laser coheres light into a beam all of one wavelength and phase.

Einsteins lift was only a theoretical argument. His prediction of its consequences was rather on an astronomical scale, namely that light from a star passing close by the sun would be bent in passing by gravitational attraction.
If there is a "loop" formed in the head and tail wind journey of the M-M experiment, I guess this might be marked by some decoherence in the sustained bounce of a laser beam between head and tail wind journeys, provided, of course, that the scale or definition were sufficiently great for such a minute effect to be measurable.

With current instrumentation, gravitational waves have not been directly detected. A supernova in the local heavens (not too local we hope) is one prospect for detectable gravitational waves.

[Post-script, june 2015.
I was evidently not familiar with the LISA project when I wrote this section in 2006.]


Mach principle as statistics by another name, etc.

To top.

In any case, there seems to be no reason why General Relativity should not have a statistical under-pinning. Special Relativity equates observers measures in uniform motion, thru their respective linear co-ordinate systems. General Relativity equates observers measures in accelerated motion, thru their respective curvilinear co-ordinate systems. (Albert Einstein discovered this extension from SR to GR in consequence of his "principle of equivalence".) If statistics can derive SR in those geometry terms, statistics should also be able to derive GR in its more sophisticated geometry terms - if with more difficulty!

We might take a glance at the nature of dimensions. In statistics, the normal distribution is a continuous, bell-shaped curve version of the binomial distribution. These curves may be standardised for purposes of exact comparisons between them. And one of the standards used is to take the central position under the norm or highest point of the "bell," for any such curve, as zero. By convention, points to the right of zero increase thru +1, +2, +3 etc. Points to the left decrease thru -1, -2, -3 etc. In fact you have a range or dimension that measures indefinite increases or decreases. Zero is the implicit average of this dimension.
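This standardisation is the familiar z-score. A minimal Python sketch, on a made-up sample, shows each value re-expressed as standard units right (+) or left (-) of the mean, with zero as the implicit average:

```python
import statistics

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative values

mean = statistics.mean(sample)    # the norm, taken as zero
sd = statistics.pstdev(sample)    # one unit on the standard scale

# z-score: (value - mean) / standard deviation.
z_scores = [(x - mean) / sd for x in sample]

print(mean, sd)
print([round(z, 2) for z in z_scores])
# The z-scores themselves sum to zero: zero is the implicit
# average of the standardised dimension.
```

Here the sample mean is 5 and the standard deviation 2, so the value 9 standardises to +2 and the value 2 to -1.5.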

Then why not consider all dimensions as potential ranges about an average? Riemann geometry was adapted to GR to measure curved space. There are spaces of positive or negative curvature, which might be considered statisticly as ranges of curvature about an average curvature.

Special relativity is the geometry of a space-time of zero curvature. This four-dimensional flatness, on average, characterises the physics of the observed universe. One might also say that mathematicly, SR is about the statistics of average space-time curvature. And, perhaps, GR is about the statistics of deviations, from that average, to positive and negative ranges of space-time curvature.

Euclid geometry, generating Pythagoras theorem and, by extension, the Interval, was statisticly derived from an expansion of the geometric mean, as terms in a series. Einsteins Relativity assumed that there is no privileged co-ordinate frame of reference for the observation of physical laws. This is to make each frame a random choice.

It is enough to assume that observers frames of reference are random in relation to each other. Relativity of observation frames becomes Randomness of observation frames. And their relation to each other is expressed as an over-all, or representative, average of their random sample of observation.
More than that, statistical derivation of relativity, such as in the Interval or an acceleration Interval, has the benefit that no absolute framework of space and time, and no one and only geometry, such as Euclid, from classical physics, need be assumed.

A suitable statistics for the geometry of general relativity holds out the hope of compatibility with quantum theory. Under extremely small measurements, far below experimental scrutiny, at the Planck scale, space and time are likened to a sea that breaks up into a "quantum foam." According to physicists, space and time cease to be the basic concepts they are in classical mechanics and relativity.

Statistics does not have to assume a range of values in space and time, tho, of course, it can do. Statistics merely averages any range of values, tho these may derive geometry-like terms in a series of averages.

The Mach principle related the motions of bodies to the motions of all other bodies in the universe, rather than put them in some arbitrary context of any imposable space and time frame-work. Barbour and Bertotti derived Newtonian laws from Mach principle, which they also found was implicit in General Relativity.

Their process of "best matching" of bodies from one configuration to another may be likened to a statistical idea of finding the ranges of bodies motions by putting successive motions of bodies in a series. There are "goodness of fit" tests in statistics, such as the chi-squared test, that show how well actual ranges of values match theoreticly expected values. In principle, this class of tests might be used to best match configuration changes. At any rate, Barbour says: "it is very convenient to measure how far each body moves by making a comparison with a certain average of all the bodies in the universe."
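As an illustration of that class of test (not of Barbour and Bertotti's actual procedure), here is a minimal chi-squared goodness-of-fit sketch in Python, with made-up observed and expected counts:

```python
# Chi-squared statistic: sum over categories of (observed - expected)^2 / expected.
# Small values mean the actual range of values matches the
# theoretically expected values well.
observed = [18, 22, 20, 20]   # made-up observed counts
expected = [20, 20, 20, 20]   # theoretical prediction

chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_squared)
# A perfect match (observed == expected) would give chi-squared = 0.
```

Whether the resulting statistic counts as a good fit is then judged against the chi-squared distribution for the appropriate degrees of freedom.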

As the Barbour title, The End of Time, suggests, he is trying to replace time as a basic concept in classical as well as quantum mechanics. A note suggests what I take to be a use of topology to remove distance measurements, that define the traditional concept of space, in physical theory. This would be consistent with a statistical treatment that also does not need to assume space and time.

In so far as Barbour and Bertotti arrived at a statistical treatment of General Relativity, it has the merit of showing the necessity of that treatment, without their having pre-supposed that need with a program for reformulating GR in terms of statistics.

Mach principle appears to be a statistical program under another name. And it shows the kinship of science and democracy. It is scientific because it insists on reference only to observables. It does not impose any outside reference, which anyone can impose, resulting in a dead-lock of prejudices. Progress in knowledge depends on shedding unfounded assumptions for which there is no basis in agreement from common experience.

This is democratic progress because it depends on all points of view agreeing on the rules or laws, rather than an arbitrary rule, essentially a privileged anarchy, in imposing the conventions. Relativity is a democracy of observers all equally free to observe physics laws. Mach principle is a democracy of the observed physical phenomena, representatively measured, rather than externally measured.

Richard Lung.
27;28 September 2006, and 5 dec. '06.
Considerably revised 3 & 6 jan. 2007;
Much re-written june 2015.


To top.