An Expanded Explanation and Implications Regarding a Changing c

Lambert Dolphin, October 18, 1995

 

Setterfield's General Rule for the Constants

Australian astronomer Barry Setterfield suggests that all "constants" which carry units of "per second" have been decreasing since the beginning of the universe. Constants with dimensions of "seconds" have been increasing inversely. This is borne out with some degree of statistical confidence by studying the available measurements of all the constants over time. The case for a decreasing velocity of light is better established than for changes in any other constant, because more data over longer time periods are available for c.

Measurements on constants of physics which do not carry dimensions of time (seconds or 1/seconds, or powers thereof) are found to be truly fixed and invariant. The variability of one set of constants does not lead to an unstable universe, nor to readily observable happenings in the physical world. The principal consequence is a decreasing run rate for atomic clocks as compared to dynamical clocks. The latter clocks depend on gravity and Newton's Laws of Motion.
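As a rough illustration of this rule of thumb, here is a minimal Python sketch. The time exponents are taken from each constant's conventionally quoted units, as the article counts them; the predicted trends are Setterfield's hypothesis, not established physics, and the classification function is only illustrative.

# A rough sketch of Setterfield's rule of thumb: the predicted behavior
# of a "constant" follows the exponent of seconds in its quoted units.
def predicted_trend(time_exponent):
    if time_exponent < 0:
        return "decreasing (units contain 1/seconds)"
    if time_exponent > 0:
        return "increasing (units contain seconds)"
    return "truly fixed (no time dimension)"

examples = {
    "c, speed of light (km/sec)":          -1,
    "radioactive decay constants (1/sec)": -1,
    "h, Planck's constant (joule-sec)":     1,
    "Rydberg constant R (1/m)":             0,
    "fine structure constant (unitless)":   0,
}
for name, t in examples.items():
    print(name, "->", predicted_trend(t))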

In the first thorough statistical study in recent decades of all the available data on the velocity of light, presented in Barry Setterfield and Trevor Norman's 1987 report The Atomic Constants, Light, and Time, the authors also analyzed, in addition to values of c, measurements of the charge on the electron, e; the specific charge, e/mc; the Rydberg constant, R; the gyromagnetic ratio; the quantum Hall resistance, h/e^2; 2e/h and h/e; various radioactive decay constants; and Newton's gravitational constant G.

Three of these quantities Norman and Setterfield found to be truly fixed constants, namely e, R, and G. These constants are either independent of time or independent of atomic processes. The other five quantities, which are related to atomic phenomena and which involve time in their units of measurement, were found to show trends, with the exception of the quantum Hall resistance.

Montgomery and Dolphin re-analyzed these data, carefully excluding outliers. Their results differed from Norman and Setterfield's only for the Rydberg constant, where Montgomery and Dolphin obtained rejection of constancy at the 95% confidence level for the run test (but not the MSSD). The available measurements of radioactive decay constants, they found, lack the precision to be useful. Montgomery's latest work answers his critics and uses statistical weighting.

Norman and Setterfield also believe that photon energy, hf, remains constant over time even as c varies. This forces the value of hc to be constant, in agreement with astronomical observations; what is measured astronomically are light wavelengths, not frequencies. The consequence is that h must vary inversely with c, and therefore the trends in the constants containing h are restricted in their direction. The fine structure constant is invariant. An increasing value of h over time affects such things as the Heisenberg Uncertainty Principle.

Montgomery and Dolphin calculated the least-squares straight line for all the c-related constants and found no violation of this restriction. In all cases the trends in the "h constants" are in the appropriate direction. In addition, a least-squares line was plotted for c, the gyromagnetic ratio, e/mc, and h/e for the years 1945-80. The slopes remained statistically significant, and in the appropriate direction. Furthermore, the percentage rates of change varied by only one order of magnitude---very close, considering how small some of the data cells are. By contrast, the t-test results on the slopes of the other three constants (e, R, and G) were not statistically significant. See Is The Velocity of Light a Constant in Time?

To summarize: The Bohr magneton, the gas constant R(0), Avogadro's number N(0), the Zeeman displacement/gauss, the Schrodinger constant (fixed nucleus), Compton wavelengths, the fine structure constant, deBroglie wavelengths, the Faraday, and the volt (hf/2e) can all be shown to be c-independent. The gravitational constant G---more properly speaking, Gm---appears to be a fixed constant.

Maxwell's Equations

The velocity of electromagnetic waves has its present value of 299,792.458 km/sec only in vacuum. When light enters a denser medium, such as glass or water, its velocity drops immediately by a factor of one over the index of refraction (n) of the medium. For practical purposes, the index of refraction is equal to the square root of the dielectric constant of the medium---which is the real part of the dielectric permittivity of the medium. Materials other than vacuum are lossy, causing electromagnetic waves to undergo dispersion as well as a change in wavelength in the medium.

For example, the dielectric constant of water at radio wavelengths is about 81, so the velocity of radio waves in water is 299,792.458 / 9, or 33,310.273 km/sec. In the visible light band, n is about 1.33 for water, giving a velocity of 225,407.863 km/sec for visible light rays.
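These figures are easy to verify (a quick Python check, using the approximate indices quoted above):

# Velocity of light in a medium: v = c / n, with n = sqrt(dielectric constant).
import math

c = 299792.458                 # km/sec in vacuum
n_radio = math.sqrt(81)        # water at radio wavelengths -> n = 9
n_visible = 1.33               # water in the visible band

print(c / n_radio)             # 33310.273... km/sec
print(c / n_visible)           # 225407.863... km/sec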

Actually the velocity of light is a scaling constant, or metric, which appears in James Clerk Maxwell's equations for the propagation of electromagnetic waves in any medium. The velocity of light depends not only on the dielectric permittivity, e---designated e(0) in free space---but also on the magnetic permeability of the medium, m---designated m(0) for free space.

The propagation velocity for electromagnetic waves, c, is related to e and m according to the following equation,

1/c^2 = m(0) e(0)

c = 1/[m(0) e(0)]^(1/2)
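A quick numeric check of this relation, using the accepted free-space values:

# c = 1 / sqrt(m(0) e(0)), checked with the accepted free-space constants.
import math

mu0 = 4 * math.pi * 1e-7       # permeability of free space, henry/meter
eps0 = 8.854187817e-12         # permittivity of free space, farad/meter

print(1 / math.sqrt(mu0 * eps0))   # ~2.99792458e8 m/sec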

After discussing both options as to whether it was m or e that might be varying, Setterfield and Norman originally suggested that the permittivity of free space has not changed with time according to the best available measurements. It was probably the permeability which was changing---possibly inversely proportional to c squared. The permeability of space was apparently related in some way to the stretching out of free space at the time of creation (Genesis 1:6-8, Psalm 104:2). It might be possible, therefore, that when God stretched out the "firmament of the heavens"---on the second day of creation week---that the value of (m) had its lowest value and had since increased.

According to this earlier hypothesis, sometime after creation the heavens apparently "relaxed" from their initial stretched-out condition, much as one would let air out of a filled balloon. If the universe had its maximum diameter at the end of creation week and had since shrunk somewhat, then the Big Bang theory of an expanding universe is incorrect. The shrinkage of free space would then account for the observed slowing down of the velocity of light. The red-shift would not be a measure of actual radial velocities of the galaxies receding from one another, but instead would be due entirely to a decrease in the value of c since creation. An initial value of c some 11 million times greater than the present value of c was suggested.

William Sumner's recent paper (see abstract) proposes a cosmology in which permittivity rather than permeability is the variable. Glenn R. Morton discusses both possibilities and their consequences in his useful CRSQ paper, Changing Constants and the Cosmos. (Creation Research Society Quarterly, vol. 27 no. 2, September 1990)---available from Creation Research Society

More recently Barry Setterfield (private communication) has suggested that he now believes both e and m are varying. This follows from the fact that in the isotropic, non-dispersive medium of space, equal energy is carried by the electric and magnetic vector components of the electromagnetic wave, and the ratio E/H is invariant with any change in c. Therefore both e and m have been changing over time since creation. In the revised view, the apparent decrease in c since creation could be due to a step input of Zero Point Energy (ZPE) fed into the universe from "outside" as a function of time, beginning just after the heavens were stretched out to their maximum diameter on Day Two of creation. The diameter of the universe has been fixed (static) ever since, so one must look for another explanation of the red-shift than the old model of an expanding universe. This view does not rule out possible subsequent decreases in the ZPE input from the vacuum, which might be associated with such catastrophes in nature as the fall of the angels, the curse on the earth at the fall of man, and the flood of Noah. Such changes would result in the universe being more degenerative now than it was at the end of creation week.

Some additional published information by Setterfield is available by mail from Australia (Reference 1), but most of Setterfield's later work is awaiting final peer review for journal publication as of this writing.

From Maxwell's electromagnetic theory, we can also calculate what is known as the "impedance of free space" (commonly used in antenna design). The present value is 377 ohms, and the formula is,

Z = [m(0)/e(0)]^(1/2)

Z = E/H
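A numeric check of the free-space value, with a note on the claim (made below) that Z is unchanged by a varying c:

# Z = sqrt(m(0) / e(0)), the impedance of free space.
import math

mu0 = 4 * math.pi * 1e-7       # permeability of free space, henry/meter
eps0 = 8.854187817e-12         # permittivity of free space, farad/meter

print(math.sqrt(mu0 / eps0))   # ~376.73 ohms, commonly rounded to 377

# If both m and e scale together as 1/c (Setterfield's revised view),
# the common factor cancels in the ratio, so Z is unchanged by varying c.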

As noted above, the impedance of free space tells us how radio waves, or photons of light, travel through space. Z also gives the ratio of the electric field vector, E, to the magnetic field vector, H, in free space, and Z is invariant with changes in c. The refractive index, n, of any medium---whether empty space or another material---measures, for example, the ability of a glass lens to bend a beam of light. If c has been decreasing over the history of the universe, it follows that optical path lengths everywhere in the universe have been changing since creation. This result has a number of consequences for astronomy---the true size and age of the universe would be greatly affected, for instance. It has been argued that no change in light spectra from distant stars has ever been observed, and hence that c could not have changed. As will be seen below, what is measured in light spectra is always wavelength, not frequency; light wavelengths stay constant with varying c. Constants such as alpha, the fine structure constant, are likewise invariant if c changes.

The energy carried by a propagating electromagnetic wave is contained in both the oscillating magnetic field and the oscillating electric field. The total energy flux is known as Poynting's vector, S, which is equal to c times the cross product of the E and H vectors. Energy is conserved in propagating waves---at least, no one wishes to throw out so important a principle as a first approach.

Energy Conservation with Decreasing c

Assuming energy is conserved under conditions of decreasing c, the following must be true:

The energy of a photon can be calculated from Einstein's famous equation relating mass and energy. If we use this formula, it is easy to see that the photon has "apparent mass," as is often noted. Photon energy is also known to be equal to hf, where h is Planck's constant and f is the frequency of the emitted light. The energy of a photon can also be expressed in terms of wavelength, lambda, rather than frequency,

Energy, E = mc^2

E = hf = hc/lambda

If hc is a true constant while c varies, then h ~ 1/c.

If c is not a fixed constant, Planck's "constant" should vary with time, inversely proportional to c. (That this is so is borne out experimentally, with reasonable statistical confidence, by the data given in the Setterfield and Norman 1987 report and by Montgomery and Dolphin in their Galilean Electrodynamics paper.)
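A short numeric sketch of this constraint (present-day values; the h ~ 1/c scaling is Setterfield's hypothesis, not standard physics):

# Photon energy E = hc/lambda. If the product hc is a true constant,
# E for a given wavelength is unchanged even while c and h both vary.
import math

h = 6.626e-34          # joule-sec, present value
c = 2.99792458e8       # m/sec, present value
lam = 550e-9           # meters (green light)

E_now = h * c / lam
print(E_now)                          # ~3.6e-19 joules

k = 10                                # hypothetical epoch with c ten times higher
E_then = (h / k) * (c * k) / lam      # h ~ 1/c keeps the product hc fixed
print(math.isclose(E_then, E_now))    # True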

Energy Flux and the Red Shift

In their original theory Setterfield and Norman believed that the wavelength of radiation, at the time a radio-wave or light photon is emitted, is invariant for constant energy. However, once a radio-wave leaves the source, or a photon departs from its parent atom, energy and momentum are apparently both conserved. Also the product (hc) is a true constant which does not vary with time.

In their 1987 report, Setterfield and Norman show that the deBroglie wavelengths for moving particles and the Compton wavelength are c-independent. The energy of an orbiting electron, the fine structure constant, and the Rydberg constant are also shown to be c-independent and thus truly constant with time. The gyromagnetic ratio, g = e/(2mc), is found to vary inversely proportionally to c.

Setterfield and Norman originally claimed that the wavelength of light emitted from atoms (for instance, atoms on a distant star) was independent of any changes in c. However, the relative energy of the emitted light wave is inversely proportional to c, and if c decreases while the light wave is on its journey, its energy and its momentum must both be conserved in flight. The intensity of the light, related to the wave amplitude, increases proportionally to c; thus there should be proportionally less dimming of light from distant stars. In order for energy to be conserved in flight as c decays, the frequency of the light must decrease in proportion to c. The relaxation of free space, which causes the observed c-decay and the increasing optical path length, occurs everywhere in the universe at the same time.
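A minimal sketch of this bookkeeping (normalized values; the proportionalities are the model's assumptions, not established physics):

# In-flight bookkeeping in the model: photon energy E = h*f stays fixed
# while c decays. With h ~ 1/c, the frequency f must scale as c, and the
# wavelength lambda = c/f then stays constant---which is why spectra,
# which measure wavelength, would show no change.
h0, f0 = 1.0, 1.0          # normalized values at emission (c = 1 there)

for c in (1.0, 0.5, 0.1):  # c decaying while the photon is in flight
    h = h0 / c             # h ~ 1/c
    f = f0 * c             # f ~ c, keeping E = h*f fixed
    print(c, h * f, c / f) # energy h*f and wavelength c/f both stay 1.0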

A new explanation of the (quantized) red-shift involving a static (non-expanding) universe is the subject of a paper now in preparation by Barry Setterfield.

Setterfield's early attempts to explain the red shift as caused by the decrease in light velocity over time were not satisfactory. Several other researchers also tried to explain the red-shift as a Doppler-like effect. Setterfield revised his model in 1993 along the following lines:

Barry now assumes that energy flux from our sun or from distant stars is constant over time. (Energy flux is due to atomic processes and is the amount of energy radiated from the surface of a star per square centimeter per second). Setterfield also now proposes that when the velocity of light was (say) ten times higher than now, then 10 times as many photons per second (in dynamical time) were emitted from each square centimeter of surface. Each photon would however carry only one tenth as much energy, conserving the total energy flux. Setterfield says, "This approach has a dramatic effect. When light-speed c was 10 times higher, a star would emit 10 photons in one second compared with one now. This ten-photon stream then comprised part of a light beam of photons spaced 1/10th of a second apart. In transit, that light beam progressively slowed until it arrived at the earth with today's c value. This speed is only 1/10th of its original speed, so that the 10 photons arrive at one second intervals. The source appears to emit photons at today's rate of 1 per second. However, the photon's wavelength is red-shifted, since the energy per photon was lower when it was emitted."

Setterfield continues, "This red-shift of light from distant galaxies is a well-known astronomical effect. The further away a galaxy is from us, the further down into the red end of the rainbow spectrum is its light shifted. It has been assumed that this is like a Doppler effect: when a train blowing its whistle passes an observer on a station, the pitch of the whistle drops. Similarly light from galaxies was thought to be red-shifted because the galaxies were racing away from us. Instead, the total red-shift effect seems due to c variation alone."

"When this scenario is followed through in mathematical detail an amazing fact emerges. The light from distant objects is not only red-shifted: this red-shift goes in jumps, or is 'quantised' to use the exact terminology. For the last 10 years, William Tifft, an astronomer at (an) Arizona Observatory USA, has been pointing this out. His most recent paper on the matter gives red-shift quantum values from observation that are almost exactly (those) obtained from c-variation theory. Furthermore, a theoretical value can be derived for the Hubble constant, H. As a consequence, we now know from the red-shift how far away a galaxy was, and the value of c at the time the light was emitted. We can therefore find the value of c right out to the limits of the universe...Shortly after the origin of the universe, the red-shift of light from distant astronomical objects was about 11 million times faster than now. At the time of the Creation of the Universe, then, this high value of c meant the atomic clock ticked off 11 million years in one orbital year. This is why everything is so old when measured by the atomic clock." (Ref. 1)

Energy and Mass with a Non-Constant c

Setterfield's original reasoning concerning the relationship between energy and mass was somewhat as follows: The energy, E, associated with a mass, m, is E = mc^2, as stated earlier. This means that the mass of an object would seem to vary as 1/c^2. At first this seems preposterous. However, Setterfield noted that "m" in the above equation is the atomic (or rest) mass of a particle, not the mass of the particle as weighed on a gravity-type scale.

The factor for converting mass from atomic mass to dynamical mass is precisely c squared. As c decreases no change in the mass of objects is observed in our ordinary experience because we observe the gravitational and inertial properties of mass in dynamical, not atomic time. To better understand the difference between atomic rest mass, and mass weighed in the world of our daily experience, consider Newton's Law of Gravity.

As far as gravity is concerned, the gravitational force, F, between objects of mass m and M is given by Newton's formula,

F = GMm/r^2

where G is the universal gravitational constant and r is the distance between the objects. Space has built-in gravitational properties similar to its electrical properties mentioned above. This gives rise to the so-called "Schwarzschild metric for free space," which also is related to the stretched-outness of free space. In this way of viewing things, macroscopic mass measured by gravity is atomic rest mass multiplied by the so-called gravitational permeability of free space, corresponding to the electromagnetic permeability in Maxwell's equations. (See Ref. 2)

Incidentally, the accepted value of G is 6.67259 x 10^-11, and the units are meters^3 kg^-1 sec^-2. As noted in the first paragraph above, the clue as to which constants are truly fixed and which vary is whether their units contain "seconds" or "1/seconds," or powers thereof. If Gm is invariant, then Setterfield's latest work implies that G itself varies inversely with c to the fourth power.
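For reference, Newton's formula above is a one-liner (standard values; nothing model-dependent here):

# Newton's law of gravity: F = G M m / r^2.
G = 6.67259e-11           # m^3 kg^-1 sec^-2 (the accepted value quoted above)

def gravitational_force(M, m, r):
    return G * M * m / r**2

# Earth-Moon attraction, roughly 2e20 newtons:
print(gravitational_force(5.97e24, 7.35e22, 3.84e8))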

More recently Setterfield has attempted to relate a decreasing velocity of light to astronomer William Tifft's discovery that red-shifted light from the galaxies appears to be quantized. Setterfield also notes (as does Hal Puthoff) that in classical atomic theory electrons circling the nucleus are accelerated particles and ought to radiate energy, but apparently they don't---according to the tacit assumptions of modern physics. Setterfield suggests that energy is actually being fed into every atom in the universe from the vacuum at precisely the rate electrons are dissipating this energy. The calculated total amount of this energy input is enormous, of the order of 1.071 x 10^117 kilowatts per square meter. (Some physicists have claimed that the latent energy resident in the vacuum is infinite, but Setterfield is content to be conservative, he says!) 10^117 is of course a very large number in any case. [For comparison: the total number of atoms in the universe is only ~10^66, the total number of particles in the universe is only ~10^80, the age of the universe is only about 10^17 seconds, and any event with a probability of less than 1 part in 10^50 is considered "absurd."]

After the initial creation of space, time, and matter, and the initial stretching out of the universe to its maximum (present) diameter, the above-mentioned energy input from the vacuum commenced as a step impulse and has continued at the same rate ever since [assuming no subsequent disruptions from "outside"]. This energy input has raised the energy density of the vacuum per unit volume over time, and means the creation and annihilation of more virtual particles as time moves forward. Photons are absorbed and re-radiated more frequently as this takes place; hence the velocity of light decreases with time. All this is another way of saying that the properties of the vacuum as measured by m and e have changed as a function of time since the creation event.

Furthermore, as the velocity of light drops with time, atoms continue for a season radiating photons of the same wavelengths, and then abruptly every energy level drops by one quantum number. According to Setterfield's estimates, the velocity of light must decrease by an incremental value of 331 km/sec for one quantum jump in the wavelength of photons radiated from atoms to occur. (There have been somewhere around 500,000 total quantum jumps since the universe began, he estimates.) The last jump occurred about 2800 B.C.

This, then, in brief provides a new explanation for the red-shift and the quantization of red-shifted light from the galaxies which has been documented by Wm. Tifft and others in recent years.

Setterfield now suggests that the product of G and m is a fixed constant, rather than G itself. When one attempts to measure G in the laboratory (this is now done with great precision), he claims, we actually measure Gm. In Setterfield's latest work, rest mass m varies inversely as c squared, except at the quantum jumps, when m is inversely proportional to c to the fourth power. In such a model energy is not conserved at the jumps, because more energy is being fed into the universe from the vacuum; energy conservation holds between the quantum jump intervals. Since Setterfield's latest work has not been published, the best source of related information is his last published report and video (Reference 1). Three charts from that report are accessible from this web page.

Setterfield's paper on this subject is in final journal review as of this writing. Overview of theory, Atomic Behaviour, Light and The Red-Shift.

Within the Atom

Consider the Bohr atom for purposes of illustration. The centrifugal force tending to carry the electron away from the nucleus is exactly balanced by the electrostatic (Coulomb) attraction between electron and nucleus.

F = e^2/[4 pi e(0) r^2] = mv^2/r

v = e^2/[2 e(0) n h]

hence v varies as c

v is the orbital velocity of the electron, e is its charge, h is Planck's constant, r is the orbital radius of the electron, F is the force, m is the rest mass of the electron, and n is the quantum number of the orbit.

From this simplistic approach, if c is decreasing with time, then Planck's constant is increasing, and orbital velocities were faster in the past---thus the "run rate" of the atomic clock was faster in the past. Of course Setterfield has worked out the mathematics for more sophisticated quantum mechanical models of the atom, and also shown that his conclusions do not conflict with either General or Special Relativity Theories.

If the above equation is solved for rest mass m, then m is proportional to Planck's constant squared. That makes m inversely proportional to c squared in the Setterfield model.
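A numeric sketch of the Bohr-model relation above (present-day SI values; the h ~ 1/c scaling in the final line is the model's assumption, not standard physics):

# Bohr model: orbital velocity v = e^2 / [2 e(0) n h].
e = 1.602176634e-19       # coulombs
eps0 = 8.854187817e-12    # farad/meter
h = 6.62607015e-34        # joule-sec
n = 1                     # ground state

v = e**2 / (2 * eps0 * n * h)
print(v)                  # ~2.19e6 m/sec for hydrogen's ground state

# With h ~ 1/c, a past epoch of higher c means a smaller h and a larger v:
# the atomic clock "ran faster" in the past.
print(e**2 / (2 * eps0 * n * (h / 10)))   # ten times today's v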

Additional notes from the 1987 Setterfield and Norman report: "For energy to be conserved in atomic orbits, the electron kinetic energy must be independent of c and obey the standard equation:

E(k) = mv^2 / 2 = Z e^2 / [8 pi e(0) a] = invariant with changes in c.

The term e^2 / e(0) is also c-independent, as are atomic and dynamical orbit radii. Thus, the atomic orbit radius, a, is invariant with changes in c.

However, for atomic particles the particle velocities, v, are proportional to c.

Now from Bohr's first postulate (the Bohr Model is used for simplicity throughout as it gives correct results to a first approximation) comes the relation,

mva = nh / 2 pi

where h is Planck's constant. Thus h varies as 1/c..."

"The expression for the energy of a given electron orbit, n, is,

E(n) = 2 pi^2 e^4 m / [h^2 n^2]

which is independent of c. With orbit energies unaffected by c decay, electron sharing between two atomic orbits results in the 'resonance energy' that forms the covalent bond being c independent. A similar argument also applies to the dative bond between coordinate covalent compounds. Since the electronic charge is taken as constant, the ionic or electrovalent bond strengths are not dependent on c.

Related to orbit energy is the Rydberg constant R.

R = 2 pi^2 e^4 m / [c h^3]

which is invariant with changes in c, as the mutually variable quantities cancel...

The Fine Structure constant, alpha, appears in combination with the Rydberg constant in defining some other quantities...

the fine structure constant, alpha = 2 pi e^2 / (hc), which is invariant with c." (End of excerpt from the 1987 Setterfield and Norman report)
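These cancellations are easy to verify symbolically (a Python/sympy sketch; the substitutions m ~ 1/c^2 and h ~ 1/c are the model's assumptions, not established physics):

import sympy as sp

c, e, n, m0, h0 = sp.symbols("c e n m0 h0", positive=True)
m = m0 / c**2      # rest mass ~ 1/c^2 in the model
h = h0 / c         # Planck's constant ~ 1/c in the model

E_n   = 2 * sp.pi**2 * e**4 * m / (h**2 * n**2)   # orbit energy
R     = 2 * sp.pi**2 * e**4 * m / (c * h**3)      # Rydberg constant
alpha = 2 * sp.pi * e**2 / (h * c)                # fine structure constant

for expr in (E_n, R, alpha):
    print(sp.simplify(expr))   # each result contains no c at all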

Are Radioactive Decay Rates Non-Constant?

In their 1987 essay, Setterfield and Norman suggested that radioactive decay processes were proportional to c. (There are various mechanisms for radioactive emission processes, the equations for each model all involve c or h in the same general fashion).

The following notes are also taken from the Setterfield and Norman 1987 report: "...the velocity, v, at which nucleons move in their orbitals seems to be proportional to c. As atomic radii are c-independent, and if the radius of the nucleus is r, then the alpha particle escape frequency lambda* (the decay constant), as defined by Gladstone and Von Buttlar, is given as,

lambda* = P v / r

where P is the probability of escape by the tunneling process. Since P is a function of energy, which from the above approach is c-independent, lambda* varies in proportion to c.

For beta decay processes, Von Buttlar defines the decay constant as,

lambda* = G f = m c^2 g^2 |M|^2 f / [pi^2 h]

where f is a function of the maximum energy of emission and atomic number Z, both c independent. M, the nuclear matrix element dependent upon energy, is unchanged by c, as is the constant g. Planck's constant is h, so for beta decay, lambda* varies in proportion to c. An alternative formulation by Burcham leads to the same result.

For electron capture, the relevant equation from Burcham is lambda* = K^2 |M|^2 f / [2 pi^2]

where f is here a function of the fine structure constant, the atomic number Z, and total energy, all c independent. M is as above. K2 is defined by Burcham as,

K^2 = g^2 m^2 c^4 / [h / (2 pi)]

With g independent of c, this results in K^2 proportional to c, so that for electron capture lambda* varies in proportion to c. This approach thus gives lambda* proportional to c for all radioactive decay [processes]...

The beta decay coupling constant, g, used above, also called the Fermi interaction constant, bears a value of 1.4 x 10^-49 erg cm^3. Conservation laws therefore require it to be invariant with changes in c. The weak coupling constant, g(w), is a dimensionless number that includes g. Wesson defines g(w) = {g m^2 c / [h / (2 pi)]^3}^2, where m is the pion mass...this equation also leaves g(w) invariant with changes in c. This is demonstrable in practice, since any variation in g(w) would result in a discrepancy between the radiometric ages for alpha and beta decay processes. That is not usually observed. The fact that g(w) is also dimensionless hinted that it should be independent of c, for reasons that become apparent shortly. Similar theoretical and experimental evidence also shows that the strong coupling constant, g(s), has been invariant over cosmic time. Indeed, the experimental limits that preclude variation in all three coupling constants also place comparable limits on any variation in e, or vice versa. The indication is, therefore, that they have remained constant on a universal timescale. The nuclear g-factor for the proton, g(p), also proves invariant from astrophysical observation. Generally, therefore, the dimensionless coupling constants may be taken as invariant with changing c." (End of excerpt)
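The claimed proportionality for the decay constant can be checked the same way (again a sketch under the model's m ~ 1/c^2 and h ~ 1/c assumptions):

import sympy as sp

c, g, M, f, m0, h0 = sp.symbols("c g M f m0 h0", positive=True)
m = m0 / c**2      # rest mass ~ 1/c^2
h = h0 / c         # Planck's constant ~ 1/c

# Beta decay constant from the excerpt: lambda* = m c^2 g^2 |M|^2 f / [pi^2 h]
lam = m * c**2 * g**2 * M**2 * f / (sp.pi**2 * h)
print(sp.simplify(lam / c))    # c-free, so lambda* is directly proportional to c

# Alpha decay, lambda* = P v / r, is proportional to c by inspection,
# since v ~ c while P and r are c-independent.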

Radioactive decay rates have been experimentally measured only in this century. The available data have been statistically examined by Trevor Norman and also by Alan Montgomery (both very competent statisticians), but without conclusive results because of the paucity of data.

Was the energy released by radioactive decay processes faster in the past when c was higher? Setterfield says, "...there is an elegant answer to this question. Light is an electromagnetic phenomenon whereby energy is transported. In this scenario, the fundamental entity is not the energy as such, but rather the rate of flow of that energy at its point of emission. What is proposed here for variable 'c' is that the amount of energy being emitted per unit time from each atom, and from all atomic processes, is invariant. In other words the energy flux is conserved in all circumstances with c variation. This solves our difficulty.

"Under these new conditions the radio-active decay rate is indeed proportional to 'c'. However, the amount of energy that flows per orbital second from the process is invariant with changes in 'c'. In other words, despite higher 'c' causing higher decay rates in the past, this was no more dangerous then than today's rates are, since the energy flux is the same. This occurs because each emitted photon has lower energy. As the reactions powering the sun and stars have a similar process, a potential problem there disappears as well.

"What is being proposed is essentially the same as the water in a pipe analogy. Assume that the pipe has a highly variable cross-section over its length. As a result, the stream of water moves with varying velocity down the pipe. But no matter how fast or slow the stream is moving, the same quantity of water flows per unit time through all cross-sections of the pipe. Similarly, the emitted energy flux from atomic processes is conserved for varying c values. Under these conditions, when the equations are reworked all of the previously mentioned terrestrial and astronomical observations are still upheld. Indeed, the synchronous variations of the same constants still occur." (From Ref. 1)

Atomic Time vs. Dynamical Time Scales

What is noticeably different in a universe where c is decreasing? Macroscopically, not very much, Setterfield and Norman have claimed. Gravity is not affected, nor are Newton's Laws of Motion, nor most processes of chemistry or physics. The stability of the universe in the usual cosmological equations is unaffected, although one or more very different cosmological scenarios for the history of the universe can be developed, as shown in the accompanying abstracts by Troitskii, Sumner, and Hsu and Hsu. Of course these new models differ from the currently prevailing Big Bang scenario in many significant ways.

Because the wavelength of light (not its frequency) is what is measured, we would not detect a changing c through measurements of absolute wavelengths of light from distant stars over time, or through changes in spectral line splitting, and so on.

The main effect of changing c concerns time scales measured inside the atom---on the atomic scale---as opposed to macroscopic events as measured outside the atom. Put another way, the run rate of the atomic clock would slow with respect to dynamical time (as measured by the motion of sun, moon, and stars).

Prof. of Biology Dean Kenyon of San Francisco State University has suggested (private communication) that if c were higher in the past, some biological processes could have been faster or more efficient then. Nerve impulses, for instance, are of course not completely electrical in nature, because of the ion-transfer processes at neuron synapses.
