
 

General Objections

 

Observational Evidence Weak
Response to Professor Skiff
Einstein's Equation

Why not build on established work?
The use of the SED approach
Waves or Frequency?

 

Observational Evidence Weak

Comment:  Observational evidence in favor of the proposition that the speed of light is time-variable is so weak as to be practically non-existent. I think the proposition can be safely ignored, simply on the grounds of lack of evidence. The interpretation of quantization as a fundamental problem for standard cosmology is also in error. Tifft's argument is that galactic redshifts have a superimposed periodicity. If the redshift is caused by a Doppler effect, and if the matter distribution is periodic, then the redshifts will be periodic too, just as Tifft argues that they are (although Tifft's results are not all that strong either). So, even if we accept the "quantization" (poor semantics; "periodicity" is better and more descriptive of what is actually observed), it works just fine in a modified Big Bang cosmology (one has to find a way to construct a periodic mass distribution on large scales, which should not be an onerous task).

Setterfield:  A lot of cosmologists and science journal editors did not think the evidence could be dismissed so easily. Neither did the editors who commissioned major articles on the topic.

There are in fact periodicities as well as redshift quantisation effects. The periodicities are genuine galaxy-distribution effects. However, they all involve high redshift differences, such as repeats at z = 0.0125 and z = 0.0565. The latter value involves 6,200 quantum jumps of Tifft's basic value and reflects the large-scale structuring of the cosmos at around 850 million light-years. The smaller value is around 190 million light-years, which is the approximate distance between superclusters.

The point is that Tifft's basic quantum states still occur within these large-scale structures and have nothing to do with the size of galaxies or the distances between them. The lowest observed redshift quantisation that can reasonably be attributed to an average distance between galaxies is the interval of 37.6 km/s that Guthrie and Napier picked up in our local supercluster. This comprises a block of 13 or 14 quantum jumps and a distance of around 1.85 million light-years. It serves to show that basic quantum states below the interval of 13 quantum jumps have nothing to do with galaxy size or distribution. Finally, Tifft has noted that there are redshift quantum jumps within individual galaxies. This indicates that the effect has nothing to do with clustering. (November 16, 1999.)

 

Response to Professor Skiff

In Douglas Kelly's book, Creation and Change: Genesis 1.1 - 2.4 in the light of changing scientific paradigms (1997, Christian Focus Publications, Great Britain) a changing speed of light is discussed in terms of Genesis. Endeavoring to present both sides of the variable light speed argument, he asked for a comment from Professor Frederick N. Skiff. Professor Skiff responded with a private letter which Kelly published on pp. 153 and 154 of his book. The letter is quoted below and, after that, Barry Setterfield responds.

 From Professor Frederick N. Skiff:

 I see that Setterfield does indeed propose that Planck's constant is also changing. Therefore, the fine structure constant 'a' could remain truly constant and the electron velocity in the atom could then change in a fashion proportional to the speed of light. His hypothesis is plausible.

 My concern was that if you say 1) The speed of light is changing. And 2) The electron velocity in the atom is proportional to the speed of light, then you will generate an immediate objection from a physicist unless you add 3) Planck's constant is also changing in such a way as to keep the fine structure 'constant' constant.

 The last statement is not a small addition. It indicates that his proposal involves a certain relation between the quantum theory (in the atom) and relativity theory (concerning the speed of light). The relation between these theories, in describing gravity, space and time, is recognized as one of the most important outstanding problems in physics. At present these theories cannot be fully reconciled, despite their many successes in describing a wide range of phenomena. Thus, in a way, his proposal enters new territory rather than challenging current theory. Actually, the idea has been around for more than a decade, but it has not been pursued for lack of proof. My concerns are the following:

The measurements exist over a relatively short period of time. Over this period of time the speed changes by only a small amount. No matter how good the fit to the data is over the last few decades, it is very speculative to extrapolate such a curve over thousands of years unless there are other (stronger) arguments that suggest that he really has the right curve. The fact is that there are an infinite number of mathematical curves which fit the data perfectly (he does not seem to realize this in his article). On the other hand, we should doubt any theory which fits the data perfectly because we know that the data contain various kinds of errors (which have been estimated). Therefore the range of potential curves is even larger, because the data contain errors. There is clearly some kind of systematic effect, but not one that can be extrapolated with much confidence. The fact that his model is consistent with a biblical chronology is very interesting, but not conclusive (there are an infinite number of curves that would also agree with this chronology). The fact that he does propose a relatively well-known, and simple, trigonometric function is also curious, but not conclusive.

 The theoretical derivation that he gives for the variation of the speed of light contains a number of fundamental errors. He speaks of Planck's constant as the quantum unit of energy, but it is the quantum unit of angular momentum. In his use of the conversion constant b he seems to implicitly infer that the 'basic' photon has a frequency of 1 Hz, but there is no warrant for doing this. His use of the power density in an electromagnetic wave as a way of calculating the rate of change of the speed of light is also problematic: such a rate of change will not normally come out of a dynamical equation which assumes that the speed of light is a constant (Maxwell's equations). If there is validity in his model, I don't believe that it will come from the theory that he gives. Unfortunately, the problem is much more complicated, because the creation is very rich in phenomena and delicate in structure.

Nevertheless, such an idea begs for an experimental test. The problem is that the predicted changes seem to be always smaller than what can be resolved. I share some of the concerns of the second respondent in the Pascal Notebook article.* One would not expect the rate of change of the speed of light to be related to the current state-of-the-art measurement (the graph on page 4 of the Pascal Notebook**) unless the effect is due to bias. Effects that are 'only there when you are not looking' can happen in certain contexts in quantum theory, but you would not expect them in such a measurement as the speed of light.

 These are my concerns. I think that it is very important to explore alternative ideas. The community which is interested in looking at theories outside of the ideological mainstream is small and has a difficult life. No one scientist is likely to work out a new theory from scratch. It needs to be a community effort, I think.

 Notes:

* A reference to "Decrease in the Velocity of Light: Its Meaning For Physics" in The Pascal Centre Notebook, Vol. 1, No. 1, July 1990. The second respondent to Setterfield's theory was Dr. Wytse Van Dijk, Professor of Physics and Mathematics, Redeemer College, who asked (concerning Professor Troitskii's model of the slowing down of the speed of light): "Can we test the validity of Troitskii's model? If his model is correct, then atomic clocks should be slowing compared to dynamic clocks. The model could be tested by comparing atomic and gravitational time over several years to see whether they diverge. I think such a test would be worthwhile. The results might help us to resolve some of the issues relating to faith and science." (p. 5)

 ** This graph correlates the accuracy of measurements of the speed of light c with the rate of change in c between 1740 and 1980.

Setterfield: During the early 1980's it was my privilege to collect data on the speed of light, c. In that time, several preliminary publications on the issue were presented. In them the data list increased with time as further experiments determining c were unearthed. Furthermore, the preferred curve to fit the data changed as the data list became more complete. In several notable cases, this process produced trails on the theoretical front and elsewhere which have long since been abandoned as further information came in. In August of 1987, our definitive Report on the data was issued as The Atomic Constants, Light and Time in a joint arrangement with SRI International and Flinders University. Trevor Norman and I spent some time making sure that we had all the facts and data available, and had treated them correctly statistically. In fact the Maths Department at Flinders was anxious for us to present a seminar on the topic. That Report presented all 163 measurements of c by 16 methods over the 300 years since 1675. We also examined all 475 measurements of 11 other c-related atomic quantities by 25 methods. These experimental data determined the theoretical approach to the topic. From them it became obvious that, with any variation of c, energy is conserved in all atomic processes. A best fit curve to the data was presented.

 In response to criticism, it was obvious the data list was beyond contention - we had included everything in our Report. Furthermore, the theoretical approach withstood scrutiny, except on the two issues of the redshift and gravitation. The main point of contention with the Report has been the statistical treatment of the data, and whether or not these data show a statistically significant decay in c over the last 300 years. Interestingly, all professional statistical comment agreed that a decay in c had occurred, while many less qualified statisticians claimed it had not! At that point, a Canadian statistician, Alan Montgomery, liaised with Lambert Dolphin and me, and argued the case well against all comers. He presented a series of papers which have withstood the criticism of both the Creationist community and others. From his treatment of the data it can be stated that c decay (cDK) [note: this designation has since been changed to 'variable c' or Vc] has at least formal statistical significance.

However, Zero Point Energy and the Redshift takes the available data right back beyond the last 300 years. In so doing, a complete theory of how cDK occurred (and why) has been developed in a way that is consistent with the observational data from astronomy and atomic physics. In simple terms, the light from distant galaxies is redshifted by progressively greater amounts the further out into space we look. This is also equivalent to looking back in time. As it turns out, the redshift of light includes a signature as to what the value of c was at the moment of emission. Using this signature, we then know precisely how c (and other c-related atomic constants) has behaved with time. In essence, we now have a data set that goes right back to the origin of the cosmos. This has allowed a definitive cDK curve to be constructed from the data and ultimate causes to be uncovered. It also allows all radiometric and other atomic dates to be corrected to read actual orbital time, since theory shows that cDK affects the run-rate of these clocks.

A very recent development on the cDK front has been the London Press announcement on November 15th, 1998, of the possibility of a significantly higher light-speed at the origin of the cosmos. I have been privileged to receive a 13-page pre-print of the Albrecht-Magueijo paper (A-M paper) which is entitled "A time varying speed of light as a solution to cosmological puzzles". From this fascinating paper, one can see that a very high initial c value really does answer a number of problems with Big Bang cosmology. My main reservation is that it is entirely theoretically based. It may be difficult to obtain observational support. As I read it, the A-M paper requires c to be at least 10^60 times its current speed from the start of the Big Bang process until "a phase transition in c occurs, producing matter, and leaving the Universe very fine-tuned ...". At that transition, the A-M paper proposes that c dropped to its current value. By contrast, the redshift data suggest that cDK may have occurred over a longer time. Some specific questions relating to the cDK work have been raised. Penny wrote to me that someone had suggested "that the early measurements of c had such large probable errors attached, that (t)his inference of a changing light speed was unwarranted by the data." This statement may not be quite accurate, as Montgomery's analysis does not support this conclusion. However, the new data set from the redshift resolves all such understandable reservations.

There have been claims that I 'cooked' or mishandled the data by selecting figures that fit the theory. This can hardly apply to the 1987 Report as all the data is included. Even the Skeptics admitted that "it is much harder to accuse Setterfield of data selection in this Report". The accusation may have had some validity for the early incomplete data sets of the preliminary work, but I was reporting what I had at the time. The rigorous data analyses of Montgomery's papers subsequent to the 1987 Report have withstood all scrutiny on this point and positively support cDK. However, the redshift data in the forthcoming paper overcomes all such objections, as the trend is quite specific and follows a natural decay form unequivocally.

Finally, Douglas Kelly's book Creation and Change contained a very fair critique on cDK by Professor Fred Skiff. However, a few comments may be in order here to clarify the issue somewhat. Douglas Kelly appears to derive most of his information from my 1983 publication "The Velocity of Light and the Age of the Universe". He does not appear to reference the 1987 Report which updated all previous publications on the cDK issue. As a result, some of the information in this book is outdated. In the "Technical And Bibliographical Notes For Chapter Seven" on pp.153-155 several corrections are needed as a result. In the paragraph headed by "1. Barry Setterfield" the form of the decay curve presented there was updated in the 1987 Report, and has been further refined by the redshift work which has data back essentially to the curve's origin. As a result, a different date for creation emerges, one in accord with the text that Christ, the Apostles and Church Fathers used. Furthermore this new work gives a much better idea of the likely value for c at any given date. The redshift data indicate that the initial value of c was 2.54 x 10^10 times the speed of light now. This appears conservative when compared with the initial value of c from the A-M paper of 10^60 times c now.

Professor Skiff then makes several comments. He suggests that cDK may be acceptable if "Planck's constant is also changing in such a way as to keep the fine structure 'constant' constant." This is in fact the case as the 1987 Report makes clear.

Professor Skiff then addresses the problem of the accuracy of the measurements of c over the last 300 years. He rightly points out that there are a number of curves which fit the data. Even though the same comments still apply to the 1987 Report, I would point out that the curves and data that he is discussing are those offered in 1983, rather than those of 1987. It is unfortunate that the outcome of the more recent analyses by Montgomery is not even mentioned in Douglas Kelly's book.

Professor Skiff is also correct in pointing out that the extrapolation from the 300 years of data is "very speculative". Nevertheless, geochronologists extrapolate by factors of up to 50 million to obtain dates of 5 billion years on the basis of less than a century's observations of half-lives. However, the Professor's legitimate concern here should be largely dissipated by the redshift results, which take us back essentially to the origin of the curve and define the form of that curve unambiguously. The other issue that the Professor spends some time on is the theoretical derivation for cDK, and a basic photon idea which was used to support the preferred equation in the 1983 publication. Both that equation and the theoretical derivation were short-lived. The 1987 Report presented the revised scenario. The upcoming redshift paper has a completely defined curve that has a solid observational basis throughout. The theory of why c decayed, along with the associated changes in the related atomic constants, is rooted firmly in modern physics with only one very reasonable basic assumption needed. I trust that this forthcoming paper will be accepted as contributing something to our knowledge of the cosmos.

Professor Skiff also refers to the comments by Dr. Wytse Van Dijk who said that "If (t)his model is correct, then atomic clocks should be slowing compared to dynamical clocks." This has indeed been observed. In fact it is mentioned in our 1987 Report. There we point out that the lunar and planetary orbital periods, which comprise the dynamical clock, had been compared with atomic clocks from 1955 to 1981 by Van Flandern and others. Assessing the evidence in 1984, Dr. T. C. Van Flandern came to a conclusion. He stated that "the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing with respect to dynamical phenomena ..." This is the observational evidence that Dr. Wytse Van Dijk and Professor Skiff required. Further details of this assessment by Van Flandern can be found in "Precision Measurements and Fundamental Constants II", pp.625-627, National Bureau of Standards (US) Special Publication 617 (1984), B. N. Taylor and W. D. Phillips editors.

In conclusion, I would like to thank Fred Skiff for his very gracious handling of the cDK situation as presented in Douglas Kelly's book. Even though the information on which it is based is outdated, Professor Skiff's critique is very gentlemanly and is deeply appreciated. If this example were to be followed by others, it would be to everyone's advantage. 

 

Einstein's Equation

Comment:  As for the physical problems with the c-decay model, probably the easiest refutation for the layman to understand invokes the one science equation that is well known to all, E = mc^2. Let us imagine, if you will, that we have doubled the speed of light [the c constant]. That would increase E by a factor of 4. The heat output of the sun would be 4 times as great. And you thought we had a global warming problem now. In other words, if the speed of light was previously higher (and especially if it was exponentially higher), the earth would've been fried a long time ago and no life would have been able to exist.

Setterfield: In the 1987 Report which is on these web pages, we show that atomic rest masses "m" are proportional to 1/c^2. Thus when c was higher, rest masses were lower. As a consequence, the energy output of stars, etc., from E = mc^2 reactions is constant over time when c is varying. Furthermore, it can be shown that the product Gm is invariant for all values of c and m. Since all the orbital and gravitational equations contain Gm, there is no change in gravitational phenomena. The secondary mass in these equations appears on both sides of the equation and thereby drops out of the calculation. Thus orbit times and the gravitational acceleration g are all invariant. This is treated in more detail in General Relativity and the ZPE.
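A rough numerical check of this scaling argument may help (this is my own illustrative sketch in arbitrary units, not material from the Report; the Report states only that the product Gm is invariant, so G is simply scaled here to keep that product fixed):

    import math

    c0, m0, G0, a = 1.0, 1.0, 1.0, 1.0        # arbitrary "present-day" units

    for factor in (1.0, 2.0, 10.0, 1.0e5):    # hypothetical earlier values of c
        c = factor * c0
        m = m0 * (c0 / c) ** 2                # rest mass taken to scale as 1/c^2
        G = G0 * (c / c0) ** 2                # scaled here only so that G*m stays fixed
        E = m * c ** 2                        # energy released in an E = mc^2 reaction
        T = 2.0 * math.pi * math.sqrt(a ** 3 / (G * m))   # Kepler period about mass m
        print("c x%-9g E = %.3f  G*m = %.3f  orbital period = %.3f" % (factor, E, G * m, T))

Every pass through the loop prints the same energy, the same Gm and the same orbital period, which is the point being made above: with rest mass falling as 1/c^2 and Gm held constant, neither the energy released per reaction nor the orbital dynamics betrays the change in c.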

 

Why not build on established work?

Comments: I quote the eminent gravitation theorist Charles Misner, intending no offense in quoting his terminology.  He wrote:

"By correspondence principles, here, I mean limiting relationships between one theory and another--the fact that general relativity grew out of Newtonian mechanics and special relativity and bears formal mathematical relations to them by which it can reproduce those theories in suitable circumstances and limits....

It is very characteristic of improper theories to be deficient in correspondence principles.  Relativists see many `crackpot' theories; people write letters to relativists proposing why special relativity is wrong because they can rethink the Michelson-Morley experiment, or the Lorentz contraction, from some other viewpoint.  The reason one regards most such proposals as `crackpot' is that they are not born within the milieu of evolving physical theory; they do not have roots and branches reaching out, securing them into the other, more firmly established, theories of physics.  One knows that if he begins working on such a theory he will have to reconstruct his whole world view, rather than just revise a current line of development.  Every experiment or observation from centuries past becomes a possible crucial test of the `crackpot' theory, because current standard theories, and the previous theories they improve upon, are not incorporated into the newly proposed theory by correspondence principles.  Thus one makes a demand that any theory be at most `conventionally revolutionary,' rather than `crackpot,' and this demand is based on a requirement for thorough confrontation with experiment.  The conventionally revolutionary theory (such as special relativity at its inception) may discard previously fundamental concepts and change the basic laws, but it does so in a way that leaves its testing against all previously satisfactory experiments under the control of the previous theory whose domain of authority or validity it clarifies.

Thus most of the discussion of a conventionally revolutionary theory properly focuses on the small group of experiments where it differs from the previous theory.  The `crackpot' theory usually directs its attention to a similar small group of critical experiments, but this limitation is now unjustified since the crackpot theory is--in my use of the word here--by definition deficient in correspondence principles and must, for an adequate testing, also discuss a host of experiments that were nonproblematic in standard theories with which the new theory has lost touch."

C. Misner, (pp. 84-85), "Cosmology and Theology", in Cosmology, History, and Theology, ed. W. Yourgrau and A. D. Breck, Plenum, 1977.

The task for Setterfield's notion of VSL [variable speed of light] is to acquire the sorts of correspondence principles that Misner discusses, and that will involve formulating the theory like a relativistic field theory.  Since people like Magueijo, Visser, and Moffat have shown ways to do that, why not use their work?  At that point one would make contact with current experiments.  The rapid light speed decay that Setterfield posits would then, I expect, be empirically falsified, but that would still be progress.

Setterfield: The main thrust of these comments is that if new work does not build on the bases established by general and special relativity, then it must be an "improper" theory. I find this attitude interesting as a similar view has been expressed by a quantum physicist. This physicist has accepted and taught quantum electrodynamics (QED) for many years and has been faithful in his proclamation of QED physics. However, when he was presented with the results of the rapidly developing new interpretation of atomic phenomena based on more classical lines called SED physics, he effectively called it an improper theory since it did not build on the QED interpretation. It did not matter to him that the results were mathematically the same, though the interpretation of those results was much more easily comprehensible. It did not matter to him that the basis of SED was anchored firmly in the work of Planck, Einstein and Nernst in the early 1900's, and that many important physicists today are seriously examining this new approach. It had to be incorrect because it did not build on the prevailing paradigm.

I feel that the above comments may perhaps be in a similar category.  The referenced quote implies that this lightspeed work does "not have roots and branches reaching out, securing them into the other, more firmly established, theories of physics."  However, I have gone back to the basics of physics and built from there.  But if by the basics of physics one means general and special relativity, I admit guilt.  However, there is a good reason that I do not build on special or general relativity and use the types of equations those formalisms employ.  In most of the work using those equations, the authors put lightspeed, c, and Planck's constant, h, equal to unity.  Thus at the top of their papers or implied in the text is the equation: c=ħ=1.  Obviously, in a scenario where both c and h are changing, such equations are inappropriate.  Instead, what the lightspeed work has done is to go back to first principles and basic physics, such as Maxwell's equations, and build on that rather than on special or general relativity.  This also makes for much simpler equations.  Why complicate the issue when it can be done simply?

There is a further reason. Significant changes to c and h may mean that general relativity should be re-examined.  A number of serious scientists have thought this way.  For example, SED physics is providing a theory of gravity which is already unified with the rest of physics.  This approach employs very different equations from those of general relativity.  A second example is Robert Dicke, who, in 1960, formulated his scalar-tensor theory of gravity as a result of observational anomalies.  This Brans-Dicke theory became a serious contender against general relativity up until 1976, when it was disproved on the basis of a prediction.  Note, however, that the original anomalous data that led to the theory still stand; the anomaly still exists in measurements today, and it is not accounted for by general relativity.  A third illustration comes from 2002.  In this last year, over 50 papers addressing the matter of lightspeed and relativity have been accepted for publication by serious journals.  These facts alone indicate that the last word has not been spoken on this matter.  It is true that the 2002 papers have been tinkering around the edges of relativity.  Perhaps the whole issue needs an infusion of new thinking in view of the changing values of c and this other anomalous data, despite the comments of Misner.  For these three reasons I am reluctant to dance with the existing paradigm and utilize those equations which may be an incomplete description of reality.

Therefore, I plead guilty in that I am not following the path dictated by relativity.  But this does not necessarily prove that I am wrong, any more than SED physics is wrong, or that Brans and Dicke were wrong in trying to find a theory to account for the observational anomalies.  On the basis of common sense and the history of scientific endeavor, I therefore feel that the "requirement" presented above  may legitimately be ignored.

 

Comments: I suspect that your QED physicist is not fully convinced that they are mathematically equivalent.  From my skimming of L. de la Peña and A. M. Cetto, "Quantum Theory and Linear Stochastic Electrodynamics", Foundations of Physics v. 31 (#12): pp. 1703-1731, December 2001, that question is controversial, and its answer varies between stochastic electrodynamics and their new linear stochastic electrodynamics.  If SED, LSED, or the like really is mathematically equivalent to QED, and if it does restore classical properties substantially, then 1. I will be happy, and 2. the foundations of physics community will be eager to learn the consequences for resolving the QM measurement problem, explaining EPR correlations, and the like.  I doubt that this equivalence has been proved, and if it has, that fact has not become widely known.  I found nothing on stochastic electrodynamics at www.arxiv.org, by the way, which is curious. SED advocates, who might have trouble with journal peer review, should like the arXiv a great deal.  I am acquainted with Bohm's work, which is catching on in some circles and has a credible claim on empirical equivalence with quantum mechanics.

[In addition,] If you have gone back to, say, 1905, then you have missed almost a century of experiments.

The statements c=ħ=1 are just conventions, and so cannot possibly exclude the expression of physical claims that some ostensible constants are varying.  The statement "the speed of light is changing" (relative to measurements made with meter sticks and stopwatches, I assume) can be expressed as something like "the metric tensor appearing in the Lagrangian density for electromagnetism is not conformally related to the metric tensor appearing in the matter Lagrangian density".  Formalisms for doing this kind of thing are being published today in relativistic classical field theoretic form--the language of the current state of this sort of physics--by Drummond, Magueijo, Visser, Moffat, and the like.  Without expressing a theory in a form like this, one fails to establish the correspondence principles that Misner wants.  As a result everyone's time is wasted, especially yours, because you cannot be confident that your theory is consistent with physical experiments, even old ones, that all standard theories satisfy as a matter of course.

You need to show that an empirically adequate story simpler than special and general relativity exists.  But does it?  I doubt it.  I especially doubt that one can know that without working in the language of these theories in order to express the alternative theories in a form manifestly comparable to them.  One cannot even read Clifford Will's Theory and Experiment in Gravitational Physics to know the current state of gravitational experiments, unless one uses a formalism that looks like classical relativistic field theory, general relativity, and the like.

Actually Brans-Dicke is still viable if you turn the knobs appropriately.  The experiments in question had to do with solar oblateness measurements and models.  But Brans-Dicke will not help here, because it is, as Misner puts it, conventionally revolutionary, not crackpot.  It is quite clear what the relation between Brans-Dicke gravity and general relativity is, and how to get Brans-Dicke experiments to agree in some limit with GR.  (One can't even express Brans-Dicke theory in a form that looks very different from general relativity!)  That is just what I want to see for c-decay work.

The following response was posted as part of a general discussion of the material by a third person:

Misner was quoted as stating "By correspondence principles, here, I mean limiting relationships between one theory and another--the fact that general relativity grew out of Newtonian mechanics and special relativity and bears formal mathematical relations to them by which it can reproduce those theories in suitable circumstances and limits...."

As you know, the correspondence principle between General Relativity, QM and classical Newtonian mechanics is satisfied as h --> 0 & c --> infinity.  So Barry's hypothesis of a varying 'c' is not necessarily disjointed from solid physics.

Also, as long as Planck's constant 'h' is not constant but also varies "contravariantly" with 'c,' such that the product 'hc' is constant and as long as 'G' and 'e' don't vary then the total mass-energy of any SSM type universe would remain conserved.

However there appears to be a difficulty with intra-system energy conservation across time. Perhaps a closer reading of Setterfield's material will shed some light on this matter.

I'm in sympathy with Setterfield regarding SED versus QED.  Moreover, having known Dirac and his enormous antipathy toward some of the outrageous mathematical license taken in QED, I'm almost positive that Dirac would concur.

However, much to my dismay, SED, the Casimir effect and so forth, appears to have been hijacked by new-agers and the rest of the free-energy-flying-saucer-extra-terrestrial crowd based on the left coast.

Setterfield: "Science must not neglect any anomaly but give nature's answers to the world humbly and with courage." [Sir Henry Dale, President of the Royal Society, 1981]

By neglecting the anomalies associated with the dropping speed of light values, and neglecting the anomalies associated with mass measurements of various sorts and the problems with quantum mechanics, those adhering to relativistic theory have left themselves open to the charge that relativity has become theory-based rather than observationally-based.

Thanks [to the second correspondent] for your summation of the situation, which is largely correct. The evidence does indeed suggest that h tends to zero and c tends to infinity as one approaches earlier epochs astronomically. At the same time, the product hc has been experimentally shown to be invariant over astronomical time, just as you indicated.  These experimental results, the theoretical approach that incorporated them, and their effects on the atom, were thrashed out in the 1987 Report. These ideas have been subsequently developed further in Exploring the Vacuum. Later, Reviewing the Zero Point Energy refined this further. In this way the correspondence principle has been upheld from its inception, and I had thought that this part of the debate was substantially over. However, those unfamiliar with the 1987 Report would not be expected to know this and may wonder about its validity as a result.

As far as intra-system energy conservation is concerned, that issue was partly addressed in the 1987 Report where it was shown that atomic processes were such that energy was conserved in the system over time, and in more detail in the main 2001 paper. The outcome was hinted at in the Vacuum paper in the context of a changing zero-point energy and a quantized redshift. However, another paper dealing with these specific matters is proposed.

Finally, I, too, am dismayed by the hijacking of SED work by the new-agers, but that should not cloud the valid physics involved.

 

Comment: Setterfield wrote: "Thanks for your summation of the situation, which is largely correct.  The evidence does indeed suggest that h tends to zero and c tends to infinity as one approaches earlier epochs astronomically."

How does this square with the work of (I think it was) Webb et al, that get a fraction of a percent change in alpha = e^2/(ħ*c)?  What are the consequences of a changing e, instead of a changing c?

Furthermore, as already said:

"Also, as long as Planck's constant 'h' is not constant but also varies 'contravariantly' with 'c,' such that the product 'hc' is constant and as long as 'G' and 'e' don't vary then the total mass-energy of any SSM type universe would remain conserved."

In response to this: if alpha is varying (given the evidence above, which I am not yet comfortable asserting to be absolutely true), then it is possible that e is in fact varying.

From another person:

Comment: I would be interested in getting references to the evidence that suggests that "h tends to zero and c tends to infinity as one approaches earlier epochs astronomically." The latter would indicate that high energy physics is governed by classical rather than quantum mechanics at extreme temperatures and densities. Some time ago I developed a model of particle structure where the statistics governing the dynamics was classical. I successfully applied this model to the study of the very early universe.

Setterfield: The key issue which is raised here concerns the work of Webb et al that indicated there was a change in alpha, the fine structure constant, by about one part in 100,000. A few points should be made. The first problem is that this result is very difficult to disentangle from redshifted data. One first has to be sure that this change is separate from anything the redshift has produced.  The second potential difficulty is that all the data have been collected from only one observatory and may be an artifact of the instrumentation. This latter difficulty will soon be overcome as other instruments are scheduled to make the observations as well. That will be an important test. The third point is that observational evidence, some of which is listed in the 1987 Report, indicates that the product hc is invariant with time. This only leaves the electronic charge, e, or the permittivity of free space, epsilon, as the quantities giving any actual variation in alpha, unless alpha itself is changing.  However, this whole situation appeals to my sense of humor. Physicists are getting excited over a suspected change of 1 part in 100,000 in alpha over the lifetime of the universe, but ignore a change of greater than 1% in c that has been measured by a variety of methods over 300 years.
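To make the dependences explicit, the standard (SI) definition of the fine structure constant can be written out with its logarithmic variation separated term by term; this is simply a restatement of the point above, not a new result:

    \alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c}
    \qquad\Longrightarrow\qquad
    \frac{\Delta\alpha}{\alpha} = 2\,\frac{\Delta e}{e} - \frac{\Delta\varepsilon_{0}}{\varepsilon_{0}} - \frac{\Delta(\hbar c)}{\hbar c}

If the product hc is invariant, the last term vanishes, so a reported change in alpha of about 1 part in 100,000 would have to be carried by e or epsilon (or be an artifact), whereas the change claimed above for c alone over the last 300 years is of order 1 part in 100.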

It was noted that the results of the variable c (Vc) research applied to the early universe might “indicate that high energy physics is governed by classical rather than quantum mechanics at extreme temperatures and densities.” In response, it is fair to say that I have not investigated that possibility. What this research is showing is that the basic cause of all the changes in atomic constants can be traced to an increase with time in the energy density of the Zero-Point Energy (ZPE). Thus the ZPE was lower at earlier epochs. This has a variety of consequences which are being followed through in this series of papers, of which the Vacuum paper is the first. One consequence of a lower energy density for the ZPE is a higher value for c, in inverse proportion, since the permittivity and permeability of space are directly linked with the ZPE. Another consequence concerns Planck’s constant h. Planck, Einstein, Nernst and others have shown mathematically that the value of h is a measure of the strength of the ZPE. Therefore, any change in the strength of the ZPE with time also means a proportional change in the value of h. The systematic increase in h which has been measured over the last century, as outlined in the 1987 Report, implies an increase in the strength of the ZPE. Thus the invariance of hc is also explicable. But a lower value for h means that quantum uncertainty was also less in those epochs. This in turn means that atomic particle behaviour was more classical in the early days of the cosmos.  This result seems to be independent of the temperature and density of matter, but does not deny the possibility of other effects.  The final matter that the Vacuum paper addresses is the cause of the increasing strength of the ZPE. The work of Gibson allows it to be traced to turbulence in the ‘fabric of space’ at the Planck length level, induced by the expansion of the cosmos.
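Writing U for the energy density of the ZPE (my notation), the proportionalities stated in this paragraph can be gathered into a single line. This is a sketch, not a derivation, and it reads "directly linked" as a simple linear proportionality for both the permittivity and the permeability, which is an assumption on my part:

    \varepsilon_{0} \propto U, \qquad \mu_{0} \propto U, \qquad
    c = \frac{1}{\sqrt{\mu_{0}\varepsilon_{0}}} \propto \frac{1}{U}, \qquad
    h \propto U \;\Longrightarrow\; hc = \text{constant}

A lower U at earlier epochs then gives a higher c, a smaller h, and hence less quantum uncertainty, which is the chain of reasoning used in the rest of this answer.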

 

The use of the SED approach

Comments: Having read your recent paper for the Journal of Theoretics, and noticing the strong reliance of your thesis upon the tenets of Stochastic Electrodynamics, any of the following may be of interest:

R. Ragazzoni, M. Turatto, & W. Gaessler, "Lack of observational evidence for quantum structure of space-time at Planck scales," http://www.arxiv.org/ e-Print archive, astro-ph/0303043;

R. Lieu & L. Hillman, "The phase coherence of light from extragalactic sources - direct evidence against first order Planck scale fluctuations in time and space," http://www.arxiv.org/ e-Print archive, astro-ph/0301184; 

R. Lieu & L. W. Hillman, "Stringent limits on the existence of Planck time from stellar interferometry," http://www.arxiv.org/  e-Print archive, astro-ph/0211402.

It appears that observational evidence precludes the Planck-scale "partons" on which H. Puthoff based his rationale for the conservation of electron orbitals (and therefore on which you base your hypothesis of a time dependent decrease in c). 

One consequence of these results is that the speed of light in vacuo must have been constant to one part in 10^32.  It may be that your notion of "stretching out the heavens" is incorrect and should be modified.  Perhaps one of the "superfluid aether" models would offer a better foundation for your hypothesis than the SED approach of which you seem to have become enamored.

Setterfield: Many thanks for bringing these papers to my attention. However, they do not pose the problem to the SED approach and/or the Variable c (Vc) model that the questioner supposes.  Basically, on the Vc model, quantum uncertainty is less at the inception of the cosmos because Planck's constant times the speed of light is invariable.  Therefore, when the speed of light was high, quantum uncertainty was lower. But other issues are also raised by the papers referred to above.  Therefore, let us take this a step at a time. 

In the Lieu and Hillman paper of 18th November 2002 entitled “Stringent limits on the existence of Planck time from stellar interferometry” they specifically state in the Abstract that they “present a method of directly testing whether time is ‘grainy’ on scales of … [about] 5.4 x 10^-44 seconds, the Planck time.” They then use the techniques of stellar interferometry to “place stringent limits on the graininess of time.” Elucidation of the rationale behind their methodology comes in the first sentence, namely “It is widely believed that time ceases to be exact at intervals [less than or equal to the Planck time] where quantum fluctuations in the vacuum metric tensor renders General Relativity an inadequate theory.” They then go on to demonstrate that if time is ‘grainy’ or quantised, then the frequencies of light must also be quantised since frequency is a time-dependent quantity. Furthermore, they point out that quantum gravity demands that “the time t of an event cannot be determined more accurately than a standard deviation of [a specific form]…”  This form is then plugged into their frequency equations, which indicate that light photons from a suitably distant optical source will have their phases changed randomly. But interferometers take two light rays from a distant astronomical source along different paths and converge them to form interference fringes. They then conclude “By Equ. (11), however, we see that if the time quantum exists the phase of light from a sufficiently distant source will appear random – when [astronomical distance] is large enough to satisfy Equ. (12) the fringes will disappear.” This paper, and their subsequent one, both point out that the fringes still exist even with very distant objects. The conclusion is that time is not ‘grainy’, in contradiction to quantum gravity theories. This result is a serious blow to all quantum gravity theories, and a major re-appraisal of their validity is needed as a consequence. Insofar as these results also call into question the very existence of space-time, upon which all metric theories of gravity are based, considerable doubt must be expressed as to the reality of this entity.

However, this is not detrimental to the SED approach, since gravity is already a unified force in that theory. It is in an attempt to unify gravity with the other forces of nature, including quantum phenomena, that quantum gravity was introduced. By contrast, SED physics presents a whole new view of quantum phenomena and gravity, pointing out that both arise simply as a result of the “jiggling” of subatomic particles by the electromagnetic waves of the Zero-Point Energy (ZPE). Since this ZPE jiggling is giving rise to uncertainty in atomic events, this uncertainty is not traceable to either uncertainty in other systems or to an intrinsic property of space or time. This point was made towards the close of my Journal of Theoretics article “Exploring the Vacuum”. As a consequence, it becomes apparent that time is not quantised on this Vc approach.

Ragazzoni, Turatto and Gaessler use more recent data to reinforce the original conclusions of Lieu and Hillman. These latter two then expand on their 2002 approach in their 27th January 2003 paper “The phase coherence of light from extragalactic sources – direct evidence against first order Planck scale fluctuations in time and space.” They take some Hubble Space Telescope results from very distant galaxies to reinforce their earlier conclusions. In the Abstract of this 2003 paper they also state that “According to quantum gravity, the time t of an event cannot be determined more accurately than a standard deviation of [a specific form]…likewise distances are subject to an ultimate uncertainty…” They then use this distance uncertainty relationship with light from astronomical sources to demonstrate that there is no ‘graininess’ to space at the Planck length.

Here is the key point. In order to obtain an uncertainty in distance, they multiply the uncertainty in time by the speed of light. If there is no uncertainty in time, as the Vc model indicates, then the equations used by Lieu and Hillman cannot be employed to discover if there is any uncertainty in distance at the Planck length. Furthermore, the final statement in their 2003 Abstract, namely that “The same result may be used to deduce that the speed of light in vacuo is exact to a few parts in 1032”, is also incorrect for the same reason.  Nevertheless, insofar as they are using quantum gravity equations and the resulting concept of the graininess of space-time, these results indicate that space-time is not grainy, and therefore quantum gravity is conceptually in error.
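The step being questioned can be put in one line (my compression of the argument, not the authors' own notation). The distance uncertainty enters the interferometric test only through

    \sigma_{L} \approx c\,\sigma_{t}

so if, as on the Vc model, there is no intrinsic uncertainty in time, then the predicted sigma_L vanishes automatically, and the fringe observations place no limit either on any graininess of length itself or on the constancy of c.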

However, there are other ways of determining whether or not space itself is ‘grainy’ at the Planck length level. If metric theories of gravity have any validity at all, and the work of Lieu and Hillman has cast serious doubt on this, then an approach suggested by Y. Jack Ng and H. van Dam may soon provide observational evidence for the existence of the graininess of space-time. They write in their Abstract “We see no reason to change our cautious optimism on the detectability of space-time foam with future refinements of modern gravitational-wave interferometers like LIGO/VIRGO and LISA.” [arXiv:gr-qc/9911054 v2, 28th March 2000]. Their metric equations indicate that, over the size of the whole observable universe, an expected fluctuation of only 10^-15 metres would manifest as quantised gravitational waves. Upcoming refinements in gravitational wave interferometers will soon allow this degree of uncertainty to be detected. If these refinements do not detect quantised gravitational waves, then there is further trouble for some metric theories of gravity. Indeed, at the moment of writing, no gravitational waves have been detected at all by these expensive interferometers. If this situation continues to exist with the proposed refinements, then the validity of General Relativity may be called into question and the SED option may become more attractive.

A different approach has been adopted by Baez and Olson, which suggests that wave fluctuations the size of the Planck length are the only ones expected to exist if the fabric of space exhibits graininess at that scale. As a result, such graininess will be undetectable to gravitational wave interferometers [arXiv preprint, January 2002].

The outcome from this discussion is that the granular structure of space is still a very viable option when the SED approach is followed through, as it is in the Vc model. The anonymous reader’s final two paragraphs therefore draw incorrect conclusions. However, if, as on the Vc model, a decrease in the value of Planck's constant can also be construed as meaning a decrease in the uncertainty of time, then part of the problem that has been raised by these Hubble telescope observations may be overcome. If the decrease in the uncertainty of time at the inception of the cosmos is followed through, then this may provide an answer for the problem that these observations pose to theories of quantum gravity. Thus the graininess of space is not called into question in the Vc approach. (Updated April 3, 2003.)

In response to a request for a simpler explanation, Barry wrote the following:

The theoretical basis for these experiments is the expected fuzziness or granularity of space and time that emerges from those theories that attempt to meld general relativistic concepts of gravity with those of quantum mechanics. The respective papers by Lieu and Hillman, as well as Ragazzoni et al, have concentrated on the expected fuzziness or graininess of time. They deduced that if such a quantum fuzziness or granularity for time really exists, then there will be a smearing of light photons from a sufficiently distant source which will give slightly blurry pictures of very distant astronomical objects, the blurriness increasing with distance. As it turns out, the Hubble Space Telescope pictures of the most distant objects are sharp, not blurry. This may call into question the whole concept of quantum gravity. However, the newly developing branch of SED physics has a completely consistent approach to gravity that is already unified with other physical concepts, and therefore does not need “unifying” in the way that quantum gravity theories attempt to do. On this basis, the HST images of distant objects support the SED approach rather than the quantum gravity approach.

Furthermore, these results are not detrimental to the variable speed of light (Vc) model. On this approach, quantum uncertainty becomes less the further back into the past we go. This uncertainty is given by Planck’s constant, h. At the inception of the cosmos, h was very much smaller than it is now. Since the units of Planck’s constant are energy multiplied by time, this means that the uncertainty in time was very much less (of the order of 1/10^7) for those distant astronomical objects. On that basis, the results from the Hubble Space Telescope are entirely explicable on the Vc model.
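One way to express the scaling being appealed to here (assuming that the usual time-energy uncertainty relation simply carries the epoch-dependent h, which is my reading of the paragraph above) is:

    \Delta E\,\Delta t \gtrsim \frac{h}{4\pi}
    \qquad\Longrightarrow\qquad
    \Delta t \propto h \quad (\text{for a given } \Delta E)

so an h at emission of order 1/10^7 of its present value implies a time smearing reduced by the same factor, which is why sharp images of very distant objects are expected on the Vc model.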

 

Waves or Frequency?

Comment: Light emitted from atoms is frequency-driven, not wavelength-driven.  As light enters a denser medium, it is the frequency which remains constant and the wavelength which varies. By contrast, Setterfield has the wavelength constant and the frequency varying, and claims a redshift on a different basis from that which is conventionally followed.

Setterfield: This question raises an important issue. Consider the behaviour of an infinitely long beam of light from an object at the frontiers of the cosmos, or the wavetrain associated with a single photon of light, in two situations: entering a denser medium such as glass or water from a less dense one, as compared with travelling through a cosmos in which the ZPE is changing. In the first instance, imagine the beam or wavetrain going from air into glass in such a way that the light ray is moving perpendicularly to the glass. In this case, “every point on a given wavefront enters the glass slab simultaneously and, hence, experiences a simultaneous retardation, since the velocity of light is less in glass than in air. The wave fronts in the glass are therefore parallel to those in the air but closer together…”  [Martin & Connor, Basic Physics, Vol. 3, p. 1193-194]. Thus the wave fronts bunch up in the glass as the waves behind approach the glass with higher speed, and so crowd together in the denser medium. The same effect can be seen on a highway when an obstacle in the path slows the traffic stream, and cars bunch up near the obstacle. The cause of the effect is that this example involves two concurrent values of lightspeed: one in air, the other in glass.

This situation does not apply to emitted light traveling through a cosmos where the ZPE is changing. In this case, an infinitely long beam or a photon wavetrain is traveling through the vacuum. The energy density of the vacuum is smoothly increasing simultaneously throughout the universe. This means that all parts of the infinitely long beam and of the wavetrain are slowing SIMULTANEOUSLY. In other words there is no bunching-up effect, because all parts of the beam or wavetrain are traveling with the same velocity. A similar situation would exist with cars on a highway if all cars were simultaneously slowing at the same rate. The distance between the cars would remain constant, but the number of cars passing any given point per unit of time would lessen in proportion to the speed of the traffic stream. For that reason, in the lightspeed case, wavelengths remain fixed in transit and the frequency, the number of waves passing a given point per unit time, drops in proportion to the rate of travel. Therefore in a situation with a cosmologically changing ZPE, the frequency of light is lightspeed-dependent, while the wavelength remains fixed. It was the experimental proof of this very fact that was being seriously discussed by Raymond T. Birge in Nature in the 1930’s.
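The two cases can be contrasted with a small numerical sketch (my own illustration, not taken from any published treatment): wave-crest positions are stepped forward in time, first with a speed that depends on position (the glass-slab case above), then with a speed that falls everywhere simultaneously (the changing-ZPE case).

    import numpy as np

    LAM = 1.0                     # crest spacing of the incoming wave (arbitrary units)
    C_FAST, C_SLOW = 2.0, 1.0     # wave speeds on either side of the boundary in Case A
    DT, STEPS = 1e-3, 40000       # time step and number of steps

    # Case A: crests cross a boundary at x = 0 from a fast region into a slow one.
    crests = -LAM * np.arange(30, dtype=float)          # crests queued up behind x = 0
    for _ in range(STEPS):
        speed = np.where(crests < 0.0, C_FAST, C_SLOW)  # speed depends on position only
        crests = crests + speed * DT
    spacing_a = np.diff(np.sort(crests)).mean()
    print("Case A spacing:", spacing_a)                 # ~ LAM*C_SLOW/C_FAST: crests bunch up
    print("Case A crest rate:", C_SLOW / spacing_a)     # ~ C_FAST/LAM: frequency unchanged

    # Case B: every crest slows simultaneously; the speed falls everywhere with time.
    crests = -LAM * np.arange(30, dtype=float)
    for step in range(STEPS):
        c_now = C_FAST * (1.0 - 0.5 * step / STEPS)     # a smooth, universal slow-down
        crests = crests + c_now * DT                    # every crest shares the same speed
    spacing_b = np.diff(np.sort(crests)).mean()
    print("Case B spacing:", spacing_b)                 # ~ LAM: no bunching at all
    print("Case B crest rate:", c_now / spacing_b)      # ~ c_now/LAM: drops in step with c

Case A reproduces the textbook behaviour quoted from Martin & Connor (the spacing shrinks while the crest rate at a detector is unchanged); Case B reproduces the behaviour claimed here for a cosmologically changing ZPE (the spacing is unchanged while the crest rate falls in step with the speed).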

Another consideration applies here also. The equation for the energy E of a photon of light is given as E = hf, where h is Planck’s constant and f is frequency. In the situation which applies here, the energy density of the ZPE is uniformly increasing, and h is a measure of the energy density of the ZPE. Thus, as the ZPE strength increases, so does h. But it has been suggested that the frequency f should remain constant for light in transit with these changes. If that is so, it means that every photon in transit through the universe must be gaining energy as it travels. In other words, energy is not conserved. However, with the formulation that has been adopted in the paper under review, energy is conserved, as would be expected, and so it is the frequency that varies in transit, not the wavelength.
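The argument of this paragraph can be compressed into one line (a restatement, not new physics):

    E = hf = h\,\frac{c}{\lambda} = \frac{hc}{\lambda}

With the wavelength λ fixed in transit and the product hc invariant, E is unchanged as the photon travels, so energy is conserved. Holding the frequency f fixed while h increases would instead make E = hf grow in transit, which is exactly the non-conservation problem described above.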

 

Comment: Because Setterfield considers the frequency to be varying instead of the wavelength, a redshift of spectral lines is obtained. However, Maxwell’s equations show it is the frequency of light that is constant in any situation with changing light speed, while wavelength is the variable factor. Under these circumstances, there will be no redshift of spectral lines.

Setterfield:  The approach that the reviewer has given in item 2 dictates the response that he sees as appropriate to item 3. He seems to have mistakenly applied the results obtained from light traveling in an inhomogeneous medium to a situation in which there are simultaneous cosmos-wide changes in the medium. This is inappropriate, as noted above, and does not agree with experiment. If the approach is adopted that deals with a situation with simultaneous cosmological changes, then the redshift originates in the manner I have indicated in my work, and the reviewer's basic objection has already been answered.

However, the applicability of Maxwell’s equations is also called into question here. It is occasionally mentioned that these equations imply a constant speed of light in the vacuum, and any variation elsewhere is treated on the basis of a changing refractive index of the medium concerned. As noted above, this approach is inappropriate for the situation considered in this paper. Since Maxwell’s equations seem to imply a constant value for c in a vacuum, this condition can only apply to a changing c scenario when seen from the atomic point of view. Let us explore this a little.

Since all atomic processes are linked to the behaviour of the ZPE, as is lightspeed, then as c declines with increasing ZPE, so too does the rate of atomic processes, including atomic clocks and atomic frequencies. Therefore, as seen from the atom, lightspeed is constant and frequencies are constant. Thus Maxwell’s equations apply to an atomic frame of reference when c is varying cosmologically. This means that for Maxwell’s equations to apply in our dynamical or orbital time frame, we have to correct the atomic time that is used inherently in those equations to read dynamical time instead. When that is done, it is the frequency which varies, not the wavelength. In order to see this in a simple way, we note that the basic equation for lightspeed is c = f w, where f is frequency and w is wavelength. The units of c are, for example, metres per second, while the frequency is events per second. Thus it is the “per second” part of this equation that needs to be altered. Since wavelengths, w, will be in metres, and these have no time dependence, all the “per second” changes can only occur in c and f. Thus it is the frequency that will vary under these conditions with varying c, not the wavelength.

CONCLUSION:

There is an issue which needs to be mentioned in closing, one which underlies much of the difficulty some are having with the work presented on these pages.  Physics appears to have reversed a sequence which should not have been reversed, and in doing so has made several wrong choices in the latter part of the twentieth century.  Those underlying the reviewer's criticisms have to do with the permeability of space, a mistaken idea about frequency in terms of the behavior of light, and the equations of Lorentz and Maxwell.  As mentioned in point 1, permeability was related to the speed of light early in the twentieth century, but was later divorced from it and declared invariant.  It was invariant by declaration, not by data, and this is the first backwards move which has influenced the reviewer's thinking here.  Secondly, it has become accepted that the frequency of light is the basic quantity and that the wavelength is subsidiary.  Until about 1960 it was the wavelength that was considered the basic quantity for measurement.  However, since it had become easier to measure frequency with a greater degree of accuracy, the focus shifted from wavelength to frequency as the basic quantity, relegating wavelength to a subsidiary role.  The data dictate something else, however: it is the wavelength which remains constant and the frequency which varies when the speed of light changes.  This latter point was made plain by experimental data from the 1930’s, and was commented on by Birge himself.

In a similar way, although both Lorentz and Maxwell formulated their equations before Einstein adopted and worked with them, it has become almost required to derive the formulas of both Lorentz and Maxwell in terms of Einstein’s work.  Properly done, it should be the other way around, and the work of both earlier men should be allowed to stand alone without Einstein’s imposed conditions.

One final note: In the long run, it is the data which must determine the theory, and not the other way around. There are five anomalies cosmology cannot currently deal with in terms of the reigning paradigm.  These are easily dealt with, however, when one lets the data go where it will. The original data are in the Report. As given in my lectures, the anomalies concern measured changes in Planck’s constant, the speed of light, changes in atomic masses, the slowing of atomic clocks, and the quantized redshift.  Modern physics seems to be showing a preference for ignoring much of this in favor of current theories.  That is not the way I wish to approach the subject.

The common factor for solving all five anomalies is increase through time of the zero point energy, for reasons outlined in “Exploring the Vacuum.”  The material has also been updated in Reviewing the Zero Point Energy.

 
