Astronomical Discussion

 

A Static Universe?
Is the Universe Static or Expanding?
Light from distant stars
Missing Mass and Dark Matter?
More on Dark Matter and Dark Energy
Black Holes
Quasars and Pulsars
Pulsars
Pulsars and Problems
The Fine Structure Constant
Supernovas
    1987A
    1997ff
Doppler Shift
Slow motion effects?
What about the Big Bang?
Wandering Planets?
Fuzzy Space? 
Massive Bombardment?
Has Dark Matter Been Proven?
Particle-Wave Duality
Permittivity and Permeability of Space
Solar Activity
Stellar Brightness and the Tolman Test
Arp and the Red Shift of Quasars
Question about Wal Thornhill's ideas
The energy diffusion timescale for the Sun
Interstellar Water Mystery
CMBR and the Big Bang
What is the CMBR (Cosmic Microwave Background Radiation)?
Supernovas in the Milky Way Galaxy
Distant Starlight and a Talk Origins article answered
A Fractal Universe?
Orbital Periods of Stars
Electrostatic Attraction
Cosmic Ripples from the Birth of the Universe?
Planet X
Pole Star and Axis Tilt
When is Virgo visible?
The Big Bang and Star Formation
Erratic Star
Black Holes
Arp and the Red Shift
How are Star Distances Measured?
Evidence for an Expanding Universe
NEW: Crater Questions

A Static Universe?

In 1998, Barry Setterfield wrote the following:

Notes on a Static Universe:  Incredibly, an expanding universe does imply an expanding earth on most cosmological models that follow Einstein and/or Friedmann. As space expands, so does everything in it. This is why, if the redshift signified cosmological expansion, even the very atoms making up the matter in the universe would also have to expand. There would be no sign of this in rock crystal lattices etc., since everything was expanding uniformly, as was the space between them. This expansion occurred at the Creation of the Cosmos, as the verses you listed have shown.

It is commonly thought that the progressive redshift of light from distant galaxies is evidence that this universal expansion is still continuing. However, W. Q. Sumner in Astrophysical Journal 429:491-498, 10 July 1994, pointed out a problem. The maths indeed show that atoms partake in such expansion, and so does the wavelength of light in transit through space. This "stretching" of the wavelengths of light in transit will cause it to become redder. It is commonly assumed that this is the origin of the redshift of light from distant galaxies. But the effect on the atoms changes the wavelength of emitted light in the opposite direction. The overall result of the two effects is that an expanding cosmos will have light that is blue-shifted, not red-shifted as we see at present. The interim conclusion is that the cosmos cannot be expanding at the moment (it may be contracting).

Furthermore, as Arizona astronomer William Tifft and others have shown, the redshift of light from distant galaxies is quantised, or goes in "jumps". Now it is uniformly agreed that any universal expansion or contraction does not go in "jumps" but is smooth. Therefore expansion or contraction of the cosmos is not responsible for the quantisation effect: it may come from the light-emitting atoms themselves. If this is so, cosmological expansion or contraction will smear out any redshift quantisation effects, as the emitted wavelengths get progressively "stretched" or "shrunk" in transit. The final conclusion is that the quantised redshift implies that the present cosmos must be static after initial expansion. [Narlikar and Arp proved that a static matter-filled cosmos is stable against collapse in Astrophysical Journal 405:51-56 (1993)].

Therefore, as the heavens were expanded out to their maximum size, so were the earth, and the planets, and the stars. I assume that this happened before the close of Day 4, but I am guessing here. Following this expansion event, the cosmos remained static. (Barry Setterfield, September 25, 1998)

In 2002, he authored the article "Is the Universe Static or Expanding", in which he presents further research on this subject.

 

2013 Question: Is the universe static or expanding?

Setterfield: As far as the Hubble Law is concerned, all Hubble did was to note that there was a correspondence between the distance of a galaxy and its redshift. His Law did not establish universal expansion. That interpretation came shortly after as a result of Einstein's excursions with relativity and the various possible cosmologies it opened up. His work suggested a Doppler shift which required the redshift, z, to be multiplied by the speed of light, c. This then gave rise to the "recession velocity" of the galaxies through static space. Hubble was always cautious about using this device and expressed his misgivings even as late as the 1930's.

However, Friedmann and Lemaître came along with a different approach using a refinement of Einstein's equations in which the galaxies were static and the space between them was expanding, stretching light waves in transit to give a redshift. There are arguments which show that both these interpretations have their shortcomings, so for the serious student, the redshift cannot be used unequivocally to point to universal expansion.

Into this scenario, the work of William Tifft and others, starting in 1976, threw an additional factor. This was the evidence that the redshift is quantized, or goes in jumps. If the galaxies are racing away from each other, their recession velocities cannot go in jumps. It's like a car accelerating down the highway but only able to travel at speeds that are multiples of 5 miles per hour. It just cannot happen. If the fabric of space itself is the cause of the redshift, then it is equally impossible for space to expand in jumps. It is for this reason that mainstream astronomers try to discredit or reject this redshift quantization. My article, Zero Point Energy and the Red Shift, takes another approach, which places the cause of the redshift with every atomic emitter, whose emitted energy was lower when the ZPE strength was lower. The evidence suggests that the ZPE strength has increased with time.

For these reasons, it is legitimate to associate the redshift with distance but not with universal expansion. The work of Halton Arp suggests that there may be exceptions to that, but there is additional evidence that his approach may have deficiencies. It is this evidence that I would like to throw into the mix as well. I do this not to discredit Arp, but to show that there is an entirely different way of looking at this problem. It has been presented on a number of occasions, including in the NPA Conference Proceedings, in articles authored by Lyndon Ashmore. The first was in the Proceedings for 2010, pages 17-22, entitled "An explanation of a redshift in a static universe."

The basic proposition goes like this: There are hydrogen clouds more or less uniformly distributed throughout the cosmos. If the cosmos is expanding, these clouds should be getting further and further apart. Therefore, when we look back at the early universe, that is, at very distant objects, we should see the hydrogen clouds much closer together than in our own region of space. If the universe is static, the average distance between clouds should be approximately constant. We can tell when light has gone through a hydrogen cloud by the Lyman Alpha absorption lines it leaves in the spectrum of any given galaxy. The further away the galaxy, the more Lyman Alpha lines there are, as the light has gone through more hydrogen clouds. These lines are at positions in the spectrum that correspond to each cloud's redshift. Thus, for very distant objects, there is a whole suite of Lyman Alpha lines starting at a redshift corresponding to the object's distance and then at reducing redshifts until we come to our own galactic neighborhood. This suite of lines is called the Lyman Alpha forest.

The testimony of these lines is interesting. From a redshift, z = 6 down to a redshift of about z = 1.6 the lines get progressively further apart, indicating expansion. From about z = 1.6 down to z = 0 the lines are essentially at a constant distance apart. This indicates that the cosmos is now static after an initial expansion, which ceased at a time corresponding to a redshift of z = 1.6.

These data are a problem for Arp because, if the quasars are relatively nearby objects, there are nowhere near enough hydrogen clouds between us and the quasar to give the forest of lines that we see. This tends to suggest that the quasars really are at great distances.

Again, the original article regarding this is still appropriate.

 

Question:  What about the new evidence that the rate of expansion of the universe is accelerating as reported in recent science articles?

Setterfield:  The evidence for an accelerating expansion comes from the fact that distant objects are further away than anticipated, given a non-linear and steeply climbing redshift/distance curve.

Originally, the redshift/distance relation was accepted as linear until objects were discovered with a redshift greater than 1. On the linear relation, this meant that the objects were receding with speeds greater than light, a no-no in relativity. So a relativistic correction was applied that makes the relationship start curving up steeply at great distances. This has the effect of making large redshift changes over short distances. Now it is found that these objects are indeed farther away than this curve predicts, so they have to drag in accelerating expansion to overcome the hassle.
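To make the two readings concrete, here is a minimal numerical sketch (my own illustration, not taken from Setterfield's papers) comparing the naive linear reading, v = cz, with the special-relativistic Doppler formula that is commonly substituted once z exceeds 1. The Hubble constant of 70 km/s/Mpc and the simple distance estimate d = v/H0 are assumed round figures, used only to show the shape of the relationship.

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s per Mpc (round figure)

def v_linear(z):
    # Naive linear reading of the redshift: v = c * z (exceeds c once z > 1).
    return C_KM_S * z

def v_relativistic(z):
    # Special-relativistic Doppler reading: v/c = ((1+z)^2 - 1) / ((1+z)^2 + 1).
    return C_KM_S * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

for z in (0.5, 1.0, 1.5, 3.0):
    print(f"z = {z}: linear v = {v_linear(z):8.0f} km/s (d = {v_linear(z) / H0:5.0f} Mpc); "
          f"relativistic v = {v_relativistic(z):6.0f} km/s (d = {v_relativistic(z) / H0:5.0f} Mpc)")

For z greater than 1 the linear reading gives recession speeds above c, while the relativistic reading keeps the speed below c and assigns much smaller distances to the same redshift; that is the steep upward curvature of the corrected relation described above.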

The basic error is to accept the redshift as due to an expansion velocity. If the redshift is NOT a velocity of expansion, then these very distant objects are NOT travelling faster than light, so the relativistic correction is not needed. Given that point, it becomes apparent that if a linear redshift relation is maintained throughout the cosmos, then we get distances for these objects that do not need to be corrected. That is just what my current redshift paper does. (January 12, 1999)

 

Light from distant stars

Question:  If light velocity has not always been a constant "c", why can it be mathematically shown to be constant even for distant starlight? (i.e., wavelength (m) x frequency (1/s) = 2.99792 x 10^8 m/s). This equation is consistent even when the variables are changed; light speed (velocity) is constant.

Setterfield: Recent aberration experiments have shown that distant starlight from remote galaxies arrives at earth with the same velocity as light does from local sources. This occurs because the speed of light depends on the properties of the vacuum. If we assume that the vacuum is homogeneous and isotropic (that it has the same properties uniformly everywhere at any given instant), then light-speed will have the same value right throughout the vacuum at any given instant. The following proposition will also hold. If the properties of the vacuum are smoothly changing with time, then light speed will also smoothly change with time right throughout the cosmos.

On the basis of experimental evidence from the 1920's, when light speed was measured as varying, this proposition maintains that the wavelengths of emitted light do not change in transit when light-speed varies, but the frequency (the number of wave-crests passing per second) will. The frequency of light in a changing c scenario is proportional to c itself. Imagine light of a given earth laboratory wavelength emitted from a distant galaxy where c was 10 times the value it has now. The wavelength would be unchanged, but the emitted frequency would be 10 times greater as the wave-crests are passing 10 times faster. As light slowed in transit, the frequency also slowed, until, when it reaches earth at c now, the frequency would be the same as our laboratory standard, as would the wavelength. I trust that this reply answers your question. (June 15, 1999)
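To put numbers on the scenario just described, here is a small sketch (my own illustration, assuming, as the answer above does, that the wavelength is fixed in transit while the frequency tracks c; the 500 nm line is a hypothetical example).

C_NOW = 2.99792458e8   # present speed of light, m/s
WAVELENGTH = 500e-9    # hypothetical emitted wavelength, m (unchanged in transit)

for factor in (10, 5, 1):   # c at emission as a multiple of c now
    c_emit = factor * C_NOW
    f_emit = c_emit / WAVELENGTH       # frequency at emission, higher when c was higher
    f_received = C_NOW / WAVELENGTH    # frequency once the light has slowed to c now
    print(f"c at emission = {factor:2d} x c_now: "
          f"emitted f = {f_emit:.3e} Hz, received f = {f_received:.3e} Hz")

Whatever the value of c at emission, the received product of wavelength and frequency always works out to c now, which is why the measured value appears constant.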

 

Missing Mass and Dark Matter?

Question:  Tell me, what are your views on what has been called "dark matter" in space?  In your view, does that matter have enough mass to make the universe collapse?  Perhaps more importantly, what is it exactly?

Setterfield:  Thank you for your questions; let me see what I can do to answer them. Firstly, about missing mass. There are two reasons why astronomers have felt that mass is "missing". The first is due to the apparent motion of galaxies in clusters. That motion in the outer reaches of the clusters appears to be far too high to allow the clusters to hold together gravitationally unless there is extra mass somewhere. However, that apparent motion is all measured by the redshift of light from those galaxies. Tifft's work with the quantised redshift indicates that redshift is NOT a measure of galaxy motion at all. In fact, motion washes out the quantisation. On that basis, the whole foundation on which the missing mass in galaxy clusters is built is faulty, as there is very little motion of galaxies in clusters at all. The second area where astronomers have felt that mass is "missing" is due to the behaviour of rotation rates of galaxies as you go out from their centres. The behaviour of the outer regions of galaxies is such that there must be more mass somewhere there to keep the galaxies from flying apart, as their rotation rate is so high. It seems that there might indeed be a large envelope of matter in transparent gaseous form around galaxies that could account for this discrepancy. Alternatively, some astronomers are checking for Jupiter-sized solid objects in the halo of our galaxy that could also account for the problem. A third possibility is that the Doppler equations on which everything is based may be faulty at large distances due to changes in light speed. I am looking into this.

Question: I have no objection to the existence of either dark matter or dark energy, but I am skeptical that either has been "discovered."

Setterfield:  The entire discussion about dark energy and dark matter is based, at least partly, on a mis-interpretation of the redshift of light from distant galaxies. But before I elaborate on that, let me first specifically address a recent article in Science (“Breakthrough of the Year: Illuminating the Dark Universe” Charles Seife, Science 302, 2038-2039, 2003) espousing the proof for dark energy. I find this amazing in view of the recent data that has come in from the European Space Agency's (ESA) X-ray satellite, the XMM-Newton. According to the ESA News Release for December 12, 2003 the data reveal "puzzling differences between today's clusters of galaxies and those present in the Universe around seven thousand million years ago." The news release says that these differences "can be interpreted to mean that 'dark energy' which most astronomers now believe dominates the universe simply does not exist." In fact, Alain Blanchard of the Astrophysical Observatory in the Pyrenees says the data show "There were fewer galaxy clusters in the past". He goes on to say that "To account for these results you have to have a lot of matter in the Universe and that leaves little room for dark energy."  

In other words, we have one set of data which can be interpreted to mean that dark energy exists, while another set of data suggests that it does not exist. In the face of this anomaly, it may have been wiser for Science to have remained more circumspect about the matter. Unfortunately, the scientific majority choose to run with an interpretation they find satisfying, and tend to marginalize all contrary data. I wonder if the European data may not have been published by the time that Science went to print on the issue. Thus there may be some embarrassment by these later results, and they may be marginalized as a consequence. 

The interpretation being placed on the WMAP observations of the microwave background is that it is the "echo" of the Big Bang event, and all other data is interpreted on this basis.  But Takaaki Musha from Japan pointed out in an article in Journal of Theoretics (3:3, June/July 2001)  that the microwave background may well be the result of the Zero Point Energy allowing the formation of virtual tachyons in the same way that it allows the formation of all other kinds of virtual particles. Musha demonstrated that all the characteristics of the microwave background can be reproduced by this approach. In that case, the usual interpretation of the WMAP data is in error and the conclusions drawn from it should be discarded and a different set of conclusions deduced.  

However, the whole dark matter/dark energy discussion points to the fact that anomalies exist which current theory did not anticipate. Let me put both of these problems in context. Dark matter became a necessity because groups of galaxies seemed to have individuals within the group which appeared to be moving so fast that they should have escaped long ago if the cosmos was 14 billion years old. If the cosmos was NOT 14 billion years old but, say, only 1 million or 10,000 years old, the problem disappears. However, another pertinent answer has also been elaborated by Tifft and Arp and some other astronomers, but the establishment does not like the consequences and tends to ignore them as a result. The answer readily emerges when it is realized that the rate of movement of the galaxies within a cluster is measured by their redshift. The implicit assumption is that the redshift is a measure of galaxy motion. Tifft and Arp pointed out that the quantized redshift meant that the redshift was not intrinsically a measure of motion at all but had another origin. They pointed out that, in the centre of the Virgo cluster of galaxies, where motion would be expected to be greatest under gravity, the ACTUAL motion of the galaxies smeared out the quantization. If actual motion does this, then the quantized redshift exhibited by all the other galaxies further out means that there is very little actual motion of those galaxies at all. This lack of motion destroys the whole basis of the missing matter argument, and the necessity for dark matter then disappears. The whole missing matter or dark matter problem only arises because it is in essence a mis-interpretation of what the redshift is all about.

In a similar fashion, the whole dark energy issue (or the necessity for the cosmological constant) is also a redshift problem. It arises because there is a breakdown in the redshift/distance relationship at high redshifts. That formula is based on the redshift being due to the galaxies racing away from us with high velocities. Distant supernovae found up to 1999 proved to be even more distant than the redshift/distance formula indicated. This could only be accounted for on the prevailing paradigm if the controversial cosmological constant (dark energy) were included in the equations to speed up the expansion of the universe with time. The fact that these galaxies were further away than expected was taken as proof that the cosmological constant (dark energy) was acting. Then in October this year, Adam Riess announced that there were 10 supernovae at even higher redshifts whose distances were CLOSER than the redshift relation predicted. With a deft twist, this was taken as further proof for the existence of dark energy. The reasoning went that up to a redshift of about 1.5 the universal expansion was slowing under gravity. Then at that point, the dark energy repulsion became greater than the force of gravity, and the expansion rate progressively speeded up.

What in fact we are looking at is again the mis-interpretation of the redshift. Both dark matter and dark energy hinge on the redshift being due to cosmological expansion. If it has another interpretation, and my latest paper being submitted today [late December, 2003] shows it does, then the deviation of distant objects from the standard redshift/distance formula is explicable, and the necessity for dark energy also disappears. Furthermore, the form of the actual equation rests entirely on the origin of the Zero Point Energy. The currently accepted formula can be reproduced exactly as one possibility of several. But the deviation from that formula shown by the observational evidence is entirely explicable without the need for dark energy or dark matter or any other exotic mechanism.

I trust that this gives you a feel for the situation.  

 

More on Dark Matter and Dark Energy

Question: I have just recently come across your work and feel like I have come home!  For years I have squared the accounts of the bible and science (being from a science background myself) by passing Genesis off (in particular) as 'allegorical' and always telling Christian friends that we must not ignore the scientific evidence.  I was uncomfortable with all the scientific explanations, though, and suspected things were not quite right.  I was aware that data was data and that the scientific explanations throughout history had been modified (in some cases drastically) in the light of new evidence.  Hence, the Newtonian view becoming the Einsteinian view.  So I was left with...'well, this is what I currently believe until such time as new evidence indicates otherwise'.
 
My first change of thought came when I found this work by Frank Colijn which he terms 'Bible Code Research':-

 
http://members.home.nl/frankcolijn/frankcolijn/indexEN.htm
 
There is just this website, he hasn't written any books.  I am not the world's best mathematician but I couldn't find anything wrong with this work even though I was initially skeptical about it.  In any case, it changed my thinking on Genesis and that I should take it more literally. 
 
I had always felt that there was something inherently wrong with carbon dating and other dating methods...BUT...I still couldn't square Genesis and the bible with my (limited) understanding of light travelling from distant stars.  The bible gives the age of creation as 6000 years but physics and astronomy (as I understood it) measured in billions of years due to light travelling large distances through space.
 
Just recently I became unemployed and this has been a blessing for me because I have had time to research into all the questions I had.  Initially, I came across Louis Savain:-
 

http://www.rebelscience.org/
 
He doesn't mince his words but he set me off thinking about time and that the concepts of spacetime and the 'Big Bang' occurring 13.7 billion years ago were wrong. 
 
From there I went to Julian Barbour and his work, particularly his idea that time is just an illusion, detailed in his book 'The End of Time'. 

 
http://www.platonia.com/
 
Both of these completely transformed my thinking, i.e. what we actually observe is change not time, time is just an abstract concept that comes out of the change, it can't be referenced against itself i.e. no 4th dimension or spacetime as such and all that really matters is 'now', there is in a sense no past and no future just the immediate past folding into the immediate future, so time travel etc. is all nonsense.
BUT...I still couldn't see how this would explain 6000 years from the bible and because my thinking had been changed with respect to the standard scientific viewpoint I was prepared to consider anything and everything.  I searched through all the stuff I could find including this Christian website by Dr. Richard Kent but still couldn't find an explanation for the light from distant stars.

 
http://www.freechristianteaching.org/
 
I began to wonder whether what we see as change in 3D space was not always constant, i.e. the speed of light was not always constant.  I went down several paths including the redshift, dark matter/dark energy,  cosmic microwave background radiation, missing mass and I then came across some interesting work by Paul Marmet:-
 
 http://www.newtonphysics.on.ca/hydrogen/index.html
http://www.spaceandmotion.com/cosmic-microwave-background-radiation.htm
 
This basically gave an explanation of dark matter, the redshift and missing mass as being due to molecular hydrogen in space and that the CMBR was simply the perfect blackbody radiation emitted by hydrogen at 3K and not the redshifted Planck radiation emitted by the Big Bang.  The work suggested no Big Bang and strong evidence for the steady state model of the universe.  Elation at first!!!! and then disappointment because a steady state model of the universe might suggest no creation in the past at least in the timescales we are talking about. 
 
I then read that someone had found a 'tiny change' in the speed of light but the way it was described didn't seem to be significant, i.e. 13.7 billion years was still just that 13.7 billion years.  So I started to trawl for any information and that is when (Praise the lord!) I came across your work.  I feel like I am the happiest man alive because this has been with me and troubled me for some time.  BLESS YOU!!!!

Setterfield: It is a delight that you have found us, and many thanks for the encouragement it gives to us as well! You have had quite a journey, but the Lord has brought you to some sort of haven of rest. We trust that you will be able to absorb what we have put on our website and share it with others as the Lord directs.

As for your specific questions, I would like to deal with the missing mass/dark matter problem. This really is a deep problem for gravitational astronomers. Currently there is no assured answer from that direction except to modify the laws of gravity and so on - all very messy procedures. However, what must be realized is that there are two aspects to the problem. The first that was noted was the missing mass in galaxy clusters. What was happening was that the redshifts of light from these galaxy clusters were being measured and interpreted as a Doppler shift, like the drop in pitch of a police siren passing you. The redshift on this basis was taken to mean that galaxies are racing away from us. When the average redshift of a cluster of galaxies was taken, and then the highest redshifts of galaxies in the cluster were compared with that, then, if the redshift was due to motion, these high redshift galaxies were moving so fast they would have escaped the cluster long ago and moved out on their own. As a result, since they are obviously still part of the cluster, there must be a lot of additional material in the cluster to give a gravitational field strong enough to hold the cluster together. The whole chain of reasoning depends on the interpretation of the redshift as galaxy motion.

In contra-distinction to this interpretation of the redshift, the Zero Point Energy approach indicates that the redshift is simply the result of a lower ZPE strength back in the past which affects atomic orbits so they emit redder light (probably in quantum steps as indicated by Tifft and others). On that basis, the redshift has nothing to do with velocity, so the galaxy clusters have no mass "missing" and hence no "dark matter." We can see the actual motion of galaxies in the center of clusters where the motion actually washes out the quantization, so we are on the right track there. On this approach, the clusters of galaxies are "quiet" without high velocities that pull them apart. 

The second reason for the search for "missing mass/dark matter" is the actual rotation of individual galaxies themselves. As we look at our solar system, the further out a planet is from the sun, the slower it moves around the sun, because of the laws of gravity. Astronomers expected the same result when they looked at the velocities of stars in the disks of galaxies as they moved around the center of those galaxies. That is, they expected the stars in the outer part of the disk would move at a slower pace than stars nearer the center. They don't! The stars in the outer part of the disk orbit with about the same velocity as those closer in. For this to happen on the gravitational model, there must be some massive amounts of material in and above/below the disk that we cannot see to keep the rotation rate constant. Since we have been unable to find it, we call it missing matter or dark matter. That is the conundrum. Another suggestion is that the laws of gravity need amending.
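For readers who want to see the gravitational expectation in numbers, here is a rough sketch (my own; the enclosed central mass is a hypothetical round value of about fifty billion solar masses) of the Keplerian fall-off that was expected, which the observed flat rotation curves do not follow.

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_CENTRAL = 1.0e41  # hypothetical enclosed central mass, kg (~5e10 solar masses)
KPC = 3.086e19      # one kiloparsec in metres

for r_kpc in (2, 5, 10, 20):
    # Keplerian orbital speed v = sqrt(G*M/r), converted to km/s
    v = math.sqrt(G * M_CENTRAL / (r_kpc * KPC)) / 1000.0
    print(f"r = {r_kpc:2d} kpc: expected Keplerian speed = {v:5.0f} km/s")

With these numbers the expected speed drops from roughly 330 km/s at 2 kpc to about 100 km/s at 20 kpc, whereas observed disk stars keep a roughly constant speed out to large radii; that is the discrepancy described above.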

However, if all galaxies were formed by a plasma process, as indicated by Anthony Peratt's simulations, then the problem is solved. In the formation of galaxies by plasma physics, Peratt has shown that the miniature galaxies produced in the laboratory spin in exactly the same way as galaxies do in outer space. No missing mass is needed; no modification to the laws of gravity. All that is needed is to apply the laws of plasma physics to the formation of galaxies, not the laws of gravitation. These laws involve electro-magnetic interaction, not gravitational interaction. The two are vastly different and the electro-magnetic interactions are up to 10^39 times more powerful than gravity! Things can form much more quickly by this mechanism as well, which has other implications in the origins debate.
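The "up to 10^39" figure can be checked with a back-of-envelope calculation (my own, using standard constants): the ratio of the electrostatic to the gravitational attraction between an electron and a proton is independent of their separation, and comes out at about 2 x 10^39.

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_E = 8.988e9    # Coulomb constant, N m^2 C^-2
Q = 1.602e-19    # elementary charge, C
M_E = 9.109e-31  # electron mass, kg
M_P = 1.673e-27  # proton mass, kg

# Both forces fall off as 1/r^2, so the separation cancels in the ratio.
ratio = (K_E * Q ** 2) / (G * M_E * M_P)
print(f"electric force / gravitational force (electron-proton) ~ {ratio:.1e}")  # about 2.3e39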

Well, that should give you some things to digest for the moment. You will have other questions. Get back to me when you want answers, and I will do my best.

 

April, 2012 -- Missing dark matter. What do you think?

http://www.eso.org/public/news/eso1217/

Setterfield: Many thanks for keeping us informed on this; it is appreciated!

The whole scenario with dark matter comes from the rotation rate of galaxy arms. If it is gravitational - the dark matter must exist. If galaxies rotate under the influence of plasma physics, nothing is missing - it is all behaving exactly as it should. Unfortunately gravitational astronomers have painted themselves into a corner with this one and cannot escape without losing a lot of credibility. The plasma physicists are having a good laugh.

 

Black Holes

Question:  I have been reading that the popular opinion is that there are black holes at the center of most galaxies, including ours.  What is your thought on exactly what these incredibly massive things are?  And, where DOES light go when it gets trapped? If matter cannot be created or destroyed, simply changed, what happens to light in a black hole?  Any radiation, for that matter.

Setterfield:  Original answer -- You ask where the radiation etc disappears to when it gets trapped inside a black hole. The quick answer is that it becomes absorbed into the fabric of space, as that fabric is made up of incredibly tiny Planck particle pairs that are effectively the same density as the black holes. The trapped radiation cannot travel across distances shorter than those between the Planck particles, and so gets absorbed into the vacuum. This brings us to your question of what a black hole really is. In essence, it is a region of space that has the same density as the Planck particle pairs that make up the fabric of space. It seems that these centres of galaxies may have formed as an agglomeration of Planck particle pairs at the inception of the universe and acted as the nucleus around which matter collected to form galaxies.

February, 2012 -- Since the above answer was written, there have been some developments in astronomy as a result of plasma physics and experiments conducted in plasma laboratories. In the laboratories, a spinning disk with polar jets can be readily produced as a result of electric currents and magnetic fields. The principle on which they operate is basically the same as the spinning disk of your electricity meter. What we see with "black holes" is the spinning disk and the polar jets. We measure the rate of spin of the disk and, from that, gravitational physics allows us to deduce that there is a tremendous mass in the center of the disk. This is where the idea of a black hole came from. However, with plasma physics, the rate of spin of the disk is entirely dependent upon the strength of the electric current involved. Plasma physics shows that there is a galactic circuit of current for every galaxy. As a consequence of this, and the experimental evidence we have, plasma physics does not have the necessity for a black hole to explain what we are seeing. It can all be explained by electric currents and magnetic fields. In reviewing this evidence, I came to the conclusion that plasma physics, applied to astronomy, is the better way to go, and that gravitational astronomy has caused itself some considerable problems in this and other areas. For example, in the case of our own galaxy, and the suspected black hole at its center, it has been deduced from the motion of objects in the vicinity that the mass of the black hole is such that it should cause gravitational lensing of objects in the area. We have searched for gravitational lensing for over two decades and none has been found, despite extensive searches. This indicates that it is not a black hole we have in the center of our galaxy.

Keep in mind no one has ever seen a black hole. It is simply a necessary construct if gravity is to be considered the main force forming and maintaining objects in the universe. However, gravity is a very weak force, especially when compared to electromagnetism, which is what we have seen out there in the form of vast plasma filaments.

 

Quasars and Pulsars

Question:  Looked over your papers, enjoyed every bit of it and how it ties everything together theoretically and observationally. How do Pulsars and Quasars apply to changing speed of light? Could they offer some way to compare current speed vs speed in the past? Maybe even compare orbital time in past to current atomic clocks?

Setterfield: Astronomically, many of the important distant Quasars turn out to be the superluminous, hyper-active centres of galaxies associated with a supermassive black hole. There is a mathematical relationship between the size of the black hole powering the quasar and the size of the nucleus of any given galaxy. As a result of this relationship, there is a debate going on as to whether the quasar/black hole came first and the galaxy formed around it, or vice versa. Currently, I tend to favour the first option. Nevertheless, whichever option is adopted, the main effect of dropping values of c on quasars is that, as c decays, the diameter of the black hole powering the quasar will progressively increase. This will allow progressive engulfment of material from the region surrounding the black hole and so should feed their axial jets of ejected matter. This is the key prediction from the cDK model on that matter.

As far as pulsars are concerned, there recently has been some doubt cast on the accepted model for the cause of the phenomenon that we are observing. Until a short time ago, it was thought that the age of a pulsar could be established from the rotation period of the object that was thought to give rise to the precisely timed signal. However, recent work on two fronts has thrown into confusion both the model for the age of the pulsar based on its rotation period, and also the actual cause of the signal, and hence what it is that is rotating. Until these issues can be settled, it is difficult to make accurate predictions from the cDK model.

 

Pulsars

Question:  I've been asked why distant pulsars don't show a change in their rotation rate if the speed of light is slowing.  Do you have an explanation?  If you have any information on whether their observed rate of change should change with a slowing of c I would appreciate it.

Setterfield: Thanks for your question about pulsars. There are several aspects to this. First of all, pulsars are not all that distant; the furthest that we can detect are in small satellite galaxies of our own Milky Way system. Second, because the curve of lightspeed is very flat at those distances compared with the very steep climb closer to the origin, the change in lightspeed is small. This means that any pulsar slowdown rate originating with the changing speed of light is also small. The third point is that the mechanism that produces the pulses is in dispute, as some theories link the pulses with magnetic effects separate from the star itself, so that the spin rate of the host star may not be involved. Until this mechanism is finally determined, the final word about the pulses and the effects of lightspeed cannot be given. If you have Cepheid variables in mind, a different situation exists. The pulsation that gives the characteristic curve of light intensity variation is produced by the behaviour of a thin segment near the star's surface layer. The behaviour of this segment of the star's outer layers is directly linked with the speed of light. This means that any slow-down effect of light in transit will already have been counteracted by a change in the pulsation rate of this layer at the time of emission. The final result will be that any given Cepheid variable will appear to have a constant period for the light intensity curve, no matter where it is in space and no matter how long we observe it.

Comment: Dr. Tom Bridgman's website shows, in calculus, the metric for canceling out the redshift. Even more important is the metric he shows for a kinematic argument using pulsar periods out to a distance of 1,000 parsecs from earth. According to Tom, c-decay predicts very observable effects in pulsar periods which are not seen, so c-decay is not taking place within 1,000 parsecs of the earth.

Setterfield: Yes. As far as pulsars are concerned within 1000 parsecs of the earth, there should only be very, very minimal effects due to a changing c. In fact, at that distance, the change in c would be so small as to not even give rise to any redshift effects at all. The first redshift effect comes at the first quantum jump which occurs beyond the Magellanic Clouds. This shows that the rate at which c is climbing across our galaxy is very, very small. Consequently, any effect with pulsars within our galaxy is going to be negligible.

Recently the reason for the spindown rate in pulsars has been seriously questioned, and new models for pulsar behaviour will have to be examined. See, for example, New Scientist 28 April, 2001, page 28. There are other references which I do not have on hand at the moment, but which document other problems as well. Until a viable model of pulsar behaviour and the cause of the pulsars themselves has been finalised, it is difficult to make any predictions about how the speed of light is going to affect them. I am curious about the metric that Bridgman is using, because I show the change in wavelength over the wavelength (which is the definition of the redshift) is, in fact, occurring.

Response: Dr. William T. (Tom) Bridgman is using kinematics and a calculus formula to shoot down your c-decay metric. He even uses your predicted c-decay curve based upon the past 300 years or so of measurements for c and incorporates this into the pulsar changes that he claims should be observed. If correct, then c-decay did not take place within the past 300 years, or even out to distances of 1 kiloparsec.

Setterfield:  The curve that Tom is using for pulsar analysis is outdated and no longer applicable to the situation, as it dates from the 1987 paper. The recent work undergoing review indicates a very different curve which includes a slight oscillation. This oscillation means that there is very little variability in light speed out to the limits of our galaxy. Thus, even if the rest of his math were correct, and the behaviour of pulsars were known accurately, Tom's conclusions are not valid. I suggest that he read some of the more recent material before attempting fireworks.

Response: If c-decay has predictable and observable side effects like pulsar timing changes, changes in eclipses of Jupiter's and Saturn's moons, and also changes in stellar occultations in the ecliptic, these should be rigorously tested to see if they support or deny c-decay. At the moment, Tom's metric denies c-decay as published, based upon the 1987 c-decay curve.

Setterfield: I have examined pulsar timing changes in detail and responded some years ago to that so-called "problem". Any observed changes are well within the limits predicted by this variable light-speed (Vc) model just introduced. One hostile website used the pulsar argument for a while until I pointed out their conceptual error to a friend and then it was deleted and has not appeared again since. I suspect that Tom is making the same error.

 The changes in the eclipse times of Jupiter's and Saturn's moons have in fact been used as basic data in the theory. Stellar occultations along the ecliptic have also been used as data based on the work of Tom van Flandern who studied the interval 1955-1981. He then came to the conclusion: "the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing down with respect to dynamical phenomena..." In this case the eclipse times and the occultations were used to build the original model and as such are not in conflict with it.

Question: A little further down in this Talk Origins link, there is a discussion of how pulsars invalidate any creationist model. Do you have a response to it?

Setterfield: Thanks for your question and thanks for drawing my attention to this website again. I have been mentioned many times on that site, not usually in a good context! This article is based on the majority model for pulsars among astronomers today. If that model is not correct, then neither are the conclusions which that website has drawn. So we need to examine the standard model in detail.  On that model we have a rapidly rotating, small and extremely dense neutron star which sends out a flash like a lighthouse every time it rotates. Rotation times are extremely fast on this model. In fact, the star is only dense enough to hold together under the rapid rotation if it is made up of neutrons. Those two facts alone present some of the many difficulties for astronomers holding to the standard model. Yet despite these difficulties, the model is persisted with and patched up as new data comes in. Let me explain.

First, a number of university professionals have difficulty with the concept of a star made entirely of neutrons, or neutronium. In the lab, neutrons decay into a proton and an electron in something under 14 minutes. Atom-like collections of two or more neutrons disrupt almost instantaneously. Thus the statement has been made that "there can be no such entity as a neutron star. It is a fiction that flies in the face of all we know about elements and their atomic nuclei." [D.E. Scott, Professor & Director of Undergraduate Program & Assistant Dept. Head & Director of Instructional Program, University of Massachusetts/Amherst]. He, and a number of other physicists and engineers, remain unconvinced by the quantum/relativistic approach that theoretically proposed the existence of neutronium. They point out that it is incorrect procedure to state that neutronium must exist because of the pulsars' behavior; that is circular reasoning. So the existence of neutronium itself is the first problem for the model.

Second, there is the rapid rate of rotation. For example, X-ray pulsar SAX J1808.4-3658 flashes every 2.5 thousandths of a second, or about 24,000 revs per minute. This goes way beyond what is possible even for a neutron star. In order for the model to hold, this star must have matter even more densely packed than neutrons, so "strange matter" was proposed. Like neutronium, strange matter has never been actually observed, so at this stage it is a non-falsifiable proposition. So the evidence from the data itself suggests that we have the model wrong. If the model is changed, we do not need to introduce either the improbability of neutronium or the even worse scenario of strange matter.

Third, on 27th October, 2010, in "Astronomy News," a report from NRAO in Socorro, New Mexico was entitled "Astronomers discover most massive neutron star yet known." This object is pulsar PSR J1614-2230. It "spins" some 317 times per second and, like many pulsars, has a proven companion object, in this case, a white dwarf. This white dwarf orbits in just under 9 days. The orbital characteristics and data associated with this companion shows that the neutron star is twice as massive as our sun. And therein lies the next problem. Paul Demorest from NRAO in Tucson stated: "This neutron star is twice as massive as our Sun. This is surprising, and that much mass means that several theoretical models for the internal composition of neutron stars are now ruled out. This mass measurement also has implications for our understanding of all matter at extremely high densities and many details of nuclear physics." In other words, here is further proof that the model is not in accord with reality. Rather than rethink all of nuclear physics and retain the pulsar model, it would be far better to retain nuclear physics and rethink what is happening with pulsars.

In rethinking the model, the proponents of one alternative that has gained some attention point out some facts about the pulse characteristics that we observe in these pulsars. (1) The duty cycle is typically 5%, so that the pulsar flashes like a strobe light. The duration of each pulse is only 5% of the length of time between pulses. (2) Some individual pulses vary considerably in intensity. In other words, there is not a consistent signal strength. (3) The pulse polarization indicates that it has come from a strong magnetic field. Importantly, all magnetic fields require electric currents to generate them. These are some important facts. Item (2) alone indicates that the pulsar model likened to a lighthouse flashing is unrealistic. If it were a neutron star with a fixed magnetic field, the signal intensity should be constant. So other options should be considered. Taken together, all these characteristics are typical of an electric arc (lightning) discharge between two closely spaced objects. In fact, electrical engineers have known for many years that all these characteristics are typical of relaxation oscillators. In other words, in the lab we can produce these precise characteristics in an entirely different way. This suggests a different, and probably more viable, model. Here is how D.E. Scott describes it:

"A relaxation oscillator can consist of two capacitors (stars) and a non-linear resistor (plasma) between them. One capacitor charges up relatively slowly and, when its voltage becomes sufficiently high, discharges rapidly to the other capacitor (star). The process then begins again. The rate of this charge/discharge phenomenon  depends on the strength of the input (Birkeland) current, the capacitances (surface areas of the stars) and the breakdown voltage of the (plasma) connection. It in no way depends on the mass or density of the stars.

In the plasma that surrounds a star (or planet) there are conducting paths whose sizes and shapes are controlled by the magnetic field structure of the body. Those conducting paths are giant electric transmission lines and can be analyzed as such. Depending on the electrical properties of what is connected to the ends of the electrical transmission lines, it is possible for pulses of current and voltage (and therefore power) to oscillate back and forth from one end of the line to the other. The ends of such cosmic transmission lines can both be on the same object (as occurs on earth) or one end might be on one member of a closely spaced binary pairs of stars and the other end on the other member of the pair similar to the "flux tube" connecting Jupiter to its moon Io.

In 1995, an analysis was performed on a transmission line system having the properties believed to be those of a pulsar atmosphere. Seventeen different observed properties of pulsar emissions were produced in these experiments. This seminal work by Peratt and Healy strongly supports the electrical transmission line explanation of pulsar behavior." 

The paper outlining these proposals was entitled "Radiation Properties of Pulsar Magnetospheres: Observation, Theory and Experiment" and appeared in Astrophysics and Space Science 227(1995):229-253. Another paper outlining a similar proposal using a white dwarf star and a nearby planet instead of a double star system was published by Li, Ferrario and Wickramasinghe. It was entitled "Planets Around White Dwarfs" and appeared in the Astrophysical Journal 503:L151-L154 (20 August 1998). Figure 1 is pertinent in this case. Another paper by Bhardwaj and Michael, entitled the "Io-Jupiter System: A Unique Case of Moon-Planet Interaction" has a section devoted to exploring this effect in the case of Binary stars and Extra-Solar Systems. An additional study by Bhardwaj et al also appeared in Advances in Space Research vol 27:11 (2001) pp. 1915-1922. The whole community of plasma physicists and electrical engineers in the IEEE accept these models or something similar for pulsars rather than the standard neutron star explanation.
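To show why a circuit of this general kind naturally produces short, strobe-like pulses, here is a toy simulation (my own sketch of the generic relaxation-oscillator idea, not Peratt and Healy's transmission-line analysis; all component values are arbitrary). A capacitor charges slowly from a steady current and dumps its charge rapidly once the breakdown voltage of the connection is exceeded.

CAPACITANCE = 1.0   # arbitrary units
I_CHARGE = 1.0      # steady charging current
V_BREAK = 100.0     # breakdown voltage of the plasma "switch"
V_RESET = 1.0       # voltage remaining after a discharge
PULSE_LEN = 2       # nominal duration of each discharge, in time steps

voltage = 0.0
pulse_times = []
for t in range(500):
    voltage += I_CHARGE / CAPACITANCE   # slow, steady charging
    if voltage >= V_BREAK:              # breakdown: rapid discharge gives a pulse
        pulse_times.append(t)
        voltage = V_RESET

period = pulse_times[1] - pulse_times[0]
print(f"pulse period ~ {period} steps, duty cycle ~ {PULSE_LEN / period:.1%}")

With these arbitrary values the pulses recur with a duty cycle of a few percent, the same order as the roughly 5% observed, and the period depends only on the current, the capacitance and the breakdown voltage, not on the mass or density of the bodies involved.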

Well, where is this heading? The question involved the slow-down in the speed of light in the context of pulsars and their "rotation rate." If pulsars are not rotating neutron stars at all, but rather involve a systematic electrical discharge in a double star or star and planet system with electric currents in a plasma or dusty disk, then the whole argument breaks down. In fact, if the electric discharge model is followed, then the paper on "Reviewing a Plasma Universe with Zero Point Energy" is extremely relevant. The reason is that an increasing ZPE not only slows the speed of light, but also reduces voltages and electric current strengths. When that is factored into the plasma model for pulsars, the rate of discharge seen from earth will remain constant, as the slow-down of light cancels out the initially faster rate of discharge in the pulsar system when currents were higher.

I hope that this answers your question in a satisfactory manner.  

Response: Thanks for your response.  I guess they would say that neutron stars aren't like atoms in that gravitational forces are significant, so we shouldn't judge one by the other.  But I take your point that really it's just concocting a plausible theory that fits with their worldview, and thus doesn't necessarily have any connection with what's really happening.

Setterfield: Thanks for your further question. It is true that they try to overcome the difficulties with the thin surface of protons and electrons...etc. However, this is an entirely theoretical concept for which there is no physical evidence. The physical evidence we have comes from the existence of protons, neutrons and electrons in atoms and atomic nuclei. When we consider the known elements, even the heavy man-made elements, we find that, in order to hold a group of neutrons together in a nucleus, an almost equal number of proton-electron pairs is required. The stable nuclei of the lighter elements contain approximately equal numbers of neutrons and protons. This gives a neutron/proton ratio of 1. The heavier nuclei contain a few more neutrons than protons, but the absolute limit is close to 1.5 neutrons per proton. Nuclei that differ significantly from this ratio spontaneously undergo radioactive decay that changes the ratio so that it falls within the required range. Groups of neutrons are therefore not stable by themselves. That is the hard data that we have. If theory does not agree with that data, then there is something wrong with the theory. Currently, the theoretical approach that allows neutronium to exist flies in the face of atomic reality. It has only been accepted so that otherwise anomalous phenomena like pulsars can be explained by entrenched concepts. It is these concepts that need updating, as well as the theoretical approaches that support them. It is only as we adopt concepts and theories that are rooted in reality and hard data that we get closer to the scientific truth of a situation. It is for this reason that the motto on our website reads "Letting data lead to Theory." I trust that you appreciate the necessity for that.

For more discussion on pulsars, see Pulsars and Problems

The Fine Structure Constant

Setterfield: In Physical Review Letters published on 27 August 2001 there appeared a report from a group of scientists in the USA, UK and Australia, led by Dr. John K. Webb of the University of New South Wales, Australia. That report indicated that the fine-structure constant, α, may have changed over the lifetime of the universe. The Press came up with stories that the speed of light might be changing as a consequence. However, the change that has been measured is only one part in 100,000 over a distance of 12 billion light-years. This means that the differences from the expected measurements are very subtle. Furthermore, the complicated analysis needed to disentangle the effect from the data left some, like Dr. John Bahcall from the Institute for Advanced Study in Princeton, N.J., expressing doubts as to the validity of the claim. This is further amplified because all the measurements and analyses have been done at only one observatory, and may therefore be the result of equipment aberration. Other observatories will be involved in the near future, according to current plans. This may clarify that aspect of the situation.

The suggested change in the speed of light in the Press articles was mentioned because light-speed, c, is one of the components making up the fine-structure constant. In fact K. W. Ford (Classical and Modern Physics, Vol. 3, p. 1152, Wiley 1974), among others, gives the precise formulation as α = e^2/(2εhc), where e is the electronic charge, ε is the electric permittivity of the vacuum, and h is Planck's constant. In this quantity α, the behaviour of the individual terms is important. For that reason it is necessary to make sure that the ε term is specifically included instead of merely implied, as some formulations do. Indeed, I did not specifically include it in the 1987 Report as it played no part in the discussion at that point. To illustrate the necessity of considering the behaviour of individual terms, the value of light-speed, c, has been measured as decreasing since the 17th or 18th century. Furthermore, while c was measured as declining during the 20th century, Planck's constant, h, was measured as increasing. However, deep space measurements of the quantity hc revealed this to be invariant over astronomical time. The data obtained from these determinations can be found tabulated in the 1987 Report The Atomic Constants, Light, and Time by T. Norman and B. Setterfield. Since c has been measured as declining, h has been measured as increasing, and hc shown to be invariant, the logical conclusion from this observational evidence is that h must vary precisely as 1/c at all times. If there is any change in α, this observational evidence indicates it can only originate in the ratio e^2/ε. This quantity is discussed in detail in the paper "General Relativity and the Zero Point Energy."
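As a quick numerical check (my own, using standard present-day values for the constants), the formulation above can be written so that the two groupings discussed here are explicit: α = (e^2/ε) x (1/(2hc)).

Q = 1.602176634e-19     # electronic charge, C
EPS = 8.8541878128e-12  # electric permittivity of the vacuum, F/m
H = 6.62607015e-34      # Planck's constant, J s
C = 2.99792458e8        # speed of light, m/s

alpha = (Q ** 2 / EPS) * (1.0 / (2.0 * H * C))
print(f"alpha ~ {alpha:.6e}  (1/alpha ~ {1.0 / alpha:.3f})")  # about 1/137.036

If the product hc is invariant (h varying as 1/c, as argued above), then any genuine change in α would have to come from the e^2/ε factor.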

Question:  Here's a link to a new result about the variable speed of  light.  They find no effect, to a more stringent level than Murphy & Webb.

http://arXiv.org/abs/astro-ph/0311280 

Setterfield: In answer to the question, the quantity being measured here is the fine structure constant, alpha. This is made up of 4 other atomic quantities in two groups of two each. The first is the product of Planck's constant, h, and the speed of light, c. Since the beginning of this work in the 1980's it has been demonstrated that the product hc is an absolute constant. That is to say it is invariant with all changes in the properties of the vacuum. Thus, if h goes up, c goes down in inverse proportion. Therefore, the fine structure constant, alpha, cannot register any changes in c or h individually, and, as we have just pointed out, the product hc is also invariant.

As a consequence, any changes in alpha must come from the other ratio involved, namely the square of the electronic charge, e, divided by the permittivity of free space. Our work has shown that this ratio is constant in free space. However, in a gravitational field this ratio may vary in such a way that alpha increases very slightly. If there are any genuine changes in alpha, this is the source of the change. The errors in measurement in the article that raised the query cover the predicted range of the variation on our approach. The details of what we predict and a further discussion on this matter are found in our article in the Journal of Theoretics entitled "General Relativity and the Zero Point Energy."

 

Supernovas

  1987A

Comment:  By the way, there's a pretty easy way to demonstrate that the speed of light has been constant for about 160,000 years using Supernova 1987A.

Setterfield: It has been stated on a number of occasions that Supernova 1987A in the Large Magellanic Cloud (LMC) has effectively demonstrated that the speed of light, c, is a constant. There are two phenomena associated with SN1987A that lead some to this erroneous conclusion. The first of these features was the exponential decay in the relevant part of the light-intensity curve. This gave sufficient evidence that it was powered by the release of energy from the radioactive decay of cobalt 56 whose half-life is well-known. The second feature was the enlarging rings of light from the explosion that illuminated the sheets of gas and dust some distance from the supernova. We know the approximate distance to the LMC (about 165,000 to 170,000 light years), and we know the angular distance of the ring from the supernova. It is a simple calculation to find how far the gas and dust sheets are from the supernova.

Consequently, we can calculate how long it should take light to get from the supernova to the sheets, and how long the peak intensity should take to pass.

The problem with the radioactive decay rate is that this would have been faster if the speed of light was higher. This would lead to a shorter half-life than the light-intensity curve revealed. For example, if c were 10 times its current value (c now), the half-life would be only 1/10th of what it is today, so the light-intensity curve should decay in 1/10th of the time it takes today. In a similar fashion, it might be expected that if c was 10c now at the supernova, the light should have illuminated the sheets and formed the rings in only 1/10th of the time at today's speed. Unfortunately, or so it seems, both the light intensity curve and the timing of the appearance of the rings (and their disappearance) are in accord with a value for c equal to c now. Therefore it is assumed that this is the proof needed that c has not changed since light was emitted from the LMC, some 170,000 light years away.

However, there is one factor that negates this conclusion for both these features of SN1987A. Let us accept, for the sake of illustration, that c WAS equal to 10c now at the LMC at the time of the explosion. Furthermore, according to the c decay (cDK) hypothesis, light-speed is the same at any instant right throughout the cosmos due to the properties of the physical vacuum. Therefore, light will always arrive at earth with the current value of c now. This means that in transit, light from the supernova has been slowing down. By the time it reaches the earth, it is only travelling at 1/10th of its speed at emission by SN1987A. As a consequence the rate at which we are receiving information from that light beam is now 1/10th of the rate at which it was emitted. In other words we are seeing this entire event in slow-motion. The light-intensity curve may have indeed decayed 10 times faster, and the light may indeed have reached the sheets 10 times sooner than expected on constant c. Our dilemma is that we cannot prove it for sure because of the slow-motion effect. At the same time this cannot be used to disprove the cDK hypothesis. As a consequence other physical evidence is needed to resolve the dilemma. This is done in "Zero Point Energy and the Redshift" as presented at the NPA Conference, June 2010, where it is shown that the redshift of light from distant galaxies gives a value for c at the moment of emission.
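The offset being described can be put in simple figures. The following sketch, in Python, uses the illustrative factor of 10 from the paragraph above and an approximate present-day half-life for cobalt 56; it is only meant to show why the two effects cancel at the observer:

    # Slow-motion compensation for SN1987A (illustrative numbers only).
    c_ratio = 10.0                    # assumed ratio of c at emission to c now

    half_life_now  = 77.0             # days, approximate Co-56 half-life at today's c
    half_life_then = half_life_now / c_ratio   # atomic decay runs faster in proportion to c

    # Light slows to its present speed in transit, so the whole event is
    # received stretched out by the same factor of c_ratio.
    observed_half_life = half_life_then * c_ratio

    print(observed_half_life)         # 77.0 days: indistinguishable from constant c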

By way of clarification, at NO time have I ever claimed that the apparent superluminal expansion of quasar jets verifies higher values of c in the past. The slow-motion effect discussed earlier rules that out absolutely. The standard solution to that problem is accepted here. The accepted distance of the sheets of matter from the supernova is also not in question. That is fixed by angular measurement. What IS affected by the slow motion effect is the apparent time it took for light to get to those sheets from the supernova, and the rate at which the light-rings on those sheets grew.

Additional Note: In order to clarify some confusion on the SN1987A issue and light-speed, let me give another illustration that does not depend on the geometry of triangles etc. Remember, distances do not change with changing light-speed. Even though it is customary to give distances in light-years (LY), that distance is fixed even if light-speed  is changing.

To start, we note that it has been established that the distance from SN1987A to the sheet of material that reflected the peak intensity of the light burst from the SN is 2 LY, a fixed distance. Imagine that this distance is subdivided into 24 equal light-months (LM). Again the LM is a fixed distance. Imagine further that as the peak of the light burst from the SN moved out towards the sheet of material, it emitted a pulse in the direction of the earth every time it passed a LM subdivision. After 24 LM subdivisions the peak burst reached the sheet.

 Let us assume that there is no substantive change in light-speed from the time of the light-burst until the sheet becomes illuminated. Let us further assume for the sake of illustration, that the value of light-speed at the time of the outburst was 10c now. This means that the light-burst traversed the DISTANCE of 24 LM or 2 LY in a TIME of just 2.4 months. It further means that as the travelling light-burst emitted a pulse at each 1 LM subdivision, the series of pulses were emitted 1/10th month apart IN TIME.

 However, as this series of pulses travelled to earth, the speed of light slowed down to its present value. It means that the information contained in those pulses now passes our earth-bound observers at a rate that is 10 times slower than the original event. Accordingly, the pulses arrive at earth spaced one month apart in time. Observers on earth assume that c is constant since the pulses were emitted at a DISTANCE of 1 LM apart and the pulses are spaced one month apart in TIME.
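Putting the same illustration into figures (the 10c value, the 2 light-year distance, and the light-month subdivisions are the assumed quantities used above):

    # Pulse-spacing illustration for the SN1987A light burst (assumed values from the text).
    c_ratio     = 10.0   # c at emission relative to c now
    distance_LM = 24     # 24 light-months = 2 light-years, a fixed distance

    spacing_at_emission = 1.0 / c_ratio                    # months between pulses as emitted
    spacing_at_earth    = spacing_at_emission * c_ratio    # stretched by the slow-down in transit

    print(distance_LM * spacing_at_emission)  # 2.4 months: actual travel time to the sheet
    print(spacing_at_earth)                   # 1.0 month: what earth-bound observers record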

 The conclusion is that this slow-motion effect makes it impossible to find the value of c at the moment of emission by this sort of process. By a similar line of reasoning, superluminal jets from quasars can be shown to pose just as much of a problem on the variable c model as on conventional theory. The standard explanation therefore is accepted here.

Question:  I've been following the dialog regarding the issue of the value of c at the location of supernova 1987A. I'm curious, how does one account for the constant gamma ray energies from known transitions (i.e. the same as in the earth's frame) and the neutrino fluxes (with the right kind of neutrinos at the expected energy) if c is significantly larger? Wasn't one of the first signals of this event a neutrino burst?

 For example, if positron annihilation gammas were observed in the event and the value of the speed of light at 1987A was 10c, wouldn't you expect a hundredfold increase in the gamma energy from .511MeV to 51.1MeV?

Setterfield: Thanks for the question; it's an old one. You have assumed in your question that other atomic constants have in fact remained constant as c has dropped with time. This is not the case. In our 1987 Report, Trevor Norman and I pointed out that a significant number of other atomic constants have been measured as changing in lock-step with c during the 20th century. This change is in such a way that energy is conserved during the cDK process. All told, our Report lists 475 measurements of 11 other atomic quantities by 25 methods in dynamical time.

This has the consequence that in the standard equation [E = mc²] the energy E from any reaction is unchanged (within a quantum interval - which is the case in the example under discussion here). This happens because the measured values of the rest-mass, m, of atomic particles reveal that they are proportional to 1/c². The reason why this is so, is fully explored in Reviewing the Zero Point Energy. Therefore in reactions from known transitions, such as occurred in SN1987A with the emission of gamma rays and neutrinos, the emission energy will be unchanged for a given reaction. I trust this reply is adequate. (1/21/99)
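A one-line check of that claim, assuming (as stated above) that rest mass varies as 1/c²:

    # If rest mass m varies as 1/c^2 while c varies, E = m c^2 for a given
    # transition is unchanged, so emission energies keep their present values.
    m_now, c_now = 9.10938e-31, 2.99792458e8   # electron rest mass (kg), c (m/s)

    k = 10.0                 # hypothetical factor by which c was higher at emission
    c_then = k * c_now
    m_then = m_now / k**2    # rest mass correspondingly lower

    print(m_now * c_now**2)    # ~8.19e-14 J (0.511 MeV)
    print(m_then * c_then**2)  # the same energy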

Comments:  [from a Professor of Astronomy] The Decreasing Speed of Light Model (DSLM) has not only to take into account photons, i.e., radiation, but has to deal with matter also. It was the neutrinos, now believed to have mass, that first gave us the signal of Supernova 1987A. The star had collapsed and crushed protons and electrons into neutrons at a distance of 170,000 or so Light Years. The folks that espouse the Mature Creation Model (MCM) have to have the history of the explosion be "written into" the radiation and now the matter stream that came to us from the direction of the Large Magellanic Cloud. Of course, in the MCM, this never really "happened". It just appears that it happened. To the DSLM people, the neutrinos would give them an increasing rest mass (or rest energy if you like) as we go back into history. (Of course this affects all matter. If we believe in the conservation of energy, where has all this energy gone?) The neutrinos would have been decreasing in rest mass as they traveled through space. Thus they would be radiating. Since neutrinos permeate the universe in fantastic numbers, this radiation should be detectable. But what we would detect would be a continuum of frequencies, not a single-temperature, 3 degree, cosmic background radiation. If the speed of light enabled light waves to travel 10 billion light years in a day or so, this means light would be traveling 100,000 times faster. The rest mass would be 10 billion times larger! How do they deal with this? One other problem is that the radiation carries momentum varying with the speed of light.

Setterfield: It really does appear as if the Professor has not done his homework properly on the cDK (or DSLM) issue that he discussed in relation to Supernova 1987A. He pointed out that neutrinos gave the first signal that the star was exploding, and that neutrinos are now known to have mass. He then goes on to state (incorrectly) that neutrinos would have an increasing rest mass (or rest energy) as we go BACK into history. He then asks "if we believe in the conservation of energy, where has all this energy gone?" He concluded that this energy must have been radiated away and so should be detectable. Incredibly, the Professor has got the whole thing round the wrong way. If he had read our 1987 Report, he would have realised that the observational data forced us to conclude that with cDK there is also conservation of energy. As the speed of light DECREASES with time, the rest mass will INCREASE with time. This can be seen from the Einstein relation [E = mc²]. For energy E to remain constant, the rest mass m will INCREASE with time in proportion to [1/c²] as c is dropping. This INCREASE in rest-mass with time has been experimentally supported by the data as listed in Table 14 of our 1987 Report. There is thus no surplus energy to radiate away at all, contrary to the Professor's suggestion, and the rest-mass problem that he poses will also disappear.

 In a similar way, light photons would not radiate energy in transit as their speed drops. According to experimental evidence from the early 20th century when c was measured as varying, it was shown that wavelengths, [w], of light in transit are unaffected by changes in c. Now the speed of light is given by [c = fw] where [f] is light frequency. It is thus apparent that as [c] drops, so does the frequency [f], as [w] is unchanged. The energy of a light photon is then given by [E = hf] where [h] is Planck's constant. Experimental evidence listed in Tables 15A and 15B in the 1987 Report as well as the theoretical development shows that [h] is proportional to [1/c] so that [hc] is an absolute constant. This latter is supported by evidence from light from distant galaxies. As a result, since [h] is proportional to [1/c] and [f] is proportional to [c], then [E = hf] must be a constant for photons in transit. Thus there is no extra radiation to be emitted by photons in transit as light-speed slows down, contrary to the Professor's suggestion, as there is no extra energy for the photon to get rid of.
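The corresponding arithmetic for a photon in transit, using the relations just quoted (wavelength w fixed, f = c/w, and h proportional to 1/c), can be sketched as follows:

    # Photon energy in transit: w is fixed, f = c/w tracks c, h varies as 1/c,
    # so E = h*f stays constant and no energy needs to be radiated away.
    h_now, c_now = 6.62607015e-34, 2.99792458e8
    w = 500e-9                                  # an arbitrary visible wavelength (m)

    def photon_energy(h, c):
        return h * (c / w)

    k = 10.0                                    # assumed higher c at emission
    print(photon_energy(h_now, c_now))          # ~4.0e-19 J
    print(photon_energy(h_now / k, c_now * k))  # identical value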

 I hope that this clarifies the matter. I do suggest that the 1987 Report be looked at in order to see what physical quantities were changing, and in what way, so that any misunderstanding of the real situation as given by observational evidence can be avoided.  "Reviewing the Zero Point Energy" should also be of help.

 

1997ff

Question:  What about SN1997ff?

Setterfield:  There has been much interest generated in the press lately over the analysis by Dr. Adam G. Riess and Dr. Peter E. Nugent of the decay curve of the distant supernova designated as SN 1997ff.  In fact, over the past few years, a total of four supernovae have led to the current state of excitement. The reason for the surge of interest is the distances at which these supernovae are found when compared with their redshifts, z.  According to the majority of astronomical opinion, the relationship between an object's distance and its redshift should be a smooth function. Thus, given a redshift value, the distance of an object can be reasonably estimated.

One way to check this is to measure the apparent brightness of an object whose intrinsic luminosity is known. Then, since brightness falls off by the inverse square of the distance, the actual distance can be determined. For very distant objects something of exceptional brightness is needed.  There are such objects that can be used as 'standard candles', namely supernovae of Type Ia.  They have a distinctive decay curve for their luminosity after the supernova explosion, which allows them to be distinguished from other supernovae.
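As a minimal sketch of that step (the luminosity and flux below are arbitrary placeholder values; only the inverse-square relation matters):

    import math

    # Distance from a 'standard candle': flux falls off as 1/d^2, so
    # d = sqrt(L / (4 * pi * b)) for intrinsic luminosity L and measured flux b.
    L = 1.0e36      # assumed intrinsic luminosity (W) - placeholder value
    b = 1.0e-15     # assumed flux measured at earth (W/m^2) - placeholder value

    d_metres = math.sqrt(L / (4 * math.pi * b))
    print(d_metres / 9.4607e15, "light-years")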

In this way, the following four supernovae have been examined as a result of photos taken by the Hubble Space Telescope. SN 1997ff at z = 1.7; SN 1997fg at z = 0.95; SN 1998ef at z = 1.2; and SN 1999fv also at z = 1.2. The higher the redshift z, the more distant the object should be. Two years ago, the supernovae at z = 0.95 and z = 1.2 attracted attention because they were FAINTER and hence further away than expected. This led to two main competing theories among cosmologists.  First, that the faintness was due to dust, or second, that the faintness was due to Einstein’s cosmological constant – a kind of negative gravity expanding the universe progressively faster than if the expansion was due solely to the Big Bang.

The cosmological constant has been invoked sporadically since the time of Einstein as the answer to a number of problems. It is sometimes called the "false vacuum energy." However, in stating this, it should be pointed out, as Haisch and others have done, that the cosmological constant has nothing to do with the zero-point energy. This cosmological constant, lambda, is frequently used in various models of the Big Bang, to describe its earliest moments.  It has been a mathematical device used by some cosmologists to inflate the universe dramatically, and then have lambda drop to zero. It now appears that it would be helpful if lambda maintained its prominence in the history of the cosmos to solve more problems. Whether it is the real answer is another matter.  Nevertheless, it is a useful term to include in some cosmological equations to avoid embarrassment.

At this point, the saga takes another turn. Recent work reveals that the object SN1997ff, the most distant of the four, turns out to be BRIGHTER than expected for its redshift value. This event has elicited the following comments from Adrian Cho in New Scientist for 7 April, 2001, page 6 in an article entitled "What's the big rush?" 

Two years ago, two teams of astronomers reported that distant stellar explosions known as type Ia supernovae, which always have the same brightness, appeared about 25 per cent dimmer from Earth than expected from their red shifts. That implied that the expansion of the Universe has accelerated.  This is because the supernovae were further away than they ought to have been if the Universe had been expanding at a steady rate for the billions of years since the stars exploded. But some researchers have argued that other phenomena might dim distant supernovae. Intergalactic dust might soak up their light, or type Ia supernovae from billions of years ago might not conform to the same standard brightness they do today.

This week's supernova finding seems to have dealt a severe blow to these [alternative] arguments [and supports] an accelerating Universe. The new supernova's red shift implies it is 11 billion light years away, but it is roughly twice as bright as it should be. Hence it must be significantly closer than it would be had the Universe expanded steadily. Neither dust nor changes in supernova brightness can easily explain the brightness of the explosion.

Dark energy [the action of the cosmological constant, which acts in reverse to gravity] can, however. When the Universe was only a few billion years old, galaxies were closer together and the pull of their gravity was strong enough to overcome the push of dark energy and slow the expansion. A supernova that exploded during this period would thus be closer than its red shift suggests. Only after the galaxies grew farther apart did dark energy take over and make the Universe expand faster.  So astronomers should see acceleration change to deceleration as they look farther back in time. ‘This transition from accelerating to decelerating is really the smoking gun for some sort of dark energy,’ Riess says.

Well, that is one option now that dust has been eliminated as a suspect. However, the answer could also lie in a different direction to that suggested above, as there is another option well supported by other observational evidence. For the last two decades, astronomer William Tifft of Arizona has pointed out repeatedly that the redshift is not a smooth function at all but is, in fact, going in "jumps", or is quantised. In other words, it proceeds in a steps and stairs fashion. Tifft's analyses were disputed, so in 1992 Guthrie and Napier did a study intending to disprove the effect.  They ended up agreeing with Tifft.  The results of that study were themselves disputed, so Guthrie and Napier conducted an exhaustive analysis on a whole new batch of objects. Again, the conclusions confirmed Tifft's contention. The quantisations of the redshift that were noted in these studies were on a relatively small scale, but analysis revealed a basic quantisation that was at the root of the effect, of which the others were simply higher multiples. However, this was sufficient to indicate that the redshift was probably not a smooth function at all. If these results were accepted, then the whole interpretation of the redshift, namely that it represented the expansion of the cosmos by a Doppler effect on light waves, was called into question. This becomes apparent since there was no good reason why that expansion should go in a series of jumps, any more than cars on a highway should travel only in multiples of, say, 5 kilometres per hour.

However, a periodicity on a much larger scale has also been noted for very distant objects. In 1990, Burbidge and Hewitt reviewed the observational history of these preferred redshifts. Objects were clumping together in preferred redshifts across the whole sky.  These redshifts were listed as z = 0.061, 0.30, 0.60, 0.96, 1.41, 1.96, 2.63 and 3.45 [G. Burbidge and A. Hewitt, Astrophysical Journal, vol. 359 (1990), L33]. In 1992, Duari et al. examined 2164 objects with redshifts ranging out to z = 4.43 in a statistical analysis [Astrophysical Journal, vol. 384 (1992), 35], and confirmed these redshift peaks listed by Burbidge and Hewitt.  This sequence has also been described accurately by the Karlsson formula. Thus two phenomena must be dealt with, both the quantisation effect itself and the much larger periodicities which mean objects are further away than their redshifts would indicate.  Apparent clustering of galaxies is due to this large-scale periodicity.

A straightforward interpretation of both the quantisation and periodicity is that the redshift itself is going in a predictable series of steps and stairs on both a small as well as a very large scale. This is giving rise to the apparent clumping of objects at preferred redshifts. The reason is that on the flat portions of the steps and stairs pattern, the redshift remains essentially constant over a large distance, so many objects appear to be at the same redshift.  By contrast, on the rapidly rising part of the pattern, the redshift changes dramatically over a short distance, and so relatively few objects will be at any given redshift in that portion of the pattern.
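A toy numerical sketch may make the point clearer. The staircase used below is entirely artificial (it is not the Karlsson sequence or Tifft's quantisation values); it simply shows that objects spread evenly in distance pile up at the plateau redshifts of any steps-and-stairs relation:

    import numpy as np

    def staircase_z(d, step=1.0, riser=0.1, height=0.3):
        # flat for most of each step, then a rapid climb over the last 'riser' fraction
        n = np.floor(d / step)
        frac = (d % step) / step
        climb = np.clip((frac - (1 - riser)) / riser, 0, 1)
        return height * (n + climb)

    d = np.random.uniform(0, 10, 100_000)   # objects spread uniformly in distance
    z = staircase_z(d)

    counts, edges = np.histogram(z, bins=60)
    print(counts.max(), counts.min())       # heavily populated plateau bins versus sparse risers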

These considerations are important in the current context.  As noted above by Riess, the objects at z = 0.95 and z = 1.2 are systematically faint for their assumed redshift distance.  By contrast, the object at z = 1.7 is unusually bright for its assumed redshift distance. Notice that the object at z = 0.95 is at the middle of the flat part of the step according to the redshift analyses, while z = 1.2 is right at the back of the step, just before the steep climb. Consequently, for their redshift value, they will be further away in distance than expected, and will therefore appear fainter. By contrast, the object at z = 1.7 is on the steeply rising part of the pattern. Because the redshift is changing rapidly over a very short distance astronomically speaking, the object will be assumed to be further away than it actually is and will thus appear to be brighter than expected.

When interpreted this way, these recent results support the existence of the redshift periodicities noted by Burbidge and Hewitt, statistically confirmed by Duari et al., and described by the Karlsson formula. In so doing, they also imply that redshift behaviour is not a smooth function, but rather goes in a steps and stairs pattern.  If this is accepted, it means that the redshift is not a measure of universal expansion, but must have some other interpretation.

The research that has been conducted on the changing speed of light over the last 10 years has been able to replicate both the basic quantisation picked up by Tifft, and the large-scale periodicities that are in evidence here. On this research, the redshift and light-speed are related effects that mutually derive from changing vacuum conditions. The evidence suggests that the vacuum zero-point energy (ZPE) is increasing as a result of the initial expansion of the cosmos. It has been shown by Puthoff [Physical Review D 35:10 (1987), 3266] that the ZPE is maintaining all atomic structures throughout the universe. Therefore, as the ZPE increases, the energy available to maintain atomic orbits increases. Once a quantum threshold has been reached, every atom in the cosmos will assume a higher energy state for a given orbit, and so the light emitted from those atoms will be bluer than that emitted in the past.  Therefore, as we look back to distant galaxies, the light emitted from them will appear redder in quantised steps. At the same time, since the speed of light is dependent upon vacuum conditions, it can be shown that a smoothly increasing ZPE will result in a smoothly decreasing light-speed. Although the changing ZPE can be shown to be the result of the initial expansion of the cosmos, the fact that the quantised effects are not "smeared out" also indicates that the cosmos is now essentially static, just as Narliker and Arp have demonstrated [Astrophysical Journal vol. 405 (1993), 51]. In view of the dilemma that confronts astronomers with these supernovae, this observational alternative may be worth serious examination.

 

The Doppler Shift

 Comment:  One of the basic principles of quantum mechanics is that light does in fact exist in discrete packets of energy. The very word "quantum" is a reference to this, the whole area of physics dealing with photons is named after it.

Using the analogy given by Setterfield of cars moving along a road: The doppler effect measured from the cars as they go by gives a continuous kind of change. But, if sound existed in quanta the way that light does, then you would be measuring the doppler effect as changing in the discrete units that the redshift is measured as changing in.

Setterfield:  If a redshift is due to motion, it is not quantized; it is a smooth smearing depending on the velocity. There is not a quantized effect. We see this smooth smearing in velocities of stars, rotation of stars, and the movement of stars within galaxies. What happens is that the wavelength of the photon is stretched or contracted due to the velocity at the time of emission. Therefore the fact that the photon originates as a discrete packet of energy is irrelevant. The point that needs to be made is that in distant galaxies, photons of light have been emitted with a range of wavelengths. All these wavelengths are simultaneously shifted in jumps by the same fraction, and it is these jumps which Tifft has noted, and which are not indicative of a Doppler shift. So some other effect is at work.

 

Slow motion effects?

Comment:  If lightspeed was much faster in the past, but has decayed substantially since then, then astronomers would observe "slow motion" effects, the effect being stronger the more distant the light source from earth. However, such "slow motion" effects are simply not observed.

Setterfield:  Many atomic processes run faster in proportion to c, but the slow motion effect at the point of reception is also operating. The combined overall result is that everything seems to proceed at the same unchanged pace.

For example, Supernova 1987A involves the radioactive decay of Cobalt 56. Since this is an atomic process, this was decaying much faster when the speed of light was higher. However, this is exactly offset by the slow motion effect when that signal comes to earth. As a consequence, the decay of Cobalt 56 in Supernova 1987A seems to have the same half-life then as it has now. Therefore no astronomical evidence for a slow motion effect in atomic processes would be expected.

 

What About the Big Bang?

Question:  What is your view regarding the Big Bang?

Setterfield:  When George Gamow (1949) proposed a beginning to the expansion which was the accepted explanation for the redshift, Hoyle derisively called it the "Big Bang."  Nothing exploded or banged in Gamow's idea, though; there was simply a hot, dense beginning from which everything expanded.  Hoyle put up a different model in which matter was continuously being originated.

The major objection to Gamow's "Big Bang" was that it was too close to the biblical model of creation!  In fact, even up to 1985 the Cambridge Atlas of Astronomy (pp. 381, 384) referred to the "Big Bang" as the 'theory of sudden creation.'  On 10th August, 1989, Dr. John Maddox, editor of Nature, declared the Big Bang philosophically unacceptable in an article entitled “Down with the Big Bang” (Nature, vol. 340, p. 425).

So what is the difference between the BB and the biblical model? Essentially naturalism.  The Bible says God did it and secular science says it somehow just happened.  However the Bible does say, twelve times, that God stretched the heavens.  So that expansion is definitely in the Bible.

In the meantime, the steady state theory was effectively disproved by quasars being discovered in the mid '60's. 

It is interesting that Gamow's two young colleagues, Ralph Alpher and Robert Herman, predicted in a 1948 letter to Nature that the CBR temperature should be found to be about 5 deg K [Nature, 162, 774]; later estimates ranged as high as 10-15 deg K.  The background radiation was found, but at a much lower temperature.

The glitch for the BB right now in terms of a continuously expanding universe is the presence of the quantized redshift.  On May 5th and 7th of this year, two abstracts in astrophysics were published.*  In the second one, Morley Bell writes: "Evidence was presented recently suggesting that [galaxy] clusters studied by the Hubble Key Project may contain quantized intrinsic redshift components that are related to those reported by Tifft.  Here we report the results of a similar analysis using 55 spiral ... and 36 Type Ia supernovae galaxies.  We find that even when many more objects are included in the sample there is still clear evidence that the same quantized intrinsic redshifts are present..."

This is an indication that the redshift might not be a Doppler effect.  Back in 1929, Hubble himself had some doubts about connecting the two.  The redshift number is obtained by comparing the light received with the standard laboratory measurements for whatever element is being seen.  So this is simply a difference between two measurements and there is no intrinsic connection between the redshift measurement and velocity.  In fact it has been noted that at the center of the Virgo cluster the high velocity of the galaxies wipes out the quantization of the redshift.

Interestingly, when the Bible speaks of God stretching the heavens, eleven of the twelve times the verb translated "stretched" is in the past completed tense.

There is another possible cause for the quantized redshift, which is explored in Zero Point Energy and the Redshift.  Essentially, it has to do with the increasing zero point energy, as measured by Planck's constant.  The statistics regarding that are in his earlier major paper, here.

As far as the cosmic background radiation is concerned, the first point to note is that its temperature is much lower than that initially suggested by Gamow.  This leaves some room for doubt as to whether or not this is the effect he was predicting.  It is possible that even in the creation scenario, with rapid expansion from an initial super-dense state, this effect would still be seen.  However, there is another explanation which has surfaced in the last year or so.  It has to do with the zero point energy. In the same way that the ZPE allows for the manifestation of virtual particle pairs, such as electron/positron pairs, it is also reasonable to propose that the ZPE would allow the formation of virtual tachyon pairs. Calculation has shown that the energy density of the radiation from these tachyon pairs has the same profile as that of the cosmic microwave background.  So that is another point to consider.

As far as the abundance of elements is concerned, there are several anomalies existing with current BB theory.  Gamow originally proposed the building up of all elements in his BB scenario.  However, a blockage was found in that process which was difficult to overcome.  As a result, Hoyle, Burbidge, and Fowler examined the possibility of elements being built up within stars, which later exploded and spread them out among the intergalactic clouds.  This proposal is now generally accepted.  However, it leads to a number of problems, such as the anomalous abundance of iron in the regions around quasars.  There are other problems, such as anomalous groups of stars near the center of our galaxy and the Andromeda galaxy which have high metal abundances.  Because the current approach uses the production of these elements in the first generation of stars, this process obviously takes time.  As a consequence, these anomalous stars can only be accounted for by collisions or cannibalization of smaller star systems by larger galaxies.  There is another possible answer, however, which creationists need to consider.  It has been shown that in a scenario with small black holes, such as Planck particles, the addition of a proton to such a system, or to a system with a negatively-charged black hole, makes the build-up of elements possible.  The blockage that element formation in stars was designed to overcome is eliminated, because neutrons can also be involved, as can alpha particles.  As a consequence, it is possible to build up elements other than hydrogen and helium in the early phases of the universe.  This may happen in local concentrations where negative black holes formed by the agglomeration of Planck Particles exist.  Stars that form in those areas would then have apparently anomalous metal abundances.  Importantly, in this scenario, if Population II stars were formed on Day 1 of Creation Week, as suggested by Job 38, and Population I stars were formed half-way through Day 4, as listed in Genesis 1:14, we have a good reason why the Population I stars contain more metals than the Population II stars, as this process arising from the agglomeration of black holes would have had time to act.

Regarding distance and age of galaxies:  There is no argument that distance indicates age.  This should be stated first.  It was this very fact that the further out we looked, the more different the universe appeared, that caused the downfall of the Steady State model.  Specifically, it was the discovery of quasars that produced this result.  Importantly, quasars become brighter and more numerous the further out we look.  At a redshift of around 1.7, their numbers and luminosity appear to plateau.  Closer in from 1.7, their numbers and intensity decline.  Furthermore, a redshift of 1.7 is also an important marker for the formation of stars.  We notice starburst galaxies of increasing activity as we go back to a redshift of 1.7.  At that point, star formation activity appears to reach a maximum where young, hot blue stars of Population I are being formed (and are therefore emitting higher amounts of UV radiation).  At a redshift of 1.7, the redshift/distance relationship also undergoes a major change.  The curve steepens considerably as we go back from that point.  This has caused current BB thinking to introduce some extra terms into its equations which indicate that the rate of expansion of the cosmos has speeded up as we come forward in time from that point.  On the lightspeed scenario, a redshift of 1.7 effectively marks the close of Creation Week, and so all of these above effects would be expected to taper off after that time.

* Astrophysics, abstract astro-ph/0305060

and Astrophysics, abstract astro-ph/0305112

 

Wandering Planets? 

Question: I have become aware of your work through Chuck Missler.  I've read most of your articles in the web site.  Very interesting and provocative.  I like how your theories are consistent with science and the Bible.

In the Earth History article, you mention several catastrophes caused primarily by radioactive heating, etc. but mention that comets, etc. could also have impacted the Earth, Mars, etc.  Do you have any thoughts on theories about Mars and/or Venus orbits being different in the past and that past orbit patterns might have caused near passbys, and thus, you might have crustal tides.  Chuck alludes to this being a possibility and I have run across some internet sites that talk about this as well.

Setterfield: The idea of the changes in orbits initially came from the work done by Velikovsky.  While his data collection is remarkable, I disagree with his conclusions.  For example, he talks about planet Venus in a wandering orbit, causing some catastrophes here on earth.  One of the problems with this, as with all similar proposals, is that the planet Venus lies in the plane of the ecliptic -- the same as all other planets except Pluto -- while comets and similar wandering bodies move above and below the plane of the ecliptic.  Furthermore, as the orbit of a wandering Venus eventually stabilized around the sun, it would still be highly elliptical.  By contrast, the orbit of Venus is the most nearly circular of all the planetary orbits.  Consequently, it is the least likely to have been a wanderer in the past. 

A similar statement may be made about Mars, although its orbit is somewhat more elliptical than the Earth's, but not nearly as elliptical as Pluto's.

 

Fuzzy Space?

Question:  In the current issue of SCIENCE (vol. 301, 29 August 2003, pages 116 and 117), they say, "--space and time aren't smooth at the smallest scale, but fuzzy and foaming. Now that tantalizing prospect has vanished in a puff of gamma rays." I don't understand, from their observations, how they can conclude the above.
How does the changing speed of light over time impinge upon the results discussed in this article? If you can find the time I would greatly appreciate your expert opinion of this article.

Setterfield:  As far as the article and your question are concerned, the point that is being made is that if space is ‘fuzzy’ and foam-like at the smallest dimensions, this is going to interfere in some ways with the images that we are receiving from very distant galaxies and quasars.  In other words, these images should get progressively more fuzzy themselves.  The initial experiments to prove this turned out to disprove the proposition, and thereby have thrown one section of science into some confusion.  On this basis, the conclusions of the article are, in fact, correct.

The changing speed of light over time is not affected by this.  One reason for this is that I do not necessarily consider space to be ‘fuzzy’ or foam-like.  In an earlier presentation of this work, I made use of Planck Particle Pairs in a way which may have implied agreement with this idea of fuzziness.  However, a re-examination of the basis on which this was used has revealed that the presence of Planck Particle Pairs early in the history of the cosmos does not necessarily imply that they are connected with or form the fuzziness other astronomers and cosmologists have been referring to.  In fact, over time, the number of Planck Particle Pairs would have dramatically reduced.  As these positive and negative pairs have recombined, they have sent out pulses of radiation, which is what the Zero Point Energy is.  Thus, because the Planck Particle Pairs are decreasing, the ZPE is building up as a direct result.  As the ZPE builds up, the speed of light drops.

 

Massive Bombardment?

Question: I have not seen a satisfactory accounting (from creationist sources) for the heavy cratering in the solar system, either during the six days, or after.  Even distributed over the pre-flood years, the massive bombardment would destroy as much as the flood itself. 

Setterfield: You state that you have not seen a satisfactory accounting for the heavy cratering in the early solar system from creationist sources.  Some of this is addressed in my article, A Brief Stellar History, in part II.

 

Has Dark Matter Been Proven?

Question: I just came across an article stating proof of dark matter has been found, and was wondering if you've had time to look into it. This, if true, would seem to be a direct contradiction to variable light speed theory. If you have any information about this you could pass on to me, I would greatly appreciate it.

Setterfield: Thank you for sending this request.  I have needed a spark to get me going to respond to this.  As it turns out, there is a very good response printed in New Scientist, 9th Sept. 2006, p. 12.  The article is entitled "Dark Matter 'Proof' Called into Doubt."

I know not everyone has access to this article, but I do agree with what it says and will try to summarize it for you here.

Here is the opening, which may help:

"When Douglas Clowe of the University of Arizona in Tucson announced on 21 August that his team has 'direct proof of dark matter's existence,' it seemed that the issue had been settled.  Now proponents of the so-called modified theories of gravity, who explain the motion of stars and galaxies without resorting to dark matter, have hit back and are suggesting that Clowe's team has jumped the gun.  

"'One should not draw premature conclusions about the existence of dark matter without a careful analysis of alternative gravity theories,' writes John Moffatt, of the University of Waterloo in Ontario, Canada, who has pioneered an alternative theory of gravity known as MOG ...Moffatt claims that his MOG theory can explain the Bullet Cluster without an ounce of dark matter."

The article also mentions a number of other theories of gravity that achieve the same result.  In essence, my theory (the ZPE theory) also does this.  This is part of a paper which we are seeking to have published currently.  If we are unsuccessful in getting this paper published, we will put it here on the web.  Basically, gravity is caused by the ZPE acting on charged point-particles that give off a secondary radiation that is attractive.  This has been shown by Haisch, Rueda, and Puthoff to actually be the source of gravity.  As I look at the equations that I am dealing with in this context, it turns out that there is an additional term which overcomes all of the problems of galaxy rotation and gravitational lensing.  This is a direct result of ZPE theory.

Particle-Wave Duality

Question: Check out this article on Wired.com.

Particle-Wave Duality Shown With Largest Molecules Yet

zero point energy secondary radiation phenomenon?

Setterfield: Many thanks for forwarding this article to us; it is deeply appreciated.

Wave-particle duality, and the production of interference patterns from particles going through double slits because of the particles' wave-like behavior, links directly in with SED physics and the ZPE. Here is something from a paper I am writing:

"In this way, classical physics using the ZPE, offers explanations in reality which quantum mechanics can only attempt to deal with in terms of theoretical laws.

"De Broglie’s 1924 proposal that matter could behave in a wave-like manner was also examined.  These wave-like characteristics of electrons were shown to exist in 1927 by Clinton Davisson and Lester Germer [M R Wehr and J A Richards ‘Physics of the Atom’, Addison Wesley, 1960, p.37]. De Broglie himself had supplied a basis for the ZPE explanation. He suggested that the famous E = mc2 and Planck’s E = hf, could be equated. In these equations, ‘E’ is the energy of the particle of mass ‘m’, and ‘c’ is the speed of light. This gives a frequency, f = mc2 / h, which is now called the Compton frequency. De Broglie felt that this frequency was an intrinsic oscillation of the charge of an electron or parton. If he had then identified the ZPE as the source of the oscillation, he would have been on his way to a solution.

Haisch and Rueda point out that the electron really does oscillate at the Compton frequency, when in its own rest frame, due to the ZPE. They note “when you view the electron from a moving frame there is a beat frequency superimposed on this oscillation due to the Doppler shift. It turns out that this beat frequency proves to be exactly the de Broglie wavelength of a moving electron. … the ZPF drives the electron to undergo some kind of oscillation at the Compton frequency… and this is where and how the de Broglie wavelength originates due to Doppler shifts.” Thus the Compton frequency is due to the ZPE-imposed oscillation of the particle at rest. The de Broglie wavelength results from both the motion of the particle and the oscillation, appearing as a "beat" phenomenon."

In fact that explanation still holds for large molecules because the ZPE is real, and so is the velocity which gives a Doppler shift. Thus, there is a valid explanation in terms of SED physics. The article in question noted that the scientists involved used the massive molecules at their slowest speed so that the beat wavelength would be as long as possible and so register as a wave when going through the interference screen. All this is in line with what SED physics expects. The actual point of crossover from so-called quantum effects to the macroscopic world is not likely to be a sharp one because particle motion is the actual cause when coupled with the jiggling of the ZPE. Thus the larger the molecule, the slower must be the velocity in order to pick up the effects. In theory it should be possible to increase the size of molecules and still get the effect of interference, provided that the molecule's velocity was slow enough.
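To put rough numbers on that last point, here is a small sketch. The molecular mass and beam speed are assumed round figures of the order used in such experiments, not values taken from the article:

    h   = 6.62607015e-34    # Planck's constant (J s)
    c   = 2.99792458e8      # speed of light (m/s)
    amu = 1.66053907e-27    # atomic mass unit (kg)

    m = 7000 * amu          # assumed mass of a large test molecule (~7000 amu)
    v = 85.0                # assumed beam velocity (m/s); slower gives a longer wavelength

    compton_frequency     = m * c**2 / h    # the intrinsic oscillation discussed above
    de_broglie_wavelength = h / (m * v)     # the 'beat' wavelength seen at the slits

    print(compton_frequency)                # ~1.6e27 Hz
    print(de_broglie_wavelength)            # ~6.7e-13 m: hence the need for very slow beams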

I trust that helps your understanding of the situation. Thanks again for bringing this article to our attention.

 

Permittivity and Permeability of Space

Question: The permeability of free space is an arbitrary number equal to 4π/10⁷.  If the permeability of free space is meant to be varying, proportional to the energy density of the ZPE, which of these numbers is varying?  Is it possible that you are confusing systems of units with fundamental changes, since the permeability of free space is a constant in any system of units?

Setterfield: The reviewer states that permeability has been defined as a constant. For the other readers, permeability is the term describing the magnetic properties of space. Interestingly, in the latter part of the nineteenth century and the early part of the twentieth century permeability was defined in a different way, which linked in with the speed of light. [S.G. Starling and A. J. Woodall, Physics, Longmans, Green and Co., London, 1958, p. 1262]  Any change in the speed of light, therefore, also meant a change in the permeability of free space. I believe we have made a retrograde step in accepting the current definition of permeability being a constant.

If the strength of the Zero Point Energy (ZPE) is changing, it inevitably means that permittivity (the term used to describe the electrical properties of space) and permeability have both changed as well. The following is a quote from my paper "Exploring the Vacuum", page 13 (the entire section dealing with this starts on page 12)

Barnett picks up on this point and explains further: “Scharnhorst and Barton suggest that a modification of the vacuum can produce a change in its permittivity [and permeability] with a resulting change in the speed of light. … The role of virtual particles in determining the permittivity of the vacuum is analogous to that of atoms or molecules in determining the relative permittivity of a dielectric material. The light propagating in the material can be absorbed … [but] the atoms remain in their excited states for only a very short time before re-emitting the light. This absorption and re-emission is responsible for the refractive index of the material and results in the well-known reduction of the speed of light” (63). Barnett concludes: “The vacuum is certainly a most mysterious and elusive object…The suggestion that [the] value of the speed of light is determined by its structure is worthy of serious investigation by theoretical physicists.”

[reference 63: S. Barnett, Nature 344 (1990), p.289]

As to the comment about confusing changes in permeability with other systems of units, several points should be noted.  First, the constancy of the permeability and its current numerical value are an artifact of the rationalized MKSA system that gave rise to the SI system now in use. For example, in SI Units by B. Chiswell and E. C. M. Grigg [John Wiley and Sons, Australasia, 1971], Appendix 1, pages 108-110, the changes that occurred in the systems of units over the last century are discussed.  Interestingly, and just as Starling and Woodall noted, they point out that originally the permeability was defined as lightspeed-dependent.  Any change in the speed of light will result from a change in the permeability of free space.  As the development of the present system was occurring, the lightspeed dependence was dropped and the numerical value of the permeability underwent several changes, which were then finalized by the inclusion of a factor of 4π.

The second point that emerges here is that, as one looks at these developments, it becomes apparent that other alternatives may exist to an invariant permeability. It seems desirable to consider these options in a situation where lightspeed is changing cosmologically. One reason for this is that space is a non-dispersive medium. In other words, space does not split a light beam into colours as glass or water might do. In order to maintain this property, it is usually considered that the ratio between the permittivity and permeability must remain fixed. This means that the intrinsic impedance of free space, determined by this ratio, should remain unaltered. Consequently, it might be necessary to devise a system of units where both permittivity and permeability are lightspeed dependent.
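A minimal sketch of that constraint: if permittivity and permeability are scaled together, c = 1/√(με) changes while the impedance Z = √(μ/ε) does not, so space remains non-dispersive (the scale factor below is arbitrary):

    import math

    eps0 = 8.8541878128e-12     # present permittivity of free space (F/m)
    mu0  = 4 * math.pi * 1e-7   # present permeability of free space (H/m)

    def c_and_Z(eps, mu):
        return 1 / math.sqrt(mu * eps), math.sqrt(mu / eps)

    print(c_and_Z(eps0, mu0))           # (~2.998e8 m/s, ~376.7 ohms)

    k = 0.01                            # assumed earlier epoch: both quantities 100 times smaller
    print(c_and_Z(k * eps0, k * mu0))   # c is 100 times higher, Z (and hence dispersion) unchanged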

However, the reviewer has pointed out that another way of utilizing current developments might be to consider using equations for refractive index instead. This may be an option. However, the behaviour of light waves in an inhomogeneous medium may be intrinsically different to that of light in a cosmologically changing Zero Point Energy (ZPE). Consideration of this leads on to the second item.

 

Solar Activity

Question: I've seen a couple of articles lately on the internet about some upsurge in "solar storm" activity in the next few years that "could be catastrophic" in its effect on electronics, etc. -- is it possible for them to truly predict this with any accuracy, or are they just trying to get people all excited?

Setterfield: Yes, there is some basis for what they are saying. The upsurge in solar storm activity is likely to occur as the sun starts to move into its active phase of the 11 year cycle. This peak is expected to be in 2012 AD. At the present moment that upswing in activity is only just beginning to show. During the solar maximum, as it is called, the numbers of sunspots increase remarkably, as do solar storms and solar flares. The flares emit X-rays and Ultra Violet (UV) radiation. These can affect earth's ionosphere and disrupt radio communications, and disturb the action of some radar devices.  The associated solar storms send streams of charged particles (protons) out from the sun. Protons can pass through a human body and do biological damage. These particle streams are effectively the same as a very strong electric current. When they hit our upper atmosphere, they cause auroras to occur near the poles. The currents involved in these auroras have been measured as being from 650,000 up to about one million amps. These particle streams also heat up the earth's atmosphere, causing it to expand, which then affects satellites in low earth orbit. If uncorrected, this causes the satellite's orbit to degrade through aero-braking, so that it eventually burns up in our atmosphere.

When a stream of particles from the sun comes towards the earth, astronauts must get back to earth as quickly as possible. There is also a distinct danger to electronic equipment on satellites in earth orbit. The massive currents involved can wreck sensitive electronics. Under these circumstances, at a solar maximum, there is the possibility that the Global Positioning System of satellites used for guidance world-wide could be knocked out. In addition, any facility which relies on satellite communications can be knocked out in the same way that some bank communications were in Canada at the last maximum. This caused chaos in their banking system. There is an additional danger, as long overland transmission lines for electricity (and some telephone communications) can have induced current surges from a solar storm. This has the capability of shutting down the system affected for a significant period. On 13th March 1989 in Quebec, over six million people were without electricity for 9 hours from this cause. Large computer networks can also suffer, and so can individual computers depending on circumstances.

That is the scenario that can occur when particles are sent earth-ward. That requires the particles from the sun to be sent in a specific direction, namely directly towards us. That does not always happen, and a lot of solar storms occur that do not affect earth because the particle beams are sent in a different direction. Astronomers keep a watch on the solar activity and can warn us when the particle streams are expected to impinge upon the earth. Our warning time may be as long as 48 hours or as short as 15 minutes, depending on the violence of the explosions on the sun as part of the storm. In addition, not every stream of particles that comes towards us is a problem. Everything depends on the magnetic configuration. If the magnetic field of the earth and the proton stream are aligned with their magnetic fields pointing in the same direction, the chances are that the particle stream will be repelled by our magnetosphere. However, if the magnetic polarities are in opposite directions, the earth's magnetic field connects with the sun's and the particles will be injected into our magnetosphere. This forms a vast electric current, sometimes known as an "electro-jet", through our ionosphere, which induces currents in power lines down on the earth's surface and has the potential to disrupt national power grids and electronics on the earth's surface.

In summary, yes there is a danger, but it can be prepared for. As far as personal computers are concerned, make sure that you have everything important backed up on disk or have it in printed form.

 

Stellar Brightness and the Tolman Test

Question: As I continue to dig into this field, I have been reading Hugh Ross' defenses of the Big Bang and one of the items that caught my attention is the Tolman test for surface brightness. Dr. Ross believes that this is a definitive proof of an expanding universe. As I looked into this test, it appears that it has as an inherent assumption that the red shift is caused by universal expansion. Additionally, if I read the results correctly, theory predicted that the brightness should drop by (z+1)⁴ for an expanding universe but the evidence seems to point to a different result ((z+1)³). I am not sure if this is significant since a static universe would show a brightness drop of (z+1).

Are these data significant and how would your theory use this test to demonstrate an expanded but now oscillating universe?

Setterfield: The Tolman test for surface brightness was one reason why Hubble felt that universal expansion was NOT the origin of the redshift right up until a few months before he died in 1953. Sandage's May 2009 paper details Hubble's misgivings. In case you had not caught up with it, it can be found here.

The situation is very much as you state, namely that if we really are living in an expanding universe, then the surface brightness of galaxies should decrease by a factor of (1 + z)⁴, where z is the redshift. The best experimental data give results for the exponent of (1 + z) to be between 2.6 and 3.4. In other words, this tells us that the brightness falls off as (1 + z)³. In order to salvage the expanding universe model, Sandage and many astronomers assume that this result can be accounted for if all galaxies were simply brighter in the past when they were younger; in other words a galaxy light evolution argument.

It may be wondered how they get the factor (1 + z)⁴. The first factor of (1 + z) comes from the fact that the number of photons received per unit time from the source will be less than that at emission because of expansion. That situation also applies on the ZPE-plasma Model because the speed of light is slower now than at the time of emission. Consequently we are receiving fewer photons per second now than were emitted in one second at the source. Thus we have the same effect, but a different cause.

The second factor of (1 + z) comes from the fact that the energy of each photon is reduced by the redshift. This is also the case in the ZPE-plasma Model since the light that atoms emitted in the early universe was also less energetic (redder) because the ZPE strength was lower. Again there is the same effect but a different cause.

In an expanding universe, the apparent change in area of a galaxy results in a form of aberration which causes its brightness to drop off as (1 + z). In addition, the cosmological effects of a variety of space curvatures cause a geometrical aberration which is also proportional to the redshift. Neither of these aberrations occurs in static cosmos models. With these two effects of aberration, the result is that there should be a clear (1 + z)⁴ drop off in surface brightness if the universe is expanding.

Currently this is not observed, only a (1 + z)³ drop off, where the exponent 3 has an error in observation of +/- 0.4. In other words, the surface brightness does not decrease as rapidly as the expanding universe model predicts. Hugh Ross is therefore on uncertain ground here as he has to prop up his argument with the additional argument of galaxy brightness evolution, otherwise his argument falls to pieces.

However, the ZPE-Plasma Model is in a different situation. We have already shown above that the (1 + z) drop off rate has an exponent of 2; and there is one further factor that must now be noted. The energy density of photons was also lower at the time of emission when the ZPE had a lower strength. This resulted in a lower wave amplitude for all radiation. On this basis, then, the ZPE-Plasma Model has a drop off in galaxy brightness which is proportional to (1 + z)³. This is in accord with observation.
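For comparison, the dimming factors implied by the competing exponents at a few redshifts (plain arithmetic, nothing more):

    # Surface-brightness dimming factor (1 + z)^n for the two exponents discussed above.
    for z in (0.5, 1.0, 1.5):
        expanding = (1 + z) ** 4    # expanding-universe prediction
        zpe_model = (1 + z) ** 3    # ZPE-Plasma Model, and roughly what is observed (3 +/- 0.4)
        print(z, round(expanding, 2), round(zpe_model, 2))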

Addressing your specific question, however, the Tolman test cannot distinguish whether an initial expansion has occurred, or whether an oscillation of the universe is occurring now. I trust that this helps.

Further correspondence brought this from Barry:

Thanks for your reply. It got me digging around for more information. I found an excellent paper by Eric J. Lerner entitled "Tolman test from z = 0.1 to z = 5.5: Preliminary results challenge the expanding Universe model". The results of his paper show that a static universe gives better results with the Tolman test than an expanding model. The way he assesses it is as follows. He took galaxies over a broad range of redshifts, from close in to far away, just as the title of his paper suggests. He points out that previous Tolman tests have only been performed on objects that were consistently far away. What has not been done is to compare the surface brightness of similar galaxies over a broad range of redshifts. When this is done, he shows that the expanding universe model should give a curve rising up at a steep angle from zero until a redshift of about z = 1.5, following which the curve turns over and essentially becomes a horizontal line. In contrast, the static universe model should have the data on a straight line. He then gives the results he obtained, and shows that the data lie on a straight line over the entire redshift range. This means that the expanding universe model is not supported when galaxies over an extensive range of redshifts are considered. His paper can be found here.

I trust that this explanation makes what is being done somewhat more understandable.

 

Arp and the Red Shift of Quasars

Question: Do you disagree with Arp about the red shift of quasars as related to their distance?

Setterfield: Arp certainly believes that the redshift is quantized, so we are in agreement on that. However, the position that Arp adopts with quasars and redshifts is something different. Arp claims that quasars, nearly all of which have high redshifts (which are usually taken to represent great distance), are in fact relatively nearby objects which have been ejected from the cores of galaxies. His rationale for this is that some quasars are found to be in alignment with jets emanating from relatively nearby galaxies. The high redshift of these quasars is taken by Arp to show that they are very young and have not had time to develop into more mature objects like full galaxies. A number of quasars appear to be "interacting" with relatively nearby galaxies.

For what it is worth, the majority astronomical opinion is that these quasar-galaxy alignments are probably coincidental, with the quasar far away and exceptionally brilliant compared with the nearby objects it is aligned with. This would give rise to the impression that they are associated with nearby objects. This contention appears to be true, but I recall several incidents some years back which were even more decisive for me. What had been done was to block out the light of the quasar itself. When that was done, it turned out that the rest of the galaxy surrounding the quasar became visible. The quasar was simply part of the central core of a genuine galaxy. But the light from the rest of the galaxy was being swamped by the light from the quasar in its core. I have not seen any effective answer to this from Arp.

 

Question about Wal Thornhill's ideas

Question: Just spent several hours poking around the internet and ended up at --
http://www.plasmas.org/basics.htm
which I found both interesting and informative.  Before wading through it again I would appreciate your impression of the following article, THE ELECTRIC UNIVERSE, by a Wal Thornhill (no credentials given).
http://www.holoscience.com/news.php?article=7y7d3dn5&pf=YES
He sounds good to me ....

Setterfield: Many thanks for the URLs. Wal Thornhill is a plasma physicist from Australia and is very active in the field, developing a number of approaches and theories. This URL is very good:

http://www.plasmas.org/basics.htm
and all the statements are reliable. The other URL, namely

http://www.holoscience.com/news.php?article=7y7d3dn5&pf=YES
is excellent to begin with, but Thornhill has gone too far along the track that mythology alone has led him. In other words, the idea of our solar system forming from planets that came from brown dwarfs and only became associated later really is pushing things a little far. There are other plasma alternatives that can be considered.

So all the problems that Thornhill has initially outlined with the gravitational model do indeed exist, and his statements are accurate. Even the discussion about brown dwarfs as such is correct. But he goes off the rails in linking our solar system with the brown dwarf and subsequent capture. I believe that he is reading too much into the mythological accounts. While they do give information as to what was happening physically in the sky and between planets when currents and voltages were higher in the early solar system (because of lower ZPE), I think that the speculation of the ancients as to the cause of these effects is off-base.

I trust that this gives you a feel for what is happening. Remember this is a developing field, and some speculation is inevitable. If I can help further, please let me know.

 

The energy diffusion timescale for the Sun

Question: One of the alleged "proofs" for the Sun being > 10,000 years old is the emission of photons from the core of the Sun.  According to what we understand about the physics of the Sun, it should take 100,000+ years for photons generated in the core to reach its surface.  Now, I found an article by Robert Newton from the Technical Journal 17:1 (2003) that looks at this problem like this:

"The energy diffusion timescale for the Sun, however, does exceed six thousand years. Calculations show that energy produced in the core of the Sun today should take more than six thousand years to diffuse to the solar surface. Does this demonstrate that the Sun is older than 6,000 years, or is not powered by fusion? Not at all. Apparently, energy being released from the photosphere today was never produced by fusion, but is energy that has come from a subsurface layer—created on Day 4 of the Creation Week. God created the Sun in a stable state with an energy and temperature profile similar to those of today. The solar photosphere is constantly emitting its energy into space by thermal radiation, and would quickly cool—except this energy is replenished by energy from a hotter layer beneath the surface. This underlying layer obtains its energy from a still hotter, deeper layer, and so on to the core, which obtains energy directly from fusion.

So, the primary purpose of fusion is stability. Energy produced by fusion precisely matches energy released from the surface so that the internal temperature profile of the Sun remains constant. The fusion energy flux balances the force of gravity and maintains the stable temperature profile. Energy produced by fusion is not directly responsible for heating the solar photosphere today (because there has not been enough time) though it would eventually serve that purpose if the Sun were allowed to continue far enough into the future. So, a 6,000-year-old hydrogen-burning star does not require any unusual physics during the Creation Week. A fusion-powered Sun is perfectly consistent with the Biblical timescale, and with the available evidence."

Source: http://creation.com/images/pdfs/tj/j17_1/j17_1_64-65.pdf

I found this explanation to be a little less than satisfying in that it sounds too similar to the argument that God created the light in transit from the stars to make them visible.  Perhaps I'm misunderstanding it.  Anyway...

I have two questions for you regarding this:

1) Would a variable C have an effect on the speed of the propagation of photons from the core of the Sun?
2) Would a plasma-based origin of the Sun create it in such a way that would fit the scenario proposed by Robert Newton?

Setterfield:

You are correct in your assessment of the explanation that you included. It is very similar to the light created in transit argument, only in this case it is light coming from a subsurface layer created on Day 4. I have produced another approach consistent with nuclear fusion which can be found here in one of my papers.

However, in the last couple of years there have been a number of developments in astronomy based on plasma physics, a field which is just emerging from a period of suppression due to scientific politics. In one of the plasma-related models, the light from the sun does not come from nuclear fusion. In fact, plasma theorists point out that sunspots have a dark center, and we are looking deeper into the sun at that point. A darker center means that the temperature in the lower layers is lower than at the surface, so the temperature of the center of the sun may not be higher than the surface after all. If not, then the possibility of nuclear fusion fades into oblivion.

The plasma theorists point out a series of facts about the sun and the solar wind and the solar corona, as well as the shape of the curve describing the color and intensity of light from stars known as the Hertzsprung-Russell (H-R) diagram. These facts all point to the sun and stars being powered by galactic electric currents with the surface of the stars being plasma in arc mode due to the current. The form of the H-R diagram is precisely defined by plasma theory with electric currents. Even the sunspot cycle can be shown to be due to the blocking action of the plasmasphere-magnetosphere of the planet Jupiter. It is up to 7 million miles in diameter, and when Jupiter moves in its orbit to a critical direction for the current, the effects are seen. Jupiter's orbital period closely matches the sunspot cycle.
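As a rough check on that last point, the two periods can simply be compared; the figures below are standard approximate reference values and are not taken from any of the plasma literature mentioned above:

    # Rough comparison of Jupiter's orbital period with the mean sunspot cycle.
    # Both figures are standard approximate reference values.
    jupiter_period_years = 11.86     # Jupiter's sidereal orbital period
    mean_sunspot_cycle_years = 11.0  # long-term average; individual cycles run roughly 9-14 yr

    print(f"Jupiter: {jupiter_period_years} yr, mean sunspot cycle: {mean_sunspot_cycle_years} yr")
    print(f"Difference: about {abs(jupiter_period_years - mean_sunspot_cycle_years):.1f} yr")

The two periods are close, though not identical, which is the correlation the plasma theorists appeal to.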

If this is in fact the case, then it can be easily demonstrated that every star lit up as soon as it was formed, because the electric current in the plasma filament which formed the star (and our solar system) has continued to flow and produce the arc-mode brilliance in the plasma on the stellar surface. No time delay; no necessity for nuclear fusion; no problems with a creation scenario. For more about the electric sun and the plasma approach, you might like to visit this website on The Sun, and the work (especially The Electric Sky) of Donald E. Scott, professor of electrical engineering.

If you follow these links to related sites and study the literature, you will see that big possibilities exist for a major paradigm shift in astrophysics on this matter.

I trust that gives you several options to pursue. If there are any problems or further questions, please get back to me.

 

Interstellar Water Mystery

Question: Is there an alternate, perhaps better-supported, theoretical plasma-physics explanation for earthly water than the one contained in this article?  Just curious.

Setterfield: My reply on petroleum is also valid here, since plasma physics does give us an idea of where the water comes from. Remember the filament that formed our solar system and the daughter filaments which formed the planets? As indicated before, there is sorting of ions in those filaments by the process of Marklund convection, and that process concentrates hydrogen ions and oxygen ions close together. Since hydrogen is reactive, it will combine with the oxygen to form water in concentrations that depend on where the planet formed in the main filament. As a result, the plasma model does not need comets or asteroidal debris or dust particles to bring in the water from the depths of the solar system as the standard model does. It is all much simpler with just two levels of Marklund convection to consider.

 

CMBR and the Big Bang

Question: "Is it possible for the Cosmic Microwave Background Radiation (CMBR) to be produced by a scenario different from the Big Bang?"

Setterfield: The Big Bang is not the only scenario in which a CMBR might be produced. It also results from a plasma universe/ZPE (Zero Point Energy) model that I presented to my astronomy students. The reasoning goes along the following lines, but be a little patient with me as we lead into the scenario.

A number of researchers have pointed out that the vacuum Zero Point Energy (ZPE) maintains atomic orbit stability. This has been done by L. de la Pena ("Stochastic Processes Applied to Physics," pp. 428-581, Ed. B. Gomez, World Scientific, 1983); Hal Puthoff (Physical Review D 35(10):3266, 1987); and V. Spicka et al. ("Beyond the Quantum," pp. 247-270, Ed. T. M. Nieuwenhuizen, World Scientific, 2007).

From all these analyses it becomes apparent that, as the ZPE strength increases, so too must atomic orbit energies, and the light emitted from atomic orbit transitions is thereby more energetic. This means that light emitted by atoms becomes bluer with time. Thus, as we look back in time, emitted light will be redder, and this is a likely cause of the redshift. Spicka's analysis allows us to quantify that effect, including the suspected redshift quantization or periodicities. This was done in my NPA Conference paper in June 2010 entitled "Zero Point Energy and the Redshift." This paper links the physics of the atomic behavior induced by the ZPE with the redshift data.

An alternate way of achieving the same result comes from an entirely theoretical approach, which can be done in two ways. One starts with the standard redshift equation and works back to show the conditions which existed to produce the ZPE, since its production and build-up has a known cause. That was done in "The Redshift and the Zero Point Energy."

Alternately, we can start with the equations that govern experimental conditions in the lab which mimic those governing the production of the ZPE and work forward to the standard redshift equation as done in "Quantized Red Shifts and the Zero Point Energy."

These analyses all present a cohesive conclusion from which it becomes apparent that the ZPE has increased with time in a predictable way from the most distant parts of the universe up to a time represented by the redshift near the edge of our Local Group of galaxies.

At this point several things emerge. If the redshift is an inverse measure of the strength of the ZPE, then initially, when the redshift was very high, the ZPE strength was very low. Second, in the earlier days of our cosmos, the ZPE was so low that atomic orbits for electrons could not be sustained. If electrons could not become associated with atomic nuclei, then the state of matter had to be a plasma, with negative electrons and positive nuclei wandering around independently. During this era, plasma physics alone was in operation and the electrons and nuclei scattered light photons back and forth, just like the interior of a fog. It was only as the ZPE built up with time that atomic orbits could be sustained and neutral atoms form. And, here is the crunch point, it is only once neutral atoms form that light can penetrate or escape from this initial dense plasma. Then, looking back on that event, namely the clearing of the "initial fog," we see what we call the Cosmic Microwave Background Radiation, the CMBR. This origin for the CMBR can thereby be anchored in plasma physics, the ZPE behavior and its effects on atoms, not in the Big Bang.  Some recent results suggest that the cooler parts of the CMB can now be traced to voids between the plasma filaments which form the clusters of galaxies.

So this approach is one of a number of possible alternative explanations for the CMBR's existence. 

 

What is the CMBR (Cosmic Microwave Background Radiation)?

Question: In my explaining why there is an absolute reference point in the cosmos -- causing Einstein's SRT to be more or less obsolete -- I come across the CMBR, which you declare as being such an absolute reference frame. But in trying to imagine how this works out, I fail. Can you make it clear for me what, exactly, IS this Cosmic Microwave Background Radiation, WHERE it is, and WHY we can measure speeds and distances against it? Are there some fixed points in it, which do not shift and which we can recognize? Is there a clear structure, with points to which we can measure distances and speeds? Like Archimedes said: Give me a fixed point and I lift the world out of its place. Can you make this item a bit clearer to me?

Setterfield: By way of clarification, let me first give you an illustration. When you are traveling in a car through thick fog, the light from your car is scattered by the fog particles so that you essentially cannot see any distance. After traveling like this for a while, you may suddenly come out of the fog and then see clearly. After traveling a small distance, you are able to look back at the fog-bank and see its uniform character.

Let us now upgrade our picture to the universe. The early universe consisted entirely of plasma at a high temperature with a low ZPE. These two facts together ensured that it remained as plasma for some little time. In this situation, the plasma acted in the same way as fog. All the ions and electrons in the plasma uniformly scattered light and all radiation, so that the result was the same as being in a fog. Now the universe was being expanded out, which lowered the temperature and at the same time increased the strength of the ZPE. When the temperature had cooled sufficiently AND the ZPE strength was high enough, atoms formed out of the plasma. Now remember that plasma will still act as a plasma even if it is only 1% ionized. In other words, a gas will still act as a plasma even if 99% of it has become neutral atoms, that is, ions with electrons now orbiting them.

Now until neutral atoms start forming, the plasma is opaque to radiation, just like the fog we travel in. However, once neutral atoms have formed, the plasma becomes transparent to radiation, and the radiation streams out of the plasma. As we move further ahead in time, we can look back at this plasma like looking back at the uniform bank of fog. The uniform bank of fog with the radiation streaming out is the CMBR. This radiation is very smooth and uniform to one part in 100,000, so this CMBR "fog bank" presents an extremely consistent appearance from all parts of the sky. The characteristic feature of this radiation is the same as that of a perfectly black body radiating at a temperature of 2.7 K.
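As an aside, a quick application of Wien's displacement law shows why a 2.7 K black body gives a microwave background; the constant used below is the standard laboratory value:

    # Wien's displacement law: a black body radiates most strongly at
    # lambda_max = b / T.
    WIEN_CONSTANT = 2.898e-3   # metre-kelvins (standard value)
    T_CMBR = 2.7               # kelvins, as quoted above

    lambda_max = WIEN_CONSTANT / T_CMBR
    print(f"Peak wavelength: about {lambda_max * 1000:.1f} mm")   # ~1 mm, in the microwave band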

The next part in the picture comes from the fact that if you are moving towards a radiation source, that radiation is blue-shifted somewhat by an amount dependent upon the velocity at which you are traveling. This is a genuine Doppler shift. The converse is also true. If you are traveling away from a source of radiation, the radiation will appear red-shifted from its normal value. Another way of looking at this is to say that when we travel towards a source of radiation, it appears slightly warmer in that direction (as bluer radiation is warmer radiation). Conversely, if we are moving away from the radiation source it appears slightly cooler (since redder radiation is cooler radiation). In the case of the CMBR, this radiation is coming uniformly from all parts of the sky. So if we are traveling in one direction, the CMBR in that direction is slightly bluer or warmer, while in the opposite direction we are moving away so the radiation in that direction looks redder or cooler.

Therefore, the amount of "blueing" or "warming," with its converse in the opposite direction, gives us the absolute velocity and direction of motion. From this we can pick up the absolute motion of our solar system through space, our galaxy through space, and our Local Group of galaxies, which turns out to be moving in the direction of the Virgo cluster. At the same time it can be seen that the CMBR gives us an absolute reference frame for the whole universe. This, then, denies the veracity of Einstein's postulate that there is no absolute reference frame for any motion anywhere in the universe. The comments of Martin Harwit, showing that relativity can only apply to the atomic frame of reference, follow from this automatically.
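For readers who like to see the arithmetic, here is a small sketch of how the "warmer ahead, cooler behind" pattern translates into a velocity. To first order, the apparent CMBR temperature in a direction making an angle theta with the direction of motion is T0 (1 + (v/c) cos theta). The speed used below (about 370 km/s) is roughly the value usually quoted for the solar system's motion relative to the CMBR and is included only as an illustration:

    import math

    T0 = 2.725        # mean CMBR temperature, kelvins
    c  = 2.998e8      # speed of light, m/s
    v  = 370e3        # illustrative speed, m/s

    def apparent_temperature(theta_deg):
        # First-order Doppler (dipole) temperature in a given direction.
        return T0 * (1.0 + (v / c) * math.cos(math.radians(theta_deg)))

    print(f"Ahead of motion   (0 deg): {apparent_temperature(0):.5f} K")
    print(f"Side-on          (90 deg): {apparent_temperature(90):.5f} K")
    print(f"Behind motion   (180 deg): {apparent_temperature(180):.5f} K")
    # The few-millikelvin difference between 'ahead' and 'behind' is the dipole
    # from which the speed and direction of motion are recovered.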

I trust that this helps. If you need further explanation or have further questions, please get back to me.

Response: Thanks for your explanation and patience with me. I think the picture is becoming clear. From what you said about the CMBR I conclude that this radiation is not universally around us, but as a kind of ‘fog’ somewhere at the edges of the universe, like on this drawing?


diagram

Then we can decide about direction and speed against this light grey band, because of the Doppler effect.
Is this a right way to put it in drawing?
Can you confirm or correct?

Setterfield: As far as the CMBR is concerned, you are correct...BUT...remember we can only pick up the CMBR by the radiation it has emitted, and that radiation has traveled through space to get to us from the CMBR. Therefore, we are seeing the "bank of fog" back there because of the radiation that was emitted as the light was bursting through the fog, and that radiation forms a continuous stream from the fog-bank to us. In that sense, this radiation is "all around us." It is similar to the light we see from a galaxy at the frontiers of the cosmos. We see the galaxy "way out there," but only because there is a continuous stream of light waves (or particles) coming from that galaxy to us.

If you need further help, please don't hesitate to ask.

 

Distant Starlight and a Talk Origins article answered

Comment: Thought you might be interested in this one, if you haven't seen it already - you even get a mention by name!

http://www.talkorigins.org/faqs/hovind/howgood-add.html#A6

What would your explanation be?

Setterfield:

Thanks for your question and thanks for drawing my attention to this website again. I have been mentioned many times on that site, not usually in a good context! This article is based on the majority model for pulsars among astronomers today. If that model is not correct, then neither are the conclusions which that website has drawn. So we need to examine the standard model in detail.  On that model we have a rapidly rotating, small and extremely dense neutron star which sends out a flash like a lighthouse every time it rotates. Rotation times are extremely fast on this model. In fact, the star is only dense enough to hold together under the rapid rotation if it is made up of neutrons. Those two facts alone present some of the many difficulties for astronomers holding to the standard model. Yet despite these difficulties, the model is persisted with and patched up as new data comes in. Let me explain.

First, a number of university professionals have difficulty with the concept of a star made entirely of neutrons, or neutronium. In the lab, neutrons decay into a proton and an electron in something under 14 minutes. Atom-like collections of two or more neutrons disrupt almost instantaneously. Thus the statement has been made that "there can be no such entity as a neutron star. It is a fiction that flies in the face of all we know about elements and their atomic nuclei." [D.E. Scott, Professor & Director of Undergraduate Program & Assistant Dept. Head & Director of Instructional Program, University of Massachusetts/Amherst]. He, and a number of other physicists and engineers, remain unconvinced by the quantum/relativistic approach that theoretically proposed the existence of neutronium. They point out that it is incorrect procedure to state that neutronium must exist because of the pulsars' behavior; that is circular reasoning. So the existence of neutronium itself is the first problem for the model.

Second, there is the rapid rate of rotation. For example, the X-ray pulsar SAX J1808.4-3658 flashes every 2.5 thousandths of a second, or about 24,000 revolutions per minute. This goes way beyond what is possible even for a neutron star. In order for the model to hold, this star must have matter even more densely packed than neutrons, so "strange matter" was proposed. Like neutronium, strange matter has never been actually observed, so at this stage it is a non-falsifiable proposition. So the evidence from the data itself suggests that we have the model wrong. If the model is changed, we do not need to introduce either the improbability of neutronium or the even worse scenario of strange matter.
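The arithmetic behind that figure is simple; assuming, as the standard model does, one flash per rotation:

    # Convert a pulse period into an equivalent rotation rate, assuming one
    # flash per rotation as in the standard model.
    def revs_per_minute(period_seconds):
        return 60.0 / period_seconds

    print(f"{revs_per_minute(0.0025):,.0f} rev/min")   # 2.5 ms period -> 24,000 rev/min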

Third, on 27th October, 2010, in Astronomy News, a report from NRAO in Socorro, New Mexico was entitled "Astronomers discover most massive neutron star yet known." This object is pulsar PSR J1614-2230. It "spins" some 317 times per second and, like many pulsars, has a proven companion object, in this case a white dwarf. This white dwarf orbits in just under 9 days. The orbital characteristics and data associated with this companion show that the neutron star is twice as massive as our sun. And therein lies the next problem. Paul Demorest from NRAO in Tucson stated: "This neutron star is twice as massive as our Sun. This is surprising, and that much mass means that several theoretical models for the internal composition of neutron stars are now ruled out. This mass measurement also has implications for our understanding of all matter at extremely high densities and many details of nuclear physics." In other words, here is further proof that the model is not in accord with reality. Rather than rethink all of nuclear physics and retain the pulsar model, it would be far better to retain nuclear physics and rethink what is happening with pulsars.

In rethinking the model, the proponents of one alternative that has gained some attention point out some facts about the pulse characteristics that we observe in these pulsars. (1) The duty cycle is typically 5%, so that the pulsar flashes like a strobe light. The duration of each pulse is only 5% of the length of time between pulses. (2) Some individual pulses vary considerably in intensity. In other words, there is not a consistent signal strength. (3) The pulse polarization indicates that it has come from a strong magnetic field. Importantly, all magnetic fields require electric currents to generate them. These are some important facts. Item (2) alone indicates that the pulsar model likened to a lighthouse flashing is unrealistic. If it were a neutron star with a fixed magnetic field, the signal intensity should be constant. So other options should be considered. Taken together, all these characteristics are typical of an electric arc (lightning) discharge between two closely spaced objects. In fact, electrical engineers have known for many years that all these characteristics are typical of relaxation oscillators. In other words, in the lab we can produce these precise characteristics in an entirely different way. This way suggests a different, and probably more viable, model. Here is how D.E. Scott describes it:

"A relaxation oscillator can consist of two capacitors (stars) and a non-linear resistor (plasma) between them. One capacitor charges up relatively slowly and, when its voltage becomes sufficiently high, discharges rapidly to the other capacitor (star). The process then begins again. The rate of this charge/discharge phenomenon  depends on the strength of the input (Birkeland) current, the capacitances (surface areas of the stars) and the breakdown voltage of the (plasma) connection. It in no way depends on the mass or density of the stars.

In the plasma that surrounds a star (or planet) there are conducting paths whose sizes and shapes are controlled by the magnetic field structure of the body. Those conducting paths are giant electric transmission lines and can be analyzed as such. Depending on the electrical properties of what is connected to the ends of the electrical transmission lines, it is possible for pulses of current and voltage (and therefore power) to oscillate back and forth from one end of the line to the other. The ends of such cosmic transmission lines can both be on the same object (as occurs on earth), or one end might be on one member of a closely spaced binary pair of stars and the other end on the other member of the pair, similar to the "flux tube" connecting Jupiter to its moon Io.

In 1995, an analysis was performed on a transmission line system having the properties believed to be those of a pulsar atmosphere. Seventeen different observed properties of pulsar emissions were produced in these experiments. This seminal work by Peratt and Healy strongly supports the electrical transmission line explanation of pulsar behavior." 

The paper outlining these proposals was entitled "Radiation Properties of Pulsar Magnetospheres: Observation, Theory and Experiment" and appeared in Astrophysics and Space Science 227 (1995): 229-253. Another paper outlining a similar proposal, using a white dwarf star and a nearby planet instead of a double star system, was published by Li, Ferrario and Wickramasinghe. It was entitled "Planets Around White Dwarfs" and appeared in the Astrophysical Journal 503:L151-L154 (20 August 1998). Figure 1 is pertinent in this case. Another paper, by Bhardwaj and Michael, entitled the "Io-Jupiter System: A Unique Case of Moon-Planet Interaction," has a section devoted to exploring this effect in the case of binary stars and extra-solar systems. An additional study by Bhardwaj et al. also appeared in Advances in Space Research vol. 27:11 (2001), pp. 1915-1922. The whole community of plasma physicists and electrical engineers in the IEEE accepts these models, or something similar, for pulsars rather than the standard neutron star explanation.
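For readers unfamiliar with relaxation oscillators, the following is a minimal, idealized sketch of the charge/discharge behaviour Scott describes: a capacitor charges slowly through a resistance from a steady supply, then dumps its charge abruptly once a breakdown voltage is reached. Every component value here is an arbitrary illustration, not a model of any particular pulsar:

    # Idealized relaxation oscillator: slow charging, abrupt discharge at a
    # breakdown voltage. All values are arbitrary and for illustration only.
    V_SUPPLY = 100.0   # steady driving voltage (the input current source)
    R        = 1.0e6   # charging resistance, ohms
    C        = 1.0e-6  # capacitance (the "surface area of the star"), farads
    V_BREAK  = 80.0    # breakdown voltage of the plasma connection
    V_RESET  = 5.0     # voltage left after each rapid discharge
    DT       = 0.01    # time step, seconds

    v, t, flashes = 0.0, 0.0, []
    while t < 20.0:
        v += (V_SUPPLY - v) / (R * C) * DT   # slow exponential charging
        if v >= V_BREAK:                     # rapid discharge: the "pulse"
            flashes.append(round(t, 2))
            v = V_RESET
        t += DT

    intervals = [round(b - a, 2) for a, b in zip(flashes, flashes[1:])]
    print("Pulse times (s):", flashes)
    print("Intervals (s):  ", intervals)
    # The pulse rate is set by the supply, R, C and the breakdown voltage;
    # the mass or density of the bodies never enters the calculation.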

Well, where is this heading? The question involved the slow-down in the speed of light in the context of pulsars and their "rotation rate." If pulsars are not rotating neutron stars at all, but rather involve a systematic electrical discharge in a double star or star-and-planet system with electric currents in a plasma or dusty disk, then the whole argument breaks down. In fact, if the electric discharge model is followed, then the paper on "Reviewing a Plasma Universe with Zero Point Energy" is extremely relevant. The reason is that an increasing ZPE not only slows the speed of light, but also reduces voltages and electric current strengths. When that is factored into the plasma model for pulsars, the rate of discharge seen from earth will remain constant, as the slow-down of light cancels out the initially faster rate of discharge in the pulsar system when currents were higher.

I hope that this answers your question in a satisfactory manner.  

A Fractal Universe?

Question: The Unified Field Theory by Nassim Haramein states that the universe is fractal in nature. This implies that every part contains the whole, and so it should behave like the whole (hence the ancient Hermetic principle "As above, so below").

Loop Quantum Cosmology, rightful heir to the throne of the Inflation Theory, predicts that the universe contracts and expands in an endless cycle, whereby just before Planck's length is reached, the SCALE of the entire universe is downsized. That means relative distances remain unaltered.

I was wondering if the decrease of light speed can also be interpreted as the size of the universe increasing in absolute sense and not relative to our measure of size.

Applying the principle of fractality, we should see an equal increase in size in the microcosmos. I was wondering if the increase of atomic mass can be an indication that the size of an atom is increasing equally with the size of the universe in the same manner.

Could you please enlighten me on this question.

Setterfield: Thank you for your note and your questions. 

First of all, you are correct in saying that the Unified Field Theory by Nassim Haramein states the universe is fractal in nature.  Not all cosmologies agree with this approach, however.  Loop quantum cosmology (LQC) is getting a lot of interest at the moment.  There is an article about it in the December 13-19 'New Scientist.'  LQC is certainly an interesting proposition.  You are correct in saying that if LQC is right, relative distances will remain unaltered as the universe expands or contracts.  This will mean that an increase in the size of the cosmos will also mean an increase in the size of everything, uniformly (including atoms). 

Several comments now become necessary.  If the fractal principle is applied, in this case, to atoms, then the size of electrons and protons will also increase.  But if this increase in size is due to the expansion of the fabric of space and time, mass should not alter, as the energy remains unchanged.  What we are actually observing in the data that we have is that there is an increase in the volume of subatomic particles and an increase in Planck's constant.  On the Zero Point Energy (ZPE) proposal that I am advocating, there is an increase in the volume of subatomic particles due to the increased battering of an increasing ZPE.  This means that atomic masses will increase.  On the fractal proposal, the mass would not be expected to increase, as the energy content has remained unchanged.  It is only on the ZPE proposal that you get an explanation for the measured increase in mass.

In addition, the 1970 reversal in the trend of associated atomic constants does not fit easily into a combination of the fractal and LQC approaches. 

With regard to your light speed question in these contexts, if relative distances remain unaltered down to subatomic dimensions, light speed would also be unaltered.  However, you are asking whether, if there were a change in the size of the universe in an absolute sense but no change in our measure of sizes, there would be a change in the speed of light.  If the size of the universe itself increases, and everything is scaling accordingly, no relative size changes can occur, so no changes in the speed of light will occur either.  That is the case we have already discussed. But if space expands and matter does not, light in transit will have its wavelengths stretched, which will result in a red shift, but not a quantized red shift.  Therefore the quantized red shift cannot be considered as evidence for an expanding universe.

Again, if space expands and matter does not, the space between stars will be increasing, and the space between planets will be increasing.  This would mean that the speed of light, measured by Roemer's method, would show a variation, because the distance between Jupiter and the earth would be increasing. Other terrestrial methods would show no change of any significance, because the apparatus would have to expand significantly, and the measured changes are far too large for this to be the case.  The 1970 reversal in the trend of the associated constants works against this idea and is in favor of the ZPE model.
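For context, Roemer's method infers the speed of light from the extra time that eclipse signals from Jupiter's moon Io take to arrive as the Earth-Jupiter distance changes. Here is a minimal sketch with present-day values; the point is that if the baseline itself were slowly growing while the assumed distances stayed fixed, the inferred speed would drift:

    # Roemer's method in miniature: the speed of light is the change in the
    # Earth-Jupiter distance divided by the change in eclipse arrival time.
    AU = 1.496e11                      # metres
    c_present = 2.998e8                # m/s

    baseline_change = 2 * AU           # near side to far side of Earth's orbit
    delay = baseline_change / c_present
    print(f"Delay across Earth's orbit: {delay / 60:.1f} minutes")
    print(f"Inferred speed of light:    {baseline_change / delay:.3e} m/s")
    # If space were expanding, the true baseline (and hence the delay) would
    # slowly grow, so the speed inferred from a fixed assumed baseline would
    # appear to vary.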

Thank you again for your interest.  I trust this has been of help.

Orbital Periods of Stars

Question: [note-- this is paraphrased as the question emailed was quite long]
You asked a question regarding the actual orbital periods of planets in view of the slow motion effect resulting from the dropping speed of light and the constancy of gravitational interactions. "Due to slow motion effects, wouldn't this imply that the actual orbital velocity is much faster than what we are actually seeing?"

Setterfield: As you point out in your e-mail, the case of binary stars is discussed in the Monograph in Appendix C. It is there concluded that the events we are seeing now are in slow motion compared with the time of occurrence. They were orbiting faster than what we are actually seeing. The point was made that the actual equation governing orbital phenomena in the early universe contains two parameters, as in equation (4) on page 427 of the Monograph. One parameter is the gravitational interaction, the other is the electric and magnetic effects of highly charged bodies. For distant binaries, such as those in the Magellanic Clouds or in the Andromeda or Triangulum galaxies, the electromagnetic terms were over-riding the effects of gravity and giving faster orbit times.

The same basic principle applies to planetary orbits. However, we are currently only able to detect these planets out to relatively close distances. In this context, the speed of light has not significantly altered astronomical phenomena closer than about 4500 light years. It is only at greater distances than this that we expect the electromagnetic terms in the equations to become prominent. Thus the orbital times of many of the exo-planets will not be greatly shorter than what we are seeing. However, the further out we go, the more the slow-motion effect will become apparent. Thus, at great distances, orbiting planets will be under the control of both electromagnetic as well as gravitational influences and so orbit faster. A very distant planet orbiting a star in one actual year back then would seem to us to take a longer time, and so would seem to be located far from the parent star.

It may be for this reason that early civilizations had orbit times for the earth of 360 days compared with our current year of 365.25 days.

 

Electrostatic Attraction

Question: Do you think the voltage difference between the sun and the planets leads to electrostatic attraction?  Could that be one factor that speeds up Mercury's orbit?  
     One thing that doesn't quite make sense: if current is flowing in one direction (outwards), how is the sun's supply of positive charge replenished?
    Also, you once mentioned that gravity can be explained by secondary electric fields even between neutral particles.  Do you have a link to the paper or an explanation of that?

Setterfield: In the early days of our universe (and our solar system), voltages were higher and currents stronger. As a result electric phenomena were more prominent.  There was stronger electrostatic attraction. Today, we still see that with comets which come in from the outermost parts of the solar system and are thereby strongly negatively charged. However, the acceleration of Mercury's orbit today can be explained by the Cosmology and the Zero Point Energy as you will find in chapter 7 on the ZPE and relativity. Nevertheless, as we look back in time (that is out into space) the effects of a higher ZPE will become more prominent. Thus there is an equation in the Appendix section on page 427 that shows how stars (or planets) will circle each other under electric effects as well as gravity.

In Chapter 7 there is also a full discussion on gravity and mass, and also earlier in Chapter 4, which answers your question about gravity arising from secondary electric fields.

Your question about how the sun's supply of charge is replenished deals with the solar circuit under electric and magnetic effects. That circuit is reproduced in the attached diagram. There, remember that the heliospheric current sheet is the component that flows out from the sun's equator to join the interstellar field, and the incoming current along the plasma filament from which the sun was formed enters and exits at the poles.

solar current circuit

 

Cosmic Ripples from the Birth of the Universe? -- March 26, 2014

We have received numerous inquiries about the recently published material regarding the BICEP2 discoveries about the early universe.  The following links are good examples of what has been in the news.

Scientists reveal ‘major discovery’ at Harvard-Smithsonian Centre for Astrophysics
Scientists find cosmic ripples from birth of universe

Several days ago I took the articles about this and discussed them with my astronomy class of high school seniors. We went over the articles together and noted the following sequence of reasoning:

1. What has been found is a polarization of the Cosmic Microwave Background Radiation (CMBR) in a specific pattern. That is the hard and only observational fact on which everything else in those articles is based.

2. This polarization is then claimed to result from "ripples in space-time" or "gravitational waves" caused by the expansion process.

3. In turn, this expansion process (inflation) is meant to come from the action of the cosmological constant which might be considered to act like gravity in reverse. There is some specialized new physics that allowed this conclusion to be drawn from the polarization data.

4. The conclusion is then that this polarization is "the result of ripples in space-time (gravitational waves) caused by the rapid expansion of the universe" that was caused by the action of the cosmological constant.

My comments to the class, after much discussion went along these lines:

1. Polarization has indeed been found in the CMBR. The same type of polarization was found a year ago at a different angular scale. Because it was not at the right scale it was dismissed. That was attributed to gravitational lensing by massive objects since this can produce exactly the same sort of polarization even if no gravitational waves are present. So the action of gravitational waves is not unique in causing this polarization.

2. The use of the cosmological constant to cause expansion or inflation, and the gravity waves resulting from it, is highly suspect. Observational data all show that the size of the cosmological constant must lie between zero and one. In contrast, theory requires it to be 10^120 times stronger. This colossal mis-match between data and theory is a serious anomaly. The problem is not resolved by saying that it must be as large as that because of the "ripples in space-time" in the CMBR. Other evidence negates that. The polarization might have some other cause.

3. Another cause, apart from the gravitational lensing effect already noted, is the action of plasma filaments, whose incipient presence can be discerned in the CMBR. Plasma filaments themselves can cause lensing of light which might otherwise be attributed to gravitational effects. However, plasma filaments have Birkeland currents, and it has been demonstrated that these Birkeland currents in filaments also produce polarization of light and electromagnetic waves. So other options exist for explaining the polarization.

4. Several commentators have noted that it might be wise to obtain some reproducible results before any conclusions are drawn.

Further Note:
After looking over some comments by one of the authors of the Report (Building BICEP2) that we had been discussing, I felt that another point had to be made, namely the "necessity" for "inflation" that the scientific mainstream needed in order to overcome some huge problems.

In relation to this discovery, Professor Jamie Bock (who was part of the team measuring the polarization) had this to say:

"This signal is an important confirmation of key aspects of the theory of cosmic inflation, about how the universe may have behaved in the first fractions of a second of its existence to create the universe we live in today. Inflation was first proposed in 1980 by Alan Guth, a theoretical physicist at the Massachusetts Institute of Technology (MIT), to explain some unusual features of our universe, especially its surprising homogeneity. For all the clumping of stars and galaxies we see in the night sky, the universe seen through the CMB is extremely uniform—so much so that it has been difficult for physicists to believe that the various pieces of the sky were not all in immediate contact with one another at an earlier point in the universe's development."

This comment makes it plain why the whole idea of cosmic inflation was proposed in the first place. It came as a suggested answer to a set of problems that astronomers face. The observations of the Cosmic Microwave Background Radiation (CMBR) indicate that there was a uniformity of temperature. Indeed, observation indicates that there is also a uniformity in structure, so that, no matter what direction we look in, the same basic picture presents itself. For this to be the case, it requires that all parts of the universe were in contact with each other until the time of the formation of the CMBR. This is currently impossible because the rate of transmission of such information is governed by the speed of light. Uniformity of temperature in a large cavity is only possible if all the temperature radiation in the cavity comes into equilibrium.

Because the universe is such a large place, and the speed of light is currently so low, it is impossible for the initial universe to have come to some equilibrium state under those conditions. Thus "inflation" was proposed by Alan Guth as the answer. In this approach the inflationary expansion was so rapid, and so early, that all the information was maintained intact without significant variation. What is claimed by those scientists who have been involved with measuring the polarization of the CMBR is that these measurements show that inflation actually occurred, and hence Alan Guth's suggestion is correct. Our earlier comments call this claim into question. If inflation did not occur, then how did the universe temperature etc. remain so uniform?

The answer is that several groups of scientists have looked at other proposals, among them Albrecht, Magueijo, Barrow and Davies. The proposal was that the speed of light was much faster in the early universe than now, and that, as a result, all parts of the cosmos remained in contact with each other. Unfortunately, these cosmologists have adopted a minimalist position in which they have tried varying only the speed of light in their equations, and have not looked at other associated constants which would vary simultaneously.

The work done on the increase of the vacuum Zero Point Energy (ZPE) shows that a number of other constants will indeed be varying at the same time. Thus, as the ZPE strength increased, Planck's constant, h, also increased in direct proportion. However, as the ZPE strength increased, it can be shown that the speed of light, c, decreased in an inverse fashion. The result is that the quantity hc is an absolute constant. It is rather like the number 12: it can be made up of 12x1 or 6x2 or 4x3, but the result is always the same. These physicists were measuring the complete quantity, hc, and trying to find whether the speed of light varied. They failed because they did not consider that h would vary inversely with c. One of the quantities measured at great distances is the fine structure constant, in which a change in the speed of light was looked for. However, the quantity hc occurs in the formulation of the fine structure constant, so the outcome is that the fine structure constant as a complete entity will not show any change. Indeed, Lineweaver has looked at this constant and has not detected any substantial change.
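Here is a minimal numerical sketch of that point: if h rises by some factor while c falls by the same factor, the product hc, and hence the fine structure constant (which contains hc in its denominator), is unchanged. Purely for illustration, the electron charge and the vacuum permittivity are held fixed:

    # hc, and alpha = e^2 / (2 * eps0 * h * c), stay fixed when h and c vary
    # inversely. e and eps0 are held fixed purely for illustration.
    h    = 6.626e-34    # J s, present Planck's constant
    c    = 2.998e8      # m/s, present speed of light
    e    = 1.602e-19    # C
    eps0 = 8.854e-12    # F/m

    def alpha(h_val, c_val):
        return e**2 / (2 * eps0 * h_val * c_val)

    for k in (1.0, 10.0, 1.0e6):       # k = illustrative scale factor
        h_then, c_then = h / k, c * k  # lower ZPE in the past: smaller h, faster light
        print(f"k = {k:g}:  hc = {h_then * c_then:.3e}   alpha = {alpha(h_then, c_then):.6f}")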

In contrast, the ZPE research provides a potentially viable answer to these problems since it has an initially extremely high speed of light and a very low value for h, because the strength of the ZPE was low as expansion began. This allowed all parts of the universe to maintain contact. Inflation seems to be a more complicated way by comparison, and one that has a number of weaknesses. Therefore, an approach with the ZPE and plasma physics seems the better way to overcome all the observational difficulties in a fairly simple manner.

Pole Star and Axis Tilt

Question: I just found the very interesting article by G. F. Dodwell on your website. I googled "earth's axis inclination upright", hoping to find information on the Pole Star before the tilt.

Regular science says it always rotated around Draco, but I wonder which would have been the Pole Star,
if the earth's axis was upright (North-South = 0°)?

Hope your time allows you to reply, and maybe you have an idea where to search.

Setterfield: Many thanks for your email and question; it is an important one.

In order to find the pole star at any time, several things need to come together.
First, if the earth's axis were exactly upright with no tilt, then the plane of the earth's orbit would be the same plane as its equator. This orbital plane is called the ecliptic. The pole of the ecliptic plane is therefore where the earth's axis would be pointing if it were perfectly upright at any time. The north pole of the ecliptic plane is marked on most star maps. It is in the constellation of Draco, between the stars gamma and zeta of that constellation.

The original tilt of the axis was probably not perfectly vertical. But in order to discover the original tilt of the earth's axis we take a lesson from the natural satellites of Jupiter and Saturn. Each one of them which was not a captured moon is moving in the plane of the equator of its parent planet. If we make the assumption that the moon and earth were formed together out of a plasma filament, the same situation will apply. We therefore look at the tilt of the Moon's orbit relative to the ecliptic, and that will give us the original angle of tilt of our equator relative to the ecliptic. That is the same angle as the axis tilt relative to the ecliptic pole. When that is done, it is found that the earth's original axis tilt was 5.15 degrees. Thus there would have been very gentle seasons, spring and autumn, on the early earth.

This 5 degree tilt would have formed a precession circle around the ecliptic pole with a total diameter of 10 degrees. This was still well within the constellation of Draco, and the star zeta Draconis may have been the pole star at one stage, as it is right on that circle. On the opposite side, the star delta Draconis is a little further out than the circle, but would have been an approximation to the pole star.

After a series of impact events, the earth's axis changed its tilt dramatically. The second last of these was the series of impacts which wiped out the dinosaurs. These impacts caused a higher axis tilt than now, which ultimately resulted in an ice-age.

From the Dodwell recovery curve, it seems that the axis tilt was up to 3, or perhaps a maximum of 4, degrees greater than now at the time of the ice-age. This means that the precession circle around the ecliptic pole was 3 or 4 degrees further out than now.

Since we have historical records, it seems that the swing over with the Dodwell event in 2345 BC occurred when the star alpha Draconis, Thuban, was near the pole. This is the only star of prominence in the entire area. The change in position in the heavens of Thuban by several degrees would have been noticeable, but not dramatic. Since then, the precession has carried the position of the pole counterclockwise from Thuban onto Polaris.

I hope that is some help. Please get back to me if you require further information.

Further Question: How did you find out about zeta Draconis at 5° inclination?

Setterfield: The answer is fairly simple. You can do this yourself. If you find a star map which has hours around the equator and degrees vertically from equator to poles, look for the label Draco. Within Draco, find the North Pole of the Ecliptic - it should be shown. If the earth's axis were vertical, that is the place it would point.  If it is 5 degrees from vertical, the circle of precession will be around that north pole of the Ecliptic, and the radius of the circle will be 5 degrees. Get a compass, set the tines 5 degrees apart on the scale of your map, and draw a 5 degree circle around the north pole of the Ecliptic. You will see it goes through zeta Draconis.

Similarly, for an axis tilt of 23.5 degrees, the tines on the compass will be 23.5 degrees apart and the precession circle will again be around the north pole of the Ecliptic, only in this case it will also go through Polaris. For the larger tilt before 2345 BC, you take a circle around the same point with wider tines. The historical position of the axis pole star makes it certain that, around that date, Thuban was involved. So as you look at the two relevant circles, you will see that Thuban is not far from either of them. So the switch from one to the other at that time still retained the star in the approximately correct position.

I do not know of a computer package which allows you to do this without the map and compasses or I would have let you know. But this way, you are getting the basic astronomy behind it as well, which I encouraged my students to do.
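That said, for anyone comfortable with a little programming, the same check can be made numerically with the standard spherical-astronomy formula for angular separation. The star positions below are approximate present-day catalogue values, quoted here only for illustration:

    import math

    def separation_deg(ra1, dec1, ra2, dec2):
        # Great-circle separation (degrees) between two positions given in degrees.
        ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
        cos_d = (math.sin(dec1) * math.sin(dec2) +
                 math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
        return math.degrees(math.acos(cos_d))

    ecliptic_pole = (270.0, 66.6)   # north pole of the ecliptic, in Draco (approx.)
    stars = {
        "zeta Draconis":           (257.2, 65.7),
        "Thuban (alpha Draconis)": (211.1, 64.4),
        "Polaris":                 (37.95, 89.26),
    }

    for name, (ra, dec) in stars.items():
        sep = separation_deg(ra, dec, *ecliptic_pole)
        print(f"{name}: about {sep:.1f} degrees from the ecliptic pole")
    # zeta Draconis sits near the 5-degree circle, Polaris near the present
    # 23.5-degree circle, and Thuban close to the present circle and not far
    # from the wider pre-2345 BC circle, as described above.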

I hope that answers your question satisfactorily. If not, or if you have further queries, please get back to me.

When is Virgo visible?

Question: When is the constellation Virgo first visible in the Northern and Southern Hemispheres, and when is it last seen in both?
If you could give me accurate dates or months, it would be appreciated.

Setterfield: In reply, let me state that the constellation Virgo is visible on all nights of the year except when the sun is in that part of the sky. This year, the Sun enters the officially designated limits of Virgo on September 17th and leaves that constellation on October 31st.  A few days before September 17th, Virgo will be visible just after sunset in the west, while a few days after October 31st it should emerge in the east just before the glare of the rising sun. These dates should hold for about a decade or so.

Remember one thing, however; these dates will change with time. They were not applicable about 2000 years ago because of the precession of the equinoxes. So we cannot apply these dates to events such as the Christmas Star. In order to overcome this problem, you need a planetarium type program where the clock can be rolled back. Such programs are available for computers; one is called "Starry Night".
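To get a feel for how much those dates drift, note that the equinoxes precess by roughly 50.3 arcseconds per year, while the Sun moves roughly one degree along the ecliptic per day; over 2000 years that works out to about a month's shift in the calendar dates:

    # Back-of-the-envelope estimate of how far the Sun's dates in a constellation
    # drift because of precession. Values are standard approximations.
    PRECESSION_ARCSEC_PER_YEAR = 50.3
    years = 2000

    shift_degrees = PRECESSION_ARCSEC_PER_YEAR * years / 3600.0
    sun_degrees_per_day = 360.0 / 365.25    # Sun's mean motion along the ecliptic

    print(f"Shift over {years} years: {shift_degrees:.1f} degrees,"
          f" or roughly {shift_degrees / sun_degrees_per_day:.0f} days")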

The Big Bang and Star Formation

Question: Can you please give me some references to the Big Bang processes whereby stars are formed and the problems they have?

Setterfield: First, let us get the order of events correct.

To begin, there is the Big Bang, which theory claims originated the universe. In this process, the elements hydrogen and helium and a bit of lithium and beryllium were formed, as quoted below (emphasis mine) from Wikipedia:

“Primordial nucleosynthesis is believed by most cosmologists to have taken place from 10 seconds to 20 minutes after the Big Bang, and is calculated to be responsible for the formation of most of the universe's hydrogen and helium as the isotope helium-4 (4He), along with small amounts of the hydrogen isotope deuterium (2H or D), the helium isotope helium-3 (3He), and a very small amount of the lithium isotope lithium-7 (7Li). … Essentially all of the elements that are heavier than lithium and beryllium were created much later, by stellar nucleosynthesis in evolving and exploding stars.”

So they then need massive stars to build up the other elements and explode these elements out into space. It is only after that process had occurred that complex molecules could form. These are molecules such as carbon monoxide CO, water H2O and so on.

The usual model of star formation is to have a cloud of gas collapsing under its own gravity. The basic problem is that such a collapse will heat up the cloud as gravitational energy is released. This heat re-expands the cloud and so prevents a star from forming. The way that the problem is overcome is to use some process to cool the cloud by radiating away the heat. This can happen if there are complex molecules which radiate the heat away in the infra-red.  This URL mentions that CO molecules are needed to cool the clouds to form protostars. The reason why is often not mentioned.

As they note, the CO molecule is the most active in this cooling process, so the elements carbon and oxygen need to exist. This URL discusses the cooling in molecular clouds and lists the molecules which can do it (see under the slide headed "Cooling"). Again, the CO molecule is the most efficient one for the task.

So the elements carbon and oxygen need to be in place to do the cooling required to form the stars. But the first stars only had hydrogen and helium in their gas clouds, as other elements cannot form from the Big Bang process. So this process is of absolutely no use in getting the first stars to form. And until the first stars are formed, which manufacture the other elements within their cores by nucleosynthesis and then explode them out into space, there are no complex molecules which can act to cool the clouds to form other stars.

Another article can be found here.

This one tries to avoid the problem with cooling molecules by proposing that the pressure from a nearby supernova will cause the clouds to collapse, contract and form stars. But a supernova cannot occur until a star has already formed. It is still a matter of debate, though commonly stated, that such supernovae can actually cause the clouds to contract. Even so, if it does, the process still needs another star (a supernova) to be in existence before it can get the clouds to collapse. So basically the Big Bang has a problem with getting the stars to form, at least the first ones which were meant to get the process started. If those stars cannot form, neither can the other elements which are needed to form complex molecules to form the later generations of stars.

Erratic Star

Question: Recently I read an article about a star whose light output was so fast that some astronomers were claiming there was some kind of structure around the star which would collect energy for a civilization. What is your opinion about this?

Setterfield: I have caught up with the report about the erratically blinking light from a star. The issue is not so much that the blinking is fast as that it is irregular. Pulsars blink at a very fast rate, and that is not a problem because their blinking is regular. The star under consideration is irregular, and furthermore its light intensity varies considerably. There are a number of suggestions. It could be surrounded by an orbiting disk of debris which has clumps of material in it of various sizes and varying distribution. It is rather like a cloud or clouds of cometary or asteroidal material orbiting the star at various distances. As seen from earth, the timing of these transits across the star's face is unpredictable, so the light is dimmed in an unpredictable way.
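As a rough illustration of why clumps of different sizes give dips of different depths, an opaque clump passing in front of the star blocks a fraction of the light roughly equal to the ratio of the two areas. The clump and star sizes below are arbitrary assumed values used only for the example.

# Sketch: fractional dimming when an opaque clump transits the star's disk.
# The dip depth is roughly (clump radius / stellar radius) squared, so clumps of
# different sizes crossing at unpredictable times give an irregular light curve.
def dip_depth(r_clump_km, r_star_km=700_000.0):   # star radius assumed roughly solar
    return (r_clump_km / r_star_km) ** 2

for r in (7_000, 70_000, 140_000):                # assumed clump radii in km
    print(f"clump radius {r:>7,} km -> blocks about {dip_depth(r) * 100:.2f}% of the light")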

Because of the conjectural nature of these and other explanations, some have gone to the extreme of suggesting that life-forms have erected structures orbiting the star to collect its light and heat as energy for their civilization. I do not see that as a viable option: it seems very far-fetched, but the press likes to jump on such things because they make a good story, and so they get a lot of attention on the net.

Arp and the Red Shift

Question: Your ZPE explanations make sense, but it would be nice to have some experimental data such as the 1987 supernova, which unfortunately didn't help very much.  The speed of light has also inconveniently flattened out.  I was thinking about the red shift.  I think people have observed galaxies with two different red shifts and some that became more blue over time.  This is pretty good evidence.  Do you know where I can find the sources for those observations?

Also, this web page below seems to suggest that galaxies at the edges of galactic clusters have higher red shift.  I think it makes sense if the ZPE is somehow stronger in the middle of the cluster.  Is that correct?  What would cause that?  Electric fields?  Gravity?

http://www.bibliotecapleyades.net/ciencia/esp_ciencia_haltonarp.htm

btw, thanks for the LIGO / GBM article.  If the waves are caused by the changing gravity of moving objects, those objects must have been rotating at extremely high revolutions per minute, which is probably why only something like a neutron star or black hole would be considered.  If we assume a slow motion effect, the problem gets worse.  But since you've introduced me to the electric universe, it's hard to believe in neutron stars and black holes.  It will be interesting when more detectors come online.

Setterfield: Many thanks for your email and its questions.

First you ask about the statements that some galaxies have shown a redshift that reduced by one quantum jump over time. This information can be found in a long article by W.G. Tifft in the Astrophysical Journal Vol.382 (1991) page 396 and following. The evidence presented there is extensive.

Second, you asked about the distribution of redshift within clusters of galaxies. You made the comment: "Also, this web page below seems to suggest that galaxies at the edges of galactic clusters have higher red shift.  I think it makes sense if the ZPE is somehow stronger in the middle of the cluster.  Is that correct?  What would cause that?  Electric fields?  Gravity?"

You referred me to this article about Arp's work: 
http://www.bibliotecapleyades.net/ciencia/esp_ciencia_haltonarp.htm

However, the comments there are misleading, as they rest on an interpretation of what Arp assumed a cluster to be. In fact, one comment is simply incorrect. The comment reads as follows:

"Now let’s look at a galactic cluster in the non-Big Bang universe. Let’s assume (as Halton Arp’s observations seem to suggest) that a galactic cluster is a family of galaxies and quasars and gaseous clouds of varying redshifts. At the center, we find a dominant galaxy - it’s usually the largest galaxy, and the galaxy with the lowest redshift of the cluster. This dominant galaxy is surrounded by low-to-medium redshift galaxies, and toward the edges of the cluster we find the highest redshift galaxies, HII regions, BL Lac objects and quasars."

The statement that the dominant galaxy is at the center of the cluster and is usually the largest galaxy is correct. What is not correct is the statement that it has the lowest redshift.  The large central galaxy usually has a redshift which is near the average value for the whole cluster. Those galaxies closer to us usually have lower redshifts and those further away have higher redshifts. This is why Tifft noticed redshift "bands" going through the whole Coma Cluster. Arp's interpretations seem to have confused the commentators in that article. Since these comments were the source of your confusion and hence the question, this should clarify the situation for you.

How are Star Distances Measured?

Question: I have a question about the distances of stars across the universe. If we measure distance by light speed (for example a star is 33 light years from earth) and the speed of light has changed (slowed as you teach)...it seems to me that the universe is bigger than we think because light travelled faster (further) for a time?

How do we estimate distance in space?

I just cannot quite get how this works. However, if we know the red shift, and thus we know the time from the Big Bang, and the universe is relatively static (or like a jello mold), then shouldn't we be able to calculate the distance?

Setterfield: Distances are not measured by light speed, or even primarily by redshift.  To explain this:

First of all, we can measure the distances of nearby stars (those in our own galaxy) by parallax.  The way we do this is to measure the position of the star when the earth is on opposite sides of its orbit (spring/fall or summer/winter).  We measure the apparent angular difference between the two positions against the background of more distant stars.  Since we know the diameter of the earth's orbit and we can measure the angle that shows the difference in the two positions, simple trigonometry gives us the distance of the star in question. 
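As a worked illustration of that trigonometry (the parallax angle used below is just an assumed value): the radius of the earth's orbit and half the measured angular shift form a long, thin right triangle, and for such small angles the arithmetic reduces to a simple reciprocal.

# Sketch: distance from trigonometric parallax.
# With the parallax angle p in arcseconds (half the total apparent shift measured
# six months apart), the distance comes out directly in parsecs: d = 1 / p.
PARSEC_IN_LIGHT_YEARS = 3.26   # one parsec is about 3.26 light years

def distance_light_years(parallax_arcsec):
    return (1.0 / parallax_arcsec) * PARSEC_IN_LIGHT_YEARS

# Assumed example: a star showing a parallax of one tenth of an arcsecond:
print(distance_light_years(0.1))   # about 32.6 light years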

Some of the stars we can measure in this way are called Cepheid Variables.  A Cepheid is a star whose light output changes in a regular fashion, and we have found that the period of variability of these stars corresponds to their intrinsic brightness.  The brighter they are, the slower the variability rate (pulsation rate).

Since we know the distance of these closer Cepheid Variables, we can pick out more distant Cepheid Variables in distant galaxies.  We can see their pulsation rate and thus know their actual brightness.  But we see them at a certain apparent brightness here on earth, and so we can measure their distance by comparing the two. 

That takes us out quite a ways.  But there is space and stars and galaxies beyond that! 

In addition to Cepheid Variables, there are some stars which are exceptionally bright when they explode:  supernovae.  Some special supernovae have a known intrinsic brightness.  We can tell what sort of supernova it is by its light curve.  These special supernovae are so bright that we can see them right at the frontiers of the universe.  When we see them in very distant galaxies, we can measure the distance by their apparent brightness. 
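The step from apparent brightness to distance is the same for the Cepheid Variables and for these special supernovae. Here is a minimal sketch of the inverse-square relationship involved, usually written with magnitudes as the distance modulus; the magnitudes used are assumed example values only.

# Sketch: standard-candle distance from the inverse-square law.
# Knowing the intrinsic (absolute) magnitude M from the pulsation rate or the
# light curve, and measuring the apparent magnitude m here on earth, gives
#     m - M = 5 * log10(distance in parsecs / 10)
def distance_parsecs(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Assumed example: a Cepheid whose period implies M = -4, seen at m = 16:
print(distance_parsecs(16.0, -4.0))   # 100,000 parsecs, about 326,000 light years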

The redshift/distance relationship is THEN calibrated by the Cepheid Variables and the special supernovae.  The redshift then simply serves as a cross check for very distant objects. 

I hope that helps.

Evidence for an Expanding Universe

Question: Other than the Bible (which is of course the best source), if you take the red shift data away, is the CMB the only remaining generally cited evidence for the "expanding" universe?

Setterfield: The expansion of the universe is indicated by three sources.

(1) The redshift. There are two accounts given here; if one fails, the other is used. The first account is that it is due to a Doppler shift from galaxies racing away from us through static space-time. The second account is that it is due to the expansion of the fabric of space-time, with the galaxies being carried along with it. Since light in transit will have its wavelengths lengthened as it travels through expanding space-time, you will get a redshift (see the sketch at the end of this answer). Both accounts fail if the redshift is quantized. The authors of the massive book "Gravitation", Misner, Thorne and Wheeler, trash the idea of Doppler velocities because the effects of such extreme motion would disrupt the galaxies, and that is not happening (see page 767). They also show that the high redshifts cannot be gravitational in origin, as redshifts greater than z = 0.5 are unstable against collapse in that case. As for the second account, it has been demonstrated that, if lightwaves in transit are stretched, then the stretching will also affect atoms and ultimately galaxies, since gravity is not strong enough to stop expansion, even within galaxy clusters. This eliminates both accounts, and the redshift as evidence for expansion along with them.

(2) The second source for the idea of initial expansion of the cosmos is the CMBR. Note that this only indicates that initial expansion occurred and that the final temperature was about 2.73 K. It does nothing to prove that the expansion lasted the lifetime of the universe, only that it initially occurred.

(3) Hydrogen cloud data give the third option. The basic proposition goes like this: there are hydrogen clouds more or less uniformly distributed throughout the cosmos. If the cosmos is expanding, these clouds should be getting further and further apart. Therefore, when we look back at the early universe, that is, when we look at very distant objects, we should see the hydrogen clouds much closer together than in our own region of space. If the universe is static, the average distance between clouds should be approximately constant. We can tell when light has gone through a hydrogen cloud by the Lyman Alpha absorption lines in the spectrum. Each cloud leaves its signature in the spectrum of any given galaxy. The further away the galaxy, the more Lyman Alpha lines there are, because the light has gone through more hydrogen clouds. These lines sit at positions in the spectrum corresponding to each cloud's redshift, as illustrated in the sketch at the end of this answer. Thus, for very distant objects, there is a whole suite of Lyman Alpha lines starting at a redshift corresponding to the object's distance and continuing at reducing redshifts until we come to our own galactic neighborhood. This suite of lines is called the Lyman Alpha forest.

The testimony of these lines is interesting. From a redshift of z = 6 down to a redshift of about z = 1.6, the lines get progressively further apart, indicating expansion. From about z = 1.6 down to z = 0, the lines are essentially a constant distance apart. This indicates that the cosmos is now static after an initial expansion, which ceased at a time corresponding to a redshift of z = 1.6.
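As a small illustration of the quantities in points (1) and (3) above (the recession speed, scale factors and cloud redshifts below are arbitrary assumed values, not measurements): the redshift z is the fractional stretching of the wavelength, the two accounts differ only in what they attribute the stretching to, and each intervening hydrogen cloud imprints its Lyman Alpha line at its own redshifted wavelength.

# Sketch of the quantities in points (1) and (3); example values are assumptions.
from math import sqrt

# The measured redshift is the fractional stretching of the wavelength:
#     z = (observed wavelength - emitted wavelength) / emitted wavelength

def z_from_doppler(v_over_c):
    # Account 1: recession through static space-time (relativistic Doppler shift).
    return sqrt((1 + v_over_c) / (1 - v_over_c)) - 1

def z_from_expansion(scale_now, scale_then):
    # Account 2: the fabric of space-time stretches while the light is in transit.
    return scale_now / scale_then - 1

print(z_from_doppler(0.1))           # about 0.106 for a galaxy receding at 10% of c
print(z_from_expansion(1.0, 0.5))    # z = 1 if space has doubled in scale since emission

# Point (3): each hydrogen cloud along the line of sight imprints a Lyman Alpha
# absorption line (rest wavelength about 1216 Angstroms) at 1216 * (1 + z) for its
# own redshift z, which is what builds up the "forest" of lines.
LYMAN_ALPHA_REST = 1216.0            # Angstroms
for z_cloud in (0.2, 0.8, 1.6, 3.0): # assumed clouds between us and a distant object
    print(f"cloud at z = {z_cloud}: line at {LYMAN_ALPHA_REST * (1 + z_cloud):.0f} Angstroms")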