The Redshift
Setterfield: In the simplest terms, 'redshift' is a term used to describe the fact that the light seen from distant galaxies shows up a little differently than it does here on earth. Each element has a 'fingerprint' in light. This is how we know which elements are in which stars. There is a certain pattern of lines associated with each element which identifies it, something like the bar codes you see on products in stores. Each pattern or 'bar code' of dark lines shows up with a spectrometer at a specific place along the rainbow of visible light. However, as we get further and further out in space, these identifying lines, while keeping the same identifying patterns for each element, appear shifted somewhat to the red end of the spectrum -- thus causing the light to appear redder than it would be here on earth. A somewhat more technical explanation can be found in the article Is the Universe Static or Expanding?
Setterfield: from Helen: The further out in space you go, the further back in time you go. So the further out we look, the more ancient the age we are viewing. The redshifting of light increases with distance, so the higher the redshift, the further back in time we are seeing. The argument is over how long ago it is that we are looking at. This is where Barry's research comes in. from Barry: The missing component here is the Hubble Constant, which converts redshift measurements to distance -- and therefore age -- given a constant value for the speed of light, and presuming that redshift is a measure of expansion. Consequently, the value of the Hubble Constant has been a very hot topic in astronomy. There are two major camps. One favours a high value (about 65 km/sec/megaparsec), which results in a younger universe; the other favours a low value (about 55 km/sec/megaparsec), which results in an age of around 20 billion years. To this end, the Hubble Space Telescope has been of some help, giving a value of the Hubble Constant of around 65 km/sec/megaparsec, which translates into a universe age of around 12-14 billion years. As a consequence, discrepancies in the Hubble Constant are very vigorously 'discussed.' In addition, anything which decouples the link between redshift and expansion is viewed with horror, as both of these factors link to the age of the universe through the Hubble Constant.
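The arithmetic behind this answer can be sketched in a few lines of Python. This is an illustration only: it assumes the conventional low-redshift relation v = cz = H0 x d and takes the "age" to be the simple Hubble time 1/H0, ignoring the model-dependent corrections that bring the quoted figure down to 12-14 billion years.

```python
# Sketch: converting a redshift to distance and a rough "Hubble age",
# assuming the conventional v = c*z interpretation (valid only for
# small z) and a constant speed of light.

C_KM_S = 299_792.458     # speed of light, km/s
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one billion years

def distance_mpc(z: float, h0: float) -> float:
    """Distance in megaparsecs from redshift z, for a Hubble constant
    h0 in km/s/Mpc, using the low-z approximation v = c*z = h0*d."""
    return C_KM_S * z / h0

def hubble_time_gyr(h0: float) -> float:
    """The 'Hubble time' 1/H0 in billions of years -- the simplest age
    estimate; real cosmological models modify this figure."""
    return KM_PER_MPC / h0 / SEC_PER_GYR

# A high H0 gives a younger universe, a low H0 an older one:
print(round(hubble_time_gyr(65.0), 1))   # ~15.0 Gyr before corrections
print(round(hubble_time_gyr(55.0), 1))   # ~17.8 Gyr
```

This makes visible why the two Hubble-constant camps imply different ages: the age estimate scales inversely with H0.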
Setterfield: Yes! According to the Friedmann model of the universe, which is basically Einsteinian, as space expands, the wavelengths of light in transit become stretched also. This is how the redshift of light from distant galaxies is accounted for by standard Big Bang cosmology. The reference is correct, but any serious text on the redshift will give the same story. This does not serve our theory except for one point. The redshift has been shown by Tifft to be quantised. It goes in jumps of about 2.7 km/s. It is very difficult to account for this by a smooth expansion of space. Alternatively, if the quantisation is accepted as an intrinsic feature of emitted wavelengths (rather than wavelengths in transit), it means that the cosmos cannot be expanding (or contracting) as the exact quantisations would be "smeared out" as the wavelengths stretched or contracted in transit.
Setterfield: Yes, all galaxies outside of our local group of galaxies show a redshift. This redshift increases systematically with distance. The current mainstream explanation is that it is due to the expansion of the universe, where light waves in transit are stretched, or lengthened, giving rise to a redder colour. However, this explanation has a problem with the quantized redshift measurements first made in 1976 by William Tifft and since confirmed by a number of others. These astronomers have shown that the redshift is not changing smoothly, as expansion would have produced. Instead the measurements are going in a series of jumps. I have explained this more fully below. If something I have said there confuses you, please feel free to write back.
Setterfield: Thank you for your additional enquiry. Your "lack of formal education" is, perhaps, adding to your common sense. As far as the redshift is concerned, it is all inclusive, and it is progressive. That is to say, galaxies further away have a higher redshift. The further away, the greater the redshift. A good picture of what is being envisaged by astronomers is to blow up a balloon. If there were evenly spaced spots marked on the balloon, and a particular one was watched as the balloon expanded, it would be noticed that every spot moved away from every other spot. The further away they were, the greater the distance each one had moved in a given time. This is the picture that astronomers have of the expanding universe. Now there are two ways to achieve this result. First -- as Einstein proposed -- the fabric of space is static and the galaxies are moving through it. This would give rise to a Doppler shift. Second, space itself is expanding and carrying the galaxies with it, as Lemaître proposed. In this case, light waves are stretched as they travel through an expanding space, and the redshift arises from this stretching. This may give you a better idea of what has been proposed by astronomers.
Setterfield: When we refer to a series of measurements being quantized, we are referring to the fact that they are showing up in jumps and not as a smooth, continuous function. It would be as if an accelerating car were seen as going 5 mph, then 10 mph, then 15 mph, and so on, but not at any speeds in between. This sort of series of jumps in the redshift measurements has been recorded. It would be expected that they should be like a car when it is accelerating: showing a smooth series of measurements. But this is evidently not what the data is showing. It is for this reason that the assumption of an expanding universe based on redshift measurements may be false. Could the universe expand in jumps?
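The car analogy can be made concrete with a small sketch. The numbers and the tolerance below are invented for illustration; real redshift work uses formal statistics, but the idea of "values falling only near multiples of a basic step" is the same.

```python
# Sketch: what "quantized" means for a set of measurements. Values that
# cluster at multiples of a basic step q have small residuals (mod q),
# unlike smoothly distributed values. The numbers are invented.

def residuals(values, q):
    """Distance of each value from the nearest multiple of q."""
    return [min(v % q, q - (v % q)) for v in values]

def looks_quantized(values, q, tol):
    """Crude test: every value lies within tol of a multiple of q."""
    return all(r <= tol for r in residuals(values, q))

speeds = [5.1, 9.9, 15.2, 20.0, 24.8]  # like a car seen only near multiples of 5
print(looks_quantized(speeds, 5.0, 0.3))             # True
print(looks_quantized([5.1, 7.6, 15.2], 5.0, 0.3))   # False: 7.6 falls between steps
```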
Is the Redshift Really Quantized? Setterfield: A genuine
redshift anomaly seems to exist, one that would cause a re-think about
cosmological issues if the data are accepted. Let’s look at this for just a
moment. As we look out into space, the light from galaxies is shifted towards
the red end of the spectrum. The further out we look, the redder the light
becomes. The measure of this redshifting of light is given by the quantity z,
which is defined as the change in wavelength of a given spectral line divided by
the laboratory standard wavelength for that same spectral line. Each atom has
its own characteristic set of spectral lines, so we know when that
characteristic set of lines is shifted further down towards the red end of the
spectrum. This much was noted in the early 1920’s. Around 1929, Hubble noted
that the more distant the galaxy was, the greater was the value of the redshift,
z. Thus was born the redshift/distance relationship. It came to be accepted as
a working hypothesis that z might be a kind of Doppler shift of light because of
universal expansion. In the same way that the siren of a police car drops in
pitch when it races away from you, so it was reasoned that the redshifting of
light might represent the distant galaxies racing away from us with greater
velocities the further out they were. The pure number z was then multiplied by
the value of lightspeed in order to change z to a velocity. However, Hubble was
discontented with this interpretation. Even as recently as the mid 1960’s Paul
Couderc of the Paris Observatory expressed misgivings about the situation and
mentioned that a number of astronomers felt likewise. In other words, accepting
z as a pure number was one thing; expressing it as a measure of universal
expansion was something else.
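The definition of z given above can be written out as a one-line calculation. The rest wavelength used here is the hydrogen H-alpha line; the "observed" wavelength is a made-up figure purely to illustrate the arithmetic.

```python
# A minimal worked example of the definition in the text:
# z = (observed wavelength - rest wavelength) / rest wavelength.
# The H-alpha rest wavelength is 656.28 nm; the "observed" value
# below is hypothetical.

def redshift(observed_nm: float, rest_nm: float) -> float:
    return (observed_nm - rest_nm) / rest_nm

H_ALPHA_REST = 656.28              # laboratory standard wavelength, nm
z = redshift(688.0, H_ALPHA_REST)  # hypothetical observed line position
print(round(z, 4))                 # 0.0483
```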
It is at this point that Tifft’s work enters the discussion. In 1976, William Tifft, an astronomer from Arizona, started examining redshift values. The data indicated that the redshift, z, was not a smooth function but went in a series of jumps. Between successive jumps the redshift remained fixed at the value attained at the last jump. The editor of the Astrophysical Journal, which published the first article by Tifft, made a comment in a footnote to the effect that they did not like the idea, but the referees could find no basic flaw in the presentation, so publication was reluctantly agreed to. Further data came in supporting z quantisation, but the astronomical community could not generally accept the data because the prevailing interpretation of z was that it represented universal expansion, and it would be difficult to find a reason for that expansion to occur in “jumps”. In 1981 the extensive Fisher-Tully redshift survey was published, and the redshifts were not clustered in the way that Tifft had suggested. But an important development occurred in 1984 when Cocke pointed out that the motion of the Sun and solar system through space had a genuine Doppler shift that added to or subtracted from every redshift in the sky. He showed that when this true Doppler effect was removed from the Fisher-Tully observations, there were redshift “jumps” or quantisations globally across the whole sky, and this from data that had not been collected by Tifft. In the early 1990’s Bruce Guthrie and William Napier of Edinburgh Observatory specifically set out to disprove redshift quantisation using the best enlarged sample of accurate hydrogen line redshifts. Instead of disproving the z quantisation proposal, Guthrie and Napier ended up confirming it. The quantisation was supported by a Fourier analysis and the results published around 1995. The published graph showed over 60 successive peaks and troughs of precise redshift quantisations.
There could be no doubt about the results. Comments were made in New Scientist, Scientific American and a number of other lesser publications, but generally the astronomical community treated the results with silence. If redshifts come from an expanding cosmos, the measurements should be distributed smoothly, like the velocities of cars on a highway. The quantised redshifts are as if every car were traveling at some multiple of 5 miles per hour. Because the cosmos cannot be expanding in jumps, the conclusion to be drawn from the data is that the cosmos is not expanding, nor are galaxies racing away from each other. Indeed, at the Tucson Conference on Quantization in April of 1996, the comment was made that "[in] the inner parts of the Virgo cluster [of galaxies], deeper in the potential well, [galaxies] were moving fast enough to wash out the quantization." In other words, the genuine motion of galaxies destroys the quantisation effect, so the quantised redshift is not due to motion, and hence not to an expanding universe. This implies that the cosmos is now static after initial expansion. Interestingly, there are about a dozen references in the Scriptures which talk about the heavens being created and then stretched out. Importantly, in every case except one, the tense of the verb indicated that the "stretching out" process was completed in the past. This is in line with the conclusion to be drawn from the quantised redshift. Furthermore, the variable lightspeed (Vc) model of the cosmos gives an explanation for these results, and can theoretically predict the size of the quantisations to within a fraction of a kilometer per second of that actually observed. This seems to indicate that a genuine effect is being dealt with here. One basis on which Guthrie and Napier’s conclusions have been questioned and/or rejected concerns the reputed "small" size of the data set. It has been said that if the size of the data set is increased, the anomaly will disappear.
Interestingly, the complete data set used by Guthrie and Napier comprised 399 values. This was an entirely different data set from the many used by Tifft. Thus there is no 'small' data set, but a series of rather large ones. Every time a data set has been increased in size, the anomaly has become more prominent. When Guthrie and Napier's material was statistically treated by a Fourier analysis, a very prominent “spike” emerged in the power spectrum, which supported redshift quantisation at a very high confidence level. The initial study was done with a smaller data set and submitted to Astronomy and Astrophysics. The referees asked them to repeat the analysis with another set of galaxies. They did so, and the same quantisation figure emerged clearly from the data, as it did from both data sets combined. As a result, their full analysis was accepted and the paper published. It appears that the full data set was large enough to convince the referees and the editor that there was a genuine effect being observed -- a conclusion that other publications acknowledged by reporting the results. (Guthrie, B.N.G. and Napier, W.M. 1996 Astron. Astrophys. 239: 33) It is never good science to ignore anomalous data or to eliminate a conclusion because of some presupposition. Sir Henry Dale, one time President of the Royal Society of London, made an important comment in his retirement speech. It was reported in Scientific Australian for January 1980, p. 4. Sir Henry said: "Science should not tolerate any lapse of precision, or neglect any anomaly, but give Nature's answers to the world humbly and with courage." To do so may not place one in the mainstream of modern science, but at least we will be searching for truth and moving ahead rather than maintaining the scientific status quo. Quantized Redshifts and the Zero Point Energy may also be of help here, as well as Zero Point Energy and the Redshift.
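The kind of Fourier (power-spectrum) treatment mentioned above can be sketched as follows. The data here are synthetic, multiples of 37.5 km/s plus noise, not the Guthrie-Napier catalogue; the point is only to show how a genuine periodicity produces a prominent spike in power while an unrelated trial period does not.

```python
# Sketch of a periodicity test of the kind applied to redshift data:
# for a trial period q, sum unit phasors exp(2*pi*i*v/q) over the
# measurements. A strong peak in this "power" at some q signals a
# periodicity near q. The data below are synthetic, for illustration.

import cmath
import random

def power(values, q):
    n = len(values)
    s = sum(cmath.exp(2j * cmath.pi * v / q) for v in values)
    return abs(s) ** 2 / n   # ~1 for random phases, ~n for perfect periodicity

random.seed(1)
# 40 synthetic "redshifts": multiples of 37.5 km/s plus 1 km/s noise
synthetic = [37.5 * k + random.gauss(0.0, 1.0) for k in range(1, 41)]

p_true = power(synthetic, 37.5)   # at the built-in period
p_off = power(synthetic, 29.0)    # at an unrelated trial period
print(p_true > p_off)             # True: the real period stands out sharply
```

Scanning `power` over a range of trial periods is, in essence, how a "spike in the power spectrum" of the sort described is found.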
Two other interesting articles may be found at More Evidence for Galactic "Shells" or "Something Else" and Galactic Shell.
Setterfield: Firstly, the initial quantisation that Tifft picked up was around 72 km/s. Later he noticed that there was one about half that, near 36 km/s, and another around 24 km/s. Guthrie and Napier in 1991 set out to disprove the thesis using neutral hydrogen redshifts that had an accuracy of better than 4 km/s. Instead, their final assessment confirmed a prominent 37.5 km/s quantisation, supported by a Fourier analysis, which showed the significance of the quantisation to be of the order of 1 in a million [Progress in New Cosmologies, Plenum Press, 1991; Scientific American 267:6 (1992) p. 19; New Scientist, 9 July (1994), p. 17; Science 271 (1996), 759]. An assessment of this study is given by Halton Arp in Seeing Red, pp. 198-200 (Apeiron, 1998), along with Guthrie and Napier's data graphs. Meanwhile, Tifft had continued to work with data using the hydrogen 21 cm line. As B. M. Lewis of the Jodrell Bank and MERLIN network points out, the redshifts measured in this range have an accuracy of better than 0.1 km/s at a very high signal to noise ratio [Lewis, Observatory 107 (1987), 201]. Using these data Tifft determined that the earlier quantisation figures were multiples of a "presumably more basic common interval near 8 km/s." [Properties of the Redshift III, Astrophysical Journal 382:396-415 (1991), Dec. 1.] In that paper, Tifft's analyses conclude that this common interval was in fact 7.997 km/s, and that the structure of the diagrams indicated that the final quantisation figure was one third of that. This gives a value for the most basic quantisation of 8/3 = 2.667 km/s or, if the statistically treated data are accepted, 7.997/3 = 2.666 km/s. The cDK work indicates a basic quantisation of 2.671 km/s, which, when multiplied by 3, gives a value of 8.013 km/s. This is very close to Tifft's "basic common interval near 8 km/s", which is supported by the 21 cm line data, and well within its limits of accuracy.
Setterfield: The following quotation concerning this phenomenon is from "Quantized Galaxy Redshifts" by William G. Tifft & W. John Cocke, University of Arizona, Sky & Telescope Magazine, Jan. 1987, pp. 19-21: As the turn of the next century approaches, we again find an established science in trouble trying to explain the behavior of the natural world. This time the problem is in cosmology, the study of the structure and "evolution" of the universe as revealed by its largest physical systems, galaxies and clusters of galaxies. A growing body of observations suggests that one of the most fundamental assumptions of cosmology is wrong. Most galaxies' spectral lines are shifted toward the red, or longer wavelength, end of the spectrum. Edwin Hubble showed in 1929 that the more distant the galaxy, the larger this "redshift". Astronomers traditionally have interpreted the redshift as a Doppler shift induced as the galaxies recede from us within an expanding universe. For that reason, the redshift is usually expressed as a velocity in kilometers per second. One of the first indications that there might be a problem with this picture came in the early 1970's. William G. Tifft of the University of Arizona noticed a curious and unexpected relationship between a galaxy's morphological classification (Hubble type), brightness, and redshift. The galaxies in the Coma Cluster, for example, seemed to arrange themselves along sloping bands in a redshift vs. brightness diagram. Moreover, the spirals tended to have higher redshifts than elliptical galaxies. Clusters other than Coma exhibited the same strange relationships. By far the most intriguing result of these initial studies was the suggestion that galaxy redshifts take on preferred or "quantized" values. First revealed in the Coma Cluster redshift vs. brightness diagram, it appeared as if redshifts were in some way analogous to the energy levels within atoms.
These discoveries led to the suspicion that a galaxy's redshift may not be related to its Hubble velocity alone. If the redshift is entirely or partially non-Doppler (that is, not due to cosmic expansion), then it could be an intrinsic property of a galaxy, as basic a characteristic as its mass or luminosity. If so, might it truly be quantized? Clearly, new and independent data were needed to carry this investigation further. The next step involved examining the rotation curves of individual spiral galaxies. Such curves indicate how the rotational velocity of the material in the galaxy's disk varies with distance from the center. Several well-studied galaxies, including M51 and NGC 2903, exhibited two distinct redshifts. Velocity breaks, or discontinuities, occurred at the nuclei of these galaxies. Even more fascinating was the observation that the jump in redshift between the spiral arms always tended to be around 72 kilometers per second, no matter which galaxy was considered. Later studies indicated that velocity breaks could also occur at intervals that were 1/2, 1/3, or 1/6 of the original 72 km per second value. At first glance it might seem that a 72 km per second discontinuity should have been obvious much earlier, but such was not the case. The accuracy of the data then available was insufficient to show the effect clearly. More importantly, there was no reason to expect such behavior, and therefore no reason to look for it. But once the concept was defined, the ground work was laid for further investigations. The first papers in which this startling new evidence was presented were not warmly embraced by the astronomical community. Indeed, an article in the Astrophysical Journal carried a rare note from the editor pointing out that the referees "neither could find obvious errors with the analysis nor felt that they could enthusiastically endorse publication." 
Recognizing the far-reaching cosmological implications of the single-galaxy results, and undaunted by criticism from those still favoring the conventional view, Tifft extended the analysis to pairs of galaxies. Two galaxies physically associated with one another offer the ideal test for redshift quantization; they represent the simplest possible system. According to conventional dynamics, the two objects are in orbital motion about each other. Therefore, any difference in redshift between the galaxies in a pair should merely reflect the difference in their orbital velocities along the same line of sight. If we observe many pairs covering a wide range of viewing angles and orbital geometries, the expected distribution of redshift differences should be a smooth curve. In other words, if redshift is solely a Doppler effect, then the differences between the measured values for members of pairs should show no jumps. But this is not the situation at all. In various analyses the differences in redshift between pairs of galaxies tend to be quantized rather than continuously distributed. The redshift differences bunch up near multiples of 72 km per second. Initial tests of this result were carried out using available visible-light spectra, but these data were not sufficiently accurate to confirm the discovery with confidence. All that changed in 1980 when Steven Peterson, using telescopes at the National Radio Astronomy Observatory and Arecibo, published a radio survey of binary galaxies made in the 21-cm emission of neutral hydrogen. Wavelength shifts can be pegged much more precisely for the 21-cm line than for lines in the visible portion of the spectrum. Specifically, redshifts at 21 cm can be measured with an accuracy better than the 20 km per second required to detect clearly a 72 km per second periodicity. Redshift differences between pairs group around 72, 144 and 216 km per second.
Probability theory tells us that there are only a few chances in a thousand that such clumping is accidental. In 1982 an updated study of radio pairs and a review of close visible pairs demonstrated this same periodic pattern at similarly high significance levels. Radio astronomers have examined groups of galaxies as well as pairs. There is no reason why the quantization should not apply to larger collections of galaxies, so redshift differentials within small groups were collected and analyzed. Again a strongly periodic pattern was confirmed. The tests described so far have been limited to small physical systems; each group or pair occupies only a tiny region of the sky. Such tests say nothing about the properties of redshifts over the entire sky. Experiments on a very large scale are certainly possible, but they are much more difficult to carry out. One complication arises from having to measure galaxy redshifts from a moving platform. The motion of the solar system, assuming a Doppler interpretation, adds a real component to every redshift. When objects lie close together in the sky, as with pairs and groups, this solar motion cancels out when one redshift is subtracted from another, but when galaxies from different regions of the sky are compared, such a simple adjustment can no longer be made. Nor can we apply the difference technique; when more than a few galaxies are involved, there are simply too many combinations. Instead we must perform a statistical test using the redshifts themselves. As these first all-sky redshift studies began, there was no assurance that the quantization rules already discovered for pairs and groups would apply across the universe. After all, galaxies that were physically related were no longer being compared. Once again it was necessary to begin with the simplest available systems. A small sample of dwarf irregular galaxies spread around the sky was selected.
Dwarf irregular galaxies are low-mass systems that have a significant fraction of their mass tied up in neutral hydrogen gas. They have little organized internal or rotational motion and so present few complications in the interpretation of their redshifts. In these modest collections of stars we might expect any underlying quantum rules to be the least complex. Early 20th century physicists chose a similar approach when they began their studies of atomic structure; they first looked at hydrogen, the simplest atom. The analysis of dwarf irregulars was revised and improved when an extensive 21-cm redshift survey of dwarf galaxies was published by J. Richard Fisher and R. Brent Tully. Once the velocity of the solar system was accounted for, the irregulars in the Fisher-Tully Catalogue displayed an extraordinary clumping of redshifts. Instead of spreading smoothly over a range of values, the redshifts appeared to fall into discrete bins separated by intervals of 24 km per second, just 1/3 of the original 72 km per second interval. The Fisher-Tully redshifts are accurate to about 5 km per second. At this small level of uncertainty the likelihood that such clumping would randomly occur is just a few parts in 100,000. Large-scale redshift quantization needed to be confirmed by analyzing redshifts of an entirely different class of objects. Galaxies in the Fisher-Tully catalogue that showed large amounts of rotation and internal motion (the opposite extreme from the dwarf irregulars) were studied. Remarkably, using the same solar-motion correction as before, the galaxies' redshifts again bunched around certain specific values. But this time the favored redshifts were separated by exactly 1/2 of the basic 72 km per second interval. Even allowing for this change to a 36 km per second interval, the chance of accidentally producing such a preference is less than 4 in 1000.
It is therefore concluded that at least some classes of galaxy redshifts are quantized in steps that are simple fractions of 72 km per second. Current cosmological models cannot explain this grouping of galaxy redshifts around discrete values across the breadth of the universe. As further data are amassed, the discrepancies from the conventional picture will only worsen. Should that prove to be the case, dramatic changes in our concepts of large-scale gravitation, the origin and "evolution" of galaxies, and the entire formulation of cosmology would be required. Several ways can be conceived to explain this quantization. As noted earlier, a galaxy's redshift may not be a Doppler shift; that is the currently accepted interpretation of the redshift, but there can be, and are, other interpretations. A galaxy's redshift may instead be a fundamental property of the galaxy, as basic a characteristic as its mass or luminosity. Each may have a specific state governed by laws analogous to those in quantum mechanics that specify which energy states atoms may occupy. Since there is relatively little blurring of the quantization between galaxies, any real motions would have to be small in this model. Galaxies would not move away from one another; the universe would be static instead of expanding. This model obviously has implications for our understanding of redshift patterns within and among galaxies. In particular it may solve the so-called "missing mass" problem. Conventional analysis of cluster dynamics suggests that there is not enough luminous matter to gravitationally bind moving galaxies to the system.
What are the implications of a quantized redshift? Setterfield: If redshifts come from an expanding cosmos, the measurements should be distributed smoothly, but they are not. They are showing up as clumps, or quantized groupings. Because the cosmos cannot be expanding in jumps, the conclusion to be drawn from the data is that the cosmos is not expanding, nor are galaxies racing away from each other. Indeed, at the Tucson Conference on Quantization in April of 1996, the comment was made that “[in] the inner parts of the Virgo cluster [of galaxies], deeper in the potential well, [galaxies] were moving fast enough to wash out the quantization.” In other words, the genuine motion of galaxies destroys the quantisation effect, so the quantised redshift is not due to motion, and hence not to an expanding universe. This implies that the cosmos is now static after initial expansion. Interestingly, there are about a dozen references in the Scriptures which talk about the heavens being created and then stretched out. Importantly, in every case except one, the tense of the verb indicated that the “stretching out” process was completed in the past. This is in line with the conclusion to be drawn from the quantised redshift. Furthermore, the variable lightspeed model of the cosmos gives an explanation for these results, and can theoretically predict the size of the quantisations to within a fraction of a kilometer per second of that actually observed. This seems to indicate that a genuine effect is being dealt with here. You will find a number of my papers dealing with the implications of the quantized redshift in the Research Papers section of this website.
Setterfield: Yes, the Vc model predicts the redshift should drop with time, but the period over which this change occurs for any given galaxy or cluster is more difficult to determine, and depends on modeling. Nevertheless, a drop in some redshifts over time has been noted by Tifft. He has noted a decrease in redshift values with time in several associated galaxies. In the Astrophysical Journal, Vol. 382:396-415, December 1st 1991, he records the groups where one quantum change has occurred. He emphasizes that all the older data recorded higher redshifts. The time period between the recordings of redshift was about 10 years. However, we do not know how long the galaxies had persisted at the previous redshift, before the change.
Cosmic Dark Energy? Barry Setterfield (April 11, 2001) There has been much interest generated in the press lately over the analysis by Dr. Adam G. Riess and Dr. Peter E. Nugent of the decay curve of the distant supernova designated as SN 1997ff. In fact, over the last two years, a total of four supernovae have led to the current state of excitement. The reason for the surge of interest is the distances that these supernovae are found to be when compared with their redshift, z. According to the majority of astronomical opinion, the relationship between an object's distance and its redshift should be a smooth function. Thus, given a redshift value, the distance of an object can be reasonably estimated. One way to check this is to measure the apparent brightness of an object whose intrinsic luminosity is known. Then, since brightness falls off by the inverse square of the distance, the actual distance can be determined. For very distant objects something of exceptional brightness is needed. There are such objects that can be used as 'standard candles', namely supernovae of Type Ia. They have a distinctive decay curve for their luminosity after the supernova explosion, which allows them to be distinguished from other supernovae. In this way, the following four supernovae have been examined as a result of photos taken by the Hubble Space Telescope. SN 1997ff at z = 1.7; SN 1997fg at z = 0.95; SN 1998ef at z = 1.2; and SN 1999fv also at z = 1.2. The higher the redshift z, the more distant the object should be. Two years ago, the supernovae at z = 0.95 and z = 1.2 attracted attention because they were FAINTER and hence further away than expected. This led cosmologists to state that Einstein's Cosmological Constant must be operating to expand the cosmos faster than its steady expansion from the Big Bang. Now the object SN 1997ff, the most distant of the four, turns out to be BRIGHTER than expected for its redshift value. 
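The "standard candle" reasoning in this passage can be put into a short sketch. Astronomers express the inverse-square law through the distance modulus m - M = 5 log10(d / 10 pc); the absolute magnitude used below is the commonly quoted figure for Type Ia supernovae, while the apparent magnitude is invented for illustration.

```python
# Sketch of the standard-candle logic: if a Type Ia supernova's
# intrinsic luminosity (absolute magnitude M) is known, its apparent
# brightness m gives its distance, because brightness falls off as the
# inverse square of distance. In magnitude form:
#     m - M = 5 * log10(d_parsecs / 10)

def distance_parsecs(m_apparent: float, m_absolute: float) -> float:
    """Distance in parsecs from the distance modulus m - M."""
    return 10 ** ((m_apparent - m_absolute) / 5 + 1)

M_TYPE_IA = -19.3                      # typical Type Ia absolute magnitude
d = distance_parsecs(24.0, M_TYPE_IA)  # hypothetical faint supernova
print(round(d / 1e9, 2))               # distance in gigaparsecs, ~4.57
```

A supernova that is fainter (larger m) than this relation predicts for its redshift is "further away than expected", which is exactly the discrepancy described for the z = 0.95 and z = 1.2 objects.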
This interesting turn of events has elicited the following comments from Adrian Cho in New Scientist for 7 April, 2001, page 6 in an article entitled "What's the big rush?"
Well, that is one option. Interestingly, there is another option well supported by other observational evidence. For the last two decades, astronomer William Tifft of Arizona has pointed out repeatedly that the redshift is not a smooth function at all but is, in fact, going in "jumps", or is quantised. In other words, it proceeds in a steps and stairs fashion. Tifft's analyses were disputed, so in 1992 Guthrie and Napier did a study to disprove the matter. They ended up agreeing with Tifft. The results of that study were themselves disputed, so Guthrie and Napier conducted an exhaustive analysis on a whole new batch of objects. Again, the conclusions confirmed Tifft's contention. The quantisations of the redshift that were noted in these studies were on a relatively small scale, but analysis revealed a basic quantisation that was at the root of the effect, of which the others were simply higher multiples. However, this was sufficient to indicate that the redshift was probably not a smooth function at all. If these results were accepted, then the whole interpretation of the redshift, namely that it represented the expansion of the cosmos by a Doppler effect on light waves, was called into question. This becomes apparent since there was no good reason why that expansion should go in a series of jumps, any more than cars on a highway should travel only in multiples of 5 kilometres per hour. In 1990, Burbidge and Hewitt reviewed the observational history of preferred redshifts for extremely distant objects. Here the quantisation or periodicity was on a significantly larger scale. Objects were clumping together in preferred redshifts across the whole sky. These redshifts were listed as z = 0.061, 0.30, 0.60, 0.96, 1.41 and 1.96 [G. Burbidge and A. Hewitt, Astrophysical Journal, vol. 359 (1990), L33]. In 1992, Duari et al. examined 2164 objects with redshifts ranging out as far as z = 4.43 in a statistical analysis [Astrophysical Journal, vol. 384 (1992), 35].
Their analysis eliminated some suspected periodicities as not statistically significant. Only two candidates were left, with one being mathematically precise at a confidence level exceeding 99% in four tests over the entire range. Their derived formula confirmed the redshift peaks of Burbidge and Hewitt as follows: z = 0.29, 0.59, 0.96, 1.42, 1.98. When their Figure 4 is examined, the redshift peaks are seen to have a width of about z = 0.0133. A straightforward interpretation of this periodicity is that the redshift itself is going in a predictable series of steps and stairs on a large as well as a small scale. This gives rise to the apparent clumping of objects at preferred redshifts. The reason is that on the flat portions of the steps-and-stairs pattern, the redshift remains essentially constant over a large distance, so many objects appear to be at the same redshift. By contrast, on the rapidly rising part of the pattern, the redshift changes dramatically over a short distance, so relatively few objects will be at any given redshift in that portion of the pattern. From the Duari et al. analysis, the steps-and-stairs pattern of the redshift seems to be flat for about z = 0.0133, and then climbs steeply to the next step. These considerations are important in the current context. As noted above by Riess, the objects at z = 0.95 and 1.2 are systematically faint for their assumed redshift distance. By contrast, the object at z = 1.7 is unusually bright for its assumed redshift distance. Notice that the object at z = 0.95 is at the middle of the flat part of the step according to the redshift analyses, while z = 1.2 is right at the back of the step, just before the steep climb. Consequently, for their redshift values, they will be further away in distance than expected, and will therefore appear fainter. By contrast, the object at 1.7 is on the steeply rising part of the pattern.
Because the redshift is changing rapidly over a very short distance astronomically speaking, the object will be assumed to be further away than it actually is and will thus appear to be brighter than expected. These recent results therefore verify the existence of the redshift periodicities noted by Burbidge and Hewitt and statistically confirmed by Duari et al. They also verify that redshift behaviour is not a smooth function, but rather goes in a steps-and-stairs pattern. If this is accepted, it means that the redshift is not a measure of universal expansion, but must have some other interpretation. The research that has been conducted on the changing speed of light over the last 10 years has been able to replicate both the basic quantisation picked up by Tifft, and the large-scale periodicities that are in evidence here. According to this research, the redshift and light-speed are related effects that mutually derive from changing vacuum conditions. The evidence suggests that the vacuum zero-point energy (ZPE) is increasing as a result of the initial expansion of the cosmos. It has been shown by Puthoff [Physical Review D 35:10 (1987), 3266] that the ZPE is maintaining all atomic structures throughout the universe. Therefore, as the ZPE increases, the energy available to maintain atomic orbits increases. Once a quantum threshold has been reached, every atom in the cosmos will assume a higher energy state for a given orbit, and so the light emitted from those atoms will be bluer than the light emitted in the past. Therefore, as we look back to distant galaxies, the light emitted from them will appear redder in quantised steps. At the same time, since the speed of light is dependent upon vacuum conditions, it can be shown that a smoothly increasing ZPE will result in a smoothly decreasing light-speed.
Although the changing ZPE can be shown to be the result of the initial expansion of the cosmos, the fact that the quantised effects are not "smeared out" also indicates that the cosmos is now static, just as Narlikar and Arp have demonstrated [Astrophysical Journal, vol. 405 (1993), 51]. In view of the dilemma that confronts astronomers with these supernovae, these alternative explanations may be worth serious examination.
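The steps-and-stairs picture described above can be sketched numerically. This is only an illustrative model: the plateau width of z = 0.0133 is taken from the discussion of Duari et al.'s Figure 4 above, but the shape of the climb and the fraction of each step it occupies are assumptions made purely to show why objects spread evenly in distance would appear to clump at preferred redshifts.

```python
import numpy as np

# Hypothetical steps-and-stairs redshift model: z sits on a flat plateau
# for most of each step and then climbs steeply to the next plateau.
STEP = 0.0133          # plateau-to-plateau height (value quoted in the text)
RISE_FRACTION = 0.1    # fraction of each step taken by the climb (assumed)

def staircase_redshift(d):
    """Map a dimensionless distance d (1 unit = 1 full step) to a quantised z."""
    n = np.floor(d)                    # number of complete steps closer in
    frac = d - n                       # position within the current step
    flat = 1.0 - RISE_FRACTION
    # climb is 0 on the plateau, rising to 1 over the steep final portion
    climb = np.clip((frac - flat) / RISE_FRACTION, 0.0, 1.0)
    return STEP * (n + climb)

# Objects spread uniformly in distance pile up at the plateau redshifts:
d = np.linspace(0.0, 5.0, 100000)
z = staircase_redshift(d)
# Most objects now share the plateau redshifts 0, 0.0133, 0.0266, ...
# while very few fall on the steep climbs between them.
```

With these assumed numbers, about nine objects in ten land exactly on a plateau value, reproducing the "clumping at preferred redshifts" effect described above without any clustering in actual distance.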
Setterfield: There is still a lot of denial, even though some are considering it more thoughtfully as time goes by. As far as the effect on cosmology is concerned, I need only point out the response of J. Peebles, a cosmologist from Princeton University. He is quoted as saying “I’m not being dogmatic and saying it cannot happen, but if it does, it’s a real shocker.” M. Disney, a galaxy specialist from the University of Wales, is reported as saying that if the redshift was indeed quantised, “It would mean abandoning a great deal of present research.” [Science Frontiers, No. 105, May-June 1996]. For that reason, this topic inevitably generates much heat, but it would be nice if the light that comes out of it could also be appreciated.
Setterfield: There are in fact periodicities as well as redshift quantisation effects. The periodicities are genuine galaxy-distribution effects. However, they all involve high redshift differences, such as repeats at z = 0.0125 and z = 0.0565. The latter value involved 6,200 quantum jumps of Tifft's basic value and reflects the large-scale structuring of the cosmos at around 850 million light-years. The smaller value is around 190 million light-years. This is the approximate distance between superclusters. The point is that Tifft's basic quantum states still occur within these large-scale structures and have nothing to do with the size of galaxies or the distances between them. The lowest observed redshift quantisation that can reasonably be attributed to an average distance between galaxies is the interval of 37.6 km/s that Guthrie and Napier picked up in our local supercluster. This comprises a block of 13 or 14 quantum jumps and a distance of around 1.85 million light-years. It serves to show that basic quantum states below the interval of 13 quantum jumps have nothing to do with galaxy size or distribution. Finally, Tifft has noted that there are redshift quantum jumps within individual galaxies. This indicates that the effect has nothing to do with clustering.
Setterfield: The quantised redshift that we are talking about is not dealing with quasars. They are too far out. In my work I accept the redshift/distance relationship and also the initial expansion of the cosmos. What Virginia Trimble is talking about here is the large-scale periodicities in the quasar distribution. This is a separate effect from quantisation.
Setterfield: She states that the hypothesis was tested from the same data set from which the hypothesis was derived. This is not the case. Data came from the Coma cluster, the Virgo cluster, and the Local Supercluster. All were different data sets. In fact, Guthrie and Napier established quantisation from a different data set altogether, apart from Tifft's. So did Cocke's analysis. Therefore the claim about using the same data set is false. No one drew any target around any landed arrow. The theory was determined after the data had been collected, and not before. Please also keep in mind that Guthrie and Napier had set out to DISprove Tifft and ended up agreeing with him.
Setterfield: In this paragraph, the statements are only true at high redshifts, around about 1.0. What Trimble is doing here is confusing quantisation and periodicity, as mentioned above. The periodicity in quasar redshifts, to which she is referring, only shows itself at great distances, whereas the quantisation has been established in objects closer to us. The two are different.
Setterfield: The new and extensive redshift surveys are of distant objects, and their redshift figures do not attain the accuracy needed to reveal the quantisation Tifft has picked up. What is being discussed by Trimble is again the large-scale periodicities suggested by Duari et al. and Hewitt and Burbidge, not the quantisation of Tifft, Cocke, Guthrie and Napier.
Setterfield: The primary redshift periodicities, or quantizations, which Tifft picked up were not with quasars, which were too far out to be able to show the small changes involved, but with objects much closer in. The work you have referenced here in no way negates Tifft's work. What it does do is show that a proposal for a periodicity in quasars over large distances and large changes in redshift has been negated. This is a different story.
Setterfield: In order to overcome this deficiency, the following technical notes give the foundation of that part of the proposition. As will be discovered upon reading this document, no wave-crests will disappear at all on their way to earth, and the energy of any given photon remains unchanged in transit. Not only does this follow from the usual definitions, but it also gives results in accord with observation. This model maintains that wavelengths remain unchanged and frequency alters as lightspeed drops. In order to see what is happening, consider a wave train of 100 crests associated with a photon. Let us assume that those 100 crests pass an observer at the point of emission in 1 second. Now assume that the speed of light drops to 1/10th of its initial speed in transit, so that at the point of final observation it takes 10 seconds for all 100 crests to pass. The number of crests has not altered; they are simply travelling more slowly. Since frequency is defined as the number of crests per second, both the frequency and light-speed have dropped to 1/10th of their original value, but there has been no change in the wavelength or the number of crests in the wave-train. The frequency is a direct result of light-speed, and nothing else happens to the wave-train. Second, the standard redshift/distance relationship is not changed in this model. However, the paper demonstrates that there is a direct relationship between redshift and light-speed. Furthermore, astronomical distance and dynamical time are linearly related. As a consequence, the standard redshift/distance relationship is equivalent to the relationship between light-speed and time. The graph is the same; only the axes have been re-scaled. As far as the redshift, z, is concerned, the most basic definition of all is that z equals the observed change in emitted wavelength divided by the laboratory standard for that wavelength.
The model shows that there will be a specified change in emitted wavelength at each quantum jump that results in a progressive, quantised redshift as we look back to progressively more distant objects. This does not change the redshift/distance relationship or the definition of redshift. What this model does do is to predict the size of the redshift quantisation, and links that with a change in light-speed. The maths are all in place for that. (Sept. 21, 2001)
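The wave-train arithmetic above can be written out explicitly. This is just the numbers from the example restated, under the model's stated assumption that frequency scales directly with lightspeed while the emitted wavelength is locked in:

```python
# The wave-train example from the text: 100 crests pass the observer in
# 1 second at emission; lightspeed then drops to 1/10th in transit.
crests = 100
t_emit = 1.0                 # seconds for all crests to pass at emission
c_ratio = 1.0 / 10.0         # (final lightspeed) / (initial lightspeed)

f_emit = crests / t_emit     # emitted frequency: 100 crests per second
f_obs = f_emit * c_ratio     # frequency drops in step with lightspeed: 10 per second
t_obs = crests / f_obs       # the same 100 crests now take 10 seconds to pass

# Wavelength = speed / frequency; both numerator and denominator have
# dropped by the same factor of 10, so the wavelength is unchanged.
wavelength_change = c_ratio / (f_obs / f_emit)   # = 1.0, i.e. no change
```

No crests are created or destroyed in this accounting; only the rate at which they pass the observer changes, which is the point the text is making.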
Setterfield: On a large scale, the universe does appear to have structures interspersed with voids. In this case we are dealing with superclusters or megaclusters of galaxies separated by these voids you mention. However, this was not what was measured by Tifft's work. In his initial work he took the Coma cluster of galaxies and noted that redshift "bands", as he called them then, could be traced through the cluster. Later work revealed that redshift quantum jumps occurred within individual galaxies, which shows that this is not just a galaxy distribution effect. Further work by Tifft compared the redshifts of pairs of galaxies within various clusters. The quantisation was a notable feature of these galaxy pairs, and these results were commented on by New Scientist and other publications. Thus, the concept of sheets of galaxies or superclusters with voids in between does not provide an adequate explanation for the quantisation effect, which is apparent on a much smaller scale.
Setterfield: First let me express thanks for bringing this material to our attention. It is an important matter. Second, it is the first time that I have seen a viable explanation for the dramatic redshift differences that arise if Arp's proposal is correct that quasars have been ejected from nearby galaxies, with the galaxy having a low redshift and its attendant quasar a high one. Third, it certainly provides another explanation for the effect that is interpreted as a minute change in the fine-structure constant. However, I still have doubts as to the validity of that change in the constant, as it is so small and so much teasing out of information from the data is needed to establish it. Nevertheless, if it is true, then this does provide a viable explanation for what is observed, provided that it is only with quasars. Fourth, it must be emphasised that the effect being described seems to apply strictly to quasars rather than galaxies. Since galaxies have been found by Hubble with redshifts out to about z = 5, this does not provide an explanation for the general redshift/distance relationship. Thus, while it supports Arp's contention that galaxies and their presumed ejected quasars are related, it does not explain the progressive galaxy redshifts out to high z values. This leaves the door open for the standard interpretation of quasars as the cores of distant galaxies to be correct. Fifth, the linkage of this effect with the 2.7 degree microwave background is interesting. On this basis, one would expect an increase of the background temperature with time. Instead, the reverse has been claimed, namely that the further out we look, the higher the background temperature, with values measured (from memory) from 6 up to 14 degrees. If these results are verified, then it is an issue that the author of these linked papers may want to address.
Setterfield: Thanks for the important question. Tifft answers this in his 1st December 1991 paper in the Astrophysical Journal, pp. 396-415. Much of the work has been done on the 21 cm line and gave high levels of accuracy. Indeed, B. M. Lewis in "Observatory" for 1987, pp. 107 and 201, has claimed redshift accuracy better than 0.1 km/s. I am familiar with the way Lewis works, and he is a careful researcher. Since the quantum step is about 2.6 km/s, this accuracy is about 25 times finer than the quantity being measured.
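The measurement margin quoted here is simple arithmetic on the two figures given in the text:

```python
# Checking the measurement margin quoted in the text.
quantum_step = 2.6   # km/s, Tifft's basic redshift quantum
accuracy = 0.1       # km/s, the redshift accuracy claimed by B. M. Lewis

margin = quantum_step / accuracy   # close to 26 -- the "about 25 times" above
```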
The Redshift/Lightspeed Connection
Setterfield: The redshift and the speed of light are both 'children' of the same 'parent' -- the Zero Point Energy. The papers that I wrote in 2007 and 2008, Reviewing the Zero Point Energy and Quantized Redshifts and the Zero Point Energy, establish this link. Since the redshift and lightspeed are thus linked directly through the ZPE, and astronomical distance and time are linked, the graph of redshift against distance is the same as lightspeed against time. Furthermore, the 1987 Report established that atomic clocks tick at a rate proportional to c; see "Time, Life and Man."
Setterfield: The next quantum jump cannot be predicted. I suppose it is possible we might notice something, but that's a bit problematical. I can't think of anything right now. At one time, there was the suggestion that it might cause earthquakes, but that has been negated by more recent research.
Question: How would redshift be an automatic product of CDK? In layman's terms, could you tell me what the prediction from CDK would be regarding the cosmic microwave background radiation and the redshift? Setterfield: The redshift and variable light speed are separate manifestations of a basic cosmological effect. One does not directly cause the other, but both are caused by the increase in the energy density of free space. As far as light speed is concerned, an energy density increase in the vacuum means that the vacuum effectively becomes a 'thicker' medium for light to travel through, so its speed is retarded. An increase in the energy density of free space also affects atomic behaviour. However, atomic behaviour usually occurs in discrete 'jumps.' It therefore requires a buildup of the energy density of the vacuum to a certain threshold level before there will be a change in atomic phenomena. Once this threshold has been crossed, more energy is available to the atom, which takes up a higher energy state for its orbits. These higher energy states emit bluer light. Therefore, as we look back in time (i.e. distance), we are looking back at times in the cosmos where the energy density was lower, so atomic orbits had less energy intrinsically and therefore emitted redder light. Light speed, by contrast, changes smoothly with the vacuum. This is discussed in Behavior of the Zero Point Energy and Atomic Constants. A recent paper also discusses this: Zero Point Energy and the Redshift. Regarding the microwave background: a high value for light speed has one effect on the microwave background. Understand that this background is not itself a manifestation of light speed decay, but is rather related to the energy input at the moment of creation. A high value for light speed in the earliest moments of the cosmos would mean all radiation would be rapidly homogenized and smoothed out. In other words, a high value for light speed would get rid of any 'lumps' in the microwave background.
A practical example may help here. If you have a very large box, a kilometer long, and light speed was one meter per second, it would take one thousand seconds for radiation from one part of the box to get to the other part of the box. It would be only after a number of internal reflections from within that box that any 'lumps' in the distribution of the radiation would be smoothed out. This will obviously take a little time. However, if the speed of light was 10,000 km/second, the smoothing out process would be much more rapid, and this is what a higher light speed has done, to give rise to an essentially smooth microwave background.
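The box analogy above can be put in numbers. The one-kilometre box and the two light speeds are taken straight from the text; the crossing times follow directly:

```python
box_length = 1000.0            # metres: a box one kilometre long

c_slow = 1.0                   # metres per second (the slow case in the text)
c_fast = 10_000 * 1000.0       # 10,000 km/s expressed in metres per second

t_slow = box_length / c_slow   # 1000 seconds for radiation to cross once
t_fast = box_length / c_fast   # about 0.0001 seconds to cross once

# Smoothing requires some number of internal reflections (crossings),
# so every crossing being this much faster makes homogenisation of any
# 'lumps' in the radiation correspondingly quicker.
speedup = t_slow / t_fast      # roughly ten million times faster per crossing
```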
Setterfield: The first point to note about the Cosmic Microwave Background Radiation (CMBR), or CBR (cosmic background radiation), is that the distribution is essentially isotropic with very minor variations. The differences that we are looking at are of the order of about 1 part in 50,000 to 100,000. This puts everything in perspective. Originally, the diagrams, or the photographs, of the variations in the CBR did not give concordant results. The first satellite that was used for measuring the CMBR was the COBE satellite. The second satellite that was launched was WMAP, which has given more accurate results. There has been a suggestion that the gaps in the CMBR that WMAP has picked up may be identified with some of the existing voids between galaxy clusters, but this has yet to be confirmed. The launch of the European Planck satellite in 2009 promises to give much better data. The primary run of observations by this satellite ends on the sixteenth of March 2012. However, in the meantime, there has been a very strong criticism of the WMAP data. This was announced June 11, 2010, by Sawangwit and Shanks at Durham University in the UK. Their criticism centered on the algorithm used by the computer which 'smoothed' out the ripples in the data. "They find that the smoothing is much larger than previously believed, suggesting that its measurement of the size of the CMB ripples is not as accurate as was thought. If true, this could mean that the ripples are significantly smaller, which could imply that dark matter and dark energy are not present after all." (Royal Astronomical Society News, June 14, 2010) In other words, the Cosmic Microwave Background Radiation is probably not as 'lumpy' as first proposed.
Question regarding work by the Gentrys
Setterfield: First of all, their comments about the significance of the redshift in relation to the Big Bang are certainly justified. They go to some trouble to point out that if the Big Bang fails on this point, one of its key bases would be destroyed. They point out that the evidence is that the universe – the fabric of space itself – is not expanding. I agree with this. One of the ways in which they do this is to point out the distinction between Einstein's and Friedmann's mathematical descriptions of the universe. These descriptions differ in that Einstein's equations have the fabric of space as being static, whereas Friedmann's equations have it expanding. What the Gentrys have done is to show that the observational evidence supports the Einstein formulation rather than that of Friedmann. If only Einstein's formulation is valid, then this means that the redshift is not due to the fabric of space itself expanding and stretching light waves in transit. This leaves only the motion of the galaxies themselves through the static fabric of space to account for the redshift. The Gentrys, however, consider that the bulk of the redshift is due to a gravitational effect, because the earth is near the center of the universe as they see it. (They also consider that there is a component of the redshift that is due to the motion of galaxies.) The problem that I have with this is that Misner, Thorne, and Wheeler, in their massive book entitled Gravitation, point out that gravitational redshifts greater than a redshift of 0.5 will result in rapid gravitational collapse. It appears that the Gentrys have not yet addressed this issue. I want to emphasize that I totally agree with the Gentrys that an alternative explanation to the currently accepted Big Bang explanations is badly needed.
Setterfield: Basically, what has happened is a breakdown in the redshift/distance relationship. That relationship is usually given as the relativistic Doppler equation. I am old enough to remember when the straight-forward Doppler equation was in use. This worked fine until it was found that redshifts were appearing that were greater than z = 1. These results implied that the outer parts of the universe were expanding faster than light. In order to overcome that embarrassment, the relativistic Doppler equation was introduced and has been reasonably successful up to now. However, as a redshift of z = 1 is approached, a discrepancy is noted. This has been picked up relatively recently as a result of the Hubble Space Telescope observing distant supernovae of Type Ia which have a specific brightness that then allows an accurate distance estimate to be obtained. These observations indicated that these supernovae were fainter than expected for their redshift and so were further away than the formulae suggested. This could only happen if the cosmos was expanding increasingly faster as time went on. The equations describing the Big Bang scenario have a handful of different parameters which have been added or adjusted as different discoveries have been made. As a result, the model is beginning to look increasingly unwieldy. However, that is not all. The situation described above exists out to a redshift of z = 1.5 to 1.7. After that, the supernovae appear to be brighter than expected. This suggests that they are closer in than expected from the formulae. The majority of astronomers can only conclude that from the moment of the Big Bang out to a redshift of about 1.7 the universe was decelerating, and that the acceleration began after this. The action of the cosmological constant first postulated by Einstein has been invoked to account for these discrepant data. 
Basically, what these results mean is that as we go out into space, there is a change in the redshift/distance relationship from the expected formula. The true formula is such that it climbs slightly more slowly than the accepted formula with increasing distance out to about a redshift of z = 1. After that point, the actual curve starts climbing more steeply than the accepted formula. In other words, the relativistic Doppler equation is not the correct formula to describe the behaviour of the cosmos. This carries with it the implication that the idea of the redshift being due to expansion may not be correct either. This implication is currently being avoided by invoking the action of dark energy through the action of the cosmological constant. Well, what is the correct formula to describe the behaviour of the cosmos? Incredibly, it appears that a mathematical approach that describes the origin of the Zero-Point Energy (ZPE) using Planck particle pairs (PPP) can reproduce the features required. Standard mathematical formulae describing the processes operating with PPP at the inception of the cosmos are used. This is work in progress at this moment, and we won’t have the full details until the end of 2003 or early 2004. But, in summary, there is a formula that is very similar to the relativistic Doppler formula that holds the potential to describe the behaviour of the redshift accurately, and that also describes the behaviour of the speed of light, Planck’s constant, the ticking of atomic clocks and atomic masses on the basis of the origin of the Zero Point Energy. Information on this is in several of my papers.
I trust that this gives you an overall idea of what is happening.
The Redshift, ZPE, and Relativity
Setterfield: You ask about the significance of the Dayton Miller experiment, the ether drift he recorded, and how it relates to the ZPE. It is true that Miller, as well as Michelson and Morley, produced relatively small but nonetheless positive results with their interferometers. Miller pointed out that the M-M experiment was in a heavily shielded environment. When Miller successively removed all shielding on his apparatus and transported it to a higher altitude, he obtained larger readings of the ether drift. On this basis, Miller concluded that the ether was entrained by the earth and so yielded lower results for any ‘drift’ the closer to the earth’s surface the experiment was conducted. Likewise, he concluded a similar effect for any shielding of the equipment. His final conclusion was that there was a motion of the solar system towards the southern hemisphere constellation of Dorado. In actual fact, the motion of the solar system has been accurately measured as being in a different direction, and this has been confirmed by studies using the microwave background radiation, which is also all-pervasive and provides an absolute reference frame. These data all indicate that we are moving at about 600 km/s towards the centre of the Virgo cluster of galaxies. Note that this very fact indicates that the microwave background provides us with an absolute reference frame and so negates some concepts basic to relativity. For a discussion of this latter point, see Martin Harwit, ‘Astrophysical Concepts’, second edition, p. 178, Springer-Verlag, New York, 1988. One key point that you raised indirectly needs to be mentioned here. The presence of the ZPE is like an all-pervasive sea. Contrary to what Miller suggested for his ether, the ZPE is not inhibited in any way by the presence of matter. Irrespective of the density of matter, all the particles making up individual atoms are immersed in this ZPE ‘sea’.
Instead of inhibiting its action, atoms are themselves sustained by the ZPE, rather like pieces of debris carried along by the ocean. Now for your second question. Up until 1960 or thereabouts, the redshift appeared to be a linear relationship, that is to say redshift z = v/c, where v was the so-called recession velocity and c the speed of light. This is what we have in equation (3) in the paper “The Redshift and the Zero Point Energy”. One point which was not made clear in the paper, but which should clarify things for you, is that up to 1960 we could not measure the spectral shifts of very distant galaxies because of the limitations of our equipment. As far out as we could measure, the redshift/distance graph was a straight line. On this basis it was expected that there would be no redshifts greater than z = 1, where the recessional velocity v = c. Sometime in the early 1960’s, as equipment improved and we could see and measure spectral shifts to greater distances, it was noted that the data were deviating from a straight line by successively greater amounts at successively greater distances. The crunch point came when redshifts of z > 1 were measured. It was around that point in time, in the early 1960’s, that the straightforward linear Doppler shift z = v/c that had been in use was discarded and the relativistic Doppler formula applied, namely z = [1 + (v/c)] / sqrt[1 - (v^2/c^2)] - 1, as in equation (5).
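The two formulas mentioned here can be compared directly. The equation numbers follow the paper as quoted in the text; the sample velocity of 0.8c is my own choice, picked simply to show how the relativistic formula produces a redshift greater than 1 while the linear one cannot:

```python
import math

def z_linear(v, c=1.0):
    """The pre-1960s linear Doppler shift, z = v/c (equation (3) in the text)."""
    return v / c

def z_relativistic(v, c=1.0):
    """The relativistic Doppler shift of equation (5):
    z = [1 + (v/c)] / sqrt[1 - (v^2/c^2)] - 1."""
    beta = v / c
    return (1.0 + beta) / math.sqrt(1.0 - beta * beta) - 1.0

# The linear formula can never exceed z = 1 for v < c, but the
# relativistic formula can. At 80% of lightspeed:
v = 0.8
zl = z_linear(v)          # 0.8
zr = z_relativistic(v)    # close to 2.0, since (1.8 / 0.6) - 1 = 2
```

This is exactly the "crunch point" described above: once observed redshifts exceeded z = 1, the linear formula implied faster-than-light recession, while the relativistic formula accommodates any z for v below c.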
Setterfield: The redshift of light from distant galaxies does indeed mean that the whole spectrum of colours is shifted towards the red. In other words, all the light emitted is proportionally redder the further away the source is. The objects look physically redder. Astronomy often attributes this to a Doppler effect of galaxies moving away from us. All wavelengths are affected proportionally. The spectral lines of the elements are therefore also affected proportionally. On the model with changing lightspeed, these redder wavelengths are due to a lower strength of the Zero Point Energy (ZPE), which supports atomic structures across the cosmos. When the ZPE strength was lower, all atomic orbits had energies which were proportionally lower. Lower orbit energy for atoms means redder light emitted from those atoms. Because lightspeed is also affected by the ZPE, the higher the redshift, the higher the value of lightspeed in direct proportion. Now comes something important. Up until now we have been talking about wavelengths of emitted light. As that light goes in transit across the universe and the speed of light drops, that emitted wavelength remains unchanged. In other words, when lightspeed changes simultaneously across the whole cosmos, it is not wavelength that changes, but frequency. The original wavelength at emission is locked in. This has recently been proven correct by Keith Wanser, a professor of physics, who was examining these ideas rather closely. He discovered that Maxwell’s equations predict that it will be the frequency of light that changes in any scenario with cosmological changes in lightspeed. Thus the wavelengths remain intact in transit. (In private correspondence, Keith Wanser has mentioned he hopes to publish this research shortly.) There was also experimental proof of this in the 1920’s to 1940’s, when changes in lightspeed were being measured but there were no measured changes in the wavelengths of light in transit.
This leads to the conclusion that it is frequency that varies as lightspeed drops in transit. Since frequency is simply the number of waves passing per second, as light of a given wavelength slows down, there are fewer waves of the same wavelength passing a given point per second. This is entirely logical. It is important to realize that the redshift in the wavelengths of light is the result of the lower ZPE on the atoms themselves. As the ZPE increases, the emitted light becomes more energetic because the energy of the atomic orbits is greater with a stronger ZPE. Since the blue end of the spectrum is the more energetic end, the light from these atoms will be bluer. Because atomic processes are quantized, or go in jumps, the light emitted from those atoms will become bluer in jumps as the strength of the ZPE increases. Thus, as we look back in time (which means when we look out into the depths of space) the light coming to us is redder (in jumps). Since our own local atoms are emitting light at wavelengths corresponding to the most recent (highest) value for the ZPE strength, our light will always be bluer than from distant objects.
The Lightspeed Curve and the Oscillation
Setterfield: The point I was trying to make was that light got to us from the furthest reaches of our galaxy very quickly at the beginning. For example, on the first day of creation, light from the center of our galaxy reached the earth in under three seconds. Therefore light from the furthest reaches of our galaxy could have reached the earth in less than ten seconds. This situation would have deteriorated rapidly as the buildup of the Zero Point Energy rapidly increased. This occurred as the potential energy invested in the newly stretched universe was very quickly converted to kinetic energy, much like what we see when we stretch a rubber band and then release it. Thus the speed of light would have rapidly dropped at first, as the build-up of the ZPE would have also caused the increase of virtual particles in any given volume of space at any given time. This is what causes light to take longer to reach its destination. With few or no virtual particles in the beginning, light would have reached its destination extraordinarily fast. This is how we have been able to see light from the most distant parts of the cosmos in less than ten thousand years. The beginning part of its journey was much more rapid than the final part! The actual curve shown by the dropping speed of light can be seen here. This curve is the same as the redshift curve. However, the redshift curve starts some way out from our galaxy. The actual light speed curve that pertained from the earth out to the edge of our galaxy is still being determined. It has become apparent that an oscillation is involved which has become noticeable over this distance. Part of that oscillation took the speed of light marginally lower than its present value from about 1500 B.C. to about 400 A.D. As we go back in time, exactly at what point the speed of light started climbing from the oscillation trough I am not sure. It has yet to be fully determined and I am working on this area at the moment.
It was for this reason that I said in the quoted sentence “this oscillation means there is very little variability in light speed out to the limits of our galaxy.” Keep in mind that our galaxy is an extremely tiny part of the entire universe, and it is beyond our galaxy that the lightspeed data become much more indicative of change. In the same way that we do not see a major redshift change until we are outside of our galaxy, we will probably not see much more than the oscillatory change in the speed of light until we are outside of our galaxy. There may be more evidence of change as we reach the outer edges of our galaxy; I am still researching that. In any case, the comment about little variability must be seen in the total context, where the initial value of c was about 10^11 times its current speed. In this context, any graph displaying the behaviour of light over the lifetime of the cosmos will show very little change at this end of the curve. The redshift graph is almost a horizontal line at this point, and the lightspeed graph follows this, but with a conversion factor included.
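The scale of the claim above can be illustrated with a simple travel-time calculation. This is only an order-of-magnitude sketch: the 26,000 light-year distance to the galactic center and the 10^11 speed ratio are illustrative inputs, not values derived from the model itself.

```python
# Order-of-magnitude check: how long would light take to cross galactic
# distances if c were ~10^11 times its present value? The distance and
# speed ratio below are illustrative assumptions only.

LY_IN_METERS = 9.4607e15      # one light-year in meters
C_NOW = 2.998e8               # present speed of light, m/s

def travel_time_seconds(distance_ly, speed_ratio):
    """Light travel time for a given distance if c were
    speed_ratio times its present value."""
    distance_m = distance_ly * LY_IN_METERS
    return distance_m / (C_NOW * speed_ratio)

# ~26,000 ly (roughly the distance to the galactic center), with c at
# 10^11 times today's value:
t = travel_time_seconds(26_000, 1e11)
print(f"{t:.1f} s")  # on the order of seconds rather than millennia
```

The point of the sketch is simply that a journey of tens of thousands of years at today's lightspeed collapses to seconds under the quoted ratio.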
Setterfield: Thanks for the note with its question and the URL to Wikipedia. It is appreciated. In response I would direct you to The ZPE and the Redshift section of my paper here on this website, Behavior of the Zero Point Energy and Atomic Constants. I would ask you to note in particular that the Wikipedia article specifically states that it is the redshifts of QUASARS that are not quantized. The majority of the most recent work deals with objects (quasars) deep in space towards the frontiers of the cosmos at high redshifts. What was noticed initially with quasars was a large-scale clustering effect: large numbers of quasars at preferred redshifts. However, this is different from what Tifft was noting within a given cluster of galaxies. There he found consistent redshift jumps between individual members of the cluster, not preferred numbers of galaxies at a given redshift. This redshift jump cuts right through individual galaxies and has nothing to do with the clustering which was initially picked up with quasars. Indeed, the redshift quantum jump is small. This means it is more difficult to detect at high redshifts, because our instrumentation is less sensitive to small redshift changes at those distances. So in summary, the clustering effects which were initially observed with quasars, and which may have been negated by recent work, are NOT the same as the Tifft redshift quantum jumps which are apparent between individual members of a cluster or group of galaxies. This issue may have been blurred by Arp's contention that quasars come from nearby galaxies, so his model DOES expect the quasar redshifts to show these preferred values. THAT may well have been disproved, and, if it has, it also disproves Arp's contention that quasars are nearby phenomena.
If, however, the quasars really are distant, as the ZPE model accepts, then the quantization of redshifts between galaxies in groups still stands, as it is difficult to detect such small changes at high redshifts, and, indeed, this specific effect has NOT been looked for between neighboring quasars.
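To make the distinction concrete, the kind of test involved in the Tifft claim can be sketched as follows. The 72.5 km/s step is one of the periodicities Tifft reported; the velocity list is invented purely for illustration and is not observational data.

```python
# Hypothetical sketch of the quantization test described above: do the
# redshift velocities (cz) of galaxies within a group fall near
# multiples of a fixed step? The 72.5 km/s step is one value Tifft
# reported; the sample velocities below are invented for illustration.

STEP = 72.5  # km/s, one of Tifft's claimed quanta (an assumption here)

def phase_residuals(velocities_km_s, step=STEP):
    """Offset of each velocity from the nearest multiple of `step`.
    Small residuals across a group would suggest quantization."""
    return [v - step * round(v / step) for v in velocities_km_s]

# Invented velocities lying close to multiples of 72.5 km/s:
sample = [145.2, 217.8, 361.9, 434.6]
residuals = phase_residuals(sample)
print(residuals)  # each residual is well under the 72.5 km/s step
```

A real analysis would of course involve many galaxies, measurement errors, and a statistical significance test, but this shows why the effect is hard to detect at high redshift: the residuals being sought are far smaller than the spectral resolution available for faint, distant objects.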
Setterfield: In answer, let me first state that all galaxies have some motion through space. That motion registers on our equipment as a genuine Doppler shift: red-shifted if the galaxy is receding, blue-shifted if it is approaching. This genuine Doppler shift is added to the intrinsic redshift of the galaxy due to the effects of the Zero Point Energy (ZPE). In our Local Group of galaxies, those in our immediate vicinity, the redshift due to the ZPE is small, as the ZPE has not changed much since their light was emitted. The redshift component is therefore low, and the genuine Doppler shift must be added to it to get the overall result. In some cases it accentuates the small redshift; in other cases it negates it or overwhelms it completely. Thus the great Andromeda spiral galaxy, M 31, has a blue shift, as do a number of other members of our Local Group. The same situation applies, to a lesser extent, to galaxies in groups just outside our Local Group. Thus, some members of the M81 group also have blue shifts, but the majority have redshifts. At distances beyond the M81 Group, there are very few blue shifts. As far as our location in space is concerned, the redshift "shells" that we see around us would be the same anywhere in the cosmos. Our position is not unique. Since the Zero Point Energy built up in the same way uniformly throughout the cosmos, the same effects would have occurred in the light emitted from all atoms. Therefore, from any position in the universe, the shells of specific redshift would be at the same distance from that observer as we see from our own position here on earth. I hope that is a help. If you have further questions, please get back to us.
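The combination of an intrinsic shift with a genuine Doppler shift described above can be sketched numerically. In standard spectroscopy, successive shifts compose multiplicatively, (1 + z_obs) = (1 + z_1)(1 + z_2); the specific z values below are invented for illustration, not measurements of any particular galaxy.

```python
# Sketch of how a small intrinsic (ZPE-attributed) redshift combines
# with a genuine Doppler shift. Shifts compose multiplicatively:
# (1 + z_obs) = (1 + z_intrinsic) * (1 + z_doppler).
# The numbers below are illustrative only.

def combined_redshift(z_intrinsic, z_doppler):
    """Compose two shifts multiplicatively (negative z = blueshift)."""
    return (1 + z_intrinsic) * (1 + z_doppler) - 1

# A small intrinsic redshift overwhelmed by an approach velocity,
# giving a net blueshift as with M 31 (values are illustrative):
z = combined_redshift(0.0002, -0.0012)
print(z)  # negative: the Doppler blueshift dominates
```

For the tiny shifts within the Local Group the multiplicative and simple additive pictures agree to high accuracy, which is why the answer above can speak of the Doppler shift being "added" to the intrinsic component.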
Setterfield: That is a very interesting question and one that I have puzzled over, off and on, for a number of years. When I started on this research, there was a debate among scientists. Those involved in biology tended to support the idea that the eye sees color via wavelengths. In contrast, those involved in electronics insisted that the structure of the eye takes the form of frequency receptors (like TV antennae), not wavelength receptors. At that stage, my work supported wavelength, or energy, receptors. That debate was back in the 1980s; since then, biological science has progressed significantly. Here is what we currently know. The receptors in our eyes are rods and cones. The rods allow us to see in low-light conditions and are basically monochromatic. The cones come in three types: red-sensitive, green-sensitive, and blue-sensitive. The combination of signals from each of these cones gives us the final color that is perceived. The situation is shown diagrammatically like this: Each type of cone has a color pigment which has a peak response at wavelengths in the red, green or blue part of the spectrum. The responses can be seen from this graph: The structure of a rod or cone in more detail basically looks like this:
The rods and cones are photoreceptors: they are the site where light is changed into a nerve signal. In their outer segment, both rods and cones contain photopigments, pigments that undergo a chemical change when they absorb light. The photopigments have two main parts: a photopsin, a protein in the form of disks attached to the membrane, and a molecule such as retinal that absorbs light and coats those disks. This process then activates the mitochondria in the inner segment of the structure, which ultimately results in a cascade of electrical signals through the synapses. So the key to what is happening is the diagram shown as (b) above. We are now in a position to come to a possible decision on the question you asked. Here is how the analysis currently goes. When the ZPE strength was low, atomic orbit energies were also lower. This means that the light photons emitted had lower energy for a given orbit transition; that is, the light was redder. But because all orbit energies were lower for atoms, their bonding energies were also proportionally lower. A given orbit transition therefore produced light with a longer (redder) wavelength originally, but all photopigment bonds also had proportionally lower energy then as well. Therefore, the light photons from the same given orbit transition which produce the photopigment response now would have given the same response in the earlier days with lower ZPE, even though the light was redder. The conclusion is that the light would not have been perceived as redder back then.
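The proportionality argument in the paragraph above can be reduced to a one-line invariance: if the same scale factor multiplies both the emitted photon energy and the pigment bond energy, the ratio that drives the pigment response is unchanged. The energies and scale factor below are arbitrary illustrative numbers, and treating a simple energy ratio as a proxy for pigment response is an assumption made only for this sketch.

```python
# Sketch of the invariance argument above: if a lower ZPE scaled both
# emitted-photon energies and photopigment bond energies by the same
# factor, the ratio taken here as a proxy for pigment response is
# unchanged, so the light would not be perceived as redder.
# All numbers are arbitrary and illustrative.

def response_ratio(photon_energy, bond_energy):
    """Dimensionless energy ratio used as a stand-in for response."""
    return photon_energy / bond_energy

E_PHOTON_NOW = 2.5   # eV, roughly a green photon (illustrative)
E_BOND_NOW = 2.5     # eV, a matching pigment transition (illustrative)

scale = 0.6          # hypothetical earlier-epoch ZPE scaling factor
ratio_now = response_ratio(E_PHOTON_NOW, E_BOND_NOW)
ratio_then = response_ratio(E_PHOTON_NOW * scale, E_BOND_NOW * scale)

assert ratio_now == ratio_then  # same response despite redder light
```

The scale factor cancels algebraically, which is the whole content of the argument: redder photons meeting proportionally weaker bonds produce the same chemical response.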
Setterfield: This is essentially the same point addressed in the earlier answer on quasar redshifts and Tifft's quantization. A much more detailed explanation can be found in Cosmology and the Zero Point Energy.