Redshift and the Zero Point Energy: Feedback
The following comments were sent in by a graduate in astronomy. The Setterfield responses follow each one.
Setterfield: If this young man had followed the developing stream of data on this topic rather than preferring theory-based science, his view might have been different. It is important to realize that theories are only accurate insofar as they account for the data we are observing. When data and theory disagree, it is not the data that are wrong; it is the theory. I believe that science makes its most rapid progress by examining those areas where data and theory disagree, because this allows a more complete theory to be developed, one that agrees with all the data. It was because of new data (the quasars) that the old Steady State theory of the universe crumbled. In looking at the quantized redshift data, the evidence indicates that a major re-evaluation of the expanding-universe idea is needed. Some astronomers may not want to accept the data because it would mean rejecting or modifying current theory, but that is the wrong way to do science. It would be equivalent to clinging to the old Steady State theory no matter what and dismissing the quasars as observational error.
Setterfield: The initial discovery of redshift quantization was made with optical equipment, and the quantization found was of the order of 72 km/s, well above the instrument error. Then, as more optical data came in, a 36 km/s quantization stood out clearly. This was backed up by Guthrie and Napier’s study in the 1990s. That value is one-half of the original, which suggested that other sub-multiples of 72 km/s might exist. As radio-telescope data were examined, this proved to be the case: data from 21 cm line redshifts revealed further sub-multiple quantizations. To quote Tifft: "A primary result of these studies was the determination that random uncertainty in the repeatability of 21 cm parameters is very small… Lewis (1987) has claimed redshift accuracy in excess of 0.1 km/s at a very high signal to noise ratio." (Lewis, B. M. 1987, The Observatory, 107, 201). The objection that redshifts can only be measured to an accuracy of 10 km/s is therefore invalid.
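For readers who wish to see how such a periodicity can be extracted from velocity data, the following is a minimal illustrative sketch, not the actual procedure used by Tifft or by Guthrie and Napier (whose published work applied power spectrum analysis to galactocentric velocities). Every number in it (sample size, scatter, period) is invented for the example; the Rayleigh statistic it uses simply measures how strongly a set of velocities clusters at a common phase of a trial period.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented sample: 200 velocities quantized at 36 km/s with 4 km/s of
# measurement scatter (all numbers chosen purely for illustration).
true_period = 36.0                                   # km/s
v = true_period * rng.integers(10, 200, size=200) + rng.normal(0.0, 4.0, size=200)

def rayleigh_power(velocities, period):
    """Rayleigh statistic: large when velocities cluster at one phase of 'period'."""
    phase = 2.0 * np.pi * velocities / period
    return (np.cos(phase).sum() ** 2 + np.sin(phase).sum() ** 2) / len(velocities)

# Scan trial periods and report where the power spectrum peaks.
trial_periods = np.arange(10.0, 80.0, 0.1)           # km/s
power = [rayleigh_power(v, p) for p in trial_periods]
best = trial_periods[int(np.argmax(power))]
print(f"Strongest periodicity near {best:.1f} km/s") # ~36 km/s for this sample
```

For purely random velocities the Rayleigh power averages about 1, so a peak hundreds of times higher, as this synthetic 36 km/s sample produces, is the kind of signature a quantization search looks for.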
Setterfield: As noted above, the initial quantization of about 72 km/s was found using optical equipment, and further study revealed a sub-multiple of around 36 km/s. Initial examination of the more accurate radio-telescope data indicated a 24 km/s quantization, first discussed in 1984. As more data were examined, Tifft pointed out in 1988 that the 24 km/s quantization itself seemed to be a modulation of an even more basic quantization of around 8 km/s, and all the higher quantizations were then shown to be simply related to this basic 8 km/s value. In 1991, a statistical treatment of the data was performed, along with a comparison between some old redshift measurements and more recent ones from the same radio telescopes. This comparison, coupled with the statistical study, revealed a drop in the redshift with time of 8/3 km/s, and this was taken to be the most basic quantization figure. It is still well above the 0.1 km/s error margin that Lewis claimed for the radio-telescope data.
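Treating the rounded figures quoted above as exact (the published values carry decimals and error bars), the claimed quantizations all come out as 72 km/s divided by a small whole number, which is the sub-multiple pattern being described. A quick check:

```python
from fractions import Fraction

# Quantizations quoted in the text, in km/s (8/3 kept exact as a fraction).
periods = [Fraction(72), Fraction(36), Fraction(24), Fraction(8), Fraction(8, 3)]

for p in periods:
    print(f"{float(p):6.3f} km/s  =  72 / {72 / p}")

# Output:
# 72.000 km/s  =  72 / 1
# 36.000 km/s  =  72 / 2
# 24.000 km/s  =  72 / 3
#  8.000 km/s  =  72 / 9
#  2.667 km/s  =  72 / 27
```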
Setterfield: This comment relates to the redshift data from the centre of the Virgo cluster, where the high actual motion of galaxies washes out the redshift quantization. One point appears to have been missed. The redshift is usually assumed to reflect the motion of galaxies, but the quantized redshift indicates that the redshift is not due to motion at all. This is backed up by the fact that genuine motion actually destroys the quantization. The genuine motion in the centres of galaxy clusters that wipes out the quantization there also indicates that the rest of the galaxies in the cluster have very little motion, or the quantization would be washed out for them as well. As Arp and Tifft have both pointed out, this means that galaxy clusters are very “quiet” and have very little individual motion, or ‘peculiar velocity’. Consequently, the necessity for “missing mass” to hold a cluster together disappears, since there are no high velocities involved. The “missing mass” for individual galaxies is a different matter.
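The washing-out effect is easy to demonstrate numerically. The sketch below is again purely illustrative, with invented numbers, and reuses the Rayleigh helper from the earlier example: it starts with perfectly quantized velocities and adds Gaussian “peculiar velocity” scatter of increasing size. Once the scatter becomes comparable to the quantization interval, the periodic signal disappears into the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_power(velocities, period):
    """Same helper as above: large when velocities cluster at one phase of 'period'."""
    phase = 2.0 * np.pi * velocities / period
    return (np.cos(phase).sum() ** 2 + np.sin(phase).sum() ** 2) / len(velocities)

period = 24.0                                         # km/s, illustrative value
quantized = period * rng.integers(50, 300, size=300)  # perfectly quantized sample

# Add peculiar-velocity scatter of increasing size and watch the signal fade.
for sigma in (2.0, 8.0, 50.0):                        # km/s
    smeared = quantized + rng.normal(0.0, sigma, size=quantized.size)
    print(f"scatter {sigma:5.1f} km/s -> Rayleigh power {rayleigh_power(smeared, period):7.1f}")

# Small scatter leaves the power in the hundreds; scatter comparable to or
# larger than the 24 km/s interval drives it down toward the ~1 expected
# for purely random phases, i.e. no detectable quantization.
```

On this picture, a strong quantization signal away from a cluster core is itself evidence that the peculiar velocities there are small, which is the point Arp and Tifft draw from the data.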
Setterfield: You are correct that Brian Schmidt’s team achieved the same result at the same time, but it was Perlmutter’s team that got the attention of the world press, at least initially. You say it wasn’t an accident, and it is true they were actually researching something. What is important to note, however, is that the results obtained in 1998 and then in 2001 were not anticipated by the prevailing paradigm, which then had to introduce the cosmological constant as a mechanism to support the theory. It cannot truthfully be said that the Big Bang theory predicted these results before the surprise data came in. The problems with the grey dust argument were later pointed out in the Setterfield-Dzimano paper being discussed here.
Setterfield: Yes! They set out to measure things, but the results were unexpected, as evidenced by comments of scientists quoted in the popular press.
Setterfield: In so doing, you have missed some key points which demonstrate that the Zero Point Energy approach to the redshift and cosmology can successfully account for the data without the large number of parameters required by the equations of current theory. It appears that it may be appropriate to apply Occam’s razor in this instance.