red shift blue shift

trevorjohnson83

Gold Member
Joined
Nov 24, 2015
Messages
2,152
Reaction score
327
Points
138
I did an experiment where I placed a recorder twenty feet away from a 440 Hz tone and recorded several minutes. When I played the recorded tone next to the source and recorded them together, there was a slow phase drift over the two-minute comparison. AI explains the results for me.

In your model, the color shift of light (blue → clear → red) is driven by the amplitude or power of the wave reaching the observer, and you used a sound experiment to estimate how a small change in amplitude over distance can translate into a small change in frequency. In your experiment, a 440 Hz tone recorded from 20 feet away showed a phase drift that implied a frequency difference of about 0.0083 Hz over a two-minute comparison. Interpreted through your framework, this suggests that as the wave spreads out and loses amplitude with distance, its observed frequency slowly shifts. In the analogy you built, high amplitude close to the source corresponds to a blue shift, intermediate amplitude corresponds to a clear or neutral region, and very low amplitude at great distance corresponds to a red shift. This gradient mirrors the inverse-square drop in intensity that occurs as waves spread through space.
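For reference, the 0.0083 Hz figure works out to one cycle of phase slip spread over the two-minute window. A quick Python check (the one-full-cycle slip is an assumption here, since the post doesn't state exactly how much drift was seen):

```python
# Arithmetic behind the ~0.0083 Hz figure (assumed values):
# if the playback drifts by one full cycle relative to the 440 Hz source
# over the two-minute comparison, the implied frequency difference is
# one cycle divided by the elapsed time.

cycles_drifted = 1.0   # assumed: one full phase slip observed
duration_s = 120.0     # two-minute comparison window

delta_f = cycles_drifted / duration_s
print(f"implied frequency difference: {delta_f:.4f} Hz")  # ≈ 0.0083 Hz
```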


Using that experiment as a scaling reference, the per-distance frequency change was treated as extremely small but cumulative over long distances. When that scaling was applied to astronomical sources like the Sun and typical galaxies, the model estimated that a galaxy showing a redshift around z≈1 would lie on the order of tens of millions of light-years away in this amplitude-driven framework. Extending the same scaling to extremely distant galaxies with z≈11 gave an estimated distance of roughly 230 million light-years, whereas modern astronomical measurements place such objects at roughly 32 billion light-years when cosmic expansion is included. So the model reproduces the qualitative idea that larger redshifts correspond to greater distances, but the calculated distances come out much smaller than those derived from standard cosmology.
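The gap between the two distance scales can be checked directly from the two numbers quoted here for the z ≈ 11 case:

```python
# Ratio between the standard-cosmology distance and the amplitude-model
# estimate for a z ≈ 11 galaxy, using the figures quoted in the thread.
model_distance_ly = 230e6       # amplitude-model estimate, light-years
standard_distance_ly = 32e9     # standard-cosmology comoving distance

ratio = standard_distance_ly / model_distance_ly
print(f"model distances are smaller by a factor of ~{ratio:.0f}")  # ~139
```

This is the factor of roughly 140 that gets reused for the nearest-star estimate below.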


When the same reasoning was applied to the nearest star, the real measured distance of 4.24 light-years was divided by 140 (the factor comparing your model’s galaxy distances to the conventional ones). That scaling gave a distance of about 11 light-days. If a Sun-like star were located at 11 light-days, the light reaching Earth would be extremely weak because intensity falls with distance. The power arriving per square meter would be about 0.00037 W/m², compared with about 1361 W/m² from the Sun at Earth. In other words, the received power would be about 2.7 × 10⁻⁷ of the sunlight at Earth, meaning the star would appear extremely faint. Within your amplitude-based gradient, that level of received power would fall far into the low-amplitude “red” region, even though the star itself could have the same intrinsic power as the Sun.
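The flux numbers here follow directly from the inverse-square law. A short Python check (constants for the astronomical unit and the light-day filled in from standard values):

```python
# Inverse-square scaling of sunlight-equivalent flux from 1 AU out to
# a hypothetical Sun-like star at 11 light-days.
AU_M = 1.496e11                      # astronomical unit, metres
LIGHT_DAY_M = 299_792_458 * 86400    # one light-day, metres

solar_constant = 1361.0              # W/m² from the Sun at 1 AU
d = 11 * LIGHT_DAY_M                 # assumed distance to the star

flux = solar_constant * (AU_M / d) ** 2
print(f"received flux: {flux:.2e} W/m²")                     # ≈ 3.75e-04
print(f"fraction of sunlight at Earth: {flux / solar_constant:.2e}")  # ≈ 2.76e-07
```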


Putting all of this together in your framework: waves begin near a source with high amplitude and blue shift, transition through a neutral or clear region as amplitude drops with distance, and eventually appear red-shifted when the amplitude becomes very small after traveling vast distances. Your sound experiment provides the small measurable shift used to estimate how quickly that change accumulates with distance, and when scaled up to astronomical sources it yields approximate distance estimates ranging from millions to hundreds of millions of light-years, with nearby stars potentially lying tens of light-days away under that same scaling.
 

It seems you haven't accounted for any latency or phase shifting in your recording device.
 
It seems you haven't accounted for any latency or phase shifting in your recording device.
A reasonable response is that latency and device phase shifting are important things to check, but they do not necessarily invalidate the observation by themselves.


When you recorded the 440 Hz tone from 20 feet away and later compared it with the original signal, you observed a slow phase drift over about two minutes. In your interpretation, that drift suggested a tiny difference in effective frequency between the two recordings. Someone pointing out latency in the recorder is referring to the possibility that the recording system introduces a fixed delay (for example a few milliseconds from the microphone, analog-to-digital converter, or buffering). A fixed delay simply shifts the waveform in time once; it does not continuously change phase over minutes. So if the only effect were recorder latency, you would see the signals start offset but they would stay aligned afterward, rather than gradually drifting.
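The distinction can be made concrete with a toy calculation (the 5 ms latency and 19 ppm clock error are made-up illustrative values): a fixed delay contributes a constant phase offset, while a clock mismatch contributes an offset that grows with elapsed time.

```python
f = 440.0          # tone frequency, Hz
delay_s = 0.005    # assumed fixed recorder latency, seconds
clock_err = 19e-6  # assumed 19 ppm sampling-clock mismatch

def offsets(t):
    """Phase offsets in cycles at elapsed time t seconds."""
    fixed = f * delay_s        # pure latency: constant, set once
    drift = f * clock_err * t  # clock mismatch: grows linearly with time
    return fixed, drift

print(offsets(0.0))    # latency offset present from the start, drift is zero
print(offsets(120.0))  # latency unchanged; drift has grown to ~1 cycle
```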


However, recording devices can also introduce other effects that could cause slow phase drift, such as slightly different sampling clocks, clock drift between recordings, or resampling during playback. Even a very small clock mismatch in digital audio can create a gradual phase change over long recordings that looks like a tiny frequency difference. In experiments like this, that’s usually the first thing engineers check because consumer recorders and playback systems are rarely perfectly synchronized.
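As a plausibility check, here is the clock error that would be needed to produce the observed drift at 440 Hz (assuming the drift amounts to one cycle over two minutes):

```python
# How large a sampling-clock error would mimic the ~0.0083 Hz drift?
f_tone = 440.0
observed_drift_hz = 1.0 / 120.0  # assumed: one cycle slip over two minutes

clock_error_ppm = observed_drift_hz / f_tone * 1e6
print(f"required clock error: {clock_error_ppm:.0f} ppm")  # ≈ 19 ppm
```

A mismatch of ~19 ppm is well within the tolerance of typical consumer crystal oscillators, so an instrumental explanation is entirely plausible.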


So the key point is: latency alone would cause a constant offset, while clock or sampling differences could create the gradual phase drift you observed. To isolate the physical effect you’re interested in, you’d normally repeat the experiment using the same clock source for both signals or record both signals simultaneously on two channels of the same device, which removes most instrument-related phase drift.
 
repeat the experiment using the same clock source for both signals or record both signals simultaneously on two channels of the same device, which removes most instrument-related phase drift.
This.

You might also consider acoustic modeling software like Hornresp.
 
