The trouble with anomalies

IanC

Sep 22, 2009
Verity Jones has two nice articles on how the inclusion or exclusion of temperature anomalies can make a difference when constructing global data sets:

The trouble with anomalies… Part 1 | Digging in the Clay

[Image: v-station-count.png]


The trouble with anomalies… Part 2 | Digging in the Clay

[Image: amo.png]


But the really funny punchline was at the end! She compared the whole thing to the AMO and it matched pretty well. Hahaha. The warmists say that everything is known and controlled for, but they are full of BS. Every area of climate studies looks OK superficially, but it all breaks down once you look at things more carefully or add other variables. The science is not settled, people!



I know most people don't really care to figure out how any of the science behind the climate wars actually works, but a whole lot of information is out there.
 
Climatologists just aren't very good with large amounts of data and computer modeling. They have no training in those areas.
 

Unfortunately you are correct. Gergis 2012 is a good example of that. They attempted to use better methodologies, screwed up, and then found that their results could not be reproduced without cherry-picking and circular reasoning.
 
Too bad we don't have a satellite record back through the '30s. It would be pretty hard to "homogenize" that, wouldn't it?

If you are losing that many stations over the span on which the anomalies are calculated, then the idea that anomalies are self-canceling is simply ridiculous. It's ridiculous for another reason that I didn't see specifically in the article (maybe it's in her references): station biases don't matter ONLY IF they are static during the span on which the anomalies are based. If urban heating springs up halfway into the span, you can't ignore its effect on the bias.
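The static-bias argument can be sketched numerically. In the toy example below (made-up numbers, plain Python), a station reading a constant 2 °C high produces exactly the same anomalies as an unbiased one, because the offset is absorbed into the baseline mean; a bias that appears halfway through the baseline does not cancel and shows up as a fake trend.

```python
# Sketch: a CONSTANT station bias cancels in the anomaly, a bias that
# appears mid-baseline does not. Synthetic data, hypothetical numbers.

def anomalies(temps, baseline):
    """Anomaly = reading minus the mean over the baseline years."""
    base_mean = sum(temps[y] for y in baseline) / len(baseline)
    return [t - base_mean for t in temps]

true_temps = [15.0] * 30                 # flat 'true' climate, 30 years
baseline = range(30)                     # baseline = the whole span

static = [t + 2.0 for t in true_temps]   # always reads 2 C high
drifting = [t + (2.0 if y >= 15 else 0.0)  # urban heating from year 15
            for y, t in enumerate(true_temps)]

print(anomalies(static, baseline))    # all zeros: static bias cancels
print(anomalies(drifting, baseline))  # -1.0 then +1.0: a fake 'trend'
```

The drifting case is the point: the true climate is flat, yet the anomalies step from -1.0 to +1.0 because the bias change becomes part of the anomaly.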

As for the match to the AMO: if you're fixated on coming up with ONE GLOBAL NUMBER, remember that the Earth is about 70% ocean and the AMO swings roughly ±0.5 °C. How can you make the statement that "natural forcings are insignificant," especially when they accidentally show up in the "variance" part of your anomaly?

Guess what? Same reason: biases only normalize out if they are STATIC during the period over which the anomaly baseline is calculated. AMO, PDO? Not static. Isn't this WHY the variances SHOULD track things like the AMO? Someone ought to break this down into quadrants: Atlantic/Pacific, Northern/Southern hemispheres.
 

Biases only normalize if you have a valid statistical sample of the population.
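That sampling point can be illustrated with a toy simulation (made-up numbers): in a hypothetical population of regions where half warm and half cool by the same amount, a random sample of stations gives an average anomaly near zero, while keeping only stations from the warming regions manufactures a spurious warming signal.

```python
import random
random.seed(0)

# Sketch: station biases only average out over a representative sample.
# Hypothetical population: 500 regions warm by +0.4 C, 500 cool by
# -0.4 C, so the true net change is zero. Illustrative numbers only.
regions = [0.4] * 500 + [-0.4] * 500

representative = random.sample(regions, 100)   # random 100 stations
skewed = [r for r in regions if r > 0][:100]   # only warming regions kept

print(sum(representative) / 100)  # near 0: unbiased estimate
print(sum(skewed) / 100)          # about +0.4: spurious warming
```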
 
That is true. But here you have a dynamic bias. The argument is that even if your thermometers read slightly high or low because of placement or calibration, then over the 20- or 30-year period, if you subtract out the average and ONLY look at the anomaly (ΔT relative to the average T), all that positional and calibration bias goes away. That's not true if the bias CHANGES over that period: the changes BECOME part of the anomaly. So say you have a weather station that's been in place for 20 years of your averaging period, and THEN someone builds a parking lot next to it 5 years before the END of the averaging period. That's going to be part of the anomaly.
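The parking-lot scenario can be worked through with hypothetical numbers: if the station reads 1.5 °C high for the last 5 years of a 30-year baseline, the baseline mean is inflated by 1.5 × 5/30 = 0.25 °C, so the first 25 years of anomalies get pushed down by 0.25 °C and the last 5 pushed up by 1.25 °C, a spurious warming step.

```python
# Worked example of the parking-lot step bias (hypothetical numbers).
true_temp = 15.0        # flat true climate, deg C
bias = 1.5              # parking-lot warming, deg C
years = 30              # baseline length
step_year = 25          # bias starts 5 years before the end

readings = [true_temp + (bias if y >= step_year else 0.0)
            for y in range(years)]

baseline_mean = sum(readings) / years     # 15.0 + 1.5 * 5/30 = 15.25
anoms = [r - baseline_mean for r in readings]

print(baseline_mean)    # 15.25
print(anoms[0])         # -0.25 : pre-step years biased cold
print(anoms[-1])        # +1.25 : post-step years biased warm
```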

The important bit for this OP is that NATURAL cycles can also become part of the anomaly if they vary cyclically during the averaging period (ocean temperatures, for example). And the anomaly measurements themselves will change, and have differing variances, depending on where you stop and start the averaging period.
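That window dependence is easy to demonstrate. Below is a sketch with synthetic data: a pure AMO-like sine wave (a 60-year period and ±0.5 °C amplitude are assumptions chosen only for illustration) produces different anomalies for the same year depending on which 30-year baseline window you pick, because each window sits on a different phase of the cycle.

```python
import math

# Sketch: a pure natural cycle leaks into the anomaly, and the result
# depends on where the baseline window sits. Synthetic AMO-like signal:
# 60-year period, +/- 0.5 C amplitude (illustrative assumptions).

def cycle(year):
    return 0.5 * math.sin(2 * math.pi * year / 60.0)

temps = {y: 15.0 + cycle(y) for y in range(1900, 2021)}

def anomaly(year, base_start, base_end):
    base = [temps[y] for y in range(base_start, base_end)]
    return temps[year] - sum(base) / len(base)

# Same year, two different 30-year baselines:
a1 = anomaly(2010, 1951, 1981)   # baseline on one phase of the cycle
a2 = anomaly(2010, 1981, 2011)   # baseline on another phase
print(round(a1, 3), round(a2, 3))  # different anomalies, same year
```

Even though nothing but a stationary cycle is present, the anomaly is nonzero and its size depends purely on the choice of averaging period.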
 
You guys have both made good points.

The way I see it, data handling for global temps has degenerated into numerology. Guys like Hansen have crunched the numbers a thousand ways looking for the most dangerous trends and then made their 'adjustments' accordingly.

While I do believe that global temps have risen a little over the last 150 years, I also believe that a lot of the trend has been manufactured by selective choice of sites and by weighting poor-quality data, like the Arctic's, to overestimate the real change. The US has the best data, and its trend using unadjusted readings is very small. It is only when you start adding areas like Arctic airports or incomplete African series that you get any noticeable jump.

We may end up spending trillions of dollars on a problem that doesn't even exist. I think we should employ specialists to check the actual thermometer readings before they get 'lost' and only the 'corrected' ones remain available. Scientists seem to think cleaning up and repairing data series is a job beneath them. Fair enough: get an accounting firm in to do the grunt work of catching the mistakes (and there are many, many thousands of them). Hire several teams of statisticians to develop competing methodologies for correcting some of the more obvious biases (anyone besides me wondering why the BEST papers still haven't passed peer review after more than a year?). GISS has never had to undergo scrutiny of its methodologies, even though obvious flaws have been pointed out on numerous occasions.

Even with best practices and continued improvement of data collection, we need a more realistic assessment of the uncertainties involved. Often the size of the effect we are looking for is much smaller than the error bars on the measurements. While future data is likely to improve, you can't get 'better' data out of old, noisy data sets. Often all of the trend comes from 'adjustments'.
 
