Challenge to the Skeptics: What's Your Theory?

I'd forgotten about this little nugget which I had personally verified a few years ago:

How does one get a monthly and a yearly mean? A monthly mean should be an average of all readings over the month. Should a yearly mean simply be an average of the twelve monthly means? No, but Hansen's source code does just that.

Why isn't a yearly mean the average of all the months?

A year is 365.25 days long, and February is 28 days long most of the time. If the yearly mean is just the average of the twelve monthly means, every month counts equally, so a reading from 28-day February carries more weight than a reading from 31-day January.

That's a good point I hadn't thought of, but I doubt it would change the results by any significant measure.
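For a rough sense of the size of this weighting effect, here is a minimal sketch in Python. The monthly means are made-up numbers, not any real GISS series; the point is only to show how a simple average of the twelve monthly means differs from an average weighted by the number of days in each month.

```python
import calendar

# Hypothetical monthly mean temperatures (degrees C) for one year -- made-up
# numbers, not GISS data, purely to show the two averaging methods.
monthly_means = [-2.1, -0.8, 3.4, 9.0, 14.6, 19.2,
                 22.0, 21.3, 16.8, 10.1, 4.2, -0.5]

year = 2009  # non-leap year, so February has 28 days
days_in_month = [calendar.monthrange(year, m)[1] for m in range(1, 13)]

# Method 1: simple average of the twelve monthly means (every month counts
# equally, so a February day carries more weight than a January day).
equal_weight = sum(monthly_means) / 12

# Method 2: weight each monthly mean by its number of days.
day_weighted = (sum(t * d for t, d in zip(monthly_means, days_in_month))
                / sum(days_in_month))

print(f"equal-weight annual mean : {equal_weight:.4f} C")
print(f"day-weighted annual mean : {day_weighted:.4f} C")
print(f"difference               : {equal_weight - day_weighted:+.4f} C")
```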
 

An error of one hundredth of a degree each year would certainly be significant, would it not? How many calculation errors would it take to do that? Remember, the premise is that humans are causing a temperature rise, that this rise will be disastrous, and that humans must therefore act decisively now. The claimed rise is less than one hundredth of a degree per year.

I think I'd rather have accuracy instead of a bunch of errors people assume would not "change the results by any significant measure."

Another criticism I have is of these models, which compute averages only to a tenth of a degree (and sometimes incorrectly at that) and then try to extrapolate a long-term trend in tenths of a degree. If any regression analysis had been done, the margin of error would be larger than the claimed anomaly.
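As for what such a regression would look like, here is a minimal sketch: fit a linear trend to an annual anomaly series and report the slope alongside its 95% confidence half-width (the "margin of error") and R-value. The series below is synthetic, built from an assumed 0.007 C/yr trend plus 0.25 C noise, so the printed numbers say nothing about the real GISS record; the sketch only shows how the comparison would be made (numpy and scipy assumed available).

```python
import numpy as np
from scipy import stats

# Synthetic annual temperature anomalies (degrees C): random noise around a
# small assumed linear trend, standing in for a real station series.
rng = np.random.default_rng(0)
years = np.arange(1880, 2010)
anomalies = 0.007 * (years - years[0]) + rng.normal(0.0, 0.25, years.size)

fit = stats.linregress(years, anomalies)

# 95% confidence half-width ("margin of error") for the fitted slope.
dof = years.size - 2
half_width = stats.t.ppf(0.975, dof) * fit.stderr

print(f"trend           : {fit.slope:+.4f} C/yr")
print(f"95% CI on trend : +/-{half_width:.4f} C/yr")
print(f"r-value         : {fit.rvalue:+.3f}")
```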
 
The rounding errors sound like all the other errors that "don't matter" when they are pointed out. Real Climate is still backing up Mann's use of the Tiljander proxies by saying they "don't matter."
 
Actually, it's the hockey team's clubhouse, but that doesn't mean they have to defend the indefensible.

Jim Hansen (same building, same floor as Real Climate) hand-waved the -0.15 C correction for US temps after the Y2K bug as "doesn't matter" because it didn't make much of a change to global temps. How many screwups does it take before it matters?
 

Let's not even get into data entry errors. The standard professional data entry error rate is 3%. That is, 3% of all data entered by professionals who do this all the time is wrong. What's the error rate of an untrained non-professional?

All these little things add up. Remember, we're talking about an increase of less than one degree over a period of more than 130 years, a very, very small amount.
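To put a rough number on the data-entry point, here is a small Monte Carlo sketch: take one month of made-up daily readings, mis-key about 3% of them, and see how far the monthly mean moves. The 3% rate comes from the post above; the readings and the "slipped digit" typo model are assumptions for illustration only.

```python
import numpy as np

# Rough Monte Carlo sketch of how a 3% data-entry error rate could move a
# monthly mean.  The readings, the 3% rate, and the typo model (a mis-keyed
# digit shifting the value by +/-1.0 C) are all assumptions.
rng = np.random.default_rng(1)
true_readings = rng.normal(15.0, 5.0, 31)       # one month of daily temps
true_mean = true_readings.mean()

shifts = []
for _ in range(10_000):
    entered = true_readings.copy()
    typo = rng.random(entered.size) < 0.03      # ~3% of entries mis-keyed
    entered[typo] += rng.choice([-1.0, 1.0], typo.sum())
    shifts.append(entered.mean() - true_mean)

shifts = np.array(shifts)
print(f"mean shift from typos : {shifts.mean():+.4f} C")
print(f"spread (1 sigma)      : {shifts.std():.4f} C")
```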
 
What's in it for them to say "no prob, folks"?
 
God, what a bunch of retards. Lying for money? Who has the billions and billions of dollars to lie for? Scientists? Are you joking? What companies have billions in profits per quarter, profits put at risk if the public decides their product threatens our ability to keep living our fine lives in the present environment?

And you just get further out there all the time. Considering that green organizations and the government have outspent skeptics something like 3500:1, one doesn't need to be a rocket surgeon to know where the real profit in dishonesty lies.
 

Mere error in measurement doesn't seem significant to me. If it's true error, there should be as many ups as downs. What matters is the trend. What do you mean "if" regression analysis had been done? Of course, they've been done. It's basic statistics and doesn't include a "margin of error", but rather an R-value that tells how close to perfect the regression is. Do you really mean '95% Confidence Levels'?
 

Mere error in measurement doesn't seem significant to me. If it's true error, there should be as many ups as downs.

Not necessarily true, which is why the Scientific Method doesn't allow building in those assumptions. It calls for consistent procedures and accurate measurements.

What matters is the trend.

Separate from your previous comment, true: the trend matters. All the more reason to look for unforeseen biases. An incorrect data type conversion in a computer program, combined with a coincidence in the data, could cause an upward bias in the rounding errors. Assuming that rounding errors would be up and down equally, and would therefore cancel each other out, is not acceptable.
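Here's a small sketch of that point about type conversion: if values stored as tenths of a degree are converted by truncation instead of rounding, the error is systematic rather than symmetric noise that cancels. The data below are made up, and whether the bias runs up or down depends on the data and the conversion path; the sketch only shows that it need not average out.

```python
import numpy as np

# Sketch of how a careless type conversion can turn rounding error into a
# systematic bias instead of noise that cancels.  The values and the
# truncation step are assumptions for illustration only.
rng = np.random.default_rng(2)
temps_c = rng.uniform(0.0, 30.0, 100_000)      # mostly-positive readings

# Store as integer tenths of a degree, as some archives do.
proper = np.round(temps_c * 10).astype(int)    # round to nearest tenth
sloppy = (temps_c * 10).astype(int)            # truncation toward zero

print(f"mean bias from rounding   : {(proper / 10 - temps_c).mean():+.5f} C")
print(f"mean bias from truncation : {(sloppy / 10 - temps_c).mean():+.5f} C")
```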

What do you mean "if" regression analysis had been done? Of course, they've been done.

Not on the GISS raw data. If I'm incorrect, please show me.

It's basic statistics and doesn't include a "margin of error", but rather an R-value that tells how close to perfect the regression is.

Correct. I used less technical terms, but your version is the more precise one.

Do you really mean '95% Confidence Levels'?

Essentially the same thing (as I learned it, the margin of error is half the width of the confidence interval).
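For anyone following along, a tiny numeric check (synthetic data, scipy assumed) that the two terms describe the same quantity: the margin of error is the half-width of the 95% confidence interval.

```python
import numpy as np
from scipy import stats

# Check that "margin of error" and "95% confidence interval" describe the
# same quantity: the MOE is half the width of the interval.  The sample is
# random synthetic data, used only to illustrate the arithmetic.
rng = np.random.default_rng(3)
sample = rng.normal(0.0, 1.0, 50)

mean = sample.mean()
sem = stats.sem(sample)                          # standard error of the mean
moe = stats.t.ppf(0.975, sample.size - 1) * sem  # margin of error

lo, hi = mean - moe, mean + moe                  # the 95% confidence interval
print(f"95% CI          : ({lo:.3f}, {hi:.3f})")
print(f"half-width      : {(hi - lo) / 2:.3f}")
print(f"margin of error : {moe:.3f}")            # equal to the half-width
```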
 
