I had conjectured that when detrending time series, closer fits will tend to better control for coincidence. This intuition makes perfect sense, in my view. Consider that detrending with a linear fit is better than not detrending at all. After that, it's not hard to imagine there are coincidental time series where linear detrending does not make sense at all. I've also found time series where a second-order detrending is quite poor, and I've had to use a third-order detrending. The cumulative CO₂ emissions time series is a case in point.

The problem with detrending too closely is that there is some loss of information. To give you an example, if we only had 7 data points and detrended them using a 6th-order fit, the fit would be perfect, and we'd be left with zero information. This is presumably not so much of an issue when you have many data points, but there has to be some loss of information either way.
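The 7-points-with-a-6th-order-fit scenario is easy to check numerically. Here's a minimal sketch using made-up values (the numbers are illustrative, not from any real series): a degree-6 polynomial has 7 coefficients, so it passes exactly through 7 points and the residuals vanish.

```python
import numpy as np

# Hypothetical data: 7 yearly values (purely illustrative).
x = np.arange(7, dtype=float)
values = np.array([3.1, 4.0, 2.8, 5.2, 4.9, 3.7, 4.4])

# A degree-6 polynomial has 7 free coefficients, so it fits
# 7 points exactly: the detrended series is all zeros.
coeffs = np.polyfit(x, values, deg=6)
residuals = values - np.polyval(coeffs, x)

print(np.allclose(residuals, 0.0, atol=1e-8))  # True: nothing left to analyze
```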

Kenneth had tried my analysis with a 6th-order detrending and found that statistical significance was lost. This was interesting, but I subsequently pointed out that if you attempted the association by assuming there's a lag of 1 year between temperature and storms, statistical significance remained. I had previously found a lag of 1 year produced a better association than a lag of 0 years, and the 6th-order detrending confirms it. The 6th-order detrending is pretty remarkable too. There are no hints of cycles in a visual inspection of the detrended time series.
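The lag comparison can be sketched as follows. This is not the actual analysis or data from the post; it's a toy version, assuming the association is measured as a correlation between the two detrended series, with one shifted by the candidate lag.

```python
import numpy as np

def detrended(series, deg=6):
    """Residuals after removing a polynomial trend of the given degree."""
    series = np.asarray(series, dtype=float)
    x = np.arange(len(series))
    return series - np.polyval(np.polyfit(x, series, deg), x)

def lagged_corr(temps, storms, lag, deg=6):
    """Correlation of detrended storm counts with detrended
    temperatures from `lag` years earlier."""
    t = detrended(temps, deg)
    s = detrended(storms, deg)
    if lag == 0:
        return np.corrcoef(t, s)[0, 1]
    return np.corrcoef(t[:-lag], s[lag:])[0, 1]
```

With real data you would compare `lagged_corr(temps, storms, 0)` against `lagged_corr(temps, storms, 1)` (and check significance properly, e.g. with a t-test on the regression slope), rather than eyeballing one number.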

The exercise left me quite sure that there was still an association, but I got the sense that something was missing when it comes to convincing some readers. I think many people are unconvinced by slopes, confidence intervals and theoretical math. You need a good graph to be convincing. Unfortunately, both the temperature data and the storms data contain a lot of noise. You can sort of see a pattern if you look closely, but it's not slam-dunk convincing.

So I had an idea. We just need to smooth out the noise. And what's a simple way to smooth out noise? We just get centered moving averages. In fact, this idea is so simple that I'd be very surprised if no one had thought of it before. Here's what I did. For the year 1859 I calculated the "smooth" temperature as the average of raw temperatures from 1851 to 1867. For the year 1860, it was the 1852-1868 average, and so forth. Same for named storms. The resulting graph follows.

At times I think a better name for this blog might have been "Deny This." :)

Some remarks:

- The effect given by a straight comparison of the time series appears to be 8 storms for every 1 degree (C). This is somewhat higher than the effect I had previously reported from an analysis of the residuals, which was 6 storms for every 1 degree.
- The graph provides support for the contention that old storm records are unreliable. I would not recommend using storm counts prior to 1890.
- My prediction that at an anomaly of 2 degrees (C) the average season will be similar to the 2005 season is unchanged.
- The lag from the graph appears to be 2 years, and not 1 year, as suggested by various analyses of residuals.

## 7 comments:

How would you characterize the difference between a "denialist" and a "skeptic"?

OT, but good question. My view is that a denialist is someone who apparently cannot be persuaded by any amount of evidence. A skeptic critically considers evidence and data, but isn't blindly skeptical of everything just for the sake of being skeptical. A skeptic will generally accept the scientific consensus, for example. That's not always true, though. Here's an example of me going against what might be considered scientific consensus. But in this case, I think the evidence is on my side and no one has refuted my argument.

Others have written on the topic of denialism, e.g. here, here, and here.

So I think the 6th sense we have is really the 6th order polynomial...

as it appears that we humans can distinguish relatively accurately a match visually of what you have demonstrated by the maths Joseph!

ps will we get access to your earthquake analysis? I would dearly like to find out more accurately when the big one is due for the Vancouver/Seattle area.

:)

I did check whether there's an association between earthquakes and temperatures. I can report the residual trend is virtually flat, no different than if I were to try to associate temperatures with the Chinese zodiac sign.

I think the hypothesis of an association had to do with glacial earthquakes or something of the sort. It's not surprising an association with major earthquakes was not found. But I think it became a punching bag of the global warming "skeptics", from what I see on Google.

In fact, earthquakes seem to be on a downward trend.

Probably not very interesting for a full blog post, but there's my report on that.

The exercise left me quite sure that there was still an association, but I got the sense that there's something missing as far as convincing some readers...

How about the right data to begin with!

I always thought the 1930's were hotter than the 1990's. Why does that fact not show up?

In what dataset Anonymous? If you give me the precise claim, I can confirm it for you. Certainly, though, if you smooth out the data (like I did with CMAs) recent years are much warmer than anything that has been seen in the past.

The data I'm using is Northern Hemisphere sea-surface temperature, specifically Jun-Nov averages. Although in this new analysis, it shouldn't matter if you use annual means.

Note: The graph at the bottom of the post has apparently begun to strain credulity, so I'm making a spreadsheet available through this post for easy verification.

Post a Comment