Suppose I wanted to determine whether the current temperature trend is consistent with some projected trend. To do this, say I calculate the temperature slope of the last 200 days, along with its confidence interval in the standard manner, and then check whether the projected trend falls inside that interval. But maybe I want a tighter confidence interval. In that case I could use more data points, say, temperatures from the last 1,000 minutes. If we assume the temperature series approximates AR(1) with white noise, this should be fine.
That makes no sense at all, does it?
Intuitively, it seems that confidence intervals on temperature slopes (when we want to compare them with a long-term trend) should depend more on the time range covered than on the number of data points, or on how well those points fit a linear regression. We should have more confidence in a 20-year trend than in a 10-year trend, almost regardless of whether we use monthly data as opposed to annual data. The standard slope confidence interval calculation is certainly not going to capture that. We need to come up with a different method to compare short-term trends with long-term ones.
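For reference, here is a minimal sketch of that standard calculation, using made-up data and ordinary least squares via scipy.stats.linregress (the data and variable names are my own assumptions for illustration). Note how the interval narrows as points are added, regardless of the time span they cover, which is exactly the problem:

```python
import numpy as np
from scipy import stats

# Hypothetical daily temperature anomalies over 200 days (assumed data).
rng = np.random.default_rng(0)
days = np.arange(200)
temps = 0.001 * days + rng.normal(0.0, 0.3, days.size)

# Slope and its 95% confidence interval "in the standard manner" (OLS).
fit = stats.linregress(days, temps)
half_width = stats.t.ppf(0.975, days.size - 2) * fit.stderr
print(f"slope = {fit.slope:.4f} +/- {half_width:.4f} C/day")
# Sampling the same stretch of time more densely shrinks this interval,
# even though our real uncertainty about the trend has not changed.
```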
I will suggest one such method in this post. First, we need to come up with a long projected trend we can test the method on. We could use a 100-year IPCC trend line, if there is such a thing. For simplicity, I will use a third-order polynomial trend line as my "projected trend." Readers can repeat the exercise with any arbitrary trend line if they so wish. I should note that the third-order polynomial trend line projects a temperature change rate of 2.2C / century from 1998 to 2008.
The following is a graph of GISS global annual mean temperatures, along with the "projected trend." For the year 2008 I'm using 0.44C as the mean temperature. You can use other temperature data sets and monthly data too. I don't think that will make a big difference.
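A sketch of how such a trend line can be fitted, assuming the annual series is already in an array; the variable names and the synthetic stand-in data below are assumptions for illustration, not the real GISS values:

```python
import numpy as np

# Stand-in for the GISS annual mean series, 1880-2008 (assumed data;
# in practice, load the real anomalies and set 2008 to 0.44C).
years = np.arange(1880, 2009)
rng = np.random.default_rng(1)
temps = -0.3 + 5e-5 * (years - 1880) ** 2 + rng.normal(0.0, 0.1, years.size)

# Third-order polynomial "projected trend".
coeffs = np.polyfit(years, temps, deg=3)
trend = np.polyval(coeffs, years)

# Rate the polynomial implies over 1998-2008, in C per century.
rate = (np.polyval(coeffs, 2008) - np.polyval(coeffs, 1998)) * 10
print(f"projected 1998-2008 rate: {rate:.2f} C/century")
```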
We have 118 years for which an 11-year slope can be computed. There are different ways to analyze them. To make the analysis easy to follow, I will detrend the temperature series according to our projected trend; this way we can compare apples with apples as far as slopes go. The detrended series is shown in the following graph.
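Continuing the sketch above, detrending is a single subtraction:

```python
# Subtract the fitted polynomial; the long-term slope of the
# residual series is zero by construction.
detrended = temps - trend
```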
The long-term slope of the detrended temperatures is, of course, zero, so all 11-year slopes in the detrended series will be distributed around zero. We know that the 1998-2008 slope is -1.53C / century. The question we want answered is whether the 1998-2008 slope is unusual compared to the 11-year slopes observed historically; if so, that would indicate a likely point of change away from the projected trend.
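Every 11-year slope in the detrended series can then be computed with a rolling linear fit (same hypothetical arrays as above):

```python
# Slope of each 11-year window, indexed by start year and
# converted from C/year to C/century.
window = 11
slopes = np.array([
    np.polyfit(years[i:i + window], detrended[i:i + window], deg=1)[0] * 100
    for i in range(years.size - window + 1)
])
slope_1998 = slopes[-1]  # the window covering 1998-2008
```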
We can start by visualizing the distribution of 11-year slopes throughout the detrended series. The following is a graph of the number of years in slope ranges of width 0.2C / century. For example, the number of years that have slopes between 0.1 and 0.3 is 10.
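The binning can be reproduced with a histogram over 0.2-wide slope ranges:

```python
# Count windows falling into each 0.2 C/century slope bin, with bin
# edges on odd tenths so that e.g. 0.1 to 0.3 forms one bin.
edges = np.arange(-3.5, 3.7, 0.2)
counts, edges = np.histogram(slopes, bins=edges)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    if n:
        print(f"{lo:+.1f} to {hi:+.1f} C/century: {n} years")
```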
This is roughly a normal distribution of years according to their slopes. Approximately 95% of years have slopes in the -2.7 to 2.7 range; that is, only 4 years have slopes of -2.7 or lower, and 3 years have slopes of 2.7 or higher. I put forth that the real confidence interval for 11-year temperature slopes, relative to a long-term third-order polynomial trend line, is approximately ±2.7C / century.
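The same interval falls out of the empirical percentiles of the slope distribution; with the synthetic stand-in data the exact numbers will differ from the ±2.7 obtained from the real series:

```python
# Empirical 95% interval of the historical 11-year slopes.
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"95% of 11-year slopes lie in [{lo:.1f}, {hi:.1f}] C/century")

# Is the 1998-2008 slope unusual relative to history?
print("outside" if not lo <= slope_1998 <= hi else "within historical range")
```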
The 11-year slope for 1998 is only -1.53C / century, well within the estimated confidence interval. Therefore, it's a little premature to say that the 1998-2008 trend falsifies 2C / century. Of course, if 2009 is a cold year, that might change this evaluation.