Thursday, July 24, 2008

The "Hockey Stick" is Fine

There seems to be considerable controversy over the well-known "hockey stick" temperature reconstruction of the last two millennia – Mann & Jones (2003). I have even found what look like accusations of fraud, all embedded in discussions of very complicated statistics and algorithmic procedures that the average person couldn't possibly hope to evaluate.

I'm not interested in getting involved in the politics of the whole thing. I just want to point out that the raw data of the temperature reconstruction is made available by the NOAA Paleoclimatology Program. I contend that most people reading this can double-check if the raw data tells us we are living in unusually warm times – which is basically what the "hockey stick" construct conveys.

Of course, there are those who will say that we are living in unusually warm times relative to most of the last thousand years simply because the little ice age has ended. But we can control for this fairly easily.

There is a general temperature trend historically. We can remove this trend from the data, and then check if we're still living in unusually warm times after the removal. Specifically, we want to remove the warming trend that is a natural part of the end of the little ice age.

I would suggest that a 4th-order polynomial trend line will capture the general temperature trend of the last 1781 years more than sufficiently. (Excel will produce polynomial trend lines for you, up to 6th-order ones). The trend is characterized by a medieval warm period, followed by a period of cooling, and a subsequent period of warming. We can detrend the temperature time series based on the polynomial fit and see if the modern era remains special.
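
(For anyone who would rather not fight with Excel, the fit and the detrending are a few lines of Python with NumPy. This is just a sketch of the idea; the file name and column layout in the usage note are assumptions on my part, not the actual NOAA format.)

import numpy as np

def detrend_poly(years, temps, order=4):
    """Fit a polynomial trend of the given order and return the residuals."""
    coeffs = np.polyfit(years, temps, order)   # least-squares polynomial fit
    trend = np.polyval(coeffs, years)          # temperatures modeled by the trend
    return temps - trend                       # detrended temperatures

# Usage sketch -- file name and columns are hypothetical:
# data = np.genfromtxt("mann_jones_global.txt")   # column 0: year, column 1: temperature
# detrended = detrend_poly(data[:, 0], data[:, 1])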

This sort of detrending methodology has apparently been used in climatology before. Holme et al. (2008) point out that "sophisticated statistical methods have been applied to [climate] series, but perhaps sometimes these methods might even be too sophisticated." They further claim that "the [detrending] method provides a rigorous way of defining climate 'events', and allows comparison of long-term trends and events in time series of climatic records from different archives."

The detrending method in Holme et al. is actually more sophisticated than what we can do in a straightforward manner, but the authors are interested in long-term quasiperiodic trends.

Let's first see what the temperature time series looks like, along with the proposed 4th-order polynomial fit. We will only be looking at the global temperature reconstruction in this post.

[Figure: Mann & Jones (2003) temperature reconstruction with the 4th-order polynomial fit]

In order to detrend the time series, we simply subtract temperatures modeled by the polynomial equation from observed (reconstructed) temperatures. The Y axis offset is not important to this analysis. (Note that in the equation shown in the figure above, x = year - 199). The result of the detrending procedure is illustrated in the following figure.

[Figure: Detrended Mann & Jones temperature reconstruction]

So now we have a nice detrended temperature time series, which – if I may be redundant – has an entirely flat trend. What do we do with it?

Let's sort data rows by detrended temperature in descending order. If we look at the top 5% (89) years ranked in this manner, we see that they have a detrended temperature greater than 0.123. In other words, if we were to pick a year at random from the data set, there is only a 5% chance that its detrended temperature is greater than 0.123. (If you must know, the residuals of the polynomial regression are normally distributed).
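
(Continuing the sketch from above, the cutoff and the list of "statistically warm" years take a few more lines. Again, this is just an illustration; the array names are mine.)

import numpy as np

def warm_year_threshold(detrended, top_percent=5.0):
    """Detrended-temperature cutoff exceeded by the warmest top_percent of years."""
    return np.percentile(detrended, 100.0 - top_percent)

def statistically_warm_years(years, detrended, top_percent=5.0):
    """Years whose detrended temperature exceeds the cutoff."""
    cutoff = warm_year_threshold(detrended, top_percent)
    return [int(y) for y, t in zip(years, detrended) if t > cutoff]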

In statistics, a 5% probability is the standard for rejection of hypotheses. If we hypothesize that a given year is not an unusually warm year, its detrended temperature should be 0.123 or lower. Yet, this is not the case for many of the years in the modern era, as shown in the following figure.

[Figure: Detrended reconstruction with statistically warm years highlighted]

All but 3 of the years from 1968 to 1980 are statistically warm years, even after detrending the whole 1781-year time series. This cannot be explained as a consequence of the culmination of the little ice age. Clearly, we are in the midst of a "climate event."

Is it an unprecedented event? If you only consider the 1968-1980 range as special, then no. There was an 11-year "climate event" between the years 668 and 678 when detrended temperatures were higher than 0.123. That is the closest precedent that can be found in the 1781-year temperature series. If we consider that temperatures have increased after 1980, then I'd have to agree with Mann & Jones that modern era global warming "dwarfs" anything from the last 2 millennia.

Sunday, July 20, 2008

Global Warming Forecast - Based on 3.46C Model

So far we have estimated climate sensitivity to CO2 doubling, and tested the results of the analysis with a hindcast. I will close the series with a forecast.

It will be a simple forecast in the sense that we will only consider CO2 trends. While I would caution this is an important limitation of the forecast, I would also note the hindcast had the same exact limitation. Of course, it's quite possible that in analyses of historic data, CO2 acts as a proxy of other anthropogenic forcings. The behavior of this confounding in the past may differ from its future behavior.

That said, the part of the forecast I really can't be very confident about is the projection of future atmospheric CO2 concentrations. This basically amounts to attempting to predict human behavior and worldwide policy decisions. What I will do is simply define two scenarios based on the Mauna Loa data, as follows.
  • Scenario A: A second-order polynomial forecast of CO2 concentrations.

  • Scenario B: A third-order polynomial forecast of CO2 concentrations.
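
(Both extrapolations are easy to reproduce. Here's a minimal sketch; the variable names are mine, and the Mauna Loa annual means have to be supplied.)

import numpy as np

def co2_scenario(years, co2_ppmv, order, future_years):
    """Fit a polynomial of the given order to the CO2 record and extrapolate it."""
    coeffs = np.polyfit(years, co2_ppmv, order)
    return np.polyval(coeffs, future_years)

# future = np.arange(2009, 2051)
# scenario_a = co2_scenario(years, co2_ppmv, 2, future)   # Scenario A: 2nd-order
# scenario_b = co2_scenario(years, co2_ppmv, 3, future)   # Scenario B: 3rd-order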

Each scenario is illustrated in the following graph.

[Figure: CO2 concentration forecasts under Scenarios A and B]

If it is true that peak oil is either looming or behind us, I would say Scenario B is considerably more likely.

To get "high" and "low" estimates I was initially planning to use the 95% confidence interval of the rate-of-temperature-change formula, but that range produces forecasts that are all very similar. So what I did instead was produce new formulas for sensitivities of 3.0C (low) and 4.0C (high). For additional details on how the forecast is done, see the hindcast post.

The resulting forecasts of each scenario are illustrated in the following graphs.

[Figure: Global warming forecast, Scenario A]

[Figure: Global warming forecast, Scenario B]

Again, I consider scenario B to be more probable. We'll see how they do. Under either scenario it would seem that a global temperature anomaly of 1 degree Celsius by the early 2020s is a done deal. The model also tells us that it takes about 10 years for temperatures to level off after CO2 concentrations do. Under scenario B, we are apparently at a peak in the rate of temperature change – roughly 2C/century. This rate will begin to drop. It will be 1.5C/century by 2035.

Friday, July 18, 2008

How Well Does a Sensitivity of 3.46C Hindcast?

[Note: Revised 08/02/2008]

In the last post we estimated the most likely climate sensitivity to CO2 doubling by means of an analysis of temperature change rates. The result (3.46C) is at the high end of the range of sensitivities considered plausible by the scientific community. A hindcast should not only tell us whether the estimate is in fact too high, but it should also test some of the other results from the analysis. And to make it interesting, we will do a hindcast of the last 150 years. Sound crazy? See Figure 1.

[Figure 1: Hindcast of temperatures from CO2 concentrations, 1853-2004]

This turned out much better than I expected. In fact, I suspect the chart might invite disbelief among some readers, so I'm making the spreadsheet available here (XLS). The formulas can be verified to match those of the analysis.

The only inputs to the hindcast are (1) CO2 atmospheric concentrations from 1853 to 2004 (estimated in ppmv as described at the end of this post), and (2) observed temperatures from 1853 to 1856. The observed temperatures used (Column D) are actually central moving averages of period 7.

My expectation for the hindcast was that error would accumulate, and in the end we would have a deviation from the observed temperature trend, hopefully not a big one. That's because the temperature for year Y is predicted in the hindcast by taking the temperature in year Y-2 and adding twice the predicted temperature change rate of year Y-1. Intuitively, it doesn't seem like this technique would maintain accuracy over a time series this long.

There is a good reason why the model hindcasts this well, nevertheless. First, it helps that formulas were derived in part from the data we're hindcasting. But more importantly, what we're looking at is a self-correcting system. Local variability cannot make the system resolve its imbalance any faster or slower. If temperature becomes higher than it should be, for whatever reason, the temperature change rate will drop. Similarly, temperatures lower than they should be will be corrected by a positive change in the rate. Sooner or later, the observed trend will rejoin the predicted trend.

This speculative observation is testable in the hindcast. We can break the chain of predicted temperatures, insert artificial values, and see if the model resolves. This can be done in the spreadsheet by modifying one of the predicted temperature columns (e.g. column K, any row greater than 9). What I did is introduce an artificial warming between 1910 and 1913 so it ended up at 0.1C. The results can be seen in Figure 2.

[Figure 2: Hindcast with an artificial warming inserted between 1910 and 1913]

I think that's interesting, and I'm sure there's some insight about what's been occurring since 1998 somewhere in there.

For those who are interested in the details, the following is a recap of the results from the analysis that are used to produce the hindcast.

  1. T' = 11.494 log C - 28.768

  2. R = (T' - T) * 0.0915

  3. An unexplained lag of 3 years for imbalance to take effect on the rate of temperature change.


Where

  • C = The atmospheric concentration of CO2 given in ppmv.

  • T' = The equilibrium temperature, given in degrees Celsius anomalies as defined in CRUTEM3v data set.

  • T = The observed temperature. In the hindcast, this is actually the predicted temperature, except for 4 years we use as inputs.

  • R = The rate of temperature change, given in degrees Celsius per year.


The high and low hindcast predictions are based on the confidence interval given in the formula for R.

As an example, the following is how the predicted temperature for 1857 is calculated.

T(1857) = T(1855) + 2 * R(1856)

R(1856) = 0.0915 * (T'(1853)-T(1853))

That's all the hindcast is.
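
For readers who prefer code to spreadsheets, the whole recurrence fits in a few lines. This is a sketch of the procedure described above, not the spreadsheet itself; the function and variable names are mine.

import math

A, B = 11.494, -28.768   # T' = A * log10(C) + B
D = 0.0915               # R = D * (T' - T)
LAG = 3                  # years for imbalance to take effect on the rate

def hindcast(co2_ppmv, observed_first4):
    """Predict temperatures from consecutive yearly CO2 concentrations,
    seeded with smoothed observed temperatures for the first 4 years."""
    t_eq = [A * math.log10(c) + B for c in co2_ppmv]     # equilibrium temperature T'
    temps = list(observed_first4)                        # observed inputs, e.g. 1853-1856
    for i in range(4, len(co2_ppmv)):
        rate_prev = D * (t_eq[i - 1 - LAG] - temps[i - 1 - LAG])   # R(Y-1), lagged imbalance
        temps.append(temps[i - 2] + 2.0 * rate_prev)               # T(Y) = T(Y-2) + 2*R(Y-1)
    return temps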

Next up: We'll attempt a forecast.

Tuesday, July 15, 2008

Here's How You Can Estimate CO2 Climate Sensitivity From Historic Data

Most likely value = 3.46C

[Note: Revised 08/02/2008]

When I first became interested in the science of Global Warming (which was not too long ago) I had some substantial misconceptions. For example, I thought the current temperature anomaly (about 0.6C globally) was due to the current levels of greenhouse gases in the atmosphere, primarily CO2 (about 380 ppmv). Reality is more complicated. The issue is not that there's some lag between greenhouse gas concentrations and temperature either – it's a bit more complicated than that.

I've been learning about a concept called CO2 climate sensitivity, which is defined as the equilibrium temperature increase expected if the atmospheric concentration of CO2 were to double. The word equilibrium needs to be emphasized. At current CO2 concentrations, I would estimate the equilibrium temperature anomaly should be 0.89C, but the actual temperature anomaly is only about 0.6C. There's a significant imbalance, and the imbalance is corrected by temperature change. Simplifying, the mechanism that causes temperature change is called CO2 forcing.

There is much debate and uncertainty about the most likely climate sensitivity value. For a good overview, see James' Empty Blog.

What I want to do in this post is go over a relatively simple analysis where we estimate climate sensitivity by using publicly available historic data. We will also come up with formulas that tell us the most likely equilibrium temperature for a given CO2 concentration, and the most likely temperature change rate for a given actual temperature and CO2 concentration. The plausibility of these results will be illustrated with a graph.

First, let's go over some of the underlying theory. Given the way climate sensitivity is defined, it's clear that the expected equilibrium temperature change is the same for any doubling of CO2 concentrations, be it from 100 to 200 ppmv, or 1000 to 2000 ppmv. This tells me there's a logarithmic relationship between temperature and CO2 concentrations (assuming all else is equal) as follows:

T' = a log C + b


T' is the equilibrium temperature and C is the atmospheric concentration of CO2; a and b are constants. Climate sensitivity is thus

S = (a log 2C + b) - (a log C + b) = a log 2


When the observed temperature (T) differs from the equilibrium temperature (T'), there's imbalance. We will define imbalance (I) as follows.

I = T' - T


Further, I put forth that temperature change rate is given by

R = d I


where d is a constant. We're guessing a bit here, but the above is consistent with Newton's Law of Cooling.

Finally, let me define a construct (J) that I will use in the analysis. It is simply the imbalance minus the constant b, as follows.

J = I - b = a log C - T


If we know S, then we know a. When we have S, a and C for any given year, we can calculate J for any given year. Since we should be able to determine the temperature change rate (R) for any given year, we can model J vs. R (a linear relationship). The relationship between J and R should be equivalent to the relationship between I and R, except for a shift given by the constant b.

Here's the plan. We need to test different hypotheses about the value of S. We judge a hypothesis to be good by checking whether the resulting relationship between I and R is suitable, and we measure this by means of the "goodness of fit" (R²) of the linear association between J and R. (This methodology is called "selection of hypotheses by goodness of fit," and it seems adequate in this case, judging by Figure 3, which I will discuss shortly.)
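
In code, the hypothesis search might look roughly like this. It's a sketch of the procedure as I've described it, not the actual spreadsheet: the inputs are the smoothed temperature, CO2 and rate-of-change series, and the 3-year lag I discuss further down is passed in as a parameter.

import numpy as np

LOG2 = np.log10(2.0)

def goodness_of_fit(S, co2_ppmv, temps, rates, lag=3):
    """R-squared of the linear fit of R against J = (S / log 2) * log10(C) - T."""
    a = S / LOG2                              # S = a log 2, so a = S / log 2 (11.494 when S = 3.46)
    J = a * np.log10(co2_ppmv) - temps
    x, y = J[:-lag], rates[lag:]              # imbalance leads the rate by `lag` years
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def best_sensitivity(co2_ppmv, temps, rates, candidates=np.arange(1.0, 6.01, 0.01)):
    """Candidate S with the highest goodness of fit."""
    scores = [goodness_of_fit(S, co2_ppmv, temps, rates) for S in candidates]
    return candidates[int(np.argmax(scores))], max(scores)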

Before I get into the nuances of the analysis (which are important) I wanted to show the reader how I chose the best value of S. Figure 1 models S vs. the goodness of fit of the linear association between J and R.

[Figure 1: S vs. goodness of fit of the linear association between J and R]

This tells us that the value of S that makes most sense is 3.46.

After we have determined the most likely value of S, we can calculate the constant b. The linear association between J and R is as follows.

R = 0.09152J - 2.63281


The slope should be the same in the association between I and R, except here the intercept must be zero.

R = 0.09152I


Therefore, b may be calculated as follows.

0.091521I = 0.091521(I - b) - 2.63281
b = -2.63281 / 0.091521 = -28.768


Figure 2 is the scatter graph that illustrates the association between imbalance (I) and temperature change rate (R) when we assume S=3.46. This confirms the slope of the linear fit and the "goodness of fit" we had previously found.

[Figure 2: Imbalance (I) vs. temperature change rate (R), assuming S = 3.46]

A very important graph is one that shows the R and I time series side by side, under the same assumption (S=3.46). See Figure 3.

[Figure 3: The R and I time series side by side, assuming S = 3.46]

Figure 3 validates much of the underlying theory. It's one of those graphs that, once again, show anthropogenic global warming to be an unequivocal reality.

Figure 3 can also be used to visually check different values of S. When S is less than 3.46, you will see the imbalance (I) time series rotate in a clockwise direction. When it is greater than 3.46, it will rotate in a counter-clockwise direction. This provides subjective confidence about the adequacy of the hypothesis selection methodology.

Note that the imbalance (I) time series in Figure 3 is shifted three years to the right. An initial inspection of the graph clearly showed there was a lag of 3 years between imbalance and temperature change rate. I would've expected the effect to be immediate, but that's why it's important to put your data in graphs. I couldn't begin to theorize why it takes time for imbalance to take effect, but this finding needs to be taken into account in the analysis; otherwise the results won't make sense.

Another important aspect of the analysis is that time series noise needs to be reduced; otherwise you probably won't notice details like the 3-year lag. I calculated central moving averages of period 7 from the CRUTEM3v global data set. For example, the "smooth" temperature for 1953 is calculated as the average of the years 1950 through 1956. Additionally, the temperature change rate (R) is calculated from the "smooth" temperatures, looking 4 years ahead and 4 years back. If you also consider the 3-year imbalance lag, this leaves us with a workable time range spanning 1859 to 2000.
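
Here is a sketch of the smoothing and of the rate calculation. How exactly R is computed from the values 4 years ahead and 4 years back isn't spelled out above; the symmetric difference below is my own reading of it.

import numpy as np

def centered_moving_average(series, period=7):
    """Central moving average; the ends where the window doesn't fit are dropped."""
    kernel = np.ones(period) / period
    return np.convolve(series, kernel, mode="valid")

def change_rate(smooth_temps, span=4):
    """Rate of change per year, from smoothed values `span` years ahead and behind."""
    return (smooth_temps[2 * span:] - smooth_temps[:-2 * span]) / (2.0 * span)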

How do I get CO2 concentration data spanning that time frame? I discussed how I estimate that here. Basically, I try to find the best possible constant half-life of extra CO2 by matching emission data with the Hawaii data. The best half-life is 70 years or so.

I should note that this technique produces pre-industrial CO2 concentrations that are higher than I believe is generally accepted. My estimate gives about 294 ppmv for the 1700s. From ice cores, I understand the concentration has been determined to be 284 ppmv circa 1830. However, I can report that I tried a different estimation method that produces a value closer to 284 ppmv in the early 1800s, and this data produces much poorer fits in the analysis. For this reason, I went with my original estimation based on a constant half-life.

Let's look at the results of the analysis.

S = 3.46

T' = 11.494 log C - 28.768

R = 0.0915I [ 95% CI 0.074I to 0.109I ]


Temperatures are given as anomalies in degrees Celsius, as defined in the CRUTEM3v data set. The rate of change (R) is given in degrees per year.

What's the confidence interval on S? We'll leave that as an open exercise. It's not only that there's uncertainty in the various data sets used; it's also unclear how we would calculate the uncertainty on the best "goodness of fit." It's not a matter of calculating confidence intervals on R² values, which is easy. We basically have to determine the likelihood that the best "goodness of fit" is other than the one we found. This seems non-trivial, but maybe a reader can suggest a method. From what I've seen in a visual inspection of Figure 3, I would say S is unlikely to fall outside the range 2.8 to 4.0. Of course, things might happen in the future that invalidate these results, as they are applicable to historic data.

Next up: We'll see how well these results hindcast.

Monday, July 14, 2008

Hurricanes and Global Warming - Revisited

I previously wrote an analysis on the association between sea surface temperature and named storms. The post met some scrutiny which was actually pretty decent, primarily from a commenter named Kenneth Fritsch over at Climate Audit. I understand Climate Audit is one of the major AGW denial blogs.

I had conjectured that when detrending time series, closer fits will tend to better control for coincidence. This intuition makes perfect sense, in my view. Consider that detrending with a linear fit is better than not detrending at all. After that, it's not hard to imagine there are coincidental time series where linear detrending does not make sense at all. I've also found time series where a second-order detrending is quite poor, and I've had to use a third-order detrending. The cumulative CO2 emissions time series is a case in point.

The problem with detrending too closely is that there is some loss of information. To give you an example, if we only had 7 data points and detrended them using a 6th-order fit, the fit would be perfect, and we'd be left with zero information. This is presumably not so much of an issue when you have many data points, but there has to be some loss of information either way.

Kenneth had tried my analysis with a 6th-order detrending and found that statistical significance was lost. This was interesting, but I subsequently pointed out that if you attempted the association by assuming there's a lag of 1 year between temperature and storms, statistical significance remained. I had previously found a lag of 1 year produced a better association than a lag of 0 years, and the 6th-order detrending confirms it. The 6th-order detrending is pretty remarkable too. There are no hints of cycles in a visual inspection of the detrended time series.

The exercise left me quite sure that there was still an association, but I got the sense that there's something missing as far as convincing some readers. I think many people are unconvinced by slopes, confidence intervals and theoretical Math. You need a good graph to be convincing. Unfortunately, both the temperature data and the storms data contain a lot of noise. You can sort of see a pattern if you look closely, but it's not something that is slam dunk convincing.

So I had an idea. We just need to smooth out the noise. And what's a simple way to smooth out noise? We just get central moving averages. In fact, this idea is so simple that I'd be very surprised if no one has thought of it before. Here's what I did. For the year 1859 I calculated the "smooth" temperature as the average of raw temperatures from 1851 to 1867. For the year 1860, it was the 1852-1868 average, and so forth. Same for named storms. The resulting graph follows.

[Figure: 17-year centered moving averages of sea surface temperature and named storms]
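
(In case anyone wants to reproduce the smoothing, it's essentially a one-liner with pandas. The column names below are mine, not those of any particular data file.)

import pandas as pd

def smooth_series(df, window=17):
    """17-year centered moving averages of temperature and named-storm counts."""
    out = df.set_index("year")[["sst", "storms"]].rolling(window, center=True).mean()
    return out.dropna()   # drop the ends where the 17-year window doesn't fit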

At times I think a better name for this blog might have been "Deny This." :)

Some remarks:
  • The effect given by a straight comparison of the time series appears to be 8 storms for every 1 degree (C). This is somewhat higher than the effect I had previously reported from an analysis of the residuals, which was 6 storms for every 1 degree.

  • The graph provides support for the contention that old storm records are unreliable. I would not recommend using storm counts prior to 1890.

  • My prediction that at an anomaly of 2 degrees (C) the average season will be similar to the 2005 season is unchanged.

  • The lag from the graph appears to be 2 years, rather than the 1 year suggested by various analyses of residuals.

Saturday, July 12, 2008

Post on Global Warming Appears to Upset Denialists

A couple weeks ago I wrote a post in my primary blog that, if I may say so myself, convincingly and conclusively shows anthropogenic global warming is a reality. I believe the analysis is such that you don't need to have a degree in Math to follow it.

Not surprisingly, some global warming "skeptics" showed up in the comments and argued some points that are, frankly, not relevant to the analysis. But they were mostly civil. More recently, however, a commenter showed up saying things like...

Good grief!

There is too much wrong with this analysis to do a thorough critique...

There is nothing at all impressive about your statistics...


Personally, I find these types of comments fairly rude, but that wouldn't matter so much if the commenter had actually advanced some challenges of note. Have you ever encountered guys like this? While this is the first time I've come across global warming denialists, I do have considerable experience with their anti-science counterparts in the autism community. We call them "anti-vaxers" and "the mercury militia." I doubt global warming denialists are nearly as nasty, though. But I digress.

Additionally, it's a little funny that the guy apparently hadn't read the post at all, judging by the following comment.

Also, there's not a thinking person on the planet who disagrees that from 1850 to present both carbon dioxide and temperature have increased. That alone will cause a positively-sloped line.


In the first paragraph of my post I had made it perfectly clear that my intention was to test a methodology that controls for potentially coincidental trends. In the first paragraph! I don't think I would've bothered to do a global warming analysis otherwise. You have to keep in mind that I have no dog in this fight (except perhaps for the fact that I live on this warmed-up planet). My interest in the topic is scientific and not political.

This is a good opportunity to repost clearer versions of the figures from the analysis, nevertheless. Figure 1 shows the two time series without any adjustments. Figure 2 shows the residuals of the time series relative to the modeled trend lines. I've come to realize that a more intuitive way to think of Figure 2 is as a detrending of the time series from Figure 1. Note that in Figure 2 the residuals of temperature are calculated from a temperature time series that is 10 years ahead of observed values. I've also widened the CO2 Y scale a bit for clarity.

[Figure 1: Cumulative CO2 emissions and NH temperature time series, unadjusted]

[Figure 2: Detrended (residual) CO2 and temperature time series, with temperature 10 years ahead]

I encourage the reader to click on the figures to get familiar with their nuances. Print them if you prefer. I hereby also grant permission to use these images in any way the reader sees fit.

Note that Figure 2 includes linear fits of both detrended time series. The fits are completely flat. This means that the temperature residuals are not associated with the year, and neither are the cumulative CO2 residuals. Any independent property of the year should not associate with either. If the residuals cross-associate, at 99.99999999% confidence, then it's very difficult to argue that we're not looking at an actual effect.
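
For the record, the residual cross-association is straightforward to reproduce in code. This is only a sketch: the polynomial orders are parameters because I'm not restating here exactly which orders were used in the original post, and the variable names are mine.

import numpy as np
from scipy import stats

def residuals(years, series, order):
    """Residuals of a series after removing a polynomial trend of the given order."""
    trend = np.polyval(np.polyfit(years, series, order), years)
    return series - trend

def cross_association(years, cum_co2, temps, lag=10, co2_order=3, temp_order=3):
    """Regress temperature residuals, shifted `lag` years, on cumulative-CO2 residuals."""
    r_co2 = residuals(years, cum_co2, co2_order)
    r_temp = residuals(years, temps, temp_order)
    x, y = r_co2[:-lag], r_temp[lag:]        # temperature responds `lag` years later
    return stats.linregress(x, y)            # slope, intercept, r value, p value, std err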

Let me get back to some of the points the commenter raised.

If you wish to prove Anthropogenic Global Warming, you'll need to use temperatures from the whole globe. You cannot simply ignore the entire Southern Hemisphere. And you really should test other temperature data sets using your methodology...


Here the commenter seems to be suggesting that finding an effect of CO2 on Northern Hemisphere (NH) temperatures is not convincing enough. Unless we can show the whole planet is affected, it doesn't really matter if CO2 is warming the NH. Plus we have to show this using all data sets. Amazing.

When I first did the analysis, I didn't know much about all the data sets available. I just wanted to find one that contains as many data points as possible. When it came time to pick a data set, I chose a NH one simply because most CO2 is generated in the NH, so choosing this data set should, in theory, introduce less noise into the analysis.

The general temperature trend behavior is similar when you compare the globe with the NH and SH, even though the size of the effect of greenhouse gases varies. This is true of all data sets. If the commenter hopes the analysis won't hold if we look at different temperature data sets, frankly, he's engaging in self-deception.

When you're trying to validate a theory, you have to use measurements of what's ACTUALLY IN THE THEORY. For AGW, this means you have to model the CO2 concentrations in the atmosphere.


Here the commenter is suggesting that cumulative human CO2 emissions are not a good proxy of the CO2 concentrations in the atmosphere. This is not true, as I will explain below, but in any case, how does this explain the association found?

As far as I know, data on CO2 atmospheric concentration is only available for the range 1958 to 2004. I don't believe this is enough for this type of analysis considering how noisy the data in question is. Would you find Figure 2 convincing if you could only see a third of the graph? But more importantly, early on I realized that if I wanted to make an argument about anthropogenic global warming, it was key to look at the human contribution of CO2.

I have modeled cumulative CO2 emissions vs. atmospheric concentrations at Mauna Loa, Hawaii. The fit is excellent. For those who are versed in statistics, if I put both data sets in a scatter and do a linear fit, the R² of the fit is 0.9981.

I can get slightly better fits by assuming there's a constant half-life of CO2. To do this I use a simple model where our total atmospheric contribution at any point in time is calculated as follows.

total(year) = (total(year - 1) + emissions(year)) * constant


The constant tells us what fraction of the extra CO2 we've put into the atmosphere remains after 1 year (the fraction lost each year is 1 minus the constant). Of course, we're assuming that naturally produced CO2 is in equilibrium with the environment, which was roughly the case before the industrial revolution.
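
The model is simple enough to state in a few lines of code (a sketch; converting the accumulated total into ppmv is left out):

import math

def anthropogenic_co2(emissions, constant):
    """Running anthropogenic CO2 total, retaining `constant` of it each year."""
    totals, total = [], 0.0
    for e in emissions:
        total = (total + e) * constant   # total(year) = (total(year-1) + emissions(year)) * constant
        totals.append(total)
    return totals

def half_life_years(constant):
    """Half-life implied by a yearly retention constant, e.g. 0.99 -> about 69 years."""
    return math.log(0.5) / math.log(constant)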

I've tested different values of the constant and compared the resulting R² values for the linear association between the modeled totals and atmospheric concentrations. The results can be seen in the following graph.

[Figure: Goodness of fit (R²) vs. assumed yearly CO2 retention constant]

What this tells us is that the best values of constant are somewhere between 0.99 and 0.9908. These translate to an atmospheric half-life between 69 and 75 years.

None of this detracts from the fact that cumulative emissions are an excellent proxy of our contribution to atmospheric concentrations. But in case readers have any doubts, the following is a graph of anthropogenic CO2 contribution where we assume a half-life of 69 years. Please compare and contrast with Figure 1.

[Figure: Anthropogenic CO2 contribution assuming a 69-year half-life]

Evidently, this is all just a distraction from the facts in evidence: An association was found, and data imprecisions cannot explain it away.

Monday, July 7, 2008

Shouldn't It Be Considerably Warmer?

In a prior residual correlation analysis of cumulative CO2 emissions and northern hemisphere temperatures, the effect found appeared to be much larger than expected for short-term fluctuations. It was a clear effect, too, in the sense that it was evident graphically. I speculated that cumulative CO2 emissions are probably not a good reflection of actual atmospheric concentrations, because some CO2 probably does get removed from the atmosphere after some time.

That finding piqued my interest, though. In the original analysis, I basically assumed the half-life of CO2 was 'infinite'. We were only interested in fluctuations from the general trend, so the assumption was sufficient to prove a point then.

I subsequently went ahead and calculated human CO2 contribution assuming a constant atmospheric half-life of 50 years. (A constant half-life doesn't match up with the numbers very well, but we'll set this aside for the time being). Going from a half-life of 'infinite' to a half-life of 50 years, I expected to see a decreased effect.

Instead, the effect was about the same, using the best fluctuation lag I had previously found: 8 years. The slope was 3.181×10⁻⁵ ± 9.927×10⁻⁶. By matching up with atmospheric concentration data sampled at Mauna Loa, Hawaii, this translates to 0.081 (± 0.025) degrees (C) for every 1 ppmv increase in CO2 concentration. (I've done the analysis in other ways which I'm not going to go into, and I'm confident this is about right).

Keeping in mind that this was a northern hemisphere temperature analysis, the effect is still huge. Assuming the relationship is linear, it would mean that a fluctuation of 100 ppmv should result in a temperature fluctuation of about 8 degrees (C). This is when I started to wonder where the error might be. Of course, there are subtleties involved in how such a result should be interpreted, and I'll get to that, but I kept coming back to a graph I had previously seen.

[Figure: Historic CO2 concentration vs. temperature fluctuations]

In this graph we see that, historically, a fluctuation of 100 ppmv CO2 corresponds to a fluctuation of 8 to 10 degrees (C). I realize there are feedbacks involved, but this is interesting nevertheless.

Could it be that at current CO2 levels the expected temperature anomaly should be 5 or 10 degrees, as opposed to 1 degree? Let's consider the finding that a fluctuation of 1 ppmv should result in a temperature increase of about 0.05 degrees globally. In the analysis, 8 years were enough for this temperature increase to be realized for such a small fluctuation. Let's round that to 10. Temperature cannot increase with arbitrary speed I suppose. If it takes 10 years for a 0.05 degree increase, could it be that it takes 1,000 years for an expected 5 degree increase to materialize?

No, I don't think so. The rate of temperature increase cannot be constant or bounded by such a low value. If it were, we would not be able to detect short-term CO2 increase effects. Temperature would already be slowly working its way up towards a target, and small greenhouse gas fluctuations would not have an effect on the rate of increase. So instead of 1,000 years, we could be talking about hundreds or less.

What's going on with the data is not very intuitive, so I came up with an analogy that I believe is helpful. Imagine the planet is a car and its temperature is the speed of the car. Pumping CO2 into the atmosphere would be analogous to pressing the gas pedal. When you press the gas pedal, there will be an immediate effect: the speed of the car (temperature) will begin to increase, but it will take some time until it reaches a stable speed. The harder you press the gas pedal, the faster the speed increases, but the farther away the target stable speed is.

This suggests we've been looking at the results of the fluctuation analysis all wrong. It tells us not about the effects of CO2 concentrations on temperature, but about their effects on the rate of temperature increase. This is an important distinction. In the end, what we're seeing in the analysis is that for every 1 ppmv fluctuation, there's a fluctuation of about 0.008 degrees per year in the rate of increase of temperature (maybe 0.005 globally). But once again, this relationship cannot possibly be linear. It all gets fairly complicated from this point forward.

I presume climate models take this into account, either implicitly or explicitly. But I've never heard it explained this way. It is mistaken to suppose that current CO2 levels are what drive current temperature levels; they actually drive the rate of increase of temperature up to a target temperature that is probably very far off yet. I'm no climate scientist, but this seems quite obvious in retrospect.

If my intuition is correct, some additional questions come to mind.

  • If CO2 were to level off at current levels, would temperature continue to increase? For how long? Up to what point?
  • Does this all mean CO2 levels should be brought down to at most 300 ppmv for species on this planet to be able to survive long term?
  • Should we expect an acceleration of the rate of increase of temperature? Is there a limit to how fast it can increase?

Saturday, July 5, 2008

"There is a much better correlation between sun activity and temperature"

Shortly after I wrote my first post on global warming, a commenter noted that "there is a much better correlation between sun activity and temperature." I've read other blog discussions on the topic, and this seems to come up from time to time.

So I decided to put the data in scatters to see if there's any merit to this claim. I'm not going to standardize the data in any way. These will be straight plots of existing data.

First, let's look at a scatter (Figure 1) of atmospheric CO2 concentration vs. global temperature anomalies 8 years later from 1959 to 1999 (corresponding to 1967 to 2007 for temperature).

[Figure 1: CO2 concentration (1959-1999) vs. global temperature anomaly 8 years later]

Why 8 years later? This is the best lag I found in my initial analysis of CO2 emissions vs. temperature anomalies. Even without this lag, you will find a similar association. The 8 year lag is probably an underestimate when we're talking about long-term increases in CO2. That was a lag applicable to a fluctuating trend. (And yes, this is bad news).

Finally, let's look at a scatter (Figure 2) of sunspot number vs. global temperature anomaly, between 1881 and 2007.

[Figure 2: Sunspot number vs. global temperature anomaly, 1881-2007]

Is that what they call a "much better correlation"?
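
(If anyone would rather put a number on the comparison than eyeball the scatters, the correlation coefficients take one line each. The array names below are placeholders for the aligned series described above.)

import numpy as np

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    return np.corrcoef(x, y)[0, 1]

# r_co2 = correlation(co2_1959_1999, temp_1967_2007)       # CO2 vs. temperature 8 years later
# r_sun = correlation(sunspots_1881_2007, temp_1881_2007)  # sunspots vs. temperature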