Friday, August 15, 2008

Graph of NH SSTs and Named Storms Questioned

I have written about the association between the number of named storms in the Atlantic basin and Northern Hemisphere sea surface temperature anomalies several times now (last time here). I am quite confident there's a causal association there (even considering the possibility of coincidental trends).

The problem is that my posts on the subject have been met with disbelief. You see, the scientific literature is not clear on the matter, and not even top climate scientists seem to agree on whether the association exists. That's why I'm making this spreadsheet available.

In particular, there is a graph that is very difficult to deny. Sometimes you can express doubt about mathematical analyses on technical grounds, but clear and easily reproducible graphs are difficult to argue with. The graph in question is that of 17-year central moving averages of northern hemisphere sea surface temperature anomalies, and the number of named storms in the Atlantic basin, from the 1850s to the present time.

In the new spreadsheet I'm making available, I calculated both 15-year and 21-year moving averages of both data sets. You will find comments in the column headers with URLs pointing to the sources of the raw data. Having to do this seems over the top, but there really are people who apparently don't believe the original graph is real; plus, they seem to misunderstand the graph completely, as you can see in the comments section of this post at AccuWeather.com.
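For anyone who would rather recompute the averages than take the spreadsheet's word for it, here is a minimal sketch of the calculation. It assumes the raw data has been exported to a CSV; the file and column names are hypothetical placeholders, not the spreadsheet itself.

import pandas as pd

# Hypothetical export of the raw data: one row per year.
df = pd.read_csv("sst_storms.csv")  # columns: year, nh_sst_anom, named_storms

for window in (15, 21):
    # center=True makes these central moving averages; the endpoints come out as NaN
    df[f"sst_cma_{window}"] = df["nh_sst_anom"].rolling(window, center=True).mean()
    df[f"storms_cma_{window}"] = df["named_storms"].rolling(window, center=True).mean()

print(df[["year", "sst_cma_21", "storms_cma_21"]].dropna().tail())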

The 15-year and 21-year CMA graphs are posted below, in that order.

[Figure: 15-year central moving averages of NH SST anomalies and named storms]

[Figure: 21-year central moving averages of NH SST anomalies and named storms]
Comment Policy

I will state my comment policy here, for future reference. I do not enable comment moderation. The only comments I delete are those that are clearly in violation of Blogger's Content Policy. Scrutiny is more than welcome. If you believe I made a mistake, tell me. If you believe I'm making things up, you absolutely should tell me, but you better be right.

Tuesday, August 12, 2008

NOAA Study Seems To Confirm Observation From 07/14 Post

Not so long ago I wrote a follow-up to an earlier analysis on the association between the number of named storms in the Atlantic basin and northern hemisphere sea surface temperatures. At the end of the post I listed a number of conclusions, one of which was the following.

The graph provides support for the contention that old storm records are unreliable. I would not recommend using storm counts prior to 1890.


I had posted a graph of 17-year central moving averages of NH sea surface temperature and named storm series, reproduced below. You will note I had placed a vertical line around the year 1890 in order to indicate there was some sort of point of change there.

[Figure: 17-year central moving averages of NH SST anomalies and named storms, with a vertical line near 1890]
I didn't use any mathematical analysis to determine that 1890 was in any way special. It was simply obvious, visually, that something was not right in the named storms series prior to 1890. Of course, the central moving average smoothing helped make that visible.

Enter Vecchi & Knutson (2008), a NOAA study of North Atlantic historical cyclone activity. The authors determined, based on known ship tracks, that early ships missed many storms, especially in the 19th century.

Now, this study is being touted as evidence that global warming and the number of storms in the Atlantic are not associated. Clearly, that is nonsense, if you just look at the figure above. If you'd like to see some math, I have done a detrended cross-correlation analysis as well. All that is necessary to demonstrate an association is a linear detrending of series that run from 1900 to the present; the detrending should take care of any problems related to the unreliability of old storm counts. I can further report that even after detrending the series with 6th-order polynomial fits, a statistically significant association remains, provided storms are presumed to lag temperatures by at least one year.
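For the curious, the detrending-plus-lag check amounts to something like the sketch below. The file and column names are hypothetical placeholders for the data linked in earlier posts; this is a sketch of the procedure, not the exact script I ran.

import numpy as np
from scipy import stats

# Hypothetical CSV with one row per year: year, nh_sst_anom, named_storms
data = np.genfromtxt("sst_storms.csv", delimiter=",", names=True)
years, sst, storms = data["year"], data["nh_sst_anom"], data["named_storms"]

def detrend(x, y, degree=1):
    """Residuals after removing a polynomial fit of the given degree."""
    return y - np.polyval(np.polyfit(x, y, degree), x)

mask = years >= 1900
sst_resid = detrend(years[mask], sst[mask], degree=1)      # try degree=6 as well
storm_resid = detrend(years[mask], storms[mask], degree=1)

lag = 1  # storms presumed to lag temperatures by at least one year
r, p = stats.pearsonr(sst_resid[:-lag], storm_resid[lag:])
print(f"lagged detrended correlation: r = {r:.3f}, p = {p:.4f}")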

About The Disingenuous "Global Warming Challenge" by JunkScience.com

I read somewhere that JunkScience.com had issued a "global warming challenge" some time back that is promoted as follows.

$500,000 will be awarded to the first person to prove, in a scientific manner, that humans are causing harmful global warming.


That's also how people pitch the "challenge" whenever they tout it: if you are certain anthropogenic global warming is real, you should be able to prove it. Who wouldn't want to make $500,000?

But as you can imagine, there's a catch. You need to falsify two hypotheses.


UGWC Hypothesis 1

Manmade emissions of greenhouse gases do not discernibly, significantly and predictably cause increases in global surface and tropospheric temperatures along with associated stratospheric cooling.

UGWC Hypothesis 2

The benefits equal or exceed the costs of any increases in global temperature caused by manmade greenhouse gas emissions between the present time and the year 2100, when all global social, economic and environmental effects are considered.


Now, hypothesis #1 should already be falsifiable. The only issue I have with it is that they have made it unnecessarily difficult (to cover their asses, no doubt) by including stratospheric cooling as a requirement. Don't get me wrong. I'm sure stratospheric cooling is an important matter to climate scientists, but why does it matter to the challenge? Isn't surface temperature warming due to anthropogenic causes interesting enough?

Technically, the issue is that there's not a lot of data on stratospheric temperatures, as far as I know. Considering lags and so forth, it's probably difficult to demonstrate an association in a decisive way. I haven't run the numbers, but this is my preliminary guess.

Hypothesis #2 is not falsifiable right now. We'd have to wait until about 2100 to either validate it or falsify it. Peak oil is probably looming or behind us, so we can't say what might happen by 2100. There are policy decisions to consider. There might be technological advances that change the general outlook. If we make certain assumptions, then sure, it's theoretically possible to give confidence ranges on certain predictions, such as sea level rises or changes in storm intensity.

Clearly, the "challenge" is designed so that it's impossible, or nearly impossible, to win. Despite its name, JunkScience.com is not a site about junk science. If you visit it, you will see it's nothing but a propaganda outlet for global warming denialism books and videos. A site truly about junk science would probably discuss things like the paranormal, homeopathy, the vaccine-autism hypothesis, and so on. JunkScience.com does not.

In fact, what is the evidence that JunkScience.com has $500,000 to give out? Have they been collecting pledges? If they have collected funds, and there's no winner to their challenge, which I can almost certainly assure you there won't be, will they keep the money?

Call me cynical, but I doubt JunkScience.com is either capable or willing to give out $500,000 to anybody, regardless of the entries they receive.

Counter-Challenge

Here's a counter-challenge for JunkScience.com. Reduce the stakes if you need to. Then change the requirements of the challenge to include a single hypothesis to falsify, as follows.

Manmade emissions of greenhouse gases do not discernibly, significantly and predictably cause increases in global temperatures.


What's there to fear, JunkScience.com?

Friday, August 8, 2008

Just in case there are any doubts about anthropogenic influence in atmospheric CO2

You would think this is the least controversial aspect of the global warming debate, but you'd be surprised. I realized this after reading some of the comments in a post by Anthony Watts about a recent correction in the way Mauna Loa data is calculated (see also reactions by Tamino and Lucia).

Tamino subsequently wrote an interesting post on differences in CO2 trends as observed at three different sites: Mauna Loa (Hawaii), Barrow (Alaska) and the South Pole station. Most notably, there's a pronounced difference in the annual cycle between these stations, which, according to Tamino, is explained by there being more land mass in the Northern Hemisphere. I would imagine higher CO2 emissions in the Northern Hemisphere might also play a role, but I'm speculating.

In this post I want to show that available data is quite clear about anthropogenic influence in atmospheric CO2. Additionally, I want to discuss how we can tell that excess CO2 stays in the atmosphere for a long time.

I will use about 170 years of data for this. There's a reconstruction of CO2 concentrations from 1832 to 1978 made available by CDIAC, and derived by Etheridge et al. (1998) from the Law Dome DE08, DE08-2, and DSS ice cores. You will note that there's an excellent match between these data and Mauna Loa data for the period 1958 to 1978. Mauna Loa data has an offset of 0.996 ppmv relative to Etheridge et al. (1998), so I applied this simple adjustment to it in order to end up with a dataset that goes from 1832 to 2004.
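The splice itself is trivial; the sketch below shows the idea, with hypothetical file and column names, and with the sign of the 0.996 ppmv adjustment left as an assumption.

import pandas as pd

# Hypothetical exports: annual means indexed by year.
law_dome = pd.read_csv("law_dome_co2.csv", index_col="year")["co2_ppmv"]
mauna_loa = pd.read_csv("mauna_loa_annual.csv", index_col="year")["co2_ppmv"]

# Adjust Mauna Loa by the 0.996 ppmv offset (the sign here is an assumption)
# and append it to the ice-core reconstruction to get an 1832-2004 series.
co2 = pd.concat([law_dome.loc[:1978], (mauna_loa - 0.996).loc[1979:2004]])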

CDIAC also provides data on global CO2 emissions. What we need, however, is an estimate of the excess anthropogenic CO2 that would be expected to remain in the atmosphere at any given point in time. We could simply calculate cumulative emissions since 1751 for any given year, but this is not necessarily accurate, since some excess CO2 is probably reclaimed by the planet every year. What I will do instead is make an assumption about the atmospheric half-life of CO2 in order to obtain a dataset of presumed excess CO2. I will use a half-life of 24.4 years (i.e. 0.972 of the excess CO2 remains after 1 year). I should note that I have tried this same analysis with half-lives of 50, 70 and 'infinite' years, and the general results are the same.
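In code, the presumed-excess series amounts to a running total of emissions in which each year's leftover is decayed according to the assumed half-life. A sketch (the emissions loading is left out):

import numpy as np

def presumed_excess(emissions, half_life=24.4):
    """Running total of annual emissions, decayed each year per the given half-life."""
    retention = 0.5 ** (1.0 / half_life)  # ~0.972 per year for a 24.4-year half-life
    excess = np.zeros(len(emissions))
    running = 0.0
    for i, e in enumerate(emissions):
        running = running * retention + e
        excess[i] = running
    return excess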

Figure 1 shows the time series of the two data sets.

[Figure 1: CO2 concentration and presumed excess emissions]

The trends are clear enough. CO2 emissions appear to accumulate in the atmosphere and are then observed in ice cores (and at various other sites like Mauna Loa). Every time we compare time series, though, there's a possibility that we're looking at coincidental trends. A technique that can be used to control for potentially coincidental trends is called detrended cross-correlation analysis (Podobnik & Stanley, 2007). In our case, the detrended cross-correlation is obvious enough graphically, and we'll leave it at that. See Figure 2. Basically, we take the time series and remove their trends, which are given by third-order polynomial fits. You can do the same thing with linear or second-order fits. The third-order fit is a better fit and produces more fluctuations around the trend, which makes the correlation more obvious and less likely to be explained by coincidence.

[Figure 2: Detrended residuals of CO2 concentration and presumed excess emissions]
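For reference, the detrending behind Figure 2 amounts to something like the sketch below; the merged file and its column names are hypothetical stand-ins for the two series described above.

import numpy as np
from scipy import stats

# Hypothetical merged file with columns: year, emissions, co2_ppmv, excess
data = np.genfromtxt("co2_merged.csv", delimiter=",", names=True)
years, concentration, excess = data["year"], data["co2_ppmv"], data["excess"]

def cubic_residuals(x, y):
    """Residuals after removing a third-order polynomial fit."""
    return y - np.polyval(np.polyfit(x, y, 3), x)

r, p = stats.pearsonr(cubic_residuals(years, concentration),
                      cubic_residuals(years, excess))
print(f"correlation of detrended residuals: r = {r:.3f}, p = {p:.4f}")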

With that out of the way, how do we know that excess CO2 stays in the atmosphere for a long time? First, let's check what the scientific literature says on the subject, specifically, Moore & Braswell (1994):

If one assumes a terrestrial biosphere with a fertilization flux, then our best estimate is that the single half-life for excess CO2 lies within the range of 19 to 49 years, with a reasonable average being 31 years. If we assume only regrowth, then the average value for the single half-life for excess CO2 increases to 72 years, and if we remove the terrestrial component completely, then it increases further to 92 years.


In general, it is widely accepted that the atmospheric half-life of CO2 is measured in decades, not years.

One type of analysis that I have attempted is to select the half-life hypothesis that maximizes the Pearson's correlation coefficient between the series from Figure 1. If I do this, I find that the best half-life is about 24.4 years. Nevertheless, I had previously attempted the same exercise with the Mauna Loa series alone (1958-2004), and the best half-life then seems to be about 70 years. It varies depending on the time frame, and there's not necessarily a trend in the half-life. This just goes to show that there's uncertainty in the calculation, and that the half-life model is a simplification of the real world.
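The search itself is a simple sweep over candidate half-lives; a sketch follows, reusing presumed_excess() from the earlier sketch, with the same hypothetical merged file standing in for the data.

import numpy as np
from scipy import stats

# Hypothetical merged file with columns: year, emissions, co2_ppmv, excess
data = np.genfromtxt("co2_merged.csv", delimiter=",", names=True)
emissions, concentration = data["emissions"], data["co2_ppmv"]

candidates = np.arange(5.0, 150.0, 0.5)  # half-lives to try, in years
scores = [stats.pearsonr(presumed_excess(emissions, hl), concentration)[0]
          for hl in candidates]
best = candidates[int(np.argmax(scores))]
print(f"half-life maximizing Pearson's r: {best:.1f} years")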

Another approach we can take is to try to estimate the weight of excess CO2 currently in the atmosphere, and see how this compares to data on emissions. The current excess of atmospheric CO2 is agreed to be roughly 100 ppmv. If by 'atmosphere' we mean 20 km above ground (this is fairly arbitrary), then the volume of the atmosphere is about 1.03×10¹⁰ km³. This would mean that the total volume of excess CO2 is 1.03×10⁶ km³, or 1.03×10¹⁵ m³. The density of CO2 is 1.98 kg/m³, so the total weight of excess CO2 should be about 2.03×10¹⁵ kg, or 2,030,000 million metric tons.
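Spelled out as a quick computation (the 100 ppmv excess, the 20 km cutoff, and the use of the surface density of CO2 throughout are the simplifying assumptions just stated):

import math

earth_radius_km = 6371.0
shell_thickness_km = 20.0  # the arbitrary "top of the atmosphere" from the text

# Volume of a 20 km spherical shell around the Earth, roughly 1.0e10 km^3
atm_volume_km3 = (4.0 / 3.0) * math.pi * (
    (earth_radius_km + shell_thickness_km) ** 3 - earth_radius_km ** 3)

excess_volume_m3 = atm_volume_km3 * 100e-6 * 1e9  # 100 ppmv excess, km^3 -> m^3
excess_mass_kg = excess_volume_m3 * 1.98          # CO2 density ~1.98 kg/m^3
print(f"excess CO2: {excess_mass_kg / 1e9:,.0f} million metric tons")  # ~2,000,000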

Something is not right, though. If we add up all annual CO2 emissions from 1751 to 2004, we get a total of only 334,000 million metric tons. That can't be the whole story. I'd suggest that the CDIAC data does not count all sources of anthropogenic CO2 emissions, and it obviously doesn't account for feedbacks either. Furthermore, the assumptions in the calculation above might not be accurate (specifically, that a 100 ppmv excess is maintained up to an altitude of 20 km). In any case, it's hard to see how these numbers would support the notion that the half-life of CO2 is low.

Sunday, August 3, 2008

Why the 1998-2008 Temperature Trend Doesn't Mean a Whole Lot

Suppose I wanted to determine whether the current temperature trend is consistent with some projected trend. In order to do this, let's say I calculate the temperature slope of the last 200 days, and its confidence interval in the standard manner. Then I check to see if the projected trend is in the confidence interval. But maybe I want a tighter confidence interval. I could use more data points in this case, say, temperatures in the last 1,000 minutes. If we assume temperature series approximate AR(1) with white noise, this should be fine.

That makes no sense at all, does it?

Intuitively, it seems that confidence intervals on temperature slopes (when we want to compare them with a long-term trend) should depend more on the working time range than on the number of data points, or on how well those data points fit a linear regression. We should have more confidence in a 20-year trend than in a 10-year trend, almost regardless of whether we use monthly or annual data. Certainly, the standard slope confidence interval calculation is not going to capture that. We need a different method to compare short-term trends with long-term ones.

I will suggest one such method in this post. First, we need to come up with a long projected trend we can test the method on. We could use a 100-year IPCC trend line, if there is such a thing. For simplicity, I will use a third-order polynomial trend line as my "projected trend." Readers can repeat the exercise with any arbitrary trend line if they so wish. I should note that the third-order polynomial trend line projects a temperature change rate of 2.2C / century from 1998 to 2008.

The following is a graph of GISS global annual mean temperatures, along with the "projected trend." For the year 2008 I'm using 0.44C as the mean temperature. You can use other temperature data sets and monthly data too. I don't think that will make a big difference.

[Figure: GISS global annual mean temperatures with the projected trend]

We have 118 years of 11-year slopes we can analyze. There are different ways to do this. To make it easy to follow, I will detrend the temperature series according to our projected trend. This way we can compare apples with apples as far as slopes go. The detrended series is shown in the following graph.

[Figure: GISS temperatures detrended by the projected trend]

The long-term slope of the detrended temperatures is, of course, zero, and all 11-year slopes in the detrended series will distribute around zero. We know that the 1998-2008 slope is -1.53C / century. The question we want answered is whether the 1998-2008 slope is unusual compared to the 11-year slopes observed historically, which would indicate a likely point of change away from the projected trend.

We can start by visualizing the distribution of 11-year slopes throughout the detrended series. The following is a graph of the number of years in slope ranges of width 0.2C / century. For example, the number of years that have slopes between 0.1 and 0.3 is 10.

[Figure: Distribution of 11-year slopes in the detrended GISS series]

This is roughly a normal distribution of years according to their slopes. In it, approximately 95% of years have slopes in the -2.7 to 2.7 range. That is, 4 years have slopes of -2.7 or lower, and 3 years have slopes of 2.7 or higher. I put forth that the real confidence interval for 11-year temperature slopes relative to long-term 3rd-order polynomial trend lines is approximately ± 2.7 C / century.
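For anyone who wants to reproduce the distribution, here is a sketch of the whole procedure; the GISS loading step and the column names are hypothetical.

import numpy as np

# Hypothetical export of GISS annual means: year, anomaly
data = np.genfromtxt("giss_annual.csv", delimiter=",", names=True)
years, temps = data["year"], data["anomaly"]

# Detrend with the third-order polynomial "projected trend"
detrended = temps - np.polyval(np.polyfit(years, temps, 3), years)

# Every 11-year OLS slope of the detrended series, in degrees C per century
window = 11
slopes = np.array([
    np.polyfit(years[i:i + window], detrended[i:i + window], 1)[0] * 100.0
    for i in range(len(years) - window + 1)
])

lo, hi = np.percentile(slopes, [2.5, 97.5])  # empirical ~95% range
print(f"~95% of 11-year slopes lie between {lo:.1f} and {hi:.1f} C/century")
print(f"1998-2008 slope: {slopes[-1]:.2f} C/century")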

The 11-year slope for 1998 is only -1.53C / century, well within the estimated confidence interval. Therefore, it's a little premature to say that the 1998-2008 trend falsifies 2C / century. Of course, if 2009 is a cold year, that might change this evaluation.

Saturday, August 2, 2008

Wherein I Revise Previous Sensitivity Estimate Down to 3.13C

I found an annual reconstruction of CO2 atmospheric concentrations that goes from 1832 to 1978. It is made available by CDIAC and it comes from Etheridge et al. (1998). There's a more than adequate match between this data and the data collected at Mauna Loa, Hawaii for the range 1958 to 1978.

Naturally, I thought this CO2 data would be more accurate than that estimated from emissions, which I had used in my calculation of climate sensitivity to CO2 doubling. (BTW, that calculation was based on 150 years of data). So I reran the analysis, and the following is the new formula for the rate of temperature change (R) given a CO2 concentration in ppmv (C) and a temperature anomaly in degrees Celsius (T).

R = 0.0857 ( 10.398 log C - 26 - T )

The equilibrium temperature (T') is calculated as follows.

T' = 10.398 log C - 26

This means that climate sensitivity to CO2 doubling (based on this model which only considers this one forcing) is most likely 3.13 degrees Celsius.
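In case the arithmetic isn't obvious, here is a sketch of what the fitted formulas imply. The assumptions made explicit here are that the logarithm is base 10 (which is what makes the doubling sensitivity come out near 3.13C) and that R is a rate per year, given the annual data.

import math

def equilibrium_temp(c_ppmv):
    """Equilibrium temperature anomaly T' for a CO2 concentration in ppmv."""
    return 10.398 * math.log10(c_ppmv) - 26.0

def rate_of_change(c_ppmv, t_anom):
    """Rate of temperature change R (assumed degrees C per year) toward equilibrium."""
    return 0.0857 * (equilibrium_temp(c_ppmv) - t_anom)

# Doubling CO2 shifts the equilibrium by 10.398 * log10(2), regardless of the baseline
print(f"sensitivity to CO2 doubling: {10.398 * math.log10(2.0):.2f} C")  # ~3.13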

I also rebuilt the hindcast graph, which follows.

[Figure: CO2-based temperature hindcast]

I think this is a subjectively better hindcast than the original. Note that it even predicts a nearly flat temperature trend in the 1950s. This is simply what the more accurate CO2 data does. While the sensitivity is lower (I had originally estimated it at 3.46C), the range of CO2 concentrations is wider. Estimates based on emissions produce a concentration of about 295 ppmv in 1850, whereas Etheridge et al. (1998) puts the concentration at 283.5 ppmv at that point.

The model predicts that the rate of temperature change should be about 2.1C / century in 2007.

I also wanted to attempt a 1000-year hindcast. I had previously discussed the 1781-year temperature reconstruction that is the product of Mann & Jones (2003). It just so happens that there's also a 1000-year CO2 reconstruction from Etheridge et al. (1998). Well, this more ambitious hindcast didn't turn out to be as accurate. At first I thought this is just what happens when you fail to consider other important climate forcings. But then I went back and examined other 1000-year temperature reconstructions. I'm sure readers have seen that graph many times. It turns out that there's considerable uncertainty in these types of reconstructions.

Either way, I will post my first attempt at a 1000-year hindcast below. The red line is the reconstruction from Mann & Jones (2003). I also added a green line, which is a reconstruction based on glacier records that comes from Oerlemans (2005).

[Figure: 1000-year temperature hindcast with reconstructions from Mann & Jones (2003) and Oerlemans (2005)]

It could be better. I'm now curious as to what would happen if other major climate forcings were considered.