Friday, August 15, 2008

Graph of NH SSTs and Named Storms Questioned

I have written about the association between the number of named storms in the Atlantic basin and Northern Hemisphere sea surface temperature anomalies several times now (most recently here). I am quite confident there is a causal association, even after considering the possibility of coincidental trends.

The problem is that my posts on the subject have been met with disbelief. You see, the scientific literature is not clear on the matter, and not even top climate scientists seem to agree on whether the association exists. That's why I'm making this spreadsheet available.

In particular, there is one graph that is very difficult to deny. You can sometimes cast doubt on mathematical analyses on technical grounds, but a clear and easily reproducible graph is difficult to argue with. The graph in question plots 17-year central moving averages of Northern Hemisphere sea surface temperature anomalies and of the number of named storms in the Atlantic basin, from the 1850s to the present.

In the new spreadsheet I'm making available, I calculated both 15-year and 21-year central moving averages of both data sets. You will find comments in the column headers with the URLs where the raw data comes from. Having to do this seems over the top, but there really are people who apparently don't believe the original graph is real; worse, they seem to misunderstand the graph completely, as you can see in the comments section of this post at AccuWeather.com.
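If you would rather reproduce the smoothing in code than in a spreadsheet, here is a minimal sketch of the calculation (the file and column names below are hypothetical placeholders; the spreadsheet's column-header comments give the actual data sources):

```python
import pandas as pd

# Hypothetical file/column names; the real spreadsheet's column-header
# comments give the URLs of the raw storm-count and HadSST2 data.
df = pd.read_csv("storms_and_nh_sst.csv")  # columns: year, named_storms, nh_sst_anom

for window in (15, 21):
    # Centered moving average: each point averages `window` years straddling
    # the target year, so (window - 1) / 2 years are lost at each end.
    df[f"storms_cma{window}"] = df["named_storms"].rolling(window, center=True).mean()
    df[f"sst_cma{window}"] = df["nh_sst_anom"].rolling(window, center=True).mean()
```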

The 15-year and 21-year CMA graphs are posted below, in that order.

[15-year CMA graph]

[21-year CMA graph]

Comment Policy

I will state my comment policy here, for future reference. I do not enable comment moderation. The only comments I delete are those that are clearly in violation of Blogger's Content Policy. Scrutiny is more than welcome. If you believe I made a mistake, tell me. If you believe I'm making things up, you absolutely should tell me, but you better be right.

41 comments:

Joseph said...

Note: I've posted a comment with a link to this post over at AccuWeather.com, currently pending moderation.

TomG said...

Thank you for the great graphs.
They remove all question from my mind about the link between SST and named storms.

Anonymous said...

Interesting graph.

Why use NH temps instead of North Atlantic?

Joseph said...

Hi Anon. Good question. I don't know if there's a 150-year temperature reconstruction for the North Atlantic alone. Do you have a link?

I used the HadSST2 data set which is divided into NH, SH and global. The data is here.

Anonymous said...

Hi Joseph,

Here's a site I was looking at last night when I was contemplating your graphs. The data are labeled as SST anomalies, although there may be additional factors I might be missing.

http://www.cdc.noaa.gov/Timeseries/AMO/

The table seems to be based on Hadley data, so maybe you can find even more targeted data (say from 0 deg - 45 deg N) either at the parent site of this link or at Hadley. The actual link I've got is monthly data, which you can import into Excel and calculate annual averages from. It might also be useful to calculate averages for June-Nov, which is prime hurricane forming season in the N. Atl. I'm confident you'll find a link to SST, but the trick is separating that out from the well-known AMO. Also, the number of tropical storms is interesting, but the real key is whether the big boys become bigger.
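Something along these lines would do the averaging (a rough sketch; I'm assuming the monthly values have already been flattened into a simple table, which is not how the linked file is laid out):

```python
import pandas as pd

# Assumes the monthly series has been reshaped to one row per month with
# 'year', 'month' and 'sst' columns; the actual file layout may differ.
monthly = pd.read_csv("amo_monthly_flat.csv")

# Calendar-year average.
annual = monthly.groupby("year")["sst"].mean()

# June-November average, roughly the N. Atlantic hurricane season.
jun_nov = monthly[monthly["month"].between(6, 11)].groupby("year")["sst"].mean()
```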

Here's a paper that addresses that, and a recent paper by Emanuel of MIT has info on it too, but these studies get deep fast, and they don't always come to the same conclusion.

http://www.seas.harvard.edu/climate/pdf/2006/michaels2006.pdf
ftp://texmex.mit.edu/pub/emanuel/PAPERS/Emanuel_etal_2008.pdf

BTW, I don't usually bother to register an ID. I usually go by "John M", but obviously, that's not a terribly distinguished or unique pseudonym.

John M

Anonymous said...

Last two links got truncated.

Hope this works.

Michaels
Emanuel

Joseph said...

Thanks for the links, John. It might take me a while to look into them, though. BTW, you can select the Name/URL radio button in the comment box, and that way you can give yourself a pseudonym other than Anonymous.

I also thought that a Jun-Nov average would be more accurate than an annual average, but then there's a clear lag in the graphs, so apparently something other than an immediate effect is going on there.

Anonymous said...

Thanks Joseph,

But my current passwords drive me crazy as it is.

If you separate NA from PAC SST, that might help shed some light on your lag. As I recall, El Niños (warmer SSTs in the Pacific) tend to hinder NA hurricane formation.

It will be interesting to see what you come up with.

John M

Anonymous said...

Hi Joseph,

Can you reupload the spreadsheet? It seems to have expired.

Joseph said...

That seems to work for me. But I've uploaded it to a different service here.

Anonymous said...

Got it now. Thanks.

Anonymous said...

Joseph,

Something else to consider as you evaluate the SST-hurricane relationship.

John M

paulm said...

Came across this... there may be some connection with your 2-year storm lag somewhere.

NASA’s Goddard Institute for Space Studies (GISS) has released its final report on “2008 Global Temperatures.”
http://data.giss.nasa.gov/gistemp/2008/

"Because of the large thermal inertia of the ocean, the surface temperature response to the 10-12 year solar cycle lags the irradiance variation by 1-2 years. Thus, relative to the mean, i.e, the hypothetical case in which the sun had a constant average irradiance, actual solar irradiance will continue to provide a negative anomaly for the next 2-3 years"

Anonymous said...

If anyone is still interested, this is an interesting addition to the discussion.

http://www.climateaudit.org/?p=5449

John M

VangelV said...

Nice work. There are a few issues that need to be addressed.

First, when you say named storms, are you talking about storms that make landfall or all storms? (If it is the latter, how is the improved storm detection accounted for? After all, there were few flights across the Atlantic before the 1940s and no satellites tracking storm activity in mid-ocean until relatively late in the game.)

Second, why is the analysis cut off in 1998? As far as I know, accurate, up-to-date data is available. And given that the analysis is done with a spreadsheet, it takes no more time to do it with a 2008 end point than with a 1998 one.

Also, it must be very inconvenient for the AGW case to see the Accumulated Cyclone Energy Index at a 30-year low even though CO2 concentrations have gone up, or to see a forty-year period during which CO2 emissions exploded but the number of named storms stayed constant. Does this mean that you are saying that CO2 emissions are not well correlated to hurricane activity?

Joseph said...

@VangelV:

First, when you say named storms, are you talking about storms that make landfall or all storms?

That would be all named storms. I simply took the data right out of this NOAA page.

Second, why is the analysis cut off in 1998?

Because those are at least 15-year central moving averages, which means that the last data point needs to be at least 7 years in the past.

Also, it must be very inconvenient for the AGW case to see the Accumulated Cyclone Energy Index at a 30-year low even though CO2 concentrations have gone up, or to see a forty-year period during which CO2 emissions exploded but the number of named storms stayed constant.

I have looked at the graph you're referring to, but I haven't analyzed it in detail to be able to comment.

Does this mean that you are saying that CO2 emissions are not well correlated to hurricane activity?

Possibly. When you have complex causation A -> B -> C, the association between A and C is not necessarily obvious.

Joseph said...

(sorry, the comment system in Blogger was recently broken such that some paragraph breaks are removed.)

VangelV said...

Tracking total storms when detection methods automatically add to the numbers is a meaningless exercise. That is why you need to do an apples to apples comparison and look at storms that make landfall.

Joseph said...

While you can challenge the accuracy of data that has been collected, it's much more difficult to try to explain why data correlates well with other data (temperature) that has been collected independently. Unless there are confounds, or unless you think the producers of each dataset were conspiring to produce data that matched. Not only were they conspiring to produce the data, they must have secretly known about methods that would be used in the future to smooth and chart that data.

VangelV said...

While you can challenge the accuracy of data that has been collected, it's much more difficult to try to explain why data correlates well with other data (temperature) that has been collected independently. Unless there are confounds, or unless you think the producers of each dataset were conspiring to produce data that matched. Not only were they conspiring to produce the data, they must have secretly known about methods that would be used in the future to smooth and chart that data.

It isn't the accuracy that is the problem. It is the bias that is introduced when new methods can find storms that would not have been observed previously. As I said, prior to the 1950s there were few flights across the Atlantic to report storms that never made landfall. And after the satellites were put into orbit, all kinds of storms that would never have shown up in the statistics before were counted by the improved detection methods. To do a true comparison you would have to look at storms that made landfall. We have good records of those going back hundreds of years, and their analysis could yield important information if we wanted to do a true comparison.

Your data is also damaging to the AGW argument about storm activity because it shows very little correlation between CO2 levels and storm activity. Like I said, when you have three decades of exploding emissions and rising CO2 levels that show no real trend in storm activity it is hard for you to make a case that CO2 is a factor.

So what you have are two things. First, you have an apples to oranges comparison that doesn't show anything meaningful over the total period you are analysing. Second, you have a graph that suggests no connection between CO2 emissions and storm activity. If you intended to make an anti-AGW argument I congratulate you. If not you need to go back and try again.

Joseph said...

I'm aware of that issue, as noted in this post. That seems to be a bias that exists mostly for counts prior to 1900. It could be a general bias that has a gradual effect on the trend, so the slope of the blue line might be a little steeper than it is in reality, but it doesn't change the general correlation, and it doesn't explain features of the smoothed series, such as a noticeable rise in storm count averages after about 1988.

llewelly said...

Like I said, when you have three decades of exploding emissions and rising CO2 levels that show no real trend in storm activity it is hard for you to make a case that CO2 is a factor.

CO2 does not cause instant global warming. It takes many years. Global average surface temperatures lag CO2 levels by about 25-50 years. Beyond that - there are other important human influences on climate, such as methane, land use changes, black carbon aerosols, SO2, and so on.

VangelV said...

CO2 does not cause instant global warming. It takes many years. Global average surface temperatures lag CO2 levels by about 25-50 years. Beyond that - there are other important human influences on climate, such as methane, land use changes, black carbon aerosols, SO2, and so on.

Actually, the GHG model shows that CO2 starts to absorb in the IR spectrum as soon as it enters the atmosphere. While you could argue for a slight lag, the effects are supposed to be very rapid. Given the feedback assumptions used by the IPCC models, you can't have a decade without warming, which is why none of the models that I am aware of were predicting, back in 1998, that the trend would stall for a dozen years. If you recall, at the time this cooling trend began, the 'experts' were calling for an acceleration or continuation of the trend, not a reversal, because those predictions were consistent with the theory. The results have falsified the AGW theory, so it makes sense to move on to another explanation.

Joseph said...

Just as an FYI, my analysis of the data indicates that temperature fluctuations lag CO2 fluctuations by about 10 years. I got this both from detrending observed CO2 and temperature series, and from modeling a hypothetical CO2 sine wave that has a direct effect on equilibrium temperature.
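For what it's worth, here is a bare-bones sketch of that kind of lag estimate (linear detrending plus a lagged correlation; the exact procedure behind my 10-year figure may differ in its details):

```python
import numpy as np

def detrend(x):
    # Remove a linear trend (a simplifying assumption).
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def best_lag(co2, temp, max_lag=30):
    # Lag (in years) at which detrended temperature correlates best with
    # detrended CO2. Assumes co2 and temp are aligned annual series of
    # equal length; not necessarily the exact procedure used in the post.
    c = detrend(np.asarray(co2, dtype=float))
    T = detrend(np.asarray(temp, dtype=float))
    corrs = {k: np.corrcoef(c[:len(c) - k], T[k:])[0, 1] for k in range(max_lag + 1)}
    return max(corrs, key=corrs.get)
```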

VangelV said...

Just as an FYI, my analysis of the data indicates that temperature fluctuations lag CO2 fluctuations by about 10 years. I got this both from detrending observed CO2 and temperature series, and from modeling a hypothetical CO2 sine wave that has a direct effect on equilibrium temperature.

First, that does not fit the GHG theory because there is no reason for CO2 not to start absorbing IR radiation as soon as it enters the atmosphere. Second, it makes no difference even if it were true because there was a substantial increase of CO2 starting in the 1990s that should have kept temperatures rising in the 2000s. As I said, the actual temperature data does not support the feedback assumptions that are being pushed by the alarmists.

Joseph said...

The effect of CO2 could be instantaneous, sure, but it affects equilibrium temperature, not actual temperature.

Imagine that you put hot and cold objects next to each other. There's an equilibrium temperature (see Newton's Law of Cooling) but it takes time for this temperature to be reached.

In any case, this is unrelated to the recent observed temperature trend. There's nothing in it that is inconsistent with predictions. E.g., see this hindcast. Around 1998, temperatures were actually a bit higher than they should have been. It's not surprising they are a bit lower now.

Anonymous said...

Joseph,

Regarding the hindcast, what is the "negative forcing" in the 19th century due to?

John M

Joseph said...

@John M: In the real world, I don't know. But in the hindcast, it's simply an artifact of starting out with a temperature higher than what the equilibrium temperature must've been at the time, given the estimated CO2 concentration.

Anonymous said...

"But in the hindcast, it's simply an artifact of starting out with a temperature higher that what the equilibrium temperature must've been at the time, given the estimated CO2 concentration."

Huh? Given the stated CO2 forcing, how can the 19th century forcing be negative? The CO2 concentration never went down, did it?

John M

Joseph said...

In the hindcast, you have to start out with an initial temperature. I'd have to go back and look at the details, but I probably just grabbed whatever the temperature was in 1850 or so.

If you look in the hindcast post, the equilibrium temperature is calculated as follows:

T' = 10.398 ln(C) - 26

where C is the CO2 concentration in ppmv and the log is a natural log.

So for a concentration of, say, 284 ppmv, what should be the equilibrium temperature anomaly?

If you start out with a higher temperature than that, there will be a negative forcing, even if the CO2 concentration is increasing.
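A minimal sketch of that mechanism, if it helps (the coefficients and the natural log are as quoted above; the relaxation rate k is an arbitrary value picked purely for illustration):

```python
import math

def t_eq(c_ppmv):
    # Equilibrium anomaly from the formula quoted above; coefficients and the
    # natural log are taken from the comment, so check them against the
    # hindcast post before trusting the absolute numbers.
    return 10.398 * math.log(c_ppmv) - 26.0

def hindcast(t0, co2_series, k=0.1):
    # Relax toward equilibrium each year, Newton's-law-of-cooling style.
    # k (fraction of the gap closed per year) is an arbitrary illustrative value.
    temps, t = [], t0
    for c in co2_series:
        t += k * (t_eq(c) - t)  # negative when t is above equilibrium, even with rising CO2
        temps.append(t)
    return temps
```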

Anonymous said...

Joseph,

Thanks for your response. But shouldn't a model have physical meaning? If your model requires a negative forcing from CO2, it has no physical meaning, and any sensitivity calculated from it must be considered suspect.

It's really no better than fitting past stock market data to mathematical equations and neglecting the underlying conditions that have driven the market.

John M

Joseph said...

But you see, CO2 is not the only thing that affects temperature. There are other forcings, random fluctuations, measurement errors, and so forth.

It's not impossible for temperature to be higher than what you expect the equilibrium temperature to be given the concentration of greenhouse gases. When this happens, there will be a negative forcing. Case in point: 1998.

It's not true that as long as the concentration of greenhouse gases increases, the temperature necessarily has to increase. Nope. The rate of temperature change depends on greenhouse gases, but it also depends on the actual temperature.

Anonymous said...

Joseph,

Thank you for your patience.

That's quite a long downward trend (about 20 years) to be introduced into your model by "random" climate events or measurement error. If we agree that CO2 can't be a negative forcing, then your model has no ability to account for the decrease in temperatures observed in the late 19th century, other than an unknown negative forcing baked into the "equilibrium temperature" calculation. That being the case, you have just repeated a process that "skeptics" are always getting dinged for: You assume some unknown forcing is at work.

If your model implicitly requires an unknown or unstated negative forcing to account for the 19th century temperature behavior (and how else can your model generate a negative temperature slope?), who's to say there is not also an unknown positive forcing to account for the late 20th century temperature behavior?

But as long as we're doing mathematical fitting and allow ourselves to consider natural variations, here's a model I've built using simple CO2 forcing (no feedbacks) and an oscillatory natural phenomenon. The oscillation is based on the PDO, which, unlike your unknown forcings, is inspired by empirical scientific studies. Granted, I've idealized it as a sinusoidal phenomenon and it's premature to claim it has predictive capability.

The equation is:

temp. anom. = a + b*sin(c + 2*(pi)*t/p) + CO2 forcing

where

a = fitting constant to provide y offset (simply slides the curve up and down)
b = amplitude of oscillation from PDO(K)
c = fitting constant to provide x offset (simply slides the curve right and left)
t = time (years)
p = PDO period (years for 1 cycle)

CO2 forcing = (ln(ppmCO2/275)) * S/ln2 (275 is the pre-industrial CO2 level)

S = doubling sensitivity in K

Note that the model does not have any net forcings from the PDO.

Temp data is HadCRUT3, CO2 levels were taken from Mauna Loa after 1959, and picked off various internet graphs pre-1959. As a reality check, the pre-1959 data were cross-checked using these data.

http://cdiac.ornl.gov/ftp/trends/co2/siple2.013

Future CO2 levels assume a constant increase of 1.75 ppm/year.

Here's a curve fit I've generated using a = -0.45 K, b = 0.095 K, c = -51, p = 60 years, and S = 1.5 K

http://img268.imageshack.us/img268/9291/pdomodel.jpg

R^2 is 0.81. No need for random unknown forcings.
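For anyone who wants to poke at it, the fit boils down to something like this (parameter values as above; note that how t is referenced, calendar year versus years since some start date, is folded into the c offset):

```python
import numpy as np

def modeled_anomaly(t, co2_ppm, a=-0.45, b=0.095, c=-51.0, p=60.0, s=1.5):
    # Offset + idealized PDO sine + logarithmic CO2 forcing, in K.
    # Parameter values are the fitted ones quoted above.
    pdo = b * np.sin(c + 2.0 * np.pi * t / p)
    co2 = np.log(co2_ppm / 275.0) * s / np.log(2.0)
    return a + pdo + co2
```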

Weaknesses?

I've fit parameters to a sine function rather than try to extract data from the historical record of the PDO (although the period is reasonable for the observed PDO).

I don't account for known (volcanic) and speculative (anthropogenic aerosols, solar variability) forcings.

No other GHGs are included.

Future CO2 levels are speculative, as is the assumption that the PDO is reliably oscillatory.

And finally, my Excel spreadsheet is too damn ugly to post (I put some pretty ugly functions in it as I built it and don't know when I'll have time to go back and clean it up).

Comments welcome.

John M

Anonymous said...

Hmmm...

That imageshack link got truncated. It should end in jpg.

John M

Joseph said...

That's quite a long downward trend (about 20 years) to be introduced into your model by "random" climate events or measurement error.

I don't think so. In the graph you can see that deviations of up to 0.2C from the projected trend occur periodically.

If your model implicitly requires an unknown or unstated negative forcing to account for the 19th century temperature behavior (and how else can your model generate a negative temperature slope?), who's to say there is not also an unknown positive forcing to account for the late 20th century temperature behavior?

It does not implicitly require that, and I'm not exactly sure what you mean by that.

The model simply starts out with a temperature higher than what the model predicts the equilibrium temperature should be. Hence, the model corrects that over time, and this looks like a negative forcing. The only forcing the model considers is CO2. It's a very simple model, trivial to reproduce.

About your sine-based model, I don't doubt that it gives a good fit. For any signal-like series, you can find a good sine or cosine transform.

There are other problems with your model. For example, it assumes that the temperature depends on CO2, logarithmically, with no lag. It doesn't really work that way in reality.

The oscillatory nature of temperature series probably has to do with the fact that some forcings are sinusoidal, like solar irradiance and the dust veil index. That's not too surprising.

If what you want is a fitting function, there are all kinds of things you can do. The point is to confirm the physics with the observed data.

Anonymous said...

That's quite a long downward trend (about 20 years) to be introduced into your model by "random" climate events or measurement error.

I don't think so. In the graph you can see that deviations of up to 0.2C from the projected trend occur periodically.

[reply] My problem isn't with the deviations of the observed data from the projected trend; I have a problem with a model based on CO2 forcing going down for 20 years. If it's because the temperature at the start was "wrong" and it took a while for your calculation to fix it, that doesn't have much physical meaning for me.[end reply]

If your model implicitly requires an unknown or unstated negative forcing to account for the 19th century temperature behavior (and how else can your model generate a negative temperature slope?), who's to say there is not also an unknown positive forcing to account for the late 20th century temperature behavior?

It does not implicitly require that, and I'm not exactly sure what you mean by that.

The model simply starts out with a temperature higher than what the model predicts the equilibrium temperature should be. Hence, the model corrects that over time, and this looks like a negative forcing. The only forcing the model considers is CO2. It's a very simple model, trivial to reproduce.

[reply] What physical process forces your model down? All I see is an iterative process that seems to be highly dependent on the starting point, which forces down the modeled temperature for 20 years.[end reply]

About your sine-based model, I don't doubt that it gives a good fit. For any signal-like series, you can find a good sine or cosine transform.

[reply] That's why they're used so often to model physical and natural processes.[end reply]

There are other problems with your model. For example, it assumes that the temperature depends on CO2, logarithmically, with no lag. It doesn't really work that way in reality.

[reply] Fair enough. I've rerun the model with 5, 10 and 20 year lags, although admittedly in a simple way. I used CO2 levels from 5, 10, and 20 years behind the modeled year to account for the lag. The resulting sensitivities are 1.6, 1.7, and 2.5 K for CO2 doubling, compared to 1.5 with no lag. I believe in an earlier thread you said a 10 year lag was reasonable.[end reply]

The oscillatory nature of temperature series probably has to do with the fact that some forcings are sinusoidal, like solar irradiance and the dust veil index. That's not too surprising.

[reply] And don't forget oceanic oscillations. Sure, that's not surprising. That's my whole point! [end reply]

If what you want is a fitting function, there are all kinds of things you can do. The point is to confirm the physics with the observed data.

[reply] Again, what physics does your 20 year negative modeled slope confirm? My model takes into account forcing from CO2 and the sine function is intended to model the PDO, which is observed data.[end reply]

Joseph said...

What physical process forces your model down?

That's simple enough. Basically, there's more heat than the model predicts, so it radiates away, if you will.

It takes time for heat to radiate away. The model works exactly like Newton's Law of Cooling.

By way of analogy, suppose there's a climate anomaly that causes the temperature to be 1.0C next year. The anomaly goes away in one year. Does that mean that temperature should continue its upward trend above 1.0C as the CO2 concentration keeps increasing? Nope. It should get back down to about 0.6C and then it will resume its upward trend.
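A toy version of that relaxation, with the decay rate and the roughly 0.6C equilibrium picked purely for illustration:

```python
def relax(temp, equilibrium, k=0.4):
    # One year of Newton's-law-of-cooling style adjustment; k is an
    # arbitrary illustrative rate, not a fitted value.
    return temp + k * (equilibrium - temp)

t = 1.0  # one-year spike to 1.0C
for _ in range(3):
    t = relax(t, 0.6)       # decays back toward the ~0.6C equilibrium trend
    print(round(t, 2))      # 0.84, 0.74, 0.69
```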

VangelV said...

Here is a draft EPA report on the issue of AGW.

http://cei.org/cei_files/fm/active/0/DOC062509-004.pdf

The Obama administration, which kept talking about honesty and transparency, seems to have not wanted to release a final version.

http://www.cbsnews.com/blogs/2009/06/26/politics/politicalhotsheet/entry5117890.shtml

The Australian has an interesting take on the issue.

http://www.theaustralian.news.com.au/story/0,25197,25703935-20261,00.html

While I am at it, here is what the raw surface data looks like.

http://cdiac.ornl.gov/epubs/ndp/ushcn/rawurban3.5_pg.gif

To get warming, the data has to be adjusted by adding to the current temperatures.

http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

Without this adjustment there is no warming and as James Hansen wrote in 1999, "The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934."

http://www.giss.nasa.gov/research/briefs/hansen_07/

Frankly, I can't see how one can model a process that is not well understood, especially when the data is not reliable.

Anonymous said...

VangelV

There's nothing wrong with trying to model imperfect data as long as the intent is to gain a better understanding to help forge a hypothesis for testing.

The problem comes when models are treated as if they have an infallible ability to forecast the future and are believed as if they were chiseled onto stone tablets.

John M

Joseph said...

Note: I've posted yet another follow-up.

VangelV said...

There are no studies that can find a statistically significant connection between storm activity and sea surface temperatures in the Atlantic basin. This makes sense because there are other factors that can aid the development of storms that have nothing to do with SSTs or with the historical reporting of storms.

From what I see, some researchers and analysts are trying to make names for themselves by pretending that they know far more about weather and climate than is actually known. As expected, they have failed to come up with any explanation of substance that stands the test of time.