Arctic sea ice : the canary in the coal mine

Figure 1. Arctic sea ice extent ( areas with more than 15% ice coverage ).

Mass media outlets have been paying little attention to Arctic sea ice in recent years, apart from cries of alarm at carefully selected low points in the record. After much excitement and breathless claims of imminent “ice-free summers” in the Arctic, starting around and inspired by the release of the IPCC’s AR4 in 2007, we were told that Arctic sea ice was “the canary in the coal mine”: the harbinger of the catastrophic changes happening to the climate system and caused by human actions.

Fortunately for the purveyors of this point of view, 2007 experienced the lowest summer sea ice extent in the relatively short satellite record. Worse, after a few years of mild recovery, we witnessed the OMG minimum of 2012. Media spin went into over-drive with claims it was “worse than we thought”, and claims from activist-scientists that the Arctic was in a “death spiral”.[1]

Now, more than a decade and a half after AR4, it is instructive to see how the “runaway melting” is progressing: to check in on our canary and see whether it has fallen from its perch and is lying in the sawdust with its stiff little legs sadly pointing towards the heavens.

NSIDC maintains a very instructive and useful interactive graph [5], allowing display of any selected years from the satellite record on a day-by-day basis. They also publish the ice extent data for each day of the 45-year record in text format, as well as the date and magnitude of minimum ice extent each year.

Since the September minimum is the most volatile, this became a favourite metric and a regular media climate highlight each September. In 2007, Al Gore famously claimed that (unnamed) scientists had told him there might be no Arctic ice at all in summer as early as 2013.

Climatologists frequently explain the idea of the “albedo feedback”, whereby less ice leads to more solar energy entering the sea, causing warmer waters, more ice melting, more solar energy … and a “tipping point” being reached where irreversible, runaway melting would occur. This explanation, while plausible, is naively simple and does not even examine what other effects more open water may have, nor what other feedbacks, positive or negative, may come into play:

  • More conductive heat loss since the ice was a good insulating barrier.
  • More evaporative heat loss due to more open water exposed to persistently strong Arctic winds.
  • More radiative heat loss, since water has a high emissivity in the infra-red and will be radiating more 24/7 throughout the summer and continuing into the winter when the Arctic is in permanent darkness and there is zero incident sunlight.

Even in the summer months, the little sunlight there is arrives at very low incidence angles and a high proportion is reflected rather than absorbed. This weakens the supposed albedo feedback. It seems this has not been measured or quantified in situ. It remains speculative yet is somehow expected, or assumed, to be a dominant factor in the changing polar climate.

So what does the 45 years of daily satellite data tell us?

We can see that in 2007, and even up until 2012, sea ice extent was indeed declining significantly and at an accelerating rate. A quadratic function, corresponding to a constantly increasing rate of melting, did provide a reasonably good fit to the data from around 1995. This does not prove that AGW was the cause of that change, but it did at least seem a reasonable hypothesis which merited proper investigation. Instead it was taken as a self-evident truth which did not require any proof.

Had that indeed been the case, there would have been no summer ice by around 2023/24. However, as the subsequent record now tells us, this simplistic interpretation no longer fits the observed data and is therefore formally rebutted. Not to recognise this would be “science denial” or to display a “flat-earther” mentality. It may even constitute “climate change denial”!

With 16 years more data under our belts, we see a very different outcome. The 2023 sea ice minimum on 18/19 September was indistinguishable from that of 2007, when all the hysterical screaming began. ZERO net change in 17 years. The linear trend since 2007 is indistinguishable from zero ( around -0.17% per year ). Sadly, virtually no one seems to be aware of this GOOD NEWS because there is a stony silence from the media, who steadfastly avoid mentioning it, and from climatologists, who prefer to divert the discussion elsewhere : ice maximum, Antarctic sea ice, calving glaciers … anything but canaries!

At best we are told the lowest 17y on record are the last 17y, without also being told that this period shows no net change.[2] Or we are told sea ice IS shrinking, implying it is still happening. That is grammatically a falsehood and at best wilful misdirection, e.g. NASA Vital Signs: “Key Takeaway: Summer Arctic sea ice extent is shrinking by 12.3% per decade due to warmer temperatures.”[3] Climate science seems to have moved from “Hide the decline” to “Hide the lack of decline”!
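For anyone wishing to verify this, the post-2007 trend is easily reproduced from the NSIDC minimum-extent data. A minimal awk sketch, assuming a two-column input of year and minimum extent in million km² ( the file name and layout are assumptions here, not the NSIDC format ):

#!/bin/awk -f
# least-squares linear trend of annual minima from 2007 onwards ( sketch )
# usage: ./trend.awk minima.txt   ( assumed columns: year min_extent )
($0 !~ /^#/) && ($1+0 >= 2007) { n++; sx+=$1; sy+=$2; sxx+=$1*$1; sxy+=$1*$2 }
END {
 slope=( n*sxy - sx*sy ) / ( n*sxx - sx*sx )
 printf "trend = %.4f Mkm2/yr ( %.2f%% of mean extent per year )\n", slope, 100*slope/(sy/n)
}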

Regime change
Sumata et al 2023 [4] determine that there has been a regime change in the Arctic since 2007, witnessed by the thickness and character of the ice flow through the Fram Strait.

“Here we show that the Arctic sea ice regime shifted in 2007 from thicker and deformed to thinner and more uniform ice cover. Continuous sea ice monitoring in the Fram Strait over the last three decades revealed the shift.”

https://climategrog.wordpress.com/wp-content/uploads/2023/10/fram-straight-thickness-1.png
Figure 2.

Analysis of year-to-year variation in the date of the summer sea ice minimum also shows a distinct change around 2007: a trend to later dates of minimum ice from 1987-2007 gives way to a trend to earlier minima from 2007-2017. In recent years the date has jumped back to later values close to the 2007 timing. There is a strong quasi-biennial ( circa 2y ) component throughout the record. There may also be indications of the repetition of a 30y cycle here, but the dataset is too short for a clear determination of such a pattern.

Figure 3.

Derivation of this result is shown here:
https://climategrog.wordpress.com/arctic-min-dates/
With a more detailed discussion here:
https://climategrog.wordpress.com/category/periodic-analysis/

Conclusion
The detailed daily satellite data of sea ice extent provide the basis for extended study to understand the variation and the forces driving change. Sadly, much of the discussion seems based on drawing a straight line through the entire dataset and reducing it to a single scalar value: the “trend”, which is instantly, and spuriously, attributed to the monotonic rise in atmospheric CO2. This is lazy and convenient but not scientific. The rich granularity of 45y of daily data shows the variation is anything but monotonic and that other factors and feedbacks are at play.

More serious analysis is necessary to determine the extent to which long term temperature rise is contributing to change, what feedbacks ( both positive and negative ) are at play and what this tells us about long term change. Trivial “trend” fitting is clearly grossly inadequate to understand the cryosphere and to inform energy policy consequences and adaptation measures.

More honest reporting is required from media outlets, climate scientists and government bodies about the true nature of change, good news as well as bad, instead of highly selective reporting or misreporting to build an alarmist narrative.

[1] Climate Central : Arctic Death Spiral: More Bad News about Sea Ice
https://www.climatecentral.org/news/arctic-death-spiral-more-bad-news-about-sea-ice

“The Arctic is in a death spiral. How much longer will it exist?” https://www.theguardian.com/us-news/ng-interactive/2020/oct/13/arctic-ice-melting-climate-change-global-warming

“Unfortunately it has become too late to save Arctic summer sea ice,” said Prof Dirk Notz of the University of Hamburg. https://www.theguardian.com/environment/2023/jun/06/too-late-now-to-save-arctic-summer-ice-climate-scientists-find

[2] NSIDC: “The last 17 years, from 2007 to 2023, are the lowest 17 sea ice extents in the satellite record.” https://nsidc.org/arcticseaicenews/2023/09/arctic-sea-ice-minimum-at-sixth/

[3] NASA Vital Signs: “Key Takeaway: Summer Arctic sea ice extent is shrinking by 12.3% per decade due to warmer temperatures.”
https://climate.nasa.gov/vital-signs/arctic-sea-ice/

[4] Sumata, H., de Steur, L., Divine, D.V. et al. Regime shift in Arctic Ocean sea ice thickness. Nature 615, 443–449 (2023). https://doi.org/10.1038/s41586-022-05686-x

[5] NSIDC Charctic interactive sea ice graph:
https://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/

An analytic function for AOD


The volcanic radiative “forcing” produced by major stratospheric eruptions is usually estimated from observations of atmospheric optical depth ( AOD ). This is obtained by a variety of methods, including ground- and satellite-based measurements. Several groups have worked on producing global averages or geographically gridded data. One such dataset is provided by NASA and covers the Mt Pinatubo eruption period.

Data are given for a series of layers of the stratosphere ( above 15 km ) and do not cover the relatively minor contributions from the troposphere.
http://data.giss.nasa.gov/modelforce/strataer
http://data.giss.nasa.gov/modelforce/strataer/tau.map_2012.12.txt

It is sometimes convenient to have a mathematical description of the time evolution of such a “forcing” rather than a time series of discrete data points.

It was found that the convolution of one exponential function with another exponential fits the Mt Pinatubo disturbance very closely.

The above graph selects the tropical bands from the data and sums the optical densities over the different height layers. Since AOD is a logarithmic measurement based on the transmission of light, summing different layer values is appropriate to get the total AOD for the column.

[ Tropical bands used: 19.6 11.7 3.9 -3.9 -11.7 -19.6 ]
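For illustration, the column summation is a one-liner in awk. A minimal sketch, assuming each data row carries time and latitude in the first two columns and the per-layer optical depths in the remaining columns ( the actual layout of the tau.map file is an assumption here ):

#!/bin/awk -f
# total column AOD: sum the per-layer optical depths ( sketch )
# assumed columns: time lat tau_layer1 tau_layer2 ...
($0 !~ /^#/) && (NF > 2) {
 tau=0
 for (i=3; i<=NF; i++) tau += $i   # AOD is additive through the column
 print $1, $2, tau
}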

The two time constants found to give a close fit were 3 and 9 months. Non-linear least squares fitting produced 2.95 and 9.1 months. It was found that adding an offset of 0.3 months from the time of eruption gave a better fit. This can probably be explained as being due to the initial geographic dispersal of the ejecta after the eruption.

This curve provides a convenient analytical description of the time evolution of AOD, but it would be good to have confirmation that it actually represents the physical processes and is not just a fortuitous resemblance of form.

The third line on the graph is a further convolution which represents a linear relaxation response of the climate system to the radiative perturbation. The derivation of the 8 month climate response is discussed here: https://climategrog.wordpress.com/2015/01/17/on-determination-of-tropical-feedbacks/

The major atmospheric effect of volcanoes, once the initial dust and ash has settled, is the creation of aerosols. Sulphur dioxide combines with water and water vapour to produce a fine aerosol of dilute sulphuric acid. These droplets persist in the stratosphere for several years.

In chemical kinetics these are referred to as first-order ( linear rate ) processes, where the rate of the process is determined by the concentration of the reactant [1].

For a fixed initial amount, the concentration of the reactant decays exponentially with time. In engineering terms this can be called the impulse response of the system.

One way to calculate the system’s reaction to a varying input in such a situation is by convolution of the continuous input with the exponential impulse response.

Approximating the explosive injection of SO2 into the atmosphere as an instantaneous impulse producing a step change in the amount of SO2, applying one convolution to represent its conversion by reaction with water vapour into the dilute acid aerosol, then feeding this into a second convolution to represent the removal process, produces a simple model of the evolution of aerosol concentration over time.
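A minimal numerical sketch of that chain ( unit impulse of SO2, conversion, then removal, each stage treated as a first-order linear process; the 3 and 9 month time constants anticipate the fit described below ):

#!/bin/awk -f
# impulse -> conversion ( tau1 ) -> removal ( tau2 ): two-stage linear model ( sketch )
BEGIN {
 tau1=3.0; tau2=9.0; dt=0.1          # months
 a1=exp(-dt/tau1); a2=exp(-dt/tau2)  # per-step decay factors
 so2=1.0; aero=0.0                   # unit SO2 impulse at t=0
 for (t=0.0; t<=60.0; t+=dt) {
  print t, aero
  conv=so2*(1-a1)                    # SO2 converted to aerosol this step
  so2*=a1                            # remaining SO2
  aero=aero*a2+conv                  # aerosol: removal plus new production
 }
}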

The analytical function resulting from these convolutions, where the reaction rates are different, is given by [2]:

AOD(t) = λ1λ2 / ( λ1 − λ2 ) × ( exp(−λ2t) − exp(−λ1t) ) ; Eqn. 1

where λ1 and λ2 are the reciprocals of the time constants of the two reactions.
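A minimal sketch evaluating Eqn. 1 with the fitted constants quoted above ( 2.95 and 9.1 months, 0.3 month offset ), up to an overall amplitude scaling:

#!/bin/awk -f
# evaluate the double-exponential AOD model of Eqn. 1 ( sketch )
BEGIN {
 tau1=2.95; tau2=9.1; t0=0.3   # months, fitted values from the text
 l1=1/tau1; l2=1/tau2
 for (m=0; m<=60; m++) {
  t=m-t0                       # offset for initial dispersal of ejecta
  aod=( t<=0 ) ? 0 : l1*l2/(l1-l2)*( exp(-l2*t) - exp(-l1*t) )
  print m, aod
 }
}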

A special case where the two reactions have the same rate gives [3]:

AOD(t) = λ²t × exp(−λt) ; Eqn. 2

Douglass & Knox 2005 [4] used a function of the form of eqn. 2 to model AOD, though no explanation was given for the choice. It seems to have been chosen as a convenient function which provided a reasonably close fit to the data, rather than having a specific theoretical origin. Here is figure 2 from that paper with the authors’ original caption:

The single time constant of this model was fitted as 7.6 months, between the two values found above for the dual exponential model. It can be noted that this form does not rise quickly enough, peaks a little too late and decays somewhat too quickly in the tail.

Though the fit is good and the mean squared residuals are small, two of the key features that matter when applying this to studying the climate reaction are notably affected: the overall scaling is less, and the timing of the peak is later.

Also, if the fitted AOD model decays too quickly, the implied climate response will be lengthened. This was another parameter that the paper sought to estimate.

None of these differences is large and they do not undermine the results of the paper; however, the double-decay model may provide a more accurate result and has the appeal of being grounded in a physical explanation of the chemical processes.

Investigation of the magnitude of the differences may justify using the single exponential model as a more parsimonious substitute for the physically grounded description in some situations. The impact on any derived parameters, as discussed above, and the knock-on effects on subsequent regression analyses and attribution studies should also be determined. A physically meaningful model would seem preferable when trying to understand the behaviour of the system, rather than just describing it in the most parsimonious way.

References:

[1] U.C. Davis
http://chemwiki.ucdavis.edu/Physical_Chemistry/Kinetics/Reaction_Rates/First-Order_Reactions

[2] Oguntunde P.E., Odetunmibi O.A. and Adejumo A.O., “On the sum of exponentially distributed random variables: a convolution approach”, European Journal of Statistics and Probability, Vol.1, No.2, pp.1-8, December 2013.
http://www.eajournals.org/wp-content/uploads/On-the-Sum-of-Exponentially-Distributed-Random-Variables-A-Convolution-Approach1.pdf

[3] Dartmouth University course notes: section 7.2
http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf

[4] Douglass and Knox 2005: “Climate forcing by the volcanic eruption of Mount Pinatubo”
http://www.pas.rochester.edu/~douglass/papers/2004GL022119_Pinatubo.pdf

ERBE SW vs TLS anomaly

Tropical top of atmosphere ( TOA ) radiation measurements are compared to temperature of the lower stratosphere ( TLS ) from satellite based passive microwave sounding data.

TLS is filtered with a low pass filter to remove the annual variation.

Top of atmosphere reflected short-wave ( SW ) radiation has had the annual variability removed by subtraction of the annual cycle. Since major eruptions have a notable effect on the annual variations, an adaptive method is applied that is detailed here:

On Determination of Tropical Feedbacks

There is an apparently permanent drop in net reflected SW at TOA of approximately 1.8 W/m2. This is synchronous with the change in TLS.

Linear regression of unfiltered tropical TLS, from the annual peak at 1994.63 to the annual peak at 2013.63 :

m_lin = 0.00130434 +/- 0.005792 (444%)
c = 205.534 +/- 0.06168 (0.03001%)

The slope is statistically indistinguishable from zero: there is no detectable recovery towards pre-eruption values.

There is very little lag between the change in reflected SW and the temperature change, indicating a rapid equilibration. Due to the rarefied nature of the stratosphere this is unsurprising.

Data source: University of Alabama, Huntsville: temperature of the lower stratosphere from satellite microwave sounding unit (MSU) data.

http://vortex.nsstc.uah.edu/data/msu/t4/tlsday_5.6.txt
http://vortex.nsstc.uah.edu/data/msu/t4/tlsdayac8110_5.6.txt

Earth Radiation Budget Experiment (ERBE) Top of atmosphere radiation measurements:
https://eosweb.larc.nasa.gov/sites/default/files/project/erbe/edition3_rev1/Edition3_Rev1_wfov_sf_monthly_tropics.txt

Derivative-of-Gaussian filter

The following awk script will apply a gaussian filter to the rate of change ( derivative ) of the data. This is simpler than doing two separate operations and provides a more accurate numerical estimation of the derivative than using the two-point “first difference”.

Since we know the analytical form of the derivative of the gaussian function, the derivative can be applied before making a quantised approximation of the kernel for numerical processing of the quantised data.

Analytically, since both the convolution and the derivative are linear operations they can be done in either order and produce identical results. This allows taking the derivative of the gaussian then doing a single convolution without loss of generality.

There is still the inaccuracy of approximating the infinite gaussian function by a finite, quantised kernel but there is no loss of accuracy from also approximating the derivative. Thus the result is mathematically more accurate than using the first difference and then a gaussian filter.
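For reference, with the gaussian written g(t) = exp( −t²/2σ² ) / ( σ√(2π) ), the kernel the script below samples is its analytic derivative, dg/dt = −( t/σ² ) × g(t) ( applied time-reversed in the convolution, as the script notes ).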

The usual 3-sigma window for a gaussian filter is extended ( to 4-sigma in the script below ) to maintain similar accuracy in the D-o-G filter.

Since the derivative has high-pass properties ( attenuation inversely proportional to frequency ) and the gaussian is a low-pass filter, the combined filter is a band-pass filter. One common use is for edge detection, for example in medical imaging. Equally, if the derivative of the data is being studied and some low-pass filtering is required, it provides a one-step solution that does not rely on a digitised approximation of the derivative.

An example of edge detection is shown in determining long-term variations in the date of Arctic sea-ice minima:

The Death Spiral is Dead

 

Select and copy the text in the following block to get a copy of the script.

 

#!/bin/awk -f
# derivative-of-gaussian filter ( gaussian window extended to 4 sigma, see below )
# sigma, if not given, is 2 data points wide
# usage : ./dgauss.awk filename <sigma=2> <scale_factor=1>
# optional scale_factor simply scales the output
# sigma can be compared to the period of the -3dB point of the filter
# result is centred, ie not phase shifted. resulting dataset shorter by 3*sigma ( half window ) each end
# nov 2012 OFMT="%10.8f"


### ensure data is continuous and equally spaced ### !!



# FWHM (-3dB) = 2*sigma*sqrt(2*ln2)
# half height freq = sigma * 1.1774
# 3 sigma gives 99.7% accuracy for gauss, extend to 4-sigma for deriv-gauss

function abs(x){return (((x < 0.0) ? -x : x) + 0.0)}

BEGIN { OFMT="%10.8f"
# ARGV[1]=filename; argv[0] is script name, hence ARGC>=1

 datacol=2 # fixed

 if (ARGV[1]=="") { print " # usage : ./dgauss.awk filename <sigma=2> <scale_factor=1>" ; exit 1}

 pi= 3.14159265358979323846

 if ( ARGC >3 ) {scaleby=ARGV[3];ARGV[3]=""} else {scaleby=1};
 if ( ARGC >2 ) {sigma=ARGV[2];ARGV[2]=""} else {sigma=2};

 print "# filtering "ARGV[1]" with gaussian-derivative of sigma= ",sigma
 root2pi_sigma=sqrt(2*pi)*sigma;
 two_sig_sqr=2.0*sigma*sigma;

# extend the usual 3-sigma window to 4-sigma for the derivative-of-gaussian to maintain accuracy.
 gw=(4*sigma)-1;

# calculate normalised gaussian coeffs
# for (tot_wt=j=0;j<=gw;j++) {tot_wt+=gwt[-j]=gwt[j]=exp(-j*j/two_sig_sqr)/ root2pi_sigma};
# tot_wt=0

### NB d/dt(gauss) is asymmetric, so must be time reversed to correctly do convolution.
 for (tot_wt=j=0;j<=gw;j++) {tot_wt+=gwt[j]= j/sigma/sigma*exp(-j*j/two_sig_sqr) / root2pi_sigma };
 tot_wt=2*tot_wt-gwt[0]; # to scale each lobe to |0.5| ; gwt[0]=0
 tot_wt/=scaleby;
 for (j=0;j<=gw;j++) {
# gwt[j]/=tot_wt; # gaussian is already normalised, don't rescale the derivative !
 gwt[-j]=-gwt[j]
 };

# strip off last .xxx part of file name
# FIXME : improve this (doesn't work with paths like ../filename)

 if (ARGV[1]=="-") {
 out_file="-";
 }
 else {
 split(ARGV[1],fn,".");
 basename=fn[1]
 out_file=basename"-gauss-deriv"sigma".dat";
 print "# ",out_file >out_file;
 }

 ln=-1;
}

( ($0 !~ /^#/) && ($0 !~ /^[" ",\t]*$/) ) {
 xdata[++ln]=$1;
 ydata[ln]=$datacol;

 if (ln>2*gw)
 {

# cf diff.awk
# print (xdata[ln]+xdata[ln-step])/2. ,ydiff/xdiff
# print (xdata[ln]+xdata[ln-step])/2. ,ydiff/xdiff >> out_file;

 xdiff=(xdata[ln]-xdata[ln-1]);
# ydiff=(ydata[ln]-ydata[ln-step]);

 dgauss=0
 for (j=-2*gw;j<=0;j++) {dgauss+=ydata[ln+j]*gwt[j+gw]}
 if (out_file=="-")
   print xdata[ln-gw],dgauss/xdiff , ydata[ln-gw] 
 else {
   print NR,xdata[ln-gw],dgauss/xdiff ,ydata[ln-gw]
   print xdata[ln-gw],dgauss/xdiff , ydata[ln-gw] >> out_file;
  }
 }
else
 {
# print $1,$2;

 }
}


END {
 if (ARGV[1]=="") { exit 1}
 print "# gausssian-derivative window width = "gw+gw+1",done"
 print "# output file = "out_file
 # sum=0; for (j=-gw;j<=gw;j++) {sum+= abs(gwt[j]);}
 # print "# abs sum of coeffs = ",sum;

}
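For example, to produce a 14-day filtered derivative like that used in the sea-ice minima analysis ( the input file name here is hypothetical ):

./dgauss.awk nh_extent.dat 14

This writes the result to nh_extent-gauss-deriv14.dat, with the date in column 1, the smoothed rate of change in column 2 and the original data in column 3.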

The Death Spiral is Dead

Introduction

This year, as every year, there has been much excitement in the media about ‘catastrophic’ melting of Arctic sea-ice, run-away melting, tipping points, death spirals and “ice-free” summers.

There has been the usual guessing game about when exactly the minimum will / has occurred and what the ice area or extent will be on that day.

Claims of ‘ice-free’ conditions at some time in the summer have been bandied about for years in various forms, but as the reality sinks in that it’s not as bad as some had claimed, the dates when this is expected to happen have often been pushed out beyond the life expectancy of those making the claims.

The meaning of “ice-free” has also been the subject of some serious misrepresentation.

IPCC AR5 states:

Year-round reductions in Arctic sea ice are projected for all RCP scenarios. A nearly ice-free¹¹ Arctic Ocean in the summer sea-ice minimum in September before mid-century is likely for RCP8.5¹² (medium confidence). {2.2.3, Figure 2.1}

RCP8.5 is the most extreme emissions ‘pathway’ and is already likely to be avoided.
Footnote 11 defines the term “nearly ice-free”:

¹¹ When sea-ice extent is less than one million km2 for at least five consecutive years.

IPCC consistently uses the term “nearly ice-free”, yet this often gets perverted to “ice-free” in the media and in scientific loose talk, and the significant rider “for at least five consecutive years” gets dropped.

The problem with this obsessive focusing on one single data point out of 365 is that there is a lot of short term, weather driven variability that can affect the exact timing and size of the minimum in ice coverage. Since the main interest ( outside maritime navigational requirements ) is the hope of finding some indication of long term changes in climate, this is not a very instructive way to use the detailed daily data available.

There have been three notably low summer minima in recent years: 2007, 2012 and 2016. The 2012 event was the lowest in the satellite record going back to 1979. The other two years tie for second place, meaning the current minimum is indistinguishable from the one which occurred nine years previously, with the lowest lying between the two. This is incompatible with claims of runaway melting. That was a reasonable hypothesis, and cause for concern, in 2007 when the relatively short record could be characterised as an increasing rate of change, but it is not compatible with what has happened since.

In the golden days of “run-away” melting, leading up to the ‘catastrophic’ 2007 Arctic sea-ice minimum, this was often presented as the ‘canary in the coal-mine’ for global warming and climate change. It is still referred to as such.

The aim here is to try to separate the changes in the annual cycle from the weather-driven and other short-term variability and to see if we can learn something about the life expectancy of our canary.

Method and Results


In order to determine an objective timing for the sea-ice minimum, it is necessary to filter out short-term variability. Visual inspection of the northern hemisphere sea-ice data [1] shows that there are periods of about a week in length having alternating greater and lesser rates of change throughout most of the year. Spectral analysis also reveals notable cyclic components at around 4, 10 and 16 days. This means that the date of the minimum tends to fluctuate rather erratically over the first three weeks of September.

The criterion adopted here for having sufficiently removed the short term variability, in order to get a stable estimation of the turning point ( minimum sea-ice extent ) of the annual cycle, was that there should be only one change of direction. Since the annual cycle only has one minimum, anything producing multiple localised minima is noise in this context.

To detect this condition, the rate of change of the NSIDC NH sea-ice extent was taken and a 3-sigma gaussian low-pass filter applied. The length of the filter was increased progressively until all years in the data had a unique turning point ( i.e. a single change of sign in the time derivative ).

This was achieved in practice by a single operation: convolution of the daily data with a diff-of-gaussian kernel [3] of the chosen length. The data point at the change of sign was taken as the date of the minimum in the annual cycle for each year.
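The “unique turning point” criterion can be checked mechanically on the filter output. A minimal sketch, counting sign changes in the filtered derivative ( column 2 of the dgauss.awk output described above; run on one year of data at a time; file name hypothetical ):

#!/bin/awk -f
# count turning points: sign changes in the filtered derivative ( sketch )
# usage: ./turns.awk nh_extent-gauss-deriv14.dat
($0 !~ /^#/) && (NF >= 2) && ($2 != 0) {
 s = ($2 > 0) ? 1 : -1
 if (prev != 0 && s != prev) n++
 prev = s
}
END { print n+0, "turning point(s)" }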

As is typically the case for satellite records, the data processed by NSIDC come from a number of different satellites as instruments and spacecraft fail. They are cross-calibrated as well as possible to provide one continuous time series. One notable change in the data is that the earlier orbits only provided full surface coverage every six days [2] and were originally processed to give a data point every three days. These have apparently been interpolated to give a homogeneous time series. Obviously, data with only a three-day time resolution will be less accurate at determining the exact date of the turn-around. To mark the different nature of the early data, this section is plotted in a different colour.

Comparison to the minima of the 5-day trailing average presented by NSIDC shows the expected reduction in the variability that was the aim of this study. This manifests primarily as a reduced occurrence of later minimum dates. The pre-1988 data change little. This is probably due to the blurring effect of the extended 6-day orbital coverage being similar to the gaussian smoothing applied in this analysis.

Since the rate of change at the beginning of refreezing tends to be faster than at the end of melting, removing higher frequencies with a filter introduces a general drift to an earlier date of minimum ice extent. This simply reflects the asymmetry of the annual cycle. This underlines the caution needed in merging the two sections of the dataset : the earlier Nimbus and the later military satellites had different orbital resolution. The data are not fully compatible, and comparing dates of ice minimum without suitable filtering will lead to an exaggerated impression of later melting in the post-Nimbus years.


Discussion

There is a clearly non-linear nature to the variation in the timing of the annual minimum in Arctic sea-ice extent from NSIDC. It reached its latest date for the turning point in 2007 and has been getting progressively earlier ever since.
That is to say, the freezing season has been beginning steadily earlier for about a decade, having drifted in the opposite sense for the previous two decades.

There is much talk, with naive assumptions, of the effects of reduced ice area on the energy budget of the region. The simplistic argument is that less solar energy will be reflected by water compared to an ice or snow surface, leading to further warming and thus ever more melting. However, there are several other effects which need to be measured and taken into account to determine whether this extra energy input will cause an overall positive feedback ( and accelerated melting ) or whether it will be countered by other effects. Open water emits more infra-red energy to space, and evaporation of surface water in the windy Arctic region will remove substantial amounts of heat from the surface. Both of these effects continue 12 months of the year, not just during the short summer. It is far from obvious how these factors and others will balance out. Climate models are notoriously poor at reproducing both Arctic temperatures and sea ice coverage ( they tend to under-estimate both the temperature and the ice loss ). Clearly the ‘basic physics’ for this region is poorly understood.

The observational data can help shed some light on how the region is responding to the reduced ice cover since the 2007 minimum.

The decadal rate of melting has also reduced since 2007, as measured by estimations of ice area produced by the University of Illinois [4].

The derivation of that graph is detailed here:

On identifying inter-decadal variation in NH sea ice

Conclusion

There are two ways to interpret the observational data:

1) the net feedback from open water is negative, not positive, and run-away melting was an erroneous interpretation. It is not happening.

2) the feedbacks are not the key driver of Arctic sea ice melting; another external force, such as N. Atlantic sea surface temperature and ocean currents, is dominant, and run-away melting was an erroneous interpretation. It is not happening.

The death spiral is dead.


Resources

[1] The NH sea-ice extent data are provided by NSIDC as daily anomalies from an average cycle, plus the annual cycle which has been subtracted. The “near real time” data for the current, incomplete year are provided as a separate file in the same format.

Documentation for the NSIDC data : http://nsidc.org/data/docs/noaa/g02135_seaice_index


[2] DMSP satellite F8 was the first of a series of spacecraft and flew from 7 September 1987; it was operated by the US Department of Defense. Data from this family are combined with those from the earlier NASA-provided Nimbus-7 spacecraft, which flew a similar but somewhat different polar orbit.

https://nsidc.org/data/docs/daac/nimbus-7_platform.gd.html

“The Nimbus-7 observatory provides global coverage every six days, or every 83 orbits.”

This section of the combined record is less reliable for estimating the turning point due to the lack of time resolution. The data have to be in-filled to produce a daily data series, and contributions to one ice extent figure are actually spread over 6 days of flight time.

The DMSP 5D family of spacecraft provide daily resolution data:
http://nsidc.org/data/nsidc-0001


[3] An implementation of the diff-of-gaussian filter is presented here:

Derivative-of-Gaussian filter


[4] The sea-ice area data used in the decadal trend analysis are provided by the Cryosphere Today team at U. Illinois.

http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008

Processed data of turning points after 14d gaussian filter.

1979.69756852    0.6976
1980.68047869    0.6805
1981.69076810    0.6908
1982.69010575    0.6901
1983.68670554    0.6867
1984.69425698    0.6943
1985.68811892    0.6881
1986.68198074    0.6820
1987.67584268    0.6758
1988.68339412    0.6834
1989.69642138    0.6964
1990.68754535    0.6875
1991.69783470    0.6978
1992.68074494    0.6807
1993.68829638    0.6883
1994.68215826    0.6822
1995.69792343    0.6979
1996.68357158    0.6836
1997.69386093    0.6939
1998.69593655    0.6959
1999.68979843    0.6898
2000.69187404    0.6919
2001.69394966    0.6939
2002.68781154    0.6878
2003.69810089    0.6981
2004.69196277    0.6920
2005.70225212    0.7023
2006.69611400    0.6961
2007.70366544    0.7037
2008.69752732    0.6975
2009.69686503    0.6969
2010.69620273    0.6962
2011.69280252    0.6928
2012.69761605    0.6976
2013.69147793    0.6915
2014.69629146    0.6963
2015.69015334    0.6902

Are land + sea averages meaningful ?

Introduction

Several of the major datasets that claim to represent “global average surface temperature” are directly or effectively averaging land air temperatures with sea surface temperatures. These are typically derived by weighting a global land average and a global SST average according to the 30:70 land:sea geographical surface area ratio. However, there is very little consideration of whether such a result has any physical meaning and of what, if anything, it means.

One thing it does not represent is a metric of global surface heat content. However, this is ( often implicitly ) one of the most common uses for such data.

Temperatures don’t add !

In technical terms, temperature is not an extensive quantity. That is illustrated by the fact that if you have one bucket of water at 30 degrees Celsius and you add another bucket of water at 30 degrees Celsius, you do not end up with water at 60 deg. C.

Energy is an extensive property: if you have a volume of water with a thermal energy of 4000 megajoules and you add a second similar volume you will have twice the thermal energy. The average energy per unit area can be compared to the radiative energy budget per unit area.

The ratio of temperature to thermal energy is not the same for all materials, it varies greatly depending on the physical properties of the substance. It also depends on the amount of a substance present, ie the mass. In physics and materials science, it is often most convenient to study “specific heat capacity”, that is the change in energy content per unit mass, per degree change in temperature. It is thus a property of each type of material, independent of any particular object.

In S.I. ( Système International ) units this is measured in joules per kilogram per kelvin ( J/kg/K ). The kelvin is the same size as one degree C and is interchangeable in this context. Some examples for common materials ( in kJ/kg/K ):

Material S.H.C.
Fresh Water 4.19
Sea Water ( 2 deg. C ) 3.93
Mercury 0.14
Dry Air 1.01
Stone 0.84
Dry Earth 1.26
Clay 0.92
Tar 1.47
Concrete 0.75

Table 1. Specific heat capacity of various materials in kJ/kg/K

So one could consider temperature change as a “proxy” for change in thermal energy for equivalent VOLUMES of the SAME material. In this context one could calculate an ‘average change in temperature’ for that medium and use it to infer a change in thermal energy, which can be related to incoming and outgoing radiation, for example. If this is a surface temperature ( eg SST ), it implies assuming that the surface represents the temperature of a certain depth of water and that this representative depth remains about the same over the regions being averaged, in order to respect the “volume” condition above. That is somewhat questionable for the ocean ‘mixed layer’ but may provide a crude energy proxy.

However, it is immediately clear that one cannot start adding or averaging air and SST, or land and sea temperatures. They are not compatible media. It is like asking for the average of an apple and an orange: it has no physical meaning. It certainly cannot be the basis of an energy budget calculation, since it is no longer a measure of the change in thermal energy.

As seen from the above figures: air, stone and earth will change temperature about four times as much as water in response to the same energy input.

No one would think of trying to average temperature records in deg. Fahrenheit with records in deg. C, yet, for some reason, mixing land and sea data does not seem to raise any eyebrows.
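A crude numerical illustration, a sketch using the values of table 1 and deliberately ignoring differences of mass and mixing depth, shows how a naive 30:70 temperature average misrepresents the implied energy change:

#!/bin/awk -f
# same energy into sea water and 'dry earth' gives very different dT ( sketch )
BEGIN {
 shc_sea=3.93; shc_land=1.26          # kJ/kg/K, from table 1
 q=10.0                               # kJ added to 1 kg of each medium
 dT_sea=q/shc_sea; dT_land=q/shc_land
 printf "dT sea = %.2f K, dT land = %.2f K ( ratio %.1f )\n", dT_sea, dT_land, dT_land/dT_sea
 printf "naive 30:70 average dT = %.2f K for the same energy input\n", 0.3*dT_land + 0.7*dT_sea
}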

Rate of change in global temperature datasets


Figure 1. Comparing the rate of change of temperature in land and sea datasets ( 30 month low-pass gaussian filter ).

Figure 1 shows the rate of change in two SST datasets and the BEST land dataset scaled down by a factor of two. They are all reasonably close with this scaling factor. The large peak in ICOADS data is a recognised sampling issue due to changes in shipping routes and sampling methods during and after WWII. The UK Met Office processed HadISST dataset aims to remove this bias.

The rate of change of near-surface land air temperature as estimated in the Berkeley “BEST” dataset is very similar in form to the rate of change in the sea surface temperature record, except that it shows twice the magnitude.

Sea water has a specific heat capacity about 4 times that of rock. This means that rock will change in temperature four times more than water for the same change in thermal energy, for example from incoming solar radiation.

Soil, in general, is a mix of fine particles of rock and organic material with a significant water content, so the two temperature records are consistent with the notion of considering land as ‘moist rock’. This also partly explains the much larger temperature swings in desert regions: the temperature of dry sand will change about four times faster than ocean water and be twice as volatile as non-desert land regions.

This also underlines why it is inappropriate to average land and sea temperatures as is done in several recognised global temperature records such as HadCRUT4 ( a bastard mix of HadSST3 and CRUTem4 ) as well as GISS-LOTI and the new BEST land and sea averages.

It is a classic case of ‘apples and oranges’. If you take the average of an apple and an orange, the answer is a fruit salad. It is not a useful quantity for physics-based calculations such as the earth energy budget and the impact of radiative “forcings”.

The difference in heat capacity will skew the data in favour of the land air temperatures, which vary more rapidly, and will thus give an erroneous basis for making energy-based calculations. In addition, land air temperatures are also compromised by urban heat island and other biases, so these will be effectively doubled before contaminating the global land + sea record.

In this sense the satellite data provide a more physically consistent global average because they are measuring a more consistent medium. If the aim is to do radiation based energy calculations it is probably more meaningful to use SST as the calorimeter.

Climate sensitivity is defined as the ratio ∆T / ∆F of temperature change to radiative forcing, usually in the context of a linear approximation to the Planck feedback, which is valid over relatively small deviations in the circa 300K temperature range. Other feedbacks are seen as perturbations that either add to or subtract from the dominant Planck radiative feedback. All this, and even the far more complex general circulation climate models, are basically energy balance calculations. The conservation of energy is one of the defining axioms of physics. A fundamental test of any theory or equation is whether it respects the conservation of energy.

Horizontal heat transfer ensures that land temperature is constrained by ocean heat capacity: the thermal anchor of the climate system. It is well known that temperatures in coastal regions are stabilised by the proximity of the sea or ocean, while the centres of continents show greater extremes of diurnal and annual variation. However, land near-surface temperature remains more volatile than SST, and analysis of climate models shows that they display greater climate sensitivity over land and produce a different lapse rate [1], IF that can be taken as being reliable.

In this context, temperature rise is the final result of all inputs, “forcings” and feedbacks many of which may be different over land. Heat capacity and available moisture both play an important role. Obviously these two factors are related. Using a non-thermodynamically relevant “average” temperature from two different ecologies with different climate sensitivities and lapse rates to produce an ‘average’ CS also seems open to bias.

Conclusion

Temperatures are not abstract statistics, their physical meaning needs to be considered when choosing how to process them. Using land + sea global average temperature datasets, biased by giving undue weight to the more volatile land-based temperatures, will produce physically incorrect results.

Most climate data are not just dimensionless numbers. Any processing should be considered in the context of the physical quantities that they represent. If temperature or temperature anomaly is being considered as an energy proxy for energy based calculations this should be explicitly stated and any biases that this may introduce should be discussed.

The physical significance, validity and limitations of “average” land + sea temperatures should be considered where they are used. This is rarely, if ever, done.

UPDATE

A typical equation for the definition of the settled change in temperature in response to a change in radiative ‘forcing’ F has the form:

∆F = λ * ∆T + ∆N ; where ∆N is the change in top-of-atmosphere radiation.

λ is the reciprocal of climate sensitivity ( CS ). A more realistic model to assess the effect of differing responses would be:

∆F = α * λland * ∆Tland + (1 – α) * λsea * ∆Tsea + ∆N

Here α represents the geographic proportion of land area and is what is usually used to weight the land and sea mean temperatures into a single “mean temperature”. Land temperatures will change by a greater magnitude due to the larger CS, as indicated by the model runs in Geoffroy et al [1].

Due to its lesser heat capacity, land will equilibrate faster than the oceans. In this intermediate period there will be horizontal heat transfer from land to sea to redress the imbalance. This extra heat flux will somewhat increase the ocean temperature response, thus increasing the effective transient climate sensitivity ( TCS ). The opposite will apply to land.

After hundreds of years, a dynamic equilibrium will establish where the horizontal flux balances the different responses of the two media. Land will heat more but is constrained by the ocean response.

The details of how this will level out are not trivial and will depend on heat capacity, lapse rate, moisture content and the mechanics of the horizontal heat transfer.

It is the λ * ∆T product ( a heat flux term ) that is being averaged, not temperature itself. That is as it should be to retain a physically meaningful calculation, so if temperatures are to be added ( or averaged ) they should be weighted not only by the land area but also by the ratio λland / λsea. This respects the scientific requirement to work with extensive properties, not intensive ones, and restores the physical meaning of the resulting “global mean temperature”. It does not mean abandoning the GMST index, simply applying a correct weighting to account for the different media, in a similar way to what is already done to account for land area.

According to the model runs, this ratio of land to sea temperature response lies between 1.4 and 1.9, not dissimilar to the crude factor of 2 scaling of BEST land and SST shown in figure 1.

This implies that the classic 30/70% weighting of land and sea averages should probably be more like 15/85% or 20/80%.
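A minimal sketch of that re-weighting, taking the land/sea temperature-response ratio from the 1.4 to 1.9 range quoted above ( the renormalisation step is one reading of the λ-weighting argument, not the only possible one ):

#!/bin/awk -f
# re-weight the 30:70 land:sea split by the response ratio ( sketch )
BEGIN {
 alpha=0.3                          # geographic land fraction
 for (r=1.4; r<=1.91; r+=0.1) {     # r = land/sea temperature response ratio
  w=alpha/r                         # land weight scaled down by lambda_land/lambda_sea = 1/r
  land=w/( w + (1-alpha) )          # renormalise so the weights sum to 1
  printf "ratio %.1f -> land/sea weighting %.0f/%.0f\n", r, 100*land, 100*(1-land)
 }
}

This gives land weightings of roughly 18-23%, broadly in line with the 15-20% suggested above.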

DATA:

The data used in figure 1 can be obtained from KNMI climate explorer:
http://climexp.knmi.nl/start.cgi?id=someone@somewhere

The values of specific heat capacity shown in table 1 are provided by the Engineering Toolbox:
http://www.engineeringtoolbox.com/specific-heat-fluids-d_151.html

The 3-sigma gaussian filter is a standard filter available on most data processing packages. A description and graph of the frequency response is provided accompanied by a script to apply this filter at the following link:

Gaussian low-pass filter


[1] Geoffroy et al 2015 : “Land-sea warming contrast: the role of the horizontal energy transport” [ paywalled ]
http://link.springer.com/article/10.1007/s00382-015-2552-y

On the relationship of ACE and SST

Introduction

Sea surface temperature has been shown to be a key condition for the development of tropical and cyclonic storms, though there are strongly conflicting opinions on its impact on long term trends. The present study looks at the correlation of accumulated cyclone energy and sea surface temperature in the N. Atlantic basin and finds common periodicities linking the two. Notably, a clear lunar period is revealed. The similarity between the post-2005 drop in cyclone energy and a similar drop in the record from the 1930-1950 period is also discussed.


Figure 1. Comparing North Atlantic SST to Accumulated Cyclone Energy

Data Sources:

Accumulated cyclone energy ( ACE ) is an estimation of kinetic energy contained in cyclone class storms, based on wind speed, storm extent and other information. This is catalogued both by storm and as annual totals. In the following, all ACE data [DS1] and discussion thereof, refer to the North Atlantic basin. Much research has been done recently to try to improve the earlier periods of the record. [1] [2]

The ACE metric is intended to provide better insight than simply counting the number of storms, since storm counts have been strongly affected by improvements in detection and reporting, and by possible statistical variation in the proportions of storm intensity. Many smaller storms are now reported that would not have been detected without modern surveying techniques [2]. Since storm energy originates in ocean heat energy ( Gray 1968 ) [3], total cyclone energy is a more appropriate metric than non-dimensional quantities such as storm count when searching for a possible causal link.

Sea surface temperature data ( SST ) are taken from two processed data sources, both derived primarily from ICOADS source observations.

HadSST3 [DS2] is an “anomaly” dataset, ie deviations from the local seasonal mean over an arbitrary reference period. Data from 0-70W, 0-60N were averaged.

ERSSTv3 [DS3] is a dataset of actual SST. A single offset was subtracted from the N. Atlantic mean to align it with HadSST3. NOAA provide a detrended N. Atlantic average they call the AMO index. The data used here are the NON-detrended version, ie actual SST.

Data Processing:

Since ERSST has a strong annual variation, and HadSST3 has considerable high frequency noise and annual residuals, both were low-pass filtered using a 3-sigma gaussian filter with sigma = 12mo. This removes most variability below about 2 years in period.

Considerable adjustments are made to SST for a number of reasons which are discussed on the Met. Office web site. In particular, substantial adjustments are made to the WWII period and subsequent years ( many of which are similarly adopted for both data sources used here ):

  1. ERI biases (North Atlantic): In the North Atlantic between 1970 and 1994, Engine Room Intake biases were generated from the means and standard errors from Kent and Kaplan 2006.
  2. Unknown measurements: AR(1) series with lag correlation 0.99 and range between 0 and 1 were used to assign a time varying fraction of unknown measurements to bucket. The remainder were set to be ERI measurements. The same value was used at all locations.
  3. ERI recorded as bucket: 30±10% of bucket observations were reassigned as Engine Room Intake measurements. One value per realisation was used and applied to all times and places after 1940.
  4. Canvas to rubber bucket: A linear switchover was assumed. The start point (all canvas) was chosen randomly between 1954 and 1957. The end point (all rubber) was chosen randomly between 1970 and 1980.

Most of the post-WWII cooling is the result of these somewhat speculative “bias corrections”, in particular the phasing in of the “canvas to rubber bucket” adjustment from 1954 to 1980. The adjustments made to HadSST3 were discussed in detail in the following article: On the adjustments to the HadSST3 data set

The N.Atlantic annual cyclone season peaks in late summer. Annual ACE data were plotted at the mid-point of the year. Where ACE is shown low-pass filtered, a similar 3-sigma gaussian filter was used.

ACE data will have also been affected by changes in shipping routes and usage during the WWII period, possibly leading to significant non-reporting of storms well away from land.

In general all three datasets should be regarded as unreliable during the WWII period.

Discussion

Similarities :

There is a strong similarity in the ACE and SST data, both at inter-decadal and inter-annual scales. The much discussed 60 year pseudo-cyclic variability is clearly seen in both SST and ACE. The inter-annual variability is particularly close in the more reliable post-1950 period but shows a strong similarity throughout the record. Since cyclones derive their energy from the heat of the ocean surface [3], this is not particularly surprising, but it provides a means of comparing two fundamentally different measurements of climatic conditions.

Differences:

There are two very visible discrepancies between the two metrics. War-time ACE is very low and probably reflects some degree of under-sampling in that period. However, the low activity runs from 1934 to 1949, a far broader period than can be accounted for by WWII disruption to shipping. Also, systematic airborne monitoring was started in 1944 ( Klotzbach & Landsea 2015 ) [4] and there was no noticeable increase in ACE until 1950, suggesting the lower level of activity in the period, though less reliably measured, is real. A detailed account of uncertainty in the HURDAT data is provided by Landsea & Franklin (2013) [5].

There is a large peak in ACE in 1893 that is opposed to the variation shown in the SST record. 1952 also shows a mismatch between the two records. With those two exceptions, the inter-annual variability in the two metrics follows quite closely throughout the record.

The post-2005 drop in ACE

Applying a low-pass filter to the annual ACE data allows a more direct comparison to SST.


Figure 2.

An interesting feature becomes apparent: unlike SST, the end of the cyclone data has dropped sharply since its peak in 2004/2005. This looks very similar to the trough that started in the 1930s.

This drop is also reported by Klotzbach & Landsea (2015) [4], which includes the most recent data:

Large, significant downward trends are present in accumulated cyclone energy in the Northern Hemisphere, the Southern Hemisphere, and globally.

It can also be noted that the inter-annual variations correlate much more closely during both the early and later 20th c. warming periods.

Other studies, such as Holland & Webster (2007) [6], based on storm count rather than ACE, seem to give a less clear picture of the SST linkage and, due to their earlier publishing date, do not include the more recent decline.


Figure 3. From Holland & Webster 2007, ( fig. 1 )

Common variability of ACE and SST can be examined by cross-correlation of the two time series. Common periodicities can be examined by spectral analysis of the cross-correlation function.


Figure 4. Cross power spectrum N.Atl. ACE vs SST

The two main common periods are close to 9 and 60 years, with a smaller peak around 5.6 years. The circa 9y peak is centred on 9.07 years. It is noted that this is exactly the mean frequency of the 8.85y and 9.3y lunar cycles: the period of the mean frequency is the harmonic mean, 2 / ( 1/8.85 + 1/9.3 ) ≈ 9.07 years. The 8.85y period corresponds to the precession of the lunar line of apsides and 9.3y is half the nodal precession period.

This circa 9.1y periodicity would appear to be the same as that reported as common to Atlantic and Pacific SST by Muller et al (2013) [8] and conclusively shown to be of lunar origin by N. Scafetta [9] by examining data from the NASA / JPL ephemeris.

Though tides are traditionally measured by the height of the water level, it should be remembered that tides are longitudinal waves and the principal movement of water is horizontal, not vertical. Long period tidal variations have the potential to move vast volumes of surface water and hence heat, thereby modifying ocean currents. This seems to have received little recognition or quantitative study.


Figure 5. Showing detail of later 20th c. ACE and SST with major volcanoes marked.

While there is still too little data to confirm the length of the post-2005 reduction in activity, the dearth of landfalling hurricanes in the U.S. has been widely reported, as has the failure of hurricane activity to match the “hottest year ever” claims made about SST.

If this lower level of activity continues in the next few years, it will suggest that the stagnant temperatures of “the pause”, like those of the 30s and 40s, are not conducive to a high level of hurricane activity. This should then inform attempts to model cyclone formation and intensity.

While there is clearly a linkage between SST and cyclone energy, it is not a simple one-to-one relationship of warmer SST causing “more frequent and more extreme” storms. A more detailed study of other factors such as wind shear and vorticity is covered in Emanuel 1995 [14]. A brief overview of more recent work is provided by Vecchi and Soden 2007 [15].

Some studies of the correlation of SST and cyclone intensity and frequency [10] have, for reasons partly motivated by data quality issues, focused on the post-1960 period. However, it is clear that this will not give an accurate or representative assessment.

Whatever the motivation, focussing on a limited period dominated by a rising section of the data runs a substantial risk of drawing false conclusions.

Emanuel (2005) [10]

… show that this index [ potential destructiveness of hurricanes derived from the third power of wind speed ] has increased markedly since the mid-1970s. This trend is due to both longer storm lifetimes and greater storm intensities. I find that the record of net hurricane power dissipation is highly correlated with tropical sea surface temperature, … My results suggest that future warming may lead to an upward trend in tropical cyclone destructive potential…

The accuracy of the claims and calculations made in Emanuel (2005) was the subject of a reply to the paper, Gray (2005) [11], which found them to be invalid:

This paper’s conclusions are partially based on an analysis of poorly documented and biased tropical cyclone (TC) maximum wind (Vmax) data from the Northwest (NW) Pacific. This study presents a sum of each tropical cyclone’s maximum winds – a very hard measurement to realistically obtain over a 3-5 decade period of changing maximum wind measurement techniques. The author then cubes these maximum wind speeds (Vmax³) of questionable accuracy. Such a cubing process greatly exaggerates any errors in the original Vmax estimate.

Almost any climate dataset will show either an upward or a downward trend. Suggesting evidence of causation from nothing more than the sign of the long term trend has little justification. Extrapolating such a relationship outside the period of study, on nothing more than a baseless “if the current trend continues”, is more likely to be misleading than informative. There seems to be an unspoken assumption that the most likely outcome is that a trend calculated over some arbitrary period will continue, while no logical or statistical basis for that assumption is ever put forward.

A study showing correlation of variation in both directions, or of less trivial variations, would be far more persuasive.

A rapid drop in hurricane energy is noted before the El Chichon and Mt Pinatubo eruptions ( figure 5 ) and is already visible in SST. In fact, both events were preceded by a higher level of cyclone activity that coincided with peaks in the 9y lunar cycle in SST, which also coincided with peaks in solar activity. The temporal coincidence of all these factors raises the danger of false attribution.

Volcanoes play a significant role in attempts to model past climate. Generally, on a global scale, climate models produce little more than general GHG-driven warming and a lot of internal noise. The cooling effect attributed to major stratospheric eruptions is important in putting a few bumps and dips in about the right place to give the visual impression that the model, at least roughly, resembles the recent climate history ( models are tuned primarily to the period from 1960 onwards ).

In multivariate regression studies, or when arbitrarily adjusting climate model parameters to reconcile model output with the climate record ( Hansen et al 2002 [12] ), false attribution of subsequent cooling to volcanic forcing would lead to unduly high estimations of climate sensitivity to volcanic forcing. Consequently, the opposing ( warming ) radiative “forcings” will also likely be over-estimated.

A separate study of volcanic forcing in the tropics suggests a similar over-estimation of climate sensitivity.

On determination of tropical feedbacks

Conclusion :

Secular trends underlying the obvious 60 year pseudo-periodic variability in both SST and cyclone data are probably determined more by sampling bias and the various adjustments and corrections applied to SST than they are by possible long term changes in the physical conditions. As such, caution should be applied before suggesting that they represent a real indication of a possible long term causal relationship.

The clear 60y and 9y periodicities draw attention to the need to investigate long period lunar tidal influences on ocean and atmospheric climate variables. This is needed in order to quantify and account for natural variability and also to avoid false attribution to other climate variables. In particular, large scale variations in surface transport of heat by ocean currents due to long period tides appear to be a much neglected factor.

There is a stronger correlation during the two major warming periods of the 20th century than in the plateaux or cooling periods. SST is not the single dominant factor determining cyclone energy. There is not a trivial linear relationship and it is probably not appropriate to project relationships derived from the study of data primarily from warming periods onto times of relatively stable SST conditions.

There is an indication that the drop in North Atlantic hurricane activity which apparently occurred during the pre-WWII warm period is repeating during the current temperature plateau.

Data sources:

ACE : annual accumulated cyclone energy derived from HURDAT2 database:
http://www.aoml.noaa.gov/hrd/tcfaq/E11.html
Documentation of Atlantic Hurricane Database Re-analysis Project:
http://www.aoml.noaa.gov/hrd/data_sub/re_anal.html

UK Met. Office HadSST3.1.1.0 [ 0-70W, 0-60N ] recovered from KNMI http://climexp.knmi.nl

ERSSTv3 non-detrended N.Atlantic SST :
http://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.mean.data

Explanation of accumulated cyclone energy:
http://www.aoml.noaa.gov/hrd/tcfaq/E11.html

“ACE” = Accumulated Cyclone Energy – An index that combines the numbers of systems, how long they existed and how intense they became. It is calculated by squaring the maximum sustained surface wind in the system every six hours that the cyclone is a Named Storm and summing it up for the season. It is expressed in 10^4 kt^2.
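For concreteness, the definition above reduces to a couple of lines of awk, the language used for the data processing scripts later in this post. This is a minimal sketch, not NOAA’s code: the one-wind-value-per-line input format and the script name are assumptions of the illustration; 34 kt is the strength at which a system is named.

#!/usr/bin/awk -f
# ace.awk : accumulated cyclone energy from 6-hourly advisories.
# input ( assumed ): one maximum sustained wind value in knots per line,
# one line per six hour advisory over the season.
# only advisories at named-storm strength ( >= 34 kt ) are counted.
$1 >= 34 { sum += $1 * $1 }              # square the wind and accumulate
END      { printf "ACE = %.1f ( 10^4 kt^2 )\n", sum / 1e4 }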

The Atlantic hurricane database (or HURDAT) extends back to 1851. However, because tropical storms and hurricanes spend much of their lifetime over the open ocean – some never hitting land – many systems were “missed” during the late 19th and early 20th Centuries (Vecchi and Knutson 2008). Starting in 1944, systematic aircraft reconnaissance was commenced for monitoring both tropical cyclones and disturbances that had the potential to develop into tropical storms and hurricanes. This did provide much improved monitoring, but still about half of the Atlantic basin was not covered (Sheets 1990). Beginning in 1966, daily satellite imagery became available at the National Hurricane Center, and thus statistics from this time forward are most complete (McAdie et al. 2009).

References :

[1] Vecchi & Knutson 2008

Click to access gav0802.pdf

[2] The Atlantic Hurricane Database Re-analysis Project
http://www.aoml.noaa.gov/hrd/data_sub/re_anal.html

[3] Gray, W. M., 1968: Global view of the origin of tropical disturbances and storms. Mon. Wea. Rev., 96,
669–700.

Click to access 0114_Bluebook.pdf

[4] Klotzbach & Landsea 2015
“Extremely Intense Hurricanes: Revisiting Webster et al. (2005) after 10 Years”

Click to access klotzbachandlandsea2015.pdf

[5] Landsea, C. W., and J. L. Franklin, 2013: “Atlantic hurricane database uncertainty and presentation of a new database format.” Mon.
Wea. Rev., 141, 3576–3592, doi:10.1175/MWR-D-12-00254.1.

Click to access landsea-franklin-mwr2013.pdf

[6] Hansen et al 2002 : “Climate forcings in Goddard Institute for Space Studies SI2000 simulations”

Click to access hansen_etal_2002.pdf

[7] Holland & Webster 2007 :
“Heightened tropical cyclone activity in
the North Atlantic: natural variability or
climate trend?”

Click to access 2695.full.pdf

[8] Muller et al (2013) “Decadal variations in the global atmospheric land temperatures”

Click to access berkeley-earth-decadal-variations.pdf

[9] Nicola Scafetta / Journal of Atmospheric and Solar-Terrestrial Physics 72 (2010) 951–970
“Empirical evidence for a celestial origin of the climate oscillations and its implications”

Click to access scafetta-JSTP2.pdf

[10] Emanuel 2005 “Increasing destructiveness of tropical cyclones over the past 30 years” [paywalled]
http://www.nature.com/nature/journal/v436/n7051/abs/nature03906.html

[11] William M. Gray
Comments on: “Increasing destructiveness of tropical cyclones over the past 30 years” by Kerry Emanuel, Nature, 31 July 2005, Vol. 436, pp. 686-688
http://arxiv.org/pdf/physics/0601050

[12] Hansen et al 2002 : “Climate forcings in Goddard Institute for Space Studies SI2000 simulations”

Click to access hansen_etal_2002.pdf

[13] Webster et al (2005)
“Changes in Tropical Cyclone Number, Duration, and Intensity in a Warming Environment”
http://www.sciencemag.org/content/309/5742/1844.full

[14] Kerry A. Emanuel 1995
“Sensitivity of Tropical Cyclones to Surface Exchange Coefficients and a Revised Steady-State Model incorporating Eye Dynamics”
ftp://texmex.mit.edu/pub/emanuel/PAPERS/sens95.pdf

[15] Vecchi & Soden 2007
“Increased tropical Atlantic wind shear in model projections of global warming”
http://onlinelibrary.wiley.com/doi/10.1029/2006GL028905/full

On Determination of Tropical Feedbacks

This article has been posted for discussion at Judith Curry’s “Climate Etc.”

On determination of tropical feedbacks

Abstract

Satellite data for the period surrounding the Mt Pinatubo eruption in 1991 provide a means of estimating the scale of the volcanic forcing in the tropics. A simple relaxation model is used to examine how the temporal evolution of the climate response will differ from that of the radiative forcing. Taking this difference into account is essential for proper detection and attribution of the relevant physical processes. Estimations are derived for both the forcing and time-constant of the climate response. These are found to support values derived in earlier studies which vary considerably from those of current climate models. The implications of these differences in inferring climate sensitivity are discussed. The study reveals the importance of secondary effects of major eruptions, pointing to a persistent warming effect in the decade following the 1991 eruption.

The inadequacy of the traditional application of linear regression to climate data is highlighted, showing how this will typically lead to erroneous results and thus the likelihood of false attribution.

The implications of this false attribution for the post-2000 divergence between climate models and observations are discussed.

Keywords: climate; sensitivity; tropical feedback; global warming; climate change; Mount Pinatubo; volcanism; GCM; model divergence; stratosphere; ozone; multivariate; regression; ERBE; AOD

Introduction

For the period surrounding the eruption of Mount Pinatubo in the Philippines in June 1991, there are detailed satellite data for both the top-of-atmosphere ( TOA ) radiation budget and atmospheric optical depth ( AOD ) that can be used to derive the changes in radiation entering the climate system during that period. Analysis of these radiation measurements allows an assessment of the system response to changes in the radiation budget. The Mt Pinatubo eruption provides a particularly useful natural experiment, since the spread of the effects was centred in the tropics and dispersed fairly evenly between the hemispheres ( [1] fig. 6; this meta-study provides a lot of detail about the composition and dispersal of the aerosol cloud ).

The present study investigates the tropical climate response in the years following the eruption. It is found that a simple relaxation response, driven by volcanic radiative ‘forcing’, provides a close match with the variations in TOA energy budget. The derived scaling factor to convert AOD into radiative flux, supports earlier estimations [2] based on observations from the 1982 El Chichon eruption. These observationally derived values of the strength of the radiative disturbance caused by major stratospheric eruptions are considerably greater than those currently used as input parameters for general circulation climate models (GCMs). This has important implications for estimations of climate sensitivity to radiative ‘forcings’ and attribution of the various internal and external drivers of the climate system.

Method

Changes in net TOA radiative flux, measured by satellite, were compared to volcanic forcing estimated from measurements of atmospheric optical depth. Regional optical depth data, with monthly resolution, are available in latitude bands for four height ranges between 15 and 35 km [DS1] and these values were averaged from 20S to 20N to provide a monthly mean time series for the tropics. Since optical depth is a logarithmic scale, the values for the four height bands were added at each geographic location. Lacis et al [2] suggest that aerosol radiative forcing can be approximated by a linear scaling factor of AOD over the range of values concerned. This is the approach usually adopted in IPCC reviewed assessments and is used here. As a result the vertical summations are averaged across the tropical latitude range for comparison with radiation data.

Tropical TOA net radiation flux is provided by Earth Radiation Budget Experiment ( ERBE ) [DS2].

One notable effect of the eruption of Mt Pinatubo on tropical energy balance is a variation in the nature of the annual cycle, as seen by subtracting the pre-eruption mean annual variation. As well as the annual cycle due to the eccentricity of the Earth’s orbit, which peaks at perihelion around 4th January, the sun passes over the tropics twice per year and the mean annual cycle in the tropics shows two peaks: one in March, the other in Aug/Sept, with a minimum in June and a lesser dip in January. Following the eruption, the residual variability also shows a six monthly cycle. The semi-annual variability changed and only returned to similar levels after the 1998 El Nino.


Figure 1 showing the variability of the annual cycle in net TOA radiation flux in the tropics.

It follows that using a single period as the basis for the anomaly leaves significant annual residuals. To minimise the residual, three different periods each a multiple of 12 months were used to remove the annual variations: pre-1991; 1992-1995; post 1995.

The three resultant anomaly series were combined, ensuring the difference in the means of each period were respected. The mean of the earlier, pre-eruption annual cycles was taken as the zero reference for the whole series. There is a clearly repetitive variation during the pre-eruption period that produces a significant downward trend starting 18 months before the Mt. Pinatubo event. Since it may be important not to confound this with the variation due to the volcanic aerosols, it was characterised by fitting a simple cosine function. This was subtracted from the fitting period. Though the degree to which this can be reasonably assumed to continue is speculative, it seems some account needs to be taken of this pre-existing variability. The effect this has on the result of the analysis is assessed.

The break of four months in the ERBE data at the end of 1993 was filled with the anomaly mean for the period to provide a continuous series.


Figure 2 showing ERBE tropical TOA flux adaptive anomaly.



Figure 2b showing ERBE tropical TOA flux adaptive anomaly with pre-eruption cyclic variability subtracted.

Since the TOA flux represents the net sum of all “forcings” and the climate response, the difference between the volcanic forcing and the anomaly in the energy budget can be interpreted as the climate response to the radiative perturbation caused by the volcanic aerosols. This involves some approximations. Firstly, since the data is restricted to the tropical regions, the vertical energy budget does not fully account for energy entering and leaving the region. There is a persistent flow of energy out of the tropics both via ocean currents and atmospheric circulation. Variations in wind driven ocean currents and atmospheric circulation may be part of any climate feedback reaction.

Secondly, taking the difference between TOA flux and the calculated aerosol forcing at the top of the troposphere to represent the top of troposphere energy budget assumes negligible energy is accumulated or lost internally to the upper atmosphere. Although there is noticeable change in stratospheric temperature as a result of the eruption, the heat capacity of the rarefied upper atmosphere means this is negligible in this context.

Figure 3 showing changes in lower stratosphere temperature due to volcanism.

A detailed study on the atmospheric physics and radiative effects of stratospheric aerosols by Lacis, Hansen & Sato [2] suggested that radiative forcing at the tropopause can be estimated by multiplying optical depth by a factor of 30 W/m2.

This value provides a reasonably close match to the initial change in ERBE TOA flux. However, later studies [3], [4], attempting to reconcile climate model output with the surface temperature record, have reduced the estimated magnitude of the effect of stratospheric aerosols. With the latter adjustments, the initial effect on net TOA flux is notably greater than the calculated forcing, which is problematic; especially since Lacis et al reported that the initial cooling may be masked by the warming effect of larger particles ( > 1µm ). Indeed, in order for the calculated aerosol forcing to be as large as the initial changes in TOA flux, without invoking negative feedbacks, it is necessary to use a scaling of around 40 W/m2. A comparison of these values is shown in figure 4.

What is significant is that from just a few months after the eruption, the disturbance in TOA flux is consistently less than the volcanic forcing. This is evidence of a strong negative feedback in the tropical climate system acting to counter the volcanic perturbation. Just over a year after the eruption, it has fully corrected the radiation imbalance despite the disturbance in AOD still being at about 50% of its peak value. The net TOA reaction then remains positive until the “super” El Nino of 1998. This is still the case with reduced forcing values of Hansen et al as can also be seen in figure 4.



Figure 4 comparing net TOA flux to various estimations of volcanic aerosol forcing.

The fact that the climate is dominated by negative feedbacks is not controversial since this is a pre-requisite for overall system stability. The main stabilising feedback is the Planck response ( about 3.3 W/m2/K at typical ambient temperatures ). Other feedbacks will increase or decrease the net feedback around this base-line value. Where IPCC reports refer to net feedbacks being positive or negative, it is relative to this value. The true net feedback will always be negative.
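For readers wanting to see where a figure of that order comes from, linearising the Stefan-Boltzmann law gives the Planck response directly. This is a back-of-envelope sketch only, assuming an effective emissivity of about 0.61 and a mean surface temperature of 288 K ( illustrative values, not taken from the text ):

\lambda_{Planck} = 4\,\epsilon\,\sigma\,T^{3} \approx 4 \times 0.61 \times ( 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} ) \times ( 288\ \mathrm{K} )^{3} \approx 3.3\ \mathrm{W\,m^{-2}\,K^{-1}}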

It is clear that the climate system takes some time to respond to initial atmospheric changes. It has been pointed out that to correctly compare changes in radiative input to surface temperatures some kind of lag-correlation analysis is required: Spencer & Braswell 2011 [5], Lindzen & Choi 2011 [6], Trenberth et al 2010 [7]. All three show that correlation peaks with the temperature anomaly leading the change in radiation by about three months.


Figure 5 showing climate feedback response to Mt Pinatubo eruption. Volcanic forcing per Lacis et al.

After a few months, negative feedbacks begin to have a notable impact and the TOA flux anomaly declines more rapidly than the reduction in AOD. It is quickly apparent that a simple, fixed temporal lag is not an appropriate way to compare the aerosol forcing to its effects on the climate system.

The simplest physical response of a system to a disturbance would be a linear relaxation model, or “regression to the equilibrium”, where for a deviation of a variable X from its equilibrium value, there is a restoring action that is proportional to the magnitude of that deviation. The more it is out of equilibrium, the quicker its rate of return. This kind of model is common in climatology and is central to the concept of climate sensitivity to changes in various climate “forcings”.

dX/dt = -k*X ; where k is a constant of proportionality whose reciprocal, tau = 1/k, is the time-constant of the response.

The solution of this equation for an infinitesimally short impulse disturbance is a decaying exponential. This is called the impulse response of the system. The response to any change in the input can be found by its convolution with this impulse response. This can be calculated quite simply since it is effectively a weighted running average calculation. It can also be found by algebraic solution of the ordinary differential equation if the input can be described by an analytic function. This is the method that was adopted in Douglass & Knox 2005 [17], comparing AOD to lower tropospheric temperature ( TLT ).
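As an illustration of the “weighted running average” just described, the exponential convolution can be written as a one-pole recursive filter in a few lines of awk, in the same idiom as the runmean.awk script used later in this post. The script name, the default tau and the two-column date/value input format are assumptions of this sketch, not the code used for the analysis.

#!/usr/bin/awk -f
# expconv.awk : convolve column 2 of a date/value series with a decaying
# exponential of time-constant tau ( in sample intervals, eg months ).
# the recursion y[i] = a*x[i] + (1-a)*y[i-1], with a = 1 - exp(-1/tau),
# is the discrete equivalent of convolution with exp(-t/tau).
# usage: ./expconv.awk tau=8 file.dat
BEGIN { tau = 8 }                                # default time-constant
{
  if ( NR == 1 ) { a = 1 - exp(-1/tau); y = $2 } # initialise on first sample
  y = a*$2 + (1 - a)*y                           # weighted running average
  print $1, y
}

With tau = 8 months ( the best-fit value reported in the results below ), applying such a filter to the AOD series gives the exp_AOD term used in the fitting equations.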

The effect of this kind of system response is a time lag as well as a degree of low-pass filtering which reduces the peak and produces a change in the profile of the time series, compared to that of the input forcing. In this context linear regression of the output and input is not physically meaningful and will give a seriously erroneous value of the presumed linear relationship.

The speed of the response is characterised by a constant parameter in the exponential function, often referred to as the ‘time-constant’ of the reaction. Once the time-constant parameter has been determined, the time-series of the system response can be calculated from the time-series of the forcing.

Here, the variation of the tropical climate is compared with a linear relaxation response to the volcanic forcing. The magnitude and time-constant constitute two free parameters and are found to provide a good match between the model and data. This is not surprising since any deviation from equilibrium in surface temperature will produce a change in the long-wave Planck radiation to oppose it. The radiative Planck feedback is the major negative feedback that ensures the general stability of the Earth’s climate. While the Planck feedback is proportional to the fourth power of the absolute temperature, it can be approximated as linear for small changes around typical ambient temperatures of about 300 kelvin. This is effectively a “single slab” ocean model, but this is sufficient since diffusion into deeper water below the thermocline is minimal on this time scale. This was discussed in Douglass & Knox’s reply to Robock [18].

A more complex model which includes a heat diffusion term to a large deep ocean sink can be reduced to the same analytical form with a slightly modified forcing and an increased “effective” feedback. Both these adjustments would be small and do not change the mathematical form of the equation and hence the validity of the current method. See supplementary information.

It is this delayed response curve that needs to be compared to changes in surface temperature in a regression analysis. Regressing the temperature change against the change in radiation is not physically meaningful unless the system can be assumed to equilibrate much faster than the period of the transient being studied, ie. on a time scale of a month or less. This is clearly not the case, yet many studies have been published which do precisely this, or worse, multivariate regression, which compounds the problem: Santer et al 2014 [8], Trenberth et al 2010 [7], Dessler 2010b [9], Dessler 2011 [10]. Curiously, Douglass & Knox [17] initially calculate the relaxation response to AOD forcing and appropriately regress this against TLT, but later in the same paper regress AOD directly against TLT and thus find an AOD scaling factor in agreement with the more recent Hansen estimations. This apparent inconsistency in their method confirms the origin of the lower estimations of the volcanic forcing.

The need to account for the fully developed response can be seen in figure 6. The thermal inertia of the ocean mixed layer integrates the instantaneous volcanic forcing as well as the effects of any climate feedbacks. This results in a lower, broader and delayed time series. As shown above, in a situation dominated by the Planck and other radiative feedbacks, this can be simply modelled with an exponential convolution. There is a delay due to the thermal inertia of the ocean mixed layer but this is not a simple fixed time delay. The relaxation to equilibrium response introduces a frequency dependent phase delay that changes the profile of the time series. Simply shifting the volcanic forcing forward by about a year would line up the “bumps” but not match the profile of the two variables. Therefore neither simple regression nor a lagged regression will correctly associate the two variables: the differences in the temporal evolution of the two would lead to a lower correlation and hence a reduced regression result leading to incorrect scaling of the two quantities.

Santer et al 2014 attempts to remove ENSO and volcanic signals by a novel iterative regression technique. A detailed account, provided in the supplementary information[8b], reports a residual artefact of the volcanic signal.

The modelled and observed tropospheric temperature residuals after removal of ENSO and volcano signals, τ , are characterized by two small maxima. These maxima occur roughly 1-2 years after the peak cooling caused by El Chichon and Pinatubo cooling signals.


Figure: Santer et al 2014, supplementary figure 3 ( panel D ), “ENSO and volcano signals removed”.

This description matches the period starting in mid-1992, shown in figure 6 below, where the climate response is greater than the forcing. It peaks about 1.5 years after the peak in AOD, as described. Their supplementary fig.3 shows a very clear dip and later peak following Pinatubo. This corresponds to the difference between the forcing and the correctly calculated climate response shown in fig. 6. Similarly, the 1997-98 El Nino is clearly visible in the graph of observational data (not reproduced here) labeled “ENSO and volcano signals removed”. This failure to recognise the correct nature and timing of the volcanic signal leads to an incorrect regression analysis, incomplete removal and presumably incorrect scaling of the other regression variables in the iterative process. This is likely to lead to spurious attributions and incorrect conclusions.



Figure 6 showing tropical feedback as relaxation response to volcanic aerosol forcing ( pre-eruption cycle removed ).

The delayed climatic response to radiative change corresponds to the negative quadrant in figure 3b of Spencer and Braswell (2011) [5] excerpted below, where temperature lags radiative change. It shows the peak temperature response lagging around 12 months behind the radiative change. The timing of this peak is in close agreement with TOA response in figure 6 above, despite SB11 being derived from later CERES ( Clouds and the Earth’s Radiant Energy System ) satellite data from the post-2000 period with negligible volcanism.

This emphasises that the value for the correlation in the SB11 graph will be under-estimated, as pointed out by the authors:

Diagnosis of feedback cannot easily be made in such situations, because the radiative forcing decorrelates the co-variations between temperature and radiative flux.

The present analysis attempts to address that problem by analysing the fully developed response.


Figure 7. Lagged-correlation plot of post-2000 CERES data from Spencer & Braswell 2011. (Negative lag: radiation change leads temperature change.)

The equation relating the climate response ( TOA net flux anomaly – volcanic forcing ) to an exponential convolution of AOD is re-arranged to enable an empirical estimation of the scaling factor by linear regression.

TOA -VF * AOD = -VF * k * exp_AOD eqn. 1

-TOA = VF * ( AOD – k * exp_AOD ) eqn. 2

VF is the volcanic scaling factor to convert ( positive ) AOD into a radiation flux anomaly in W/m2. The exp_AOD term is the exponential convolution of the AOD data, a function of the time-constant tau, whose value is also to be estimated from the data. This exp_AOD quantity is multiplied by a constant of proportionality, k. Since TOA net flux is conventionally given as positive downwards, it is negated in equation 2 to give a positive VF comparable to the values given by Lacis, Hansen, etc.

Average pre-eruption TOA flux was taken as the zero for TOA anomaly and, since the pre-eruption AOD was also very small, no constant term was included.

Since the relaxation response effectively filters out ( integrates ) much of the high frequency variability, giving a less noisy series, this was taken as the independent variable for regression. This choice acts to minimise regression dilution due to the presence of measurement error and non-linear variability in the independent variable.

Regression dilution is an important and pervasive problem that is often overlooked in published work in climatology, notably in attempts to derive an estimation of climate sensitivity from temperature and radiation measurements and from climate model output. Santer et al 2014, Trenberth et al 2010, Dessler 2011, Dessler 2010b, Spencer & Braswell 2011.

The convention of choosing temperature as the independent variable will lead to spuriously high sensitivity estimations. This was briefly discussed in the appendix of Forster & Gregory 2006 [11] , though ignored in the conclusions of the paper.

It has been suggested that a technique based on total least squares regression or bisector least squares regression gives a better fit, when errors in the data are uncharacterized (Isobe et al. 1990). For example, for 1985–96 both of these methods suggest Y_NET of around 3.5 +/- 2.0 W m-2 K-1 (a 0.7–2.4 K equilibrium surface temperature increase for 2 × CO2), and this should be compared to our 1.0–3.6 K range quoted in the conclusions of the paper.

Regression results were thus examined for residual correlation.
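To make the fitting procedure concrete: for a fixed time-constant, equation 2 is linear in the pair ( VF, VF*k ), so the fit reduces to a two-regressor least squares problem with no intercept. The following awk sketch assumes a three-column monthly file of date, AOD and TOA flux anomaly, and generates exp_AOD inline with the same recursion as the expconv.awk sketch above; it is an illustration of the method as described, not the code actually used for the analysis.

#!/usr/bin/awk -f
# fit_vf.awk : fit eqn 2, -TOA = VF*AOD - (VF*k)*exp_AOD, for a fixed tau.
# input ( assumed ): date  AOD  TOA_anomaly  ( monthly, pre-eruption mean = 0 )
# usage: ./fit_vf.awk tau=8 file.dat
BEGIN { tau = 8 }
{
  if ( NR == 1 ) { alpha = 1 - exp(-1/tau); e = $2 }
  e = alpha*$2 + (1 - alpha)*e           # exp_AOD by exponential convolution
  A = $2; E = e; y = -$3                 # regressors and target, no intercept
  sAA += A*A; sAE += A*E; sEE += E*E; sAy += A*y; sEy += E*y
}
END {
  det = sAA*sEE - sAE*sAE                # solve the 2x2 normal equations
  c1  = ( sAy*sEE - sEy*sAE ) / det      # c1 = VF
  c2  = ( sAA*sEy - sAE*sAy ) / det      # c2 = -VF*k
  printf "tau = %d months : VF = %.1f W/m2, k = %.2f\n", tau, c1, -c2/c1
}

Repeating the fit over a range of tau values and comparing the quality of the match to the rise, peak and fall-off, as described in the supplementary information, then selects the time-constant.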

Results

Taking the TOA flux, less the volcanic forcing, to represent the climatic reaction to the eruption, gives a response that peaks about twelve months after the eruption, when the stratospheric aerosol load is still at about 50% of its peak value. This implies a strong negative feedback is actively countering the volcanic disturbance.

This delay in the response, due to thermal inertia in the system, also produces an extended period during which the direct volcanic effects are falling and the climate reaction is thus greater than the forcing. This results in a recovery period, during which there is an excess of incoming radiation compared to the pre-eruption period which, to an as yet undetermined degree, recovers the energy deficit accumulated during the first year when the volcanic forcing was stronger than the developing feedback. This presumably also accounts for at least part of the post eruption maxima noted in the residuals of Santer et al 2014.

Thus if the lagged nature of the climate response is ignored and direct linear regression between climate variables and optical depth is conducted, the later extended period of warming may be spuriously attributed to some other factor. This represents a fundamental flaw in multivariate regression studies such as Foster & Rahmstorf 2011 [12] and Santer et al 2014 [8], among others, that could lead to seriously erroneous conclusions about the relative contributions of the various regression variables.

For the case where the pre-eruption variation is assumed to continue to underlie the ensuing reaction to the volcanic forcing, the ratio of the relaxation response to the aerosol forcing is found to be 0.86 +/- 0.07, with a time constant of 8 months. This corresponds to the value reported in Douglass & Knox 2005 [17], derived from AOD and lower troposphere temperature data. The scaling factor to convert AOD into a flux anomaly was found to be 33 W/m2 +/- 11%. With these parameters, the centre line of the remaining 6 month variability ( shown by the gaussian filter ) fits very tightly to the relaxation model.



Figure 6 showing tropical feedback as relaxation response to volcanic aerosol forcing ( pre-eruption cycle removed ).

If the downward trend in the pre-eruption data is ignored (ie its cause is assumed to stop at the instant of the eruption ) the result is very similar ( 0.85 +/-0.09 and 32.4 W/m2 +/- 9% ) but leads to a significantly longer time-constant of 16 months. In this case, the fitted response does not fit nearly as well, as can be seen by comparing figures 6 and 8. The response is over-damped: poorly matching the post-eruption change, indicating that the corresponding time-constant is too long.

Figure 8 showing tropical climate relaxation response to volcanic aerosol forcing, fitted while ignoring pre-eruption variability.

The analysis with the pre-eruption cycle subtracted provides a generally flat residual ( figure 9 ), showing that it accounts well for the longer term response to the radiative disruption caused by the eruption.

It is also noted that the truncated peak, resulting from substitution of the mean of the annual cycle to fill the break in the ERBE satellite data, lies very close to the zero residual line. While there is no systematic deviation from zero it is clear that there is a residual seasonal effect and that the amplitude of this seasonal residual also seems to follow the fitted response.

Figure 9 showing the residual of the fitted relaxation response from the satellite derived, top-of-troposphere disturbance.

Since the magnitude of the pre-eruption variability in TOA flux, while smaller, is of the same order as the volcanic forcing and its period similar to that of the duration of the atmospheric disturbance, the time-constant of the derived response is quite sensitive to whether this cycle is removed or not. However, it does not have much impact on the fitted estimation of the scaling factor ( VF ) required to convert AOD into a flux anomaly or the proportion of the exponentially lagged forcing that matches the TOA flux anomaly.

Assuming that whatever was causing this variability stopped at the moment of the eruption seems unreasonable but whether it was as cyclic as it appears to be, or how long that pattern would continue is speculative. However, approximating it as a simple oscillation seems to be more satisfactory than ignoring it.

In either case, there is strong support here for values close to the original Lacis et al 1992 calculations of volcanic forcing, which were derived from physics-based analysis of observational data, as opposed to later attempts to reconcile the output of general circulation models by re-adjusting physical parameters.

Beyond the initial climate reaction analysed so far, it is noted that the excess incoming flux does not fall to zero. To see this effect more clearly, the deviation of the flux from the chosen pre-eruption reference value is integrated over the full period of the data. The result is shown in figure 10.

Figure 10 showing the cumulative integral of climate response to Mt Pinatubo eruption.

Pre-eruption variability produces a cumulative sum initially varying about zero. Two months after the eruption, when it is almost exactly zero, there is a sudden change as the climate reacts to the drop in energy entering the troposphere. From this point onwards there is an ever increasing amount of additional energy accumulating in the tropical lower climate system. With the exception of a small drop, apparently in reaction to the 1998 ‘super’ El Nino, this tendency continues to the end of the data.
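For reference, the cumulative integral of figure 10 is just a running sum of the monthly anomalies; a minimal sketch in the same awk idiom, assuming a two-column date/anomaly file already referenced to the pre-eruption mean:

#!/usr/bin/awk -f
# cumsum.awk : running integral of a monthly flux anomaly series ( col 2 ),
# giving the accumulated energy deviation in W.month/m2.
{ s += $2; print $1, s }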

While the simple relaxation model seems to adequately explain the initial four years following the Mt Pinatubo event, it does not explain the system settling to a higher level. Thus there is evidence of a persistent warming effect resulting from this major stratospheric eruption.

Discussion

Concerning the more recent estimations of aerosol forcing, it should be noted that there is a strong commonality of authors in the papers cited here, so rather than being the work of conflicting groups, the more recent weightings reflect the result of a change of approach: from direct physical modelling of the aerosol forcing in the 1992 paper, to the later attempts to reconcile general circulation model (GCM) output by altering the input parameters.

From Hansen et al 2002 [4] ( Emphasis added. )

Abstract:

We illustrate the global response to these forcings for the SI2000 model with specified sea surface temperature and with a simple Q-flux ocean, thus helping to characterize the efficacy of each forcing. The model yields good agreement with observed global temperature change and heat storage in the ocean. This agreement does not yield an improved assessment of climate sensitivity or a confirmation of the net climate forcing because of possible compensations with opposite changes of these quantities. Nevertheless, the results imply that observed global temperature change during the past 50 years is primarily a response to radiative forcings.

From section 2.2.2. Radiative forcing:

Even with the aerosol properties known, there is uncertainty in their climate forcing. Using our SI2000 climate model to calculate the adjusted forcing for a globally uniform stratospheric aerosol layer with optical depth τ = 0.1 at wavelength λ = 0.55 µm yields a forcing of 2.1 W/m2, and thus we infer that for small optical depths

Fa ( W/m2 ) ~ 21 τ

…

In our earlier 9-layer model stratospheric warming after El Chichon and Pinatubo was about half of observed values (Figure 5 of F-C), while the stratospheric warming in our current model exceeds observations, as shown below.

As the authors point out, it all depends heavily upon the assumptions made about the size distribution of the aerosols used when interpreting the raw data. In fact the newer estimation is shown, in figure 5a of the paper, to be about twice the observed values following Pinatubo and El Chichon. It is unclear why this is any better than half the observed values in their earlier work. Clearly the attributions are still highly uncertain and the declared uncertainty of +/- 15% appears optimistic.

3.3. Model Sensitivity

The bottom line is that, although there has been some narrowing of the range of climate sensitivities that emerge from realistic models [Del Genio and Wolf, 2000], models still can be made to yield a wide range of sensitivities by altering model parameterizations.

If the volcanic aerosol forcing is underestimated, other model parameters will have to be adjusted to produce a higher sensitivity. It is likely that the massive overshoot in the model response of TLS is an indication of this situation. It would appear that a better estimation lies between these two extremes. Possibly around 25 W/m2.

The present study examines the largest and most rapid changes in radiative forcing in the period for which detailed satellite observations are available. The aim is to estimate the aerosol forcing and the timing of the tropical climate response. It is thus not encumbered by trying to optimise estimations of a range of climate metrics over half a century by experimental adjustment of a multitude of “parameters”.

The result is in agreement with the earlier estimation of Lacis et al.

A clue to the continued excess over the pre-eruption conditions can be found in the temperature of the lower stratosphere, shown in figure 3. Here too, the initial disturbance seems to have stabilised by early 1995 but there is a definitive step change from pre-eruption conditions.

Noting the complementary nature of the effects of impurities in the stratosphere on TLS and the lower climate system, this drop in TLS may be expected to be accompanied by an increase in the amount of incoming radiation penetrating into the troposphere. This is in agreement with the cumulative integral shown in figure 10 and the southern hemisphere sea temperatures shown in figure 12.

NASA Earth Observatory reports [13] that after Mt Pinatubo, there was a 5% to 8% drop in stratospheric ozone. Presumably a similar removal happened after El Chichon in 1982, which saw an almost identical reduction in TLS.

Whether this is, in fact, the cause or whether other radiation blocking aerosols were flushed out along with the volcanic emissions, the effect seems clear and consistent and quite specifically linked to the eruption event. This is witnessed in both the stratospheric and tropical tropospheric data. Neither effect is attributable to the steadily increasing GHG forcing which did not record a step change in September 1991.

This raises yet another possibility for false attribution in multivariate regression studies and in attempts to arbitrarily manipulate GCM input parameters to engineer a similarity with the recent surface temperature records.

With the fitted scaling factor showing the change in tropical TOA net flux matches 85% of the tropical AOD forcing, the remaining 15% must be dispersed elsewhere within the climate system. That means either storage in deeper waters and/or changes in the horizontal energy budget, ie. interaction with extra-tropical regions.

Since the model fits the data very closely, the residual 15% will have the same time dependency profile as the 85%, so these tropical/extra-tropical variations can also be seen as part of the climate response to the volcanic disturbance: ie. the excess in horizontal net flux, occurring beyond 12 months after the event, also supports restoration of the energy deficit in extra-tropical regions by exporting heat energy. Since the major ocean gyres bring cooler temperate waters into the eastern parts of the tropics in both hemispheres and export warmer waters at their western extents, this is probably a major vector of this variation in heat transportation, as are changes in atmospheric processes like Hadley convection.

Extra-tropical regions were previously found to be more sensitive to radiative imbalance than the tropics:

Figure: HadCRUT4 SH volcano stack, cumulative sum of degree.days.


Thus the remaining 15% may simply be the more stable tropical climate acting as a buffer and exerting a thermally stabilising influence on extra-tropical regions.

After the particulate matter and aerosols have dropped out there is also a long-term depletion of stratospheric ozone ( 5 to 8% less after Pinatubo ) [13]. Thompson & Solomon (2008) [14] examined how lower stratosphere temperature correlated with changes in ozone concentration and found that in addition to the initial warming caused by volcanic aerosols, TLS showed a notable ozone related cooling that persisted until 2003. They note that this is a correlation study and do not imply causation.

 
Figure 11. Part of fig.2 from Thompson & Solomon 2008 showing the relationship of ozone and TLS.

A more recent paper by Solomon [15] concluded that roughly equal, low level aerosol forcing existed before Mt Pinatubo and again since 2000, similarly implying a small additional warming due to lower aerosols in the decade following the eruption.

Several independent data sets show that stratospheric aerosols have increased in abundance since 2000. Near-global satellite aerosol data imply a negative radiative forcing due to stratospheric aerosol changes over this period of about -0.1 watt per square meter, reducing the recent global warming that would otherwise have occurred. Observations from earlier periods are limited but suggest an additional negative radiative forcing of about -0.1 watt per square meter from 1960 to 1990.

The values for volcanic aerosol forcing derived here, being in agreement with the physics-based assessments of Lacis et al., imply that much stronger negative feedbacks must be in operation in the tropics than those resulting from the currently used model “parameterisations” and the much weaker AOD scaling factor.

These two results indicate that secondary effects of volcanism may have actually contributed to the late 20th century warming. This, along with the absence of any major eruptions since Mt Pinatubo, could go a long way to explaining the discrepancy between climate models and the relative stability of observational temperatures measurements since the turn of the century.

Once the nature of the signal has been recognised in the much less noisy stratospheric record, a similar variability can be found to exist in southern hemisphere sea surface temperatures, the slower rise in SST being accounted for by the much larger thermal inertia of the ocean mixed layer. Taking TLS as an indication of the end of the negative volcanic forcing and the beginning of the additional warming forcing, the apparent relaxation to a new equilibrium takes 3 to 4 years. Regarding this approximately as the 95% settling of three time-constant intervals ( e-folding time ) would be consistent with a time constant of between 12 and 16 months for extra-tropical southern hemisphere oceans.

These figures are far shorter than values cited in Santer 2014 ranging from 30 to 40 months which are said to characterise the behaviour of high sensitivity models and correspond to typical IPCC values of climate sensitivity. It is equally noted from lag regression plots in S&B 2011 and Trenberth that climate models are far removed from observational data in terms of producing correct temporal relationships of radiative forcing and temperature.


Figure 12. Comparing SH sea surface temperatures to lower troposphere temperature.

Though the initial effects are dominated by the relaxation response to the aerosol forcing, both figures 9 and 12 show an additional climate reaction having a significant effect later. This appears to be linked to ozone concentration and/or a reduction in other atmospheric aerosols. While these changes are clearly triggered by the eruptions, they should not be considered part of the “feedback” in the sense of the relaxation response fitted here. These effects will act in the same sense as direct radiative feedback and shorten the time-constant by causing a faster recovery. However, the settling time of extra-tropical SST to the combined changes in forcing indicates a time constant ( and hence climate sensitivity ) well short of the figures produced by analysing climate model behaviour reported in Santer et al 2014 [8].

IPCC on Clouds and Aerosols:

IPCC AR5 WG1 Full Report Jan 2014 : Chapter 7 Clouds and Aerosols:

No robust mechanisms contribute negative feedback.

7.3.4.2

The responses of other cloud types, such as those associated with deep convection, are not well determined.

7.4.4.2

Satellite remote sensing suggests that aerosol-related invigoration of deep convective clouds may generate more extensive anvils that radiate at cooler temperatures, are optically thinner, and generate a positive contribution to ERFaci (Koren et al., 2010b). The global influence on ERFaci is unclear.

Emphasis added.

WG1 are arguing from a position of self-declared ignorance on this critical aspect of how the climate system reacts to changes in radiative forcing. It is unclear how they can declare confidence levels of 95%, based on such an admittedly poor level of understanding of the key physical processes.

Conclusion

Analysis of satellite radiation measurements allows an assessment of the system response to changes in radiative forcing. This provides an estimation of the aerosol forcing that is in agreement with the range of physics-based calculations presented by Lacis et al in 1992 and thus brings into question the much lower values currently used in GCM simulations.

The considerably higher values of aerosol forcing found here and in Lacis et al imply the presence of notably stronger negative feedbacks in tropical climate and hence imply a much lower range of sensitivity to radiative forcing than those currently used in the models.

The significant lag and ensuing post-eruption recovery period underlines the inadequacy of simple linear regression and multivariate regression in assessing the magnitude of various climate ‘forcings’ and their respective climate sensitivities. Use of such methods will suffer from regression dilution, omitted variable bias and can lead to seriously erroneous attributions.

Both the TLS cooling and the energy budget analysis presented here, imply a lasting warming effect on surface temperatures triggered by the Mt Pinatubo event. Unless these secondary effects are recognised, their mechanisms understood and correctly modelled, there is a strong likelihood of this warming being spuriously attributed to some other cause such as AGW.

When attempting to tune model parameters to reproduce the late 20th century climate record, an incorrectly small scaling of volcanic forcing, leading to a spuriously high sensitivity, will need to be counter-balanced by some other variable. This is commonly a spuriously high greenhouse effect, amplified by presumed positive feedbacks from other climate variables, less well constrained by observation ( such as water vapour and cloud ). In the presence of the substantial internal variability, this can be made to roughly match the data while both forcings are present ( pre-2000 ). However, in the absence of significant volcanism there will be a steadily increasing divergence. The erroneous attribution problem, along with the absence of any major eruptions since Mt Pinatubo, could explain much of the discrepancy between climate models and the relative stability of observational temperature measurements since the turn of the century.

References:

[1] Self et al 1995
“The Atmospheric Impact of the 1991 Mount Pinatubo Eruption”
http://pubs.usgs.gov/pinatubo/self/

[2] Lacis et al 1992 : “Climate Forcing by Stratospheric Aerosols”

Click to access 1992_Lacis_etal_1.pdf

[3] Hansen et al 1997 “Forcing and Chaos in interannual to decadal climate change”

Click to access Hansen_etal_1997b.pdf

[4] Hansen et al 2002 : “Climate forcings in Goddard Institute for Space Studies SI2000 simulations”

Click to access hansen_etal_2002.pdf

[5] Spencer & Braswell 2011: “On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance”
http://www.mdpi.com/2072-4292/3/8/1603/pdf

[6] Lindzen & Choi 2011: “On the Observational Determination of Climate Sensitivity and Its Implications”

Click to access 236-Lindzen-Choi-2011.pdf

[7] Trenberth et al 2010: “Relationships between tropical sea surface temperature and top‐of‐atmosphere radiation”
http://www.mdpi.com/2072-4292/3/9/2051/pdf

 [8] Santer et al 2014: “Volcanic contribution to decadal changes in tropospheric temperature”
http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2098.html

 [8b] Supplementary Information:

Click to access ngeo2098-s1.pdf

 [9] Dessler 2010 b “A Determination of the Cloud Feedback from Climate Variations over the Past Decade”

Click to access dessler10b.pdf

 [10] Dessler 2011 “Cloud variations and the Earth’s energy budget”

Click to access Dessler2011.pdf

 [11]Forster & Gregory 2006
“The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data”

Click to access Forster_sensitivity.pdf

[12] Foster and Rahmstorf 2011: “Global temperature evolution 1979-2010”
http://stacks.iop.org/ERL/6/044022

[13] NASA Earth Observatory
http://earthobservatory.nasa.gov/Features/Volcano/

[14] Thompson & Solomon 2008: “Understanding Recent Stratospheric Climate Change”
http://journals.ametsoc.org/doi/abs/10.1175/2008JCLI2482.1

[15] Susan Solomon 2011: “The Persistently Variable “Background” Stratospheric Aerosol Layer and Global Climate Change”
http://www.sciencemag.org/content/333/6044/866

 [16] Trenberth 2002: “Changes in Tropical Clouds and Radiation”

Click to access 2095.full.pdf

 [17] Douglass & Knox 2005: “Climate forcing by the volcanic eruption of Mount Pinatubo”

Click to access 2004GL022119_Pinatubo.pdf

 [18] Douglass & Knox 2005b: “Reply to comment by A. Robock on ‘‘Climate forcing by the volcanic eruption of Mount Pinatubo’’”

Click to access reply_Robock_2005GL023829.pdf

Data sources:
[DS1] AOD data: NASA GISS
http://data.giss.nasa.gov/modelforce/strataer
Derived from SAGE (Stratospheric Aerosol and Gas Experiment) instrument.
“The estimated uncertainty of the optical depth in our multiple wavelength retrievals [Lacis et al., 2000] using SAGE observations is typically several percent. “[4]

[DS2] ERBE TOA data: Earth Radiation Budget Experiment
http://eosweb.larc.nasa.gov/sites/default/files/project/erbe/edition3_rev1/Edition3_Rev1_wfov_sf_monthly_tropics.txt

data description:

Click to access erbe_s10n_wfov_nf_sf_erbs_edition3.pdf

[*] Explanatory notes:
Negative feedback is an engineering term that refers to a reaction that opposes its cause. It is not a value judgement about whether it is good or bad. In fact, negative feedbacks are essential in keeping a system stable. Positive feedbacks lead to instability and are thus generally “bad”.

Convolution is a mathematical process that is used, amongst other things, to implement digital filters. A relaxation response can be implemented by convolution with an exponential function. This can be regarded as an asymmetric filter with a non-linear phase response. It produces a delay relative to the input and alters its shape.

Regression dilution refers to the reduction in the slope estimations produced by least squares regression when there is significant error or noise in the x-variable. Under negligible x-errors OLS regression can be shown to produce the best unbiased linear estimation of the slope. However, this essential condition is often over-looked or ignored, leading to erroneously low estimations of the relationship between two quantities. This is explained in more detail here:

On inappropriate use of least squares regression

Supplementary Information

Detailed method description.

The break of four months in the ERBE data at the end of 1993 was filled with the anomaly mean for the period to provide a continuous series. This truncates what would have probably been a small peak in the data, marginally lowering the local average, but since this was not the primary period of interest this short defect was considered acceptable.

The regression was performed for a range of time-constants from 1 to 24 months. The period for the regression was from just before the eruption, up to 1994.7, when AOD had subsided and the magnitude of the annual cycle was found to change, indicating the end of the initial climatic response ( as determined from the adaptive anomaly shown in figure 2 ).

Once the scaling factor was obtained by linear regression for each value of tau, the values were checked for regression dilution by examining the correlation of the residual of the fit with the AOD regressor while varying the value of VF in the vicinity of the fitted value. This was found to give a regular curve with a minimum correlation that was very close to the fitted value. It was concluded that the least squares regression results were an accurate estimation of the presumed linear relationship.

The low values of time-constant resulted in very high values of VF ( eg. 154 for tau = 1 month ) that were physically unrealistic and way outside the range of credible values. This effectively swamps the TOA anomaly term and is not meaningful.

tau = 6 months gave VF = 54 and this was taken as the lower limit of the range of time constants to be considered.

The two free parameters of the regression calculations will ensure an approximate fit of the two curves for each value of the time constant. The latter could then be varied to find the response that best fitted the initial rise after the eruption, the peak and the fall-off during 1993-1995.

This presented a problem since, even with the adaptive anomaly approach, there remains a significant, roughly cyclic, six month sub-annual variability that approaches in magnitude the climate response of interest. This may be at least in part a result of aliasing of the diurnal signal to a period close to six months in the monthly ERBE data, as pointed out by Trenberth [16]. The relative magnitude of the two also varies depending on the time-constant used, making a simple correlation test unhelpful in determining the best correlation with respect to the inter-annual variability. For this reason it could not be included as a regression variable.

Also, the initial perturbation and the response rise very quickly from zero to maximum in about two months. This means that low-pass filtering to remove the 6 month signal will also attenuate the initial part of the effect under investigation. For this reason a light gaussian filter ( sigma = 2 months ) was used. This has 50% attenuation at 2.3 months, comparable to a 4 month running mean filter ( without the destructive distortions of the latter ). This allowed a direct comparison of the rise, duration of the peak and rate of fall-off in the tail that is determined by the value of the time-constant parameter, and thus the value producing the best match to the climate response.
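For reference, a light gaussian filter of the kind described can be written in the same awk idiom as the other scripts here, with the kernel truncated at +/- 3 sigma and renormalised where it overlaps the ends of the series. The script name, the default sigma and the two-column input format are assumptions of this sketch.

#!/usr/bin/awk -f
# gauss.awk : gaussian smoother, kernel truncated at +/- 3 sigma.
# usage: ./gauss.awk sigma=2 file.dat   ( two columns: date value )
BEGIN { sigma = 2 }
{ t[NR] = $1; x[NR] = $2 }               # buffer the whole series
END {
  w = int( 3*sigma + 0.5 )               # half-width of truncated kernel
  for ( i = 1; i <= NR; i++ ) {
    s = 0; norm = 0
    for ( j = -w; j <= w; j++ ) {
      if ( i+j < 1 || i+j > NR ) continue  # shrink kernel at the ends
      g = exp( -j*j / (2.0*sigma*sigma) )
      s += g * x[i+j]; norm += g
    }
    print t[i], s/norm
  }
}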

Due to the relatively coarse granularity of monthly data, which limits the choice of time-constant values, it was possible to determine a single value that produced the best match between the two variables.

The whole process was repeated with and without subtraction of the oscillatory, pre-eruption variability to determine the effects of this adjustment.

The uncertainty values given are those of the fitting process and reflect the variability of the data. They do not account for the experimental uncertainty in interpreting the satellite data.

Beyond the single slab model.

A more complex model which includes a heat diffusion term to a large deep ocean sink can be seen to reduce to the same analytical form with a slightly modified forcing and an increased “effective” feedback. Both these adjustments would be small and do not change the mathematical form of the equation and hence the validity of the current method.

A typical representation of the linear model used by many authors in published works [17] is of the form:

C * d/dt(Ts) = F – λ*Ts

where Ts is surface temp anomaly ; F is radiative flux anomaly and C is the heat capacity of the mixed ocean layer. λ is a constant which is the reciprocal of climate sensitivity.

An eddy diffusion anomaly to the deep ocean heat sink at a constant temperature Td can be added using a diffusion constant κ. With the relatively small changes in surface temperature over the period, compared to the surface-thermocline temperature difference in the tropics, the square law of the diffusion equation can be approximated as linear. To within this approximation, a two slab diffusion model has the same form as the single slab model.

A simple rearrangement shows that this is equivalent to the original model but with a slightly modified flux and a slightly increased feedback parameter.

C * d/dt(Ts) + κ*(Ts – Td) = F – λ*Ts

C * d/dt(Ts) = F – λ*Ts – κ*(Ts – Td)

C * d/dt(Ts) = F + κ*Td – ( λ + κ )*Ts

This does not have a notable effect on the scaling of the volcanic forcing derived herein, nor on the presence of the anomalous, post-eruption warming effect, which is seen to continue beyond the period of the AOD disruption and the tropical climate response.

Triple running mean filter

The following script calls a simple running mean three times with appropriate window sizes to effect a triple running mean, as described in the article ( as amended for the asymmetric kernel to minimise negative leakage ):

Data corruption by running mean “smoothers”

It requires the runmean.awk script found here:

simple running-mean script ( as base for triple RM )


#!/bin/bash

# call runmean.awk three times to compose a triple running mean
# usage: ./r3m.sh file window_len   ; default window is 12 data points

if [ "x$1" == "x" ]; then
  echo "$0 : err no file name,   usage: $0 filename [window]"
  exit 1
else
  fn=$1
fi

if [ "x$2" == "x" ]; then win=12; else win=$2; fi

# previous version used the ratio 1.3371 for both stages:
#win2=`awk "BEGIN{print $win/ 1.3371}"`
#win3=`awk "BEGIN{print $win/ 1.3371/ 1.3371}"`

# asymmetric stages with the following window ratios minimise the
# negative lobe in the frequency response
k=1.15; k2=1.58

win2=`awk "BEGIN{print $win / $k }"`
win3=`awk "BEGIN{print $win / $k2 }"`

# strip the 4-character extension ( eg ".dat" ) and build the output name
outfile=`echo "$fn" | awk '{ print substr($1,1,length($1)-4) }'`
outfile+="-3rm"$win".dat"

echo "# triple running mean :  $win  $win2  $win3 " > "$outfile"

# pipe the data through the three successively shorter running means
cat "$fn" | ./runmean.awk - $win | ./runmean.awk - $win2 | ./runmean.awk - $win3 >> "$outfile"

echo "# triple running mean :  $win  $win2  $win3 "
echo "# outfile = $outfile"

On inappropriate use of least squares regression

https://climategrog.wordpress.com/wp-content/uploads/2013/11/ols_scatterplot_regression2.png?w=670

Figure 1 showing conventional and inverse ‘ordinary least squares’ fits to some real, observed climate variables.

Ordinary least squares regression ( OLS ) is a very useful technique, widely used in almost all branches of science. The principle is to adjust one or more fitting parameters to attain the best fit of a model function, according to the criterion of minimising the sum of the squared deviations of the data from the model.

It is usually one of the first techniques that is taught in schools for analysing experimental data. It is also a technique that is misapplied almost as often as it is used correctly.

It can be shown that, under certain conditions, the least squares fit is the best estimation of the true relationship that can be derived from the available data. In statistics this is often called the ‘best linear unbiased estimator’ of the slope. ( Those who enjoy contrived acronyms abbreviate this to “BLUE”. )

It is a fundamental assumption of this technique that the independent variable ( x-axis ) has negligible error: it is a “controlled variable”. It is the deviations of the dependent variable ( y-axis ) that are minimised. In the case of fitting a straight line to the data, it has been known since at least 1878 that this technique will under-estimate the slope if there are measurement or other errors in the x-variable. (R. J. Adcock) [0]

There are two main conditions for this result to be an accurate estimation of the slope. One is that the deviations of the data from the true relationship are ‘normally’ or gaussian distributed, that is to say, of a random nature. This condition can be violated by significant periodic components in the data or by an excessive number of outlying data points. The latter may often occur when only a small number of data points is available and the noise, even if random in nature, is not sufficiently sampled to average out.

The other main condition is that there be negligible error ( or non-linear variability ) in the x-variable. If this condition is not met, the OLS result derived from the data will almost always under-estimate the slope of the true relationship. This effect is sometimes referred to as regression dilution. The degree by which the slope is under-estimated depends on the nature of both the x and y errors, but most strongly on those in x, since they are required to be negligible for OLS to give the best estimation.

In this discussion, “errors” can be understood to be both observational inaccuracies and any variability due to some factor other than the supposed linear relationship which the regression of the two variables seeks to determine.

In certain circumstances regression dilution can be corrected for, but in order to do so, some knowledge of the nature and size of both the x and y errors is required. Typically this is not available, beyond knowing whether the x-variable is a ‘controlled variable’ with negligible error, although several techniques have been developed to estimate the error in the estimation of the slope [7].
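In the simplest case, where the x errors are independent of the true values and their variance is known, the classical correction divides the OLS slope by the reliability ratio:

\hat{\beta} = \frac{\beta_{OLS}}{r}, \qquad r = \frac{\sigma_{x}^2}{\sigma_{x}^2 + \sigma_{err}^2}

where σx² is the variance of the true x values and σerr² that of the x errors. In practice σerr² is rarely known for climate data, which is why the problem usually cannot be corrected this directly.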

A controlled variable can usually be attained in a controlled experiment, or when studying a time series, provided that the date and time of observations have been recorded and documented in a precise and consistent manner. It is typically not the case when both sets of data are observations of different variables, as is the case when comparing two quantities in climatology.

One way to demonstrate the problem is to invert the x and y axes and repeat the OLS fit. If the result were valid irrespective of orientation, the first slope would be the reciprocal of the second. However, this is only the case when there are very small errors in both variables, ie. the data is highly correlated ( grouped closely around a straight line ). In the case of one controlled variable and one error-prone variable, the inverted result will be incorrect. In the case of two datasets containing observational error, both results will be wrong and the correct result will generally lie somewhere in between.
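This is easy to verify with synthetic data. The following awk sketch ( an illustration only, with arbitrary noise levels chosen for this example ) generates a known slope of 2 with gaussian noise added to both variables, then prints the forward fit, the inverted fit and their geometric mean:

#!/usr/bin/awk -f
# regression dilution demo: true slope = 2, gaussian noise on both x and y.
# prints the OLS slope of y on x ( diluted ), the reciprocal of the OLS
# slope of x on y ( inflated ) and their geometric mean.
function gauss() { return sqrt( -2*log(1-rand()) ) * cos( 6.283185307*rand() ) }
BEGIN{
  srand(42); n = 1000; m = 2.0
  for ( i = 1; i <= n; i++ ){
    xt = i/n                      # true controlled variable
    x  = xt   + 0.2*gauss()       # observed x with error
    y  = m*xt + 0.2*gauss()       # observed y with error
    sx += x; sy += y; sxx += x*x; syy += y*y; sxy += x*y
  }
  byx = ( n*sxy - sx*sy ) / ( n*sxx - sx*sx )   # y regressed on x
  bxy = ( n*sxy - sx*sy ) / ( n*syy - sy*sy )   # x regressed on y
  printf "y-on-x: %.3f   inverted: %.3f   geom. mean: %.3f   ( true: %.1f )\n", byx, 1/bxy, sqrt( byx/bxy ), m
}

The forward fit comes out well below 2 and the inverted fit well above it, with the true value lying in between, exactly as described above.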

Another way to check the result is to examine the cross-correlation between the residual and the independent variable, ie. ( model – y ) vs x, and then repeat for incrementally larger values of the fitted ratio. Depending on the nature of the data, it will often be obvious that the OLS result does not produce the minimum residual between the ordinate and the regressor, ie. it does not optimally account for the co-variability of the two quantities.

In the latter situation, the two regression fits can be taken as bounding the likely true value, but some knowledge of the relative errors is needed to decide where in that range the best estimation lies. There are a number of techniques, such as bisecting the angle or taking the geometric mean ( the square root of the product of the two slopes ), but ultimately they are no more objective unless driven by some knowledge of the relative errors. Clearly bisection would not be correct if one variable had low error, since the true slope would then be close to the OLS fit done with that quantity on the x-axis.



Figure 2. A typical example of linear regression of two noisy variables produced from synthetic randomised data. The true known slope used in generating the data is seen in between the two regression results. ( Click to enlarge graph and access code to reproduce data and graph. )



Figure 2b. A typical example of correct application of linear regression to data with negligible x-errors. The regressed slope is very close to the true value, so close as to be indistinguishable visually. ( Click to enlarge )

The larger the x-errors, the greater the skew in the distribution and the greater the dilution effect.

An illustration: the Spencer simple model

The following case is used to illustrate the issue with ‘climate-like’ data. However, it should be emphasised that the problem is an objective mathematical one, the principle of which is independent of any particular test data used. Whether the following model is an accurate representation of climate ( it is not claimed to be ) has no bearing on the regression problem.

In a short article on his site, Dr. Roy Spencer provided a simple, single-slab-ocean climate model with a predetermined feedback parameter built into it. He observed that attempting to derive the climate sensitivity in the usual way consistently under-estimated the known feedback used to generate the data.

By specifying that sensitivity (with a total feedback parameter) in the model, one can see how an analysis of simulated satellite data will yield observations that routinely suggest a more sensitive climate system (lower feedback parameter) than was actually specified in the model run.

And if our climate system generates the illusion that it is sensitive, climate modelers will develop models that are also sensitive, and the more sensitive the climate model, the more global warming it will predict from adding greenhouse gasses to the atmosphere.

This is a very important observation. Regressing noisy radiative flux change against noisy temperature anomalies does consistently produce incorrectly high estimations of climate sensitivity. However, it is not an illusion created by the climate system; it is an illusion created by the incorrect application of OLS regression. When there are errors on both variables, the OLS slope is no longer an accurate estimation of the underlying linear relationship being sought.
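The mechanism is easy to reproduce in a few lines of code. The following awk sketch ( a schematic with illustrative parameter values, not Dr Spencer’s model ) steps a single-slab ocean forced by random non-radiative and radiative noise, then regresses the simulated radiative flux against temperature; the recovered slope comes out noticeably below the feedback specified:

#!/usr/bin/awk -f
# schematic single-slab model:  Cp*dT = ( S - lam*T - N )
# S: random non-radiative ( ocean ) forcing, N: random radiative noise.
# the "measured" flux is R = lam*T + N; regressing R on T under-estimates
# the specified feedback lam. parameter values are illustrative only.
function gauss() { return sqrt( -2*log(1-rand()) ) * cos( 6.283185307*rand() ) }
BEGIN{
  srand(1); lam = 3.0; Cp = 50; n = 5000; T = 0
  for ( i = 1; i <= n; i++ ){
    S = 3*gauss(); N = gauss()
    T += ( S - lam*T - N ) / Cp     # time step absorbed into Cp
    R = lam*T + N                   # what the satellite would see
    st += T; sr += R; stt += T*T; str += T*R
  }
  slope = ( n*str - st*sr ) / ( n*stt - st*st )
  printf "specified feedback: %.2f   regressed: %.2f\n", lam, slope
}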

Dr Spencer was kind enough to provide an implementation of the simple model in the form of a spreadsheet download so that others may experiment and verify the effect.

To demonstrate this problem, the spreadsheet provided was modified to duplicate the dRad vs dTemp graph but with the axes inverted, ie. using exactly the same data for each run but additionally displaying it the other way around. Thus the ‘trend line’ provided by the spreadsheet is calculated with the variables inverted. No changes were made to the model.

Three values of the predetermined feedback parameter were used in turn: 0.9 and 1.9, which Roy Spencer suggests represent the range of IPCC values, and 5.0, which he proposes as closer to the value he has derived from satellite observational data.

Here is a snap-shot of the spreadsheet showing a table of results from nine runs for each feedback parameter value. Both the conventional and the inverted regression slopes and their geometric mean have been tabulated.

Figure 3. Snap-shot of spreadsheet, click to enlarge.

Firstly, this confirms Roy Spencer’s observation that the regression of dRad against dTemp consistently and significantly under-estimates the feedback parameter used to create the data in the first place ( and hence over-estimates the climate sensitivity of the model ). In this limited test, the error is between a third and a half of the correct value. Only one value of the conventional least squares slope is greater than the respective feedback parameter value.

Secondly, it is noted that the geometric mean of the two OLS regressions does provide a reasonably close estimate of the true feedback parameter for the value derived from satellite observations. Variations are fairly evenly spread either side: the mean is only slightly higher than the true value and the standard deviation is about 9% of the mean.

However, for the two lower feedback values, representing the IPCC range of climate sensitivities, while the usual OLS regression is substantially less than the true value, the geometric mean over-estimates it and so does not provide a reliable correction over the range of feedbacks.

All the feedbacks represent a net negative feedback ( otherwise the climate system would be fundamentally unstable ). However, the IPCC range of values represents less negative feedbacks, and thus a less stable climate. This can be seen reflected in the degree of variability of the data plotted in the spreadsheet. The standard deviations of the slopes are also somewhat higher, as can be expected with less feedback controlling variations.

It can be concluded that the ratio of the proportional variability in the two quantities changes as a function of the degree of feedback in the system. The geometric mean of the two slopes does not provide a good estimation of the true feedback for the less stable configurations which have greater variability. This is in agreement with Isobe et al 1990 [7] which considers the merits of several regression methods.

The simple model helps to see how this relates to Rad / Temp scatter plots and climate sensitivity. However, the problem of regression dilution is a totally general mathematical result and can be reproduced from two series having a linear relationship with added random changes, as shown above.

What the papers say

A quick review of several recent papers on the problems of estimating climate sensitivity shows a general lack of appreciation of the regression dilution problem.

Dessler 2010 b [1] :

Estimates of Earth’s climate sensitivity are uncertain, largely because of uncertainty in the long-term cloud feedback.

Spencer & Braswell 2011 [2] :

Abstract: The sensitivity of the climate system to an imposed radiative imbalance remains the largest source of uncertainty in projections of future anthropogenic climate change.

There seems to be agreement that this is the key problem in assessing future climate trends. However, many authors seem unaware of the regression problem and much published work on this issue seems to rely heavily on the false assumption that OLS regression of dRad against dTemp can be used to correctly determine this ratio, and hence various sensitivities and feedbacks.

Trenberth 2010 [3] :

To assess climate sensitivity from Earth radiation observations of limited duration and observed sea surface temperatures (SSTs) requires a closed and therefore global domain, equilibrium between the fields, and robust methods of dealing with noise. Noise arises from natural variability in the atmosphere and observational noise in precessing satellite observations.

Whether or not the results provide meaningful insight depends critically on assumptions, methods and the time scales ….

Indeed so. Unfortunately, he then goes on, relying on inappropriate use of regression, to contradict earlier work by Lindzen and Choi that did address the OLS problem and included a detailed statistical analysis comparing the results. This is certainly not an example of the “robust methods” he is calling for.


Figure 4. Excerpt from Lindzen & Choi 2011, figure 7, showing consistent under-estimation of the slope by OLS regression ( black line ).

Spencer and Braswell 2011 [2]

As shown by SB10, the presence of any time-varying radiative forcing decorrelates the co-variations between radiative flux and temperature. Low correlations lead to regression-diagnosed feedback parameters biased toward zero, which corresponds to a borderline unstable climate system.

This is an important paper, highlighting the need to take account of the lagged response of the climate during regression in order to avoid the decorrelating effect of delays in the response. However, it is still ultimately based on regression of two error-laden variables and thus does not deal with the further attenuation due to regression dilution, which is also present in this situation. It is therefore likely that this paper, too, over-estimates sensitivity.

Dessler 2011 [4] :

Using a more realistic value of σ(dF_ocean)/σ(dR_cloud) = 20, regression of TOA flux vs. dTs yields a slope that is within 0.4% of λ.

Then in the conclusion of the paper, emphasis added:

Rather, the evolution of the surface and atmosphere during ENSO variations are dominated by oceanic heat transport. This means in turn that regressions of TOA fluxes vs. δTs can be used to accurately estimate climate sensitivity or the magnitude of climate feedbacks.

Also from a previous paper:

Dessler 2010 b [1]

The impact of a spurious long-term trend in either dRall-sky or dRclear-sky is estimated by adding in a trend of ±0.5 W/m2/decade into the CERES data. This changes the calculated feedback by ±0.18 W/m2/K. Adding these errors in quadrature yields a total uncertainty of 0.74 and 0.77 W/m2/K in the calculations, using the ECMWF and MERRA reanalyses, respectively. Other sources of uncertainty are negligible.

The author was apparently unaware that the inaccuracy of regressing two uncontrolled variables is a major source of uncertainty and error.

Lindzen & Choi 2011 [5]

[Our] new method does moderately well in distinguishing positive from negative feedbacks and in quantifying negative feedbacks. In contrast, we show that simple regression methods used by several existing papers generally exaggerate positive feedbacks and even show positive feedbacks when actual feedbacks are negative.

… but we see clearly that the simple regression always under-estimates negative feedbacks and exaggerates positive feedbacks.

Here the authors have clearly noted that there is a problem with the regression based techniques and go into quite some detail in quantifying the problem, though they do not explicitly identify it as being due to the presence of uncertainty in the x-variable distorting the regression results.

The L&C papers, to their credit, recognise that regression based methods on poorly correlated data seriously under-estimates the slope and utilise techniques to more correctly determine the ratio. They show probability density graphs from Monte Carlo tests to compare the two methods.

It seems these authors are exceptional in looking at the sensitivity question without relying on inappropriate use of linear regression. This is certainly part of the reason that their results are considerably lower than those of almost all other authors on this subject.

Forster & Gregory 2006 [8]

For less than perfectly correlated data, OLS regression of Q-N against δTs will tend to underestimate Y values and therefore overestimate the equilibrium climate sensitivity (see Isobe et al. 1990).

Another important reason for adopting our regression model was to reinforce the main conclusion of the paper: the suggestion of a relatively small equilibrium climate sensitivity. To show the robustness of this conclusion, we deliberately adopted the regression model that gave the highest climate sensitivity (smallest Y value). It has been suggested that a technique based on total least squares regression or bisector least squares regression gives a better fit, when errors in the data are uncharacterized (Isobe et al. 1990). For example, for 1985–96 both of these methods suggest YNET of around 3.5 ± 2.0 W m−2 K−1 ( a 0.7–2.4 K equilibrium surface temperature increase for 2 × CO2 ), and this should be compared to our 1.0–3.6 K range quoted in the conclusions of the paper.

Here, the authors explicitly state the regression problem and its effect on the results of their study on sensitivity. However, writing in 2005, they apparently feared that presenting the mathematically more accurate, but lower, figures would impede the acceptance of what was already a low value of climate sensitivity.

It is interesting to note that Roy Spencer, in a non-peer-reviewed article, found a very similar figure of 3.66 W/m2/K by comparing ERBE data to MSU-derived temperatures following Mt Pinatubo. [10]

So Forster and Gregory felt constrained to bury their best estimation of climate sensitivity, and the discussion of the regression problem, in an appendix. In view of the ‘gatekeeper’ activities revealed in the Climategate emails, this may have been a wise judgement in 2005.

Now, ten years after the publication of F&G 2006, proper application of the best mathematical techniques available to correct this systematic over-estimation of climate sensitivity is long overdue.

A more recent study Lewis & Curry 2014 [11] used a different method of identifying changes between selected periods and thus is not affected by regression issues. This method also found lower values of climate sensitivity.


Conclusion

Inappropriate use of linear regression can produce spurious and significantly low estimations of the true slope of a linear relationship if both variables have significant measurement error or other perturbing factors.

This is precisely the case when attempting to regress modelled or observed radiative flux against surface temperatures in order to estimate sensitivity of the climate system.

In the sense that this regression is conventionally done in climatology, it will under-estimate the net feedback factor (often denoted as ‘lambda’). Since climate sensitivity is defined as the reciprocal of this term, this results in an over-estimation of climate sensitivity.

If an incorrect evaluation of climate sensitivity from observations is used as a basis for the choice of parametrised inputs to climate models, the resulting models will be over-sensitive and produce exaggerated warming. Similarly, faulty analyses of their output will further inflate the apparent model sensitivity.

This situation may account for the difference between regression-based estimations of climate sensitivity and those produced by other methods. Many techniques to reduce this effect are available in the broader scientific literature, though there is no single, generally applicable solution to the problem.

Those using linear regression to assess climate sensitivity need to account for this significant source of error when supplying uncertainty values in published estimations of climate sensitivity, or take steps to address the issue.

The decorrelation due to simultaneous presence of both the in-phase and orthogonal climate reactions, as noted by Spencer et al, also needs to be accounted for to get the most accurate information from the available data. One possible approach to this is detailed here: https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

A mathematical explanation of the origin of regression dilution is provided here:
On the origins of regression dilution


References

 [1] Dessler 2010 b “A Determination of the Cloud Feedback from Climate Variations over the Past Decade”

Click to access dessler10b.pdf

 [2] Spencer and Braswell 2011: “On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance”
http://www.mdpi.com/2072-4292/3/8/1603/pdf

 [3] Trenberth et al 2010 “Relationships between tropical sea surface temperature and top‐of‐atmosphere radiation”
http://www.mdpi.com/2072-4292/3/9/2051/pdf

 [4] Dessler 2011
“Cloud variations and the Earth’s energy budget”

Click to access Dessler2011.pdf

 [5] Lindzen & Choi 2011 “On the Observational Determination of Climate Sensitivity and Its Implications”

Click to access 236-Lindzen-Choi-2011.pdf

 [6] Nic Lewis : “A Sensitive Matter: How The IPCC Buried Evidence Showing Good News About Global Warming ”
http://www.thegwpf.org/content/uploads/2014/02/A-Sensitive-Matter-Foreword-inc.pdf

 [7] Isobe et al 1990 “Linear Regression in Astronomy I”
http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?journal=ApJ..&year=1990&volume=.364&letter=.&db_key=AST&page_ind=116&data_type=GIF&type=SCREEN_GIF&classic=YES

 [8] Forster & Gregory 2006
“The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data”

Click to access Forster_sensitivity.pdf

Adcock, R.J., 1878. “A Problem in Least Squares”. The Analyst, 5, 53–54.

 [0] Quirino Paris 2004 : “Robust Estimators of Errors-In-Variables Models Part I”

Click to access 04-007.pdf

 [10] R Spencer “Revisiting the Pinatubo Eruption as a Test of Climate Sensitivity”

Revisiting the Pinatubo Eruption as a Test of Climate Sensitivity

 [11] Lewis & Curry 2014
“The implications for climate sensitivity of AR5 forcing and heat uptake estimates”

Click to access lewiscurry_ar5-energy-budget-climate-sensitivity_clim-dyn2014_accepted-reformatted-edited.pdf