The Death Spiral is Dead

Introduction

This year, as every year, there has been much excitement in the media about ‘catastrophic’ melting of Arctic sea-ice, run-away melting, tipping points, death spirals and “ice-free” summers.

There has been the usual guessing game about when exactly the minimum will occur, or has occurred, and what the ice area or extent will be on that day.

Claims of ‘ice-free’ conditions at some time in the summer have been bandied about for years in various forms, but as the reality sinks in that it is not as bad as some had claimed, the dates when this is expected to happen have often been pushed out beyond the life expectancy of those making the claims.

The meaning of “ice-free” has also been the subject of some serious misrepresentation.

IPCC AR5 states:

Year-round reductions in Arctic sea ice are projected for all RCP scenarios. A nearly ice-free [11] Arctic Ocean in the summer sea-ice minimum in September before mid-century is likely for RCP8.5 [12] (medium confidence). {2.2.3, Figure 2.1}

RCP8.5 is the most extreme emissions ‘pathway’ and is already likely to be avoided.
Footnote 11 defines the term “nearly ice-free”:

11 When sea-ice extent is less than one million km2 for at least five consecutive years.

IPCC consistently uses the term “nearly ice-free”, yet this often gets perverted to “ice-free” in the media and in scientific loose talk, and the significant rider “for at least five consecutive years” gets dropped.

The problem with this obsessive focus on one single data point out of 365 is that there is a lot of short-term, weather-driven variability that can affect the exact timing and size of the minimum in ice coverage. Since the main interest ( outside maritime navigational requirements ) is the hope of finding some indication of long-term changes in climate, this is not a very instructive way to use the detailed daily data available.

There have been three notably low summer minima in recent years: 2007, 2012 and 2016. The 2012 event was the lowest in the satellite record going back to 1979. The other two years tie for second place, meaning the current minimum is indistinguishable from the one which occurred nine years previously, and the lowest one lies between the two. This is incompatible with claims of run-away melting. That was a reasonable hypothesis and cause for concern in 2007, when the relatively short record could be characterised as showing an increasing rate of change, but this interpretation is not compatible with what has happened since.

In the golden days of “run-away” melting, leading up to the ‘catastrophic’ 2007 Arctic sea-ice minimum, this was often presented as the ‘canary in the coal-mine’ for global warming and climate change. It is still referred to as such.

The aim here is to try to separate the changes in the annual cycle from the weather-driven and other short-term variability and to see if we can learn something about the life expectancy of our canary.

Method and Results


In order to determine an objective timing for the sea-ice minimum, it is necessary to filter out short-term variability. Visual inspection of the northern hemisphere sea-ice data [1] shows that there are periods of about a week in length having alternating greater and lesser rates of change throughout most of the year. Spectral analysis also reveals notable cyclic components at around 4, 10 and 16 days. As a result, the date of the minimum tends to fluctuate rather erratically over the first three weeks of September.

The criterion adopted here for having sufficiently removed the short-term variability, in order to get a stable estimate of the turning point ( minimum sea-ice extent ) of the annual cycle, was that there should be only one change of direction. Since the annual cycle has only one minimum, anything producing multiple localised minima is noise in this context.

To detect this condition, the rate of change of the NSIDC NH sea-ice extent was taken and a 3-sigma gaussian low-pass filter applied. The length of the filter was increased progressively until all years in the data had a unique turning point ( i.e. a single change of sign in the time derivative ).

This was achieved in practice by a single operation: convolution of the daily data with a diff-of-gaussian kernel [3] of the chosen length. The data point at the change of sign was taken as the date of the minimum in the annual cycle for each year.
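For concreteness, a minimal sketch of this operation in Python/NumPy is given below. The July-December search window, the decimal-year date format and the variable names are illustrative assumptions, not details taken from the analysis itself.

import numpy as np

def diff_of_gauss_kernel(sigma):
    """Derivative-of-gaussian kernel, truncated at 3 sigma.
    Convolving daily extent data with this kernel gives a
    low-pass filtered estimate of its rate of change."""
    n = int(3 * sigma)
    x = np.arange(-n, n + 1, dtype=float)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return -x / sigma**2 * g / g.sum()

def annual_minimum_dates(dates, extent, sigma):
    """One turning point per year: the single sign change of the
    smoothed derivative from negative (melting) to positive
    (refreezing). `dates` are decimal years, daily resolution."""
    rate = np.convolve(extent, diff_of_gauss_kernel(sigma), mode="same")
    out = {}
    for yr in np.unique(dates.astype(int)):
        sel = np.where((dates >= yr + 0.5) & (dates < yr + 1.0))[0]  # Jul-Dec
        flips = np.where(np.diff(np.sign(rate[sel])) > 0)[0]
        if len(flips) == 1:           # unique turning point: filter long enough
            out[yr] = dates[sel[flips[0]]]
    return out

As in the text, sigma would be increased until every year returns exactly one turning point.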

As is typically the case for satellite records, the data processed by NSIDC come from a number of different satellites as instruments and spacecraft fail. They are cross-calibrated as well as possible to provide one continuous time series. One notable change in the data is that the earlier orbits only provided full surface coverage every six days [2] and were originally processed to give a data point every three days. These have apparently been interpolated to give a homogeneous time series. Obviously, data with only a three-day time resolution will be less accurate at determining the exact date of the turn-around. To mark the different nature of the early data, this section is plotted in a different colour.

Comparison to the minima of the 5 day trailing average presented by NSIDC shows the expected reduction in the variability that was the aim of this study. This manifests primarily as a reduced occurrence of later minimum dates. The pre-1988 data change little. This is probably due to the blurring effect of the extended 6 day orbital coverage being similar to the gaussian smoothing applied in this analysis.

Since the rate of change at the beginning of refreezing tends to be faster than at the end of melting, removing higher frequencies with a filter introduces a general drift to an earlier date of minimum ice extent. This simply reflects the asymmetry of the annual cycle. It underlines the caution needed in merging the two sections of the dataset: the earlier Nimbus and the later military satellites, which had different orbital resolution. The data are not fully compatible, and comparing dates of ice minimum without suitable filtering will give an exaggerated impression of later melting in the post-Nimbus years.


Discussion

There is a clearly non-linear nature to the variation in the timing of the annual minimum in Arctic sea-ice extent from NSIDC. The turning point reached its latest date in 2007 and has been getting progressively earlier ever since.
That is to say, the freezing season has been beginning steadily earlier for about a decade, having drifted in the opposite sense for the previous two decades.

There is much talk, based on naive assumptions, of the effects of reduced ice area on the energy budget of the region. The simplistic argument is that less solar energy will be reflected by water compared to an ice or snow surface, leading to further warming and thus ever more melting. However, there are several other effects which need to be measured and taken into account to determine whether this extra energy input will cause an overall positive feedback ( and accelerated melting ) or whether it will be countered by other effects. Open water emits more infra-red energy to space, and evaporation of surface water in the windy Arctic region will remove substantial amounts of heat from the surface. Both of these effects continue during all 12 months of the year, not just the short summer. It is far from obvious how these factors and others will balance out. Climate models are notoriously poor at reproducing both Arctic temperatures and sea ice coverage ( they tend to under-estimate both the temperature and the ice loss ). Clearly the ‘basic physics’ for this region is poorly understood.

The observational data can help shed some light on how the region is responding to the reduced ice cover since the 2007 minimum.

The decadal rate of melting has also reduced since 2007, as measured from the ice area estimations provided by the University of Illinois [4].

The derivation of that graph is detailed here:
https://climategrog.wordpress.com/2013/09/16/on-identifying-inter-decadal-variation-in-nh-sea-ice

Conclusion

There are two ways to interpret the observational data:

1) the net feedback from open water is negative, not positive, and run-away melting was an erroneous interpretation. It is not happening.

2) the feedbacks are not the key driver of Arctic sea ice melting; another external factor, such as N. Atlantic sea surface temperature and ocean currents, is dominant, and run-away melting was an erroneous interpretation. It is not happening.

The death spiral is dead.


Resources

[1] The NH sea-ice extent data are provided by NSIDC as daily anomalies from an average cycle, plus the annual cycle which has been subtracted. The “near real time” data for the current, incomplete year are provided as a separate file in the same format.

Documentation for the NSIDC data : http://nsidc.org/data/docs/noaa/g02135_seaice_index


[2] DMSP satellite F8 was the first of a series of spacecraft, flown by the US Department of Defense from 7 September 1987. Data from this family are combined with those from the earlier NASA-provided Nimbus-7 spacecraft, which flew similar but somewhat different polar orbits.

https://nsidc.org/data/docs/daac/nimbus-7_platform.gd.html

“The Nimbus-7 observatory provides global coverage every six days, or every 83 orbits.”

This section of the combined record is less reliable for estimating the turning point due to the lack of time resolution. The data have to be in-filled to produce a daily data series, and the contributions to one ice extent figure are actually spread over 6 days of flight time.

The DMSP 5D family of spacecraft provide daily resolution data:
http://nsidc.org/data/nsidc-0001


[3] An implementation of the diff-of-gaussian filter is presented here:

https://climategrog.wordpress.com/2016/09/18/diff-of-gaussian-filter/


[4] The sea-ice area data used in the decadal trend analysis are provided by the Cryosphere Today team at U. Illinois.

http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008

Processed data of turning points after 14d gaussian filter ( first column: date of minimum as a decimal year; second column: the fractional part of the year, i.e. the date of the minimum within the year ).

1979.69756852    0.6976
1980.68047869    0.6805
1981.69076810    0.6908
1982.69010575    0.6901
1983.68670554    0.6867
1984.69425698    0.6943
1985.68811892    0.6881
1986.68198074    0.6820
1987.67584268    0.6758
1988.68339412    0.6834
1989.69642138    0.6964
1990.68754535    0.6875
1991.69783470    0.6978
1992.68074494    0.6807
1993.68829638    0.6883
1994.68215826    0.6822
1995.69792343    0.6979
1996.68357158    0.6836
1997.69386093    0.6939
1998.69593655    0.6959
1999.68979843    0.6898
2000.69187404    0.6919
2001.69394966    0.6939
2002.68781154    0.6878
2003.69810089    0.6981
2004.69196277    0.6920
2005.70225212    0.7023
2006.69611400    0.6961
2007.70366544    0.7037
2008.69752732    0.6975
2009.69686503    0.6969
2010.69620273    0.6962
2011.69280252    0.6928
2012.69761605    0.6976
2013.69147793    0.6915
2014.69629146    0.6963
2015.69015334    0.6902

On the relationship of ACE and SST

Introduction

Sea surface temperature has been shown to be a key condition for the development of tropical cyclonic storms, though there are strongly conflicting opinions on its impact on long-term trends. The present study looks at the correlation of accumulated cyclone energy and sea surface temperature in the N. Atlantic basin and finds common periodicities linking the two. Notably, a clear lunar period is revealed. The similarity between the post-2005 drop in cyclone energy and a similar drop in the record from the 1930-1950 period is also discussed.


Figure 1. Comparing North Atlantic SST to Accumulated Cyclone Energy

Data Sources:

Accumulated cyclone energy ( ACE ) is an estimation of the kinetic energy contained in cyclone-class storms, based on wind speed, storm extent and other information. It is catalogued both by storm and as annual totals. In the following, all ACE data [DS1] and discussion thereof refer to the North Atlantic basin. Much research has been done recently to try to improve the earlier periods of the record. [1] [2]

The ACE metric is intended to provide better insight than simply counting the number of storms, since storm counts have been strongly affected by improvements in detection and reporting, and by possible statistical variation in the proportions of storm intensity. Many smaller storms are now reported that would not have been detected without modern surveying techniques [2]. Since storm energy originates in ocean heat energy ( Gray 1968 ) [3], total cyclone energy is a more appropriate metric than non-dimensional quantities such as storm count when searching for a possible causal link.

Sea surface temperature data ( SST ) are taken from two processed data sources, both derived primarily from ICOADS source observations.

HadSST3 [DS2] is an “anomaly” dataset, i.e. deviations from the local seasonal mean over an arbitrary reference period. Data from 0-70W, 0-60N were averaged.

ERSSTv3 [DS3] is a dataset of actual SST. A single offset was subtracted from the N. Atlantic mean to align it with HadSST3. NOAA provide a detrended N. Atlantic average which they call the AMO index. The data used here are the NON-detrended version, i.e. actual SST.

Data Processing:

Since ERSST has a strong annual variation, and HadSST3 has considerable high-frequency noise and annual residuals, both were low-pass filtered using a 3-sigma gaussian filter with sigma = 12 months. This removes most variability with periods below about 2 years.
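As an illustration, such a filter is only a few lines of code ( a sketch in Python/NumPy, assuming monthly-resolution input; end effects are ignored for brevity ):

import numpy as np

def gaussian_lowpass(x, sigma=12):
    """Gaussian low-pass filter with the kernel truncated at 3 sigma.
    With sigma = 12 (months) this strongly attenuates variability
    with periods below roughly two years."""
    n = int(3 * sigma)
    t = np.arange(-n, n + 1, dtype=float)
    w = np.exp(-0.5 * (t / sigma) ** 2)
    return np.convolve(x, w / w.sum(), mode="same")  # ends are biased: trim in practice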

Considerable adjustments are made to SST for a number of reasons, which are discussed on the Met. Office web site. In particular, substantial adjustments are made to the WWII period and subsequent years ( many of which are similarly adopted for both data sources used here ):

  1. ERI biases (North Atlantic): In the North Atlantic between 1970 and 1994, Engine Room Intake biases were generated from the means and standard errors from Kent and Kaplan 2006.
  2. Unknown measurements: AR(1) series with lag correlation 0.99 and range between 0 and 1 were used to assign a time varying fraction of unknown measurements to bucket. The remainder were set to be ERI measurements. The same value was used at all locations.
  3. ERI recorded as bucket: 30±10% of bucket observations were reassigned as Engine Room Intake measurements. One value per realisation was used and applied to all times and places after 1940.
  4. Canvas to rubber bucket: A linear switchover was assumed. The start point (all canvas) was chosen randomly between 1954 and 1957. The end point (all rubber) was chosen randomly between 1970 and 1980.

Most of the post-WWII cooling is the result of these somewhat speculative “bias corrections”, in particular the phasing in of the “canvas to rubber bucket” adjustment from 1954 to 1980. The adjustments made to HadSST3 were discussed in detail in the article “On the adjustments to the HadSST3 data set”.

The N.Atlantic annual cyclone season peaks in late summer. Annual ACE data were plotted at the mid-point of the year. Where ACE is shown low-pass filtered, a similar 3-sigma gaussian filter was used.

ACE data will have also been affected by changes in shipping routes and usage during the WWII period, possibly leading to significant non-reporting of storms well away from land.

In general all three datasets should be regarded as unreliable during the WWII period.

Discussion

Similarities :

There is a strong similarity in the ACE and SST data, both at inter-decadal and inter-annual scales. The much discussed 60 year pseudo-cyclic variability is clearly seen in both SST and ACE. The inter-annual variability is particularly close in the more reliable post-1950 period but shows a strong similarity throughout the record. Since cyclones derive their energy from the heat of the ocean surface [3], this is not particularly surprising, but it provides a means of comparing two fundamentally different measurements of climatic conditions.

Differences:

There are two very visible discrepancies between the two metrics. War-time ACE is very low and probably reflects some degree of under-sampling in that period. However, the low activity runs from 1934 to 1949, a far broader period than can be accounted for by WWII disruption to shipping. Also, systematic airborne monitoring started in 1944 ( Klotzbach & Landsea 2015 ) [4] and there was no noticeable increase in ACE until 1950, suggesting that the lower level of activity in this period, though less reliably sampled, is real. A detailed account of uncertainty in the HURDAT data is provided by Landsea & Franklin (2013) [5].

There is a large peak in ACE in 1893 that is opposed to the variation shown in the SST record. 1952 also shows a mismatch between the two records. With those two exceptions, the inter-annual variability in the two metrics follows quite closely throughout the record.

The post-2005 drop in ACE

Applying a low-pass filter to the annual ACE data allows a more direct comparison to SST.


Figure 2.

An interesting feature becomes apparent: unlike SST, the end of the cyclone data has dropped sharply since its peak in 2004/2005. This looks very similar to the trough that started in the 1930s.

This drop is also reported by Klotzbach & Landsea (2015) [4], which includes the most recent data:

Large, significant downward trends are present in accumulated cyclone energy in the Northern Hemisphere, the Southern Hemisphere, and globally.

It can also be noted that the inter-annual variations correlate much more closely during both the early and later 20th c. warming periods.

Other studies, such as Holland & Webster (2007) [7], based on storm count rather than ACE, seem to give a less clear picture of the SST linkage and, due to their earlier publishing date, do not include the more recent decline.


Figure 3. From Holland & Webster 2007, ( fig. 1 )

Common variability of ACE and SST can be examined by cross-correlation of the two time series. Common periodicities can be examined by spectral analysis of the cross-correlation function.
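A sketch of that two-step procedure ( Python/NumPy; equal-length annual series are assumed, and the windowing and padding a careful analysis would add are omitted ):

import numpy as np

def cross_power_spectrum(ace, sst):
    """Cross-correlation of two annual series, followed by the power
    spectrum of that cross-correlation function. Peaks near 1/60 and
    1/9.1 cycles/year correspond to the common periods discussed."""
    a = ace - ace.mean()
    s = sst - sst.mean()
    xcorr = np.correlate(a, s, mode="full") / (len(a) * a.std() * s.std())
    power = np.abs(np.fft.rfft(xcorr)) ** 2
    freq = np.fft.rfftfreq(len(xcorr), d=1.0)   # cycles per year
    return freq, power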


Figure 4. Cross power spectrum N.Atl. ACE vs SST

The two main common periods are close to 9 and 60 years, with a smaller peak around 5.6 years. The circa 9y peak is centred on 9.07 years. It is noted that this is exactly the period corresponding to the mean of the frequencies of the 8.85y and 9.3y cycles, both of which are lunar periods: 8.85y is the precession period of the lunar line of apsides, and 9.3y is half the 18.6y nodal precession period.
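As a simple check on that arithmetic, the period corresponding to the mean of the two frequencies is the harmonic mean of the two periods:

T = 2 / ( 1/8.85 + 1/9.3 ) = 9.07 years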

This circa 9.1y periodicity would appear to be the same periodicity reported as common to Atlantic and Pacific SST by Muller et al (2013) [8] and conclusively shown to be of lunar origin by N. Scafetta [9] by examining data from the NASA / JPL ephemeris.

Though tides are traditionally measured by the height of the water level, it should be remembered that tides are longitudinal waves and the principal movement of water is horizontal, not vertical. Long period tidal variations have the potential to move vast volumes of surface water, and hence heat, thereby modifying ocean currents. This seems to have received little recognition or quantitative study.


Figure 5. Showing detail of later 20th c. ACE and SST with major volcanoes marked.

While there is still too little data to confirm the length of the post-2005 reduction in activity, the dearth of land-falling hurricanes in the U.S. has been widely reported, as has the failure of hurricane activity to match the “hottest year ever” claims made about SST.

If this lower level of activity continues over the next few years, it will suggest that the stagnant temperatures of “the pause”, like those of the 30s and 40s, are not conducive to a high level of hurricane activity. This should then inform attempts to model cyclone formation and intensity.

While there is clearly a linkage between SST and cyclone energy, it is not a simple one-to-one relationship of warmer SST causing “more frequent and more extreme” storms. A more detailed study of other factors, such as wind shear and vorticity, is covered in Emanuel 1995 [14]. A brief overview of more recent work is provided by Vecchi and Soden 2007 [15].

Some studies of the correlation of SST and cyclone intensity and frequency [10] have, for reasons partly motivated by data quality issues, focused on the post-1960 period. However, it is clear that this will not give an accurate or representative assessment.

Whatever the motivation, focussing on a limited period dominated by a rising section of the data runs a substantial risk of drawing false conclusions.

Emanuel (2005) [10]

… show that this index [ potential destructiveness of hurricanes derived from the third power of wind speed ] has increased markedly since the mid-1970s. This trend is due to both longer storm lifetimes and greater storm intensities. I find that the record of net hurricane power dissipation is highly correlated with tropical sea surface temperature, … My results suggest that future warming may lead to an upward trend in tropical cyclone destructive potential…

The accuracy of the claims and calculations made in Emanuel (2005) was the subject of a reply to the paper in Gray (2005) [11], which found them to be invalid.

This paper’s conclusions are partially based on an analysis of poorly documented and biased tropical cyclone (TC) maximum wind ( Vmax ) data from the Northwest (NW) Pacific. This study presents a sum of each tropical cyclone’s maximum winds – a very hard measurement to realistically obtain over a 3-5 decade period of changing maximum wind measurement techniques. The author then cubes these maximum wind speeds ( Vmax^3 ) of questionable accuracy. Such a cubing process greatly exaggerates any errors in the original Vmax estimate.

Almost any climate dataset will show either an upward or downward trend. Suggesting evidence of causation from nothing more than the sign of the long-term trend has little justification. Extrapolating such a relationship outside the period of study, on nothing more than a baseless “if the current trend continues”, is more likely to be misleading than informative. There seems to be an unspoken assumption that the most likely outcome is that a trend calculated over some arbitrary period will continue, while no logical or statistical basis for that assumption is ever put forward.

A study showing correlation of variation in both directions, or of less trivial variations would be far more persuasive.

A rapid drop in hurricane energy is noted before the El Chichon and Mt Pinatubo eruptions ( figure 5 ) and is already visible in SST. In fact, both events were preceded by a higher level of cyclone activity that coincided with peaks in the 9y lunar cycle in SST and also with peaks in solar activity. The temporal coincidence of all these factors raises the danger of false attribution.

Volcanoes play a significant role in attempts to model past climate. Generally, on a global scale, climate models produce little more than general GHG driven warming and a lot of internal noise. The cooling effect attributed to major stratospheric eruptions is important in putting a few bumps and dips in about the right place to give the visual impression that the model, at least roughly, resembles the recent climate history ( models are tuned primarily to the period from 1960 onwards ).

In multivariate regression studies, or when arbitrarily adjusting climate model parameters to reconcile model output with the climate record ( Hansen et al 2002 [12] ), false attribution of the subsequent cooling to volcanic forcing would lead to unduly high estimations of climate sensitivity to volcanic forcing. Consequently, the opposing radiative ( warming ) “forcings” will also likely be over-estimated.

A separate study of volcanic forcing in the tropics suggests a similar over-estimation of climate sensitivity:
http://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

Conclusion :

Secular trends underlying the obvious 60 year pseudo-periodic variability in both SST and cyclone data are probably determined more by sampling bias and the various adjustments and corrections applied to SST than they are by possible long term changes in the physical conditions. As such, caution should be applied before suggesting that they represent a real indication of a possible long term causal relationship.

The clear 60y and 9y periodicities draw attention to the need to investigate long period lunar tidal influences on ocean and atmospheric climate variables. This is needed in order to quantify and account for natural variability and also to avoid false attribution to other climate variables. In particular, large scale variations in surface transport of heat by ocean currents due to long period tides appears to be a much neglected factor.

There is a stronger correlation during the two major warming periods of the 20th century than in the plateaux or cooling periods. SST is not the single dominant factor determining cyclone energy. There is not a trivial linear relationship and it is probably not appropriate to project relationships derived from the study of data primarily from warming periods onto times of relatively stable SST conditions.

There is an indication that the drop in North Atlantic hurricane activity which apparently occurred during the pre-WWII warm period is repeating during the current temperature plateau.

Data sources:

ACE : annual accumulated cyclone energy derived from HURDAT2 database:
http://www.aoml.noaa.gov/hrd/tcfaq/E11.html
Documentation of Atlantic Hurricane Database Re-analysis Project:
http://www.aoml.noaa.gov/hrd/data_sub/re_anal.html

UK Met. Office HadSST3.1.1.0 [ 0-70W, 0-60N ] recovered from KNMI http://climexp.knmi.nl

ERSSTv3 non-detrended N.Atlantic SST :
http://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.mean.data

Explanation of accumulated cyclone energy:
http://www.aoml.noaa.gov/hrd/tcfaq/E11.html

“ACE” = Accumulated Cyclone Energy – An index that combines the numbers of systems, how long they existed and how intense they became. It is calculated by squaring the maximum sustained surface wind in the system every six hours that the cyclone is a Named Storm and summing it up for the season. It is expressed in 10^4 kt^2.
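A sketch of that calculation ( Python; a hypothetical input of 6-hourly maximum-wind records in knots, one list per storm; a real implementation would parse HURDAT2 and use its named-storm status flags ):

def accumulated_cyclone_energy(storms):
    """Seasonal ACE in units of 1e4 kt^2: the sum of the squares of
    the 6-hourly maximum sustained winds (knots) while each system
    is at least a named storm (>= 34 kt)."""
    return sum(v * v for winds in storms for v in winds if v >= 34) / 1e4

# e.g. a season of two storms:
# accumulated_cyclone_energy([[30, 45, 60, 40], [35, 80, 100, 70]])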

The Atlantic hurricane database (or HURDAT) extends back to 1851. However, because tropical storms and hurricanes spend much of their lifetime over the open ocean – some never hitting land – many systems were “missed” during the late 19th and early 20th Centuries (Vecchi and Knutson 2008). Starting in 1944, systematic aircraft reconnaissance was commenced for monitoring both tropical cyclones and disturbances that had the potential to develop into tropical storms and hurricanes. This did provide much improved monitoring, but still about half of the Atlantic basin was not covered (Sheets 1990). Beginning in 1966, daily satellite imagery became available at the National Hurricane Center, and thus statistics from this time forward are most complete (McAdie et al. 2009).

References :

[1] Vecchi & Knutson 2008
http://www.gfdl.noaa.gov/bibliography/related_files/gav0802.pdf

[2] The Atlantic Hurricane Database Re-analysis Project
http://www.aoml.noaa.gov/hrd/data_sub/re_anal.html

[3] Gray, W. M., 1968: Global view of the origin of tropical disturbances and storms. Mon. Wea. Rev., 96, 669–700.
https://dspace.library.colostate.edu/bitstream/handle/10217/107/0114_Bluebook.pdf

[4] Klotzbach & Landsea 2015
“Extremely Intense Hurricanes: Revisiting Webster et al. (2005) after 10 Years”

http://www.aoml.noaa.gov/hrd/Landsea/klotzbachandlandsea2015.pdf

[5] Landsea, C. W., and J. L. Franklin, 2013: “Atlantic hurricane database uncertainty and presentation of a new database format.” Mon. Wea. Rev., 141, 3576–3592, doi:10.1175/MWR-D-12-00254.1.

http://www.aoml.noaa.gov/hrd/Landsea/landsea-franklin-mwr2013.pdf

[7] Holland & Webster 2007: “Heightened tropical cyclone activity in the North Atlantic: natural variability or climate trend?”

http://rsta.royalsocietypublishing.org/content/roypta/365/1860/2695.full.pdf

[8] Muller et al (2013) “Decadal variations in the global atmospheric land temperatures”
http://static.berkeleyearth.org/pdf/berkeley-earth-decadal-variations.pdf

[9] Nicola Scafetta, Journal of Atmospheric and Solar-Terrestrial Physics, 72 (2010) 951–970: “Empirical evidence for a celestial origin of the climate oscillations and its implications”
http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf

[10] Emanuel 2005: “Increasing destructiveness of tropical cyclones over the past 30 years” [paywalled]
http://www.nature.com/nature/journal/v436/n7051/abs/nature03906.html

[11] William M. Gray
Comments on: “Increasing destructiveness of tropical cyclones over the past 30 years” by Kerry Emanuel, Nature, 31 July 2005, Vol. 436, pp. 686-688
http://arxiv.org/pdf/physics/0601050

[12] Hansen et al 2002 : “Climate forcings in Goddard Institute for Space Studies SI2000 simulations”
http://apollo.eas.gatech.edu/yhw/publications/hansen_etal_2002.pdf

[13] Webster et al (2005)
“Changes in Tropical Cyclone Number, Duration, and Intensity in a Warming Environment”
http://www.sciencemag.org/content/309/5742/1844.full

[14] Kerry A. Emanuel 1995
“Sensitivity of Tropical Cyclones to Surface Exchange Coefficients and a Revised Steady-State Model incorporating Eye Dynamics”
ftp://texmex.mit.edu/pub/emanuel/PAPERS/sens95.pdf

[15] Vecchi & Soden 2007
“Increased tropical Atlantic wind shear in model projections of global warming”
http://onlinelibrary.wiley.com/doi/10.1029/2006GL028905/full

On Determination of Tropical Feedbacks

This article has been posted for discussion at Judith Curry’s “Climate Etc.”

http://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

Abstract

Satellite data for the period surrounding the Mt Pinatubo eruption in 1991 provide a means of estimating the scale of the volcanic forcing in the tropics. A simple relaxation model is used to examine how the temporal evolution of the climate response will differ from that of the radiative forcing. Taking this difference into account is essential for proper detection and attribution of the relevant physical processes. Estimations are derived for both the forcing and the time-constant of the climate response. These are found to support values derived in earlier studies, which vary considerably from those of current climate models. The implications of these differences for inferring climate sensitivity are discussed. The study reveals the importance of secondary effects of major eruptions, pointing to a persistent warming effect in the decade following the 1991 eruption.

The inadequacy of the traditional application of linear regression to climate data is highlighted, showing how this will typically lead to erroneous results and thus the likelihood of false attribution.

The implications of this false attribution for the post-2000 divergence between climate models and observations are discussed.

Keywords: climate; sensitivity; tropical feedback; global warming; climate change; Mount Pinatubo; volcanism; GCM; model divergence; stratosphere; ozone; multivariate; regression; ERBE; AOD

Introduction

For the period surrounding the eruption of Mount Pinatubo in the Philippines in June 1991, there are detailed satellite data for both the top-of-atmosphere ( TOA ) radiation budget and atmospheric optical depth ( AOD ) that can be used to derive the changes in radiation entering the climate system during that period. Analysis of these radiation measurements allows an assessment of the system response to changes in the radiation budget. The Mt. Pinatubo eruption provides a particularly useful natural experiment, since the spread of the effects was centred in the tropics and dispersed fairly evenly between the hemispheres: see [1] fig. 6. ( This meta-study provides a lot of detail about the composition and dispersal of the aerosol cloud. )

The present study investigates the tropical climate response in the years following the eruption. It is found that a simple relaxation response, driven by volcanic radiative ‘forcing’, provides a close match with the variations in TOA energy budget. The derived scaling factor to convert AOD into radiative flux, supports earlier estimations [2] based on observations from the 1982 El Chichon eruption. These observationally derived values of the strength of the radiative disturbance caused by major stratospheric eruptions are considerably greater than those currently used as input parameters for general circulation climate models (GCMs). This has important implications for estimations of climate sensitivity to radiative ‘forcings’ and attribution of the various internal and external drivers of the climate system.

Method

Changes in net TOA radiative flux, measured by satellite, were compared to volcanic forcing estimated from measurements of atmospheric optical depth. Regional optical depth data, with monthly resolution, are available in latitude bands for four height ranges between 15 and 35 km [DS1] and these values were averaged from 20S to 20N to provide a monthly mean time series for the tropics. Since optical depth is a logarithmic scale, the values for the four height bands were added at each geographic location. Lacis et al [2] suggest that aerosol radiative forcing can be approximated by a linear scaling factor of AOD over the range of values concerned. This is the approach usually adopted in IPCC reviewed assessments and is used here. As a result the vertical summations are averaged across the tropical latitude range for comparison with radiation data.
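A sketch of that reduction ( Python/NumPy; the array layout [time, latitude band, height range] and the area weights are assumptions for illustration ):

import numpy as np

def tropical_mean_aod(aod, band_weights):
    """aod: array indexed [time, lat_band, height_range] for the
    20S-20N bands. Optical depths add vertically; the vertical sums
    are then area-weighted averaged across the tropical bands."""
    vertical_sum = aod.sum(axis=2)              # add the four height ranges
    return vertical_sum @ (band_weights / band_weights.sum())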

Tropical TOA net radiation flux is provided by Earth Radiation Budget Experiment ( ERBE ) [DS2].

One notable effect of the eruption of Mt Pinatubo on the tropical energy balance is a variation in the nature of the annual cycle, as seen by subtracting the pre-eruption mean annual variation. As well as the annual cycle due to the eccentricity of the Earth’s orbit, which peaks at perihelion around 4th January, the sun passes over the tropics twice per year, so the mean annual cycle in the tropics shows two peaks: one in March, the other in Aug/Sept, with a minimum in June and a lesser dip in January. Following the eruption, the residual variability also shows a six-month cycle. The semi-annual variability changed and only returned to similar levels after the 1998 El Nino.


Figure 1 showing the variability of the annual cycle in net TOA radiation flux in tropics. (Click to enlarge)

It follows that using a single period as the basis for the anomaly leaves significant annual residuals. To minimise these residuals, three different periods, each a multiple of 12 months in length, were used to remove the annual variations: pre-1991; 1992-1995; post-1995.

The three resultant anomaly series were combined, ensuring that the differences in the means of each period were respected. The mean of the earlier, pre-eruption annual cycles was taken as the zero reference for the whole series. There is a clearly repetitive variation during the pre-eruption period that produces a significant downward trend starting 18 months before the Mt. Pinatubo event. Since it may be important not to confound this with the variation due to the volcanic aerosols, it was characterised by fitting a simple cosine function, which was subtracted from the fitting period. Though the degree to which this can reasonably be assumed to continue is speculative, it seems some account needs to be taken of this pre-existing variability. The effect this has on the result of the analysis is assessed below.

The break of four months in the ERBE data at the end of 1993 was filled with the anomaly mean for the period to provide a continuous series.
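A sketch of this “adaptive anomaly” construction ( Python/NumPy; the monthly resolution, index boundaries and variable names are assumptions, and the cosine fit and gap-filling steps are omitted ):

import numpy as np

def adaptive_anomaly(flux, month, periods):
    """Remove a separate mean annual cycle over each sub-period while
    keeping the differences between the period means. The mean of the
    pre-eruption cycle (first period) is the zero reference.
    `month` holds the calendar month 0-11 of each sample; `periods`
    is a list of (start, stop) index pairs, each a whole number of
    years long."""
    out = np.asarray(flux, dtype=float).copy()
    month = np.asarray(month)
    zero_ref = None
    for (i0, i1) in periods:
        seg, mon = out[i0:i1], month[i0:i1]
        cycle = np.array([seg[mon == m].mean() for m in range(12)])
        if zero_ref is None:
            zero_ref = cycle.mean()        # pre-eruption reference level
        out[i0:i1] = seg - cycle[mon] + cycle.mean() - zero_ref
    return out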


Figure 2 showing ERBE tropical TOA flux adaptive anomaly. (Click to enlarge)



Figure 2b showing ERBE tropical TOA flux adaptive anomaly with pre-eruption cyclic variability subtracted. (Click to enlarge)

Since the TOA flux represents the net sum of all “forcings” and the climate response, the difference between the volcanic forcing and the anomaly in the energy budget can be interpreted as the climate response to the radiative perturbation caused by the volcanic aerosols. This involves some approximations. Firstly, since the data is restricted to the tropical regions, the vertical energy budget does not fully account for energy entering and leaving the region. There is a persistent flow of energy out of the tropics both via ocean currents and atmospheric circulation. Variations in wind driven ocean currents and atmospheric circulation may be part of any climate feedback reaction.

Secondly, taking the difference between TOA flux and the calculated aerosol forcing at the top of the troposphere to represent the top of troposphere energy budget assumes negligible energy is accumulated or lost internally to the upper atmosphere. Although there is noticeable change in stratospheric temperature as a result of the eruption, the heat capacity of the rarefied upper atmosphere means this is negligible in this context.

Figure 3 showing changes in lower stratosphere temperature due to volcanism. (Click to enlarge)

A detailed study of the atmospheric physics and radiative effects of stratospheric aerosols by Lacis, Hansen & Sato [2] suggested that radiative forcing at the tropopause can be estimated by multiplying optical depth by a factor of 30 W/m2.

This value provides a reasonably close match to the initial change in ERBE TOA flux. However, later studies [3], [4], attempting to reconcile climate model output with the surface temperature record, have reduced the estimated magnitude of the effect of stratospheric aerosols. With the latter adjustments, the initial effect on net TOA flux is notably greater than the calculated forcing, which is problematic; especially since Lacis et al reported that the initial cooling may be masked by the warming effect of larger particles ( > 1µm ). Indeed, in order for the calculated aerosol forcing to be as large as the initial changes in TOA flux, without invoking negative feedbacks, it is necessary to use a scaling of around 40 W/m2. A comparison of these values is shown in figure 4.

What is significant is that, from just a few months after the eruption, the disturbance in TOA flux is consistently less than the volcanic forcing. This is evidence of a strong negative feedback in the tropical climate system acting to counter the volcanic perturbation. Just over a year after the eruption, it has fully corrected the radiation imbalance despite the disturbance in AOD still being at about 50% of its peak value. The net TOA reaction then remains positive until the “super” El Nino of 1998. This is still the case with the reduced forcing values of Hansen et al, as can also be seen in figure 4.



Figure 4 comparing net TOA flux to various estimations of volcanic aerosol forcing. ( Click to enlarge )

The fact that the climate is dominated by negative feedbacks is not controversial since this is a pre-requisite for overall system stability. The main stabilising feedback is the Planck response ( about 3.3 W/m2/K at typical ambient temperatures ). Other feedbacks will increase or decrease the net feedback around this base-line value. Where IPCC reports refer to net feedbacks being positive or negative, it is relative to this value. The true net feedback will always be negative.

It is clear that the climate system takes some time to respond to initial atmospheric changes. It has been pointed out that, to correctly compare changes in radiative input to surface temperatures, some kind of lag-correlation analysis is required: Spencer & Braswell 2011 [5], Lindzen & Choi 2011 [6], Trenberth et al 2010 [7]. All three show that correlation peaks with the temperature anomaly leading the change in radiation by about three months.


Figure 5 showing climate feedback response to Mt Pinatubo eruption. Volcanic forcing per Lacis et al. ( Click to enlarge )

After a few months, negative feedbacks begin to have a notable impact and the TOA flux anomaly declines more rapidly than the reduction in AOD. It is quickly apparent that a simple, fixed temporal lag is not an appropriate way to compare the aerosol forcing to its effects on the climate system.

The simplest physical response of a system to a disturbance would be a linear relaxation model, or “regression to the equilibrium”, where for a deviation of a variable X from its equilibrium value there is a restoring action that is proportional to the magnitude of that deviation: the further it is out of equilibrium, the faster its rate of return. This kind of model is common in climatology and is central to the concept of climate sensitivity to changes in various climate “forcings”.

dX/dt = -k * X ; where k is a constant of proportionality.

The solution of this equation for an infinitesimally short impulse disturbance is a decaying exponential. This is called the impulse response of the system. The response to any change in the input can be found by its convolution with this impulse response. This can be calculated quite simply, since it is effectively a weighted running average calculation. It can also be found by algebraic solution of the ordinary differential equation if the input can be described by an analytic function. This is the method that was adopted in Douglass & Knox 2005 [17], comparing AOD to lower tropospheric temperature ( TLT ).
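A sketch of that convolution ( Python/NumPy; the monthly time step and kernel truncation length are assumptions ):

import numpy as np

def exp_convolve(forcing, tau, kernel_len=120):
    """Convolve a forcing series with the normalised impulse response
    exp(-t/tau) of dX/dt = -X/tau. Equivalent to an exponentially
    weighted running average over past values only."""
    t = np.arange(kernel_len, dtype=float)
    h = np.exp(-t / tau)
    h /= h.sum()
    return np.convolve(forcing, h)[: len(forcing)]   # truncate: keeps it causal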

The effect of this kind of system response is a time lag, as well as a degree of low-pass filtering which reduces the peak and produces a change in the profile of the time series compared to that of the input forcing. In this context, linear regression of the output against the input is not physically meaningful and will give a seriously erroneous value of the presumed linear relationship.

The speed of the response is characterised by a constant parameter in the exponential function, often referred to as the ‘time-constant’ of the reaction. Once the time-constant parameter has been determined, the time-series of the system response can be calculated from the time-series of the forcing.

Here, the variation of the tropical climate is compared with a linear relaxation response to the volcanic forcing. The magnitude and time-constant constitute two free parameters and are found to provide a good match between the model and the data. This is not surprising, since any deviation from equilibrium in surface temperature will produce a change in the long-wave Planck radiation to oppose it. The radiative Planck feedback is the major negative feedback that ensures the general stability of the Earth’s climate. While the Planck feedback is proportional to the fourth power of the absolute temperature, it can be approximated as linear for small changes around typical ambient temperatures of about 300 kelvin. This is effectively a “single slab” ocean model, but that is sufficient since diffusion into deeper water below the thermocline is minimal on this time scale. This was discussed in Douglass & Knox’s reply to Robock [18].

A more complex model which includes a heat diffusion term to a large deep ocean sink can be reduced to the same analytical form with a slightly modified forcing and an increased “effective” feedback. Both these adjustments would be small and do not change the mathematical form of the equation and hence the validity of the current method. See supplementary information.

It is this delayed response curve that needs to be compared to changes in surface temperature in a regression analysis. Regressing the temperature change against the change in radiation is not physically meaningful unless the system can be assumed to equilibrate much faster than the period of the transient being studied, i.e. on a time scale of a month or less. This is clearly not the case, yet many studies have been published which do precisely this, or worse, use multivariate regression, which compounds the problem: Santer et al 2014 [8], Trenberth et al 2010 [7], Dessler 2010b [9], Dessler 2011 [10]. Curiously, Douglass & Knox [17] initially calculate the relaxation response to AOD forcing and appropriately regress this against TLT, but later in the same paper regress AOD directly against TLT and thus find an AOD scaling factor in agreement with the more recent Hansen estimations. This apparent inconsistency in their method confirms the origin of the lower estimations of the volcanic forcing.

The need to account for the fully developed response can be seen in figure 6. The thermal inertia of the ocean mixed layer integrates the instantaneous volcanic forcing as well as the effects of any climate feedbacks. This results in a lower, broader and delayed time series. As shown above, in a situation dominated by the Planck and other radiative feedbacks, this can be simply modelled with an exponential convolution. There is a delay due to the thermal inertia of the ocean mixed layer but this is not a simple fixed time delay. The relaxation to equilibrium response introduces a frequency dependent phase delay that changes the profile of the time series. Simply shifting the volcanic forcing forward by about a year would line up the “bumps” but not match the profile of the two variables. Therefore neither simple regression nor a lagged regression will correctly associate the two variables: the differences in the temporal evolution of the two would lead to a lower correlation and hence a reduced regression result leading to incorrect scaling of the two quantities.

Santer et al 2014 attempt to remove ENSO and volcanic signals by a novel iterative regression technique. A detailed account, provided in the supplementary information [8b], reports a residual artefact of the volcanic signal:

The modelled and observed tropospheric temperature residuals after removal of ENSO and volcano signals, τ , are characterized by two small maxima. These maxima occur roughly 1-2 years after the peak cooling caused by El Chichon and Pinatubo cooling signals.


Excerpt from Santer et al 2014: supplementary figure 3 ( panel D ), “ENSO and volcano signals removed”.

This description matches the period starting in mid-1992, shown in figure 6 below, where the climate response is greater than the forcing. It peaks about 1.5 years after the peak in AOD, as described. Their supplementary fig.3 shows a very clear dip and later peak following Pinatubo. This corresponds to the difference between the forcing and the correctly calculated climate response shown in fig. 6. Similarly, the 1997-98 El Nino is clearly visible in the graph of observational data (not reproduced here) labeled “ENSO and volcano signals removed”. This failure to recognise the correct nature and timing of the volcanic signal leads to an incorrect regression analysis, incomplete removal and presumably incorrect scaling of the other regression variables in the iterative process. This is likely to lead to spurious attributions and incorrect conclusions.



Figure 6 showing tropical feedback as relaxation response to volcanic aerosol forcing ( pre-eruption cycle removed) ( Click to enlarge )

The delayed climatic response to radiative change corresponds to the negative quadrant in figure 3b of Spencer and Braswell (2011) [5], excerpted below as figure 7, where temperature lags radiative change. It shows the peak temperature response lagging around 12 months behind the radiative change. The timing of this peak is in close agreement with the TOA response in figure 6 above, despite SB11 being derived from later CERES ( Clouds and the Earth’s Radiant Energy System ) satellite data from the post-2000 period with negligible volcanism.

This emphasises that the value for the correlation in the SB11 graph will be under-estimated, as pointed out by the authors:

Diagnosis of feedback cannot easily be made in such situations, because the radiative forcing decorrelates the co-variations between temperature and radiative flux.

The present analysis attempts to address that problem by analysing the fully developed response.


Figure 7. Lagged-correlation plot of post-2000 CERES data from Spencer & Braswell 2011. (Negative lag: radiation change leads temperature change.)

The equation of the relationship, in which the climate response ( TOA net flux anomaly minus volcanic forcing ) is proportional to an exponential convolution of AOD, is re-arranged to enable an empirical estimation of the scaling factor by linear regression:

TOA - ( -VF * AOD ) = VF * k * exp_AOD     eqn. 1

-TOA = VF * ( AOD - k * exp_AOD )     eqn. 2

VF is the volcanic scaling factor to convert ( positive ) AOD into a radiation flux anomaly in W/m2; the volcanic forcing itself is then -VF * AOD. The exp_AOD term is the exponential convolution of the AOD data, a function of the time-constant tau, whose value is also to be estimated from the data. This exp_AOD quantity is multiplied by a constant of proportionality, k. Since TOA net flux is conventionally given as positive downwards, it is negated in equation 2 to give a positive VF comparable to the values given by Lacis, Hansen, etc.

Average pre-eruption TOA flux was taken as the zero for TOA anomaly and, since the pre-eruption AOD was also very small, no constant term was included.

Since the relaxation response effectively filters out ( integrates ) much of the high-frequency variability, giving a less noisy series, it was taken as the independent variable for the regression. This choice acts to minimise regression dilution due to the presence of measurement error and non-linear variability in the independent variable.
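A sketch of how such a fit might be set up ( Python/NumPy; a grid search over tau and k with a no-intercept least-squares slope for VF, reusing the exp_convolve sketch above; the search ranges are assumptions ):

import numpy as np
# exp_convolve as defined in the earlier sketch

def fit_eqn2(toa, aod, taus=range(2, 25), ks=np.linspace(0.5, 1.0, 51)):
    """Fit -TOA = VF * ( AOD - k * exp_AOD ) by scanning the two free
    parameters and regressing -TOA on the bracketed term."""
    best = (np.inf, None)
    for tau in taus:
        exp_aod = exp_convolve(aod, tau)
        for k in ks:
            x = aod - k * exp_aod
            vf = np.dot(x, -toa) / np.dot(x, x)     # slope, no constant term
            rss = np.sum((-toa - vf * x) ** 2)
            if rss < best[0]:
                best = (rss, (vf, k, tau))
    return best[1]        # ( VF in W/m2 per unit AOD, k, tau in months )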

Regression dilution is an important and pervasive problem that is often overlooked in published work in climatology, notably in attempts to derive an estimation of climate sensitivity from temperature and radiation measurements and from climate model output. Santer et al 2014, Trenberth et al 2010, Dessler 2011, Dessler 2010b, Spencer & Braswell 2011.

The convention of choosing temperature as the independent variable will lead to spuriously high sensitivity estimations. This was briefly discussed in the appendix of Forster & Gregory 2006 [11] , though ignored in the conclusions of the paper.

It has been suggested that a technique based on total least squares regression or bisector least squares regression gives a better fit, when errors in the data are uncharacterized (Isobe et al. 1990). For example, for 1985–96 both of these methods suggest Y_NET of around 3.5 +/- 2.0 W m-2 K-1 (a 0.7–2.4 K equilibrium surface temperature increase for 2 × CO2), and this should be compared to our 1.0–3.6 K range quoted in the conclusions of the paper.

Regression results were thus examined for residual correlation.

Results

Taking the TOA flux, less the volcanic forcing, to represent the climatic reaction to the eruption, gives a response that peaks about twelve months after the eruption, when the stratospheric aerosol load is still at about 50% of its peak value. This implies a strong negative feedback is actively countering the volcanic disturbance.

This delay in the response, due to thermal inertia in the system, also produces an extended period during which the direct volcanic effects are falling and the climate reaction is thus greater than the forcing. This results in a recovery period, during which there is an excess of incoming radiation compared to the pre-eruption period which, to an as yet undetermined degree, recovers the energy deficit accumulated during the first year when the volcanic forcing was stronger than the developing feedback. This presumably also accounts for at least part of the post eruption maxima noted in the residuals of Santer et al 2014.

Thus, if the lagged nature of the climate response is ignored and direct linear regressions between climate variables and optical depth are conducted, the later extended period of warming may be spuriously attributed to some other factor. This represents a fundamental flaw in multivariate regression studies, such as Foster & Rahmstorf 2013 [12] and Santer et al 2014 [8] among others, that could lead to seriously erroneous conclusions about the relative contributions of the various regression variables.

For the case where the pre-eruption variation is assumed to continue to underlie the ensuing reaction to the volcanic forcing, the ratio of the relaxation response to the aerosol forcing is found to be 0.86 +/- 0.07, with a time-constant of 8 months. This corresponds to the value reported in Douglass & Knox 2005 [17], derived from AOD and lower troposphere temperature data. The scaling factor to convert AOD into a flux anomaly was found to be 33 W/m2 +/- 11%. With these parameters, the centre line of the remaining 6-month variability ( shown by the gaussian filter ) fits very tightly to the relaxation model.



Figure 6 ( repeated from above ) showing tropical feedback as relaxation response to volcanic aerosol forcing ( pre-eruption cycle removed ). ( Click to enlarge )

If the downward trend in the pre-eruption data is ignored ( i.e. its cause is assumed to stop at the instant of the eruption ), the result is very similar ( 0.85 +/- 0.09 and 32.4 W/m2 +/- 9% ) but leads to a significantly longer time-constant of 16 months. In this case the fitted response does not fit nearly as well, as can be seen by comparing figures 6 and 8. The response is over-damped, poorly matching the post-eruption change, indicating that the corresponding time-constant is too long.

Figure 8 showing tropical climate relaxation response to volcanic aerosol forcing, fitted while ignoring pre-eruption variability. ( Click to enlarge )

The analysis with the pre-eruption cycle subtracted provides a generally flat residual ( figure 9 ), showing that it accounts well for the longer term response to the radiative disruption caused by the eruption.

It is also noted that the truncated peak, resulting from substitution of the mean of the annual cycle to fill the break in the ERBE satellite data, lies very close to the zero residual line. While there is no systematic deviation from zero, it is clear that there is a residual seasonal effect, and the amplitude of this seasonal residual also seems to follow the fitted response.

Figure 9 showing the residual of the fitted relaxation response from the satellite derived, top-of-troposphere disturbance. ( Click to enlarge )

Since the magnitude of the pre-eruption variability in TOA flux, while smaller, is of the same order as the volcanic forcing and its period similar to that of the duration of the atmospheric disturbance, the time-constant of the derived response is quite sensitive to whether this cycle is removed or not. However, it does not have much impact on the fitted estimation of the scaling factor ( VF ) required to convert AOD into a flux anomaly or the proportion of the exponentially lagged forcing that matches the TOA flux anomaly.

Assuming that whatever was causing this variability stopped at the moment of the eruption seems unreasonable but whether it was as cyclic as it appears to be, or how long that pattern would continue is speculative. However, approximating it as a simple oscillation seems to be more satisfactory than ignoring it.

In either case, there is strong support here for values close to the original Lacis et al 1992 calculations of volcanic forcing, which were derived from physics-based analysis of observational data, as opposed to later attempts to reconcile the output of general circulation models by re-adjusting physical parameters.

Beyond the initial climate reaction analysed so far, it is noted that the excess incoming flux does not fall to zero. To see this effect more clearly, the deviation of the flux from the chosen pre-eruption reference value is integrated over the full period of the data. The result is shown in figure 10.

Figure 10 showing the cumulative integral of climate response to Mt Pinatubo eruption. ( Click to enlarge )

Pre-eruption variability produces a cumulative sum initially varying about zero. Two months after the eruption, when it is almost exactly zero, there is a sudden change as the climate reacts to the drop in energy entering the troposphere. From this point onwards there is an ever increasing amount of additional energy accumulating in the tropical lower climate system. With the exception of a small drop, apparently in reaction to the 1998 ‘super’ El Nino, this tendency continues to the end of the data.
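For completeness, the integral in figure 10 is just a running sum of the anomaly ( a sketch; a monthly flux anomaly in W/m2 is assumed, so the result is in W·month/m2 ):

import numpy as np

def cumulative_response(flux_anomaly):
    """Running sum of the TOA flux anomaly relative to the pre-eruption
    reference; a persistently rising tail indicates net energy
    accumulating in the tropical lower climate system."""
    return np.cumsum(flux_anomaly)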

While the simple relaxation model seems to adequately explain the initial four years following the Mt Pinatubo event, it does not explain the budget settling to a higher level. Thus there is evidence of a persistent warming effect resulting from this major stratospheric eruption.

Discussion

Concerning the more recent estimations of aerosol forcing, it should be noted that there is a strong commonality of authors in the papers cited here, so rather than being the work of conflicting groups, the more recent weightings reflect the result of a change of approach: from direct physical modelling of the aerosol forcing in the 1992 paper, to the later attempts to reconcile general circulation model (GCM) output by altering the input parameters.

From Hansen et al 2002 [4] ( Emphasis added. )

Abstract:

We illustrate the global response to these forcings for the SI2000 model with specified sea surface temperature and with a simple Q-flux ocean, thus helping to characterize the efficacy of each forcing. The model yields good agreement with observed global temperature change and heat storage in the ocean. This agreement does not yield an improved assessment of climate sensitivity or a confirmation of the net climate forcing because of possible compensations with opposite changes of these quantities. Nevertheless, the results imply that observed global temperature change during the past 50 years is primarily a response to radiative forcings.

From section 2.2.2, Radiative forcing:

Even with the aerosol properties known, there is uncertainty in their climate forcing. Using our SI2000 climate model to calculate the adjusted forcing for a globally uniform stratospheric aerosol layer with optical depth τ = 0.1 at wavelength λ = 0.55 µm yields a forcing of −2.1 W/m2, and thus we infer that for small optical depths

Fa (W/m2) ≈ −21 τ

…

In our earlier 9-layer model stratospheric warming after El Chichon and Pinatubo was about half of observed values (Figure 5 of F-C), while the stratospheric warming in our current model exceeds observations, as shown below.

As the authors point out, it all depends heavily upon the assumptions made about the size distribution of the aerosols when interpreting the raw data. In fact the newer estimation is shown, in figure 5a of the paper, to be about twice the observed values following Pinatubo and El Chichon. It is unclear why this is any better than the half of observed values in their earlier work. Clearly the attributions remain highly uncertain, and the declared uncertainty of +/-15% appears optimistic.

3.3. Model Sensitivity

The bottom line is that, although there has been some narrowing of the range of climate sensitivities that emerge from realistic models [Del Genio and Wolf, 2000], models still can be made to yield a wide range of sensitivities by altering model parameterizations.

If the volcanic aerosol forcing is underestimated, other model parameters will have to be adjusted to produce a higher sensitivity. It is likely that the massive overshoot in the model response of TLS is an indication of this situation. It would appear that a better estimation lies between these two extremes, possibly around 25 W/m2.

The present study examines the largest and most rapid changes in radiative forcing in the period for which detailed satellite observations are available. The aim is to estimate the aerosol forcing and the timing of the tropical climate response. It is thus not encumbered by trying to optimise estimations of a range of climate metrics over half a century by experimental adjustment of a multitude of “parameters”.

The result is in agreement with the earlier estimation of Lacis et al.

A clue to the continued excess over the pre-eruption conditions can be found in the temperature of the lower stratosphere, shown in figure 3. Here too, the initial disturbance seems to have stabilised by early 1995 but there is a definitive step change from pre-eruption conditions.

Noting the complementary nature of the effects of impurities in the stratosphere on TLS and the lower climate system, this drop in TLS may be expected to be accompanied by an increase in the amount of incoming radiation penetrating into the troposphere. This is in agreement with the cumulative integral shown in figure 10 and the southern hemisphere sea temperatures shown in figure 12.

NASA Earth Observatory reports [13] that after Mt Pinatubo there was a 5% to 8% drop in stratospheric ozone. Presumably a similar removal happened after El Chichon in 1982, which saw an almost identical reduction in TLS.

Whether this is, in fact, the cause or whether other radiation blocking aerosols were flushed out along with the volcanic emissions, the effect seems clear and consistent and quite specifically linked to the eruption event. This is witnessed in both the stratospheric and tropical tropospheric data. Neither effect is attributable to the steadily increasing GHG forcing which did not record a step change in September 1991.

This raises yet another possibility for false attribution in multivariate regression studies and in attempts to arbitrarily manipulate GCM input parameters to engineer a similarity with the recent surface temperature records.

With the fitted scaling factor showing the change in tropical TOA net flux matches 85% of the tropical AOD forcing, the remaining 15% must be dispersed elsewhere within the climate system. That means either storage in deeper waters and/or changes in the horizontal energy budget, ie. interaction with extra-tropical regions.

Since the model fits the data very closely, the residual 15% will have the same time-dependency profile as the 85%, so these tropical/extra-tropical variations can also be seen as part of the climate response to the volcanic disturbance: the excess in horizontal net flux occurring beyond 12 months after the event supports restoration of the energy deficit in extra-tropical regions by exporting heat energy. Since the major ocean gyres bring cooler temperate waters into the eastern parts of the tropics in both hemispheres and export warmer waters at their western extents, they are probably a major vector of this variation in heat transport, as are changes in atmospheric processes such as Hadley convection.

Extra-tropical regions were previously found to be more sensitive to radiative imbalance than the tropics:
https://climategrog.wordpress.com/?attachment_id=285
Thus the remaining 15% may simply be the more stable tropical climate acting as a buffer and exerting a thermally stabilising influence on extra-tropical regions.

After the particulate matter and aerosols have dropped out, there is also a long-term depletion of stratospheric ozone ( 5 to 8% less after Pinatubo ) [13]. Thompson & Solomon (2008) [14] examined how lower stratosphere temperature correlated with changes in ozone concentration and found that, in addition to the initial warming caused by volcanic aerosols, TLS showed a notable ozone-related cooling that persisted until 2003. They note that theirs is a correlation study and does not in itself imply causation.

 
Figure 11. Part of fig.2 from Thompson & Solomon 2008 showing the relationship of ozone and TLS.

A more recent paper by Solomon [15] concluded that roughly equal, low-level aerosol forcings existed before Mt Pinatubo and again since 2000, similarly implying a small additional warming due to lower aerosol levels in the decade following the eruption.

Several independent data sets show that stratospheric aerosols have increased in abundance since 2000. Near-global satellite aerosol data imply a negative radiative forcing due to stratospheric aerosol changes over this period of about -0.1 watt per square meter, reducing the recent global warming that would otherwise have occurred. Observations from earlier periods are limited but suggest an additional negative radiative forcing of about -0.1 watt per square meter from 1960 to 1990.

The values for volcanic aerosol forcing derived here, being in agreement with the physics-based assessments of Lacis et al, imply that much stronger negative feedbacks must be in operation in the tropics than those resulting from the currently used model “parameterisations” and their much weaker AOD scaling factor.

These two results indicate that secondary effects of volcanism may actually have contributed to the late 20th century warming. This, along with the absence of any major eruptions since Mt Pinatubo, could go a long way to explaining the discrepancy between climate models and the relative stability of observed temperatures since the turn of the century.

Once the nature of the signal has been recognised in the much less noisy stratospheric record, a similar variability can be found in southern hemisphere sea surface temperatures, the slower rise in SST being accounted for by the much larger thermal inertia of the ocean mixed layer. Taking TLS as an indication of the end of the negative volcanic forcing and the beginning of the additional warming, the apparent relaxation to a new equilibrium takes 3 to 4 years. Regarding this as approximately the 95% settling time of three time-constant intervals ( e-folding time ) would be consistent with a time constant of between 12 and 16 months for extra-tropical southern hemisphere oceans.
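For reference, the arithmetic behind the 12 to 16 month estimate, assuming a simple first-order relaxation:

```latex
1 - e^{-t/\tau} = 0.95 \;\Rightarrow\; t = \tau \ln 20 \approx 3\tau
\qquad
\tau \approx \tfrac{1}{3} \times (36 \text{ to } 48\ \text{months}) = 12 \text{ to } 16\ \text{months}
```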

These figures are far shorter than the values of 30 to 40 months cited in Santer et al 2014 [8], which are said to characterise the behaviour of high sensitivity models and correspond to typical IPCC values of climate sensitivity. It is equally noted from the lag-regression plots in Spencer & Braswell 2011 [5] and Trenberth et al 2010 [7] that climate models are far removed from observational data in terms of producing the correct temporal relationship between radiative forcing and temperature.


Figure 12. Comparing SH sea surface temperatures to lower troposphere temperature.

Though the initial effects are dominated by the relaxation response to the aerosol forcing, both figures 9 and 12 show an additional climate reaction having a significant effect later. This appears to be linked to ozone concentration and/or a reduction in other atmospheric aerosols. While these changes are clearly triggered by the eruptions, they should not be considered part of the “feedback” in the sense of the relaxation response fitted here. These effects act in the same sense as a direct radiative feedback and shorten the apparent time-constant by causing a faster recovery. However, the settling time of extra-tropical SST to the combined changes in forcing indicates a time constant ( and hence climate sensitivity ) well short of the figures produced by analysing climate model behaviour reported in Santer et al 2014 [8].

IPCC on Clouds and Aerosols:

IPCC AR5 WG1 Full Report Jan 2014 : Chapter 7 Clouds and Aerosols:

No robust mechanisms contribute negative feedback.

7.3.4.2

The responses of other cloud types, such as those associated with deep convection, are not well determined.

7.4.4.2

Satellite remote sensing suggests that aerosol-related invigoration of deep convective clouds may generate more extensive anvils that radiate at cooler temperatures, are optically thinner, and generate a positive contribution to ERFaci (Koren et al., 2010b). The global influence on ERFaci is unclear.

Emphasis added.

WG1 are arguing from a position of self-declared ignorance on this critical aspect of how the climate system reacts to changes in radiative forcing. It is unclear how they can declare confidence levels of 95%, based on such an admittedly poor level of understanding of the key physical processes.

Conclusion

Analysis of satellite radiation measurements allows an assessment of the system response to changes in radiative forcing. This provides an estimation of the aerosol forcing that is in agreement with the range of physics-based calculations presented by Lacis et al in 1992, and thus brings into question the much lower values currently used in GCM simulations.

The considerably higher values of aerosol forcing found here and in Lacis et al imply the presence of notably stronger negative feedbacks in tropical climate and hence imply a much lower range of sensitivity to radiative forcing than those currently used in the models.

The significant lag and ensuing post-eruption recovery period underlines the inadequacy of simple linear regression and multivariate regression in assessing the magnitude of various climate ‘forcings’ and their respective climate sensitivities. Use of such methods will suffer from regression dilution, omitted variable bias and can lead to seriously erroneous attributions.

Both the TLS cooling and the energy budget analysis presented here, imply a lasting warming effect on surface temperatures triggered by the Mt Pinatubo event. Unless these secondary effects are recognised, their mechanisms understood and correctly modelled, there is a strong likelihood of this warming being spuriously attributed to some other cause such as AGW.

When attempting to tune model parameters to reproduce the late 20th century climate record, an incorrectly small scaling of volcanic forcing, leading to a spuriously high sensitivity, will need to be counter-balanced by some other variable. This is commonly a spuriously high greenhouse effect, amplified by presumed positive feedbacks from other climate variables, less well constrained by observation ( such as water vapour and cloud ). In the presence of the substantial internal variability, this can be made to roughly match the data while both forcings are present ( pre-2000 ). However, in the absence of significant volcanism there will be a steadily increasing divergence. The erroneous attribution problem, along with the absence of any major eruptions since Mt Pinatubo, could explain much of the discrepancy between climate models and the relative stability of observational temperature measurements since the turn of the century.

References:

[1] Self et al 1995
“The Atmospheric Impact of the 1991 Mount Pinatubo Eruption”
http://pubs.usgs.gov/pinatubo/self/

[2] Lacis et al 1992 : “Climate Forcing by Stratospheric Aerosols”
http://pubs.giss.nasa.gov/docs/1992/1992_Lacis_etal_1.pdf

[3] Hansen et al 1997 “Forcing and Chaos in interannual to decadal climate change”
http://eaps4.mit.edu/research/papers/Hansen_etal_1997b.pdf

[4] Hansen et al 2002 : “Climate forcings in Goddard Institute for Space Studies SI2000 simulations”
http://apollo.eas.gatech.edu/yhw/publications/hansen_etal_2002.pdf

[5] Spencer & Braswell 2011: “On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance”
http://www.mdpi.com/2072-4292/3/8/1603/pdf

[6] Lindzen & Choi 2011: “On the Observational Determination of Climate Sensitivity and Its Implications”
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf

[7] Trenberth et al 2010: “Relationships between tropical sea surface temperature and top‐of‐atmosphere radiation”
http://www.mdpi.com/2072-4292/3/9/2051/pdf

 [8] Santer et al 2014: “Volcanic contribution to decadal changes in tropospheric temperature”
http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2098.html

 [8b] Supplementary Information:
http://www.nature.com/ngeo/journal/v7/n3/extref/ngeo2098-s1.pdf

 [9] Dessler 2010 b “A Determination of the Cloud Feedback from Climate Variations over the Past Decade”
http://geotest.tamu.edu/userfiles/216/dessler10b.pdf

 [10] Dessler 2011 “Cloud variations and the Earth’s energy budget”
http://geotest.tamu.edu/userfiles/216/Dessler2011.pdf

 [11]Forster & Gregory 2006
“The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data”
http://www.image.ucar.edu/idag/Papers/Forster_sensitivity.pdf

[12] Foster and Rahmstorf 2011: “Global temperature evolution 1979-2010”
http://stacks.iop.org/ERL/6/044022

[13] NASA Earth Observatory
http://earthobservatory.nasa.gov/Features/Volcano/

[14] Thompson & Solomon 2008: “Understanding Recent Stratospheric Climate Change”
http://journals.ametsoc.org/doi/abs/10.1175/2008JCLI2482.1

[15] Susan Solomon 2011: “The Persistently Variable “Background” Stratospheric Aerosol Layer and Global Climate Change”
http://www.sciencemag.org/content/333/6044/866

 [16] Trenberth 2002: “Changes in Tropical Clouds and Radiation”
http://sciencepubs.com/content/296/5576/2095.full.pdf

 [17] Douglass & Knox 2005: “Climate forcing by the volcanic eruption of Mount Pinatubo”
http://www.pas.rochester.edu/~douglass/papers/2004GL022119_Pinatubo.pdf

 [18] Douglass & Knox 2005b: “Reply to comment by A. Robock on ‘‘Climate forcing by the volcanic eruption of Mount Pinatubo’’”
http://www.pas.rochester.edu/~douglass/…/reply_Robock_2005GL023829.pdf

Data sources:
[DS1] AOD data: NASA GISS
http://data.giss.nasa.gov/modelforce/strataer
Derived from SAGE (Stratospheric Aerosol and Gas Experiment) instrument.
“The estimated uncertainty of the optical depth in our multiple wavelength retrievals [Lacis et al., 2000] using SAGE observations is typically several percent. “[4]

[DS2] ERBE TOA data: Earth Radiation Budget Experiment
http://eosweb.larc.nasa.gov/sites/default/files/project/erbe/edition3_rev1/Edition3_Rev1_wfov_sf_monthly_tropics.txt

data description:
https://eosweb.larc.nasa.gov/sites/default/files/project/erbe/quality_summaries/s10n_wfov/erbe_s10n_wfov_nf_sf_erbs_edition3.pdf

[*] Explanatory notes:
Negative feedback is an engineering term that refers to a reaction that opposes its cause. It is not a value judgement about whether it is good or bad. In fact, negative feedbacks are essential in keeping a system stable. Positive feedbacks lead to instability and are thus generally “bad”.

Convolution is a mathematical process that is used, amongst other things, to implement digital filters. A relaxation response can be implemented by convolution with an exponential function. This can be regarded as an asymmetric filter with a non-linear phase response: it produces a delay relative to the input and alters its shape.
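A minimal sketch of this ( generic Python, not tied to any particular dataset; all values assumed ):

```python
# Relaxation response implemented as convolution of a forcing series with
# a normalised exponential kernel.
import numpy as np

tau = 8.0                                   # time constant in months (assumed)
k = np.arange(int(10 * tau))                # kernel long enough to decay away
kernel = np.exp(-k / tau)
kernel /= kernel.sum()                      # unit gain at zero frequency

forcing = np.zeros(120)
forcing[12:] = 1.0                          # a unit step as a test input
response = np.convolve(forcing, kernel)[:forcing.size]
# the output is delayed and reshaped: ~63% of the step after tau samples,
# ~95% after 3*tau, illustrating the lag and change of shape noted above
```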

Regression dilution refers to the reduction in the slope estimations produced by least squares regression when there is significant error or noise in the x-variable. With negligible x-errors, OLS regression can be shown to produce the best unbiased linear estimation of the slope. However, this essential condition is often overlooked or ignored, leading to erroneously low estimations of the relationship between two quantities. This is explained in more detail here:
https://climategrog.wordpress.com/2014/03/08/on-inappropriate-use-of-ols/
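The effect is easy to demonstrate numerically; a toy sketch with purely illustrative numbers:

```python
# Regression dilution: noise on the x-variable biases the OLS slope
# towards zero.
import numpy as np

rng = np.random.default_rng(1)
x_true = rng.normal(0.0, 1.0, 5000)
y = 2.0 * x_true + rng.normal(0.0, 0.5, x_true.size)   # true slope = 2
x_noisy = x_true + rng.normal(0.0, 1.0, x_true.size)   # error in regressor

slope_clean = np.polyfit(x_true, y, 1)[0]    # close to 2.0
slope_noisy = np.polyfit(x_noisy, y, 1)[0]   # close to 1.0: attenuated by
# the factor var(x) / ( var(x) + var(x_error) ) = 1 / ( 1 + 1 )
```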

Supplementary Information

Detailed method description.

The break of four months in the ERBE data at the end of 1993 was filled with the anomaly mean for the period to provide a continuous series. This truncates what would have probably been a small peak in the data, marginally lowering the local average, but since this was not the primary period of interest this short defect was considered acceptable.

The regression was performed for a range of time-constants from 1 to 24 months. The period for the regression was from just before the eruption, up to 1994.7, when AOD had subsided and the magnitude of the annual cycle was found to change, indicating the end of the initial climatic response ( as determined from the adaptive anomaly shown in figure 2 ).

Once the scaling factor was obtained by linear regression for each value of tau, the values were checked for regression dilution by examining the correlation of the residual of the fit with the AOD regressor while varying the value of VF in the vicinity of the fitted value. This was found to give a regular curve with a minimum correlation that was very close to the fitted value. It was concluded that the least squares regression results were an accurate estimation of the presumed linear relationship.
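A minimal sketch of the scan described above ( hypothetical arrays and placeholder data; not the actual code used for the analysis ):

```python
# For each candidate time constant, convolve AOD with an exponential
# kernel and regress the TOA anomaly on the lagged forcing to obtain the
# scaling factor VF. 'aod' and 'toa' stand in for the fitting window data.
import numpy as np

n = 60
aod = 0.15 * np.exp(-np.arange(n) / 18.0)          # placeholder AOD decay
toa = -21.0 * aod + np.random.default_rng(2).normal(0.0, 0.3, n)

def lagged(series, tau):
    k = np.exp(-np.arange(int(10 * tau)) / tau)
    k /= k.sum()
    return np.convolve(series, k)[:series.size]

fits = {}
for tau in range(1, 25):                           # 1 to 24 months
    x = lagged(aod, tau)
    vf, intercept = np.polyfit(x, toa, 1)          # least-squares scaling
    resid = toa - (vf * x + intercept)
    fits[tau] = (vf, np.sum(resid ** 2))
```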

The low values of time-constant resulted in very high values of VF ( eg. 154 for tau = 1 month ) that were physically unrealistic and way outside the range of credible values. This effectively swamps the TOA anomaly term and is not meaningful.

tau = 6 months gave VF = 54, and this was taken as the lower limit of the range of time constants to be considered.

The two free parameters of the regression calculations will ensure an approximate fit of the two curves for each value of the time constant. The latter could then be varied to find the response that best fitted the initial rise after the eruption, the peak and the fall-off during 1993-1995.

This presented a problem since, even with the adaptive anomaly approach, there remains a significant, roughly cyclic, six month sub-annual variability that approaches in magnitude the climate response of interest. This may be, at least in part, a result of aliasing of the diurnal signal to a period close to six months in the monthly ERBE data, as pointed out by Trenberth [16]. The relative magnitude of the two also varies depending on the time-constant used, making a simple correlation test unhelpful in determining the best correlation with respect to the inter-annual variability. For this reason it could not be included as a regression variable.

Also, the initial perturbation and the response rise very quickly from zero to maximum in about two months. This means that low-pass filtering to remove the six month signal would also attenuate the initial part of the effect under investigation. For this reason a light gaussian filter ( sigma = 2 months ) was used. This has 50% attenuation at 2.3 months, comparable to a 4 month running mean filter, but without the destructive distortions of the latter. This allowed a direct comparison of the rise, the duration of the peak and the rate of fall-off in the tail that is determined by the value of the time-constant parameter, and thus the value producing the best match to the climate response.

Due to the relatively coarse granularity of monthly data, which limits the choice of time-constant values, it was possible to determine a single value that produced the best match between the two variables.

The whole process was repeated with and without subtraction of the oscillatory, pre-eruption variability to determine the effects of this adjustment.

The uncertainty values given are those of the fitting process and reflect the variability of the data. They do not account for the experimental uncertainty in interpreting the satellite data.

Beyond the single slab model.

A more complex model which includes a heat diffusion term to a large deep ocean sink can be seen to reduce to the same analytical form with a slightly modified forcing and an increased “effective” feedback. Both these adjustments would be small and do not change the mathematical form of the equation and hence the validity of the current method.

A typical representation of the linear model used by many authors in published works [17] is of the form:

C *d/dt(Ts) = F – λTs

where Ts is surface temp anomaly ; F is radiative flux anomaly and C is the heat capacity of the mixed ocean layer. λ is a constant which is the reciprocal of climate sensitivity.

Eddy diffusion to a deep ocean heat sink at a constant temperature Td can be added using a diffusion constant κ. Given the relatively small changes in surface temperature over the period, compared with the surface–thermocline temperature difference in the tropics, the square law of the diffusion equation can be approximated as linear. To within this approximation, a two-slab diffusion model has the same form as the single-slab model.

A simple rearrangement shows that this is equivalent to the original model but with a slightly modified flux and a slightly increased feedback parameter:

C * d/dt(Ts) + κ*(Ts − Td) = F − λ*Ts

C * d/dt(Ts) = F − λ*Ts − κ*(Ts − Td)

C * d/dt(Ts) = F + κ*Td − (λ + κ)*Ts

This does not have a notable effect on the scaling of the volcanic forcing derived herein, nor on the presence of the anomalous post-eruption warming effect, which is seen to continue beyond the period of the AOD disruption and the tropical climate response.

On identifying inter-decadal variation in NH sea ice

[EDIT] above image has been updated with Dec 2015 data.

Introduction

The variation in the magnitude of the annual cycle in arctic sea ice area has increased notably since the minimum of 2007. This means that using a unique annual cycle fitted to all the data leaves a strong residual (or “anomaly”) in post-2007 years. This makes it difficult or impossible to visualise how the data has evolved during that period. This leads to the need to develop an adaptive method to evaluate the typical annual cycle, in order to render the inter-annual variations more intelligible.

Before attempting to assess any longer term averages, or to fit linear “trends” to the data, it is also necessary to identify any repetitive variations of a similar time-scale to that of the fitting period, in order to avoid spurious results.

See also: an examination of sub-annual variation
https://climategrog.wordpress.com/?attachment_id=460

Method

Short-term anomalies: adapting calculations to decadal variability.

Adaptation can be achieved by identifying different periods in the data that have different degrees of variability and calculating the average annual variation for each period.

Details of how the average seasonal profile for each segment was calculated are shown in the appendix.


Figure 1. The post-2007 interval. (click to enlarge)

The data was split into separate periods that were identified by their different patterns of variation. For example, the period from 1997 to 2007 has a notably more regular annual cycle (leading to smaller variation in the anomalies). The earlier periods have larger inter-annual variations, similar to those of the most recent segment.

An approximately periodic variation was noted in the data, in the rough form of a rectified sine wave. In order to avoid corrupting the average slope calculations by spanning from a peak to a trough in this pattern, a mathematical sine function was adjusted manually to approximate the period and magnitude seen in the data. It was noted that, despite the break in this pattern in the early 2000s, its phase is maintained until the end of the record and aligns with the post-2007 segment. However, there was a notable drop in level ( about 0.5×10^6 km^2 ). Labels indicate the timing of several notable climate events which may account for some of the deviations from the observed cyclic pattern. These are included for ease of reference without necessarily implying a particular interpretation or causation.



Figure 2. Identifying periods for analysis

The early period ( pre-1988 ) was also separated out, since it was derived from a notably different instrument on a satellite requiring much longer for global coverage. It is mostly from the Nimbus 7 mission, which was in an orbit with a 6 day global coverage flight path. Somehow this data was processed into a 2 day interval time-series, although documentation of the processing method seems light.

Later data, initially from the US defence meteorology platforms, starts in 1988 and has total polar coverage twice per day, producing a daily time-series.

In order to maintain the correct relationship between the different segments, the mean value of each segment, taken from the constant term of the harmonic model used to derive the seasonal variations, was retained. The anomaly time-series was reconstructed by adding the deviations from the harmonic model to this mean, for each segment in turn. The calculated anomalies were extended beyond the base period at each end, so as to provide an overlap from which to determine suitable points for splicing the separate records together. A light low-pass filter was applied to the residual ‘anomalies’ to remove high frequency detail and improve visibility.

The result of this processing can be compared to the anomaly graph provided at the source of this data. The extent of the lines showing mean rates of change indicate the periods of the data used. These were chosen to be an integral number of cycles of the repetitive, circa five year pattern.

Cavalieri et al [2] also report a circa 5 year periodicity:

The 365-day running mean of the daily anomalies suggests a dominant interannual variability of about 5 years



Figure 3. Showing composite adaptive anomaly



Figure 4. Showing Cryosphere Today anomaly derived with single seasonal cycle

Discussion

The average slope for each segment is shown, and these clearly indicate that the decrease in ice area was accelerating from the beginning of the era of satellite observations until 2007. The derived values suggest a roughly parabolic acceleration. This was a cause for legitimate concern around 2007, and there was much speculation as to what it implied for the future development of arctic ice coverage. Many scientists suggested “ice free” summers by 2013 or shortly thereafter.

The rate of ice loss since 2007 is very close to that of the 1990s but is clearly less pronounced than it was from 1997 to 2007, a segment of the data which in itself shows a clear downward curvature, indicating accelerating ice loss.

Since some studies estimate that much of the ice is now thinner, younger ice of only one or two years of age, recent conditions should be a more sensitive indicator of change. Clearly the predicted positive feedbacks, run-away melting and catastrophic collapse of the Arctic ice cover are not in evidence. The marked deceleration since 2007 indicates that either the principal driver of the melting has abated or there is a strongly negative regional feedback operating to counteract it.

The 2013 summer minimum is around 3.58 million km^2. Recent average rate of change is -0.043 million km^2 per year.

While it is pointless to suggest any one set of conditions will be maintained in an ever varying climate system, with the current magnitude of the annual variation and the average rate of change shown in the most recent period, it would take 83 years to reach ice free conditions at the summer minimum.

Some have preferred to redefine “ice free” to mean less than 1 million km^2 of sea ice remaining at the summer minimum. On that basis the “ice free” summer figure reduces to 61 years.

Conclusion

In order to extract and interpret inter-decadal changes in NH sea ice coverage, it is essential to characterise and remove short-term variation. The described adaptive method allows a clearer picture of the post 2007 period to be gained. It shows that what could be interpreted as a parabolic acceleration in the preceding part of the satellite record is no longer continuing and is replaced by a decelerating change in ice area. At the time of this analysis, the current decadal rate of change is about -430,000 km^2 per decade, against an annual average of around 8.7 million and a summer minimum of around 3.6 million km^2.

Whether the recent deceleration will continue or revert to the earlier acceleration is beyond current understanding and prediction capabilities. However, it is clear that it would require a radical change for either an ice free ( or a less than one million km^2 ) arctic summer minimum to occur in the coming years.

Unfortunately data of this quality covers a relatively short period of climate history. This means it is difficult to conclude much except that the earlier areal acceleration has changed to a deceleration, with the current rate of decline being about half that of the 1997-2007 period. This is clearly at odds with the suggestion that the region is dominated by a positive feedback.

It is also informative to put this in the context of the lesser, but opposite, tendency of increasing sea ice coverage around Antarctica over the same period.


Figure 5. Showing changes in Arctic and Antarctic sea ice coverage





Appendix: approximating the typical seasonal cycle

Spectral analysis of the daily ice area data shows many harmonics of the annual variation, the amplitude generally diminishing with frequency. Such harmonics can be used to reconstruct a good approximation of the asymmetric annual cycle.

Here the typical cycle of seasonal variation for each period was estimated by fitting a constant plus seven harmonics model to the data ( the 7th harmonic having a period of about 52 days ). This is similar to the technique described by Vinnikov et al [1]. The residual difference between the data and this model was then taken as the anomaly for the segment and low-pass filtered to remove 45 day ( 8th harmonic ) and shorter variations.
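A minimal sketch of such a harmonic fit ( placeholder daily series; not the original code used for the analysis ):

```python
# Fit a constant plus 7 annual harmonics by linear least squares and take
# the residual as the anomaly for the segment.
import numpy as np

day = np.arange(3650.0)                       # ten years of daily samples
area = 8.7 - 4.5 * np.cos(2 * np.pi * day / 365.25)   # placeholder series

cols = [np.ones_like(day)]
for h in range(1, 8):                         # harmonics 1..7 ( 7th ~52 d )
    w = 2 * np.pi * h * day / 365.25
    cols += [np.cos(w), np.sin(w)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, area, rcond=None)
anomaly = area - X @ coef                     # residual for this segment
```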

This reveals an initial recovery from the 2007 minimum and a later drift downwards to a new minimum in 2012. By the time of the annual minimum for 2013, another strong recovery seems to have started.

This repetitive pattern, which was identified in the full record, is used to define trough-to-trough or peak-to-peak periods over which to calculate the average rate of change for each segment of the data. This was done by fitting a linear model to the unfiltered anomaly data ( using a non-linear least squares technique).

Supplementary Information

The fitted average rate of change for the respective periods, in chronological order ( million km^2 per year ):

pre 1988 : -0.020
1988-1997 : -0.047
1997-2007 : -0.088
post 2007 : -0.043

The cosine amplitude, half the total annual variation, of the fundamental of the harmonic model for successive data segments ( million km^2 )

pre 1988 -4.43
1988-1997 -4.47
1997-2007 -4.62
post 2007 -5.14

Mathematical approximation of repetitive pattern used to determine sampling intervals, shown in figure 2.

cosine period 11.16 years ( 5.58 year repetition )
cosine amplitude 0.75 x 10^6 km^2

Resources

Data source: http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008
(data downloaded 16th Sept 2013)

An extensive list of official sources of both arctic and antarctic sea ice data can be found here:
http://wattsupwiththat.com/reference-pages/sea-ice-page/

 
[1] Vinnikov et al 2002
“Analysis of seasonal cycles in climatic trends with application to satellite observations of sea ice extent”
http://onlinelibrary.wiley.com/doi/10.1029/2001GL014481/pdf

 
[2] Cavalieri et al 2003
“30-Year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability”

http://www.meto.umd.edu/~kostya/Pdf/Seaice.30yrs.GRL.pdf

Amplitude Modulation Triplets

Understanding of “beats” and amplitude modulation requires going back to basics to distinguish two similar treatments that are often confused with one another.

Acoustic Beats
The following link shows the modulation which leads to the acoustic beats phenomenon and contains audio examples to listen to:
http://www.animations.physics.unsw.edu.au/jw/beats.htm
Frequencies in amplitude modulation:

The basic trigonometrical identity [1] that is used to relate modulation to interference patterns is this:

cos a * cos b = 0.5 * ( cos (a+b) + cos (a-b) )

If a physical system contains modulation as expressed by the multiplication on the left of the equation, spectral analysis will split things into a series of additive components and we will have to interpret what we find in the spectrum as being the sum and the difference of the physical frequencies that are modulating each other.

Superposition (beats).

In the other direction, the superposition ( addition ) of two signals can be converted back to modulation:

cos( f1·t ) + cos( f2·t ) = 2 cos( (f1 + f2)·t / 2 ) * cos( (f1 − f2)·t / 2 )

which, renaming the variables using a and b:

cos a + cos b = 2 cos( (a + b)/2 ) * cos( (a − b)/2 )

So the presence of two frequencies seen in a Fourier spectrum is equivalent to physical modulation of their average frequency by half the difference of their frequencies. This is a mathematical identity, the two interpretations are equivalent and interchangeable, thus this is totally general and independent of any physical system where this sort of pattern may be observed. If these kind of patterns are found, the cause could be either a modulation or superposition.
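A quick numerical check of this equivalence ( arbitrary example frequencies ):

```python
# The sum of two cosines is exactly equal to the product ( modulation )
# form given by the identity above.
import numpy as np

t = np.linspace(0.0, 100.0, 4096)
f1, f2 = 0.50, 0.42
summed = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
product = 2 * np.cos(np.pi * (f1 + f2) * t) * np.cos(np.pi * (f1 - f2) * t)

assert np.allclose(summed, product)   # identical to floating point accuracy
# a Fourier transform of either form shows peaks at f1 and f2, not at the
# modulation frequencies (f1+f2)/2 and (f1-f2)/2
```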

With perfect sampling, the two forms are mathematically identical, and what would be found in the spectrum is again the left-hand side: the two additive signals. However, what happens to the modulating envelope in the climate system may well mean that the faster term gets smoothed out, or that the sampling interval and all the averaging and data processing breaks it up. The longer, lower frequency signal may be all that is left, and then that is what will show up in the spectrum.

This is similar to what is called “beats” in an acoustic or musical context, except that the ear perceives twice the real physical frequency, since human audition senses the amplitude variation: the change in volume, not the frequency of the modulation. The amplitude peaks twice per cycle, so what we hear as two “beats” per second is a modulation of 1 hertz. Care must be taken when applying this musical analogy to non-acoustic cycles such as those in climate variables.

Also, if one part ( usually the faster one ) gets attenuated or phase delayed by other processes in the climate, it may still be visible, but the mathematical equivalence is gone and the two, now separate, frequencies are detected.

Triplets

Since the ‘side-lobe’ frequencies are symmetrically placed about the central frequency, this creates a symmetric pair of frequencies of equal magnitude whose frequencies are the sum and the difference of the originals. This is sometimes referred to as a ‘doublet’.

If the two cosine terms are equal, as shown above, neither of the original signal frequencies remains. However, if the higher frequency is of larger amplitude, a residual amount of it will remain, giving rise to a ‘triplet’ of frequencies. This is what is usually done in radio transmission of an amplitude modulated signal ( AM radio ). In that case the central peak is usually at least twice the magnitude of each of the side bands.

It can be seen mathematically from the equations given above that, if both inputs are of equal amplitude, the central frequency will disappear, leaving just the pair of side frequencies. It may also be so small as to be no longer distinguishable from background noise in real measurements.

All of this can confound detection of the underlying cycles in a complex system because the periods of the causative phenomena may be shifted or no longer visible in a frequency analysis.

There are many non-linear effects, distortions and feedbacks that will deform any pure oscillation and thus introduce higher harmonics. Indeed, such distortions will be the norm rather than the exception, so many harmonics would be expected to be found.

As a result, even identifying the basic cause of a signal can be challenging in a complex system with many interacting physical variables.

The triplet is a useful pattern to look for, being suggested by the presence of equally spaced frequencies, although the side peaks may be attenuated by other phenomena and are not always of equal height as in the abstract example.

Examples of these kind of patterns can be found in variations of Arctic ice coverage.

https://climategrog.wordpress.com/?attachment_id=757
https://climategrog.wordpress.com/?attachment_id=756
https://climategrog.wordpress.com/?attachment_id=438

 References:

Sum and Product of Sine and Cosine:

Data corruption by running mean “smoothers”

[See update at end of article]

Running means are often used as a simple low pass filter, usually without understanding of its defects. Often it is referred to as a “smoother”. In fact it does not even “smooth” very well, since it lets through enough high frequencies to give a spiky result.

Running means are fast and easy to implement. Since most people have some understanding of what an average does, the idea of a running average seems easily understood. Sadly it’s not that simple and running averages often cause serious corruption of the data.

So it smooths the data to an extent, but what else does it do?

The problem with an evenly weighted average is that the data is effectively masked by a rectangular window. The frequency response of such a rectangular window is the sinc function [1] and thus the effect on the frequency content of the data is to apply the sinc function as a frequency filter. The sinc function oscillates and has negative lobes that actually invert part of the signal it was intended to remove. This can introduce all sorts of undesirable artefacts into the data.
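The response is easy to compute; a minimal sketch of the sampled-sinc response of an N-point running mean:

```python
# Frequency response of an N-point running mean: a sampled sinc
# ( Dirichlet ) kernel with lobes of alternating sign.
import numpy as np

N = 12                                  # e.g. a 12-month running mean
f = np.linspace(1e-6, 0.5, 500)         # frequency in cycles per sample
H = np.sin(np.pi * f * N) / (N * np.sin(np.pi * f))
# H is ~1 near f = 0, has zeros at f = k/N, and negative lobes in between;
# frequencies falling in a negative lobe come through inverted
```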

An example of one of the problems can be seen here:
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
Figure 1. Comparing the effects of different filters on a climate data time series ( 60 month running mean vs 30 month triple running mean [blue] ).

It can be noted that the peaks and troughs in the running mean are absolutely wrong. When the raw data has a peak the running mean produces a trough. This is clearly undesirable.

The data is “smoother” than it was but its sense is perverted. This highlights the difference between simply “smoothing” data and applying appropriately chosen low-pass filter. The two are not the same but the terms are often thought to be synonymous.

Some other filters, such as the gaussian, are much better behaved; however, a gaussian response never falls to zero, so there is always some leakage of what we would like to remove. That is often acceptable, but sometimes not ideal.

Figure 2. Showing the magnitude of the frequency responses of gaussian and running mean filters. However, it should be noted that every other lobe of the running mean response is negative in sign, actually inverting the data.

Below is a comparison of two filters ( running mean and gaussian ) applied to some synthetic climate-like data generated from random numbers. Click to see the full extent of the graph.

Figure 3. Showing artefacts introduced by simple running mean filter.

As well as the inversion defect, which is again found here around 1970, some of the peaks get bent sideways into an asymmetric form. In particular, this aberration can be noted around 1958 and 1981. In comparing two datasets in order to attribute causation or measure response times of events, this could be very disruptive and lead to totally false conclusions.

 

Triple running mean filters

Another solution is to improve the running mean’s frequency response.

The sinc function has the maximum of its troublesome negative lobe where πx = tan( πx ). Solving this gives x = 1.4303. [Thanks to Peter Mott for pointing out the error.] However, simply targeting the peak of the lobe does not produce optimal results: somewhat smaller ratios, such as the 1.3371 used below, leave less residual.

Now if a second running mean is passed after the first one, with a period shorter by this ratio, it will filter out the inverted data… and produce another, smaller, positive lobe.

A third pass will kill the new lobe, and by this stage any residual leakage is small enough that it is probably no longer a problem.

The triple running mean has the advantage that it has a zero in the frequency response that will totally remove a precise frequency, as well as letting very little of the higher frequencies through. If there is a fixed, known frequency to be eliminated, this can be a better choice than a gaussian filter of similar period.

The two are shown in the plot above and it can be seen that a triple running mean does not invert the peaks as was the case for the simple running mean that is commonly used.

Example.

With monthly data it is often desirable to remove an annual variation. This can be approximated by the 12, 9, 7 month triple running mean shown:

12 / 1.3371 = 8.975
12 / 1.3371 / 1.3371 = 6.712

It can be seen that the 9 month second stage is a fairly accurate rounding, but the 7 month final stage is rather approximate. However, the error in the third stage is not large.
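For those who prefer code to spreadsheets, a minimal sketch using pandas ( the scripts linked in the appendix are the reference implementation ):

```python
# Triple running mean for monthly data: cascade three centred moving
# averages of 12, 9 and 7 months.
import pandas as pd

def triple_running_mean(series: pd.Series) -> pd.Series:
    return (series.rolling(12, center=True).mean()
                  .rolling(9, center=True).mean()
                  .rolling(7, center=True).mean())
# note: an even 12-month window cannot be exactly centred, giving the
# 0.5 month shift discussed in the update at the end of this article
```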



Figure 4. Comparing frequency response of gaussian and triple running mean filters.

A similar operation on daily data would use windows of 365, 273 and 204 days:

365.242 / 1.3371 = 273.16
365.242 / 1.3371 / 1.3371 = 204.29

Another advantage is that the data from the r3m filter really is “smooth”, since it does not let through the high frequencies that a simple running mean does. If the aim is simply to “smooth” the data, rather than to target a specific frequency, an r3m filter with half the nominal width often gives a smoother result without losing as much information, as was shown in figure 1.

This defect in the smoothing can be seen in the example plot: there is, for instance, a spike near 1986 in the simple running mean. Worst of all, this is not even a true spike in the data getting through the filter; it is an artefact.

Another example is the official NOAA [2] presentation of sun spot number (SSN) taken from SIDC [3], examined here:

In 2004, Svalgaard et al published a prediction of the cycle 24 peak [4]. That prediction has proved to be remarkably accurate. It would be even more remarkable if SIDC were to apply a “smoothing” filter that did not invert and displace the peak and reduce its value.

Using direct polar field measurements, now available for four solar cycles, we predict that the approaching solar cycle 24 (~2011 maximum) will have a peak smoothed monthly sunspot number of 75 ± 8, making it potentially the smallest cycle in the last 100 years.

SIDC processing converts a later trough into the peak value of cycle 24. The supposed peak aligns with the lowest monthly value in the last 2.5 years of data. Clearly the processing is doing more than the intended “smoothing”.

The filter used in this case is a running mean with the first and last points given reduced weighting. It is essentially the same and shares the same defects. Apparently the filter applied to the SIDC data was introduced by the Zürich observatory at the end of the 19th century, when all these calculations had to be done by hand ( and perhaps the defects were less well understood ). The method has been retained to provide consistency with the historical record. This practice is currently under review.

While it may have been a reasonable compromise in the 19th century, there seems little reason, other than ignorance of the problems, for using simple running mean “smoothers” in the 21st century.

Conclusion

Referring to a filter as a “smoother” is often a sign that the user is seeking a visual effect and may be unaware that this can fundamentally change the data in unexpected ways.


Wider appreciation of the corruption introduced by using running mean filters would be beneficial in many fields of study.

 

Refs.

  [1] Plot of sinc function http://mathworld.wolfram.com/SincFunction.html

  [2] NOAA/Space Weather Prediction Center http://www.swpc.noaa.gov/SolarCycle/index.html

  [3] SIDC sunspot data: http://sidc.oma.be/sunspot-data/
SIDC readme: http://sidc.oma.be/html/readme.txt
SIDC applies a 13 point running mean with the first and last points weighted 50%. This is a slight improvement on a flat running mean but shares the same tendency to invert certain features in the data.

[4] Svalgaard, L., E. W. Cliver, and Y. Kamide (2005), Sunspot cycle 24: Smallest cycle in 100 years?, Geophys. Res. Lett., 32, L01104, doi:10.1029/2004GL021664. http://www.leif.org/research/Cycle%2024%20Smallest%20100%20years.pdf

Appendix

Scripts to automatically effect a triple-running-mean are provided here:
https://climategrog.wordpress.com/2013/11/02/triple-running-mean-script/

Example of how to effect a triple running mean on Woodfortrees.org :
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17

Example of triple running mean in spread sheet:
https://www.dropbox.com/s/gp34rlw06mcvf6z/R3M.xls

[Update]

The main object of this article was to raise awareness of the strong, unintentional distortions introduced by the ubiquitous running mean “smoother”.

Filter design is a whole field of study in itself, of which even an introduction would be beyond the scope of this short article. However, it was also an aim to suggest some useful replacements for the simple running-average and to provide implementations that can easily be adopted. To that end, a small adjustment has been made to the r3m.sh script provided and another higher quality filter is introduced:

https://climategrog.wordpress.com/?attachment_id=659

A script to implement a low-pass Lanczos filter is provided here: https://climategrog.wordpress.com/2013/11/28/lanczos-filter-script/

An equivalent high-pass filter is provided here: https://climategrog.wordpress.com/2013/11/28/lanczos-high-pass-filter/

High-pass filters may be used, for example, to isolate sub-annual variability in order to investigate the presence or absence of a lunar influence in daily data.

An example is the 66 day filter used in this analysis:
https://climategrog.wordpress.com/?attachment_id=460

The following points arose in discussion of the article.

Vaughan Pratt points out that shortening the window by a factor of 1.2067 (rather than 1.3371 originally suggested in this article) reduces the stop-band leakage. This provides a useful improvement.

Further optimisation can be attained by reducing negative leakage peaks at the cost of accepting slightly more positive leakage. Since the residual negative peaks are still inverting and hence corrupting the data, this will generally be preferable to simply reducing net residuals irrespective of sign.

The asymmetric triple running-mean is shown in the comparison of the frequency responses, along with a Lanczos filter, here:
https://climategrog.wordpress.com/?attachment_id=660

The Pratt configuration and the asymmetric 3RM result in identical averaging intervals when set to remove the annual cycle from monthly data. Both result in a choice of 8,10 and 12 month windows.

The difference will have an effect when filtering longer periods or higher resolutions, such as daily data.

If this is implemented in a spreadsheet, it should be noted that each average over an even interval will result in a 0.5 month shift in the data since it is not correctly centred. In a triple running-mean this results in 1.5 months shift with respect to the original data.

In this case the 1.3371 formula originally suggested in the article, giving 12,9,7 month averages and producing just one 0.5 month lag, may be preferable.

None of these issues apply if the scripts provided accompanying the article are used, since they all correctly centre the data.

A more technical discussion of cascading running-mean filters to achieve other profiles can be found in this 1992 paper, suggested by Pekka Pirilä, which should serve as a starting point for further study of the subject.
http://www.cwu.edu/~andonie/MyPapers/Gaussian%20Smoothing_96.pdf

Cyclic components in ice cover

Draft

Introduction

Background

To understand the processes driving polar ice coverage it is necessary to identify cyclic variations. Some would wish to trivialise climate into AGW plus random “stochastic” variability. This is clearly unsatisfactory: much of the variation is more structured than may be apparent from staring at the ups and downs of a time series.

There are many cyclic or pseudo-cyclic repetitions, and before attempting to fit a linear trend to the data by regression it is necessary to identify them and either remove them or include them in the model. Failure to do so will lead to invalid, meaningless “trends” being derived from the data. See the cosine warming example, ref 1.

One means of testing for the presence of periodicity in a dataset is spectral analysis. In particular spectral power distribution can be informative.

The power spectrum can be derived by taking the Fourier transform of the autocorrelation function. The latter is, in its own right, useful for identifying the presence or not of cyclic change in the data.

One condition for useful results from Fourier analysis is that the data should be stationary ( ref 1 ). Amongst other things this requires that the mean of the data should be fairly constant over time. The FT of a series with a rising or falling trend will produce a whole series of spurious peaks, resulting from the construction of an infinite series out of a ramping, finite sample.

There are several tests and definitions of stationarity and it is to some degree a subjective question without a black or white answer. A commonly used test is the augmented Dickey-Fuller, unit root test: ADF. (ref 2)

If a time-series is found not to satisfy the condition of stationarity, a common solution is to examine instead the rate of change. This is often more desirable than other ‘detrending’ techniques, such as subtracting some arbitrary mathematical “trend” like a linear fit or higher polynomial. Unless there is a specific reason for fitting such a model to remove a known physical phenomenon, such detrending will introduce non-physical changes into the data. Differencing is a linear process whose result is derived purely from the original data and thus avoids injecting arbitrary signals.

The time differential ( as approximated by the first difference of the discrete data ) will often be stationary when the time-series is not. For example a “random walk”, where the data is a sequence of small random variations added to the previous value, will be a series of random values in its differential and hence stationary. This is particularly applicable to climatic data, like temperature, where last year’s or last month’s value will determine to a large extent the next one. This kind of simple autoregressive model is often used to create artificial climate-like series for testing.
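This is easily illustrated; a short sketch using Python's statsmodels in place of the R function used below:

```python
# A random walk fails an augmented Dickey-Fuller test, while its first
# difference passes. adfuller() is the statsmodels equivalent of the
# R adf.test() referred to in the text.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(0.0, 1.0, 2000))   # non-stationary by construction

p_walk = adfuller(walk)[1]           # large p-value: cannot reject a unit root
p_diff = adfuller(np.diff(walk))[1]  # tiny p-value: the difference is stationary
```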

To ensure there is no step change, as the end of the data is wrapped around to the beginning, it is usual to also apply a window function that is zero at each extreme and fades the data down at each end. This has the disadvantage of distorting the longer term variations but avoids introducing large spurious signals that can disrupt the whole spectrum.

Most window functions produce some small artificial peaks or ‘ringing’ either side of a real peak. Some do this more than others. The choice of window function depends to some extent on the nature and shape of the data. The choice is often a compromise.

Method

Initial examination of the autocorrelation function of Arctic ice area data revealed the presence of notable periodicity other than the obvious annual cycle. Some recent published work is starting to comment on various aspects of this. (ref 3)

As a first step the annual cycle was removed by a triple running mean filter (ref 4) with a zero at 365 days and designed to avoid the usual distortions caused by simple running mean “smoothers”.

If a Fourier transform were done with the annual cycle still present, its magnitude, at least an order of magnitude greater than anything else, would reduce the accuracy of the FFT and also introduce noticeable windowing artefacts in the 0.5 to 2.0 year period range. For this reason it was removed.

The adf.test() function in R returned values indicating that it was not possible to assume the data was stationary. Conversely, the test applied to the derivative of the time series indicated strongly that the latter was stationary.

Non-stationarity is probably caused by long term trend or long period cyclic variation (long relative to the duration of the dataset).

Taking the rate of change reduces a linear trend to a constant and attenuates the amplitude of long periods by an amount proportional to the frequency, making the series more amenable to analysis. The 365 day filter will also attenuate periods of less than about 5 years to some degree, and this needs to be borne in mind, or corrected for, when relative magnitudes in the spectrum are considered.

ref 1 : Cosine warming
https://climategrog.wordpress.com/?attachment_id=209

ref 1 : stationarity requirement for FFT

ref 2 : Dickey-Fuller ( ADF )
http://homepages.strath.ac.uk/~hbs96127/adfnotes.pdf

ref 3 : arctic periodicity

ref 4 : triple running mean filters
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/