The Death Spiral is Dead


This year, as every year, there has been much excitement in the media about ‘catastrophic’ melting of Arctic sea-ice, run-away melting, tipping points, death spirals and “ice-free” summers.

There has been the usual guessing game about exactly when the minimum will occur (or has occurred) and what the ice area or extent will be on that day.

Claims of ‘ice-free’ conditions at some time in the summer have been bandied about for years in various forms but, as the reality sinks in that it is not as bad as some had claimed, the dates when this is expected to happen have often been pushed out beyond the life expectancy of those making the claims.

The meaning of “ice-free” has also been the subject of some serious misrepresentation.

IPCC AR5 states:

Year-round reductions in Arctic sea ice are projected for all RCP scenarios. A nearly ice-free[11] Arctic Ocean in the summer sea-ice minimum in September before mid-century is likely for RCP8.5[12] (medium confidence). {2.2.3, Figure 2.1}

RCP8.5 is the most extreme emissions ‘pathway’ and is already likely to be avoided.
Footnote 11 defines the term “nearly ice-free”:

[11] When sea-ice extent is less than one million km^2 for at least five consecutive years.

IPCC consistently uses the term “nearly ice-free”, yet this often gets perverted to “ice-free” in the media and scientific loose talk and the significant rider “for at least five consecutive years” gets dropped.

The problem with this obsessive focus on a single data point out of 365 is that there is a lot of short-term, weather-driven variability that can affect the exact timing and size of the minimum in ice coverage. Since the main interest (outside maritime navigational requirements) is the hope of finding some indication of long-term changes in climate, this is not a very instructive way to use the detailed daily data available.

There have been three notably low summer minima in recent years: 2007, 2012 and 2016. The 2012 event was the lowest in the satellite record going back to 1979. The other two years tie for second place, meaning the current minimum is indistinguishable from the one which occurred nine years previously and the lowest one lies between the two. This is incompatible with claims of run-away melting. Run-away melting was a reasonable hypothesis and cause for concern in 2007, when the relatively short record could be characterised as showing an increasing rate of change, but that interpretation is not compatible with what has happened since.

In the golden days of “run-away” melting, leading up to the ‘catastrophic’ 2007 Arctic sea-ice minimum, this was often presented as the ‘canary in the coal-mine’ for global warming and climate change. It is still referred to as such.

The aim here is to try to separate the changes in the annual cycle from the weather-driven and other short-term variability and to see if we can learn something about the life expectancy of our canary.

Method and Results

In order to determine an objective timing for the sea-ice minimum, it is necessary to filter out short-term variability. Visual inspection of the northern hemisphere sea-ice data [1] shows that there are periods of about a week in length with alternating greater and lesser rates of change throughout most of the year. Spectral analysis also reveals notable cyclic components at around 4, 10 and 16 days. This means that the date of the minimum tends to fluctuate rather erratically over the first three weeks of September.

The criterion adopted here for having sufficiently removed the short-term variability, in order to get a stable estimate of the turning point (minimum sea-ice extent) of the annual cycle, was that there be only one change of direction. Since the annual cycle has only one minimum, anything producing multiple localised minima is noise in this context.

To detect this condition, the rate of change of the NSIDC NH sea-ice extent was taken and a 3-sigma gaussian low-pass filter applied. The length of the filter was increased progressively until every year in the data had a unique turning point (i.e. a single change of sign in the time derivative).

This was achieved in practice by a single operation: convolution of the daily data with a diff-of-gaussian kernel [*] of the chosen length. The data point at the change of sign was taken as the date of the minimum in the annual cycle for each year.
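The convolution and turning-point detection steps can be sketched as follows. This is a minimal illustration in Python/numpy, not the code actually used; the kernel truncation width and the test signal are assumptions for the sake of the example.

```python
import numpy as np

def diff_of_gauss_kernel(sigma, width=3):
    """Derivative-of-gaussian kernel: convolving data with this gives
    a low-pass filtered rate of change in a single operation."""
    x = np.arange(-width * sigma, width * sigma + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    g /= g.sum()                      # unit-area gaussian
    return np.gradient(g)             # its derivative

def turning_points(extent, sigma, width=3):
    """Indices where the smoothed derivative changes sign from
    negative to positive, i.e. candidate minima of the cycle."""
    rate = np.convolve(extent, diff_of_gauss_kernel(sigma, width), mode="same")
    half = width * sigma              # discard filter edge effects
    s = np.sign(rate[half:-half])
    return np.where(np.diff(s) > 0)[0] + half

# one synthetic 'year' with its minimum at mid-series
days = np.arange(365)
extent = np.cos(2 * np.pi * days / 365.0)
print(turning_points(extent, sigma=5))
```

In the actual procedure, sigma would be increased until `turning_points` returns exactly one index for every year of the record.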

As is typically the case for satellite records, the data processed by NSIDC come from a number of different satellites, as instruments and spacecraft fail. They are cross-calibrated as well as possible to provide one continuous time series. One notable change in the data is that the earlier orbits only provided full surface coverage every six days [2] and were originally processed to give a data point every three days. These have apparently been interpolated to give a homogeneous time series. Obviously, data with only three-day time resolution will be less accurate for determining the exact date of the turn-around. To mark the different nature of the early data, this section is plotted in a different colour.

Comparison to the minima of the 5-day trailing average presented by NSIDC shows the expected reduction in the variability that was the aim of this study. This manifests primarily as a reduced occurrence of later minimum dates. The pre-1988 data change little. This is probably because the blurring effect of the extended six-day orbital coverage is similar to the gaussian smoothing applied in this analysis.

Since the rate of change at the beginning of refreezing tends to be faster than at the end of melting, removing higher frequencies with a filter introduces a general drift to an earlier date of minimum ice extent. This simply reflects the asymmetry of the annual cycle. This underlines the caution needed in merging the two sections of the dataset: the earlier Nimbus and the later military satellites had different orbital resolution. The data are not fully compatible, and comparing dates of ice minimum without suitable filtering will lead to an exaggerated impression of later melting in the post-Nimbus years.


There is a clearly non-linear character to the variation in the timing of the annual minimum in Arctic sea-ice extent from NSIDC. The turning point reached its latest date in 2007 and has been getting progressively earlier ever since.
That is to say, the freezing season has been beginning steadily earlier for about a decade, having drifted in the opposite sense for the previous two decades.

There is much talk, with naive assumptions, of the effects of reduced ice area on the energy budget of the region. The simplistic argument is that less solar energy will be reflected by water than by an ice or snow surface, leading to further warming and thus ever more melting. However, there are several other effects which need to be measured and taken into account to determine whether this extra energy input will cause an overall positive feedback (and accelerated melting) or whether it will be countered by other effects. Open water emits more infra-red energy to space, and evaporation of surface water in the windy Arctic region will remove substantial amounts of heat from the surface. Both of these effects continue during all 12 months of the year, not just the short summer. It is far from obvious how these factors and others will balance out. Climate models are notoriously poor at reproducing both Arctic temperatures and sea-ice coverage (they tend to under-estimate both temperature and ice loss). Clearly the ‘basic physics’ for this region is poorly understood.

The observational data can help shed some light on how the region is responding to the reduced ice cover since the 2007 minimum.

The decadal rate of melting has also reduced since 2007, as measured by the estimates of ice area produced by the University of Illinois [4].

The derivation of that graph is detailed here:


There are two ways to interpret the observational data:

1) the net feedback from open water is negative, not positive, and run-away melting was an erroneous interpretation. It is not happening.

2) the feedbacks are not the key driver of Arctic sea-ice melting; there is another external forcing, such as N. Atlantic sea surface temperature and ocean currents, which is dominant, and run-away melting was an erroneous interpretation. It is not happening.

The death spiral is dead.


[1] The NH sea-ice extent data are provided by NSIDC as daily anomalies from an average cycle, plus the annual cycle which was subtracted. The “near real time” data for the current, incomplete year are provided as a separate file in the same format.

Documentation for the NSIDC data :

[2] DMSP satellite F8, flown by the US Department of Defense, was the first of a series of spacecraft and flew from 7 September 1987. Data from this family are combined with those from the earlier NASA-provided Nimbus-7 spacecraft, which flew a similar but somewhat different polar orbit.

“The Nimbus-7 observatory provides global coverage every six days, or every 83 orbits.”

This section of the combined record is less reliable for estimating the turning point due to its lack of time resolution. The data have to be in-filled to produce a daily series, and the contributions to a single ice-extent figure are actually spread over six days of flight time.

The DMSP 5D family spacecraft, provide daily resolution data :

[3] An implementation of the diff-of-gaussian filter is presented here:

[4] The sea-ice area data used in the decadal trend analysis are provided by Cryosphere Today team at U. Illinois.

Processed data of turning points after 14d gaussian filter.

1979.69756852    0.6976
1980.68047869    0.6805
1981.69076810    0.6908
1982.69010575    0.6901
1983.68670554    0.6867
1984.69425698    0.6943
1985.68811892    0.6881
1986.68198074    0.6820
1987.67584268    0.6758
1988.68339412    0.6834
1989.69642138    0.6964
1990.68754535    0.6875
1991.69783470    0.6978
1992.68074494    0.6807
1993.68829638    0.6883
1994.68215826    0.6822
1995.69792343    0.6979
1996.68357158    0.6836
1997.69386093    0.6939
1998.69593655    0.6959
1999.68979843    0.6898
2000.69187404    0.6919
2001.69394966    0.6939
2002.68781154    0.6878
2003.69810089    0.6981
2004.69196277    0.6920
2005.70225212    0.7023
2006.69611400    0.6961
2007.70366544    0.7037
2008.69752732    0.6975
2009.69686503    0.6969
2010.69620273    0.6962
2011.69280252    0.6928
2012.69761605    0.6976
2013.69147793    0.6915
2014.69629146    0.6963
2015.69015334    0.6902
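The first column above is a fractional year (the second column repeats its fractional part). A small helper like the following, purely illustrative and not part of the original processing, converts such a value to a calendar date:

```python
from datetime import datetime, timedelta

def frac_year_to_date(fy):
    """Convert a fractional year such as 2012.6976 to a calendar date,
    accounting for leap-year length."""
    year = int(fy)
    start = datetime(year, 1, 1)
    year_seconds = (datetime(year + 1, 1, 1) - start).total_seconds()
    return start + timedelta(seconds=(fy - year) * year_seconds)

print(frac_year_to_date(2012.69761605).date())   # 2012-09-12
```

So the filtered turning points all fall in the first half of September, consistent with the discussion above.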

On the relationship of ACE and SST


Surface ocean temperature has been shown to be a key condition for the development of tropical and cyclonic storms, though there are strongly conflicting opinions on its impact on long-term trends. The present study looks at the correlation of accumulated cyclone energy and sea surface temperature in the N. Atlantic basin and finds common periodicities linking the two. Notably, a clear lunar period is revealed. The similarity between the post-2005 drop in cyclone energy and a similar drop in the record from the 1930-1950 period is also discussed.

Figure 1. Comparing North Atlantic SST to Accumulated Cyclone Energy

Data Sources:

Accumulated cyclone energy ( ACE ) is an estimation of kinetic energy contained in cyclone class storms, based on wind speed, storm extent and other information. This is catalogued both by storm and as annual totals. In the following, all ACE data [DS1] and discussion thereof, refer to the North Atlantic basin. Much research has been done recently to try to improve the earlier periods of the record. [1] [2]

The ACE metric is intended to provide better insight than simply counting the number of storms, since storm counts have been strongly affected by improvements in detection and reporting, and by possible statistical variation in the proportions of storm intensity. Many smaller storms are now reported that would not have been detected without modern surveying techniques [2]. Since storm energy originates in ocean heat energy (Gray 1968) [3], total cyclone energy is a more appropriate metric than non-dimensional quantities such as storm count when searching for a possible causal link.

Sea surface temperature data ( SST ) are taken from two processed data sources, both derived primarily from ICOADS source observations.

HadSST3 [DS2] is an “anomaly” dataset, i.e. deviations from the local seasonal mean of an arbitrary reference period.
Data from 0-70W, 0-60N were averaged.

ERSSTv3 [DS3] is a dataset of actual SST. A single offset was subtracted from the N. Atlantic mean to align it with HadSST3. NOAA provide a detrended N. Atlantic average which they call the AMO index; the data used here are the NON-detrended version, i.e. actual SST.

Data Processing:

Since ERSST has a strong annual variation, and HadSST3 has considerable high frequency noise and annual residuals, both were low-pass filtered using a 3-sigma gaussian filter with sigma = 12mo. This removes most variability below about 2 years in period.
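The smoothing step can be sketched as below: a generic gaussian low-pass in Python, with sigma given in samples (12 for monthly data) and the kernel truncated at ±3 sigma as described. This is an illustration, not the original code.

```python
import numpy as np

def gauss_lowpass(x, sigma, width=3):
    """Gaussian low-pass filter; kernel truncated at +/- width*sigma."""
    t = np.arange(-width * sigma, width * sigma + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()                        # unit gain at zero frequency
    return np.convolve(x, k, mode="same")

# monthly test series: a 10-year oscillation plus a 2-year one
t = np.arange(600)
slow = np.sin(2 * np.pi * t / 120)      # 10-year period
fast = np.sin(2 * np.pi * t / 24)       # 2-year period
```

With sigma = 12 months the theoretical gain at a 2-year period is exp(-2·pi^2·(12/24)^2) ≈ 0.007, while a 10-year oscillation passes at roughly 0.8 amplitude, which is why variability faster than about two years is effectively removed.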

Considerable adjustments are made to SST for a number of reasons, which are discussed on the Met. Office web site. In particular, substantial adjustments are made to the WWII period and subsequent years (many of which are similarly adopted in both data sources used here).

  1. ERI biases (North Atlantic): In the North Atlantic between 1970 and 1994, Engine Room Intake biases were generated from the means and standard errors from Kent and Kaplan 2006.
  2. Unknown measurements: AR(1) series with lag correlation 0.99 and range between 0 and 1 were used to assign a time varying fraction of unknown measurements to bucket. The remainder were set to be ERI measurements. The same value was used at all locations.
  3. ERI recorded as bucket: 30±10% of bucket observations were reassigned as Engine Room Intake measurements. One value per realisation was used and applied to all times and places after 1940.
  4. Canvas to rubber bucket: A linear switchover was assumed. The start point (all canvas) was chosen randomly between 1954 and 1957. The end point (all rubber) was chosen randomly between 1970 and 1980.

Most of the post-WWII cooling is the result of these somewhat speculative “bias corrections”, in particular the phasing in of the “canvas to rubber bucket” adjustment from 1954 to 1980. The adjustments made to HadSST3 were discussed in detail in the article On the adjustments to the HadSST3 data set.

The N.Atlantic annual cyclone season peaks in late summer. Annual ACE data were plotted at the mid-point of the year. Where ACE is shown low-pass filtered, a similar 3-sigma gaussian filter was used.

ACE data will have also been affected by changes in shipping routes and usage during the WWII period, possibly leading to significant non-reporting of storms well away from land.

In general all three datasets should be regarded as unreliable during the WWII period.


Similarities :

There is a strong similarity between the ACE and SST data, at both inter-decadal and inter-annual scales. The much-discussed 60-year pseudo-cyclic variability is clearly seen in both SST and ACE. The inter-annual variability is particularly close in the more reliable post-1950 period but shows a strong similarity throughout the record. Since cyclones derive their energy from the heat of the ocean surface [3], this is not particularly surprising, but it provides a means of comparing two fundamentally different measurements of climatic conditions.


There are two very visible discrepancies between the two metrics. War-time ACE is very low and probably reflects some degree of under-sampling in that period. However, the low activity runs from 1934 to 1949, a far broader period than can be accounted for by WWII disruption to shipping. Also, systematic airborne monitoring started in 1944 (Klotzbach & Landsea 2015) [4] and there was no noticeable increase in ACE until 1950, suggesting the lower level of activity in the period, though less reliably measured, is real. A detailed account of uncertainty in the HURDAT data is provided by Landsea & Franklin (2013) [5].

There is a large peak in ACE in 1893 that is opposed to the variation shown in the SST record. 1952 also shows a mismatch between the two records. With those two exceptions, the inter-annual variability in the two metrics follows quite closely throughout the record.

The post-2005 drop in ACE

Applying a low-pass filter to the annual ACE data allows a more direct comparison to SST.

Figure 2.

An interesting feature becomes apparent: unlike SST, the end of the cyclone data has dropped sharply since its peak in 2004/2005. This looks very similar to the trough that started in the 1930s.

This drop is also reported by Klotzbach & Landsea (2015) [4], which includes the most recent data.

Large, significant downward trends are present in accumulated cyclone energy in the Northern Hemisphere, the Southern Hemisphere, and globally.

It can also be noted that the inter-annual variations correlate much more closely during both the early and later 20th c. warming periods.

Other studies, such as Holland & Webster (2007) [6], based on storm count rather than ACE, seem to give a less clear picture of the SST linkage and, due to their earlier publishing date, do not include the more recent decline.

Figure 3. From Holland & Webster 2007, ( fig. 1 )

Common variability of ACE and SST can be examined by cross-correlation of the two time series. Common periodicities can be examined by spectral analysis of the cross-correlation function.
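The procedure just described can be sketched as follows (an illustrative numpy version, with annual sampling and synthetic series standing in for the real ACE and SST data):

```python
import numpy as np

def cross_spectrum(x, y, dt=1.0):
    """Power spectrum of the cross-correlation of two zero-meaned
    series: peaks mark periodicities common to both."""
    x = np.asarray(x) - np.mean(x)
    y = np.asarray(y) - np.mean(y)
    xc = np.correlate(x, y, mode="full")      # cross-correlation function
    power = np.abs(np.fft.rfft(xc))
    freq = np.fft.rfftfreq(len(xc), d=dt)     # cycles per year for dt=1
    return freq, power

# two annual series sharing a 9-year component
t = np.arange(160.0)
x = np.sin(2 * np.pi * t / 9)
y = np.sin(2 * np.pi * t / 9) + 0.3 * np.sin(2 * np.pi * t / 60)
freq, power = cross_spectrum(x, y)
```

The dominant peak of `power` then sits at the shared frequency; applied to the ACE and SST series, the circa 9 and 60 year peaks discussed below appear in the same way.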

Figure 4. Cross power spectrum N.Atl. ACE vs SST

The two main common periods are close to 9 and 60 years, with a smaller peak around 5.6 years. The circa 9y peak is centred on 9.07 years. It is noted that this is exactly the mean frequency corresponding to 8.85y and 9.3y, both of which are lunar cycles. The 8.85y period corresponds to the precession of the lunar line of apsides, and 9.3y is half the nodal precession period.
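The arithmetic behind that observation is easy to verify; note the mean is taken in frequency, not period:

```python
# mean of the two lunar frequencies, expressed as a period in years
f_mean = (1 / 8.85 + 1 / 9.3) / 2
print(round(1 / f_mean, 2))   # 9.07
```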

This circa 9.1-year periodicity would appear to be the same as that reported as common to Atlantic and Pacific SST by Muller et al (2013) [8] and conclusively shown to be of lunar origin by N. Scafetta [9] by examining data from the NASA / JPL ephemeris.

Though tides are traditionally measured by the height of the water level, it should be remembered that tides are longitudinal waves and the principal movement of water is horizontal, not vertical. Long period tidal variations have the potential to move vast volumes of surface water and hence heat, thereby modifying ocean currents. This seems to have received little recognition or quantitative study.

Figure 5. Showing detail of later 20th c. ACE and SST with major volcanoes marked.

While there is still too little data to confirm the length of the post 2005 reduction in activity, the dearth of land falling hurricanes in the U.S. has been widely reported, as has the failure of hurricane activity to match the “hottest year ever” claims made about SST.

If this lower level of activity continues in the next few years, it will suggest that the stagnant temperatures of “the pause”, like those of the 30s and 40s are not conducive to a high level of hurricane activity. This should then inform attempts to model cyclone formation and intensity.

While there is clearly a linkage between SST and cyclone energy, it is not a simple one-to-one relationship of warmer SST causing “more frequent and more extreme” storms. A more detailed study of other factors, such as wind shear and vorticity, is covered in Emanuel 1995 [14]. A brief overview of more recent work is provided by Vecchi and Soden 2007 [15].

Some studies of the correlation of SST and cyclone intensity and frequency [10] have, for reasons partly motivated by data quality issues, focused on the post-1960 period. However, it is clear that this will not give an accurate or representative assessment.

Whatever the motivation, focussing on a limited period dominated by a rising section of the data runs a substantial risk of drawing false conclusions.

Emanuel (2005) [10]

… show that this index [potential destructiveness of hurricanes derived from the third power of wind speed] has increased markedly since the mid-1970s. This trend is due to both longer storm lifetimes and greater storm intensities. I find that the record of net hurricane power dissipation is highly correlated with tropical sea surface temperature, … My results suggest that future warming may lead to an upward trend in tropical cyclone destructive potential…

The accuracy of the claims and calculations made in Emanuel (2005) was the subject of a reply to the paper, Gray (2005) [11], which found them to be invalid.

This paper’s conclusions are partially based on an analysis of poorly documented and biased tropical cyclone (TC) maximum wind (Vmax ) data from the Northwest (NW) Pacific. This study presents a sum of each tropical cyclone’s maximum winds – a very hard measurement to realistically obtain over a 3-5 decade period of changing maximum wind measurement techniques. The author then cubes these maximum wind speeds (Vmax3) of questionable accuracy. Such a cubing process greatly exaggerates any errors in the original Vmax estimate.

Almost any climate dataset will show either an upward or a downward trend. Suggesting evidence of causation from nothing more than the sign of the long-term trend has little justification. Extrapolating such a relationship outside the period of study, on nothing more than a baseless “if the current trend continues”, is more likely to be misleading than informative. There seems to be an unspoken assumption that the most likely outcome is that a trend calculated over some arbitrary period will continue, while no logical or statistical basis for that assumption is ever put forward.

A study showing correlation of variation in both directions, or of less trivial variations would be far more persuasive.

A rapid drop in hurricane energy is noted before the El Chichon and Mt Pinatubo eruptions (figure 5) and is already visible in SST. In fact, both events were preceded by a higher level of cyclone activity that coincided with a peak in the 9y lunar cycle in SST and also with peaks in solar activity. The temporal coincidence of all these factors raises the danger of false attribution.

Volcanoes play a significant role in attempts to model past climate. Generally, on a global scale, climate models produce little more than general GHG driven warming and a lot of internal noise. The cooling effect attributed to major stratospheric eruptions is important in putting a few bumps and dips in about the right place to give the visual impression that the model, at least roughly, resembles the recent climate history ( models are tuned primarily to the period from 1960 onwards ).

In multivariate regression studies, or when arbitrarily adjusting climate model parameters to reconcile model output with the climate record (Hansen et al 2002 [12]), false attribution of subsequent cooling to volcanic forcing would lead to unduly high estimates of climate sensitivity to volcanic forcing. Consequently, opposing radiative (warming) “forcings” will also likely be over-estimated.

A separate study of volcanic forcing in the tropics suggests a similar over-estimation of climate sensitivity.

Conclusion :

Secular trends underlying the obvious 60 year pseudo-periodic variability in both SST and cyclone data are probably determined more by sampling bias and the various adjustments and corrections applied to SST than they are by possible long term changes in the physical conditions. As such, caution should be applied before suggesting that they represent a real indication of a possible long term causal relationship.

The clear 60y and 9y periodicities draw attention to the need to investigate long period lunar tidal influences on ocean and atmospheric climate variables. This is needed in order to quantify and account for natural variability and also to avoid false attribution to other climate variables. In particular, large scale variations in surface transport of heat by ocean currents due to long period tides appears to be a much neglected factor.

There is a stronger correlation during the two major warming periods of the 20th century than in the plateaux or cooling periods. SST is not the single dominant factor determining cyclone energy. There is not a trivial linear relationship and it is probably not appropriate to project relationships derived from the study of data primarily from warming periods onto times of relatively stable SST conditions.

There is an indication that the drop in North Atlantic hurricane activity which apparently occurred during the pre-WWII warm period is repeating during the current temperature plateau.

Data sources:

ACE : annual accumulated cyclone energy derived from HURDAT2 database:
Documentation of Atlantic Hurricane Database Re-analysis Project:

UK Met. Office HadSST3.1.1.0 [0-70W, 0-60N] recovered from KNMI

ERSSTv3 non-detrended N. Atlantic SST:

Explanation of accumulated cyclone energy:

“ACE” = Accumulated Cyclone Energy – An index that combines the number of systems, how long they existed and how intense they became. It is calculated by squaring the maximum sustained surface wind in the system every six hours that the cyclone is a Named Storm and summing that up for the season. It is expressed in 10^4 kt^2.

The Atlantic hurricane database (or HURDAT) extends back to 1851. However, because tropical storms and hurricanes spend much of their lifetime over the open ocean – some never hitting land – many systems were “missed” during the late 19th and early 20th Centuries (Vecchi and Knutson 2008). Starting in 1944, systematic aircraft reconnaissance was commenced for monitoring both tropical cyclones and disturbances that had the potential to develop into tropical storms and hurricanes. This did provide much improved monitoring, but still about half of the Atlantic basin was not covered (Sheets 1990). Beginning in 1966, daily satellite imagery became available at the National Hurricane Center, and thus statistics from this time forward are most complete (McAdie et al. 2009).
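The quoted ACE definition translates directly into code. A minimal sketch, assuming the standard 34 kt named-storm threshold and wind speeds in knots at 6-hourly intervals (the storm track shown is hypothetical):

```python
def ace(six_hourly_winds_kt):
    """Accumulated cyclone energy for one storm: sum of squared
    6-hourly max sustained winds (kt) while the system is at
    named-storm strength (>= 34 kt), in units of 10^4 kt^2."""
    return sum(v * v for v in six_hourly_winds_kt if v >= 34) * 1e-4

# hypothetical storm: below named-storm strength at the first fix,
# so that reading does not contribute
print(ace([30, 40, 60, 80, 60, 35]))   # ~1.64
```

Seasonal or basin totals are simply the sum of this quantity over all storms, which is why ACE is sensitive to both storm duration and intensity.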

References :

[1] Vecchi & Knutson 2008

[2] The Atlantic Hurricane Database Re-analysis Project

[3] Gray, W. M., 1968: Global view of the origin of tropical disturbances and storms. Mon. Wea. Rev., 96,

[4] Klotzbach & Landsea 2015
“Extremely Intense Hurricanes: Revisiting Webster et al. (2005) after 10 Years”

[5] Landsea, C. W., and J. L. Franklin, 2013: “Atlantic hurricane database uncertainty and presentation of a new database format.” Mon.
Wea. Rev., 141, 3576–3592, doi:10.1175/MWR-D-12-00254.1.

[6] Holland & Webster 2007:
“Heightened tropical cyclone activity in the North Atlantic: natural variability or climate trend?”

[8] Muller et al (2013) “Decadal variations in the global atmospheric land temperatures”

[9] Nicola Scafetta, Journal of Atmospheric and Solar-Terrestrial Physics 72 (2010) 951-970
“Empirical evidence for a celestial origin of the climate oscillations and its implications”

[10] Emanuel 2005 “Increasing destructiveness of tropical cyclones over the past 30 years” [paywalled]

[11] William M. Gray
Comments on: “Increasing destructiveness of tropical cyclones over the past 30 years” by Kerry Emanuel, Nature, 31 July 2005, Vol. 436, pp. 686-688

[12] Hansen et al 2002 : “Climate forcings in Goddard Institute for Space Studies SI2000 simulations”

[13] Webster et al (2005)
“Changes in Tropical Cyclone Number, Duration, and Intensity in a Warming Environment”

[14] Kerry A. Emanuel 1995
“Sensitivity of Tropical Cyclones to Surface Exchange Coefficients and a Revised Steady-State Model incorporating Eye Dynamics”

[15] Vecchi & Soden 2007
“Increased tropical Atlantic wind shear in model projections of global warming”

On identifying inter-decadal variation in NH sea ice



The variation in the magnitude of the annual cycle in Arctic sea-ice area has increased notably since the minimum of 2007. This means that using a unique annual cycle, fitted to all the data, leaves a strong residual (or “anomaly”) in post-2007 years, making it difficult or impossible to visualise how the data have evolved during that period. This leads to the need for an adaptive method of evaluating the typical annual cycle, in order to render the inter-annual variations more intelligible.

Before attempting to assess any longer-term averages or fit linear “trends” to the data, it is also necessary to identify any repetitive variations of a similar time-scale to that of the fitting period, in order to avoid spurious results.

See also: an examination of sub-annual variation


Short-term anomalies: adapting calculations to decadal variability.

Adaptation can be achieved by identifying different periods in the data that have different degrees of variability and calculating the average annual variation for each period.

Details of how the average seasonal profile for each segment was calculated are shown in the appendix.

Figure 1. The post-2007 interval. (click to enlarge)

The data was split into separate periods that were identified by their different patterns of variation. For example, the period from 1997 to 2007 has a notably more regular annual cycle (leading to smaller variation in the anomalies). The earlier periods have larger inter-annual variations, similar to those of the most recent segment.

An approximately periodic variation was noted in the data, roughly in the form of a rectified sine wave. In order to avoid corrupting the average slope calculations by spanning from a peak to a trough in this pattern, a mathematical sine function was manually adjusted to approximate the period and magnitude seen in the data. It was noted that, despite the break in this pattern in the early 2000s, its phase is maintained until the end of the record and aligns with the post-2007 segment. However, there was a notable drop in level (about 0.5×10^6 km^2). Labels indicate the timing of several notable climate events which may account for some of the deviations from the observed cyclic pattern; these are included for ease of reference, without necessarily implying a particular interpretation or causation.

Figure 2. Identifying periods for analysis

The early period (pre-1988) was also separated out, since it was derived from a notably different instrument on a satellite with a much longer global-coverage cycle. It is mostly from the Nimbus-7 mission, which was in an orbit with a six-day global coverage flight path. Somehow these data were processed into a 2-day interval time-series, although documentation of the processing method seems light.

Later data, initially from the US defence meteorology platforms, start in 1988 and had total polar coverage twice per day, producing daily time-series.

In order to maintain the correct relationship between the different segments, the mean value for each segment from the constant term of the harmonic model used to derive the seasonal variations, was retained. The anomaly time-series was reconstructed by adding the deviations from the harmonic model to the mean, for each segment in turn. The calculated anomalies were extended beyond the base period at each end, so as to provide an overlap to determine suitable points for splicing the separate records together. A light low-pass filter was applied to the residual ‘anomalies’ to remove high frequency detail and improve visibility.
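A minimal sketch of this per-segment approach, assuming a two-harmonic seasonal model with time in years (this is an illustration of the idea, not the original code):

```python
import numpy as np

def harmonic_fit(t, x, n_harm=2):
    """Least-squares fit of a constant plus annual harmonics.
    Returns the constant term (segment mean) and the fitted cycle."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef[0], A @ coef

def segment_anomaly(t, x, n_harm=2):
    """Deviations from the segment's own seasonal cycle, re-centred
    on the constant term so segments keep their relative levels."""
    mean, fit = harmonic_fit(t, x, n_harm)
    return (x - fit) + mean

# ten years of synthetic daily data: mean 5 plus a seasonal cycle
t = np.arange(0, 10, 1 / 365)
x = 5 + 2 * np.cos(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)
```

Applied segment by segment and spliced at the overlaps, this removes each segment's own seasonal cycle while preserving the relative level between segments.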

The result of this processing can be compared to the anomaly graph provided at the source of this data. The extent of the lines showing mean rates of change indicate the periods of the data used. These were chosen to be an integral number of cycles of the repetitive, circa five year pattern.

Cavalieri et al [2] also report a circa 5 year periodicity:

The 365-day running mean of the daily anomalies suggests a dominant interannual variability of about 5 years

Figure 3. Showing composite adaptive anomaly

Figure 4. Showing Cryosphere Today anomaly derived with single seasonal cycle


The average slope for each segment is shown, and clearly indicates that the decrease in ice area was accelerating from the beginning of the era of satellite observations until 2007. The derived values suggest this was a roughly parabolic acceleration. This was a cause for legitimate concern around 2007, and there was much speculation as to what it implied for the future development of Arctic ice coverage. Many scientists suggested “ice free” summers by 2013 or shortly thereafter.

The rate of ice loss since 2007 is very close to that of the 1990s but is clearly less pronounced than it was from 1997 to 2007, a segment of the data which in itself shows a clear downward curvature, indicating accelerating ice loss.

Since some studies estimate that much of the ice is now thinner, younger ice of only 1 or 2 years of age, recent conditions should be a more sensitive indicator of change. Clearly the predicted positive feedbacks, run-away melting and catastrophic collapse of Arctic ice cover are not in evidence. The marked deceleration since 2007 indicates that either the principal driver of the melting has abated or there is a strong negative regional feedback operating to counteract it.

The 2013 summer minimum is around 3.58 million km^2. Recent average rate of change is -0.043 million km^2 per year.

While it is pointless to suggest any one set of conditions will be maintained in an ever varying climate system, with the current magnitude of the annual variation and the average rate of change shown in the most recent period, it would take 83 years to reach ice free conditions at the summer minimum.

Some have preferred to redefine “ice free” to mean less than 1 million km^2 of sea ice remaining at the summer minimum. On that basis the time to an “ice free” summer reduces to 61 years.
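The arithmetic behind these figures can be checked directly. A minimal sketch, using the minimum and rate of change quoted above:

```python
# Quick check of the "years to ice free" arithmetic, using the 2013 summer
# minimum of 3.58 million km^2 and the recent average rate of change of
# -0.043 million km^2 per year quoted in the text.
minimum = 3.58    # million km^2, 2013 summer minimum
rate = 0.043      # million km^2 per year, recent average decline

years_to_zero = minimum / rate           # literal "ice free"
years_to_one = (minimum - 1.0) / rate    # "nearly ice-free" (< 1 million km^2)

print(round(years_to_zero))   # ~83 years
print(round(years_to_one))    # ~60 years (the article quotes 61, presumably
                              # from unrounded inputs)
```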


In order to extract and interpret inter-decadal changes in NH sea ice coverage, it is essential to characterise and remove short-term variation. The described adaptive method allows a clearer picture of the post 2007 period to be gained. It shows that what could be interpreted as a parabolic acceleration in the preceding part of the satellite record is no longer continuing and is replaced by a decelerating change in ice area. At the time of this analysis, the current decadal rate of change is about -430,000 km^2 per decade, against an annual average of around 8.7 million and a summer minimum of around 3.6 million km^2.

Whether the recent deceleration will continue or revert to the earlier acceleration is beyond current understanding and prediction capabilities. However, it is clear that it would require a radical change for either an ice free ( or less than one million km^2 ) Arctic summer minimum to occur in the coming years.

Unfortunately data of this quality covers a relatively short period of climate history. This means it is difficult to conclude much except that the earlier areal acceleration has changed to a deceleration, with the current rate of decline being about half that of the 1997-2007 period. This is clearly at odds with the suggestion that the region is dominated by a positive feedback.

It is also informative to put this in the context of the lesser, but opposite, tendency of increasing sea ice coverage around Antarctica over the same period.

Figure 5. Showing changes in Arctic and Antarctic sea ice coverage



Appendix: approximating the typical seasonal cycle

Spectral analysis of the daily ice area data shows many harmonics of the annual variation, the amplitude generally diminishing with frequency. Such harmonics can be used to reconstruct a good approximation of the asymmetric annual cycle.

Here the typical cycle of seasonal variation for the period was estimated by fitting a constant plus 7 harmonic model to the data ( the 7th being about 52 days ). This is similar to the technique described by Vinnikov et al [1]. The residual difference between the data and this model was then taken as the anomaly for the segment and low-pass filtered to remove 45 day (8th harmonic) and shorter variations.
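A sketch of such a constant-plus-harmonics fit, using ordinary least squares. The data here is synthetic (mean, annual cycle, second harmonic and noise stand in for the real ice-area series):

```python
import numpy as np

# Sketch of a "constant plus 7 harmonics" seasonal model fitted by ordinary
# least squares to a synthetic daily series (real ice-area data would be
# substituted here).
ndays = 4 * 365
t = np.arange(ndays)
rng = np.random.default_rng(0)
data = 10.0 - 4.5 * np.cos(2 * np.pi * t / 365.0) \
       + 0.6 * np.cos(4 * np.pi * t / 365.0) + 0.2 * rng.standard_normal(ndays)

# design matrix: constant term plus cos/sin pairs for harmonics 1..7
cols = [np.ones(ndays)]
for k in range(1, 8):
    w = 2 * np.pi * k * t / 365.0
    cols += [np.cos(w), np.sin(w)]
X = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(X, data, rcond=None)
seasonal = X @ coeffs
anomaly = data - seasonal     # residual 'anomaly' for the segment

print(coeffs[0])   # ~10, the constant term (segment mean)
print(coeffs[1])   # ~-4.5, fundamental cosine amplitude
```

The constant term is the segment mean retained for splicing, and the residual is the anomaly series described above.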

This reveals an initial recovery from the 2007 minimum and a later drift downwards to a new minimum in 2012. By the time of the annual minimum for 2013, another strong recovery seems to have started.

This repetitive pattern, which was identified in the full record, is used to define trough-to-trough or peak-to-peak periods over which to calculate the average rate of change for each segment of the data. This was done by fitting a linear model to the unfiltered anomaly data ( using a non-linear least squares technique).

Supplementary Information

The fitted average rate of change for the respective periods, in chronological order ( million km^2 per year )


The cosine amplitude, half the total annual variation, of the fundamental of the harmonic model for successive data segments ( million km^2 )

pre 1988 -4.43
1988-1997 -4.47
1997-2007 -4.62
post 2007 -5.14

Mathematical approximation of repetitive pattern used to determine sampling intervals, shown in figure 2.

cosine period 11.16 years ( 5.58 year repetition )
cosine amplitude 0.75 x 10^6 km^2


Data source:
(data downloaded 16th Sept 2013)

An extensive list of official sources of both Arctic and Antarctic sea ice data can be found here:

[1] Vinnikov et al 2002
“Analysis of seasonal cycles in climatic trends with application to satellite observations of sea ice extent”

[2] Cavalieri et al 2003
“30-Year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability”

Amplitude Modulation Triplets

Understanding of “beats” and amplitude modulation requires going back to basics to distinguish two similar treatments that are often confused with one another.

Acoustic Beats
The following link shows the modulation which leads to the acoustic beats phenomenon and contains audio examples to listen to:
Frequencies in amplitude modulation:

The basic trigonometrical identity [1] that is used to relate modulation to interference patterns is this:

cos a * cos b = 0.5 * ( cos (a+b) + cos (a-b) )

If a physical system contains modulation as expressed by the multiplication on the left of the equation, spectral analysis will split things into a series of additive components and we will have to interpret what we find in the spectrum as being the sum and the difference of the physical frequencies that are modulating each other.

Superposition (beats)
In the other direction, the superposition ( addition ) of two signals can be converted back to modulation:

cos(f1·t) + cos(f2·t) = 2 cos( (f1 + f2)·t/2 ) * cos( (f1 − f2)·t/2 )

which, renaming the variables as a and b, gives:

cos(a) + cos(b) = 2 cos( (a + b)/2 ) * cos( (a − b)/2 )

So the presence of two frequencies seen in a Fourier spectrum is equivalent to physical modulation of their average frequency by half the difference of their frequencies. This is a mathematical identity: the two interpretations are equivalent and interchangeable, so the result is totally general and independent of any physical system in which this sort of pattern may be observed. If these kinds of pattern are found, the cause could be either a modulation or a superposition.
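Both identities are easy to verify numerically; a minimal check with arbitrary angles:

```python
import numpy as np

# Numerical check of the two identities above: modulation (product) and
# superposition (sum) are the same thing written two ways.
a, b = 1.3, 0.4   # arbitrary angles in radians

# product-to-sum: modulation appears as sum and difference frequencies
lhs = np.cos(a) * np.cos(b)
rhs = 0.5 * (np.cos(a + b) + np.cos(a - b))
print(np.isclose(lhs, rhs))    # True

# sum-to-product: superposition appears as a modulated average frequency
lhs2 = np.cos(a) + np.cos(b)
rhs2 = 2 * np.cos((a + b) / 2) * np.cos((a - b) / 2)
print(np.isclose(lhs2, rhs2))  # True
```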

In the presence of perfect sampling, the two forms are mathematically identical, and what would be found in the spectrum is the left-hand side: the two additive signals. However, what happens to the modulating envelope on the right in the climate system may well mean that the faster term gets smoothed out, or that the sampling interval and all the averaging and data processing breaks it up. The longer, lower frequency signal may be all that is left, and then that is what will show up in the spectrum.

This is similar to what is called “beats” in an acoustic or musical context, except that the ear perceives twice the real physical frequency, since human audition senses the amplitude variation: the change in volume, NOT the frequency of the modulation. The amplitude peaks twice per cycle, so what we hear as two “beats” per second is a modulation of 1 hertz. Care must be taken when applying this musical analogy to non-acoustic cycles such as those in climate variables.

Also, if one part ( usually the faster one ) gets attenuated or phase delayed by other things in the climate system, it may still be visible, but the mathematical equivalence is gone and the two, now separate, frequencies are detected.


Since the ‘side-lobe’ frequencies are symmetrically placed about the central frequency, this creates a symmetric pair of frequencies of equal magnitude, whose values are the sum and the difference of the originals. This is sometimes referred to as a ‘doublet’.

If the two cosine terms are of equal amplitude, as shown above, neither of the original signal frequencies remains. However, if the higher frequency is of larger amplitude, a residual amount of it will remain, giving rise to a ‘triplet’ of frequencies. This is what is usually done in radio transmission of an amplitude modulated signal (AM radio). In this case the central peak is usually at least twice the magnitude of each of the side bands.

It can be seen mathematically from the equations given above that, if both inputs are of equal amplitude, the central frequency will disappear, leaving just a pair of side frequencies. It may also be so small as to no longer be distinguishable from background noise in real measurements.

All of this can confound detection of the underlying cycles in a complex system because the periods of the causative phenomena may be shifted or no longer visible in a frequency analysis.

There are many non-linear effects, distortions and feedbacks that will deform any pure oscillation and thus introduce higher harmonics. Indeed such distortions will be the norm rather than a pure oscillation and so many harmonics would be expected to be found.

As a result, even identifying the basic cause of a signal can be challenging in a complex system with many interacting physical variables.

The triplet is a useful pattern to look for, being suggested by the presence of equally spaced frequencies, although the side peaks may be attenuated by other phenomena and are not always of equal height, as in the abstract example.

Examples of these kinds of patterns can be found in the variations of Arctic ice coverage.


Sum and Product of Sine and Cosine:

Data corruption by running mean “smoothers”

[See update at end of article]

Running means are often used as a simple low pass filter, usually without an understanding of their defects, and are frequently referred to as “smoothers”. In fact a running mean does not even “smooth” very well, since it lets through enough high frequencies to give a spiky result.

Running means are fast and easy to implement. Since most people have some understanding of what an average does, the idea of a running average seems easily understood. Sadly it’s not that simple and running averages often cause serious corruption of the data.

So it smooths the data to an extent, but what else does it do?

The problem with an evenly weighted average is that the data is effectively masked by a rectangular window. The frequency response of such a rectangular window is the sinc function [1] and thus the effect on the frequency content of the data is to apply the sinc function as a frequency filter. The sinc function oscillates and has negative lobes that actually invert part of the signal it was intended to remove. This can introduce all sorts of undesirable artefacts into the data.

An example of one of the problems can be seen here:
Figure 1 Comparing effects of different filters on a climate data time series ( 60 month running mean vs 30m triple running mean [blue] ).

It can be noted that the peaks and troughs in the running mean are just plain wrong. Where the raw data has a peak, the running mean produces a trough. This is clearly undesirable.

The data is “smoother” than it was, but its sense is perverted. This highlights the difference between simply “smoothing” data and applying an appropriately chosen low-pass filter. The two are not the same, but the terms are often thought to be synonymous.

Some other filters, such as the gaussian, are much better behaved. However, a gaussian response is never exactly zero, so there is always some leakage of what we would like to remove. That is often acceptable but sometimes not ideal.

Figure 2. Comparing the magnitude of the frequency responses of gaussian and running mean filters. Note that every other lobe of the running mean response is negative in sign, actually inverting the data.

Below is a comparison of two filters ( running mean and gaussian ) applied to some synthetic climate-like data generated from random numbers. Click to see the full extent of the graph.

Figure 3. Showing artefacts introduced by simple running mean filter.

As well as the inversion defect, which is again found here around 1970, some of the peaks get bent sideways into an asymmetric form. In particular, this aberration can be noted around 1958 and 1981. In comparing two datasets in order to attribute causation or measure response times of events, this could be very disruptive and lead to totally false conclusions.


Triple running mean filters

Another solution is to improve the running mean’s frequency response.

The sinc function has the maximum of its troublesome first negative lobe where πx = tan(πx). Solving this gives x = 1.4303…
[Thanks to Peter Mott for pointing out an error in an earlier version.]
However, simply targeting the peak of the lobe does not produce optimal results: somewhat smaller ratios leave less residual, and a ratio of 1.3371 is used below.
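The lobe-peak equation can be solved with a few lines of bisection. A sketch (the bracketing interval is chosen to isolate the first root beyond x = 1):

```python
import math

# The first negative lobe of sinc peaks where tan(pi*x) = pi*x.
# A simple bisection on (1, 1.5) locates the root, x ~ 1.4303.
def g(x):
    return math.tan(math.pi * x) - math.pi * x

lo, hi = 1.05, 1.49   # g(lo) < 0 and g(hi) > 0 on this branch
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

x_peak = 0.5 * (lo + hi)
print(round(x_peak, 4))   # ~1.4303
```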

Now if a second running mean is passed after the first one, with a period shorter by this ratio, it will filter out the inverted data…. and produce another, smaller, positive lobe.

A similar third pass will kill the new lobe, and by this stage any residual problems are small enough that they are probably no longer significant.

The triple running mean has the advantage of a zero in the frequency response that will totally remove a precise frequency, as well as letting very little of the higher frequencies through. If there is a fixed, known frequency to be eliminated, this can be a better choice than a gaussian filter of similar period.

The two are shown in the plot above and it can be seen that a triple running mean does not invert the peaks as was the case for the simple running mean that is commonly used.


With monthly data it is often desirable to remove an annual variation. This can be approximated by the 12, 9, 7 triple RM shown:

12 / 1.3371 = 8.97
12 / 1.3371 / 1.3371 = 6.71
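A minimal sketch of such a cascade (window lengths 12, 9, 7 follow the ratio above; the data here is a synthetic annual cycle, and the zero at 12 samples removes it almost exactly):

```python
import numpy as np

# A centred triple running mean: three cascaded boxcar averages, the window
# shortened by ~1.3371 at each stage. For monthly data targeting the annual
# cycle this gives windows of 12, 9 and 7 months.
def running_mean(x, n):
    """Centred n-point running mean (output shrinks by n-1 samples)."""
    return np.convolve(x, np.ones(n) / n, mode='valid')

def triple_running_mean(x, n=12, ratio=1.3371):
    for w in (n, round(n / ratio), round(n / ratio ** 2)):
        x = running_mean(x, w)
    return x

# demonstration: a pure 12-sample (annual) cycle is removed completely
t = np.arange(240)
annual = np.sin(2 * np.pi * t / 12.0)
smoothed = triple_running_mean(annual)
print(np.abs(smoothed).max())   # ~0: the zero at 12 samples kills the cycle
```

Note that `mode='valid'` keeps the output correctly centred, avoiding the half-sample shifts discussed later for spreadsheet implementations.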

It can be seen that the second stage is pretty accurate but the final one is rather approximate. However, the error in the third stage is not large.

Figure 4. Comparing frequency response of gaussian and triple running mean filters.

A similar operation on daily data would use windows of 365, 273 and 204 days:

365.242 / 1.3371 = 273.2
365.242 / 1.3371 / 1.3371 = 204.3

Another advantage is that the output of the r3m filter really is “smooth”, since it does not let through the high frequencies that a simple running mean does. If the aim is simply to “smooth” the data, rather than to target a specific frequency, an r3m filter with half the nominal width often gives a smoother result without losing as much information, as was shown in figure 1.

This defect in the smoothing can be seen in the example plot. For example, there is a spike near 1986 in the simple running mean. Worst of all, this is not even a true spike in the data getting through the filter: it is an artefact.

Another example is the official NOAA [2] presentation of sun spot number (SSN) taken from SIDC [3], examined here:

In 2004, Svalgaard et al published a prediction of the cycle 24 peak [4]. That prediction has proved remarkably accurate, and would look more accurate still if SIDC did not apply a “smoothing” filter that inverts and displaces the peak and reduces its value.

Using direct polar field measurements, now available for four solar cycles, we predict that the approaching solar cycle 24 (~2011 maximum) will have a peak smoothed monthly sunspot number of 75 ± 8, making it potentially the smallest cycle in the last 100 years.

SIDC processing converts a later trough into the peak value of cycle 24. The supposed peak aligns with the lowest monthly value in the last 2.5 years of data. Clearly the processing is doing more than the intended “smoothing”.

The filter used in this case is a running mean with the first and last points given reduced weighting. It is essentially the same and shares the same defects. Apparently the filter applied to SIDC data was introduced by the Zürich observatory at the end of the 19th century, when all these calculations had to be done by hand ( and perhaps the defects were less well understood ). The method has been retained to provide consistency with the historical record. This practice is currently under review.

While it may have been a reasonable compromise in 19th century, there seems little reason other than ignorance of problems for using simple running mean “smoothers” in the 21st century.


Referring to a filter as a “smoother” is often a sign that the user is seeking a visual effect and may be unaware that this can fundamentally change the data in unexpected ways.

Wider appreciation of the corruption introduced by using running mean filters would be beneficial in many fields of study.



  [1] Plot of sinc function

  [2] NOAA/Space Weather Prediction Center

  [3] SIDC sunspot data:
SIDC readme:
SIDC applies a 13 point running mean with the first and last points weighted 50%. This is a slight improvement on a flat running mean but shares the same tendency to invert certain features in the data.

  [4] Svalgaard, L., E. W. Cliver, and Y. Kamide (2005), Sunspot cycle 24: Smallest cycle in 100 years?, Geophys. Res. Lett., 32, L01104, doi:10.1029/


Scripts to automatically effect a triple-running-mean are provided here:

Example of how to effect a triple running mean on :

Example of triple running mean in spread sheet:


The main object of this article was to raise awareness of the strong, unintentional distortions introduced by the ubiquitous running mean “smoother”.

Filter design is a whole field of study in itself, of which even an introduction would be beyond the scope of this short article. However, it was also an aim to suggest some useful replacements for the simple running-average and to provide implementations that can easily be adopted. To that end, a small adjustment has been made to the script provided and another higher quality filter is introduced:

A script to implement a low-pass Lanczos filter is provided here:

An equivalent high-pass filter is provided here:

High-pass filters may be used, for example, to isolate sub-annual variability in order to investigate the presence or absence of a lunar influence in daily data.

An example is the 66 day filter used in this analysis:

The following points arose in discussion of the article.

Vaughan Pratt points out that shortening the window by a factor of 1.2067 (rather than 1.3371 originally suggested in this article) reduces the stop-band leakage. This provides a useful improvement.

Further optimisation can be attained by reducing negative leakage peaks at the cost of accepting slightly more positive leakage. Since the residual negative peaks are still inverting and hence corrupting the data, this will generally be preferable to simply reducing net residuals irrespective of sign.

The asymmetric triple running-mean is shown in the comparison of the frequency responses, along with a Lanczos filter, here:

The Pratt configuration and the asymmetric 3RM result in identical averaging intervals when set to remove the annual cycle from monthly data. Both result in a choice of 8,10 and 12 month windows.

The difference will have an effect when filtering longer periods or higher resolutions, such as daily data.

If this is implemented in a spreadsheet, it should be noted that each average over an even interval will result in a 0.5 month shift in the data since it is not correctly centred. In a triple running-mean this results in 1.5 months shift with respect to the original data.

In this case the 1.3371 formula originally suggested in the article, giving 12,9,7 month averages and producing just one 0.5 month lag, may be preferable.

None of these issues apply if the scripts provided accompanying the article are used, since they all correctly centre the data.

A more technical discussion of cascading running-mean filters to achieve other profiles can be found in this 1992 paper, suggested by Pekka Pirilä, which should serve as a starting point for further study of the subject.

Cyclic components in ice cover




To understand the processes driving polar ice coverage it is necessary to identify cyclic variations. Some would wish to trivialise climate into AGW plus random “stochastic” variability. This is clearly unsatisfactory. Much of the variation is more structured than may be apparent from staring at the ups and downs of a time series.

There are many cyclic or pseudo-cyclic repetitions, and before attempting to fit a linear trend to the data by regression it is necessary to identify and remove them, or include them in the model. Failure to do this will lead to invalid, meaningless “trends” being drawn from the data. See cosine warming, ref 1.

One means of testing for the presence of periodicity in a dataset is spectral analysis. In particular spectral power distribution can be informative.

The power spectrum can be derived by taking the Fourier transform of the autocorrelation function. The latter is, in its own right, useful for identifying the presence or not of cyclic change in the data.

One condition for useful results from Fourier analysis is that the data should be stationary (ref 1). Amongst other things this requires that the mean of the data be fairly constant over time. The FT of a series with a rising or falling trend will produce a whole series of spurious peaks, which result from making an infinite series out of a ramping, finite sample.

There are several tests and definitions of stationarity and it is to some degree a subjective question without a black or white answer. A commonly used test is the augmented Dickey-Fuller, unit root test: ADF. (ref 2)

If a time-series is found not to satisfy the condition of stationarity, a common solution is to examine instead the rate of change. This is often more desirable than other ‘detrending’ techniques, such as subtracting some arbitrary mathematical “trend” like a linear fit or higher polynomial. Unless there is a specific reason for fitting such a model, to remove a known physical phenomenon, such detrending will introduce non-physical changes into the data. Differencing is a linear process whose result is derived purely from the original data and thus avoids injecting arbitrary signals.

The time differential ( as approximated by the first difference of the discrete data ) will often be stationary when the time-series is not. For example a “random walk”, where the data is a sequence of small random variations added to the previous value, will be a series of random values in its differential and hence stationary. This is particularly applicable to climatic data, like temperature, where last year’s or last month’s value will determine to a large extent the next one. This kind of simple autoregressive model is often used to create artificial climate-like series for testing.
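The random walk case is simple to demonstrate. A sketch: the walk's mean wanders between the two halves of the record, while the mean of its first difference (the underlying white-noise steps) barely moves:

```python
import numpy as np

# A random walk is non-stationary, but its first difference recovers the
# white-noise increments and so is stationary. Compare how far the mean
# drifts between the two halves of each series.
rng = np.random.default_rng(1)
steps = rng.standard_normal(20000)
walk = np.cumsum(steps)    # random walk: mean wanders over time
diff = np.diff(walk)       # first difference: the steps themselves

half = len(walk) // 2
walk_drift = abs(walk[:half].mean() - walk[half:].mean())
diff_drift = abs(diff[:half].mean() - diff[half:].mean())

print(walk_drift > diff_drift)   # True: differencing restores stationarity
```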

To ensure there is no step change, as the end of the data is wrapped around to the beginning, it is usual to also apply a window function that is zero at each extreme and fades the data down at each end. This has the disadvantage of distorting the longer term variations but avoids introducing large spurious signals that can disrupt the whole spectrum.

Most window functions produce some small artificial peaks or ‘ringing’ either side of a real peak. Some do this more than others. The choice of window function depends to some extent on the nature and shape of the data. The choice is often a compromise.
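The trade-off can be illustrated with a Hann window (one common choice; the sinusoid frequency and "guard" band here are arbitrary illustration values):

```python
import numpy as np

# Effect of a window function: a frequency that does not fit a whole number
# of times into the record leaks across the whole spectrum; a Hann window
# confines the leakage to the immediate neighbours of the peak.
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 10.37 * t / n)   # non-integer number of cycles

raw = np.abs(np.fft.rfft(x))                 # no window (rectangular)
win = np.abs(np.fft.rfft(x * np.hanning(n)))  # Hann window

def leakage(spec, peak=10, guard=5):
    """Fraction of spectral energy more than `guard` bins from the peak."""
    mask = np.abs(np.arange(len(spec)) - peak) > guard
    return np.sum(spec[mask] ** 2) / np.sum(spec ** 2)

print(leakage(raw) > leakage(win))   # True: windowing cuts far-field leakage
```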


Initial examination of the autocorrelation function of Arctic ice area data revealed the presence of notable periodicity other than the obvious annual cycle. Some recent published work is starting to comment on various aspects of this. (ref 3)

As a first step the annual cycle was removed by a triple running mean filter (ref 4) with a zero at 365 days and designed to avoid the usual distortions caused by simple running mean “smoothers”.

If a Fourier transform were done in the presence of the annual cycle, its magnitude, at least an order of magnitude greater than anything else, would reduce the accuracy of the FFT and also introduce noticeable windowing artefacts in the 0.5 to 2.0 year period range. For this reason it was removed.

The adf.test() function ( from the R tseries package ) returned values indicating that it was not possible to assume the data was stationary. Conversely, the test on the derivative of the time series indicated strongly that it was stationary.

Non-stationarity is probably caused by long term trend or long period cyclic variation (long relative to the duration of the dataset).

Taking the rate of change reduces linear trends to a constant and attenuates the amplitude of long periods by an amount proportional to the frequency, making the series more amenable to analysis. The 365d filter will also attenuate periods shorter than about 5 years, and this needs to be borne in mind, or corrected for, when relative magnitudes in the spectrum are considered.

ref 1 Cosine warming

ref 1 Stationarity requirement for FFT

ref 2 Augmented Dickey-Fuller test

ref 3 Arctic periodicity

ref 4 Triple running mean filters

Lunar-solar influence on SST

On Zen and the Art of Climate Analysis

by Greg Goodman

The author has a graduate degree in applied physics, professional experience in spectroscopy, electronics and software engineering, including 3-D computer modelling of scattering of e-m radiation in the Earth’s atmosphere.


In trying to understand changes in climate, it would be logical to look at the rate of change directly, rather than trying to infer it and its causes by looking at the various time series of temperature data.

This is also important since most of what climate science refers to as “forcings” are power terms measured in W/m2 . Temperature is a measure of energy, so it is the rate of change of temperature that reflects power. If there is a change in radiative ‘forcing’ such as solar or CO2, the response will be instantaneous in rate of change, however the subsequent change in temperature will take time to accumulate before it becomes evident, by which time some other factors have probably already obscured or confused the signal.

Most of what follows will be looking at rate of change of temperature : dT/dt, in particular sea surface temperature as contained in the International Comprehensive Ocean-Atmosphere Data Set ( ICOADS )

It is important in attempting to untangle the various forces acting upon climate to identify the various effects and their causes and their long-term interactions. It is not sufficient to dismiss everything that happens in the climate system as “internal variation” and assume that it all necessarily averages out. There is plenty of evidence that many of these effects are not “internal” but driven by forces from outside. El Nino and La Nina are not symmetrically opposite and equal phases of the same process. Each operates in a very different way. The assumption that they all average out over any particular time-scale is spurious.

The major oceanic basins have largely independent masses of water, at least in the upper levels, due to the large circulating currents referred to as gyres, driven by the Coriolis force. For this reason the major oceans are examined separately here. They are further broken down into their tropical and extra-tropical regions, since climate patterns are markedly different in the tropics. This helps in identifying underlying patterns that may be lost in the common, simplistic notion of global mean temperatures.


A useful way to examine whether there are repetitive patterns in a data series is the autocorrelation function. This is generated by overlaying the data on itself with a progressively increasing offset in time, the correlation coefficient being calculated at each step. This method assumes nothing about the nature or form of any repetition, and so does not impose any structure on the data.

One way to extract the underlying frequencies is to look at the power density spectrum (PDS). This shows the strength of each frequency or band of frequencies, and is usually computed from the square of the amplitude. As already noted, dT/dt is a power term.

The PDS can be derived by taking the Fourier transform of the autocorrelation function. Doing so reveals that a large number of oscillations are present in all basins, many of them common to several basins.
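This autocorrelation-then-FFT route can be sketched in a few lines. The data here is synthetic: a 5-year cycle in monthly data plus noise, standing in for a real dT/dt series:

```python
import numpy as np

# Power spectrum via FFT of the autocorrelation function (Wiener-Khinchin).
# Synthetic example: a 5-year cycle in 100 years of monthly data.
rng = np.random.default_rng(2)
n = 1200                     # 100 years of monthly data
t = np.arange(n)
x = np.sin(2 * np.pi * t / 60.0) + 0.5 * rng.standard_normal(n)
x = x - x.mean()

# biased autocorrelation estimate, lags 0..n-1
acf = np.correlate(x, x, mode='full')[n - 1:] / n

pds = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(len(acf), d=1.0)   # cycles per month

peak = freqs[np.argmax(pds[1:]) + 1]       # skip the DC bin
print(round(1.0 / peak / 12.0, 1))         # recovered period, ~5.0 years
```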

One problem with using the rate of change is that it amplifies shorter signals and attenuates longer ones, in proportion to their frequency. The higher frequency terms, which in this context are the ‘noise’ of weather on the climate signal, get bigger. To prevent this h.f. noise swamping the signal of interest, a 12 month gaussian filter is used to attenuate the annual and shorter variations.

Original ICOADS SST was chosen for this study since there are notable differences in the autocorrelation and the power spectra of the re-gridded, re-processed and “corrected” Hadley SST datasets.

Examination of the literature detailing the calculation of the ‘climatology’ used in preparing HadSST3 (Brohan 2006 etc.) shows that crude running-mean filters are applied across adjacent grid cells to smooth the data and supposedly reduce noise. However, this implies both spatial and temporal phase shifts as well as introducing the frequency distortions inherent in using running means as a filter. This problem is aggravated since the process is repeated in a loop until the climatology “converges” to a stable result.

There is no evidence in the associated papers that any assessment has been made of the effect this kind of processing has on the frequency characteristics of the data, but the effects are substantial. It would seem incumbent on the authors of such work to demonstrate that this heavy processing improves the data rather than distorting and degrading it. In the absence of such an assessment, it was considered more appropriate to work with the unprocessed data from ICOADS which, while not being without its own problems, probably better represents the frequency structure of the historical record.

This brief overview looks at groups of basins to examine the presence and period of common cyclic features.

The following is an example for the North Atlantic and South Pacific oceans.

Figure 1. Autocorrelation function of N. Atlantic , S. Pacific

It is clear that there are strong repetitive patterns in the data. In fact, it is surprising how similar these two basins are considering they are in opposite hemispheres and physically separate.

Another feature that is instantly obvious to anyone with experience in signal processing is that there is more than one major frequency present. The recurrent dips in amplitude are classic signs of an interference pattern between at least two, closely related frequencies.

The significance of the peaks and troughs in correlation depends upon the number of data points used. So as the offset increases there are fewer points that overlap, and a given level of correlation is less significant ( more likely to be just chance ). This is accounted for in the line showing the 95% significance level. The patterns are clearly significant and not due to random variations.

Earth’s climate is a very complex system of interactions so it is often more appropriate to think in terms of pseudo-cycles than pure simple harmonic functions, though the underlying physical mechanisms can often produce basic forces and reactions that would lead to harmonic oscillations in a simpler context.

The power spectrum of the Nino 1 region, off the coast of Peru, is one of the least complicated. Notable is the circa 13 year cycle and its second harmonic around 6.5 years. Together with the three nearby peaks around 5 years and the longer periods, this will appear chaotic and unpredictable when viewed as a time series of temperature, but it can be seen to be highly structured once correctly analysed.

Figure 2. Power spectrum of El Nino 1 region.
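A periodogram of the kind shown in Figure 2 can be sketched as follows. The series here is synthetic, containing only an assumed 13 year cycle and a weaker 6.5 year second harmonic, chosen so that both fall on exact Fourier bins; it is not the ICOADS data.

```python
import numpy as np

# Illustrative periodogram of a synthetic monthly series containing a
# 13 year cycle and a weaker 6.5 year second harmonic.
n = 1872                                  # 156 years of monthly values
t = np.arange(n)
x = (np.sin(2 * np.pi * t / (13 * 12))
     + 0.4 * np.sin(2 * np.pi * t / (6.5 * 12)))
x -= x.mean()

power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)         # cycles per month
periods_yr = np.full_like(freqs, np.inf)
periods_yr[1:] = 1.0 / (freqs[1:] * 12)   # convert to years

top = power[1:].argmax() + 1              # skip the DC bin
print("strongest period: %.1f years" % periods_yr[top])  # -> 13.0 years
```

Real SST spectra are of course far messier, with leakage and broad pseudo-cyclic peaks, but the fundamental-plus-harmonic structure is the same.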

It can be seen that many of the peaks are harmonic pairs and that they correspond to the various frequencies that Bart’s artful analysis of sun-spot number predicted:

He calculated the following periods in sun-spot number: 5, 5.2, 5.4, 5.6, 5.9, 9.3, 10, 10.8, 11.8, 13, 20, 23.6, 65.6 and 131 years.

In addition to Bart’s periods, which relate solely to the solar cycle itself, not its effects on terrestrial climate, there is another reason to expect several higher harmonics to be present in SST data.

When a radiative driver of climate, such as solar or CO2, varies there will be an additional radiative forcing at the surface. This will instantly produce a rate of change of temperature. Once a higher (or lower) temperature is attained there will be a difference in emitted radiation and evaporation, two processes that depend upon temperature and provide a negative feedback. The temperature change will be the mathematical integral of the radiative forcing over time and will have a phase delay: in the case of a cyclic change, a cosine will integrate to a sine. The new radiative effect will, in turn, produce a change in temperature 90 degrees out of phase with the original, and so the loop repeats.

There is a trigonometric identity, sin(x)·cos(x) = ½·sin(2x) [1], so mixing such a rate of change with its integral in this way will produce a signal of twice the frequency: the second harmonic, and so on. An equal mix will produce just the harmonic; the more likely unequal mix will result in both the fundamental and the harmonic being present.
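The frequency doubling can be verified numerically: multiplying a cosine driver by its 90-degree-lagged integral leaves power only at the second harmonic. This is an illustrative sketch of the identity, not a climate model.

```python
import numpy as np

# A driver cos(wt) mixed with its (90-degree lagged) integral sin(wt):
# the product cos(wt)*sin(wt) = sin(2wt)/2 contains only the second harmonic.
n = 4096
t = np.arange(n)
f0 = 64 / n                              # fundamental: 64 cycles per record
drive = np.cos(2 * np.pi * f0 * t)
response = np.sin(2 * np.pi * f0 * t)    # integral of the driver, lagged 90 deg
mixed = drive * response

spec = np.abs(np.fft.rfft(mixed))
peak_bin = spec[1:].argmax() + 1         # skip the DC bin
print("peak at", peak_bin, "cycles per record; fundamental was 64")  # -> 128
```

Note that the spectrum of the product contains no power at all at the fundamental: only an unequal mix of driver and feedback leaves both the fundamental and its harmonic visible.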

So in the case of SST we should expect to find a series of pure harmonics being produced quite naturally.

The very weak peaks at 10.26 and 11.2 years (not annotated explicitly on the graph for the sake of clarity, but visible) match the peaks at 5.15 and 5.7 years and a tiny peak at 2.86 years.

These harmonics are indeed found to be very common.

It can be seen that the short harmonics lose power in favour of the longer periods as the Nino/Nina patterns progress from the Peruvian Nino 1 region to the central Nino 3.4 region.

Figure 2b. Power spectra of Nino 1 and Nino 3.4 regions

Figure 3. Power spectra ensemble for basins sharing 9 and 13 year peaks.

It can be seen that the 9 year cycle is clearly defined and stable. The 13 year shows greater variation between basins and is broader, showing more variation in the length of the pseudo cycle. This is typical of the pseudo cycles in solar SSN data.

Figure 4. Collection of power spectra for basins sharing 9 year peaks.

It is noted that the 9 year signal is prominent right across the North and South Atlantic and Pacific oceans, in both tropical and extra-tropical zones.

Figure 5. Nicola Scafetta’s graph [2], derived from the NASA/JPL ephemeris, demonstrating a circa 9 year periodicity in variations of the Earth’s orbital distance from the sun, caused by the presence of the moon. The results of the MEM spectral analysis show a period of 9.1 +/- 0.1 years that corresponds closely to the periodic fluctuations seen here in SST. These are clearly distinct from the usually noted 8.85 and 9.3 year lunar tidal periods.

The presence of a strong 9 year cycle prevents a simple identification of a link between solar activity, as witnessed by sun-spot number (SSN), and surface temperatures. Figure 6 below shows a comparative graph. It can be seen that at the end of the 19th century the two cycles were in phase and reasonably in step with SST. By 1920 the individual peaks are seen interspersed as the cycles become totally out of phase, and there is a phase crisis in the relationship with SST. By 1960 they are back in phase and, by the end of the millennium, can be seen to be once again diverging.

This lack of any clear correspondence has often been cited as proof that there is no discernible link between global temperatures and SSN.

Figure 6. Comparison of the rate of change of SST with sun-spot area.


Little published work seems to exist on this apparently strong lunar connection to climate. Prof. Harald Yndestad [3] has published several papers on the subject that also provide strong evidence; however, they are very restricted in geographical scope and draw conclusions limited to suggesting a regional effect.

What is shown here is a much more significant, global effect. The presence of this strong 9 year cycle will confound attempts to detect the solar signal unless it is recognised. When the two are in phase (working together) the lunar effect will give an exaggerated impression of the scale of the solar signal and when they are out of phase the direct relationship between SSN and temperatures breaks down, leading many to conclude that any such linkage is erroneous or a matter of wishful thinking by less objective observers.

Such long term tidal or inertial effects can shift massive amounts of water, and hence energy, into and out of the tropics and polar regions. Complex interactions of these cycles with others, such as the variations in solar influence, create external inputs to the climate system with periods of decadal and centennial length. It is essential to recognise and quantify these effects rather than making naive and unwarranted assumptions that any long term changes in climate are due to one simplistic cause, such as the effect of a trace gas like CO2.

It is totally unfounded to suggest that these effects will simply average out over the time-scale of the current surface temperature record without detecting and characterising their form, duration and interaction. Even more so to ignore their importance in the much quoted “latter half of the 20th century”.

Failure to recognise the importance of these not-so-internal cycles in climate variation likely accounts for the thorough failure of attempts to model and predict climate so far, efforts which have now so obviously misled our expectations over the first decades in which their predictions have been used to promote massive policy changes. Somewhat belated recognition that there are fundamental problems in current models has resulted in the recent reassessment of the projections made by the Met Office Hadley Centre, from one of continued and alarming rise to one of five more years of non-rising global temperatures, following the last 16 years of no significant change.

Further work is needed in identifying and explaining these variations to determine the role they have played in recent changes in surface temperature before attempts are made to predict future variations.


Notes on file names used in graphs:
icoads indicates ICOADS v2.5 SST data
autocorr refers to auto-correlation function
g12m indicates a 12-month 3-sigma Gaussian filter
ddt indicates time derivative dT/dt
s1200m indicates a sample of 1200 monthly temperatures
‘using 3:4’ is simply the data columns being plotted.

Regional ocean areas used:
The following regularly shaped regions were selected to represent the basins:





SST data were obtained from the KNMI archive. Early years with many gaps were removed and sparse gaps were filled by averaging the same month from the previous and following years. This is crude, but for the small number of cases involved it will not have a noticeable effect on the spectra.
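The gap-filling described above can be sketched as follows; this is a minimal illustration of the stated method (same month, adjacent years), with dummy data in place of the actual series.

```python
import numpy as np

def fill_sparse_gaps(monthly):
    """Fill isolated missing months (NaN) with the mean of the same month
    in the previous and following years, as described in the text."""
    x = np.asarray(monthly, dtype=float).copy()
    for i in np.flatnonzero(np.isnan(x)):
        # same calendar month is 12 steps away in a monthly series
        neighbours = [x[j] for j in (i - 12, i + 12)
                      if 0 <= j < len(x) and not np.isnan(x[j])]
        if neighbours:
            x[i] = np.mean(neighbours)
    return x

series = np.array([20.0, 21.0] * 18)   # 3 years of dummy monthly values
series[14] = np.nan                    # one missing month in year 2
filled = fill_sparse_gaps(series)
print(filled[14])  # mean of the same month in years 1 and 3 -> 20.0
```

Because only isolated months are filled and the fill is a local average, the effect on the low-frequency part of the spectrum is negligible, as stated above.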

Daily sun-spot area was provided by:


[1] (set x=y in equation 6.4)

[2] “Empirical evidence for a celestial origin of the climate oscillations and its implications”
N. Scafetta / Journal of Atmospheric and Solar-Terrestrial Physics 72 (2010) 951–970


Pay-walled paper but many clear and explicit graphs available:

Many thanks to Tim Channon for providing the software used for Fourier analysis.
To the R project which was used to derive the autocorrelation series
To the Gnuplot team for their amazingly powerful plotting software

To those sifting and maintaining the massive ICOADS dataset and the million seamen who have braved the elements through centuries of storms and wars to throw their buckets overboard
Finally to Steve Mosher for inspiring me to look into the question of celestial causes for climate change.


Useful and highly interesting paper by Ian Wilson showing the planetary origins of many of these peaks.


Figure A1. Comparison of HadSST3 and ICOADS power density spectra for the extra-tropical North Pacific. The spectra are similar, but with significant differences between 7 and 9 years.