Gaussian low-pass filter




The graph is scaled to unit frequency (eg 1 Hz or one cycle per year …).

It illustrates how a 159-day Gaussian will filter daily data. For monthly data the equivalent is 12*159/365 = 5.23, so the nearest choice would be sigma = 5 months. However, since that is slightly short of the intended value and a Gaussian leaks a little ( it never falls fully to zero ), sigma = 6 months is better for suppressing a 12-month cycle.
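As a usage illustration ( the file names here are hypothetical; the script itself follows below ), the sigma values discussed above would be applied as:

./gauss.awk daily_series.dat 159      # daily data, sigma = 159 days
./gauss.awk monthly_series.dat 6      # monthly data, sigma = 6 months to suppress the annual cycle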

Code ( use copy / paste within code block ).

There must be NO space before the first line, i.e. “#!” must be the first characters in the file.

#!/bin/awk -f

# pass input through 3 sigma gaussian filter where sigma, if not given, is 2 data points wide
# usage : ./gauss.awk filename <sigma=2> <scale_factor=1>
# optional scale_factor simply scales the output
# sigma can be compared to the period of the -3dB point of the filter
# result is centred, ie no phase shift. dataset shortened by half a window at each end

# operates on files with two columns of numeric data, or a single column of data
#  eg date as integer year or floating point decimal
# single column takes line number position to be evenly spaced ordinate variable.
# ensure that data is equally spaced and continuous !!
# nov 2012  OFMT="%10.8f"

# 159d gauss similar to  365d r3m

# FWHM (-3dB) = 2*sigma*sqrt(2*ln2)
# half height freq = sigma * 1.1774
# 3 sigma gives 99.7% accuracy

BEGIN { OFMT="%10.8f"
# ARGV[1]=filename; argv[0] is script name, hence ARGC>=1

  if (ARGV[1]=="") { print " # usage : ./gauss.awk filename <sigma=2> <scale_factor=1>" ; exit 1}

  pi= 3.14159265358979323846

  if  ( ARGC >3 ) {scaleby=ARGV[3];ARGV[3]=""} else {scaleby=1};
  if  ( ARGC >2 ) {sigma=ARGV[2];ARGV[2]=""} else {sigma=2};

  print "# filtering "ARGV[1]" with gaussian of sigma= ",sigma
  root2pi_sigma=sqrt(2*pi)*sigma;
  two_sig_sqr=2.0*sigma*sigma;

  gw=3*sigma-1;  # gauss is approx zero at 3 sigma, use 3 sig window

# calculate normalised gaussian coeffs
  for (tot_wt=j=0;j<=gw;j++) {tot_wt+=gwt[-j]=gwt[j]=exp(-j*j/two_sig_sqr)/ root2pi_sigma};
  tot_wt=2*tot_wt-gwt[0];
  tot_wt/=scaleby;
  for (j=-gw;j<=gw;j++) {gwt[j]/=tot_wt};

# strip off last .xxx part of file name
# improve this  (doesn't work with paths like ../filename)

  if (ARGV[1]=="-") {
    out_file="-";
  }
   else {
    split(ARGV[1],fn,".");
    basename=fn[1]
    out_file=basename"-gauss"sigma".dat";
    print "# ",out_file >out_file;
  }

  ln=-1;
}

( ($0 !~ /^#/) && ($0 !~ /^[ \t]*$/) ) {    # skip comment and blank lines
  if ($2=="") {             # single-column input: use line number as x
    xdata[++ln]=NR;
    ydata[ln]=$1;
  } else {
    xdata[++ln]=$1;
    ydata[ln]=$2;
  }

  if (ln>2*gw)
  {
    gauss=0
    for (j=-2*gw;j<=0;j++) {gauss+=ydata[ln+j]*gwt[j+gw]}
    if (out_file=="-")
      print xdata[ln-gw],gauss
    else {
      print NR,xdata[ln-gw],gauss
      print xdata[ln-gw],gauss >> out_file;
    }
  }
}

END {
  if (ARGV[1]=="") { exit 1}
  print "# gaussian window width = "gw+gw+1", done"
  print "# output file = "out_file

# for (i=0;i<=gw;i++){print gwt[i]}
}

Lanczos high-pass filter


The graph shows the frequency response of 3-lobe Lanczos high-pass filters with -3dB periods of 461 and 269 days respectively.

These are chosen such that a periodicity of 1 year is either fully included in or fully excluded from the filter’s pass-band: the 269-day filter will pass only periods shorter than 365 days, fully blocking the annual cycle, while the 461-day filter leaves everything down to and including the annual signal unattenuated and excludes anything longer than about 1.6 years.
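As a usage illustration with daily data ( the file name is hypothetical; the script follows below ):

./lanczos_HP.awk daily_series.dat 269     # pass sub-annual variation, block the annual cycle
./lanczos_HP.awk daily_series.dat 461     # pass the annual cycle and all shorter periods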

The equivalent low-pass filter is detailed here:
https://climategrog.wordpress.com/2013/11/28/lanczos-filter-script/

The frequency axis is in units of “per year”, not “year” as labelled.

Code: ( select text in code block and use copy/paste to get the code )

#!/bin/awk -f

#  usage: ./lanczos_HP.awk filename [-3dB_period=17] [lobes=3]

# purpose: apply a high-pass filter by convolution of 2nd column of file with lanczos windowed sinc function
# column 1 data should be evenly spaced without gaps.
# optional second parameter is period of 3dB attenuation, as a period expressed  in data intervals
#    default = 17 ; eg. 17 monthly periods : half power at 17mo, close to 100% pass at 12 mo.
#                   to exclude 12mo, use 9 . This will pass 6mo and higher freq unattenuated.
#                   for daily data use 461 and 269 days , respectively.
# optional third parameter is the number of lobes in impulse response. More gives sharper transition but can produce overshoot ( ringing ).
# comments at end of output give pass-band and stop band info, this is not sent to output file.
#    default=3; 4 is OK. ### >=5 may produce significant ringing artefacts around sharp, high amplitude changes###



function sinc(x) {
  if (x==0) return 1; else return (sin(x)/x);
}

BEGIN{  OFMT = "%8.6f"

  twopi=6.28318530717959

  if ( ARGC >3 ) {lobes=ARGV[3];ARGV[3]=""} else {lobes=3};
  if ( ARGC >2 ) {period=ARGV[2];ARGV[2]=""} else {period=17};
  if ( ARGC <2) { print "### usage: ./lanczos_HP.awk filename [-3dB_period=17] [lobes=3]"; err=1; exit 1; }
  else {len=ARGV[2]; ARGC=2;}

  pi= 3.14159265358979323846 ;
  halfk=int(period/2*lobes);
  kwid= 2*halfk;         # kernel = -halfk....+halfk = kwid+1 pts ie odd number
  w_taper=twopi/kwid;    # first lanczos lobe -pi to +pi to fill window
  w=twopi/period;        # sinc passes through zero at pi, 2*pi, 3*pi ....

# calculate normalised kernel coeffs
  for (tot_wt=j=0;j<=halfk;j++) {tot_wt+=gwt[-j]=gwt[j]=sinc(j*w)*sinc(j*w_taper)};
  tot_wt=2*tot_wt-gwt[0];

# for HP filter, invert LP and add unit impulse:
  for (j=-halfk;j<=halfk;j++) {gwt[j]/=-tot_wt};
  gwt[0]+=1;

# strip off last .xxx part of file name
# improve this  (doesn't work with paths like ../filename)


  if (ARGV[1]=="-") {
    out_file="-";
  }
  else {
    split(ARGV[1],fn,".");
    for (i=1;i<length(fn);i++){   # if multiple dots in name, build back up except last part
      if(i==1)basename=fn[i]
      else basename=basename"."fn[i];
    }
    out_file=basename"-lanc"lobes"HP_"period".dat";
    print "# ",out_file >out_file;
    print "# ",out_file
  }

  ln=-1;  # init linenumber counter
}

($0 !~ /^#/)&&($0 != "") {    # strip out comments, headers and blank lines
  xdata[++ln]=$1;
  ydata[ln]=$2;

  if (ln>2*halfk)
  {
    sum=0
    for (j=-2*halfk;j<=0;j++) {sum+=ydata[ln+j]*gwt[j+halfk]}
    if (out_file=="-")
      print xdata[ln-halfk],sum
    else {
      print NR,xdata[ln-halfk],sum
      print xdata[ln-halfk],sum >> out_file;
    }
  } #endif ln

}


END {
    if (err) exit err;
    if (FNR<=kwid) {print " ### insufficient data to fill "kwid+1" point filter buffer"; exit 1}
    print "# "lobes"-lobe lanczos kernel width = "2*halfk+1",done"
    if (lobes==5) printf ("# "lobes"-lobe lanczos HP on %s: zero-pass > %.3f; 99%% pass < %.3f \n",ARGV[1],period/.752,period/1.258)
    if (lobes==4) printf ("# "lobes"-lobe lanczos HP on %s: zero-pass > %.3f; 99%% pass < %.3f \n",ARGV[1],period/.698,period/1.325)
    if (lobes==3) printf ("# "lobes"-lobe lanczos HP on %s: zero-pass > %.3f; 99%% pass < %.3f \n",ARGV[1],period/.588,period/1.436)

    print "# output file = "out_file
# uncomment following to dump kernel (impulse response):
# sum=0; for (j=-halfk;j<=halfk;j++) {sum+= gwt[j]; printf ("%.4f\n", gwt[j])}
# print "# abs sum of coeffs = ",sum;

}



Lanczos low-pass filter




The frequency axis is in units of “per year”, not “year” as labelled.

Equivalent high-pass filter:
https://climategrog.wordpress.com/2013/11/28/lanczos-high-pass-filter/

Discussion: https://climategrog.wordpress.com/?attachment_id=659

Source Code: ( to get code, use browser copy / paste function within code block below).

#!/bin/awk -f

#  usage: ./lanczos_LP.awk filename [50%_attn_period=18] [lobes=3]

# purpose: convolve 2nd column of file with lanczos windowed sinc function
# this provides low-pass filter with a fast transition from pass-band to stop-band , with minimal ripple in both.
# column 1 data should be evenly spaced without gaps.
# optional second parameter is period of 50% attenuation, as a period expressed in data intervals
# optional third parameter is the number of lobes in impulse response.
# zero pass requires period * 1.436, eg 17.23 mo or 525 days
#    default=18 ; eg. 18 monthly periods : 50% attenuation at 18 mo, close to zero pass at 12 mo, for 3 lobes.
#    lobes=3 : use 525 for daily data ; lobes=5 : use 461 for daily (365/yr) data
# console output comments give pass-band and stop band info. this is not sent to output file.

#    More lobes gives sharper transition band but more overshoot distortion / ringing.
#    default=3; 3 and 4 are good. ### >=5 may produce significant ringing artefacts ###


function sinc(x) {
  if (x==0) return 1; else return (sin(x)/x);
}

BEGIN{  OFMT = "%8.6f"


  if ( ARGC >3 ) {lobes=ARGV[3];ARGV[3]=""} else {lobes=3};
  if ( ARGC >2 ) {period=ARGV[2];ARGV[2]=""} else {period=18};
  if ( ARGC <2) { print "### usage: ./lanczos_LP.awk filename [50%_attn_period=18] [lobes=3]"; err=1; exit 1; }
  else {len=ARGV[2]; ARGC=2;}

  pi= 3.14159265358979323846 ;
  twopi=2*pi;
  halfk=int(period/2*lobes);
  kwid= 2*halfk;         # kernel = -halfk....+halfk = kwid+1 pts ie odd number
  w_taper=twopi/kwid;    # first lanczos lobe -pi to +pi to fill window
  w=twopi/period;        # sinc passes through zero at pi, 2*pi, 3*pi ....

# calculate normalised kernel coeffs
  for (tot_wt=j=0;j<=halfk;j++) {tot_wt+=gwt[-j]=gwt[j]=sinc(j*w)*sinc(j*w_taper)};
  tot_wt=2*tot_wt-gwt[0];
  for (j=-halfk;j<=halfk;j++) {gwt[j]/=tot_wt};

# strip off last .xxx part of file name
# improve this  (doesn't work with paths like ../filename)

  if (ARGV[1]=="-") {
    out_file="-";
  }
  else {
    split(ARGV[1],fn,".");
    for (i=1;i<length(fn);i++){   # if multiple dots in name, build back up except last part
      if(i==1)basename=fn[i]
      else basename=basename"."fn[i];
    }
    out_file=basename"-lanc"lobes"LP_"period".dat";
    print "# ",out_file >out_file;
    print "# ",out_file
  }

  ln=-1;  # init linenumber counter
}

($0 !~ /^#/)&&($0 != "") {    # strip out comments, headers and blank lines
  xdata[++ln]=$1;
  ydata[ln]=$2;

  if (ln>2*halfk)
  {
    sum=0
    for (j=-2*halfk;j<=0;j++) {sum+=ydata[ln+j]*gwt[j+halfk]}
    if (out_file=="-")
      print xdata[ln-halfk],sum
    else {
      print NR,xdata[ln-halfk],sum
      print xdata[ln-halfk],sum >> out_file;
    }
  } #endif ln

}


END {
    if (err) exit err;
    if (FNR<=kwid) {print " ### insufficient data to fill "kwid+1" point filter buffer"; exit 1}
    print "# "lobes"-lobe lanczos kernel width = "2*halfk+1",done"
    if (lobes==5) printf ("# "lobes"-lobe lanczos LP on %s: zero-pass at %.3f; 99%% pass > %.3f \n",ARGV[1],period/1.258,period/.752)
    if (lobes==4) printf ("# "lobes"-lobe lanczos LP on %s: zero-pass at %.3f; 99%% pass > %.3f \n",ARGV[1],period/1.325,period/.698)
    if (lobes==3) printf ("# "lobes"-lobe lanczos LP on %s: zero-pass at %.3f; 99%% pass > %.3f \n",ARGV[1],period/1.436,period/.588)

    print "# output file = "out_file
# uncomment following to dump kernel (impulse response):
# sum=0; for (j=-halfk;j<=halfk;j++) {sum+= gwt[j]; printf ("%.4f\n", gwt[j])}
# print "# abs sum of coeffs = ",sum;

}
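As a usage illustration ( file names hypothetical ), following the defaults and the suggestions in the header comments:

./lanczos_LP.awk monthly_series.dat           # defaults: 50% attenuation at 18 data intervals, 3 lobes
./lanczos_LP.awk daily_series.dat 525         # 3 lobes, near-zero pass at 12 months for daily data
./lanczos_LP.awk daily_series.dat 461 5       # the 5-lobe alternative mentioned in the header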

simple running-mean script ( as base for triple RM )

The following script will calculate a simple running mean. On its own this is a very poor filter; see the accompanying article:
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

This script is intended for use by the triple-running-mean script available here:
https://climategrog.wordpress.com/2013/11/02/triple-running-mean-script/

Select code with mouse to copy elsewhere.

#!/bin/awk -f

# calculate running mean of width 'window'
# line number taken as x if file is a single column of data
# [usage] ./runmean.awk filename <window>
# default window=12
# strips empty and comment lines

# operates on files with two columns of numeric data
#  eg date as integer year or floating point decimal
# check whether data is continuous and evenly spaced !!


BEGIN{  OFMT = "%10.8f"
# ARGV[1]=fn; argv[0] is script name, hence ARGC>=1

  if (ARGV[1]=="") { print " ### error, no input file given." ; err=1; exit 1}

  ln=0;
  if  ( ARGC >2 ) {window=ARGV[2];ARGV[2]=""} else {window=12};
  if (int(window)!=window) {window=int(window+0.5); window_adj=1; }

  print "# filtering "ARGV[1]" with running mean  window = ",window

  if (ARGV[1]=="-") {
    out_file="-";
  }
   else {
    split(ARGV[1],fn,".");
    basename=fn[1]
    out_file=basename"-rmean"window".dat";

    print "# ",out_file >out_file;
  }
}  # end BEGIN


{ # body
if ( ($0 !~ /^#/) && ($0 !~ /^[ \t]*$/) )  {  # strip empty and comment lines

  if ($2==""){  # for single column data files
    xdata[++ln]=NR;
    ydata[ln]=$1;
  }else{
    xdata[++ln]=$1;
    ydata[ln]=$2;
  }

  sum+=ydata[ln];

  if (ln>=window)
  {
    mid_x=(xdata[ln]+xdata[ln-(window-1)])/2;  # -1 to centre correctly
    ymean=(sum)/window;
    sum-= ydata[ln-window+1];  # remove oldest
    if (out_file=="-")
      print mid_x,ymean;
    else {
      print NR,mid_x,ymean
      print mid_x,ymean >> out_file;
    }
  } # endif
 } # non empty
} # end body

END {
  if (err==1) { exit 1}
  if (window_adj==1) print "# warning : window rounded to nearest integer, window  = ",window
  print "# running mean window = "window",done"
  print "# output file = "out_file
}
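As a usage illustration ( file name hypothetical ); the script accepts either a file name or “-” for stdin, which is how the triple-running-mean wrapper in the next section calls it:

./runmean.awk monthly_series.dat 12            # writes monthly_series-rmean12.dat
cat monthly_series.dat | ./runmean.awk - 12    # same filter, results to stdout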

triple running-mean filter

The following script calls the simple running mean three times with appropriate window sizes to effect a triple running mean, as described in the article (as amended for the asymmetric kernel to minimise negative leakage):

https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

It requires the runmean.awk script found here:
https://climategrog.wordpress.com/2013/11/02/574/

Select code with mouse to copy elsewhere.

#!/bin/bash

# call runmean.awk three times to compose triple running mean
# usage: ./r3m.sh file window_len   ; default window is 12 data points

if [ "x$1" == "x"  ]; then echo "$0 : err no file name,   usage: $0 filename "
 exit 1;
else fn=$1
fi

if [ "x$2" == "x"  ]; then win=12; else win=$2; fi

#win2=`awk "BEGIN{print $win/ 1.3371}"`
#win3=`awk "BEGIN{print $win/ 1.3371/ 1.3371}"`


# asymmetric stages with following window ratios:
k=1.15; k2=1.58

win2=`awk "BEGIN{print $win/ "$k"}"`
win3=`awk "BEGIN{print $win/ "$k2" }"`

outfile=`echo $fn | awk '{ print substr($1,1,length($1)-4) }'`
outfile+="-3rm"$win".dat"

echo  "# triple running mean :  $win  $win2  $win3 " > $outfile

# echo $fn; echo $win; echo $win2; echo $win3

cat $fn | ./runmean.awk - $win |  ./runmean.awk - $win2 |  ./runmean.awk - $win3 >> $outfile

#echo "# cat $fn | ./runmean.awk - $win |  ./runmean.awk - $win2 |  ./runmean.awk - $win3 >> $outfile"

echo "# triple running mean :  $win  $win2  $win3 "
echo "# outfile = "$outfile;
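As a usage illustration ( file name hypothetical ): with a 12-point window the three stages are 12, 12/1.15 ≈ 10.4 and 12/1.58 ≈ 7.6 points, each rounded to the nearest integer by runmean.awk:

./r3m.sh monthly_series.dat 12      # writes monthly_series-3rm12.dat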

On identifying inter-decadal variation in NH sea ice

[EDIT] above image has been updated with Dec 2015 data.

Introduction

The variation in the magnitude of the annual cycle in arctic sea ice area has increased notably since the minimum of 2007. This means that using a unique annual cycle fitted to all the data leaves a strong residual (or “anomaly”) in post-2007 years. This makes it difficult or impossible to visualise how the data has evolved during that period. This leads to the need to develop an adaptive method to evaluate the typical annual cycle, in order to render the inter-annual variations more intelligible.

Before attempting to assess any longer-term averages or fit linear “trends” to the data, it is also necessary to identify any repetitive variations on a time-scale similar to that of the fitting period, in order to avoid spurious results.

See also: an examination of sub-annual variation
https://climategrog.wordpress.com/?attachment_id=460

Method

Short-term anomalies: adapting calculations to decadal variability.

Adaptation can be achieved by identifying different periods in the data that have different degrees of variability and calculating the average annual variation for each period.

Details of how the average seasonal profile for each segment was calculated are shown in the appendix.


Figure 1. The post-2007 interval.

The data was split into separate periods that were identified by their different patterns of variation. For example, the period from 1997 to 2007 has a notably more regular annual cycle (leading to smaller variation in the anomalies). The earlier periods have larger inter-annual variations, similar to those of the most recent segment.

An approximately periodic variation was noted in the data, in the rough form of a rectified sine wave. In order to avoid corrupting the average slope calculations by spanning from a peak to a trough of this pattern, a mathematical sine function was adjusted manually to approximate the period and magnitude seen in the data. It was noted that, despite the break in this pattern in the early 2000s, its phase is maintained until the end of the record and aligns with the post-2007 segment. However, there was a notable drop in level ( about 0.5×10^6 km^2 ). Labels indicate the timing of several notable climate events which may account for some of the deviations from the observed cyclic pattern. These are included for ease of reference without necessarily implying a particular interpretation or causation.



Figure 2. Identifying periods for analysis

The early period (pre-1988) was also separated out, since it was derived from a notably different instrument on a satellite that took much longer to achieve global coverage. It is mostly from the Nimbus 7 mission, which was in an orbit with a 6-day global-coverage flight path. Somehow this data was processed to produce a 2-day interval time-series, although documentation of the processing method seems light.

Later data, initially from the US defence meteorology platforms, starts in 1988 and has total polar coverage twice per day, producing a daily time-series.

In order to maintain the correct relationship between the different segments, the mean value for each segment, taken from the constant term of the harmonic model used to derive the seasonal variations, was retained. The anomaly time-series was reconstructed by adding the deviations from the harmonic model to the mean, for each segment in turn. The calculated anomalies were extended beyond the base period at each end, so as to provide an overlap to determine suitable points for splicing the separate records together. A light low-pass filter was applied to the residual ‘anomalies’ to remove high-frequency detail and improve visibility.

The result of this processing can be compared to the anomaly graph provided at the source of this data. The extent of the lines showing mean rates of change indicates the periods of the data used. These were chosen to be an integral number of cycles of the repetitive, circa five-year pattern.

Cavalieri et al [2] also report a circa 5 year periodicity:

The 365-day running mean of the daily anomalies suggests a dominant interannual variability of about 5 years



Figure 3. Showing composite adaptive anomaly



Figure 4. Showing Cryosphere Today anomaly derived with single seasonal cycle

Discussion

The average slope for each segment is shown and clearly indicate the decrease in ice area was accelerating from the beginning of the era of the satellite observations until 2007. The derived values suggest this was a roughly parabolic acceleration. This was a cause for legitimate concern around 2007 and there was much speculation as to what this implied for the future development of arctic ice coverage. Many scientists have suggested “ice free” summers by 2013 or shortly thereafter.

The rate of ice loss since 2007 is very close to that of the 1990s but is clearly less pronounced than it was from 1997 to 2007, a segment of the data which in itself shows a clear downward curvature, indicating accelerating ice loss.

Since some studies estimate that much of the ice is now thinner, younger ice of only one or two years of age, recent conditions should be a more sensitive indicator of change. Clearly the predicted positive feedbacks, run-away melting and catastrophic collapse of the Arctic ice cover are not in evidence. The marked deceleration since 2007 indicates that either the principal driver of the melting has abated or there is a strong negative regional feedback operating to counteract it.

The 2013 summer minimum is around 3.58 million km^2. Recent average rate of change is -0.043 million km^2 per year.

While it is pointless to suggest any one set of conditions will be maintained in an ever-varying climate system, with the current magnitude of the annual variation and the average rate of change shown in the most recent period, it would take 83 years ( 3.58 / 0.043 ) to reach ice-free conditions at the summer minimum.

Some have preferred to redefine “ice free” to mean less than 1 million km^2 of sea ice remaining at the summer minimum. On that basis the “ice free” summer figure reduces to 61 years.

Conclusion

In order to extract and interpret inter-decadal changes in NH sea ice coverage, it is essential to characterise and remove short-term variation. The described adaptive method allows a clearer picture of the post 2007 period to be gained. It shows that what could be interpreted as a parabolic acceleration in the preceding part of the satellite record is no longer continuing and is replaced by a decelerating change in ice area. At the time of this analysis, the current decadal rate of change is about -430,000 km^2 per decade, against an annual average of around 8.7 million and a summer minimum of around 3.6 million km^2.

Whether the recent deceleration will continue or revert to the earlier acceleration is beyond current understanding and prediction capabilities. However, it is clear that it would require a radical change for either an ice-free ( or a less-than-one-million km^2 ) arctic summer minimum to occur in the coming years.

Unfortunately data of this quality covers a relatively short period of climate history. This means it is difficult to conclude much except that the earlier areal acceleration has changed to a deceleration, with the current rate of decline being about half that of the 1997-2007 period. This is clearly at odds with the suggestion that the region is dominated by a positive feedback.

It is also informative to put this in the context of the lesser, but opposite, tendency of increasing sea ice coverage around Antarctica over the same period.


Figure 5. Showing changes in Arctic and Antarctic sea ice coverage





Appendix: approximating the typical seasonal cycle

Spectral analysis of the daily ice area data shows many harmonics of the annual variation, the amplitude generally diminishing with frequency. Such harmonics can be used to reconstruct a good approximation of the asymmetric annual cycle.

Here the typical cycle of seasonal variation for the period was estimated by fitting a constant plus 7 harmonics to the data ( the 7th harmonic having a period of about 52 days ). This is similar to the technique described by Vinnikov et al [1]. The residual difference between the data and this model was then taken as the anomaly for the segment and low-pass filtered to remove 45-day (8th harmonic) and shorter variations.
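A minimal sketch of this kind of seasonal-cycle estimate is given below, in the same awk style as the filter scripts above. It is not the code used for this analysis: it assumes evenly spaced data in decimal years spanning a whole number of years, in which case projecting onto each harmonic approximates the least-squares fit.

#!/bin/awk -f
# sketch: fit constant + 7 annual harmonics by projection, output the residual "anomaly"
# input: two columns, decimal_year value ; evenly spaced, whole number of years
BEGIN{ pi=3.14159265358979323846; nh=7 }
($0 !~ /^#/) && (NF>=2) { t[++n]=$1; y[n]=$2; mean+=$2 }
END{
  mean/=n
  for (k=1;k<=nh;k++){
    for (i=1;i<=n;i++){ ph=2*pi*k*t[i]; a[k]+=y[i]*cos(ph); b[k]+=y[i]*sin(ph) }
    a[k]*=2/n; b[k]*=2/n
  }
  for (i=1;i<=n;i++){
    model=mean
    for (k=1;k<=nh;k++){ ph=2*pi*k*t[i]; model+=a[k]*cos(ph)+b[k]*sin(ph) }
    print t[i], y[i]-model      # date and anomaly
  }
}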

This reveals an initial recovery from the 2007 minimum and a later drift downwards to a new minimum in 2012. By the time of the annual minimum for 2013, another strong recovery seems to have started.

This repetitive pattern, which was identified in the full record, is used to define trough-to-trough or peak-to-peak periods over which to calculate the average rate of change for each segment of the data. This was done by fitting a linear model to the unfiltered anomaly data ( using a non-linear least squares technique).

Supplementary Information

The fitted average rate of change for the respective periods, in chronological order ( million km^2 per year )

-0.020
-0.047
-0.088
-0.043

The cosine amplitude, half the total annual variation, of the fundamental of the harmonic model for successive data segments ( million km^2 )

pre 1988 -4.43
1988-1997 -4.47
1997-2007 -4.62
post 2007 -5.14

Mathematical approximation of repetitive pattern used to determine sampling intervals, shown in figure 2.

cosine period 11.16 years ( 5.58 year repetition )
cosine amplitude 0.75 x 10^6 km^2

Resources

Data source: http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008
(data downloaded 16th Sept 2013)

An extensive list of official sources of both arctic and antarctic sea ice data can be found here:
http://wattsupwiththat.com/reference-pages/sea-ice-page/

 
[1] Vinnikov et al 2002
“Analysis of seasonal cycles in climatic trends with application to satellite observations of sea ice extent”
http://onlinelibrary.wiley.com/doi/10.1029/2001GL014481/pdf

 
[2] Cavalieri et al 2003
“30-Year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability”

http://www.meto.umd.edu/~kostya/Pdf/Seaice.30yrs.GRL.pdf

Amplitude Modulation Triplets

Understanding of “beats” and amplitude modulation requires going back to basics to distinguish two similar treatments that are often confused with one another.

Acoustic Beats
The following link shows the modulation which leads to the acoustic beats phenomenon and contains audio examples to listen to:
http://www.animations.physics.unsw.edu.au/jw/beats.htm
Frequencies in amplitude modulation:

The basic trigonometrical identity [1] that is used to relate modulation to interference patterns is this:

cos a * cos b = 0.5 * ( cos (a+b) + cos (a-b) )

If a physical system contains modulation as expressed by the multiplication on the left of the equation, spectral analysis will split things into a series of additive components and we will have to interpret what we find in the spectrum as being the sum and the difference of the physical frequencies that are modulating each other.

Superposition (beats).
In the other direction the superposition (addition) of two signals can be converted back to modulation :
cos (f1 t) + cos (f2 t) = 2 cos ( (f1 + f2) t / 2 ) * cos ( (f1 – f2) t / 2 )
which, renaming the arguments as a and b, becomes
cos (a) + cos (b) = 2 cos ( (a + b)/2 ) * cos ( (a – b)/2 )

So the presence of two frequencies in a Fourier spectrum is equivalent to physical modulation of their average frequency by half their difference. This is a mathematical identity: the two interpretations are equivalent and interchangeable, so it is totally general and independent of whatever physical system the pattern is observed in. If this kind of pattern is found, the cause could be either a modulation or a superposition.
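The identity is easy to spot-check numerically, for example with an awk one-liner ( the values of a and b are arbitrary ):

awk 'BEGIN{ a=1.3; b=0.4; print cos(a)+cos(b), 2*cos((a+b)/2)*cos((a-b)/2) }'
# both expressions evaluate to the same number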

In the presence of perfect sampling the two forms are mathematically identical, and again what would be found in the spectrum would be the left-hand side: the two additive signals. However, what happens to the modulating envelope on the right in the climate system may well mean that the faster term gets smoothed out, or that the sampling interval and all the averaging and data processing break it up. The longer, lower-frequency signal may be all that is left, and then that is what will show up in the spectrum.

This is similar to what is called “beats” in an acoustic or musical context, except that the ear perceives twice the real physical frequency, since human audition senses the amplitude variation: the change in volume, NOT the frequency of the modulation. The amplitude peaks twice per cycle, so what we hear as two “beats” per second is a modulation of 1 hertz: for example, tones of 440 Hz and 442 Hz superpose to give a 441 Hz tone modulated by a 1 Hz envelope, heard as two beats per second. Care must be taken when applying this musical analogy to non-acoustic cycles such as those in climate variables.

Also, if one component ( usually the faster one ) gets attenuated or phase-delayed by other things in the climate system, it may still be visible, but the mathematical equivalence is gone and the two now-separate frequencies are detected.

Triplets

Modulation creates a symmetric pair of frequencies of equal magnitude, placed at the sum and the difference of the originals, i.e. the ‘side-lobe’ frequencies sit symmetrically about the central frequency. This is sometimes referred to as a ‘doublet’.

If the two cosine terms are equal, as shown above, neither of the original signal frequencies remains. However, if the higher frequency is of larger amplitude, a residual amount of it will remain, giving rise to a ‘triplet’ of frequencies: ( 1 + m * cos b ) * cos a = cos a + 0.5 * m * ( cos (a+b) + cos (a-b) ). This is what is usually done in radio transmission of an amplitude-modulated signal (AM radio). In this case the central peak is usually at least twice the magnitude of each side band.

It can be seen mathematically from the equations given above that, if both inputs are of equal amplitude, the central frequency will disappear, leaving just a pair of side frequencies. It may also be so small as to no longer be distinguishable from background noise in real measurements.

All of this can confound detection of the underlying cycles in a complex system because the periods of the causative phenomena may be shifted or no longer visible in a frequency analysis.

There are many non-linear effects, distortions and feedbacks that will deform any pure oscillation and thus introduce higher harmonics. Indeed such distortions will be the norm rather than a pure oscillation and so many harmonics would be expected to be found.

As a result, even identifying the basic cause of a signal can be challenging in a complex system with many interacting physical variables.

The triplet is a useful pattern to look for, being suggested by the presence of equally spaced frequencies, although the side peaks may be attenuated by other phenomena and are not always of equal height as in the abstract example.

Examples of these kind of patterns can be found in variations of Arctic ice coverage.

https://climategrog.wordpress.com/?attachment_id=757
https://climategrog.wordpress.com/?attachment_id=756
https://climategrog.wordpress.com/?attachment_id=438

 References:

Sum and Product of Sine and Cosine: