On identifying inter-decadal variation in NH sea ice


Introduction

The variation in the magnitude of the annual cycle in Arctic sea ice area has increased notably since the minimum of 2007. This means that using a single annual cycle fitted to all the data leaves a strong residual (or “anomaly”) in post-2007 years, making it difficult or impossible to visualise how the data have evolved during that period. Hence the need for an adaptive method to evaluate the typical annual cycle, in order to render the inter-annual variations more intelligible.

Before attempting to assess any longer term averages or fit linear “trends” to the data, it is also necessary to identify any repetitive variations of a similar time-scale to that of the fitting period, in order to avoid spurious results.

See also: an examination of sub-annual variation
https://climategrog.wordpress.com/?attachment_id=460

Method

Short-term anomalies: adapting calculations to decadal variability.

Adaptation can be achieved by identifying different periods in the data that have different degrees of variability and calculating the average annual variation for each period.

Details of how the average seasonal profile for each segment was calculated are shown in the appendix.


Figure 1. The post-2007 interval.

The data was split into separate periods that were identified by their different patterns of variation. For example, the period from 1997 to 2007 has a notably more regular annual cycle (leading to smaller variation in the anomalies). The earlier periods have larger inter-annual variations, similar to those of the most recent segment.

An approximately periodic variation was noted in the data, in the rough form of a rectified sine wave. In order to avoid corrupting the average slope calculations by spanning from a peak to a trough in this pattern, a mathematical sine function was adjusted manually to approximate the period and magnitude shown in the data. It was noted that, despite the break in this pattern in the early 2000s, its phase is maintained until the end of the record and aligns with the post-2007 segment. However, there was a notable drop in level ( about 0.5×10^6 km^2 ). Labels indicate the timing of several notable climate events which may account for some of the deviations from the observed cyclic pattern. These are included for ease of reference without necessarily implying a particular interpretation or causation.



Figure 2. Identifying periods for analysis

The early period (pre-1988) was also separated out, since it derives from a notably different instrument on a satellite that took much longer to achieve global coverage. It is mostly from the Nimbus 7 mission, which was in an orbit with a 6 day global coverage flight path. Somehow this data was processed to produce a 2 day interval time-series, although documentation of the processing method seems light.

Later data, initially from the US defence meteorology platforms, starts in 1988 and had total polar coverage twice per day, producing a daily time-series.

In order to maintain the correct relationship between the different segments, the mean value for each segment, taken from the constant term of the harmonic model used to derive the seasonal variations, was retained. The anomaly time-series was reconstructed by adding the deviations from the harmonic model to the mean, for each segment in turn. The calculated anomalies were extended beyond the base period at each end, so as to provide an overlap from which to determine suitable points for splicing the separate records together. A light low-pass filter was applied to the residual ‘anomalies’ to remove high-frequency detail and improve visibility.
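As a rough sketch of this reconstruction step ( Python, with toy stand-ins for the per-segment outputs; the names, segment values and filter width are illustrative, not the actual processing used ):

import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy stand-ins for two segments: ( constant term of the harmonic fit,
# residuals from that fit ), as produced segment by segment.
rng = np.random.default_rng(0)
segments = [ ( 9.5, rng.normal(0, 0.3, 3650) ),
             ( 9.0, rng.normal(0, 0.4, 2400) ) ]

# Re-attach each segment's own mean so the levels stay correctly related,
# then splice the pieces end to end at the chosen junction points.
composite = np.concatenate([ mean + resid for mean, resid in segments ])

# Light low-pass to remove high-frequency detail ( width illustrative ).
smoothed = gaussian_filter1d(composite, sigma=10)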

The result of this processing can be compared to the anomaly graph provided at the source of this data. The extent of the lines showing mean rates of change indicates the periods of the data used. These were chosen to be an integral number of cycles of the repetitive, circa five year pattern.

Cavalieri et al [2] also report a circa 5 year periodicity:

The 365-day running mean of the daily anomalies suggests a dominant interannual variability of about 5 years



Figure 3. Showing composite adaptive anomaly



Figure 4. Showing Cryosphere Today anomaly derived with single seasonal cycle

Discussion

The average slope for each segment is shown and clearly indicates that the decrease in ice area was accelerating from the beginning of the era of satellite observations until 2007. The derived values suggest this was a roughly parabolic acceleration. This was a cause for legitimate concern around 2007 and there was much speculation as to what it implied for the future development of Arctic ice coverage. Many scientists suggested “ice free” summers by 2013 or shortly thereafter.

The rate of ice loss since 2007 is very close to that of the 1990s but is clearly less pronounced than it was from 1997 to 2007, a segment of the data which itself shows a clear downward curvature, indicating accelerating ice loss.

Since some studies estimate that much of the ice is now thinner, younger ice of only 1 or 2 years of age, recent conditions should be a more sensitive indicator of change. Clearly the predicted positive feedbacks, run-away melting and catastrophic collapse of the Arctic ice sheet are not in evidence. The marked deceleration since 2007 indicates that either the principal driver of the melting has abated or there is a strongly negative regional feedback operating to counteract it.

The 2013 summer minimum is around 3.58 million km^2. Recent average rate of change is -0.043 million km^2 per year.

While it is pointless to suggest any one set of conditions will be maintained in an ever-varying climate system, with the current magnitude of the annual variation and the average rate of change shown in the most recent period, it would take 83 years to reach ice-free conditions at the summer minimum.

Some have preferred to redefine “ice free” to mean less than 1 million km^2 of sea ice remaining at the summer minimum. On that basis the “ice free” summer figure reduces to 61 years.
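The arithmetic behind these figures is simple enough to check ( Python ):

minimum = 3.58   # 2013 summer minimum, million km^2
rate = 0.043     # recent average rate of loss, million km^2 per year

print(minimum / rate)          # years to zero ice at the summer minimum, ~83
print((minimum - 1.0) / rate)  # years to the 1 million km^2 threshold, ~60 with these rounded inputs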

Conclusion

In order to extract and interpret inter-decadal changes in NH sea ice coverage, it is essential to characterise and remove short-term variation. The described adaptive method allows a clearer picture of the post-2007 period to be gained. It shows that what could be interpreted as a parabolic acceleration in the preceding part of the satellite record is no longer continuing and has been replaced by a decelerating change in ice area. At the time of this analysis, the current decadal rate of change is about -430,000 km^2 per decade, against an annual average of around 8.7 million and a summer minimum of around 3.6 million km^2.

Whether the recent deceleration will continue or revert to the earlier acceleration is beyond current understanding and prediction capabilities. However, it is clear that it would require a radical change for either an ice-free ( or less than one million km^2 ) Arctic summer minimum to occur in the coming years.

Unfortunately data of this quality covers a relatively short period of climate history. This means it is difficult to conclude much except that the earlier areal acceleration has changed to a deceleration, with the current rate of decline being about half that of the 1997-2007 period. This is clearly at odds with the suggestion that the region is dominated by a positive feedback.

It is also informative to put this in the context of the lesser, but opposite, tendency of increasing sea ice coverage around Antarctica over the same period.


Figure 5. Showing changes in Arctic and Antarctic sea ice coverage





Appendix: approximating the typical seasonal cycle

Spectral analysis of the daily ice area data shows many harmonics of the annual variation, the amplitude generally diminishing with frequency. Such harmonics can be used to reconstruct a good approximation of the asymmetric annual cycle.

Here the typical cycle of seasonal variation for the period was estimated by fitting a constant plus 7 harmonics to the data ( the 7th having a period of about 52 days ). This is similar to the technique described by Vinnikov et al [1]. The residual difference between the data and this model was then taken as the anomaly for the segment and low-pass filtered to remove 45 day (8th harmonic) and shorter variations.
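As a minimal sketch of such a fit ( Python; t is time in years and area is the daily ice area for one segment, both hypothetical names; ordinary least squares stands in for whatever fitting tool was actually used ):

import numpy as np

def fit_seasonal(t, y, n_harm=7, period=1.0):
    # Design matrix: constant plus cosine/sine pairs for each harmonic.
    cols = [ np.ones_like(t) ]
    for k in range(1, n_harm + 1):
        w = 2 * np.pi * k / period
        cols += [ np.cos(w * t), np.sin(w * t) ]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef        # coef[0] is the constant ( segment mean )

# coef, seasonal = fit_seasonal(t, area)
# anomaly = area - seasonal      # residual taken as the segment's anomaly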

The resulting anomaly series reveals an initial recovery from the 2007 minimum and a later drift downwards to a new minimum in 2012. By the time of the annual minimum of 2013, another strong recovery seems to have started.

This repetitive pattern, which was identified in the full record, is used to define trough-to-trough or peak-to-peak periods over which to calculate the average rate of change for each segment of the data. This was done by fitting a linear model to the unfiltered anomaly data ( using a non-linear least squares technique ).

Supplementary Information

The fitted average rate of change for the respective periods, in chronological order ( million km^2 per year )

pre 1988 -0.020
1988-1997 -0.047
1997-2007 -0.088
post 2007 -0.043

The cosine amplitude, half the total annual variation, of the fundamental of the harmonic model for successive data segments ( million km^2 )

pre 1988 -4.43
1988-1997 -4.47
1997-2007 -4.62
post 2007 -5.14

Mathematical approximation of repetitive pattern used to determine sampling intervals, shown in figure 2.

cosine period 11.16 years ( 5.58 year repetition )
cosine amplitude 0.75 x 10^6 km^2
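For reference, the figure 2 pattern written out as a rectified cosine ( Python; the phase value is the hand-adjusted element and is purely a placeholder here ):

import numpy as np

years = np.arange(1979, 2014, 1.0 / 52)   # weekly time axis
period = 11.16     # years; rectification gives the 5.58 year repetition
amplitude = 0.75   # 10^6 km^2
phase = 0.0        # adjusted by eye to align with the data

pattern = amplitude * np.abs(np.cos(2 * np.pi * (years - phase) / period))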

Resources

Data source: http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008
(data downloaded 16th Sept 2013)

An extensive list of official sources of both Arctic and Antarctic sea ice data can be found here:
http://wattsupwiththat.com/reference-pages/sea-ice-page/

 
[1] Vinnikov et al 2002
“Analysis of seasonal cycles in climatic trends with application to satellite observations of sea ice extent”
http://onlinelibrary.wiley.com/doi/10.1029/2001GL014481/pdf

 
[2] Cavalieri et al 2003
“30-Year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability”

http://www.meto.umd.edu/~kostya/Pdf/Seaice.30yrs.GRL.pdf


Amplitude Modulation Triplets

Understanding of “beats” and amplitude modulation requires going back to basics to distinguish two similar treatments that are often confused with one another.

Acoustic Beats
The following link shows the modulation which leads to the acoustic beats phenomenon and contains audio examples to listen to:
http://www.animations.physics.unsw.edu.au/jw/beats.htm
Frequencies in amplitude modulation:

The basic trigonometrical identity [1] that is used to relate modulation to interference patterns is this:

cos a * cos b = 0.5 * ( cos (a+b) + cos (a-b) )

If a physical system contains modulation as expressed by the multiplication on the left of the equation, spectral analysis will split things into a series of additive components and we will have to interpret what we find in the spectrum as being the sum and the difference of the physical frequencies that are modulating each other.
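A numerical illustration of that interpretation ( Python, synthetic data ): multiplying two cosines and taking a spectrum shows peaks only at the sum and difference frequencies, not at the two original ones.

import numpy as np

fs = 1000.0                       # samples per unit time
t = np.arange(0, 100, 1 / fs)
f1, f2 = 7.0, 1.0                 # the two modulating frequencies
signal = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spec > spec.max() / 2])   # peaks at f1-f2 = 6 and f1+f2 = 8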

Superposition (beats).
In the other direction, the superposition (addition) of two signals can be converted back to modulation:
cos( f1·t ) + cos( f2·t ) = 2 cos( (f1 + f2)·t/2 ) * cos( (f1 – f2)·t/2 )
which, renaming the variables using a and b, is:
cos a + cos b = 2 cos( (a + b)/2 ) * cos( (a – b)/2 )

So the presence of two frequencies in a Fourier spectrum is equivalent to physical modulation of their average frequency by half the difference of their frequencies. This is a mathematical identity: the two interpretations are equivalent and interchangeable, so it is totally general and independent of any physical system where this sort of pattern may be observed. If this kind of pattern is found, the cause could be either modulation or superposition.
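A quick numerical check of the identity ( Python ):

import numpy as np

t = np.linspace(0, 10, 5001)
f1, f2 = 8.0, 6.0
lhs = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
rhs = 2 * np.cos(2 * np.pi * 0.5 * (f1 + f2) * t) \
        * np.cos(2 * np.pi * 0.5 * (f1 - f2) * t)
print(np.allclose(lhs, rhs))   # True: superposition equals modulation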

In the presence of perfect sampling, the two forms are mathematically identical and again what would be found in the spectra would be the left-hand side: the two additive signals. However, what happens to the modulating envelope on the right in the climate system may well mean that the faster one gets smoothed out, or the sampling interval and all the averaging and data processing breaks it up. The longer, lower frequency signal may be all that is left, and then that is what will show up in the spectrum.

This is similar to what is called “beats” in an acoustic or musical context, except that the ear perceives twice the real physical frequency, since human audition senses the amplitude variation: the change in volume, NOT the frequency of the modulation. The amplitude peaks twice per modulation cycle, so what we hear as two “beats” per second is a modulation of 1 hertz. Care must be taken when applying this musical analogy to non-acoustic cycles such as those in climate variables.

Also, if one part ( usually the faster one ) gets attenuated or phase delayed by other things in climate, it may still be visible but the mathematical equivalence is gone and the two, now separate, frequencies are detected.

Triplets

Modulation thus creates a symmetric pair of frequencies of equal magnitude, placed either side of the central frequency at the sum and the difference of the originals. This is sometimes referred to as a ‘doublet’.

If the two cosine terms are equal, as shown above, neither of the original signal frequencies remains. However, if the higher frequency is of larger amplitude, a residual amount of it will remain, giving rise to a ‘triplet’ of frequencies. This is what is usually done in radio transmission of an amplitude modulated signal (AM radio). In this case the central peak is usually at least twice the magnitude of each of the side bands.
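A minimal sketch of that AM case ( Python, synthetic data ): with modulation depth m, the spectrum shows the carrier plus two side lines of m/2 times its amplitude, the classic triplet.

import numpy as np

fs = 1000.0
t = np.arange(0, 100, 1 / fs)
fc, fm, m = 50.0, 2.0, 1.0    # carrier, modulation frequency, depth

am = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spec = 2 * np.abs(np.fft.rfft(am)) / len(t)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (fc - fm, fc, fc + fm):
    print(f, spec[np.argmin(np.abs(freqs - f))])   # 0.5, 1.0, 0.5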

It can be seen mathematically from the equations given above that if both inputs are of equal amplitude, the central frequency will disappear, leaving just a pair of side frequencies. It may also be so small as to no longer be distinguishable from background noise in real measurements.

All of this can confound detection of the underlying cycles in a complex system because the periods of the causative phenomena may be shifted or no longer visible in a frequency analysis.

There are many non-linear effects, distortions and feedbacks that will deform any pure oscillation and thus introduce higher harmonics. Indeed such distortions will be the norm rather than a pure oscillation and so many harmonics would be expected to be found.

As a result, even identifying the basic cause of a signal can be challenging in a complex system with many interacting physical variables.

The triplet is a useful pattern to look for, suggested by the presence of equally spaced frequencies, although the side peaks may be attenuated by other phenomena and are not always of equal height as in the abstract example.

Examples of this kind of pattern can be found in variations of Arctic ice coverage.

https://climategrog.wordpress.com/?attachment_id=757
https://climategrog.wordpress.com/?attachment_id=756
https://climategrog.wordpress.com/?attachment_id=438

 References:

[1] Sum and Product of Sine and Cosine:

Data corruption by running mean “smoothers”

[See update at end of article]

Running means are often used as a simple low-pass filter, usually without understanding of their defects. Often the running mean is referred to as a “smoother”. In fact it does not even “smooth” very well, since it lets through enough high frequencies to give a spiky result.

Running means are fast and easy to implement. Since most people have some understanding of what an average does, the idea of a running average seems easily understood. Sadly it’s not that simple and running averages often cause serious corruption of the data.

So it smooths the data to an extent, but what else does it do?

The problem with an evenly weighted average is that the data is effectively masked by a rectangular window. The frequency response of such a rectangular window is the sinc function [1] and thus the effect on the frequency content of the data is to apply the sinc function as a frequency filter. The sinc function oscillates and has negative lobes that actually invert part of the signal it was intended to remove. This can introduce all sorts of undesirable artefacts into the data.
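This is easy to check numerically ( Python ): the response of an n-point flat window at frequency f ( in cycles per sample ) is sin(πfn)/(n·sin(πf)), which goes clearly negative in its first side lobe.

import numpy as np

n = 12                                # e.g. a 12-point running mean
f = np.linspace(0.001, 0.5, 1000)     # frequency, cycles per sample
H = np.sin(np.pi * f * n) / (n * np.sin(np.pi * f))

print(H.min())   # about -0.22: these frequencies pass through inverted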

An example of one of the problems can be seen here:
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
Figure 1. Comparing effects of different filters on a climate data time series ( 60 month running mean vs 30 month triple running mean [blue] ).

It can be noted that the peaks and troughs in the running mean are absolutely wrong. When the raw data has a peak the running mean produces a trough. This is clearly undesirable.

The data is “smoother” than it was but its sense is perverted. This highlights the difference between simply “smoothing” data and applying an appropriately chosen low-pass filter. The two are not the same, but the terms are often thought to be synonymous.

Some other filters, such as the gaussian, are much better behaved; however, a gaussian response is never zero, so there is always some leakage of what we would like to remove. That is often acceptable but sometimes not ideal.

Figure 2. Comparing the magnitude of the frequency response of gaussian and running mean filters. It should be noted that every other lobe of the running mean response is negative in sign, actually inverting the data.

Below is a comparison of two filters ( running mean and gaussian ) applied to some synthetic climate-like data generated from random numbers.

Figure 3. Showing artefacts introduced by simple running mean filter.

As well as the inversion defect, which is again found here around 1970, some of the peaks get bent sideways into an asymmetric form. In particular, this aberration can be noted around 1958 and 1981. In comparing two datasets in order to attribute causation or measure response times of events, this could be very disruptive and lead to totally false conclusions.

 

Triple running mean filters

Another solution is to improve the running mean’s frequency response.

The sinc function has the maximum of its troublesome negative lobe where πx = tan(πx). Solving this gives x = 1.4303, not the 1.3371 originally stated here.
[Thanks to Peter Mott for pointing out the error.]
However, simply targeting the peak of the lobe does not produce optimal results: somewhat reduced values leave less residual, hence the ratio of 1.3371 used below.
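That root is easy to confirm numerically ( Python, using SciPy's brentq on the relevant branch ):

import numpy as np
from scipy.optimize import brentq

# Extrema of sinc(x) = sin(pi x)/(pi x) satisfy tan(pi x) = pi x;
# the first negative lobe's peak lies between x = 1 and x = 1.5.
x = brentq(lambda x: np.tan(np.pi * x) - np.pi * x, 1.1, 1.49)
print(x)   # ~1.4303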

Now if a second running mean is passed after the first one, with a period shorter by this ratio, it will filter out the inverted data… and produce another, smaller, positive lobe.

A third pass will kill the new lobe, and by this stage any residual leakage is small enough that it is probably no longer a problem.

The triple running mean has the advantage that it has a zero in the frequency response that will totally remove a precise frequency, as well as letting very little of the higher frequencies through. If there is a fixed, known frequency to be eliminated, this can be a better choice than a gaussian filter of similar period.

The two are shown in the plot above and it can be seen that a triple running mean does not invert the peaks as was the case for the simple running mean that is commonly used.

Example.

With monthly data it is often desirable to remove an annual variation. This can be approximated by the 12, 9, 7 triple RM shown:

12 / 1.3371 = 8.9746
12 / 1.3371 / 1.3371 = 6.712

It can be seen that the second stage is pretty accurate ( 8.97 ≈ 9 ) but the final one is rather approximate ( 6.71 ≈ 7 ). However, the error is not large in the third stage.
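A sketch of the cascade ( Python, flat windows via convolution; the 12, 9, 7 month widths follow the arithmetic above ):

import numpy as np

def running_mean(y, n):
    # Flat window; 'valid' mode avoids edge effects but shortens the series.
    return np.convolve(y, np.ones(n) / n, mode="valid")

def triple_running_mean(y, n1=12, n2=9, n3=7):
    # Each successive window is shorter by a factor of about 1.3371.
    # Note: even-length windows imply a half-sample shift unless the
    # result is re-centred ( see the update at the end of this article ).
    return running_mean(running_mean(running_mean(y, n1), n2), n3)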



Figure 4. Comparing frequency response of gaussian and triple running mean filters.

A similar operation on daily data would use: 365, 273, 204

365.242 / 1.3371 = 273.16
365.242 / 1.3371 / 1.3371 = 204.29

Another advantage is that the data from an r3m filter really is “smooth”, since it does not let through the high frequencies that a simple running mean does. If the aim is simply to “smooth” the data, rather than to target a specific frequency, an r3m filter with half the nominal width often gives a smoother result without losing as much information, as was shown in figure 1.

This defect in the smoothing can be seen in the example plot. For example, there is a spike near 1986 in the simple running mean. Worst of all, this is not even a true spike in the data getting through the filter: it is an artefact.

Another example is the official NOAA [2] presentation of sun spot number (SSN) taken from SIDC [3], examined here:

In 2004, Svalgaard et al published a prediction of the cycle 24 peak [4]. That prediction has proved to be remarkably accurate. It would be even more remarkable if SIDC were to apply a “smoothing” filter that did not invert and displace the peak and reduce its value.

“Using direct polar field measurements, now available for four solar cycles, we predict that the approaching solar cycle 24 (~2011 maximum) will have a peak smoothed monthly sunspot number of 75 ± 8, making it potentially the smallest cycle in the last 100 years.”

SIDC processing converts a later trough into the peak value of cycle 24. The supposed peak aligns with the lowest monthly value in the last 2.5 years of data. Clearly the processing is doing more than the intended “smoothing”.

The filter used in this case is a running mean with the first and last points having reduced weighting. It is essentially the same and shares the same defects. Apparently the filter applied to SIDC data was introduced by the Zürich observatory at the end of the 19th century when all these calculations had to be done by hand ( and perhaps the defects were less well understood ). The method has been retained to provide consistency with the historical record. This practice is currently under review.

While it may have been a reasonable compromise in the 19th century, there seems little reason other than ignorance of the problems for using simple running mean “smoothers” in the 21st century.

Conclusion

Referring to a filter as a “smoother” is often a sign that the user is seeking a visual effect and may be unaware that this can fundamentally change the data in unexpected ways.


Wider appreciation of the corruption introduced by using running mean filters would be beneficial in many fields of study.

 

Refs.

  [1] Plot of sinc function http://mathworld.wolfram.com/SincFunction.html

  [2] NOAA/Space Weather Prediction Center http://www.swpc.noaa.gov/SolarCycle/index.html

  [3] SIDC sunspot data: http://sidc.oma.be/sunspot-data/
SIDC readme: http://sidc.oma.be/html/readme.txt
SIDC applies a 13 point running mean with first and last points weighted 50%. This is a slight improvement on a flat running mean but shares the same tendency to invert certain features in the data.

  [4] Svalgaard, L., E. W. Cliver, and Y. Kamide (2005), “Sunspot cycle 24: Smallest cycle in 100 years?”, Geophys. Res. Lett., 32, L01104, doi:10.1029/2004GL021664. http://www.leif.org/research/Cycle%2024%20Smallest%20100%20years.pdf

Appendix

Scripts to automatically effect a triple-running-mean are provided here:
https://climategrog.wordpress.com/2013/11/02/triple-running-mean-script/

Example of how to effect a triple running mean on Woodfortrees.org:
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17

Example of triple running mean in spread sheet:
https://www.dropbox.com/s/gp34rlw06mcvf6z/R3M.xls

[Update]

The main object of this article was to raise awareness of the strong, unintentional distortions introduced by the ubiquitous running mean “smoother”.

Filter design is a whole field of study in itself, of which even an introduction would be beyond the scope of this short article. However, it was also an aim to suggest some useful replacements for the simple running-average and to provide implementations that can easily be adopted. To that end, a small adjustment has been made to the r3m.sh script provided and another higher quality filter is introduced:

https://climategrog.wordpress.com/?attachment_id=659

A script to implement a low-pass Lanczos filter is provided here: https://climategrog.wordpress.com/2013/11/28/lanczos-filter-script/

An equivalent high-pass filter is provided here: https://climategrog.wordpress.com/2013/11/28/lanczos-high-pass-filter/

High-pass filters may be used, for example, to isolate sub-annual variability in order to investigate the presence or absence of a lunar influence in daily data.

An example is the 66 day filter used in this analysis:
https://climategrog.wordpress.com/?attachment_id=460

The following points arose in discussion of the article.

Vaughan Pratt points out that shortening the window by a factor of 1.2067 (rather than 1.3371 originally suggested in this article) reduces the stop-band leakage. This provides a useful improvement.

Further optimisation can be attained by reducing negative leakage peaks at the cost of accepting slightly more positive leakage. Since the residual negative peaks are still inverting and hence corrupting the data, this will generally be preferable to simply reducing net residuals irrespective of sign.

The asymmetric triple running-mean is shown in the comparison of the frequency responses, along with a Lanczos filter, here:
https://climategrog.wordpress.com/?attachment_id=660

The Pratt configuration and the asymmetric 3RM result in identical averaging intervals when set to remove the annual cycle from monthly data. Both result in a choice of 8,10 and 12 month windows.

The difference will have an effect when filtering longer periods or higher resolutions, such as daily data.

If this is implemented in a spreadsheet, it should be noted that each average over an even interval will result in a 0.5 month shift in the data since it is not correctly centred. In a triple running-mean this results in 1.5 months shift with respect to the original data.

In this case the 1.3371 formula originally suggested in the article, giving 12,9,7 month averages and producing just one 0.5 month lag, may be preferable.

None of these issues apply if the scripts provided accompanying the article are used, since they all correctly centre the data.

A more technical discussion of cascading running-mean filters to achieve other profiles, suggested by Pekka Pirilä, can be found in this 1992 paper, which should serve as a starting point for further study of the subject.
http://www.cwu.edu/~andonie/MyPapers/Gaussian%20Smoothing_96.pdf

Cyclic components in ice cover

Draft

Introduction

Background

To understand the processes driving polar ice coverage it is necessary to identify cyclic variations. Some would wish to trivialise climate into AGW plus random “stochastic” variability. This is clearly unsatisfactory. Much of the variation is more structured than may be apparent from staring at the ups and downs of a time series.

There are many cyclic or pseudo-cyclic repetitions, and before attempting to fit a linear regression trend to the data it is necessary to identify and remove them, or include them in the model. Failure to do this will lead to invalid, meaningless “trends” being concluded from the data. See cosine warming, ref 1.

One means of testing for the presence of periodicity in a dataset is spectral analysis. In particular, the spectral power distribution can be informative.

The power spectrum can be derived by taking the Fourier transform of the autocorrelation function. The latter is, in its own right, useful for identifying the presence or not of cyclic change in the data.
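A sketch of both quantities ( Python; a direct periodogram and a normalised autocorrelation, which form a Fourier-transform pair by the Wiener–Khinchin theorem ):

import numpy as np

def power_spectrum(y):
    y = y - y.mean()
    return np.abs(np.fft.rfft(y)) ** 2 / len(y)

def autocorrelation(y):
    y = y - y.mean()
    n = len(y)
    acf = np.correlate(y, y, mode="full")[n - 1:]   # positive lags only
    return acf / acf[0]                             # normalised to 1 at lag 0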

One condition to get useful results from Fourier analysis is that the data should be stationary (ref 1). Amongst other things this requires that the mean of the data should be fairly constant over time. An FT of a series with a rising/falling trend will produce a whole series of spurious peaks that result from turning a ramping, finite sample into an infinite series.

There are several tests and definitions of stationarity, and it is to some degree a subjective question without a black or white answer. A commonly used test is the augmented Dickey-Fuller unit root test, ADF (ref 2).

If a time-series is found not to satisfy the condition of stationarity, a common solution is to examine instead the rate of change. This is often more desirable than other ‘detrending’ techniques, such as subtracting some arbitrary mathematical “trend” like a linear trend or higher polynomial. Unless there is a specific reason for fitting such a model to remove a known physical phenomenon, such detrending will introduce non-physical changes into the data. Differencing is a linear process whose result is derived purely from the original data and thus avoids injecting arbitrary signals.

The time differential ( as approximated by the first difference of the discrete data ) will often be stationary when the time-series is not. For example a “random walk”, where the data is a sequence of small random variations added to the previous value, will be a series of random values in its differential and hence stationary. This is particularly applicable to climatic data, like temperature, where last year’s or last month’s value will determine to a large extent the next one. This kind of simple autoregressive model is often used to create artificial climate-like series for testing.
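A quick illustration ( Python, using the adfuller() test from statsmodels in place of R's adf.test ): a random walk fails the test while its first difference passes easily.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=2000))   # a random walk: non-stationary

print(adfuller(walk)[1])            # p-value large: cannot reject a unit root
print(adfuller(np.diff(walk))[1])   # p-value tiny: the difference is stationary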

To ensure there is no step change, as the end of the data is wrapped around to the beginning, it is usual to also apply a window function that is zero at each extreme and fades the data down at each end. This has the disadvantage of distorting the longer term variations but avoids introducing large spurious signals that can disrupt the whole spectrum.

Most window functions produce some small artificial peaks or ‘ringing’ either side of a real peak. Some do this more than others. The choice of window function depends to some extent on the nature and shape of the data. The choice is often a compromise.
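A sketch of the tapering step ( Python; the Hann window is used here as one common compromise choice ):

import numpy as np

def windowed_spectrum(y):
    y = y - y.mean()
    w = np.hanning(len(y))    # fades smoothly to zero at both ends
    return np.abs(np.fft.rfft(y * w))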

Method

Initial examination of the autocorrelation function of Arctic ice area data revealed the presence of notable periodicity other than the obvious annual cycle. Some recent published work is starting to comment on various aspects of this. (ref 3)

As a first step the annual cycle was removed by a triple running mean filter (ref 4) with a zero at 365 days and designed to avoid the usual distortions caused by simple running mean “smoothers”.

If a Fourier transform were to be done with the annual cycle still present, its magnitude, at least an order of magnitude greater than anything else, would reduce the accuracy of the FFT and also introduce noticeable windowing artefacts in the 0.5 to 2.0 year period range. For this reason it was removed.

The adf.test() function ( from the R ‘tseries’ package ) returned values indicating it was not possible to assume that the data was stationary. Contrariwise, the test on the derivative of the time series indicated strongly that it was stationary.

Non-stationarity is probably caused by long term trend or long period cyclic variation (long relative to the duration of the dataset).

Taking the rate of change reduces linear trends to a constant and attenuates the amplitude of long periods by an amount proportional to the frequency, making the series more amenable to analysis. The 365d filter will also attenuate periods of less than about 5 years, and this needs to be borne in mind, or corrected for, when relative magnitudes are considered in the spectrum.
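The attenuation factor follows from differentiating a sinusoid:

d/dt [ A sin( 2πt/T ) ] = ( 2πA/T ) cos( 2πt/T )

so a cycle of period T appears in the rate of change with its amplitude scaled by 2π/T, and spectral amplitudes can be multiplied by T/2π to restore their relative magnitudes.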

ref 1: Cosine warming
https://climategrog.wordpress.com/?attachment_id=209

ref 1: stationarity requirement for FFT

ref 2: augmented Dickey-Fuller test notes
http://homepages.strath.ac.uk/~hbs96127/adfnotes.pdf

ref 3: arctic periodicity

ref 4: triple running mean filters
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

Open Mind or Cowardly bigot?

What I considered a banal comment and an interesting plot about the rate of change in Arctic sea ice area, which I posted on RC, has got Grant Foster, a.k.a. “Tamino”, sufficiently alarmed that he has made it the preoccupation of his blog (laughably called “Open Mind”), according it two articles in the last week.

The discussion following his first hit piece resulted in him blocking all further comment from me to avoid explaining why he thought up was the same as down in the rate of change plot. His stated reason for not explaining to poor fool me, who still thought that up was the opposite of down, was “I’m not going to help you”. Strangely, his article claims to “school” me, yet he does not want to help me see where I’m wrong.

When I attempted to correct some of the other fictions he had spun out of my brief comment at RC, he responded with more ranting and by calling me a liar twice, before running to hide behind his control of the blog content and permanently blocking my right to reply.

Come out all mouthy and then run and hide behind the door. Impressive that, Grant, really impressive.

I invited him to assess for himself whether that was the actions of an open minded scientist or a cowardly bigot.

I have my own personal opinion on that but I’ll leave him and others to decide for themselves.

=====

So why such vitriol? Why did this merit such effort on his part for such a trivial comment at #52 somewhere on an open thread on a fast moving blog?

Heck, there’s enough garbage on the internet every second for some short blog comment, that is supposedly wrong, to be safely ignored. Maybe he thinks I’m onto something.

Thankfully Grant Foster has co-authored one climate paper and seems to know his way around, so he was able to provide some useful input.

In his first article he reproduced my graph from Cryosphere Today’s data, so that provides corroboration of the extraction and processing.
In his second thread he created some artificial “random” data using an AR1 model to provide a comparison.

Now, because of the 12 month gaussian filter they are both smooth curves of a similar scale. However, there is no obvious pattern in the AR1 model, no repetition of peaks at equal spacings.

The series of equally spaced cycles, broken by a 10 year period of accelerating ice loss, is certainly not reproduced by the AR1 test data. Perhaps it’s a feature of the data after all.

He maintained in his first article that my filter had simply amplified some insignificant peak out of the noise. But if that was the case, the same would presumably have happened with his synthetic AR1 data and there would be a similar strong mid-range cycle in the filtered AR1 test, most likely with a different frequency. But there isn’t.

Whether those cycles are “statistically significant” needs to be looked at in more detail. Foster made a lame attempt at showing they were not but when I raised a technical error he had made in his Fourier analysis, guess what? He deleted my comment from his blog to prevent others seeing his incompetence and refused to discuss any further, removing all my replies.

Now whether he is simply so bigoted and intolerant that he cannot abide anything or anyone that disagrees with him is a possibility that cannot be ignored. However, if that is not the case, it would suggest that there is something in my analysis of that data that he feels a strong need to suppress.

I did intend to write this up at some time in the future. Foster’s reaction makes me think this may merit more immediate attention.

Thanks for the tip Tammy. Your encouragement is most welcome and thanks for the limited help you were able to provide.

Appendix
NOAA’s Arctic Oscillation index has also just gone below the long term mean. Another indication that there is either multi-decadal natural variability now going negative or, at least, a return to stable conditions.

The AO data is long enough to suggest it covers a full 60 year oscillation, though one cycle is not enough to clearly establish such a pattern with any degree of certainty. However, it certainly does not show any “run away warming” or “tipping points”. Quite the contrary.

The latter part of the AO record is consistent with my observations on the rate of change.

http://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/season.JFM.ao.gif


NOAA Arctic Oscillation Index.

Talkshop Immoderation cf. Tamino’s Open Mind

What happens to a science blog when moderators lack moderation

Having contributed an article to Roger Tattersall’s “Talkshop”, it seems I have fallen foul of The King’s wrath.

I am now banned from all discussion on this “top science blog”. LOL

Premier offence of Lèse-majesté: I used the word “magic” in reference to something His Majesty posted.

That saw all my subsequent posts blocked pending royal approval.

Second offence of Lèse-majesté: I did not reply to a question He asked me, on a thread I was no longer watching because he asked me to let them get on with it.

All subsequent posts were again blocked, pending my fulfilling my servile duty and helping His Majesty understand a Wikipedia page He had failed to read properly.

That was followed by an amusingly threatening email: “I’ll give you another hour to reply.”

OO-err!

I pointed out that this kind of behaviour, which Warmist sites are often scathed for, was not appropriate to open scientific debate and that I would no longer be contributing.

Predictably that also got removed without a trace. Anticipating that this would happen, I took a screen shot.

So did I break site rules? Apparently not:

Rule (1) There are no rules.
Rule (2) See rule (1)
Rule (3) See rule (2)

And mind your manners while you do it.

Apparently that advice about manners is for others. Of course it does not apply to His Majesty.

[Moderators Reply] It looks like Greg prefers to flounce rather than accept his arse on a plate again – this time re barycentric orbits and solar inertial motion. Some people just can’t admit when they’ve got it wrong. Especially not to themselves. Not even when you give them the relevant JPL documentation to read.

In fact, on both occasions this “arse on plate” story is part of His Majesty’s own personal fantasy world.

When I try to point out his false claim about what the JPL doc says, since he apparently did not read it or could not understand it, I find I’m banned from posting a reply.

Not held for moderation. All posts are automatically binned.

So much for scientific debate.

Apparently Willis Eschenbach also got banned some time back for criticising a paper His Majesty liked.

Now everyone is entitled to run their own little corner how they choose.

Just don’t pretend you are a top science blog if you’re that touchy about the slightest criticism and not prepared to allow open debate.

http://tallbloke.wordpress.com/2013/03/02/greg-goodman-lunar-solar-influence-on-sea-surface-temperature/

The rest of the discussion is worth reading. Some especially interesting posts from Paul Vaughan and Ian Wilson.

Addendum.

Grant Foster, aka “Tamino”, having started a very vitriolic hit piece on my methods, finds himself in a corner and uses his editorial control to avoid seeing something he does not want to see.

http://tamino.wordpress.com/2013/03/08/back-to-school/#comment-79849

Grant, with his “Open Mind”, has his eyes firmly closed it seems. Here’s my final challenge to him to explain why he disagrees that the rate of change of Arctic ice is receding. He chose to snip all further comment from me rather than reply to two simple questions. Open mind … my foot. Read the preceding posts to understand the context.

Greg Goodman says:
So what is your problem with explaining why you disagree? “Here’s my guess:” the data shows something you do not wish to recognise?

Ok, so rate of change of ice cover _looks_ like its magnitude is reducing and _looks_ like the increasingly negative rate of change is now a diminishing rate of change. But that’s just cos I know nothing. I don’t even know that up is the same as down. Sheesh, pwned again!

Now you know that up is really down but are not prepared to “help” me by explaining why. You’re going to keep the all-important explanation to yourself until I produce something else you can use to distract attention, and hopefully everyone will have forgotten you were going to “help” me to understand why up is down.

Tamino: [Otherwise, stop pestering the adults.]

Don’t question grown-ups. Daddy knows best, now go and play.

Fine.

So if I show you something else that shows a change in direction in the Arctic, why would I expect that you will do anything different than tell me that you disagree, that I know nothing, but won’t say why, because you don’t want to help me?

===