Thursday, September 5, 2013

Uncertainty of temperature trends.

This post follows a previous one. There have been intermittent controversies about quoted trends for temperature series. One from about four years ago was well-described by Hu McCulloch, with a good summary of the theory. The issue is that the simple OLS regression gives generally good trend measures, but the associated uncertainty is greatly underestimated if the residuals are correlated in time. Monthly series are usually quite correlated; annual series much less so, but they have higher uncertainties due to fewer degrees of freedom. The effect of correlated residuals is often likened to a reduced number of dof.

The standard remedy has been the one recommended by McCulloch (but long used before): an AR(1) regression model, though usually implemented as the OLS trend with a Quenouille (1952) correction to the confidence intervals of the trend. This post introduces an extension of the Quenouille approx to ARMA(1,1).

The Quenouille approximation is fine, but in 2011 Foster and Rahmstorf argued that AR(1) was inadequate, and ARMA(1,1), with an extra parameter, better described the observed autocorrelation functions.

This has some practical consequences, because ARMA(1,1) was then adopted by the Skeptical Science trend calculator and is being cited as the basis for various numbers of "years without statistically significant warming". However, other such quotes, like the famous "none since 1995" quote of Phil Jones, are based on the more standard AR(1) model.

My trend viewer uses AR(1) with Quenouille. Since I'm regularly updating now, the inconsistency may be an issue. So I decided to further examine the choice of model.


Some linear algebra of regression and ARIMA

This builds on an earlier post showing how autoregressive models could be fitted exactly by solving polynomial equations. Suppose we have a vector y of temperatures over consecutive time periods (months), and a matrix X of regressors, the first column just 1's, and the second (and last) the times of each y. The 3-col matrix Y is just (y,X).
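
In R, the setup looks like this (a minimal sketch; the simulated y is just a stand-in for a real monthly anomaly series):

set.seed(1)
n <- 402                     # months, e.g. Jan 1980 - Jun 2013
tm <- 1:n                    # the times of each y
y <- as.numeric(0.0013 * tm + arima.sim(n = n, model = list(ar = 0.66)))  # stand-in anomalies
X <- cbind(1, tm)            # first column just 1's, second the times
Y <- cbind(y, X)             # the 3-col matrix Y = (y, X)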

Then the OLS regression problem is to fit the stochastic model
d = Y B = ε;   B = (1, -b),   b = (α, β)
The RHS ε is a vector of random variables, which are assumed iid (independent, identically distributed). b are the coefficients, d the residuals. The OLS fit is X* (Y B) = 0.

An AR(1) model brings in a lag matrix L. Ly is just y with every element moved back 1. It is just a square matrix with 0 everywhere except for a 1 just below each point on the diagonal. The stochastic model is just (I - ρL)d = ε.
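
A sketch of L and the AR(1) operator in R (small n so the structure is visible):

n <- 6
L <- matrix(0, n, n)
L[cbind(2:n, 1:(n-1))] <- 1   # a 1 just below each point on the diagonal
y <- 1:n
L %*% y                       # y moved back 1: 0 1 2 3 4 5
rho <- 0.66
diag(n) - rho * L             # the AR(1) operator (I - rho*L): banded, Toeplitz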

The operator on d has these properties: it is banded (just two diagonals), and Toeplitz. Toeplitz means that each row is almost a copy of the one above, shifted one place to the right (so just one new element on the left). Here it expresses stationarity - ie the relation between variables doesn't change between timesteps (except that time moves on a step).

A shift of two places is just expressed by the matrix L². The general ARMA model is
A d = M ε
where A and M are just low order polynomials in L. The notation ARMA(a,m) just lists the orders of these polynomials. There are a+m parameters.

Again ε is assumed iid, so the correct fitting is the LS solution of M⁻¹A d = 0. However, in terms of the coefficients b, this is little different from the OLS fit. The significant difference comes in the uncertainty attributed to the coefficients.

The OLS formula X* Y B = X* ε expresses the coefficients b as stochastic variables (the solution values correspond to mean ε = 0),
b = (X*X)⁻¹X* ε + mean(b)
so Covar(b) = (X*X)⁻¹ X*K X (X*X)⁻¹
where K = Covar(ε).
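
The sandwich formula is easy to evaluate directly. Here is a sketch, using a unit-variance AR(1) covariance as an illustrative K:

n <- 120
tm <- 1:n
X <- cbind(1, tm)
rho <- 0.66
K <- rho^abs(outer(1:n, 1:n, "-"))          # illustrative Covar(eps): AR(1), unit variance
XtXi <- solve(t(X) %*% X)
covb <- XtXi %*% t(X) %*% K %*% X %*% XtXi  # Covar(b) = (X*X)^-1 X*K X (X*X)^-1
sqrt(diag(covb))                            # s.e.'s of intercept and trend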


Autocorrelation


Continuing the idea that stationarity implies Toeplitz matrices, it is reasonable to expect that K can be modelled by a Toeplitz matrix, in which each row is a copy of the autocovariance, centered on the diagonal. The covariance of b is then made up from the zeroth, first and second central moments of the acv. We could calculate it that way, but noise from the ragged ends of the sample acv is a problem. That's why ARMA models are used.
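
In R, such a K is one line via toeplitz() (a sketch, with a simulated series standing in for the residuals; the far lags are the noisy ragged ends):

set.seed(1)
d <- arima.sim(n = 402, model = list(ar = 0.66))   # stand-in residuals
acv <- acf(d, lag.max = 401, type = "covariance", plot = FALSE)$acf[, 1, 1]
K <- toeplitz(acv)   # each row a copy of the acv, centered on the diagonal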

But in the ARMA models, we have a similar Toeplitz matrix, which is
(A⁻¹M)* A⁻¹M
so it is reasonable to compare these with the acv. I'll normalise so central values are 1, which makes the acv an acf.
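
As a check, the central rows of that matrix do reproduce the stationary model acf from ARMAacf(). A sketch, with illustrative ARMA(1,1) parameters:

n <- 200; r <- 0.86; m <- -0.38              # illustrative values
L <- matrix(0, n, n); L[cbind(2:n, 1:(n-1))] <- 1
A <- diag(n) - r * L
M <- diag(n) + m * L
B <- solve(A) %*% M
S <- t(B) %*% B                              # (A^-1 M)* A^-1 M
mid <- n %/% 2
emp <- S[mid, mid + (0:10)] / S[mid, mid]    # normalised central row: an acf
theory <- ARMAacf(ar = r, ma = m, lag.max = 10)
round(cbind(emp, theory), 4)                 # agree away from the ends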

Here is a plot of the acf's of five major temperature indices from 1980 to present, using the R acf() command. It's an active plot - you can press the buttons for the case you want:



[Interactive plot: empirical acf's for HADCRUT, GISS, NOAA, UAH and MSU-RSS, 1980-present]
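
For any one series, the curve is a couple of lines of R (a sketch; the simulated x stands in for the 1980-on anomalies):

set.seed(1)
x <- arima.sim(n = 402, model = list(ar = 0.66, ma = -0.38))  # stand-in series
acf(x, lag.max = 60)                                          # the R acf() command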


There are some marked features in these acf's, to an extent that I found surprising. The general pattern is that the part close to zero lag represents the structure of the randomness, while further away it is dominated by systematic variations - periodicities and other non-random deviations from linearity. There is a marked periodicity at about 45 months. It's in the data, and shows up in an FFT. Here it shows up as a sort of sinc function. Naturally the acf models can't follow it, nor should they. It would be better to take out some of these low frequency effects. I'll say more about that in a future post that looks at AIC considerations.

Now AR(1) fits a geometric progression, forcing a match to the first lag. As Foster and Rahmstorf say, this tends to underestimate, and ARMA(1,1) gives an extra degree of freedom which allows the later decay to vary to match the slower decay observed. So here are the same acf's but with the two models added. The coefficients are calculated using the R arima() command with the CSS optimisation, and the model acf's produced by ARMAacf().



[Interactive plot: the same acf's with fitted AR(1) and ARMA(1,1) model acf's overlaid, for HADCRUT, GISS, NOAA, UAH and MSU-RSS]
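
The overlay can be reproduced along these lines (a sketch on a simulated stand-in series, with parameters near the fitted ones):

set.seed(1)
x <- arima.sim(n = 402, model = list(ar = 0.86, ma = -0.38))  # stand-in series
acf(x, lag.max = 60)                                 # empirical acf
f1 <- arima(x, order = c(1, 0, 0), method = "CSS")   # AR(1)
f2 <- arima(x, order = c(1, 0, 1), method = "CSS")   # ARMA(1,1)
lines(0:60, ARMAacf(ar = coef(f1)["ar1"], lag.max = 60), col = "red")
lines(0:60, ARMAacf(ar = coef(f2)["ar1"], ma = coef(f2)["ma1"], lag.max = 60), col = "blue")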



But now you can see that, while the ARMA(1,1) is a good match until the acf gets close to zero, it of course stays positive while the acf goes negative. The central moments of the acf include the negative parts, so it's reasonable to say that the ARMA(1,1) approx overshoots, while the earlier undershoot of the AR(1) approx starts to look better.

What about other models? Here is HADCRUT 4 modelled with AR(2) and AR(3) as well.

A family of Quenouille approximations


If you center the time sequence so that it has zero mean, then the trend variance is proportional to the zeroth moment (area) under the acf within the range. Since the ARMA model acf's generally tend rather rapidly to zero relative to the trend range, it's a good approximation to take the whole acf area. If ρ is the parameter of the AR(1) model, then the acf is G(i,j) = ρ^|i-j|, and if you sum over all j for fixed i, the area is (1+ρ)/(1-ρ).
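
A quick numerical check of that area:

rho <- 0.66                 # illustrative value
sum(rho^abs(-500:500))      # two-sided sum over lags, effectively the whole acf area
(1 + rho) / (1 - rho)       # the Quenouille factor: same value, 4.882...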

For general ARMA with polynomials A(L), M(L), L = lag, there are corresponding approx factors. I'll describe the general theory in a coming post (it involves complex analysis), but for ARMA(1,1)
with M(L) = 1 + mL, A(L) = 1 - rL,
Q = (1+r)/(1-r) × (1+m)²/(1+2mr+m²)
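
This too can be checked against a summed model acf (a sketch, with illustrative coefficients):

Q <- function(r, m) (1 + r) / (1 - r) * (1 + m)^2 / (1 + 2 * m * r + m^2)
r <- 0.86; m <- -0.38                        # illustrative values
a <- ARMAacf(ar = r, ma = m, lag.max = 2000)
1 + 2 * sum(a[-1])                           # two-sided area under the model acf
Q(r, m)                                      # agrees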

Results

The coefficients are calculated using the R arima() command with the CSS optimisation. The period is from 1980 to June 2013. Trends are quoted in °C/century. Note that I give standard errors only - you need to multiply these by about 2 (1.96) to get a conventional 95% confidence interval.
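
Here is a sketch of the computation behind each column (on a simulated stand-in series; the real calculation reads the index anomalies):

set.seed(1)
n <- 402                                     # Jan 1980 - Jun 2013
tm <- 1:n
x <- 0.0013 * tm + arima.sim(n = n, model = list(ar = 0.66))   # stand-in series
fit <- arima(x, order = c(1, 0, 1), xreg = cbind(tm = tm), method = "CSS")
b  <- unname(coef(fit)["tm"]) * 1200         # trend in degC/century (monthly data)
se <- sqrt(fit$var.coef["tm", "tm"]) * 1200  # ARMA(1,1) s.e.
r <- coef(fit)["ar1"]; m <- coef(fit)["ma1"]
Q <- (1 + r) / (1 - r) * (1 + m)^2 / (1 + 2 * m * r + m^2)
ols <- lm(x ~ tm)
se_quen <- summary(ols)$coefficients["tm", "Std. Error"] * sqrt(Q) * 1200
c(trend = b, se.arma = unname(se), se.quen = unname(se_quen))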


Result               HADCRUT4   GISS      NOAA      UAH       MSU-RSS
OLS trend β          1.5662     1.5683    1.4842    1.4139    1.3039
OLS s.e. β           0.0656     0.0705    0.0621    0.0916    0.0900
AR(1) trend β        1.5707     1.5794    1.4925    1.4490    1.3269
AR(1) s.e. β         0.1452     0.1371    0.1331    0.2509    0.2372
AR(1) Quen s.e.      0.1448     0.1367    0.1327    0.2504    0.2371
AR(1) ρ              0.6591     0.5797    0.6409    0.7638    0.7480
ARMA(1,1) trend β    1.5801     1.5956    1.5038    1.4714    1.3474
ARMA(1,1) s.e. β     0.2126     0.1985    0.1911    0.3298    0.3393
ARMA(1,1) Quen s.e.  0.2115     0.1976    0.1901    0.3290    0.3387
ARMA(1,1) ρ          0.8595     0.8308    0.8466    0.8714    0.8861
ARMA(1,1) MA coef    -0.3785    -0.4007   -0.3690   -0.2677   -0.3329

Conclusions and next steps

I think AR(1) probably underestimates the variability of the trend and ARMA(1,1) probably overestimates it. I think it is possible to use the area under the acf directly, without fitting a model, but it will require tapering, which may introduce similar ambiguity.

One quantitative way of deciding when increasing the number of model parameters should end is the Akaike information criterion. Up to ARMA(1,1) there is no sign of overfitting.
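
In R the comparison is direct, though arima() needs a likelihood-based method for AIC rather than pure CSS (a sketch on a simulated series):

set.seed(1)
x <- arima.sim(n = 402, model = list(ar = 0.66))
AIC(arima(x, order = c(1, 0, 0), method = "ML"))   # AR(1)
AIC(arima(x, order = c(1, 0, 1), method = "ML"))   # ARMA(1,1)
AIC(arima(x, order = c(2, 0, 0), method = "ML"))   # AR(2)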

I'll expand on the topic of the extended Quenouille approx in a future post. The mathematics is interesting, and the results are elegant. But it requires the model coefficients (as it does for AR(1)), so as the models grow it sidesteps an ever smaller part of the work. I described in an earlier post on autocorrelation how these could be obtained by polynomial solution; I have a more systematic version which I'll post soon.

4 comments:

  1. > There is a marked periodicity at about 45 months

    ENSO ish? What if you try with the F+R ENSO removal method?

    1. I've tried just curve fitting trend removal - using Fourier peaks. The idea is to explain some of the variation by fitted curves rather than stochastic models, and reduce the oscillations in the acf. But yes, the simplest next step is probably to do the same analysis on the F&R adjusted series.

  2. Nick,

    This is a fantastic post. Your explanations of the Toeplitz matrices were very clear. I love simplification of something otherwise complex.

    I see different AR models as vague approximations of physical series. It is hard to attach too much credence to ANY of the results regarding significance, except for the fact that if ANY are close to significant, it often is reasonable. Your point at WUWT regarding monthly temperature series needing an extra chunk of variance taken from them is understandable. All of the temperature series suffer from more than a bit of change in recent annual anomaly in their signal. I'm not sure what the ultimate advantage is either way.

  3. Thanks, Jeff.
    I played around with the various approximations here. I've been hatching a post which says more strongly that if you identify the smooth (non-AR) part of the deviation, there isn't much ARIMA left. That's not good news, really, because there's still uncertainty - just harder to quantify.
