# Time Series Analysis with Applications in R (Solutions)

Solutions Manual to Accompany Time Series Analysis with Applications in R, Second Edition, by Jonathan D. Cryer and Kung-Sik Chan. Solutions by Jonathan Cryer and Xuemiao Hao, updated 7/28/08.

CHAPTER 1

Exercise 1.1 Use software to produce the time series plot shown in Exhibit (1.2), page 2.
The following R code will produce the graph.
> library(TSA); data(larain); win.graph(width=3,height=3,pointsize=8)
> plot(y=larain,x=zlag(larain),ylab='Inches',xlab='Previous Year Inches')

Exercise 1.2 Produce the time series plot displayed in Exhibit (1.3), page 3.
Use the R code
> data(color); plot(color,ylab='Color Property',xlab='Batch',type='o')

Exercise 1.3 Simulate a completely random process of length 48 with independent, normal values. Repeat this exercise several times with a new simulation, that is, a new seed, each time.
> plot(ts(rnorm(n=48)),type='o')
If you repeat this command, R will use new random numbers each time. If you want to reproduce the same simulation, first use the command set.seed(#########), where ######### is an integer of your choice.

Exercise 1.4 Simulate a completely random process of length 48 with independent, chi-square distributed values, each with 2 degrees of freedom.
Use the same R code as in the solution of Exercise 1.3 but replace rnorm(n=48) with rchisq(n=48,df=2).

Exercise 1.5 Simulate a completely random process of length 48 with independent, t-distributed values, each with 5 degrees of freedom. Construct the time series plot.
Use the same R code as in the solution of Exercise 1.3 but replace rnorm(n=48) with rt(n=48,df=5).

Exercise 1.6 Construct a time series plot with monthly plotting symbols for the Dubuque temperature series as in Exhibit (1.7), page 6. (Make the plot full screen so that you can see all of the detail.)
> data(tempdub); plot(tempdub,ylab='Temperature')
> points(y=tempdub,x=time(tempdub),pch=as.vector(season(tempdub)))

CHAPTER 2

Exercise 2.1 Suppose E(X) = 2, Var(X) = 9, E(Y) = 0, Var(Y) = 4, and Corr(X,Y) = 0.25.
Find:
(a) Var(X + Y) = Var(X) + Var(Y) + 2Cov(X,Y) = 9 + 4 + 2(0.25·3·2) = 16, since Cov(X,Y) = Corr(X,Y)√Var(X)√Var(Y) = 0.25·3·2 = 1.5.
(b) Cov(X, X + Y) = Cov(X,X) + Cov(X,Y) = 9 + 1.5 = 10.5.
(c) Corr(X + Y, X − Y). As in part (a), Var(X − Y) = 9 + 4 − 2(1.5) = 10. Then Cov(X + Y, X − Y) = Cov(X,X) − Cov(Y,Y) + Cov(X,Y) − Cov(X,Y) = Var(X) − Var(Y) = 9 − 4 = 5. So
Corr(X + Y, X − Y) = Cov(X + Y, X − Y)/√[Var(X + Y)Var(X − Y)] = 5/√(16·10) ≈ 0.3953.

Exercise 2.2 If X and Y are dependent but Var(X) = Var(Y), find Cov(X + Y, X − Y).
Cov(X + Y, X − Y) = Cov(X,X) − Cov(Y,Y) + Cov(X,Y) − Cov(Y,X) = Var(X) − Var(Y) = 0.

Exercise 2.3 Let X have a distribution with mean μ and variance σ², and let Yt = X for all t.
(a) Show that {Yt} is strictly and weakly stationary.
Let t1, t2,…, tn be any set of time points and k any time lag. Then
Pr(Yt1 ≤ y1, Yt2 ≤ y2,…, Ytn ≤ yn) = Pr(X ≤ y1, X ≤ y2,…, X ≤ yn) = Pr(Yt1−k ≤ y1, Yt2−k ≤ y2,…, Ytn−k ≤ yn),
as required for strict stationarity. Since the autocovariance clearly exists (see part (b)), the process is also weakly stationary.
(b) Find the autocovariance function for {Yt}.
Cov(Yt,Yt−k) = Cov(X,X) = σ² for all t and k, free of t (and k).
(c) Sketch a "typical" time plot of Yt.
The plot will be a horizontal "line" (really a discrete-time horizontal line) at the height of the observed X.

Exercise 2.4 Let {et} be a zero mean white noise process. Suppose that the observed process is Yt = et + θet−1, where θ is either 3 or 1/3.
(a) Find the autocorrelation function for {Yt} both when θ = 3 and when θ = 1/3.
E(Yt) = E(et + θet−1) = 0. Also Var(Yt) = Var(et + θet−1) = σ² + θ²σ² = σ²(1 + θ²), and Cov(Yt,Yt−1) = Cov(et + θet−1, et−1 + θet−2) = θσ², free of t. Now for k > 1, Cov(Yt,Yt−k) = Cov(et + θet−1, et−k + θet−k−1) = 0 since all of these error terms are uncorrelated. So
Corr(Yt,Yt−k) = 1 for k = 0, θσ²/[σ²(1 + θ²)] = θ/(1 + θ²) for k = 1, and 0 for k > 1.
But 3/(1 + 3²) = 3/10 and (1/3)/[1 + (1/3)²] = 3/10. So the autocorrelation functions are identical.
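The equality of the two autocorrelation functions can be checked by simulation. The following R sketch (the seed, series length, and unit noise variance are arbitrary choices, not from the text) compares the lag-one sample autocorrelations:

```r
# Simulate Yt = et + theta*e[t-1] for theta = 3 and theta = 1/3 from the
# same noise series; both lag-one sample autocorrelations should be near 3/10.
set.seed(2008)
n <- 10000
e <- rnorm(n + 1)
y.3     <- e[-1] + 3 * e[-(n + 1)]      # theta = 3
y.third <- e[-1] + (1/3) * e[-(n + 1)]  # theta = 1/3
r.3     <- acf(y.3, plot = FALSE)$acf[2]      # element 2 is lag 1
r.third <- acf(y.third, plot = FALSE)$acf[2]
c(r.3, r.third, 3/10)   # all three values should be close
```

Both estimates land near 0.3, so the sample ACF cannot tell the two values of θ apart.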
(b) You should have discovered that the time series is stationary regardless of the value of θ and that the autocorrelation functions are the same for θ = 3 and θ = 1/3. For simplicity, suppose that the process mean is known to be zero and the variance of Yt is known to be 1. You observe the series {Yt} for t = 1, 2,…, n and suppose that you can produce good estimates of the autocorrelations ρk. Do you think that you could determine which value of θ is correct (3 or 1/3) based on the estimate of ρk? Why or why not?
No. Since θ = 3 and θ = 1/3 produce identical autocorrelation functions, no estimates of the ρk, however good, can distinguish between the two values.

Exercise 2.5 Suppose Yt = 5 + 2t + Xt, where {Xt} is a zero mean stationary series with autocovariance function γk.
(a) Find the mean function for {Yt}. E(Yt) = E(5 + 2t + Xt) = 5 + 2t + E(Xt) = 5 + 2t.
(b) Find the autocovariance function for {Yt}. Cov(Yt,Yt−k) = Cov(5 + 2t + Xt, 5 + 2(t − k) + Xt−k) = Cov(Xt,Xt−k) = γk, free of t.
(c) Is {Yt} stationary? (Why or why not?) In spite of part (b), the process {Yt} is not stationary since its mean varies with time.

Exercise 2.6 Let {Xt} be a stationary time series and define Yt = Xt for t odd and Yt = Xt + 3 for t even.
(a) Show that Cov(Yt,Yt−k) is free of t for all lags k.
Since adding constants does not change covariances, Cov(Yt,Yt−k) = Cov(Xt,Xt−k) in every case, which is free of t because {Xt} is stationary.
(b) Is {Yt} stationary?
{Yt} is not stationary since E(Yt) = E(Xt) = μX for t odd but E(Yt) = E(Xt + 3) = μX + 3 for t even.

Exercise 2.7 Suppose that {Yt} is stationary with autocovariance function γk.
(a) Show that Wt = ∇Yt = Yt − Yt−1 is stationary by finding the mean and autocovariance function for {Wt}.
E(Wt) = E(Yt − Yt−1) = E(Yt) − E(Yt−1) = 0 since {Yt} is stationary. Also
Cov(Wt,Wt−k) = Cov(Yt − Yt−1, Yt−k − Yt−k−1) = Cov(Yt,Yt−k) − Cov(Yt,Yt−k−1) − Cov(Yt−1,Yt−k) + Cov(Yt−1,Yt−k−1) = γk − γk+1 − γk−1 + γk = 2γk − γk+1 − γk−1, free of t.
(b) Show that Ut = ∇²Yt = ∇[Yt − Yt−1] = Yt − 2Yt−1 + Yt−2 is stationary. (You need not find the mean and autocovariance function for {Ut}.)
Ut is the first difference of the process {∇Yt}.
By part (a), {∇Yt} is stationary. So Ut is the first difference of a stationary process and, again by part (a), is itself stationary.

Exercise 2.8 Suppose that {Yt} is stationary with autocovariance function γk. Show that for any fixed positive integer n and any constants c1, c2,…, cn, the process {Wt} defined by Wt = c1Yt + c2Yt−1 + … + cnYt−n+1 is stationary.
First, E(Wt) = c1E(Yt) + c2E(Yt−1) + … + cnE(Yt−n+1) = (c1 + c2 + … + cn)μY, free of t. Also
Cov(Wt,Wt−k) = Cov(c1Yt + … + cnYt−n+1, c1Yt−k + … + cnYt−n+1−k) = Σⱼ Σᵢ cj ci Cov(Yt−j+1, Yt−k−i+1) = Σⱼ Σᵢ cj ci γk+i−j (sums over i, j = 1,…, n), free of t.

Exercise 2.9 Suppose Yt = β0 + β1t + Xt, where {Xt} is a zero mean stationary series with autocovariance function γk and β0 and β1 are constants.
(a) Show that {Yt} is not stationary but that Wt = ∇Yt = Yt − Yt−1 is stationary.
{Yt} is not stationary since its mean, β0 + β1t, varies with t. However, E(Wt) = E(Yt − Yt−1) = (β0 + β1t) − (β0 + β1(t − 1)) = β1, free of t. The argument in the solution of Exercise 2.7 shows that the covariance function for {Wt} is free of t.
(b) In general, show that if Yt = μt + Xt, where {Xt} is a zero mean stationary series and μt is a polynomial in t of degree d, then ∇ᵐYt = ∇(∇ᵐ⁻¹Yt) is stationary for m ≥ d and nonstationary for 0 ≤ m < d.
Use part (a) and proceed by induction: each difference reduces the degree of the polynomial trend by one while leaving the stochastic part stationary.

Exercise 2.10 Let {Xt} be a zero-mean, unit-variance stationary process with autocorrelation function ρk. Suppose that μt is a nonconstant function and that σt is a positive-valued nonconstant function. The observed series is formed as Yt = μt + σtXt.
(a) Find the mean and covariance function for the {Yt} process.
Notice that Cov(Xt,Xt−k) = Corr(Xt,Xt−k) since {Xt} has unit variance. E(Yt) = E(μt + σtXt) = μt + σtE(Xt) = μt. Now Cov(Yt,Yt−k) = Cov(μt + σtXt, μt−k + σt−kXt−k) = σtσt−kCov(Xt,Xt−k) = σtσt−kρk.
Notice that Var(Yt) = σt².
(b) Show that the autocorrelation function for the {Yt} process depends only on the time lag. Is the {Yt} process stationary?
Corr(Yt,Yt−k) = σtσt−kρk/[σtσt−k] = ρk, but {Yt} is not necessarily stationary since E(Yt) = μt.
(c) Is it possible to have a time series with a constant mean and with Corr(Yt,Yt−k) free of t but with {Yt} not stationary?
Yes. If μt is constant but σt varies with t, this will be the case: the variance then changes with time even though the mean and autocorrelations do not.

Exercise 2.11 Suppose Cov(Xt,Xt−k) = γk is free of t but that E(Xt) = 3t.
(a) Is {Xt} stationary? No, since E(Xt) varies with t.
(b) Let Yt = 7 − 3t + Xt. Is {Yt} stationary? Yes, since the covariances are unchanged but now E(Yt) = 7 − 3t + 3t = 7, free of t.

Exercise 2.12 Suppose that Yt = et − et−12. Show that {Yt} is stationary and that, for k > 0, its autocorrelation function is nonzero only for lag k = 12.
E(Yt) = E(et − et−12) = 0. Also, for k > 0, Cov(Yt,Yt−k) = Cov(et − et−12, et−k − et−12−k) = −Cov(et−12,et−k) = −σe² when k = 12. It is nonzero only for k = 12 since, otherwise, all of the error terms involved are uncorrelated.

Exercise 2.13 Let Yt = et − θ(et−1)². For this exercise, assume that the white noise series is normally distributed.
(a) Find the autocorrelation function for {Yt}.
First recall that for a zero-mean normal distribution, E[(et−1)³] = 0 and E[(et−1)⁴] = 3σe⁴. Then
E(Yt) = −θVar(et−1) = −θσe², which is constant in t. Also
Var(Yt) = Var(et) + θ²Var[(et−1)²] = σe² + θ²{E[(et−1)⁴] − (E[(et−1)²])²} = σe² + θ²(3σe⁴ − σe⁴) = σe² + 2θ²σe⁴.
Also Cov(Yt,Yt−1) = Cov(et − θ(et−1)², et−1 − θ(et−2)²) = Cov(−θ(et−1)², et−1) = −θE[(et−1)³] = 0.
All other covariances are also zero.
(b) Is {Yt} stationary? Yes, in fact, it is a non-normal white noise in disguise!
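A simulation makes the "white noise in disguise" point concrete. In this R sketch (the value of θ, the seed, and the sample size are arbitrary illustrative choices), the sample autocorrelations are near zero even though the series is clearly skewed, hence non-normal:

```r
# Yt = et - theta * e[t-1]^2 is uncorrelated at all lags but non-normal.
set.seed(213)
n <- 20000
theta <- 0.8
e <- rnorm(n + 1)
y <- e[-1] - theta * e[-(n + 1)]^2
r1 <- acf(y, plot = FALSE)$acf[2]        # lag-one sample autocorrelation, near 0
skew <- mean((y - mean(y))^3) / sd(y)^3  # markedly negative: non-normal
c(r1, skew)
```

The series passes the "uncorrelated" check while failing any normality check, exactly as the algebra predicts.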
Exercise 2.14 Evaluate the mean and covariance function for each of the following processes. In each case determine whether or not the process is stationary.
(a) Yt = θ0 + tet. The mean is θ0, but the process is not stationary since Var(Yt) = t²Var(et) = t²σe² is not free of t.
(b) Wt = ∇Yt, where Yt is as given in part (a). Wt = ∇Yt = (θ0 + tet) − (θ0 + (t − 1)et−1) = tet − (t − 1)et−1. So the mean of Wt is zero. However, Var(Wt) = [t² + (t − 1)²]σe², which depends on t, and Wt is not stationary.
(c) Yt = etet−1. (You may assume that {et} is normal white noise.) The mean of Yt is clearly zero. Lag one is the only lag at which there might be correlation. However, Cov(Yt,Yt−1) = E(etet−1et−1et−2) = E(et)E[(et−1)²]E(et−2) = 0. So the process Yt = etet−1 is stationary and is a non-normal white noise!

Exercise 2.15 Suppose that X is a random variable with zero mean. Define a time series by Yt = (−1)ᵗX.
(a) Find the mean function for {Yt}. E(Yt) = (−1)ᵗE(X) = 0.
(b) Find the covariance function for {Yt}. Cov(Yt,Yt−k) = Cov[(−1)ᵗX, (−1)ᵗ⁻ᵏX] = (−1)²ᵗ⁻ᵏCov(X,X) = (−1)ᵏσX².
(c) Is {Yt} stationary? Yes, the mean is constant and the covariance depends only on the lag.

Exercise 2.16 Suppose Yt = A + Xt, where {Xt} is stationary and A is random but independent of {Xt}. Find the mean and covariance function for {Yt} in terms of the mean and autocovariance function for {Xt} and the mean and variance of A.
First, E(Yt) = E(A) + E(Xt) = μA + μX, free of t. Also, since {Xt} and A are independent,
Cov(Yt,Yt−k) = Cov(A + Xt, A + Xt−k) = Cov(A,A) + Cov(Xt,Xt−k) = Var(A) + γk, free of t.

Exercise 2.17 Let {Yt} be stationary with autocovariance function γk. Let Ȳ = (1/n)Σ_{t=1}ⁿ Yt. Show that
Var(Ȳ) = γ0/n + (2/n)Σ_{k=1}^{n−1}(1 − k/n)γk = (1/n)Σ_{k=−n+1}^{n−1}(1 − |k|/n)γk.
First,
Var(Ȳ) = (1/n²)Var(Σ_{t=1}ⁿ Yt) = (1/n²)Cov(Σ_{t=1}ⁿ Yt, Σ_{s=1}ⁿ Ys) = (1/n²)Σ_{t=1}ⁿ Σ_{s=1}ⁿ γ_{t−s}.
Now make the change of variable t − s = k and t = j in the double sum.
The range of the summation {1 ≤ t ≤ n, 1 ≤ s ≤ n} is transformed into {1 ≤ j ≤ n, 1 ≤ j − k ≤ n} = {k + 1 ≤ j ≤ n + k, 1 ≤ j ≤ n}, which may be written {k > 0, k + 1 ≤ j ≤ n} ∪ {k ≤ 0, 1 ≤ j ≤ n + k}. Thus
Var(Ȳ) = (1/n²)[Σ_{k=1}^{n−1} Σ_{j=k+1}ⁿ γk + Σ_{k=−n+1}⁰ Σ_{j=1}^{n+k} γk] = (1/n²)[Σ_{k=1}^{n−1}(n − k)γk + Σ_{k=−n+1}⁰(n + k)γk] = (1/n)Σ_{k=−n+1}^{n−1}(1 − |k|/n)γk.
Use γk = γ−k to get the first expression in the exercise.

Exercise 2.18 Let {Yt} be stationary with autocovariance function γk. Define the sample variance as S² = [1/(n − 1)]Σ_{t=1}ⁿ(Yt − Ȳ)².
(a) First show that Σ_{t=1}ⁿ(Yt − μ)² = Σ_{t=1}ⁿ(Yt − Ȳ)² + n(Ȳ − μ)².
Σ(Yt − μ)² = Σ(Yt − Ȳ + Ȳ − μ)² = Σ(Yt − Ȳ)² + n(Ȳ − μ)² + 2(Ȳ − μ)Σ(Yt − Ȳ) = Σ(Yt − Ȳ)² + n(Ȳ − μ)²,
since Σ(Yt − Ȳ) = 0.
(b) Use part (a) to show that E(S²) = [n/(n − 1)]γ0 − [n/(n − 1)]Var(Ȳ) = γ0 − [2/(n − 1)]Σ_{k=1}^{n−1}(1 − k/n)γk. (Use the results of Exercise (2.17) for the last expression.)
E(S²) = [1/(n − 1)]E[Σ(Yt − Ȳ)²] = [1/(n − 1)]{Σ E(Yt − μ)² − nE(Ȳ − μ)²} = [1/(n − 1)][nγ0 − nVar(Ȳ)] = [n/(n − 1)]γ0 − [n/(n − 1)]{γ0/n + (2/n)Σ_{k=1}^{n−1}(1 − k/n)γk} = γ0 − [2/(n − 1)]Σ_{k=1}^{n−1}(1 − k/n)γk.
(c) If {Yt} is a white noise process with variance γ0, show that E(S²) = γ0.
This follows since for white noise γk = 0 for k > 0.

Exercise 2.19 Let Y1 = θ0 + e1 and then for t > 1 define Yt recursively by Yt = θ0 + Yt−1 + et. Here θ0 is a constant. The process {Yt} is called a random walk with drift.
(a) Show that Yt may be rewritten as Yt = tθ0 + et + et−1 + … + e1.
Substitute Yt−1 = θ0 + Yt−2 + et−1 into Yt = θ0 + Yt−1 + et and repeat until you get back to e1.
(b) Find the mean function for Yt. E(Yt) = E(tθ0 + et + et−1 + … + e1) = tθ0.
(c) Find the autocovariance function for Yt. For t ≥ k,
Cov(Yt,Yt−k) = Cov(tθ0 + et + … + e1, (t − k)θ0 + et−k + … + e1) = Cov(et−k + … + e1, et−k + … + e1) = Var(et−k + … + e1) = (t − k)σe².

Exercise 2.20 Consider the standard random walk model where Yt = Yt−1 + et with Y1 = e1.
(a) Use the above representation of Yt to show that μt = μt−1 for t > 1 with initial condition μ1 = E(e1) = 0. Hence show that μt = 0 for all t.
Clearly, μ1 = E(Y1) = E(e1) = 0. Then E(Yt) = E(Yt−1 + et) = E(Yt−1) + E(et) = E(Yt−1), or μt = μt−1 for t > 1, and the result follows by induction.
(b) Similarly, show that Var(Yt) = Var(Yt−1) + σe² for t > 1, with Var(Y1) = σe², and, hence, Var(Yt) = tσe².
Var(Y1) = σe² is immediate. Then Var(Yt) = Var(Yt−1 + et) = Var(Yt−1) + Var(et) = Var(Yt−1) + σe². Recursion or induction on t yields Var(Yt) = tσe².
(c) For 0 ≤ t ≤ s, use Ys = Yt + et+1 + et+2 + … + es to show that Cov(Yt,Ys) = Var(Yt) and, hence, that Cov(Yt,Ys) = min(t,s)σe².
For 0 ≤ t ≤ s, Cov(Yt,Ys) = Cov(Yt, Yt + et+1 + et+2 + … + es) = Cov(Yt,Yt) = Var(Yt) = tσe², and hence the result.

Exercise 2.21 A random walk with random starting value. Let Yt = Y0 + et + et−1 + … + e1 for t > 0, where Y0 has a distribution with mean μ0 and variance σ0². Suppose further that Y0, e1,…, et are independent.
(a) Show that E(Yt) = μ0 for all t. E(Yt) = E(Y0 + et + et−1 + … + e1) = E(Y0) + E(et) + E(et−1) + … + E(e1) = E(Y0) = μ0.
(b) Show that Var(Yt) = tσe² + σ0². Var(Yt) = Var(Y0 + et + et−1 + … + e1) = Var(Y0) + Var(et) + Var(et−1) + … + Var(e1) = σ0² + tσe².
(c) Show that Cov(Yt,Ys) = min(t,s)σe² + σ0². Let t be less than s. Then, as in the previous exercise, Cov(Yt,Ys) = Cov(Yt, Yt + et+1 + … + es) = Var(Yt) = σ0² + tσe².
(d) Show that Corr(Yt,Ys) = √[(tσe² + σ0²)/(sσe² + σ0²)] for 0 ≤ t ≤ s. Just use the results of parts (b) and (c).

Exercise 2.22 Let {et} be a zero-mean white noise process and let c be a constant with |c| < 1. Define Yt recursively by Yt = cYt−1 + et with Y1 = e1.
This exercise can be solved using the recursive definition of Yt or by expressing Yt explicitly using repeated substitution as Yt = c(cYt−2 + et−1) + et = … = et + cet−1 + c²et−2 + … + cᵗ⁻¹e1. Parts (c), (d), and (e) essentially assume you are working with the recursive version of Yt, but they can also be solved using this explicit representation.
(a) Show that E(Yt) = 0.
First E(Y1) = E(e1) = 0. Then E(Yt) = cE(Yt−1) + E(et) = cE(Yt−1), and the result follows by induction on t.
(b) Show that Var(Yt) = σe²(1 + c² + c⁴ + … + c²ᵗ⁻²). Is {Yt} stationary?
Var(Yt) = Var(et + cet−1 + c²et−2 + … + cᵗ⁻¹e1) = σe²(1 + c² + c⁴ + … + c²ᵗ⁻²) = σe²(1 − c²ᵗ)/(1 − c²).
{Yt} is not stationary since Var(Yt) depends on t. Alternatively, Var(Yt) = Var(cYt−1 + et) = c²Var(Yt−1) + σe², and recursion gives the same expression.
(c) Show that Corr(Yt,Yt−1) = c√[Var(Yt−1)/Var(Yt)] and, in general, Corr(Yt,Yt−k) = cᵏ√[Var(Yt−k)/Var(Yt)] for k > 0. (Hint: Argue that Yt−1 is independent of et. Then use Cov(Yt,Yt−1) = Cov(cYt−1 + et, Yt−1).)
Cov(Yt,Yt−1) = Cov(cYt−1 + et, Yt−1) = cVar(Yt−1), so Corr(Yt,Yt−1) = cVar(Yt−1)/√[Var(Yt)Var(Yt−1)] = c√[Var(Yt−1)/Var(Yt)]. Next,
Cov(Yt,Yt−k) = Cov(cYt−1 + et, Yt−k) = cCov(cYt−2 + et−1, Yt−k) = c²Cov(Yt−2,Yt−k) = … = cᵏVar(Yt−k),
so Corr(Yt,Yt−k) = cᵏVar(Yt−k)/√[Var(Yt)Var(Yt−k)] = cᵏ√[Var(Yt−k)/Var(Yt)], as required.
(d) For large t, argue that Var(Yt) ≈ σe²/(1 − c²) and Corr(Yt,Yt−k) ≈ cᵏ for k > 0, so that {Yt} could be called asymptotically stationary.
These two results follow from parts (b) and (c), since c²ᵗ → 0 as t increases.
(e) Suppose now that we alter the initial condition and put Y1 = e1/√(1 − c²). Show that now {Yt} is stationary.
This part can be solved using repeated substitution to express Yt explicitly as
Yt = et + cet−1 + c²et−2 + … + cᵗ⁻²e2 + [cᵗ⁻¹/√(1 − c²)]e1.
Then show that Var(Yt) = σe²/(1 − c²) and Corr(Yt,Yt−k) = cᵏ for k > 0, both free of t.
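The recursion and the closed form in part (b), together with the limiting value in part (d), agree numerically. A short R check (the values of c, σe², and the horizon are arbitrary choices):

```r
# Compare Var(Yt) from the recursion Var(Yt) = c^2*Var(Y[t-1]) + sigma2
# with the closed form sigma2*(1 - c^(2t))/(1 - c^2) and with its limit.
cc <- 0.6; sigma2 <- 2; tmax <- 50
v <- numeric(tmax)
v[1] <- sigma2                          # Var(Y1) = Var(e1)
for (t in 2:tmax) v[t] <- cc^2 * v[t - 1] + sigma2
closed <- sigma2 * (1 - cc^(2 * (1:tmax))) / (1 - cc^2)
max(abs(v - closed))                    # ~ 0: recursion matches the closed form
abs(v[tmax] - sigma2 / (1 - cc^2))      # ~ 0: close to the asymptotic variance
```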
Exercise 2.23 Two processes {Zt} and {Yt} are said to be independent if for any time points t1, t2,…, tm and s1, s2,…, sn, the random variables {Zt1, Zt2,…, Ztm} are independent of the random variables {Ys1, Ys2,…, Ysn}. Show that if {Zt} and {Yt} are independent stationary processes, then Wt = Zt + Yt is stationary.
First, E(Wt) = E(Zt) + E(Yt) = μZ + μY. Then Cov(Wt,Wt−k) = Cov(Zt + Yt, Zt−k + Yt−k) = Cov(Zt,Zt−k) + Cov(Yt,Yt−k), which is free of t since both {Zt} and {Yt} are stationary.

Exercise 2.24 Let {Xt} be a time series in which we are interested. However, because the measurement process itself is not perfect, we actually observe Yt = Xt + et. We assume that {Xt} and {et} are independent processes. We call Xt the signal and et the measurement noise or error process. If {Xt} is stationary with autocorrelation function ρk, show that {Yt} is also stationary with
Corr(Yt,Yt−k) = ρk/(1 + σe²/σX²) for k ≥ 1.
We call σX²/σe² the signal-to-noise ratio, or SNR. Note that the larger the SNR, the closer the autocorrelation function of the observed process {Yt} is to the autocorrelation function of the desired signal {Xt}.
First, E(Yt) = E(Xt) + E(et) = μX, free of t. Next, for k ≥ 1, Cov(Yt,Yt−k) = Cov(Xt + et, Xt−k + et−k) = Cov(Xt,Xt−k) + Cov(et,et−k) = Cov(Xt,Xt−k) = Var(Xt)ρk, which is free of t. Finally,
Corr(Yt,Yt−k) = Cov(Yt,Yt−k)/Var(Yt) = σX²ρk/(σX² + σe²) = ρk/(1 + σe²/σX²) for k ≥ 1.

Exercise 2.25 Suppose Yt = β0 + Σ_{i=1}ᵏ[Ai cos(2πfit) + Bi sin(2πfit)], where β0, f1, f2,…, fk are constants and A1, A2,…, Ak, B1, B2,…, Bk are independent random variables with zero means and variances Var(Ai) = Var(Bi) = σi². Show that {Yt} is stationary and find its covariance function. Compare this exercise with the results for the Random Cosine wave on page 18.
First, E(Yt) = β0 + Σ_{i=1}ᵏ[cos(2πfit)E(Ai) + sin(2πfit)E(Bi)] = β0. Next, using the independence of A1, A2,…, Ak, B1, B2,…, Bk and some trig identities, we have
Cov(Yt,Ys) = Σ_{i=1}ᵏ cos(2πfit)cos(2πfis)Var(Ai) + Σ_{i=1}ᵏ sin(2πfit)sin(2πfis)Var(Bi) = Σ_{i=1}ᵏ σi²[cos(2πfit)cos(2πfis) + sin(2πfit)sin(2πfis)] = Σ_{i=1}ᵏ σi² cos(2πfi(t − s)),
which depends on t and s only through the lag t − s; hence the process is stationary.

Exercise 2.26 Define the function Γt,s = ½E[(Yt − Ys)²]. In geostatistics, Γt,s is called the semivariogram.
(a) Show that for a stationary process Γt,s = γ0 − γ_{t−s}.
Without loss of generality, we may assume that the stationary process has a zero mean. Then
Γt,s = ½E[(Yt − Ys)²] = ½E[Yt² − 2YtYs + Ys²] = ½E(Yt²) + ½E(Ys²) − E(YtYs) = γ0 − γ_{t−s}.
(b) A process is said to be intrinsically stationary if Γt,s depends only on the time difference |t − s|. Show that the random walk process is intrinsically stationary.
For the random walk Yt = et + et−1 + … + e1 with t > s, we have Yt − Ys = (et + et−1 + … + e1) − (es + es−1 + … + e1) = et + et−1 + … + es+1, so that
Γt,s = ½E[(Yt − Ys)²] = ½Var(et + et−1 + … + es+1) = ½(t − s)σe²,
as required.

Exercise 2.27 For a fixed, positive integer r and constant φ, consider the time series defined by Yt = et + φet−1 + φ²et−2 + … + φʳet−r.
(a) Show that this process is stationary for any value of φ.
The mean is clearly zero, and Var(Yt) = Var(et + φet−1 + φ²et−2 + … + φʳet−r) = (1 + φ² + φ⁴ + … + φ²ʳ)σe², free of t. Together with the covariances in part (b), which are also free of t, this shows the process is stationary for any φ.
(b) Find the autocorrelation function.
For 0 ≤ k ≤ r we have
Cov(Yt,Yt−k) = Cov(et + … + φᵏet−k + φᵏ⁺¹et−k−1 + … + φʳet−r, et−k + φet−k−1 + … + φʳet−k−r) = (φᵏ + φᵏ⁺² + φᵏ⁺⁴ + … + φᵏ⁺²⁽ʳ⁻ᵏ⁾)σe² = φᵏ(1 + φ² + φ⁴ + … + φ²⁽ʳ⁻ᵏ⁾)σe²,
and Cov(Yt,Yt−k) = 0 for k > r. So
Corr(Yt,Yt−k) = φᵏ(1 + φ² + φ⁴ + … + φ²⁽ʳ⁻ᵏ⁾)/(1 + φ² + φ⁴ + … + φ²ʳ) for 0 ≤ k ≤ r.
The results in parts (a) and (b) can be simplified, using geometric sums, for φ ≠ 1 and separately for φ = 1.

Exercise 2.28 (Random cosine wave extended) Suppose that Yt = R cos(2π(ft + Φ)) for t = 0, ±1, ±2,…, where 0 < f < ½ is a fixed frequency and R and Φ are uncorrelated random variables, with Φ uniformly distributed on the interval (0,1).
(a) Show that E(Yt) = 0 for all t.
E(Yt) = E[R cos(2π(ft + Φ))] = E(R)E[cos(2π(ft + Φ))] = 0, since E[cos(2π(ft + Φ))] = 0 by a calculation entirely similar to the one on page 18.
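Both the zero mean shown here and the autocovariance claimed in part (b) can be checked by Monte Carlo. In this R sketch the Rayleigh amplitude, the frequency, the lag, and the seed are arbitrary illustrative choices, not from the text:

```r
# Yt = R*cos(2*pi*(f*t + Phi)), Phi ~ Uniform(0,1), R Rayleigh (so E(R^2) = 2).
set.seed(28)
nrep <- 200000; f <- 0.2; t1 <- 5; k <- 3
Phi <- runif(nrep)
R   <- sqrt(-2 * log(runif(nrep)))   # Rayleigh draws via the inverse-CDF method
y1  <- R * cos(2 * pi * (f * t1 + Phi))
y2  <- R * cos(2 * pi * (f * (t1 - k) + Phi))
mean(y1)                                          # near 0, as in part (a)
c(mean(y1 * y2), 0.5 * 2 * cos(2 * pi * f * k))   # gamma_k vs (1/2)E(R^2)cos(2*pi*f*k)
```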
(b) Show that the process is stationary with γk = ½E(R²)cos(2πfk).
γk = Cov(Yt,Yt−k) = E[R² cos(2π(ft + Φ))cos(2π(f(t − k) + Φ))] = E(R²)E[cos(2π(ft + Φ))cos(2π(f(t − k) + Φ))],
and the calculations leading up to Equation (2.3.4), page 19, show that E[cos(2π(ft + Φ))cos(2π(f(t − k) + Φ))] = ½cos(2πfk), which is free of t.

Exercise 2.29 (Random cosine wave extended more) Suppose that Yt = Σ_{j=1}ᵐ Rj cos(2π(fjt + Φj)) for t = 0, ±1, ±2,…, where 0 < f1 < f2 < … < fm < ½ are m fixed frequencies, and R1, Φ1, R2, Φ2,…, Rm, Φm are uncorrelated random variables, with each Φj uniformly distributed on the interval (0,1).
(a) Show that E(Yt) = 0 for all t.
(b) Show that the process is stationary with γk = ½Σ_{j=1}ᵐ E(Rj²)cos(2πfjk).
Parts (a) and (b) follow directly from the solution of Exercise (2.28) using the independence.

Exercise 2.30 (Mathematical statistics required) Suppose that Yt = R cos(2π(ft + Φ)) for t = 0, ±1, ±2,…, where R and Φ are independent random variables and f is a fixed frequency. The phase Φ is assumed to be uniformly distributed on (0,1), and the amplitude R has a Rayleigh distribution with pdf f(r) = re^(−r²/2) for r > 0. Show that for each time point t, Yt has a normal distribution. (Hint: Let Y = R cos(2π(ft + Φ)) and X = R sin(2π(ft + Φ)). Now find the joint distribution of X and Y. It can also be shown that all of the finite dimensional distributions are multivariate normal and hence the process is strictly stationary.)
For fixed t and f, consider the one-to-one transformation defined by Y = R cos(2π(ft + Φ)), X = R sin(2π(ft + Φ)). The range for (X,Y) will be {−∞ < X < ∞, −∞ < Y < ∞}. Also X² + Y² = R². Furthermore, the Jacobian of the transformation is −2πR = −2π√(X² + Y²), with inverse Jacobian 1/[2π√(X² + Y²)] in absolute value. The joint density for R and Φ is f(r,φ) = re^(−r²/2) for 0 < r and 0 < φ < 1.
Hence the joint density for X and Y is given by
f(x,y) = √(x² + y²) e^(−(x²+y²)/2) / [2π√(x² + y²)] = [e^(−x²/2)/√(2π)][e^(−y²/2)/√(2π)] for −∞ < x < ∞, −∞ < y < ∞,
as required: X and Y are independent standard normals, so in particular Yt is normally distributed for each t.

CHAPTER 3

Exercise 3.1 Verify Equation (3.3.2), page 30, for the least squares estimates of β0 and of β1 when the model Yt = β0 + β1t + Xt is considered.
This is a standard calculation in many first courses in statistics, usually done with calculus. Here we give an algebraic solution. Without loss of generality, we assume that Yt and t have each been standardized so that ΣYt = Σt = 0 and Σ(Yt)² = Σt² = n − 1. Then we have
Q(β0,β1) = Σ_{t=1}ⁿ[Yt − (β0 + β1t)]² = nβ0² + Σ(Yt)² + β1²Σt² − 2β1ΣtYt = nβ0² + (n − 1)(1 + β1²) − 2β1ΣtYt = nβ0² + (n − 1) + (n − 1)[β1 − (1/(n − 1))ΣtYt]² − (n − 1)[(1/(n − 1))ΣtYt]².
This is clearly smallest when β̂0 = 0 and β̂1 = [1/(n − 1)]ΣtYt. When these results are translated back to (unstandardized) original terms, we obtain the usual ordinary least squares regression results. In addition, by looking at the minimum value of Q we have Q(β̂0,β̂1) = (n − 1)(1 − r²), where r is the correlation coefficient between Y and t. Since Q ≥ 0, this also provides a proof that correlations are always between −1 and +1.
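The algebra can be confirmed numerically. The R sketch below (simulated data with an arbitrary seed, not from the text) standardizes Y and t as above and checks that lm() reproduces β̂0 = 0, β̂1 = ΣtYt/(n − 1), and the minimum Q = (n − 1)(1 − r²):

```r
# Least squares on standardized variables: intercept 0, slope sum(t*Y)/(n-1),
# and residual sum of squares (n-1)*(1 - r^2).
set.seed(331)
n <- 60
tt <- as.vector(scale(1:n))                    # sum(tt) = 0, sum(tt^2) = n - 1
yy <- as.vector(scale(2 + 0.5 * (1:n) + rnorm(n)))
fit <- lm(yy ~ tt)
b1 <- sum(tt * yy) / (n - 1)                   # closed-form slope
Qmin <- sum(resid(fit)^2)
c(coef(fit)[1], coef(fit)[2] - b1)             # both ~ 0
c(Qmin, (n - 1) * (1 - cor(tt, yy)^2))         # equal
```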
Exercise 3.2 Suppose Yt = μ + et − et−1. Find Var(Ȳ). Note any unusual results. In particular, compare your answer to what would have been obtained if Yt = μ + et. (Hint: You may avoid Equation (3.2.3), page 28, by first doing some algebraic simplification on Σ_{t=1}ⁿ(et − et−1).)
The sum telescopes: Σ_{t=1}ⁿ(et − et−1) = en − e0, so Ȳ = μ + (1/n)(en − e0) and Var(Ȳ) = (1/n²)Var(en − e0) = (2/n²)σe².
The denominator of n² is very unusual. We expect a denominator of n in the variance of a sample mean. The negative autocorrelation at lag one makes it easier to estimate the process mean when compared with estimating the mean of a white noise process.

Exercise 3.3 Suppose Yt = μ + et + et−1. Find Var(Ȳ). Compare your answer to what would have been obtained if Yt = μ + et. Describe the effect that the autocorrelation in {Yt} has on Var(Ȳ).
Here Σ_{t=1}ⁿ(et + et−1) = en + e0 + 2Σ_{t=1}^{n−1}et, so
Var(Ȳ) = (1/n²)[σe² + σe² + 4(n − 1)σe²] = [2(2n − 1)/n²]σe².
If Yt = μ + et we would have Var(Ȳ) = (1/n)σe², but in our present case Var(Ȳ) ≈ (4/n)σe², approximately four times larger. The positive autocorrelation at lag one makes it more difficult to estimate the process mean compared with estimating the mean of a white noise process.

Exercise 3.4 The data file hours contains monthly values of the average hours worked per week in the U.S.
manufacturing sector for July 1982 through June 1987.
(a) Display the time series plot for these data and interpret.
> data(hours); plot(hours,ylab='Monthly Hours',type='o')
The plot displays an upward "trend" in the first half of the series. However, there is certainly no distinct pattern in the display.
(b) Now construct a time series plot that uses separate plotting symbols for the various months. Does your interpretation change from that in part (a)?
> plot(hours,ylab='Monthly Hours',type='l')
> points(y=hours,x=time(hours),pch=as.vector(season(hours)))
The most distinct pattern in this plot is that Decembers are nearly always high months relative to the others. Decembers stick out.
[Time series plots of monthly hours, with and without monthly plotting symbols, appear here.]

Exercise 3.5 The data file wages contains monthly values of the average hourly wages ($) for workers in the U.S. apparel and textile products industry for July 1981 through June 1987.
(a) Display the time series plot for these data and interpret.
> data(wages); plot(wages,ylab='Monthly Wages',type='o')
This plot shows a strong increasing "trend," perhaps linear or curved.
[Time series plot of monthly wages appears here.]
(b) Use least squares to fit a linear time trend to this time series. Interpret the regression output. Save the standardized residuals from the fit for further analysis.
> wages.lm=lm(wages~time(wages)); summary(wages.lm); y=rstudent(wages.lm)
Call:
lm(formula = wages ~ time(wages))
Residuals:
     Min       1Q   Median       3Q      Max
-0.23828 -0.04981  0.01942  0.05845  0.13136
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.490e+02  1.115e+01  -49.24   <2e-16 ***
time(wages)  2.811e-01  5.618e-03   50.03   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.08257 on 70 degrees of freedom
Multiple R-Squared: 0.9728, Adjusted R-squared: 0.9724
F-statistic: 2503 on 1 and 70 DF, p-value: < 2.2e-16
With a multiple R-squared of 97% and highly significant regression coefficients, it "appears" as if we might have an excellent model. However,...
(c) Construct and interpret the time series plot of the standardized residuals from part (b).
> plot(y,x=as.vector(time(wages)),ylab='Standardized Residuals',type='o')
This plot does not look "random" at all. It has, generally, an upside down U shape and suggests that perhaps we should try a quadratic fit.
[Time series plot of the standardized residuals from the linear fit appears here.]
(d) Use least squares to fit a quadratic time trend to the wages time series. Interpret the regression output. Save the standardized residuals from the fit for further analysis.
> wages.lm2=lm(wages~time(wages)+I(time(wages)^2))
> summary(wages.lm2); y=rstudent(wages.lm2)
Call:
lm(formula = wages ~ time(wages) + I(time(wages)^2))
Residuals:
      Min        1Q    Median        3Q       Max
-0.148318 -0.041440  0.001563  0.050089  0.139839
Coefficients:
                   Estimate Std. Error t value Pr(>|t|)
(Intercept)      -8.495e+04  1.019e+04  -8.336 4.87e-12 ***
time(wages)       8.534e+01  1.027e+01   8.309 5.44e-12 ***
I(time(wages)^2) -2.143e-02  2.588e-03  -8.282 6.10e-12 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.05889 on 69 degrees of freedom
Multiple R-Squared: 0.9864, Adjusted R-squared: 0.986
F-statistic: 2494 on 2 and 69 DF, p-value: < 2.2e-16
Again, based on the regression summary and a 99% R-squared, it "appears" as if we might have an excellent model.
(e) Construct and interpret the time series plot of the standardized residuals from part (d).
> plot(y,x=as.vector(time(wages)),ylab='Standardized Residuals',type='o')
This plot does not look "random" either. It hangs together too much; it is too smooth. See Exercise 3.11.
[Time series plot of the standardized residuals from the quadratic fit appears here.]

Exercise 3.6 The data file beersales contains monthly U.S. beer sales (in millions of barrels) for the period January 1975 through December 1990.
(a) Display the time series plot for these data and interpret the plot.
> data(beersales); plot(beersales,ylab='Monthly Sales',type='o')
In addition to a possible seasonality in the series, there is a general upward "trend" in the first part of the series. However, this effect "levels off" in the latter years.
[Time series plot of monthly beer sales appears here.]
(b) Now construct a time series plot that uses separate plotting symbols for the various months. Does your interpretation change from that in part (a)?
> plot(beersales,ylab='Monthly Beer Sales',type='l')
> points(y=beersales,x=time(beersales),pch=as.vector(season(beersales)))
Now the seasonality is quite clear, with higher sales in the summer months and lower sales in the winter.
(c) Use least squares to fit a seasonal-means trend to this time series. Interpret the regression output. Save the standardized residuals from the fit for further analysis.
> month.=season(beersales); beersales.lm=lm(beersales~month.); summary(beersales.lm)
Call: lm(formula = beersales ~ month.)
Residuals:
    Min      1Q  Median      3Q     Max
-3.5745 -0.4772  0.1759  0.7312  2.1023
Coefficients:
                Estimate Std. Error t value Pr(>|t|)
(Intercept)     12.48568    0.26392  47.309  < 2e-16 ***
month.February  -0.14259    0.37324  -0.382 0.702879
month.March      2.08219    0.37324   5.579 8.77e-08 ***
month.April      2.39760    0.37324   6.424 1.15e-09 ***
month.May        3.59896    0.37324   9.643  < 2e-16 ***
month.June       3.84976    0.37324  10.314  < 2e-16 ***
month.July       3.76866    0.37324  10.097  < 2e-16 ***
month.August     3.60877    0.37324   9.669  < 2e-16 ***
month.September  1.57282    0.37324   4.214 3.96e-05 ***
month.October    1.25444    0.37324   3.361 0.000948 ***
month.November  -0.04797    0.37324  -0.129 0.897881
month.December  -0.42309    0.37324  -1.134 0.258487
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.056 on 180 degrees of freedom
Multiple R-Squared: 0.7103, Adjusted R-squared: 0.6926
F-statistic: 40.12 on 11 and 180 DF, p-value: < 2.2e-16
This model leaves out the January term so all of the other effects are in comparison to January. The multiple R-squared is rather large at 71% and all the terms except November, December, and February are significantly different from January.
[Time series plot: monthly beer sales with monthly plotting symbols, 1975–1990]
(d) Construct and interpret the time series plot of the standardized residuals from part (c).
Be sure to use proper plotting symbols to check on seasonality in the standardized residuals.
> plot(y=rstudent(beersales.lm),x=as.vector(time(beersales)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(beersales.lm),x=as.vector(time(beersales)), pch=as.vector(season(beersales)))
Display this plot full screen to see the detail. However, it is clear that this model does not capture the structure of this time series and we proceed to look for a more adequate model.
(e) Use least squares to fit seasonal-means plus quadratic time trend to the beer sales time series. Interpret the regression output. Save the standardized residuals from the fit for further analysis.
> beersales.lm2=lm(beersales~month.+time(beersales)+I(time(beersales)^2))
> summary(beersales.lm2)
Call: lm(formula = beersales ~ month. + time(beersales) + I(time(beersales)^2))
Residuals:
     Min       1Q   Median       3Q      Max
-2.03203 -0.43118  0.04977  0.34509  1.57572
Coefficients:
                       Estimate Std. Error t value Pr(>|t|)
(Intercept)          -7.150e+04  8.791e+03  -8.133 6.93e-14 ***
month.February       -1.579e-01  2.090e-01  -0.755  0.45099
month.March           2.052e+00  2.090e-01   9.818  < 2e-16 ***
month.April           2.353e+00  2.090e-01  11.256  < 2e-16 ***
month.May             3.539e+00  2.090e-01  16.934  < 2e-16 ***
month.June            3.776e+00  2.090e-01  18.065  < 2e-16 ***
month.July            3.681e+00  2.090e-01  17.608  < 2e-16 ***
month.August          3.507e+00  2.091e-01  16.776  < 2e-16 ***
month.September       1.458e+00  2.091e-01   6.972 5.89e-11 ***
month.October         1.126e+00  2.091e-01   5.385 2.27e-07 ***
month.November       -1.894e-01  2.091e-01  -0.906  0.36622
month.December       -5.773e-01  2.092e-01  -2.760  0.00638 **
time(beersales)       7.196e+01  8.867e+00   8.115 7.70e-14 ***
I(time(beersales)^2) -1.810e-02  2.236e-03  -8.096 8.63e-14 ***
---
Signif.
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5911 on 178 degrees of freedom
Multiple R-Squared: 0.9102, Adjusted R-squared: 0.9036
F-statistic: 138.8 on 13 and 178 DF, p-value: < 2.2e-16
[Time series plot: standardized residuals from the seasonal-means fit with monthly plotting symbols, 1975–1990]
This model seems to do a better job than the seasonal means alone but we should reserve judgement until we look next at the residuals.
(f) Construct and interpret the time series plot of the standardized residuals from part (e). Again use proper plotting symbols to check for any remaining seasonality in the residuals.
> plot(y=rstudent(beersales.lm2),x=as.vector(time(beersales)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(beersales.lm2),x=as.vector(time(beersales)), pch=as.vector(season(beersales)))
This model does much better than the previous one but we would be hard pressed to convince anyone that the underlying quadratic “trend” makes sense. Notice that the coefficient on the square term is negative so that in the future sales will decrease substantially and even eventually go negative!
Exercise 3.7 The data file winnebago contains monthly unit sales of recreational vehicles from Winnebago, Inc. from November 1966 through February 1972.
(a) Display and interpret the time series plot for these data.
> data(winnebago); plot(winnebago,ylab='Monthly Sales',type='l')
> points(y=winnebago,x=time(winnebago), pch=as.vector(season(winnebago)))
As we would expect with recreational vehicles in the U.S., there is substantial seasonality in the series.
However, there is also a general upward “trend” with increasing variation at the higher levels of the series.
[Time series plots: standardized residuals from the beer sales model, 1975–1990, and monthly Winnebago sales with monthly plotting symbols, 1967–1972]
(b) Use least squares to fit a line to these data. Interpret the regression output. Plot the standardized residuals from the fit as a time series. Interpret the plot.
> winnebago.lm=lm(winnebago~time(winnebago)); summary(winnebago.lm)
Call: lm(formula = winnebago ~ time(winnebago))
Residuals:
    Min      1Q  Median      3Q     Max
-419.58  -93.13  -12.78   94.96  759.21
Coefficients:
                  Estimate Std. Error t value Pr(>|t|)
(Intercept)     -394885.68   33539.77  -11.77   <2e-16 ***
time(winnebago)     200.74      17.03   11.79   <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 209.7 on 62 degrees of freedom
Multiple R-Squared: 0.6915, Adjusted R-squared: 0.6865
F-statistic: 138.9 on 1 and 62 DF, p-value: < 2.2e-16
> plot(y=rstudent(winnebago.lm),x=as.vector(time(winnebago)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(winnebago.lm),x=as.vector(time(winnebago)), pch=as.vector(season(winnebago)))
Although the “trend” has been removed, this clearly is not an acceptable model and we move on.
[Time series plot: standardized residuals from the straight-line fit with monthly plotting symbols, 1967–1972]
(c) Now take natural logarithms of the monthly sales figures and display and interpret the time series plot of the transformed values.
> plot(log(winnebago),ylab='Log(Sales)',type='l')
> points(y=log(winnebago),x=time(winnebago), pch=as.vector(season(winnebago)))
In this we see that the seasonality is still present but that now the upward trend is accompanied by much more equal variation around that trend.
(d) Use least squares to fit a line to the logged data. Display the time series plot of the standardized residuals from this fit and interpret.
> logwinnebago.lm=lm(log(winnebago)~time(log(winnebago))); summary(logwinnebago.lm)
> plot(y=rstudent(logwinnebago.lm),x=as.vector(time(winnebago)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(logwinnebago.lm),x=as.vector(time(winnebago)), pch=as.vector(season(winnebago)))
The residual plot looks much more acceptable now but we still need to model the seasonality.
(e) Now use least squares to fit a seasonal-means plus linear time trend to the logged sales time series and save the standardized residuals for further analysis. Check the statistical significance of each of the regression coefficients in the model.
> month.=season(winnebago)
> logwinnebago.lm2=lm(log(winnebago)~month.+time(log(winnebago)))
[Time series plots: log of monthly sales with monthly plotting symbols, 1967–1972, and standardized residuals from the fit to the logged data, 1967–1972]
> summary(logwinnebago.lm2)
Call: lm(formula = log(winnebago) ~ month.
+ time(log(winnebago)))
Residuals:
     Min       1Q   Median       3Q      Max
-0.92501 -0.16328  0.03344  0.20757  0.57388
Coefficients:
                       Estimate Std. Error t value Pr(>|t|)
(Intercept)          -997.33061   50.63995 -19.695  < 2e-16 ***
month.February          0.62445    0.18182   3.434  0.00119 **
month.March             0.68220    0.19088   3.574  0.00078 ***
month.April             0.80959    0.19079   4.243 9.30e-05 ***
month.May               0.86953    0.19073   4.559 3.25e-05 ***
month.June              0.86309    0.19070   4.526 3.63e-05 ***
month.July              0.55392    0.19069   2.905  0.00542 **
month.August            0.56989    0.19070   2.988  0.00431 **
month.September         0.57572    0.19073   3.018  0.00396 **
month.October           0.26349    0.19079   1.381  0.17330
month.November          0.28682    0.18186   1.577  0.12095
month.December          0.24802    0.18182   1.364  0.17853
time(log(winnebago))    0.50909    0.02571  19.800  < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.3149 on 51 degrees of freedom
Multiple R-Squared: 0.8946, Adjusted R-squared: 0.8699
F-statistic: 36.09 on 12 and 51 DF, p-value: < 2.2e-16
This model explains a large percentage of the variation in sales but, as always, we should also look at the residuals.
(f) Display the time series plot of the standardized residuals obtained in part (e). Interpret the plot.
> plot(y=rstudent(logwinnebago.lm2),x=as.vector(time(winnebago)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(logwinnebago.lm2),x=as.vector(time(winnebago)), pch=as.vector(season(winnebago)))
This residual plot is the best we have seen for models of this series but perhaps there are better models to be explored later.
[Time series plot: standardized residuals from the seasonal-means plus linear trend fit to the logged data, 1967–1972]
Exercise 3.8 The data file retail lists total UK (United Kingdom) retail sales (in billions of pounds) from January 1986 through March 2007. The data are not “seasonally adjusted” and year 2000 = 100 is the base year.
(a) Display and interpret the time series plot for these data. Be sure to use plotting symbols that permit you to look for seasonality.
> data(retail); plot(retail,ylab='Monthly Sales',type='l')
> points(y=retail,x=time(retail), pch=as.vector(season(retail)))
There is a clear upward trend with at least some seasonality. Large holiday sales in November and especially December are striking. There is some tendency for increased variation at the higher levels of the series.
(b) Use least squares to fit a seasonal-means plus linear time trend to this time series. Interpret the regression output and save the standardized residuals from the fit for further analysis.
> month.=season(retail)
> retail.lm=lm(retail~month.+time(retail))
> summary(retail.lm)
Call: lm(formula = retail ~ month. + time(retail))
Residuals:
     Min       1Q   Median       3Q      Max
-19.8950  -2.4440  -0.3518   2.1971  16.2045
Coefficients:
                  Estimate Std. Error t value Pr(>|t|)
(Intercept)     -7.249e+03  8.724e+01 -83.099  < 2e-16 ***
month.February  -3.015e+00  1.290e+00  -2.337  0.02024 *
month.March      7.469e-02  1.290e+00   0.058  0.95387
month.April      3.447e+00  1.305e+00   2.641  0.00880 **
month.May        3.108e+00  1.305e+00   2.381  0.01803 *
month.June       3.074e+00  1.305e+00   2.355  0.01932 *
month.July       6.053e+00  1.305e+00   4.638 5.76e-06 ***
month.August     3.138e+00  1.305e+00   2.404  0.01695 *
month.September  3.428e+00  1.305e+00   2.626  0.00919 **
month.October    8.555e+00  1.305e+00   6.555 3.34e-10 ***
month.November   2.082e+01  1.305e+00  15.948  < 2e-16 ***
month.December   5.254e+01  1.305e+00  40.255  < 2e-16 ***
time(retail)     3.670e+00  4.369e-02  83.995  < 2e-16 ***
---
Signif.
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.278 on 242 degrees of freedom
Multiple R-Squared: 0.9767, Adjusted R-squared: 0.9755
F-statistic: 845 on 12 and 242 DF, p-value: < 2.2e-16
[Time series plot: monthly UK retail sales with monthly plotting symbols]
All of the effects except March are statistically significant at the usual levels and the R-squared is very large. Let’s consider the residuals.
(c) Construct and interpret the time series plot of the standardized residuals from part (b). Be sure to use proper plotting symbols to check on seasonality.
> plot(y=rstudent(retail.lm),x=as.vector(time(retail)),type='l', ylab='Standardized Residuals', xlab='Time')
> points(y=rstudent(retail.lm),x=as.vector(time(retail)), pch=as.vector(season(retail)))
Clearly this model still leaves a lot to be desired.
Exercise 3.9 The data file prescrip gives monthly U.S. prescription costs for the months August 1986 to March 1992. These data are from the State of New Jersey’s Prescription Drug Program and are the cost per prescription claim.
(a) Display and interpret the time series plot for these data. Use plotting symbols that permit you to look for seasonality.
> data(prescrip); plot(prescrip,ylab='Monthly Sales',type='l')
> points(y=prescrip,x=time(prescrip), pch=as.vector(season(prescrip)))
This plot shows a generally linear upward trend with possible seasonality as Septembers are generally peaks in all years.
[Time series plots: standardized residuals from the retail model, 1990–2005, and monthly prescription costs with monthly plotting symbols, 1987–1992]
(b) Calculate and plot the sequence of month-to-month percentage changes in the prescription costs. Again, use plotting symbols that permit you to look for seasonality.
> perprescrip=na.omit(100*(prescrip-zlag(prescrip))/zlag(prescrip))
> plot(perprescrip,ylab='Monthly % Sales Change',type='l')
> points(y=perprescrip,x=time(perprescrip), pch=as.vector(season(perprescrip)))
The percentage changes look reasonably stable and stationary with perhaps some subtle seasonality.
(c) Use least squares to fit a cosine trend with fundamental frequency 1/12 to the percentage change series. Interpret the regression output. Save the standardized residuals.
> har.=harmonic(perprescrip); prescrip.lm=lm(perprescrip~har.); summary(prescrip.lm)
Call: lm(formula = perprescrip ~ har.)
Residuals:
    Min      1Q  Median      3Q     Max
-3.8444 -1.3742  0.1697  1.4069  3.8980
Coefficients:
                Estimate Std. Error t value Pr(>|t|)
(Intercept)       1.2217     0.2325   5.254 1.82e-06 ***
har.cos(2*pi*t)  -0.6538     0.3298  -1.982   0.0518 .
har.sin(2*pi*t)   1.6596     0.3269   5.077 3.54e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.897 on 64 degrees of freedom
Multiple R-Squared: 0.3148, Adjusted R-squared: 0.2933
F-statistic: 14.7 on 2 and 64 DF, p-value: 5.584e-06
The cosine trend model is statistically significant but the R-squared is only 31%.
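As an aside (my own illustrative sketch, not from the text), the regressors that harmonic() supplies are nothing more than cos(2πt) and sin(2πt) evaluated at the time index in years, so the same fit can be built by hand. Here this is done on synthetic monthly data whose true coefficients are chosen to mimic the estimates above; all numbers and the series itself are made up.

```r
# Hand-built cosine trend on synthetic monthly data (illustrative only; the
# exercise itself uses harmonic() from the TSA package on perprescrip).
set.seed(1)
t <- seq(1987, by = 1/12, length.out = 67)                 # monthly time index in years
y <- 1.22 - 0.65 * cos(2*pi*t) + 1.66 * sin(2*pi*t) + rnorm(67, sd = 1.9)
fit <- lm(y ~ cos(2*pi*t) + sin(2*pi*t))
round(coef(fit), 2)  # estimates should land near 1.22, -0.65, and 1.66
```

Because cos and sin at the fundamental frequency are nearly orthogonal over whole years of monthly data, the two harmonic coefficients can be estimated almost independently of each other.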
[Time series plot: monthly percentage changes in prescription costs with monthly plotting symbols, 1987–1992]
(d) Plot the sequence of standardized residuals to investigate the adequacy of the cosine trend model. Interpret the plot.
> plot(y=rstudent(prescrip.lm),x=as.vector(time(perprescrip)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(prescrip.lm),x=as.vector(time(perprescrip)), pch=as.vector(season(perprescrip)))
These residuals look basically random and without any significant patterns.
Exercise 3.10 (Continuation of Exercise 3.4) Consider the hours time series again.
(a) Use least squares to fit a quadratic trend to these data. Interpret the regression output and save the standardized residuals for further analysis.
> data(hours); hours.lm=lm(hours~time(hours)+I(time(hours)^2)); summary(hours.lm)
Call: lm(formula = hours ~ time(hours) + I(time(hours)^2))
Residuals:
     Min       1Q   Median       3Q      Max
-1.00603 -0.25431 -0.02267  0.22884  0.98358
Coefficients:
                   Estimate Std. Error t value Pr(>|t|)
(Intercept)      -5.122e+05  1.155e+05  -4.433 4.28e-05 ***
time(hours)       5.159e+02  1.164e+02   4.431 4.31e-05 ***
I(time(hours)^2) -1.299e-01  2.933e-02  -4.428 4.35e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.423 on 57 degrees of freedom
Multiple R-Squared: 0.5921, Adjusted R-squared: 0.5778
F-statistic: 41.37 on 2 and 57 DF, p-value: 7.97e-12
The quadratic fit looks to be quite significant, but, of course, the seasonality, if any, has not been accounted for.
[Time series plot: standardized residuals from the cosine trend model, 1987–1992]
(b) Display a sequence plot of the standardized residuals and interpret. Use monthly plotting symbols so that possible seasonality may be readily identified.
> plot(y=rstudent(hours.lm),x=as.vector(time(hours)),type='l', ylab='Standardized Residuals')
> points(y=rstudent(hours.lm),x=as.vector(time(hours)), pch=as.vector(season(hours)))
These residuals are too smooth for randomness and Decembers are all high.
(c) Perform the Runs test of the standardized residuals and interpret the results.
> runs(rstudent(hours.lm))
$pvalue 0.00012  $observed.runs 16  $expected.runs 30.96667  $n1 31  $n2 29  $k 0
The p-value of 0.00012 indicates that our suspicion of nonrandomness is quite justified.
[Time series plot: standardized residuals from the quadratic fit with monthly plotting symbols, 1983–1987]
(d) Calculate the sample autocorrelations for the standardized residuals and interpret.
> acf(rstudent(hours.lm))
There is significant autocorrelation at lags 1, 3, 6, 10, 14, 16, and 17.
(e) Investigate the normality of the standardized residuals (error terms). Consider histograms and normal probability plots. Interpret the plots.
> qqnorm(rstudent(hours.lm)); qqline(rstudent(hours.lm))
> hist(rstudent(hours.lm),xlab='Standardized Residuals')
> shapiro.test(rstudent(hours.lm))
Shapiro-Wilk normality test
data: rstudent(hours.lm) W = 0.9939, p-value = 0.991
We have no evidence against normality for the error terms in this model.
Exercise 3.11 (Continuation of Exercise 3.5) Return to the wages series.
(a) Consider the residuals from a least squares fit of a quadratic time trend.
> wages.lm2=lm(wages~time(wages)+I(time(wages)^2)); summary(wages.lm2)
Call: lm(formula = wages ~ time(wages) + I(time(wages)^2))
Residuals:
      Min        1Q    Median        3Q       Max
-0.148318 -0.041440  0.001563  0.050089  0.139839
Coefficients:
                   Estimate Std. Error t value Pr(>|t|)
(Intercept)      -8.495e+04  1.019e+04  -8.336 4.87e-12 ***
time(wages)       8.534e+01  1.027e+01   8.309 5.44e-12 ***
I(time(wages)^2) -2.143e-02  2.588e-03  -8.282 6.10e-12 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.05889 on 69 degrees of freedom
Multiple R-Squared: 0.9864, Adjusted R-squared: 0.986
F-statistic: 2494 on 2 and 69 DF, p-value: < 2.2e-16
The quadratic fit is certainly statistically significant!
(b) Perform a Runs test on the standardized residuals and interpret the results.
> runs(rstudent(wages.lm2))
$pvalue 1.56e-07  $observed.runs 15  $expected.runs 36.75  $n1 33  $n2 39  $k 0
There are too few runs for these residuals (and hence error terms) to be considered random.
(c) Calculate the sample autocorrelations for the standardized residuals and interpret.
> acf(rstudent(wages.lm2))
The autocorrelations displayed reinforce the results from the runs test.
(d) Investigate the normality of the standardized residuals (error terms). Consider histograms and normal probability plots. Interpret the plots.
> qqnorm(rstudent(wages.lm2)); qqline(rstudent(wages.lm2))
> hist(rstudent(wages.lm2),xlab='Standardized Residuals')
> shapiro.test(rstudent(wages.lm2))
Although the distribution is rather mound shaped, normality is somewhat tenuous. However, the evidence is not sufficient to reject normality. See Shapiro-Wilk test results below.
Shapiro-Wilk normality test
data: rstudent(wages.lm2) W = 0.9887, p-value = 0.7693
Exercise 3.12 (Continuation of Exercise 3.6) Consider the time series in the data file beersales.
(a) Obtain the residuals from the least squares fit of the seasonal-means plus quadratic time trend model.
> beersales.lm2=lm(beersales~month.+time(beersales)+I(time(beersales)^2))
(b) Perform a Runs test on the standardized residuals and interpret the results.
> runs(rstudent(beersales.lm2))
$pvalue 0.0127  $observed.runs 79  $expected.runs 96.625  $n1 90  $n2 102  $k 0
We would reject independence of the error terms on the basis of these results.
(c) Calculate the sample autocorrelations for the standardized residuals and interpret.
> acf(rstudent(beersales.lm2))
These results also show the lack of independence in the error terms of this model.
(d) Investigate the normality of the standardized residuals (error terms). Consider histograms and normal probability plots. Interpret the plots.
> qqnorm(rstudent(beersales.lm2)); qqline(rstudent(beersales.lm2))
> hist(rstudent(beersales.lm2),xlab='Standardized Residuals')
> shapiro.test(rstudent(beersales.lm2))
Shapiro-Wilk normality test
data: rstudent(beersales.lm2) W = 0.9924, p-value = 0.4139
All of these results provide good support for the assumption of normal error terms.
Exercise 3.13 (Continuation of Exercise 3.7) Return to the winnebago time series.
(a) Calculate the least squares residuals from a seasonal-means plus linear time trend model on the logarithms of the sales time series.
> month.=season(winnebago)
> logwinnebago.lm2=lm(log(winnebago)~month.+time(winnebago))
(b) Perform a Runs test on the standardized residuals and interpret the results.
> runs(rstudent(logwinnebago.lm2))
$pvalue 0.000243  $observed.runs 18  $expected.runs 32.71875  $n1 29  $n2 35  $k 0
These results suggest strongly that the error terms are not independent.
(c) Calculate the sample autocorrelations for the standardized residuals and interpret.
> acf(rstudent(logwinnebago.lm2))
More evidence that the error terms are not independent.
(d) Investigate the normality of the standardized residuals (error terms). Consider histograms and normal probability plots. Interpret the plots.
> qqnorm(rstudent(logwinnebago.lm2)); qqline(rstudent(logwinnebago.lm2))
> hist(rstudent(logwinnebago.lm2),xlab='Standardized Residuals')
> shapiro.test(rstudent(logwinnebago.lm2))
Shapiro-Wilk normality test
data: rstudent(logwinnebago.lm2) W = 0.9704, p-value = 0.1262
There is evidence of an outlier on the low end but not enough to reject normality at this point.
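The runs counts reported throughout these exercises are easy to compute from first principles. The following is my own minimal sketch, not the TSA package implementation: classify each residual as above or below the reference value k (0 in the outputs above), count a new run at every switch, and compare with the expected count 1 + 2·n1·n2/(n1+n2) under independence.

```r
# Hand-rolled version of the runs count reported by TSA's runs()
# (an illustrative sketch, not the package code).
count_runs <- function(x, k = 0) {
  s <- x > k                                   # above (TRUE) or below (FALSE) k
  observed <- 1 + sum(s[-1] != s[-length(s)])  # a new run starts at each switch
  n1 <- sum(s); n2 <- sum(!s)
  c(observed = observed, expected = 1 + 2 * n1 * n2 / (n1 + n2))
}
count_runs(c(1, 2, 3, -1, -2, 4, 5))  # 3 observed runs vs about 3.86 expected
```

Applied to rstudent(hours.lm), for example, this should reproduce the observed = 16 and expected ≈ 30.97 counts reported in Exercise 3.10(c): with n1 = 31 and n2 = 29, the expected count is 1 + 2·31·29/60 ≈ 30.967.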
Exercise 3.14 (Continuation of Exercise 3.8) The data file retail contains UK monthly retail sales figures.
(a) Obtain the least squares residuals from a seasonal-means plus linear time trend model.
> month.=season(retail); retail.lm=lm(retail~month.+time(retail))
(b) Perform a Runs test on the standardized residuals and interpret the results.
> runs(rstudent(retail.lm))
$pvalue 9.19e-23  $observed.runs 52  $expected.runs 127.9333  $n1 136  $n2 119  $k 0
The runs test provides strong evidence against randomness in the error terms.
(c) Calculate the sample autocorrelations for the standardized residuals and interpret.
> acf(rstudent(retail.lm))
Here we have additional evidence that the error terms are not random—the substantial autocorrelation at the seasonal lag 12 is especially bothersome.
(d) Investigate the normality of the standardized residuals (error terms). Consider histograms and normal probability plots. Interpret the plots.
> qqnorm(rstudent(retail.lm)); qqline(rstudent(retail.lm))
> hist(rstudent(retail.lm),xlab='Standardized Residuals')
> shapiro.test(rstudent(retail.lm))
Shapiro-Wilk normality test
data: rstudent(retail.lm) W = 0.939, p-value = 8.534e-09
Here we see considerable evidence against normality of the error terms. The distribution is not spread out as much as a normal distribution.
Exercise 3.15 (Continuation of Exercise 3.9) Consider again the prescrip time series.
(a) Save the standardized residuals from a least squares fit of a cosine trend with fundamental frequency 1/12 to the percentage change time series.
> perprescrip=na.omit(100*(prescrip-zlag(prescrip))/zlag(prescrip))
> har.=harmonic(perprescrip); prescrip.lm=lm(perprescrip~har.)
(b) Perform a Runs test on the standardized residuals and interpret the results.
> runs(rstudent(prescrip.lm))
$pvalue 0.0026  $observed.runs 47  $expected.runs 34.43284  $n1 32  $n2 35  $k 0
The runs test indicates lack of independence in the error terms of the model. The large number of runs suggests that the residuals oscillate back and forth across the median much more than a random series would.
(c) Calculate the sample autocorrelations for the standardized residuals and interpret.
> acf(rstudent(prescrip.lm))
As we suspected from the runs test results, the residuals have statistically significant negative autocorrelation at lag one.
(d) Investigate the normality of the standardized residuals (error terms). Consider histograms and normal probability plots. Interpret the plots.
> qqnorm(rstudent(prescrip.lm)); qqline(rstudent(prescrip.lm))
> hist(rstudent(prescrip.lm),xlab='Standardized Residuals')
> shapiro.test(rstudent(prescrip.lm))
Shapiro-Wilk normality test
data: rstudent(prescrip.lm) W = 0.939, p-value = 8.534e-09
We have evidence against normality in the errors for this model. The distribution seems to have “fatter tails” than a normal distribution would have.
Exercise 3.16 Suppose that a stationary time series, {Yt}, has autocorrelation function of the form ρk = φ^k for k > 0 where φ is a constant in the range (−1,+1).
(a) Show that
Var(Ȳ) = (γ0/n)[(1+φ)/(1−φ) − (2φ/n)(1−φ^n)/(1−φ)^2].
(Hint: Use Equation (3.2.3), page 28, the finite geometric sum Σ_{k=0}^{n} φ^k = (1−φ^{n+1})/(1−φ), and the related sum Σ_{k=0}^{n} kφ^{k−1} = (d/dφ) Σ_{k=0}^{n} φ^k.)
First, by Equation (3.2.3),
Var(Ȳ) = (γ0/n)[1 + 2 Σ_{k=1}^{n−1} (1 − k/n)ρk] = (γ0/n)[−1 + 2 Σ_{k=0}^{n−1} φ^k − (2/n) Σ_{k=0}^{n−1} kφ^k].
Then, using Σ_{k=0}^{n−1} φ^k = (1−φ^n)/(1−φ) and
Σ_{k=0}^{n−1} kφ^{k−1} = (d/dφ)[(1−φ^n)/(1−φ)] = [(1−φ^n) − nφ^{n−1}(1−φ)]/(1−φ)^2,
we obtain
Var(Ȳ) = (γ0/n){−1 + 2(1−φ^n)/(1−φ) − (2φ/n)[(1−φ^n) − nφ^{n−1}(1−φ)]/(1−φ)^2} = (γ0/n)[(1+φ)/(1−φ) − (2φ/n)(1−φ^n)/(1−φ)^2].
(b) If n is large argue that
Var(Ȳ) ≈ (γ0/n)(1+φ)/(1−φ).
As n gets large, the term (2φ/n)(1−φ^n)/(1−φ)^2 goes to zero and hence the required result.
(c) Plot (1+φ)/(1−φ) for φ over the range −1 to +1. Interpret the plot in terms of the precision in estimating the process mean.
> phi=seq(from=-0.99,to=0.99,by=0.1)
> plot(y=(1+phi)/(1-phi),x=phi,type='l',xlab=expression(phi), ylab=expression((1+phi)/(1-phi)))
[Plot of (1+φ)/(1−φ) against φ]
Negative values of φ imply better estimates of the mean compared with white noise but positive values imply worse estimates. This is especially true as φ approaches +1.
Exercise 3.17 Verify Equation (3.2.6), page 29. (Hint: You will need the fact that Σ_{k=0}^{∞} φ^k = 1/(1−φ) for −1 < φ < +1.)
Var(Ȳ) ≈ (γ0/n) Σ_{k=−∞}^{∞} ρk = (γ0/n)[1 + 2 Σ_{k=1}^{∞} φ^k] = (γ0/n)[1 + 2φ/(1−φ)] = (γ0/n)(1+φ)/(1−φ).
Exercise 3.18 Verify Equation (3.2.7), page 30, namely Var(Ȳ) = σe^2(2n+1)(n+1)/(6n). (Hint: You will need the two sums Σ_{t=1}^{n} t = n(n+1)/2 and Σ_{t=1}^{n} t^2 = n(n+1)(2n+1)/6.)
This is solved in the text on page 30. You could construct an alternative proof using Equation (2.2.7), page 12, and Equation (2.2.12), page 13, but it would be longer.
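As a quick numerical sanity check of the closed form in Exercise 3.16(a) (my own sketch, not part of the text), we can compare it against the direct weighted sum from Equation (3.2.3) for an arbitrary choice of φ and n:

```r
# Compare the direct sum form of n*Var(Ybar)/gamma0 with the closed form
# derived in Exercise 3.16(a); phi and n are arbitrary illustrative values.
phi <- 0.7; n <- 25
k <- 1:(n - 1)
direct <- 1 + 2 * sum((1 - k / n) * phi^k)
closed <- (1 + phi) / (1 - phi) - (2 * phi / n) * (1 - phi^n) / (1 - phi)^2
all.equal(direct, closed)  # TRUE
```

The two agree to machine precision for any −1 < φ < 1, which is reassuring since the algebra above involves several sign flips.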
CHAPTER 4
Exercise 4.1 Use first principles to find the autocorrelation function for the stationary process defined by
Yt = 5 + et − (1/2)e_{t−1} + (1/4)e_{t−2}.
First,
Var(Yt) = [1 + (1/2)^2 + (1/4)^2]σe^2 = (21/16)σe^2,
then
Cov(Yt, Yt−1) = Cov(et − (1/2)e_{t−1} + (1/4)e_{t−2}, e_{t−1} − (1/2)e_{t−2} + (1/4)e_{t−3}) = [−(1/2) + (1/4)(−1/2)]σe^2 = −(5/8)σe^2,
and
Cov(Yt, Yt−2) = Cov((1/4)e_{t−2}, e_{t−2}) = (1/4)σe^2, Cov(Yt, Yt−3) = 0,
and this persists for all lags 3 or more. Therefore
ρ1 = [−(5/8)σe^2]/[(21/16)σe^2] = −10/21, ρ2 = [(1/4)σe^2]/[(21/16)σe^2] = 4/21, and ρk = 0 for k > 2.
Exercise 4.2 Sketch the autocorrelation functions for the following MA(2) models with parameters as specified:
(a) θ1 = 0.5 and θ2 = 0.4
You could do these calculations using Equation (4.2.3), page 63, with a calculator. Or you could use the following R code:
> ARMAacf(ma=c(-.5,-.4))
        0         1         2         3
1.0000000 -0.2127660 -0.2836879 0.0000000
(b) θ1 = 1.2 and θ2 = −0.7
> ARMAacf(ma=c(-1.2,.7))
        0         1         2         3
1.0000000 -0.6962457 0.2389078 0.0000000
(c) θ1 = −1 and θ2 = −0.6
> ARMAacf(ma=c(1,.6))
        0         1         2         3
1.0000000 0.6779661 0.2542373 0.0000000
Exercise 4.3 Verify that for an MA(1) process max_{−∞<θ<∞} ρ1 = 0.5 and min_{−∞<θ<∞} ρ1 = −0.5.
Recall that ρ1 = −θ/(1+θ^2), so that
dρ1/dθ = (θ^2 − 1)/(1+θ^2)^2,
which is seen to be negative on the range −1 < θ < +1 and zero at both −1 and +1. For |θ| > 1, the derivative is positive. Taken together, these facts imply the desired results.
Exercise 4.4 Show that when θ is replaced by 1/θ, the autocorrelation function for an MA(1) process does not change. Replacing θ by 1/θ gives
−(1/θ)/[1 + (1/θ)^2] = −θ/(1+θ^2),
as required.
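These hand computations can be cross-checked numerically (an illustrative sketch; recall that R's ARMAacf() writes the MA part as Yt = et + θ1 e_{t−1} + θ2 e_{t−2}, the opposite sign convention from the book, so the book's coefficients enter with flipped signs):

```r
# Exercise 4.1: Yt = 5 + et - (1/2)e_{t-1} + (1/4)e_{t-2} should give
# rho_1 = -10/21 and rho_2 = 4/21 (coefficients sign-flipped for R).
rho <- ARMAacf(ma = c(-1/2, 1/4), lag.max = 3)
round(rho, 5)  # lag 1: -10/21 = -0.47619, lag 2: 4/21 = 0.19048, lag 3: 0

# Exercise 4.4: theta and 1/theta give the same lag-1 autocorrelation.
c(ARMAacf(ma = -0.5, lag.max = 1)["1"], ARMAacf(ma = -2, lag.max = 1)["1"])  # both -0.4
```

Agreement here also confirms the sign convention: forgetting to flip the signs would give ρ1 = +10/21 instead.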
Exercise 4.5 Calculate and sketch the autocorrelation functions for each of the following AR(1) models. Plot for sufficient lags that the autocorrelation function has nearly died out.
(a) φ1 = 0.6
> ACF=ARMAacf(ar=.6,lag.max=8)
> plot(y=ACF[-1],x=1:8,xlab='Lag',ylab='ACF',type='h'); abline(h=0)
(b) φ1 = −0.6
> ACF=ARMAacf(ar=-0.6,lag.max=8)
> plot(y=ACF[-1],x=1:8,xlab='Lag',ylab='ACF',type='h',ylim=c(-1,1)); abline(h=0)
(c) φ1 = 0.95 (Do out to 20 lags.)
> ACF=ARMAacf(ar=.95,lag.max=20)
> plot(y=ACF[-1],x=1:20,xlab='Lag',ylab='ACF',type='h',ylim=c(0,1)); abline(h=0)
(d) φ1 = 0.3
> ACF=ARMAacf(ar=.3,lag.max=20)
> plot(y=ACF[-1],x=1:20,xlab='Lag',ylab='ACF',type='h',ylim=c(0,1)); abline(h=0)

Exercise 4.6 Suppose that {Yt} is an AR(1) process with −1 < φ < +1.
(a) Find the autocovariance function for Wt = ∇Yt = Yt − Yt−1 in terms of φ and σe². Recall that for k ≥ 0
Cov(Yt,Yt−k) = φ^k σe²/(1 − φ²)
Then for k > 0
Cov(Wt,Wt−k) = Cov(Yt − Yt−1, Yt−k − Yt−k−1)
= Cov(Yt,Yt−k) − Cov(Yt,Yt−k−1) − Cov(Yt−1,Yt−k) + Cov(Yt−1,Yt−k−1)
= [2φ^k − φ^(k+1) − φ^(k−1)]σe²/(1 − φ²) = [2φ − φ² − 1]φ^(k−1)σe²/(1 − φ²)
= −(1 − φ)²φ^(k−1)σe²/(1 − φ²) = −[(1 − φ)/(1 + φ)]φ^(k−1)σe²
(b) In particular, show that Var(Wt) = 2σe²/(1 + φ).
Var(Wt) = Var(Yt − Yt−1) = Var(Yt) + Var(Yt−1) − 2Cov(Yt,Yt−1) = 2(1 − φ)σe²/(1 − φ²) = 2σe²/(1 + φ)

Exercise 4.7 Describe the important characteristics of the autocorrelation function for the following models: (a) MA(1), (b) MA(2), (c) AR(1), (d) AR(2), and (e) ARMA(1,1).
(a) Has nonzero correlation only at lag 1. Could be positive or negative but must be between −0.5 and +0.5.
(b) Has nonzero correlation only at lags 1 and 2.
(c) Has exponentially decaying autocorrelations starting from lag 0. If φ > 0, then all autocorrelations are positive. If φ < 0, then the autocorrelations alternate in sign: negative, positive, negative, and so on.
(d) The autocorrelations can follow several patterns, but if the roots of the characteristic equation are complex numbers, then the pattern will be a cosine with decaying magnitude.
(e) Has exponentially decaying autocorrelations starting from lag 1, but not from lag 0.

Exercise 4.8 Let {Yt} be an AR(2) process of the special form Yt = φ2Yt−2 + et. Use first principles to find the range of values of φ2 for which the process is stationary.
If {Yt} is stationary, then Var(Yt) = φ2²Var(Yt−2) + σe². But, by stationarity, Var(Yt) = Var(Yt−2), so solving the first equation gives Var(Yt) = σe²/(1 − φ2²). For this variance to be finite and positive, we must have −1 < φ2 < +1.

Exercise 4.9 Use the recursive formula of Equation (4.3.13), page 72, to calculate and then sketch the autocorrelation functions for the following AR(2) models with parameters as specified. In each case specify whether the roots of the characteristic equation are real or complex. If the roots are complex, find the damping factor, R, and frequency, Θ, for the corresponding autocorrelation function when expressed as in Equation (4.3.17), page 73.
(a) φ1 = 0.6 and φ2 = 0.3
These calculations could be done on a calculator or with the following R code:
> rho=NULL; phi1=.6; phi2=.3; max.lag=20
> rho1=phi1/(1-phi2); rho2=(phi2*(1-phi2)+phi1^2)/(1-phi2)
> rho[1]=rho1; rho[2]=rho2
> for (k in 3:max.lag) rho[k]=phi1*rho[k-1]+phi2*rho[k-2]
> rho # to display the values
> plot(y=rho,x=1:max.lag,type='h',ylab='ACF',xlab='Lag',ylim=c(-1,+1)); abline(h=0)
> rho
 0.8571429 0.8142857 0.7457143 0.6917143 0.6387429 0.5907600 0.5460789
 0.5048753 0.4667488 0.4315119 0.3989318 0.3688126 0.3409671 0.3152241
 0.2914246 0.2694220 0.2490806 0.2302749 0.2128891 0.1968159
> polyroot(c(1,-phi1,-phi2)) # get the roots of the characteristic polynomial
 1.081666+0i -3.081666+0i
Note that the roots are real and that one root is very close to the stationarity boundary (+1). This explains the slow decay of the autocorrelation function. You could also place the point (φ1,φ2) on the display in Exhibit (4.17), page 72, to show that the roots are real.
(b) φ1 = −0.4 and φ2 = 0.5
> polyroot(c(1,-phi1,-phi2))
 -1.069694+0i 1.869694-0i
The roots are real in this case. If you look at the actual ACF values you will discover that the magnitude of the lag 2 value (0.82) is slightly larger than the magnitude of the lag 1 value (0.80). Of course, this could not happen with an AR(1) process.
(c) φ1 = 1.2 and φ2 = −0.7
> polyroot(c(1,-phi1,-phi2))
 0.8571429+0.8329931i 0.8571429-0.8329931i
In this case the roots are complex.
> R = sqrt(-phi2) # damping factor R
> Freq = acos(phi1/(2*R)) # frequency Θ
> Phase = atan((1-phi2)/(1+phi2)) # phase Φ
> R; Freq; Phase # display the results
 0.83666
 0.7711105
 1.396124
(d) φ1 = −1 and φ2 = −0.6
> polyroot(c(1,-phi1,-phi2))
 -0.8333333+0.9860133i -0.8333333-0.9860133i
The roots are complex and we have
> R; Freq; Phase # display the results
 0.7745967
 2.272469
 1.325818
(e) φ1 = 0.5 and φ2 = −0.9
> polyroot(c(1,-phi1,-phi2))
 0.277778+1.016834i 0.277778-1.016834i
The roots are complex and
> R; Freq; Phase # display the results
 0.9486833
 1.304124
 1.518213
(f) φ1 = −0.5 and φ2 = −0.6
> polyroot(c(1,-phi1,-phi2))
 -0.416667+1.221907i -0.416667-1.221907i
The roots are complex with
> R; Freq; Phase # display the results
 0.7745967
 1.899429
 1.325818

Exercise 4.10 Sketch the autocorrelation functions for each of the following ARMA models:
(a) ARMA(1,1) with φ = 0.7 and θ = 0.4. Could use Equation (4.4.5), page 78, and a calculator or use the R code which follows.
> ACF=ARMAacf(ar=0.7,ma=-0.4,lag.max=20)
> # Remember that R uses the negative of our theta values.
> plot(y=ACF[-1],x=1:20,xlab='Lag',ylab='ACF',type='h'); abline(h=0)
(b) ARMA(1,1) with φ = 0.7 and θ = −0.4.
> ACF=ARMAacf(ar=.7,ma=0.4,lag.max=20)
> plot(y=ACF[-1],x=1:20,xlab='Lag',ylab='ACF',type='h'); abline(h=0)

Exercise 4.11 For the ARMA(1,2) model Yt = 0.8Yt−1 + et + 0.7et−1 + 0.6et−2 show that
(a) ρk = 0.8ρk−1 for k > 2.
With no loss of generality we assume the mean of the series is zero. Then, for k > 2,
Cov(Yt,Yt−k) = E[(0.8Yt−1 + et + 0.7et−1 + 0.6et−2)Yt−k]
= 0.8E(Yt−1Yt−k) + E(etYt−k) + 0.7E(et−1Yt−k) + 0.6E(et−2Yt−k)
= 0.8E(Yt−1Yt−k) + 0 + 0 + 0 (since k > 2)
= 0.8Cov(Yt−1,Yt−k)
or γk = 0.8γk−1, and hence ρk = 0.8ρk−1 for k > 2.
(b) ρ2 = 0.8ρ1 + 0.6σe²/γ0.
Cov(Yt,Yt−2) = E[(0.8Yt−1 + et + 0.7et−1 + 0.6et−2)Yt−2] = E[(0.8Yt−1 + 0.6et−2)Yt−2] = 0.8Cov(Yt−1,Yt−2) + 0.6E(et−2Yt−2)
or γ2 = 0.8γ1 + 0.6E(et−2Yt−2). Now
E(et−2Yt−2) = E(etYt) = E[et(0.8Yt−1 + et + 0.7et−1 + 0.6et−2)] = 0 + σe² + 0 + 0 = σe²
and the required result follows on dividing through by γ0.
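For Exercise 4.9 the complex-root quantities can be recomputed outside R. A small Python sketch (mine, not the manual's) of the same formulas, R = √(−φ2), Θ = arccos(φ1/(2R)), and Φ = arctan((1 − φ2)/(1 + φ2)), checked against the part (c) and part (d) output:

```python
import math

def damping_freq_phase(phi1, phi2):
    R = math.sqrt(-phi2)                       # damping factor R
    theta = math.acos(phi1 / (2 * R))          # frequency Theta
    phase = math.atan((1 - phi2) / (1 + phi2)) # phase Phi
    return R, theta, phase

# Exercise 4.9(c): phi1 = 1.2, phi2 = -0.7
R, theta, phase = damping_freq_phase(1.2, -0.7)
assert abs(R - 0.83666) < 1e-5
assert abs(theta - 0.7711105) < 1e-5
assert abs(phase - 1.396124) < 1e-5

# Exercise 4.9(d): phi1 = -1, phi2 = -0.6
R, theta, phase = damping_freq_phase(-1.0, -0.6)
assert abs(R - 0.7745967) < 1e-5
assert abs(theta - 2.272469) < 1e-3
assert abs(phase - 1.325818) < 1e-5
```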
5 101520 0.00 0.10 0.20 0.30 Lag ACF 5 101520 0.0 0.2 0.4 0.6 0.8 Lag ACF Cov Yt Ytk–,()E 0.8Yt 1– et 0.7et 1– 0.6et 2–++ +()Ytk–[]= 0.8EYt 1– Ytk–()EetYtk–()0.7Eet 1– Ytk–()0.6Eet 2– Ytk–()++ += 0.8EYt 1– Ytk–()000+++= since k 2> 0.8Cov Yt Ytk1–()–,()= γk 0.8γk 1–= ρk 0.8ρk 1–= σe 2 Cov Yt Yt 2–,()E 0.8Yt 1– et 0.7et 1– 0.6et 2–++ +()Yt 2–[]= E 0.8Yt 1– 0.6et 2–+()Yt 2–[]= 0.8Cov Yt 1– Yt 2–,()0.6Eet 2– Yt 2–()+= γ2 0.8γ1 0.6Eet 2– Yt 2–()+= Eet 2– Yt 2–()EetYt()Eet 0.8Yt 1– et 0.7et 1– 0.6et 2–++ +()[]0 σ+ e 2 00++ σe== = = 42 Exercise 4.12 Consider two MA(2) processes, one with θ1 = θ2 = 1/6 and another with θ1 = −1 and θ2 = 6. (a) Show that these processes have the same autocorrelation function. Plug these numbers into Equations (4.2.3), page 63, and you get the same results. Alternatively, use the following R code. > ARMAacf(ma=c(1,-6)) > ARMAacf(ma=c(-1/6,-1/6)) (b) How do the roots of the corresponding characteristic polynomials compare? Notice that while . So the roots of the two poly- nomials are reciprocals of one another. Only the MA(2) model with θ1 = θ2 = 1/6 is invertible. Exercise 4.13 Let {Yt} be a stationary process with ρk = 0 for k > 1. Show that we must have |ρ1| ≤ ½. (Hint: Con- sider Var(Yn+1 + Yn ++ Y1) and then Var(Yn+1 − Yn + Yn−1 − ± Y1). Use the fact that both of these must be non- negative for all n.) So we must have both and for all n. The first of these inequalities is equivalent to . Since this must hold for all n, we must have . The inequality follows similarly from the other inequality. Exercise 4.14 Suppose that {Yt} is a zero mean, stationary process with |ρ1| < 0.5 and ρk = 0 for k > 1. Show that {Yt} must be representable as an MA(1) process. That is, show that there is a white noise sequence {et} such that Yt = et − θet − 1 where ρ1 is correct and et is uncorrelated with Yt − k for k > 0. (Hint: Choose θ such that |θ| < 1 and ρ1 = −θ/(1+θ2); then let . 
If we assume that {Yt} is a normal process, et will also be normal and zero correlation is equivalent to independence.)
From Exhibit (4.1), page 58, we know that there exists a unique θ in (−1,+1) such that ρ1 = −θ/(1 + θ²). With that θ, define et = Σj=0..∞ θ^j Yt−j. It is then straightforward to show that Yt = et − θet−1 as required and that et is uncorrelated with Yt−k for k > 0.

Exercise 4.15 Consider the AR(1) model Yt = φYt−1 + et. Show that if |φ| = 1 the process cannot be stationary. (Hint: Take variances of both sides.)
Suppose {Yt} is stationary. Then Var(Yt) = φ²Var(Yt−1) + σe², and, since Var(Yt−1) = Var(Yt), we get (1 − φ²)Var(Yt) = σe². If |φ| = 1, the left-hand side is zero while σe² > 0, which is impossible, and we have a proof by contradiction.

Exercise 4.16 Consider the “nonstationary” AR(1) model Yt = 3Yt−1 + et.
(a) Show that Yt = −Σj=1..∞ (1/3)^j et+j satisfies the AR(1) equation.
This follows by straightforward substitution.
(b) Show that the process defined in part (a) is stationary.
The calculations leading up to Equation (4.1.3), page 56, can be repeated for this situation to show stationarity.
(c) In what way is this solution unsatisfactory?
Since Yt depends on future error terms, this is an unsatisfactory model.

Exercise 4.17 Consider a process that satisfies the AR(1) equation Yt = ½Yt−1 + et.
(a) Show that Yt = 10(½)^t + et + ½et−1 + (½)²et−2 + ⋯ is a solution of the AR(1) equation.
It is easy to see that 10(½)^t + et + ½et−1 + (½)²et−2 + ⋯ = ½[10(½)^(t−1) + et−1 + ½et−2 + (½)²et−3 + ⋯] + et, so Yt is indeed a solution of the AR(1) equation.
(b) Is the solution given in part (a) stationary?
Since E(Yt) = 10(½)^t is not constant in time, the solution is not stationary.
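The Exercise 4.16 solution above can also be checked by brute force. Below is a simulation sketch in Python (an illustration of mine, not the manual's, with the infinite sum truncated): the future-dependent series Yt = −Σj≥1 (1/3)^j et+j does satisfy Yt = 3Yt−1 + et at every t.

```python
import random

random.seed(42)
e = [random.gauss(0, 1) for _ in range(300)]  # simulated white noise

def Y(t, horizon=120):
    # truncate the infinite sum; (1/3)^120 is utterly negligible
    return -sum((1/3)**j * e[t + j] for j in range(1, horizon))

# the AR(1) equation Y_t = 3*Y_{t-1} + e_t holds up to truncation error
for t in range(5, 50):
    assert abs(Y(t) - (3 * Y(t - 1) + e[t])) < 1e-8
```

The algebra behind the check: 3Yt−1 = −Σm≥0 (1/3)^m et+m = −et + Yt, which rearranges to the AR(1) equation.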
Exercise 4.18 Consider a process that satisfies the zero-mean, “stationary” AR(1) equation Yt = φYt−1 + et with −1 < φ < +1. Let c be any nonzero constant and define Wt = Yt + cφ^t.
(a) Show that E(Wt) = cφ^t.
E(Wt) = E(Yt + cφ^t) = 0 + cφ^t = cφ^t.
(b) Show that {Wt} satisfies the “stationary” AR(1) equation Wt = φWt−1 + et.
Yt + cφ^t = φ[Yt−1 + cφ^(t−1)] + et is valid since the terms cφ^t on both sides cancel.
(c) Is {Wt} stationary?
Since E(Wt) = cφ^t is not constant in time, the solution is not stationary.

Exercise 4.19 Consider an MA(6) model with θ1 = 0.5, θ2 = −0.25, θ3 = 0.125, θ4 = −0.0625, θ5 = 0.03125, and θ6 = −0.015625. Find a much simpler model that has nearly the same ψ-weights.
Notice that these coefficients decrease exponentially in magnitude at rate 0.5 while alternating in sign. Furthermore, the coefficients have nearly died out by θ6. Thus, an AR(1) process with φ = −0.5 would be nearly the same process.

Exercise 4.20 Consider an MA(7) model with θ1 = 1, θ2 = −0.5, θ3 = 0.25, θ4 = −0.125, θ5 = 0.0625, θ6 = −0.03125, and θ7 = 0.015625. Find a much simpler model that has nearly the same ψ-weights.
Notice that these coefficients decrease exponentially in magnitude at rate 0.5 while alternating in sign but starting at θ1 = 1. Furthermore, the coefficients have nearly died out by θ7. Equation (4.4.6), page 78, shows this type of behaviour for an ARMA(1,1) process. Equating ψ1 = φ − θ = −1 and ψ2 = (φ − θ)φ = 0.5 yields φ = −0.5 and θ = 0.5 in the ARMA(1,1) model that is nearly the same.

Exercise 4.21 Consider the model Yt = et−1 − et−2 + 0.5et−3.
(a) Find the autocovariance function for this process.
Var(Yt) = [1 + (−1)² + (0.5)²]σe² = 2.25σe²
and
Cov(Yt,Yt−1) = Cov(et−1 − et−2 + 0.5et−3, et−2 − et−3 + 0.5et−4) = Cov(−et−2, et−2) + Cov(0.5et−3, −et−3) = (−1 − 0.5)σe² = −1.5σe²
So ρ1 = −1.5σe²/(2.25σe²) = −2/3. Also
Cov(Yt,Yt−2) = Cov(et−1 − et−2 + 0.5et−3, et−3 − et−4 + 0.5et−5) = Cov(0.5et−3, et−3) = 0.5σe²
and so ρ2 = 0.5σe²/(2.25σe²) = 2/9. All other autocorrelations are zero.
(b) Show that this is a certain ARMA(p,q) process in disguise. That is, identify values for p and q, and for the θ’s and φ’s such that the ARMA(p,q) process has the same statistical properties as {Yt}.
This is really just the MA(2) process Yt = et − et−1 + 0.5et−2 in disguise; that is, p = 0, q = 2, θ1 = 1, and θ2 = −0.5.
Since we do not observe the error terms, there is no way to tell the difference between the two sequences defined as et and et′ = et−1.

Exercise 4.22 Show that the statement “The roots of 1 − φ1x − φ2x² − ⋯ − φpx^p = 0 are greater than 1 in absolute value” is equivalent to the statement “The roots of x^p − φ1x^(p−1) − φ2x^(p−2) − ⋯ − φp = 0 are less than 1 in absolute value.” (Hint: If G is a root of one equation, is 1/G a root of the other?)
Let G be a root of 1 − φ1x − φ2x² − ⋯ − φpx^p = 0. Then
0 = 1 − φ1G − φ2G² − ⋯ − φpG^p = G^p[(1/G)^p − φ1(1/G)^(p−1) − φ2(1/G)^(p−2) − ⋯ − φp]
So 1/G is a root of x^p − φ1x^(p−1) − φ2x^(p−2) − ⋯ − φp = 0, and the result follows.

Exercise 4.23 Suppose that {Yt} is an AR(1) process with ρ1 = φ. Define the sequence {bt} as bt = Yt − φYt+1.
(a) Show that Cov(bt,bt−k) = 0 for all t and k ≠ 0.
Without loss of generality, assume Var(Yt) = 1. Then, for k > 0,
Cov(bt,bt−k) = Cov(Yt − φYt+1, Yt−k − φYt−k+1)
= Cov(Yt,Yt−k) − φCov(Yt,Yt−k+1) − φCov(Yt+1,Yt−k) + φ²Cov(Yt+1,Yt−k+1)
= φ^k − φφ^(k−1) − φφ^(k+1) + φ²φ^k = 0
(b) Show that Cov(bt,Yt+k) = 0 for all t and k > 0.
Cov(bt,Yt+k) = Cov(Yt − φYt+1, Yt+k) = Cov(Yt,Yt+k) − φCov(Yt+1,Yt+k) = φ^k − φφ^(k−1) = 0

Exercise 4.24 Let {et} be a zero mean, unit variance white noise process. Consider a process that begins at time t = 0 and is defined recursively as follows. Let Y0 = c1e0 and Y1 = c2Y0 + e1. Then let Yt = φ1Yt−1 + φ2Yt−2 + et for t > 1 as in an AR(2) process.
(a) Show that the process mean is zero.
E(Y0) = c1E(e0) = 0 and E(Y1) = c2E(Y0) + E(e1) = 0. Now proceed by induction. Suppose E(Yt) = E(Yt−1) = 0. Then E(Yt+1) = φ1E(Yt) + φ2E(Yt−1) + E(et+1) = 0, and the result is established by induction on t.
(b) For particular values of φ1 and φ2 within the stationarity region for an AR(2) model, show how to choose c1 and c2 so that both Var(Y0) = Var(Y1) and the lag 1 autocorrelation between Y1 and Y0 matches that of a stationary AR(2) process with parameters φ1 and φ2.
With no loss of generality, assume σe² = 1. Then Var(Y0) = c1² and Var(Y1) = c2²c1² + 1. Setting these equal gives c1² = 1/(1 − c2²). Next, Cov(Y0,Y1) = Cov(c1e0, c2c1e0 + e1) = c2c1², so ρ1 = c2c1²/c1² = c2. Finally, given parameters φ1 and φ2 within the stationarity region for an AR(2) model, set c2 = φ1/(1 − φ2) (the lag 1 autocorrelation of the stationary AR(2) process) and c1 = 1/√(1 − c2²), and all of the requirements are met.
(c) Once the process {Yt} is generated, show how to transform it to a new process that has any desired mean and variance. (This exercise suggests a convenient method for simulating stationary AR(2) processes.)
The process μ + (√γ0/c1)Yt will have the desired mean μ and variance γ0.

Exercise 4.25 Consider an “AR(1)” process satisfying Yt = φYt−1 + et where φ can be any number and {et} is a white noise process such that et is independent of the past {Yt−1, Yt−2, …}. Let Y0 be a random variable with mean μ0 and variance σ0².
(a) Show that for t > 0 we can write Yt = et + φet−1 + φ²et−2 + φ³et−3 + ⋯ + φ^(t−1)e1 + φ^t Y0.
Yt = φYt−1 + et = φ(φYt−2 + et−1) + et = φ²(φYt−3 + et−2) + φet−1 + et = ⋯ = φ^t Y0 + et + φet−1 + ⋯ + φ^(t−1)e1, as required.
(b) Show that for t > 0 we have E(Yt) = φ^t μ0.
E(Yt) = E(φ^t Y0 + et + φet−1 + ⋯ + φ^(t−1)e1) = φ^t E(Y0) = φ^t μ0
(c) Show that for t > 0
Var(Yt) = [(1 − φ^(2t))/(1 − φ²)]σe² + φ^(2t)σ0² for φ ≠ 1, and Var(Yt) = tσe² + σ0² for φ = 1.
Var(Yt) = Var(φ^t Y0 + et + φet−1 + ⋯ + φ^(t−1)e1) = φ^(2t)σ0² + σe²Σk=0..t−1 φ^(2k). Then sum the finite geometric series.
(d) Suppose now that μ0 = 0. Argue that, if {Yt} is stationary, we must have φ ≠ 1.
If μ0 = 0, the process has a zero mean. If φ = 1 and the process is stationary, then Var(Yt) = Var(Yt−1) + σe², which is impossible.
(e) Continuing to suppose that μ0 = 0, show that, if {Yt} is stationary, then Var(Yt) = σe²/(1 − φ²), and so we must have |φ| < 1.
By stationarity, Var(Yt) = φ²Var(Yt) + σe²; solve for Var(Yt) to get Var(Yt) = σe²/(1 − φ²). Since this variance must be positive, we must have |φ| < 1.
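The part (c) variance formula is easy to verify numerically. A Python sketch (mine, not the manual's) iterating Var(Yt) = φ²Var(Yt−1) + σe² from Var(Y0) = σ0² and comparing with the closed form:

```python
def var_by_recursion(phi, sigma_e2, sigma_02, t):
    # iterate Var(Y_t) = phi^2 * Var(Y_{t-1}) + sigma_e^2
    v = sigma_02
    for _ in range(t):
        v = phi**2 * v + sigma_e2
    return v

phi, se2, s02, t = 0.8, 2.0, 3.0, 10
closed = (1 - phi**(2*t)) / (1 - phi**2) * se2 + phi**(2*t) * s02
assert abs(var_by_recursion(phi, se2, s02, t) - closed) < 1e-9
```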
CHAPTER 5

Exercise 5.1 Identify as specific ARIMA models, that is, what are p, d, and q and what are the values of the parameters—the φ’s and θ’s?
(a) Yt = Yt−1 − 0.25Yt−2 + et − 0.1et−1
This looks like an ARMA(2,1) model with φ1 = 1 and φ2 = −0.25. We need to check the stationarity conditions of Equation (4.3.11), page 72. Here φ1 + φ2 = 0.75 < 1, φ2 − φ1 = −1.25 < 1, and |φ2| = 0.25 < 1, so the process is a stationary and invertible ARMA(2,1) with φ1 = 1, φ2 = −0.25, and θ1 = 0.1.
(b) Yt = 2Yt−1 − Yt−2 + et
Initially it looks like an AR(2) model, but 2 + (−1) = 1, which is not strictly less than 1. Rewriting as Yt − Yt−1 = (Yt−1 − Yt−2) + et suggests an AR(1) model in the differences Yt − Yt−1, but the AR coefficient would be equal to 1. Actually, the second difference Yt − 2Yt−1 + Yt−2 = et is white noise, so that {Yt} is an IMA(2,0) model.
(c) Yt = 0.5Yt−1 − 0.5Yt−2 + et − 0.5et−1 + 0.25et−2
The AR part is stationary since the inequalities of Equation (4.3.11), page 72, are satisfied. Applying the same inequalities to the MA part of the model, we see that the MA part is invertible. Therefore, the model is a stationary and invertible ARMA(2,2) model with φ1 = 0.5, φ2 = −0.5, θ1 = 0.5, and θ2 = −0.25.

Exercise 5.2 For each of the ARIMA models below, give the values for E(∇Yt) and Var(∇Yt).
(a) Yt = 3 + Yt−1 + et − 0.75et−1
Here ∇Yt = Yt − Yt−1 = 3 + et − 0.75et−1, so that E(∇Yt) = 3 and Var(∇Yt) = [1 + (0.75)²]σe² = (25/16)σe².
(b) Yt = 10 + 1.25Yt−1 − 0.25Yt−2 + et − 0.1et−1
In this case ∇Yt = Yt − Yt−1 = 10 + 0.25(Yt−1 − Yt−2) + et − 0.1et−1. So the model is a stationary, invertible ARIMA(1,1,1) model with φ = 0.25, θ = 0.1, and θ0 = 10. Hence
E(∇Yt) = θ0/(1 − φ) = 10/(1 − 0.25) = 10/0.75 = 40/3
Also, from Equation (4.4.4), page 78,
Var(∇Yt) = [(1 − 2φθ + θ²)/(1 − φ²)]σe² = [(1 − 2(0.25)(0.1) + (0.1)²)/(1 − (0.25)²)]σe² = 1.024σe²
(c) Yt = 5 + 2Yt−1 − 1.7Yt−2 + 0.7Yt−3 + et − 0.5et−1 + 0.25et−2
Factoring the AR characteristic polynomial we have 1 − 2x + 1.7x² − 0.7x³ = (1 − x)(1 − x + 0.7x²). This shows that a first difference is needed, after which a stationary AR(2) obtains. Thus the model may be rewritten as
∇Yt = 5 + ∇Yt−1 − 0.7∇Yt−2 + et − 0.5et−1 + 0.25et−2
So the model is an ARIMA(2,1,2) with φ1 = 1, φ2 = −0.7, θ1 = 0.5, θ2 = −0.25, and θ0 = 5.

Exercise 5.3 Suppose that {Yt} is generated according to Yt = et + cet−1 + cet−2 + cet−3 + ⋯ + ce0 for t > 0.
(a) Find the mean and covariance functions for {Yt}. Is {Yt} stationary?
E(Yt) = 0 and Var(Yt) = Var(et + cet−1 + cet−2 + ⋯ + ce0) = (1 + tc²)σe², which, in general, varies with t. Assume that t < s. Then
Cov(Yt,Ys) = Cov(et + cet−1 + ⋯ + ce0, es + ces−1 + ⋯ + cet + cet−1 + ⋯ + ce0) = (c + c²t)σe² = c(1 + ct)σe²
So {Yt} is not stationary.
(b) Find the mean and covariance functions for {∇Yt}. Is {∇Yt} stationary?
∇Yt = (et + cet−1 + cet−2 + ⋯ + ce0) − (et−1 + cet−2 + cet−3 + ⋯ + ce0) = et − (1 − c)et−1, so this is stationary for any value of c. See part (c).
(c) Identify {Yt} as a specific ARIMA process.
The process {∇Yt} is an MA(1) process, so {Yt} is IMA(1,1), or ARIMA(0,1,1), with θ = 1 − c. The {∇Yt} process is invertible if |c| < 1.

Exercise 5.4 Suppose that Yt = A + Bt + Xt where {Xt} is a random walk. First suppose that A and B are constants.
(a) Is {Yt} stationary?
Since E(Yt) = A + Bt, in general, varies with t, the process {Yt} is not stationary.
(b) Is {∇Yt} stationary?
∇Yt = (A + Bt + Xt) − [A + B(t − 1) + Xt−1] = B + Xt − Xt−1 = B + ∇Xt. So E(∇Yt) = B, and Cov(∇Yt,∇Yt−k) = Cov(B + ∇Xt, B + ∇Xt−k) = 0 for k > 0, since ∇Xt is white noise and B is a constant. So {∇Yt} is stationary.
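The arithmetic in Exercise 5.2 above can be cross-checked directly; a Python sketch (mine, not the manual's):

```python
# Exercise 5.2(a): Var(del Y) = [1 + 0.75^2] sigma_e^2 = (25/16) sigma_e^2
assert abs((1 + 0.75**2) - 25/16) < 1e-12

# Exercise 5.2(b): mean and the ARMA(1,1) variance factor from Eq. (4.4.4)
phi, theta, theta0 = 0.25, 0.1, 10
assert abs(theta0 / (1 - phi) - 40/3) < 1e-12
assert abs((1 - 2*phi*theta + theta**2) / (1 - phi**2) - 1.024) < 1e-12
```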
Now suppose that A and B are random variables that are independent of the random walk {Xt}.
(c) Is {Yt} stationary?
No. Since E(Yt) = E(A) + E(B)t, in general, varies with t, the process {Yt} is not stationary.
(d) Is {∇Yt} stationary?
We still have ∇Yt = B + ∇Xt. So E(∇Yt) = E(B), which is constant in t, and Cov(∇Yt,∇Yt−k) = Cov(B + ∇Xt, B + ∇Xt−k) = Var(B) for all k > 0. So we do have stationarity.

Exercise 5.5 Using the simulated white noise values in Exhibit (5.2), page 88, verify the values shown for the explosive process Yt.
Direct calculation.

Exercise 5.6 Consider a stationary process {Yt}. Show that if ρ1 < ½, ∇Yt has a larger variance than does Yt.
Var(∇Yt) = Var(Yt − Yt−1) = Var(Yt) + Var(Yt−1) − 2Cov(Yt,Yt−1) = 2(1 − ρ1)Var(Yt)
So if ρ1 < ½, 2(1 − ρ1) is larger than 1 and the result follows.

Exercise 5.7 Consider two models:
A: Yt = 0.9Yt−1 + 0.09Yt−2 + et
Since φ1 + φ2 = 0.99 < 1, φ2 − φ1 = −0.81 < 1, and |φ2| = 0.09 < 1, this is a stationary AR(2) process with φ1 = 0.9 and φ2 = 0.09.
B: Yt = Yt−1 + et − 0.1et−1
Since Yt − Yt−1 = et − 0.1et−1, this is an IMA(1,1) process with θ = 0.1.
(a) Identify each as a specific ARIMA model. That is, what are p, d, and q and what are the values of the parameters, φ’s and θ’s? (See above.)
(b) In what ways are the two models different?
One is stationary while the other is nonstationary.
(c) In what ways are the two models similar? (Compare ψ-weights and π-weights.)
Using Equations (4.3.21) on page 75 we can calculate the ψ-weights for the AR(2) model.
This could be done with a calculator or the following R code:
> psi=NULL
> phi1=0.9; phi2=0.09; max.lag=20
> psi[1]=1; psi[2]=phi1
> for (k in 3:max.lag) psi[k]=phi1*psi[k-1]+phi2*psi[k-2]
> psi # The indexing here is off by 1: psi[k] holds the weight at lag k-1
 1.0000000 0.9000000 0.9000000 0.8910000 0.8829000 0.8748000 0.8667810
 0.8588349 0.8509617 0.8431607 0.8354312 0.8277725 0.8201841 0.8126652
 0.8052152 0.7978336 0.7905196 0.7832726 0.7760921 0.7689775
Alternatively, you can use the ARMAtoMA function:
> ARMAtoMA(ar=c(phi1,phi2),lag.max=20)
 0.9000000 0.9000000 0.8910000 0.8829000 0.8748000 0.8667810 0.8588349
 0.8509617 0.8431607 0.8354312 0.8277725 0.8201841 0.8126652 0.8052152
 0.7978336 0.7905196 0.7832726 0.7760921 0.7689775 0.7619280
From Equation (5.2.6), page 93, the ψ-weights for the IMA(1,1) model are 1, 1 − 0.1 = 0.9, 1 − 0.1 = 0.9, 1 − 0.1 = 0.9, … . So the ψ-weights for the two models are very similar for many lags.
The π-weights for the IMA(1,1) model are obtained from Equation (4.5.5), page 80. We find that πk = (1 − θ)θ^(k−1) for k = 1, 2, … . So π1 = (1 − 0.1) = 0.9, π2 = (1 − 0.1)(0.1) = 0.09, π3 = (1 − 0.1)(0.1)² = 0.009, and so on. The first two π-weights for the two models are identical and the remaining π-weights are nearly the same. These two models would be essentially impossible to distinguish in practice.

Exercise 5.8 Consider a nonstationary “AR(1)” process defined as a solution to Equation (5.1.2), page 88, with |φ| > 1.
(a) Derive an equation similar to Equation (5.1.3), page 88, for this more general case. Use Y0 = 0 as an initial condition. We find that
Yt = φYt−1 + et = et + φ(φYt−2 + et−1) = et + φet−1 + φ²Yt−2 = ⋯ = et + φet−1 + φ²et−2 + ⋯ + φ^(t−1)e1 + φ^t Y0 = et + φet−1 + φ²et−2 + ⋯ + φ^(t−1)e1
(b) Derive an equation similar to Equation (5.1.4), page 89, for this more general case.
Var(Yt) = Var(et + φet−1 + φ²et−2 + ⋯ + φ^(t−1)e1) = σe²Σk=0..t−1 φ^(2k) = σe²[(φ^(2t) − 1)/(φ² − 1)]
(c) Derive an equation similar to Equation (5.1.5), page 89, for this more general case.
Cov(Yt,Yt−k) = Cov(et + φet−1 + ⋯ + φ^k et−k + φ^(k+1)et−k−1 + ⋯ + φ^(t−1)e1, et−k + φet−k−1 + ⋯ + φ^(t−k−1)e1)
= (φ^k + φ^(k+2) + ⋯ + φ^(2t−k−2))σe² = φ^k(1 + φ² + φ⁴ + ⋯ + φ^(2(t−k)−2))σe² = φ^k[(φ^(2(t−k)) − 1)/(φ² − 1)]σe²
(d) Is it true that for any |φ| > 1, Corr(Yt,Yt−k) ≈ 1 for large t and moderate k?
Yes, since
Corr(Yt,Yt−k) = Cov(Yt,Yt−k)/√[Var(Yt)Var(Yt−k)] = φ^k √{[φ^(2(t−k)) − 1]/[φ^(2t) − 1]} = √{[1 − φ^(−2(t−k))]/[1 − φ^(−2t)]} ≈ 1
for large t and moderate k, because |φ| > 1.

Exercise 5.9 Verify Equation (5.1.10), page 90.
Here ∇Yt = εt + et − et−1, where {et} and {εt} are independent white noise series. So
Var(∇Yt) = Var(εt + et − et−1) = σε² + 2σe²
Cov(∇Yt,∇Yt−1) = Cov(εt + et − et−1, εt−1 + et−1 − et−2) = −σe²
Corr(∇Yt,∇Yt−1) = −σe²/(σε² + 2σe²) = −1/[2 + σε²/σe²]

Exercise 5.10 Nonstationary ARIMA series can be simulated by first simulating the corresponding stationary ARMA series and then “integrating” it (really partial summing it). Use statistical software to simulate a variety of IMA(1,1) and IMA(2,2) series with a variety of parameter values. Note any stochastic “trends” in the simulated series.
R code can do it all. Remember that R uses the negative of our θ values.
> plot(arima.sim(model=list(order=c(0,1,1),ma=-0.5),n=200),type='o',ylab='Series')
> # Do this several times with the same parameters to see the possible variation then
> # change the various parameters, ma and n. For the IMA(2,2) use, for example,
> plot(arima.sim(model=list(order=c(0,2,2),ma=c(-0.7,-0.1)),n=50),type='o', ylab='Series')
Again, it is instructive to repeat the simulation several times with the same parameters and several times with different parameters.
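The Exercise 5.8(d) claim follows at once from the part (b) and (c) formulas, and is easy to confirm numerically; a Python sketch (mine, not the manual's) with φ = 3:

```python
import math

phi, t, k = 3.0, 40, 3
# Corr(Y_t, Y_{t-k}) = phi^k * sqrt((phi^(2(t-k)) - 1) / (phi^(2t) - 1))
corr = phi**k * math.sqrt((phi**(2*(t - k)) - 1) / (phi**(2*t) - 1))
assert abs(corr - 1) < 1e-9   # essentially 1 for large t and moderate k
```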
Exercise 5.11 The data file winnebago contains monthly unit sales of recreational vehicles (RVs) from Winnebago, Inc. from November 1966 through February 1972.
(a) Display and interpret the time series plot for these data.
> data(winnebago); win.graph(width=6.5,height=3,pointsize=8)
> plot(winnebago,type='o',ylab='Winnebago Monthly Sales')
The series increases over time and the variation is larger as the series level gets higher—a series begging us to take logarithms.
(b) Now take natural logarithms of the monthly sales figures and display the time series plot of the transformed values. Describe the effect of the logarithms on the behavior of the series.
> plot(log(winnebago),type='o',ylab='Log(Monthly Sales)')
The series still increases over time, but now the variation around the general level is quite similar at all levels of the series.
(c) Calculate the fractional relative changes, (Yt − Yt−1)/Yt−1, and compare them to the differences of (natural) logarithms, ∇log(Yt) = log(Yt) − log(Yt−1). How do they compare for smaller values and for larger values?
> percentage=na.omit((winnebago-zlag(winnebago))/zlag(winnebago))
> win.graph(width=3,height=3,pointsize=8)
> plot(x=diff(log(winnebago))[-1],y=percentage[-1], ylab='Percentage Change', xlab='Difference of Logs')
> cor(diff(log(winnebago))[-1],percentage[-1])
If there were a perfect relationship, the above plot would be a straight line. Clearly, the relationship is good but not perfect. The correlation coefficient in this plot is 0.96, so the agreement is quite good. Of course, there is seasonality in this series that has not been modeled.

Exercise 5.12 The data file SP contains quarterly Standard & Poor’s Composite Index of stock price values from the first quarter of 1936 through the fourth quarter of 1977.
(a) Display and interpret the time series plot for these data.
> data(SP); plot(SP,type='o',ylab='Standard & Poor\'s Quarterly Stock Index')
Another general upward “trend” but with increased variation at the higher levels.
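The comparisons in Exercises 5.11(c) and 5.12(c) come down to the approximation log(1 + r) ≈ r for a relative change r. A one-line Python check (mine, not the manual's) shows the gap is O(r²): tiny for the small quarterly S&P changes (hence the 0.996 correlation) and larger for the more volatile Winnebago sales (0.96).

```python
import math

# log(1 + r) versus r: log-differences and fractional changes nearly
# coincide when period-to-period relative changes are small.
for r in [0.01, 0.05, 0.10]:
    assert abs(math.log(1 + r) - r) < r**2   # tight for small changes
assert abs(math.log(1 + 1.0) - 1.0) > 0.3    # poor for a 100% jump
```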
(b) Now take natural logarithms of the quarterly values and display the time series plot of the transformed values. Describe the effect of the logarithms on the behavior of the series.
> plot(log(SP),type='o',ylab='Log(S&P)')
Now the variance is stabilized but, of course, the upward trend is still there.
(c) Calculate the (fractional) relative changes, (Yt − Yt−1)/Yt−1, and compare them to the differences of (natural) logarithms, ∇log(Yt). How do they compare for smaller values and for larger values?
> percentage=na.omit((SP-zlag(SP))/zlag(SP))
> win.graph(width=3,height=3,pointsize=8)
> plot(x=diff(log(SP))[-1],y=percentage[-1], ylab='Percentage Change', xlab='Difference of Logs')
> cor(diff(log(SP))[-1],percentage[-1])
Here the agreement between the two is very good and the correlation coefficient is 0.996.
Exercise 5.13 The data file airpass contains international airline passenger monthly totals (in thousands) from January 1949 through December of 1960. This is a classic time series analyzed in Box and Jenkins (1976).
(a) Display and interpret the time series plot for these data.
> win.graph(width=6.5,height=3,pointsize=8)
> data(airpass); plot(airpass,type='o',ylab='Airline Passenger Monthly Totals')
There is a general upward “trend” with increased variation at the higher levels. There is also evidence of seasonality.
(b) Now take natural logarithms of the monthly values and display the time series plot of the transformed values. Describe the effect of the logarithms on the behavior of the series.
> plot(log(airpass),type='o',ylab='Log(Airline Passengers)')
Now the variation is similar at high, low, and middle levels of the series.
(c) Calculate the (fractional) relative changes, (Yt − Yt−1)/Yt−1, and compare them to the differences of (natural) logarithms, ∇log(Yt). How do they compare for smaller values and for larger values?
> percentage=na.omit((airpass-zlag(airpass))/zlag(airpass))
> win.graph(width=3,height=3,pointsize=8)
> plot(x=diff(log(airpass))[-1],y=percentage[-1], ylab='Percentage Change',xlab='Difference of Logs')
> cor(diff(log(airpass))[-1],percentage[-1])
There is excellent agreement between the two transformed series in this case. The correlation coefficient in this plot is 0.999. Either transformation would be extremely helpful in modeling this series further.

Exercise 5.14 Consider the annual rainfall data for Los Angeles shown in Exhibit (1.1), page 2. The quantile-quantile normal plot of these data, shown in Exhibit (3.17), page 50, convinced us that the data were not normal. The data are in the file larain.
(a) Use software to produce a plot similar to Exhibit (5.11), page 102, and determine the “best” value of λ for a power transformation of the data.
> win.graph(width=3,height=3,pointsize=8)
> data(larain); BoxCox.ar(larain, method='ols')
The maximum likelihood value for lambda is about 0.26, but the 95% confidence interval includes the logarithm transformation (lambda = 0) and the square root transformation (lambda = 0.5).
We choose lambda = 0.25, the fourth root, for the remaining parts of this exercise.
[Figures: scatter plot of Percentage Change versus Difference of Logs; Box-Cox log-likelihood versus λ with 95% confidence interval]
(b) Display a quantile-quantile plot of the transformed data. Are they more normal?
> win.graph(width=3,height=3,pointsize=8)
> qqnorm((larain)^.25,main='')
> qqline((larain)^.25)
> shapiro.test((larain)^.25)
Shapiro-Wilk normality test
data: (larain)^0.25
W = 0.9941, p-value = 0.9096
The values transformed by the fourth root look quite normal.
(c) Produce a time series plot of the transformed values.
> win.graph(width=6.5,height=3,pointsize=8)
> plot(larain^.25,type='o',ylab='Fourth Root of L.A. Rain Values')
This transformed series could now be considered as normal white noise with a nonzero mean.
[Figures: quantile-quantile normal plot and time series plot of the fourth roots of the L.A. rainfall values]
(d) Use the transformed values to display a plot of Yt versus Yt−1 as in Exhibit (1.2), page 2. Should we expect the transformation to change the dependence or lack of dependence in the series?
> win.graph(width=3,height=3,pointsize=8)
> plot(y=(larain)^.25,x=zlag((larain)^.25), ylab='Fourth Root of L.A. Rain Values',xlab='Previous Year Value')
The lack of correlation or any other kind of dependency between year values is clear from this plot. Instantaneous transformations cannot induce correlation where none was present.
Exercise 5.15 Quarterly earnings per share for the Johnson & Johnson company are given in the data file named JJ. The data cover the years from 1960 through 1980.
(a) Display a time series plot of the data. Interpret the interesting features in the plot.
> win.graph(width=6.5,height=3,pointsize=8)
> data(JJ); plot(JJ,type='o',ylab='Johnson & Johnson Quarterly Earnings')
[Figures: scatter plot of the fourth-root rainfall values versus the previous year's value; time series plot of Johnson & Johnson Quarterly Earnings]
(b) Use software to produce a plot similar to Exhibit (5.11), page 102, and determine the “best” value of λ for a power transformation of these data.
> win.graph(width=3,height=3,pointsize=8)
> data(JJ); BC=BoxCox.ar(JJ)
> BC=BoxCox.ar(JJ,lambda=seq(0.0,0.35,0.01)) # New range for lambda to see more detail
The plot on the left shows the initial default Box-Cox analysis. The plot on the right shows more detail as the range for the lambda parameter has been restricted to 0.0 to 0.35. The maximum likelihood estimate of lambda is 0.17 and the 95% confidence interval runs from 0.02 to 0.32. We use 0.17 as the lambda parameter in the remaining analysis.
(c) Display a time series plot of the transformed values.
Does this plot suggest that a stationary model might be appropriate?
> win.graph(width=6.5,height=3,pointsize=8)
> plot((JJ)^0.17,type='o',ylab='Transformed Earnings')
The variance has been stabilized, but the strong trend must be accounted for before we can entertain a stationary model.
[Figures: Box-Cox log-likelihood plots for the JJ series; time series plot of Transformed Earnings]
(d) Display a time series plot of the differences of the transformed values. Does this plot suggest that a stationary model might be appropriate for the differences?
> plot(diff((JJ)^0.17),type='o',ylab='Differenced Transformed Earnings')
The trend is now gone, but the variation does not appear to be constant across time, and there may be quarterly seasonality to deal with.
Exercise 5.16 The file named gold contains the daily price of gold (in dollars per troy ounce) for the 252 trading days of year 2005.
(a) Display the time series plot of these data. Interpret the plot.
> data(gold); plot(gold,type='o',ylab='Daily Gold Prices')
After a period of generally flat prices, the last half of the year shows substantial increases. (Look up today’s gold prices at www.lbma.org.uk to see how the series is doing more recently.)
[Figures: time series plots of Differenced Transformed Earnings and of Daily Gold Prices]
(b) Display the time series plot of the differences of the logarithms of these data. Interpret this plot.
> plot(diff(log(gold)),type='o',ylab='Differences of Log(Prices)')
The “trend” has been accounted for, but there may be increased variability in the last half of the series.
(c) Calculate and display the sample ACF for the differences of the logarithms of these data and argue that the logarithms appear to follow a random walk model.
> acf(diff(log(gold)),main='')
The differences of the logarithms of gold prices display the autocorrelation structure of white noise. Therefore, the logarithms of gold prices could be considered as a random walk.
[Figures: time series plot of Differences of Log(Prices); sample ACF of the differences]
(d) Display the differences of logs in a histogram and interpret.
> win.graph(width=3,height=3,pointsize=8)
> hist(diff(log(gold)))
The distribution looks reasonably “bell-shaped,” but see part (e) below.
(e) Display the differences of logs in a quantile-quantile normal plot and interpret.
> qqnorm(diff(log(gold))); qqline(diff(log(gold)))
> shapiro.test(diff(log(gold)))
Shapiro-Wilk normality test
data: diff(log(gold))
W = 0.9861, p-value = 0.01519
The Q-Q plot indicates that the distribution deviates from normality. In particular, the tails are lighter than a normal distribution. The Shapiro-Wilk test confirms this with a p-value of 0.015.
[Figures: histogram and quantile-quantile normal plot of diff(log(gold))]
Exercise 5.17 Use calculus to show that, for any fixed x > 0, $(x^\lambda - 1)/\lambda \to \log x$ as $\lambda \to 0$.
First rewrite $x^\lambda = e^{\lambda \log x}$. Then
$$\frac{d}{d\lambda} x^\lambda = \frac{d}{d\lambda} e^{\lambda \log x} = (\log x)\, e^{\lambda \log x}$$
So by l’Hospital’s rule,
$$\lim_{\lambda \to 0} \frac{x^\lambda - 1}{\lambda} = \lim_{\lambda \to 0} \frac{d(x^\lambda - 1)/d\lambda}{d\lambda/d\lambda} = \lim_{\lambda \to 0} (\log x)\, e^{\lambda \log x} = \log x$$
CHAPTER 6
Exercise 6.1 Verify Equation (6.1.3), page 110 for the white noise process.
For a white noise process, the general expression
$$c_{ij} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+i}\rho_{k+j} + \rho_{k-i}\rho_{k+j} - 2\rho_i\rho_k\rho_{k+j} - 2\rho_j\rho_k\rho_{k+i} + 2\rho_i\rho_j\rho_k^2 \right)$$
reduces to $c_{ii} = \rho_0^2 = 1$ and $c_{ij} = 0$ for $i \neq j$, and hence the result.
Exercise 6.2 Verify Equation (6.1.4), page 110 for the AR(1) process.
Without loss of generality, let 1 ≤ j. Then
$$c_{jj} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+j}^2 + \rho_{k-j}\rho_{k+j} - 4\rho_j\rho_k\rho_{k+j} + 2\rho_j^2\rho_k^2 \right) = (1 + 2\rho_j^2)\sum_{k=-\infty}^{\infty}\rho_k^2 + \sum_{k=-\infty}^{\infty}\rho_{k-j}\rho_{k+j} - 4\rho_j\sum_{k=-\infty}^{\infty}\rho_k\rho_{k+j}$$
Now, with $\rho_k = \phi^{|k|}$, we deal with the three sums separately. First,
$$\sum_{k=-\infty}^{\infty}\rho_k^2 = 1 + 2\sum_{k=1}^{\infty}\phi^{2k} = 1 + \frac{2\phi^2}{1-\phi^2} = \frac{1+\phi^2}{1-\phi^2}$$
Next,
$$\sum_{k=-\infty}^{\infty}\rho_{k-j}\rho_{k+j} = \sum_{k=-\infty}^{-j-1}\phi^{-2k} + \sum_{k=-j}^{j-1}\phi^{2j} + \sum_{k=j}^{\infty}\phi^{2k} = \frac{\phi^{2(j+1)}}{1-\phi^2} + 2j\phi^{2j} + \frac{\phi^{2j}}{1-\phi^2} = \phi^{2j}\,\frac{1+\phi^2}{1-\phi^2} + 2j\phi^{2j}$$
For the third sum,
$$\sum_{k=-\infty}^{\infty}\rho_k\rho_{k+j} = \sum_{k=-\infty}^{-j}\phi^{-k}\phi^{-k-j} + \sum_{k=-j+1}^{0}\phi^{-k}\phi^{k+j} + \sum_{k=1}^{\infty}\phi^{k}\phi^{k+j} = \frac{\phi^{j}}{1-\phi^2} + j\phi^{j} + \phi^{j}\,\frac{\phi^2}{1-\phi^2} = \phi^{j}\,\frac{1+\phi^2}{1-\phi^2} + j\phi^{j}$$
Substituting these three results into the expression for $c_{jj}$ above and simplifying (using $\rho_j = \phi^j$) gives the desired result,
$$c_{jj} = \frac{(1+\phi^2)(1-\phi^{2j})}{1-\phi^2} - 2j\phi^{2j}$$
Exercise 6.3 Verify the line in Exhibit (6.1), page 111, for the values φ = ±0.9.
Use
$$\mathrm{Var}(r_1) \approx \frac{1-\phi^2}{n}, \qquad \mathrm{Var}(r_k) \approx \frac{1}{n}\left[\frac{(1+\phi^2)(1-\phi^{2k})}{1-\phi^2} - 2k\phi^{2k}\right], \qquad \mathrm{Corr}(r_1, r_2) \approx 2\phi\sqrt{\frac{1-\phi^2}{1+2\phi^2-3\phi^4}}$$
and the R code
> phi=0.9;k=10;c11=sqrt((1-phi^2));c22=sqrt((1+phi^2)^2-4*phi^4)
> c12=2*phi*sqrt((1-phi^2)/(1+2*phi^2-3*phi^4))
> ckk=sqrt((1+phi^2)*(1-phi^(2*k))/(1-phi^2)-2*k*phi^(2*k))
> c11;c22;c12;ckk
The R code with φ = ±0.9 gives 0.4358899, 0.8072794, ±0.9719086, and 2.436515, as rounded in the Exhibit.
Exercise 6.4 Add new entries to Exhibit (6.1), page 111, for the following values:
(a) φ = ±0.99. The R code shown in the previous exercise with φ = ±0.99 gives 0.1410674, 0.2800214, ±0.9974716, and 1.326868.
(b) φ = ±0.5. Results for φ = ±0.5 are 0.8660254, 1.145644, ±0.755929, and 1.290986.
(c) φ = ±0.1. Results for φ = ±0.1 are 0.9949874, 1.009802, ±0.1970659, and 1.010051.
These values are quite close to what we would get with a white noise process.
Exercise 6.5 Verify Equation (6.1.9), page 111 and Equation (6.1.10) for the MA(1) process.
In general,
$$c_{ij} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+i}\rho_{k+j} + \rho_{k-i}\rho_{k+j} - 2\rho_i\rho_k\rho_{k+j} - 2\rho_j\rho_k\rho_{k+i} + 2\rho_i\rho_j\rho_k^2 \right)$$
For the MA(1) process only $\rho_{-1}$, $\rho_0$, and $\rho_1$ are nonzero. So
$$c_{11} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+1}^2 + \rho_{k-1}\rho_{k+1} - 4\rho_1\rho_k\rho_{k+1} + 2\rho_1^2\rho_k^2 \right) = (1 + 2\rho_1^2) + \rho_1^2 - 4\rho_1(2\rho_1) + 2\rho_1^2(1 + 2\rho_1^2) = 1 - 3\rho_1^2 + 4\rho_1^4$$
Also, for j > 1,
$$c_{jj} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+j}^2 + \rho_{k-j}\rho_{k+j} - 4\rho_j\rho_k\rho_{k+j} + 2\rho_j^2\rho_k^2 \right) = \rho_{-1}^2 + \rho_0^2 + \rho_1^2 = 1 + 2\rho_1^2$$
Finally,
$$c_{12} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+1}\rho_{k+2} + \rho_{k-1}\rho_{k+2} - 2\rho_1\rho_k\rho_{k+2} - 2\rho_2\rho_k\rho_{k+1} + 2\rho_1\rho_2\rho_k^2 \right) = (\rho_{-1}\rho_0 + \rho_0\rho_1) - 2\rho_1\rho_{-1}\rho_1 = 2\rho_1(1 - \rho_1^2)$$
Exercise 6.6 Verify the line in Exhibit (6.2), page 112, for the values θ = ±0.9.
> theta=0.9; rho1=-theta/(1+theta^2)
> c11=sqrt(1-3*rho1^2+4*rho1^4); c22=sqrt(1+2*rho1^2)
> c12=2*rho1*(1-rho1^2)/(c11*c22); c11; c22; c12
Here are the results from the R code: 0.7090734, 1.222494, and −0.8635941.
Exercise 6.7 Add new entries to Exhibit (6.2), page 112, for the following values:
(a) θ = ±0.99. Use the R code shown in the previous exercise with θ = ±0.99 to get: 0.7071246, 1.224724, and 0.8660035.
(b) θ = ±0.8. Results: 0.7159797, 1.214869, and 0.8547268.
(c) θ = ±0.2. Results: 0.9457928, 1.036323, and 0.377894.
Exercise 6.8 Verify Equation (6.1.11), page 112, for the general MA(q) process.
In general,
$$c_{jj} = \sum_{k=-\infty}^{\infty} \left( \rho_{k+j}^2 + \rho_{k-j}\rho_{k+j} - 2\rho_j\rho_k\rho_{k+j} - 2\rho_j\rho_k\rho_{k+j} + 2\rho_j^2\rho_k^2 \right)$$
In the MA(q) case, only $\rho_{-q}, \rho_{-q+1}, \ldots, \rho_0, \rho_1, \ldots, \rho_q$ are non-zero, so for j > q this reduces to
$$c_{jj} = 1 + 2\sum_{k=1}^{q}\rho_k^2$$
as required.
Exercise 6.9 Use Equation (6.2.3), page 113, to verify the value for the lag 2 partial autocorrelation function for the MA(1) process given in Equation (6.2.5), page 114.
In general,
$$\phi_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2}$$
So for an MA(1) process, with $\rho_2 = 0$ and $\rho_1 = -\theta/(1+\theta^2)$,
$$\phi_{22} = \frac{0 - \left(-\theta/(1+\theta^2)\right)^2}{1 - \left(-\theta/(1+\theta^2)\right)^2} = -\frac{\theta^2}{(1+\theta^2)^2 - \theta^2} = -\frac{\theta^2}{1 + \theta^2 + \theta^4}$$
Exercise 6.10 Show that the general expression for the partial autocorrelation function of an MA(1) process given in Equation (6.2.6), page 114, satisfies the Yule-Walker recursion given in Equation (6.2.7).
We need to show that Equation (6.2.6),
$$\phi_{kk} = -\frac{\theta^k(1-\theta^2)}{1-\theta^{2(k+1)}} \qquad \text{for } k \geq 1$$
satisfies the Yule-Walker equations. The k×k Yule-Walker Equations (6.2.7) written in matrix form are
$$\begin{bmatrix} 1 & \rho_1 & \rho_2 & \cdots & \rho_{k-1} \\ \rho_1 & 1 & \rho_1 & \cdots & \rho_{k-2} \\ \vdots & \vdots & \vdots & & \vdots \\ \rho_{k-1} & \rho_{k-2} & \rho_{k-3} & \cdots & 1 \end{bmatrix} \begin{bmatrix} \phi_{k1} \\ \phi_{k2} \\ \vdots \\ \phi_{kk} \end{bmatrix} = \begin{bmatrix} \rho_1 \\ \rho_2 \\ \vdots \\ \rho_k \end{bmatrix}$$
For an MA(1) process these become
$$\begin{bmatrix} 1 & \rho & 0 & \cdots & 0 & 0 \\ \rho & 1 & \rho & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & \rho \\ 0 & 0 & 0 & \cdots & \rho & 1 \end{bmatrix} \begin{bmatrix} \phi_{k1} \\ \phi_{k2} \\ \vdots \\ \phi_{k,k-1} \\ \phi_{kk} \end{bmatrix} = \begin{bmatrix} \rho \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}$$
where, for simplicity, we write ρ for ρ1. For a given k, write this matrix equation as $A_k x = b$. We will use Cramer’s Rule to solve for $x_k$ (= φkk). By Cramer’s Rule, $x_k = |B|/|A_k|$, where B is the matrix $A_k$ with the kth column replaced by the column vector b and the vertical bars (| |) indicate determinants. In particular,
$$|B| = \begin{vmatrix} 1 & \rho & 0 & \cdots & 0 & \rho \\ \rho & 1 & \rho & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & 0 & \cdots & \rho & 0 \end{vmatrix}$$
and expanding the determinant around the last column using the special form of B, we obtain $|B| = (-1)^{k+1}\rho\,\rho^{k-1} = (-1)^{k+1}\rho^k$.
We now develop a second-order recursion for the determinant $|A_k|$. Expanding the determinant around the first row, we can write $|A_k| = |A_{k-1}| - \rho^2|A_{k-2}|$, which is valid for k > 2. To start the recursion, we have $|A_1| = 1$ and $|A_2| = 1 - \rho^2$. Then
$$|A_3| = |A_2| - \rho^2|A_1| = 1 - \rho^2 - \rho^2 = 1 - 2\rho^2$$
So, with $\rho = -\theta/(1+\theta^2)$,
$$\phi_{33} = \frac{|B|}{|A_3|} = \frac{(-1)^4\rho^3}{1 - 2\rho^2} = -\frac{\theta^3}{(1+\theta^2)^3 - 2\theta^2(1+\theta^2)} = -\frac{\theta^3}{(1+\theta^2)(1+\theta^4)} = -\frac{\theta^3(1-\theta^2)}{1 - \theta^{2(3+1)}}$$
as required by Equation (6.2.6). To use induction on k we next proceed to k = 4:
$$|A_4| = |A_3| - \rho^2|A_2| = (1 - 2\rho^2) - \rho^2(1 - \rho^2) = 1 - 3\rho^2 + \rho^4$$
So
$$\phi_{44} = \frac{|B|}{|A_4|} = \frac{(-1)^5\rho^4}{1 - 3\rho^2 + \rho^4} = -\frac{\theta^4}{1 + \theta^2 + \theta^4 + \theta^6 + \theta^8} = -\frac{\theta^4(1-\theta^2)}{1 - \theta^{2(4+1)}}$$
as required by Equation (6.2.6). Now we proceed by induction on k. Suppose that Equation (6.2.6) is satisfied for k−1 and k−2. Then
$$\phi_{kk} = \frac{(-1)^{k+1}\rho^k}{|A_{k-1}| - \rho^2|A_{k-2}|} = \frac{(-1)^{k+1}\rho^k}{\dfrac{(-1)^{k}\rho^{k-1}}{\phi_{k-1,k-1}} - \rho^2\,\dfrac{(-1)^{k-1}\rho^{k-2}}{\phi_{k-2,k-2}}} = -\frac{\rho}{\dfrac{1}{\phi_{k-1,k-1}} + \dfrac{\rho}{\phi_{k-2,k-2}}} = \cdots = -\frac{\theta^k(1-\theta^2)}{1-\theta^{2(k+1)}}$$
after (tedious) but straightforward algebra. Hence, by induction, the required result is established.
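Equation (6.2.6) can also be checked numerically. The sketch below is an added check, not part of the original solution: it compares the closed form against the partial autocorrelations computed by base R's ARMAacf. Note that R's sign convention for MA coefficients is opposite the text's, hence ma = -theta.

```r
# Numerical check of Equation (6.2.6):
#   phi_kk = -theta^k (1 - theta^2) / (1 - theta^(2(k+1)))
theta <- 0.6                                    # an arbitrary MA(1) parameter
k <- 1:8
closed.form <- -(theta^k) * (1 - theta^2) / (1 - theta^(2 * (k + 1)))
numerical <- ARMAacf(ma = -theta, lag.max = 8, pacf = TRUE)  # R's MA sign convention
max(abs(closed.form - numerical))               # essentially zero
```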
Exercise 6.11 Use Equation (6.2.8), page 114, to find the (theoretical) partial autocorrelation function for an AR(2) model in terms of φ1 and φ2 and lag k = 1, 2, 3, … .
Trivially, $\phi_{11} = \rho_1 = \phi_1/(1-\phi_2)$. Now, for k = 2, the Yule-Walker equations are
$$\rho_1 = \phi_{21}\rho_0 + \phi_{22}\rho_1, \qquad \rho_2 = \phi_{21}\rho_1 + \phi_{22}\rho_0$$
or, since $\rho_0 = 1$,
$$\rho_1 = \phi_{21} + \phi_{22}\rho_1, \qquad \rho_2 = \phi_{21}\rho_1 + \phi_{22}$$
But we know that an AR(2) process satisfies
$$\rho_1 = \phi_1 + \phi_2\rho_1, \qquad \rho_2 = \phi_1\rho_1 + \phi_2$$
so
it must be that $\phi_{21} = \phi_1$ and $\phi_{22} = \phi_2$. For k = 3, we have
$$\rho_1 = \phi_{31} + \phi_{32}\rho_1 + \phi_{33}\rho_2, \qquad \rho_2 = \phi_{31}\rho_1 + \phi_{32} + \phi_{33}\rho_1, \qquad \rho_3 = \phi_{31}\rho_2 + \phi_{32}\rho_1 + \phi_{33}$$
But we know that an AR(2) process satisfies
$$\rho_1 = \phi_1 + \phi_2\rho_1 + 0\cdot\rho_2, \qquad \rho_2 = \phi_1\rho_1 + \phi_2 + 0\cdot\rho_1, \qquad \rho_3 = \phi_1\rho_2 + \phi_2\rho_1 + 0$$
so we must have $\phi_{33} = 0$. Similarly, $\phi_{kk} = 0$ for k > 2.
Exercise 6.12 From a time series of 100 observations, we calculate r1 = −0.49, r2 = 0.31, r3 = −0.21, r4 = 0.11, and |rk| < 0.09 for k > 4. On this basis alone, what ARIMA model would we tentatively specify for the series?
Using $2/\sqrt{n} = 2/\sqrt{100} = 0.2$ as a guide, we might consider MA(2) or MA(3) as possibilities. If MA(2) is tentatively assumed, then Equation (6.1.11), page 112, gives
$$\mathrm{Var}(r_3) \approx \left(1 + 2[(-0.49)^2 + (0.31)^2]\right)/100 = 0.016724$$
so that $r_3/\sqrt{\mathrm{Var}(r_3)} \approx -0.21/\sqrt{0.016724} = -1.62$ and MA(2) is not rejected.
Exercise 6.13 A stationary time series of length 121 produced sample partial autocorrelations of $\hat\phi_{11}$ = 0.8, $\hat\phi_{22}$ = −0.6, $\hat\phi_{33}$ = 0.08, and $\hat\phi_{44}$ = 0.00. Based on this information alone, what model would we tentatively specify for the series?
Here $2/\sqrt{121} = 0.181$, so an AR(2) model should be entertained.
Exercise 6.14 For a series of length 169, we find that r1 = 0.41, r2 = 0.32, r3 = 0.26, r4 = 0.21, and r5 = 0.16. What ARIMA model fits this pattern of autocorrelations?
Note that $r_2/r_1 = 0.78$, $r_3/r_2 = 0.81$, $r_4/r_3 = 0.81$, and $r_5/r_4 = 0.76$, but we do not have $r_k \approx r_1^k$. This would seem to rule out an AR(1) model but support an ARMA(1,1) with $\phi \approx 0.8$.
Exercise 6.15 The sample ACF for a series and its first difference are given in the following table. Here n = 100.

lag           1      2      3      4      5      6
ACF for Yt    0.97   0.97   0.93   0.85   0.80   0.71
ACF for ∇Yt  −0.42   0.18  −0.02   0.07  −0.10  −0.09

Based on this information alone, which ARIMA model(s) would we consider for the series?
The lack of decay in the sample ACF suggests nonstationarity. After differencing, the correlations seem much more reasonable. In particular, under an MA(1) model for the differences, $\mathrm{Var}(r_k) \approx (1 + 2(-0.42)^2)/100 = 0.0135$ for k ≥ 2, and $0.18/\sqrt{0.0135} = 1.55$, so the lag 2 correlation is not significantly different from zero. Therefore, an IMA(1,1) model warrants further consideration.
Exercise 6.16 For a series of length 64, the sample partial autocorrelations are given as:

Lag    1      2      3      4      5
PACF   0.47  −0.34   0.20   0.02  −0.06

Which models should we consider in this case?
Notice that $2/\sqrt{64} = 0.25$ and that all partial autocorrelations from lag 3 on are smaller in magnitude than 0.25. This suggests an AR(2) model for the series.
Exercise 6.17 Consider an AR(1) series of length 100 with φ = 0.7.
(a) Would you be surprised if r1 = 0.6?
For an AR(1) with φ = 0.7 and n = 100, Exhibit (6.1), page 111, shows $\sqrt{\mathrm{Var}(r_1)} \approx 0.71/\sqrt{100} = 0.071$, and r1 = 0.6 is less than two standard deviations from ρ1 = φ = 0.7. We should not be surprised at all.
(b) Would r10 = −0.15 be unusual?
For an AR(1) with φ = 0.7 and n = 100, Exhibit (6.1), page 111, shows $\sqrt{\mathrm{Var}(r_{10})} \approx 1.70/\sqrt{100} = 0.17$. Thus r10 = −0.15 is less than one standard deviation away from its approximate mean of $\rho_{10} = (0.7)^{10} = 0.028$.
Exercise 6.18 Suppose that {Xt} is a stationary AR(1) process with parameter φ but that we can only observe Yt = Xt + Nt, where {Nt} is white noise measurement error independent of {Xt}.
(a) Find the autocorrelation function for the observed process in terms of φ, $\sigma_X^2$, and $\sigma_N^2$.
From the solution to Exercise (2.24), page 23, we know that
$$\mathrm{Corr}(Y_t, Y_{t-k}) = \frac{\mathrm{Corr}(X_t, X_{t-k})}{1 + \sigma_N^2/\sigma_X^2} = c\,\phi^k \qquad \text{for } k \geq 1$$
(b) Which ARIMA model might we specify for {Yt}?
This is the pattern of an ARMA(1,1) model.
Exercise 6.19 The time plots of two series are shown below.
(a) For each of the series, describe r1 using the terms strongly positive, moderately positive, near zero, moderately negative, or strongly negative. Do you need to know the scale of measurement for the series to answer this?
The lag one autocorrelation for Series A will be strongly positive since neighboring points in time are almost universally on the same side of the mean.
The scale is not relevant. The lag one autocorrelation for Series B, on the other hand, will be strongly negative since neighboring points in time are almost universally on opposite sides of the mean.
(b) Repeat part (a) for r2.
The lag two autocorrelation for Series A will also be positive, again since points two apart in time are almost universally on the same side of the mean. The lag two autocorrelation for Series B will be (strongly) positive since points two apart in time are almost universally on the same side of the mean.
[Figures: time plots of Series A and Series B]
Exercise 6.20 Simulate an AR(1) time series with n = 48 and with φ = 0.7.
> set.seed(241357); series=arima.sim(n=48,list(ar=0.7))
(a) Calculate the theoretical autocorrelations at lag 1 and lag 5 for this model.
ρ1 = 0.7 and ρ5 = (0.7)^5 = 0.16807.
(b) Calculate the sample autocorrelations at lag 1 and lag 5 and compare the values with their theoretical values. Use Equations (6.1.5) and (6.1.6), page 111, to quantify the comparisons.
> acf(series,lag.max=5)[1:5]
Autocorrelations of series ‘series’, by lag
1 2 3 4 5
0.768 0.626 0.436 0.318 0.14
The standard error of r1 is $\sqrt{(1-\phi^2)/n} = \sqrt{(1-(0.7)^2)/48} = \sqrt{0.010625} \approx 0.10$ and that of r5 is
$$\sqrt{\frac{1}{n}\,\frac{1+\phi^2}{1-\phi^2}} = \sqrt{\frac{1}{48}\,\frac{1+(0.7)^2}{1-(0.7)^2}} \approx 0.25$$
With these standard errors in mind, the estimates of 0.768 and 0.14 are excellent estimates of 0.7 and 0.16807, respectively.
(c) Repeat part (b) with a new simulation. Describe how the precision of the estimate varies with different samples selected under identical conditions.
(d) If software permits, repeat the simulation of the series and calculation of r1 and r5 many times and form the sampling distributions of r1 and r5. Describe how the precision of the estimate varies with different samples selected under identical conditions. How well does the large-sample variance given in Equation (6.1.5), page 111, approximate the variance in your sampling distribution?
> set.seed(132435); r1=rep(NA,10000); r5=r1 # We are doing 10,000 replications.
> for (k in 1:10000) {series=arima.sim(n=48, list(ar=0.7))
> ;r1[k]=acf(series,lag.max=1,plot=F)$acf
> ;r5[k]=acf(series,lag.max=5,plot=F)$acf[5]}
> hist(r1); mean(r1); sd(r1); median(r1)
> hist(r5); mean(r5); sd(r5); median(r5)
For the sampling distribution of r1, the mean is 0.618 (ρ1 = 0.7) and the median is 0.631. This agrees with the observed skewness toward the lower values. The standard deviation in this distribution is 0.11, which agrees well with asymptotic theory (0.10). The sampling distribution of r5 has a mean of 0.033 (ρ5 = 0.168) and a median of 0.032, and this agrees with the near symmetry of this distribution. The standard deviation in this distribution is 0.18, which agrees reasonably well with asymptotic theory (0.25). This exercise illustrates the difficulty of estimating the autocorrelation function of a simple AR(1) series with a sample size of n = 48. You might want to repeat this exercise with a larger n, say, n = 96 or larger. You could also try different values for φ.
[Figures: histograms of the sampling distributions of r1 and r5]
Exercise 6.21 Simulate an MA(1) time series with n = 60 and with θ = 0.5.
> set.seed(6453421); series=arima.sim(n=60,list(ma=-0.5))
(a) Calculate the theoretical autocorrelation at lag 1 for this model.
$\rho_1 = -\theta/(1+\theta^2) = -0.5/(1+(0.5)^2) = -0.4$.
(b) Calculate the sample autocorrelation at lag 1, and compare the value with its theoretical value. Use Exhibit (6.2), page 112, to quantify the comparisons.
> acf(series,lag.max=1)
Autocorrelations of series ‘series’, by lag
1
-0.362
The standard error of r1 is $\sqrt{c_{11}/n} = \sqrt{(1 - 3\rho_1^2 + 4\rho_1^4)/n} = \sqrt{(1 - 3(-0.4)^2 + 4(-0.4)^4)/60} \approx 0.10$. The estimate of −0.362 is well within two standard errors of the true value of −0.4.
(c) Repeat part (b) with a new simulation.
Describe how the precision of the estimate varies with different samples selected under identical conditions.
(d) If software permits, repeat the simulation of the series and calculation of r1 many times and form the sampling distribution of r1. Describe how the precision of the estimate varies with different samples selected under identical conditions. How well does the large-sample variance given in Exhibit (6.2), page 112, approximate the variance in your sampling distribution?
> set.seed(534261); r1=rep(NA,10000) # We are doing 10,000 replications.
> for (k in 1:10000) {series=arima.sim(n=60, list(ma=-0.5))
> ;r1[k]=acf(series,lag.max=1,plot=F)$acf}
> hist(r1); mean(r1); sd(r1); median(r1)
Remember that ρ1 = −0.4. Here the mean of the sampling distribution is −0.390 (median = −0.393) and the standard deviation is 0.100. The large-sample standard deviation given in Exhibit (6.2), page 112, is $0.79/\sqrt{60} = 0.102$, so the large-sample value is an excellent approximation to the one obtained in the sampling distribution.
[Figure: histogram of the sampling distribution of r1]
Exercise 6.22 Simulate an AR(1) time series with n = 48, with
> set.seed(5342310); series=arima.sim(n=48,list(ar=0.9))
(a) φ = 0.9, and calculate the theoretical autocorrelations at lag 1 and lag 5;
ρ1 = 0.9 and ρ5 = (0.9)^5 = 0.59049.
(b) φ = 0.6, and calculate the theoretical autocorrelations at lag 1 and lag 5;
ρ1 = 0.6 and ρ5 = (0.6)^5 = 0.07776.
(c) φ = 0.3, and calculate the theoretical autocorrelations at lag 1 and lag 5.
ρ1 = 0.3 and ρ5 = (0.3)^5 = 0.00243.
(d) For each of the series in parts (a), (b), and (c), calculate the sample autocorrelations at lag 1 and lag 5 and compare the values with their theoretical values. Use Equations (6.1.5) and (6.1.6), page 111, to quantify the comparisons. In general, describe how the precision of the estimate varies with the value of φ.
Case (a) φ = 0.9: Recall that for an AR(1),
$$\mathrm{Var}(r_1) \approx \frac{1-\phi^2}{n} \qquad \text{and, for larger } k, \qquad \mathrm{Var}(r_k) \approx \frac{1}{n}\,\frac{1+\phi^2}{1-\phi^2}$$
For φ = 0.9 these give $\sqrt{\mathrm{Var}(r_1)} \approx 0.06$ and $\sqrt{\mathrm{Var}(r_5)} \approx 0.45$. From the simulated series we obtain
> set.seed(5342310); series=arima.sim(n=48,list(ar=0.9)); acf(series)[1:5]
Autocorrelations of series ‘series’, by lag
1 2 3 4 5
0.862 0.739 0.569 0.420 0.232
The estimate of 0.862 compares well with the true value of ρ1 = 0.9 when the standard error of 0.06 is kept in mind. The estimate of 0.232 is within one standard error of the true value of ρ5 = (0.9)^5 = 0.59049 when the standard error of 0.45 is kept in mind.
Case (b) φ = 0.6: Now $\sqrt{\mathrm{Var}(r_1)} \approx 0.12$ and $\sqrt{\mathrm{Var}(r_5)} \approx 0.21$. From the simulation
> set.seed(5342310); series=arima.sim(n=48,list(ar=0.6)); acf(series)[1:5]
Autocorrelations of series ‘series’, by lag
1 2 3 4 5
0.617 0.388 0.392 0.228 0.191
The estimate of 0.617 compares well with the true value of ρ1 = 0.6 when the standard error is 0.12. The estimate of 0.191 compares well with the true value of ρ5 = 0.07776 when the standard error is 0.21.
Case (c) φ = 0.3: In this case $\sqrt{\mathrm{Var}(r_1)} \approx 0.14$ and $\sqrt{\mathrm{Var}(r_5)} \approx 0.16$. From the simulation
> set.seed(5342310); series=arima.sim(n=48,list(ar=0.3)); acf(series)[1:5]
Autocorrelations of series ‘series’, by lag
1 2 3 4 5
0.188 0.032 0.294 -0.081 0.048
The estimate of 0.188 compares well with the true value of ρ1 = 0.3 when the standard error is 0.14. The estimate of 0.048 compares well with the true value of ρ5 = 0.00243 when the standard error is 0.16.
Exercise 6.23 Simulate an AR(1) time series with φ = 0.6, with
(a) n = 24, and estimate ρ1 = φ = 0.6 with r1;
> set.seed(162534)
> series=arima.sim(model=list(order=c(1,0,0),ar=0.6),n=24); acf(series)
Autocorrelations of series ‘series’, by lag
1
0.459
(b) n = 60, and estimate ρ1 = φ = 0.6 with r1;
> series=arima.sim(model=list(order=c(1,0,0),ar=0.6),n=60); acf(series)
Autocorrelations of series ‘series’, by lag
1
0.385
(c) n = 120, and estimate ρ1 = φ = 0.6 with r1.
> series=arima.sim(model=list(order=c(1,0,0),ar=0.6),n=120); acf(series)
Autocorrelations of series ‘series’, by lag
1
0.627
(d) For each of the series in parts (a), (b), and (c), compare the estimated values with the theoretical value. Use Equation (6.1.5), page 111, to quantify the comparisons. In general, describe how the precision of the estimate varies with the sample size.
Case (a) n = 24: $\sqrt{\mathrm{Var}(r_1)} \approx \sqrt{(1-(0.6)^2)/24} = 0.16$. The estimate of 0.459 is well within two standard errors of the true value of 0.6.
Case (b) n = 60: $\sqrt{\mathrm{Var}(r_1)} \approx \sqrt{(1-(0.6)^2)/60} = 0.10$. Notice that even though the sample size is larger, this series gave a less accurate estimate, 0.385, of ρ1 = φ = 0.6 than the one in part (a). In fact this estimate is more than two standard errors away from the true value of ρ1 = φ = 0.6. However, in general, larger sample sizes give better estimates because the standard errors are smaller.
Case (c) n = 120: $\sqrt{\mathrm{Var}(r_1)} \approx \sqrt{(1-(0.6)^2)/120} = 0.07$. This estimate, 0.627, is the best of the three and has the smallest standard error.
Exercise 6.24 Simulate an MA(1) time series with θ = 0.7, with
(a) n = 24, and estimate ρ1 with r1;
First recall that $\rho_1 = -\theta/(1+\theta^2) = -0.7/(1+(0.7)^2) = -0.47$.
> set.seed(172534); series=arima.sim(n=24,list(ma=-0.7)); acf(series)
Autocorrelations of series ‘series’, by lag
1
-0.595
(b) n = 60, and estimate ρ1 with r1;
> set.seed(172534); series=arima.sim(n=60,list(ma=-0.7)); acf(series)
Autocorrelations of series ‘series’, by lag
1
-0.527
(c) n = 120, and estimate ρ1 with r1.
> set.seed(172534); series=arima.sim(n=120,list(ma=-0.7)); acf(series)
Autocorrelations of series ‘series’, by lag
1
-0.458
(d) For each of the series in parts (a), (b), and (c), compare the estimated values of ρ1 with the theoretical value. Use Exhibit (6.2), page 112, to quantify the comparisons. In general, describe how the precision of the estimate varies with the sample size.
The standard errors are $c_{11}/\sqrt{n} = 0.73/\sqrt{24} \approx 0.15$, $0.73/\sqrt{60} \approx 0.09$, and $0.73/\sqrt{120} \approx 0.07$, respectively. With these particular simulations, the estimates get better with increasing sample size.
Exercise 6.25 Simulate an AR(1) time series of length n = 36 with φ = 0.7.
(a) Calculate and plot the theoretical autocorrelation function for this model. Plot sufficient lags until the correlations are negligible.
> round(ARMAacf(ar=0.7,lag.max=10),digits=3)
0 1 2 3 4 5 6 7 8 9 10
1.000 0.700 0.490 0.343 0.240 0.168 0.118 0.082 0.058 0.040 0.028
> win.graph(width=6.5,height=3,pointsize=8)
> ACF=ARMAacf(ar=0.7,lag.max=10)
> plot(y=ACF[-1],x=1:10,xlab='Lag',ylab='ACF',type='h'); abline(h=0)
(b) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
> set.seed(162534); series=arima.sim(n=36,list(ar=0.7)); acf(series)
The pattern match is not that good, but remember that n = 36.
(c) What are the theoretical partial autocorrelations for this model?
$\phi_{11} = 0.7$ and $\phi_{kk} = 0$ otherwise.
(d) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)? Use the large-sample standard errors reported in Exhibit (6.1), page 111, to quantify your answer.
See the answer for part (b). Here $\sqrt{\mathrm{Var}(r_1)} \approx \sqrt{(1-\phi^2)/n} = \sqrt{(1-(0.7)^2)/36} = 0.12$, so the observed r1 is well within two standard errors of the true value. Similarly for higher order lags.
[Figures: theoretical ACF and sample ACF for the simulated AR(1) series]
(e) Calculate and plot the sample PACF for your simulated series. How well do the values and patterns match the theoretical PACF from part (c)? Use the large-sample standard errors reported on page 115 to quantify your answer.
> pacf(series)
Using the approximate standard errors of 1/√n = 1/√36 ≈ 0.167, the sample PACF matches the theoretical PACF quite well.
Exercise 6.26 Simulate an MA(1) time series of length n = 48 with θ = 0.5.
(a) What are the theoretical autocorrelations for this model?
ρ1 = −θ/(1 + θ²) = −0.5/(1 + 0.5²) = −0.4 is the only nonzero autocorrelation for this model.
(b) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
> set.seed(162534); series=arima.sim(n=48,list(ma=-0.5)); acf(series)
The ACF at lag 1 looks reasonable, but there are "significant" yet spurious correlations at lags 7 and 14 also.
(c) Calculate and plot the theoretical partial autocorrelation function for this model. Plot sufficient lags until the correlations are negligible. (Hint: See Equation (6.2.6), page 114.)
φkk = −θ^k(1 − θ²)/(1 − θ^(2(k+1))) for k ≥ 1.
> theta=0.5; phikk=rep(NA,10)
> for (k in 1:10) {phikk[k]=-(theta^k)*(1-theta^2)/(1-theta^(2*(k+1)))}
> plot(phikk,type='h',ylab='MA(1) PACF',xlab='Lag'); abline(h=0)
(d) Calculate and plot the sample PACF for your simulated series. How well do the values and patterns match the theoretical PACF from part (c)?
> pacf(series)
Only the first two lags match well. However, the approximate standard errors of 1/√n = 1/√48 ≈ 0.14 indicate that only the sample PACF at lag 7 is unexpected.
Exercise 6.27 Simulate an AR(2) time series of length n = 72 with φ1 = 0.7 and φ2 = −0.4.
> set.seed(162534); series=arima.sim(n=72,list(ar=c(0.7,-0.4)))
(a) Calculate and plot the theoretical autocorrelation function for this model. Plot sufficient lags until the correlations are negligible.
> phi1=0.7; phi2=-0.4; ACF=ARMAacf(ar=c(phi1,phi2),lag.max=10)
> plot(y=ACF[-1],x=1:10,xlab='Lag',ylab='ACF',type='h',ylim=c(-0.6,0.6)); abline(h=0)
(b) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
> acf(series)
The lag 1 sample ACF matches well and the "damped sine wave" is somewhat apparent, but the values at large lags do not die out like the theoretical ACF.
(c) What are the theoretical partial autocorrelations for this model?
φ11 = ρ1 = φ1/(1 − φ2) = 0.5, φ22 = φ2 = −0.4, and φkk = 0 for k > 2.
(d) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
This question repeats part (b).
(e) Calculate and plot the sample PACF for your simulated series. How well do the values and patterns match the theoretical PACF from part (c)?
> pacf(series)
This sample PACF matches the theoretical PACF quite well.
Exercise 6.28 Simulate an MA(2) time series of length n = 36 with θ1 = 0.7 and θ2 = −0.4.
> set.seed(162534); series=arima.sim(n=36,list(ma=c(-0.7,0.4)))
(a) What are the theoretical autocorrelations for this model?
ρ1 = (−θ1 + θ1θ2)/(1 + θ1² + θ2²) ≈ −0.594, ρ2 = −θ2/(1 + θ1² + θ2²) ≈ 0.242, and ρk = 0 for k > 2.
> theta1=0.7; theta2=-0.4; ACF=ARMAacf(ma=c(-theta1,-theta2),lag.max=10)
> plot(y=ACF[-1],x=1:10,xlab='Lag',ylab='ACF',type='h',ylim=c(-.6,.6)); abline(h=0)
(b) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
> acf(series)
With this small sample size we only get a reasonably good match at lag 1.
(c) Calculate and plot the theoretical partial autocorrelation function for this model. Plot sufficient lags until the correlations are negligible. (Hint: See Equation (6.2.6), page 114.)
Unfortunately, Equation (6.2.6), page 114, does not apply to an MA(2) model.
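Although there is no MA(2) analogue of Equation (6.2.6) in closed form, the theoretical partial autocorrelations can still be obtained numerically from the MA(2) autocorrelations (ρ1 ≈ −0.594, ρ2 ≈ 0.242, ρk = 0 otherwise) via the Durbin-Levinson recursion. A sketch of that idea in pure Python (illustrative only; `pacf_from_acf` is a hypothetical helper, not a function from the text):

```python
# Partial autocorrelations of the MA(2) model computed numerically from its ACF.
theta1, theta2 = 0.7, -0.4
denom = 1 + theta1**2 + theta2**2
rho = [1.0, (-theta1 + theta1*theta2)/denom, -theta2/denom] + [0.0]*12  # rho_k = 0 for k > 2

def pacf_from_acf(rho, kmax):
    """phi_kk for k = 1..kmax, by the Durbin-Levinson recursion."""
    phi_prev, out = [], []
    for k in range(1, kmax + 1):
        num = rho[k] - sum(phi_prev[j]*rho[k-1-j] for j in range(k-1))
        den = 1 - sum(phi_prev[j]*rho[j+1] for j in range(k-1))
        phikk = num/den
        phi_prev = [phi_prev[j] - phikk*phi_prev[k-2-j] for j in range(k-1)] + [phikk]
        out.append(phikk)
    return out

print([round(p, 3) for p in pacf_from_acf(rho, 5)])  # starts -0.594, -0.170, ...
```

The first value is φ11 = ρ1, and the recursion reproduces the familiar φ22 = (ρ2 − ρ1²)/(1 − ρ1²) at lag 2.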
We can do a very large sample simulation and display the sample PACF to get a good idea of the theoretical PACF for this model.
> series2=arima.sim(n=1000,list(ma=c(-0.7,0.4))); pacf(series2,ci.col=NULL)
(d) Calculate and plot the sample PACF for your simulated series. How well do the values and patterns match the theoretical PACF from part (c)?
> pacf(series)
The first four partials match the theoretical pattern remarkably well, especially given the small sample size of 36.
Exercise 6.29 Simulate a mixed ARMA(1,1) model of length n = 60 with φ = 0.4 and θ = 0.6.
> set.seed(762534); series=arima.sim(n=60,list(ar=0.4,ma=-0.6))
(a) Calculate and plot the theoretical autocorrelation function for this model. Plot sufficient lags until the correlations are negligible.
> phi=0.4; theta=0.6; ACF=ARMAacf(ar=phi,ma=-theta,lag.max=10)
> plot(y=ACF[-1],x=1:10,xlab='Lag',ylab='ACF',type='h',ylim=c(-.2,.2)); abline(h=0)
(b) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
> acf(series)
The pattern matches somewhat at the first few lags, but there is a lot of spurious autocorrelation at higher lags.
(c) Calculate and interpret the sample EACF for this series. Does the EACF help you specify the correct orders for the model?
> eacf(series)
AR/MA 0 1 2 3 4 5 6 7 8 9 10 11 12 13
 0    x o o o o o o o o o  o  o  o  o
 1    x x o o o o o o o o  o  o  o  o
 2    x x o o o o o o o o  o  o  o  o
 3    x x x o o o o o o o  o  o  o  o
 4    x o x o o o o o o o  o  o  o  o
 5    o o o o o o o o o o  o  o  o  o
 6    x x o o o o o o o o  o  o  o  o
 7    o x o o o o o o o o  o  o  o  o
This sample EACF seems to point to an MA(1) model rather than the mixed ARMA(1,1).
(d) Repeat parts (b) and (c) with a new simulation using the same parameter values and sample size.
(e) Repeat parts (b) and (c) with a new simulation using the same parameter values but sample size n = 36.
(f) Repeat parts (b) and (c) with a new simulation using the same parameter values but sample size n = 120.
Exercise 6.30 Simulate a mixed ARMA(1,1) model of length n = 100 with φ = 0.8 and θ = 0.4.
> set.seed(325346); series=arima.sim(n=100,list(ar=0.8,ma=-0.4))
(a) Calculate and plot the theoretical autocorrelation function for this model. Plot sufficient lags until the correlations are negligible.
> phi=0.8; theta=0.4; ACF=ARMAacf(ar=phi,ma=-theta,lag.max=20)
> plot(y=ACF[-1],x=1:20,xlab='Lag',ylab='ACF',type='h',ylim=c(-.2,.6)); abline(h=0)
(b) Calculate and plot the sample ACF for your simulated series. How well do the values and patterns match the theoretical ACF from part (a)?
> acf(series)
The sample ACF generally matches the pattern of the theoretical ACF for the first 10 or so lags but, as is quite typical, it displays spurious autocorrelation at higher lags.
(c) Calculate and interpret the sample EACF for this series. Does the EACF help you specify the correct orders for the model?
> eacf(series)
AR/MA 0 1 2 3 4 5 6 7 8 9 10 11 12 13
 0    x x x x x x x x o o  o  o  o  o
 1    x o o o o o o o o o  o  o  o  o
 2    x x o o o o o o o o  o  o  o  o
 3    o o o o o o o o o o  o  o  o  o
 4    x o o o o o o o o o  o  o  o  o
 5    x x o x o o o o o o  o  o  o  o
 6    o x x x o o o o o o  o  o  o  o
 7    x o x x o o o o o o  o  o  o  o
This sample EACF points to the mixed ARMA(1,1) quite well.
(d) Repeat parts (b) and (c) with a new simulation using the same parameter values and sample size.
(e) Repeat parts (b) and (c) with a new simulation using the same parameter values but sample size n = 48.
(f) Repeat parts (b) and (c) with a new simulation using the same parameter values but sample size n = 200.
Exercise 6.31 Simulate a nonstationary time series with n = 60 according to the model ARIMA(0,1,1) with θ = 0.8. (Note: This is a better exercise if you use θ = −0.8.)
> set.seed(15243); series=arima.sim(n=60,list(order=c(0,1,1),ma=-0.8))[-1]
(a) Perform the (augmented) Dickey-Fuller test on the series with k = 0 in Equation (6.4.1), page 128. (With k = 0, this is the Dickey-Fuller test and is not augmented.) Comment on the results.
> library(uroot); ADF.test(series,selectlags=list(Pmax=0),itsd=c(1,0,0))
  Augmented Dickey & Fuller test
  Null hypothesis: Unit root.
  Alternative hypothesis: Stationarity.
  ADF statistic:
          Estimate Std. Error t value Pr(>|t|)
  adf.reg   -0.782      0.129   -6.04     0.01
  Lag orders: 0
  Number of available observations: 60
Warning message:
In interpolpval(code = code, stat = adfreg[, 3], N = N) :
  p-value is smaller than printed p-value
The Dickey-Fuller test rejects nonstationarity (a unit root). This is a Type I error, but the series does not look very nonstationary!
(b) Perform the augmented Dickey-Fuller test on the series with k chosen by the software—that is, the "best" value for k. Comment on the results.
> ar(diff(series))
Call: ar(x = diff(series))
Coefficients:
      1        2        3
-0.7299  -0.5195  -0.3835
Order selected 3  sigma^2 estimated as 0.8227
The selected order is 3. This will be used next in the augmented Dickey-Fuller test.
> ADF.test(series,selectlags=list(mode=c(1,2,3)),itsd=c(1,0,0))
  Augmented Dickey & Fuller test
  Null hypothesis: Unit root.
  Alternative hypothesis: Stationarity.
  ADF statistic:
          Estimate Std. Error t value Pr(>|t|)
  adf.reg   -0.414      0.208   -1.99      0.1
  Lag orders: 1 2 3
  Number of available observations: 57
Warning message:
In interpolpval(code = code, stat = adfreg[, 3], N = N) :
  p-value is greater than printed p-value
Now the augmented Dickey-Fuller test does not reject nonstationarity (a unit root).
(c) Repeat parts (a) and (b) but use the differences of the simulated series. Comment on the results. (Here, of course, you should reject the unit root hypothesis.)
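The regression behind the k = 0 test in Equation (6.4.1) is easy to sketch directly: regress ΔYt on Y(t−1) (with an intercept) and look at the coefficient on Y(t−1), which estimates φ − 1 and stays near zero under a unit root. An illustrative Python version on simulated data (this is not the uroot implementation, `df_slope` is a hypothetical helper, and no Dickey-Fuller critical values are computed):

```python
import random

def df_slope(y):
    """OLS slope of diff(y) on lagged y (with intercept): estimates phi - 1."""
    x = y[:-1]
    d = [y[t+1] - y[t] for t in range(len(y) - 1)]
    mx, md = sum(x)/len(x), sum(d)/len(d)
    sxx = sum((xi - mx)**2 for xi in x)
    sxd = sum((xi - mx)*(di - md) for xi, di in zip(x, d))
    return sxd/sxx

random.seed(1)
e = [random.gauss(0, 1) for _ in range(500)]
walk, ar1 = [0.0], [0.0]
for et in e:
    walk.append(walk[-1] + et)    # unit root: phi = 1
    ar1.append(0.5*ar1[-1] + et)  # stationary: phi = 0.5
print(round(df_slope(walk), 3), round(df_slope(ar1), 3))
```

For the random walk the slope sits near zero, while for the stationary AR(1) it sits near φ − 1 = −0.5; the formal test then compares the slope's t-ratio against the (nonstandard) Dickey-Fuller distribution.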
> ADF.test(diff(series),selectlags=list(Pmax=0),itsd=c(1,0,0))
  Augmented Dickey & Fuller test
  Null hypothesis: Unit root.
  Alternative hypothesis: Stationarity.
  ADF statistic:
          Estimate Std. Error t value Pr(>|t|)
  adf.reg   -1.489      0.116 -12.866     0.01
  Lag orders: 0
  Number of available observations: 59
Warning message:
In interpolpval(code = code, stat = adfreg[, 3], N = N) :
  p-value is smaller than printed p-value
We (correctly) reject the unit root hypothesis for the differenced series.
> ar(diff(diff(series))) # order 5 selected
Call: ar(x = diff(diff(series)))
Coefficients:
      1        2        3        4        5
-1.4026  -1.4281  -1.2405  -0.7083  -0.2737
Order selected 5  sigma^2 estimated as 1.205
Order 5 is selected.
> ADF.test(diff(series),selectlags=list(mode=c(1,2,3,4,5)),itsd=c(1,0,0))
  Augmented Dickey & Fuller test
  Null hypothesis: Unit root.
  Alternative hypothesis: Stationarity.
  ADF statistic:
          Estimate Std. Error t value Pr(>|t|)
  adf.reg   -2.678      0.869  -3.083    0.036
  Lag orders: 1 2 3 4 5
  Number of available observations: 54
We (correctly) reject the unit root at the usual significance levels.
Exercise 6.32 Simulate a stationary time series of length n = 36 according to an AR(1) model with φ = 0.95. This model is stationary, but just barely so. With such a series and a short history, it will be difficult if not impossible to distinguish between stationarity and nonstationarity with a unit root.
> set.seed(274135); series=arima.sim(n=36,list(ar=0.95))
(a) Plot the series and calculate the sample ACF and PACF and describe what you see.
> win.graph(width=6.5,height=2.5,pointsize=8)
> plot(series,type='o')
> win.graph(width=3.25,height=2.5,pointsize=8); acf(series); pacf(series)
The ACF and PACF graphs would lead us to at least entertain an AR(1) model. However, the "upward trend" in the time series plot suggests nonstationarity of some kind.
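The difficulty in Exercise 6.32 is easy to quantify: with φ = 0.95 the theoretical ACF ρk = 0.95^k decays very slowly, so over the handful of lags visible in a series of length 36 it is nearly indistinguishable from the non-decaying ACF of a unit-root process. A quick illustration (Python):

```python
phi = 0.95
# Theoretical AR(1) autocorrelations at the first few lags
for k in (1, 5, 10, 15):
    print(k, round(phi**k, 3))
# Even at lag 10 the correlation is still about 0.6. With n = 36, the
# sample-ACF standard errors are on the order of 1/sqrt(36) ~ 0.17, so
# this slow decay is hard to tell apart from a random walk's lack of decay.
```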
(b) Perform the (augmented) Dickey-Fuller test on the series with k = 0 in Equation (6.4.1), page 128. (With k = 0 this is the Dickey-Fuller test and is not augmented.) Comment on the results.
> library(uroot); ADF.test(series,selectlags=list(Pmax=0),itsd=c(1,0,0))
  Augmented Dickey & Fuller test
  Null hypothesis: Unit root.
  Alternative hypothesis: Stationarity.
  ADF statistic:
          Estimate Std. Error t value Pr(>|t|)
  adf.reg   -0.323      0.124  -2.597      0.1
  Lag orders: 0
  Number of available observations: 35
The Dickey-Fuller test results suggest that we consider a nonstationary model for these data.
(c) Perform the augmented Dickey-Fuller test on the series with k chosen by the software—that is, the "best" value for k. Comment on the results.
> ar(diff(series)) # order 2 is selected as "best" for the differenced series
> ADF.test(series,selectlags=list(mode=c(1,2)),itsd=c(1,0,0))
  Augmented Dickey & Fuller test
  Null hypothesis: Unit root.
  Alternative hypothesis: Stationarity.
  ADF statistic:
          Estimate Std. Error t value Pr(>|t|)
  adf.reg   -0.335      0.155  -2.156      0.1
  Lag orders: 1 2
  Number of available observations: 33
The augmented Dickey-Fuller test also suggests a unit root for this series.
(d) Repeat parts (a), (b), and (c) but with a new simulation with n = 100.
Exercise 6.33 The data file named deere1 contains 82 consecutive values for the amount of deviation (in 0.000025 inch units) from a specified target value that an industrial machining process at Deere & Co. produced under certain specified operating conditions.
(a) Display the time series plot of this series and comment on any unusual points.
> data(deere1); plot(deere1,type='o',ylab='Deviation')
Except for one point of 30 at t = 27, the process seems relatively stable and stationary.
(b) Calculate the sample ACF for this series and comment on the results.
> acf(deere1)
The graph indicates a statistically significant autocorrelation at lag 2.
(c) Now replace the unusual value by a much more typical value and recalculate the sample ACF. Comment on the change from what you saw in part (b).
> deere1[27]=8; acf(deere1)
We replaced the unusual value of 30 at time 27 with the next largest value of 8. This had only a small effect on the sample autocorrelation function.
(d) Calculate the sample PACF based on the revised series that you used in part (c). What model would you specify for the revised series? (Later we will investigate other ways to handle outliers in time series modeling.)
> pacf(deere1)
This PACF suggests an AR(2) model for the series.
Exercise 6.34 The data file named deere2 contains 102 consecutive values for the amount of deviation (in 0.0000025 inch units) from a specified target value that another industrial machining process produced at Deere & Co.
(a) Display the time series plot of this series and comment on its appearance. Would a stationary model seem to be appropriate?
> data(deere2); plot(deere2,type='o',ylab='Deviation')
There are some unusual observations at the beginning of this series. These are possibly some startup effects. After the startup, the series might well be considered stationary.
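Returning to the outlier replacement in Exercise 6.33(c): a single extreme value affects r1 mainly by inflating the denominator Σ(Yt − Ȳ)², which shrinks the sample autocorrelations toward zero, and how much it matters depends on the spike's size relative to the series. A deterministic toy illustration (Python, made-up data, not the deere1 series):

```python
def r1(x):
    """Lag-1 sample autocorrelation."""
    n = len(x)
    m = sum(x)/n
    num = sum((x[t] - m)*(x[t+1] - m) for t in range(n - 1))
    den = sum((xi - m)**2 for xi in x)
    return num/den

clean = [0, 1]*20                    # strongly negatively autocorrelated toy series
spiked = clean[:]; spiked[20] = 30   # one large outlier
print(round(r1(clean), 3), round(r1(spiked), 3))  # -0.975 -0.003
```

Here the huge spike all but erases the autocorrelation; the deere1 outlier was mild enough, relative to n = 82 and the series scale, that replacing it changed the ACF only a little.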
(b) Display the sample ACF and PACF for this series and select tentative orders for an ARMA model for the series.
> win.graph(width=3.25,height=3,pointsize=8); acf(deere2); pacf(deere2)
These plots strongly suggest an AR(1) model for this series. An AR(2) might also be tried as an overfit (see Chapter 8) due to the damped sine wave suggested in the ACF (but the PACF does not support this model).
Exercise 6.35 The data file named deere3 contains 57 consecutive measurements recorded from a complex machine tool at Deere & Co. The values given are deviations from a target value in units of ten millionths of an inch. The process employs a control mechanism that resets some of the parameters of the machine tool depending on the magnitude of deviation from target of the last item produced.
(a) Display the time series plot of this series and comment on its appearance. Would a stationary model be appropriate here?
> data(deere3); plot(deere3,type='o',ylab='Deviation')
This plot looks reasonably stationary with the possible exception of the last few observations.
(b) Display the sample ACF and PACF for this series and select tentative orders for an ARMA model for the series.
> win.graph(width=3.25,height=3,pointsize=8); acf(deere3); pacf(deere3)
Based on these displays, we would tentatively specify an AR(1) model for this series.
Exercise 6.36 The data file named robot contains a time series obtained from an industrial robot.
The robot was put through a sequence of maneuvers, and the distance from a desired ending point was recorded in inches. This was repeated 324 times to form the time series.
(a) Display the time series plot of the data. Based on this information, do these data appear to come from a stationary or nonstationary process?
> data(robot); plot(robot,type='o',ylab='Robot End Position')
From this plot we might try a stationary model, but there is also enough "drift" that we might also suspect nonstationarity.
(b) Calculate and plot the sample ACF and PACF for these data. Based on this additional information, do these data appear to come from a stationary or nonstationary process?
> win.graph(width=3.25,height=3,pointsize=8); acf(robot); pacf(robot)
These plots are not especially definitive, but the PACF suggests possibly an AR(3) model for the series.
(c) Calculate and interpret the sample EACF.
> eacf(robot)
AR/MA 0 1 2 3 4 5 6 7 8 9 10 11 12 13
 0    x x x x x x x x x o  x  x  x  x
 1    x o o o o o o o o o  o  o  o  o
 2    x x o o o o o o o o  o  o  o  o
 3    x x o o o o o o o o  o  o  o  o
 4    x x x x o o o o o o  o  o  x  o
 5    x x x o o o o o o o  o  o  x  o
 6    x o o o o x o o o o  o  o  o  o
 7    x o o x o x x o o o  o  o  o  o
The EACF suggests an ARMA(1,1) model.
(d) Use the best subsets ARMA approach to specify a model for these data.
Compare these results with what you discovered in parts (a), (b), and (c).
> plot(armasubsets(y=robot,nar=14,nma=14,y.name='Robot',ar.method='ols'))
The best model here includes a lag 1 AR term but lags 3 and 12 in the MA part of the model.
Exercise 6.37 Calculate and interpret the sample EACF for the logarithms of the Los Angeles rainfall series. The data are in the file named larain. Do the results confirm that the logs are white noise?
> eacf(log(larain))
AR/MA 0 1 2 3 4 5 6 7 8 9 10 11 12 13
 0    o o o o o o o o o o  o  o  o  o
 1    o o o o o o o o o o  o  o  o  o
 2    x x o o o o o o o o  o  o  o  o
 3    x o o o o o o o o o  o  o  o  o
 4    x o o o o o o o o o  o  o  o  o
 5    x x x x x o o o o o  o  o  o  o
 6    x x o o x o o o o o  o  o  o  o
 7    x o x o o o o o o o  o  o  o  o
This EACF suggests that the logarithms of the L.A. annual rainfall follow a white noise process.
Exercise 6.38 Calculate and interpret the sample EACF for the color property time series. The data are in the color file. Does the sample EACF suggest the same model that was specified by looking at the sample PACF?
> eacf(color,ar.max=7,ma.max=9)
AR/MA 0 1 2 3 4 5 6 7 8 9
 0    x o o o o o o o o o
 1    o o o o o o o o o o
 2    o o o o o o o o o o
 3    x o o o o o o o o o
 4    o o o o o o o o o o
 5    x o o o o o o o o o
 6    x o o o o o o o o o
 7    x o o o o o o o o o
This EACF supports an AR(1) model for this series.
Exercise 6.39 The data file named days contains accounting data from the Winegard Co. of Burlington, Iowa. The data are the number of days until Winegard receives payment for 130 consecutive orders from a particular distributor of Winegard products. (The name of the distributor must remain anonymous for confidentiality reasons.)
(a) Plot the time series, and comment on the display. Are there any unusual values?
> plot(days,type='o',ylab='Days Until Payment',xlab='Order')
The values of 55 at order number 63, 49 at order number 106, and 63 at order number 129 look rather unusual relative to the other values.
(b) Calculate the sample ACF and PACF for this series.
> win.graph(width=3.25,height=3,pointsize=8); acf(days); pacf(days)
None of the autocorrelations or partial autocorrelations are statistically significantly different from zero. The series looks like white noise.
(c) Now replace each of the unusual values with a value of 35 days—much more typical values—and repeat the calculation of the sample ACF and PACF. What ARMA model would you specify for this series after removing the outliers? (Later we will investigate other ways to handle outliers in time series modeling.)
> daysmod=days; daysmod[63]=35; daysmod[106]=35; daysmod[129]=35
> acf(daysmod); pacf(daysmod)
After replacing the outliers, we see several significant autocorrelations and partial autocorrelations. No clear-cut pattern has emerged, but we would certainly want to try both MA(2) and AR(2) models and see how they fit.
CHAPTER 7
Exercise 7.1 From a series of length 100, we have computed r1 = 0.8, r2 = 0.5, r3 = 0.4, Ȳ = 2, and a sample variance of 5. If we assume that an AR(2) model with a constant term is appropriate, how can we get (simple) estimates of φ1, φ2, θ0, and σe²?
Using φ̂1 = r1(1 − r2)/(1 − r1²) and φ̂2 = (r2 − r1²)/(1 − r1²), we have φ̂1 = 0.8(1 − 0.5)/(1 − 0.8²) = 1.11 and φ̂2 = (0.5 − 0.8²)/(1 − 0.8²) = −0.389. Then from θ0 = μ(1 − φ1 − φ2) we have θ̂0 = Ȳ(1 − φ̂1 − φ̂2) = 2(1 − (1.11 − 0.389)) = 0.558. Finally, σ̂e² = (1 − φ̂1r1 − φ̂2r2)s² = [1 − (1.11)(0.8) − (−0.389)(0.5)](5) = 1.5325.
Exercise 7.2 Assuming that the following data arise from a stationary process, calculate method-of-moments estimates of μ, γ0, and ρ1: 6, 5, 4, 6, 4.
μ̂ = Ȳ = (6 + 5 + 4 + 6 + 4)/5 = 5, γ̂0 = [(6 − 5)² + (5 − 5)² + (4 − 5)² + (6 − 5)² + (4 − 5)²]/(5 − 1) = 1, and ρ̂1 = r1 = [(6 − 5)(5 − 5) + (5 − 5)(4 − 5) + (4 − 5)(6 − 5) + (6 − 5)(4 − 5)]/4 = −1/2.
Exercise 7.3 If {Yt} satisfies an AR(1) model with φ of about 0.7, how long a series do we need to estimate φ = ρ1 with 95% confidence that our estimation error is no more than ±0.1?
The (large-sample) standard error of r1 is √[(1 − φ²)/n] = √(1 − 0.7²)/√n = 0.7141/√n. Solving 2(0.7141/√n) = 0.1 yields n ≈ 204.
Exercise 7.4 Consider an MA(1) process for which it is known that the process mean is zero. Based on a series of length n = 3, we observe Y1 = 0, Y2 = −1, and Y3 = ½.
(a) Show that the conditional least-squares estimate of θ is ½.
Using Equation (7.2.14), page 157, we have e1 = Y1 = 0, e2 = Y2 + θe1 = −1, and e3 = Y3 + θe2 = ½ + (−1)θ = ½ − θ. So Sc(θ) = Σet² = 0² + (−1)² + (½ − θ)², and, by inspection, this is minimized when θ = ½.
(b) Find an estimate of the noise variance. (Hint: Iterative methods are not needed in this simple case.)
σ̂e² = Sc(θ̂)/(n − 1) = [0 + 1 + 0]/(3 − 1) = ½. Note: Since the mean is known to be zero, one might reasonably argue that the divisor should be n rather than n − 1.
Exercise 7.5 Given the data Y1 = 10, Y2 = 9, and Y3 = 9.5, we wish to fit an IMA(1,1) model without a constant term.
(a) Find the conditional least squares estimate of θ. (Hint: Do Exercise 7.4 first.)
After computing first differences, we have data just like in Exercise 7.4. Since fitting an IMA(1,1) model with no constant term to the original data is equivalent to fitting an MA(1) with zero mean to these differences, the answers will be the same as in the previous exercise: θ̂ = ½.
(b) Estimate σe².
σ̂e² = ½, as in Exercise 7.4(b).
Exercise 7.6 Consider two different parameterizations of the AR(1) process with nonzero mean:
Model I. Yt − μ = φ(Yt−1 − μ) + et.
Model II. Yt = φYt−1 + θ0 + et.
We want to estimate φ and μ or φ and θ0 using conditional least squares conditional on Y1. Show that with Model I we are led to solve nonlinear equations to obtain the estimates, while with Model II we need only solve linear equations.
Rewriting Model I as Yt = μ(1 − φ) + φYt−1 + et, we see that it is not linear in the parameters μ and φ. Thus, the equations obtained from setting the partial derivatives of Sc(φ, μ) = Σ[Yt − μ(1 − φ) − φYt−1]² to zero will not be linear equations. Model II is linear in the parameters, and ordinary least squares regression may be used to obtain the conditional least squares estimates.
Exercise 7.7 Verify Equation (7.1.4), page 150.
Rewriting r1 = −θ/(1 + θ²), we have the quadratic equation in θ: r1θ² + θ + r1 = 0. The two solutions are θ = [−1 ± √(1 − 4r1²)]/(2r1). We claim that θ̂ = [−1 + √(1 − 4r1²)]/(2r1) is always the invertible one. We consider four cases:
Case 1: 0 < r1 < 0.5. Then θ̂ < 1 ⇔ √(1 − 4r1²) < 1 + 2r1 ⇔ 1 − 4r1² < 4r1² + 4r1 + 1, which is clearly true.
Case 2: 0 < r1 < 0.5. Then θ̂ > −1 ⇔ √(1 − 4r1²) > 1 − 2r1 ⇔ 1 − 4r1² > 1 − 4r1 + 4r1² ⇔ 4r1 > 8r1², which is true since r1 < 0.5.
Case 3: −0.5 < r1 < 0. Then θ̂ < 1 ⇔ √(1 − 4r1²) > 1 + 2r1 ⇔ 1 − 4r1² > 1 + 4r1 + 4r1² ⇔ −4r1 > 8r1², which is true since −r1 > 2r1² when |r1| < 0.5.
Case 4: −0.5 < r1 < 0. Then θ̂ > −1 ⇔ √(1 − 4r1²) < 1 − 2r1 ⇔ 1 − 4r1² < 1 − 4r1 + 4r1² ⇔ 4r1 < 8r1², which is true since the left-hand side is negative for r1 < 0.
Exercise 7.8 Consider an ARMA(1,1) model with φ = 0.5 and θ = 0.45.
(a) For n = 48, evaluate the variances and correlation of the maximum likelihood estimators of φ and θ using Equations (7.4.13) on page 161. Comment on the results.
Var(φ̂) ≈ [(1 − φ²)/n][(1 − φθ)/(φ − θ)]² = [(1 − 0.5²)/48][(1 − (0.5)(0.45))/(0.5 − 0.45)]² ≈ 3.75, with standard error √3.75 ≈ 1.94. Similarly, Var(θ̂) ≈ [(1 − θ²)/n][(1 − φθ)/(φ − θ)]² = [(1 − 0.45²)/48][(1 − 0.225)/0.05]² ≈ 3.99, with standard error √3.99 ≈ 2.00. The standard errors are quite large relative to the quantities being estimated. This is because of the near cancellation of the AR and MA parameters; this is a rather unstable model approaching white noise. Furthermore, Corr(φ̂, θ̂) ≈ √[(1 − φ²)(1 − θ²)]/(1 − φθ) = √[(1 − 0.5²)(1 − 0.45²)]/(1 − 0.225) ≈ 0.998. The estimates are very highly correlated.
(b) Repeat part (a) but now with n = 120. Comment on the new results.
Var(φ̂) ≈ [(1 − 0.5²)/120][(1 − 0.225)/0.05]² ≈ 1.50, with standard error √1.50 ≈ 1.22, and Var(θ̂) ≈ [(1 − 0.45²)/120][(1 − 0.225)/0.05]² ≈ 1.60, with standard error √1.60 ≈ 1.26. The standard errors are only slightly smaller. Corr(φ̂, θ̂) ≈ 0.998 does not change with n!
Exercise 7.9 Simulate an MA(1) series with θ = 0.8 and n = 48.
> set.seed(15234); series=arima.sim(n=48,list(ma=-0.8))
(a) Find the method-of-moments estimate of θ.
> # Below is a function that computes the method-of-moments estimator of
> # the MA(1) coefficient of an MA(1) model.
> estimate.ma1.mom=function(x){r=acf(x,plot=F)$acf[1]; if (abs(r)<0.5)
+ return((-1+sqrt(1-4*r^2))/(2*r)) else return(NA)}
> estimate.ma1.mom(series)
theta hat = 0.685599
(b) Find the conditional least squares estimate of θ and compare it with part (a).
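The estimator in part (a) simply inverts r1 = −θ/(1 + θ²), keeping the invertible root established in Exercise 7.7. The same function in Python (illustrative; `ma1_mom` is a hypothetical helper, not from the text):

```python
def ma1_mom(r1):
    """Method-of-moments estimate of theta, in the book's sign convention
    rho1 = -theta/(1 + theta^2). Returns None when |r1| >= 0.5, since then
    no real invertible MA(1) solution exists."""
    if r1 == 0:
        return 0.0                 # theta = 0 when r1 = 0
    if abs(r1) >= 0.5:
        return None
    return (-1 + (1 - 4*r1**2)**0.5) / (2*r1)

print(ma1_mom(-0.4))   # recovers theta = 0.5 (rho1 of an MA(1) with theta = 0.5)
print(ma1_mom(0.6))    # None: |r1| >= 0.5 is impossible for an MA(1)
```

Applied to r1 = −0.4 this recovers θ = 0.5, i.e., Exercise 6.26(a) in reverse.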
> arima(series,order=c(0,0,1),method='CSS')
Call: arima(x = series, order = c(0, 0, 1), method = "CSS")
Coefficients:
         ma1  intercept
     -0.7529     0.0109
s.e.  0.1089     0.0422
sigma^2 estimated as 1.247:  part log likelihood = -73.41
With our sign convention, the estimate of θ is +0.7529. This estimate is closer to the truth than the method-of-moments estimate in part (a).
(c) Find the maximum likelihood estimate of θ and compare it with parts (a) and (b).
> arima(series,order=c(0,0,1),method='ML')
Call: arima(x = series, order = c(0, 0, 1), method = "ML")
Coefficients:
         ma1  intercept
     -0.7700     0.0033
s.e.  0.1183     0.0405
sigma^2 estimated as 1.226:  log likelihood = -73.45,  aic = 150.9
With our sign convention, the estimate of θ is +0.7700, and this is the best of the three estimates for this particular simulation.
(d) Repeat parts (a), (b), and (c) with a new simulated series using the same parameters and same sample size. Compare your results with your results from the first simulation.
Exercise 7.10 Simulate an MA(1) series with θ = −0.6 and n = 36.
> set.seed(135246); series=arima.sim(n=36,list(ma=0.6))
(a) Find the method-of-moments estimate of θ.
> estimate.ma1.mom(series)
theta hat = -0.7510313
(b) Find the conditional least squares estimate of θ and compare it with part (a).
> arima(series,order=c(0,0,1),method='CSS')
Call: arima(x = series, order = c(0, 0, 1), method = "CSS")
Coefficients:
        ma1  intercept
     0.9227     0.3786
s.e. 0.0618     0.2821
sigma^2 estimated as 0.8075:  part log likelihood = -47.23
The conditional least squares estimate is −0.9227 (our sign convention), which in this case is worse than the method-of-moments estimate of −0.7510313.
(c) Find the maximum likelihood estimate of θ and compare it with parts (a) and (b).
> arima(series,order=c(0,0,1),method='ML')
Call: arima(x = series, order = c(0, 0, 1), method = "ML")
Coefficients:
        ma1  intercept
     0.9248     0.4390
s.e. 0.0922     0.2831
sigma^2 estimated as 0.7993:  log likelihood = -48.01,  aic = 100.03
The maximum likelihood estimate is quite similar to the CSS estimate and, again, the MOM estimate is better in this case.
(d) Repeat parts (a), (b), and (c) with a new simulated series using the same parameters and same sample size. Compare your results with your results from the first simulation.
Exercise 7.11 Simulate an MA(1) series with θ = −0.6 and n = 48.
> set.seed(1352); series=arima.sim(n=48,list(ma=0.6))
(a) Find the maximum likelihood estimate of θ.
> arima(series,order=c(0,0,1),method='ML')$coef
      ma1
0.5081146
(Recall that our sign convention is opposite that of the R software.)
(b) If your software permits, repeat part (a) many times with a new simulated series using the same parameters and same sample size.
> set.seed(1352); thetahat=rep(NA,1000)
> for (k in 1:1000) {series=arima.sim(n=48,list(ma=0.6));
+ thetahat[k]=-arima(series,order=c(0,0,1),method='ML')$coef[1]}
(c) Form the sampling distribution of the maximum likelihood estimates of θ.
> hist(thetahat,xlab='Theta Hat')
The distribution is roughly normal but skewed a little toward zero.
(d) Are the estimates (approximately) unbiased?
> mean(thetahat)
The mean of the sampling distribution is −0.6042001, so the estimate is nearly unbiased.
(e) Calculate the variance of your sampling distribution and compare it with the large-sample result in Equation (7.4.11), page 161.
> sd(thetahat)^2
The sample variance from the 1000 replications is 0.02080749. Compare this to the large-sample result of Var(θ̂) ≈ (1 − θ²)/n = (1 − (−0.6)²)/48 ≈ 0.013. Things are somewhat better in standard deviation terms: the comparison is 0.144 for the sampling distribution versus 0.115 from the large-sample theory.
Exercise 7.12 Repeat Exercise 7.11 using a sample size of n = 120. Simulate an MA(1) series with θ = −0.6 and n = 120.
> set.seed(1352); series=arima.sim(n=120,list(ma=0.6))
(a) Find the maximum likelihood estimate of θ.
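The large-sample variance from Equation (7.4.11), Var(θ̂) ≈ (1 − θ²)/n, which was used in part (e) of Exercise 7.11 and is used again in part (e) below, evaluates quickly (Python, for illustration):

```python
# Large-sample variance and standard deviation of the MA(1) ML estimator
theta = -0.6
for n in (48, 120):
    var = (1 - theta**2) / n
    print(n, round(var, 4), round(var**0.5, 3))
# n = 48 gives variance ~0.0133 (sd ~0.115); n = 120 gives ~0.0053 (sd ~0.073)
```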
> arima(series,order=c(0,0,1)) # Maximum likelihood is the default estimation method
Call: arima(x = series, order = c(0, 0, 1), method = "ML")
Coefficients:
         ma1  intercept
      0.5275    -0.0600
s.e.  0.0856     0.1367
sigma^2 estimated as 0.966: log likelihood = -168.36, aic = 340.72
Remembering the sign convention and noting the size of the standard error, we have an excellent estimate of θ in this simulation.
(b) If your software permits, repeat part (a) many times with a new simulated series using the same parameters and same sample size.
> set.seed(1352); thetahat=rep(NA,1000)
> for (k in 1:1000) {series=arima.sim(n=120,list(ma=0.6));
> thetahat[k]=-arima(series,order=c(0,0,1))$coef[1]}
(c) Form the sampling distribution of the maximum likelihood estimates of θ.
> win.graph(width=3,height=3,pointsize=8)
> hist(thetahat,xlab='Theta Hat')
The histogram is roughly symmetric around θ = −0.6.
(d) Are the estimates (approximately) unbiased?
> mean(thetahat)
-0.6012115
The mean of the sampling distribution, −0.6012115, is very close to the true value of θ = −0.6.
(e) Calculate the variance of your sampling distribution and compare it with the large-sample result in Equation (7.4.11), page 161.
> sd(thetahat)^2
0.006199142
The simulated variance compares very favorably with the large-sample result of Var(θ̂) ≈ (1 − θ²)/n = (1 − (−0.6)²)/120 ≈ 0.0053.

Exercise 7.13 Simulate an AR(1) series with φ = 0.8 and n = 48.
> set.seed(4321); series=arima.sim(n=48,list(ar=0.8))
(a) Find the method-of-moments estimate of φ.
> acf(series)$acf[1]
0.8285387
(b) Find the conditional least squares estimate of φ and compare it with part (a).
> arima(series,order=c(1,0,0),method='CSS')$coef
ar1 0.8367125
For this AR(1) series the two estimates are quite close.
(c) Find the maximum likelihood estimate of φ and compare it with parts (a) and (b).
> arima(series,order=c(1,0,0),method='ML')$coef
ar1 0.849501
All three methods produce very similar estimates for this AR(1) series.
(d) Repeat parts (a), (b), and (c) with a new simulated series using the same parameters and same sample size. Compare your results with your results from the first simulation.

Exercise 7.14 Simulate an AR(1) series with φ = −0.5 and n = 60.
> set.seed(35431); series=arima.sim(n=60,list(ar=-0.5))
(a) Find the method-of-moments estimate of φ.
> acf(series)$acf[1]
-0.5469125
(b) Find the conditional least squares estimate of φ and compare it with part (a).
> arima(series,order=c(1,0,0),method='CSS')$coef
ar1 -0.5596042
These two estimates are very close to each other and not far from the true value of −0.5.
(c) Find the maximum likelihood estimate of φ and compare it with parts (a) and (b).
> arima(series,order=c(1,0,0),method='ML')$coef
ar1 -0.5580715
The MLE and CSS estimates are nearly the same.
(d) Repeat parts (a), (b), and (c) with a new simulated series using the same parameters and same sample size. Compare your results with your results from the first simulation.

Exercise 7.15 Simulate an AR(1) series with φ = 0.7 and n = 100.
> set.seed(12352); series=arima.sim(n=100,list(ar=0.7))
(a) Find the maximum likelihood estimate of φ.
> arima(series,order=c(1,0,0),method='ML')$coef
ar1 0.7025283
(b) If your software permits, repeat part (a) many times with a new simulated series using the same parameters and same sample size.
> phihat=rep(NA,1000)
> for (k in 1:1000) {series=arima.sim(n=100,list(ar=0.7));
> phihat[k]=arima(series,order=c(1,0,0),method='ML')$coef[1]}
(c) Form the sampling distribution of the maximum likelihood estimates of φ.
[Histogram of phihat]
The distribution is roughly normal with a little skewness toward the lower values.
(d) Are the estimates (approximately) unbiased?
Yes.
> mean(phihat)
0.6691908
(e) Calculate the variance of your sampling distribution and compare it with the large-sample result in Equation (7.4.9), page 161.
> sd(phihat)
The standard deviation of the simulated sampling distribution is 0.0774, whereas the large-sample result, sqrt[(1 − φ²)/n] = sqrt[(1 − (0.7)²)/100] ≈ 0.0714, gives an excellent approximation.

Exercise 7.16 Simulate an AR(2) series with φ1 = 0.6, φ2 = 0.3, and n = 60.
> set.seed(12345); series=arima.sim(n=60,list(ar=c(0.6,0.3)))
(a) Find the method-of-moments estimates of φ1 and φ2.
> ar(series,aic=F,order.max=2,method='yw') # yw stands for Yule-Walker
Call: ar(x = series, aic = F, order.max = 2, method = "yw")
Coefficients:
     1       2
0.5423  0.3161
Order selected 2  sigma^2 estimated as 1.366
(b) Find the conditional least squares estimates of φ1 and φ2 and compare them with part (a).
> ar(series,aic=F,order.max=2,method='ols') # ols stands for Ordinary Least Squares
Call: ar(x = series, aic = F, order.max = 2, method = "ols")
Coefficients:
     1       2
0.5363  0.3288
Intercept: -0.05011 (0.1497)
Order selected 2  sigma^2 estimated as 1.299
The estimates are very similar to those obtained in part (a).
(c) Find the maximum likelihood estimates of φ1 and φ2 and compare them with parts (a) and (b).
> ar(series,aic=F,order.max=2,method='mle') # mle stands for Maximum Likelihood Estimator
Call: ar(x = series, aic = F, order.max = 2, method = "mle")
Coefficients:
     1       2
0.5330  0.3188
Order selected 2  sigma^2 estimated as 1.265
The estimates are very similar with all three methods.
(d) Repeat parts (a), (b), and (c) with a new simulated series using the same parameters and same sample size. Compare these results to your results from the first simulation.

Exercise 7.17 Simulate an ARMA(1,1) series with φ = 0.7, θ = 0.4, and n = 72.
> set.seed(54321); series=arima.sim(n=72,list(ar=0.7,ma=-0.4))
(a) Find the method-of-moments estimates of φ and θ.
> acf(series)$acf
[1,] 0.549397661
[2,] 0.388131613
So φ̂ = r2/r1 = 0.388131613/0.549397661 = 0.70646754. Recall that r1 = (1 − θφ̂)(φ̂ − θ)/(1 − 2θφ̂ + θ²). The solutions for θ of the quadratic equation implied by this relationship are complex valued, so that no method-of-moments estimate of θ exists for this series.
(b) Find the conditional least squares estimates of φ and θ and compare them with part (a).
> arima(series,order=c(1,0,1),method='CSS')
Call: arima(x = series, order = c(1, 0, 1), method = "CSS")
Coefficients:
         ar1      ma1  intercept
      0.7655  -0.3605    -0.2444
s.e.  0.0961   0.1480     0.3075
sigma^2 estimated as 0.868: part log likelihood = -97.07
The estimate of φ here is larger than the one obtained by the method of moments. However, taking standard errors into account, the two are not significantly different.
(c) Find the maximum likelihood estimates of φ and θ and compare them with parts (a) and (b).
> arima(series,order=c(1,0,1),method='ML')
Call: arima(x = series, order = c(1, 0, 1), method = "ML")
Coefficients:
         ar1      ma1  intercept
      0.7771  -0.3055    -0.0201
s.e.  0.1190   0.1647     0.3410
sigma^2 estimated as 0.9147: log likelihood = -99.2, aic = 204.39
The CSS and ML estimates are very close to each other and easily within two standard errors of their true values.
(d) Repeat parts (a), (b), and (c) with a new simulated series using the same parameters and same sample size. Compare your new results with your results from the first simulation.

Exercise 7.18 Simulate an AR(1) series with φ = 0.6, n = 36 but with error terms from a t-distribution with 3 degrees of freedom.
> set.seed(54321); series=arima.sim(n=36,list(ar=0.6),innov=rt(n=36,df=3))
(a) Display the sample PACF of the series. Is an AR(1) model suggested?
> pacf(series)
[Sample PACF plot]
This PACF suggests an AR(1) model (in spite of the heavy-tailed errors).
(b) Estimate φ from the series and comment on the results.
> acf(series)$acf; arima(series,order=c(1,0,0),method='ML')
The method-of-moments estimate of φ is 0.422392.
Call: arima(x = series, order = c(1, 0, 0), method = "ML")
Coefficients:
         ar1  intercept
      0.4218     0.5067
s.e.  0.1496     0.3866
sigma^2 estimated as 1.871: log likelihood = -62.46, aic = 128.92
The MLE erroneously assumes normal errors but still gives an estimate quite similar to the method of moments.
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.

Exercise 7.19 Simulate an MA(1) series with θ = −0.8, n = 60 but with error terms from a t-distribution with 4 degrees of freedom.
> set.seed(54321); series=arima.sim(n=60,list(ma=0.8),innov=rt(n=60,df=4))
(a) Display the sample ACF of the series. Is an MA(1) model suggested?
> acf(series)
[Sample ACF plot]
In spite of the heavy-tailed errors, this sample ACF strongly suggests an MA(1) model.
(b) Estimate θ from the series and comment on the results.
> estimate.ma1.mom(series); arima(series,order=c(0,0,1),method='ML')
Method of moments estimate of theta = -0.7443132
Call: arima(x = series, order = c(0, 0, 1), method = "ML")
Coefficients:
         ma1  intercept
      0.8170    -0.0471
s.e.  0.1166     0.2853
sigma^2 estimated as 1.500: log likelihood = -97.84, aic = 199.68
Given the (approximate) standard errors of the estimates, the method-of-moments and ML estimates are similar. (Recall that we use the opposite sign for the MA parameter.)
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.

Exercise 7.20 Simulate an AR(2) series with φ1 = 1.0, φ2 = −0.6, n = 48 but with error terms from a t-distribution with 5 degrees of freedom.
> set.seed(54321); series=arima.sim(n=48,list(ar=c(1,-0.6)),innov=rt(n=48,df=5))
(a) Display the sample PACF of the series. Is an AR(2) model suggested?
> pacf(series)
[Sample PACF plot]
The partial ACF gives a clear indication of an AR(2) model.
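Several of these AR(2) exercises rely on ar(..., method='yw'), which solves the Yule-Walker equations r1 = φ1 + φ2·r1 and r2 = φ1·r1 + φ2 at the sample autocorrelations. A minimal Python sketch of that solve (the helper name yw_ar2 is ours, not from the text):

```python
def yw_ar2(r1, r2):
    """Solve the AR(2) Yule-Walker equations for (phi1, phi2), given
    the lag-1 and lag-2 sample autocorrelations r1 and r2:
        r1 = phi1 + phi2*r1
        r2 = phi1*r1 + phi2
    """
    denom = 1.0 - r1 * r1
    phi1 = r1 * (1.0 - r2) / denom
    phi2 = (r2 - r1 * r1) / denom
    return phi1, phi2

# For a true AR(2) with phi1 = 0.6, phi2 = 0.3 (as in Exercise 7.16),
# the theoretical autocorrelations are rho1 = phi1/(1 - phi2) and
# rho2 = phi1*rho1 + phi2, and the solver recovers the parameters.
rho1 = 0.6 / 0.7
rho2 = 0.6 * rho1 + 0.3
phi1, phi2 = yw_ar2(rho1, rho2)
print(round(phi1, 10), round(phi2, 10))  # → 0.6 0.3
```

In practice ar() plugs in sample autocorrelations rather than theoretical ones, which is why the Yule-Walker estimates above (0.5423, 0.3161) differ a little from the true values.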
(b) Estimate φ1 and φ2 from the series and comment on the results.
> ar(series,order.max=2,aic=F,method='yw'); arima(series,order=c(2,0,0),method='ML')
Call: ar(x = series, aic = F, order.max = 2, method = "yw")
Coefficients:
     1        2
0.8742  -0.4982
Order selected 2  sigma^2 estimated as 1.609
Call: arima(x = series, order = c(2, 0, 0), method = "ML")
Coefficients:
         ar1      ar2  intercept
      0.9454  -0.5648     0.0466
s.e.  0.1223   0.1209     0.2727
sigma^2 estimated as 1.348: log likelihood = -75.89, aic = 157.79
In light of the appropriate standard errors, Yule-Walker and (pseudo) maximum likelihood give quite similar estimates of the AR(2) parameters.
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.

Exercise 7.21 Simulate an ARMA(1,1) series with φ = 0.7, θ = −0.6, n = 48 but with error terms from a t-distribution with 6 degrees of freedom.
> set.seed(54321); series=arima.sim(n=48,list(ar=0.7,ma=0.6),innov=rt(n=48,df=6))
(a) Display the sample EACF of the series. Is an ARMA(1,1) model suggested?
> eacf(series)
The sample EACF provides a rather clear indication of an ARMA(1,1) model.
(b) Estimate φ and θ from the series and comment on the results.
> arima(series,order=c(1,0,1))
Call: arima(x = series, order = c(1, 0, 1))
Coefficients:
         ar1     ma1  intercept
      0.6606  0.7156     0.9246
s.e.  0.1240  0.1465     0.8133
sigma^2 estimated as 1.332: log likelihood = -76.02, aic = 158.03
In spite of the nonnormal errors, maximum “likelihood” produces excellent estimates of the φ and θ parameters.
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.

Exercise 7.22 Simulate an AR(1) series with φ = 0.6, n = 36 but with error terms from a chi-square distribution with 6 degrees of freedom.
> set.seed(54321); series=arima.sim(n=36,list(ar=0.6),innov=rchisq(n=36,df=5))
(a) Display the sample PACF of the series. Is an AR(1) model suggested?
> pacf(series)
(Sample EACF output from Exercise 7.21(a):)
AR/MA  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0      x x x o o o o o o o  o  o  o  o
1      x o o o o o o o o o  o  o  o  o
2      x o o o o o o o o o  o  o  o  o
3      x x o o o o o o o o  o  o  o  o
4      o x o o o o o o o o  o  o  o  o
5      o o x x o o o o o o  o  o  o  o
6      x o o o o x o o o o  o  o  o  o
7      x o o o o o o o o o  o  o  o  o
[Sample PACF plot]
The sample PACF gives a clear indication of an AR(1) model even though the error terms are chi-square distributed.
(b) Estimate φ from the series and comment on the results.
> ar(series,order.max=1,aic=F,method='yw')
> arima(series,order=c(1,0,0)) # Maximum likelihood is the default estimation method
Call: ar(x = series, aic = F, order.max = 1, method = "yw")
Coefficients:
     1
0.4097
Order selected 1  sigma^2 estimated as 21.1
Call: arima(x = series, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.4695    13.6086
s.e.  0.1582     1.3442
sigma^2 estimated as 19.19: log likelihood = -104.38, aic = 212.77
The Yule-Walker (method-of-moments) estimate and the (pseudo) maximum likelihood estimate are quite similar, with the ML estimate a little better.
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.

Exercise 7.23 Simulate an MA(1) series with θ = −0.8, n = 60 but with error terms from a chi-square distribution with 7 degrees of freedom.
> set.seed(54321); series=arima.sim(n=60,list(ma=0.8),innov=rchisq(n=60,df=7))
(a) Display the sample ACF of the series. Is an MA(1) model suggested?
> acf(series)
[Sample ACF plot]
The sample ACF points to the MA(1) model quite clearly.
(b) Estimate θ from the series and comment on the results.
> estimate.ma1.mom(series); arima(series,order=c(0,0,1))
Method of moments estimate of theta = -0.8131914
Call: arima(x = series, order = c(0, 0, 1))
Coefficients:
         ma1  intercept
      0.7954    12.6476
s.e.  0.0743     0.9679
sigma^2 estimated as 17.69: log likelihood = -171.83, aic = 347.67
Both method of moments and (pseudo) maximum likelihood give similar, good estimates of θ in this simulation.
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.
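The estimate.ma1.mom calls used throughout (a helper from the book's scripts) find θ by solving r1 = −θ/(1 + θ²) for the invertible root. A Python sketch of the same computation (mom_ma1 is our name for it, not the text's):

```python
import math

def mom_ma1(r1):
    """Method-of-moments estimate of theta for an MA(1) model.

    Solves r1 = -theta / (1 + theta^2), i.e. r1*theta^2 + theta + r1 = 0,
    and returns the invertible root (|theta| <= 1).  No real solution
    exists when |r1| > 0.5, in which case None is returned (as happens
    in Exercise 7.17)."""
    if abs(r1) > 0.5:
        return None  # the quadratic has complex roots
    if r1 == 0:
        return 0.0
    disc = math.sqrt(1.0 - 4.0 * r1 * r1)
    root1 = (-1.0 + disc) / (2.0 * r1)
    root2 = (-1.0 - disc) / (2.0 * r1)
    # the two roots are reciprocals of each other; keep the invertible one
    return root1 if abs(root1) < 1 else root2

# theta = 0.5 implies r1 = -0.5/1.25 = -0.4, and the estimator recovers it
print(round(mom_ma1(-0.4), 6))  # → 0.5
```

Because the roots come in reciprocal pairs, exactly one solution is invertible whenever |r1| < 0.5, which is why the routine can return a single estimate.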
Exercise 7.24 Simulate an AR(2) series with φ1 = 1.0, φ2 = −0.6, n = 48 but with error terms from a chi-square distribution with 8 degrees of freedom.
> set.seed(54321); series=arima.sim(n=48,list(ar=c(1,-0.6)),innov=rchisq(n=48,df=8))
(a) Display the sample PACF of the series. Is an AR(2) model suggested?
> pacf(series)
[Sample PACF plot]
The sample PACF gives a clear indication of the AR(2) model.
(b) Estimate φ1 and φ2 from the series and comment on the results.
> arima(series,order=c(2,0,0))
Call: arima(x = series, order = c(2, 0, 0), method = "ML")
Coefficients:
         ar1      ar2  intercept
      0.9529  -0.6074    13.3780
s.e.  0.1155   0.1129     1.0207
sigma^2 estimated as 21.05: log likelihood = -141.91, aic = 289.82
The (pseudo) maximum likelihood estimates are quite close to the true values.
(c) Repeat parts (a) and (b) with a new simulated series under the same conditions.

Exercise 7.25 Simulate an ARMA(1,1) series with φ = 0.7, θ = −0.6, n = 48 but with error terms from a chi-square distribution with 9 degrees of freedom.
> set.seed(54321); series=arima.sim(n=48,list(ar=0.7,ma=0.6),innov=rchisq(n=48,df=9))
(a) Display the sample EACF of the series. Is an ARMA(1,1) model suggested?
> eacf(series)
The sample EACF gives a clear indication of the mixed ARMA(1,1) model.
(b) Estimate φ and θ from the series and comment on the results.
> arima(series,order=c(1,0,1))
Call: arima(x = series, order = c(1, 0, 1))
Coefficients:
         ar1     ma1  intercept
      0.6413  0.8027    45.8962
s.e.  0.1420  0.1163     3.7751
sigma^2 estimated as 29.10: log likelihood = -150.2, aic = 306.4
Relative to their standard errors, the estimates are not significantly different from their true values.
(c) Repeat parts (a) and (b) with a new series under the same conditions.

Exercise 7.26 Consider the AR(1) model specified for the color property time series displayed in Exhibit (1.3), page 3. The data are in the file named color.
(a) Find the method-of-moments estimate of φ.
> data(color); acf(color)$acf
Method of moments estimate of phi = 0.5282091
(b) Find the maximum likelihood estimate of φ and compare it with part (a).
> arima(color,order=c(1,0,0)) # Maximum likelihood is the default estimation method
Call: arima(x = color, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.5706    74.3293
s.e.  0.1435     1.9151
sigma^2 estimated as 24.83: log likelihood = -106.07, aic = 216.15
Relative to the standard errors, the two methods give similar estimates of φ. See Exhibit (7.7), page 165, for an alternative solution to this exercise.
(Sample EACF output from Exercise 7.25(a):)
AR/MA  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0      x x o o o o o o o o  o  o  o  o
1      x o o o o o o o o o  o  o  o  o
2      x x o o o o o o o o  o  o  o  o
3      x x x o o o o o o o  o  o  o  o
4      x o o o o o o o o o  o  o  o  o
5      o o o o o o o o o o  o  o  o  o
6      o x o o o o o o o o  o  o  o  o
7      x o o o o o o o o o  o  o  o  o

Exercise 7.27 Exhibit (6.31), page 139, suggested specifying either an AR(1) or possibly an AR(4) model for the difference of the logarithms of the oil price series. The data are in the file named oil.price.
(a) Estimate both of these models using maximum likelihood and compare them using the AIC criterion.
> data(oil.price); arima(log(oil.price),order=c(1,1,0))
Call: arima(x = log(oil.price), order = c(1, 1, 0))
Coefficients:
         ar1
      0.2364
s.e.  0.0660
sigma^2 estimated as 0.006787: log likelihood = 258.55, aic = -515.11
> arima(log(oil.price),order=c(4,1,0))
Call: arima(x = log(oil.price), order = c(4, 1, 0))
Coefficients:
         ar1      ar2     ar3      ar4
      0.2673  -0.1550  0.0238  -0.0970
s.e.  0.0669   0.0691  0.0691   0.0681
sigma^2 estimated as 0.006603: log likelihood = 261.82, aic = -515.64
These two models are very similar. The ar3 and ar4 coefficients in the AR(4) model are not significantly different from zero.
(b) Exhibit (6.32), page 140, suggested specifying an MA(1) model for the difference of the logs. Estimate this model by maximum likelihood and compare to your results in part (a).
> arima(log(oil.price),order=c(0,1,1))
Call: arima(x = log(oil.price), order = c(0, 1, 1))
Coefficients:
         ma1
      0.2956
s.e.
0.0693
sigma^2 estimated as 0.006689: log likelihood = 260.29, aic = -518.58
There is actually very little difference among these three models.

Exercise 7.28 The data file named deere3 contains 57 consecutive values from a complex machine tool at Deere & Co. The values given are deviations from a target value in units of ten millionths of an inch. The process employs a control mechanism that resets some of the parameters of the machine tool depending on the magnitude of deviation from target of the last item produced.
(a) Estimate the parameters of an AR(1) model for this series.
> data(deere3); arima(deere3,order=c(1,0,0))
Call: arima(x = deere3, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.5256   124.3524
s.e.  0.1108   394.2320
sigma^2 estimated as 2069354: log likelihood = -495.51, aic = 995.02
The ar1 (= φ̂1) coefficient is significantly different from zero.
(b) Estimate the parameters of an AR(2) model for this series and compare the results with those in part (a).
> arima(deere3,order=c(2,0,0))
Call: arima(x = deere3, order = c(2, 0, 0))
Coefficients:
         ar1     ar2  intercept
      0.5211  0.0083   123.2418
s.e.  0.1310  0.1315   397.5991
sigma^2 estimated as 2069209: log likelihood = -495.51, aic = 997.01
The ar2 (= φ̂2) coefficient is not statistically significant, so the AR(1) model still looks good.

Exercise 7.29 The data file named robot contains a time series obtained from an industrial robot. The robot was put through a sequence of maneuvers, and the distance from a desired ending point was recorded in inches. This was repeated 324 times to form the time series.
(a) Estimate the parameters of an AR(1) model for these data.
> data(robot); arima(robot,order=c(1,0,0))
Call: arima(x = robot, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.3076     0.0015
s.e.  0.0528     0.0002
sigma^2 estimated as 6.482e-06: log likelihood = 1475.54, aic = -2947.08
Notice that both the ar1 and intercept coefficients are statistically significantly different from zero.
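Throughout these solutions, "significantly different from zero" is the rough large-sample rule that the estimate exceeds about two standard errors in magnitude. A minimal Python check applied to coefficients reported above (the helper name significant is ours):

```python
def significant(est, se, z=2.0):
    """Rough large-sample significance check: |estimate| > z * s.e."""
    return abs(est) > z * se

# robot AR(1) fit: ar1 = 0.3076 (s.e. 0.0528), intercept = 0.0015 (s.e. 0.0002)
print(significant(0.3076, 0.0528))  # → True
print(significant(0.0015, 0.0002))  # → True
# deere3 AR(2) fit: ar2 = 0.0083 (s.e. 0.1315) is not significant
print(significant(0.0083, 0.1315))  # → False
```

The choice z = 2 corresponds to an approximate 5% two-sided test under the asymptotic normality of the estimators.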
(b) Estimate the parameters of an IMA(1,1) model for these data.
> arima(robot,order=c(0,1,1))
Call: arima(x = robot, order = c(0, 1, 1))
Coefficients:
          ma1
      -0.8713
s.e.   0.0389
sigma^2 estimated as 6.069e-06: log likelihood = 1480.95, aic = -2959.9
The ma1 coefficient is statistically significantly different from zero.
(c) Compare the results from parts (a) and (b) in terms of AIC.
The nonstationary IMA(1,1) model has a slightly smaller AIC value, but the log likelihoods and AIC values of the two models are very close to each other.

Exercise 7.30 The data file named days contains accounting data from the Winegard Co. of Burlington, Iowa. The data are the number of days until Winegard receives payment for 130 consecutive orders from a particular distributor of Winegard products. (The name of the distributor must remain anonymous for confidentiality reasons.) The time series contains outliers that are quite obvious in the time series plot.
(a) Replace each of the unusual values with a value of 35 days, a much more typical value, and then estimate the parameters of an MA(2) model.
> data(days); daysmod=days
> daysmod[…]=35; daysmod[…]=35; daysmod[…]=35 # replace the three outlying observations
> arima(daysmod,order=c(0,0,2))
Call: arima(x = daysmod, order = c(0, 0, 2))
Coefficients:
         ma1     ma2  intercept
      0.1893  0.1958    28.1957
s.e.  0.0894  0.0740     0.6980
sigma^2 estimated as 33.22: log likelihood = -412.23, aic = 830.45
All three estimated coefficients are significantly different from zero statistically.
(b) Now assume an MA(5) model and estimate the parameters. Compare these results with those obtained in part (a).
> arima(daysmod,order=c(0,0,5))
Call: arima(x = daysmod, order = c(0, 0, 5))
Coefficients:
         ma1     ma2     ma3     ma4      ma5  intercept
      0.1844  0.2680  0.0305  0.1717  -0.0859    28.2351
s.e.  0.0898  0.0929  0.1033  0.0850   0.0932     0.7755
sigma^2 estimated as 32.02: log likelihood = -409.93, aic = 831.86
In this model only the ma1, ma2, and (just barely) ma4 coefficients and the intercept are significantly different from zero statistically.
The AIC of the MA(5) model is actually a little larger than that for the MA(2) model.

Exercise 7.31 Simulate a time series of length n = 48 from an AR(1) model with φ = 0.7. Use that series as if it were real data. Now compare the theoretical asymptotic distribution of the estimator of φ with the distribution of the bootstrap estimator of φ.
> set.seed(54321); series=arima.sim(n=48,list(ar=0.7))
> result = arima(series,order=c(1,0,0))
Call: arima(x = series, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.8846    -0.0489
s.e.  0.0715     1.0160
The s.e. of 0.0715 is based on large-sample theory.
> x=seq(from=0.4,to=1,by=0.01); y=dnorm(x,mean=0.8846,sd=0.0715)
> # Set up the asymptotic distribution.
> set.seed(12345) # Bootstrap Method I
> coefmI=arima.boot(result,cond.boot=T,is.normal=T,B=1000,init=series)
> win.graph(width=4,height=3,pointsize=8)
> hist(coefmI[,1],xlab='phi hat',main='Bootstrap Distribution I',freq=F)
> lines(x,y,type='l')
[Histogram: Bootstrap Distribution I]
The bootstrap distribution is skewed somewhat strongly toward lower values and, of course, the asymptotic normal distribution is symmetric. The asymptotic distribution has significant probability above φ = 1.
> set.seed(12345) # Method II
> coefmII=arima.boot(result,cond.boot=T,is.normal=F,B=1000,init=series)
> hist(coefmII[,1],xlab='phi hat',main='Bootstrap Distribution II')
> lines(x,y,type='l')
[Histogram: Bootstrap Distribution II]
This bootstrap distribution is also skewed strongly toward lower values and, of course, the asymptotic normal distribution is symmetric. The asymptotic distribution has significant probability above φ = 1.

Exercise 7.32 The industrial color property time series was fitted quite well by an AR(1) model. However, the series is rather short, with n = 35. Compare the theoretical asymptotic distribution of the estimator of φ with the distribution of the bootstrap estimator of φ. The data are in the file named color.
> data(color); result=arima(color,order=c(1,0,0)); result
Call: arima(x = color, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.5705    74.3293
s.e.  0.1435     1.9151
sigma^2 estimated as 24.83: log likelihood = -106.07, aic = 216.15
> x=seq(from=0.0,to=0.9,by=0.01); y=dnorm(x,mean=0.5705,sd=0.1435)
> # Set up the asymptotic distribution.
> set.seed(12345) # Bootstrap Method III
> coefmIII=arima.boot(result,cond.boot=F,is.normal=T,ntrans=100,B=1000,init=color)
> hist(coefmIII[,1],xlab='phi hat',main='Bootstrap Distribution III',freq=F)
> lines(x,y,type='l')
[Histogram: Bootstrap Distribution III]
The bootstrap distribution is skewed strongly toward lower values and, of course, the asymptotic normal distribution is symmetric.
> set.seed(12345) # Method IV
> coefmIV=arima.boot(result,cond.boot=F,is.normal=F,ntrans=100,B=1000,init=color)
> hist(coefmIV[,1],xlab='phi hat',main='Bootstrap Distribution IV',freq=F,ylim=c(0,2.8))
> lines(x,y,type='l')
[Histogram: Bootstrap Distribution IV]
This bootstrap distribution is also skewed strongly toward lower values while the asymptotic normal distribution is symmetric.

CHAPTER 8

Exercise 8.1 For an AR(1) model with φ ≈ 0.5 and n = 100, the lag 1 sample autocorrelation of the residuals is 0.5. Should we consider this unusual? Why or why not?
From Equation (8.1.5), page 180, we have Var(r̂1) ≈ φ²/n = (0.5)²/100 = 0.0025, so that we would expect the lag 1 sample autocorrelation of the residuals to be within ±0.1 (two standard deviations of 0.05). The residual autocorrelation of 0.5 is most unusual.

Exercise 8.2 Repeat Exercise 8.1 for an MA(1) model with θ ≈ 0.5 and n = 100.
From Equation (8.1.5), page 180, with θ replacing φ as indicated on page 183, we have Var(r̂1) ≈ θ²/n = (0.5)²/100 = 0.0025, so that again a lag 1 residual autocorrelation of 0.5 is most unusual.

Exercise 8.3 Based on a series of length n = 200, we fit an AR(2) model and obtain residual autocorrelations of r̂1 = 0.13, r̂2 = 0.13, and r̂3 = 0.12.
If φ̂1 = 1.1 and φ̂2 = −0.8, do these residual autocorrelations support the AR(2) specification? Individually? Jointly?
From Equation (8.1.8), page 182, the standard deviation of r̂1 is approximately sqrt(φ2²/n) = sqrt((−0.8)²/200) ≈ 0.057, so that r̂1 = 0.13 is “too large.” From Equation (8.1.9), page 182, the standard deviation of r̂2 is approximately sqrt[(φ2² + φ1²(1 + φ2)²)/n] ≈ 0.059, so that r̂2 = 0.13 is also “too large.” From Equation (8.1.10), page 182, Var(r̂k) ≈ 1/n for k > 2, giving a standard deviation of sqrt(1/200) ≈ 0.071, so that r̂3 = 0.12 is OK.
The Ljung-Box statistic is Q* = n(n + 2)[r̂1²/(n − 1) + r̂2²/(n − 2) + r̂3²/(n − 3)] = 200(202)[(0.13)²/199 + (0.13)²/198 + (0.12)²/197] = 9.83. If the AR(2) specification is correct, then Q* has (approximately) a chi-square distribution with 3 − 2 = 1 degree of freedom. However, Pr[χ²(1) > 9.83] = 0.0017, so these residual autocorrelations are (jointly) too large to support the AR(2) model.

Exercise 8.4 Simulate an AR(1) model with n = 30 and φ = 0.5.
> set.seed(12347); series=arima.sim(n=30,list(ar=0.5))
(a) Fit the correctly specified AR(1) model and look at a time series plot of the residuals. Does the plot support the AR(1) specification?
> model=arima(series,order=c(1,0,0)); win.graph(width=6.5,height=3,pointsize=8)
> plot(rstandard(model),ylab='Standardized Residuals',type='o'); abline(h=0)
[Plot of standardized residuals]
These standardized residuals look fairly “random” with no particular patterns.
(b) Display a normal quantile-quantile plot of the standardized residuals. Does the plot support the AR(1) specification?
> win.graph(width=3,height=3,pointsize=8)
> qqnorm(residuals(model)); qqline(residuals(model))
[Normal Q-Q plot of the residuals]
With a few minor exceptions in the lower tail, the Q-Q plot of the standardized residuals looks reasonably “normal.”
(c) Display the sample ACF of the residuals. Does the plot support the AR(1) specification?
> acf(residuals(model))
[Sample ACF of the residuals]
The sample ACF at lag 4 is the only individual autocorrelation that comes close to being “significant.”
(d) Calculate the Ljung-Box statistic summing to K = 8. Does this statistic support the AR(1) specification?
> LB.test(model,lag=8)
Box-Ljung test
data: residuals from model
X-squared = 11.2399, df = 7, p-value = 0.1285
This test does not reject randomness of the error terms based on the first eight autocorrelations of the residuals.
Note: The tsdiag function will produce a display that answers parts (a), (c), and (d) of this exercise. See below.
> win.graph(width=6.5,height=6,pointsize=8); tsdiag(model)
[tsdiag display: standardized residuals, ACF of residuals, and Ljung-Box p-values]
The bottom display shows the p-values of the Ljung-Box test for a variety of values of the “K” parameter, the highest lag used in the sum. The top display will flag potential outliers, if any, using the Bonferroni criteria.

Exercise 8.5 Simulate an MA(1) model with n = 36 and θ = −0.5.
> set.seed(64231); series=arima.sim(n=36,list(ma=0.5))
(a) Fit the correctly specified MA(1) model and look at a time series plot of the residuals. Does the plot support the MA(1) specification?
> model=arima(series,order=c(0,0,1))
> win.graph(width=6.5,height=3,pointsize=8)
> plot(rstandard(model),ylab='Standardized Residuals',type='o'); abline(h=0)
[Plot of standardized residuals]
The sequence plot of the standardized residuals looks fairly “random.”
(b) Display a normal quantile-quantile plot of the standardized residuals. Does the plot support the MA(1) specification?
> win.graph(width=3,height=3,pointsize=8)
> qqnorm(residuals(model)); qqline(residuals(model))
[Normal Q-Q plot of the residuals]
The Q-Q plot is fairly straight, although there may be some problem with the upper tail; the sample size is quite small. We could do a Shapiro-Wilk test to check further.
> shapiro.test(residuals(model))
Shapiro-Wilk normality test
data: residuals(model)
W = 0.968, p-value = 0.374
We would not reject normality based on these test results.
(c) Display the sample ACF of the residuals. Does the plot support the MA(1) specification?
> win.graph(width=6.5,height=3,pointsize=8); acf(residuals(model))
[Sample ACF of the residuals]
No problem with significant residual autocorrelations in this simulation.
(d) Calculate the Ljung-Box statistic summing to K = 6. Does this statistic support the MA(1) specification?
> LB.test(model,lag=6)
Box-Ljung test
data: residuals from model
X-squared = 6.6437, df = 5, p-value = 0.2485
No problem with large residual autocorrelations jointly out to lag 6.

Exercise 8.6 Simulate an AR(2) model with n = 48, φ1 = 1.5, and φ2 = −0.75.
> set.seed(65423); series=arima.sim(n=48,list(ar=c(1.5,-0.75)))
(a) Fit the correctly specified AR(2) model and look at a time series plot of the residuals. Does the plot support the AR(2) specification?
> model=arima(series,order=c(2,0,0)); win.graph(width=6.5,height=3,pointsize=8)
> plot(rstandard(model),ylab='Standardized Residuals',type='o'); abline(h=0)
[Plot of standardized residuals]
The residuals look “random.”
(b) Display a normal quantile-quantile plot of the standardized residuals. Does the plot support the AR(2) specification?
[Normal Q-Q plot of the residuals]
No problem with normality of the error terms.
(c) Display the sample ACF of the residuals. Does the plot support the AR(2) specification?
> win.graph(width=6.5,height=3,pointsize=8); acf(rstandard(model))
[Sample ACF of the residuals]
There are two residual autocorrelations that are “significant”: at lags 2 and 15.
(d) Calculate the Ljung-Box statistic summing to K = 12. Does this statistic support the AR(2) specification?
> LB.test(model,lag=12)
Box-Ljung test
data: residuals from model
X-squared = 18.7997, df = 10, p-value = 0.04288
Based on this test, we would reject the assumption of independent error terms at the 5% significance level for this simulation.

Exercise 8.7 Fit an AR(3) model by maximum likelihood to the square root of the hare abundance series (filename hare).
(a) Plot the sample ACF of the residuals. Comment on the size of the correlations.
> data(hare); model=arima(sqrt(hare),order=c(3,0,0))
> win.graph(width=6.5,height=3,pointsize=8); acf(rstandard(model))
[Sample ACF of the residuals]
These residual autocorrelations look excellent.
(b) Calculate the Ljung-Box statistic summing to K = 9. Does this statistic support the AR(3) specification?
> LB.test(model,lag=9)
Box-Ljung test
data: residuals from model
X-squared = 6.2475, df = 6, p-value = 0.3960
As we would suspect from the results in part (a), the Ljung-Box test does not reject independence of the error terms.
(c) Perform a runs test on the residuals and comment on the results.
> runs(rstandard(model))
$pvalue 0.602
$observed.runs 18
$expected.runs 16.09677
$n1 13
$n2 18
$k 0
The p-value does not permit us to reject independence of the error terms. The number of runs is not unusual.
[Figure: sample ACF of the AR(3) residuals]
(d) Display the quantile-quantile normal plot of the residuals. Comment on the plot.
> win.graph(width=3,height=3,pointsize=8)
> qqnorm(residuals(model)); qqline(residuals(model))
There is some minor curvature to this plot with possible outliers at both extremes.
(e) Perform the Shapiro-Wilk test of normality on the residuals.
> shapiro.test(residuals(model))
Shapiro-Wilk normality test
data: residuals(model)
W = 0.9351, p-value = 0.06043
We would not reject normality of the error terms at the usual significance levels.
Exercise 8.8 Consider the oil filter sales data shown in Exhibit (1.8), page 7. The data are in the file named oilfilters.
(a) Fit an AR(1) model to this series. Is the estimate of the φ parameter statistically significantly different from zero?
> data(oilfilters); model=arima(oilfilters,order=c(1,0,0)); model
Call: arima(x = oilfilters, order = c(1, 0, 0))
Coefficients:
        ar1  intercept
     0.3115  3370.6744
s.e. 0.1368   253.1498
sigma^2 estimated as 1482802: log likelihood = -409.19, aic = 822.37
The estimate of φ is more than two standard errors away from zero and would be deemed significant at the usual significance levels.
[Figure: normal Q-Q plot of the AR(3) residuals]
(b) Display the sample ACF of the residuals from the AR(1) fitted model. Comment on the display.
> acf(as.numeric(rstandard(model)))
The sample ACF of the residuals displays a highly significant correlation at lag 12. This series contains substantial seasonality that this model does not capture.
Exercise 8.9 The data file named robot contains a time series obtained from an industrial robot. The robot was put through a sequence of maneuvers, and the distance from a desired ending point was recorded in inches. This was repeated 324 times to form the time series. Compare the fits of an AR(1) model and an IMA(1,1) model for these data in terms of the diagnostic tests discussed in this chapter.
> data(robot); mod1=arima(robot,order=c(1,0,0)); res1=rstandard(mod1); mod1
> mod2=arima(robot,order=c(0,1,1)); res2=rstandard(mod2); mod2
Call: arima(x = robot, order = c(1, 0, 0))
Coefficients:
        ar1  intercept
     0.3074     0.0015
s.e. 0.0528     0.0002
sigma^2 estimated as 6.482e-06: log likelihood = 1475.54, aic = -2947.08
Call: arima(x = robot, order = c(0, 1, 1))
Coefficients:
         ma1
     -0.8713
s.e.  0.0389
sigma^2 estimated as 6.069e-06: log likelihood = 1480.95, aic = -2959.9
Both models have statistically significant parameter estimates. The log likelihood and AIC values are just a little better in the IMA(1,1) model.
[Figure: sample ACF of the residuals from the AR(1) fit to the oil filter sales]
> win.graph(width=6.5,height=3,pointsize=8)
> plot(res1,ylab='AR(1) Residuals'); abline(h=0)
There may be a little drift in these residuals over time: there are more positive residuals in the first half of the series and more negative ones in the last half.
> plot(res2,ylab='IMA(1,1) Residuals'); abline(h=0)
The drift observed in the residuals of the AR(1) model does not seem to appear in the IMA(1,1) model residuals. We proceed to look at correlation in the residuals.
[Figures: time series plots of the AR(1) and IMA(1,1) standardized residuals]
> acf(residuals(mod1),main='AR(1) Model',ylab='ACF of Residuals'); LB.test(mod1)
Box-Ljung test
data: residuals from mod1
X-squared = 52.5123, df = 11, p-value = 2.201e-07
The residuals from the AR(1) model clearly have too much autocorrelation.
> acf(residuals(mod2),main='IMA(1,1) Model',ylab='ACF of Residuals'); LB.test(mod2)
Box-Ljung test
data: residuals from mod2
X-squared = 17.0808, df = 11, p-value = 0.1055
The residuals from the IMA(1,1) model are much less correlated, with only one significant autocorrelation at lag 10. The Ljung-Box test indicates that, jointly, the residual autocorrelations are not too large.
[Figures: sample ACFs of the residuals from the AR(1) and IMA(1,1) models]
Next we check normality of the error terms by first displaying a Q-Q plot of the residuals.
> win.graph(width=3,height=3,pointsize=8)
> qqnorm(residuals(mod2)); qqline(residuals(mod2))
The Q-Q plot looks good. Let's confirm this with the Shapiro-Wilk test.
> shapiro.test(residuals(mod2))
Shapiro-Wilk normality test
data: residuals(mod2)
W = 0.9969, p-value = 0.7909
Normality looks like a viable assumption for the error terms in the IMA(1,1) model for the robot time series. Finally, let's look at the results from the tsdiag command.
[Figure: normal Q-Q plot of the IMA(1,1) residuals]
> win.graph(height=6,width=6.5,pointsize=8); tsdiag(mod2)
Summarizing: the robot time series seems to be well represented by the IMA(1,1) model.
Exercise 8.10 The data file named deere3 contains 57 consecutive values from a complex machine tool at Deere & Co. The values given are deviations from a target value in units of ten millionths of an inch. The process employs a control mechanism that resets some of the parameters of the machine tool depending on the magnitude of deviation from target of the last item produced. Diagnose the fit of an AR(1) model for these data in terms of the tests discussed in this chapter.
> data(deere3); model=arima(deere3,order=c(1,0,0)); model
Call: arima(x = deere3, order = c(1, 0, 0))
Coefficients:
        ar1  intercept
     0.5255   124.3832
s.e. 0.1108   394.2069
sigma^2 estimated as 2069355: log likelihood = -495.51, aic = 995.02
The ar1 parameter estimate (of φ) is statistically significant, but the intercept could be removed from the model. This is not too surprising since the data are deviations from a target value. We will fit the model excluding a mean or intercept term.
[Figure: tsdiag output for the IMA(1,1) robot model: standardized residuals, ACF of residuals, and Ljung-Box p-values]
> model=arima(deere3,order=c(1,0,0),include.mean=F); model
Call: arima(x = deere3, order = c(1, 0, 0), include.mean = F)
Coefficients:
        ar1
     0.5291
s.e. 0.1103
sigma^2 estimated as 2072748: log likelihood = -495.56, aic = 993.12
There is very little change in the estimate of φ. The log likelihood is actually a little worse, although the AIC improves slightly, and we will continue our analysis with this simpler model.
> res=residuals(model); plot(res,ylab='AR(1) Residuals'); abline(h=0)
These residuals look reasonably "random."
> acf(res,main='AR(1) Model Residuals')
There is little evidence of autocorrelation in the error terms for this model.
[Figures: time series plot and sample ACF of the AR(1) residuals for deere3]
Next, on to normality.
> qqnorm(res); qqline(res); shapiro.test(res)
Shapiro-Wilk normality test
data: res
W = 0.9829, p-value = 0.5966
The Q-Q plot shows some deviation from a straight line at the low end, but the Shapiro-Wilk test is quite clear. We cannot reject normality for the error terms in this model.
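As an aside, the AIC values quoted for the two deere3 fits above (995.02 with the intercept, 993.12 without) are consistent with AIC = −2 log L + 2k, where k is the number of printed coefficients; some versions of R's arima also count the innovation variance in k, which shifts every AIC by the same constant and leaves model rankings unchanged. A quick check of that relationship, sketched here in Python (the solutions themselves use R):

```python
def aic(loglik, k):
    """AIC = -2 * log-likelihood + 2 * (number of estimated coefficients)."""
    return -2.0 * loglik + 2.0 * k

# deere3 AR(1) with intercept: logLik = -495.51, k = 2 (ar1, intercept)
print(round(aic(-495.51, 2), 2))  # 995.02, as printed above
# deere3 AR(1) without a mean: logLik = -495.56, k = 1 (ar1 only)
print(round(aic(-495.56, 1), 2))  # 993.12, as printed above
```

The same relationship reproduces the robot fits of Exercise 8.9: −2(1475.54) + 2(2) = −2947.08.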
In summary, this series may be adequately modeled as an AR(1) series with no intercept (or mean) parameter and with uncorrelated, normal error terms.
Exercise 8.11 Exhibit (6.31), page 139, suggested specifying either an AR(1) or possibly an AR(4) model for the difference of the logarithms of the oil price series. (The filename is oil.price.) For reference we first plot the differences of the logs.
> plot(diff(log(oil.price)),type='o',ylab='Diff(Log(oil.price))')
The difference of the logarithms looks fairly stable except for possible outliers at the beginning (February 1986) and at August 1990.
(a) Estimate both of these models using maximum likelihood and compare the results using the diagnostic tests considered in this chapter.
> data(oil.price); mod1=arima(log(oil.price),order=c(1,1,0)); mod1
> mod2=arima(log(oil.price),order=c(4,1,0)); mod2
[Figures: normal Q-Q plot of the deere3 residuals and time series plot of diff(log(oil.price))]
Call: arima(x = log(oil.price), order = c(1, 1, 0))
Coefficients:
        ar1
     0.2364
s.e. 0.0660
sigma^2 estimated as 0.006787: log likelihood = 258.55, aic = -515.11
Call: arima(x = log(oil.price), order = c(4, 1, 0))
Coefficients:
        ar1      ar2     ar3      ar4
     0.2673  -0.1550  0.0238  -0.0970
s.e. 0.0669   0.0691  0.0691   0.0681
sigma^2 estimated as 0.006603: log likelihood = 261.82, aic = -515.64
The ar3 and ar4 coefficients are not significant in the ARIMA(4,1,0) model, and the AIC values of the two models are nearly identical (−515.11 versus −515.64). Furthermore, given the standard errors, there is no real difference between the estimates of the ar1 coefficient in the two models. Let's try an ARIMA(2,1,0) model for comparison.
> mod3=arima(log(oil.price),order=c(2,1,0)); mod3
Call: arima(x = log(oil.price), order = c(2, 1, 0))
Coefficients:
        ar1      ar2
     0.2630  -0.1436
s.e. 0.0666   0.0673
sigma^2 estimated as 0.00666: log likelihood = 260.81, aic = -517.61
This model has the best (smallest) AIC value of the three considered so far.
(b) Exhibit (6.32), page 140, suggested specifying an MA(1) model for the difference of the logs. Estimate this model by maximum likelihood and perform the diagnostic tests considered in this chapter.
> mod4=arima(log(oil.price),order=c(0,1,1)); mod4
Call: arima(x = log(oil.price), order = c(0, 1, 1))
Coefficients:
        ma1
     0.2956
s.e. 0.0693
sigma^2 estimated as 0.006689: log likelihood = 260.29, aic = -518.58
This model has a significant ma1 coefficient and log-likelihood and AIC values quite similar to those of the ARIMA(1,1,0) and ARIMA(2,1,0) models. This IMA(1,1) model does have the best AIC value. We will look at the diagnostics for these models in part (c) before we decide which we prefer.
(c) Which of the three models AR(1), AR(4), or MA(1) would you prefer given the results of parts (a) and (b)?
> win.graph(width=6.5,height=6,pointsize=10); tsdiag(mod1,main='Model 1')
The possible outlier in August 1990 stands out in the plot of residuals and is "flagged" by the Bonferroni rule. There are also three residual ACF values outside the critical limits.
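The significance judgments above come from comparing each estimate to twice its standard error. A small sketch of that check in Python (the manual itself works in R), using the ARIMA(4,1,0) estimates printed above:

```python
# z-ratios for the ARIMA(4,1,0) fit to diff(log(oil.price)),
# using the estimates and standard errors printed above.
est = [0.2673, -0.1550, 0.0238, -0.0970]  # ar1, ar2, ar3, ar4
se = [0.0669, 0.0691, 0.0691, 0.0681]

signif = [abs(e) > 2 * s for e, s in zip(est, se)]
print(signif)  # [True, True, False, False]: ar1 and ar2 significant, ar3 and ar4 not
```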
[Figure: tsdiag output for Model 1: standardized residuals, ACF of residuals, and Ljung-Box p-values]
Let's look at similar results for Model 3.
> tsdiag(mod3,main='Model 3')
Model 3 diagnostics are similar to those for Model 1, with the exception that the Ljung-Box statistics are better, as shown in the bottom display.
[Figure: tsdiag output for Model 3: standardized residuals, ACF of residuals, and Ljung-Box p-values]
On to Model 4.
> tsdiag(mod4,main='Model 4')
Model 4 diagnostics are similar to those for Models 1 and 3. The Ljung-Box statistics are the best of the lot, as shown in the bottom display.
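The Ljung-Box statistic being compared across these models is Q = n(n+2) Σ r_k^2/(n−k), summed over the first K residual autocorrelations. A minimal Python sketch of the formula (TSA's LB.test additionally reduces the degrees of freedom by the number of fitted ARMA parameters):

```python
def ljung_box(n, r):
    """Ljung-Box Q statistic from residual autocorrelations r = [r1, ..., rK]
    computed from a series of length n."""
    return n * (n + 2) * sum(rk**2 / (n - k) for k, rk in enumerate(r, start=1))

# Zero autocorrelations give Q = 0; a lone r1 = 0.5 with n = 10
# gives Q = 10 * 12 * 0.25 / 9 = 10/3.
print(ljung_box(10, [0.0]))            # 0.0
print(round(ljung_box(10, [0.5]), 4))  # 3.3333
```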
[Figure: tsdiag output for Model 4: standardized residuals, ACF of residuals, and Ljung-Box p-values]
Let's look at normality of the error terms for Model 4.
> win.graph(width=3,height=3,pointsize=8)
> qqnorm(residuals(mod4)); qqline(residuals(mod4))
> shapiro.test(residuals(mod4))
Shapiro-Wilk normality test
data: residuals(mod4)
W = 0.9688, p-value = 3.937e-05
Both the Q-Q plot and the results of the Shapiro-Wilk test indicate that we should reject normality for the error terms in this model, but this could be caused by the suspected outliers in the series. The possible outliers and the IMA(1,1) model for the logarithms are considered further in Exercise 11.21.
[Figure: normal Q-Q plot of the Model 4 residuals]
CHAPTER 9
Exercise 9.1 For an AR(1) model with Yt = 12.2, φ = −0.5, and μ = 10.8,
(a) Find Ŷt(1). From Equation (9.3.6), page 193, Ŷt(1) = μ + φ(Yt − μ) = 10.8 + (−0.5)(12.2 − 10.8) = 10.1.
(b) Calculate Ŷt(2) in two different ways. From Equation (9.3.7), page 193, Ŷt(2) = μ + φ[Ŷt(1) − μ] = 10.8 + (−0.5)[10.1 − 10.8] = 11.15. Alternatively, from Equation (9.3.8), page 194, Ŷt(2) = μ + φ^2(Yt − μ) = 10.8 + (−0.5)^2(12.2 − 10.8) = 11.15.
(c) Calculate Ŷt(10). From Equation (9.3.8), page 194, Ŷt(10) = μ + φ^10(Yt − μ) = 10.8 + (−0.5)^10(12.2 − 10.8) = 10.801367 ≈ μ.
Exercise 9.2 Suppose that annual sales (in millions of dollars) of the Acme Corporation follow the AR(2) model Yt = 5 + 1.1Yt−1 − 0.5Yt−2 + et with σe^2 = 2.
(a) If sales for 2005, 2006, and 2007 were $9 million, $11 million, and $10 million, respectively, forecast sales for 2008 and 2009. From Equation (9.3.28), page 199, Ŷ2007(1) = 5 + 1.1Y2007 − 0.5Y2006 = 5 + 1.1(10) − 0.5(11) = 10.5. Furthermore, Ŷ2007(2) = 5 + 1.1Ŷ2007(1) − 0.5Y2007 = 5 + 1.1(10.5) − 0.5(10) = 11.55.
(b) Show that ψ1 = 1.1 for this model. From Equations (4.3.21) on page 75, ψ1 − φ1ψ0 = 0 with ψ0 = 1, so that ψ1 = φ1 = 1.1.
(c) Calculate 95% prediction limits for your forecast in part (a) for 2008. Using Var(et(l)) = σe^2[ψ0^2 + ψ1^2 + … + ψ(l−1)^2] from page 203, so that Var(et(1)) = σe^2, the prediction limits are Ŷ2007(1) ± 2√(σe^2), or 10.5 ± 2√2, which is 10.5 ± 2.83. We are 95% confident that the 2008 value will be between 7.67 and 13.33.
(d) If sales in 2008 turn out to be $12 million, update your forecast for 2009. The updating Equation (9.6.1), page 207, is Ŷt+1(l) = Ŷt(l + 1) + ψl[Yt+1 − Ŷt(1)]. So we update as Ŷ2008(1) = Ŷ2007(2) + ψ1[Y2008 − Ŷ2007(1)] = 11.55 + 1.1[12 − 10.5] = 13.2.
Exercise 9.3 Using the estimated cosine trend on page 192:
(a) Forecast the average monthly temperature in Dubuque, Iowa, for April 1976. From page 192 (or page 35), the fitted trend is μ̂t = 46.2660 − 26.7079 cos(2πt) − 2.1697 sin(2πt). In this forecast, time is measured in years, with January 1964 as the starting value and a frequency of 1 per year. So the time value for April 1976 is 12 + (3/12) and the forecast is μ̂t = 46.2660 − 26.7079 cos(2π(12 + 3/12)) − 2.1697 sin(2π(12 + 3/12)) = 44.1°F.
(b) Find a 95% prediction interval for that April forecast. (The estimate of √γ0 for this model is 3.719°F.) The 95% prediction limits are 44.1 ± 2(3.719), or 44.1 ± 7.4. Based on this model, we are 95% confident that the April 1976 average temperature in Dubuque, Iowa, "will" be between 36.7°F and 51.5°F.
(c) What is the forecast for April 1977? For April 2009?
With this deterministic model, all Aprils are forecast as the same value.
Exercise 9.4 Using the estimated cosine trend on page 192:
(a) Forecast the average monthly temperature in Dubuque, Iowa, for May 1976. For May we use t = 12 + (4/12) (see Exercise 9.3) and the forecast is μ̂t = 46.2660 − 26.7079 cos(2π(12 + 4/12)) − 2.1697 sin(2π(12 + 4/12)) = 57.7°F.
(b) Find a 95% prediction interval for that May 1976 forecast. (The estimate of √γ0 for this model is 3.719°F.) The 95% prediction limits are 57.7 ± 2(3.719), or 57.7 ± 7.4. Based on this model, we are 95% confident that the May 1976 average temperature in Dubuque, Iowa, "will" be between 50.3°F and 65.1°F.
Exercise 9.5 Using the seasonal means model without an intercept shown in Exhibit (3.3), page 32:
(a) Forecast the average monthly temperature in Dubuque, Iowa, for April 1976. The forecast is just the coefficient estimate for all Aprils of 46.525, which we round to 46.5°F.
(b) Find a 95% prediction interval for that April forecast. (The estimate of √γ0 for this model is 3.419°F.) The 95% prediction interval is 46.5 ± 2(3.419), or 46.5 ± 6.8. With this model, we are 95% confident that the April 1976 average temperature in Dubuque, Iowa, "will" be between 39.7°F and 53.3°F.
(c) Compare your forecast with the one obtained in Exercise 9.3. This model forecasts a slightly higher average temperature, 46.5°F versus 44.1°F, and it has a slightly narrower prediction interval: half-width 6.8°F versus 7.4°F.
(d) What is the forecast for April 1977? April 2009? With this model, all April forecasts are identical, so the forecast is 46.5°F.
Exercise 9.6 Using the seasonal means model with an intercept shown in Exhibit (3.4), page 33:
(a) Forecast the average monthly temperature in Dubuque, Iowa, for April 1976. With this model formulation the forecast is the Intercept + April coefficient, so 16.608 + 29.917 = 46.525, or 46.5°F.
(b) Find a 95% prediction interval for that April forecast. (The estimate of √γ0 for this model is 3.419°F.) The 95% prediction interval is 46.5 ± 2(3.419), or 46.5 ± 6.8. With this model, we are 95% confident that the April 1976 average temperature in Dubuque, Iowa, "will" be between 39.7°F and 53.3°F.
(c) Compare your forecast with the one obtained in Exercise 9.5. These two models will always produce identical forecasts and prediction intervals.
Exercise 9.7 Using the seasonal means model with an intercept shown in Exhibit (3.4), page 33:
(a) Forecast the average monthly temperature in Dubuque, Iowa, for January 1976. With this model, all Januarys are forecast with the estimated intercept, namely 16.608, or 16.6°F.
(b) Find a 95% prediction interval for that January forecast. (The estimate of √γ0 for this model is 3.419°F.) The 95% prediction interval is 16.6 ± 2(3.419), or 16.6 ± 6.8. With this model, we are 95% confident that the January 1976 average temperature in Dubuque, Iowa, "will" be between 9.8°F and 23.4°F.
Exercise 9.8 Consider the monthly electricity generation time series shown in Exhibit (5.8), page 99. The data are in the file named electricity.
(a) Fit a deterministic trend model containing seasonal means together with a linear time trend to the logarithms of the electricity values.
> data(electricity); month.=season(electricity) # First method
> model=lm(log(electricity)~month.+time(electricity)); summary(model)
Call: lm(formula = log(electricity) ~ month.
+ time(electricity))
Residuals:
Min 1Q Median 3Q Max
-0.0962741 -0.0291892 0.0003147 0.0255065 0.1349765
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.783e+01 4.283e-01 -88.321 < 2e-16 ***
month.February -1.246e-01 1.004e-02 -12.408 < 2e-16 ***
month.March -9.080e-02 1.004e-02 -9.040 < 2e-16 ***
month.April -1.642e-01 1.004e-02 -16.344 < 2e-16 ***
month.May -1.000e-01 1.004e-02 -9.959 < 2e-16 ***
month.June -2.016e-02 1.004e-02 -2.007 0.0455 *
month.July 7.675e-02 1.004e-02 7.641 1.75e-13 ***
month.August 7.368e-02 1.004e-02 7.335 1.33e-12 ***
month.September -6.473e-02 1.004e-02 -6.444 3.49e-10 ***
month.October -1.148e-01 1.005e-02 -11.431 < 2e-16 ***
month.November -1.346e-01 1.005e-02 -13.400 < 2e-16 ***
month.December -4.481e-02 1.005e-02 -4.460 1.08e-05 ***
time(electricity) 2.526e-02 2.153e-04 117.310 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.0408 on 383 degrees of freedom
Multiple R-Squared: 0.9753, Adjusted R-squared: 0.9746
F-statistic: 1262 on 12 and 383 DF, p-value: < 2.2e-16
Notice that all coefficients are statistically significant. Alternative R code using the arima function for this calculation follows. This second method facilitates plotting of the forecasts.
> month.=season(electricity); time.trend=time(electricity) # Alternative formulation
> determ=as.matrix(model.matrix(~month.+time.trend-1))[,-1]
> model2=arima(log(electricity),order=c(0,0,0),xreg=determ); model2
Call: arima(x = log(electricity), order = c(0, 0, 0), xreg = determ)
Coefficients:
intercept month.February month.March month.April month.May
-37.8299 -0.1246 -0.0908 -0.1642 -0.1000
s.e. 0.4212 0.0099 0.0099 0.0099 0.0099
month.June month.July month.August month.September month.October
-0.0202 0.0768 0.0737 -0.0647 -0.1148
s.e. 0.0099 0.0099 0.0099 0.0099 0.0099
month.November month.December time.trend
-0.1346 -0.0448 0.0253
s.e.
0.0099 0.0099 0.0002
sigma^2 estimated as 0.00161: log likelihood = 711.56, aic = -1397.11
(b) Plot the last five years of the series together with two years of forecasts and the 95% forecast limits. Interpret the plot.
> win.graph(width=6.5,height=3,pointsize=8)
> newmonth.=season(ts(rep(1,24),start=c(2006,1),freq=12))
> newtrend=time(electricity)[length(electricity)]+(1:24)*deltat(electricity)
> plot(model2,n.ahead=24,n1=c(2001,1),xlab='Year',pch=19,ylab='Electricity & Forecasts in Log Terms', newxreg=as.matrix(model.matrix(~newmonth.+newtrend-1))[,-1])
> # The second formulation using arima facilitates plotting the forecasts
The two years of forecasts mimic the upward trend and seasonal nature of the series quite well. The widths of the prediction intervals are reasonably narrow for these forecasts.
Exercise 9.9 Simulate an AR(1) process with φ = 0.8 and μ = 100. Simulate 48 values but set aside the last 8 values to compare forecasts to actual values.
> set.seed(132456); series=arima.sim(n=48,list(ar=0.8))+100
> future=window(series,start=41); series=window(series,end=40) # Set aside future
(a) Using the first 40 values of the series, find the values for the maximum likelihood estimates of φ and μ.
> model=arima(series,order=c(1,0,0)); model
Call: arima(x = series, order = c(1, 0, 0))
Coefficients:
        ar1  intercept
     0.7878    99.8465
s.e. 0.0943     0.8110
sigma^2 estimated as 1.372: log likelihood = -63.57, aic = 131.14
The maximum likelihood estimates are quite accurate in this particular simulation.
[Figure: last five years of the electricity series with two years of forecasts and 95% limits]
(b) Using the estimated model, forecast the next eight values of the series. Plot the series together with the eight forecasts. Place a horizontal line at the estimate of the process mean.
> plot(model,n.ahead=8,ylab='Series & Forecasts',col=NULL,pch=19)
> # col=NULL suppresses plotting the prediction intervals
> abline(h=coef(model)[names(coef(model))=='intercept'])
The forecasts are plotted as solid circles and show the characteristic exponential decay toward the series mean.
(c) Compare the eight forecasts with the actual values that you set aside. See part (d).
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n.ahead=8,ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=(41:48),y=future,pch=3) # Add the actual future values to the plot
> abline(h=coef(model)[names(coef(model))=='intercept'])
The actual future series values are plotted as plus signs (+) and they fall well within the forecast prediction limits shown as dotted lines. The forecasts, plotted as solid circles, are, of course, much smoother than the actual values.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and the same sample size.
[Figures: series with the eight forecasts, and series with forecasts, actuals, and 95% limits]
Exercise 9.10 Simulate an AR(2) process with φ1 = 1.5, φ2 = −0.75, and μ = 100. Simulate 52 values but set aside the last 12 values to compare forecasts to actual values.
> set.seed(132456); series=arima.sim(n=52,list(ar=c(1.5,-0.75)))+100
> actual=window(series,start=41); series=window(series,end=40)
(a) Using the first 40 values of the series, find the values for the maximum likelihood estimates of the φ's and μ.
> model=arima(series,order=c(2,0,0)); model
Call: arima(x = series, order = c(2, 0, 0))
Coefficients:
        ar1      ar2  intercept
     1.6874  -0.9043    99.7057
s.e.
0.0523 0.0509 0.7142
sigma^2 estimated as 0.9335: log likelihood = -57.85, aic = 121.71
Notice that for this simulation the ar estimates are more than two standard errors away from the "true" values.
(b) Using the estimated model, forecast the next 12 values of the series. Plot the series together with the 12 forecasts. Place a horizontal line at the estimate of the process mean.
> result=plot(model,n.ahead=12,ylab='Series & Forecasts',col=NULL,pch=19)
> abline(h=coef(model)[names(coef(model))=='intercept'])
The forecasts mimic the pseudo-periodic nature of the series but also decay toward the series mean as they go further into the future.
(c) Compare the 12 forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 41 End = 52 Frequency = 1
      actual   forecast
41  98.64384   97.72871
42 100.85458   98.90509
43 101.74739  100.14264
44 102.07439  101.16701
45 102.10662  101.77635
46 101.98472  101.87816
47 101.04909  101.49890
48 100.19580  100.76687
49  99.16138   99.87465
50  98.74983   99.03113
51  99.04251   98.41467
52  99.79968   98.13729
The comparison is much easier to see in the plot in part (d).
[Figure: series with the 12 forecasts]
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n1=25,n.ahead=12,ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=(41:52),y=actual,pch=3) # future actual values as + signs
> abline(h=coef(model)[names(coef(model))=='intercept'])
The forecasts, plotted as solid circles, follow the same general "stochastic trend" as the actual future values (the pluses). The forecasts are well within the forecast prediction limits.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
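The AR(2) forecasts above are produced by the same recursion used by hand in Exercises 9.1 and 9.2: each new forecast plugs earlier forecasts (or observed values) into the fitted difference equation. A check of those hand calculations, sketched in Python for transparency:

```python
# Exercise 9.1: AR(1) with mu = 10.8, phi = -0.5, and Y_t = 12.2.
mu, phi, y = 10.8, -0.5, 12.2
f1 = mu + phi * (y - mu)        # lead 1
f2 = mu + phi * (f1 - mu)       # lead 2; equals mu + phi**2 * (y - mu)
f10 = mu + phi**10 * (y - mu)   # lead 10, essentially back at mu
print(round(f1, 4), round(f2, 4), round(f10, 6))  # 10.1 11.15 10.801367

# Exercise 9.2: Y_t = 5 + 1.1*Y_{t-1} - 0.5*Y_{t-2} + e_t with sigma_e^2 = 2.
f2008 = 5 + 1.1 * 10 - 0.5 * 11             # from 2007 sales (10) and 2006 sales (11)
f2009 = 5 + 1.1 * f2008 - 0.5 * 10          # lead 2 reuses the lead-1 forecast
f2009_updated = f2009 + 1.1 * (12 - f2008)  # update once 2008 sales (12) are observed
half_width = 2 * 2**0.5                     # 95% limits: 10.5 +/- 2*sqrt(2)
print(round(f2008, 2), round(f2009, 2), round(f2009_updated, 2))   # 10.5 11.55 13.2
print(round(f2008 - half_width, 2), round(f2008 + half_width, 2))  # 7.67 13.33
```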
Exercise 9.11 Simulate an MA(1) process with θ = 0.6 and μ = 100. Simulate 36 values but set aside the last 4 values to compare forecasts to actual values.
> set.seed(132456); series=arima.sim(n=36,list(ma=-0.6))+100 # R's ma sign convention is opposite the book's
> actual=window(series,start=33); series=window(series,end=32)
(a) Using the first 32 values of the series, find the values for the maximum likelihood estimates of θ and μ.
> model=arima(series,order=c(0,0,1)); model
Call: arima(x = series, order = c(0, 0, 1))
Coefficients:
         ma1  intercept
     -0.8200    99.9109
s.e.  0.2066     0.0431
sigma^2 estimated as 1.001: log likelihood = -45.99, aic = 95.97
[Figure: series with forecasts, actuals, and 95% limits for Exercise 9.10]
(b) Using the estimated model, forecast the next four values of the series. Plot the series together with the four forecasts. Place a horizontal line at the estimate of the process mean.
> result=plot(model,n.ahead=4,ylab='Series & Forecasts',col=NULL,pch=19)
> abline(h=coef(model)[names(coef(model))=='intercept'])
(c) Compare the four forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 33 End = 36 Frequency = 1
      actual  forecast
33 101.47674  99.24312
34  98.48984  99.91093
35 101.00347  99.91093
36  97.47793  99.91093
With the MA(1) model, only the lead 1 forecast is different from the process mean. There is little in this model to help improve the forecasts.
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n.ahead=4,ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=(33:36),y=actual,pch=3)
> abline(h=coef(model)[names(coef(model))=='intercept'])
[Figures: series with the four forecasts, and series with forecasts, actuals, and 95% limits]
The lead 1 actual value is outside the prediction limits for this simulation. The lead 4 actual value is nearly outside the limits and is quite extreme relative to the whole series. Of course, a new simulation would show different results.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.12 Simulate an MA(2) process with θ1 = 1, θ2 = −0.6, and μ = 100. Simulate 36 values but set aside the last 4 values to compare forecasts to actual values.
> set.seed(1432756); series=arima.sim(n=36,list(ma=c(-1,0.6)))+100
> actual=window(series,start=33); series=window(series,end=32)
(a) Using the first 32 values of the series, find the values for the maximum likelihood estimates of the θ's and μ.
> model=arima(series,order=c(0,0,2)); model
Call: arima(x = series, order = c(0, 0, 2))
Coefficients:
         ma1     ma2  intercept
     -1.0972  0.3257    99.8535
s.e.  0.1539  0.1684     0.0498
sigma^2 estimated as 1.234: log likelihood = -49.46, aic = 104.92
Taking the standard errors into account, the maximum likelihood estimates are reasonably close to the true values in this simulation.
(b) Using the estimated model, forecast the next four values of the series. Plot the series together with the four forecasts. Place a horizontal line at the estimate of the process mean.
> result=plot(model,n.ahead=4,ylab='Series & Forecasts',col=NULL,pch=19)
> abline(h=coef(model)[names(coef(model))=='intercept'])
(c) What is special about the forecasts at lead times 3 and 4?
For the MA(2) model they are simply the estimated process mean.
(d) Compare the four forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 33 End = 36 Frequency = 1
      actual  forecast
33  98.16822  99.33236
34 101.56150  99.89905
35  97.68750  99.85353
36 103.08874  99.85353
[Figure: Time Series & Forecasts]
The comparison is best done with a graph. See part (e).
(e) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n.ahead=4,ylab='Series, Forecasts, Actuals & Limits',type='o',pch=19)
> points(x=(33:36),y=actual,pch=3)
> abline(h=coef(model)[names(coef(model))=='intercept'])
For this simulation and this model, the forecasts are rather far from the actual values. However, the actuals are all within the forecast limits.
(f) Repeat parts (a) through (e) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.13 Simulate an ARMA(1,1) process with φ = 0.7, θ = −0.5, and μ = 100. Simulate 50 values but set aside the last 10 values to compare forecasts with actual values.
> set.seed(172456); series=arima.sim(n=50,list(ar=0.7,ma=0.5))+100
> actual=window(series,start=41); series=window(series,end=40)
(a) Using the first 40 values of the series, find the values for the maximum likelihood estimates of φ, θ, and μ.
> model=arima(series,order=c(1,0,1)); model
Call: arima(x = series, order = c(1, 0, 1))
Coefficients:
         ar1     ma1  intercept
      0.6048  0.6907    99.9745
s.e.  0.1585  0.2522     0.5846
sigma^2 estimated as 0.8162: log likelihood = -53.6, aic = 113.19
Taking the standard errors into account, the maximum likelihood estimates are reasonably close to the true values in this simulation.
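One way to make "reasonably close to the true values" concrete is to check whether each true parameter lies inside an approximate 95% interval, estimate ± 2 standard errors, computed from the fit's estimated covariance matrix (the var.coef component of the arima object). The sketch below repeats this exercise's simulation; it is illustrative only and not part of the printed solution.

```r
# Sketch: approximate 95% intervals (estimate +/- 2 s.e.) for the
# ARMA(1,1) fit of Exercise 9.13, checked against the true parameters.
# Note R's arima.sim/arima use ma = +0.5 for the book's theta = -0.5.
set.seed(172456)
series <- arima.sim(n = 50, list(ar = 0.7, ma = 0.5)) + 100
series <- window(series, end = 40)        # first 40 values, as in part (a)
model  <- arima(series, order = c(1, 0, 1))
est  <- coef(model)
se   <- sqrt(diag(model$var.coef))        # standard errors of the estimates
ci   <- cbind(lower = est - 2 * se, upper = est + 2 * se)
true <- c(ar1 = 0.7, ma1 = 0.5, intercept = 100)
cbind(round(ci, 3), true, covered = as.numeric(true >= ci[, 1] & true <= ci[, 2]))
```

The same check applies to any of the simulation exercises in this chapter; only the true parameter vector changes.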
[Figure: Time Series, Forecasts, Actuals & Limits]
(b) Using the estimated model, forecast the next ten values of the series. Plot the series together with the ten forecasts. Place a horizontal line at the estimate of the process mean.
> result=plot(model,n.ahead=10,ylab='Series & Forecasts',col=NULL,pch=19)
> abline(h=coef(model)[names(coef(model))=='intercept'])
The forecasts approach the series mean fairly quickly.
(c) Compare the ten forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 41 End = 50 Frequency = 1
     actual  forecast
41 98.90034  99.55443
42 99.25304  99.72043
43 99.08626  99.82082
44 98.04358  99.88154
45 97.84692  99.91826
46 97.89159  99.94047
47 97.81065  99.95391
48 98.41574  99.96203
49 96.72142  99.96694
50 97.95263  99.96992
See part (d) for a graphical comparison.
[Figure: Time Series & Forecasts]
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n.ahead=10,ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=(41:50),y=actual,pch=3)
> abline(h=coef(model)[names(coef(model))=='intercept'])
This series is quite erratic but the actual series values are contained within the forecast limits. The forecasts decay to the estimated process mean rather quickly and the prediction limits are quite wide.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.14 Simulate an IMA(1,1) process with θ = 0.8 and θ0 = 0. Simulate 35 values, but set aside the last five values to compare forecasts with actual values.
> set.seed(127456); series=arima.sim(n=35,list(order=c(0,1,1),ma=-0.8))[-1]
> # delete initial term as it is always = 0
> actual=window(series,start=31); series=window(series,end=30)
(a) Using the first 30 values of the series, find the value for the maximum likelihood estimate of θ.
> model=arima(series,order=c(0,1,1)); model
Call: arima(x = series, order = c(0, 1, 1))
Coefficients:
          ma1
      -0.7696
s.e.   0.1832
sigma^2 estimated as 0.845: log likelihood = -39.15, aic = 80.31
Taking the standard error into account, the maximum likelihood estimate is quite close to the true value in this simulation.
[Figure: Time Series, Forecasts, Actuals & Limits]
(b) Using the estimated model, forecast the next five values of the series. Plot the series together with the five forecasts. What is special about the forecasts?
> result=plot(model,n.ahead=5,ylab='Series & Forecasts',col=NULL,pch=19)
For this model the forecasts are constant for all lead times.
(c) Compare the five forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 31 End = 35 Frequency = 1
      actual  forecast
31 0.9108642  1.413627
32 1.5476147  1.413627
33 2.6211930  1.413627
34 1.5560880  1.413627
35 0.3657792  1.413627
The forecasts are the same at all lead times. See part (d) for a graphical comparison.
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n.ahead=5,ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=(31:35),y=actual,pch=3)
The forecast limits contain all of the actual values but they are quite wide.
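The flat forecasts are a general property of the IMA(1,1) model without a constant: every lead's forecast is the same exponentially weighted average of past observations. A quick base-R check on a freshly simulated random walk (illustrative only, not part of the printed solution):

```r
# Sketch: for an ARIMA(0,1,1) fit with no drift term, the point
# forecasts are identical at every lead time.
set.seed(127456)
y   <- cumsum(rnorm(30))             # a simple integrated series
fit <- arima(y, order = c(0, 1, 1))
fc  <- predict(fit, n.ahead = 5)$pred
max(abs(fc - fc[1]))                 # essentially zero: constant forecasts
```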
[Figure: Time Series & Forecasts. Figure: Time Series, Forecasts, Actuals & Limits.]
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.15 Simulate an IMA(1,1) process with θ = 0.8 and θ0 = 10. Simulate 35 values, but set aside the last five values to compare forecasts to actual values.
> set.seed(1245637); series=arima.sim(n=35,list(order=c(0,1,1),ma=-0.8))[-1]
> # delete initial term that is always zero
> series=series+10*(1:35) # add in intercept term that becomes a linear time trend
> actual=window(series,start=31); series=window(series,end=30)
(a) Using the first 30 values of the series, find the values for the maximum likelihood estimates of θ and θ0.
> model=arima(series,order=c(0,1,1),xreg=(1:30)); model
Call: arima(x = series, order = c(0, 1, 1), xreg = (1:30))
Coefficients:
          ma1    xreg
      -0.7178  9.2545
s.e.   0.1389  0.6751
sigma^2 estimated as 135.7: log likelihood = -112.71, aic = 229.41
Both the MA and intercept parameters are estimated well in this simulation.
(b) Using the estimated model, forecast the next five values of the series. Plot the series together with the five forecasts. What is special about these forecasts?
> result=plot(model,n.ahead=5,newxreg=(31:35),ylab='Series & Forecasts',col=NULL,pch=19)
In this model the nonzero intercept term dominates the time series and the series and forecasts virtually follow a straight line.
(c) Compare the five forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 31 End = 35 Frequency = 1
     actual forecast
31 308.7936 307.8215
32 318.1666 317.7470
33 329.2790 327.6724
34 337.5403 337.5979
35 346.1691 347.5233
[Figure: Time Series & Forecasts]
The comparison is seen best in the graph in part (d).
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n1=20,n.ahead=5,newxreg=(31:35),ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=seq(31,35),y=actual,pch=3) # Actuals plotted as plus signs
[Figure: Time Series, Forecasts, Actuals & Limits]
The actual values fall within the forecast limits. We plotted just the end of the series so we could see more detail.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.16 Simulate an IMA(2,2) process with θ1 = 1, θ2 = −0.75, and θ0 = 0. Simulate 45 values, but set aside the last five values to compare forecasts with actual values.
> set.seed(432456)
> series=(arima.sim(n=45,list(order=c(0,2,2),ma=c(-1,0.75)))[-1])[-1]
> # Delete first two values as they are always zero
> actual=window(series,start=41); series=window(series,end=40)
(a) Using the first 40 values of the series, find the values for the maximum likelihood estimates of θ1 and θ2.
> model=arima(series,order=c(0,2,2)); model
Call: arima(x = series, order = c(0, 2, 2))
Coefficients:
          ma1     ma2
      -0.9821  0.7264
s.e.   0.1619  0.2254
sigma^2 estimated as 0.9944: log likelihood = -54.76, aic = 113.52
For this simulation the maximum likelihood estimates are excellent.
(b) Using the estimated model, forecast the next five values of the series. Plot the series together with the five forecasts. What is special about the forecasts?
> result=plot(model,n.ahead=5,ylab='Series & Forecasts',col=NULL,pch=19)
The forecasts seem to follow a straight line.
(c) Compare the five forecasts with the actual values that you set aside.
> forecast=result$pred; cbind(actual,forecast)
Time Series: Start = 41 End = 45 Frequency = 1
     actual forecast
41 237.2138 239.0336
42 246.3410 246.9881
43 253.3401 254.9425
44 260.3783 262.8970
45 269.1054 270.8514
All of the forecasts are a bit higher than the actual values. See part (d) below for more comparisons.
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
> plot(model,n1=38,n.ahead=5,ylab='Series, Forecasts, Actuals & Limits',pch=19)
> points(x=seq(41,45),y=actual,pch=3)
[Figure: Time Series & Forecasts. Figure: Time Series, Forecasts, Actuals & Limits.]
The forecast limits spread out as the lead time increases. This, of course, is characteristic of forecast limits for nonstationary models. The actual value at lead 1 is very close to the lower forecast limit. To check it more precisely, we display the numbers.
> lower=result$lpi; upper=result$upi; cbind(lower,actual,upper)
Time Series: Start = 41 End = 45 Frequency = 1
      lower   actual    upper
41 237.0792 237.2138 240.9881
42 244.1991 246.3410 249.7770
43 250.5107 253.3401 259.3743
44 256.2908 260.3783 269.5031
45 261.6857 269.1054 280.0171
From the detailed numbers, we see that in fact the lead 1 actual value is a little above the lower forecast limit, so that all of the actual values are within the 95% limits in this simulation.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.17 Simulate an IMA(2,2) process with θ1 = 1, θ2 = −0.75, and θ0 = 10.
Simulate 45 values, but set aside the last five values to compare forecasts with actual values. (Note: An intercept value of, say, θ0 = 0.03 provides a more reasonable looking series.)
> set.seed(13243546)
> series=arima.sim(n=45,list(order=c(0,2,2),ma=c(-1,0.75)))[-(1:2)]
> # Delete initial zero values
> series=series+10*(1:45)^2 # Add nonzero intercept term
> actual=window(series,start=41); series=window(series,end=40)
(a) Using the first 40 values of the series, find the values for the maximum likelihood estimates of θ1, θ2, and θ0.
> model=arima(series,order=c(0,2,2),xreg=(1:40)^2); model
Call: arima(x = series, order = c(0, 2, 2), xreg = (1:40)^2)
Coefficients:
          ma1     ma2     xreg
      -1.0382  0.6638  10.0338
s.e.   0.1123  0.1435   0.0551
sigma^2 estimated as 1.202: log likelihood = -58.24, aic = 122.48
Taking the standard errors into consideration, the estimates of the MA parameters and intercept are quite reasonable.
(b) Using the estimated model, forecast the next five values of the series. Plot the series together with the five forecasts. What is special about these forecasts?
> result=plot(model,n.ahead=5,newxreg=(41:45)^2,ylab='Series & Forecasts',type='o',col=NULL,pch=19)
They look like they fall on a straight line. In fact, we know that it is really a quadratic curve, but over this range the slope is nearly constant.
(c) Compare the five forecasts with the actual values that you set aside.
> forecast=result$pred; lower=result$lpi; upper=result$upi
> cbind(lower,forecast,actual,upper)
Time Series: Start = 41 End = 45 Frequency = 1
      lower forecast   actual    upper
41 16804.96 16807.11 16808.72 16809.26
42 17636.78 17639.76 17640.59 17642.74
43 18487.95 18492.48 18495.70 18497.01
44 19358.70 19365.26 19368.23 19371.83
45 20249.16 20258.12 20259.99 20267.08
The comparison is best seen in the plot in part (d) below.
(d) Plot the forecasts together with 95% forecast limits. Do the actual values fall within the forecast limits?
[Figure: Time Series & Forecasts. Figure: Time Series, Forecasts, Actuals & Limits.]
> plot(model,n1=1,n.ahead=5,newxreg=(41:45)^2,type='o',pch=19,ylab='Series, Forecasts, Actuals & Limits')
> points(x=seq(41,45),y=actual,pch=3)
The forecast limits are so tight in this example that they cannot be seen on the plot. See part (c) for the actual numbers. The actual values are all within the forecast intervals for this simulation.
(e) Repeat parts (a) through (d) with a new simulated series using the same values of the parameters and same sample size.
Exercise 9.18 Consider the model Yt = β0 + β1t + Xt, where Xt = φXt−1 + et. We assume that β0, β1, and φ are known. Show that the minimum mean square error forecast l steps ahead can be written as Ŷt(l) = β0 + β1(t + l) + φ^l(Yt − β0 − β1t).
Note: Since we assume all parameters are known, conditioning on Y1, Y2,…, Yt is the same as conditioning on X1, X2,…, Xt. So
Ŷt(l) = E[β0 + β1(t + l) + Xt+l | Y1, Y2,…, Yt] = β0 + β1(t + l) + E[Xt+l | X1, X2,…, Xt]
Now, since {Xt} is an AR(1) process, E[Xt+l | X1, X2,…, Xt] = φ^l Xt = φ^l(Yt − β0 − β1t), and the desired result is obtained.
Exercise 9.19 Verify Equation (9.3.12), page 196. Iterating the AR(1) relation back into the past gives
Yt = et + φet−1 + φ²et−2 + … + φ^(k−1)et−k+1 + φ^k Yt−k
and, letting k increase without bound,
Yt = et + φet−1 + φ²et−2 + φ³et−3 + …
Exercise 9.20 Verify Equation (9.3.28), page 199. From Equation (4.4.1), page 77, replacing t by t + l, we have
Yt+l = φ1Yt+l−1 + φ2Yt+l−2 + … + φpYt+l−p + et+l − θ1et+l−1 − θ2et+l−2 − … − θqet+l−q
So
Ŷt(l) = E[Yt+l | Y1,…, Yt] = φ1E[Yt+l−1 | Y1,…, Yt] + φ2E[Yt+l−2 | Y1,…, Yt] + … + φpE[Yt+l−p | Y1,…, Yt] + E[et+l | Y1,…, Yt] − θ1E[et+l−1 | Y1,…, Yt] − … − θqE[et+l−q | Y1,…, Yt]
This becomes
Ŷt(l) = φ1Ŷt(l − 1) + φ2Ŷt(l − 2) + … + φpŶt(l − p) + θ0 − θ1E[et+l−1 | Y1,…, Yt] − θ2E[et+l−2 | Y1,…, Yt] − … − θqE[et+l−q | Y1,…, Yt]
where E[et+j | Y1,…, Yt] = 0 for j > 0 and E[et+j | Y1,…, Yt] = et+j for j ≤ 0.
Exercise 9.21 The data file named deere3 contains 57 consecutive values from a complex machine tool process at Deere & Co. The values given are deviations from a target value in units of ten millionths of an inch. The process employs a control mechanism that resets some of the parameters of the machine tool depending on the magnitude of deviation from target of the last item produced.
(a) Using an AR(1) model for this series, forecast the next ten values.
> data(deere3); model=arima(deere3,order=c(1,0,0)); plot(model,n.ahead=10)$pred
Time Series: Start = 58 End = 67 Frequency = 1
 -335.145915 -117.120755 -2.538371 57.680013 89.327581 105.959853 114.700888 119.294709 121.708976 122.977786
The forecasts are reasonably constant from forecast lead 8 on.
(b) Plot the series, the forecasts, and 95% forecast limits, and interpret the results.
> win.graph(width=6.5,height=3,pointsize=8)
> plot(model,n.ahead=10,ylab='Deviation',xlab='Year',pch=19)
> abline(h=coef(model)[names(coef(model))=='intercept'])
Since the model does not contain a lot of autocorrelation or other structure, the forecasts, plotted as solid circles, quickly settle down to the mean of the series.
Exercise 9.22 The data file named days contains accounting data from the Winegard Co. of Burlington, Iowa.
The data are the number of days until Winegard receives payment for 130 consecutive orders from a particular distributor of Winegard products. (The name of the distributor must remain anonymous for confidentiality reasons.) The time series contains outliers that are quite obvious in the time series plot. Replace each of the unusual values at "times" 63, 106, and 129 with the much more typical value of 35 days.
> data(days); daysmod=days; daysmod[63]=35; daysmod[106]=35; daysmod[129]=35
(a) Use an MA(2) model to forecast the next ten values of this modified series.
> model=arima(daysmod,order=c(0,0,2)); plot(model,n.ahead=10)$pred
Time Series: Start = 131 End = 140 Frequency = 1
 29.07436 27.52056 28.19564 28.19564 28.19564 28.19564 28.19564 28.19564 28.19564 28.19564
Of course, these values would be rounded before reporting them. Notice that they are identical from lead 3 on.
(b) Plot the series, the forecasts, and 95% forecast limits, and interpret the results.
> plot(model,n1=100,n.ahead=10,ylab='Days',xlab='Year',pch=23)
> abline(h=coef(model)[names(coef(model))=='intercept'])
We start the series plot at t = 100 so that the detail in the forecasts can be seen more easily. The MA(2) model has autocorrelation only at lags 1 and 2, so the forecasts are just the process mean beyond lead 2.
Exercise 9.23 The time series in the data file robot gives the final position in the "x-direction" after an industrial robot has finished a planned set of exercises. The measurements are expressed as deviations from a target position. The robot is put through this planned set of exercises in the hope that its behavior is repeatable and thus predictable.
(a) Use an IMA(1,1) model to forecast five values ahead. Obtain 95% forecast limits also.
> data(robot); model=arima(robot,order=c(0,1,1)); model; plot(model,n.ahead=5)$pred
Time Series: Start = 325 End = 329 Frequency = 1
 0.001742672 0.001742672 0.001742672 0.001742672 0.001742672
> plot(model,n.ahead=5)$upi; plot(model,n.ahead=5)$lpi
Time Series: Start = 325 End = 329 Frequency = 1
 0.006669889 0.006710540 0.006750862 0.006790862 0.006830548
Time Series: Start = 325 End = 329 Frequency = 1
 -0.003184545 -0.003225197 -0.003265519 -0.003305518 -0.003345204
(b) Display the forecasts, forecast limits, and actual values in a graph and interpret the results.
> win.graph(width=6.5,height=3,pointsize=8)
> plot(model,n1=300,n.ahead=5,ylab='Deviation',pch=19)
The forecast limits are quite wide in this fitted model and the forecasts are relatively constant.
(c) Now use an ARMA(1,1) model to forecast five values ahead and obtain 95% forecast limits. Compare these results with those obtained in part (a).
> model=arima(robot,order=c(1,0,1)); plot(model,n.ahead=5)$pred
Time Series: Start = 325 End = 329 Frequency = 1
 0.001901348 0.001879444 0.001858695 0.001839041 0.001820424
> plot(model,n.ahead=5)$upi; plot(model,n.ahead=5)$lpi
Time Series: Start = 325 End = 329 Frequency = 1
 0.006571344 0.006611183 0.006650699 0.006689898 0.006728790
Time Series: Start = 325 End = 329 Frequency = 1
 -0.003086000 -0.003125839 -0.003165355 -0.003204555 -0.003243446
> plot(model,n1=300,n.ahead=5,ylab='Deviation',pch=19)
> abline(h=coef(model)[names(coef(model))=='intercept'])
Both of these models give quite similar forecasts and forecast limits.
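The closeness of the two sets of limits at short leads can be understood through the ψ-weights: the lead-l forecast error standard deviation is σe√(1 + ψ1² + … + ψl−1²). The sketch below uses illustrative parameter values (not the fitted robot values) to contrast a stationary ARMA(1,1), whose limit width levels off at the process standard deviation, with an IMA(1,1), whose limit width grows without bound.

```r
# Sketch: lead-l forecast error SDs (in units of sigma_e) from psi-weights.
# ARMA(1,1) with phi = 0.6 and R-convention ma = 0.7 (book theta = -0.7);
# IMA(1,1) with book theta = 0.8, for which psi_j = 1 - theta = 0.2, j >= 1.
psi.arma <- c(1, ARMAtoMA(ar = 0.6, ma = 0.7, lag.max = 49))
sd.arma  <- sqrt(cumsum(psi.arma^2))    # converges as the lead increases
psi.ima  <- c(1, rep(0.2, 49))
sd.ima   <- sqrt(cumsum(psi.ima^2))     # keeps growing with the lead
round(cbind(lead = c(1, 5, 25, 50),
            arma = sd.arma[c(1, 5, 25, 50)],
            ima  = sd.ima[c(1, 5, 25, 50)]), 3)
```

At short leads the two columns are comparable, which is why the robot forecasts from the two models look so similar over only five steps.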
Exercise 9.24 Exhibit (9.4), page 206, displayed the forecasts and 95% forecast limits for the square root of the Canadian hare abundance. The data are in the file named hare. Produce a similar plot in original terms. That is, plot the original abundance values together with the squares of the forecasts and squares of the forecast limits.
> data(hare); m1.hare=arima(sqrt(hare),order=c(3,0,0))
> square=function(x){y=x^2} # Define the square function
> plot(m1.hare,n.ahead=25,xlab='Year',ylab='Hare Abundance',pch=19,transform=square)
In original terms, the prediction intervals are not symmetric around the predictions.
Exercise 9.25 Consider the seasonal means plus linear time trend model for the logarithms of the monthly electricity generation time series in Exercise 9.8. (The data are in the file named electricity.)
(a) Find the two-year forecasts and forecast limits in original terms. That is, exponentiate (antilog) the results obtained in Exercise 9.8.
> data(electricity); win.graph(width=6.5,height=3,pointsize=8)
> month.=season(electricity); trend=time(electricity) # From Exercise 9.8
> model=arima(log(electricity),order=c(0,0,0),xreg=as.matrix(model.matrix(~month.+trend-1))[,-1])
> newmonth.=season(ts(rep(1,24),start=c(2006,1),freq=12))
> newtrend=time(electricity)[length(electricity)]+(1:24)*deltat(electricity)
> result=plot(model,n.ahead=24,n1=c(2001,1),ylab='Electricity',xlab='Year',newxreg=as.matrix(model.matrix(~newmonth.+newtrend-1))[,-1],pch=19,transform=exp)
The statement transform=exp in the last line is the only thing new compared to the R code in Exercise 9.8. This last line will produce the plot needed for part (b) below. To see the forecasts and prediction limits use the following additional R code.
> forecast=result$pred; upper=result$upi; lower=result$lpi; cbind(lower,forecast,upper)
            lower forecast    upper
Jan 2006 348024.4 376498.6 407302.4
Feb 2006 307893.1 333083.9 360335.8
Mar 2006 319157.2 345269.6 373518.4
Apr 2006 297206.9 321523.4 347829.4
May 2006 317557.9 343539.4 371646.7
Jun 2006 344687.8 372889.0 403397.6
Jul 2006 380563.5 411699.9 445383.9
Aug 2006 380196.3 411302.7 444954.1
Sep 2006 331748.7 358891.2 388254.5
Oct 2006 316204.6 342075.4 370062.9
Nov 2006 310663.6 336081.1 363578.1
Dec 2006 340569.1 368433.3 398577.3
Jan 2007 356926.9 386129.5 417721.3
Feb 2007 315769.1 341604.3 369553.2
Mar 2007 327321.3 354101.7 383073.1
Apr 2007 304809.5 329748.0 356726.9
May 2007 325681.1 352327.2 381153.4
Jun 2007 353505.0 382427.6 413716.6
Jul 2007 390298.4 422231.3 456776.9
Aug 2007 389921.8 421823.9 456336.1
Sep 2007 340234.9 368071.8 398186.2
Oct 2007 324293.2 350825.8 379529.2
Nov 2007 318610.4 344678.1 372878.5
Dec 2007 349280.9 377857.9 408773.0
(b) Plot the last five years of the original time series together with two years of forecasts and the 95% forecast limits, all in original terms. Interpret the plot.
The forecasts follow the general upward trend and seasonal pattern of the series quite well and the prediction limits are reasonably tight from this model.
CHAPTER 10
Exercise 10.1 Based on quarterly data, a seasonal model of the form Yt = Yt−4 + et − θ1et−1 − θ2et−2 has been fit to a certain time series.
(a) Find the first four ψ-weights for this model. Iterating back in time, we have
Yt = Yt−4 + et − θ1et−1 − θ2et−2 = (Yt−8 + et−4 − θ1et−5 − θ2et−6) + et − θ1et−1 − θ2et−2 = et − θ1et−1 − θ2et−2 + et−4 − θ1et−5 − θ2et−6 + Yt−8
From this we may read off ψ0 = 1, ψ1 = −θ1, ψ2 = −θ2, ψ3 = 0, and ψ4 = 1.
(b) Suppose that θ1 = 0.5, θ2 = −0.25, and σe = 1. Find forecasts for the next four quarters if data for the last four quarters are

Quarter   I  II  III  IV
Series   25  20   25  40
Residual  2   1    2   3

Ŷt(1) = Yt−3 − θ1et − θ2et−1 = 25 − (0.5)(3) − (−0.25)(2) = 24
Ŷt(2) = Yt−2 − θ2et = 20 − (−0.25)(3) = 20.75
Ŷt(3) = Yt−1 = 25
Ŷt(4) = Yt = 40
(c) Find 95% prediction intervals for the forecasts in part (b).

Quarter  Forecast interval
I        24 ± 2√1 = 24 ± 2, or 22 to 26
II       20.75 ± 2√(1 + (0.5)²) = 20.75 ± 2.24, or 18.51 to 22.99
III      25 ± 2√(1 + (0.5)² + (0.25)²) = 25 ± 2.29, or 22.71 to 27.29
IV       40 ± 2√(1 + (0.5)² + (0.25)² + 0²) = 40 ± 2.29, or 37.71 to 42.29

Exercise 10.2 An AR model has AR characteristic polynomial (1 − 1.6x + 0.7x²)(1 − 0.8x¹²).
(a) Is the model stationary? Since Φ = 0.8, the seasonal part of the model is stationary. In the nonseasonal part, φ1 = 1.6 and φ2 = −0.7. These parameter values satisfy Equations (4.3.11) on page 72. Therefore the complete model is stationary.
(b) Identify the model as a certain seasonal ARIMA model. The model is a stationary, seasonal ARIMA(2,0,0)×(1,0,0)12 with φ1 = 1.6, φ2 = −0.7, and Φ = 0.8. Expanded out it would be
Yt = 1.6Yt−1 − 0.7Yt−2 + 0.8Yt−12 − (1.6)(0.8)Yt−13 + (0.7)(0.8)Yt−14 + et = 1.6Yt−1 − 0.7Yt−2 + 0.8Yt−12 − 1.28Yt−13 + 0.56Yt−14 + et
Exercise 10.3 Suppose that {Yt} satisfies Yt = a + bt + St + Xt, where St is deterministic and periodic with period s and {Xt} is a seasonal ARIMA(p,0,q)×(P,1,Q)s series. What is the model for Wt = Yt − Yt−s?
Wt = Yt − Yt−s = (a + bt + St + Xt) − (a + b(t − s) + St−s + Xt−s) = bs + (St − St−s) + ∇sXt = bs + ∇sXt
since St = St−s. Therefore {Wt} is ARMA(p,q)×(P,Q)s with constant term bs.
Exercise 10.4 For the seasonal model Yt = ΦYt−4 + et − θet−1 with |Φ| < 1, find γ0 and ρk.
γ0 = Var(Yt) = Φ²Var(Yt−4) + (1 + θ²)σe² = Φ²γ0 + (1 + θ²)σe²
So γ0 = [(1 + θ²)/(1 − Φ²)]σe². Furthermore, since E(et−1Yt−1) = E[et−1(ΦYt−5 + et−1 − θet−2)] = σe², we have
γ1 = E[ΦYt−4Yt−1 + etYt−1 − θet−1Yt−1] = Φγ3 − θσe²
For k > 1, we have
γk = E[ΦYt−4Yt−k + etYt−k − θet−1Yt−k] = Φγk−4
Setting k = 3, we obtain γ3 = Φγ−1 = Φγ1. This may be solved simultaneously with γ1 = Φγ3 − θσe² to yield γ1 = −[θ/(1 − Φ²)]σe², which, in turn, gives ρ1 = −θ/(1 + θ²). For general k, we use the recursion ρk = Φρk−4. For k = 2, we have ρ2 = Φρ−2 = Φρ2 and so ρ2 = 0. We have further ρ3 = Φρ−1 = Φρ1, ρ4 = Φρ0 = Φ, and so on. In summary, γ0 = [(1 + θ²)/(1 − Φ²)]σe², ρ4k = Φ^k, ρ4k+1 = ρ4k−1 = −[θ/(1 + θ²)]Φ^k, and ρ4k+2 = 0 for k = 0, 1, 2,…
Exercise 10.5 Identify the following as certain multiplicative seasonal ARIMA models:
(a) Yt = 0.5Yt−1 + Yt−4 − 0.5Yt−5 + et − 0.3et−1. Rewriting as Yt − Yt−4 = 0.5(Yt−1 − Yt−5) + et − 0.3et−1, we see that it is an ARIMA(1,0,1)×(0,1,0)4 model with φ1 = 0.5 and θ1 = 0.3. Alternatively, we could factor the AR characteristic polynomial as 1 − 0.5x − x⁴ + 0.5x⁵ = (1 − 0.5x)(1 − x⁴) and pick off the model and coefficients.
(b) Yt = Yt−1 + Yt−12 − Yt−13 + et − 0.5et−1 − 0.5et−12 + 0.25et−13. [Typos in first printing of the book!] Rewriting as (Yt − Yt−1) − (Yt−12 − Yt−13) = ∇∇12Yt = et − 0.5et−1 − 0.5et−12 + (0.5)(0.5)et−13, we see that the model is an ARIMA(0,1,1)×(0,1,1)12 with θ1 = 0.5, Θ1 = 0.5, and seasonal period 12.
Exercise 10.6 Verify Equations (10.2.11) on page 232. The model is Yt = ΦYt−12 + et − θet−1 with 0 < |Φ| < 1, so we have Var(Yt) = Φ²Var(Yt−12) + (1 + θ²)σe², or γ0 = Φ²γ0 + (1 + θ²)σe², which gives γ0 = [(1 + θ²)/(1 − Φ²)]σe². Now γ1 = E[ΦYt−12Yt−1 + etYt−1 − θet−1Yt−1] = Φγ11 − θσe² and, for k > 1, γk = E[ΦYt−12Yt−k + etYt−k − θet−1Yt−k] = Φγk−12. Putting k = 11, we have γ11 = Φγ1, which we may solve simultaneously with γ1 = Φγ11 − θσe² to obtain γ1 = −[θ/(1 − Φ²)]σe² and then ρ1 = −θ/(1 + θ²). With k = 2 and k = 10, we get the pair γ2 = Φγ10 and γ10 = Φγ2, which imply γ2 = γ10 = 0. Similarly, the pair k = 3 and k = 9 yield γ3 = γ9 = 0, and eventually ρ2 = ρ3 = … = ρ10 = 0. However, ρ11 = Φρ1 and ρ12 = Φρ0 = Φ. Furthermore, ρ13 = Φρ1 = ρ11 and ρ14 = ρ15 = … = ρ23 = 0. Similarly, ρ24 = Φρ12 = Φ². In summary, ρ12k = Φ^k for k = 1, 2, 3,…, ρ12k−1 = ρ12k+1 = −[θ/(1 + θ²)]Φ^k for k = 1, 2, 3,…, and all other autocorrelations are zero.
Exercise 10.7 Suppose that the process {Yt} develops according to Yt = Yt−4 + et with Yt = et for t = 1, 2, 3, and 4.
(a) Find the variance function for {Yt}. First note that E(Yt) = 0. Now write t = 4k + r where r = 1, 2, 3, or 4 and k = 0, 1, 2, 3,… (r is the quarter indicator and k is the year.) Now iterating back on t we have
Yt = Yt−4 + et = (Yt−8 + et−4) + et = (Yt−12 + et−8) + et−4 + et = … = et + et−4 + et−8 + … + er+4 + er
Since there are exactly k + 1 e's in the sum, we have Var(Yt) = (k + 1)σe².
(b) Find the autocorrelation function for {Yt}. Let s > t and express s = 4j + i where i = 1, 2, 3, or 4 and j = 0, 1, 2, 3,… Then
Cov(Yt,Ys) = Cov(et + et−4 + … + er, es + es−4 + … + ei)
If r ≠ i, then there will be no overlap among the two sets of e's and the covariance will be zero. If r = i, that is, s and t correspond to the same quarter of the year, then the e's will be the same from t down to r and Cov(Yt,Ys) = Var(Yt). In summary, for s > t,
Cov(Yt,Ys) = (k + 1)σe² if r = i, and 0 otherwise
Note: r = i is equivalent to s − t being divisible by 4. Finally,
Corr(Yt,Ys) = √[(k + 1)/(j + 1)] if s = 4j + r and t = 4k + r (r = 1, 2, 3, 4), and 0 otherwise
For large t, observations for the same quarter in neighboring years will be strongly positively correlated. Observations in different quarters will always be uncorrelated.
(c) Identify the model for {Yt} as a certain seasonal ARIMA model. Since Yt − Yt−4 = et, this is a seasonal ARIMA(0,0,0)×(0,1,0)4. That is, the quarterly seasonal difference is white noise.
Exercise 10.8 Consider the Alert, Canada, monthly carbon dioxide time series shown in Exhibit (10.1), page 227. The data are in the file named co2.
(a) Fit a deterministic seasonal means plus linear time trend model to these data. Are any of the regression coefficients "statistically significant"?
> data(co2); month.=season(co2); trend=time(co2)
> model=lm(co2~month.+trend); summary(model)
Call: lm(formula = co2 ~ month. + trend)
Residuals:
     Min       1Q   Median       3Q      Max
-1.73874 -0.59689 -0.06947  0.54086  2.15539
Coefficients:
                  Estimate Std. Error t value Pr(>|t|)
(Intercept)     -3290.5412    44.1790 -74.482  < 2e-16 ***
month.February      0.6682     0.3424   1.952 0.053320 .
month.March         0.9637     0.3424   2.815 0.005715 **
month.April         1.2311     0.3424   3.595 0.000473 ***
month.May           1.5275     0.3424   4.460 1.87e-05 ***
month.June         -0.6761     0.3425  -1.974 0.050696 .
month.July         -7.2851     0.3426 -21.267  < 2e-16 ***
month.August      -13.4414     0.3426 -39.232  < 2e-16 ***
month.September   -12.8205     0.3427 -37.411  < 2e-16 ***
month.October      -8.2604     0.3428 -24.099  < 2e-16 ***
month.November     -3.9277     0.3429 -11.455  < 2e-16 ***
month.December     -1.3367     0.3430  -3.897 0.000161 ***
trend               1.8321     0.0221  82.899  < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.8029 on 119 degrees of freedom
Multiple R-squared: 0.9902, Adjusted R-squared: 0.9892
F-statistic: 997.7 on 12 and 119 DF, p-value: < 2.2e-16
All of the regression coefficients are statistically significant except for the seasonal effects for February and June. Those two have p-values just above 0.05.
(b) What is the multiple R-squared for this model? The multiple R-squared can be read off the output as 99.02%.
(c) Now calculate the sample autocorrelation of the residuals from this model. Interpret the results.
> acf(residuals(model))
Clearly, this deterministic trend model has not captured the autocorrelation in this time series. The seasonal ARIMA model illustrated in Chapter 10 is a much better model for these data.
Exercise 10.9 The monthly airline passenger time series, first investigated in Box and Jenkins (1976), is considered a classic time series. The data are in the file named airline. [Typo: The filename is airpass.]
(a) Display the time series plots of both the original series and the logarithms of the series. Argue that taking logs is an appropriate transformation.
> win.graph(width=3.25,height=2.5,pointsize=8)
> data(airpass); plot(airpass,type='o',ylab='Air Passengers')
> plot(log(airpass),type='o',ylab='Log(Air Passengers)')
The graph of the logarithms displays a much more constant variation around the upward "trend."
(b) Display and interpret the time series plots of the first difference of the logged series.
> win.graph(width=6.5,height=3,pointsize=8)
> plot(diff(log(airpass)),type='o',ylab='Difference of Log(Air Passengers)')
This series appears to be stationary. However, seasonality, if any, could be hiding in this plot. Perhaps a plot using seasonal plotting symbols would reveal the seasonality.
> plot(diff(log(airpass)),type='l',ylab='Difference of Log(Air Passengers)')
> points(y=diff(log(airpass)),x=time(diff(log(airpass))),pch=as.vector(season(diff(log(airpass)))))
The seasonality can be observed by looking at the plotting symbols carefully. Septembers, Octobers, and Novembers are mostly at the low points, and Decembers are mostly at the high points.
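A side note on interpreting the differenced log series in part (b): log(yt) − log(yt−1) = log(yt/yt−1) is approximately the proportional month-over-month change, so the plotted series can be read as a growth rate. A quick numeric check (toy values, plain Python):

```python
import math

# Toy numbers, not from the airpass series: a 5% month-over-month increase.
y_prev, y_curr = 200.0, 210.0

log_diff = math.log(y_curr) - math.log(y_prev)   # what diff(log(y)) computes
pct_change = (y_curr - y_prev) / y_prev          # the exact relative change

print(round(log_diff, 4))    # 0.0488
print(round(pct_change, 4))  # 0.05
# The two agree to first order; the approximation improves as changes shrink.
```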
(c) Display and interpret the time series plot of the seasonal difference of the first difference of the logged series.
> plot(diff(diff(log(airpass)),lag=12),type='l', ylab='First & Seasonal Differences of Log(AirPass)')
> points(diff(diff(log(airpass)),lag=12),x=time(diff(diff(log(airpass)),lag=12)), pch=as.vector(season(diff(diff(log(airpass)),lag=12))))
We chose to do the plot with seasonal plotting symbols. The seasonality is much less obvious now. Some Decembers are high and some low. Similarly, some Octobers are high and some low.
(d) Calculate and interpret the sample ACF of the seasonal difference of the first difference of the logged series.
> acf(as.vector(diff(diff(log(airpass)),lag=12)),ci.type='ma', main='First & Seasonal Differences of Log(AirPass)')
Although there is a "significant" autocorrelation at lag 3, the most prominent autocorrelations are at lags 1 and 12, and the "airline" model seems like a reasonable choice to investigate.
(e) Fit the "airline model" (ARIMA(0,1,1)×(0,1,1)12) to the logged series.
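Before reproducing the fit, it may help to recall what the airline model implies: after the first and seasonal differences, Wt = (1 − θB)(1 − ΘB¹²)et is an MA(13) whose only nonzero autocorrelations fall at lags 1, 11, 12, and 13, which is exactly the pattern flagged in part (d). A small sketch of the theoretical ACF (plain Python; the θ and Θ values here are illustrative placeholders, not fitted estimates):

```python
# Theoretical ACF implied by the MA part of the airline model:
# W_t = (1 - theta*B)(1 - Theta*B^12) e_t.
# theta and Theta below are illustrative values, not estimates.
theta, Theta = 0.4, 0.6

gamma0_over_sigma2 = (1 + theta**2) * (1 + Theta**2)
rho = {
    1:  -theta / (1 + theta**2),
    11: theta * Theta / gamma0_over_sigma2,
    12: -Theta / (1 + Theta**2),
    13: theta * Theta / gamma0_over_sigma2,
}
# Every other lag has autocorrelation exactly 0, which is why a sample ACF
# with spikes mainly at lags 1 and 12 (with small side lobes at 11 and 13)
# points toward this model.
for lag in sorted(rho):
    print(lag, round(rho[lag], 3))
```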
> model=arima(log(airpass),order=c(0,1,1),seasonal=list(order=c(0,1,1),period=12))
> model
Call:
arima(x = log(airpass), order = c(0, 1, 1), seasonal = list(order = c(0, 1, 1), period = 12))
Coefficients:
          ma1     sma1
      -0.4018  -0.5569
s.e.   0.0896   0.0731
sigma^2 estimated as 0.001348: log likelihood = 244.7, aic = -485.4
Notice that both the seasonal and nonseasonal MA parameters are significant.
(f) Investigate diagnostics for this model, including autocorrelation and normality of the residuals.
> win.graph(width=6.5,height=6); tsdiag(model)
None of these three plots indicates difficulties with the model. There are no outliers and little autocorrelation in the residuals, both individually and jointly.
Let's look at normality.
> win.graph(width=4,height=3,pointsize=8); hist(residuals(model),xlab='Residuals')
> qqnorm(residuals(model)); qqline(residuals(model))
The distribution of the residuals is quite symmetric, but the Q-Q plot indicates that the tails are lighter than a normal distribution. Let's do a Shapiro-Wilk test.
> shapiro.test(residuals(model))
Shapiro-Wilk normality test
data: residuals(model)
W = 0.9864, p-value = 0.1674
The Shapiro-Wilk test does not reject normality of the error terms at any of the usual significance levels, and we proceed to use the model for forecasting.
(g) Produce forecasts for this series with a lead time of two years. Be sure to include forecast limits.
> win.graph(width=6.5,height=3,pointsize=8)
> plot(model,n1=c(1969,1),n.ahead=24,pch=19,ylab='Log(Air Passengers)')
The forecasts follow the seasonal and upward "trend" of the time series nicely. The forecast limits provide us with a clear measure of the uncertainty in the forecasts. For completeness, we also plot the forecasts and limits in original terms.
> plot(model,n1=c(1969,1),n.ahead=24,pch=19,ylab='Air Passengers',transform=exp)
In original terms it is easier to see that the forecast limits spread out as we get further into the future.
Exercise 10.10 Exhibit (5.8), page 99 displayed the monthly electricity generated in the United States. We argued there that taking logarithms was appropriate for modeling. Exhibit (5.10), page 100 showed the time series plot of the first differences for this series. The filename is electricity.
(a) Calculate the sample ACF of the first difference of the logged series. Is the seasonality visible in this display?
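For reference, the statistic being computed here is the lag-k sample autocorrelation rk = Σ(yt − ȳ)(yt+k − ȳ) / Σ(yt − ȳ)². A toy sketch (plain Python on synthetic data, not the electricity series) shows why a strongly seasonal monthly series must produce a large r12:

```python
import math

def sample_acf(y, k):
    # Lag-k sample autocorrelation:
    # r_k = sum_t (y_t - ybar)(y_{t+k} - ybar) / sum_t (y_t - ybar)^2
    n = len(y)
    ybar = sum(y) / n
    num = sum((y[t] - ybar) * (y[t + k] - ybar) for t in range(n - k))
    den = sum((v - ybar) ** 2 for v in y)
    return num / den

# A toy monthly series with a pure period-12 cycle and no noise: the
# seasonality appears as a large positive autocorrelation at lag 12.
y = [math.sin(2 * math.pi * t / 12) for t in range(120)]
print(round(sample_acf(y, 12), 2))  # 0.9
print(round(sample_acf(y, 6), 2))   # -0.95 (half a cycle out of phase)
```

The lag-12 value falls short of 1 only because the sum in the numerator has 12 fewer terms than the denominator.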
> data(electricity); acf(diff(log(as.vector(electricity))),lag.max=36)
The very strong autocorrelations at lags 12, 24, and 36 point out the substantial seasonality in this time series.
(b) Plot the time series of the seasonal difference and first difference of the logged series. Does a stationary model seem appropriate now?
> plot(diff(diff(log(electricity),lag=12)),ylab='Seasonal & First Difference of Log(Electricity)')
The time series plot appears stationary, but the seasonality will still have to be investigated further and modeled.
(c) Display the sample ACF of the series after a seasonal difference and a first difference have been taken of the logged series. What model(s) might you consider for the electricity series?
> acf(diff(diff(log(as.vector(electricity)),lag=12)),lag.max=36)
After seasonal differencing, the strong autocorrelations at lags 24 and 36 have disappeared. Perhaps a stationary model could now be entertained. The "airline" model, ARIMA(0,1,1)×(0,1,1)12 for the logarithms, might capture most of the autocorrelation structure in this series.
Exercise 10.11 The quarterly earnings per share for 1960–1980 of the U.S. company Johnson & Johnson are saved in the file named JJ.
(a) Plot the time series and also the logarithm of the series. Argue that we should transform by logs to model this series.
> data(JJ); win.graph(width=6.5,height=3,pointsize=8); oldpar=par(mfrow=c(1,2))
> plot(JJ,ylab='Earnings',type='o'); plot(log(JJ),ylab='Log(Earnings)',type='o')
> par(oldpar)
In the plot at the left it is clear that at the higher values of the series there is also much more variation. The plot of the logs at the right shows much more equal variation at all levels of the series. We use logs for all of the remaining modeling.
(b) The series is clearly not stationary. Take first differences and plot that series. Does stationarity now seem reasonable?
> plot(diff(log(JJ)),ylab='Difference of Log(Earnings)',type='o')
We do not expect stationary series to have less variability in the middle of the series, as this one does, but we might entertain a stationary model and see where it leads us.
(c) Calculate and graph the sample ACF of the first differences. Interpret the results.
> acf(diff(log(as.vector(JJ))),ci.type='ma')
In this quarterly series, the strongest autocorrelations are at the seasonal lags of 4, 8, 12, and 16. Clearly, we need to address the seasonality in this series.
(d) Display the plot of seasonal differences and the first differences. Interpret the plot. Recall that for quarterly data, a season is of length 4.
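To see why this combination of differences is natural: one first difference plus one seasonal difference exactly annihilates a linear trend plus a fixed quarterly pattern. A toy check (plain Python, illustrative numbers rather than the JJ data):

```python
# Toy series y_t = a + b*t + s_{t mod 4}: linear trend plus a fixed
# quarterly seasonal effect (all numbers illustrative).
a, b = 2.0, 0.5
s = [1.0, -0.5, 0.25, -0.75]          # arbitrary fixed quarterly effects
y = [a + b * t + s[t % 4] for t in range(40)]

# The seasonal (lag-4) difference removes the quarterly pattern, leaving the
# constant 4*b; the additional first difference removes that constant too.
seasonal = [y[t] - y[t - 4] for t in range(4, len(y))]
both = [seasonal[t] - seasonal[t - 1] for t in range(1, len(seasonal))]

print(set(seasonal))                # {2.0}, i.e. 4*b
print(all(v == 0.0 for v in both))  # True
```

For real data the differenced series is of course not identically zero, but the deterministic trend and seasonal-mean components are removed in the same way.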
> series=diff(diff(log(JJ),lag=4))
> plot(series,ylab='Seasonal & First Difference',type='l')
> points(y=series,x=time(series),pch=as.vector(season(series)))
The various quarters seem to be quite randomly distributed among high, middle, and low values, so most of the seasonality is accounted for by the seasonal difference.
(e) Graph and interpret the sample ACF of the seasonal differences with the first differences.
> acf(as.vector(series),ci.type='ma')
The only significant autocorrelations are at lags 1 and 7. Lag 4 (the quarterly lag) is nearly significant.
(f) Fit the model ARIMA(0,1,1)×(0,1,1)4, and assess the significance of the estimated coefficients.
> model=arima(log(JJ),order=c(0,1,1),seasonal=list(order=c(0,1,1),period=4)); model
Call:
arima(x = log(JJ), order = c(0, 1, 1), seasonal = list(order = c(0, 1, 1), period = 4))
Coefficients:
          ma1     sma1
      -0.6809  -0.3146
s.e.   0.0982   0.1070
sigma^2 estimated as 0.00793: log likelihood = 78.38, aic = -152.75
Both the seasonal and nonseasonal MA parameters are significant in this model.
(g) Perform all of the diagnostic tests on the residuals.
> win.graph(width=6.5,height=6); tsdiag(model)
These diagnostic plots do not show any inadequacies with the model. No outliers are detected and there is little autocorrelation in the residuals.
On to normality.
> win.graph(width=3,height=3,pointsize=8); hist(residuals(model),xlab='Residuals')
> qqnorm(residuals(model)); qqline(residuals(model)); shapiro.test(residuals(model))
Shapiro-Wilk normality test
data: residuals(model)
W = 0.9858, p-value = 0.489
Normality of the error terms looks like a very good assumption.
(h) Calculate and plot forecasts for the next two years of the series. Be sure to include forecast limits.
> win.graph(width=6.5,height=3,pointsize=8)
> plot(model,n1=c(1978,1),n.ahead=8,pch=19,ylab='Log(Earnings)')
The forecasts follow the general pattern of seasonality and "trend" in the earnings series, and the forecast limits give a good indication of the confidence in these forecasts.
Lastly, we display the forecasts in original terms.
> plot(model,n1=c(1978,1),n.ahead=8,pch=19,ylab='Earnings',transform=exp)
In original terms, the uncertainty in the forecasts is easier to understand.
Exercise 10.12 The file named boardings contains monthly data on the number of people who boarded transit vehicles (mostly light rail trains and city buses) in the Denver, Colorado, region for August 2000 through December 2005.
(a) Produce the time series plot for these data. Be sure to use plotting symbols that will help you assess seasonality. Does a stationary model seem reasonable?
> data(boardings); series=boardings[,1]
> plot(series,type='l',ylab='Light Rail & Bus Boardings')
> points(y=series,x=time(series),pch=as.vector(season(series)))
As one would expect, there is substantial seasonality in this series.
Decembers are generally low due to the holidays, and Septembers are usually quite high due to the start of school. There may also be a gradual upward "trend" that may need to be modeled with some kind of nonstationarity.
(b) Calculate and plot the sample ACF for this series. At which lags do you have significant autocorrelation?
> acf(as.vector(series),ci.type='ma')
There are significant autocorrelations at lags 1, 5, 6, and 12. Perhaps the following models will incorporate these effects.
(c) Fit an ARMA(0,3)×(1,0)12 model to these data. Assess the significance of the estimated coefficients.
> model=arima(series,order=c(0,0,3),seasonal=list(order=c(1,0,0),period=12)); model
Call:
arima(x = series, order = c(0, 0, 3), seasonal = list(order = c(1, 0, 0), period = 12))
Coefficients:
         ma1     ma2     ma3    sar1  intercept
      0.7290  0.6116  0.2950  0.8776    12.5455
s.e.  0.1186  0.1172  0.1118  0.0507     0.0354
sigma^2 estimated as 0.0006542: log likelihood = 143.54, aic = -277.09
All of these coefficients are statistically significant at the usual significance levels.
(d) Overfit with an ARMA(0,4)×(1,0)12 model. Interpret the results.
> model2=arima(series,order=c(0,0,4),seasonal=list(order=c(1,0,0),period=12)); model2
Call:
arima(x = series, order = c(0, 0, 4), seasonal = list(order = c(1, 0, 0), period = 12))
Coefficients:
         ma1     ma2     ma3     ma4    sar1  intercept
      0.7277  0.6686  0.4244  0.1414  0.8918    12.5459
s.e.  0.1212  0.1327  0.1681  0.1228  0.0445     0.0419
sigma^2 estimated as 0.0006279: log likelihood = 144.22, aic = -276.45
In this model, the added coefficient, ma4, is not statistically significant.
Furthermore, the coefficients in common have changed very little, especially in light of the sizes of their standard errors. Finally, the AIC value is slightly better for the simpler model.
Chapter 11
Exercise 11.1 Let us draw a time series plot of the logarithms of the airmiles data from January 1996 to May 2005, using different symbols to denote different months.
> library(TSA)
> data(airmiles)
> win.graph(width=10, height=6,pointsize=8)
> plot(log(airmiles),ylab='Log(airmiles)',xlab='Year')
> Month=c("J","F","M","A","M","J","J","A","S","O","N","D")
> points(log(airmiles),pch=Month)
Figure 1: Plot of Log(airmiles) with Monthly Symbols
We can easily see that the air traffic is generally higher during July, August, and December, and lower in January and February. So the seasonality is obvious.
Exercise 11.2 Equation (11.1.7) gives
mt = ω(1 − δ^(t−T))/(1 − δ) for t > T, and mt = 0 otherwise.
If t ≤ T, then mt = mt−1 = 0 and S(T)t−1 = 0. Hence, for t ≤ T the "AR(1)"-type recursion
mt = δ mt−1 + ω S(T)t−1    (1)
holds. When t = T + 1, we have mT+1 = ω, mT = 0, and S(T)T = 1, so recursion (1) still holds. For t ≥ T + 2, from (11.1.7) we get mt = ω(1 − δ^(t−T))/(1 − δ), mt−1 = ω(1 − δ^(t−T−1))/(1 − δ), and S(T)t−1 = 1. It is easy to check that (1) also holds in this case.
Exercise 11.3 When δ = 0.7, the half-life for the intervention effect specified in Equation (11.1.6) is log(0.5)/log(0.7) = 1.94.
Exercise 11.4 The half-life for the intervention effect specified in Equation (11.1.6) is log(0.5)/log(δ), which satisfies lim(δ↑1) log(0.5)/log(δ) = ∞.
Exercise 11.5 For t > T, we have
mt = ω(1 − δ^(t−T))/(1 − δ) = ω(1 + δ + ⋯ + δ^(t−T−1)) → ω(t − T), as δ → 1.
For t ≤ T, mt is always 0.
So lim(δ→1) mt = ω(t − T) for t > T, and 0 otherwise.
Exercise 11.6 (a) The intervention model is given by
mt = [ωB/(1 − δB)] S(T)t ⟺ mt = δ mt−1 + ω S(T)t−1,
which is the model given by Equation (11.1.6). With the initial condition m0 = 0, we know m0 = m1 = ⋯ = mT = 0 and mT+1 = ω. So the jump at time T + 1 is of height ω.
(b) By Exercise 11.2 we know that mt = ω(1 − δ^(t−T))/(1 − δ) for t > T, and 0 otherwise. Since 0 < δ < 1, we obtain
lim(t→∞) mt = ω lim(t→∞) (1 − δ^(t−T))/(1 − δ) = ω/(1 − δ).
Exercise 11.7 The intervention model is given by
mt = [ωB/(1 − B)] S(T)t ⟺ mt = mt−1 + ω S(T)t−1,
with the initial condition m0 = 0. Then m0 = m1 = ⋯ = mT = 0 and mt = mt−1 + ω for t ≥ T + 1. So the effect increases linearly, starting at time T + 1, with slope ω.
Exercise 11.8 (a) The intervention model is given by
mt = [ωB/(1 − δB)] P(T)t ⟺ mt = δ mt−1 + ω P(T)t−1,
with the initial condition m0 = 0. Then m0 = m1 = ⋯ = mT = 0 and mT+1 = mT + ω = ω.
(b) For t ≥ T + 2, since P(T)t−1 = 0, we have mt = δ mt−1, so mt = δ^(t−T−1) ω → 0 as t → ∞.
Exercise 11.9 (a) The intervention model is given by
mt = [ω1 B/(1 − δB) + ω2 B/(1 − B)] P(T)t
⟺ mt = (δ + 1) mt−1 − δ mt−2 + (ω1 + ω2) P(T)t−1 − (ω1 + δω2) P(T)t−2,    (2)
with the initial condition m0 = m1 = 0. Then m0 = m1 = ⋯ = mT = 0 and mT+1 = (ω1 + ω2)·1 = ω1 + ω2.
(b) By (2), we have mT+2 = (δ + 1)(ω1 + ω2) − (ω1 + δω2) = δω1 + ω2. It is easy to prove by induction that for all t ≥ T + 1,
mt = δ^(t−T−1) ω1 + ω2 → ω2, as t → ∞.
Exercise 11.10 (a) The intervention model is given by
mt = [ω0 + ω1 B/(1 − δB) + ω2 B/(1 − B)] P(T)t
⟺ mt = (δ + 1) mt−1 − δ mt−2 + ω0 P(T)t + [ω1 + ω2 − (δ + 1) ω0] P(T)t−1 + (δω0 − ω1 − δω2) P(T)t−2,    (3)
with the initial condition m0 = m1 = 0. Then m0 = m1 = ⋯ = mT−1 = 0 and mT = ω0·1 = ω0.
(b) By (3), we have mT+1 = (δ + 1) ω0 + [ω1 + ω2 − (δ + 1) ω0] = ω1 + ω2.
(c) Again by (3), we have mT+2 = (δ + 1)(ω1 + ω2) − δω0 + (δω0 − ω1 − δω2) = δω1 + ω2.
It is easy to prove by induction that for all t ≥ T + 1, mt = δ^(t−T−1) ω1 + ω2 → ω2, as t → ∞.
Exercise 11.11 The sample CCF between the generated 100 pairs (Xt, Yt) is shown below.
> set.seed(12345)
> X=2*rnorm(105)
> Y=zlag(X,3)+rnorm(105)
> X=ts(X[-(1:5)],start=1,freq=1)
> Y=ts(Y[-(1:5)],start=1,freq=1)
> win.graph(width=4.875, height=2.5,pointsize=8)
> ccf(X,Y,ylab='CCF')
Figure 2: Sample CCF From Equation (11.3.1) with d = 3
The sample CCF of the simulated data is significant only at lag −3. This result coincides with the theoretical analysis: the CCF should be zero except at lag −3, where it equals ρ−3(X, Y) = 2/√5 ≈ 0.89.
Exercise 11.12 Suppose X and Y are two independent, stationary AR(1) time series with parameters φX and φY. The autocorrelations of X and Y are ρk(X) = φX^k and ρk(Y) = φY^k, respectively. Equation (11.3.5) tells us that the variance of √n·rk(X, Y) is approximately
1 + 2 Σ(k=1..∞) ρk(X) ρk(Y) = 1 + 2 Σ(k=1..∞) φX^k φY^k = (1 + φXφY)/(1 − φXφY).
So the variance of rk(X, Y) is approximately (1 + φXφY)/[n(1 − φXφY)], which is given by Equation (11.3.6).
Exercise 11.13 Since Ỹt = Σ(k=−∞..∞) βk X̃t−k + Z̃t, where X̃ is a white noise sequence independent of Z̃, we have
ρk(X̃, Ỹ) = Cov(Ỹt, X̃t+k)/(σX̃ σỸ) = β−k σX̃²/(σX̃ σỸ) = β−k σX̃/σỸ.
Exercise 11.14 The following three graphs are a simulated AR(1) time series and its ACF and PACF plots.
> y=arima.sim(model=list(ar=.7),n=48)
> plot(y,type='o')
> acf(y)
> pacf(y)
Figure 3: Simulated AR(1) Time Series
(a) If we add a step function response of ω = 1 at time t = 36 to the simulated AR(1) time series, then the plots of the new time series and its ACF and PACF are given below.
> a=c(rep(0,35),rep(1,13))
> y1=y+a
> plot(y1,type='o')
> acf(y1)
> pacf(y1)
Figure 4: Simulated AR(1) Time Series Plus a Step Response
(b) If we add a pulse function response of ω = 1 at time t = 36 to the simulated AR(1) time series, then the plots of the new time series and its ACF and PACF are given below.
> b=c(rep(0,35),1,rep(0,12))
> y2=y+b
> plot(y2,type='o')
> acf(y2)
> pacf(y2)
Figure 5: Simulated AR(1) Time Series Plus a Pulse Response
Exercise 11.15 (a) To verify Exhibit 11.5:
> acf(diff(diff(window(log(airmiles),end=c(2001,8)),12)),lag.max=48)
Figure 6: ACF of log(airmiles)
(b) The plot above tells us that the ACF of the twice-differenced series of logarithms of the airmiles data is significantly different from 0 only at lag 1/12 (1 month). It suggests an ARIMA(0,1,1)×(0,1,0)12 model.
> m1.airmiles=arima(window(log(airmiles),end=c(2001,8)),order=c(0,1,1),
+ seasonal=list(order=c(0,1,0),period=12))
> m1.airmiles
Call:
arima(x = window(log(airmiles), end = c(2001, 8)), order = c(0, 1, 1), seasonal = list(order = c(0, 1, 0), period = 12))
Coefficients:
          ma1
      -0.4391 **
s.e.   0.1173
sigma^2 estimated as 0.00143: log likelihood = 101.98, aic = -199.96
The coefficient estimate is significant. Then let's look at the time series plot of the residuals from this model.
> plot(rstandard(m1.airmiles),ylab='Standard Residuals',type='b')
> abline(h=0)
Figure 7: Residuals of model 1
From the residuals plot, we see that except for January 1998 the residuals are distributed around 0 with no pattern. This leads us to check for possible outliers.
> detectAO(m1.airmiles)
              [,1]
ind      25.000000
lambda2   8.114302 ***
> detectIO(m1.airmiles)
              [,1]
ind      25.000000
lambda1   8.434749 ***
We see that both λ̂1 and λ̂2 are highly significant (much larger than the critical value with α = 5% and n = 68). So next we add an innovative outlier to the previous model.
(c) Let's fit an ARIMA(0,1,1)×(0,1,0)12 + an outlier model to the logarithms of the preintervention data.
> m2.airmiles=arimax(window(log(airmiles),end=c(2001,8)),order=c(0,1,1),
+ seasonal=list(order=c(0,1,0),period=12),io=c(25))
> m2.airmiles
Call:
arimax(x = window(log(airmiles), end = c(2001, 8)), order = c(0, 1, 1), seasonal = list(order = c(0, 1, 0), period = 12), io = c(25))
Coefficients:
          ma1     IO-25
      -0.3894 ** 0.2132 ***
s.e.   0.0888    0.0248
sigma^2 estimated as 0.0006092: log likelihood = 125.47, aic = -244.94
In this model, the IO effect is highly significant. Compared with the previous model without the outlier, the AIC is much better and θ̂ has changed significantly. The residuals plot of this model is given by
> plot(rstandard(m2.airmiles),ylab='Standard Residuals of m2',type='b')
Figure 8: Residuals of model 2
The residuals now look much better distributed. Furthermore, let's look at the sample ACF of the residuals.
> acf(as.vector(rstandard(m2.airmiles)),lag.max=68)
Figure 9: ACF of residuals of model 2
We see from the graph above that the ACF of the residuals is significantly different from 0 at lags 2, 7, 24, and 25. This means that there is still some autocorrelation among the residuals; one more coefficient is needed. Next we'll fit the model ARIMA(0,1,1)×(0,1,1)12 + an outlier.
(d) Let's fit an ARIMA(0,1,1)×(0,1,1)12 + an outlier model to the logarithms of the preintervention data.
> m3.airmiles=arimax(window(log(airmiles),end=c(2001,8)),order=c(0,1,1),
+ seasonal=list(order=c(0,1,1),period=12),io=c(25))
> m3.airmiles
Call:
arimax(x = window(log(airmiles), end = c(2001, 8)), order = c(0, 1, 1), seasonal = list(order = c(0, 1, 1), period = 12), io = c(25))
Coefficients:
          ma1        sma1       IO-25
      -0.4030 *** -0.2762 ** 0.2113 ***
s.e.   0.0801      0.0894     0.0235
sigma^2 estimated as 0.0005116: log likelihood = 129.79, aic = -251.58
All coefficient estimates are significantly different from 0. Compared with model 2, none of the estimates has changed much, but the autocorrelation among the residuals almost disappears, as shown below.
> acf(as.vector(rstandard(m3.airmiles)),lag.max=68)
Figure 10: ACF of residuals of model 3
We see that the ACF is significantly different from 0 only at lag 2. This could easily happen by chance alone, so under this model there is essentially no autocorrelation among the residuals. Let's also check the normality of the residuals.
> win.graph(width=2.5,height=2.5,pointsize=8)
> qqnorm(rstandard(m3.airmiles))
> qqline(rstandard(m3.airmiles))
Figure 11: Normal Q-Q Plot
The normal Q-Q plot suggests that the residuals may not be a sample from a normal distribution, but the Shapiro-Wilk test of normality gives a test statistic W = 0.969 with p-value = 0.088, so normality is not rejected at the 5% significance level. We conclude that the ARIMA(0,1,1)×(0,1,1)12 + an outlier model fits the preintervention data well.
Exercise 11.16 (a) The plot of monthly boardings using seasonal symbols is given below.
> library(TSA)
> data(boardings)
> plot(boardings[,1],ylab='Boardings',xlab='Year')
> Month=c("J","F","M","A","M","J","J","A","S","O","N","D")
> points(boardings[,1],pch=Month)
Figure 12: Logarithm of Monthly Boardings
We notice that during the period between August 2000 and March 2006, the peak public transportation boardings appear in February, September, and October, while May and December are at the bottom.
(b) The plot of monthly average gasoline prices using seasonal symbols is given below.
> plot(boardings[,2],ylab='Gas Prices',xlab='Year')
> Month=c("J","F","M","A","M","J","J","A","S","O","N","D")
> points(boardings[,2],pch=Month)
Figure 13: Logarithm of Monthly Average Gasoline Prices
From the graph we see that the average gasoline price is generally higher during February, September, and October, and lower during May.
Exercise 11.17 (a) Fit an AR(2) model to the original data, including the outlier.
> library(TSA)
> data(deere1)
> m1<-arima(deere1,order=c(2,0,0))
> m1
Call:
arima(x = deere1, order = c(2, 0, 0))
Coefficients:
         ar1     ar2  intercept
      0.0269  0.2392     1.4135
s.e.  0.1062  0.1061     0.6275
sigma^2 estimated as 17.68: log likelihood = -234.19, aic = 476.38
(b) Test for possible AO and IO outliers. Both tests tell us that at time t = 27 there is an outlier.
> detectAO(m1)
              [,1]
ind      27.000000
lambda2   8.668582
> detectIO(m1)
             [,1]
ind      27.00000
lambda1   8.81655
(c) Since λ̂1 > λ̂2, let us refit the AR(2) model incorporating an IO outlier at time t = 27.
> m2<-arimax(deere1,order=c(2,0,0),io=c(27))
> m2
Call:
arimax(x = deere1, order = c(2, 0, 0), io = c(27))
Coefficients:
          ar1     ar2  intercept    IO-27
      -0.0018  0.2143     0.9754  28.5946
s.e.   0.0713  0.0713     0.3960   2.8263
sigma^2 estimated as 7.866: log likelihood = -200.96, aic = 409.93
The IO is found to be highly significant. The other coefficient estimates are only slightly changed, although the intercept term (the mean) is affected more.
(d) Model diagnostics can be carried out by the following command
> tsdiag(m2)
which yields Figure 14 shown on the next page. Now the fitted model passes all model diagnostics. In particular, there are no more outliers, as also confirmed by the formal tests.
> detectAO(m2)
"No AO detected"
> detectIO(m2)
"No IO detected"
Figure 14: Model diagnostics
Exercise 11.18 (a) Fit the MA(2) model and test it for both AO and IO outliers.
> data(days)
> m1<-arima(days,order=c(0,0,2))
> detectAO(m1)
              [,1]       [,2]
ind      63.000000 129.000000
lambda2   4.009568   5.344322
> detectIO(m1)
              [,1]       [,2]
ind      63.000000 129.000000
lambda1   4.081066   5.268322
(b) Fit the MA(2) model incorporating the outliers.
> m2=arima(days,order=c(0,0,2),xreg=data.frame(AO=seq(days)==129))
> detectAO(m2)
              [,1]       [,2]
ind      63.000000 106.000000
lambda2   4.369851   3.563718
> detectIO(m2)
              [,1]       [,2]
ind      63.000000 106.000000
lambda1   4.566191   3.654841
> m3=arimax(days,order=c(0,0,2),xreg=data.frame(AO=seq(days)==129),io=c(63))
> m3
Call:
arimax(x = days, order = c(0, 0, 2), xreg = data.frame(AO = seq(days) == 129), io = c(63))
Coefficients:
         ma1     ma2  intercept       AO    IO-63
      0.2283  0.1631    28.0764  36.6490  28.4814
s.e.  0.0823  0.0738     0.7259   5.8214   5.9752
sigma^2 estimated as 35.10: log likelihood = -415.8, aic = 841.6
(c) The fit of the above model is then assessed.
> tsdiag(m3)
Figure 15: Model diagnostics
An outlier is apparent from the residual plot, with the formal tests reported below suggesting an IO at time point 106.
> detectIO(m3)
              [,1]
ind     106.000000
lambda1   3.928688
> detectAO(m3)
              [,1]
ind     106.000000
lambda2   3.805427
(d) We now incorporate all three outliers.
> m4=arimax(days,order=c(0,0,2),xreg=data.frame(AO=seq(days)==129),io=c(63,106))
> m4
Call:
arimax(x = days, order = c(0, 0, 2), xreg = data.frame(AO = seq(days) == 129), io = c(63, 106))
Coefficients:
         ma1     ma2  intercept       AO    IO-63   IO-106
      0.2472  0.1770    27.8145  36.8519  28.7753  23.0643
s.e.  0.0768  0.0683     0.7027   5.4496   5.6200   5.6065
sigma^2 estimated as 31.06: log likelihood = -407.85, aic = 827.7
> tsdiag(m4)
All outliers are found to be significant. No more outliers are found from the time plot of the residuals, which is also confirmed by formal tests (not reported). However, the Ljung-Box test and the residual ACF plot suggest that there is remaining serial autocorrelation. The residual ACF is significant at lag 7, suggesting perhaps a seasonal MA(1) pattern with period 7. We subsequently fitted an enlarged model that adds a seasonal MA(1) coefficient to the above model. However, the seasonal MA(1) coefficient is not significant; see below. Hence, we conclude that the MA(2) plus three outliers model provides a marginally adequate fit to the data.
> m5=arimax(days,order=c(0,0,2),seasonal=list(order=c(0,0,1),period=7),
+   xreg=data.frame(AO=seq(days)==129),io=c(63,106))
> m5

Call:
arimax(x = days, order = c(0, 0, 2), seasonal = list(order = c(0, 0, 1), period = 7),
    xreg = data.frame(AO = seq(days) == 129), io = c(63, 106))

Coefficients:
         ma1     ma2    sma1  intercept       AO    IO-63   IO-106
      0.2432  0.1729  0.0899    27.7658  37.7247  28.1777  23.2698
s.e.  0.0767  0.0698  0.0713     0.7544   5.4619   5.5982   5.5740

sigma^2 estimated as 30.67:  log likelihood = -407.06,  aic = 828.13

Figure 16: Model diagnostics

Exercise 11.19 Let us look at the plots of the log-transformed weekly unit sales of lite potato chips and of the weekly average price over a period of 104 weeks.

> library(TSA)
> data(bluebirdlite)
> ts.bluebirdlite=ts(bluebirdlite)
> plot(ts.bluebirdlite,yax.flip=T)

Figure 17: Weekly Log(Sales) and Price Series for Bluebirdlite Potato Chips

Next, after differencing and prewhitening the data, we draw the plot of the CCF, which is significant only at lag 0, suggesting a strong contemporaneous negative relationship between price and sales: higher prices are associated with lower sales.

> prewhiten(y=diff(ts.bluebirdlite)[,1],x=diff(ts.bluebirdlite)[,2],ylab='CCF')

Figure 18: Sample Cross Correlation Between Prewhitened Differenced Log(Sales) and Price of Lite Potato Chips

We estimate the coefficients from the OLS regression of log(sales) on price and get the following estimates.
> sales=bluebirdlite[,1];price=bluebirdlite[,2]
> m1=lm(sales~price,data=bluebirdlite)
> summary(m1)

Call:
lm(formula = sales ~ price, data = bluebirdlite)

Residuals:
     Min       1Q   Median       3Q      Max
-0.47884 -0.13992  0.01661  0.11243  0.60085

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  13.7894     0.2345   58.81   <2e-16 ***
price        -2.1000     0.1348  -15.57   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1963 on 102 degrees of freedom
Multiple R-Squared: 0.7039, Adjusted R-squared: 0.701
F-statistic: 242.5 on 1 and 102 DF, p-value: < 2.2e-16

The residuals are, however, autocorrelated, as can be seen from their sample ACF and PACF displayed below. Indeed, the sample autocorrelations of the residuals are significant for the first 8 lags, whereas the sample partial autocorrelations are significant at lags 1, 2, 4 and 16.

> acf(residuals(m1),ci.type='ma')

Figure 19: Sample ACF of Residuals from OLS Regression of Log(Sales) on Price

> pacf(residuals(m1))

Figure 20: Sample PACF of Residuals from OLS Regression of Log(Sales) on Price

The sample EACF of the residuals, shown below, contains a triangle of zeroes with a vertex at (1,5), thereby suggesting an ARMA(1,5) model. Hence, we fit a regression model of log(sales) on price with an ARMA(1,5) error.

> eacf(residuals(m1))
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x x x x x x x x x x x  x  x  o
1 x o o x o o o o o o o  o  o  o
2 x x o x o x o o o o o  o  o  o
3 x x o x o o o o o o o  o  o  o
4 o x x o o o o o o o o  o  o  o
5 o o x o o o o o o o o  o  o  o
6 x x x x o o o o o o o  o  o  o
7 x o o o o o o o o o o  o  o  o

It turns out that the estimates of the φ1, θ1 and θ5 coefficients are not significant, and hence a model fixing these coefficients to be zero was subsequently fitted and reported below.
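Such a zero-constrained moving-average model has a distinctive signature. If only θ2 and θ4 are retained (using R's "+" sign convention, r_t = e_t + θ2 e_{t-2} + θ4 e_{t-4}), the theoretical autocorrelation is nonzero only at lags 2 and 4. A quick numeric check of this, using the coefficient values from the constrained fit reported below (pure Python, standard MA autocovariance formula; not code from the book):

```python
# Theoretical ACF of an MA(4) with theta1 = theta3 = 0 (R's "+" sign
# convention): rho_k = sum_i theta_i*theta_{i+k} / sum_i theta_i^2,
# with theta_0 = 1. Coefficient values from this exercise's fit.
theta = [1.0, 0.0, 0.4354, 0.0, 0.5451]   # theta_0 .. theta_4

def ma_acf(theta, k):
    """Theoretical lag-k autocorrelation of an MA(q) process."""
    q = len(theta) - 1
    if k > q:
        return 0.0
    num = sum(theta[i] * theta[i + k] for i in range(q - k + 1))
    den = sum(th * th for th in theta)
    return num / den

rho = [ma_acf(theta, k) for k in range(6)]
print([round(r, 4) for r in rho])   # zero except at lags 0, 2 and 4
```

So fixing θ1 = θ3 = 0 amounts to asserting that the error series is uncorrelated at odd lags, which is what the residual ACF suggested.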
> m2=arima(sales,order=c(1,0,5),xreg=price)
> m3=arima(sales,order=c(0,0,4),xreg=price,fixed=c(0,NA,NA,NA,NA,NA))
> m4=arima(sales,order=c(0,0,4),xreg=price,fixed=c(0,NA,0,NA,NA,NA))
> m4

Call:
arima(x = sales, order = c(0, 0, 4), xreg = price, fixed = c(0, NA, 0, NA, NA, NA))

Coefficients:
      ma1     ma2  ma3     ma4  intercept    price
        0  0.4354    0  0.5451    13.5306  -1.9505
s.e.    0  0.0827    0  0.0926     0.1785   0.1016

sigma^2 estimated as 0.02360:  log likelihood = 46.45,  aic = -82.9

Note that the regression coefficient estimate on price is similar to that from the OLS regression fit earlier, but the standard error of the estimate is about 25% lower than that from the simple OLS regression.

Exercise 11.20 (a) The data show an increasing trend with quasi-periodic behavior.

> data(units)
> plot(units,type='o',xlab='Year')

Figure 21: Time plot of the annual unit sales

(b)
> lm1=lm(units~time(units))
> summary(lm1)

Call:
lm(formula = units ~ time(units))

Residuals:
    Min      1Q  Median      3Q     Max
-43.184 -23.096   6.212  19.289  37.258

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -6532.1466  1466.7183  -4.454 0.000199 ***
time(units)     3.3449     0.7357   4.546 0.000159 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 24.95 on 22 degrees of freedom
Multiple R-squared: 0.4844, Adjusted R-squared: 0.461
F-statistic: 20.67 on 1 and 22 DF, p-value: 0.0001589

The fitted straight line is found to have a significant, positive slope.

(c)

Figure 22: ACF and PACF plots of the residuals from the linear model reported in (b)

It appears that an AR(2) model is appropriate.
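The AR(2) signature being read off here is that the PACF cuts off after lag 2 while the ACF decays. This can be checked numerically from the theoretical ACF of an AR(2) via the standard Durbin-Levinson recursion (a sketch in Python; the coefficient values are chosen close to the fit in part (d), and the recursion is textbook-standard, not code from this solutions manual):

```python
# PACF of an AR(2) via the Durbin-Levinson recursion applied to the
# theoretical ACF. For a true AR(2) the PACF is zero beyond lag 2.
phi1, phi2 = 1.331, -0.8289   # AR coefficients (close to part (d)'s fit)

# Theoretical autocorrelations from the Yule-Walker equations:
# rho_1 = phi1/(1-phi2), rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}.
nlag = 6
rho = [1.0, phi1 / (1 - phi2)]
for k in range(2, nlag + 1):
    rho.append(phi1 * rho[k - 1] + phi2 * rho[k - 2])

def pacf(rho, nlag):
    """Durbin-Levinson: returns [phi_11, phi_22, ..., phi_nn]."""
    out, phis = [], []
    for k in range(1, nlag + 1):
        num = rho[k] - sum(phis[j] * rho[k - 1 - j] for j in range(k - 1))
        den = 1.0 - sum(phis[j] * rho[j + 1] for j in range(k - 1))
        pkk = num / den
        phis = [phis[j] - pkk * phis[k - 2 - j] for j in range(k - 1)] + [pkk]
        out.append(pkk)
    return out

p = pacf(rho, nlag)
print([round(v, 4) for v in p])   # phi_22 equals phi2; zero beyond lag 2
```

Sample PACFs only approximate this cutoff, of course, which is why the lag-3-and-beyond values in Figure 22 hover inside the confidence band rather than at zero.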
(d)
> m1=arima(units, order=c(2,0,0), xreg=data.frame(year=time(units)))
> m1

Call:
arima(x = units, order = c(2, 0, 0), xreg = data.frame(year = time(units)))

Coefficients:
         ar1      ar2  intercept    year
      1.3310  -0.8289  -6672.500  3.4161
s.e.  0.1156   0.1093   1910.761  0.9602

sigma^2 estimated as 94.08:  log likelihood = -90.12,  aic = 188.25

All coefficient estimates are significant. Sales increased by about 3.4 units per year. The intercept term refers to the mean sales at year 0, which is not interesting. If, for example, we are interested in using 1982 as the base year, the model fit should be modified with the covariate being time(units)-1982, so that year 0 refers to 1982 and the intercept is the mean sales in 1982. Below is the modified R code.

> m1a=arima(units, order=c(2,0,0), xreg=data.frame(year=time(units)-1982))
> m1a

Call:
arima(x = units, order = c(2, 0, 0), xreg = data.frame(year = time(units) - 1982))

Coefficients:
         ar1      ar2  intercept    year
      1.3310  -0.8289    98.2637  3.4161
s.e.  0.1154   0.1077     8.5687  0.6613

sigma^2 estimated as 94.08:  log likelihood = -90.12,  aic = 188.25

Note that all other parameter estimates are unchanged. For the fitted AR(2) model,

> # checking if the process is quasi-periodic
> phi1=1.331;phi2=-0.8289
> phi1^2+4*phi2
[1] -1.544039
> # find the quasi-period
> 2*pi/acos(phi1/(2*sqrt(-phi2)))
[1] 8.365753

Thus, besides an increasing trend, the data are quasi-periodic with period around 8.4 years, which is somewhat lower than that suggested by the time plot of the data; the latter seems to suggest a period around 10.
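The quasi-period computation above follows from the complex roots of the AR(2) characteristic polynomial: when φ1² + 4φ2 < 0, the ACF is a damped cosine with period 2π/arccos(φ1/(2√(−φ2))). The same arithmetic can be reproduced outside R as a sanity check (pure Python, same formula and estimates):

```python
import math

# AR(2) estimates from the fitted model m1 above
phi1, phi2 = 1.331, -0.8289

disc = phi1**2 + 4 * phi2          # < 0 => complex roots => quasi-periodic
period = 2 * math.pi / math.acos(phi1 / (2 * math.sqrt(-phi2)))

print(round(disc, 6), round(period, 4))
```

The discriminant is about -1.544 and the period about 8.37 years, matching the R output.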
(e)
> tsdiag(m1)
Error in tsdiag.Arima(m1) : increase gof.lag as psi weights are not small enough for the Ljung-Box tests
In addition: Warning message:
In any(test) : coercing argument of type 'double' to logical
> # need to set tol in the tsdiag function higher
> tsdiag(m1,tol=.2)

Note that in carrying out the model diagnostics, we have to specify the option tol=0.2; this option relaxes the criterion for finding the range of maximum lags in the Ljung-Box tests. The default, tol=0.1, uses a more stringent criterion. Here, the small sample size necessitates the relaxation; otherwise the Ljung-Box tests cannot be carried out. According to the diagnostics, the fitted model provides a good fit to the data.

Figure 23: Model diagnostics for the fitted model reported in (d)

(f) Now, we repeat the analysis with the logarithmically transformed units.

> m2=arima(log(units), order=c(2,0,0), xreg=data.frame(year=time(units)))
> m2

Call:
arima(x = log(units), order = c(2, 0, 0), xreg = data.frame(year = time(units)))

Coefficients:
         ar1      ar2  intercept    year
      1.2869  -0.7750   -46.5788  0.0258
s.e.  0.1269   0.1217        NaN     NaN

sigma^2 estimated as 0.007025:  log likelihood = 24.15,  aic = -40.31

Warning message:
In sqrt(diag(x$var.coef)) : NaNs produced

Note that the standard errors of the regression parameters cannot be computed, due to a collinearity problem. However, if we shift the base year to 1982, the problem goes away.

> m2a=arima(log(units), order=c(2,0,0), xreg=data.frame(year=time(units)-1982))
> m2a

Call:
arima(x = log(units), order = c(2, 0, 0), xreg = data.frame(year = time(units) - 1982))

Coefficients:
         ar1      ar2  intercept    year
      1.2869  -0.7750     4.5918  0.0258
s.e.
      0.1269   0.1265     0.0760  0.0058

sigma^2 estimated as 0.007025:  log likelihood = 24.15,  aic = -40.31

> phi1=1.2869;phi2=-0.7749
> phi1^2+4*phi2
[1] -1.443488
> # the fitted AR(2) is quasi-periodic
> 2*pi/acos(phi1/(2*sqrt(-phi2)))
[1] 8.365615
> # the period is about 8.4
> tsdiag(m2a)
Error in tsdiag.Arima(m2a) : increase gof.lag as psi weights are not small enough for the Ljung-Box tests
In addition: Warning message:
In any(test) : coercing argument of type 'double' to logical
> # need to set tol higher
> tsdiag(m2a,tol=.2)

Now the interpretation is that annual sales increased by about 2.6% per year, with a quasi-period close to 8.4 years. The model diagnostics reported below suggest that the model also provides a good fit to the data on the logarithmic scale.

Figure 24: Model diagnostics for the fitted model reported in (f)

Exercise 11.21 In Chapter 8, an IMA(1,1) model was fitted to the logarithms of monthly oil prices. The results obtained are:

> data(oil.price)
> m1.oil=arima(log(oil.price),order=c(0,1,1))
> m1.oil

Call:
arima(x = log(oil.price), order = c(0, 1, 1))

Coefficients:
         ma1
      0.2956
s.e.  0.0693

sigma^2 estimated as 0.006689:  log likelihood = 260.29,  aic = -516.58

> plot(rstandard(m1.oil),ylab='Standardized residuals',type='l')
> abline(h=0)

Figure 25: Residuals of model 1

The residuals graph suggests that there may be several outliers in this series. Let us check whether there are innovative or additive outliers in this time series.

> detectAO(m1.oil)
             [,1]     [,2]     [,3]
ind      2.000000 8.000000 56.00000
lambda2 -4.326085 4.007243  4.07535
> detectIO(m1.oil)
             [,1]     [,2]      [,3]
ind      2.000000 8.000000 56.000000
lambda1 -4.875561 3.773707  4.570056

The results above show that there may be outliers at February 1986, August 1986, and August 1990.
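The calendar dates come from converting the flagged observation indices, since oil.price is a monthly series starting in January 1986 (the TSA data set's convention, assumed here): observation k falls (k-1) months after that start. A quick check of the quoted dates in Python:

```python
# Convert a 1-based observation index of a monthly series starting in
# January 1986 into (year, month). Assumes the oil.price start date.
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def obs_to_date(k, start_year=1986, start_month=1):
    m0 = (start_month - 1) + (k - 1)   # whole months elapsed since Jan of start_year
    return start_year + m0 // 12, MONTHS[m0 % 12]

# the outlier candidates flagged by detectAO/detectIO
print([obs_to_date(k) for k in (2, 8, 56)])
# -> [(1986, 'Feb'), (1986, 'Aug'), (1990, 'Aug')]
```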
Since among the tests for AO and IO the largest magnitude appears as λ1 at time t = 2, let us try fixing an innovative outlier at time t = 2 in the model.

> m2.oil=arimax(log(oil.price),order=c(0,1,1),io=c(2))
> detectAO(m2.oil)
            [,1]      [,2]
ind     8.000000 56.000000
lambda2 4.070551  4.253151
> detectIO(m2.oil)
            [,1]      [,2]
ind     8.000000 56.000000
lambda1 3.864698  4.740229

In model 2 we still detect possible outliers at t = 8 and 56. We continue by adding the innovative outlier effect at t = 56 to our model.

> m3.oil=arimax(log(oil.price),order=c(0,1,1),io=c(2,56))
> detectAO(m3.oil)
            [,1]
ind     8.000000
lambda2 4.100764
> detectIO(m3.oil)
            [,1]
ind     8.000000
lambda1 3.937741

After fitting model 3, the outlier at t = 8 is still significant. Since |λ2,8| > |λ1,8|, we incorporate an additive outlier at t = 8 into model 3 to get model 4.

> m4.oil=arimax(log(oil.price),order=c(0,1,1),io=c(2,56),
+   xreg=data.frame(AO=seq(log(oil.price))==8))
> detectAO(m4.oil)
"No AO detected"
> detectIO(m4.oil)
"No IO detected"
> m4.oil

Call:
arimax(x = log(oil.price), order = c(0, 1, 1), xreg = data.frame(AO = seq(log(oil.price)) == 8), io = c(2, 56))

Coefficients:
         ma1      AO     IO-2   IO-56
      0.2696  0.1534  -0.3966  0.3589
s.e.  0.0593  0.0439   0.0752  0.0732

sigma^2 estimated as 0.005276:  log likelihood = 288.76,  aic = -567.52

We can see that after adding the 3 outliers to the original IMA(1,1) model, model 4 has no more significant outliers, and all outlier estimates are significant. Compare the results of model 4 with model 1, in which the outliers were not taken into account: the estimate of θ has not changed very much, but the AIC is much better (-567.52 < -516.58). So the IMA(1,1) + 2 IO's + 1 AO model is better than the original IMA(1,1) model.

Chapter 12

Exercise 12.1 Let us plot the absolute returns and squared returns for the CREF data.
> library(TSA)
> data(CREF)
> r.cref=diff(log(CREF))*100
> win.graph(width=4.875,height=2.5,pointsize=8)
> plot(abs(r.cref))

Figure 1: Absolute Values of Daily CREF Stock Returns

> win.graph(width=4.875,height=2.5,pointsize=8)
> plot(r.cref^2)

Figure 2: Squared Daily CREF Stock Returns

From the above two plots we can clearly see that the returns became more volatile towards the end of the study period.

Exercise 12.2 Let us plot the absolute returns and squared returns for the USD/HKD exchange rate data.

> library(TSA)
> data(usd.hkd)
> plot(ts(abs(usd.hkd$hkrate),freq=1),type='l',xlab='day',
+   ylab='abs(return)')

Figure 3: Absolute Daily Returns of the USD/HKD Exchange Rate: 1/1/05-3/7/06

> win.graph(width=4.875,height=2.5,pointsize=8)
> plot(ts(usd.hkd$hkrate^2,freq=1),type='l',xlab='day',
+   ylab='return^2')

Figure 4: Squared Daily Returns of the USD/HKD Exchange Rate: 1/1/05-3/7/06

From the above two plots we can see that volatility clustering is evident, particularly during the second half of 2005.

Exercise 12.3 Since η_t = r_t^2 - σ^2_{t|t-1} and r_t = σ_{t|t-1} ε_t, where {ε_t} is a sequence of i.i.d. random variables with zero mean and unit variance and ε_t is independent of r_{t-j}, j = 1, 2, ..., we have

E(η_t) = E[σ^2_{t|t-1}(ε_t^2 - 1)]
       = E{E[σ^2_{t|t-1}(ε_t^2 - 1) | r_{t-1}, r_{t-2}, ...]}
       = E{σ^2_{t|t-1} E[ε_t^2 - 1 | r_{t-1}, r_{t-2}, ...]}
       = E[σ^2_{t|t-1} × 0] = 0,

and for every k > 0,

E(η_t η_{t-k}) = E[σ^2_{t|t-1}(ε_t^2 - 1)(r^2_{t-k} - σ^2_{t-k|t-k-1})]
               = E{E[σ^2_{t|t-1}(ε_t^2 - 1)(r^2_{t-k} - σ^2_{t-k|t-k-1}) | r_{t-1}, r_{t-2}, ...]}
               = E{σ^2_{t|t-1}(r^2_{t-k} - σ^2_{t-k|t-k-1}) E[ε_t^2 - 1 | r_{t-1}, r_{t-2}, ...]}
               = E[σ^2_{t|t-1}(r^2_{t-k} - σ^2_{t-k|t-k-1}) × 0] = 0.

Hence, Cov(η_t, η_{t-k}) = E(η_t η_{t-k}) - E(η_t)E(η_{t-k}) = 0, which means that {η_t} is a serially uncorrelated sequence.
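The argument can also be checked by simulation: generate an ARCH(1) series, form η_t = r_t² - σ²_{t|t-1}, and verify that the lag-1 sample autocorrelation of η_t is near zero. A sketch in Python, using the standard ARCH(1) recursion with illustrative parameter values (this is an independent check, not the book's code):

```python
import math
import random

# Simulate an ARCH(1): sigma2_t = omega + alpha*r_{t-1}^2, r_t = sigma_t*eps_t,
# then check that eta_t = r_t^2 - sigma2_t is serially uncorrelated.
random.seed(42)
omega, alpha = 1.0, 0.2          # illustrative parameter values
n = 20000

r2_prev, etas = 0.0, []
for _ in range(n):
    sigma2 = omega + alpha * r2_prev
    r = math.sqrt(sigma2) * random.gauss(0.0, 1.0)
    etas.append(r * r - sigma2)   # eta_t = r_t^2 - sigma^2_{t|t-1}
    r2_prev = r * r

mean_eta = sum(etas) / n
# lag-1 sample autocorrelation of eta
num = sum((etas[t] - mean_eta) * (etas[t - 1] - mean_eta) for t in range(1, n))
den = sum((e - mean_eta) ** 2 for e in etas)
acf1 = num / den
print(round(mean_eta, 3), round(acf1, 3))   # both should be near zero
```

Note that η_t is uncorrelated but not independent over time, which is exactly why its squared-return regression representation (Exercise 12.4) is only a weak ARMA form.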
Similarly, we obtain

E(η_t r^2_{t-k}) = E[σ^2_{t|t-1}(ε_t^2 - 1) r^2_{t-k}]
                 = E{E[σ^2_{t|t-1}(ε_t^2 - 1) r^2_{t-k} | r_{t-1}, r_{t-2}, ...]}
                 = E{σ^2_{t|t-1} r^2_{t-k} E[ε_t^2 - 1 | r_{t-1}, r_{t-2}, ...]}
                 = E[σ^2_{t|t-1} r^2_{t-k} × 0] = 0.

So Cov(η_t, r^2_{t-k}) = E(η_t r^2_{t-k}) - E(η_t)E(r^2_{t-k}) = 0, which means that η_t is uncorrelated with past squared returns.

Exercise 12.4 Substituting σ^2_{t|t-1} = r_t^2 - η_t into Equation (12.2.2), we have

r_t^2 - η_t = ω + α r^2_{t-1}  =>  r_t^2 = ω + α r^2_{t-1} + η_t,

which is Equation (12.2.5).

Exercise 12.5 Equation (12.2.2) tells us

σ^2_{t|t-1} = ω + α r^2_{t-1}  =>  σ^4_{t|t-1} = ω^2 + 2ωα r^2_{t-1} + α^2 r^4_{t-1}.

Let us take expectations on both sides of the last equation. Denoting τ = E(σ^4_{t|t-1}), and since E(r^2_{t-1}) = σ^2 and E(r_t^4) = 3τ (Equation (12.2.7)), we get

τ = ω^2 + 2ωασ^2 + 3α^2 τ,

which is Equation (12.2.8).

Exercise 12.6 In increasing order of kurtosis: the uniform distribution on [-1, 1], the normal distribution with mean 0 and variance 4, the t-distribution with 30 d.f., and the t-distribution with 10 d.f. Reason: kurtosis is a measure of "peakedness". A high-kurtosis distribution has a sharper peak and fatter tails, while a low-kurtosis distribution has a more rounded peak with wider shoulders. So the uniform distribution, which has no peak, must have the lowest kurtosis. Among the other three distributions, the t-distributions have heavier tails than the normal distribution, and the t-distribution with 10 d.f. has heavier tails than the t-distribution with 30 d.f.

Exercise 12.7 Simulate a time series of size 500 from a GARCH(1,1) model with standard normal innovations and parameter values ω = 0.01, α = 0.1, and β = 0.8.

> library(TSA)
> set.seed(1234567)
> garch1.sim=garch.sim(alpha=c(0.01,0.1),beta=0.8,n=500)
> plot(garch1.sim,type='l',ylab=expression(r[t]),xlab='t')

Figure 5: Simulated GARCH(1,1) Process

Let us see the sample ACF, PACF, and EACF of the simulated time series.
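Before turning to those plots, the moment identity of Exercise 12.5 can be verified numerically for an ARCH(1): with σ² = ω/(1-α) and τ solved from the identity, τ = ω² + 2ωασ² + 3α²τ holds, and the implied kurtosis 3τ/σ⁴ exceeds 3. A quick check in Python (the ω and α values are illustrative, borrowed from the GARCH example above with β dropped):

```python
# Check Equation (12.2.8) numerically for an ARCH(1):
# tau = E(sigma^4) solves tau = omega^2 + 2*omega*alpha*sigma^2 + 3*alpha^2*tau.
omega, alpha = 0.01, 0.1          # illustrative values (need 3*alpha^2 < 1)

sigma2 = omega / (1 - alpha)                                   # stationary variance
tau = (omega**2 + 2 * omega * alpha * sigma2) / (1 - 3 * alpha**2)

# the identity holds...
lhs, rhs = tau, omega**2 + 2 * omega * alpha * sigma2 + 3 * alpha**2 * tau
print(abs(lhs - rhs) < 1e-15)

# ...and E(r^4)/sigma^4 = 3*tau/sigma^4 > 3: excess kurtosis (fat tails)
kurtosis = 3 * tau / sigma2**2
print(round(kurtosis, 4))
```

Solving the identity for τ gives the closed form kurtosis 3(1-α²)/(1-3α²), which is why even a conditionally normal ARCH process has heavier-than-normal marginal tails.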
> acf(garch1.sim)

Figure 6: Sample ACF of the Simulated GARCH(1,1) Process

> pacf(garch1.sim)

Figure 7: Sample PACF of the Simulated GARCH(1,1) Process

> eacf(garch1.sim)
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 o o x o o o o o o o o  o  o  o
1 x o x o o o o o o o o  o  o  o
2 x x x o o o o o o o o  o  o  o
3 o x x o o o o o o o o  o  o  o
4 o o x o o o o o o o o  o  o  o
5 x x x o o o o o o o o  o  o  o
6 x x x x o o o o o o o  o  o  o
7 x x o x o o x o o o o  o  o  o

Except for lags 3 and 20, which are mildly significant, the sample ACF and PACF of the simulated data show no significant correlations, although the pattern in the EACF table seems to suggest an AR(3) model. Overall, the simulated process seems consistent with the assumption of white noise.

(a) Let us see the sample ACF, PACF, and EACF of the squared simulated GARCH(1,1) series.

> acf(garch1.sim^2)

Figure 8: Sample ACF of the Squared Simulated GARCH(1,1) Process

> pacf(garch1.sim^2)

Figure 9: Sample PACF of the Squared Simulated GARCH(1,1) Process

> eacf(garch1.sim^2)
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 o x x x x o x o o o o  o  o  o
1 x o o o x o x x o o o  o  o  o
2 x o o o o x x o x o o  o  o  o
3 x x x o o x o o o o o  o  o  o
4 x x x x x o o o o o o  o  o  o
5 x o x o o o o o o o o  o  o  o
6 x x x x o x o o o o o  o  o  o
7 x x o x o x o o o o o  o  o  o

The sample ACF and PACF of the squared simulated GARCH(1,1) process show a significant autocorrelation pattern in the squared data; hence the simulated process is serially dependent. But the pattern in the EACF table is not very clear; an ARMA(3,3) model is weakly suggested. As mentioned in the textbook, the fuzziness of the signal in the EACF table is likely caused by the larger sampling variability when we deal with higher moments.

(b) Let us see the sample ACF, PACF, and EACF of the absolute values of the simulated GARCH(1,1) time series.
> acf(abs(garch1.sim))

Figure 10: Sample ACF of the Absolute Simulated GARCH(1,1) Process

> pacf(abs(garch1.sim))

Figure 11: Sample PACF of the Absolute Simulated GARCH(1,1) Process

> eacf(abs(garch1.sim))
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x x x x x o x o o o o  o  o  o
1 x o o o x o o o o o o  o  o  o
2 x x o o o o o o o o o  o  o  o
3 x x o o o x o o o o o  o  o  o
4 x x o x o x o o o o o  o  o  o
5 x o x x x o o o o o o  o  o  o
6 x o x o x x o o o o o  o  o  o
7 x x x x x o x o o o o  o  o  o

The sample EACF table for the absolute simulated process convincingly suggests an ARMA(1,1) model, and therefore a GARCH(1,1) model for the original data.

(c) The McLeod-Li test shows the presence of strong ARCH effects in the data, as we know there are.

Figure 12: McLeod-Li Test for the Simulated GARCH(1,1) Process

(d) Let us see the sample ACF, PACF, and EACF of the squared GARCH(1,1) series using only the first 200 simulated data points.

> garch2=garch1.sim[1:200]
> acf(garch2^2)

Figure 13: Sample ACF of the First 200 Squared Simulated GARCH(1,1) Values

> pacf(garch2^2)

Figure 14: Sample PACF of the First 200 Squared Simulated GARCH(1,1) Values

> eacf(garch2^2)
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 o o x o o o o o o o o  o  o  o
1 o o o o o o o o o o o  o  o  o
2 x x o o o o o o x o o  o  o  o
3 x x x o o o o o o o o  o  o  o
4 x o x o o o o o o o o  o  o  o
5 x x o o x o o o o o o  o  o  o
6 x x o o x x o o o o o  o  o  o
7 x o o x o x o o o o o  o  o  o

The plots of the ACF and PACF for the first 200 squared simulated data show no significant autocorrelations except at lags 3 and 9. Also, the EACF table seems to suggest that the 200 squared values are white noise. Next, let us see the sample ACF, PACF, and EACF of the absolute values of the GARCH(1,1) series, also using the first 200 simulated data points.
> acf(abs(garch2))
> pacf(abs(garch2))

Figure 15: Sample ACF of the First 200 Absolute Simulated GARCH(1,1) Values

Figure 16: Sample PACF of the First 200 Absolute Simulated GARCH(1,1) Values

> eacf(abs(garch2))
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 o o o o o o o o o o o  o  o  o
1 x o o o o o o o o o o  o  o  o
2 x x o o o o o o o o o  o  o  o
3 x o o o o o o o o o o  o  o  o
4 x o o o o o o o o o o  o  o  o
5 x x x x x o o o o o o  o  o  o
6 x o x x x o o o o o o  o  o  o
7 x o x x o o o o o o o  o  o  o

The plots of the ACF and PACF for the first 200 absolute simulated data show no significant autocorrelations. In addition, the EACF table convincingly suggests that the 200 absolute values are white noise. So a GARCH(p, q) series of size 200 is not enough for us to identify the orders p and q by inspecting its ACF, PACF and EACF.

Exercise 12.8 (a) The plot of the daily CREF bond prices is given below. Generally the bond price has an increasing trend, but several sharp drops appear during the period.

> data(cref.bond)
> plot(cref.bond)

Figure 17: Daily CREF Bond Prices: August 26, 2004 to August 15, 2006

(b) Let us plot the daily bond returns below. They show no evidence of volatility clustering.

> r.bond=diff(log(cref.bond))*100
> plot(r.bond)
> abline(h=0)

Figure 18: Daily CREF Bond Returns: August 26, 2004 to August 15, 2006

(c) The McLeod-Li test suggests that there are no ARCH effects in the return series.

Figure 19: McLeod-Li Test for the Returns

(d) The ACF and PACF plots of the bond returns suggest that the returns have little serial correlation.

> acf(r.bond)
> pacf(r.bond)

Also, the ACF and PACF plots for the absolute and squared returns are given below.
From these plots, no significant autocorrelation is observed, which further supports the claim that the returns of the CREF bond price series appear to be independently and identically distributed.

> acf(abs(r.bond))
> pacf(abs(r.bond))

Figure 20: Sample ACF of Daily CREF Bond Returns

Figure 21: Sample PACF of Daily CREF Bond Returns

Figure 22: Sample ACF of the Absolute Daily CREF Bond Returns

> acf(r.bond^2)
> pacf(r.bond^2)

Figure 23: Sample PACF of the Absolute Daily CREF Bond Returns

Figure 24: Sample ACF of the Squared Daily CREF Bond Returns

Figure 25: Sample PACF of the Squared Daily CREF Bond Returns

Exercise 12.9 (a) The daily returns of the Google stock are plotted below.

> data(google)
> plot(google)

Figure 26: Daily Google Stock Returns: August 14, 2004 to September 13, 2006

From the ACF and PACF of the daily returns, we see that the data are essentially uncorrelated over time.

> acf(google)

Figure 27: Sample ACF of the Daily Google Stock Returns

> pacf(google)

Figure 28: Sample PACF of the Daily Google Stock Returns

(b) The mean of the Google daily returns is 0.00269, which is significantly greater than 0 by a one-sample, one-sided t-test. Hence, we should consider a mean+GARCH model for the data, i.e. r_t = µ + σ_{t|t-1} ε_t. Since the GARCH model fit is invariant to a mean shift, the GARCH model fit reported below is the same whether or not we explicitly include the mean.
For convenience, the GARCH models will be fitted to the original returns.

> t.test(google, alternative='greater')

        One Sample t-test

data:  google
t = 2.5689, df = 520, p-value = 0.00524
alternative hypothesis: true mean is greater than 0
95 percent confidence interval:
 0.000962967         Inf
sample estimates:
  mean of x
0.002685589

(c) The McLeod-Li test suggests significant ARCH effects in the Google return series.

Figure 29: McLeod-Li Test for the Google Returns

(d) The sample EACFs of the absolute and squared daily Google stock returns are given below. Both of them convincingly suggest an ARMA(1,1) model, and therefore a GARCH(1,1) model for the original data.

> eacf(abs(google))
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x x x o o o x o o x o  o  x  x
1 x o o o o o o o o o o  o  o  x
2 x x o o o o o o o o o  o  o  x
3 x x x o o o o o o o o  o  o  x
4 x o x o o o o o o o o  o  o  o
5 x o x o x o o o o o o  o  o  o
6 o x x x x x o o o o o  o  o  o
7 x o x x x o x o o o o  o  o  o

> eacf(google^2)
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x x o o o o o o o x o  o  o  x
1 x o o o o o o o o x o  o  o  x
2 x o o o o o o o o x o  o  o  x
3 x x x o o o o o o x o  o  o  x
4 x x x o o o o o o o o  o  o  o
5 x x x o o o o o o o o  o  o  o
6 x x x x o o o o o o o  o  o  o
7 o x x o o x o o o o o  o  o  o

We fit a GARCH(1,1) model to the (mean-deleted) Google stock daily returns and obtain the MLEs of the fitted model.

> m1=garch(x=google-mean(google), order=c(1,1), reltol=0.000001)
> summary(m1)

Call:
garch(x = google, order = c(1, 1), reltol = 1e-06)

Model:
GARCH(1,1)

Residuals:
     Min       1Q   Median       3Q      Max
-3.64587 -0.46484  0.08232  0.65376  5.73913

Coefficient(s):
    Estimate  Std. Error  t value Pr(>|t|)
a0 5.058e-05   1.232e-05    4.105 4.04e-05 ***
a1 1.264e-01   2.136e-02    5.920 3.22e-09 ***
b1 7.865e-01   3.579e-02   21.978  < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The graph below shows the standardized residuals from the fitted GARCH model. It suggests no particular tendency in the standardized residuals.

> plot(residuals(m1),type='h',ylab='standard residuals')

We can also look at the sample ACF and generalized portmanteau tests of the squared and absolute standardized residuals in the following plots.
It seems that the standardized residuals are close to independently and identically distributed, so the GARCH(1,1) model provides a good fit to the daily Google stock returns.

> acf(residuals(m1)^2,na.action=na.omit)
> gBox(m1,method='squared')
> acf(abs(residuals(m1)),na.action=na.omit)
> gBox(m1,method='absolute')

Figure 30: Standardized Residuals from the Fitted GARCH Model of Daily Google Stock Returns

(e) The next graph shows the within-sample estimates of the conditional variances. At the final time point, the squared return equals 0.00135 and the conditional variance is estimated to be 0.0003417. These values, combined with Equations (12.3.8) and (12.3.9) in the book, can be used to compute forecasts of future conditional variances. For example, the one-step-ahead forecast of the conditional variance equals 0.00005 + 0.1264 × 0.00135 + 0.7865 × 0.0003417 = 0.00049. The longer forecasts eventually approach 0.00058, the long-run variance of the model.

> plot((fitted(m1)[,1])^2,type='l',ylab='conditional variance',xlab='t')

Figure 31: Sample ACF and Generalized Portmanteau Test P-Values of Squared Standardized Residuals from the Fitted Model

(f) The QQ normal plot of the standardized residuals from the fitted GARCH(1,1) model is given below. It shows that the standardized residuals are distributed with heavier tails than the standard normal distribution on both sides.

> qqnorm(residuals(m1));qqline(residuals(m1))

(g) The 95% confidence interval for b1 is 0.7865 ± 1.96 × 0.03579, i.e. (0.7164, 0.8566).

(h) According to the GARCH(1,1) model, the stationary variance is ω/(1 - α - β) evaluated at the estimates, i.e. 0.00005/(1 - 0.1264 - 0.7865) = 0.00058, which is very close to the variance of the raw data, 0.00057. The stationary mean of the mean plus GARCH(1,1) model is simply 0.002686, the mean of the raw returns.
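The hand computations in parts (e), (g) and (h) can be reproduced directly from the coefficient summary above (a0 = ω, a1 = α, b1 = β); a quick sanity check in Python, not new estimation:

```python
# Reproduce the hand computations in parts (e), (g) and (h) from the
# garch() coefficient summary: a0 (omega), a1 (alpha), b1 (beta).
omega, alpha, beta = 5.058e-05, 0.1264, 0.7865
se_b1 = 0.03579

# (e) one-step-ahead conditional variance forecast from the final
# squared return (0.00135) and final fitted conditional variance (0.0003417)
h1 = omega + alpha * 0.00135 + beta * 0.0003417
print(round(h1, 5))                    # about 0.00049

# (g) 95% confidence interval for b1
ci = (beta - 1.96 * se_b1, beta + 1.96 * se_b1)
print(round(ci[0], 4), round(ci[1], 4))

# (h) stationary (long-run) variance omega/(1 - alpha - beta)
lr = omega / (1 - alpha - beta)
print(round(lr, 5))                    # about 0.00058
```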
(Remember that the stationary mean of a pure GARCH model is always zero!)

Figure 32: Sample ACF and Generalized Portmanteau Test P-Values of Absolute Standardized Residuals from the Fitted Model

(i) There is no closed-form solution to this problem; we have to resort to Monte Carlo methods to obtain the predictive intervals. The one- to five-step-ahead predictive distributions can be simulated by recursively computing (12.3.12) and drawing realizations from (12.2.1), with the initial conditions set by the fact that at the final time point the squared return equals 0.00135 and the conditional variance is estimated to be 0.0003417. Below is the required R code.

mean.r=mean(google)
nrepl=1000
returnm=NULL
step=5
set.seed(12579)
# outer loop that replicates the simulation nrepl times
for (j in 1:nrepl) {
  returnv=NULL
  # initial conditions at the forecast origin (see text)
  sigma2.lag1=0.0003417
  sq.return.lag1=0.00135
  # inner loop simulates from the 1- to 5-step-ahead predictive distributions
  for (i in 1:step){
    # compute sigma^2_{t|t-1} recursively
    sigma2=0.00005246+0.7698*sigma2.lag1+0.1397*sq.return.lag1
    # draw a realization from the i-step-ahead distribution
    new.return=rnorm(1,mean=0,sd=sigma2^.5)
    returnv=c(returnv,new.return)
    sigma2.lag1=sigma2
    sq.return.lag1=new.return^2
  }
  returnm=cbind(returnm,returnv)
}
# add the overall mean
returnm=returnm+mean.r

Figure 33: Estimated Conditional Variances of the Daily Google Stock Returns

Figure 34: QQ Norm Plot of Standardized Residuals from the Fitted Model of the Daily Google Stock Returns

# compute the 95% predictive intervals for the 1- to 5-step-ahead predictions
> apply(returnm,1,function(x){quantile(x,c(.025,.975))})
             [,1]        [,2]       [,3]        [,4]        [,5]
2.5%  -0.04173014 -0.04238438 -0.0409627 -0.04163405 -0.04481970
97.5%  0.04867032  0.04674474  0.0516759  0.04919631  0.05039805

Exercise 12.10 (a) In Chapter 8, an IMA(1,1) model was fitted to the logarithms of the monthly oil prices. The plots of the sample ACF, PACF, and EACF of the absolute and squared residuals from the fitted IMA(1,1) model are given below.

> data(oil.price)
> m1.oil=arima(log(oil.price),order=c(0,1,1))
> acf(abs(rstandard(m1.oil)))

Figure 35: ACF of the absolute residuals of m1.oil

> pacf(abs(rstandard(m1.oil)))

Figure 36: PACF of the absolute residuals of m1.oil

> eacf(abs(rstandard(m1.oil)))
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x o o o x x o o o o o  o  o  o
1 x o o o o x o o o o o  o  o  o
2 x x o o o o o o o o o  o  o  o
3 x x o o o o o o o o o  o  o  o
4 x x o x o o o o o o o  o  o  o
5 x x x x o o o o o o o  o  o  o
6 o o o o o o o o o o o  o  o  o
7 o o o x o o o o o o o  o  o  o

> acf(rstandard(m1.oil)^2)

Figure 37: ACF of the squared residuals of m1.oil

> pacf(rstandard(m1.oil)^2)

Figure 38: PACF of the squared residuals of m1.oil

> eacf(rstandard(m1.oil)^2)
AR/MA
  0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 o o o o o x o o o o o  o  o  o
1 x o o o o x x o o o o  o  o  o
2 x x o o o o o o o o o  o  o  o
3 x x o o o o o o o o o  o  o  o
4 x o o o o o o o o o o  o  o  o
5 o x x o x o o o o o o  o  o  o
6 x o x x o x o o o o o  o  o  o
7 x o x o o x o o o o o  o  o  o

The sample ACF, PACF, and EACF of the absolute residuals and those of the squared residuals display some significant autocorrelations and suggest that a GARCH(1,1) model may be appropriate for the residuals.

(b) Let us fit an IMA(1,1)+GARCH(1,1) model to the logarithms of the monthly oil prices.

> m2=garchFit(formula=~arma(0,1)+garch(1,1),data=diff(log(oil.price)),
+   include.mean=F)
> m2=garchFit(formula=~arma(0,1)+garch(1,1),data=diff(log(oil.price)), + include.mean=F) 242 > summary(m2) Title: GARCH Modelling Call: garchFit(formula = ~arma(0, 1) + garch(1, 1), data = diff(log(oil.price)), include.mean = F) Mean and Variance Equation: ~arma(0, 1) + ~garch(1, 1) Conditional Distribution: dnorm Coefficient(s): ma1 omega alpha1 beta1 0.271071054 0.000938987 0.192378356 0.661455778 Error Analysis: Estimate Std. Error t value Pr(>|t|) ma1 0.2710711 0.0761841 3.558 0.000374 *** omega 0.0009390 0.0005427 1.730 0.083604 . alpha1 0.1923784 0.0749343 2.567 0.010250 * beta1 0.6614558 0.1148448 5.760 8.43e-09 *** --- Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1 Log Likelihood: -277.2456 normalized: -1.155190 Standadized Residuals Tests: Statistic p-Value Jarque-Bera Test R Chi^2 6.637005 0.03620702 Shapiro-Wilk Test R W 0.991279 0.1632340 Ljung-Box Test R Q(10) 10.36138 0.4093809 243 Ljung-Box Test R Q(15) 23.84615 0.06775269 Ljung-Box Test R Q(20) 36.94588 0.01187811 Ljung-Box Test R^2 Q(10) 3.985653 0.9479919 Ljung-Box Test R^2 Q(15) 7.576812 0.9396271 Ljung-Box Test R^2 Q(20) 10.03166 0.9675942 LM Arch Test R TR^2 6.80259 0.8703787 Information Criterion Statistics: AIC BIC SIC HQIC 2.343713 2.401724 2.343170 2.367087 From the summary of the model, we see that all of the estimates for the parameters are signiﬁcantly diﬀerent from 0. But the residuals tests are somewhat contradictory. The p-value for Jarque-Bera test is 0.0362 which means the normality assumption is rejected at a usual conﬁdence level, while the p-value of Shapiro-Wilk test, 0.163, doesn’t suggest rejection of the normality assumption. (c) The time-sequence plot for the standardized residuals from the ﬁtted IMA(1, 1)+GARCH(1, 1) model is given below. 
> plot((residuals(m2)-mean(residuals(m2)))/sd(residuals(m2)),
+ ylab='Standard Residuals of m2',type='b')
From the above graph, we can see that at 3 time points the standardized residuals are particularly large, with one residual close to 3 and the other two around 4. This suggests that there may be some outliers under this model.
(d) Next we fit an IMA(1, 1) model with two IOs at t = 2 and t = 56 and an AO at t = 8.
> m3=arimax(log(oil.price),order=c(0,1,1),io=c(2,56),
+ xreg=data.frame(AO=seq(log(oil.price))==8))
> plot(rstandard(m3),ylab='Residuals of m3',type='b')
Figure 39: Standard Residuals of m2
Figure 40: Standard Residuals of m3
Under this model, the standardized residuals seem independently and identically distributed. Let's check normality.
> qqnorm(rstandard(m3))
> qqline(rstandard(m3))
Figure 41: QQ-plot of standard residuals of m3
> shapiro.test(rstandard(m3))
Shapiro-Wilk normality test
data: rstandard(m3)
W = 0.9972, p-value = 0.9539
Both the normal Q-Q plot and the Shapiro-Wilk test support normality of the residuals, and there is no discernible volatility clustering.
(e) Comparing the analysis of the standardized residuals for the two models in parts (c) and (d), the IMA(1, 1) model with outliers is more appropriate for the oil price data. The volatility clustering is not discernible enough for the IMA+GARCH model to offer an improvement.
Chapter 13
Exercise 13.1 Since 3cos(2πft + 0.4) = 3[cos(2πft)cos(0.4) − sin(2πft)sin(0.4)], it is clear that A = 3cos(0.4) and B = −3sin(0.4).
Exercise 13.2 In this problem, A = 1 and B = 3. So R = √(A² + B²) = √10 and Φ = arctan(−B/A) = arctan(−3).
Exercise 13.3 (a) Regress the time series y on cos(2πt(4/96)) and sin(2πt(4/96)).
> t=1:96 # n=96
> cos1=cos(2*pi*t*4/96)
> cos2=cos(2*pi*(t*14/96+.3))
> sin1=sin(2*pi*t*4/96)
> y=2*cos1+3*cos2
> m1=lm(y~cos1+sin1-1)
> summary(m1)
Call: lm(formula = y ~ cos1 + sin1 - 1)
Residuals:
Min        1Q         Median    3Q        Max
-2.996e+00 -2.063e+00 8.077e-15 2.063e+00 2.996e+00
Coefficients:
     Estimate  Std. Error t value  Pr(>|t|)
cos1 2.000e+00 3.094e-01  6.464    4.51e-09 ***
sin1 4.111e-16 3.094e-01  1.33e-15 1
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.144 on 94 degrees of freedom
Multiple R-Squared: 0.3077, Adjusted R-squared: 0.293
F-statistic: 20.89 on 2 and 94 DF, p-value: 3.119e-08
The estimates are Â = 2 and B̂ = 0.
(b) According to Equations (13.1.5), for the cosine component at frequency f = 14/96, A = 3cos(0.6π) = −0.927 and B = −3sin(0.6π) = −2.853.
(c) Now regress the time series y on cos(2πt(14/96)) and sin(2πt(14/96)).
> cos3=cos(2*pi*t*14/96)
> sin3=sin(2*pi*t*14/96)
> m2=lm(y~cos3+sin3-1)
> summary(m2)
Call: lm(formula = y ~ cos3 + sin3 - 1)
Residuals:
Min        1Q         Median    3Q        Max
-2.000e+00 -1.414e+00 5.877e-15 1.414e+00 2.000e+00
Coefficients:
     Estimate Std. Error t value Pr(>|t|)
cos3 -0.9271  0.2063     -4.494  1.99e-05 ***
sin3 -2.8532  0.2063     -13.831 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.429 on 94 degrees of freedom
Multiple R-Squared: 0.6923, Adjusted R-squared: 0.6858
F-statistic: 105.7 on 2 and 94 DF, p-value: < 2.2e-16
The estimates of A and B are the same as what we got in part (b).
(d) Let us regress the series y on cos(2πtf) and sin(2πtf) for both f = 4/96 and f = 14/96.
> m3=lm(y~cos1+sin1+cos3+sin3-1)
> summary(m3)
Call: lm(formula = y ~ cos1 + sin1 + cos3 + sin3 - 1)
Residuals:
Min        1Q         Median     3Q        Max
-5.128e-14 -5.856e-15 -9.716e-16 8.103e-15 2.862e-14
Coefficients: Estimate Std.
Error t value Pr(>|t|) cos1 2.000e+00 1.884e-15 1.061e+15 <2e-16 *** sin1 -1.395e-15 1.884e-15 -7.400e-01 0.461 cos3 -9.271e-01 1.884e-15 -4.920e+14 <2e-16 *** sin3 -2.853e+00 1.884e-15 -1.514e+15 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.306e-14 on 92 degrees of freedom Multiple R-Squared: 1, Adjusted R-squared: 1 F-statistic: 9.152e+29 on 4 and 92 DF, p-value: < 2.2e-16 From the regression, A4 =2, B4 =0, A14 = −0.927, and B14 = −2.853. These are perfect estimates. (e) Let us try to regress the series y on cos(2πtf) and sin (2πtf) for both f = 3/96 and f = 13/96. 248 > cos4=cos(2*pi*t*3/96) > sin4=sin(2*pi*t*3/96) > cos5=cos(2*pi*t*13/96) > sin5=sin(2*pi*t*13/96) > m4=lm(y~cos4+sin4+cos5+sin5-1) > summary(m4) Call: lm(formula = y ~cos4 + sin4 + cos5 + sin5 - 1) Residuals: Min 1Q Median 3Q Max -4.8532 -1.8291 0.1309 1.6302 4.7598 Coefficients: Estimate Std. Error t value Pr(>|t|) cos4 -9.012e-16 3.759e-01 -2.40e-15 1 sin4 8.967e-17 3.759e-01 2.39e-16 1 cos5 -1.599e-16 3.759e-01 -4.25e-16 1 sin5 -4.847e-15 3.759e-01 -1.29e-14 1 Residual standard error: 2.604 on 92 degrees of freedom Multiple R-Squared: 1.895e-30, Adjusted R-squared: -0.04348 F-statistic: 4.358e-29 on 4 and 92 DF, p-value: 1 We see that all the estimates for A3, B3, A13, and B13 are 0. (f) Now we redo the regression in part (b) but add a third pair, cos 2πt 7 96  and sin 2πt 7 96  , as predictor variables. > cos6=cos(2*pi*t*7/96) > sin6=sin(2*pi*t*7/96) > m5=lm(y~cos1+sin1+cos3+sin3+cos6+sin6-1) > summary(m5) Call: lm(formula = y ~cos1 + sin1 + cos3 + sin3 + cos6 + sin6 - 1) 249 Residuals: Min 1Q Median 3Q Max -5.067e-14 -6.223e-15 -1.473e-16 8.207e-15 2.780e-14 Coefficients: Estimate Std. 
Error t value Pr(>|t|)
cos1 2.000e+00  1.866e-15 1.072e+15  <2e-16 ***
sin1 -1.395e-15 1.866e-15 -7.480e-01 0.4566
cos3 -9.271e-01 1.866e-15 -4.969e+14 <2e-16 ***
sin3 -2.853e+00 1.866e-15 -1.529e+15 <2e-16 ***
cos6 -1.278e-15 1.866e-15 -6.850e-01 0.4950
sin6 -3.438e-15 1.866e-15 -1.843e+00 0.0687 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.293e-14 on 90 degrees of freedom
Multiple R-Squared: 1, Adjusted R-squared: 1
F-statistic: 6.225e+29 on 6 and 90 DF, p-value: < 2.2e-16
From the new regression, the estimates are Â4 = 2, B̂4 = 0, Â14 = −0.927, B̂14 = −2.853, and Â7 = B̂7 = 0. All of these estimates are perfect.
Exercise 13.4 Let {Yt, t = 1, …, 10} be an arbitrary series of length 10. Define
Â0 = Ȳ, Âj = (1/5) Σ_{t=1}^{10} Yt cos(πtj/5) and B̂j = (1/5) Σ_{t=1}^{10} Yt sin(πtj/5), for j = 1, …, 4,
Â5 = (1/10) Σ_{t=1}^{10} (−1)^t Yt and B̂5 = 0.
Then for each t = 1, …, 10, we have
Â0 + Σ_{j=1}^{5} [Âj cos(πtj/5) + B̂j sin(πtj/5)]
= Ȳ + (1/10) Σ_{s=1}^{10} Ys {2 Σ_{j=1}^{4} [cos(πsj/5)cos(πtj/5) + sin(πsj/5)sin(πtj/5)] + (−1)^s cos(πt)}
= (1/10) Σ_{s=1}^{10} Ys {1 + Σ_{j=1}^{10} cos[2π(s − t)j/10] + (−1)^s cos(πt) − cos[π(s − t)] − cos[2π(s − t)]}
= Σ_{s=1}^{10} Ys 1_{s=t} = Yt,
where 1_X is the indicator function of the event X. So {Yt, t = 1, …, 10} is fit exactly by a linear combination of cosine-sine curves at the Fourier frequencies.
Exercise 13.5 (a) Let us use the same parameter values used in Exhibit (13.4) to simulate a signal+noise time series. The plot is given below. It is hard to see periodicities from this plot.
> library(TSA)
> win.graph(width=4.875,height=2.5,pointsize=8)
> t=1:96
> set.seed(124)
> integer=sample(48,2)
> freq1=integer[1]/96; freq2=integer[2]/96
> A1=rnorm(1,0,2); B1=rnorm(1,0,2) # sample normal "amplitudes"
> A2=rnorm(1,0,3); B2=rnorm(1,0,3)
> w=2*pi*t
> y=A1*cos(w*freq1)+B1*sin(w*freq1)+A2*cos(w*freq2)+B2*sin(w*freq2)+
+ rnorm(96,0,1)
> plot(t,y,type='o',ylab=expression(y[t]))
Figure 1: Simulated Signal Plus Noise Time Series
(b) Now let us see the periodogram for the simulated time series.
> win.graph(width=4.875,height=2.5,pointsize=8)
> spec(y,log='no',main='',ylab='Periodogram',xlab='Frequency',
+ type='h',lwd=2,sub='')
Figure 2: Periodogram for the Simulated Signal Plus Noise Time Series
It is clear that the time series contains two cosine-sine pairs at frequencies of about 0.04 and 0.21 and that the higher frequency component is much stronger. Actually, one frequency is chosen to be 4/96 ≈ 0.0417 and the other is 20/96 ≈ 0.2083.
Exercise 13.6 Equation (13.3.1) tells us that Yt = Σ_{j=1}^{m} [Aj cos(2πfj t) + Bj sin(2πfj t)], where the frequencies satisfy 0 < f1 < f2 < … < fm < 1/2.
Exercise 13.9 For an MA(1) model, the first derivative of the spectral density with respect to f is 4πθσ²_e sin(2πf), which is positive when θ > 0 and negative when θ < 0 for f ∈ (0, 1/2). So when θ > 0 the spectral density for an MA(1) model is an increasing function of f, while for θ < 0 the spectral density decreases.
Exercise 13.10 Figure 3 below shows the theoretical spectral density function for an MA(1) process with θ = 0.6. The density is much stronger for higher frequencies than for low frequencies. High frequency means that the process has a tendency to oscillate back and forth across its mean level quickly.
Figure 3: Spectral Density of MA(1) Process with θ = 0.6
Exercise 13.11 Figure 4 below shows the theoretical spectral density function for an MA(1) process with θ = −0.8. The density is much stronger for lower frequencies than for high frequencies.
Such a process tends to change slowly from one time instance to the next.
Figure 4: Spectral Density of MA(1) Process with θ = −0.8
Exercise 13.12 According to Equation (13.5.6), the theoretical spectral density for an AR(1) model is
S(f) = σ²_e / [1 + φ² − 2φ cos(2πf)].
The first derivative of the denominator with respect to f is 4πφ sin(2πf), which is positive when φ > 0 and negative when φ < 0 for f ∈ (0, 1/2). So when φ > 0 the spectral density for an AR(1) model is a decreasing function of f, while for φ < 0 the spectral density increases.
Exercise 13.13 Figure 5 below shows the theoretical spectral density function for an AR(1) process with φ = 0.7. The density function decreases rapidly and is much stronger for lower frequencies than for high frequencies. Such a process tends to change slowly from one time instance to the next.
Figure 5: Spectral Density of AR(1) Process with φ = 0.7
Exercise 13.14 Figure 6 below shows the theoretical spectral density function for an AR(1) process with φ = −0.4. The density is much stronger for higher frequencies than for low frequencies. High frequency means that the process has a tendency to oscillate back and forth across its mean level quickly.
Figure 6: Spectral Density of AR(1) Process with φ = −0.4
Exercise 13.15 Figure 7 below shows the theoretical spectral density function for an MA(2) process with θ1 = −0.5 and θ2 = 0.9. The spectral density has a peak at a frequency of around 0.25, so the process has a main component with frequency around 0.25.
Figure 7: Spectral Density of MA(2) Process with θ1 = −0.5 and θ2 = 0.9
Exercise 13.16 Figure 8 below shows the theoretical spectral density function for an MA(2) process with θ1 = 0.5 and θ2 = −0.9.
The spectral density is strong at f = 0 and f = 0.5. So the process has two main components. One component tends to change slowly from one time instance to the next, while the other component oscillates back and forth across its mean level quickly. The latter component has a stronger effect on the process.
Figure 8: Spectral Density of MA(2) Process with θ1 = 0.5 and θ2 = −0.9
Exercise 13.17 Figure 9 below shows the theoretical spectral density function for an AR(2) process with φ1 = −0.1 and φ2 = −0.9. The spectral density has a sharp peak at around frequency f = 0.26. The process fluctuates mainly with a frequency of about 0.26.
Figure 9: Spectral Density of AR(2) Process with φ1 = −0.1 and φ2 = −0.9
Exercise 13.18 Figure 10 below shows the theoretical spectral density function for an AR(2) process with φ1 = 1.8 and φ2 = −0.9. The spectral density has a sharp peak at around frequency f = 0.06. The process fluctuates mainly with a frequency of about 0.06.
Figure 10: Spectral Density of AR(2) Process with φ1 = 1.8 and φ2 = −0.9
Exercise 13.19 Figure 11 below shows the theoretical spectral density function for an AR(2) process with φ1 = −1 and φ2 = −0.8. The spectral density has a sharp peak at around frequency f = 0.35. The process fluctuates mainly with a frequency of about 0.35.
Figure 11: Spectral Density of AR(2) Process with φ1 = −1 and φ2 = −0.8
Exercise 13.20 Figure 12 below shows the theoretical spectral density function for an AR(2) process with φ1 = 0.5 and φ2 = 0.4. The density is much stronger for lower frequencies than for high frequencies. Such a process tends to change very slowly from one time instance to the next.
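The AR(2) peak locations and shapes described in Exercises 13.17 through 13.20 can be checked directly from the AR(2) spectral density S(f) = σ²_e / |1 − φ1 e^{−2πif} − φ2 e^{−4πif}|². Below is a quick numerical sketch (in Python, purely illustrative; the helper `ar2_spec` is a name introduced here, not taken from the text):

```python
import numpy as np

def ar2_spec(f, phi1, phi2, sigma2=1.0):
    # AR(2) spectral density: sigma2 / |1 - phi1*z - phi2*z^2|^2, z = e^{-2*pi*i*f}
    z = np.exp(-2j * np.pi * np.asarray(f, dtype=float))
    return sigma2 / np.abs(1.0 - phi1 * z - phi2 * z * z) ** 2

f = np.linspace(0.0, 0.5, 5001)

# Exercise 13.17 (phi1 = -0.1, phi2 = -0.9): sharp peak near f = 0.26
peak_1317 = f[np.argmax(ar2_spec(f, -0.1, -0.9))]

# Exercise 13.20 (phi1 = 0.5, phi2 = 0.4): dominated by low frequencies,
# with S(0) = 1/(1 - 0.5 - 0.4)^2 = 100, matching the peak height in Figure 12
s_1320 = ar2_spec(f, 0.5, 0.4)
```

Evaluating `peak_1317` gives a frequency of about 0.258, in line with the f ≈ 0.26 read off Figure 9.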
Figure 12: Spectral Density of AR(2) Process with φ1 = 0.5 and φ2 = 0.4
Exercise 13.21 Figure 13 below shows the theoretical spectral density function for an AR(2) process with φ1 = 0 and φ2 = 0.8. The strongest density appears at f = 0 and f = 0.5. So the process has two main components. One component tends to change slowly from one time instance to the next, while the other component oscillates back and forth across its mean level quickly.
Figure 13: Spectral Density of AR(2) Process with φ1 = 0 and φ2 = 0.8
Exercise 13.22 Figure 14 below shows the theoretical spectral density function for an AR(2) process with φ1 = 0.8 and φ2 = −0.2. The density is much stronger for lower frequencies than for high frequencies. Such a process tends to change slowly from one time instance to the next.
Figure 14: Spectral Density of AR(2) Process with φ1 = 0.8 and φ2 = −0.2
Exercise 13.23 Figure 15 below shows the theoretical spectral density function for an ARMA(1,1) process with φ = 0.5 and θ = 0.8. The density is much stronger for higher frequencies than for low frequencies. High frequency means that the process has a tendency to oscillate back and forth across its mean level quickly.
Figure 15: Spectral Density of ARMA(1,1) Process with φ = 0.5 and θ = 0.8
Exercise 13.24 Figure 16 below shows the theoretical spectral density function for an ARMA(1,1) process with φ = 0.95 and θ = 0.8. The density is much stronger for frequencies greater than 0.2 than for frequencies less than 0.2. The process has a tendency to oscillate back and forth across its mean level quickly. Frequencies from 0.2 to 0.5 have almost the same effect on the process.
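The ARMA(1,1) spectra in the last two exercises come from S(f) = σ²_e |1 − θe^{−2πif}|² / |1 − φe^{−2πif}|², using the book's parameterization Yt = φY(t−1) + et − θe(t−1). A short numerical sketch (Python, illustrative only; `arma11_spec` is a hypothetical helper) confirms the monotone increase claimed in Exercise 13.23:

```python
import numpy as np

def arma11_spec(f, phi, theta, sigma2=1.0):
    # ARMA(1,1) spectral density for Y_t = phi*Y_{t-1} + e_t - theta*e_{t-1}:
    #   sigma2 * |1 - theta*z|^2 / |1 - phi*z|^2, with z = e^{-2*pi*i*f}
    z = np.exp(-2j * np.pi * np.asarray(f, dtype=float))
    return sigma2 * np.abs(1.0 - theta * z) ** 2 / np.abs(1.0 - phi * z) ** 2

# Exercise 13.23: phi = 0.5, theta = 0.8
f = np.linspace(0.0, 0.5, 1001)
s_1323 = arma11_spec(f, 0.5, 0.8)
```

With these parameters S(0) = 0.04/0.25 = 0.16 and S(1/2) = 3.24/2.25 = 1.44, and the density rises monotonically in between, which is the shape shown in Figure 15.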
Figure 16: Spectral Density of ARMA(1,1) Process with φ = 0.95 and θ = 0.8
Exercise 13.25 (a) Since Yt = (Xt + Xt−1)/2, we have c0 = 1/2, c1 = 1/2 and Yt = Σ_{i=0}^{1} ci Xt−i. The power transfer function for this linear filter is given by
|C(e^{−2πif})|² = |(1/2)e^{−2πif} + 1/2|² = (1/2)[1 + cos(2πf)].
(b) Since ck = 0 for k < 0, this filter is causal.
(c) The plot of the power transfer function is given below. It shows that this linear filter retains lower frequencies and de-emphasizes higher frequencies.
> f=(1:50)/100
> c1=1/2*(1+cos(2*pi*f))
> plot(f,c1,type="l", ylab='Power transfer function')
Figure 17: Power Transfer Function
Exercise 13.26 (a) Since Yt = Xt − Xt−1, we have c0 = 1, c1 = −1 and Yt = Σ_{i=0}^{1} ci Xt−i. The power transfer function for this linear filter is given by
|C(e^{−2πif})|² = |1 − e^{−2πif}|² = 2 − 2cos(2πf).
(b) Since ck = 0 for k < 0, this filter is causal.
(c) The plot of the power transfer function is given below. It shows that this linear filter retains higher frequencies and de-emphasizes lower frequencies.
> c2=2-2*cos(2*pi*f)
> plot(f,c2,type="l", ylab='Power transfer function')
Figure 18: Power Transfer Function
Exercise 13.27 (a) Since Yt = (Xt+1 + Xt + Xt−1)/3, we have c−1 = c0 = c1 = 1/3 and Yt = Σ_{i=−1}^{1} ci Xt−i. The power transfer function for this linear filter is given by
|C(e^{−2πif})|² = |(1/3)e^{−2πif} + 1/3 + (1/3)e^{2πif}|² = (1/9)[1 + 2cos(2πf)]² = (1/9)[1 + 4cos(2πf) + 4cos²(2πf)].
(b) Since c−1 = 1/3 ≠ 0, this filter is not causal.
(c) The plot of the power transfer function is given below. It shows that this linear filter retains lower frequencies and de-emphasizes frequencies in (0.3, 0.4).
> c3=(1+2*cos(2*pi*f))^2/9
> plot(f,c3,type="l", ylab='Power transfer function')
Figure 19: Power Transfer Function
Exercise 13.28 (a) Since Yt = (Xt + Xt−1 + Xt−2)/3, we have c0 = c1 = c2 = 1/3 and Yt = Σ_{i=0}^{2} ci Xt−i. The power transfer function for this linear filter is given by
|C(e^{−2πif})|² = |1/3 + (1/3)e^{−2πif} + (1/3)e^{−4πif}|²
= (1/9)[1 + cos(2πf) + cos(4πf)]² + (1/9)[sin(2πf) + sin(4πf)]²
= (1/9)[3 + 4cos(2πf) + 2cos(4πf)]
= (1/9)[1 + 4cos(2πf) + 4cos²(2πf)],
which is the same as the power transfer function of the filter defined in Exercise (13.27).
(b) Since ck = 0 for k < 0, this filter is causal.
Exercise 13.29 Same as Exercise 13.26.
Exercise 13.30 (a) Since Yt = (Xt+1 − 2Xt + Xt−1)/3, we have c−1 = 1/3, c0 = −2/3, c1 = 1/3 and Yt = Σ_{i=−1}^{1} ci Xt−i. The power transfer function for this linear filter is given by
|C(e^{−2πif})|² = |(1/3)e^{−2πif} − 2/3 + (1/3)e^{2πif}|² = [(2/3)cos(2πf) − 2/3]² = (4/9)[1 + cos²(2πf) − 2cos(2πf)].
(b) The plot of the power transfer function is given below. It shows that this linear filter retains higher frequencies and de-emphasizes lower frequencies.
> win.graph(width=4.875,height=2.5,pointsize=8)
> f=(1:50)/100
> c1=4/9*(1+cos(2*pi*f)^2-2*cos(2*pi*f))
> plot(f,c1,type="l", ylab='Power transfer function')
Figure 20: Power Transfer Function
Exercise 13.31 (a) Suppose {Yt} is a white noise process with variance γ0. The sample spectral density at a Fourier frequency f ∈ [0, 1/2] is
Ŝ(f) = (n/4)(Â_f² + B̂_f²),
where Â_f = (2/n) Σ_{t=1}^{n} Yt cos(2πtf) and B̂_f = (2/n) Σ_{t=1}^{n} Yt sin(2πtf). So
Ŝ(f) = (1/n)[Σ_{t=1}^{n} Yt cos(2πtf)]² + (1/n)[Σ_{t=1}^{n} Yt sin(2πtf)]²
= (1/n){Σ_{t=1}^{n} Yt² + 2 Σ_{1≤i<j≤n} Yi Yj cos[2π(j − i)f]}.
> n=1000
> set.seed(357864)
> y=rnorm(n)
> win.graph(width=4.875,height=2.5,pointsize=8)
> spec(y,log='no',ylab='Periodogram',xlab='Frequency',type='h',lwd=2,sub='')
Figure 29: Periodogram of Simulated Normal White Noise
We know that the theoretical spectral density for a white noise process is constant for all frequencies −1/2 < f ≤ 1/2. Figure 29 fits this feature.
Chapter 14
Exercise 14.1 Let f be a Fourier frequency. The smoothed sample spectral density with the Daniell spectral window is
S̄(f) = [1/(2m + 1)] Σ_{j=−m}^{m} Ŝ(f + j/n).  (1)
Dividing both sides of Equation (1) by S(f), we get
S̄(f)/S(f) = {1/[2(2m + 1)]} Σ_{j=−m}^{m} 2Ŝ(f + j/n)/S(f).
Let us assume that the spectral density changes very little over a small interval of frequencies. We already know from the book that the sample spectral density values at the Fourier frequencies are approximately uncorrelated and that 2Ŝ(f)/S(f) has approximately a chi-square distribution with two degrees of freedom. So, from the above equation we have
Var[S̄(f)/S(f)] ≈ {1/[4(2m + 1)²]} Σ_{j=−m}^{m} 4[S(f + j/n)/S(f)]² ≈ 1/(2m + 1),
which implies Var[S̄(f)] ≈ S²(f)/(2m + 1).
Exercise 14.2 (a) The following panel gives the Daniell spectral window with m = 5 and its 2nd and 3rd convolutions.
> oldpar=par(no.readonly=TRUE)
> par(mfrow=c(1,3))
> plot(kernel("daniell", c(5,5,5)),lwd=2,main='',ylim=c(0,.1),
+ ylab=expression(W(k)))
> abline(h=0)
> plot(kernel("daniell", c(5,5)),lwd=2,main='',ylim=c(0,.1),
+ ylab=expression(W(k)))
> abline(h=0)
> plot(kernel("daniell", c(5)),lwd=2,main='',ylim=c(0,.1),
+ ylab=expression(W(k)))
> abline(h=0)
> par(oldpar)
Figure 1: The Daniell Spectral Window and Its Convolutions
(b) The bandwidths and degrees of freedom for each of the spectral windows in part (a) are given below. n = 100 is used.
> k1=kernel("daniell", c(5,5,5))
> k2=kernel("daniell", c(5,5))
> k3=kernel("daniell", c(5))
> v1=2/(sum((k1$coef)^2)*2-k1$coef[1]^2)
> c1=c(0:15)^2
> BW1=1/100*sqrt(sum(c1*(k1$coef))*2)
> v2=2/(sum((k2$coef)^2)*2-k2$coef[1]^2)
> c2=c(0:10)^2
> BW2=1/100*sqrt(sum(c2*(k2$coef))*2)
> v3=2/(sum((k3$coef)^2)*2-k3$coef[1]^2)
> c3=c(0:5)^2
> BW3=1/100*sqrt(sum(c3*(k3$coef))*2)
> c(v1,BW1)
[1] 39.84931337 0.05477226
> c(v2,BW2)
[1] 32.86419753 0.04472136
> c(v3,BW3)
[1] 22.00000000 0.03162278
(c) The following panel gives the modified Daniell spectral window with m = 5 for the first graph, the convolution of windows with m = 5 and m = 7 for the second graph, and the convolution of three windows with m's of 5, 7, and 11 for the third graph.
> oldpar=par(no.readonly=TRUE)
> par(mfrow=c(1,3))
> plot(kernel("modified.daniell", c(5)),lwd=2,main='',ylim=c(0,.11),
+ ylab=expression(W(k)))
> abline(h=0)
> plot(kernel("modified.daniell", c(5,7)),lwd=2,main='',ylim=c(0,.11),
+ ylab=expression(W(k)))
> abline(h=0)
> plot(kernel("modified.daniell", c(5,7,11)),lwd=2,main='',ylim=c(0,.11),
+ ylab=expression(W(k)))
> abline(h=0)
> par(oldpar)
Figure 2: The Modified Daniell Spectral Window and Its Convolutions
(d) The bandwidths and degrees of freedom for each of the spectral windows in part (c) are given below. n = 100 is used.
> k4=kernel("modified.daniell", c(5))
> k5=kernel("modified.daniell", c(5,7))
> k6=kernel("modified.daniell", c(5,7,11))
> v4=2/(sum((k4$coef)^2)*2-k4$coef[1]^2)
> c4=c(0:5)^2
> BW4=1/100*sqrt(sum(c4*(k4$coef))*2)
> v5=2/(sum((k5$coef)^2)*2-k5$coef[1]^2)
> c5=c(0:12)^2
> BW5=1/100*sqrt(sum(c5*(k5$coef))*2)
> v6=2/(sum((k6$coef)^2)*2-k6$coef[1]^2)
> c6=c(0:23)^2
> BW6=1/100*sqrt(sum(c6*(k6$coef))*2)
> c(v4,BW4)
[1] 21.05263158 0.02915476
> c(v5,BW5)
[1] 36.97241 0.05000
> c(v6,BW6)
[1] 59.53696211 0.08093207
Exercise 14.3 (a) For the Daniell rectangular spectral window, W_m(k) = 1/(2m + 1), k = −m, …, m.
So
(1/n²) Σ_{k=−m}^{m} k²W_m(k) = [2/(n²(2m + 1))] Σ_{k=1}^{m} k².
Since Σ_{k=1}^{m} k² = m(m + 1)(2m + 1)/6 for all m = 1, 2, …, we have
(1/n²) Σ_{k=−m}^{m} k²W_m(k) = [2/(n²(2m + 1))] · m(m + 1)(2m + 1)/6 = [2/(n²(2m + 1))](m³/3 + m²/2 + m/6).
(b) If m = c√n for some constant c, then the right-hand side of the expression in part (a) becomes
[2/(√n(2c√n + 1))](c³/3 + c²/(2√n) + c/(6n)) → 0, as n → ∞.
(c) If m = c√n for some constant c, then the approximate variance of the smoothed spectral density given by the right-hand side of Equation (14.2.4) becomes
S²(f) Σ_{k=−m}^{m} W_m²(k) = S²(f)/(2c√n + 1) → 0, as n → ∞.
Exercise 14.4 Suppose S̄(f) ≈ cχ²_ν, where c is a constant and χ²_ν is a chi-square variable with ν degrees of freedom. By the approximate unbiasedness of S̄(f), we have
E[S̄(f)] ≈ cν ≈ S(f).  (2)
From the approximate variance of S̄(f), we get
Var[S̄(f)] ≈ 2νc² ≈ S²(f) Σ_{k=−m}^{m} W_m²(k).  (3)
Then from Equations (2) and (3), we estimate the values of c and ν as
c = S(f) Σ_{k=−m}^{m} W_m²(k) / 2,  ν = 2 / Σ_{k=−m}^{m} W_m²(k).
Exercise 14.5 The periodogram of the series {Yt} is given by
> t=1:48
> y=cos(2*pi*0.28*t)
> win.graph(width=4.875,height=2.5,pointsize=8)
> spec(y,log='no',ylab='Periodogram',xlab='Frequency',type='h',lwd=2,sub='')
> abline(h=0)
Figure 3: Periodogram of Yt = cos[2π(0.28)t], t = 1, …, 48.
Note that f = 0.28 is not a Fourier frequency. So the peak at f = 0.28 does not appear. Instead, the power at this frequency is blurred across several nearby frequencies, giving the appearance of a much wider peak.
Exercise 14.6 Using the modified Daniell spectral window with a span of 11, we estimate the spectrum of the logarithms of the raw rainfall values.
> data(larain)
> win.graph(width=4.875,height=2.5,pointsize=8)
> spec(log(larain),spans=c(11),ylim=c(.08,0.5),sub='',ylab='Log(Spectrum)',
+ xlab='Frequency')
Figure 4: Log(Spectrum) of Log(Larain)
Exercise 14.7 (a) The time series plot of spots1 is given below. This time series seems stationary.
Figure 5: Sunspot Numbers Time Series
(b) The estimated spectrum using a modified Daniell spectral window convolved with itself and a span of 3 for both is given below. The peak at around f = 0.09 is significant, suggesting that the number of sunspots fluctuates with a cycle period of about 1/f ≈ 11 years. This accords with the observation from the data plot in part (a).
Figure 6: Estimated Spectrum for the Sunspot Numbers
(c) Then let us estimate the spectrum using an AR model with the order chosen to minimize the AIC. The chosen AR order is 9. The graph below suggests a peak at around f = 0.095. This suggests a cycle of about 1/f ≈ 10.53 years in the fluctuation of the annual sunspot numbers.
Figure 7: Estimated Spectrum for the Sunspot Numbers by the AR Method
(d) Overlay the estimates obtained in parts (b) and (c) above onto one plot. They seem to agree to a reasonable degree.
Figure 8: Log(Spectrum) of Sunspot Numbers
Exercise 14.8 (a) The following 3 figures show the smoothed spectral estimates of tempdub using modified Daniell spectral windows with spans of 3, 5, and 7.
Figure 9: Modified Daniell Spectral Window with Span = 3 for Tempdub
Figure 10: Modified Daniell Spectral Window with Span = 5 for Tempdub
(b) The estimate with span = 3 among the three estimates in part (a) best represents the spectrum of the process. With the smallest bandwidth, it clearly shows the prominent peak at f = 1/12, representing the strong annual seasonality, and some secondary peaks at about f = 2/12 and 3/12, representing higher harmonics of the annual frequency. On the contrary, with a bigger bandwidth, the peaks at those frequencies in the windows with span = 5 or 7 are flattened. Given the length of the 95% confidence interval shown in Figure 9, we can conclude that the peaks at around f = 1/12, 2/12, and 3/12 are probably real.
Figure 11: Modified Daniell Spectral Window with Span = 7 for Tempdub
Exercise 14.9 (a) The plot of the EEG time series is given below. It seems that this time series is stationary.
> library(TSA)
> data(eeg)
> win.graph(width=4.875,height=2.5,pointsize=8)
> plot(eeg,type="l")
Figure 12: Plot of the EEG Time Series
(b) By using a modified Daniell spectral window convolved with itself and an m of 50 for both components of the convolution, we give the sample spectral density plot for EEG. The sample spectral density has a unique and strong peak at a small frequency, which suggests that an AR model may fit it well.
> k=kernel("modified.daniell", c(50,50))
> win.graph(width=4.875,height=2.5,pointsize=8)
> sp1=spec(eeg,kernel=k,main='',sub='',
+ xlab='Frequency',ylab='Log(Sample Spectral Density)')
Figure 13: Sample Spectral Density for EEG
(c) Using an AR model with order chosen to minimize the AIC, we obtain the spectral estimate given below. The chosen best order for the AR model is 41.
> sp2=spec(eeg,method='ar',main='',lty='solid', xlab='Frequency',
+ ylab='Log(Estimated AR Spectral Density)')
> sp2$method
[1] "AR (41) spectrum "
(d) Overlay the above two spectral estimates onto one plot below. We can see that the non-parametric smoothing estimate agrees very well with the estimate from a fitted AR(41) model.
> spec(eeg,spans=c(51,51),sub='',
+ xlab='Frequency',ylab='Log(Sample Spectral Density)')
> sp2=spec(eeg,method='ar',main='',plot=F)
> lines(sp2$freq,sp2$spec,lty='dashed')
Figure 14: AR Spectral Estimation for EEG
Figure 15: Estimated Spectral Densities
Exercise 14.10 (a) The time series plot of the first difference of the logarithms of the electricity values is given below. This process seems stationary.
(b) The following 3 figures display the smoothed spectrum of the first difference of the logarithms using a modified Daniell spectral window and spans of 25, 13, and 7. As the span value becomes smaller, the bandwidth becomes narrower. So in Figure 19, we clearly see 6 peaks at around these frequencies: 0.09, 0.18, 0.25, 0.34, 0.41 and 0.5. However, possible leakage appears at these peaks.
(c) Now let us use a spectral window that is a convolution of two modified Daniell windows each with span = 3. Also use a 10% taper.
By tapering and using a smaller span value, the 6 peaks are clearer: f1 = 0.0825 ≈ 1/12, f2 = 0.1675 ≈ 2/12, f3 = 0.25 = 3/12, f4 = 0.3325 ≈ 4/12, f5 = 0.4175 ≈ 5/12, f6 = 0.5 = 6/12.
Figure 16: The First Difference of the Log(Electricity)
Figure 17: Smoothed Log(Spectrum) in a Modified Daniell Window with Span = 25
(d) Using an AR model with order chosen to minimize the AIC, we obtain the spectral estimate given below. The chosen best order for the AR model is 25. This spectrum shows peaks at the same frequencies as those in part (c), representing multiples of the fundamental frequency of 1/12.
(e) Now overlay the estimates obtained in parts (c) and (d) above onto one plot. We can see that the non-parametric smoothing estimate agrees well with the estimate from a fitted AR(25) model.
Figure 18: Smoothed Log(Spectrum) in a Modified Daniell Window with Span = 13
Figure 19: Smoothed Log(Spectrum) in a Modified Daniell Window with Span = 7
Exercise 14.11 (a) Let us estimate the spectrum using a spectral window that is a convolution of two modified Daniell windows each with span = 7. Compared with Exhibit (14.24) in the book, the bandwidth is bigger in this window and the peaks are wider. But the peak frequencies are still clearly seen as multiples of 1/12.
(b) Now estimate the spectrum using a single modified Daniell spectral window with span = 7. Compared with the results in part (a) and Exhibit (14.24), the peaks here are flat. So we cannot figure out exactly at what frequencies the peaks are.
(c) Then estimate the spectrum using a single modified Daniell spectral window with span = 11.
Compared with the results in part (b), the peaks here are even flatter. Again we cannot tell at what frequencies the peaks occur.
Figure 20: Smoothed Log(Spectrum) in a Convoluted Modified Daniell Window Each with Span = 3 and Taper = 0.1
Figure 21: Estimated Spectrum for the Differenced Log(Electricity) by the AR Method
(d) Among the 4 different estimates, I prefer the estimate shown in Exhibit (14.24) because the other 3 have more smoothing than the convolution of two modified Daniell windows each with span = 3.
Exercise 14.12 (a) Let us estimate the spectrum using span = 25 with the modified Daniell spectral window.
> data(flow)
> spec(log(flow),spans=c(25),main='',sub='',ylim=c(.02,13),
+ ylab='Log(Spectrum)',xlab='Frequency')
Figure 22: Estimated Spectral Densities for Differenced Log(Electricity)
Figure 23: Estimated Spectrum for Milk Production in a Convoluted Modified Daniell Window with Span = 7
The bandwidth is 0.012, which is much bigger than the bandwidth (0.0044) in a convolution of two windows each with m = 7. The bigger bandwidth implies a window smoothed too much to show any peaks representing the annual seasonality, which can be seen in Exhibit 14.23.
(b) Then let us estimate the spectrum using span = 13 with the modified Daniell spectral window.
> spec(log(flow),spans=c(13),main='',sub='',ylim=c(.02,13),
+ ylab='Log(Spectrum)',xlab='Frequency')
The bandwidth is 0.006, which is bigger than that in Exhibit 14.23 and smaller than that in part (a).
So the estimated spectrum here is smoother than that in Exhibit 14.23 but rougher than that in part (a). The peaks at frequencies f = 1/12, 2/12 and 3/12 become somewhat clear.

Figure 24: Estimated Spectrum for Milk Production in a Modified Daniell Window with Span = 7

Figure 25: Estimated Spectrum for Milk Production in a Modified Daniell Window with Span = 11

Exercise 14.13 (a) The plot of the first 400 points of the tuba time series is given below. Compared with the plots of the trombone and euphonium in Exhibit (14.25), the tuba time series has longer cycles.

> win.graph(width=4.875,height=2.5,pointsize=8)
> data(tuba)
> plot(window(tuba,end=400),ylab='Waveform',yaxp=c(-1,+1,2))

Figure 26: Log(Spectrum) of Log(Flow) with Span = 25

Figure 27: Log(Spectrum) of Log(Flow) with Span = 13

Figure 28: Tuba

(b) Let us estimate the spectrum of the tuba time series using a convolution of two modified Daniell spectral windows, each with span m = 11.

> spec(window(tuba,end=400),spans=c(11,11),sub='',
+ xlab='Frequency',ylab='Log Spectral Density')

Figure 29: Estimated Spectrum

(c) Compared with the estimated spectra of the trombone and euphonium shown in Exhibit (14.24), the estimated spectrum of the tuba time series obtained in part (b) is lower and much smoother.

(d) It's not hard to see that the higher-frequency components of the spectrum for the tuba look more like those of the euphonium than those of the trombone. This reveals one reason why the euphonium is sometimes called a tenor tuba.
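For the comparisons in parts (c) and (d), the trombone and euphonium spectra can be estimated with the same window as the tuba. This is a sketch; it assumes the trombone and euphonium series plotted in Exhibit (14.25) are available as TSA datasets and mirrors the tuba call above.

```r
# Estimate trombone and euphonium spectra with the same window as the tuba
library(TSA)
data(trombone); data(euphonium)
spec(window(trombone, end = 400), spans = c(11, 11), sub = '',
     xlab = 'Frequency', ylab = 'Log Spectral Density')
spec(window(euphonium, end = 400), spans = c(11, 11), sub = '',
     xlab = 'Frequency', ylab = 'Log Spectral Density')
```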
CHAPTER 15

Exercise 15.1 We fit the TAR(2;1,4) model with delay d = 2 to the logarithms of the predator series in the data file veilleux.

> library(TSA)
> data(veilleux)
> predator=veilleux[,1]
> predator.eq=window(predator,start=c(7,1))
> predator.tar.3=tar(y=log(predator.eq),p1=4,p2=4,d=2,a=.1,b=.9,print=T)

The estimates for this model are given below.

           Estimate   Std. Error   t-statistic   p-value
d̂              2
r̂          4.048
Lower Regime (n1 = 14)
φ̂1,0        0.95       0.79          1.21         0.25
φ̂1,1        0.82       0.20          4.08         0.00
σ̃²1        0.0598
Upper Regime (n2 = 39)
φ̂2,0        4.06       0.57          7.10         0.00
φ̂2,1        0.91       0.14          6.37         0.00
φ̂2,2       −0.26       0.21         −1.24         0.22
φ̂2,3       −0.20       0.20         −0.98         0.33
φ̂2,4       −0.32       0.15         −2.14         0.04
σ̃²2        0.0638

From the above table we can see that the fitted TAR model with delay d = 2 has a smaller threshold estimate, r̂ = 4.048, than the fitted TAR(2;1,4) model with delay d = 3 (r̂ = 4.661). So only 14 data cases fall in the lower regime, while 39 fall in the upper regime. This is not as balanced as the situation reported in the textbook, where n1 = 30 and n2 = 23. In addition, the estimated noise variances in this fitted TAR model, 0.0598 and 0.0638, are larger than those in the TAR model with delay 3, 0.0548 and 0.0560. Also, let us compare the skeletons of the two models. We know that the skeleton of the TAR(2;1,4) model with delay 3 converges to a limit cycle. The graph (below) of the skeleton of the TAR(2;1,4) model with delay 2 shows that it also converges to a limit cycle.

> tar.skeleton(predator.tar.3)

Figure 1: Skeleton of model TAR(2;1,4)

Exercise 15.2 Let us set the maximum order p = 5 and use the MAIC estimation method to estimate the parameters.
> AICM=NULL
> for(d in 1:5) {spots.tar=tar(y=sqrt(spots),p1=5,p2=5,d=d,a=.1,b=.9)
+ AICM=rbind(AICM, c(d,spots.tar$AIC,signif(spots.tar$thd,5),
+ spots.tar$p1,spots.tar$p2))
+ }
> colnames(AICM)=c('d','nominal AIC','r','p1','p2')
> rownames(AICM)=NULL
> AICM

 d   nominal AIC      r̂   p1   p2
 1        149.9   5.882    5    5
 2        110.5   6.058    3    5
 3        124.6   6.596    2    5
 4        126.2   8.044    3    5
 5        150.5   8.155    4    5

From the above table we find that the TAR(2;3,5) model with delay d = 2 has the smallest AIC. Next we fit this model to the spots data.

> spots.tar.1=tar(y=sqrt(spots),p1=5,p2=5,d=2,a=.1,b=.9,print=T)

           Estimate   Std. Error   t-statistic   p-value
d̂              2
r̂          6.058
Lower Regime (n1 = 20)
φ̂1,0        9.52       1.28          7.44         0.00
φ̂1,1        1.05       0.17          6.17         0.00
φ̂1,2       −1.20       0.29         −4.13         0.00
φ̂1,3       −0.56       0.24         −2.28         0.04
σ̃²1        0.8211
Upper Regime (n2 = 37)
φ̂2,0        5.69       0.86          6.60         0.00
φ̂2,1        0.37       0.11          3.47         0.00
φ̂2,2        0.38       0.12          3.25         0.00
φ̂2,3       −0.07       0.11         −0.59         0.56
φ̂2,4       −0.36       0.11         −3.34         0.00
φ̂2,5       −0.11       0.07         −1.50         0.14
σ̃²2        0.2141

Notice that the estimate φ̂2,5, with p-value 0.14, is not significant. This suggests fitting the model TAR(2;3,4) with delay d = 2.

> spots.tar.2=tar(y=sqrt(spots),p1=4,p2=4,d=2,a=.1,b=.9,print=T)

The fitted TAR(2;3,4) model in the lower regime is

√Yt = 9.52 + 1.05 √Yt−1 − 1.20 √Yt−2 − 0.56 √Yt−3 + 0.91 et,  when √Yt−2 ≤ 6.058 (i.e., Yt−2 ≤ 6.058² ≈ 36.70),

while the fitted model in the upper regime is

√Yt = 4.81 + 0.40 √Yt−1 + 0.42 √Yt−2 − 0.01 √Yt−3 − 0.49 √Yt−4 + 0.48 et,  when Yt−2 > 36.70.

Next let us look at the goodness of fit of the TAR(2;3,4) model to the square root of the spots data.
> win.graph(width=4.875, height=4.5,pointsize=8)
> tsdiag(spots.tar.2,gof.lag=20)

Figure 2: Model Diagnostics of TAR(2;3,4): Spots Series

> win.graph(width=2.5, height=2.5,pointsize=8)
> qqnorm(spots.tar.2$std.res)
> qqline(spots.tar.2$std.res)

In Figure 2, the top panel shows that the standardized residuals have no particular pattern. The middle panel shows no significant autocorrelation in the residuals. In the bottom panel, all p-values are above 0.05. Figure 3 displays the QQ normal score plot of the standardized residuals, which is apparently straight; hence the errors appear to be normally distributed. In summary, the fitted TAR(2;3,4) model provides a good fit to the spots data.

Exercise 15.3 Let us draw the prediction intervals and the predicted medians for 10 years using the TAR(2;3,4) model with delay 2.

> pred.sopts=predict(spots.tar.2,n.ahead=10,n.sim=1000)

Figure 3: QQ Normal Plot of the Standardized Residuals

> yy=ts(c(sqrt(spots),pred.sopts$fit),frequency=1,start=start(spots))
> plot(yy,type='n',ylim=range(c(yy,pred.sopts$pred.interval)),
+ ylab='Sqrt Spots', xlab=expression(t))
> lines(sqrt(spots))
> lines(window(yy, start=end(spots)+c(0,1)),lty=2)
> lines(ts(pred.sopts$pred.interval[2,],start=end(spots)+c(0,1),
+ freq=1),lty=2)
> lines(ts(pred.sopts$pred.interval[1,],start=end(spots)+c(0,1),
+ freq=1),lty=2)

Figure 4: Prediction of the Sqrt of Spots

The middle dashed line in Figure 4 is the median of the predictive distribution, and the other dashed lines are the 2.5th and 97.5th percentiles of the predictive distribution.
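Because the model was fitted on the square-root scale, forecasts on the original sunspot scale can be recovered by squaring. This is a sketch using the pred.sopts object above; squaring the predictive median and percentiles gives the corresponding median and percentiles on the original scale because squaring is monotone for nonnegative values.

```r
# Back-transform the forecasts and interval to the original (unsquared) scale
fit.orig      <- pred.sopts$fit^2            # predicted medians
interval.orig <- pred.sopts$pred.interval^2  # 2.5th and 97.5th percentiles
```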
Exercise 15.4 The long-run behavior of the skeleton of the fitted TAR(2;3,4) model for the relative sunspot data is given below. The fitted model is stationary and its skeleton converges to a limit point.

> tar.skeleton(spots.tar.2)

Figure 5: Skeleton of the Fitted Model for the Spots Data

Exercise 15.5 The graph below shows a simulated series of length 1000 from the fitted model TAR(2;3,4) with delay d = 2.

> set.seed(356813)
> plot(y=tar.sim(n=1000,object=spots.tar.2)$y,x=1:1000,ylab=expression(Y[t]),
+ xlab=expression(t),type='o')

To compare the spectrum of the simulated realization with that of the data, see the graph below. The spectrum of the simulated series fits that of the sunspot data well.

> set.seed(2357125)
> yy.1=tar.sim(spots.tar.2,n=1000)$y
> spec.1=spec(yy.1,taper=.1,method='ar',plot=F)
> spec.spots=spec(sqrt(spots),taper=.1,spans=c(3,3),plot=F)
> spec.spots=spec(sqrt(spots),taper=.1,spans=c(3,3),
+ ylim=range(c(spec.1$spec,spec.spots$spec)),sub='')
> lines(y=spec.1$spec,x=spec.1$freq,lty=2)

Figure 6: Simulated Series of the Fitted Model (n = 1000)

Figure 7: Spectra of Simulated Series and Sqrt Transformed Data

Exercise 15.6 The lagged regression plots for the square-root transformed hare series are given below. We can see from Figure 8 that the regression function estimates appear to be strongly nonlinear at lags 2, 3 and 6, suggesting a nonlinear data-generating mechanism.
> data(hare)
> win.graph(width=4.875, height=6.5,pointsize=8)
> set.seed(2534567)
> par(mfrow=c(3,2))
> lagplot(sqrt(hare))

Figure 8: Lagged Regression Plots of the Sqrt Transformed Hare Data

Exercise 15.7 The results of formal tests for nonlinearity (Keenan's test, Tsay's test and the threshold likelihood ratio test) for the hare data are given below.

> Keenan.test(sqrt(hare))
$test.stat
[1] 8.083568
$p.value
[1] 0.009207613
$order
[1] 3
> Tsay.test(sqrt(hare))
$test.stat
[1] 2.135
$p.value
[1] 0.09923
$order
[1] 3
> pvaluem=NULL
> for (d in 1:3){
+ res=tlrt(sqrt(hare),p=5,d=d,a=0.25,b=0.75)
+ pvaluem=cbind(pvaluem, c(d,res$test.statistic,
+ res$p.value))}
> rownames(pvaluem)=c('d','test statistic','p-value')
> round(pvaluem,3)
                 [,1]   [,2]   [,3]
d               1.000  2.000  3.000
test statistic 71.558 54.964 16.807
p-value         0.000  0.000  0.083

The p-value of Keenan's test is 0.009, strongly suggesting nonlinearity of the square-root hare series. Although the p-value of Tsay's test, 0.09923, exceeds 0.05, it is still less than 0.10. Moreover, for d = 1, 2 the p-values of the threshold likelihood ratio test are about 0.000, and for d = 3 the p-value is 0.083. We conclude from the three test results that the generating mechanism for the hare data is nonlinear.

Exercise 15.8 Using the MAIC method we set the maximum order to p = 3.
> AICM=NULL
> for(d in 1:3) {hare.tar=tar(y=sqrt(hare),p1=3,p2=3,d=d,a=.1,b=.9)
+ AICM=rbind(AICM, c(d,hare.tar$AIC,signif(hare.tar$thd,3),
+ hare.tar$p1,hare.tar$p2))}
> colnames(AICM)=c('d','nominal AIC','r','p1','p2')
> rownames(AICM)=NULL
> AICM
     d nominal AIC    r p1 p2
[1,] 1       86.24 7.42  3  2
[2,] 2       83.87 3.87  3  3
[3,] 3       89.37 5.29  3  3

Let us fit a TAR(2;3,3) model with d = 2.

> hare.tar.1=tar(y=sqrt(hare),p1=3,p2=3,d=2,a=.1,b=.9,print=T)

           Estimate   Std. Error   t-statistic   p-value
d̂              2
r̂          3.873
Lower Regime (n1 = 7)
φ̂1,0        3.89       1.21          3.22         0.049
φ̂1,1        1.25       0.32          3.85         0.03
φ̂1,2        0.38       0.69          0.56         0.61
φ̂1,3       −1.41       0.52         −2.71         0.07
σ̃²1        1.04
Upper Regime (n2 = 21)
φ̂2,0        5.05       1.15          4.38         0.00
φ̂2,1        0.84       0.24          3.52         0.00
φ̂2,2       −0.14       0.32         −0.45         0.66
φ̂2,3       −0.52       0.21         −2.45         0.03
σ̃²2        0.8447

The estimated threshold is 3.873, which is approximately the 25th percentile of the data. Notice that φ̂1,2 and φ̂1,3 are not significant, which suggests the model TAR(2;1,3) with delay 2. But the nominal AIC of TAR(2;1,3) is 92.53, much larger than that of TAR(2;3,3), 83.87. Let us do model diagnostics.

> win.graph(width=4.875, height=4.5,pointsize=8)
> tsdiag(hare.tar.1,gof.lag=20)

Figure 9: Model Diagnostics of TAR(2;3,3): Hare Series

In Figure 9, the ACF and p-values of the standardized residuals from the fitted model look good. Next let us check the normality of the standardized residuals.

> win.graph(width=2.5, height=2.5,pointsize=8)
> qqnorm(hare.tar.1$std.res)
> qqline(hare.tar.1$std.res)

It looks like the distribution of the residuals has a lighter right tail than the standard normal distribution. This may be caused by the small sample size; remember we only have 31 cases in the original data.
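The impression from the QQ plot can be complemented with a formal normality test (a sketch; hare.tar.1$std.res holds the standardized residuals from the fit above):

```r
# Shapiro-Wilk normality test for the standardized residuals;
# a small p-value would indicate a departure from normality
shapiro.test(hare.tar.1$std.res)
```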
Figure 10: QQ Normal Plot of the Standardized Residuals

Exercise 15.9 (a) Let P = (pij), i, j = 1, 2, be the transition probability matrix of {Rt}. Then

p11 = Pr(Yt+1 ≤ r | Yt ≤ r) = Pr(φ1,0 + σ1 et+1 ≤ r) = Φ((r − φ1,0)/σ1)

and

p12 = 1 − p11 = Φ((φ1,0 − r)/σ1),

where Φ(·) is the cumulative distribution function of a standard normal random variable. Similarly,

p21 = Pr(Yt+1 ≤ r | Yt > r) = Pr(φ2,0 + σ2 et+1 ≤ r) = Φ((r − φ2,0)/σ2)

and

p22 = 1 − p21 = Φ((φ2,0 − r)/σ2).

Let (π1, π2) be the stationary distribution of {Rt}. Then π1 + π2 = 1 and (π1, π2) P = (π1, π2). It can be derived that

(π1, π2) = ( p21/(p21 + p12), p12/(p21 + p12) ).

(b) Assume {Yt} is stationary with stationary distribution F. Then for every real y we have

F(y) = F(r) Φ((y − φ1,0)/σ1) + (1 − F(r)) Φ((y − φ2,0)/σ2).   (0.1)

Let y = r in (0.1) to obtain

F(r) = Φ((r − φ2,0)/σ2) / [ 1 − Φ((r − φ1,0)/σ1) + Φ((r − φ2,0)/σ2) ].   (0.2)

Plugging (0.2) into (0.1), we get the stationary distribution

F(y) = [ Φ((r − φ2,0)/σ2) Φ((y − φ1,0)/σ1) + (1 − Φ((r − φ1,0)/σ1)) Φ((y − φ2,0)/σ2) ] / [ 1 − Φ((r − φ1,0)/σ1) + Φ((r − φ2,0)/σ2) ].

(c) From part (b), we know

E(Yt) = E(Yt−1) = φ1,0 F(r) + φ2,0 (1 − F(r))

and

E(Yt Yt−1) = φ1,0 E(Yt−1 I(Yt−1 ≤ r)) + φ2,0 E(Yt−1 I(Yt−1 > r)) = φ1,0 ∫_{−∞}^{r} y dF(y) + φ2,0 ∫_{r}^{∞} y dF(y),

where I(A) denotes the indicator function of the event A. So we have

Cov(Yt, Yt−1) = E(Yt Yt−1) − E(Yt) E(Yt−1)
= φ1,0 ∫_{−∞}^{r} y dF(y) + φ2,0 ∫_{r}^{∞} y dF(y) − [ φ1,0 F(r) + φ2,0 (1 − F(r)) ]².
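The stationary distribution in part (a) can be sanity-checked numerically. This is a sketch with arbitrary illustrative parameter values (phi10, phi20, r, sig1, sig2 are not from the exercise); it verifies that (π1, π2) = (p21, p12)/(p21 + p12) satisfies π P = π.

```r
# Numerical check of part (a): the proposed pi is stationary for P
phi10 <- 0.5; phi20 <- 2.0; r <- 1.0; sig1 <- 1.0; sig2 <- 1.5  # illustrative values
p11 <- pnorm((r - phi10)/sig1); p12 <- 1 - p11
p21 <- pnorm((r - phi20)/sig2); p22 <- 1 - p21
P   <- matrix(c(p11, p12, p21, p22), nrow = 2, byrow = TRUE)
piv <- c(p21, p12)/(p21 + p12)
max(abs(piv %*% P - piv))  # should be essentially zero (machine precision)
```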
