Daily Speculations 

The Chairman

6/9/2005
Victor Niederhoffer: Review of Forecasting with Dynamic Regression Models
Forecasting with Dynamic Regression Models by Alan Pankratz, Wiley, 1991, is a statistics book that shows you the ideas behind the widely used programs for forecasting with Box-Jenkins models and distributed lags. Written in 1991, before the field had developed all the special notation, techniques, higher mathematics, and mumbo jumbo that make such books accessible only to a mathematics consultant, pure mathematician or modern econometrician, this book is designed for all those with just an elementary statistics background.
The chapters include practical and theoretical reasons for using autoregressive and moving average models, an integration of these with regression forecasting, discussions of how to tell when a shift in the model has occurred, a series of diagnostics for telling which model is best, and worked examples of how to forecast, including one using stock prices to forecast industrial production. The last chapter on vector models shows how to put all these together in the context of simultaneous equations where the inputs and outputs of each equation have feedback relations with each other.
The author uses the technique of backshift notation, e.g. B(y(t)) = y(t-1), to show how you can use simple models for dependence in the residuals, independent of normal regression forecasts where one variable such as industrial production is used to forecast something like stock prices. Practical electronics hobbyists will find that a familiarity with circuit analysis for such components as operational amplifiers, with output feeding back to the positive and negative nodes of the input, is a helpful foundation for gaining an intuitive feel for many of the concepts used.
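The backshift idea can be made concrete in a few lines of code. This is a minimal sketch, not from the book: the `backshift` helper is a hypothetical name, and the example simply simulates an AR(1) series written in backshift form, (1 - phi*B)y(t) = e(t), then recovers the coefficient by regressing y(t) on B(y(t)).

```python
import numpy as np

def backshift(y, k=1):
    """Apply the backshift operator B^k: (B^k y)[t] = y[t-k].
    The first k entries are undefined and returned as NaN."""
    out = np.full(len(y), np.nan)
    out[k:] = y[:-k]
    return out

# AR(1) in backshift form: (1 - phi*B) y_t = e_t, i.e. y_t = phi*y_{t-1} + e_t
rng = np.random.default_rng(0)
phi, n = 0.6, 500
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

# Recover phi by least squares of y_t on its own backshift B(y)_t
By = backshift(y)
mask = ~np.isnan(By)
phi_hat = np.sum(y[mask] * By[mask]) / np.sum(By[mask] ** 2)
```

The same operator notation then lets a regression of stock prices on industrial production carry a simple model for its residuals alongside the main forecast.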
I like the author's idea of using simple models for the effect of one variable on another, where the impact of each lagged independent variable on the dependent variable decays exponentially, or oscillates as it decays, possibly after a period of rest or delay before it has any impact. The author finds it very rare in practice that a model in which the residuals from a regression have an impact of more than two periods is useful for forecasting.
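A hedged illustration of that kind of lag structure, under my own assumptions rather than the book's worked examples: a geometrically decaying set of lag weights w_j = gain * decay^j, where a positive decay gives a smooth exponential die-off and a negative decay gives a damped oscillation, and the dependent series is the lagged weights convolved with the input.

```python
import numpy as np

def lag_weights(gain, decay, n_lags):
    """Impulse-response weights w_j = gain * decay**j for j = 0..n_lags-1.
    |decay| < 1 so the impact of older lags dies out."""
    return gain * decay ** np.arange(n_lags)

w_decay = lag_weights(2.0, 0.5, 6)    # smooth exponential decay
w_osc = lag_weights(2.0, -0.5, 6)     # damped oscillation in sign

# Dependent series = distributed-lag response to the input, plus noise
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = np.convolve(x, w_decay)[:200] + 0.1 * rng.normal(size=200)
```

A delay before the impact begins would simply prepend zeros to the weight vector.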
The book leads me to focus on such variables as the joint impact of the correlation between two variables, and how far from their normal effect on each other they are now and in the previous period. For example, stock prices in the US are normally up a little more than German stocks, but what happens when for the last two periods the relation is much greater or less than normal, or such departures are getting much bigger or smaller than usual? It leads to looking at such relations as comparing the current change of a variable to previous changes in other related ones, and then using the departure during the most recent two periods as a forecasting tool.
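One way to put a number on such a departure, sketched under my own assumptions (the function name and the two-period window choice are mine, not the book's): take the spread between the two series' changes, and compute a z-score of the most recent two periods against the trailing history of that spread.

```python
import numpy as np

def relation_departure(a_chg, b_chg, window=60):
    """Z-score of the last two periods' spread between two change series,
    relative to the trailing `window` history of that spread."""
    spread = np.asarray(a_chg, dtype=float) - np.asarray(b_chg, dtype=float)
    hist = spread[-(window + 2):-2]      # the "normal" relation
    recent = spread[-2:].mean()          # the most recent two periods
    return (recent - hist.mean()) / hist.std(ddof=1)

# Demo: the spread is normally noise around zero, but jumps in the
# final two periods; the departure score flags this as unusual.
rng = np.random.default_rng(2)
us = np.concatenate([rng.normal(size=60), [5.0, 5.0]])
germany = np.zeros(62)
score = relation_departure(us, germany, window=60)
```

A large positive or negative score marks the two-period departure the paragraph above describes, which could then be tested as a forecasting input.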
The book is recommended for those who find practical forecasting with moving averages, regressions, and serially correlated variables fascinating and wish to get some ideas as to how to combine them.
P.S. Normally, I don’t read a book 15 years out of date on such a topic, and I am confident that more up-to-date, technical, and useful treatments of the subject are available, perhaps even in the manuals accompanying the widely used statistical software packages, but I was induced to do it in honor of William Cochran, the teacher in my second statistics class 45 years ago. He was very amiable and talented, and his query on our first exam as to the standard error of the constant term in a regression equation had me looking through my old statistics book, which led me to find this one.