Sunday, May 8, 2016

Econometric Computing in the Good Ol' Days

I received an email from Michael Belongia, who said:

"I wrote earlier in response to your post about Almon lags but forgot to include an anecdote that may be of interest to your follow-up.
In the late 1960s, the "St. Louis Equation" became a standard framework for evaluating the relative effects of monetary and fiscal policy. The equation was estimated by the use of Almon lags (see, e.g., footnotes 12 and 18 in the article). To estimate the equation, however, the St. Louis Fed had to use the computing power of nearby McDonnell-Douglas!!! As Keith Carlson, who was in the Bank's Research Dept at the time, confirmed for me:
'We did send our stuff out to McDonnell-Douglas.  Gave the instructions to the page who took it to the Cotton Belt building at 4th and Pine and the output would be picked up a couple days later. We did this until about 67 or 68 when we shifted to in-house.  In fact we hired the programmer from M-D.'
Difficulties like this certainly made economists of the era think more carefully about their models before taking them to the data."
I concur wholeheartedly with Michael's last comment. My own computing experience began in the late 1960s; I've posted about this in the past, in The Monkey Run.

And I haven't forgotten the follow-up post on Almon distributed lag models that I promised!
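In the meantime, as a bit of context for readers unfamiliar with the technique, here is a minimal sketch of the core Almon idea: restricting the lag coefficients to lie on a low-order polynomial, which collapses many lag parameters into just a few and made estimation feasible on the scarce computing resources of the era. All numbers (lag length, polynomial degree, the simulated data) are purely illustrative, not taken from the St. Louis Equation itself.

```python
# Sketch of an Almon (polynomial distributed lag) regression.
# Model: y_t = sum_{i=0}^{q} beta_i * x_{t-i} + e_t, with the restriction
# beta_i = sum_{j=0}^{p} gamma_j * i**j, so only p+1 parameters are estimated.
import numpy as np

rng = np.random.default_rng(0)
q, p, T = 8, 2, 200              # lag length, polynomial degree, sample size (illustrative)

x = rng.normal(size=T + q)       # exogenous regressor, with q presample values
i = np.arange(q + 1)

# Hypothetical "true" lag weights lying exactly on a quadratic hump
beta_true = 1.0 + 0.8 * i - 0.1 * i**2
y = np.array([beta_true @ x[t + q - i] for t in range(T)]) + rng.normal(scale=0.5, size=T)

# Build the lag matrix: column k holds x_{t-k} for t = 0..T-1
X_lags = np.column_stack([x[q - k : q - k + T] for k in range(q + 1)])

# Almon transformation: z_{t,j} = sum_k k**j * x_{t-k}; regress y on just p+1 columns
V = np.vander(i, p + 1, increasing=True)        # (q+1) x (p+1) powers of the lag index
Z = X_lags @ V
gamma_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Recover the full set of q+1 lag coefficients from the p+1 polynomial parameters
beta_hat = V @ gamma_hat
```

The point of the restriction is visible in the dimensions: an unrestricted regression here would estimate nine lag coefficients, while the Almon version estimates three polynomial parameters and maps them back to the nine weights.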

© 2016, David E. Giles