There were two other big conclusions I drew from that [2nd year macro] course.
The first was that the DSGE framework is a straitjacket that is strangling the field. It’s very costly in terms of time and computing resources to solve a model with more than one or two “frictions” (i.e. realistic elements), with more than a few structural parameters, with hysteresis, with heterogeneity, and so on. This means that what ends up getting published are the very simplest models – the basic RBC model, for example. (Incidentally, that also biases the field toward models in which markets are close to efficient, and in which government policy thus plays only a small role.)
Worse, all of the mathematical formalism and kludgy numerical solutions of DSGE give you basically zero forecasting ability – and, in almost all cases, forecasts no better than an SVAR’s. All you get from using DSGE, it seems, is the opportunity to puff up your chest and say “Well, MY model is fully microfounded, and contains only ‘deep structural’ parameters like tastes and technology!”…Well, that, and a shot at publication in a top journal.
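To see why the SVAR is such a humbling benchmark, here is a minimal sketch of the kind of atheoretical model DSGE is being compared against: a reduced-form VAR(1) in two variables, estimated by plain OLS. (A *structural* VAR adds identifying restrictions on the shocks; for pure forecasting, the reduced form is all that matters.) The variables, coefficient values, and sample size below are made up purely for illustration.

```python
import numpy as np

# Reduced-form VAR(1): y_t = c + A y_{t-1} + e_t, estimated by OLS.
# No microfoundations, no "deep structural" parameters -- just fitting
# the data's own dynamics.
rng = np.random.default_rng(0)

# Simulate a toy 2-variable system (think: output growth, inflation).
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.6]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS: regress y_t on a constant and y_{t-1}.
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])
Y = y[1:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # B is (3, 2): [c; A'].
c_hat, A_hat = B[0], B[1:].T

# One-step-ahead forecast from the last observation.
forecast = c_hat + A_hat @ y[-1]
print(A_hat.round(2))
```

A few lines of least squares, and you have a forecasting machine that, on the evidence, DSGE models struggle to beat – which is the whole complaint.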
Finally, my field course taught me what a bad deal the whole neoclassical paradigm was. When people like Jordi Gali found that RBC models didn’t square with the evidence, it did not give any discernible pause to the multitudes of researchers who assume that technology shocks cause recessions. The aforementioned paper by Basu, Fernald and Kimball uses RBC’s own framework to show its internal contradictions – it jumps through all the hoops set up by Lucas and Prescott – but I don’t exactly expect it to derail the neoclassical program any more than did Gali.