Sunday, October 11, 2015

On Forecast Intervals "too Wide to be Useful"

I keep hearing people say things like this or that forecast interval is "too wide to be useful." 

In general, equating "wide" intervals with "useless" intervals is nonsense. A good (useful) forecast interval is one that's correctly conditionally calibrated; see Christoffersen (International Economic Review, 1998). If a correctly-conditionally-calibrated interval is wide, then so be it. A wide interval is appropriate and desirable if conditional risk is truly high.

[Note well:  The relevant calibration concept is conditional. It's not enough for a forecast interval to be merely correctly unconditionally calibrated, which means that an allegedly x percent interval actually winds up containing the realization x percent of the time. That's necessary, but not sufficient, for correct conditional calibration. Again, see Christoffersen.]
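To make the distinction concrete, here is a minimal numpy sketch of the idea behind Christoffersen's tests (not his actual likelihood-ratio statistics, just the eyeball version): form the "hit" sequence indicating whether each realization fell inside its interval, then compare the hit rate following a hit with the hit rate following a miss. All numbers are illustrative.

```python
import numpy as np

def coverage_diagnostics(y, lower, upper):
    """Unconditional coverage plus a crude independence check for
    a sequence of forecast intervals."""
    hits = ((y >= lower) & (y <= upper)).astype(int)
    uncond = hits.mean()                          # should be near the nominal level
    after_hit = hits[1:][hits[:-1] == 1].mean()   # hit rate following a hit
    after_miss = hits[1:][hits[:-1] == 0].mean()  # hit rate following a miss
    return uncond, after_hit, after_miss

# Illustration: volatility alternates predictably between calm and turbulent
rng = np.random.default_rng(0)
sigma = np.where(np.arange(2000) % 2 == 0, 0.5, 2.0)
y = rng.normal(0.0, sigma)

# A constant-width interval calibrated to the *average* variance: its
# unconditional coverage is roughly right, but it ignores the conditioning
# information, so the hit rate after a hit differs from the hit rate after
# a miss -- a conditional-calibration failure.
half = 1.645 * np.sqrt(np.mean(sigma**2))
uncond, after_hit, after_miss = coverage_diagnostics(y, -half, half)
```

Under correct conditional calibration the hits would be i.i.d. Bernoulli, so the two conditional hit rates would match; here they visibly do not.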

Of course all this holds as well for density forecasts.  
Whether a density forecast is "good" has nothing to do with its dispersion. Rather, in precise parallel to interval forecasts, a good density forecast is one that's correctly conditionally calibrated; see Diebold, Gunther and Tay (International Economic Review, 1998). 

Sunday, October 4, 2015

Whither Econometric Principal-Components Regressions?

Principal-components regression (PCR) is routine in applied time-series econometrics.

Why so much PCR, and so little ridge regression? Ridge and PCR are both shrinkage procedures involving PCs. The difference is that ridge effectively includes all PCs and shrinks each according to the size of its associated eigenvalue, whereas PCR effectively shrinks some PCs completely to zero (those not included) and doesn't shrink others at all (those included). 
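The contrast is easy to see in principal-component coordinates. Here is a minimal numpy sketch with made-up data; the shrinkage-factor algebra is the standard SVD decomposition of the two estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated regressors
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Principal-component coordinates via the SVD: X = U D V'
U, d, Vt = np.linalg.svd(X, full_matrices=False)
z = U.T @ y                                    # y projected on each PC direction

lam = 10.0
ridge_shrink = d**2 / (d**2 + lam)             # ridge: keep every PC, shrink each
                                               # by d_j^2 / (d_j^2 + lambda)
k = 2
pcr_shrink = (np.arange(p) < k).astype(float)  # PCR: factor 1 for the first k
                                               # PCs, factor 0 for the rest

yhat_ridge = U @ (ridge_shrink * z)            # identical to the usual ridge fit
yhat_pcr = U @ (pcr_shrink * z)                # identical to the usual PCR fit
```

So PCR is the all-or-nothing special case of a shrinkage pattern that ridge applies smoothly, eigenvalue by eigenvalue.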

Does not ridge resonate as more natural and appropriate? 

This recognition is hardly new or secret. It's in standard texts, like the beautiful Hastie et al. Elements of Statistical Learning.  

Econometricians should pay more attention to ridge.  

Thursday, October 1, 2015

Balke et al. on Real-Time Nowcasting

Check out the new paper, "Incorporating the Beige Book in a Quantitative Index of Economic Activity," by Nathan Balke, Michael Fulmer and Ren Zhang (BFZ).

[The Beige Book (BB) is a written description of U.S. economic conditions, produced by the Federal Reserve system. It is released eight times a year, roughly two weeks before the FOMC meeting.]

Basically BFZ include BB in an otherwise-standard FRB Philadelphia ADS Index.  Here's the abstract:  
We apply customized text analytics to the written description contained in the BB to obtain a quantitative measure of current economic conditions. This quantitative BB measure is then included into a dynamic factor index model that also contains other commonly used quantitative economic data. We find that at the time the BB is released, the BB has information about current economic activity not contained in other quantitative data. This is particularly the case during recessionary periods. However, by three weeks after its release date, "old" BB contain little additional information about economic activity not already contained in other quantitative data.  

The paper is interesting for several reasons.

First, from a technical viewpoint, BFZ take mixed-frequency data to the max, because Beige Book releases are unequally spaced. Their modified ADS has quarterly, monthly, weekly, and now unequally spaced variables. But the Kalman filter handles it all, seamlessly.   
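The mechanism is the standard missing-data trick: put everything on a fine time grid, treat periods without a release as missing, and skip the measurement update in those periods. A stripped-down local-level sketch (my own toy illustration, not BFZ's model):

```python
import numpy as np

def local_level_filter(y, q=1.0, r=1.0):
    """Kalman filter for x_t = x_{t-1} + w_t, y_t = x_t + v_t.
    NaNs mark grid periods with no release: the measurement update is
    simply skipped, which is how unequal spacing enters painlessly."""
    x, P = 0.0, 1e6                 # roughly diffuse initialization
    out = []
    for obs in y:
        P = P + q                   # time update (prediction step)
        if not np.isnan(obs):       # measurement update only on release dates
            K = P / (P + r)         # Kalman gain
            x = x + K * (obs - x)
            P = (1.0 - K) * P
        out.append(x)
    return np.array(out)

# A daily grid in which "releases" arrive on scattered, unequally spaced dates
y = np.full(30, np.nan)
y[[0, 3, 9, 10, 24]] = [1.0, 1.2, 0.8, 1.1, 0.9]
path = local_level_filter(y)
```

Between release dates the filtered state simply carries forward (with growing uncertainty), so a grid mixing quarterly, monthly, weekly, and irregular observations poses no special problem.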

Second, including Beige Book -- basically "the view of the Federal Reserve System" -- is a novel and potentially large expansion of the nowcast information set.

Third, BFZ approach the evaluation problem in a very clever way, not revealed in the abstract. They view the initial ADS releases (with vs. without BB included) as forecasts of final-revised ADS (without BB included). They find large gains from including BB in estimating time t activity using time t vintage data, but little gain from including BB in estimating time t-30 (days) activity using time t vintage data. That is, including BB in ADS improves real-time nowcasting, even if it evidently adds little to retrospective historical assessment.

Sunday, September 27, 2015

Near Writing Disasters

Check out this "Retraction Watch" post, forwarded by a reader:

Really funny.  Except that it's a little close to home.  I suspect that we've all had a few such accidents, or at least near-accidents, and with adjectives significantly stronger than "crappy".  I know I have.

Thursday, September 24, 2015

Coolest Paper at 2015 Jackson Hole

The Faust-Leeper paper is wild and wonderful.  The friend who emailed it said, "Be prepared, it’s very different but a great picture of real-time forecasting..." He got it right.

Actually his full email was, "Be prepared, it’s very different but a great picture of real-time forecasting, and they quote Zarnowitz." (He and I always liked and admired Victor Zarnowitz. But that's another post.)

The paper shines its light all over the place, and different people will read it differently. I did some spot checks with colleagues. My interpretation below resonated with some, while others wondered if we had read the same paper. Perhaps, as with Keynes, we'll never know exactly what Faust-Leeper really, really, really meant.

I read Faust-Leeper as speaking to factor analysis in macroeconomics and finance, arguing that dimensionality reduction via factor structure, at least as typically implemented and interpreted, is of limited value to policymakers, although the paper never uses wording like "dimensionality reduction" or "factor structure".

If Faust-Leeper are doubting factor structure itself, then I think they're way off base. It's no accident that factor structure is at the center of both modern empirical/theoretical macro and modern empirical/theoretical finance. It's really there and it really works.

Alternatively, if they're implicitly saying something like this, then I'm interested:

Small-scale factor models involving just a few variables and a single common factor (or even two factors like "real activity" and "inflation") are likely missing important things, and are therefore incomplete guides for policy analysis.

Or, closely related and more constructively: 

We should cast a wide net in terms of the universe of observables from which we extract common factors, and the number of factors that we extract. Moreover, we should examine and interpret not only the common factors, but also the allegedly "idiosyncratic" components, which may actually be contemporaneously correlated, time dependent, or even trending, due to misspecification.
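For concreteness, here is a toy sketch of that constructive suggestion: extract factors by principal components, then inspect the cross-correlations of the "idiosyncratic" residuals, which should be small if the factor structure is adequate. Data and dimensions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 300, 20, 2
F = rng.normal(size=(T, k))               # common factors
L = rng.normal(size=(N, k))               # loadings
X = F @ L.T + rng.normal(size=(T, N))     # panel with genuine factor structure

# Extract k factors by principal components on the standardized panel
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, d, Vt = np.linalg.svd(Xs, full_matrices=False)
factors = U[:, :k] * d[:k]                # estimated factors (up to rotation)
common = factors @ Vt[:k]                 # common component of each series
idio = Xs - common                        # "idiosyncratic" residuals

# Diagnostic: large off-diagonal correlations here would suggest the
# extracted factors are missing something
C = np.corrcoef(idio.T)
max_abs_corr = np.abs(C[~np.eye(N, dtype=bool)]).max()
```

Running the same diagnostic with too few extracted factors would leave strong cross-correlation in the residuals, which is exactly the symptom of an incomplete factor model.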

Enough.  Read it for yourself.

[General note: My use of terms like "factor modeling" throughout this post should be broadly interpreted to include not only explicit reduced-form statistical/econometric dynamic factor modeling, but also structural DSGE modeling.]  

Wednesday, September 16, 2015

Warning Problem Hopefully Solved

If during the last month you got a warning when accessing No Hesitations, I may have found and fixed the problem, finally. (This happened once before.) There were a couple of clearly bogus comments, posted anonymously, containing links that may have been phishing. I have now deleted and banned those comments, and No Hesitations has been removed from any/all blacklists, as far as I know.

If for some reason you still get a warning -- now or ever -- please email me with as much information as possible (browser, any add-ons like Microsoft Smartscreen, any other security software on your machine or institution-wide, etc.). And if you're offered a way to report the warning as incorrect, please do. 

Thanks for your support.

Monday, September 14, 2015

Cochrane on Point vs. Density Forecasting

I recently blogged on Manski's call for uncertainty estimates for economic statistics.  Of course we should also acknowledge the uncertainty in economic forecasts (with or without acknowledgment of data uncertainty, and with is better than without).

Some of us have been pushing applied interval and density forecasting for years, and of course Bayesians have been pushing for centuries. The quantitative finance and risk management communities have been largely receptive, whereas macroeconomics has been slower, notwithstanding the Bank of England's justly-famous "fan charts."

From a recent post on John Cochrane's Grumpy Economist:
... conditioning decisions on a forecast, cranked out to two decimal places, is a bad idea. Economic policy should embrace uncertainty! ... This is really a big deal. ... All forecasts ... should have error bars. ... Knowing what you don't know is real knowledge.

Hear, hear!

Surely statistical "error bars" as conventionally calculated are themselves often too tight, as they rely on a host of assumptions and in any event fail to capture unknown and unknowable sources of forecast uncertainty. But they're certainly a step in the right direction.
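A quick simulation makes the point about conventionally calculated error bars. Fit an AR(1) by OLS and form a "textbook" 90% one-step interval using only the residual standard deviation, ignoring parameter uncertainty; its actual coverage falls short of 90%. All settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def naive_interval_covers(T=15, phi=0.8):
    """Simulate an AR(1), fit it by OLS, and check whether a 'textbook'
    90% one-step interval (residual s.d. only, parameter uncertainty
    ignored) contains the realization."""
    e = rng.normal(size=T + 2)
    y = np.zeros(T + 2)
    for t in range(1, T + 2):
        y[t] = phi * y[t - 1] + e[t]
    x, z = y[:T], y[1:T + 1]                  # estimation sample
    phi_hat = (x @ z) / (x @ x)               # OLS slope, no intercept
    resid = z - phi_hat * x
    sigma_hat = np.sqrt(resid @ resid / T)
    forecast = phi_hat * y[T]
    half = 1.645 * sigma_hat                  # naive 90% half-width
    return abs(y[T + 1] - forecast) <= half

# Monte Carlo coverage of the naive interval: below the nominal 0.90,
# because estimation uncertainty in phi and sigma is ignored
coverage = np.mean([naive_interval_covers() for _ in range(5000)])
```

And that shortfall reflects only the known unknowns (parameter uncertainty within a correctly specified model); misspecification and the unknowable sources mentioned above widen the true uncertainty further still.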

Monday, September 7, 2015

BEA to Resume Provision of NSA GDP

In an earlier post, I argued for publication of non-seasonally-adjusted (NSA) series. Thanks to a helpful communication from Jonathan Wright, I recently learned (as did he) that BEA will resume compilation and publication of NSA U.S. GDP.

The official announcement is simply, "BEA will develop a NSA GDP that will be released in parallel with BEA’s quarterly GDP estimates." It's buried at the end of the box on p. 5 of "Preview of the 2015 Annual Revision of the National Income and Product Accounts," by Stephanie H. McCulla and Shelly Smith in the June 2015 Survey of Current Business. Rumor has it that we should look for the new NSA series to appear starting in late 2016 or early 2017.

Obviously my No Hesitations post was too late to have influenced the BEA’s decision, but other academic work may have played a role, notably Jonathan Wright's 2013 Brookings Papers piece (which stresses "overadjustment" in seasonally-adjusted data) and Chuck Manski's forthcoming 2015 Journal of Economic Literature piece (which stresses conceptual difficulties with seasonally-adjusted data).

Thanks BEA, for resuscitating NSA GDP. It’s the right thing to do.

Saturday, August 29, 2015

New CEA Overview of GDO

The U.S. Council of Economic Advisers has a nice new review of "Gross Domestic Output" (GDO), a simple average of expenditure- and income-side GDP estimates now published by the BEA.

In an earlier post I wrote rather negatively about GDO as compared to GDPplus, which is an optimally-weighted blend rather than a simple average. (See the FRB Philadelphia GDPplus site and the corresponding Aruoba et al. paper available there.) My view has not changed.
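The intuition for preferring an optimal blend over a simple average is just inverse-variance weighting: when the two measurements have different error variances, equal weights are not the variance-minimizing choice. A toy sketch with made-up error variances (GDPplus itself is a richer state-space model, so this is intuition only):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 100_000
truth = rng.normal(size=T)                       # "true" activity
var_e, var_i = 0.5, 1.0                          # made-up error variances
gdp_e = truth + rng.normal(scale=np.sqrt(var_e), size=T)  # expenditure side
gdp_i = truth + rng.normal(scale=np.sqrt(var_i), size=T)  # income side

simple_avg = 0.5 * gdp_e + 0.5 * gdp_i           # the GDO-style combination
w = (1 / var_e) / (1 / var_e + 1 / var_i)        # inverse-variance weight
optimal = w * gdp_e + (1 - w) * gdp_i            # minimum-variance combination

def mse(est):
    return np.mean((est - truth) ** 2)
```

Even here the simple average handily beats either single-side measure on its own, which is the point of the next paragraph; the optimal blend just does a bit better still.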

But I want to be very clear about one thing: Quite apart from whether GDO is as accurate as GDPplus, GDO is surely much, much more accurate than standard expenditure-side GDP alone, or income-side GDP alone. Just look at Figure 2 and the surrounding discussion in Aruoba et al. (in X. Chen and N. Swanson, eds., Causality, Prediction, and Specification Analysis: Recent Advances and Future Directions, Essays in Honor of Halbert L. White Jr., Springer, 2013, 1-26).

As I said in the above-mentioned earlier post (but alas, buried at the end):
I applaud the BEA's new averaged GDP. If it's not at the cutting edge, it's nevertheless much superior to the standard approach of doing nothing ... and it's an official acknowledgment of the wastefulness of doing so. Hence it's a significant step in the right direction. Hopefully its publication by BEA will nudge people away from uncritical and exclusive reliance on expenditure-side GDP. 
So here's to GDO.

[By the way, speaking of the Hal White volume, the introductory chapter is marvelous, filled with wonderful memories of Hal's career and insights into his research. You must read his description of his career path leading to UCSD, pp. vii-xi in the gray box.]