## Tuesday, October 21, 2014

I guess my first posted rant was the call for papers thing.  Here's a second.

Countless times, from me to Chair/Dean xxx at Some Other University:

> I am happy to help with your evaluation of Professor zzz. This email will serve as my letter. [email here]...

Countless times, from Chair/Dean xxx to me:

> Thanks, but we need a signed hard-copy letter, on letterhead.

Fantasy response from me to Chair/Dean xxx:

> Sure, no problem at all. My time is completely worthless, so I'm happy to oblige, despite the fact that email conveys precisely the same information and is every bit as legally binding (whatever that even means in this context) as a "signed" "letter" on "letterhead." So now I’ll copy my email, try to find some dusty old Word doc letterhead on my hard drive, paste the email into the Word doc, try to beat it into submission depending on how poor the formatting / font / color / blocking looks when first pasted, print from Word to pdf, attach the pdf to a new email, and re-send it to you. How 1990’s.
Actually last week I did send something approximating the fantasy email to a dean at a leading institution. I suspect that he didn't find it amusing. (I never heard back.) But as I also said at the end of that email,
"Please don’t be annoyed. I...know that these sorts of 'requirements' have nothing to do with you per se. Instead I’m just trying to push us both forward in our joint battle with red tape."

## Monday, October 13, 2014

### Lawrence R. Klein Legacy Colloquium

In Memoriam

The Department of Economics of the University of Pennsylvania, with kind support from the School of Arts and Sciences, the Wharton School, PIER and IER, is pleased to host a colloquium, "The Legacy of Lawrence R. Klein: Macroeconomic Measurement, Theory, Prediction and Policy," on Penn’s campus, Saturday, October 25, 2014. The full program and related information are here. We look forward to honoring Larry’s legacy throughout the day. Please join us if you can.

Featuring:
• Olav Bjerkholt, Professor of Economics, University of Oslo
• Harold L. Cole, Professor of Economics and Editor of International Economic Review, University of Pennsylvania
• Thomas F. Cooley, Paganelli-Bull Professor of Economics, New York University
• Francis X. Diebold, Paul F. Miller, Jr. and E. Warren Shafer Miller Professor of Economics, University of Pennsylvania
• Jesus Fernandez-Villaverde, Professor of Economics, University of Pennsylvania
• Dirk Krueger, Professor and Chair of the Department of Economics, University of Pennsylvania
• Enrique G. Mendoza, Presidential Professor of Economics and Director of Penn Institute for Economic Research, University of Pennsylvania
• Glenn D. Rudebusch, Executive Vice President and Director of Research, Federal Reserve Bank of San Francisco
• Frank Schorfheide, Professor of Economics, University of Pennsylvania
• Christopher A. Sims, John F. Sherrerd ‘52 University Professor of Economics, Princeton University
• Ignazio Visco, Governor of the Bank of Italy

## Monday, October 6, 2014

### Intuition for Prediction Under Bregman Loss

Elements of the Bregman family of loss functions, denoted $$B(y, \hat{y})$$, take the form:
$$B(y, \hat{y}) = \phi(y) - \phi(\hat{y}) - \phi'(\hat{y}) (y-\hat{y}),$$ where $$\phi: \mathcal{Y} \rightarrow \mathbb{R}$$ is any strictly convex function, and $$\mathcal{Y}$$ is the support of $$Y$$.
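As a quick sanity check (a minimal Python sketch; the numerical values are illustrative), quadratic loss is the Bregman member generated by $$\phi(x) = x^2$$, since the formula then collapses to $$(y - \hat{y})^2$$:

```python
import numpy as np

def bregman_loss(y, yhat, phi, phi_prime):
    """Bregman loss: B(y, yhat) = phi(y) - phi(yhat) - phi'(yhat) * (y - yhat)."""
    return phi(y) - phi(yhat) - phi_prime(yhat) * (y - yhat)

# With phi(x) = x^2 (strictly convex), the Bregman formula collapses to
# squared error: y^2 - yhat^2 - 2*yhat*(y - yhat) = (y - yhat)^2.
y, yhat = 3.0, 1.5
b = bregman_loss(y, yhat, phi=lambda x: x**2, phi_prime=lambda x: 2 * x)
assert np.isclose(b, (y - yhat) ** 2)
```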

Several readers have asked for intuition for the equivalence between the predictive optimality of $$E[y|\mathcal{F}]$$ and loss functions in the Bregman family $$B(y, \hat{y})$$. The simplest answers come from the proof itself, which is straightforward.

First consider $$B(y, \hat{y}) \Rightarrow E[y|\mathcal{F}]$$.  The derivative of expected Bregman loss with respect to $$\hat{y}$$ is
$$\frac{\partial}{\partial \hat{y}} E[B(y, \hat{y})] = \frac{\partial}{\partial \hat{y}} \int B(y,\hat{y}) \;f(y|\mathcal{F}) \; dy$$
$$= \int \frac{\partial}{\partial \hat{y}} \left ( \phi(y) - \phi(\hat{y}) - \phi'(\hat{y}) (y-\hat{y}) \right ) \; f(y|\mathcal{F}) \; dy$$
$$= \int (-\phi'(\hat{y}) - \phi''(\hat{y}) (y-\hat{y}) + \phi'(\hat{y})) \; f(y|\mathcal{F}) \; dy$$
$$= -\phi''(\hat{y}) \left( E[y|\mathcal{F}] - \hat{y} \right).$$
Hence the first order condition is
$$-\phi''(\hat{y}) \left(E[y|\mathcal{F}] - \hat{y} \right) = 0,$$
so the optimal forecast is the conditional mean, $$E[y|\mathcal{F}]$$.
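The first-order condition is easy to verify numerically. Below is a Python sketch (the gamma distribution, sample size, and $$a = 1$$ are illustrative choices) using the exponential Bregman member $$\phi(x) = 2a^{-2}e^{ax}$$: even though the loss is asymmetric and the distribution is skewed, grid-minimizing simulated expected loss lands on the sample mean, not the median.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0                                    # illustrative asymmetry parameter
phi = lambda x: 2 / a**2 * np.exp(a * x)   # exponential Bregman generator
phi_p = lambda x: 2 / a * np.exp(a * x)

def expected_bregman(yhat, y_draws):
    """Simulated E[B(y, yhat)] under the empirical distribution of y_draws."""
    return np.mean(phi(y_draws) - phi(yhat) - phi_p(yhat) * (y_draws - yhat))

# A skewed conditional distribution, so mean (= 1) and median (~ 0.84) differ.
y = rng.gamma(shape=2.0, scale=0.5, size=50_000)

grid = np.linspace(0.5, 1.5, 1001)
yhat_star = grid[np.argmin([expected_bregman(g, y) for g in grid])]

# The minimizer sits at the (sample) mean, not the median.
assert abs(yhat_star - y.mean()) < 0.01
assert abs(yhat_star - np.median(y)) > 0.1
```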

Now consider $$E[y|\mathcal{F}] \Rightarrow B(y, \hat{y})$$. It's a simple task of reverse-engineering. We need the f.o.c. to be of the form
$$const \times \left(E[y|\mathcal{F}] - \hat{y} \right) = 0,$$
so that the optimal forecast is the conditional mean, $$E[y|\mathcal{F}]$$. Inspection reveals that $$B(y, \hat{y})$$ (and only $$B(y, \hat{y})$$) does the trick.

One might still want more intuition for the optimality of the conditional mean under Bregman loss, despite its asymmetry.  The answer, I conjecture, is that the Bregman family is not asymmetric! At least not for an appropriate definition of asymmetry in the general $$L(y, \hat{y})$$ case, which is more complicated and subtle than the $$L(e)$$ case.  Asymmetric loss plots like those in Patton (2014), on which I reported last week, are for fixed $$y$$ (in Patton's case, $$y=2$$ ), whereas for a complete treatment we need to look across all $$y$$. More on that soon.

[I would like to thank -- without implicating -- Minchul Shin for helpful discussions.]

## Monday, September 29, 2014

### A Mind-Blowing Optimal Prediction Result

I concluded my previous post with:
Consider, for example, the following folk theorem: "Under asymmetric loss, the optimal prediction is conditionally biased." The folk theorem is false. But how can that be?
What's true is this: The conditional mean is the L-optimal forecast if and only if the loss function L is in the Bregman family, given by
$$L(y, \hat{y}) = \phi (y) - \phi (\hat{y}) - \phi ' ( \hat{y}) (y - \hat{y}).$$ Quadratic loss is in the Bregman family, so the optimal prediction is the conditional mean.  But the Bregman family has many asymmetric members, for which the conditional mean remains optimal despite the loss asymmetry. It just happens that the most heavily-studied asymmetric loss functions are not in the Bregman family (e.g., linex, linlin), so the optimal prediction is not the conditional mean.

So the Bregman result (basically unseen in econometrics until Patton's fine new 2014 paper) is not only (1) a beautiful and perfectly-precise (necessary and sufficient) characterization of optimality of the conditional mean, but also (2) a clear statement that the conditional mean can be optimal even under highly-asymmetric loss.

Truly mind-blowing! Indeed it sounds bizarre, if not impossible. You'd think that such asymmetric Bregman families must be somehow pathological or contrived. Nope. Consider, for example, Gneiting's (2011) "homogeneous" Bregman family, obtained by taking $$\phi (x; k) = |x|^k$$ for $$k>1$$, and Patton's (2014) "exponential" Bregman family, obtained by taking $$\phi (x; a) = 2 a^{-2} \exp(ax)$$ for $$a \ne 0$$. Patton (2014) plots them (see Figure 1 from his paper, reproduced below with his kind permission). The Gneiting homogeneous Bregman family has a few funky plateaus on the left, but certainly nothing bizarre, and the Patton exponential Bregman family has nothing funky whatsoever. Look, for example, at the upper right element of Patton's figure. Perfectly natural looking -- and highly asymmetric.
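To put numbers on the asymmetry (a minimal Python sketch; $$y = 2$$ matches the fixed value in Patton's plots, and $$a = 1$$ is an illustrative parameter choice), compare equal-sized errors of opposite sign under the exponential Bregman loss:

```python
import numpy as np

a = 1.0  # illustrative parameter in phi(x) = 2 a^{-2} exp(a x)
phi = lambda x: 2 / a**2 * np.exp(a * x)
phi_p = lambda x: 2 / a * np.exp(a * x)
B = lambda y, yhat: phi(y) - phi(yhat) - phi_p(yhat) * (y - yhat)

# Equal-sized errors of opposite sign around y = 2:
under = B(2.0, 1.0)  # forecast one unit too low
over = B(2.0, 3.0)   # forecast one unit too high

# Over-prediction costs several times more than under-prediction here,
# yet the conditional mean remains the optimal forecast.
assert over > 3 * under
```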

For your reading pleasure, see: Bregman (1967), Savage (1971), Christoffersen and Diebold (1997), Gneiting (2011), and Patton (2014).

## Monday, September 22, 2014

### Prelude to a Mind-Blowing Result

A mind-blowing optimal prediction result will come next week. This post sets the stage.

My earlier post, "Musings on Prediction Under Asymmetric Loss," got me thinking and re-thinking about the predictive conditions under which the conditional mean is optimal, in the sense of minimizing expected loss.

To strip things to the simplest case possible, consider a conditionally-Gaussian process.

(1) Under quadratic loss, the conditional mean is of course optimal. But the conditional mean is also optimal under other loss functions, like absolute-error loss (in general the conditional median is optimal under absolute-error loss, but by symmetry of the conditionally-Gaussian process, the conditional median is the conditional mean).

(2) Under asymmetric loss like linex or linlin, the conditional mean is generally not the optimal prediction. One would naturally expect the optimal forecast to be biased, to lower the probability of making errors of the more hated sign. That intuition is generally correct. More precisely, the following result from Christoffersen and Diebold (1997) obtains:
If $$y_{t}$$ is a conditionally Gaussian process and $$L(e_{t+h|t})$$ is any loss function defined on the $$h$$-step-ahead prediction error $$e_{t+h|t} = y_{t+h} - y_{t+h|t}$$, then the $$L$$-optimal predictor is of the form $$y_{t+h|t} = \mu_{t+h,t} + \alpha_{t},$$ where $$\mu_{t+h,t} = E(y_{t+h} | \Omega_t)$$, $$\Omega_t = \{y_t, y_{t-1}, \ldots\}$$, and $$\alpha_{t}$$ depends only on the loss function $$L$$ and the conditional prediction-error variance $$var(e_{t+h|t} | \Omega_{t})$$.
That is, the optimal forecast is a "shifted" version of the conditional mean, where the generally time-varying bias depends only on the loss function (no explanation needed) and on the conditional variance (explanation: when the conditional variance is high, you're more likely to make a large error, including an error of the sign you hate, so under asymmetric loss it's optimal to inject more bias at such times).
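Here is a numerical sketch of the result for linlin loss (Python; all parameter values illustrative). For Gaussian $$y$$, the $$L$$-optimal forecast is known in closed form as the conditional $$a/(a+b)$$ quantile, $$\mu + \sigma \Phi^{-1}(a/(a+b))$$, so the bias depends only on the loss parameters and the conditional variance, exactly as the theorem says:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma = 0.5, 2.0   # illustrative conditional mean and std
a, b = 4.0, 1.0        # linlin: positive errors (y > yhat) cost 4x negative ones

y = rng.normal(mu, sigma, 100_000)

def expected_linlin(yhat):
    """Simulated expected linlin loss: a*max(e,0) + b*max(-e,0), e = y - yhat."""
    e = y - yhat
    return np.mean(a * np.maximum(e, 0.0) + b * np.maximum(-e, 0.0))

grid = np.linspace(mu - 2 * sigma, mu + 2 * sigma, 1001)
yhat_star = grid[np.argmin([expected_linlin(g) for g in grid])]

# Closed form: optimal forecast = the a/(a+b) conditional quantile,
# a "shifted" conditional mean with bias sigma * Phi^{-1}(a/(a+b)).
theory = mu + sigma * norm.ppf(a / (a + b))
assert abs(yhat_star - theory) < 0.05
```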

(1) and (2) are true. A broad and correct lesson emerging from them is that the conditional mean is the central object for optimal prediction under any loss function. Either it is the optimal prediction, or it's a key ingredient.

But casual readings of (1) and (2) can produce false interpretations. Consider, for example, the following folk theorem: "Under asymmetric loss, the optimal prediction is conditionally biased." The folk theorem is false. But how can that be? Isn't the folk theorem basically just (2)?

Things get really interesting.

To be continued...

## Monday, September 15, 2014

### 1976 NBER-Census Time Series Conference

What a great blast from the past -- check out the program of the 1976 NBER-Census Time-Series Conference. (Thanks to Bill Wei for forwarding, via Hang Kim.)

The 1976 conference was a pioneer in bridging time-series econometrics and statistics. Econometricians at the table included Zellner, Engle, Granger, Klein, Sims, Howrey, Wallis, Nelson, Sargent, Geweke, and Chow. Statisticians included Tukey, Durbin, Bloomfield, Cleveland, Watts, and Parzen. Wow!

The 1976 conference also clearly provided the model for the subsequent long-running and hugely-successful NBER-NSF Time-Series Conference, the hallmark of which is also bridging the time-series econometrics and statistics communities. An historical listing is here, and the tradition continues with the upcoming 2014 NBER-NSF meeting at the Federal Reserve Bank of St. Louis. (Registration deadline Wednesday!)

## Monday, September 8, 2014

### Network Econometrics at Dinner

At a seminar dinner at Duke last week, I asked the leading young econometrician at the table for his forecast of the Next Big Thing, now that the partial-identification set-estimation literature has matured. The speed and forcefulness of his answer -- network econometrics -- raised my eyebrows, and I agree with it. (Obviously I've been working on network econometrics, so maybe he was just stroking me, but I don't think so.) Related, the Acemoglu-Jackson 2014 NBER Methods Lectures, "Theory and Application of Network Models," are now online (both videos and slides). Great stuff!

## Tuesday, September 2, 2014

### FinancialConnectedness.org Site Now Up

The Financial and Macroeconomic Connectedness site is now up, thanks largely to the hard work of Kamil Yilmaz and Mert Demirer. Check it out at FinancialConnectedness.org. It implements the Diebold-Yilmaz framework for network connectedness measurement in global stock, sovereign bond, FX and CDS markets, both statically and dynamically (in real time). It includes results, data, code, bibliography, etc. Presently it's all financial markets and no macro (e.g., no global business cycle connectedness), but macro is coming soon. Check back in the coming months as the site grows and evolves.

## Monday, August 25, 2014

### Musings on Prediction Under Asymmetric Loss

As has been known for more than a half-century, linear-quadratic-Gaussian (LQG) decision/control problems deliver certainty equivalence (CE). That is, in LQG situations we can first predict/extract (form a conditional expectation) and then simply plug the result into the rest of the problem. Hence the huge literature on prediction under quadratic loss, without specific reference to the eventual decision environment.

But two-step forecast-decision separation (i.e., CE) is very special. Situations of asymmetric loss, for example, immediately diverge from LQG, so certainty equivalence is lost. That is, the two-step CE prescription of “forecast first, and then make a decision conditional on the forecast” no longer works under asymmetric loss.
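A small numerical illustration (Python sketch, illustrative parameters): under linex loss $$L(e) = e^{ae} - ae - 1$$ with Gaussian uncertainty, the optimal forecast is not the plugged-in conditional mean $$\mu$$ but the shifted value $$\mu + a\sigma^2/2$$, a classic result:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, a = 1.0, 1.5, 0.8  # illustrative Gaussian problem, linex parameter

y = rng.normal(mu, sigma, 100_000)

def expected_linex(yhat):
    """Simulated expected linex loss, e = y - yhat."""
    e = y - yhat
    return np.mean(np.exp(a * e) - a * e - 1.0)

grid = np.linspace(mu - 2.0, mu + 3.0, 1001)
yhat_star = grid[np.argmin([expected_linex(g) for g in grid])]

# The linex-optimal Gaussian forecast is mu + a * sigma^2 / 2, not mu:
# the plug-in (certainty-equivalence) prescription fails.
assert abs(yhat_star - (mu + a * sigma**2 / 2)) < 0.05
assert abs(yhat_star - mu) > 0.5
```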

Yet forecasting under asymmetric loss -- again without reference to the decision environment -- seems to pass the market test. People are interested in it, and a significant literature has arisen. (See, for example, Elliott and Timmermann, "Economic Forecasting," Journal of Economic Literature, 46, 3-56.)

What gives? Perhaps the implicit hope is that CE two-step procedures might be acceptably-close approximations to fully-optimal procedures even in non-CE situations. Maybe they are, sometimes. Or perhaps we haven't thought enough about non-CE environments, and the literature on prediction under asymmetric loss is misguided. Maybe it is, sometimes. Maybe it's a little of both.

## Monday, August 18, 2014

### Models Didn't Cause the Crisis

Some of the comments engendered by the Black Swan post remind me of something I've wanted to say for a while: In sharp contrast to much popular perception, the financial crisis wasn't caused by models or modelers.

Rather, the crisis was caused by huge numbers of smart, self-interested people involved with the financial services industry -- buy-side industry, sell-side industry, institutional and retail customers, regulators, everyone -- responding rationally to the distorted incentives created by too-big-to-fail (TBTF), sometimes consciously, often unconsciously. Of course modelers were part of the crowd looking the other way, but that misses the point: TBTF coaxed everyone into looking the other way. So the key to financial crisis management isn't as simple as executing the modelers, who perform invaluable and ongoing tasks. Instead it's credibly committing to end TBTF, but no one has found a way. Ironically, Dodd-Frank steps backward, institutionalizing TBTF, potentially making the financial system riskier now than ever. Need it really be so hard to end TBTF? As Nick Kiefer once wisely said (as the cognoscenti rolled their eyes), "If they're too big to fail, then break them up."

[For more, see my earlier financial regulation posts: part 1, part 2, and part 3.]