Thursday, February 11, 2016

New R Code for High-Frequency Financial Data Analysis

I looked through the manual (below). Looks well done.

From the email:

Package features estimators for working with high frequency market data.

Microstructure Noise:
- Autocovariance Noise Variance
- Realized Noise Variance
- Unbiased Realized Noise Variance
- Noise-to-Signal Ratio

Price Variance:
- Two Series Realized Variance
- Multiple Series Realized Variance
- Modulated Realized Variance
- Jump Robust Modulated Realized Variance
- Uncertainty Zones Realized Variance
- Kernel Realized Variance (Bartlett, Cubic, 5th/6th/7th/8th-order, Epanechnikov, Parzen, Tukey-Hanning kernels)

Price Quarticity:
- Realized Quarticity
- Realized Quad-power Quarticity
- Realized Tri-power Quarticity
- Modulated Realized Quarticity

Available for R (on CRAN).
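
To convey the flavor of the simplest items on the lists above, here's a back-of-the-envelope R sketch of realized variance, realized quarticity, and a first-order noise-variance approximation on simulated one-minute prices. This is generic textbook material, not the package's API; its function names and interfaces will differ.

set.seed(1)
n <- 390                            # one-minute marks in a trading day
p <- cumsum(rnorm(n, 0, 0.001))     # latent log price (random walk)
u <- rnorm(n, 0, 0.0005)            # microstructure noise
y <- p + u                          # observed log price

r  <- diff(y)                       # intraday log returns
rv <- sum(r^2)                      # realized variance
rq <- (length(r) / 3) * sum(r^4)    # realized quarticity

# Crude noise-variance estimate: at the highest frequencies noise dominates,
# and E[RV] is approximately IV + 2n * noise_var, so RV / (2n) works.
noise_var <- rv / (2 * length(r))
c(rv = rv, rq = rq, noise_var = noise_var)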

Tuesday, February 9, 2016

Worst Practices Conference

This ad just arrived in the email. What a title. Presumably the conference is about improving worst-case outcomes in order to reduce minimax expected loss. But still, that title...

2016 Foresight Practitioner Conference:
Worst Practices in Forecasting and Planning: 
Making Better Mistakes Tomorrow

Foresight has teamed up with the Advanced Analytics Institute at North Carolina State University (NCSU) in Raleigh to offer a tantalizing 1.5-day conference.

New Judea Pearl Causal Inference "Primer"

Should be a fun and informative read. Check out the contents and various chapters here. ("Causal Inference in Statistics - A Primer" by J. Pearl, M. Glymour and N. Jewell. Available now on Kindle; available in print Feb. 26, 2016.)


Saturday, February 6, 2016

Dual Regression

Speaking of quantiles and quantile regression, I also like the new version of Spady and Stouli's "Dual Regression." The power and insights of quantile regression, without the possibility of intersecting conditional quantile surfaces.  Sounds good to me.
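
If you haven't bumped into the crossing problem, a small simulation makes it vivid. Below is an R sketch using the quantreg package (the data-generating process and tau values are mine, purely for illustration): fit two nearby conditional quantiles separately, extrapolate a bit, and nothing stops the lower quantile curve from poking above the upper one. Dual regression rules this out by construction.

library(quantreg)

set.seed(123)
n <- 100
x <- runif(n)
y <- 1 + x + (0.5 + 0.5 * x) * rnorm(n)   # heteroskedastic noise

fit <- rq(y ~ x, tau = c(0.45, 0.55))     # two separately fitted quantiles

grid <- data.frame(x = seq(-3, 3, 0.1))   # extrapolate beyond the data
pred <- predict(fit, newdata = grid)      # one column per tau

# Separately fitted linear quantile curves can cross: the 0.45 curve
# can exceed the 0.55 curve somewhere on the grid (typically it does here).
any(pred[, 1] > pred[, 2])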

Multivariate Quantiles

This new paper got me thinking. How often one wishes for a natural notion of the multivariate median or, more generally, of multivariate quantiles. Had fun learning about centerpoints and Tukey depths (a brute-force depth sketch follows the reference below).

Multiple-Output Quantile Regression
By: Marc Hallin; Miroslav Šiman
URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/224753&r=ecm

(Skip the first page, which is evidently from a different paper.)
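
For the curious, Tukey (halfspace) depth is easy to approximate by brute force in two dimensions: the depth of a point is the smallest fraction of sample points in any closed halfspace containing it, and the deepest point is a natural multivariate median. A rough base-R sketch (the grid-of-directions approximation and all constants are my own choices):

tukey_depth <- function(pt, X, n_dir = 360) {
  angles <- seq(0, pi, length.out = n_dir)
  min(sapply(angles, function(a) {
    u    <- c(cos(a), sin(a))                # unit direction
    proj <- X %*% u                          # project the sample onto u
    p0   <- sum(pt * u)                      # projection of the candidate point
    min(mean(proj >= p0), mean(proj <= p0))  # lighter side of the split
  }))
}

set.seed(1)
X <- matrix(rnorm(400), ncol = 2)      # 200 bivariate normal points
tukey_depth(colMeans(X), X)            # central point: depth near 0.5
tukey_depth(c(3, 3), X)                # outlying point: depth near 0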

Sunday, January 31, 2016

Shrinking VAR's Toward Theory: Supplanting the Minnesota Prior?


A recent post, On Bayesian DSGE Modeling with Hard and Soft Restrictions, ended with: "A related issue is whether 'theory priors' will supplant others, like the 'Minnesota prior'. I'll save that for a later post." This is that later post. Its title refers to Ingram and Whiteman's 1994 classic, entitled "Supplanting the 'Minnesota' Prior: Forecasting Macroeconomic Time Series Using Real Business Cycle Model Priors."

So, shrinking VAR's using DSGE theory priors improves VAR forecasts. Sounds like a victory for economics, with the headline "Using Economic Theory Improves Economic Forecasts!" We'd all like that. We all want that.

But the "victory" is misleading, and more than a little hollow. Lots of shrinkage directions improve forecasts. Indeed almost all shrinkage directions improve forecasts. Real victory would require theory-inspired priors to deliver clear extra improvement relative to other shrinkage directions, but they usually don't. In particular, the Minnesota prior, centered on a simple vector random walk, remains competitive. (See Del Negro and Schorfheide (2004) and Del Negro and Schorfheide (2007).) Sometimes theory priors beat the Minnesota prior by a little, sometimes they lose by a little. It depends on the dataset, the variable, the forecast horizon, etc.

The bottom line: Theory priors seem to be roughly as good as anything else, including the Minnesota prior, but certainly they've not led us to anything resembling wonderful new forecasting success. This seems at best a small forecasting victory for theory priors, but perhaps a victory nonetheless, particularly given the obvious appeal of using a theory prior for Bayesian VAR forecasting that coheres with the theory model used for policy analysis.
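
For concreteness, here is what "shrinking toward the Minnesota prior" amounts to in the simplest case. The sketch below uses a scalar, fixed-variance conjugate version of the prior on a simulated VAR(1); the hyperparameter lambda, the data-generating process, and the crude variance proxy are all illustrative simplifications, not anyone's production code. Each equation's coefficients are shrunk toward a random walk (own first lag 1, everything else 0):

set.seed(42)
n <- 200; k <- 3
Y <- matrix(0, n, k)
for (t in 2:n) Y[t, ] <- 0.5 * Y[t - 1, ] + rnorm(k, sd = 0.5)

X <- Y[1:(n - 1), ]       # lagged regressors
y <- Y[2:n, ]             # current values
lambda <- 0.2             # overall tightness: smaller = heavier shrinkage

post_mean <- matrix(NA, k, k)   # row i holds equation i's coefficients
for (i in 1:k) {
  b0 <- ifelse(1:k == i, 1, 0)  # prior mean: random walk
  V0 <- diag(lambda^2, k)       # prior variance (scalar version for clarity)
  s2 <- var(y[, i])             # crude residual-variance proxy
  # Conjugate posterior mean: a precision-weighted average of the prior
  # mean and the OLS information.
  post_mean[i, ] <- solve(solve(V0) + crossprod(X) / s2,
                          solve(V0, b0) + crossprod(X, y[, i]) / s2)
}
round(post_mean, 2)  # diagonal pulled from its OLS value toward 1

Replacing the random-walk prior mean (b0 above) with the coefficients implied by a DSGE model is, in essence, the theory-prior move discussed above.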

Saturday, January 23, 2016

Strippers, JFK, Stalin, and the Oxford Comma

Maybe everyone already knows about the Oxford comma and the crazy stripper thing. I just learned about them. Anyway, here goes.

Consider (1) "x, y and z" vs. (2) "x, y, and z". The difference is that (2) has an extra comma before "and". I always thought that (1) vs. (2) doesn't matter, so long as you pick one and stick with it, maintaining consistency. But some authorities feel strongly that (2) should always be used. Indeed the extra comma in (2) is called an "Oxford comma", because the venerable Oxford University Press has insisted on its use for as long as anyone can remember.

Oxford has a point. It turns out that use of the Oxford comma eliminates the possibility of confusion that can arise otherwise. For example, consider the sentence, "We invited two strippers, JFK and Stalin." It's not clear whether that means two strippers plus JFK and Stalin, for a total of four people, as in the left panel below, or whether the strippers are JFK and Stalin, as in the right panel.

[Pictures: two CBS Dallas screenshots. Left panel: four people (two strippers, plus JFK and Stalin). Right panel: two people (JFK and Stalin as the strippers).]
In contrast, inclusion of an Oxford comma renders the meaning unambiguous: "We invited two strippers, JFK, and Stalin" clearly corresponds to the left panel. 


The wacky example and pictures were created by a Dallas high school teacher and used in class a few months ago. Local parents were suitably outraged. Read about it here.

(The pictures are CBS Dallas screenshots.  Thanks to Hannah Diebold for bringing them to my attention!)

Wednesday, January 20, 2016

Time-Varying Dynamic Factor Loadings

Check out Mikkelsen et al. (2015). I've always wanted to try high-dimensional dynamic factor models (DFM's) with time-varying loadings as an approach to network connectedness measurement (e.g., increasing connectedness would correspond to increasing factor loadings...). The problem for me was how to do time-varying parameter DFM's in (ultra) high dimensions. Enter Mikkelsen et al. I also like that it's MLE -- I'm still an MLE fan, per Doz, Giannone and Reichlin. It might be cool and appropriate to endow the time-varying factor loadings with factor structure themselves, which might be a straightforward extension (application?) of Stevanovic (2015). (Stevanovic paper here; supplementary material here.)

Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Jakob Guldbæk Mikkelsen (Aarhus University and CREATES); Eric Hillebrand (Aarhus University and CREATES); Giovanni Urga (Cass Business School)
2015

In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite-sample properties of the maximum likelihood estimator through an extensive simulation study and illustrate the empirical relevance of the time-varying loadings structure using a large quarterly dataset for the US economy.
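
To make the two-step procedure in the abstract concrete, here is a toy one-factor R sketch: step 1 extracts a principal-components factor; step 2 runs a univariate Kalman filter for one series, treating that series' loading as an AR(1) state. All dimensions, AR parameters, and variances are invented for illustration, and the PCA factor is identified only up to sign and scale; this is a caricature of the paper's estimator, not a reimplementation.

set.seed(7)
T_ <- 300; N <- 20
f <- as.numeric(arima.sim(list(ar = 0.7), T_))       # latent factor
lam <- matrix(0, T_, N)                              # time-varying loadings
lam[1, ] <- rnorm(N, 1, 0.2)
for (t in 2:T_) lam[t, ] <- 0.5 + 0.5 * lam[t - 1, ] + rnorm(N, 0, 0.05)
X <- lam * f + matrix(rnorm(T_ * N, 0, 0.5), T_, N)  # observed panel

# Step 1: principal-components factor estimate.
fhat <- prcomp(X, center = TRUE)$x[, 1]

# Step 2: Kalman filter, state = loading_t, with
# loading_t = c0 + phi * loading_{t-1} + eta_t and x_t = f_t * loading_t + eps_t.
kf_loading <- function(x, f, c0 = 0.5, phi = 0.5, q = 0.05^2, r = 0.25) {
  a <- 1; P <- 1                                  # rough initial state
  path <- numeric(length(x))
  for (t in seq_along(x)) {
    a <- c0 + phi * a; P <- phi^2 * P + q         # predict
    K <- P * f[t] / (f[t]^2 * P + r)              # Kalman gain
    a <- a + K * (x[t] - f[t] * a)                # update mean
    P <- (1 - K * f[t]) * P                       # update variance
    path[t] <- a
  }
  path
}

lamhat1 <- kf_loading(X[, 1], fhat)
abs(cor(lamhat1, lam[, 1]))  # tracks the true loading up to PCA sign/scale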

Sunday, January 17, 2016

Measuring Policy Uncertainty and its Effects

Fascinating work like Baker, Bloom and Davis (2015) has for some time had me interested in defining and measuring policy uncertainty and its effects. 

A plausible hypothesis is that policy uncertainty, like inflation uncertainty, reduces aggregate welfare by throwing sand in the Walrasian gears. An interesting new paper by Erzo Luttmer and Andrew Samwick, "The Welfare Cost of Policy Uncertainty: Evidence from Social Security," drills down to the micro decision-making level and shows how policy uncertainty reduces individual welfare by directly eroding the intended policy benefits. Nice work.