Sunday, July 9, 2017

On the Identification of Network Connectedness

I want to clarify an aspect of the Diebold-Yilmaz framework (e.g., here or here).  It is simply a method for summarizing and visualizing dynamic network connectedness, based on a variance decomposition matrix.  The variance decomposition is not a part of our technology; rather, it is the key input to our technology.  Calculating a variance decomposition of course requires an identified model.  We have nothing new to say about that; numerous models/identifications have appeared over the years, and it's your choice (though you will, of course, have to defend it).

For certain reasons (e.g., comparatively easy extension to high dimensions) Yilmaz and I generally use a vector-autoregressive model and Koop-Pesaran-Shin "generalized identification".  Again, however, if you don't find that appealing, you can use whatever model and identification scheme you want.  As long as you can supply a credible / defensible variance decomposition matrix, the network summarization / visualization technology can then take over.
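
For concreteness, here is a minimal sketch of that pipeline, purely illustrative and not the code behind any of our published results: fit a VAR with statsmodels, form the Pesaran-Shin generalized variance decomposition at a chosen horizon, and read off the "from," "to," and total connectedness measures.  Function and variable names are just for illustration.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def generalized_fevd(data, lags=2, horizon=10):
    """H-step generalized variance decomposition (rows normalized to sum to one)."""
    res = VAR(data).fit(lags)
    Sigma = np.asarray(res.sigma_u)            # residual covariance matrix
    A = res.ma_rep(maxn=horizon - 1)           # MA coefficients A_0, ..., A_{H-1}
    k = Sigma.shape[0]
    num = np.zeros((k, k))
    den = np.zeros(k)
    for h in range(horizon):
        AS = A[h] @ Sigma
        num += (AS ** 2) / np.diag(Sigma)      # (e_i' A_h Sigma e_j)^2 / sigma_jj
        den += np.diag(A[h] @ Sigma @ A[h].T)  # e_i' A_h Sigma A_h' e_i
    theta = num / den[:, None]
    return theta / theta.sum(axis=1, keepdims=True)

def connectedness(theta):
    """Directional and total connectedness from any variance decomposition matrix."""
    k = theta.shape[0]
    off = theta - np.diag(np.diag(theta))      # zero out own shares
    from_others = off.sum(axis=1)              # row sums: i <- others
    to_others = off.sum(axis=0)                # column sums: i -> others
    total = off.sum() / k                      # total connectedness index (0 to 1)
    return from_others, to_others, total
```

The second function makes the modularity point explicit: it takes any (row-normalized) variance decomposition matrix, however identified, and produces the connectedness measures.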


Monday, July 3, 2017

Bayes, Jeffreys, MCMC, Statistics, and Econometrics

In Ch. 3 of their brilliant book, Efron and Tibshirani (ET) assert that:
Jeffreys’ brand of Bayesianism [i.e., "uninformative" Jeffreys priors] had a dubious reputation among Bayesians in the period 1950-1990, with preference going to subjective analysis of the type advocated by Savage and de Finetti. The introduction of Markov chain Monte Carlo methodology was the kind of technological innovation that changes philosophies. MCMC ... being very well suited to Jeffreys-style analysis of Big Data problems, moved Bayesian statistics out of the textbooks and into the world of computer-age applications.
Interestingly, the situation in econometrics strikes me as rather the opposite.  Pre-MCMC, much of the leading work emphasized Jeffreys priors (RIP Arnold Zellner), whereas post-MCMC I see uniform priors at best (still hardly uninformative, as is well known and as noted by ET), and often Gaussian or Wishart or whatever.  MCMC of course still came to dominate modern Bayesian econometrics, but for a different reason: It facilitates calculation of the marginal posteriors of interest, in contrast to the conditional posteriors of old-style analytical calculations. (In an obvious notation and for an obvious normal-gamma regression problem, for example, one wants posterior(beta), not posterior(beta | sigma).) So MCMC has moved us toward marginal posteriors, but away from uninformative priors.
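
To fix ideas, here is a minimal Gibbs-sampler sketch for a Gaussian regression with independent normal and inverse-gamma priors (a close cousin of the textbook normal-gamma setup).  It is purely illustrative: the priors, names, and settings are mine, not anything from a particular paper.  The point is simply that iterating on the two conditional posteriors and pooling the beta draws delivers the marginal posterior(beta), with sigma integrated out.

```python
import numpy as np

def gibbs_regression(y, X, n_draws=5000, a0=2.0, d0=2.0, seed=0):
    """Gibbs sampler for y = X beta + e, e ~ N(0, sigma^2 I),
    with beta ~ N(0, 100 I) and sigma^2 ~ InvGamma(a0, d0) priors (illustrative)."""
    n, k = X.shape
    B0_inv = np.eye(k) / 100.0                 # prior precision of beta
    XtX, Xty = X.T @ X, X.T @ y
    sigma2 = 1.0
    draws = np.empty((n_draws, k))
    rng = np.random.default_rng(seed)
    for s in range(n_draws):
        # conditional posterior: beta | sigma^2, y
        Bn = np.linalg.inv(B0_inv + XtX / sigma2)
        beta = rng.multivariate_normal(Bn @ (Xty / sigma2), Bn)
        # conditional posterior: sigma^2 | beta, y  (inverse gamma)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (d0 + 0.5 * resid @ resid))
        draws[s] = beta
    # Pooled beta draws approximate the *marginal* posterior(beta), not posterior(beta | sigma).
    return draws
```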

Or maybe I'm missing something.

Thursday, June 29, 2017

More Slides: Forecast Evaluation, DSGE Modeling, and Connectedness

The last post (slides from a recent conference discussion) reminded me of slide decks that go along with several forthcoming papers.  I hope they're useful.

Diebold, F.X. and Shin, M. (in press), "Assessing Point Forecast Accuracy by Stochastic Error Distance," Econometric Reviews.  Slides here.

Diebold, F.X., Schorfheide, F. and Shin, M. (in press), "Real-Time Forecast Evaluation of DSGE Models with Stochastic Volatility," Journal of Econometrics.  Slides here.

Demirer, M., Diebold, F.X., Liu, L. and Yilmaz, K. (in press), "Estimating Global Bank Network Connectedness," Journal of Applied Econometrics.  Slides here.

Monday, June 26, 2017

Slides from SoFiE NYU Discussion

Here are the slides from my pre-conference discussion of Yang Liu's interesting paper, "Government Debt and Risk Premia", at the NYU SoFiE meeting. The key will be to see whether his result (that debt/GDP is a key driver of the equity premium) remains when he controls for expected future real activity. (See Campbell and Diebold, "Stock Returns and Expected Business Conditions: Half a Century of Direct Evidence," Journal of Business and Economic Statistics, 27, 266-278, 2009.)

Wednesday, June 7, 2017

Structural Change and Big Data

Recall the tall-wide-dense (T, K, m) Big Data taxonomy.  One might naively assert that tall data (big time dimension, T) are not really part of the Big Data phenomenon, insofar as T has not started growing more quickly in recent years.  But a more sophisticated perspective asks not whether T is growing faster, but whether it is already big enough to make structural change a potentially serious concern.  And structural change is a serious concern, routinely, in time-series econometrics.  Hence structural change, in a sense, produces Big Data through the T channel.

Saturday, May 27, 2017

SoFiE 2017 New York

If you haven't yet been to the Society for Financial Econometrics (SoFiE) annual meeting, now's the time.  They're pulling out all the stops for the 10th anniversary at NYU Stern, June 21-23, 2017.  There will be a good mix of financial econometrics and empirical finance (invited speakers here; full program here). The "pre-conference" will also continue, this year June 20, with presentations by junior scholars (new/recent Ph.D.'s) and discussions by senior scholars. Lots of information here. See you there!

Monday, May 22, 2017

Big Data in Econometric Modeling

Here's a speakers' photo from last week's Penn conference, Big Data in Dynamic Predictive Econometric Modeling.  Click through to find the program, copies of papers and slides, a participant list, and a few more photos.  A good and productive time was had by all!


Monday, May 15, 2017

Statistics in the Computer Age

Efron and Tibshirani's Computer Age Statistical Inference (CASI) is about as good as it gets. Just read it. (Yes, I generally gush about most work in the Efron, Hastie, Tibshirani, Breiman, Friedman, et al. tradition.  But there's good reason for that.)  As with the earlier Hastie-Tibshirani Springer-published blockbusters (e.g., here), the CASI publisher (Cambridge) has allowed ungated posting of the pdf (here).  Hats off to Efron, Tibshirani, Springer, and Cambridge.

Monday, May 8, 2017

Replicating Anomalies

I blogged a few weeks ago on "the file drawer problem".  In that vein, check out the interesting new paper below.  I like their use of the term "p-hacking".

Random thought 1:  
Note that reverse p-hacking can also occur, when an author wants large p-values.  In the study below, for example, the deck could be stacked with all sorts of dubious/spurious "anomaly variables" that no one ever took seriously.  Then of course a very large number would wind up with large p-values.  I am not suggesting that the study below is guilty of this; rather, I simply had never thought about reverse p-hacking before, and this paper led me to think of the possibility, so I'm relaying the thought.
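
Here is a quick simulation sketch of that mechanism (purely illustrative numbers and names): pad the candidate list with spurious "anomalies" whose true average return is zero, and roughly 95% of them show up as insignificant at the 5% level, mechanically inflating the headline insignificance rate.

```python
import numpy as np

rng = np.random.default_rng(42)
n_months, n_spurious = 600, 300                      # illustrative sample sizes
# spurious "anomaly" long-short returns with true mean zero
returns = rng.normal(0.0, 0.05, size=(n_months, n_spurious))

t_stats = returns.mean(axis=0) / (returns.std(axis=0, ddof=1) / np.sqrt(n_months))
share_insig = np.mean(np.abs(t_stats) < 1.96)
print(f"share of spurious 'anomalies' insignificant at 5%: {share_insig:.2f}")
# By construction roughly 95% are insignificant, so adding such variables
# mechanically raises the fraction of anomalies that fail to replicate.
```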

Related random thought 2:  
It would be interesting to compare anomalies published in "top journals" and "non-top journals" to see whether the top journals are more guilty or less guilty of p-hacking.  I can think of competing factors that could tip it either way!

Replicating Anomalies
by Kewei Hou, Chen Xue, Lu Zhang - NBER Working Paper #23394
Abstract:
The anomalies literature is infested with widespread p-hacking. We replicate the entire anomalies literature in finance and accounting by compiling a largest-to-date data library that contains 447 anomaly variables. With microcaps alleviated via New York Stock Exchange breakpoints and value-weighted returns, 286 anomalies (64%) including 95 out of 102 liquidity variables (93%) are insignificant at the conventional 5% level. Imposing the cutoff t-value of three raises the number of insignificance to 380 (85%). Even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Out of the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3). In all, capital markets are more efficient than previously recognized.