Monday, November 20, 2017

More on Path Forecasts

I blogged on path forecasts a few days ago.  A reader just forwarded this interesting paper, of which I was unaware.  Lots of ideas and up-to-date references.

Thursday, November 16, 2017

Forecasting Path Averages

Consider two standard types of \(h\)-step forecast:

(a).  \(h\)-step forecast, \(y_{t+h,t}\), of \(y_{t+h}\)

(b).  \(h\)-step path forecast, \(p_{t+h,t}\), of \(p_{t+h} =  \{ y_{t+1}, y_{t+2}, ..., y_{t+h} \}\).

Clive Granger used to emphasize the distinction between (a) and (b).

As regards path forecasts, lately there's been some focus not on forecasting the entire path \(p_{t+h}\), but rather on forecasting the path average:

(c).  \(h\)-step path average forecast, \(a_{t+h,t}\), of \(a_{t+h} = \frac{1}{h} \sum_{i=1}^{h} y_{t+i}\).

The leading case is forecasting "average growth", as in Müller and Watson (2016).

Forecasting path averages (c) never fully resonated with me.  After all, (b) is sufficient for (c), but not conversely -- the average is just one aspect of the path, and additional aspects (overall shape, etc.) might be of interest.

Then, listening to Ken West's FRB St. Louis talk, my eyes opened.  Of course the path average is insufficient for the whole path, but it's surely the most important aspect of the path -- if you could know just one thing about the path, you'd almost surely ask for the average.  Moreover -- and this is important -- it might be much easier to provide credible point, interval, and density forecasts of \(a_{t+h}\) than of \(p_{t+h}\).

So I still prefer full path forecasts when feasible/credible, but I'm now much more appreciative of path averages.
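The sufficiency of (b) for (c) is easy to see by simulation: draws of the full path immediately deliver point, interval, and density forecasts of the path average.  A minimal sketch, assuming a hypothetical AR(1) data-generating process with known parameters (the numbers are illustrative, not from any of the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) DGP with known parameters (an assumption for illustration):
# y_t = phi * y_{t-1} + eps_t,  eps_t ~ N(0, sigma^2)
phi, sigma, y_t, h, ndraws = 0.8, 1.0, 2.0, 12, 10_000

# Simulate draws of the full path p_{t+h} = {y_{t+1}, ..., y_{t+h}}
paths = np.empty((ndraws, h))
prev = np.full(ndraws, y_t)
for j in range(h):
    prev = phi * prev + sigma * rng.standard_normal(ndraws)
    paths[:, j] = prev

# The path draws immediately deliver the path average a_{t+h} = (1/h) sum y_{t+i}
avg_draws = paths.mean(axis=1)

point = avg_draws.mean()                          # point forecast of a_{t+h}
lo90, hi90 = np.percentile(avg_draws, [5, 95])    # 90% interval forecast
```

Any other path functional (the max, a shape measure, etc.) comes just as easily from the same draws, which is the sense in which the path forecast (b) is sufficient for the path average forecast (c).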

Wednesday, November 15, 2017

FRB St. Louis Forecasting Conference

Got back a couple days ago.  Great lineup.  Wonderful to see such sharp focus.  Many thanks to FRBSL and the organizers (Domenico Giannone, George Kapetanios, and Mike McCracken).  I'll hopefully blog on one or two of the papers shortly.  Meanwhile, the program is here.

Wednesday, November 8, 2017

Artificial Intelligence, Machine Learning, and Productivity

As Bob Solow famously quipped, "You can see the computer age everywhere but in the productivity statistics".  That was in 1987.  The new "Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics," NBER w.p. 24001, by Brynjolfsson, Rock, and Syverson, brings us up to 2017.  Still a puzzle.  Fascinating.  Ungated version here.

Sunday, November 5, 2017

Regression on Term Structures

An important insight regarding the use of dynamic Nelson-Siegel (DNS) and related term-structure modeling strategies (see here and here) is that they facilitate regression on an entire term structure.  Regressing something on a curve might initially sound strange, or ill-posed.  The insight, of course, is that DNS distills curves into level, slope, and curvature factors; hence if you know the factors, you know the whole curve.  And those factors can be estimated and included in regressions, effectively enabling regression on a curve.
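Extracting the factors is just per-date OLS on the Nelson-Siegel loadings.  A minimal sketch (fixing the decay parameter at the Diebold-Li value \(\lambda = 0.0609\), with maturities in months, is an assumption for illustration):

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings at maturities tau (months), decay parameter lam."""
    x = lam * np.asarray(tau, float)
    slope = (1 - np.exp(-x)) / x          # slope loading
    curv = slope - np.exp(-x)             # curvature loading
    return np.column_stack([np.ones_like(x), slope, curv])

def ns_factors(yields, tau, lam=0.0609):
    """Per-date OLS estimates of (level, slope, curvature).
    yields: (T, k) array of yield curves; tau: (k,) maturities in months."""
    X = ns_loadings(tau, lam)
    beta, *_ = np.linalg.lstsq(X, np.asarray(yields, float).T, rcond=None)
    return beta.T                         # (T, 3): one factor triple per date
```

The resulting (T, 3) factor series can then enter any regression, which is effectively regression on the whole curve.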

In a stimulating new paper, “The Time-Varying Effects of Conventional and Unconventional Monetary Policy: Results from a New Identification Procedure”, Atsushi Inoue and Barbara Rossi put that insight to very good use. They use DNS yield curve factors to explore the effects of monetary policy during the Great Recession.  That monetary policy is often dubbed "unconventional" insofar as it involved the entire yield curve, not just a very short "policy rate".

I recently saw Atsushi present it at NBER-NSF and Barbara present it at Penn's econometrics seminar.  It was posted today, here.

Sunday, October 29, 2017

What's up With "Fintech"?

It's been a while, so it's time for a rant (in this case gentle, with no names named).

Discussion of financial technology ("fintech", as it's called) seems to be everywhere these days, from business school fintech course offerings to high-end academic fintech research conferences. I definitely get the business school thing -- tech is cool with students now, and finance is cool with students now, and there are lots of high-paying jobs.

But I'm not sure I get the academic research thing. We can talk about "X-tech" for almost unlimited X: shopping, travel, learning, medicine, construction, sailing, ..., and yes, finance. It's all interesting, but is there something extra interesting about X=finance that elevates fintech to a higher level? Or elevates it to a serious and separate new research area? If there is, I don't know what it is, notwithstanding the cute name and all the recent publicity.

(Some earlier rants appear to the right, under Browse by Topic / Rants.)

Sunday, October 22, 2017

Pockets of Predictability

The possibility of localized "pockets of predictability", particularly in financial markets, is obviously intriguing.  Recently I'm noticing a similarly intriguing pocket of research on pockets of predictability.

The following paper, for example, was presented at the 2017 NBER-NSF Time Series Conference at Northwestern University, although it is evidently not yet circulating:

"Pockets of Predictability", by Leland Farmer (UCSD), Lawrence Schmidt (Chicago), and Allan Timmermann (UCSD).

Abstract:  We show that return predictability in the U.S. stock market is a localized phenomenon, in which short periods, “pockets,” with significant predictability are interspersed with long periods with little or no evidence of return predictability. We explore possible explanations of this finding, including time-varying risk premia, and find that they are inconsistent with a general class of affine asset pricing models which allow for stochastic volatility and compound Poisson jumps. We find that pockets of return predictability can, however, be explained by a model of incomplete learning in which the underlying cash flow process is subject to change and investors update their priors about the current state. Simulations from the model demonstrate that investors’ learning about the underlying cash flow process can induce patterns that look, ex-post, like local return predictability, even in a model in which ex-ante expected returns are constant.

And this one just appeared as an NBER w.p.: "Sparse Signals in the Cross-Section of Returns", by Alexander M. Chinco, Adam D. Clark-Joseph, Mao Ye, NBER w.p. 23933, October 2017.
http://papers.nber.org/papers/w23933
Abstract: This paper applies the Least Absolute Shrinkage and Selection Operator (LASSO) to make rolling 1-minute-ahead return forecasts using the entire cross section of lagged returns as candidate predictors. The LASSO increases both out-of-sample fit and forecast-implied Sharpe ratios. And, this out-of-sample success comes from identifying predictors that are  unexpected, short-lived, and sparse. Although the LASSO uses a statistical rule rather than economic intuition to identify predictors, the predictors it identifies are nevertheless associated with economically meaningful events: the LASSO tends to identify as predictors stocks with news about fundamentals.
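For intuition, the core of such an exercise is the LASSO's soft-thresholding, which zeroes out weakly informative predictors and keeps the sparse signals.  A minimal self-contained sketch via coordinate descent (not the authors' implementation; the tuning parameter and data are illustrative assumptions):

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=100):
    """Minimal LASSO via coordinate descent:
    minimize (1/2n)||y - Xb||^2 + alpha * ||b||_1.
    Assumes roughly standardized columns; a sketch, not production code."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n             # per-column scale
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ r_j / n
            # soft-threshold: small correlations are set exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_ss[j]
    return b
```

In the spirit of the paper, X would hold the lagged cross section of returns over a short rolling window, and the many coefficients the soft-threshold zeroes out are exactly what makes the selected predictors sparse.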

Here's some associated work in dynamical systems theory:  "A Mechanism for Pockets of Predictability in Complex Adaptive Systems", by Jorgen Vitting Andersen, Didier Sornette, Europhysics Letters, 2005.  https://arxiv.org/abs/cond-mat/0410762
Abstract:  We document a mechanism operating in complex adaptive systems leading to dynamical pockets of predictability ("prediction days"), in which agents collectively take predetermined courses of action, transiently decoupled from past history. We demonstrate and test it out-of-sample on synthetic minority and majority games as well as on real financial time series. The surprisingly large frequency of these prediction days implies a collective organization of agents and of their strategies which condense into transitional herding regimes.

There's even an ETH Zürich master's thesis:  "In Search Of Pockets Of Predictability", by A. T. Morera, 2008.
https://www.ethz.ch/content/dam/ethz/special-interest/mtec/chair-of-entrepreneurial-risks-dam/documents/dissertation/master%20thesis/Master_Thesis_Alan_Taxonera_Sept08.pdf

Finally, related ideas have appeared recently in the forecast evaluation literature, such as this paper and many of the references therein:  "Testing for State-Dependent Predictive Ability", by Sebastian Fossati, University of Alberta, September 2017.
 https://sites.ualberta.ca/~econwps/2017/wp2017-09.pdf
Abstract: This paper proposes a new test for comparing the out-of-sample forecasting performance of two competing models for situations in which the predictive content may be state-dependent (for example, expansion and recession states or low and high volatility states). To apply this test the econometrician is not required to observe when the underlying states shift. The test is simple to implement and accommodates several different cases of interest. An out-of-sample forecasting exercise for US output growth using real-time data illustrates the improvement of this test over previous approaches to perform forecast comparison.

Saturday, October 14, 2017

Machine Learning and Macro

Earlier I posted here on machine learning and central banking.  Here's something related.  

Last week Penn's Warren Center hosted a timely and stimulating conference, "Machine Learning for Macroeconomic Prediction and Policy".  The program appears below.  Papers were not posted, but with a little Googling you should be able to obtain those that are available.

Conference on Machine Learning for Macroeconomic Prediction and Policy

October 12 and 13, 2017

Glandt Forum, Singh Center for Nanotechnology


Co-Sponsored by Penn’s Warren Center for Network and Data Sciences
        and the Federal Reserve Bank of Philadelphia

Organizers: Michael Dotsey (FRBP), Jesus Fernandez-Villaverde (Penn), Michael Kearns (Penn)

SCHEDULE:

Thursday October 12:

8:00 Breakfast

8:45 Welcome

9:00 Stephen Hansen (University of Oxford): The Long-Run Information Effect of Central Bank Text

9:45 Stephen Ryan (Washington University): Classification Trees for Heterogeneous Moment-Based Models

10:30 Break

11:00 James Cowie (DeepMacro): DeepMacro Data Challenges

11:45 Galo Nuno (Banco de España): Machine Learning and Heterogeneous Agent Models

12:30 Lunch

1:30: Francis X. Diebold (Penn): Egalitarian LASSO for Combining Central Bank Survey Forecasts

2:15 Lyle Ungar (Penn): How to Make Better Forecasts

3:00 Vegard Larsen (Norges Bank): Components of Uncertainty

3:45 Break

4:15 Panel: ML and Econometrics: Similarities and Differences (Michael Kearns, Vegard Larsen, Stephen Hansen, Rakesh Vohra (Penn))

Friday October 13:

9:00 Aaron Smalter Hall (Federal Reserve Bank of Kansas City): Recession Forecasting with Bayesian Classification

9:45 Susan Athey (Stanford GSB): Estimating Heterogeneity in Structural Parameters Using Generalized Random Forests

10:30 Break

11:00 Panel: ML Challenges at the Fed (Jose Canals-Cerda (Philadelphia Fed), Galo Nuno, Jesus Fernandez-Villaverde, Aaron Smalter Hall)

12:30 Lunch

Departures

Saturday, October 7, 2017

Long Memory in Realized Volatility

A noteworthy aspect of long memory in realized asset return volatility is that in many leading cases it's basically undeniable on the basis of a variety of evidence -- the question isn't existence but rather strength.  Hence it's useful to have a broad and comparable set of state-of-the-art (local Whittle) estimates together in one place, as in the interesting paper below.  For the most part it gets \(d \in [0.4, 0.6]\), consistent with my personal experience of \(d\) usually around 0.45, in the covariance stationary (finite variance) region \(d < 0.5\), but close to the boundary.
http://d.repec.org/n?u=RePEc:han:dpaper:dp-601&r=ecm

By: Kai Wenger, Christian Leschinski, and Philipp Sibbertsen (2017-07)

Abstract: The focus of the volatility literature on forecasting and the predominance of the conceptually simpler HAR model over long memory stochastic volatility models has led to the fact that the actual degree of memory estimates has rarely been considered. Estimates in the literature range roughly between 0.4 and 0.6 - that is from the higher stationary to the lower non-stationary region. This difference, however, has important practical implications - such as the existence or non-existence of the fourth moment of the return distribution. Inference on the memory order is complicated by the presence of measurement error in realized volatility and the potential of spurious long memory. In this paper we provide a comprehensive analysis of the memory in variances of international stock indices and exchange rates. On the one hand, we find that the variance of exchange rates is subject to spurious long memory and the true memory parameter is in the higher stationary range. Stock index variances, on the other hand, are free of low frequency contaminations and the memory is in the lower non-stationary range. These results are obtained using state of the art local Whittle methods that allow consistent estimation in presence of perturbations or low frequency contaminations.

Keywords: Realized Volatility; Long Memory; Perturbation; Spurious Long Memory
JEL: C12 C22 C58 G15
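For readers who want to reproduce the flavor of such estimates, the basic local Whittle estimator of \(d\) (Robinson, 1995) is compact.  A minimal sketch via grid search (the bandwidth rule \(m = n^{0.65}\) is an illustrative assumption, not the paper's refined choice, and the paper's perturbation-robust extensions are not included):

```python
import numpy as np

def local_whittle_d(x, m=None):
    """Local Whittle estimate of the memory parameter d, by grid search.
    A sketch for illustration, not a production implementation."""
    x = np.asarray(x, float)
    n = len(x)
    m = m or int(n ** 0.65)                      # bandwidth (rule-of-thumb assumption)
    lam = 2 * np.pi * np.arange(1, m + 1) / n    # first m Fourier frequencies
    # periodogram at those frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    # Robinson's concentrated objective; minimize over d
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    grid = np.linspace(-0.49, 1.0, 1500)
    return grid[np.argmin([R(d) for d in grid])]
```

Applied to a daily realized volatility series, this is the kind of routine that produces the \(d\) estimates discussed above.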


Sunday, October 1, 2017

Economics Working Papers now in arXiv

Economics working papers are now a part of arXiv.  This is great news, as arXiv is the premier working paper hosting platform in mathematics and the mathematical/statistical sciences.  The Economics arXiv will start with a single subject area of Econometrics (econ.EM).  More economics subject areas will be added (of course), and moreover, subject areas can and will be subdivided.  Hats off to the econ.EM team (Victor Chernozhukov, MIT; Iván Fernández-Val, Boston University; Marc Henry, Penn State; Francesca Molinari, Cornell; Jörg Stoye, Bonn & Cornell; Martin Weidner, University College London).  The full announcement is here.