Template-Type: ReDIF-Article 1.0
Author-Name: Alberto Abadie
Author-X-Name-First: Alberto
Author-X-Name-Last: Abadie
Author-Name: Guido W. Imbens
Author-X-Name-First: Guido W.
Author-X-Name-Last: Imbens
Title: Bias-Corrected Matching Estimators for Average Treatment Effects
Abstract:
In Abadie and Imbens (2006), it was shown
that simple nearest-neighbor matching estimators include a conditional
bias term that converges to zero at a rate that may be slower than
N^{1/2}. As a result, matching estimators are not N^{1/2}-consistent
in general. In this article, we propose a bias correction that renders
matching estimators N^{1/2}-consistent and asymptotically normal. To demonstrate the
methods proposed in this article, we apply them to the National Supported
Work (NSW) data, originally analyzed in Lalonde (1986). We also carry out
a small simulation study based on the NSW example. In this simulation
study, a simple implementation of the bias-corrected matching estimator
performs well compared to both simple matching estimators and to
regression estimators in terms of bias, root-mean-squared-error, and
coverage rates. Software to compute the estimators proposed in this
article is available on the authors' web pages
(http://www.economics.harvard.edu/faculty/imbens/software.html) and
documented in Abadie et al. (2003).
Journal: Journal of Business & Economic Statistics
Pages: 1-11
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.07333
File-URL: http://hdl.handle.net/10.1198/jbes.2009.07333
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:1-11
Template-Type: ReDIF-Article 1.0
Author-Name: Tom Ahn
Author-X-Name-First: Tom
Author-X-Name-Last: Ahn
Author-Name: Peter Arcidiacono
Author-X-Name-First: Peter
Author-X-Name-Last: Arcidiacono
Author-Name: Walter Wessels
Author-X-Name-First: Walter
Author-X-Name-Last: Wessels
Title: The Distributional Impacts of Minimum Wage Increases When Both Labor Supply and Labor Demand Are Endogenous
Abstract:
We develop and estimate a one-shot search
model with endogenous firm entry, and therefore zero expected profits, and
endogenous labor supply. Positive employment effects from a minimum wage
increase can result as the employment level depends upon both the numbers
of searching firms and workers. Welfare implications are similar to the
classical analysis: workers who most want the minimum wage jobs are hurt
by the minimum wage hike with workers marginally interested in minimum
wage jobs benefiting. We estimate the model using CPS data on teenagers
and show that small changes in the employment level are masking large
changes in labor supply and demand. Teenagers from well-educated families
see increases in their employment probabilities and push out their
less-privileged counterparts from the labor market. This article has
supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 12-23
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.07076
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07076
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:12-23
Template-Type: ReDIF-Article 1.0
Author-Name: Xavier Gabaix
Author-X-Name-First: Xavier
Author-X-Name-Last: Gabaix
Author-Name: Rustam Ibragimov
Author-X-Name-First: Rustam
Author-X-Name-Last: Ibragimov
Title: Rank - 1/2: A Simple Way to Improve the OLS Estimation of Tail Exponents
Abstract:
Despite the availability of more
sophisticated methods, a popular way to estimate a Pareto exponent is
still to run an OLS regression: log(Rank) = a -
b log(Size), and take b as an estimate
of the Pareto exponent. The reason for this popularity is arguably the
simplicity and robustness of this method. Unfortunately, this procedure is
strongly biased in small samples. We provide a simple practical remedy for
this bias, and propose that, if one wants to use an OLS regression, one
should use Rank - 1/2, and run log(Rank - 1/2) = a - b log(Size).
The shift of 1/2 is optimal, and reduces the bias to a leading order.
The standard error on the Pareto exponent ζ is not the OLS standard
error, but is asymptotically (2/n)^{1/2} ζ. Numerical results
demonstrate the advantage of the proposed approach over the standard OLS
estimation procedures and indicate that it performs well under dependent
heavy-tailed processes exhibiting deviations from power laws. The
estimation procedures considered are illustrated using an empirical
application to Zipf's law for the United States city size distribution.
Journal: Journal of Business & Economic Statistics
Pages: 24-39
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.06157
File-URL: http://hdl.handle.net/10.1198/jbes.2009.06157
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:24-39
Template-Type: ReDIF-Article 1.0
Author-Name: Shakeeb Khan
Author-X-Name-First: Shakeeb
Author-X-Name-Last: Khan
Author-Name: Youngki Shin
Author-X-Name-First: Youngki
Author-X-Name-Last: Shin
Author-Name: Elie Tamer
Author-X-Name-First: Elie
Author-X-Name-Last: Tamer
Title: Heteroscedastic Transformation Models With Covariate Dependent Censoring
Abstract:
In this article we propose an inferential
procedure for transformation models with conditional heteroscedasticity in
the error terms. The proposed method is robust to covariate dependent
censoring of arbitrary form. We provide sufficient conditions for point
identification. We then propose an estimator and show that it is
√n-consistent and asymptotically normal. We
conduct a simulation study that reveals adequate finite sample
performance. We also use the estimator in an empirical illustration of
export duration, where we find advantages of the proposed method over
existing ones.
Journal: Journal of Business & Economic Statistics
Pages: 40-48
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.07227
File-URL: http://hdl.handle.net/10.1198/jbes.2009.07227
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:40-48
Template-Type: ReDIF-Article 1.0
Author-Name: Brent Kreider
Author-X-Name-First: Brent
Author-X-Name-Last: Kreider
Author-Name: John V. Pepper
Author-X-Name-First: John V.
Author-X-Name-Last: Pepper
Title: Identification of Expected Outcomes in a Data Error Mixing Model With Multiplicative Mean Independence
Abstract:
We consider the problem of identifying a
mean outcome in corrupt sampling where the observed outcome is drawn from
a mixture of the distribution of interest and another distribution.
Relaxing the contaminated sampling assumption that the outcome is
statistically independent of the mixing process, we assess the identifying
power of an assumption that the conditional means of the distributions
differ by a factor of proportionality. For binary outcomes, we consider
the special case that all draws from the alternative distribution are
erroneous. We illustrate how these models can inform researchers about
illicit drug use in the presence of reporting errors.
Journal: Journal of Business & Economic Statistics
Pages: 49-60
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.07223
File-URL: http://hdl.handle.net/10.1198/jbes.2009.07223
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:49-60
Template-Type: ReDIF-Article 1.0
Author-Name: Cheti Nicoletti
Author-X-Name-First: Cheti
Author-X-Name-Last: Nicoletti
Author-Name: Franco Peracchi
Author-X-Name-First: Franco
Author-X-Name-Last: Peracchi
Author-Name: Francesca Foliano
Author-X-Name-First: Francesca
Author-X-Name-Last: Foliano
Title: Estimating Income Poverty in the Presence of Missing Data and Measurement Error
Abstract:
Reliable measures of poverty are an
essential statistical tool for public policies aimed at reducing poverty.
In this article we consider the reliability of income poverty measures
based on survey data which are typically plagued by missing data and
measurement error. Neglecting these problems can bias the estimated
poverty rates. We show how to derive upper and lower bounds for the
population poverty rate using the sample evidence, an upper bound on the
probability of misclassifying people into poor and nonpoor, and
instrumental or monotone instrumental variable assumptions. By using the
European Community Household Panel, we compute bounds for the poverty rate
in 10 European countries and study the sensitivity of poverty comparisons
across countries to missing data and measurement error problems.
Supplemental materials for this article may be downloaded from the
JBES website.
Journal: Journal of Business & Economic Statistics
Pages: 61-72
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.07185
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07185
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:61-72
Template-Type: ReDIF-Article 1.0
Author-Name: Robert C. Jung
Author-X-Name-First: Robert C.
Author-X-Name-Last: Jung
Author-Name: Roman Liesenfeld
Author-X-Name-First: Roman
Author-X-Name-Last: Liesenfeld
Author-Name: Jean-François Richard
Author-X-Name-First: Jean-François
Author-X-Name-Last: Richard
Title: Dynamic Factor Models for Multivariate Count Data: An Application to Stock-Market Trading Activity
Abstract:
We propose a dynamic factor model for the
analysis of multivariate time series count data. Our model allows for
idiosyncratic as well as common serially correlated latent factors in
order to account for potentially complex dynamic interdependence between
series of counts. The model is estimated under alternative count
distributions (Poisson and negative binomial). Maximum likelihood
estimation requires high-dimensional numerical integration in order to
marginalize the joint distribution with respect to the unobserved dynamic
factors. We rely upon the Monte Carlo integration procedure known as
efficient importance sampling, which produces fast and numerically
accurate estimates of the likelihood function. The model is applied to
time series data consisting of numbers of trades in 5-min intervals for
five New York Stock Exchange (NYSE) stocks from two industrial sectors.
The estimated model provides a good parsimonious representation of the
contemporaneous correlation across the individual stocks and their serial
correlation. It also provides strong evidence of a common factor, which we
interpret as reflecting market-wide news.
Journal: Journal of Business & Economic Statistics
Pages: 73-85
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.08212
File-URL: http://hdl.handle.net/10.1198/jbes.2009.08212
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:73-85
Template-Type: ReDIF-Article 1.0
Author-Name: Don Harding
Author-X-Name-First: Don
Author-X-Name-Last: Harding
Author-Name: Adrian Pagan
Author-X-Name-First: Adrian
Author-X-Name-Last: Pagan
Title: An Econometric Analysis of Some Models for Constructed Binary Time Series
Abstract:
Macroeconometric and financial
researchers often use binary data constructed in a way that creates serial
dependence. We show that this dependence can be allowed for if the binary
states are treated as Markov processes. In addition, the methods of
construction ensure that certain sequences are never observed in the
constructed data. Together these features make it difficult to utilize
static and dynamic Probit models. We develop modeling methods that respect
the Markov-process nature of constructed binary data and explicitly deal
with censoring constraints. An application is provided that investigates
the relation between the business cycle and the yield spread.
Journal: Journal of Business & Economic Statistics
Pages: 86-95
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.08005
File-URL: http://hdl.handle.net/10.1198/jbes.2009.08005
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:86-95
Template-Type: ReDIF-Article 1.0
Author-Name: Jinyong Hahn
Author-X-Name-First: Jinyong
Author-X-Name-Last: Hahn
Author-Name: Keisuke Hirano
Author-X-Name-First: Keisuke
Author-X-Name-Last: Hirano
Author-Name: Dean Karlan
Author-X-Name-First: Dean
Author-X-Name-Last: Karlan
Title: Adaptive Experimental Design Using the Propensity Score
Abstract:
Many social experiments are run in
multiple waves or replicate earlier social experiments. In principle, the
sampling design can be modified in later stages or replications to allow
for more efficient estimation of causal effects. We consider the design of
a two-stage experiment for estimating an average treatment effect when
covariate information is available for experimental subjects. We use data
from the first stage to choose a conditional treatment assignment rule for
units in the second stage of the experiment. This amounts to choosing the
propensity score, the conditional probability of
treatment given covariates. We propose to select the propensity score to
minimize the asymptotic variance bound for estimating the average
treatment effect. Our procedure can be implemented simply using standard
statistical software and has attractive large-sample properties.
Journal: Journal of Business & Economic Statistics
Pages: 96-108
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.08161
File-URL: http://hdl.handle.net/10.1198/jbes.2009.08161
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:96-108
Template-Type: ReDIF-Article 1.0
Author-Name: Xiangdong Long
Author-X-Name-First: Xiangdong
Author-X-Name-Last: Long
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Aman Ullah
Author-X-Name-First: Aman
Author-X-Name-Last: Ullah
Title: Estimation and Forecasting of Dynamic Conditional Covariance: A Semiparametric Multivariate Model
Abstract:
We propose a semiparametric conditional
covariance (SCC) estimator that combines the first-stage parametric
conditional covariance (PCC) estimator with the second-stage nonparametric
correction estimator in a multiplicative way. We prove the asymptotic
normality of our SCC estimator, propose a nonparametric test for the
correct specification of PCC models, and study its asymptotic properties.
We evaluate the finite sample performance of our test and SCC estimator
and compare the latter with that of the PCC estimator, purely
nonparametric estimator, and Hafner, van Dijk, and Franses's (2006) estimator
in terms of mean squared error and Value-at-Risk losses via simulations
and real data analyses.
Journal: Journal of Business & Economic Statistics
Pages: 109-125
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.07057
File-URL: http://hdl.handle.net/10.1198/jbes.2009.07057
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:109-125
Template-Type: ReDIF-Article 1.0
Author-Name: Stefania D'Amico
Author-X-Name-First: Stefania
Author-X-Name-Last: D'Amico
Author-Name: Mira Farka
Author-X-Name-First: Mira
Author-X-Name-Last: Farka
Title: The Fed and the Stock Market: An Identification Based on Intraday Futures Data
Abstract:
This article develops a new
identification procedure to estimate the contemporaneous relation between
monetary policy and the stock market within a vector autoregression (VAR)
framework. The approach combines high-frequency data from the futures
market with the VAR methodology to circumvent exclusion restrictions and
achieve identification. Our analysis casts doubt on VAR models imposing a
recursive structure between innovations in policy rates and stock returns.
We find that a tightening in policy rates has a negative impact on stock
prices and that the Federal Reserve (Fed) has responded significantly to
movements in the stock market. Estimates are robust to various model
specifications.
Journal: Journal of Business & Economic Statistics
Pages: 126-137
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2009.08019
File-URL: http://hdl.handle.net/10.1198/jbes.2009.08019
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:126-137
Template-Type: ReDIF-Article 1.0
Author-Name: Francesco Audrino
Author-X-Name-First: Francesco
Author-X-Name-Last: Audrino
Author-Name: Fabio Trojani
Author-X-Name-First: Fabio
Author-X-Name-Last: Trojani
Title: A General Multivariate Threshold GARCH Model With Dynamic Conditional Correlations
Abstract:
We introduce a new multivariate GARCH
model with multivariate thresholds in conditional correlations and develop
a two-step estimation procedure that is feasible in large dimensional
applications. Optimal threshold functions are estimated endogenously from
the data and the model conditional covariance matrix is ensured to be
positive definite. We study the empirical performance of our model in two
applications using U.S. stock and bond market data. In both applications
our model has, in terms of statistical and economic significance, higher
forecasting power than several other multivariate GARCH models for
conditional correlations.
Journal: Journal of Business & Economic Statistics
Pages: 138-149
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.08117
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08117
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:138-149
Template-Type: ReDIF-Article 1.0
Author-Name: Wagner Piazza Gaglianone
Author-X-Name-First: Wagner Piazza
Author-X-Name-Last: Gaglianone
Author-Name: Luiz Renato Lima
Author-X-Name-First: Luiz Renato
Author-X-Name-Last: Lima
Author-Name: Oliver Linton
Author-X-Name-First: Oliver
Author-X-Name-Last: Linton
Author-Name: Daniel R. Smith
Author-X-Name-First: Daniel R.
Author-X-Name-Last: Smith
Title: Evaluating Value-at-Risk Models via Quantile Regression
Abstract:
This article is concerned with evaluating
Value-at-Risk estimates. It is well known that using only binary
variables, such as whether or not there was an exception, sacrifices too
much information. However, most of the specification tests (also called
backtests) available in the literature, such as Christoffersen (1998) and
Engle and Manganelli (2004), are based on such variables. In this article
we propose a new backtest that does not rely solely on binary variables.
It is shown that the new backtest provides a sufficient condition to
assess the finite sample performance of a quantile model whereas the
existing ones do not. The proposed methodology allows us to identify
periods of an increased risk exposure based on a quantile regression model
(Koenker and Xiao 2002). Our theoretical findings are corroborated through
a Monte Carlo simulation and an empirical exercise with daily S&P500 time
series.
Journal: Journal of Business & Economic Statistics
Pages: 150-160
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.07318
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07318
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:150-160
Template-Type: ReDIF-Article 1.0
Author-Name: Ravi Bansal
Author-X-Name-First: Ravi
Author-X-Name-Last: Bansal
Author-Name: Dana Kiku
Author-X-Name-First: Dana
Author-X-Name-Last: Kiku
Title: Cointegration and Long-Run Asset Allocation
Abstract:
We show that economic restrictions of
cointegration between asset cash flows and aggregate consumption have
important implications for return dynamics and optimal portfolio rules,
particularly at long investment horizons. When cash flows and consumption
share a common stochastic trend (i.e., are cointegrated), temporary
deviations between their levels forecast long-horizon dividend growth
rates and returns, and consequently, alter the term profile of risks and
expected returns. We show that the optimal asset allocation based on the
error-correction vector autoregression (EC-VAR) specification can be quite
different relative to a traditional VAR that ignores the cointegrating
relation. Unlike the EC-VAR, the commonly used VAR approach to model
expected returns focuses on short-run forecasts and can considerably miss
on long-horizon return dynamics, and hence, the optimal portfolio mix in
the presence of cointegration. We develop and implement methods to account
for parameter uncertainty in the EC-VAR setup and highlight the importance
of the error-correction channel for optimal portfolio decisions at various
investment horizons.
Journal: Journal of Business & Economic Statistics
Pages: 161-173
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.08062
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08062
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:161-173
Template-Type: ReDIF-Article 1.0
Author-Name: Tsunao Okumura
Author-X-Name-First: Tsunao
Author-X-Name-Last: Okumura
Title: Nonparametric Estimation of Labor Supply and Demand Factors
Abstract:
This article derives sharp bounds on
labor supply and demand shift variables within a nonparametric
simultaneous equations model using only observations of the intersection
of upward sloping supply curves and downward sloping demand curves.
Furthermore, I demonstrate that these bounds tighten with the imposition
of plausible assumptions on the distribution of the disturbance terms.
Using Katz and Murphy's (1992) panel data on wages and labor inputs, I
estimate these bounds and assess the supply and demand factors that
determine changes within male-female wage differentials and the college
wage premium.
Journal: Journal of Business & Economic Statistics
Pages: 174-185
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.08068
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08068
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:174-185
Template-Type: ReDIF-Article 1.0
Author-Name: Gloria González-Rivera
Author-X-Name-First: Gloria
Author-X-Name-Last: González-Rivera
Author-Name: Zeynep Senyuz
Author-X-Name-First: Zeynep
Author-X-Name-Last: Senyuz
Author-Name: Emre Yoldas
Author-X-Name-First: Emre
Author-X-Name-Last: Yoldas
Title: Autocontours: Dynamic Specification Testing
Abstract:
We propose a new battery of dynamic
specification tests for the joint hypothesis of iid-ness and density
function based on the fundamental properties of independent random
variables with identical distributions. We introduce a device, the
autocontour, whose shape is very sensitive to departures from the null in
either direction, thus providing superior power. The tests are parametric
with asymptotic t and chi-squared limiting distributions
and standard convergence rates. They do not require a transformation of
the original data or a Kolmogorov style assessment of goodness-of-fit,
explicitly account for parameter uncertainty, and have superior finite
sample properties. An application to autoregressive conditional duration
(ACD) models for trade durations shows that the difficulty with the
assumed densities lies in the probability assigned to very small
durations. Supplemental materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 186-200
Issue: 1
Volume: 29
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2010.08144
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08144
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:1:p:186-200
Template-Type: ReDIF-Article 1.0
Author-Name: Patrick Bayer
Author-X-Name-First: Patrick
Author-X-Name-Last: Bayer
Author-Name: Shakeeb Khan
Author-X-Name-First: Shakeeb
Author-X-Name-Last: Khan
Author-Name: Christopher Timmins
Author-X-Name-First: Christopher
Author-X-Name-Last: Timmins
Title: Nonparametric Identification and Estimation in a Roy Model With Common Nonpecuniary Returns
Abstract:
We consider identification and estimation
of a Roy model that includes a common nonpecuniary utility component
associated with each choice alternative. This augmented Roy model has
broader applications to many polychotomous choice problems in addition to
occupational sorting. We develop a pair of nonparametric estimators for
this model, derive asymptotics, and illustrate small-sample properties
with a series of Monte Carlo experiments. We apply one of these models to
migration behavior and analyze the effect of Roy sorting on observed
returns to college education. Correcting for Roy sorting bias, the returns
to a college degree are cut in half. This article has supplementary
material online.
Journal: Journal of Business & Economic Statistics
Pages: 201-215
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.08083
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08083
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:201-215
Template-Type: ReDIF-Article 1.0
Author-Name: David F. Hendry
Author-X-Name-First: David F.
Author-X-Name-Last: Hendry
Author-Name: Kirstin Hubrich
Author-X-Name-First: Kirstin
Author-X-Name-Last: Hubrich
Title: Combining Disaggregate Forecasts or Combining Disaggregate Information to Forecast an Aggregate
Abstract:
To forecast an aggregate, we propose
adding disaggregate variables, instead of combining forecasts of those
disaggregates or forecasting by a univariate aggregate model. New
analytical results show the effects of changing coefficients,
misspecification, estimation uncertainty, and mismeasurement error.
Forecast-origin shifts in parameters affect absolute, but not relative,
forecast accuracies; misspecification and estimation uncertainty induce
forecast-error differences, which variable-selection procedures or
dimension reductions can mitigate. In Monte Carlo simulations, different
stochastic structures and interdependencies between disaggregates imply
that including disaggregate information in the aggregate model improves
forecast accuracy. Our theoretical predictions and simulations are
corroborated when forecasting aggregate United States inflation pre- and
post-1984 using disaggregate sectoral data.
Journal: Journal of Business & Economic Statistics
Pages: 216-227
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2009.07112
File-URL: http://hdl.handle.net/10.1198/jbes.2009.07112
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:216-227
Template-Type: ReDIF-Article 1.0
Author-Name: Robert Jong
Author-X-Name-First: Robert
Author-X-Name-Last: Jong
Author-Name: Ana María Herrera
Author-X-Name-First: Ana María
Author-X-Name-Last: Herrera
Title: Dynamic Censored Regression and the Open Market Desk Reaction Function
Abstract:
The censored regression model and the
Tobit model are standard tools in econometrics. This paper provides a
formal asymptotic theory for dynamic time series censored regression when
lags of the dependent variable have been included among the regressors.
The central analytical challenge is to prove that the dynamic censored
regression model satisfies stationarity and weak dependence properties if
a condition on the lag polynomial holds. We show the formal asymptotic
correctness of conditional maximum likelihood estimation of the dynamic
Tobit model, and the correctness of Powell's least absolute deviations
procedure for the estimation of the dynamic censored regression model. The
paper is concluded with an application of the dynamic censored regression
methodology to temporary purchases of the Open Market Desk. This article
has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 228-237
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.07181
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07181
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:228-237
Template-Type: ReDIF-Article 1.0
Author-Name: A. Colin Cameron
Author-X-Name-First: A. Colin
Author-X-Name-Last: Cameron
Author-Name: Jonah B. Gelbach
Author-X-Name-First: Jonah B.
Author-X-Name-Last: Gelbach
Author-Name: Douglas L. Miller
Author-X-Name-First: Douglas L.
Author-X-Name-Last: Miller
Title: Robust Inference With Multiway Clustering
Abstract:
In this article we propose a variance
estimator for the OLS estimator as well as for nonlinear estimators such
as logit, probit, and GMM. This variance estimator enables cluster-robust
inference when there is two-way or multiway clustering that is nonnested.
The variance estimator extends the standard cluster-robust variance
estimator or sandwich estimator for one-way clustering (e.g., Liang and
Zeger 1986; Arellano 1987) and relies on similar relatively weak
distributional assumptions. Our method is easily implemented in
statistical packages, such as Stata and SAS, that already offer
cluster-robust standard errors when there is one-way clustering. The
method is demonstrated by a Monte Carlo analysis for a two-way random
effects model; a Monte Carlo analysis of a placebo law that extends the
state-year effects example of Bertrand, Duflo, and Mullainathan (2004) to
two dimensions; and by application to studies in the empirical literature
where two-way clustering is present.
Journal: Journal of Business & Economic Statistics
Pages: 238-249
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.07136
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07136
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:238-249
Template-Type: ReDIF-Article 1.0
Author-Name: Peter J. Brockwell
Author-X-Name-First: Peter J.
Author-X-Name-Last: Brockwell
Author-Name: Richard A. Davis
Author-X-Name-First: Richard A.
Author-X-Name-Last: Davis
Author-Name: Yu Yang
Author-X-Name-First: Yu
Author-X-Name-Last: Yang
Title: Estimation for Non-Negative Lévy-Driven CARMA Processes
Abstract:
Continuous-time autoregressive moving
average (CARMA) processes with a nonnegative kernel and driven by a
nondecreasing Lévy process constitute a useful and very general class of
stationary, nonnegative continuous-time processes that have been used, in
particular, for the modeling of stochastic volatility. Brockwell, Davis,
and Yang (2007) derived efficient estimates of the parameters of a
nonnegative Lévy-driven CAR(1) process and showed how the realization of
the underlying Lévy process can be estimated from closely-spaced
observations of the process itself. In this article we show how the ideas
of that article can be generalized to higher order CARMA processes with
nonnegative kernel, the key idea being the decomposition of the CARMA
process into a sum of dependent Ornstein-Uhlenbeck processes.
Journal: Journal of Business & Economic Statistics
Pages: 250-259
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.08165
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08165
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:250-259
Template-Type: ReDIF-Article 1.0
Author-Name: José R. Berrendero
Author-X-Name-First: José R.
Author-X-Name-Last: Berrendero
Author-Name: Javier Cárcamo
Author-X-Name-First: Javier
Author-X-Name-Last: Cárcamo
Title: Tests for the Second Order Stochastic Dominance Based on L-Statistics
Abstract:
We use some characterizations of convex
and concave-type orders to define discrepancy measures useful in two
testing problems involving stochastic dominance assumptions. The results
are connected with the mean value of the order statistics and have a clear
economic interpretation in terms of the expected cumulative resources of
the poorest (or richest) in random samples. Our approach mainly consists
of comparing the estimated means in ordered samples of the involved
populations. The test statistics we derive are functions of
L-statistics and are generated through estimators of the
mean order statistics. We illustrate some properties of the procedures
with simulation studies and an empirical example.
Journal: Journal of Business & Economic Statistics
Pages: 260-270
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.07224
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07224
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:260-270
Template-Type: ReDIF-Article 1.0
Author-Name: Paul Frijters
Author-X-Name-First: Paul
Author-X-Name-Last: Frijters
Author-Name: John P. Haisken-DeNew
Author-X-Name-First: John P.
Author-X-Name-Last: Haisken-DeNew
Author-Name: Michael A. Shields
Author-X-Name-First: Michael A.
Author-X-Name-Last: Shields
Title: The Increasingly Mixed Proportional Hazard Model: An Application to Socioeconomic Status, Health Shocks, and Mortality
Abstract:
We introduce a duration model that allows
for unobserved cumulative individual-specific shocks, which are likely to
be important in explaining variations in duration outcomes, such as length
of life and time spent unemployed. The model is also a useful tool in
situations where researchers observe a great deal of information about
individuals when first interviewed in surveys but little thereafter. We
call this model the "increasingly mixed proportional hazard" (IMPH) model.
We compare and contrast this model with the mixed proportional hazard
(MPH) model, which continues to be the workhorse of applied single-spell
duration analysis in economics and the other social sciences. We apply the
IMPH model to study the relationships among socioeconomic status, health
shocks, and mortality, using 19 waves of data drawn from the German
Socio-Economic Panel (SOEP). The IMPH model is found to fit the data
statistically better than the MPH model, and unobserved health shocks and
socioeconomic status are shown to play powerful roles in predicting
longevity.
Journal: Journal of Business & Economic Statistics
Pages: 271-281
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.08082
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08082
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:271-281
Template-Type: ReDIF-Article 1.0
Author-Name: Chirok Han
Author-X-Name-First: Chirok
Author-X-Name-Last: Han
Author-Name: Jin Seo Cho
Author-X-Name-First: Jin Seo
Author-X-Name-Last: Cho
Author-Name: Peter C. B. Phillips
Author-X-Name-First: Peter C. B.
Author-X-Name-Last: Phillips
Title: Infinite Density at the Median and the Typical Shape of Stock Return Distributions
Abstract:
Statistics are developed to test for the
presence of an asymptotic discontinuity (or infinite density or
peakedness) in a probability density at the median. The approach makes use
of work by Knight (1998) on L1 estimation asymptotics in conjunction with
nonparametric kernel density estimation methods. The size and power of the
tests are assessed, and conditions under which the tests have good
performance are explored in simulations. The new methods are applied to
stock returns of leading companies across major U.S. industry groups. The
results confirm the presence of infinite density at the median as
significant new empirical evidence for stock return distributions.
Journal: Journal of Business & Economic Statistics
Pages: 282-294
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.07327
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07327
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:282-294
Template-Type: ReDIF-Article 1.0
Author-Name: Stephen G. Donald
Author-X-Name-First: Stephen G.
Author-X-Name-Last: Donald
Author-Name: Natércia Fortuna
Author-X-Name-First: Natércia
Author-X-Name-Last: Fortuna
Author-Name: Vladas Pipiras
Author-X-Name-First: Vladas
Author-X-Name-Last: Pipiras
Title: Local and Global Rank Tests for Multivariate Varying-Coefficient Models
Abstract:
In a multivariate varying-coefficient
model, the response vectors Y are regressed on known
functions v(X) of some explanatory variables
X and the coefficients in an unknown regression matrix θ(Z) depend on
another set of explanatory variables Z. We provide
statistical tests, called local and global rank tests, which allow one to
estimate the rank of an unknown regression coefficient matrix θ(Z) locally at a fixed
level of the variable Z or globally as the maximum rank over
all levels of Z in a proper, compact subset of the support of
Z, respectively. We apply our results to estimate the
so-called local and global ranks in a demand system where budget shares
are regressed on known functions of total expenditures and the
coefficients in a regression matrix depend on prices faced by a consumer.
Journal: Journal of Business & Economic Statistics
Pages: 295-306
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.07303
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07303
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:295-306
Template-Type: ReDIF-Article 1.0
Author-Name: M. Hashem Pesaran
Author-X-Name-First: M. Hashem
Author-X-Name-Last: Pesaran
Author-Name: Andreas Pick
Author-X-Name-First: Andreas
Author-X-Name-Last: Pick
Title: Forecast Combination Across Estimation Windows
Abstract:
In this article we consider combining
forecasts generated from the same model but over different estimation
windows. We develop theoretical results for random walks with breaks in
the drift and volatility and for a linear regression model with a break in
the slope parameter. Averaging forecasts over different estimation windows
leads to a lower bias and root mean square forecast error (RMSFE) compared
with forecasts based on a single estimation window for all but the
smallest breaks. An application to weekly returns on 20 equity index
futures shows that averaging forecasts over estimation windows leads to a
smaller RMSFE than some competing methods.
Journal: Journal of Business & Economic Statistics
Pages: 307-318
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.09018
File-URL: http://hdl.handle.net/10.1198/jbes.2010.09018
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:307-318
Template-Type: ReDIF-Article 1.0
Author-Name: Rick L. Andrews
Author-X-Name-First: Rick L.
Author-X-Name-Last: Andrews
Author-Name: Imran S. Currim
Author-X-Name-First: Imran S.
Author-X-Name-Last: Currim
Author-Name: Peter S. H. Leeflang
Author-X-Name-First: Peter S. H.
Author-X-Name-Last: Leeflang
Title: A Comparison of Sales Response Predictions From Demand Models Applied to Store-Level versus Panel Data
Abstract:
In order to generate sales promotion
response predictions, marketing analysts estimate demand models using
either disaggregated (consumer-level) or aggregated (store-level) scanner
data. Comparison of predictions from these demand models is complicated by
the fact that models may accommodate different forms of consumer
heterogeneity depending on the level of data aggregation. This study shows
via simulation that demand models with various heterogeneity
specifications do not produce more accurate sales response predictions
than a homogeneous demand model applied to store-level data, with one
major exception: a random coefficients model designed to capture
within-store heterogeneity using store-level data produced significantly
more accurate sales response predictions (as well as better fit) compared
to other model specifications. An empirical application to the paper towel
product category adds additional insights. This article has supplementary
material online.
Journal: Journal of Business & Economic Statistics
Pages: 319-326
Issue: 2
Volume: 29
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2010.07225
File-URL: http://hdl.handle.net/10.1198/jbes.2010.07225
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:2:p:319-326
Template-Type: ReDIF-Article 1.0
Author-Name: Todd E. Clark
Author-X-Name-First: Todd E.
Author-X-Name-Last: Clark
Title: Real-Time Density Forecasts From Bayesian Vector Autoregressions With Stochastic Volatility
Abstract:
Central banks and other forecasters are increasingly interested in various
aspects of density forecasts. However, recent sharp changes in
macroeconomic volatility, including the Great Moderation and the more
recent sharp rise in volatility associated with increased variation in
energy prices and the deep global recession, pose significant challenges to
density forecasting. Accordingly, this paper examines, with real-time
data, density forecasts of U.S. GDP growth, unemployment, inflation, and
the federal funds rate from Bayesian vector autoregression (BVAR) models
with stochastic volatility. The results indicate that adding stochastic
volatility to BVARs materially improves the real-time accuracy of density
forecasts. This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 327-341
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2010.09248
File-URL: http://hdl.handle.net/10.1198/jbes.2010.09248
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:327-341
Template-Type: ReDIF-Article 1.0
Author-Name: Antonello Loddo
Author-X-Name-First: Antonello
Author-X-Name-Last: Loddo
Author-Name: Shawn Ni
Author-X-Name-First: Shawn
Author-X-Name-Last: Ni
Author-Name: Dongchu Sun
Author-X-Name-First: Dongchu
Author-X-Name-Last: Sun
Title: Selection of Multivariate Stochastic Volatility Models via Bayesian Stochastic Search
Abstract:
We propose a Bayesian stochastic search
approach to selecting restrictions on multivariate regression models where
the errors exhibit deterministic or stochastic conditional volatilities.
We develop a Markov chain Monte Carlo (MCMC) algorithm that generates
posterior restrictions on the regression coefficients and Cholesky
decompositions of the covariance matrix of the errors. Numerical
simulations with artificially generated data show that the proposed method
is effective in selecting the data-generating model restrictions and
improving the forecasting performance of the model. Applying the method to
daily foreign exchange rate data, we conduct stochastic search on a VAR
model with stochastic conditional volatilities.
Journal: Journal of Business & Economic Statistics
Pages: 342-355
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2010.08197
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08197
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:342-355
Template-Type: ReDIF-Article 1.0
Author-Name: Viktor Todorov
Author-X-Name-First: Viktor
Author-X-Name-Last: Todorov
Author-Name: George Tauchen
Author-X-Name-First: George
Author-X-Name-Last: Tauchen
Title: Volatility Jumps
Abstract:
The article undertakes a nonparametric
analysis of the high-frequency movements in stock market volatility using
very finely sampled data on the VIX volatility index compiled from options
data by the CBOE. We derive theoretically the link between pathwise
properties of the latent spot volatility and the VIX index, such as the
presence of a continuous martingale component and/or jumps, and further show how to
make statistical inference about them from the observed data. Our
empirical results suggest that volatility is a pure jump process with
jumps of infinite variation and activity close to that of a continuous
martingale. Additional empirical work shows that jumps in volatility and
price level in most cases occur together, are strongly dependent, and have
opposite sign. The latter suggests that jumps are an important channel for
generating the leverage effect.
Journal: Journal of Business & Economic Statistics
Pages: 356-371
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2010.08342
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08342
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:356-371
Template-Type: ReDIF-Article 1.0
Author-Name: Stephen H. Shore
Author-X-Name-First: Stephen H.
Author-X-Name-Last: Shore
Title: The Intergenerational Transmission of Income Volatility: Is Riskiness Inherited?
Abstract:
This article examines the
intergenerational transmission of income risk. Do risky parents have risky
kids? Income volatility, a proxy for income risk, is not observed directly;
instead, it must be estimated with substantial error from the time series
variability of income. I characterize an income process with
individual-specific volatility parameters and estimate the joint
distribution of volatility parameters for fathers and for their adult
sons. In data from the Panel Study of Income Dynamics, fathers with higher
income volatility have sons with higher income volatility. This finding is
correlated with, but far from fully explained by, the intergenerational
transmission of risk tolerance and of the propensity for self-employment.
Journal: Journal of Business & Economic Statistics
Pages: 372-381
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2011.08091
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08091
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:372-381
Template-Type: ReDIF-Article 1.0
Author-Name: Bertil Wegmann
Author-X-Name-First: Bertil
Author-X-Name-Last: Wegmann
Author-Name: Mattias Villani
Author-X-Name-First: Mattias
Author-X-Name-Last: Villani
Title: Bayesian Inference in Structural Second-Price Common Value Auctions
Abstract:
Structural econometric auction models
with explicit game-theoretic modeling of bidding strategies have been
quite a challenge from a methodological perspective, especially within the
common value framework. We develop a Bayesian analysis of the hierarchical
Gaussian common value model with stochastic entry introduced by Bajari and
Hortaçsu. A key component of our approach is an accurate and easily
interpretable analytical approximation of the equilibrium bid function,
resulting in a fast and numerically stable evaluation of the likelihood
function. We extend the analysis to situations with positive valuations
using a hierarchical gamma model. We use a Bayesian variable selection
algorithm that simultaneously samples the posterior distribution of the
model parameters and does inference on the choice of covariates. The
methodology is applied to simulated data and to a newly collected dataset
from eBay with bids and covariates from 1000 coin auctions. We demonstrate
that the Bayesian algorithm is very efficient and that the approximation
error in the bid function has virtually no effect on the model inference.
Both models fit the data well, but the Gaussian model outperforms the
gamma model in an out-of-sample forecasting evaluation of auction prices.
This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 382-396
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2011.08289
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08289
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:382-396
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Predictability of Output Growth and Inflation: A Multi-Horizon Survey Approach
Abstract:
We develop an unobserved-components
approach to study surveys of forecasts containing multiple forecast
horizons. Under the assumption that forecasters optimally update their
beliefs about past, current, and future state variables as new information
arrives, we use our model to extract information on the degree of
predictability of the state variable and the importance of measurement
errors in the observables. Empirical estimates of the model are obtained
using survey forecasts of annual GDP growth and inflation in the United
States with forecast horizons ranging from 1 to 24 months, and the model
is found to closely match the joint realization of forecast errors at
different horizons. Our empirical results suggest that professional
forecasters face severe measurement error problems for GDP growth in real
time, while this is much less of a problem for inflation. Moreover,
inflation exhibits greater persistence, and thus is predictable at longer
horizons, than GDP growth and the persistent component of both variables
is well approximated by a low-order autoregressive specification.
Journal: Journal of Business & Economic Statistics
Pages: 397-410
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2010.08347
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08347
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:397-410
Template-Type: ReDIF-Article 1.0
Author-Name: Tilmann Gneiting
Author-X-Name-First: Tilmann
Author-X-Name-Last: Gneiting
Author-Name: Roopesh Ranjan
Author-X-Name-First: Roopesh
Author-X-Name-Last: Ranjan
Title: Comparing Density Forecasts Using Threshold- and Quantile-Weighted Scoring Rules
Abstract:
We propose a method for comparing density
forecasts that is based on weighted versions of the continuous ranked
probability score. The weighting emphasizes regions of interest, such as
the tails or the center of a variable's range, while retaining propriety,
as opposed to a recently developed weighted likelihood ratio test, which
can be hedged. Threshold- and quantile-based decompositions of the
continuous ranked probability score can be illustrated graphically and
provide insight into the strengths and deficiencies of a forecasting
method. We illustrate the use of the test and graphical tools in case
studies on the Bank of England's density forecasts of quarterly inflation
rates in the United Kingdom, and probabilistic predictions of wind
resources in the Pacific Northwest.
Journal: Journal of Business & Economic Statistics
Pages: 411-422
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2010.08110
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08110
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:411-422
Template-Type: ReDIF-Article 1.0
Author-Name: Zhongjun Qu
Author-X-Name-First: Zhongjun
Author-X-Name-Last: Qu
Title: A Test Against Spurious Long Memory
Abstract:
This paper proposes a test statistic for
the null hypothesis that a given time series is a stationary long-memory
process against the alternative hypothesis that it is affected by regime
change or a smoothly varying trend. The proposed test is in the frequency
domain and is based on the derivatives of the profiled local Whittle
likelihood function in a degenerating neighborhood of the origin. The
assumptions used are mild, allowing for non-Gaussianity or conditional
heteroscedasticity. The resulting null limiting distribution is free of
nuisance parameters and can be easily simulated. Furthermore, the test is
straightforward to implement; in particular, it does not require
specifying the form of the trend or the number of different regimes under
the alternative hypothesis. Monte Carlo simulation shows that the test has
decent size and power properties. The article also considers three
empirical applications to illustrate the usefulness of the test. This
article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 423-438
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2010.09153
File-URL: http://hdl.handle.net/10.1198/jbes.2010.09153
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:423-438
Template-Type: ReDIF-Article 1.0
Author-Name: Michal Pakoš
Author-X-Name-First: Michal
Author-X-Name-Last: Pakoš
Title: Estimating Intertemporal and Intratemporal Substitutions When Both Income and Substitution Effects Are Present: The Role of Durable Goods
Abstract:
Homotheticity induces a dramatic
statistical bias in the estimates of the intratemporal and intertemporal
substitutions. I find potent support in favor of nonhomotheticity in
aggregate consumption data, with nondurable goods being necessities and
durable goods luxuries. I find the intertemporal substitutability to be
negligible (0.04), a magnitude close to Hall's (1988) original estimate,
and the intratemporal substitutability between nondurable goods and the
service flow from the stock of durable goods to be small as well (0.18). Despite
that, due to the secular decline of the rental cost, the budget share of
durable goods appears trendless.
Journal: Journal of Business & Economic Statistics
Pages: 439-454
Issue: 3
Volume: 29
Year: 2011
Month: 7
X-DOI: 10.1198/jbes.2009.07046
File-URL: http://hdl.handle.net/10.1198/jbes.2009.07046
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:3:p:439-454
Template-Type: ReDIF-Article 1.0
Author-Name: Nikolay Gospodinov
Author-X-Name-First: Nikolay
Author-X-Name-Last: Gospodinov
Author-Name: Alex Maynard
Author-X-Name-First: Alex
Author-X-Name-Last: Maynard
Author-Name: Elena Pesavento
Author-X-Name-First: Elena
Author-X-Name-Last: Pesavento
Title: Sensitivity of Impulse Responses to Small Low-Frequency Comovements: Reconciling the Evidence on the Effects of Technology Shocks
Abstract:
This article clarifies the empirical
source of the debate on the effect of technology shocks on hours worked.
We find that the contrasting conclusions from levels and differenced
vector autoregression specifications, documented in the literature, can be
explained by a small low-frequency comovement between hours worked and
productivity growth that gives rise to a discontinuity in the solution for
the structural coefficients identified by long-run restrictions. Whereas
the low-frequency comovement is allowed for in the levels specification,
it is implicitly set to 0 in the differenced vector autoregression.
Consequently, even when the root of hours is very close to 1 and the
low-frequency comovement is quite small, removing it can give rise to
biases of sufficient size to account for the empirical difference between
the two specifications.
Journal: Journal of Business & Economic Statistics
Pages: 455-467
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.10042
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10042
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:455-467
Template-Type: ReDIF-Article 1.0
Author-Name: Junye Li
Author-X-Name-First: Junye
Author-X-Name-Last: Li
Title: Sequential Bayesian Analysis of Time-Changed Infinite Activity Derivatives Pricing Models
Abstract:
This article investigates time-changed
infinite activity derivatives pricing models from the sequential Bayesian
perspective. It proposes a sequential Monte Carlo method with the proposal
density generated by the unscented Kalman filter. This approach overcomes
to a large extent the particle impoverishment problem inherent to the
conventional particle filter. Simulation study and real applications
indicate that (1) using the underlying alone cannot capture the dynamics
of states, and by including options, the precision of state filtering is
dramatically improved; (2) the proposed method performs better and is more
robust than the conventional one; and (3) joint identification of the
diffusion, stochastic volatility, and jumps can be achieved using both the
underlying data and the options data.
Journal: Journal of Business & Economic Statistics
Pages: 468-480
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2010.08310
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08310
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:468-480
Template-Type: ReDIF-Article 1.0
Author-Name: Richard H. Gerlach
Author-X-Name-First: Richard H.
Author-X-Name-Last: Gerlach
Author-Name: Cathy W. S. Chen
Author-X-Name-First: Cathy W. S.
Author-X-Name-Last: Chen
Author-Name: Nancy Y. C. Chan
Author-X-Name-First: Nancy Y. C.
Author-X-Name-Last: Chan
Title: Bayesian Time-Varying Quantile Forecasting for Value-at-Risk in Financial Markets
Abstract:
Recently, advances in time-varying
quantile modeling have proven effective in financial Value-at-Risk
forecasting. Some well-known dynamic conditional autoregressive quantile
models are generalized to a fully nonlinear family. The Bayesian solution
to the general quantile regression problem, via the Skewed-Laplace
distribution, is adapted and designed for parameter estimation in this
model family via an adaptive Markov chain Monte Carlo sampling scheme. A
simulation study illustrates favorable precision in estimation, compared
to the standard numerical optimization method. The proposed model family
is clearly favored in an empirical study of 10 major stock markets. The
results show that the proposed model is more accurate at Value-at-Risk
forecasting over a two-year period, when compared to a range of existing
alternative models and methods.
Journal: Journal of Business & Economic Statistics
Pages: 481-492
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2010.08203
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08203
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:481-492
Template-Type: ReDIF-Article 1.0
Author-Name: Melissa Bjelland
Author-X-Name-First: Melissa
Author-X-Name-Last: Bjelland
Author-Name: Bruce Fallick
Author-X-Name-First: Bruce
Author-X-Name-Last: Fallick
Author-Name: John Haltiwanger
Author-X-Name-First: John
Author-X-Name-Last: Haltiwanger
Author-Name: Erika McEntarfer
Author-X-Name-First: Erika
Author-X-Name-Last: McEntarfer
Title: Employer-to-Employer Flows in the United States: Estimates Using Linked Employer-Employee Data
Abstract:
We use administrative data linking
workers and firms to study employer-to-employer (E-to-E) flows. After
discussing how to identify such flows in quarterly data, we investigate
their basic empirical patterns. We find that the pace of E-to-E flows is
high, representing approximately 4% of employment and 30% of separations
each quarter. The pace of E-to-E flows appears to be highly procyclical
and varies systematically across worker, job, and employer
characteristics. There are rich patterns in terms of origin and
destination industries. Somewhat surprisingly, we find that more than
half of the workers making E-to-E transitions switch even broadly defined
industries (i.e., NAICS supersectors).
Journal: Journal of Business & Economic Statistics
Pages: 493-505
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.08053
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08053
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:493-505
Template-Type: ReDIF-Article 1.0
Author-Name: Tomislav Vukina
Author-X-Name-First: Tomislav
Author-X-Name-Last: Vukina
Author-Name: Xiaoyong Zheng
Author-X-Name-First: Xiaoyong
Author-X-Name-Last: Zheng
Title: Homogenous and Heterogenous Contestants in Piece Rate Tournaments: Theory and Empirical Analysis
Abstract:
In this article we show that sorting
different ability contestants in piece rate tournaments into more
homogenous groups alters agents' incentives to exert effort. We propose a
method for structurally estimating the piece rate tournament game with
heterogenous players and apply it to the payroll data from a broiler
production contract. Our counterfactual analysis shows that under
reasonable assumptions, both the principal and the growers can gain when
the tournament groups are heterogenized. This business strategy could be
difficult to implement in real-life settings, however. This article has
supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 506-517
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2010.08345
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08345
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:506-517
Template-Type: ReDIF-Article 1.0
Author-Name: Ke-Li Xu
Author-X-Name-First: Ke-Li
Author-X-Name-Last: Xu
Author-Name: Peter C. B. Phillips
Author-X-Name-First: Peter C. B.
Author-X-Name-Last: Phillips
Title: Tilted Nonparametric Estimation of Volatility Functions With Empirical Applications
Abstract:
This article proposes a novel positive
nonparametric estimator of the conditional variance function without
reliance on logarithmic or other transformations. The estimator is based
on an empirical likelihood modification of conventional local-level
nonparametric regression applied to squared residuals of the mean
regression. The estimator is shown to be asymptotically equivalent to the
local linear estimator in the case of unbounded support but, unlike that
estimator, is restricted to be nonnegative in finite samples. It is fully
adaptive to the unknown conditional mean function. Simulations are
conducted to evaluate the finite-sample performance of the estimator. Two
empirical applications are reported. One uses cross-sectional data and
studies the relationship between occupational prestige and income, and the
other uses time series data on Treasury bill rates to fit the total
volatility function in a continuous-time jump diffusion model.
Journal: Journal of Business & Economic Statistics
Pages: 518-528
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.09012
File-URL: http://hdl.handle.net/10.1198/jbes.2011.09012
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:518-528
Template-Type: ReDIF-Article 1.0
Author-Name: Emmanuel Dhyne
Author-X-Name-First: Emmanuel
Author-X-Name-Last: Dhyne
Author-Name: Catherine Fuss
Author-X-Name-First: Catherine
Author-X-Name-Last: Fuss
Author-Name: M. Hashem Pesaran
Author-X-Name-First: M. Hashem
Author-X-Name-Last: Pesaran
Author-Name: Patrick Sevestre
Author-X-Name-First: Patrick
Author-X-Name-Last: Sevestre
Title: Lumpy Price Adjustments: A Microeconometric Analysis
Abstract:
Based on a reduced-form state-dependent
pricing model with random thresholds, we specify and estimate a nonlinear
panel data model with an unobserved factor representing the common cost or
demand components of the unobserved optimal price. Using this model, we
are able to assess the relative importance of common and idiosyncratic
shocks in explaining the frequency and magnitude of price changes in the
case of a wide variety of consumer products in Belgium and France. We find
that the mean level and variability of the random thresholds are key for
explaining differences across products in the frequency of price changes.
We also find that the idiosyncratic shocks represent the most important
driver of the magnitude of price changes. Supplementary materials for this
article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 529-540
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.09066
File-URL: http://hdl.handle.net/10.1198/jbes.2011.09066
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:529-540
Template-Type: ReDIF-Article 1.0
Author-Name: Yiguo Sun
Author-X-Name-First: Yiguo
Author-X-Name-Last: Sun
Author-Name: Qi Li
Author-X-Name-First: Qi
Author-X-Name-Last: Li
Title: Data-Driven Bandwidth Selection for Nonstationary Semiparametric Models
Abstract:
This article extends the asymptotic
results of the traditional least squares cross-validatory (CV) bandwidth
selection method to semiparametric regression models with nonstationary
data. Two main findings are that (a) the CV-selected bandwidth is
stochastic even asymptotically and (b) the selected bandwidth based on the
local constant method converges to 0 at a different speed than that based
on the local linear method. Both findings are in sharp contrast to
existing results when working with weakly dependent or independent data.
Monte Carlo simulations confirm our theoretical results and show that the
automatic data-driven method works well.
Journal: Journal of Business & Economic Statistics
Pages: 541-551
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.09159
File-URL: http://hdl.handle.net/10.1198/jbes.2011.09159
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:541-551
Template-Type: ReDIF-Article 1.0
Author-Name: Drew Creal
Author-X-Name-First: Drew
Author-X-Name-Last: Creal
Author-Name: Siem Jan Koopman
Author-X-Name-First: Siem Jan
Author-X-Name-Last: Koopman
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Title: A Dynamic Multivariate Heavy-Tailed Model for Time-Varying Volatilities and Correlations
Abstract:
We propose a new class of
observation-driven time-varying parameter models for dynamic volatilities
and correlations to handle time series from heavy-tailed distributions.
The model adopts generalized autoregressive score dynamics to obtain a
time-varying covariance matrix of the multivariate Student
t distribution. The key novelty of our proposed model
concerns the weighting of lagged squared innovations for the estimation of
future correlations and volatilities. When we account for heavy tails of
distributions, we obtain estimates that are more robust to large
innovations. We provide an empirical illustration for a panel of daily
equity returns.
Journal: Journal of Business & Economic Statistics
Pages: 552-563
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.10070
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10070
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:552-563
Template-Type: ReDIF-Article 1.0
Author-Name: Lu Han
Author-X-Name-First: Lu
Author-X-Name-Last: Han
Author-Name: Seung-Hyun Hong
Author-X-Name-First: Seung-Hyun
Author-X-Name-Last: Hong
Title: Testing Cost Inefficiency Under Free Entry in the Real Estate Brokerage Industry
Abstract:
This article provides an empirical
framework for studying entry and cost inefficiency in the real estate
brokerage industry. We present a structural entry model that exploits
individual level data on entry and earnings to estimate potential real
estate agents' revenues and reservation wages, thereby recovering costs of
providing brokerage service. Using U.S. Census data, we estimate the model
and find strong evidence for cost inefficiency under free entry,
attributable in particular to wasteful nonprice competition. We further
use the estimated model to evaluate welfare implications of the rebate
bans that currently persist in some U.S. states. Supplemental materials
are provided online.
Journal: Journal of Business & Economic Statistics
Pages: 564-578
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.08314
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08314
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:564-578
Template-Type: ReDIF-Article 1.0
Author-Name: Muyi Li
Author-X-Name-First: Muyi
Author-X-Name-Last: Li
Author-Name: Guodong Li
Author-X-Name-First: Guodong
Author-X-Name-Last: Li
Author-Name: Wai Keung Li
Author-X-Name-First: Wai Keung
Author-X-Name-Last: Li
Title: Score Tests for Hyperbolic GARCH Models
Abstract:
Davidson (2004) recently proposed the
hyperbolic GARCH model to capture the phenomenon of long-range dependence
in volatility, with the extent of such dependence measured by the
geometric or hyperbolic decay of the coefficients in an ARCH(∞)
model. In this article, we reinterpret the hyperbolic GARCH model by
building a link with the common GARCH model, and construct a simplified
score test to check the presence of the hyperbolic decay. We derive the
asymptotic distribution of the test statistic under the null hypothesis and under
local alternatives. We conduct Monte Carlo simulation experiments to study the
performance of this test, and report an illustration on two log return
sequences. This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 579-586
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.10024
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10024
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:579-586
Template-Type: ReDIF-Article 1.0
Author-Name: Robert S. Chirinko
Author-X-Name-First: Robert S.
Author-X-Name-Last: Chirinko
Author-Name: Steven M. Fazzari
Author-X-Name-First: Steven M.
Author-X-Name-Last: Fazzari
Author-Name: Andrew P. Meyer
Author-X-Name-First: Andrew P.
Author-X-Name-Last: Meyer
Title: A New Approach to Estimating Production Function Parameters: The Elusive Capital--Labor Substitution Elasticity
Abstract:
Parameters of taste and technology are
central to a wide variety of economic models and issues. This article
proposes a simple method for estimating production function parameters
from panel data, with a particular focus on the elasticity of substitution
between capital and labor. Elasticity estimates have varied widely, and a
consensus estimate remains elusive. Our estimation strategy exploits
long-run variation and thus avoids several pitfalls, including
difficult-to-specify dynamics, transitory time-series variation, and
positively sloped supply schedules, that can bias the estimated
elasticity. Our results are based on an extensive panel comprising 1860
firms. Our approach generates a precisely estimated elasticity of 0.40.
Although existing estimates range widely, we document a remarkable
convergence of results from two related approaches applied to a common
dataset. The method developed here may prove useful in estimating other
structural parameters from panel datasets.
Journal: Journal of Business & Economic Statistics
Pages: 587-594
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.08119
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08119
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:587-594
Template-Type: ReDIF-Article 1.0
Author-Name: Keisuke Hirano
Author-X-Name-First: Keisuke
Author-X-Name-Last: Hirano
Author-Name: Jonathan Wright
Author-X-Name-First: Jonathan
Author-X-Name-Last: Wright
Title: Editors' Report 2011
Journal: Journal of Business & Economic Statistics
Pages: 597-597
Issue: 4
Volume: 29
Year: 2011
Month: 10
X-DOI: 10.1198/jbes.2011.294er
File-URL: http://hdl.handle.net/10.1198/jbes.2011.294er
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:29:y:2011:i:4:p:597-597
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Forecast Rationality Tests Based on Multi-Horizon Bounds
Abstract:
Forecast rationality under squared error loss implies various bounds on
second moments of the data across forecast horizons. For example, the mean
squared forecast error should be increasing in the horizon, and the mean
squared forecast should be decreasing in the horizon. We propose
rationality tests based on these restrictions, including new ones that can
be conducted without data on the target variable, and implement them via
tests of inequality constraints in a regression framework. A new test of
optimal forecast revision based on a regression of the target variable on
the long-horizon forecast and the sequence of interim forecast revisions
is also proposed. The size and power of the new tests are compared with
those of extant tests through Monte Carlo simulations. An empirical
application to the Federal Reserve's Greenbook forecasts is presented.
Journal: Journal of Business & Economic Statistics
Pages: 1-17
Issue: 1
Volume: 30
Year: 2011
Month: 6
X-DOI: 10.1080/07350015.2012.634337
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634337
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:1-17
Template-Type: ReDIF-Article 1.0
Author-Name: Dean Croushore
Author-X-Name-First: Dean
Author-X-Name-Last: Croushore
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 17-20
Issue: 1
Volume: 30
Year: 2011
Month: 8
X-DOI: 10.1080/07350015.2012.634340
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634340
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:17-20
Template-Type: ReDIF-Article 1.0
Author-Name: Kajal Lahiri
Author-X-Name-First: Kajal
Author-X-Name-Last: Lahiri
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 20-25
Issue: 1
Volume: 30
Year: 2011
Month: 7
X-DOI: 10.1080/07350015.2012.634342
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634342
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:20-25
Template-Type: ReDIF-Article 1.0
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 25-29
Issue: 1
Volume: 30
Year: 2011
Month: 8
X-DOI: 10.1080/07350015.2012.634343
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634343
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:25-29
Template-Type: ReDIF-Article 1.0
Author-Name: Lennart Hoogerheide
Author-X-Name-First: Lennart
Author-X-Name-Last: Hoogerheide
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Author-Name: Herman K. van Dijk
Author-X-Name-First: Herman K.
Author-X-Name-Last: van Dijk
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 30-33
Issue: 1
Volume: 30
Year: 2011
Month: 9
X-DOI: 10.1080/07350015.2012.634348
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634348
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:30-33
Template-Type: ReDIF-Article 1.0
Author-Name: Kenneth D. West
Author-X-Name-First: Kenneth D.
Author-X-Name-Last: West
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 34-35
Issue: 1
Volume: 30
Year: 2011
Month: 6
X-DOI: 10.1080/07350015.2012.634350
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634350
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:34-35
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 36-40
Issue: 1
Volume: 30
Year: 2012
Month: 1
X-DOI: 10.1080/07350015.2012.634354
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634354
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:36-40
Template-Type: ReDIF-Article 1.0
Author-Name: Pascal Lavergne
Author-X-Name-First: Pascal
Author-X-Name-Last: Lavergne
Author-Name: Valentin Patilea
Author-X-Name-First: Valentin
Author-X-Name-Last: Patilea
Title: One for All and All for One: Regression Checks With Many Regressors
Abstract:
We develop a novel approach to building checks of parametric regression
models when many regressors are present, based on a class of sufficiently
rich semiparametric alternatives, namely single-index models. We propose
an omnibus test based on the kernel method that performs against a
sequence of directional nonparametric alternatives as if there were only
one regressor, whatever the number of regressors. This test can be viewed
as a smooth version of the integrated conditional moment test of Bierens.
Qualitative information can be easily incorporated into the procedure to
enhance power. In an extensive comparative simulation study, we find that
our test is not very sensitive to the smoothing parameter and performs
well in multidimensional settings. We apply this test to a cross-country
growth regression model.
Journal: Journal of Business & Economic Statistics
Pages: 41-52
Issue: 1
Volume: 30
Year: 2011
Month: 1
X-DOI: 10.1198/jbes.2011.07152
File-URL: http://hdl.handle.net/10.1198/jbes.2011.07152
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:41-52
Template-Type: ReDIF-Article 1.0
Author-Name: Todd E. Clark
Author-X-Name-First: Todd E.
Author-X-Name-Last: Clark
Author-Name: Michael W. McCracken
Author-X-Name-First: Michael W.
Author-X-Name-Last: McCracken
Title: Reality Checks and Comparisons of Nested Predictive Models
Abstract:
This article develops a simple bootstrap method for simulating asymptotic
critical values for tests of equal forecast accuracy and encompassing
among many nested models. Our method combines elements of fixed regressor
and wild bootstraps. We first derive the asymptotic distributions of tests
of equal forecast accuracy and encompassing applied to forecasts from
multiple models that nest the benchmark model—that is, reality
check tests. We then prove the validity of the bootstrap for these tests.
Monte Carlo experiments indicate that our proposed bootstrap has better
finite-sample size and power than other methods designed for comparison of
nonnested models. Supplementary materials are available online.
Journal: Journal of Business & Economic Statistics
Pages: 53-66
Issue: 1
Volume: 30
Year: 2011
Month: 2
X-DOI: 10.1198/jbes.2011.10278
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10278
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:53-66
Template-Type: ReDIF-Article 1.0
Author-Name: Arthur Lewbel
Author-X-Name-First: Arthur
Author-X-Name-Last: Lewbel
Title: Using Heteroscedasticity to Identify and Estimate Mismeasured and Endogenous Regressor Models
Abstract:
This article proposes a new method of obtaining identification in
mismeasured regressor models, triangular systems, and simultaneous
equation systems. The method may be used in applications where other
sources of identification, such as instrumental variables or repeated
measurements, are not available. Associated estimators take the form of
two-stage least squares or generalized method of moments. Identification
comes from a heteroscedastic covariance restriction that is shown to be a
feature of many models of endogeneity or mismeasurement. Identification is
also obtained for semiparametric partly linear models, and associated
estimators are provided. Set identification bounds are derived for cases
where point-identifying assumptions fail to hold. An empirical application
estimating Engel curves is provided.
Journal: Journal of Business & Economic Statistics
Pages: 67-80
Issue: 1
Volume: 30
Year: 2010
Month: 12
X-DOI: 10.1080/07350015.2012.643126
File-URL: http://hdl.handle.net/10.1080/07350015.2012.643126
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2010:i:1:p:67-80
Template-Type: ReDIF-Article 1.0
Author-Name: Qian Li
Author-X-Name-First: Qian
Author-X-Name-Last: Li
Author-Name: Pravin K. Trivedi
Author-X-Name-First: Pravin K.
Author-X-Name-Last: Trivedi
Title: Medicare Health Plan Choices of the Elderly: A Choice-With-Screening Model
Abstract:
With the expansion of Medicare, increasing attention has been paid to the
behavior of elderly persons in choosing health insurance. This article
investigates how the elderly use plan attributes to screen their Medicare
health plans to simplify a complicated choice situation. The proposed
model extends the conventional random utility models by considering a
screening stage. Bayesian estimation is implemented, and the results based
on Medicare data show that the elderly are likely to screen according to
premium, prescription drug coverage, and vision coverage. These attributes
have nonlinear effects on plan choice that cannot be captured by
conventional models. This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 81-93
Issue: 1
Volume: 30
Year: 2011
Month: 2
X-DOI: 10.1198/jbes.2011.0819
File-URL: http://hdl.handle.net/10.1198/jbes.2011.0819
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:81-93
Template-Type: ReDIF-Article 1.0
Author-Name: Ingmar Nolte
Author-X-Name-First: Ingmar
Author-X-Name-Last: Nolte
Author-Name: Valeri Voev
Author-X-Name-First: Valeri
Author-X-Name-Last: Voev
Title: Least Squares Inference on Integrated Volatility and the Relationship Between Efficient Prices and Noise
Abstract:
The expected value of sums of squared intraday returns (realized
variance) gives rise to a least squares regression which adapts itself to
the assumptions of the noise process and allows for joint inference on
integrated variance (IV), noise moments, and
price-noise relations. In the iid noise case, we derive the asymptotic
variance of the IV and noise variance estimators
and show that they are consistent. The joint estimation approach is
particularly attractive as it reveals important characteristics of the
noise process which can be related to liquidity and market efficiency. The
analysis of dependence between the price and noise processes provides an
often missing link to market microstructure theory. We find substantial
differences in the noise characteristics of trade and quote data arising
from the effect of distinct market microstructure frictions. This article
has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 94-108
Issue: 1
Volume: 30
Year: 2011
Month: 4
X-DOI: 10.1080/10473289.2011.637876
File-URL: http://hdl.handle.net/10.1080/10473289.2011.637876
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:94-108
Template-Type: ReDIF-Article 1.0
Author-Name: José Gonzalo Rangel
Author-X-Name-First: José Gonzalo
Author-X-Name-Last: Rangel
Author-Name: Robert F. Engle
Author-X-Name-First: Robert F.
Author-X-Name-Last: Engle
Title: The Factor--Spline--GARCH Model for High and Low Frequency Correlations
Abstract:
We propose a new approach to model high and low frequency components of
equity correlations. Our framework combines a factor asset pricing
structure with other specifications capturing dynamic properties of
volatilities and covariances between a single common factor and
idiosyncratic returns. High frequency correlations mean revert to slowly
varying functions that characterize long-term correlation patterns. We
associate such term behavior with low frequency economic variables,
including determinants of market and idiosyncratic volatilities.
Flexibility in the time-varying level of mean reversion improves both the
empirical fit of equity correlations in the United States and correlation
forecasts at long horizons.
Journal: Journal of Business & Economic Statistics
Pages: 109-124
Issue: 1
Volume: 30
Year: 2011
Month: 5
X-DOI: 10.1080/07350015.2012.643132
File-URL: http://hdl.handle.net/10.1080/07350015.2012.643132
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:109-124
Template-Type: ReDIF-Article 1.0
Author-Name: Charles Bellemare
Author-X-Name-First: Charles
Author-X-Name-Last: Bellemare
Author-Name: Luc Bissonnette
Author-X-Name-First: Luc
Author-X-Name-Last: Bissonnette
Author-Name: Sabine Kröger
Author-X-Name-First: Sabine
Author-X-Name-Last: Kröger
Title: Flexible Approximation of Subjective Expectations Using Probability Questions
Abstract:
We propose a flexible method to approximate the subjective cumulative
distribution function of an economic agent about the future realization of
a continuous random variable. The method can closely approximate a wide
variety of distributions while maintaining weak assumptions on the shape
of distribution functions. We show how moments and quantiles of general
functions of the random variable can be computed analytically and/or
numerically. We illustrate the method by revisiting the determinants of
income expectations in the United States. A Monte Carlo analysis suggests
that a quantile-based flexible approach can be used to successfully deal
with censoring and possible rounding levels present in the data. Finally,
our analysis suggests that the performance of our flexible approach
matches that of a correctly specified parametric approach and is clearly
better than that of a misspecified parametric approach.
Journal: Journal of Business & Economic Statistics
Pages: 125-131
Issue: 1
Volume: 30
Year: 2011
Month: 4
X-DOI: 10.1198/jbes.2011.09053
File-URL: http://hdl.handle.net/10.1198/jbes.2011.09053
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:125-131
Template-Type: ReDIF-Article 1.0
Author-Name: Xinyu Zhang
Author-X-Name-First: Xinyu
Author-X-Name-Last: Zhang
Author-Name: Alan T. K. Wan
Author-X-Name-First: Alan T. K.
Author-X-Name-Last: Wan
Author-Name: Sherry Z. Zhou
Author-X-Name-First: Sherry Z.
Author-X-Name-Last: Zhou
Title: Focused Information Criteria, Model Selection, and Model Averaging in a Tobit Model With a Nonzero Threshold
Abstract:
Claeskens and Hjort (2003) have developed a focused information criterion
(FIC) for model selection that selects different models based on different
focused functions with those functions tailored to the parameters singled
out for interest. Hjort and Claeskens (2003) also have presented model
averaging as an alternative to model selection, and suggested a local
misspecification framework for studying the limiting distributions and
asymptotic risk properties of post-model selection and model average
estimators in parametric models. Despite the burgeoning literature on
Tobit models, little work has been done on model selection explicitly in
the Tobit context. In this article we propose FICs for variable selection
allowing for such measures as mean absolute deviation, mean squared error,
and expected linear exponential errors in a type I Tobit model
with an unknown threshold. We also develop a model average Tobit estimator
using values of a smoothed version of the FIC as weights. We study the
finite-sample performance of model selection and model average estimators
resulting from various FICs via a Monte Carlo experiment, and demonstrate
the possibility of using a model screening procedure before combining the
models. Finally, we present an example from a well-known study on married
women's working hours to illustrate the estimation methods discussed. This
article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 132-142
Issue: 1
Volume: 30
Year: 2011
Month: 6
X-DOI: 10.1198/jbes.2011.10075
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10075
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:132-142
Template-Type: ReDIF-Article 1.0
Author-Name: Randal J. Verbrugge
Author-X-Name-First: Randal J.
Author-X-Name-Last: Verbrugge
Title: Do the Consumer Price Index's Utilities Adjustments for Owners’ Equivalent Rent Distort Inflation Measurement?
Abstract:
The Consumer Price Index (CPI) is an important social index number,
central to monetary policy, well-being measurement, optimal pricing, and
tax and contract escalation. Shelter costs have a large weight in the CPI,
so their movements receive much attention. The CPI incorporates two
shelter indexes: Rent, covering renters, and Owners’ Equivalent
Rent (OER), covering owners. Between 1999 and 2006, Rent and OER inflation
twice diverged markedly; this occurred again recently. Because these
indexes share a common data source—a large sample of market
rents—such divergence often prompts questions about CPI methods,
particularly the OER utilities adjustment. (This adjustment is necessary
to produce an unbiased OER index, because many market rents include
utilities, but OER is a rent-of-shelter concept.) The utilities adjustment
procedure is no smoking gun. It was not the major cause of these
divergences, and it generates no long-run inflation measurement bias.
Nonetheless, it increases OER inflation volatility and can drive OER
inflation far from its measurement goal in the short run. This article
develops a theory of utilities adjustment and outlines a straightforward
improvement of Bureau of Labor Statistics procedures that eliminates their
undesirable properties. The short-run impact on inflation measurement can
be very sizable.
Journal: Journal of Business & Economic Statistics
Pages: 143-148
Issue: 1
Volume: 30
Year: 2009
Month: 12
X-DOI: 10.1198/jbes.2011.08016
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08016
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2009:i:1:p:143-148
Template-Type: ReDIF-Article 1.0
Author-Name: Marigee Bacolod
Author-X-Name-First: Marigee
Author-X-Name-Last: Bacolod
Author-Name: John DiNardo
Author-X-Name-First: John
Author-X-Name-Last: DiNardo
Author-Name: Mireille Jacobson
Author-X-Name-First: Mireille
Author-X-Name-Last: Jacobson
Title: Beyond Incentives: Do Schools Use Accountability Rewards Productively?
Abstract:
We use a regression discontinuity design to analyze an understudied
aspect of school accountability systems—how schools use financial
rewards. For two years, California's accountability system financially
rewarded schools based on a deterministic function of test scores.
Qualifying schools received per-pupil awards amounting to about 1% of
statewide per-pupil spending. Corroborating anecdotal evidence that awards
were paid out as teacher bonuses, we find no evidence that winning schools
purchased more instructional material, increased teacher hiring, or
changed the subject-specific composition of their teaching staff. Most
importantly, we find no evidence that student achievement increased in
winning schools. Supplemental materials for this article are available
online.
Journal: Journal of Business & Economic Statistics
Pages: 149-163
Issue: 1
Volume: 30
Year: 2011
Month: 6
X-DOI: 10.1080/07350015.2012.637868
File-URL: http://hdl.handle.net/10.1080/07350015.2012.637868
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:149-163
Template-Type: ReDIF-Article 1.0
Author-Name: Wolfgang Rinnergschwentner
Author-X-Name-First: Wolfgang
Author-X-Name-Last: Rinnergschwentner
Author-Name: Gottfried Tappeiner
Author-X-Name-First: Gottfried
Author-X-Name-Last: Tappeiner
Author-Name: Janette Walde
Author-X-Name-First: Janette
Author-X-Name-Last: Walde
Title: Multivariate Stochastic Volatility via Wishart Processes: A Comment
Abstract:
This comment refers to an error in the methodology for estimating the
parameters of the model developed by Philipov and Glickman for modeling
multivariate stochastic volatility via Wishart processes. For estimation
they used Bayesian techniques. The derived expressions for the full
conditionals of the model parameters as well as the expression for the
acceptance ratio of the covariance matrix are erroneous. In this erratum
all necessary formulae are given to guarantee an appropriate
implementation and application of the model.
Journal: Journal of Business & Economic Statistics
Pages: 164-164
Issue: 1
Volume: 30
Year: 2011
Month: 9
X-DOI: 10.1080/07350015.2012.634358
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634358
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:1:p:164-164
Template-Type: ReDIF-Article 1.0
Author-Name: Szymon Wlazlowski
Author-X-Name-First: Szymon
Author-X-Name-Last: Wlazlowski
Author-Name: Monica Giulietti
Author-X-Name-First: Monica
Author-X-Name-Last: Giulietti
Author-Name: Jane Binner
Author-X-Name-First: Jane
Author-X-Name-Last: Binner
Author-Name: Costas Milas
Author-X-Name-First: Costas
Author-X-Name-Last: Milas
Title: Price Transmission in the EU Wholesale Petroleum Markets
Abstract:
This article employs nonlinear smooth transition models to analyze the
relationship between upstream and midstream prices of petroleum products.
We test for the presence of nonlinearities in price linkages using both
weekly series constructed using official EU procedures and also daily
industry series applied for the first time. Our results show that the
estimated shape of the transition function and equilibrium reversion path
depend on the frequency of the price dataset. Our analysis of the crude
oil to wholesale price transmission provides evidence of nonlinearities
when prices are observed with daily frequency. The nature of the
nonlinearities provides evidence in support of the existence of menu costs
or, more generally, frictions in the markets rather than supply adjustment
costs. This result differs from that found for the U.S. petroleum markets.
Journal: Journal of Business & Economic Statistics
Pages: 165-172
Issue: 2
Volume: 30
Year: 2011
Month: 1
X-DOI: 10.1080/07350015.2012.672290
File-URL: http://hdl.handle.net/10.1080/07350015.2012.672290
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:165-172
Template-Type: ReDIF-Article 1.0
Author-Name: Alastair Cunningham
Author-X-Name-First: Alastair
Author-X-Name-Last: Cunningham
Author-Name: Jana Eklund
Author-X-Name-First: Jana
Author-X-Name-Last: Eklund
Author-Name: Chris Jeffery
Author-X-Name-First: Chris
Author-X-Name-Last: Jeffery
Author-Name: George Kapetanios
Author-X-Name-First: George
Author-X-Name-Last: Kapetanios
Author-Name: Vincent Labhard
Author-X-Name-First: Vincent
Author-X-Name-Last: Labhard
Title: A State Space Approach to Extracting the Signal From Uncertain Data
Abstract:
Most macroeconomic data are uncertain—they are estimates rather
than perfect measures of underlying economic variables. One symptom of
that uncertainty is the propensity of statistical agencies to revise their
estimates in the light of new information or methodological advances. This
paper sets out an approach for extracting the signal from uncertain data.
It describes a two-step estimation procedure in which the history of past
revisions is first used to estimate the parameters of a measurement
equation describing the official published estimates. These parameters are
then imposed in a maximum likelihood estimation of a state space model for
the macroeconomic variable.
Journal: Journal of Business & Economic Statistics
Pages: 173-180
Issue: 2
Volume: 30
Year: 2009
Month: 3
X-DOI: 10.1198/jbes.2009.08171
File-URL: http://hdl.handle.net/10.1198/jbes.2009.08171
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2009:i:2:p:173-180
Template-Type: ReDIF-Article 1.0
Author-Name: N. Kundan Kishor
Author-X-Name-First: N. Kundan
Author-X-Name-Last: Kishor
Author-Name: Evan F. Koenig
Author-X-Name-First: Evan F.
Author-X-Name-Last: Koenig
Title: VAR Estimation and Forecasting When Data Are Subject to Revision
Abstract:
We show that Howrey’s method for producing economic forecasts when
data are subject to revision is easily generalized to handle the case
where data are produced by a sophisticated statistical agency. The
proposed approach assumes that government estimates are efficient with a
finite lag. It takes no stand on whether earlier revisions are the result
of “news” or of reductions in “noise.” We
present asymptotic performance results in the scalar case and illustrate
the technique using several simple models of economic activity. In each
case, it outperforms both conventional VAR analysis and the original
Howrey method. It produces GDP forecasts that are competitive with those
of professional forecasters. Special cases and extensions of the analysis
are discussed in a series of appendices that are available online.
Journal: Journal of Business & Economic Statistics
Pages: 181-190
Issue: 2
Volume: 30
Year: 2009
Month: 7
X-DOI: 10.1198/jbes.2010.08169
File-URL: http://hdl.handle.net/10.1198/jbes.2010.08169
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2009:i:2:p:181-190
Template-Type: ReDIF-Article 1.0
Author-Name: Erik Meijer
Author-X-Name-First: Erik
Author-X-Name-Last: Meijer
Author-Name: Susann Rohwedder
Author-X-Name-First: Susann
Author-X-Name-Last: Rohwedder
Author-Name: Tom Wansbeek
Author-X-Name-First: Tom
Author-X-Name-Last: Wansbeek
Title: Measurement Error in Earnings Data: Using a Mixture Model Approach to Combine Survey and Register Data
Abstract:
Survey data on earnings tend to contain measurement error. Administrative
data are superior in principle, but are worthless in the case of a mismatch.
We develop methods for prediction in mixture factor analysis models that
combine both data sources to arrive at a single earnings figure. We apply
the methods to a Swedish data set. Our results show that register earnings
data perform poorly if there is a (small) probability of a mismatch.
Survey earnings data are more reliable, despite their measurement error.
Predictors that combine both and take conditional class probabilities into
account outperform all other predictors. This article has supplementary
material online.
Journal: Journal of Business & Economic Statistics
Pages: 191-201
Issue: 2
Volume: 30
Year: 2011
Month: 2
X-DOI: 10.1198/jbes.2011.08166
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08166
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:191-201
Template-Type: ReDIF-Article 1.0
Author-Name: Baoline Chen
Author-X-Name-First: Baoline
Author-X-Name-Last: Chen
Title: A Balanced System of U.S. Industry Accounts and Distribution of the Aggregate Statistical Discrepancy by Industry
Abstract:
This article describes and illustrates a generalized least squares (GLS)
method that systematically incorporates all available information on the
reliability of initial data in the reconciliation of a large disaggregated
system of national accounts. The GLS method is applied to reconciling the
1997 U.S. Input-Output and Gross Domestic Product (GDP)-by-industry
accounts with benchmarked GDP estimated from expenditures. The GLS
procedure produced a balanced system of industry accounts and distributed
the aggregate statistical discrepancy by industry according to the
estimated relative reliabilities of initial estimates. The study
demonstrates the empirical feasibility and computational efficiency of the
GLS method for large accounts reconciliation.
Journal: Journal of Business & Economic Statistics
Pages: 202-211
Issue: 2
Volume: 30
Year: 2012
Month: 2
X-DOI: 10.1080/07350015.2012.669667
File-URL: http://hdl.handle.net/10.1080/07350015.2012.669667
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:2:p:202-211
Template-Type: ReDIF-Article 1.0
Author-Name: Robert Engle
Author-X-Name-First: Robert
Author-X-Name-Last: Engle
Author-Name: Bryan Kelly
Author-X-Name-First: Bryan
Author-X-Name-Last: Kelly
Title: Dynamic Equicorrelation
Abstract:
A new covariance matrix estimator is proposed under the assumption that
at every time period all pairwise correlations are equal. This assumption,
which is pragmatically applied in various areas of finance, makes it
possible to estimate arbitrarily large covariance matrices with ease. The
model, called DECO, involves first adjusting for individual volatilities
and then estimating correlations. A quasi-maximum likelihood result shows
that DECO provides consistent parameter estimates even when the
equicorrelation assumption is violated. We demonstrate how to generalize
DECO to block equicorrelation structures. DECO estimates for U.S. stock
return data show that (block) equicorrelated models can provide a better
fit of the data than DCC. Using out-of-sample forecasts, DECO and Block
DECO are shown to improve portfolio selection compared to an unrestricted
dynamic correlation structure.
Journal: Journal of Business & Economic Statistics
Pages: 212-228
Issue: 2
Volume: 30
Year: 2011
Month: 7
X-DOI: 10.1080/07350015.2011.652048
File-URL: http://hdl.handle.net/10.1080/07350015.2011.652048
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:212-228
Template-Type: ReDIF-Article 1.0
Author-Name: Jesús Gonzalo
Author-X-Name-First: Jesús
Author-X-Name-Last: Gonzalo
Author-Name: Jean-Yves Pitarakis
Author-X-Name-First: Jean-Yves
Author-X-Name-Last: Pitarakis
Title: Regime-Specific Predictability in Predictive Regressions
Abstract:
Predictive regressions are linear specifications linking a noisy variable
such as stock returns to past values of a very persistent regressor with
the aim of assessing the presence of predictability. Key complications
that arise are the potential presence of endogeneity and the poor adequacy
of asymptotic approximations. In this article, we develop tests for
uncovering the presence of predictability in such models when the strength
or direction of predictability may alternate across different economically
meaningful episodes. An empirical application reconsiders the dividend
yield-based return predictability and documents a strong predictability
that is countercyclical, occurring solely during bad economic times. This
article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 229-241
Issue: 2
Volume: 30
Year: 2011
Month: 6
X-DOI: 10.1080/07350015.2011.652053
File-URL: http://hdl.handle.net/10.1080/07350015.2011.652053
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:229-241
Template-Type: ReDIF-Article 1.0
Author-Name: Ana-Maria Dumitru
Author-X-Name-First: Ana-Maria
Author-X-Name-Last: Dumitru
Author-Name: Giovanni Urga
Author-X-Name-First: Giovanni
Author-X-Name-Last: Urga
Title: Identifying Jumps in Financial Assets: A Comparison Between Nonparametric Jump Tests
Abstract:
We perform a comprehensive Monte Carlo comparison between nine
alternative procedures available in the literature to detect jumps in
financial assets using high-frequency data. We evaluate size and power
properties of the procedures under alternative sampling frequencies,
persistence in volatility, jump size and intensity, and degree of
contamination with microstructure noise. The overall best performance is
shown by the Andersen, Bollerslev, and Dobrev (2007) and Lee and Mykland
(2008) intraday procedures (ABD-LM), provided the price process is not
very volatile. We propose two extensions to the existing battery of tests.
The first regards finite sample improvements based on simulated
critical values for the ABD-LM procedure. The second regards a procedure
that combines frequencies and tests to reduce the number of spurious
jumps. Finally, we report an empirical analysis using real high-frequency
data on five stocks listed on the New York Stock Exchange.
Journal: Journal of Business & Economic Statistics
Pages: 242-255
Issue: 2
Volume: 30
Year: 2011
Month: 10
X-DOI: 10.1080/07350015.2012.663250
File-URL: http://hdl.handle.net/10.1080/07350015.2012.663250
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:242-255
Template-Type: ReDIF-Article 1.0
Author-Name: Matei Demetrescu
Author-X-Name-First: Matei
Author-X-Name-Last: Demetrescu
Author-Name: Christoph Hanck
Author-X-Name-First: Christoph
Author-X-Name-Last: Hanck
Title: Unit Root Testing in Heteroscedastic Panels Using the Cauchy Estimator
Abstract:
The Cauchy estimator of an autoregressive root uses the sign of the first
lag as an instrumental variable. The resulting IV t-type
statistic follows a standard normal limiting distribution in the unit
root case, even under unconditional heteroscedasticity, if the series to be
tested has no deterministic trends. The standard normality of the Cauchy
test is exploited to obtain a standard normal panel unit root test under
cross-sectional dependence and time-varying volatility with an
orthogonalization procedure. The article’s analysis of the joint
N, T asymptotics of the test suggests
that (1) N should be smaller than T and
(2) its local power is competitive with other popular tests. To render the
test applicable when N is comparable with, or larger
than, T, shrinkage estimators of the involved covariance
matrix are used. The finite-sample performance of the discussed procedures
is found to be satisfactory.
Journal: Journal of Business & Economic Statistics
Pages: 256-264
Issue: 2
Volume: 30
Year: 2011
Month: 10
X-DOI: 10.1080/07350015.2011.638839
File-URL: http://hdl.handle.net/10.1080/07350015.2011.638839
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:256-264
Template-Type: ReDIF-Article 1.0
Author-Name: Shiferaw Gurmu
Author-X-Name-First: Shiferaw
Author-X-Name-Last: Gurmu
Author-Name: John Elder
Author-X-Name-First: John
Author-X-Name-Last: Elder
Title: Flexible Bivariate Count Data Regression Models
Abstract:
The article develops a semiparametric estimation method for the bivariate
count data regression model. We develop a series expansion approach in
which dependence between count variables is introduced by means of
stochastically related unobserved heterogeneity components, and in which,
unlike existing commonly used models, positive as well as negative
correlations are allowed. Extensions that accommodate excess zeros,
censored data, and multivariate generalizations are also given. Monte
Carlo experiments and an empirical application to tobacco use confirm
that the model performs well relative to existing bivariate models, in
terms of various statistical criteria and in capturing the range of
correlation among dependent variables. This article has supplementary
materials online.
Journal: Journal of Business & Economic Statistics
Pages: 265-274
Issue: 2
Volume: 30
Year: 2011
Month: 8
X-DOI: 10.1080/07350015.2011.638816
File-URL: http://hdl.handle.net/10.1080/07350015.2011.638816
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:265-274
Template-Type: ReDIF-Article 1.0
Author-Name: Taoufik Bouezmarni
Author-X-Name-First: Taoufik
Author-X-Name-Last: Bouezmarni
Author-Name: Jeroen V.K. Rombouts
Author-X-Name-First: Jeroen V.K.
Author-X-Name-Last: Rombouts
Author-Name: Abderrahim Taamouti
Author-X-Name-First: Abderrahim
Author-X-Name-Last: Taamouti
Title: Nonparametric Copula-Based Test for Conditional Independence with Applications to Granger Causality
Abstract:
This article proposes a new nonparametric test for conditional
independence that can directly be applied to test for Granger causality.
Based on the comparison of copula densities, the test is easy to implement
because it does not involve a weighting function in the test statistic,
and it can be applied in general settings since there is no restriction on
the dimension of the time series data. In fact, to apply the test, only a
bandwidth is needed for the nonparametric copula. We prove that the test
statistic is asymptotically pivotal under the null hypothesis, establish
local power properties, and motivate the validity of the bootstrap
technique that we use in finite sample settings. A simulation study
illustrates the size and power properties of the test. We illustrate the
practical relevance of our test by considering two empirical applications
where we examine the Granger noncausality between financial variables. In
a first application and contrary to the general findings in the
literature, we provide evidence on two alternative mechanisms of nonlinear
interaction between returns and volatilities: nonlinear leverage and
volatility feedback effects. This can help better understand the
well-known asymmetric volatility phenomenon. In a second application, we
investigate the Granger causality between stock index returns and trading
volume. We find convincing evidence of linear and nonlinear feedback
effects from stock returns to volume, but only weak evidence of a
nonlinear feedback effect from volume to stock returns.
Journal: Journal of Business & Economic Statistics
Pages: 275-287
Issue: 2
Volume: 30
Year: 2011
Month: 10
X-DOI: 10.1080/07350015.2011.638831
File-URL: http://hdl.handle.net/10.1080/07350015.2011.638831
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:275-287
Template-Type: ReDIF-Article 1.0
Author-Name: Kyungchul Song
Author-X-Name-First: Kyungchul
Author-X-Name-Last: Song
Title: Testing Predictive Ability and Power Robustification
Abstract:
One of the approaches to compare forecasting methods is to test whether
the risk from a benchmark prediction is smaller than the others. The test
can be embedded into a general problem of testing inequality constraints
using a one-sided sup functional. Hansen showed that such tests suffer
from asymptotic bias. This article generalizes this observation, and
proposes a hybrid method to robustify the power properties by coupling a
one-sided sup test with a complementary test. The method can also be
applied to testing stochastic dominance or moment inequalities. Simulation
studies demonstrate that the new test performs well relative to the
existing methods. For illustration, the new test was applied to analyze
the forecastability of stock returns using technical indicators employed
by White.
Journal: Journal of Business & Economic Statistics
Pages: 288-296
Issue: 2
Volume: 30
Year: 2011
Month: 10
X-DOI: 10.1080/07350015.2012.663245
File-URL: http://hdl.handle.net/10.1080/07350015.2012.663245
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:288-296
Template-Type: ReDIF-Article 1.0
Author-Name: Olesya V. Grishchenko
Author-X-Name-First: Olesya V.
Author-X-Name-Last: Grishchenko
Author-Name: Marco Rossi
Author-X-Name-First: Marco
Author-X-Name-Last: Rossi
Title: The Role of Heterogeneity in Asset Pricing: The Effect of a Clustering Approach
Abstract:
In this article we use a novel clustering approach to study the role of
heterogeneity in asset pricing. We present evidence that the equity
premium is consistent with a stochastic discount factor (SDF) calculated
as the average of the household clusters’ intertemporal marginal
rates of substitution in the 1984--2002 period. The result is driven by
the skewness of the cluster-based cross-sectional distribution of
consumption growth, but cannot be explained by the cross-sectional
variance and mean alone. We find that nine clusters are sufficient to
explain the equity premium with relative risk aversion coefficient equal
to six. The result is robust to various averaging schemes of cluster-based
consumption growth used to construct the SDF. Lastly, the analysis reveals
that standard approximation schemes of the SDF using individual household
data produce unreliable results, implying a negative SDF.
Journal: Journal of Business & Economic Statistics
Pages: 297-311
Issue: 2
Volume: 30
Year: 2011
Month: 11
X-DOI: 10.1080/07350015.2012.670544
File-URL: http://hdl.handle.net/10.1080/07350015.2012.670544
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:297-311
Template-Type: ReDIF-Article 1.0
Author-Name: Peter Arcidiacono
Author-X-Name-First: Peter
Author-X-Name-Last: Arcidiacono
Author-Name: Ahmed Khwaja
Author-X-Name-First: Ahmed
Author-X-Name-Last: Khwaja
Author-Name: Lijing Ouyang
Author-X-Name-First: Lijing
Author-X-Name-Last: Ouyang
Title: Habit Persistence and Teen Sex: Could Increased Access to Contraception Have Unintended Consequences for Teen Pregnancies?
Abstract:
We develop a dynamic discrete-choice model of teen sex and pregnancy that
incorporates habit persistence. Habit persistence has two sources here.
The first is a “fixed cost” of having sex, which relates to
a moral or psychological barrier that has been crossed the first time one
has sex. The second is a “transition cost,” whereby once a
particular relationship has progressed to sex, it is difficult to move
back. We estimate significant habit persistence in teen sex, implying that
the long-run effects of contraception policy may be different from their
short-run counterparts, especially if the failure rate of contraception is
sufficiently large. Programs that increase access to contraception are
found to decrease teen pregnancies in the short run but increase them in
the long run.
Journal: Journal of Business & Economic Statistics
Pages: 312-325
Issue: 2
Volume: 30
Year: 2011
Month: 11
X-DOI: 10.1080/07350015.2011.652052
File-URL: http://hdl.handle.net/10.1080/07350015.2011.652052
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:312-325
Template-Type: ReDIF-Article 1.0
Author-Name: Christiane Baumeister
Author-X-Name-First: Christiane
Author-X-Name-Last: Baumeister
Author-Name: Lutz Kilian
Author-X-Name-First: Lutz
Author-X-Name-Last: Kilian
Title: Real-Time Forecasts of the Real Price of Oil
Abstract:
We construct a monthly real-time dataset consisting of vintages for
1991.1--2010.12 that is suitable for generating forecasts of the real
price of oil from a variety of models. We document that revisions of the
data typically represent news, and we introduce backcasting and nowcasting
techniques to fill gaps in the real-time data. We show that real-time
forecasts of the real price of oil can be more accurate than the no-change
forecast at horizons up to 1 year. In some cases, real-time mean squared
prediction error (MSPE) reductions may be as high as 25% 1 month ahead and
24% 3 months ahead. This result is in striking contrast to related results
in the literature for asset prices. In particular, recursive vector
autoregressive (VAR) forecasts based on global oil market variables tend
to have lower MSPE at short horizons than forecasts based on oil futures
prices, forecasts based on autoregressive (AR) and autoregressive moving
average (ARMA) models, and the no-change forecast. In addition, these VAR
models have consistently higher directional accuracy.
Journal: Journal of Business & Economic Statistics
Pages: 326-336
Issue: 2
Volume: 30
Year: 2011
Month: 9
X-DOI: 10.1080/07350015.2011.648859
File-URL: http://hdl.handle.net/10.1080/07350015.2011.648859
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:2:p:326-336
Template-Type: ReDIF-Article 1.0
Author-Name: Heng Lian
Author-X-Name-First: Heng
Author-X-Name-Last: Lian
Title: Semiparametric Estimation of Additive Quantile Regression Models by Two-Fold Penalty
Abstract:
In this article, we propose a model selection and semiparametric
estimation method for additive models in the context of quantile
regression problems. In particular, we are interested in finding nonzero
components as well as linear components in the conditional quantile
function. Our approach is based on spline approximation for the components
aided by two Smoothly Clipped Absolute Deviation (SCAD) penalty terms. The
advantage of our approach is that one can automatically choose between
general additive models, partially linear additive models, and linear
models in a single estimation step. The most important contribution is
that this is achieved without the need for specifying which covariates
enter the linear part, solving one serious practical issue for models with
partially linear additive structure. Simulation studies as well as a real
dataset are used to illustrate our method.
Journal: Journal of Business & Economic Statistics
Pages: 337-350
Issue: 3
Volume: 30
Year: 2012
Month: 3
X-DOI: 10.1080/07350015.2012.693851
File-URL: http://hdl.handle.net/10.1080/07350015.2012.693851
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:337-350
Template-Type: ReDIF-Article 1.0
Author-Name: H. Peter Boswijk
Author-X-Name-First: H. Peter
Author-X-Name-Last: Boswijk
Author-Name: Franc Klaassen
Author-X-Name-First: Franc
Author-X-Name-Last: Klaassen
Title: Why Frequency Matters for Unit Root Testing in Financial Time Series
Abstract:
It is generally believed that the power of unit root tests is determined
only by the time span of observations, not by their sampling frequency. We
show that the sampling frequency does matter for data displaying fat
tails and volatility clustering, such as financial time series. Our claim
builds on recent work on unit root testing based on non-Gaussian
GARCH-based likelihood functions. Such methods yield power gains in the
presence of fat tails and volatility clustering, and the strength of these
features increases with the sampling frequency. This is illustrated using
local power calculations and an empirical application to real exchange
rates.
Journal: Journal of Business & Economic Statistics
Pages: 351-357
Issue: 3
Volume: 30
Year: 2011
Month: 9
X-DOI: 10.1080/07350015.2011.648858
File-URL: http://hdl.handle.net/10.1080/07350015.2011.648858
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:3:p:351-357
Template-Type: ReDIF-Article 1.0
Author-Name: Joshua C.C. Chan
Author-X-Name-First: Joshua C.C.
Author-X-Name-Last: Chan
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: Roberto Leon-Gonzalez
Author-X-Name-First: Roberto
Author-X-Name-Last: Leon-Gonzalez
Author-Name: Rodney W. Strachan
Author-X-Name-First: Rodney W.
Author-X-Name-Last: Strachan
Title: Time Varying Dimension Models
Abstract:
Time varying parameter (TVP) models have enjoyed an increasing popularity
in empirical macroeconomics. However, TVP models are parameter-rich and
risk over-fitting unless the dimension of the model is small. Motivated by
this worry, this article proposes several Time Varying Dimension (TVD)
models where the dimension of the model can change over time, allowing for
the model to automatically choose a more parsimonious TVP representation,
or to switch between different parsimonious representations. Our TVD
models all fall in the category of dynamic mixture models. We discuss the
properties of these models and present methods for Bayesian inference. An
application involving U.S. inflation forecasting illustrates and compares
the different TVD models. We find our TVD approaches exhibit better
forecasting performance than many standard benchmarks and shrink toward
parsimonious specifications. This article has online supplementary
materials.
Journal: Journal of Business & Economic Statistics
Pages: 358-367
Issue: 3
Volume: 30
Year: 2012
Month: 1
X-DOI: 10.1080/07350015.2012.663258
File-URL: http://hdl.handle.net/10.1080/07350015.2012.663258
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:358-367
Template-Type: ReDIF-Article 1.0
Author-Name: Fulvio Corsi
Author-X-Name-First: Fulvio
Author-X-Name-Last: Corsi
Author-Name: Roberto Renò
Author-X-Name-First: Roberto
Author-X-Name-Last: Renò
Title: Discrete-Time Volatility Forecasting With Persistent Leverage Effect and the Link With Continuous-Time Volatility Modeling
Abstract:
We first propose a reduced-form model in discrete time
for S&P 500 volatility showing that the forecasting performance can be
significantly improved by introducing a persistent leverage effect with a
long-range dependence similar to that of volatility itself. We also find a
strongly significant positive impact of lagged jumps on volatility, which
however is absorbed more quickly. We then estimate
continuous-time stochastic volatility models that are
able to reproduce the statistical features captured by the discrete-time
model. We show that a single-factor model driven by a fractional Brownian
motion is unable to reproduce the volatility dynamics observed in the
data, while a multifactor Markovian model fully replicates the persistence
of both volatility and leverage effect. The impact of jumps can be
associated with a common jump component in price and volatility. This
article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 368-380
Issue: 3
Volume: 30
Year: 2012
Month: 1
X-DOI: 10.1080/07350015.2012.663261
File-URL: http://hdl.handle.net/10.1080/07350015.2012.663261
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:368-380
Template-Type: ReDIF-Article 1.0
Author-Name: Christine Amsler
Author-X-Name-First: Christine
Author-X-Name-Last: Amsler
Author-Name: Peter Schmidt
Author-X-Name-First: Peter
Author-X-Name-Last: Schmidt
Title: Tests of Short Memory With Thick-Tailed Errors
Abstract:
In this article, we consider the robustness to fat tails of four
stationarity tests. We also consider their sensitivity to the number of
lags used in long-run variance estimation, and the power of the tests.
Lo's modified rescaled range (MR/S) test is not very robust. Choi's
Lagrange multiplier (LM) test has excellent robustness properties but is
not generally as powerful as the Kwiatkowski--Phillips--Schmidt--Shin
(KPSS) test. As an analytical framework for fat tails, we suggest
local-to-finite variance asymptotics, based on a representation of the
process as a weighted sum of a finite variance process and an infinite
variance process, where the weights depend on the sample size and a
constant. The sensitivity of the asymptotic distribution of a test to the
weighting constant is a good indicator of its robustness to fat tails.
This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 381-390
Issue: 3
Volume: 30
Year: 2011
Month: 11
X-DOI: 10.1080/07350015.2012.669668
File-URL: http://hdl.handle.net/10.1080/07350015.2012.669668
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2011:i:3:p:381-390
Template-Type: ReDIF-Article 1.0
Author-Name: John M. Maheu
Author-X-Name-First: John M.
Author-X-Name-Last: Maheu
Author-Name: Thomas H. McCurdy
Author-X-Name-First: Thomas H.
Author-X-Name-Last: McCurdy
Author-Name: Yong Song
Author-X-Name-First: Yong
Author-X-Name-Last: Song
Title: Components of Bull and Bear Markets: Bull Corrections and Bear Rallies
Abstract:
Existing methods of partitioning the market index into bull and bear
regimes do not identify market corrections or bear market rallies. In
contrast, our probabilistic model of the return distribution allows for
rich and heterogeneous intraregime dynamics. We focus on the
characteristics and dynamics of bear market rallies and bull market
corrections, including, for example, the probability of transition from a
bear market rally into a bull market versus back to the primary bear
state. A Bayesian estimation approach accounts for parameter and regime
uncertainty and provides probability statements regarding future regimes
and returns. We show how to compute the predictive density of long-horizon
returns and discuss the improvements our model provides over benchmarks.
This article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 391-403
Issue: 3
Volume: 30
Year: 2012
Month: 2
X-DOI: 10.1080/07350015.2012.680412
File-URL: http://hdl.handle.net/10.1080/07350015.2012.680412
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:391-403
Template-Type: ReDIF-Article 1.0
Author-Name: Lane F. Burgette
Author-X-Name-First: Lane F.
Author-X-Name-Last: Burgette
Author-Name: Erik V. Nordheim
Author-X-Name-First: Erik V.
Author-X-Name-Last: Nordheim
Title: The Trace Restriction: An Alternative Identification Strategy for the Bayesian Multinomial Probit Model
Abstract:
Previous authors have made Bayesian multinomial probit models
identifiable by fixing a parameter on the main diagonal of the covariance
matrix. The choice of which element one fixes can influence posterior
predictions. Thus, we propose restricting the trace of the covariance
matrix, which we achieve without computational penalty. This permits a
prior that is symmetric to permutations of the nonbase outcome categories.
We find in real and simulated consumer choice datasets that the
trace-restricted model is less prone to making extreme predictions.
Further, the trace restriction can provide stronger identification,
yielding marginal posterior distributions that are more easily
interpreted.
Journal: Journal of Business & Economic Statistics
Pages: 404-410
Issue: 3
Volume: 30
Year: 2012
Month: 2
X-DOI: 10.1080/07350015.2012.680416
File-URL: http://hdl.handle.net/10.1080/07350015.2012.680416
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:404-410
Template-Type: ReDIF-Article 1.0
Author-Name: Victoria Prowse
Author-X-Name-First: Victoria
Author-X-Name-Last: Prowse
Title: Modeling Employment Dynamics With State Dependence and Unobserved Heterogeneity
Abstract:
This study extends existing work on the dynamics of labor force
participation by distinguishing between full-time and part-time employment
and by allowing unobserved heterogeneity in the effects of previous
employment outcomes, children and education on labor supply behavior. In
addition, unobserved heterogeneity may feature autocorrelation and
correlated random effects. The results reveal significant variation in the
effects of children and education on labor supply behavior. Moreover, the
omission of random coefficients and autocorrelation biases estimates of
state dependencies. On average, temporary shocks that increase the rate of
part-time employment lead subsequently to lower rates of nonemployment
than do shocks that temporarily increase the rate of full-time work. The
article has additional online supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 411-431
Issue: 3
Volume: 30
Year: 2012
Month: 4
X-DOI: 10.1080/07350015.2012.697851
File-URL: http://hdl.handle.net/10.1080/07350015.2012.697851
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:411-431
Template-Type: ReDIF-Article 1.0
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Author-Name: Atsushi Inoue
Author-X-Name-First: Atsushi
Author-X-Name-Last: Inoue
Title: Out-of-Sample Forecast Tests Robust to the Choice of Window Size
Abstract:
This article proposes new methodologies for evaluating economic
models' out-of-sample forecasting performance that are robust to
the choice of the estimation window size. The methodologies involve
evaluating the predictive ability of forecasting models over a wide range
of window sizes. The study shows that the tests proposed in the literature
may lack the power to detect predictive ability and might be subject to
data snooping across different window sizes if used repeatedly. An
empirical application shows the usefulness of the methodologies for
evaluating exchange rate models' forecasting ability.
Journal: Journal of Business & Economic Statistics
Pages: 432-453
Issue: 3
Volume: 30
Year: 2012
Month: 4
X-DOI: 10.1080/07350015.2012.693850
File-URL: http://hdl.handle.net/10.1080/07350015.2012.693850
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:432-453
Template-Type: ReDIF-Article 1.0
Author-Name: Michael D. Bauer
Author-X-Name-First: Michael D.
Author-X-Name-Last: Bauer
Author-Name: Glenn D. Rudebusch
Author-X-Name-First: Glenn D.
Author-X-Name-Last: Rudebusch
Author-Name: Jing Cynthia Wu
Author-X-Name-First: Jing Cynthia
Author-X-Name-Last: Wu
Title: Correcting Estimation Bias in Dynamic Term Structure Models
Abstract:
The affine dynamic term structure model (DTSM) is the canonical empirical
finance representation of the yield curve. However, the possibility that
DTSM estimates may be distorted by small-sample bias has been largely
ignored. We show that conventional estimates of DTSM coefficients are
indeed severely biased, and this bias results in misleading estimates of
expected future short-term interest rates and of long-maturity term
premia. We provide a variety of bias-corrected estimates of affine DTSMs,
for both maximally flexible and overidentified specifications. Our
estimates imply interest rate expectations and term premia that are more
plausible from a macrofinance perspective. This article has supplementary
material online.
Journal: Journal of Business & Economic Statistics
Pages: 454-467
Issue: 3
Volume: 30
Year: 2012
Month: 4
X-DOI: 10.1080/07350015.2012.693855
File-URL: http://hdl.handle.net/10.1080/07350015.2012.693855
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:454-467
Template-Type: ReDIF-Article 1.0
Author-Name: Guillaume Horny
Author-X-Name-First: Guillaume
Author-X-Name-Last: Horny
Author-Name: Rute Mendes
Author-X-Name-First: Rute
Author-X-Name-Last: Mendes
Author-Name: Gerard J. van den Berg
Author-X-Name-First: Gerard J.
Author-X-Name-Last: van den Berg
Title: Job Durations With Worker- and Firm-Specific Effects: MCMC Estimation With Longitudinal Employer--Employee Data
Abstract:
We study job durations using a multivariate hazard model allowing for
worker-specific and firm-specific unobserved determinants. The latter are
captured by unobserved heterogeneity terms or random effects, one at the
firm level and another at the worker level. This enables us to decompose
the variation in job durations into the relative contribution of the
worker and the firm. We also allow the unobserved terms to be correlated
in a model that is primarily relevant for markets with small firms. For
the empirical analysis, we use a Portuguese longitudinal matched
employer--employee dataset. The model is estimated with a Bayesian Markov
chain Monte Carlo (MCMC) estimation method. The results imply that
unobserved firm characteristics explain almost 40% of the systematic
variation in log job durations. In addition, we find a positive
correlation between unobserved worker and firm characteristics.
Journal: Journal of Business & Economic Statistics
Pages: 468-480
Issue: 3
Volume: 30
Year: 2012
Month: 3
X-DOI: 10.1080/07350015.2012.698142
File-URL: http://hdl.handle.net/10.1080/07350015.2012.698142
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:3:p:468-480
Template-Type: ReDIF-Article 1.0
Author-Name: James H. Stock
Author-X-Name-First: James H.
Author-X-Name-Last: Stock
Author-Name: Mark W. Watson
Author-X-Name-First: Mark W.
Author-X-Name-Last: Watson
Title: Generalized Shrinkage Methods for Forecasting Using Many Predictors
Abstract:
This article provides a simple shrinkage representation that describes
the operational characteristics of various forecasting methods designed
for a large number of orthogonal predictors (such as principal
components). These methods include pretest methods, Bayesian model
averaging, empirical Bayes, and bagging. We compare empirically forecasts
from these methods with dynamic factor model (DFM) forecasts using a U.S.
macroeconomic dataset with 143 quarterly variables spanning 1960--2008.
For most series, including measures of real economic activity, the
shrinkage forecasts are inferior to the DFM forecasts. This article has
online supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 481-493
Issue: 4
Volume: 30
Year: 2012
Month: 6
X-DOI: 10.1080/07350015.2012.715956
File-URL: http://hdl.handle.net/10.1080/07350015.2012.715956
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:481-493
Template-Type: ReDIF-Article 1.0
Author-Name: Nikolay Gospodinov
Author-X-Name-First: Nikolay
Author-X-Name-Last: Gospodinov
Author-Name: Raymond Kan
Author-X-Name-First: Raymond
Author-X-Name-Last: Kan
Author-Name: Cesare Robotti
Author-X-Name-First: Cesare
Author-X-Name-Last: Robotti
Title: Further Results on the Limiting Distribution of GMM Sample Moment Conditions
Abstract:
In this article, we examine the limiting behavior of generalized method
of moments (GMM) sample moment conditions and point out an important
discontinuity that arises in their asymptotic distribution. We show that
the part of the scaled sample moment conditions that gives rise to
degeneracy in the asymptotic normal distribution is
T-consistent and has a nonstandard limiting distribution.
We derive the appropriate asymptotic (weighted chi-squared) distribution
when this degeneracy occurs and show how to conduct asymptotically valid
statistical inference. We also propose a new rank test that provides
guidance on which (standard or nonstandard) asymptotic framework should be
used for inference. The finite-sample properties of the proposed
asymptotic approximation are demonstrated using simulated data from some
popular asset pricing models.
Journal: Journal of Business & Economic Statistics
Pages: 494-504
Issue: 4
Volume: 30
Year: 2012
Month: 5
X-DOI: 10.1080/07350015.2012.694743
File-URL: http://hdl.handle.net/10.1080/07350015.2012.694743
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:494-504
Template-Type: ReDIF-Article 1.0
Author-Name: Emma M. Iglesias
Author-X-Name-First: Emma M.
Author-X-Name-Last: Iglesias
Author-Name: Garry D. A. Phillips
Author-X-Name-First: Garry D. A.
Author-X-Name-Last: Phillips
Title: Almost Unbiased Estimation in Simultaneous Equation Models With Strong and/or Weak Instruments
Abstract:
We propose two simple bias-reduction procedures that apply to estimators
in a general static simultaneous equation model and that are valid under
relatively weak distributional assumptions for the errors. Standard
jackknife estimators, as applied to 2SLS, may not reduce the bias of the
exogenous variable coefficient estimators since the estimator biases are
not monotonically nonincreasing with sample size (a necessary condition
for successful bias reduction) and they have moments only up to the order
of overidentification. Our proposed approaches do not have either of these
drawbacks. (1) In the first procedure, both endogenous and exogenous
variable parameter estimators are unbiased to order T
-super-− 2 and when implemented for k-class
estimators for which k > 1, the higher-order moments will
exist. (2) An alternative second approach is based on taking linear
combinations of k-class estimators for k
> 1. In general, this yields estimators that are unbiased to order
T -super-− 1 and that possess higher moments. We
also prove theoretically how the combined k-class
estimator produces a smaller mean squared error than 2SLS when the degree
of overidentification of the system is 0, 1, or at least 8. The
performance of the two procedures is compared with 2SLS in a number of
Monte Carlo experiments using a simple two-equation model. Finally, an
application shows the usefulness of our new estimator in practice versus
competitor estimators.
Journal: Journal of Business & Economic Statistics
Pages: 505-520
Issue: 4
Volume: 30
Year: 2012
Month: 6
X-DOI: 10.1080/07350015.2012.715959
File-URL: http://hdl.handle.net/10.1080/07350015.2012.715959
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:505-520
Template-Type: ReDIF-Article 1.0
Author-Name: Siem Jan Koopman
Author-X-Name-First: Siem Jan
Author-X-Name-Last: Koopman
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Author-Name: Bernd Schwaab
Author-X-Name-First: Bernd
Author-X-Name-Last: Schwaab
Title: Dynamic Factor Models With Macro, Frailty, and Industry Effects for U.S. Default Counts: The Credit Crisis of 2008
Abstract:
We develop a high-dimensional, nonlinear, and non-Gaussian dynamic factor
model for the decomposition of systematic default risk conditions into
latent components for (1) macroeconomic/financial risk, (2) autonomous
default dynamics (frailty), and (3) industry-specific effects. We analyze
discrete U.S. corporate default counts together with macroeconomic and
financial variables in one unifying framework. We find that approximately
35% of default rate variation is due to systematic and industry factors.
Approximately one-third of this systematic variation is captured by the
macroeconomic and financial factors. The remainder is captured by frailty
(40%) and industry (25%) effects. The default-specific effects are
particularly relevant before and during times of financial turbulence. We
detect a build-up of systematic risk over the period preceding the 2008
credit crisis. This article has online supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 521-532
Issue: 4
Volume: 30
Year: 2012
Month: 5
X-DOI: 10.1080/07350015.2012.700859
File-URL: http://hdl.handle.net/10.1080/07350015.2012.700859
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:521-532
Template-Type: ReDIF-Article 1.0
Author-Name: Yiu-kuen Tse
Author-X-Name-First: Yiu-kuen
Author-X-Name-Last: Tse
Author-Name: Thomas Tao Yang
Author-X-Name-First: Thomas Tao
Author-X-Name-Last: Yang
Title: Estimation of High-Frequency Volatility: An Autoregressive Conditional Duration Approach
Abstract:
We propose a method to estimate the intraday volatility of a stock by
integrating the instantaneous conditional return variance per unit time
obtained from the autoregressive conditional duration (ACD) model, called
the ACD-ICV method. We compare the daily volatility estimated using the
ACD-ICV method against several versions of the realized volatility (RV)
method, including the bipower variation RV with subsampling, the realized
kernel estimate, and the duration-based RV. Our Monte Carlo results show
that the ACD-ICV method has lower root mean-squared error than the RV
methods in almost all cases considered. This article has online
supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 533-545
Issue: 4
Volume: 30
Year: 2012
Month: 4
X-DOI: 10.1080/07350015.2012.707582
File-URL: http://hdl.handle.net/10.1080/07350015.2012.707582
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:533-545
Template-Type: ReDIF-Article 1.0
Author-Name: Roland Rathelot
Author-X-Name-First: Roland
Author-X-Name-Last: Rathelot
Title: Measuring Segregation When Units Are Small: A Parametric Approach
Abstract:
This article considers the issue of measuring segregation in a population
of units that contain few individuals (e.g., establishments, classrooms).
When units are small, the usual segregation indices, which are based on
sample proportions, are biased. We propose a parametric solution: the
probability that an individual within a given unit belongs to the minority
is assumed to be distributed as a mixture of Beta distributions. The model
can be estimated and indices deduced. Simulations show that this new
method performs well compared to existing ones, even in the case of
misspecification. An application to residential segregation in France
according to parents' nationalities is then undertaken. This
article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 546-553
Issue: 4
Volume: 30
Year: 2012
Month: 6
X-DOI: 10.1080/07350015.2012.707586
File-URL: http://hdl.handle.net/10.1080/07350015.2012.707586
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:546-553
Template-Type: ReDIF-Article 1.0
Author-Name: Michael P. Clements
Author-X-Name-First: Michael P.
Author-X-Name-Last: Clements
Author-Name: Ana Beatriz Galvão
Author-X-Name-First: Ana Beatriz
Author-X-Name-Last: Galvão
Title: Improving Real-Time Estimates of Output and Inflation Gaps With Multiple-Vintage Models
Abstract:
Real-time estimates of output gaps and inflation gaps differ from the
values that are obtained using data available long after the event. Part
of the problem is that the data on which the real-time estimates are based
are subsequently revised. We show that vector-autoregressive models of data
vintages provide forecasts of post-revision values of future observations
and of already-released observations capable of improving estimates of
output and inflation gaps in real time. Our findings indicate that annual
revisions to output and inflation data are in part predictable based on
their past vintages. This article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 554-562
Issue: 4
Volume: 30
Year: 2012
Month: 5
X-DOI: 10.1080/07350015.2012.707588
File-URL: http://hdl.handle.net/10.1080/07350015.2012.707588
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:554-562
Template-Type: ReDIF-Article 1.0
Author-Name: Gholamreza Hajargasht
Author-X-Name-First: Gholamreza
Author-X-Name-Last: Hajargasht
Author-Name: William E. Griffiths
Author-X-Name-First: William E.
Author-X-Name-Last: Griffiths
Author-Name: Joseph Brice
Author-X-Name-First: Joseph
Author-X-Name-Last: Brice
Author-Name: D.S. Prasada Rao
Author-X-Name-First: D.S. Prasada
Author-X-Name-Last: Rao
Author-Name: Duangkamon Chotikapanich
Author-X-Name-First: Duangkamon
Author-X-Name-Last: Chotikapanich
Title: Inference for Income Distributions Using Grouped Data
Abstract:
We develop a general approach to estimation and inference for income
distributions using grouped or aggregate data that are typically available
in the form of population shares and class mean incomes, with unknown
group bounds. We derive generic moment conditions and an optimal weight
matrix that can be used for generalized method-of-moments (GMM) estimation
of any parametric income distribution. Our derivation of the weight matrix
and its inverse allows us to express the seemingly complex GMM objective
function in a relatively simple form that facilitates estimation. We show
that our proposed approach, which incorporates information on class means
as well as population proportions, is more efficient than maximum
likelihood estimation of the multinomial distribution, which uses only
population proportions. In contrast to the earlier work of Chotikapanich,
Griffiths, and Rao, and Chotikapanich, Griffiths, Rao, and Valencia, which
did not specify a formal GMM framework, did not provide methodology for
obtaining standard errors, and restricted the analysis to the beta-2
distribution, we provide standard errors for estimated parameters and
relevant functions of them, such as inequality and poverty measures, and
we provide methodology for all distributions. A test statistic for testing
the adequacy of a distribution is proposed. Using eight countries/regions
for the year 2005, we show how the methodology can be applied to estimate
the parameters of the generalized beta distribution of the second kind
(GB2), and its special-case distributions, the beta-2, Singh--Maddala,
Dagum, generalized gamma, and lognormal distributions. We test the
adequacy of each distribution and compare predicted and actual income
shares, where the number of groups used for prediction can differ from the
number used in estimation. Estimates and standard errors for inequality
and poverty measures are provided. Supplementary materials for this
article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 563-575
Issue: 4
Volume: 30
Year: 2012
Month: 5
X-DOI: 10.1080/07350015.2012.707590
File-URL: http://hdl.handle.net/10.1080/07350015.2012.707590
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:563-575
Template-Type: ReDIF-Article 1.0
Author-Name: Bruno Feunou
Author-X-Name-First: Bruno
Author-X-Name-Last: Feunou
Author-Name: Roméo Tédongap
Author-X-Name-First: Roméo
Author-X-Name-Last: Tédongap
Title: A Stochastic Volatility Model With Conditional Skewness
Abstract:
We develop a discrete-time affine stochastic volatility model with
time-varying conditional skewness (SVS). Importantly, we disentangle the
dynamics of conditional volatility and conditional skewness in a coherent
way. Our approach allows current asset returns to be asymmetric
conditional on current factors and past information, which we term
contemporaneous asymmetry. Conditional skewness is an explicit combination
of the conditional leverage effect and contemporaneous asymmetry. We
derive analytical formulas for various return moments that are used for
generalized method of moments (GMM) estimation. Applying our approach to
S&P500 index daily returns and option data, we show that one- and
two-factor SVS models provide a better fit for both the historical and the
risk-neutral distribution of returns, compared to existing affine
generalized autoregressive conditional heteroscedasticity (GARCH), and
stochastic volatility with jumps (SVJ) models. Our results are not due to
an overparameterization of the model: the one-factor SVS models have the
same number of parameters as their one-factor GARCH competitors and fewer
than the SVJ benchmark.
Journal: Journal of Business & Economic Statistics
Pages: 576-591
Issue: 4
Volume: 30
Year: 2012
Month: 7
X-DOI: 10.1080/07350015.2012.715958
File-URL: http://hdl.handle.net/10.1080/07350015.2012.715958
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:4:p:576-591
Template-Type: ReDIF-Article 1.0
Author-Name: Christopher J. Bennett
Author-X-Name-First: Christopher J.
Author-X-Name-Last: Bennett
Author-Name: Ričardas Zitikis
Author-X-Name-First: Ričardas
Author-X-Name-Last: Zitikis
Title: Examining the Distributional Effects of Military Service on Earnings: A Test of Initial Dominance
Abstract:
Existing empirical evidence suggests that the effects of Vietnam veteran
status on earnings in the decade-and-a-half following service may be
concentrated in the lower tail of the earnings distribution. Motivated by
this evidence, we develop a formal statistical procedure that is
specifically designed to test for lower tail dominance in the
distributions of earnings. When applied to the same data as in previous
studies, the test reveals that the distribution of earnings for veterans
is indeed dominated by the distribution of earnings for nonveterans up to
$12,600 (in 1978 dollars), thereby indicating that there was higher social
welfare and lower poverty experienced by nonveterans in the
decade-and-a-half following military service.
Journal: Journal of Business & Economic Statistics
Pages: 1-15
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.741053
File-URL: http://hdl.handle.net/10.1080/07350015.2012.741053
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:1-15
Template-Type: ReDIF-Article 1.0
Author-Name: Miguel A. Delgado
Author-X-Name-First: Miguel A.
Author-X-Name-Last: Delgado
Author-Name: Juan Carlos Escanciano
Author-X-Name-First: Juan Carlos
Author-X-Name-Last: Escanciano
Title: Conditional Stochastic Dominance Testing
Abstract:
This article proposes bootstrap-based stochastic dominance tests for
nonparametric conditional distributions and their moments. We exploit the
fact that a conditional distribution dominates the other if and only if
the difference between the marginal joint distributions is monotonic in
the explanatory variable at each value of the dependent variable. The
proposed test statistic compares restricted and unrestricted estimators of
the difference between the joint distributions, and it can be implemented
under minimal smoothness requirements on the underlying nonparametric
curves and without resorting to smooth estimation. The finite sample
properties of the proposed test are examined by means of a Monte Carlo
study. We illustrate the test by studying the impact on postintervention
earnings of the National Supported Work Demonstration, a randomized labor
training program carried out in the 1970s.
Journal: Journal of Business & Economic Statistics
Pages: 16-28
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.723556
File-URL: http://hdl.handle.net/10.1080/07350015.2012.723556
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:16-28
Template-Type: ReDIF-Article 1.0
Author-Name: Jan J. J. Groen
Author-X-Name-First: Jan J. J.
Author-X-Name-Last: Groen
Author-Name: Richard Paap
Author-X-Name-First: Richard
Author-X-Name-Last: Paap
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Title: Real-Time Inflation Forecasting in a Changing World
Abstract:
This article revisits the accuracy of inflation forecasting using
activity and expectations variables. We apply Bayesian model averaging
across different regression specifications selected from a set of
potential predictors that includes lagged values of inflation, a host of
real activity data, term structure data, (relative) price data, and
surveys. In this model average, we can entertain different channels of
structural instability, by either incorporating stochastic breaks in the
regression parameters of each individual specification within this
average, or allowing for breaks in the error variance of the overall model
average, or both. Thus, our framework simultaneously addresses structural
change and model uncertainty that would unavoidably affect any inflation
forecast model. The different versions of our framework are used to model
U.S. personal consumption expenditures (PCE) deflator and gross domestic
product (GDP) deflator inflation rates for the 1960--2011 period. A
real-time inflation forecast evaluation shows that averaging over many
predictors in a model that at least allows for structural breaks in the
error variance results in very accurate point and density forecasts,
especially for the post-1984 period. Our framework is especially useful
when forecasting, in real-time, the likelihood of lower-than-usual
inflation rates over the medium term. This article has online
supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 29-44
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.727718
File-URL: http://hdl.handle.net/10.1080/07350015.2012.727718
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:29-44
Template-Type: ReDIF-Article 1.0
Author-Name: Pierre Guérin
Author-X-Name-First: Pierre
Author-X-Name-Last: Guérin
Author-Name: Massimiliano Marcellino
Author-X-Name-First: Massimiliano
Author-X-Name-Last: Marcellino
Title: Markov-Switching MIDAS Models
Abstract:
This article introduces a new regression model--Markov-switching
mixed data sampling (MS-MIDAS)—that incorporates regime changes in
the parameters of the mixed data sampling (MIDAS) models and allows for
the use of mixed-frequency data in Markov-switching models. After a
discussion of estimation and inference for MS-MIDAS and a small sample
simulation-based evaluation, the MS-MIDAS model is applied to the
prediction of U.S. economic activity, in terms of both quantitative
forecasts of the aggregate economic activity and the prediction of the
business cycle regimes. Both simulation and empirical results indicate
that MS-MIDAS is a very useful specification.
Journal: Journal of Business & Economic Statistics
Pages: 45-56
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.727721
File-URL: http://hdl.handle.net/10.1080/07350015.2012.727721
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:45-56
Template-Type: ReDIF-Article 1.0
Author-Name: Qi Li
Author-X-Name-First: Qi
Author-X-Name-Last: Li
Author-Name: Juan Lin
Author-X-Name-First: Juan
Author-X-Name-Last: Lin
Author-Name: Jeffrey S. Racine
Author-X-Name-First: Jeffrey S.
Author-X-Name-Last: Racine
Title: Optimal Bandwidth Selection for Nonparametric Conditional Distribution and Quantile Functions
Abstract:
We propose a data-driven least-square cross-validation method to
optimally select smoothing parameters for the nonparametric estimation of
conditional cumulative distribution functions and conditional quantile
functions. We allow for general multivariate covariates that can be
continuous, categorical, or a mix of either. We provide asymptotic
analysis, examine finite-sample properties via Monte Carlo simulation, and
consider an application involving testing for first-order stochastic
dominance of children's health conditional on parental education
and income. This article has supplementary materials online.
Journal: Journal of Business & Economic Statistics
Pages: 57-65
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.738955
File-URL: http://hdl.handle.net/10.1080/07350015.2012.738955
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:57-65
Template-Type: ReDIF-Article 1.0
Author-Name: Sermin Gungor
Author-X-Name-First: Sermin
Author-X-Name-Last: Gungor
Author-Name: Richard Luger
Author-X-Name-First: Richard
Author-X-Name-Last: Luger
Title: Testing Linear Factor Pricing Models With Large Cross Sections: A Distribution-Free Approach
Abstract:
In this article, we develop a finite-sample distribution-free procedure
to test the beta-pricing representation of linear factor pricing models.
In sharp contrast to extant finite-sample tests, our framework allows for
unknown forms of nonnormalities, heteroscedasticity, and time-varying
covariances. The power of the proposed test procedure increases as the
time series lengthens and/or the cross section becomes larger. So the
criticism sometimes heard that nonparametric tests lack power does not
apply here, since the number of test assets is chosen by the user. This
also stands in contrast to the usual tests that lose power or may not even
be computable if the number of test assets is too large. Supplementary
materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 66-77
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.740435
File-URL: http://hdl.handle.net/10.1080/07350015.2012.740435
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:66-77
Template-Type: ReDIF-Article 1.0
Author-Name: Lutz Kilian
Author-X-Name-First: Lutz
Author-X-Name-Last: Kilian
Author-Name: Robert J. Vigfusson
Author-X-Name-First: Robert J.
Author-X-Name-Last: Vigfusson
Title: Do Oil Prices Help Forecast U.S. Real GDP? The Role of Nonlinearities and Asymmetries
Abstract:
There is a long tradition of using oil prices to forecast U.S. real GDP.
It has been suggested that the predictive relationship between the price
of oil and one-quarter-ahead U.S. real GDP is nonlinear in that (a) oil
price increases matter only to the extent that they exceed the maximum oil
price in recent years, and that (b) oil price decreases do not matter at
all. We examine, first, whether the evidence of in-sample predictability
in support of this view extends to out-of-sample forecasts. Second, we
discuss how to extend this forecasting approach to higher horizons. Third,
we compare the resulting class of nonlinear models to alternative
economically plausible nonlinear specifications and examine which aspect
of the model is most useful for forecasting. We show that the asymmetry
embodied in commonly used nonlinear transformations of the price of oil is
not helpful for out-of-sample forecasting; more robust and often more
accurate real GDP forecasts are obtained from symmetric nonlinear models
based on the 3-year net oil price change. Finally, we quantify the extent
to which the 2008 recession could have been forecast using the latter
class of time-varying threshold models.
Journal: Journal of Business & Economic Statistics
Pages: 78-93
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.740436
File-URL: http://hdl.handle.net/10.1080/07350015.2012.740436
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:78-93
Template-Type: ReDIF-Article 1.0
Author-Name: Joshua C. C. Chan
Author-X-Name-First: Joshua C. C.
Author-X-Name-Last: Chan
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: Simon M. Potter
Author-X-Name-First: Simon M.
Author-X-Name-Last: Potter
Title: A New Model of Trend Inflation
Abstract:
This article introduces a new model of trend inflation. In contrast to
many earlier approaches, which allow for trend inflation to evolve
according to a random walk, ours is a bounded model which ensures that
trend inflation is constrained to lie in an interval. The bounds of this
interval can either be fixed or estimated from the data. Our model also
allows for a time-varying degree of persistence in the transitory
component of inflation. In an empirical exercise with CPI inflation, we
find the model to work well, yielding more sensible measures of trend
inflation and forecasting better than popular alternatives such as the
unobserved components stochastic volatility model. This article has
supplementary materials online.
Journal: Journal of Business & Economic Statistics
Pages: 94-106
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2012.741549
File-URL: http://hdl.handle.net/10.1080/07350015.2012.741549
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:94-106
Template-Type: ReDIF-Article 1.0
Author-Name: Garland Durham
Author-X-Name-First: Garland
Author-X-Name-Last: Durham
Author-Name: Yang-Ho Park
Author-X-Name-First: Yang-Ho
Author-X-Name-Last: Park
Title: Beyond Stochastic Volatility and Jumps in Returns and Volatility
Abstract:
While a great deal of attention has been focused on stochastic volatility
in stock returns, there is strong evidence suggesting that return
distributions have time-varying skewness and kurtosis as well. Under the
risk-neutral measure, for example, this can be observed from variation
across time in the shape of Black--Scholes implied volatility smiles. This
article investigates model characteristics that are consistent with
variation in the shape of return distributions using a stochastic
volatility model with a regime-switching feature to allow for random
changes in the parameters governing volatility of volatility, leverage
effect, and jump intensity. The analysis consists of two steps. First, the
models are estimated using only information from observed returns and
option-implied volatility. Standard model assessment tools indicate a
strong preference in favor of the proposed models. Since the information
from option-implied skewness and kurtosis is not used in fitting the
models, it is available for diagnostic purposes. In the second step of the
analysis, regressions of option-implied skewness and kurtosis on the
filtered state variables (and some controls) suggest that the models have
strong explanatory power for these characteristics.
Journal: Journal of Business & Economic Statistics
Pages: 107-121
Issue: 1
Volume: 31
Year: 2013
Month: 1
X-DOI: 10.1080/07350015.2013.747800
File-URL: http://hdl.handle.net/10.1080/07350015.2013.747800
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:1:p:107-121
Template-Type: ReDIF-Article 1.0
Author-Name: José J. Canals-Cerdá
Author-X-Name-First: José J.
Author-X-Name-Last: Canals-Cerdá
Author-Name: Jason Pearcy
Author-X-Name-First: Jason
Author-X-Name-Last: Pearcy
Title: Arriving in Time: Estimation of English Auctions With a Stochastic Number of Bidders
Abstract:
We develop a new econometric approach for the estimation of
second-price ascending-bid auctions with a stochastic number of bidders.
Our empirical framework considers the arrival process of new bidders as
well as the distribution of bidders' valuations of objects being
auctioned. By observing the timing of bidder arrival, the model is
identified even when the number of potential bidders is stochastic and
unknown. The relevance of our approach is illustrated with an empirical
application using a unique dataset of art auctions on eBay. Our results
suggest a higher impact of sellers' reputation on bidders' valuations than
previously reported in cross-sectional studies, but the impact of
reputation on bidder arrival is largely insignificant. Interestingly, a
seller's reputation impacts not only the actions of the bidders but the
actions of the seller as well. In particular, experience and a good
reputation increase the probability of a seller posting items for sale on
longer-lasting auctions, which we find increases the expected revenue for
the seller. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 125-135
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2012.747825
File-URL: http://hdl.handle.net/10.1080/07350015.2012.747825
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:125-135
Template-Type: ReDIF-Article 1.0
Author-Name: Gregor Bäurle
Author-X-Name-First: Gregor
Author-X-Name-Last: Bäurle
Title: Structural Dynamic Factor Analysis Using Prior Information From Macroeconomic Theory
Abstract:
Dynamic factor models are becoming increasingly popular in
empirical macroeconomics due to their ability to cope with large datasets.
Dynamic stochastic general equilibrium (DSGE) models, on the other hand,
are suitable for the analysis of policy interventions from a methodical
point of view. In this article, we provide a Bayesian method to combine
the statistically rich specification of the former with the conceptual
advantages of the latter by using information from a DSGE model to form a
prior belief about parameters in the dynamic factor model. Because the
method establishes a connection between observed data and economic theory
and at the same time incorporates information from a large dataset, our
setting is useful to study the effects of policy interventions on a large
number of observed variables. An application of the method to U.S. data
shows that a moderate weight of the DSGE prior is optimal and that the
model performs well in terms of forecasting. We then analyze the impact of
monetary shocks on both the factors and selected series using a DSGE-based
identification of these shocks. Supplementary materials for this article
are available online.
Journal: Journal of Business & Economic Statistics
Pages: 136-150
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2012.747839
File-URL: http://hdl.handle.net/10.1080/07350015.2012.747839
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:136-150
Template-Type: ReDIF-Article 1.0
Author-Name: Jouchi Nakajima
Author-X-Name-First: Jouchi
Author-X-Name-Last: Nakajima
Author-Name: Mike West
Author-X-Name-First: Mike
Author-X-Name-Last: West
Title: Bayesian Analysis of Latent Threshold Dynamic Models
Abstract:
We discuss a general approach to dynamic sparsity modeling in
multivariate time series analysis. Time-varying parameters are linked to
latent processes that are thresholded to induce zero values adaptively,
providing natural mechanisms for dynamic variable inclusion/selection. We
discuss Bayesian model specification, analysis and prediction in dynamic
regressions, time-varying vector autoregressions, and multivariate
volatility models using latent thresholding. Application to a topical
macroeconomic time series problem illustrates some of the benefits of the
approach in terms of statistical and economic interpretations as well as
improved predictions. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 151-164
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2012.747847
File-URL: http://hdl.handle.net/10.1080/07350015.2012.747847
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:151-164
Template-Type: ReDIF-Article 1.0
Author-Name: Nikolaus Hautsch
Author-X-Name-First: Nikolaus
Author-X-Name-Last: Hautsch
Author-Name: Mark Podolskij
Author-X-Name-First: Mark
Author-X-Name-Last: Podolskij
Title: Preaveraging-Based Estimation of Quadratic Variation in the Presence of Noise and Jumps: Theory, Implementation, and Empirical Evidence
Abstract:
This article contributes to the theory for preaveraging
estimators of the daily quadratic variation of asset prices and provides
novel empirical evidence. We develop asymptotic theory for preaveraging
estimators in the case of autocorrelated microstructure noise and propose
an explicit test for serial dependence. Moreover, we extend the theory on
preaveraging estimators for processes involving jumps. We discuss several
jump-robust measures and derive feasible central limit theorems for the
general quadratic variation. Using transaction data of different stocks
traded at the New York Stock Exchange, we analyze the estimators'
sensitivity to the choice of the preaveraging bandwidth. Moreover, we
investigate the dependence of preaveraging-based inference on the sampling
scheme, the sampling frequency, microstructure noise properties, and the
occurrence of jumps. As a result of a thorough empirical study, we provide
guidance for optimal implementation of preaveraging estimators and discuss
potential pitfalls in practice.
Journal: Journal of Business & Economic Statistics
Pages: 165-183
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2012.754313
File-URL: http://hdl.handle.net/10.1080/07350015.2012.754313
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:165-183
Template-Type: ReDIF-Article 1.0
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Irina Murtazashvili
Author-X-Name-First: Irina
Author-X-Name-Last: Murtazashvili
Author-Name: Aman Ullah
Author-X-Name-First: Aman
Author-X-Name-Last: Ullah
Title: Local Linear GMM Estimation of Functional Coefficient IV Models With an Application to Estimating the Rate of Return to Schooling
Abstract:
We consider the local linear generalized method of moment
(GMM) estimation of functional coefficient models with a mix of discrete
and continuous data and in the presence of endogenous regressors. We
establish the asymptotic normality of the estimator and derive the optimal
instrumental variable that minimizes the asymptotic variance-covariance
matrix among the class of all local linear GMM estimators. Data-dependent
bandwidth sequences are also allowed for. We propose a nonparametric test
for the constancy of the functional coefficients, study its asymptotic
properties under the null hypothesis as well as a sequence of local
alternatives and global alternatives, and propose a bootstrap version for
it. Simulations are conducted to evaluate both the estimator and test.
Applications to the 1985 Australian Longitudinal Survey data indicate a
clear rejection of the null hypothesis of a constant rate of return to
education, and that the returns to education obtained in earlier studies
tend to be overestimated at all levels of work experience.
Journal: Journal of Business & Economic Statistics
Pages: 184-207
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2012.754314
File-URL: http://hdl.handle.net/10.1080/07350015.2012.754314
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:184-207
Template-Type: ReDIF-Article 1.0
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Martin Spindler
Author-X-Name-First: Martin
Author-X-Name-Last: Spindler
Title: Nonparametric Testing for Asymmetric Information
Abstract:
Asymmetric information is an important phenomenon in many
markets and in particular in insurance markets. Testing for asymmetric
information has become a very important issue in the literature in the
last two decades. Almost all testing procedures that are used in empirical
studies are parametric, which may yield misleading conclusions in the case
of misspecification of either functional or
distributional relationships among the variables of
interest. Motivated by the literature on testing conditional independence,
we propose a new nonparametric test for asymmetric information, which is
applicable in a variety of situations. We demonstrate that the test works
reasonably well through Monte Carlo simulations and apply it to an
automobile insurance dataset and a long-term care insurance (LTCI)
dataset. Our empirical results consolidate Chiappori and Salanié's
findings that there is no evidence for the presence of asymmetric
information in the French automobile insurance market. While Finkelstein
and McGarry found no positive correlation between risk and coverage in the
LTCI market in the United States, our test detects asymmetric information
using only the information that is available to the insurance company, and
our investigation of its source suggests asymmetric information related
to risk preferences rather than risk types, which lends support to
Finkelstein and McGarry.
Journal: Journal of Business & Economic Statistics
Pages: 208-225
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2012.755127
File-URL: http://hdl.handle.net/10.1080/07350015.2012.755127
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:208-225
Template-Type: ReDIF-Article 1.0
Author-Name: Sergei Koulayev
Author-X-Name-First: Sergei
Author-X-Name-Last: Koulayev
Title: Search With Dirichlet Priors: Estimation and Implications for Consumer Demand
Abstract:
This article is an empirical application of the search model
with an unknown distribution, as introduced by Rothschild in 1974. For
searchers who hold Dirichlet priors, we develop a novel characterization
of optimal search behavior. Our solution delivers easily computable
formulas for the ex-ante purchase probabilities as outcomes of search, as
required by discrete-choice-based estimation. Using our method, we
investigate the consequences of consumer learning on the properties of
search-generated demand. Holding search costs constant, the model of
search from a known distribution predicts larger price elasticities,
mainly for the lower-priced products.
priors, on a dataset of prices and market shares of S&P 500 mutual funds.
We find that the assumption of no uncertainty in consumer priors leads to
substantial biases in search cost estimates.
Journal: Journal of Business & Economic Statistics
Pages: 226-239
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2013.764696
File-URL: http://hdl.handle.net/10.1080/07350015.2013.764696
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:226-239
Template-Type: ReDIF-Article 1.0
Author-Name: Elena Andreou
Author-X-Name-First: Elena
Author-X-Name-Last: Andreou
Author-Name: Eric Ghysels
Author-X-Name-First: Eric
Author-X-Name-Last: Ghysels
Author-Name: Andros Kourtellos
Author-X-Name-First: Andros
Author-X-Name-Last: Kourtellos
Title: Should Macroeconomic Forecasters Use Daily Financial Data and How?
Abstract:
We introduce easy-to-implement, regression-based methods for
predicting quarterly real economic activity that use daily financial data
and rely on forecast combinations of mixed data sampling (MIDAS)
regressions. We also extract a novel small set of daily financial factors
from a large panel of about 1000 daily financial assets. Our analysis is
designed to elucidate the value of daily financial information and provide
real-time forecast updates of the current (nowcasting) and future quarters
of real GDP growth.
Journal: Journal of Business & Economic Statistics
Pages: 240-251
Issue: 2
Volume: 31
Year: 2013
Month: 4
X-DOI: 10.1080/07350015.2013.767199
File-URL: http://hdl.handle.net/10.1080/07350015.2013.767199
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:2:p:240-251
Template-Type: ReDIF-Article 1.0
Author-Name: Paul Goldsmith-Pinkham
Author-X-Name-First: Paul
Author-X-Name-Last: Goldsmith-Pinkham
Author-Name: Guido W. Imbens
Author-X-Name-First: Guido W.
Author-X-Name-Last: Imbens
Title: Social Networks and the Identification of Peer Effects
Abstract:
There is a large and growing literature on peer effects in
economics. In the current article, we focus on a Manski-type
linear-in-means model that has proved to be popular in empirical work. We
critically examine some aspects of the statistical model that may be
restrictive in empirical analyses. Specifically, we focus on three
aspects. First, we examine the endogeneity of the network or peer groups.
Second, we investigate simultaneously alternative definitions of links and
the possibility of peer effects arising through multiple networks. Third,
we highlight the representation of the traditional linear-in-means model
as an autoregressive model, and contrast it with an alternative
moving-average model, where the correlation between unconnected
individuals who are indirectly connected is limited. Using data on
friendship networks from the Add Health dataset, we illustrate the
empirical relevance of these ideas.
Journal: Journal of Business & Economic Statistics
Pages: 253-264
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.801251
File-URL: http://hdl.handle.net/10.1080/07350015.2013.801251
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:253-264
Template-Type: ReDIF-Article 1.0
Author-Name: Yann Bramoullé
Author-X-Name-First: Yann
Author-X-Name-Last: Bramoullé
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 264-266
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.792265
File-URL: http://hdl.handle.net/10.1080/07350015.2013.792265
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:264-266
Template-Type: ReDIF-Article 1.0
Author-Name: Bryan S. Graham
Author-X-Name-First: Bryan S.
Author-X-Name-Last: Graham
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 266-270
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.792261
File-URL: http://hdl.handle.net/10.1080/07350015.2013.792261
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:266-270
Template-Type: ReDIF-Article 1.0
Author-Name: Matthew O. Jackson
Author-X-Name-First: Matthew O.
Author-X-Name-Last: Jackson
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 270-273
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.794095
File-URL: http://hdl.handle.net/10.1080/07350015.2013.794095
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:270-273
Template-Type: ReDIF-Article 1.0
Author-Name: Charles F. Manski
Author-X-Name-First: Charles F.
Author-X-Name-Last: Manski
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 273-275
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.792262
File-URL: http://hdl.handle.net/10.1080/07350015.2013.792262
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:273-275
Template-Type: ReDIF-Article 1.0
Author-Name: Bruce Sacerdote
Author-X-Name-First: Bruce
Author-X-Name-Last: Sacerdote
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 275-275
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.792263
File-URL: http://hdl.handle.net/10.1080/07350015.2013.792263
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:275-275
Template-Type: ReDIF-Article 1.0
Author-Name: Brendan Kline
Author-X-Name-First: Brendan
Author-X-Name-Last: Kline
Author-Name: Elie Tamer
Author-X-Name-First: Elie
Author-X-Name-Last: Tamer
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 276-279
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.792264
File-URL: http://hdl.handle.net/10.1080/07350015.2013.792264
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:276-279
Template-Type: ReDIF-Article 1.0
Author-Name: Paul Goldsmith-Pinkham
Author-X-Name-First: Paul
Author-X-Name-Last: Goldsmith-Pinkham
Author-Name: Guido Imbens
Author-X-Name-First: Guido
Author-X-Name-Last: Imbens
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 279-281
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.792260
File-URL: http://hdl.handle.net/10.1080/07350015.2013.792260
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:279-281
Template-Type: ReDIF-Article 1.0
Author-Name: Gian Piero Aielli
Author-X-Name-First: Gian Piero
Author-X-Name-Last: Aielli
Title: Dynamic Conditional Correlation: On Properties and Estimation
Abstract:
This article addresses some of the issues that arise with the
Dynamic Conditional Correlation (DCC) model. It is proven that the DCC
large system estimator can be inconsistent, and that the traditional
interpretation of the DCC correlation parameters can result in misleading
conclusions. Here, we suggest a more tractable DCC model, called the
cDCC model. The cDCC model allows for a
large system estimator that is heuristically proven to be consistent.
Sufficient stationarity conditions for cDCC processes of
interest are established. The empirical performances of the DCC and
cDCC large system estimators are compared via simulations
and applications to real data.
Journal: Journal of Business & Economic Statistics
Pages: 282-299
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.771027
File-URL: http://hdl.handle.net/10.1080/07350015.2013.771027
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:282-299
Template-Type: ReDIF-Article 1.0
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: M. Hashem Pesaran
Author-X-Name-First: M. Hashem
Author-X-Name-Last: Pesaran
Author-Name: Ron P. Smith
Author-X-Name-First: Ron P.
Author-X-Name-Last: Smith
Title: On Identification of Bayesian DSGE Models
Abstract:
This article is concerned with local identification of
individual parameters of dynamic stochastic general equilibrium (DSGE)
models estimated by Bayesian methods. Identification is often judged by a
comparison of the posterior distribution of a parameter with its prior.
However, these can differ even when the parameter is not identified.
Instead, we propose two Bayesian indicators of identification. The first
follows a suggestion by Poirier of comparing the posterior density of the
parameter of interest with the posterior expectation of its prior
conditional on the remaining parameters. The second examines the rate at
which the posterior precision of the parameter gets updated with the
sample size, using data simulated at the parameter point of interest for
an increasing sequence of sample sizes (T). For
identified parameters, the posterior precision increases at rate
T. For parameters that are either unidentified or
weakly identified, the posterior precision may get updated, but its rate of
update will be slower than T. We use empirical examples
to demonstrate that these methods are useful in practice. This article has
online supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 300-314
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.773905
File-URL: http://hdl.handle.net/10.1080/07350015.2013.773905
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:300-314
Template-Type: ReDIF-Article 1.0
Author-Name: Jia Chen
Author-X-Name-First: Jia
Author-X-Name-Last: Chen
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Degui Li
Author-X-Name-First: Degui
Author-X-Name-Last: Li
Title: Estimation in Partially Linear Single-Index Panel Data Models With Fixed Effects
Abstract:
In this article, we consider semiparametric estimation in a
partially linear single-index panel data model with fixed effects. Without
taking the difference explicitly, we propose using a semiparametric
minimum average variance estimation (SMAVE) based on a dummy variable
method to remove the fixed effects and obtain consistent estimators for
both the parameters and the unknown link function. As both the
cross-section size and the time series length tend to infinity, we not
only establish an asymptotically normal distribution for the estimators of
the parameters in the single index and the linear component of the model,
but also obtain an asymptotically normal distribution for the
nonparametric local linear estimator of the unknown link function. The
asymptotically normal distributions of the proposed estimators are similar
to those obtained in the random effects case. In addition, we study
several partially linear single-index dynamic panel data models. The
methods and results are augmented by simulation studies and illustrated by
application to two real data examples. This article has online
supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 315-330
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.775093
File-URL: http://hdl.handle.net/10.1080/07350015.2013.775093
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:315-330
Template-Type: ReDIF-Article 1.0
Author-Name: Federico M. Bandi
Author-X-Name-First: Federico M.
Author-X-Name-Last: Bandi
Author-Name: Jeffrey R. Russell
Author-X-Name-First: Jeffrey R.
Author-X-Name-Last: Russell
Author-Name: Chen Yang
Author-X-Name-First: Chen
Author-X-Name-Last: Yang
Title: Realized Volatility Forecasting in the Presence of Time-Varying Noise
Abstract:
Observed high-frequency financial prices can be considered as
having two components, a true price and a market microstructure noise
perturbation. It is an empirical regularity, coherent with classical
market microstructure theories of price determination, that the second
moment of market microstructure noise is time-varying. We study the
optimal, from a finite-sample forecast mean squared error (MSE)
standpoint, frequency selection for realized variance in linear variance
forecasting models with time-varying market microstructure noise. We show
that the resulting sampling frequencies are generally considerably lower
than those that would be optimally chosen when time-variation in the
second moment of the noise is unaccounted for. These optimal, lower
frequencies have the potential to translate into considerable
out-of-sample MSE gains. When forecasting using high-frequency variance
estimates, we recommend treating the relevant frequency as a parameter and
evaluating it jointly with the parameters of the
forecasting model. The proposed joint solution is robust to the features
of the true price formation mechanism and generally applicable to a
variety of forecasting models and high-frequency variance estimators,
including those for which the typical choice variable is a smoothing
parameter, rather than a frequency.
Journal: Journal of Business & Economic Statistics
Pages: 331-345
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.803866
File-URL: http://hdl.handle.net/10.1080/07350015.2013.803866
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:331-345
Template-Type: ReDIF-Article 1.0
Author-Name: Markus Frölich
Author-X-Name-First: Markus
Author-X-Name-Last: Frölich
Author-Name: Blaise Melly
Author-X-Name-First: Blaise
Author-X-Name-Last: Melly
Title: Unconditional Quantile Treatment Effects Under Endogeneity
Abstract:
This article develops estimators for unconditional quantile
treatment effects when the treatment selection is endogenous. We use an
instrumental variable (IV) to solve for the endogeneity of the binary
treatment variable. Identification is based on a monotonicity assumption
in the treatment choice equation and is achieved without any functional
form restriction. We propose a weighting estimator that is extremely
simple to implement. This estimator is root n consistent,
asymptotically normally distributed, and its variance attains the
semiparametric efficiency bound. We also show that including covariates in
the estimation is not only necessary for consistency when the IV is itself
confounded but also for efficiency when the instrument is valid
unconditionally. An application of the suggested methods to the effects of
fertility on the family income distribution illustrates their usefulness.
Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 346-357
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/07350015.2013.803869
File-URL: http://hdl.handle.net/10.1080/07350015.2013.803869
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:346-357
Template-Type: ReDIF-Article 1.0
Author-Name: José Luis Montiel Olea
Author-X-Name-First: José Luis Montiel
Author-X-Name-Last: Olea
Author-Name: Carolin Pflueger
Author-X-Name-First: Carolin
Author-X-Name-Last: Pflueger
Title: A Robust Test for Weak Instruments
Abstract:
We develop a test for weak instruments in linear instrumental
variables regression that is robust to heteroscedasticity,
autocorrelation, and clustering. Our test statistic is a scaled nonrobust
first-stage F statistic. Instruments are considered weak
when the two-stage least squares or the limited information maximum
likelihood Nagar bias is large relative to a benchmark. We apply our
procedures to the estimation of the elasticity of intertemporal
substitution, where our test cannot reject the null of weak instruments in
a larger number of countries than the test proposed by Stock and Yogo in
2005. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 358-369
Issue: 3
Volume: 31
Year: 2013
Month: 7
X-DOI: 10.1080/00401706.2013.806694
File-URL: http://hdl.handle.net/10.1080/00401706.2013.806694
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:3:p:358-369
Template-Type: ReDIF-Article 1.0
Author-Name: Maria Kalli
Author-X-Name-First: Maria
Author-X-Name-Last: Kalli
Author-Name: Stephen G. Walker
Author-X-Name-First: Stephen G.
Author-X-Name-Last: Walker
Author-Name: Paul Damien
Author-X-Name-First: Paul
Author-X-Name-Last: Damien
Title: Modeling the Conditional Distribution of Daily Stock Index Returns: An Alternative Bayesian Semiparametric Model
Abstract:
This article introduces a new family of Bayesian
semiparametric models for the conditional distribution of daily stock
index returns. The proposed models capture key stylized facts of such
returns, namely, heavy tails, asymmetry, volatility clustering, and the
"leverage effect." A Bayesian nonparametric prior is used to generate
random density functions that are unimodal and asymmetric. Volatility is
modeled parametrically. The new model is applied to the daily returns of
the S&P 500, FTSE 100, and EUROSTOXX 50 indices and is compared with
GARCH, stochastic volatility, and other Bayesian semiparametric models.
Journal: Journal of Business & Economic Statistics
Pages: 371-383
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.794142
File-URL: http://hdl.handle.net/10.1080/07350015.2013.794142
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:371-383
Template-Type: ReDIF-Article 1.0
Author-Name: Gaurab Aryal
Author-X-Name-First: Gaurab
Author-X-Name-Last: Aryal
Author-Name: Dong-Hyuk Kim
Author-X-Name-First: Dong-Hyuk
Author-X-Name-Last: Kim
Title: A Point Decision for Partially Identified Auction Models
Abstract:
This article proposes a decision-theoretic method to choose a
single reserve price for partially identified auction models, such as
Haile and Tamer (2003), using data on transaction prices from English
auctions. The article employs Gilboa and Schmeidler (1989) for inference
that is robust with respect to the prior over unidentified parameters. It
is optimal to interpret the transaction price as the highest value, and
maximize the posterior mean of the seller's revenue. The Monte Carlo study
shows substantial gains relative to the revenues corresponding to a random
point and the midpoint in the Haile and Tamer interval.
Journal: Journal of Business & Economic Statistics
Pages: 384-397
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.794731
File-URL: http://hdl.handle.net/10.1080/07350015.2013.794731
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:384-397
Template-Type: ReDIF-Article 1.0
Author-Name: Robert G. Hammond
Author-X-Name-First: Robert G.
Author-X-Name-Last: Hammond
Title: Quantifying Consumer Perception of a Financially Distressed Company
Abstract:
To measure how consumers respond to negative information
about the financial health of a durable-goods producer, I use the prices
at which vehicles sell in secondary markets to quantify consumer
perception of the Chrysler Corporation during the period surrounding the
Chrysler Loan Guarantee Act of 1979. I focus on Chrysler's July 31, 1979
announcement of financial distress and request for assistance from the
U.S. government. The trend in the prices of used Chrysler vehicles
relative to those of its American competitors provides strong support for
the claim that consumers reduce their willingness to pay for the goods of
a financially distressed company.
Journal: Journal of Business & Economic Statistics
Pages: 398-411
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.799998
File-URL: http://hdl.handle.net/10.1080/07350015.2013.799998
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:398-411
Template-Type: ReDIF-Article 1.0
Author-Name: Christian Francq
Author-X-Name-First: Christian
Author-X-Name-Last: Francq
Author-Name: Jean-Michel Zakoïan
Author-X-Name-First: Jean-Michel
Author-X-Name-Last: Zakoïan
Title: Estimating the Marginal Law of a Time Series With Applications to Heavy-Tailed Distributions
Abstract:
This article addresses estimating parametric marginal
densities of stationary time series in the absence of precise information
on the dynamics of the underlying process. We propose using an estimator
obtained by maximization of the "quasi-marginal" likelihood, which is a
likelihood written as if the observations were independent. We study the
effect of the (neglected) dynamics on the asymptotic behavior of this
estimator. The consistency and asymptotic normality of the estimator are
established under mild assumptions on the dependence structure.
Applications of the asymptotic results to the estimation of stable,
generalized extreme value and generalized Pareto distributions are
proposed. The theoretical results are illustrated on financial index
returns. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 412-425
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.801776
File-URL: http://hdl.handle.net/10.1080/07350015.2013.801776
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:412-425
Template-Type: ReDIF-Article 1.0
Author-Name: Juan Carlos Escanciano
Author-X-Name-First: Juan Carlos
Author-X-Name-Last: Escanciano
Author-Name: Ignacio N. Lobato
Author-X-Name-First: Ignacio N.
Author-X-Name-Last: Lobato
Author-Name: Lin Zhu
Author-X-Name-First: Lin
Author-X-Name-Last: Zhu
Title: Automatic Specification Testing for Vector Autoregressions and Multivariate Nonlinear Time Series Models
Abstract:
This article introduces an automatic test for the correct
specification of a vector autoregression (VAR) model. The proposed test
statistic is a Portmanteau statistic with an automatic selection of the
order of the residual serial correlation tested. The test presents several
attractive characteristics: simplicity, robustness, and high power in
finite samples. The test is simple to implement since the researcher does
not need to specify the order of the autocorrelation tested and the
proposed critical values are simple to approximate, without resorting to
bootstrap procedures. In addition, the test is robust to the presence of
conditional heteroscedasticity of unknown form and accounts for estimation
uncertainty without requiring the computation of large-dimensional
inverses of nearly singular covariance matrices. The basic methodology
is extended to general nonlinear multivariate time series models.
Simulations show that the proposed test presents higher power than the
existing ones for models commonly employed in empirical macroeconomics and
empirical finance. Finally, the test is applied to the classical bivariate
VAR model for GNP (gross national product) and unemployment of Blanchard
and Quah (1989) and Evans (1989). Online supplementary material includes
proofs and additional details.
Journal: Journal of Business & Economic Statistics
Pages: 426-437
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.803973
File-URL: http://hdl.handle.net/10.1080/07350015.2013.803973
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:426-437
Template-Type: ReDIF-Article 1.0
Author-Name: Rolf Tschernig
Author-X-Name-First: Rolf
Author-X-Name-Last: Tschernig
Author-Name: Enzo Weber
Author-X-Name-First: Enzo
Author-X-Name-Last: Weber
Author-Name: Roland Weigand
Author-X-Name-First: Roland
Author-X-Name-Last: Weigand
Title: Long-Run Identification in a Fractionally Integrated System
Abstract:
We propose an extension of structural fractionally integrated
vector autoregressive models that avoids certain undesirable effects on
the impulse responses that occur if long-run identification restrictions
are imposed. We derive the model's Granger representation and investigate
the effects of long-run restrictions. Simulations illustrate that
enforcing integer integration orders can have severe consequences for
impulse responses. In a system of U.S. real output and aggregate prices,
the effects of structural shocks strongly depend on the specification of
the integration orders. In the statistically preferred fractional model,
shocks that are typically interpreted as demand disturbances have a very
brief influence on GDP. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 438-450
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.812517
File-URL: http://hdl.handle.net/10.1080/07350015.2013.812517
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:438-450
Template-Type: ReDIF-Article 1.0
Author-Name: Enrique Moral-Benito
Author-X-Name-First: Enrique
Author-X-Name-Last: Moral-Benito
Title: Likelihood-Based Estimation of Dynamic Panels With Predetermined Regressors
Abstract:
This article discusses the likelihood-based estimation of
panel data models with individual-specific effects and both lagged
dependent variable regressors and additional predetermined explanatory
variables. The resulting new estimator, labeled as subsystem limited
information maximum likelihood (ssLIML), is asymptotically equivalent to
standard panel generalized method of moments (GMM) as N → ∞ for fixed
T, but tends to present smaller biases in finite samples, as illustrated
in simulation experiments.
Simulation results also indicate that the estimator is preferred to other
alternatives available in the literature in terms of finite-sample
performance. Finally, to provide an empirical illustration, I revisit the
evidence on the relationship between income and democracy in a panel of
countries using the proposed estimator.
Journal: Journal of Business & Economic Statistics
Pages: 451-472
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.818003
File-URL: http://hdl.handle.net/10.1080/07350015.2013.818003
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:451-472
Template-Type: ReDIF-Article 1.0
Author-Name: Gloria González-Rivera
Author-X-Name-First: Gloria
Author-X-Name-Last: González-Rivera
Author-Name: Wei Lin
Author-X-Name-First: Wei
Author-X-Name-Last: Lin
Title: Constrained Regression for Interval-Valued Data
Abstract:
Current regression models for interval-valued data do not
guarantee that the predicted lower bound of the interval is always smaller
than its upper bound. We propose a constrained regression model that
preserves the natural order of the interval in all instances, either for
in-sample fitted intervals or for interval forecasts. Within the framework
of interval time series, we specify a general dynamic bivariate system for
the upper and lower bounds of the intervals. By imposing the order of the
interval bounds into the model, the bivariate probability density function
of the errors becomes conditionally truncated. In this context, the
ordinary least squares (OLS) estimators of the parameters of the system
are inconsistent. Estimation by maximum likelihood is possible but it is
computationally burdensome due to the nonlinearity of the estimator when
there is truncation. We propose a two-step procedure that combines maximum
likelihood and least squares estimation and a modified two-step procedure
that combines maximum likelihood and minimum-distance estimation. In both
instances, the estimators are consistent. However, when multicollinearity
arises in the second step of the estimation, the modified two-step
procedure is superior at identifying the model regardless of the severity
of the truncation. Monte Carlo simulations show good finite sample
properties of the proposed estimators. A comparison with the current
methods in the literature shows that our proposed methods are superior by
delivering smaller losses and better estimators (no bias and low mean
squared errors) than those from competing approaches. We illustrate our
approach with the daily interval of low/high S&P 500 returns and find that
truncation is very severe during and after the financial crisis of 2008,
so OLS estimates should not be trusted and a modified two-step procedure
should be implemented. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 473-490
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.818004
File-URL: http://hdl.handle.net/10.1080/07350015.2013.818004
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:473-490
Template-Type: ReDIF-Article 1.0
Author-Name: Jean-Marie Dufour
Author-X-Name-First: Jean-Marie
Author-X-Name-Last: Dufour
Author-Name: Dalibor Stevanović
Author-X-Name-First: Dalibor
Author-X-Name-Last: Stevanović
Title: Factor-Augmented VARMA Models With Macroeconomic Applications
Abstract:
We study the relationship between vector autoregressive
moving-average (VARMA) and factor representations of a vector stochastic
process. We observe that, in general, vector time series and factors
cannot both follow finite-order VAR models. Instead, a VAR factor dynamics
induces a VARMA process, while a VAR process entails VARMA factors. We
propose to combine factor and VARMA modeling by using factor-augmented
VARMA (FAVARMA) models. This approach is applied to forecasting key
macroeconomic aggregates using large U.S. and Canadian monthly panels. The
results show that FAVARMA models yield substantial improvements over
standard factor models, including precise representations of the effect
and transmission of monetary policy.
Journal: Journal of Business & Economic Statistics
Pages: 491-506
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.818005
File-URL: http://hdl.handle.net/10.1080/07350015.2013.818005
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:491-506
Template-Type: ReDIF-Article 1.0
Author-Name: Taisuke Otsu
Author-X-Name-First: Taisuke
Author-X-Name-Last: Otsu
Author-Name: Ke-Li Xu
Author-X-Name-First: Ke-Li
Author-X-Name-Last: Xu
Author-Name: Yukitoshi Matsushita
Author-X-Name-First: Yukitoshi
Author-X-Name-Last: Matsushita
Title: Estimation and Inference of Discontinuity in Density
Abstract:
Continuity or discontinuity of probability density functions
of data often plays a fundamental role in empirical economic analysis. For
example, for identification and inference of causal effects in regression
discontinuity designs it is typically assumed that the density function of
a conditioning variable is continuous at a cutoff point that determines
assignment of a treatment. Also, discontinuity in density functions can be
a parameter of economic interest, such as in analysis of bunching
behaviors of taxpayers. To enable researchers to conduct valid
inference for these problems, this article extends the binning and local
likelihood approaches to estimate discontinuity of density functions and
proposes empirical likelihood-based tests and confidence sets for the
discontinuity. In contrast to the conventional Wald-type test and
confidence set using the binning estimator, our empirical likelihood-based
methods (i) circumvent asymptotic variance estimation to construct the
test statistics and confidence sets; (ii) are invariant to nonlinear
transformations of the parameters of interest; (iii) offer confidence sets
whose shapes are automatically determined by data; and (iv) admit
higher-order refinements, so-called Bartlett corrections. First- and
second-order asymptotic theories are developed. Simulations demonstrate
the superior finite sample behaviors of the proposed methods. In an
empirical application, we assess the identifying assumption of no
manipulation of class sizes in the regression discontinuity design studied
by Angrist and Lavy (1999).
Journal: Journal of Business & Economic Statistics
Pages: 507-524
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.818007
File-URL: http://hdl.handle.net/10.1080/07350015.2013.818007
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:507-524
Template-Type: ReDIF-Article 1.0
Author-Name: Willa W. Chen
Author-X-Name-First: Willa W.
Author-X-Name-Last: Chen
Author-Name: Rohit S. Deo
Author-X-Name-First: Rohit S.
Author-X-Name-Last: Deo
Author-Name: Yanping Yi
Author-X-Name-First: Yanping
Author-X-Name-Last: Yi
Title: Uniform Inference in Predictive Regression Models
Abstract:
The restricted likelihood has been found to provide a
well-behaved likelihood ratio test in the predictive regression model even
when the regressor variable exhibits almost unit root behavior. Using the
weighted least squares approximation to the restricted likelihood obtained
in Chen and Deo, we provide a quasi-restricted likelihood ratio test
(QRLRT), obtain its asymptotic distribution as the nuisance persistence
parameter varies, and show that this distribution varies very slightly.
Consequently, the resulting sup bound QRLRT is shown to maintain size
uniformly over the parameter space without sacrificing power. In
simulations, the QRLRT is found to deliver uniformly higher power than
competing procedures with power gains that are substantial.
Journal: Journal of Business & Economic Statistics
Pages: 525-533
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.818008
File-URL: http://hdl.handle.net/10.1080/07350015.2013.818008
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:525-533
Template-Type: ReDIF-Article 1.0
Author-Name: Carlos A. Flores
Author-X-Name-First: Carlos A.
Author-X-Name-Last: Flores
Author-Name: Alfonso Flores-Lagunes
Author-X-Name-First: Alfonso
Author-X-Name-Last: Flores-Lagunes
Title: Partial Identification of Local Average Treatment Effects With an Invalid Instrument
Abstract:
We derive nonparametric bounds for local average treatment
effects (LATE) without imposing the exclusion restriction assumption or
requiring an outcome with bounded support. Instead, we employ assumptions
requiring weak monotonicity of mean potential and counterfactual outcomes
within or across subpopulations defined by the values of the potential
treatment status under each value of the instrument. The key element in
our derivation is a result relating LATE to a causal mediation effect,
which allows us to exploit partial identification results from the causal
mediation analysis literature. The bounds are employed to analyze the
effect of attaining a GED, high school, or vocational degree on future
labor market outcomes using randomization into a training program as an
invalid instrument. The resulting bounds are informative, indicating that
the local effect when assigned to training for those whose degree
attainment is affected by the instrument is at most 12.7 percentage points
on employment and $64.4 on weekly earnings.
Journal: Journal of Business & Economic Statistics
Pages: 534-545
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.822760
File-URL: http://hdl.handle.net/10.1080/07350015.2013.822760
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:534-545
Template-Type: ReDIF-Article 1.0
Author-Name: Natalia Sizova
Author-X-Name-First: Natalia
Author-X-Name-Last: Sizova
Title: Long-Horizon Return Regressions With Historical Volatility and Other Long-Memory Variables
Abstract:
The predictability of long-term asset returns increases with
the time horizon as estimated in regressions of aggregated-forward returns
on aggregated-backward predictive variables. This previously established
evidence is consistent with the presence of common slow-moving components
that are extracted upon aggregation from returns and predictive variables.
Long memory is an appropriate econometric framework for modeling this
phenomenon. We apply this framework to explain the results from
regressions of returns on risk measures. We introduce suitable econometric
methods for construction of confidence intervals and apply them to test
the predictability of NYSE/AMEX returns.
Journal: Journal of Business & Economic Statistics
Pages: 546-559
Issue: 4
Volume: 31
Year: 2013
Month: 10
X-DOI: 10.1080/07350015.2013.827985
File-URL: http://hdl.handle.net/10.1080/07350015.2013.827985
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:31:y:2013:i:4:p:546-559
Template-Type: ReDIF-Article 1.0
Author-Name: Garry F. Barrett
Author-X-Name-First: Garry F.
Author-X-Name-Last: Barrett
Author-Name: Stephen G. Donald
Author-X-Name-First: Stephen G.
Author-X-Name-Last: Donald
Author-Name: Debopam Bhattacharya
Author-X-Name-First: Debopam
Author-X-Name-Last: Bhattacharya
Title: Consistent Nonparametric Tests for Lorenz Dominance
Abstract:
This article proposes consistent
nonparametric methods for testing the null hypothesis of Lorenz dominance.
The methods are based on a class of statistical functionals defined over
the difference between the Lorenz curves for two samples of
welfare-related variables. We present two specific test statistics
belonging to the general class and derive their asymptotic properties. As
the limiting distributions of the test statistics are nonstandard, we
propose and justify bootstrap methods of inference. We provide methods
appropriate for the case where the two samples are independent, as well as the
case where the two samples represent different measures of welfare for one
set of individuals. The small sample performance of the two tests is
examined and compared in the context of a Monte Carlo study and an
empirical analysis of income and consumption inequality.
Journal: Journal of Business & Economic Statistics
Pages: 1-13
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.834262
File-URL: http://hdl.handle.net/10.1080/07350015.2013.834262
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:1-13
Template-Type: ReDIF-Article 1.0
Author-Name: Anastasios Panagiotelis
Author-X-Name-First: Anastasios
Author-X-Name-Last: Panagiotelis
Author-Name: Michael S. Smith
Author-X-Name-First: Michael S.
Author-X-Name-Last: Smith
Author-Name: Peter J. Danaher
Author-X-Name-First: Peter J.
Author-X-Name-Last: Danaher
Title: From Amazon to Apple: Modeling Online Retail Sales, Purchase Incidence, and Visit Behavior
Abstract:
In this study, we propose a multivariate
stochastic model for Web site visit duration, page views, purchase
incidence, and the sale amount for online retailers. The model is
constructed by composition from carefully selected distributions and
involves copula components. It allows for the strong nonlinear
relationships between the sales and visit variables to be explored in
detail, and can be used to construct sales predictions. The model is
readily estimated using maximum likelihood, making it an attractive choice
in practice given the large sample sizes that are commonplace in online
retail studies. We examine a number of top-ranked U.S. online retailers,
and find that the visit duration and the number of pages viewed are both
related to sales, but in very different ways for different products. Using
Bayesian methodology, we show how the model can be extended to a finite
mixture model to account for consumer heterogeneity via latent household
segmentation. The model can also be adjusted to accommodate a more
accurate analysis of online retailers like apple.com that sell products at
a very limited number of price points. In a validation study across a
range of different Web sites, we find that the purchase incidence and
sales amount are both forecast more accurately using our model, when
compared to regression, probit regression, a popular data-mining method,
and a survival model employed previously in an online retail study.
Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 14-29
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.835729
File-URL: http://hdl.handle.net/10.1080/07350015.2013.835729
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:14-29
Template-Type: ReDIF-Article 1.0
Author-Name: Mehmet Caner
Author-X-Name-First: Mehmet
Author-X-Name-Last: Caner
Author-Name: Hao Helen Zhang
Author-X-Name-First: Hao Helen
Author-X-Name-Last: Zhang
Title: Adaptive Elastic Net for Generalized Methods of Moments
Abstract:
Model selection and estimation are crucial
parts of econometrics. This article introduces a new technique that can
simultaneously estimate and select the model in generalized method of
moments (GMM) context. The GMM is particularly powerful for analyzing
complex datasets such as longitudinal and panel data, and it has wide
applications in econometrics. This article extends the least squares based
adaptive elastic net estimator by Zou and Zhang to nonlinear equation
systems with endogenous variables. The extension is not trivial and
involves a new proof technique due to estimators' lack of closed-form
solutions. Compared to Bridge-GMM by Caner, we allow for the number of
parameters to diverge to infinity as well as collinearity among a large
number of variables; also, the redundant parameters are set to zero via a
data-dependent technique. This method has the oracle property, meaning
that we can estimate the nonzero parameters with their standard limiting
distribution while the redundant parameters are simultaneously dropped
from the equations.
Numerical examples are used to illustrate the performance of the new
method.
Journal: Journal of Business & Economic Statistics
Pages: 30-47
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.836104
File-URL: http://hdl.handle.net/10.1080/07350015.2013.836104
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:30-47
Template-Type: ReDIF-Article 1.0
Author-Name: Knut Are Aastveit
Author-X-Name-First: Knut Are
Author-X-Name-Last: Aastveit
Author-Name: Karsten R. Gerdrup
Author-X-Name-First: Karsten R.
Author-X-Name-Last: Gerdrup
Author-Name: Anne Sofie Jore
Author-X-Name-First: Anne Sofie
Author-X-Name-Last: Jore
Author-Name: Leif Anders Thorsrud
Author-X-Name-First: Leif Anders
Author-X-Name-Last: Thorsrud
Title: Nowcasting GDP in Real Time: A Density Combination Approach
Abstract:
In this article, we use U.S. real-time
data to produce combined density nowcasts of quarterly Gross Domestic
Product (GDP) growth, using a system of three commonly used model classes.
We update the density nowcast for every new data release throughout the
quarter, and highlight the importance of new information for nowcasting.
Our results show that the logarithmic score of the predictive densities
for U.S. GDP growth increases almost monotonically as new information
arrives during the quarter. While the ranking of the model classes changes
during the quarter, the combined density nowcasts always perform well
relative to the model classes in terms of both logarithmic scores and
calibration tests. The density combination approach is superior to a
simple model selection strategy and also performs better in terms of point
forecast evaluation than standard point forecast combinations.
Journal: Journal of Business & Economic Statistics
Pages: 48-68
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.844155
File-URL: http://hdl.handle.net/10.1080/07350015.2013.844155
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:48-68
Template-Type: ReDIF-Article 1.0
Author-Name: Cristina Amado
Author-X-Name-First: Cristina
Author-X-Name-Last: Amado
Author-Name: Timo Teräsvirta
Author-X-Name-First: Timo
Author-X-Name-Last: Teräsvirta
Title: Conditional Correlation Models of Autoregressive Conditional Heteroscedasticity With Nonstationary GARCH Equations
Abstract:
In this article, we investigate the
effects of carefully modeling the long-run dynamics of the volatilities of
stock market returns on the conditional correlation structure. To this
end, we allow the individual unconditional variances in conditional
correlation generalized autoregressive conditional heteroscedasticity
(CC-GARCH) models to change smoothly over time by incorporating a
nonstationary component in the variance equations such as the spline-GARCH
model and the time-varying (TV)-GARCH model. The variance equations
combine the long-run and the short-run dynamic behavior of the
volatilities. The structure of the conditional correlation matrix is
assumed to be either time independent or to vary over time. We apply our
model to pairs of seven daily stock returns belonging to the S&P 500
composite index and traded at the New York Stock Exchange. The results
suggest that accounting for deterministic changes in the unconditional
variances improves the fit of the multivariate CC-GARCH models to the
data. The effect of careful specification of the variance equations on the
estimated correlations is variable: in some cases rather small, in others
more discernible. We also show empirically that the CC-GARCH models with
time-varying unconditional variances using the TV-GARCH model outperform
the other models under study in terms of out-of-sample forecasting
performance. In addition, we find that portfolio volatility-timing
strategies based on time-varying unconditional variances often outperform
the unmodeled long-run variances strategy out-of-sample. As a by-product,
we generalize news impact surfaces to the situation in which both the
GARCH equations and the conditional correlations contain a deterministic
component that is a function of time.
Journal: Journal of Business & Economic Statistics
Pages: 69-87
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.847376
File-URL: http://hdl.handle.net/10.1080/07350015.2013.847376
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:69-87
Template-Type: ReDIF-Article 1.0
Author-Name: Eric Ghysels
Author-X-Name-First: Eric
Author-X-Name-Last: Ghysels
Author-Name: Fangfang Wang
Author-X-Name-First: Fangfang
Author-X-Name-Last: Wang
Title: Moment-Implied Densities: Properties and Applications
Abstract:
Suppose one uses a parametric density
function based on the first four (conditional) moments to model risk.
There are quite a few densities to choose from and depending on which is
selected, one implicitly assumes very different tail behavior and very
different feasible skewness/kurtosis combinations. Surprisingly, there is
no systematic analysis of the tradeoff one faces. It is the purpose of the
article to address this. We focus on the tail behavior and the range of
skewness and kurtosis as these are key for common applications such as
risk management.
Journal: Journal of Business & Economic Statistics
Pages: 88-111
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.847842
File-URL: http://hdl.handle.net/10.1080/07350015.2013.847842
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:88-111
Template-Type: ReDIF-Article 1.0
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: Heteroscedasticity Robust Panel Unit Root Tests
Abstract:
This article proposes new unit root tests
for panels where the errors may be not only serially correlated and/or
cross-correlated, but also unconditionally heteroscedastic. Despite their
generality, the test statistics are shown to be very simple to implement,
requiring only minimal corrections, and yet the limiting distributions
under the null hypothesis are completely free of nuisance parameters.
Monte Carlo evidence is also provided to suggest that the new tests
perform well in small samples, including when compared to some of the existing
tests. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 112-135
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.857612
File-URL: http://hdl.handle.net/10.1080/07350015.2013.857612
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:112-135
Template-Type: ReDIF-Article 1.0
Author-Name: Jens H. E. Christensen
Author-X-Name-First: Jens H. E.
Author-X-Name-Last: Christensen
Author-Name: Jose A. Lopez
Author-X-Name-First: Jose A.
Author-X-Name-Last: Lopez
Author-Name: Glenn D. Rudebusch
Author-X-Name-First: Glenn D.
Author-X-Name-Last: Rudebusch
Title: Do Central Bank Liquidity Facilities Affect Interbank Lending Rates?
Abstract:
In response to the global financial crisis
that started in August 2007, central banks provided extraordinary amounts
of liquidity to the financial system. To investigate the effect of central
bank liquidity facilities on term interbank lending rates near the start
of the crisis, we estimate a six-factor arbitrage-free model of U.S.
Treasury yields, financial corporate bond yields, and term interbank
rates. This model can account for fluctuations in the term structure of
credit and liquidity spreads observed in the data. A significant shift in
model estimates after the announcement of the liquidity facilities
suggests that these central bank actions did help lower the liquidity
premium in term interbank rates.
Journal: Journal of Business & Economic Statistics
Pages: 136-151
Issue: 1
Volume: 32
Year: 2014
Month: 1
X-DOI: 10.1080/07350015.2013.858631
File-URL: http://hdl.handle.net/10.1080/07350015.2013.858631
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:1:p:136-151
Template-Type: ReDIF-Article 1.0
Author-Name: Yu-Pin Hu
Author-X-Name-First: Yu-Pin
Author-X-Name-Last: Hu
Author-Name: Ruey S. Tsay
Author-X-Name-First: Ruey S.
Author-X-Name-Last: Tsay
Title: Principal Volatility Component Analysis
Abstract:
Many empirical time series such as asset
returns and traffic data exhibit the characteristic of time-varying
conditional covariances, known as volatility or conditional
heteroscedasticity. Modeling multivariate volatility, however, encounters
several difficulties, including the curse of dimensionality. Dimension
reduction can be useful and is often necessary. The goal of this article
is to extend the idea of principal component analysis to principal
volatility component (PVC) analysis. We define a cumulative generalized
kurtosis matrix to summarize the volatility dependence of multivariate
time series. Spectral analysis of this generalized kurtosis matrix is used
to define PVCs. We consider a sample estimate of the generalized kurtosis
matrix and propose test statistics for detecting linear combinations that
do not have conditional heteroscedasticity. As an application, we applied
the proposed analysis to weekly log returns of seven exchange rates
against the U.S. dollar from 2000 to 2011 and found a linear combination among
the exchange rates that has no conditional heteroscedasticity.
Journal: Journal of Business & Economic Statistics
Pages: 153-164
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.818006
File-URL: http://hdl.handle.net/10.1080/07350015.2013.818006
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:153-164
Template-Type: ReDIF-Article 1.0
Author-Name: Shiqing Ling
Author-X-Name-First: Shiqing
Author-X-Name-Last: Ling
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 165-165
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.887016
File-URL: http://hdl.handle.net/10.1080/07350015.2014.887016
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:165-165
Template-Type: ReDIF-Article 1.0
Author-Name: Qiwei Yao
Author-X-Name-First: Qiwei
Author-X-Name-Last: Yao
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 165-166
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.887014
File-URL: http://hdl.handle.net/10.1080/07350015.2014.887014
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:165-166
Template-Type: ReDIF-Article 1.0
Author-Name: Philip L. H. Yu
Author-X-Name-First: Philip L. H.
Author-X-Name-Last: Yu
Author-Name: Guodong Li
Author-X-Name-First: Guodong
Author-X-Name-Last: Li
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 166-167
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.885436
File-URL: http://hdl.handle.net/10.1080/07350015.2014.885436
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:166-167
Template-Type: ReDIF-Article 1.0
Author-Name: Elena Andreou
Author-X-Name-First: Elena
Author-X-Name-Last: Andreou
Author-Name: Eric Ghysels
Author-X-Name-First: Eric
Author-X-Name-Last: Ghysels
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 168-171
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.902238
File-URL: http://hdl.handle.net/10.1080/07350015.2014.902238
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:168-171
Template-Type: ReDIF-Article 1.0
Author-Name: Juergen Franke
Author-X-Name-First: Juergen
Author-X-Name-Last: Franke
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 171-172
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.903652
File-URL: http://hdl.handle.net/10.1080/07350015.2014.903652
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:171-172
Template-Type: ReDIF-Article 1.0
Author-Name: Wolfgang Karl Härdle
Author-X-Name-First: Wolfgang Karl
Author-X-Name-Last: Härdle
Author-Name: Weining Wang
Author-X-Name-First: Weining
Author-X-Name-Last: Wang
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 173-174
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.898585
File-URL: http://hdl.handle.net/10.1080/07350015.2014.898585
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:173-174
Template-Type: ReDIF-Article 1.0
Author-Name: Michael McAleer
Author-X-Name-First: Michael
Author-X-Name-Last: McAleer
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 174-175
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.898584
File-URL: http://hdl.handle.net/10.1080/07350015.2014.898584
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:174-175
Template-Type: ReDIF-Article 1.0
Author-Name: Yu-Pin Hu
Author-X-Name-First: Yu-Pin
Author-X-Name-Last: Hu
Author-Name: Ruey S. Tsay
Author-X-Name-First: Ruey S.
Author-X-Name-Last: Tsay
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 176-177
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.902236
File-URL: http://hdl.handle.net/10.1080/07350015.2014.902236
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:176-177
Template-Type: ReDIF-Article 1.0
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Lei Qi
Author-X-Name-First: Lei
Author-X-Name-Last: Qi
Author-Name: Dacheng Xiu
Author-X-Name-First: Dacheng
Author-X-Name-Last: Xiu
Title: Quasi-Maximum Likelihood Estimation of GARCH Models With Heavy-Tailed Likelihoods
Abstract:
The non-Gaussian maximum likelihood
estimator is frequently used in GARCH models with the intention of
capturing heavy-tailed returns. However, unless the parametric likelihood
family contains the true likelihood, the estimator is inconsistent due to
density misspecification. To correct this bias, we identify an unknown
scale parameter η_f that is
critical to the identification for consistency and propose a three-step
quasi-maximum likelihood procedure with non-Gaussian likelihood functions.
This novel approach is consistent and asymptotically normal under weak
moment conditions. Moreover, it achieves better efficiency than the
Gaussian alternative, particularly when the innovation error has heavy
tails. We also summarize and compare the values of the scale parameter and
the asymptotic efficiency for estimators based on different choices of
likelihood functions with an increasing level of heaviness in the
innovation tails. Numerical studies confirm the advantages of the proposed
approach.
Journal: Journal of Business & Economic Statistics
Pages: 178-191
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.840239
File-URL: http://hdl.handle.net/10.1080/07350015.2013.840239
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:178-191
Template-Type: ReDIF-Article 1.0
Author-Name: Beth Andrews
Author-X-Name-First: Beth
Author-X-Name-Last: Andrews
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 191-193
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.875921
File-URL: http://hdl.handle.net/10.1080/07350015.2013.875921
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:191-193
Template-Type: ReDIF-Article 1.0
Author-Name: Gabriele Fiorentini
Author-X-Name-First: Gabriele
Author-X-Name-Last: Fiorentini
Author-Name: Enrique Sentana
Author-X-Name-First: Enrique
Author-X-Name-Last: Sentana
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 193-198
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.878661
File-URL: http://hdl.handle.net/10.1080/07350015.2013.878661
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:193-198
Template-Type: ReDIF-Article 1.0
Author-Name: Christian Francq
Author-X-Name-First: Christian
Author-X-Name-Last: Francq
Author-Name: Jean-Michel Zakoïan
Author-X-Name-First: Jean-Michel
Author-X-Name-Last: Zakoïan
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 198-201
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.879829
File-URL: http://hdl.handle.net/10.1080/07350015.2013.879829
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:198-201
Template-Type: ReDIF-Article 1.0
Author-Name: Qiwei Yao
Author-X-Name-First: Qiwei
Author-X-Name-Last: Yao
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 201-201
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.887015
File-URL: http://hdl.handle.net/10.1080/07350015.2014.887015
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:201-201
Template-Type: ReDIF-Article 1.0
Author-Name: Shiqing Ling
Author-X-Name-First: Shiqing
Author-X-Name-Last: Ling
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 202-203
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.907059
File-URL: http://hdl.handle.net/10.1080/07350015.2014.907059
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:202-203
Template-Type: ReDIF-Article 1.0
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Lei Qi
Author-X-Name-First: Lei
Author-X-Name-Last: Qi
Author-Name: Dacheng Xiu
Author-X-Name-First: Dacheng
Author-X-Name-Last: Xiu
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 204-205
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2014.898448
File-URL: http://hdl.handle.net/10.1080/07350015.2014.898448
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:204-205
Template-Type: ReDIF-Article 1.0
Author-Name: Michael P. Clements
Author-X-Name-First: Michael P.
Author-X-Name-Last: Clements
Title: Forecast Uncertainty-Ex Ante and Ex Post: U.S. Inflation and Output Growth
Abstract:
Survey respondents who make point
predictions and histogram forecasts of macro-variables reveal both how
uncertain they believe the future to be, ex ante, as well
as their ex post performance. Macroeconomic forecasters
tend to be overconfident at horizons of a year or more, but overestimate
(i.e., are underconfident regarding) the uncertainty surrounding their
predictions at short horizons. Ex ante uncertainty
remains at a high level compared to the ex post measure
as the forecast horizon shortens. There is little evidence of a link
between individuals' ex post forecast accuracy and their
ex ante subjective assessments.
Journal: Journal of Business & Economic Statistics
Pages: 206-216
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.859618
File-URL: http://hdl.handle.net/10.1080/07350015.2013.859618
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:206-216
Template-Type: ReDIF-Article 1.0
Author-Name: H. J. Turtle
Author-X-Name-First: H. J.
Author-X-Name-Last: Turtle
Author-Name: Kainan Wang
Author-X-Name-First: Kainan
Author-X-Name-Last: Wang
Title: Modeling Conditional Covariances With Economic Information Instruments
Abstract:
We propose a new model for conditional
covariances based on predetermined idiosyncratic shocks as well as
macroeconomic and own information instruments. The
specification ensures positive definiteness by construction, is unique
within the class of linear functions for our covariance decomposition, and
yields a simple yet rich model of covariances. We introduce a property,
invariance to variate order, that assures estimation is
not impacted by a simple reordering of the variates in the system.
Simulation results using realized covariances show smaller mean absolute
errors (MAE) and root mean square errors (RMSE) for every element of the
covariance matrix relative to a comparably specified BEKK model with
own information instruments. We also find a smaller mean
absolute percentage error (MAPE) and root mean square percentage error
(RMSPE) for the entire covariance matrix. Supplementary materials for
practitioners as well as all Matlab code used in the article are available
online.
Journal: Journal of Business & Economic Statistics
Pages: 217-236
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.859078
File-URL: http://hdl.handle.net/10.1080/07350015.2013.859078
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:217-236
Template-Type: ReDIF-Article 1.0
Author-Name: Danyang Huang
Author-X-Name-First: Danyang
Author-X-Name-Last: Huang
Author-Name: Runze Li
Author-X-Name-First: Runze
Author-X-Name-Last: Li
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Feature Screening for Ultrahigh Dimensional Categorical Data With Applications
Abstract:
Ultrahigh dimensional data with both
categorical responses and categorical covariates are frequently
encountered in the analysis of big data, for which feature screening has
become an indispensable statistical tool. We propose a Pearson chi-square
based feature screening procedure for categorical response with ultrahigh
dimensional categorical covariates. The proposed procedure can be directly
applied to detect important interaction effects. We further show
that the proposed procedure possesses the screening consistency property in
the terminology of Fan and Lv (2008). We investigate the finite sample
performance of the proposed procedure by Monte Carlo simulation studies
and illustrate the proposed method by two empirical datasets.
Journal: Journal of Business & Economic Statistics
Pages: 237-244
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.863158
File-URL: http://hdl.handle.net/10.1080/07350015.2013.863158
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:237-244
Template-Type: ReDIF-Article 1.0
Author-Name: I. Sebastian Buhai
Author-X-Name-First: I. Sebastian
Author-X-Name-Last: Buhai
Author-Name: Coen N. Teulings
Author-X-Name-First: Coen N.
Author-X-Name-Last: Teulings
Title: Tenure Profiles and Efficient Separation in a Stochastic Productivity Model
Abstract:
We develop a theoretical model based on
efficient bargaining, where both log outside productivity and log
productivity in the current job follow a random walk. This setting allows
the application of real option theory. We derive the efficient worker-firm
separation rule. We show that wage data from completed job spells are
uninformative about the true tenure profile. The model is estimated on the
Panel Study of Income Dynamics. It fits the observed distribution of job
tenures well. Selection of favorable random walks can account for the
concavity in tenure profiles. About 80% of the estimated wage returns to
tenure are due to selectivity in the realized outside productivities.
Journal: Journal of Business & Economic Statistics
Pages: 245-258
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.866568
File-URL: http://hdl.handle.net/10.1080/07350015.2013.866568
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:245-258
Template-Type: ReDIF-Article 1.0
Author-Name: Mian Huang
Author-X-Name-First: Mian
Author-X-Name-Last: Huang
Author-Name: Runze Li
Author-X-Name-First: Runze
Author-X-Name-Last: Li
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Author-Name: Weixin Yao
Author-X-Name-First: Weixin
Author-X-Name-Last: Yao
Title: Estimating Mixture of Gaussian Processes by Kernel Smoothing
Abstract:
When functional data are not homogeneous,
for example, when there are multiple classes of functional curves in the
dataset, traditional estimation methods may fail. In this article, we
propose a new estimation procedure for the mixture of Gaussian processes,
to incorporate both functional and inhomogeneous properties of the data.
Our method can be viewed as a natural extension of high-dimensional normal
mixtures. However, the key difference is that smoothed structures are
imposed for both the mean and covariance functions. The model is shown to
be identifiable, and can be estimated efficiently by a combination of the
ideas from the expectation-maximization (EM) algorithm, kernel regression, and
functional principal component analysis. Our methodology is empirically
justified by Monte Carlo simulations and illustrated by an analysis of a
supermarket dataset.
Journal: Journal of Business & Economic Statistics
Pages: 259-270
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.868084
File-URL: http://hdl.handle.net/10.1080/07350015.2013.868084
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:259-270
Template-Type: ReDIF-Article 1.0
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Author-Name: Bernd Schwaab
Author-X-Name-First: Bernd
Author-X-Name-Last: Schwaab
Author-Name: Xin Zhang
Author-X-Name-First: Xin
Author-X-Name-Last: Zhang
Title: Conditional Euro Area Sovereign Default Risk
Abstract:
We propose an empirical framework to
assess the likelihood of joint and conditional sovereign default from
observed CDS prices. Our model is based on a dynamic
skewed-t distribution that captures all salient features
of the data, including skewed and heavy-tailed changes in the price of CDS
protection against sovereign default, as well as dynamic volatilities and
correlations that ensure that uncertainty and risk dependence can increase
in times of stress. We apply the framework to euro area sovereign CDS
spreads during the euro area debt crisis. Our results reveal significant
time-variation in distress dependence and spill-over effects for sovereign
default risk. We investigate market perceptions of joint and conditional
sovereign risk around announcements of Eurosystem asset purchase
programs, and document a strong impact on joint risk.
Journal: Journal of Business & Economic Statistics
Pages: 271-284
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.873540
File-URL: http://hdl.handle.net/10.1080/07350015.2013.873540
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:271-284
Template-Type: ReDIF-Article 1.0
Author-Name: Antonio F. Galvao
Author-X-Name-First: Antonio F.
Author-X-Name-Last: Galvao
Author-Name: Kengo Kato
Author-X-Name-First: Kengo
Author-X-Name-Last: Kato
Title: Estimation and Inference for Linear Panel Data Models Under Misspecification When Both n and T are Large
Abstract:
This article considers fixed effects (FE)
estimation for linear panel data models under possible model
misspecification when both the number of individuals, n,
and the number of time periods, T, are large. We first
clarify the probability limit of the FE estimator and argue that this
probability limit can be regarded as a pseudo-true parameter. We then
establish the asymptotic distributional properties of the FE estimator
around the pseudo-true parameter when n and
T jointly go to infinity. Notably, we show that the FE
estimator suffers from the incidental parameters bias of which the top
order is O(T^(-1)), and even after the incidental parameters bias is
completely removed, the rate of convergence of the FE estimator depends on
the degree of model misspecification and is either
(nT)^(-1/2) or n^(-1/2). Second, we establish asymptotically
valid inference on the (pseudo-true) parameter. Specifically, we derive
the asymptotic properties of the clustered covariance matrix (CCM)
estimator and the cross-section bootstrap, and show that they are robust
to model misspecification. This establishes a rigorous theoretical ground
for the use of the CCM estimator and the cross-section bootstrap when
model misspecification and the incidental parameters bias (in the
coefficient estimate) are present. We conduct Monte Carlo simulations to
evaluate the finite sample performance of the estimators and inference
methods, together with a simple application to the unemployment dynamics
in the U.S.
Journal: Journal of Business & Economic Statistics
Pages: 285-309
Issue: 2
Volume: 32
Year: 2014
Month: 4
X-DOI: 10.1080/07350015.2013.875473
File-URL: http://hdl.handle.net/10.1080/07350015.2013.875473
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:2:p:285-309
Template-Type: ReDIF-Article 1.0
Author-Name: Ulrich K. Müller
Author-X-Name-First: Ulrich K.
Author-X-Name-Last: Müller
Title: HAC Corrections for Strongly Autocorrelated Time Series
Abstract:
Applied work routinely relies on heteroscedasticity and autocorrelation
consistent (HAC) standard errors when conducting inference in a time
series setting. As is well known, however, these corrections perform
poorly in small samples under pronounced autocorrelations. In this
article, I first provide a review of popular methods to clarify the
reasons for this failure. I then derive inference that remains valid under
a specific form of strong dependence. In particular, I assume that the
long-run properties can be approximated by a stationary Gaussian AR(1)
model, with coefficient arbitrarily close to one. In this setting, I
derive tests that come close to maximizing a weighted average power
criterion. Small sample simulations show these tests to perform well,
including in a regression context.
Journal: Journal of Business & Economic Statistics
Pages: 311-322
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.931238
File-URL: http://hdl.handle.net/10.1080/07350015.2014.931238
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:311-322
Template-Type: ReDIF-Article 1.0
Author-Name: Nicholas M. Kiefer
Author-X-Name-First: Nicholas M.
Author-X-Name-Last: Kiefer
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 322-323
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.926816
File-URL: http://hdl.handle.net/10.1080/07350015.2014.926816
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:322-323
Template-Type: ReDIF-Article 1.0
Author-Name: Matias D. Cattaneo
Author-X-Name-First: Matias D.
Author-X-Name-Last: Cattaneo
Author-Name: Richard K. Crump
Author-X-Name-First: Richard K.
Author-X-Name-Last: Crump
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 324-329
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.928220
File-URL: http://hdl.handle.net/10.1080/07350015.2014.928220
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:324-329
Template-Type: ReDIF-Article 1.0
Author-Name: Yixiao Sun
Author-X-Name-First: Yixiao
Author-X-Name-Last: Sun
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 330-334
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.926817
File-URL: http://hdl.handle.net/10.1080/07350015.2014.926817
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:330-334
Template-Type: ReDIF-Article 1.0
Author-Name: Timothy J. Vogelsang
Author-X-Name-First: Timothy J.
Author-X-Name-Last: Vogelsang
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 334-338
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.926818
File-URL: http://hdl.handle.net/10.1080/07350015.2014.926818
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:334-338
Template-Type: ReDIF-Article 1.0
Author-Name: Ulrich K. Müller
Author-X-Name-First: Ulrich K.
Author-X-Name-Last: Müller
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 338-340
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.931769
File-URL: http://hdl.handle.net/10.1080/07350015.2014.931769
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:338-340
Template-Type: ReDIF-Article 1.0
Author-Name: Jan de Haan
Author-X-Name-First: Jan
Author-X-Name-Last: de Haan
Author-Name: Frances Krsinich
Author-X-Name-First: Frances
Author-X-Name-Last: Krsinich
Title: Scanner Data and the Treatment of Quality Change in Nonrevisable Price Indexes
Abstract:
The recently developed rolling year GEKS procedure makes maximum use of
all matches in the data to construct nonrevisable price indexes that are
approximately free from chain drift. A potential weakness is that
unmatched items are ignored. In this article we use imputation Törnqvist
price indexes as inputs into the rolling year GEKS procedure. These
indexes account for quality changes by imputing the "missing prices"
associated with new and disappearing items. Three imputation methods are
discussed. The first method makes explicit imputations using a hedonic
regression model which is estimated for each time period. The other two
methods make implicit imputations; they are based on time dummy hedonic
and time-product dummy regression models and are estimated on bilateral
pooled data. We present empirical evidence for New Zealand from scanner
data on eight consumer electronics products and find that accounting for
quality change can make a substantial difference.
Journal: Journal of Business & Economic Statistics
Pages: 341-358
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.880059
File-URL: http://hdl.handle.net/10.1080/07350015.2014.880059
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:341-358
Template-Type: ReDIF-Article 1.0
Author-Name: Mehmet Caner
Author-X-Name-First: Mehmet
Author-X-Name-Last: Caner
Author-Name: Xu Han
Author-X-Name-First: Xu
Author-X-Name-Last: Han
Title: Selecting the Correct Number of Factors in Approximate Factor Models: The Large Panel Case With Group Bridge Estimators
Abstract:
This article proposes a group bridge estimator to select the correct
number of factors in approximate factor models. It contributes to the
literature on shrinkage estimation and factor models by extending the
conventional bridge estimator from a single equation to a large panel
context. The proposed estimator can consistently estimate the factor
loadings of relevant factors and shrink the loadings of irrelevant factors
to zero with a probability approaching one. Hence, it provides a
consistent estimate for the number of factors. We also propose an
algorithm for the new estimator; Monte Carlo experiments show that our
algorithm converges reasonably fast and that our estimator has very good
performance in small samples. An empirical example is also presented based
on a commonly used U.S. macroeconomic dataset.
Journal: Journal of Business & Economic Statistics
Pages: 359-374
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.880349
File-URL: http://hdl.handle.net/10.1080/07350015.2014.880349
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:359-374
Template-Type: ReDIF-Article 1.0
Author-Name: Hang J. Kim
Author-X-Name-First: Hang J.
Author-X-Name-Last: Kim
Author-Name: Jerome P. Reiter
Author-X-Name-First: Jerome P.
Author-X-Name-Last: Reiter
Author-Name: Quanli Wang
Author-X-Name-First: Quanli
Author-X-Name-Last: Wang
Author-Name: Lawrence H. Cox
Author-X-Name-First: Lawrence H.
Author-X-Name-Last: Cox
Author-Name: Alan F. Karr
Author-X-Name-First: Alan F.
Author-X-Name-Last: Karr
Title: Multiple Imputation of Missing or Faulty Values Under Linear Constraints
Abstract:
Many statistical agencies, survey organizations, and research centers
collect data that suffer from item nonresponse and erroneous or
inconsistent values. These data may be required to satisfy linear
constraints, for example, bounds on individual variables and inequalities
for ratios or sums of variables. Often these constraints are designed to
identify faulty values, which then are blanked and imputed. The data also
may exhibit complex distributional features, including nonlinear
relationships and highly nonnormal distributions. We present a fully
Bayesian, joint model for modeling or imputing data with missing/blanked
values under linear constraints that (i) automatically incorporates the
constraints in inferences and imputations, and (ii) uses a flexible
Dirichlet process mixture of multivariate normal distributions to reflect
complex distributional features. Our strategy for estimation is to augment
the observed data with draws from a hypothetical population in which the
constraints are not present, thereby taking advantage of computationally
expedient methods for fitting mixture models. Missing/blanked items are
sampled from their posterior distribution using the Hit-and-Run sampler,
which guarantees that all imputations satisfy the constraints. We
illustrate the approach using manufacturing data from Colombia, examining
the potential to preserve joint distributions and a regression from the
plant productivity literature. Supplementary materials for this article
are available online.
Journal: Journal of Business & Economic Statistics
Pages: 375-386
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.885435
File-URL: http://hdl.handle.net/10.1080/07350015.2014.885435
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:375-386
Template-Type: ReDIF-Article 1.0
Author-Name: Ted Juhl
Author-X-Name-First: Ted
Author-X-Name-Last: Juhl
Title: A Nonparametric Test of the Predictive Regression Model
Abstract:
This article considers testing the significance of a regressor with a near
unit root in a predictive regression model. The procedures discussed in
this article are nonparametric, so one can test the significance of a
regressor without specifying a functional form. The results are used to
test the null hypothesis that the entire function takes the value of zero.
We show that the standardized test has a normal distribution regardless of
whether there is a near unit root in the regressor. This is in contrast to
tests based on linear regression for this model where tests have a
nonstandard limiting distribution that depends on nuisance parameters. Our
results have practical implications in testing the significance of a
regressor since there is no need to conduct pretests for a unit root in
the regressor and the same procedure can be used if the regressor has a
unit root or not. A Monte Carlo experiment explores the performance of the
test for various levels of persistence of the regressors and for various
linear and nonlinear alternatives. The test has superior performance
against certain nonlinear alternatives. An application of the test applied
to stock returns shows how the test can improve inference about
predictability.
Journal: Journal of Business & Economic Statistics
Pages: 387-394
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.887013
File-URL: http://hdl.handle.net/10.1080/07350015.2014.887013
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:387-394
Template-Type: ReDIF-Article 1.0
Author-Name: Stephen G. Donald
Author-X-Name-First: Stephen G.
Author-X-Name-Last: Donald
Author-Name: Yu-Chin Hsu
Author-X-Name-First: Yu-Chin
Author-X-Name-Last: Hsu
Author-Name: Robert P. Lieli
Author-X-Name-First: Robert P.
Author-X-Name-Last: Lieli
Title: Testing the Unconfoundedness Assumption via Inverse Probability Weighted Estimators of (L)ATT
Abstract:
We propose inverse probability weighted estimators for the local average
treatment effect (LATE) and the local average treatment effect for the
treated (LATT) under instrumental variable assumptions with covariates. We
show that these estimators are asymptotically normal and efficient. When
the (binary) instrument satisfies one-sided noncompliance, we propose a
Durbin-Wu-Hausman-type test of whether treatment assignment is
unconfounded conditional on some observables. The test is based on the
fact that under one-sided noncompliance LATT coincides with the average
treatment effect for the treated (ATT). We conduct Monte Carlo simulations
to demonstrate, among other things, that part of the theoretical
efficiency gain afforded by unconfoundedness in estimating ATT survives
pretesting. We illustrate the implementation of the test on data from
training programs administered under the Job Training Partnership Act in
the United States. This article has online supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 395-415
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.888290
File-URL: http://hdl.handle.net/10.1080/07350015.2014.888290
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:395-415
Template-Type: ReDIF-Article 1.0
Author-Name: Heejoon Han
Author-X-Name-First: Heejoon
Author-X-Name-Last: Han
Author-Name: Dennis Kristensen
Author-X-Name-First: Dennis
Author-X-Name-Last: Kristensen
Title: Asymptotic Theory for the QMLE in GARCH-X Models With Stationary and Nonstationary Covariates
Abstract:
This article investigates the asymptotic properties of the Gaussian
quasi-maximum-likelihood estimators (QMLEs) of the GARCH model augmented
by including an additional explanatory variable - the so-called GARCH-X
model. The additional covariate is allowed to exhibit any degree of
persistence as captured by its long-memory parameter
d_x; in particular, we allow for both
stationary and nonstationary covariates. We show that the QMLEs of the
parameters entering the volatility equation are consistent and
mixed-normally distributed in large samples. The convergence rates and
limiting distributions of the QMLEs depend on whether the regressor is
stationary or not. However, standard inferential tools for the parameters
are robust to the level of persistence of the regressor, with
t-statistics following standard normal distributions in
large samples irrespective of whether the regressor is stationary.
Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 416-429
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.897954
File-URL: http://hdl.handle.net/10.1080/07350015.2014.897954
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:416-429
Template-Type: ReDIF-Article 1.0
Author-Name: Drew D. Creal
Author-X-Name-First: Drew D.
Author-X-Name-Last: Creal
Author-Name: Robert B. Gramacy
Author-X-Name-First: Robert B.
Author-X-Name-Last: Gramacy
Author-Name: Ruey S. Tsay
Author-X-Name-First: Ruey S.
Author-X-Name-Last: Tsay
Title: Market-Based Credit Ratings
Abstract:
We present a methodology for rating, in real time, the creditworthiness of
public companies in the U.S. from the prices of traded assets. Our
approach uses asset pricing data to impute a term structure of risk
neutral survival functions or default probabilities. Firms are then
clustered into ratings categories based on their survival functions using
a functional clustering algorithm. This allows all public firms whose
assets are traded to be directly rated by market participants. For firms
whose assets are not traded, we show how they can be indirectly rated by
matching them to firms that are traded based on observable
characteristics. We also show how the resulting ratings can be used to
construct loss distributions for portfolios of bonds. Finally, we compare
our ratings to Standard & Poor's and find that, over the period 2005 to
2011, our ratings lead theirs for firms that ultimately default.
Journal: Journal of Business & Economic Statistics
Pages: 430-444
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.902763
File-URL: http://hdl.handle.net/10.1080/07350015.2014.902763
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:430-444
Template-Type: ReDIF-Article 1.0
Author-Name: Guoyu Guan
Author-X-Name-First: Guoyu
Author-X-Name-Last: Guan
Author-Name: Jianhua Guo
Author-X-Name-First: Jianhua
Author-X-Name-Last: Guo
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Varying Naïve Bayes Models With Applications to Classification of Chinese Text Documents
Abstract:
Document classification is an area of great importance for which many
classification methods have been developed. However, most of these methods
cannot generate time-dependent classification rules. Thus, they are not
the best choices for problems with time-varying structures. To address
this problem, we propose a varying naïve Bayes model, which is a natural
extension of the naïve Bayes model that allows for a time-dependent
classification rule. The method of kernel smoothing is developed for
parameter estimation and a BIC-type criterion is invented for feature
selection. Asymptotic theory is developed and numerical studies are
conducted. Finally, the proposed method is demonstrated on a real dataset,
which was generated by the Mayor Public Hotline of Changchun, the capital
city of Jilin Province in Northeast China.
Journal: Journal of Business & Economic Statistics
Pages: 445-456
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.903086
File-URL: http://hdl.handle.net/10.1080/07350015.2014.903086
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:445-456
Template-Type: ReDIF-Article 1.0
Author-Name: Bing-Yi Jing
Author-X-Name-First: Bing-Yi
Author-X-Name-Last: Jing
Author-Name: Zhi Liu
Author-X-Name-First: Zhi
Author-X-Name-Last: Liu
Author-Name: Xin-Bing Kong
Author-X-Name-First: Xin-Bing
Author-X-Name-Last: Kong
Title: On the Estimation of Integrated Volatility With Jumps and Microstructure Noise
Abstract:
In this article, we propose a nonparametric procedure to estimate the
integrated volatility of an Itô semimartingale in the presence of
jumps and microstructure noise. The estimator is based on a combination of
the preaveraging method and threshold technique, which serves to remove
microstructure noise and jumps, respectively. The estimator is shown to
work for both finite and infinite activity jumps. Furthermore, asymptotic
properties of the proposed estimator, such as consistency and a central
limit theorem, are established. Simulation results are given to evaluate
the performance of the proposed method in comparison with other
alternative methods.
Journal: Journal of Business & Economic Statistics
Pages: 457-467
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.906350
File-URL: http://hdl.handle.net/10.1080/07350015.2014.906350
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:457-467
Template-Type: ReDIF-Article 1.0
Author-Name: Manuel Wiesenfarth
Author-X-Name-First: Manuel
Author-X-Name-Last: Wiesenfarth
Author-Name: Carlos Matías Hisgen
Author-X-Name-First: Carlos Matías
Author-X-Name-Last: Hisgen
Author-Name: Thomas Kneib
Author-X-Name-First: Thomas
Author-X-Name-Last: Kneib
Author-Name: Carmen Cadarso-Suarez
Author-X-Name-First: Carmen
Author-X-Name-Last: Cadarso-Suarez
Title: Bayesian Nonparametric Instrumental Variables Regression Based on Penalized Splines and Dirichlet Process Mixtures
Abstract:
We propose a Bayesian nonparametric instrumental variable approach under
additive separability that allows us to correct for endogeneity bias in
regression models where the covariate effects enter with unknown
functional form. Bias correction relies on a simultaneous equations
specification with flexible modeling of the joint error distribution
implemented via a Dirichlet process mixture prior. Both the structural and
instrumental variable equation are specified in terms of additive
predictors comprising penalized splines for nonlinear effects of
continuous covariates. Inference is fully Bayesian, employing efficient
Markov chain Monte Carlo simulation techniques. The resulting posterior
samples not only provide point estimates but also allow us to
construct simultaneous credible bands for the nonparametric effects,
including data-driven smoothing parameter selection. In addition, improved
robustness properties are achieved due to the flexible error distribution
specification. Both these features are challenging in the classical
framework, making the Bayesian one advantageous. In simulations, we
investigate small-sample properties; an investigation of the effect of
class size on student performance in Israel illustrates the proposed
approach, which is implemented in the R package bayesIV.
Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 468-482
Issue: 3
Volume: 32
Year: 2014
Month: 7
X-DOI: 10.1080/07350015.2014.907092
File-URL: http://hdl.handle.net/10.1080/07350015.2014.907092
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:3:p:468-482
Template-Type: ReDIF-Article 1.0
Author-Name: Lucia Alessi
Author-X-Name-First: Lucia
Author-X-Name-Last: Alessi
Author-Name: Eric Ghysels
Author-X-Name-First: Eric
Author-X-Name-Last: Ghysels
Author-Name: Luca Onorante
Author-X-Name-First: Luca
Author-X-Name-Last: Onorante
Author-Name: Richard Peach
Author-X-Name-First: Richard
Author-X-Name-Last: Peach
Author-Name: Simon Potter
Author-X-Name-First: Simon
Author-X-Name-Last: Potter
Title: Central Bank Macroeconomic Forecasting During the Global Financial Crisis: The European Central Bank and Federal Reserve Bank of New York Experiences
Abstract:
This article documents macroeconomic forecasting during the global
financial crisis by two key central banks: the European Central Bank and
the Federal Reserve Bank of New York. The article is the result of a
collaborative effort between staff at the two institutions, allowing us to
study the time-stamped forecasts as they were made throughout the crisis.
The analysis does not exclusively focus on point forecast performance. It
also examines methodological contributions, including how financial market
data could have been incorporated into the forecasting process.
Journal: Journal of Business & Economic Statistics
Pages: 483-500
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.959124
File-URL: http://hdl.handle.net/10.1080/07350015.2014.959124
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:483-500
Template-Type: ReDIF-Article 1.0
Author-Name: G. Kenny
Author-X-Name-First: G.
Author-X-Name-Last: Kenny
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 500-504
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.956636
File-URL: http://hdl.handle.net/10.1080/07350015.2014.956636
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:500-504
Template-Type: ReDIF-Article 1.0
Author-Name: Chiara Scotti
Author-X-Name-First: Chiara
Author-X-Name-Last: Scotti
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 504-506
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.956873
File-URL: http://hdl.handle.net/10.1080/07350015.2014.956873
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:504-506
Template-Type: ReDIF-Article 1.0
Author-Name: Kirstin Hubrich
Author-X-Name-First: Kirstin
Author-X-Name-Last: Hubrich
Author-Name: Simone Manganelli
Author-X-Name-First: Simone
Author-X-Name-Last: Manganelli
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 506-509
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.956874
File-URL: http://hdl.handle.net/10.1080/07350015.2014.956874
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:506-509
Template-Type: ReDIF-Article 1.0
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 510-514
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.956875
File-URL: http://hdl.handle.net/10.1080/07350015.2014.956875
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:510-514
Template-Type: ReDIF-Article 1.0
Author-Name: Lucia Alessi
Author-X-Name-First: Lucia
Author-X-Name-Last: Alessi
Author-Name: Eric Ghysels
Author-X-Name-First: Eric
Author-X-Name-Last: Ghysels
Author-Name: Luca Onorante
Author-X-Name-First: Luca
Author-X-Name-Last: Onorante
Author-Name: Richard Peach
Author-X-Name-First: Richard
Author-X-Name-Last: Peach
Author-Name: Simon Potter
Author-X-Name-First: Simon
Author-X-Name-Last: Potter
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 514-515
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.958920
File-URL: http://hdl.handle.net/10.1080/07350015.2014.958920
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:514-515
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaodong Liu
Author-X-Name-First: Xiaodong
Author-X-Name-Last: Liu
Title: Identification and Efficient Estimation of Simultaneous Equations Network Models
Abstract:
This article considers identification and estimation of social network
models in a system of simultaneous equations. We show that, with or
without row-normalization of the social adjacency matrix, the network
model has different equilibrium implications, needs different
identification conditions, and requires different estimation strategies.
When the adjacency matrix is not row-normalized, the variation in the
Bonacich centrality across nodes in a network can be used as an IV to
identify social interaction effects and improve estimation efficiency. The
number of such IVs depends on the number of networks. When there are many
networks in the data, the proposed estimators may have an asymptotic bias
due to the presence of many IVs. We propose a bias-correction procedure
for the many-instrument bias. Simulation experiments show that the
bias-corrected estimators perform well in finite samples. We also provide
an empirical example to illustrate the proposed estimation procedure.
Journal: Journal of Business & Economic Statistics
Pages: 516-536
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.907093
File-URL: http://hdl.handle.net/10.1080/07350015.2014.907093
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:516-536
Template-Type: ReDIF-Article 1.0
Author-Name: Peter C. B. Phillips
Author-X-Name-First: Peter C. B.
Author-X-Name-Last: Phillips
Author-Name: Sainan Jin
Author-X-Name-First: Sainan
Author-X-Name-Last: Jin
Title: Testing the Martingale Hypothesis
Abstract:
We propose new tests of the martingale hypothesis based on generalized
versions of the Kolmogorov-Smirnov and Cramér-von Mises tests. The tests
are distribution-free and allow for a weak drift in the null model. The
methods do not require either smoothing parameters or bootstrap resampling
for their implementation and so are well suited to practical work. The
article develops limit theory for the tests under the null and shows that
the tests are consistent against a wide class of nonlinear, nonmartingale
processes. Simulations show that the tests have good finite sample
properties in comparison with other tests particularly under conditional
heteroscedasticity and mildly explosive alternatives. An empirical
application to major exchange rate data finds strong evidence in favor of
the martingale hypothesis, confirming much earlier research.
Journal: Journal of Business & Economic Statistics
Pages: 537-554
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.908780
File-URL: http://hdl.handle.net/10.1080/07350015.2014.908780
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:537-554
Template-Type: ReDIF-Article 1.0
Author-Name: Deniz Ozabaci
Author-X-Name-First: Deniz
Author-X-Name-Last: Ozabaci
Author-Name: Daniel J. Henderson
Author-X-Name-First: Daniel J.
Author-X-Name-Last: Henderson
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Title: Additive Nonparametric Regression in the Presence of Endogenous Regressors
Abstract:
In this article we consider nonparametric estimation of a structural
equation model under full additivity constraint. We propose estimators for
both the conditional mean and gradient which are consistent,
asymptotically normal, oracle efficient, and free from the curse of
dimensionality. Monte Carlo simulations support the asymptotic
developments. We employ a partially linear extension of our model to study
the relationship between child care and cognitive outcomes. Some of our
(average) results are consistent with the literature (e.g., negative
returns to child care when mothers have higher levels of education).
However, as our estimators allow for heterogeneity both across and within
groups, we are able to contradict many findings in the literature (e.g.,
we do not find any significant differences in returns between boys and
girls or for formal versus informal child care). Supplementary materials
for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 555-575
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.917590
File-URL: http://hdl.handle.net/10.1080/07350015.2014.917590
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:555-575
Template-Type: ReDIF-Article 1.0
Author-Name: Shangyu Xie
Author-X-Name-First: Shangyu
Author-X-Name-Last: Xie
Author-Name: Yong Zhou
Author-X-Name-First: Yong
Author-X-Name-Last: Zhou
Author-Name: Alan T. K. Wan
Author-X-Name-First: Alan T. K.
Author-X-Name-Last: Wan
Title: A Varying-Coefficient Expectile Model for Estimating Value at Risk
Abstract:
This article develops a nonparametric varying-coefficient approach for
modeling the expectile-based value at risk (EVaR). EVaR has an advantage
over the conventional quantile-based VaR (QVaR) of being more sensitive to
the magnitude of extreme losses. EVaR can also be used for calculating
QVaR and expected shortfall (ES) by exploiting the one-to-one mapping from
expectiles to quantiles, and the relationship between VaR and ES. Previous
studies on conditional EVaR estimation only considered parametric
autoregressive model set-ups, which account for the stochastic dynamics of
asset returns but ignore other exogenous economic and investment-related
factors. Our approach overcomes this drawback and allows expectiles to be
modeled directly using covariates that may be exogenous or lagged
dependent in a flexible way. Risk factors associated with profits and
losses can then be identified via the expectile regression at different
levels of prudentiality. We develop a local linear smoothing technique for
estimating the coefficient functions within an asymmetric least squares
minimization set-up, and establish the consistency and asymptotic
normality of the resultant estimator. To save computing time, we propose
to use a one-step weighted local least squares procedure to compute the
estimates. Our simulation results show that the computing advantage
afforded by this one-step procedure over full iteration is not compromised
by a deterioration in estimation accuracy. Real data examples are used to
illustrate our method. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 576-592
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.917979
File-URL: http://hdl.handle.net/10.1080/07350015.2014.917979
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:576-592
Template-Type: ReDIF-Article 1.0
Author-Name: Tingguo Zheng
Author-X-Name-First: Tingguo
Author-X-Name-Last: Zheng
Author-Name: Tao Song
Author-X-Name-First: Tao
Author-X-Name-Last: Song
Title: A Realized Stochastic Volatility Model With Box-Cox Transformation
Abstract:
This article presents a new class of realized stochastic volatility model
based on realized volatilities and returns jointly. We generalize the
traditionally used logarithm transformation of realized volatility to the
Box-Cox transformation, a more flexible parametric family of
transformations. A two-step maximum likelihood estimation procedure is
introduced to estimate this model on the basis of Koopman and Scharth
(2013). Simulation results show that the two-step estimator performs well,
and that a misspecified log transformation may lead to inaccurate parameter
estimation and excess skewness and kurtosis. Finally, an
empirical investigation on realized volatility measures and daily returns
is carried out for several stock indices.
Journal: Journal of Business & Economic Statistics
Pages: 593-605
Issue: 4
Volume: 32
Year: 2014
Month: 10
X-DOI: 10.1080/07350015.2014.918544
File-URL: http://hdl.handle.net/10.1080/07350015.2014.918544
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:32:y:2014:i:4:p:593-605
Template-Type: ReDIF-Article 1.0
Author-Name: Francis X. Diebold
Author-X-Name-First: Francis X.
Author-X-Name-Last: Diebold
Title: Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests
Abstract:
The Diebold-Mariano (DM) test was
intended for comparing forecasts; it has been, and remains, useful in that
regard. The DM test was
not intended for comparing models. Much of the large
ensuing literature, however, uses DM-type tests for
comparing models, in pseudo-out-of-sample environments. In that case,
simpler yet more compelling full-sample model comparison procedures exist;
they have been, and should continue to be, widely used. The hunch that
pseudo-out-of-sample analysis is somehow the "only," or "best," or even
necessarily a "good" way to provide insurance against in-sample
overfitting in model comparisons proves largely false. On the other hand,
pseudo-out-of-sample analysis remains useful for certain tasks, perhaps
most notably for providing information about comparative predictive
performance during particular historical episodes.
Journal: Journal of Business & Economic Statistics
Pages: 1-1
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.983236
File-URL: http://hdl.handle.net/10.1080/07350015.2014.983236
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:1-1
Template-Type: ReDIF-Article 1.0
Author-Name: Atsushi Inoue
Author-X-Name-First: Atsushi
Author-X-Name-Last: Inoue
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 9-11
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.969428
File-URL: http://hdl.handle.net/10.1080/07350015.2014.969428
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:9-11
Template-Type: ReDIF-Article 1.0
Author-Name: Jonathan H. Wright
Author-X-Name-First: Jonathan H.
Author-X-Name-Last: Wright
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 12-13
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.969429
File-URL: http://hdl.handle.net/10.1080/07350015.2014.969429
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:12-13
Template-Type: ReDIF-Article 1.0
Author-Name: Lutz Kilian
Author-X-Name-First: Lutz
Author-X-Name-Last: Kilian
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 13-17
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.969430
File-URL: http://hdl.handle.net/10.1080/07350015.2014.969430
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:13-17
Template-Type: ReDIF-Article 1.0
Author-Name: Peter Reinhard Hansen
Author-X-Name-First: Peter Reinhard
Author-X-Name-Last: Hansen
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 17-21
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.983601
File-URL: http://hdl.handle.net/10.1080/07350015.2014.983601
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:17-21
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 22-24
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.977445
File-URL: http://hdl.handle.net/10.1080/07350015.2014.977445
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:22-24
Template-Type: ReDIF-Article 1.0
Author-Name: Francis X. Diebold
Author-X-Name-First: Francis X.
Author-X-Name-Last: Diebold
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 24-24
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.983237
File-URL: http://hdl.handle.net/10.1080/07350015.2014.983237
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:24-24
Template-Type: ReDIF-Article 1.0
Author-Name: Luca Agnello
Author-X-Name-First: Luca
Author-X-Name-Last: Agnello
Author-Name: Vitor Castro
Author-X-Name-First: Vitor
Author-X-Name-Last: Castro
Author-Name: Ricardo M. Sousa
Author-X-Name-First: Ricardo M.
Author-X-Name-Last: Sousa
Title: Booms, Busts, and Normal Times in the Housing Market
Abstract:
We assess the existence of duration dependence in the likelihood of an end
in housing booms, busts, and normal times. Using data for 20 industrial
countries and a continuous-time Weibull duration model, we find evidence
of positive duration dependence suggesting that housing market cycles have
become longer over the last decades. Then, we extend the baseline Weibull
model and allow for the presence of a change-point in the duration
dependence parameter. We show that positive duration dependence is present
in booms and busts that last less than 26 quarters, but that does not seem
to be the case for longer phases of the housing market cycle. For normal
times, no evidence of change-points is found. Finally, the empirical
findings uncover positive duration dependence in housing market booms of
European and non-European countries and housing busts of European
countries. In addition, they reveal that while housing booms have similar
length in European and non-European countries, housing busts are typically
shorter in European countries.
Journal: Journal of Business & Economic Statistics
Pages: 25-45
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.918545
File-URL: http://hdl.handle.net/10.1080/07350015.2014.918545
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:25-45
Template-Type: ReDIF-Article 1.0
Author-Name: Quentin Giai Gianetto
Author-X-Name-First: Quentin
Author-X-Name-Last: Giai Gianetto
Author-Name: Hamdi Raïssi
Author-X-Name-First: Hamdi
Author-X-Name-Last: Raïssi
Title: Testing Instantaneous Causality in Presence of Nonconstant Unconditional Covariance
Abstract:
This article investigates the problem of testing instantaneous causality
between vector autoregressive (VAR) variables with time-varying
unconditional covariance. It is underlined that the standard test does not
control the Type I errors, while tests with White and heteroscedasticity
and autocorrelation consistent (HAC) corrections can suffer from a severe
loss of power when the covariance is not constant. Consequently, we propose a
modified test based on a bootstrap procedure. We illustrate the relevance
of the modified test through a simulation study. The tests considered in
this article are also compared by investigating the instantaneous
causality relations between U.S. macroeconomic variables.
Journal: Journal of Business & Economic Statistics
Pages: 46-53
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.920614
File-URL: http://hdl.handle.net/10.1080/07350015.2014.920614
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:46-53
Template-Type: ReDIF-Article 1.0
Author-Name: Frank A. Cowell
Author-X-Name-First: Frank A.
Author-X-Name-Last: Cowell
Author-Name: Russell Davidson
Author-X-Name-First: Russell
Author-X-Name-Last: Davidson
Author-Name: Emmanuel Flachaire
Author-X-Name-First: Emmanuel
Author-X-Name-Last: Flachaire
Title: Goodness of Fit: An Axiomatic Approach
Abstract:
An axiomatic approach is used to develop a one-parameter family of
measures of divergence between distributions. These measures can be used
to perform goodness-of-fit tests with good statistical properties.
Asymptotic theory shows that the test statistics have well-defined
limiting distributions which are, however, analytically intractable. A
parametric bootstrap procedure is proposed for implementation of the
tests. The procedure is shown to work very well in a set of simulation
experiments, and to compare favorably with other commonly used
goodness-of-fit tests. By varying the parameter of the statistic, one can
obtain information on how the distribution that generated a sample
diverges from the target family of distributions when the true
distribution does not belong to that family. An empirical application
analyzes a U.K. income dataset.
Journal: Journal of Business & Economic Statistics
Pages: 54-67
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.922470
File-URL: http://hdl.handle.net/10.1080/07350015.2014.922470
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:54-67
Template-Type: ReDIF-Article 1.0
Author-Name: Katja Ignatieva
Author-X-Name-First: Katja
Author-X-Name-Last: Ignatieva
Author-Name: Paulo Rodrigues
Author-X-Name-First: Paulo
Author-X-Name-Last: Rodrigues
Author-Name: Norman Seeger
Author-X-Name-First: Norman
Author-X-Name-Last: Seeger
Title: Empirical Analysis of Affine Versus Nonaffine Variance Specifications in Jump-Diffusion Models for Equity Indices
Abstract:
This article investigates several crucial issues that arise when modeling
equity returns with stochastic variance. (i) Does the model need to
include jumps even when using a nonaffine variance specification? We find
that jump models clearly outperform pure stochastic volatility models.
(ii) How do affine variance specifications perform when compared to
nonaffine models in a jump diffusion setup? We find that nonaffine
specifications outperform affine models, even after including jumps.
Journal: Journal of Business & Economic Statistics
Pages: 68-75
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.922471
File-URL: http://hdl.handle.net/10.1080/07350015.2014.922471
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:68-75
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Ronghua Luo
Author-X-Name-First: Ronghua
Author-X-Name-Last: Luo
Author-Name: Chih-Ling Tsai
Author-X-Name-First: Chih-Ling
Author-X-Name-Last: Tsai
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Author-Name: Yunhong Yang
Author-X-Name-First: Yunhong
Author-X-Name-Last: Yang
Title: Testing the Diagonality of a Large Covariance Matrix in a Regression Setting
Abstract:
In multivariate analysis, the covariance matrix associated with a set of
variables of interest (namely response variables) commonly contains
valuable information about the dataset. When the dimension of response
variables is considerably larger than the sample size, it is a nontrivial
task to assess whether there are linear relationships between the
variables. It is even more challenging to determine whether a set of
explanatory variables can explain those relationships. To this end, we
develop a bias-corrected test to examine the significance of the
off-diagonal elements of the residual covariance matrix after adjusting
for the contribution from explanatory variables. We show that the
resulting test is asymptotically normal. Monte Carlo studies and a
numerical example are presented to illustrate the performance of the
proposed test.
Journal: Journal of Business & Economic Statistics
Pages: 76-86
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.923317
File-URL: http://hdl.handle.net/10.1080/07350015.2014.923317
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:76-86
Template-Type: ReDIF-Article 1.0
Author-Name: Yigit Atilgan
Author-X-Name-First: Yigit
Author-X-Name-Last: Atilgan
Author-Name: Turan G. Bali
Author-X-Name-First: Turan G.
Author-X-Name-Last: Bali
Author-Name: K. Ozgur Demirtas
Author-X-Name-First: K. Ozgur
Author-X-Name-Last: Demirtas
Title: Implied Volatility Spreads and Expected Market Returns
Abstract:
This article investigates the intertemporal relation between volatility
spreads and expected returns on the aggregate stock market. We provide
evidence for a significantly negative link between volatility spreads and
expected returns at the daily and weekly frequencies. We argue that this
link is driven by the information flow from option markets to stock
markets. The documented relation is significantly stronger for the periods
during which (i) S&P 500 constituent firms announce their earnings; (ii)
cash flow and discount rate news are large in magnitude; and (iii)
consumer sentiment index takes extreme values. The intertemporal relation
remains strongly negative after controlling for conditional volatility,
variance risk premium, and macroeconomic variables. Moreover, a trading
strategy based on the intertemporal relation with volatility spreads has
higher portfolio returns compared to a passive strategy of investing in
the S&P 500 index, after transaction costs are taken into account.
Journal: Journal of Business & Economic Statistics
Pages: 87-101
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.923776
File-URL: http://hdl.handle.net/10.1080/07350015.2014.923776
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:87-101
Template-Type: ReDIF-Article 1.0
Author-Name: Maria Kalli
Author-X-Name-First: Maria
Author-X-Name-Last: Kalli
Author-Name: Jim Griffin
Author-X-Name-First: Jim
Author-X-Name-Last: Griffin
Title: Flexible Modeling of Dependence in Volatility Processes
Abstract:
This article proposes a novel stochastic volatility (SV) model that draws
from the existing literature on autoregressive SV models, aggregation of
autoregressive processes, and Bayesian nonparametric modeling to create an
SV model that can capture long-range dependence. The volatility process is
assumed to be the aggregate of autoregressive processes, where the
distribution of the autoregressive coefficients is modeled using a
flexible Bayesian approach. The model provides insight into the dynamic
properties of the volatility. An efficient algorithm is defined which uses
recently proposed adaptive Monte Carlo methods. The proposed model is
applied to the daily returns of stocks.
Journal: Journal of Business & Economic Statistics
Pages: 102-113
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.925457
File-URL: http://hdl.handle.net/10.1080/07350015.2014.925457
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:102-113
Template-Type: ReDIF-Article 1.0
Author-Name: Siem Jan Koopman
Author-X-Name-First: Siem Jan
Author-X-Name-Last: Koopman
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Author-Name: Marcel Scharth
Author-X-Name-First: Marcel
Author-X-Name-Last: Scharth
Title: Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State-Space Models
Abstract:
We propose a general likelihood evaluation method for nonlinear
non-Gaussian state-space models using the simulation-based method of
efficient importance sampling. We minimize the simulation effort by
replacing some key steps of the likelihood estimation procedure by
numerical integration. We refer to this method as numerically accelerated
importance sampling. We show that the likelihood function for models with
a high-dimensional state vector and a low-dimensional signal can be
evaluated more efficiently using the new method. We report many efficiency
gains in an extensive Monte Carlo study as well as in an empirical
application using a stochastic volatility model for U.S. stock returns
with multiple volatility factors. Supplementary materials for this article
are available online.
Journal: Journal of Business & Economic Statistics
Pages: 114-127
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.925807
File-URL: http://hdl.handle.net/10.1080/07350015.2014.925807
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:114-127
Template-Type: ReDIF-Article 1.0
Author-Name: Juan Lin
Author-X-Name-First: Juan
Author-X-Name-Last: Lin
Author-Name: Ximing Wu
Author-X-Name-First: Ximing
Author-X-Name-Last: Wu
Title: Smooth Tests of Copula Specifications
Abstract:
We present a family of smooth tests for the goodness of fit of
semiparametric multivariate copula models. The proposed tests are
distribution free and can be easily implemented. They are diagnostic and
constructive in the sense that when a null distribution is rejected, the
test provides useful pointers to alternative copula distributions. We then
propose a method of copula density construction, which can be viewed as a
multivariate extension of Efron and Tibshirani's approach. We further generalize our
methods to the semiparametric copula-based multivariate dynamic models. We
report extensive Monte Carlo simulations and three empirical examples to
illustrate the effectiveness and usefulness of our method.
Journal: Journal of Business & Economic Statistics
Pages: 128-143
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.932696
File-URL: http://hdl.handle.net/10.1080/07350015.2014.932696
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:128-143
Template-Type: ReDIF-Article 1.0
Author-Name: Sebastiano Manzan
Author-X-Name-First: Sebastiano
Author-X-Name-Last: Manzan
Title: Forecasting the Distribution of Economic Variables in a Data-Rich Environment
Abstract:
This article investigates the relevance of considering a large number of
macroeconomic indicators to forecast the complete distribution of a
variable. The baseline time series model is a semiparametric specification
based on the quantile autoregressive (QAR) model that assumes that the
quantiles depend on the lagged values of the variable. We then augment the
time series model with macroeconomic information from a large dataset by
including principal components or a subset of variables selected by LASSO.
We forecast the distribution of the h-month growth rate
for four economic variables from 1975 to 2011 and evaluate the forecast
accuracy relative to a stochastic volatility model using the quantile
score. The results for the output and employment measures indicate that
the multivariate models outperform the time series forecasts, in
particular at long horizons and in tails of the distribution, while for
the inflation variables the improved performance occurs mostly at the
6-month horizon. We also illustrate the practical relevance of predicting
the distribution by considering forecasts at three dates during the last
recession.
Journal: Journal of Business & Economic Statistics
Pages: 144-164
Issue: 1
Volume: 33
Year: 2015
Month: 1
X-DOI: 10.1080/07350015.2014.937436
File-URL: http://hdl.handle.net/10.1080/07350015.2014.937436
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:1:p:144-164
Template-Type: ReDIF-Article 1.0
Author-Name: Hohsuk Noh
Author-X-Name-First: Hohsuk
Author-X-Name-Last: Noh
Author-Name: Anouar El Ghouch
Author-X-Name-First: Anouar
Author-X-Name-Last: El Ghouch
Author-Name: Ingrid Van Keilegom
Author-X-Name-First: Ingrid
Author-X-Name-Last: Van Keilegom
Title: Semiparametric Conditional Quantile Estimation Through Copula-Based Multivariate Models
Abstract:
We consider a new approach in quantile regression modeling based on the
copula function that defines the dependence structure between the
variables of interest. The key idea of this approach is to rewrite the
characterization of a regression quantile in terms of a copula and
marginal distributions. After the copula and the marginal distributions
are estimated, the new estimator is obtained as the weighted quantile of
the response variable in the model. The proposed conditional estimator has
three main advantages: it applies to both iid and time series data, it is
automatically monotonic across quantiles, and, unlike other copula-based
methods, it can be directly applied to the multiple covariates case
without introducing any extra complications. We show the asymptotic
properties of our estimator when the copula is estimated by maximizing the
pseudo-log-likelihood and the margins are estimated nonparametrically
including the case where the copula family is misspecified. We also
present the finite sample performance of the estimator and illustrate the
usefulness of our proposal by an application to the historical
volatilities of Google and Yahoo.
Journal: Journal of Business & Economic Statistics
Pages: 167-178
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.926171
File-URL: http://hdl.handle.net/10.1080/07350015.2014.926171
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:167-178
Template-Type: ReDIF-Article 1.0
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Title: Causal Pitfalls in the Decomposition of Wage Gaps
Abstract:
The decomposition of gender or ethnic wage gaps into explained and
unexplained components (often with the aim of assessing labor market
discrimination) has been a major research agenda in empirical labor
economics. This article demonstrates that conventional decompositions, no
matter whether linear or nonparametric, are equivalent to assuming a
(probably too) simple model of mediation (aimed at assessing causal
mechanisms) and may therefore lack causal interpretability. The reason is
that decompositions typically control for post-birth variables that lie on
the causal pathway from gender/ethnicity (which are determined at or even
before birth) to wage but neglect potential endogeneity that may arise
from this approach. Based on the newer literature on mediation analysis,
we therefore provide more attractive identifying assumptions and discuss
nonparametric identification based on reweighting.
Journal: Journal of Business & Economic Statistics
Pages: 179-191
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.937437
File-URL: http://hdl.handle.net/10.1080/07350015.2014.937437
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:179-191
Template-Type: ReDIF-Article 1.0
Author-Name: Jin-Chuan Duan
Author-X-Name-First: Jin-Chuan
Author-X-Name-Last: Duan
Author-Name: Andras Fulop
Author-X-Name-First: Andras
Author-X-Name-Last: Fulop
Title: Density-Tempered Marginalized Sequential Monte Carlo Samplers
Abstract:
We propose a density-tempered marginalized sequential Monte Carlo (SMC)
sampler, a new class of samplers for full Bayesian inference of general
state-space models. The dynamic states are approximately marginalized out
using a particle filter, and the parameters are sampled via a sequential
Monte Carlo sampler over a density-tempered bridge between the prior and
the posterior. Our approach delivers exact draws from the joint posterior
of the parameters and the latent states for any given number of state
particles and is thus easily parallelizable in implementation. We also
build into the proposed method a device that can automatically select a
suitable number of state particles. Since the method incorporates sample
information in a smooth fashion, it delivers good performance in the
presence of outliers. We check the performance of the density-tempered SMC
algorithm using simulated data based on a linear Gaussian state-space
model with and without misspecification. We also apply it on real stock
prices using a GARCH-type model with microstructure noise.
Journal: Journal of Business & Economic Statistics
Pages: 192-202
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.940081
File-URL: http://hdl.handle.net/10.1080/07350015.2014.940081
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:192-202
Template-Type: ReDIF-Article 1.0
Author-Name: Yan Li
Author-X-Name-First: Yan
Author-X-Name-Last: Li
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Yuewu Xu
Author-X-Name-First: Yuewu
Author-X-Name-Last: Xu
Title: A Combined Approach to the Inference of Conditional Factor Models
Abstract:
This article develops a new methodology for estimating and testing
conditional factor models in finance. We propose a two-stage procedure
that naturally unifies the two existing approaches in the finance
literature--the parametric approach and the nonparametric approach. Our
combined approach possesses important advantages over both methods. Using
our two-stage combined estimator, we derive new test statistics for
investigating key hypotheses in the context of conditional factor models.
Our tests can be performed on a single asset or jointly across multiple
assets. We further propose a novel test to directly check whether the
parametric model used in our first stage is correctly specified.
Simulations indicate that our estimates and tests perform well in finite
samples. In our empirical analysis, we use our new method to examine the
performance of the conditional capital asset pricing model (CAPM), which
has generated controversial results in the recent asset-pricing
literature.
Journal: Journal of Business & Economic Statistics
Pages: 203-220
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.940082
File-URL: http://hdl.handle.net/10.1080/07350015.2014.940082
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:203-220
Template-Type: ReDIF-Article 1.0
Author-Name: Jushan Bai
Author-X-Name-First: Jushan
Author-X-Name-Last: Bai
Author-Name: Peng Wang
Author-X-Name-First: Peng
Author-X-Name-Last: Wang
Title: Identification and Bayesian Estimation of Dynamic Factor Models
Abstract:
We consider a set of minimal identification conditions for dynamic factor
models. These conditions have economic interpretations and require fewer
restrictions than the static factor framework. Under these restrictions, a
standard structural vector autoregression (SVAR) with measurement errors
can be embedded into a dynamic factor model. More generally, we also
consider overidentification restrictions to achieve efficiency. We discuss
general linear restrictions, either in the form of known factor loadings
or cross-equation restrictions. We further consider serially correlated
idiosyncratic errors with heterogeneous dynamics. A numerically stable
Bayesian algorithm for the dynamic factor model with general parameter
restrictions is constructed for estimation and inference. We show that a
square-root form of the Kalman filter improves robustness and accuracy
when sampling the latent factors. Confidence intervals (bands) for the
parameters of interest such as impulse responses are readily computed.
Similar identification conditions are also exploited for multilevel factor
models, and they allow us to study the "spill-over" effects of the shocks
arising from one group to another. Supplementary materials for technical
details are available online.
Journal: Journal of Business & Economic Statistics
Pages: 221-240
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.941467
File-URL: http://hdl.handle.net/10.1080/07350015.2014.941467
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:221-240
Template-Type: ReDIF-Article 1.0
Author-Name: Katarzyna Łasak
Author-X-Name-First: Katarzyna
Author-X-Name-Last: Łasak
Author-Name: Carlos Velasco
Author-X-Name-First: Carlos
Author-X-Name-Last: Velasco
Title: Fractional Cointegration Rank Estimation
Abstract:
This article considers cointegration rank estimation for a
p-dimensional fractional vector error correction model.
We propose a new two-step procedure that allows testing for further
long-run equilibrium relations with possibly different persistence levels.
The first step consists of estimating the parameters of the model under
the null hypothesis of the cointegration rank r = 1, 2,
..., p - 1. This step provides consistent estimates of
the order of fractional cointegration, the cointegration vectors, the
speed of adjustment to the equilibrium parameters and the common trends.
In the second step we carry out a sup-likelihood ratio test of
no-cointegration on the estimated p - r common trends that are not
cointegrated under the null.
The order of fractional cointegration is reestimated in the second step to
allow for new cointegration relationships with different memory. We
augment the error correction model in the second step to adapt to the
representation of the common trends estimated in the first step. The
critical values of the proposed tests depend only on the number of common
trends under the null, p - r, and on
the interval of the orders of fractional cointegration b
allowed in the estimation, but not on the order of fractional
cointegration of already identified relationships. Hence, this reduces the
set of simulations required to approximate the critical values, making
this procedure convenient for practical purposes. In a Monte Carlo study
we analyze the finite sample properties of our procedure and compare with
alternative methods. We finally apply these methods to study the term
structure of interest rates.
Journal: Journal of Business & Economic Statistics
Pages: 241-254
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.945589
File-URL: http://hdl.handle.net/10.1080/07350015.2014.945589
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:241-254
Template-Type: ReDIF-Article 1.0
Author-Name: Michael T. Belongia
Author-X-Name-First: Michael T.
Author-X-Name-Last: Belongia
Author-Name: Peter N. Ireland
Author-X-Name-First: Peter N.
Author-X-Name-Last: Ireland
Title: Interest Rates and Money in the Measurement of Monetary Policy
Abstract:
Over the last 25 years, a set of influential studies has placed
interest rates at the heart of analyses that interpret and evaluate
monetary policies. In light of this work, the Federal Reserve's recent
policy of "quantitative easing," with its goal of affecting the supply of
liquid assets, appears to be a radical break from standard practice.
Alternatively, one could posit that the monetary aggregates, when measured
properly, never lost their ability to explain aggregate fluctuations and,
for this reason, represent an important omission from standard models and
policy discussions. In this context, the new policy initiatives can be
characterized simply as conventional attempts to increase money growth.
This view is supported by evidence that superlative (Divisia) measures of
money often help in forecasting movements in key macroeconomic variables.
Moreover, the statistical fit of a structural vector autoregression
deteriorates significantly if such measures of money are excluded when
identifying monetary policy shocks. These results cast doubt on the
adequacy of conventional models that focus on interest rates alone. They
also highlight that all monetary disturbances have an important
"quantitative" component, which is captured by movements in a properly
measured monetary aggregate.
Journal: Journal of Business & Economic Statistics
Pages: 255-269
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.946132
File-URL: http://hdl.handle.net/10.1080/07350015.2014.946132
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:255-269
Template-Type: ReDIF-Article 1.0
Author-Name: Malte Knüppel
Author-X-Name-First: Malte
Author-X-Name-Last: Knüppel
Title: Evaluating the Calibration of Multi-Step-Ahead Density Forecasts Using Raw Moments
Abstract:
The evaluation of multi-step-ahead density forecasts is complicated by the
serial correlation of the corresponding probability integral transforms.
In the literature, three testing approaches can be found that take this
problem into account. However, these approaches rely on data-dependent
critical values, ignore important information and therefore lack power,
or suffer from size distortions even asymptotically. This article proposes
a new testing approach based on raw moments. It is extremely easy to
implement, uses standard critical values, can include all moments regarded
as important, and has correct asymptotic size. It is found to have good
size and power properties in finite samples if it is based on the
(standardized) probability integral transforms.
Journal: Journal of Business & Economic Statistics
Pages: 270-281
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.948175
File-URL: http://hdl.handle.net/10.1080/07350015.2014.948175
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:270-281
Template-Type: ReDIF-Article 1.0
Author-Name: Antonio Diez de Los Rios
Author-X-Name-First: Antonio
Author-X-Name-Last: Diez de Los Rios
Title: A New Linear Estimator for Gaussian Dynamic Term Structure Models
Abstract:
This article proposes a novel regression-based approach to the estimation
of Gaussian dynamic term structure models. This new estimator is an
asymptotic least-square estimator defined by the no-arbitrage conditions
upon which these models are built. Further, we note that our estimator
remains easy to compute and asymptotically efficient in a variety of
situations in which other recently proposed approaches might lose their
tractability. We provide an empirical application in the context of the
Canadian bond market.
Journal: Journal of Business & Economic Statistics
Pages: 282-295
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.948176
File-URL: http://hdl.handle.net/10.1080/07350015.2014.948176
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:282-295
Template-Type: ReDIF-Article 1.0
Author-Name: Zhibiao Zhao
Author-X-Name-First: Zhibiao
Author-X-Name-Last: Zhao
Title: Inference for Local Autocorrelations in Locally Stationary Models
Abstract:
For nonstationary processes, the time-varying correlation structure
provides useful insights into the underlying model dynamics. We study
estimation and inference for the local autocorrelation process in locally
stationary time series. Our constructed simultaneous confidence band can
be used to address important hypothesis testing problems, such as whether
the local autocorrelation process is indeed time-varying and whether the
local autocorrelation is zero. In particular, our result provides an
important generalization of the R function acf() to
locally stationary Gaussian processes. Simulation studies and two
empirical applications are developed. For the global temperature series,
we find that the local autocorrelations are time-varying and have a "V"
shape during 1910-1960. For the S&P 500 index, we conclude that the
returns satisfy the efficient-market hypothesis whereas the magnitudes of
returns show significant local autocorrelations.
Journal: Journal of Business & Economic Statistics
Pages: 296-306
Issue: 2
Volume: 33
Year: 2015
Month: 4
X-DOI: 10.1080/07350015.2014.948177
File-URL: http://hdl.handle.net/10.1080/07350015.2014.948177
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:2:p:296-306
Template-Type: ReDIF-Article 1.0
Author-Name: Gunnar Bårdsen
Author-X-Name-First: Gunnar
Author-X-Name-Last: Bårdsen
Author-Name: Luca Fanelli
Author-X-Name-First: Luca
Author-X-Name-Last: Fanelli
Title: Frequentist Evaluation of Small DSGE Models
Abstract:
This article proposes a new evaluation approach for the class of
small-scale "hybrid" new Keynesian dynamic stochastic general equilibrium
(NK-DSGE) models typically used in monetary policy and business cycle
analysis. The empirical assessment of the NK-DSGE model is based on a
conditional sequence of likelihood-based tests conducted in a vector
autoregressive (VAR) system, in which both the low- and high-frequency
implications of the model are addressed in a coherent framework. If the
low-frequency behavior of the original time series of the model can be
approximated by nonstationary processes, stationarity must be imposed by
removing the stochastic trends. This gives rise to a set of recoverable
unit roots/cointegration restrictions, in addition to the short-run
cross-equation restrictions. The procedure is based on the sequence
"LR1→LR2→LR3," where LR1 is the cointegration rank test, LR2 is
the cointegration matrix test, and LR3 is the cross-equation restrictions
test: LR2 is computed conditional on LR1 and LR3 is computed conditional
on LR2. The Type I errors of the three tests are set consistently with a
prefixed overall nominal significance level. A bootstrap analog of the
testing strategy is proposed in small samples. We show that the
information stemming from the individual tests can be used constructively
to uncover which features of the data are not captured by the theoretical
model and thus to rectify, when possible, the specification. We
investigate the empirical size properties of the proposed testing strategy
by a Monte Carlo experiment and show the empirical usefulness of our
approach by estimating and testing a monetary business cycle NK-DSGE model
using U.S. quarterly data. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 307-322
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.948724
File-URL: http://hdl.handle.net/10.1080/07350015.2014.948724
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:307-322
Template-Type: ReDIF-Article 1.0
Author-Name: Christoph Rothe
Author-X-Name-First: Christoph
Author-X-Name-Last: Rothe
Title: Decomposing the Composition Effect: The Role of Covariates in Determining Between-Group Differences in Economic Outcomes
Abstract:
In this article, we study the structure of the composition effect, which
is the part of the observed between-group difference in the distribution
of some economic outcome that can be explained by differences in the
distribution of covariates. Using results from copula theory, we derive a
new representation that contains three types of components: (i) the
"direct contribution" of each covariate due to between-group differences
in the respective marginal distributions, (ii) several "two-way" and
"higher-order interaction effects" due to the interplay between two or
more marginal distributions, and (iii) a "dependence effect" accounting
for between-group differences in dependence patterns among the covariates.
We show how these components can be estimated in practice, and use our
method to study the evolution of the wage distribution in the United
States between 1985 and 2005. We obtain some new and interesting empirical
findings. For example, our estimates suggest that the dependence effect
alone can explain about one-fifth of the increase in wage inequality over
that period (as measured by the difference between the 90% and the 10%
quantile).
Journal: Journal of Business & Economic Statistics
Pages: 323-337
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.948959
File-URL: http://hdl.handle.net/10.1080/07350015.2014.948959
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:323-337
Template-Type: ReDIF-Article 1.0
Author-Name: Christiane Baumeister
Author-X-Name-First: Christiane
Author-X-Name-Last: Baumeister
Author-Name: Lutz Kilian
Author-X-Name-First: Lutz
Author-X-Name-Last: Kilian
Title: Forecasting the Real Price of Oil in a Changing World: A Forecast Combination Approach
Abstract:
The U.S. Energy Information Administration (EIA) regularly publishes
monthly and quarterly forecasts of the price of crude oil for horizons up
to 2 years, which are widely used by practitioners. Traditionally,
such out-of-sample forecasts have been largely judgmental, making them
difficult to replicate and justify. An alternative is the use of real-time
econometric oil price forecasting models. We investigate the merits of
constructing combinations of six such models. Forecast combinations have
received little attention in the oil price forecasting literature to date.
We demonstrate that over the last 20 years suitably constructed
real-time forecast combinations would have been systematically more
accurate than the no-change forecast at horizons up to 6 quarters or 18
months. The MSPE reductions may be as high as 12% and directional accuracy
as high as 72%. The gains in accuracy are robust over time. In contrast,
the EIA oil price forecasts not only tend to be less accurate than
no-change forecasts, but are much less accurate than our preferred
forecast combination. Moreover, including EIA forecasts in the forecast
combination systematically lowers the accuracy of the combination
forecast. We conclude that suitably constructed forecast combinations
should replace traditional judgmental forecasts of the price of oil.
Journal: Journal of Business & Economic Statistics
Pages: 338-351
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.949342
File-URL: http://hdl.handle.net/10.1080/07350015.2014.949342
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:338-351
Template-Type: ReDIF-Article 1.0
Author-Name: Pragya Sur
Author-X-Name-First: Pragya
Author-X-Name-Last: Sur
Author-Name: Galit Shmueli
Author-X-Name-First: Galit
Author-X-Name-Last: Shmueli
Author-Name: Smarajit Bose
Author-X-Name-First: Smarajit
Author-X-Name-Last: Bose
Author-Name: Paromita Dubey
Author-X-Name-First: Paromita
Author-X-Name-Last: Dubey
Title: Modeling Bimodal Discrete Data Using Conway-Maxwell-Poisson Mixture Models
Abstract:
Bimodal truncated count distributions are frequently observed in aggregate
survey data and in user ratings when respondents are mixed in their
opinion. They also arise in censored count data, where the highest
category might create an additional mode. Modeling bimodal behavior in
discrete data is useful for various purposes, from comparing shapes of
different samples (or survey questions) to predicting future ratings by
new raters. The Poisson distribution is the most common distribution for
fitting count data and can be modified to achieve mixtures of truncated
Poisson distributions. However, it is suitable only for modeling
equidispersed distributions and is limited in its ability to capture
bimodality. The Conway-Maxwell-Poisson (CMP) distribution is a
two-parameter generalization of the Poisson distribution that allows for
over- and underdispersion. In this work, we propose a mixture of CMPs for
capturing a wide range of truncated discrete data, which can exhibit
unimodal and bimodal behavior. We present methods for estimating the
parameters of a mixture of two CMP distributions using an EM approach. Our
approach introduces a special two-step optimization within the M step to
estimate multiple parameters. We examine computational and theoretical
issues. The methods are illustrated for modeling ordered rating data as
well as truncated count data, using simulated and real examples.
Journal: Journal of Business & Economic Statistics
Pages: 352-365
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.949343
File-URL: http://hdl.handle.net/10.1080/07350015.2014.949343
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:352-365
Template-Type: ReDIF-Article 1.0
Author-Name: Frank Schorfheide
Author-X-Name-First: Frank
Author-X-Name-Last: Schorfheide
Author-Name: Dongho Song
Author-X-Name-First: Dongho
Author-X-Name-Last: Song
Title: Real-Time Forecasting With a Mixed-Frequency VAR
Abstract:
This article develops a vector autoregression (VAR) for time series that
are observed at mixed frequencies--quarterly and monthly. The model is
cast in state-space form and estimated with Bayesian methods under a
Minnesota-style prior. We show how to evaluate the marginal data density
to implement a data-driven hyperparameter selection. Using a real-time
dataset, we evaluate forecasts from the mixed-frequency VAR and compare
them to forecasts from a standard quarterly-frequency VAR and from MIDAS
regressions. We document the extent to which information that becomes
available within the quarter improves the forecasts in real time. This
article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 366-380
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.954707
File-URL: http://hdl.handle.net/10.1080/07350015.2014.954707
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:366-380
Template-Type: ReDIF-Article 1.0
Author-Name: Jiahan Li
Author-X-Name-First: Jiahan
Author-X-Name-Last: Li
Title: Sparse and Stable Portfolio Selection With Parameter Uncertainty
Abstract:
A number of alternative mean-variance portfolio strategies have been
recently proposed to improve the empirical performance of the classic
Markowitz mean-variance framework. Designed as remedies for parameter
uncertainty and estimation errors in portfolio selection problems, these
alternative portfolio strategies deliver substantially better
out-of-sample performance. In this article, we first show how to solve a
general portfolio selection problem in a linear regression framework. Then
we propose to reduce the estimation risk of expected returns and the
variance-covariance matrix of asset returns by imposing additional
constraints on the portfolio weights. With results from linear regression
models, we show that portfolio weights derived from new approaches enjoy
two favorable properties: sparsity and stability. Moreover, we present
insights into these new approaches as well as their connections to
alternative strategies in the literature. Four empirical studies show that the
proposed strategies have better out-of-sample performance and lower
turnover than many other strategies, especially when the estimation risk
is large.
Journal: Journal of Business & Economic Statistics
Pages: 381-392
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.954708
File-URL: http://hdl.handle.net/10.1080/07350015.2014.954708
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:381-392
Template-Type: ReDIF-Article 1.0
Author-Name: Tae-Hwy Lee
Author-X-Name-First: Tae-Hwy
Author-X-Name-Last: Lee
Author-Name: Yundong Tu
Author-X-Name-First: Yundong
Author-X-Name-Last: Tu
Author-Name: Aman Ullah
Author-X-Name-First: Aman
Author-X-Name-Last: Ullah
Title: Forecasting Equity Premium: Global Historical Average Versus Local Historical Average and Constraints
Abstract:
The equity premium, return on equity minus return on risk-free asset, is
expected to be positive. We consider imposing such a positivity constraint
on the local historical average (LHA) in a nonparametric kernel regression
framework. The approach is also extended to the semiparametric single-index model
when multiple predictors are used. We construct the constrained LHA
estimator via an indicator function which operates as "model-selection"
between the unconstrained LHA and the bound of the constraint (zero for
the positivity constraint). We smooth the indicator function by bagging,
which operates as "model-averaging" and yields a combined forecast of
unconstrained LHA forecasts and the bound of the constraint. The local
combining weights are determined by the probability that the constraint is
binding. Asymptotic properties of the constrained LHA estimators without
and with bagging are established, which show how the positivity constraint
and bagging can help reduce the asymptotic variance and mean squared
errors. Monte Carlo simulations are conducted to show the finite sample
behavior of the asymptotic properties. In predicting U.S. equity premium,
we show that substantial nonlinearity can be captured by LHA and that the
local positivity constraint can improve out-of-sample prediction of the
equity premium.
Journal: Journal of Business & Economic Statistics
Pages: 393-402
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.955174
File-URL: http://hdl.handle.net/10.1080/07350015.2014.955174
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:393-402
Template-Type: ReDIF-Article 1.0
Author-Name: Nikolay Gospodinov
Author-X-Name-First: Nikolay
Author-X-Name-Last: Gospodinov
Author-Name: Serena Ng
Author-X-Name-First: Serena
Author-X-Name-Last: Ng
Title: Minimum Distance Estimation of Possibly Noninvertible Moving Average Models
Abstract:
This article considers estimation of moving average (MA) models with
non-Gaussian errors. Information in higher order cumulants allows
identification of the parameters without imposing invertibility. By
allowing for an unbounded parameter space, the generalized method of
moments estimator of the MA(1) model is consistent at the classical root-T
rate and asymptotically normal when the MA root is inside, outside,
and on the unit circle. For more general models where the dependence of
the cumulants on the model parameters is analytically intractable, we
consider simulation-based estimators with two features. First, in addition
to an autoregressive model, new auxiliary regressions that exploit
information from the second and higher order moments of the data are
considered. Second, the errors used to simulate the model are drawn from a
flexible functional form to accommodate a large class of distributions
with non-Gaussian features. The proposed simulation estimators are also
asymptotically normally distributed without imposing the assumption of
invertibility. In the application considered, there is overwhelming
evidence of noninvertibility in the Fama-French portfolio returns.
Journal: Journal of Business & Economic Statistics
Pages: 403-417
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.955175
File-URL: http://hdl.handle.net/10.1080/07350015.2014.955175
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:403-417
Template-Type: ReDIF-Article 1.0
Author-Name: Julian Thimme
Author-X-Name-First: Julian
Author-X-Name-Last: Thimme
Author-Name: Clemens Völkert
Author-X-Name-First: Clemens
Author-X-Name-Last: Völkert
Title: Ambiguity in the Cross-Section of Expected Returns: An Empirical Assessment
Abstract:
This article estimates and tests the smooth ambiguity model of Klibanoff,
Marinacci, and Mukerji based on stock market data. We introduce a novel
methodology to estimate the conditional expectation, which characterizes
the impact of a decision maker's ambiguity attitude on asset prices. Our
point estimates of the ambiguity parameter are between 25 and 60, whereas
our risk aversion estimates are considerably lower. The substantial
difference indicates that market participants are ambiguity averse.
Furthermore, we evaluate whether ambiguity aversion helps explain the
cross-section of expected returns. Compared with Epstein and Zin
preferences, we find that incorporating ambiguity into the decision model
improves the fit to the data while keeping relative risk aversion at more
reasonable levels. Supplementary materials for this article are available
online.
Journal: Journal of Business & Economic Statistics
Pages: 418-429
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.958230
File-URL: http://hdl.handle.net/10.1080/07350015.2014.958230
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:418-429
Template-Type: ReDIF-Article 1.0
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: Rethinking the Univariate Approach to Panel Unit Root Testing: Using Covariates to Resolve the Incidental Trend Problem
Abstract:
In an influential article, Hansen showed that covariate augmentation can
lead to substantial power gains when compared to univariate tests. In this
article, we ask whether this result also extends to the panel data context. The
answer turns out to be yes, which is perhaps not that surprising. What is
surprising, however, is the extent of the power gain, which is shown to
more than outweigh the well-known power loss in the presence of incidental
trends. That is, the covariates have an order effect on the neighborhood
around unity for which local asymptotic power is negligible.
Journal: Journal of Business & Economic Statistics
Pages: 430-443
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.962697
File-URL: http://hdl.handle.net/10.1080/07350015.2014.962697
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:430-443
Template-Type: ReDIF-Article 1.0
Author-Name: Yeonwoo Rho
Author-X-Name-First: Yeonwoo
Author-X-Name-Last: Rho
Author-Name: Xiaofeng Shao
Author-X-Name-First: Xiaofeng
Author-X-Name-Last: Shao
Title: Inference for Time Series Regression Models With Weakly Dependent and Heteroscedastic Errors
Abstract:
Motivated by the need to assess the significance of the trend in some
macroeconomic series, this article considers inference of a parameter in
parametric trend functions when the errors exhibit certain degrees of
nonstationarity with changing unconditional variances. We adopt the
recently developed self-normalized approach to avoid the difficulty
involved in the estimation of the asymptotic variance of the ordinary
least-squares estimator. The limiting distribution of the self-normalized
quantity is nonpivotal but can be consistently approximated by using the
wild bootstrap, which is not consistent in general without studentization.
Numerical simulation demonstrates favorable coverage properties of the
proposed method in comparison with alternative ones. The U.S. nominal
wages series is analyzed to illustrate the finite sample performance. Some
technical details are included in the online supplemental material.
Journal: Journal of Business & Economic Statistics
Pages: 444-457
Issue: 3
Volume: 33
Year: 2015
Month: 7
X-DOI: 10.1080/07350015.2014.962698
File-URL: http://hdl.handle.net/10.1080/07350015.2014.962698
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:3:p:444-457
Template-Type: ReDIF-Article 1.0
Author-Name: Michael W. Robbins
Author-X-Name-First: Michael W.
Author-X-Name-Last: Robbins
Author-Name: Thomas J. Fisher
Author-X-Name-First: Thomas J.
Author-X-Name-Last: Fisher
Title: Cross-Correlation Matrices for Tests of Independence and Causality Between Two Multivariate Time Series
Abstract:
An often-studied problem in time series analysis is that of testing for
the independence of two (potentially multivariate) time series. Toeplitz
matrices have demonstrated utility for the related setting of time series
goodness-of-fit testing--ergo, herein, we extend those concepts by
defining a nontrivial block Toeplitz matrix for use in the setting of
independence testing. We propose test statistics based on the trace of the
square of the matrix and determinant of the matrix; these statistics are
connected to one another as well as to known statistics previously proposed
in the literature. Furthermore, the log of the determinant is argued to
relate to a likelihood ratio test and is proven to be more powerful than
other tests that are asymptotically equivalent under the null hypothesis.
Additionally, matrix-based tests are presented for the purpose of
inferring the location or direction of the causality existing between the
two series. A simulation study is provided to explore the efficacy of the
proposed methodology--the methods are shown to offer improvement over
existing techniques, which include the famous Granger causality test.
Finally, data examples involving U.S. inflation, trade volume, and
exchange rates are given. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 459-473
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.962699
File-URL: http://hdl.handle.net/10.1080/07350015.2014.962699
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:459-473
Template-Type: ReDIF-Article 1.0
Author-Name: Michal Kolesár
Author-X-Name-First: Michal
Author-X-Name-Last: Kolesár
Author-Name: Raj Chetty
Author-X-Name-First: Raj
Author-X-Name-Last: Chetty
Author-Name: John Friedman
Author-X-Name-First: John
Author-X-Name-Last: Friedman
Author-Name: Edward Glaeser
Author-X-Name-First: Edward
Author-X-Name-Last: Glaeser
Author-Name: Guido W. Imbens
Author-X-Name-First: Guido W.
Author-X-Name-Last: Imbens
Title: Identification and Inference With Many Invalid Instruments
Abstract:
We study estimation and inference in settings where the interest is in the
effect of a potentially endogenous regressor on some outcome. To address
the endogeneity, we exploit the presence of additional variables. Like
conventional instrumental variables, these variables are correlated with
the endogenous regressor. However, unlike conventional instrumental
variables, they also have direct effects on the outcome, and thus are
"invalid" instruments. Our novel identifying assumption is that the direct
effects of these invalid instruments are uncorrelated with the effects of
the instruments on the endogenous regressor. We show that in this case the
limited-information-maximum-likelihood (liml) estimator is no longer
consistent, but that a modification of the bias-corrected
two-stage-least-squares (tsls) estimator is consistent. We also show that
conventional tests for over-identifying restrictions, adapted to the many
instruments setting, can be used to test for the presence of these direct
effects. We recommend that empirical researchers carry out such tests and
compare estimates based on liml and the modified version of bias-corrected
tsls. We illustrate in the context of two applications that such practice
can be illuminating, and that our novel identifying assumption has
substantive empirical content.
Journal: Journal of Business & Economic Statistics
Pages: 474-484
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.978175
File-URL: http://hdl.handle.net/10.1080/07350015.2014.978175
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:474-484
Template-Type: ReDIF-Article 1.0
Author-Name: Jason Abrevaya
Author-X-Name-First: Jason
Author-X-Name-Last: Abrevaya
Author-Name: Yu-Chin Hsu
Author-X-Name-First: Yu-Chin
Author-X-Name-Last: Hsu
Author-Name: Robert P. Lieli
Author-X-Name-First: Robert P.
Author-X-Name-Last: Lieli
Title: Estimating Conditional Average Treatment Effects
Abstract:
We consider a functional parameter called the conditional average
treatment effect (CATE), designed to capture the heterogeneity of a
treatment effect across subpopulations when the unconfoundedness
assumption applies. In contrast to quantile regressions, the
subpopulations of interest are defined in terms of the possible values of
a set of continuous covariates rather than the quantiles of the potential
outcome distributions. We show that the CATE parameter is
nonparametrically identified under unconfoundedness and propose inverse
probability weighted estimators for it. Under regularity conditions, some
of which are standard and some are new in the literature, we show
(pointwise) consistency and asymptotic normality of a fully nonparametric
and a semiparametric estimator. We apply our methods to estimate the
average effect of a first-time mother's smoking during pregnancy on the
baby's birth weight as a function of the mother's age. A robust
qualitative finding is that the expected effect becomes stronger (more
negative) for older mothers.
Journal: Journal of Business & Economic Statistics
Pages: 485-505
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.975555
File-URL: http://hdl.handle.net/10.1080/07350015.2014.975555
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:485-505
Template-Type: ReDIF-Article 1.0
Author-Name: Xun Lu
Author-X-Name-First: Xun
Author-X-Name-Last: Lu
Title: A Covariate Selection Criterion for Estimation of Treatment Effects
Abstract:
We study how to select or combine estimators of the average treatment
effect (ATE) and the average treatment effect on the treated (ATT) in the
presence of multiple sets of covariates. We consider two cases: (1) all
sets of covariates satisfy the unconfoundedness assumption and (2) some
sets of covariates violate the unconfoundedness assumption locally. For
both cases, we propose a data-driven covariate selection criterion (CSC)
to minimize the asymptotic mean squared errors (AMSEs). Based on our CSC,
we propose new average estimators of ATE and ATT, which include the
selected estimators based on a single set of covariates as a special case.
We derive the asymptotic distributions of our new estimators and propose
how to construct valid confidence intervals. Our Monte Carlo simulations
show that in finite samples, our new average estimators achieve
substantial efficiency gains over the estimators based on a single set of
covariates. We apply our new estimators to study the impact of inherited
control on firm performance.
Journal: Journal of Business & Economic Statistics
Pages: 506-522
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.982755
File-URL: http://hdl.handle.net/10.1080/07350015.2014.982755
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:506-522
Template-Type: ReDIF-Article 1.0
Author-Name: Xuan Chen
Author-X-Name-First: Xuan
Author-X-Name-Last: Chen
Author-Name: Carlos A. Flores
Author-X-Name-First: Carlos A.
Author-X-Name-Last: Flores
Title: Bounds on Treatment Effects in the Presence of Sample Selection and Noncompliance: The Wage Effects of Job Corps
Abstract:
Randomized and natural experiments are commonly used in economics and
other social science fields to estimate the effect of programs and
interventions. Even when employing experimental data, assessing the impact
of a treatment is often complicated by the presence of sample selection
(outcomes are only observed for a selected group) and noncompliance (some
treatment group individuals do not receive the treatment while some
control individuals do). We address both of these identification problems
simultaneously and derive nonparametric bounds for average treatment
effects within a principal stratification framework. We employ these
bounds to empirically assess the wage effects of Job Corps (JC), the most
comprehensive and largest federally funded job training program for
disadvantaged youth in the United States. Our results strongly suggest
positive average effects of JC on wages for individuals who comply with
their treatment assignment and would be employed whether or not they
enrolled in JC (the "always-employed compliers"). Under relatively weak
monotonicity and mean dominance assumptions, we find that this average
effect is between 5.7% and 13.9% 4 years after randomization, and
between 7.7% and 17.5% for non-Hispanics. Our results are consistent with
larger effects of JC on wages than those found without adjusting for
noncompliance.
Journal: Journal of Business & Economic Statistics
Pages: 523-540
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.975229
File-URL: http://hdl.handle.net/10.1080/07350015.2014.975229
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:523-540
Template-Type: ReDIF-Article 1.0
Author-Name: Yongli Zhang
Author-X-Name-First: Yongli
Author-X-Name-Last: Zhang
Author-Name: Xiaotong Shen
Author-X-Name-First: Xiaotong
Author-X-Name-Last: Shen
Title: Adaptive Modeling Procedure Selection by Data Perturbation
Abstract:
Many procedures have been developed to deal with the high-dimensional
problem that is emerging in various business and economics areas. To
evaluate and compare these procedures, modeling uncertainty caused by
model selection and parameter estimation has to be assessed and integrated
into a modeling process. To do this, a data perturbation method estimates
the modeling uncertainty inherent in a selection process by perturbing
the data. Critical to data perturbation is the size of perturbation, as
the perturbed data should resemble the original dataset. To account for
the modeling uncertainty, we derive the optimal size of perturbation,
which adapts to the data, the model space, and other relevant factors in
the context of linear regression. On this basis, we develop an adaptive
data-perturbation method that, unlike its nonadaptive counterpart,
performs well in different situations. This leads to a data-adaptive model
selection method. Both theoretical and numerical analysis suggest that the
data-adaptive model selection method adapts to distinct situations in that
it yields consistent model selection and optimal prediction, without
knowing which situation exists a priori. The proposed method is applied to
real data from the commodity market and outperforms its competitors in
terms of price forecasting accuracy.
Journal: Journal of Business & Economic Statistics
Pages: 541-551
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.965307
File-URL: http://hdl.handle.net/10.1080/07350015.2014.965307
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:541-551
Template-Type: ReDIF-Article 1.0
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Author-Name: Wai Keung Li
Author-X-Name-First: Wai Keung
Author-X-Name-Last: Li
Title: A New Pearson-Type QMLE for Conditionally Heteroscedastic Models
Abstract:
This article proposes a novel Pearson-type quasi-maximum likelihood
estimator (QMLE) of GARCH(p, q) models.
Unlike the existing Gaussian QMLE, Laplacian QMLE, generalized
non-Gaussian QMLE, or LAD estimator, our Pearsonian QMLE (PQMLE) captures
not only heavy-tailed but also skewed innovations. Under strict
stationarity and some weak moment conditions, the strong consistency and
asymptotic normality of the PQMLE are obtained. With no further efforts,
the PQMLE can be applied to other conditionally heteroscedastic models. A
simulation study is carried out to assess the performance of the PQMLE.
Two applications to four major stock indexes and two exchange rates
further highlight the importance of our new method. Heavy-tailed and
skewed innovations are often observed together in practice, and the PQMLE
now gives us a systematic way to capture these two coexisting features.
Journal: Journal of Business & Economic Statistics
Pages: 552-565
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.977446
File-URL: http://hdl.handle.net/10.1080/07350015.2014.977446
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:552-565
Template-Type: ReDIF-Article 1.0
Author-Name: Chang-Jin Kim
Author-X-Name-First: Chang-Jin
Author-X-Name-Last: Kim
Author-Name: Jaeho Kim
Author-X-Name-First: Jaeho
Author-X-Name-Last: Kim
Title: Bayesian Inference in Regime-Switching ARMA Models With Absorbing States: The Dynamics of the Ex-Ante Real Interest Rate Under Regime Shifts
Abstract:
One goal of this article is to develop an efficient Metropolis-Hastings
(MH) algorithm for estimating an ARMA model with a regime-switching mean,
by designing a new efficient proposal distribution for the
regime-indicator variable. Unlike the existing algorithm, our algorithm
can achieve reasonably fast convergence to the posterior distribution even
when the latent regime-indicator variable is highly persistent or when
there exist absorbing states. Another goal is to appropriately investigate
the dynamics of the latent ex-ante real interest rate (EARR) in the
presence of structural breaks, by employing the econometric tool
developed. We show that excluding the theory-implied moving-average terms
may understate the persistence of the observed EARR dynamics. Our
empirical results suggest that, even though we rule out the possibility of
a unit root in the EARR, it may be more persistent and volatile than has
been documented in some of the literature.
Journal: Journal of Business & Economic Statistics
Pages: 566-578
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.979995
File-URL: http://hdl.handle.net/10.1080/07350015.2014.979995
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:566-578
Template-Type: ReDIF-Article 1.0
Author-Name: A. S. Hurn
Author-X-Name-First: A. S.
Author-X-Name-Last: Hurn
Author-Name: K. A. Lindsay
Author-X-Name-First: K. A.
Author-X-Name-Last: Lindsay
Author-Name: A. J. McClelland
Author-X-Name-First: A. J.
Author-X-Name-Last: McClelland
Title: Estimating the Parameters of Stochastic Volatility Models Using Option Price Data
Abstract:
This article describes a maximum likelihood method for estimating the
parameters of the standard square-root stochastic volatility model and a
variant of the model that includes jumps in equity prices. The model is
fitted to data on the S&P 500 Index and the prices of vanilla options
written on the index, for the period 1990 to 2011. The method is able to
estimate both the parameters of the physical measure (associated with the
index) and the parameters of the risk-neutral measure (associated with the
options), including the volatility and jump risk premia. The estimation is
implemented using a particle filter whose efficacy is demonstrated under
simulation. The computational load of this estimation method, which
previously has been prohibitive, is managed by the effective use of
parallel computing using graphics processing units (GPUs). The empirical
results indicate that the parameters of the models are reliably estimated
and consistent with values reported in previous work. In particular, both
the volatility risk premium and the jump risk premium are found to be
significant.
Journal: Journal of Business & Economic Statistics
Pages: 579-594
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.981634
File-URL: http://hdl.handle.net/10.1080/07350015.2014.981634
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:579-594
Template-Type: ReDIF-Article 1.0
Author-Name: Yin Liao
Author-X-Name-First: Yin
Author-X-Name-Last: Liao
Author-Name: John Stachurski
Author-X-Name-First: John
Author-X-Name-Last: Stachurski
Title: Simulation-Based Density Estimation for Time Series Using Covariate Data
Abstract:
This article proposes a simulation-based density estimation technique for
time series that exploits information found in covariate data. The method
can be paired with a large range of parametric models used in time series
estimation. We derive asymptotic properties of the estimator and
illustrate attractive finite sample properties for a range of well-known
econometric and financial applications.
Journal: Journal of Business & Economic Statistics
Pages: 595-606
Issue: 4
Volume: 33
Year: 2015
Month: 10
X-DOI: 10.1080/07350015.2014.982247
File-URL: http://hdl.handle.net/10.1080/07350015.2014.982247
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:33:y:2015:i:4:p:595-606
Template-Type: ReDIF-Article 1.0
Author-Name: Rasmus Tangsgaard Varneskov
Author-X-Name-First: Rasmus Tangsgaard
Author-X-Name-Last: Varneskov
Title: Flat-Top Realized Kernel Estimation of Quadratic Covariation With Nonsynchronous and Noisy Asset Prices
Abstract:
This article develops a general multivariate additive noise model for
synchronized asset prices and provides a multivariate extension of the
generalized flat-top realized kernel estimators, analyzed earlier by
Varneskov (2014), to estimate its quadratic covariation. The additive
noise model allows for α-mixing dependent exogenous noise, random
sampling, and an endogenous noise component that encompasses
synchronization errors, lead-lag relations, and diurnal
heteroscedasticity. The various components may exhibit polynomially
decaying autocovariances. In this setting, the class of estimators
considered is consistent, asymptotically unbiased, and mixed Gaussian at
the optimal rate of convergence, n-super-1/4. A simple
finite sample correction based on projections of symmetric matrices
ensures positive definiteness without altering the asymptotic properties
of the estimators. It thereby guarantees the existence of nonlinear
transformations of the estimated covariance matrix such as correlations
and realized betas, which inherit the asymptotic properties from the
flat-top realized kernel estimators. An empirically motivated simulation
study assesses the choice of sampling scheme and projection rule, and it
shows that flat-top realized kernels have a desirable combination of
robustness and efficiency relative to competing estimators. Last, an
empirical analysis of signal detection and out-of-sample predictions for a
portfolio of six stocks of varying size and liquidity illustrates the use
and properties of the new estimators.
Journal: Journal of Business & Economic Statistics
Pages: 1-22
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2015.1005622
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1005622
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:1-22
Template-Type: ReDIF-Article 1.0
Author-Name: Eric Hillebrand
Author-X-Name-First: Eric
Author-X-Name-Last: Hillebrand
Author-Name: Marcelo C. Medeiros
Author-X-Name-First: Marcelo C.
Author-X-Name-Last: Medeiros
Title: Nonlinearity, Breaks, and Long-Range Dependence in Time-Series Models
Abstract:
We study the simultaneous occurrence of long memory and nonlinear effects,
such as parameter changes and threshold effects, in time series models and
apply our modeling framework to daily realized measures of integrated
variance. We develop asymptotic theory for parameter estimation and
propose two model-building procedures. The methodology is applied to
stocks of the Dow Jones Industrial Average during the period 2000 to 2009.
We find strong evidence of nonlinear effects in financial volatility. An
out-of-sample analysis shows that modeling these effects can improve
forecast performance. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 23-41
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2014.985828
File-URL: http://hdl.handle.net/10.1080/07350015.2014.985828
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:23-41
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew Hodge
Author-X-Name-First: Andrew
Author-X-Name-Last: Hodge
Author-Name: Sriram Shankar
Author-X-Name-First: Sriram
Author-X-Name-Last: Shankar
Title: Single-Variable Threshold Effects in Ordered Response Models With an Application to Estimating the Income-Happiness Gradient
Abstract:
This short article extends well-known threshold models to the ordered
response setting. We consider the case where the sample is endogenously
split to estimate regime-dependent coefficients for one variable of
interest, while keeping the other coefficients and auxiliary parameters
constant across the threshold. We use Monte Carlo methods to examine the
behavior of the model. In addition, we derive the formulae for the partial
effects associated with the model. We apply our threshold model to the
relationship between income and self-reported happiness using data drawn
from the U.S. General Social Survey. While the findings suggest the
presence of a threshold in the income-happiness gradient at approximately
U.S. $76,000, no evidence is found in support of a satiation point.
Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 42-52
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2014.991785
File-URL: http://hdl.handle.net/10.1080/07350015.2014.991785
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:42-52
Template-Type: ReDIF-Article 1.0
Author-Name: Duk B. Jun
Author-X-Name-First: Duk B.
Author-X-Name-Last: Jun
Author-Name: Jihwan Moon
Author-X-Name-First: Jihwan
Author-X-Name-Last: Moon
Author-Name: Sungho Park
Author-X-Name-First: Sungho
Author-X-Name-Last: Park
Title: Temporal Disaggregation: Methods, Information Loss, and Diagnostics
Abstract:
This research provides a generalized framework to disaggregate
lower-frequency time series and evaluate the disaggregation performance.
The proposed framework combines two models in separate stages: a linear
regression model to exploit related independent variables in the first
stage and a state--space model to disaggregate the residual from the
regression in the second stage. For the purpose of providing a set of
practical criteria for assessing the disaggregation performance, we
measure the information loss that occurs during temporal aggregation while
examining what effects take place when aggregating data. To validate the
proposed framework, we implement Monte Carlo simulations and provide two
empirical studies. Supplementary materials for this article are available
online.
Journal: Journal of Business & Economic Statistics
Pages: 53-61
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2014.995797
File-URL: http://hdl.handle.net/10.1080/07350015.2014.995797
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:53-61
Template-Type: ReDIF-Article 1.0
Author-Name: Melody Lo
Author-X-Name-First: Melody
Author-X-Name-Last: Lo
Author-Name: Yong Bao
Author-X-Name-First: Yong
Author-X-Name-Last: Bao
Title: Are Overall Journal Rankings a Good Mapping for Article Quality in Specialty Fields?
Abstract:
Overall journal rankings, which are generated with sample articles in
different research fields, are commonly used to measure the research
productivity of academic economists. In this article, we investigate a
growing concern in the profession that the use of the overall journal
rankings to evaluate scholars’ relative research productivity may
exhibit a downward bias toward researchers in some specialty fields if
their respective field journals are under-ranked in the overall journals
rankings. To address this concern, we constructed new journal rankings
based on the intellectual influence of research in 8 specialty fields
using a sample consisting of 26,401 articles published across 60 economics
journals from 1998 to 2007. We made various comparisons between the newly
constructed journal rankings in specialty fields and the traditional
overall journal ranking. Our results show that the overall journal ranking
provides a reasonably good mapping for article quality in specialty
fields. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 62-67
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2014.995798
File-URL: http://hdl.handle.net/10.1080/07350015.2014.995798
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:62-67
Template-Type: ReDIF-Article 1.0
Author-Name: Dong Li
Author-X-Name-First: Dong
Author-X-Name-Last: Li
Author-Name: Shiqing Ling
Author-X-Name-First: Shiqing
Author-X-Name-Last: Ling
Author-Name: Rongmao Zhang
Author-X-Name-First: Rongmao
Author-X-Name-Last: Zhang
Title: On a Threshold Double Autoregressive Model
Abstract:
This article first proposes a score-based test for a double autoregressive
model against a threshold double autoregressive (AR) model. It is an
asymptotically distribution-free test and is easy to implement in
practice. The article further studies the quasi-maximum likelihood
estimation of a threshold double autoregressive model. It is shown that
the estimated threshold is n-consistent and converges
weakly to a functional of a two-sided compound Poisson process and the
remaining parameters are asymptotically normal. Our results include the
asymptotic theory of the estimator for threshold AR models with
autoregressive conditional heteroscedastic (ARCH) errors and threshold
ARCH models as special cases, each of which is also new in the literature.
Two portmanteau-type statistics are also derived for checking the adequacy
of the fitted model when either the error is nonnormal or the threshold is
unknown. Simulation studies are conducted to assess the performance of the
score-based test and the estimator in finite samples. The results are
illustrated with an application to the weekly closing prices of the Hang
Seng Index. This article also includes the weak convergence of a
score-marked empirical process on the space under an
α-mixing assumption, which is of independent interest.
Journal: Journal of Business & Economic Statistics
Pages: 68-80
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2014.1001028
File-URL: http://hdl.handle.net/10.1080/07350015.2014.1001028
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:68-80
Template-Type: ReDIF-Article 1.0
Author-Name: Yohei Yamamoto
Author-X-Name-First: Yohei
Author-X-Name-Last: Yamamoto
Title: Forecasting With Nonspurious Factors in U.S. Macroeconomic Time Series
Abstract:
This study examines the practical implications of the fact that structural
changes in factor loadings can produce spurious factors (or irrelevant
factors) in forecasting exercises. These spurious factors can induce an
overfitting problem in factor-augmented forecasting models. To address
this concern, we propose a method to estimate nonspurious factors by
identifying the set of response variables that have no structural changes
in their factor loadings. Our theoretical results show that the obtained
set may include a fraction of unstable response variables. However, the
fraction is small enough that the original factors can still be identified
and estimated consistently. Moreover, using this approach, we find that a
significant portion of 132 U.S. macroeconomic time series have
structural changes in their factor loadings. Although traditional
principal components provide eight or more factors, there are
significantly fewer nonspurious factors. The forecasts using the
nonspurious factors can significantly improve out-of-sample performance.
Journal: Journal of Business & Economic Statistics
Pages: 81-106
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2015.1004071
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1004071
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:81-106
Template-Type: ReDIF-Article 1.0
Author-Name: Amanda Kowalski
Author-X-Name-First: Amanda
Author-X-Name-Last: Kowalski
Title: Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care
Abstract:
Efforts to control medical care costs depend critically on how individuals
respond to prices. I estimate the price elasticity of expenditure on
medical care using a censored quantile instrumental variable (CQIV)
estimator. CQIV allows estimates to vary across the conditional
expenditure distribution, relaxes traditional censored model assumptions,
and addresses endogeneity with an instrumental variable. My instrumental
variable strategy uses a family member’s injury to induce variation
in an individual’s own price. Across the conditional deciles of the
expenditure distribution, I find elasticities that vary from −0.76
to −1.49, which are an order of magnitude larger than previous
estimates. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 107-117
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2015.1004072
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1004072
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:107-117
Template-Type: ReDIF-Article 1.0
Author-Name: Massimiliano Marcellino
Author-X-Name-First: Massimiliano
Author-X-Name-Last: Marcellino
Author-Name: Mario Porqueddu
Author-X-Name-First: Mario
Author-X-Name-Last: Porqueddu
Author-Name: Fabrizio Venditti
Author-X-Name-First: Fabrizio
Author-X-Name-Last: Venditti
Title: Short-Term GDP Forecasting With a Mixed-Frequency Dynamic Factor Model With Stochastic Volatility
Abstract:
In this article, we develop a mixed frequency dynamic factor model in
which the disturbances of both the latent common factor and of the
idiosyncratic components have time-varying stochastic volatilities. We use
the model to investigate business cycle dynamics in the euro area and
present three sets of empirical results. First, we evaluate the impact of
macroeconomic releases on point and density forecast accuracy and on the
width of forecast intervals. Second, we show how our setup allows us to
make a probabilistic assessment of the contribution of releases to forecast
revisions. Third, we examine point and density out-of-sample forecast
accuracy. We find that introducing stochastic volatility in the model
contributes to an improvement in both point and density forecast accuracy.
Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 118-127
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2015.1006773
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1006773
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:118-127
Template-Type: ReDIF-Article 1.0
Author-Name: P. Giudici
Author-X-Name-First: P.
Author-X-Name-Last: Giudici
Author-Name: A. Spelta
Author-X-Name-First: A.
Author-X-Name-Last: Spelta
Title: Graphical Network Models for International Financial Flows
Abstract:
The late-2000s financial crisis stressed the need to understand the world
financial system as a network of countries, where cross-border financial
linkages play a fundamental role in the spread of systemic risks.
Financial network models, which take into account the complex
interrelationships between countries, seem to be an appropriate tool in
this context. To improve the statistical performance of financial network
models, we propose to generate them by means of multivariate graphical
models. We then introduce Bayesian graphical models, which can take model
uncertainty into account, and dynamic Bayesian graphical models, which
provide a convenient framework to model temporal cross-border data,
decomposing the model into autoregressive and contemporaneous networks.
The article shows how the application of the proposed models to the Bank
for International Settlements locational banking statistics allows the
identification of four distinct groups of countries that can be
considered central in systemic risk contagion.
Journal: Journal of Business & Economic Statistics
Pages: 128-138
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2015.1017643
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1017643
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:128-138
Template-Type: ReDIF-Article 1.0
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Author-Name: Michael Lechner
Author-X-Name-First: Michael
Author-X-Name-Last: Lechner
Author-Name: Giovanni Mellace
Author-X-Name-First: Giovanni
Author-X-Name-Last: Mellace
Title: The Finite Sample Performance of Estimators for Mediation Analysis Under Sequential Conditional Independence
Abstract:
Using a comprehensive simulation study based on empirical data, this
article investigates the finite sample properties of different classes of
parametric and semiparametric estimators of (natural) direct and indirect
causal effects used in mediation analysis under sequential conditional
independence assumptions. The estimators are based on regression, inverse
probability weighting, and combinations thereof. Our simulation design
uses a large population of Swiss jobseekers and considers variations of
several features of the data-generating process (DGP) and the
implementation of the estimators that are of practical relevance. We find
that no estimator performs uniformly best (in terms of root mean squared
error) in all simulations. Overall, so-called
“g-computation” dominates. However, differences between
estimators are often (but not always) minor in the various setups and the
relative performance of the methods often (but not always) varies with the
features of the DGP.
Journal: Journal of Business & Economic Statistics
Pages: 139-160
Issue: 1
Volume: 34
Year: 2016
Month: 1
X-DOI: 10.1080/07350015.2015.1017644
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1017644
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:1:p:139-160
Template-Type: ReDIF-Article 1.0
Author-Name: Sermin Gungor
Author-X-Name-First: Sermin
Author-X-Name-Last: Gungor
Author-Name: Richard Luger
Author-X-Name-First: Richard
Author-X-Name-Last: Luger
Title: Multivariate Tests of Mean-Variance Efficiency and Spanning With a Large Number of Assets and Time-Varying Covariances
Abstract:
We develop a finite-sample procedure to test the mean-variance efficiency
and spanning hypotheses, without imposing any parametric assumptions on
the distribution of model disturbances. In so doing, we provide an exact
distribution-free method to test uniform linear restrictions in
multivariate linear regression models. The framework allows for unknown
forms of nonnormalities as well as time-varying conditional variances and
covariances among the model disturbances. We derive exact bounds on the
null distribution of joint F statistics to deal with the
presence of nuisance parameters, and we show how to implement the
resulting generalized nonparametric bounds tests with Monte Carlo
resampling techniques. In sharp contrast to the usual tests that are not
even computable when the number of test assets is too large, the power of
the proposed test procedure potentially increases along both the time and
cross-sectional dimensions.
Journal: Journal of Business & Economic Statistics
Pages: 161-175
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1019510
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1019510
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:161-175
Template-Type: ReDIF-Article 1.0
Author-Name: Brendan Kline
Author-X-Name-First: Brendan
Author-X-Name-Last: Kline
Title: Identification of the Direction of a Causal Effect by Instrumental Variables
Abstract:
This article provides a strategy to identify the existence and direction
of a causal effect in a generalized nonparametric and nonseparable model
identified by instrumental variables. The causal effect concerns how the
outcome depends on the endogenous treatment variable. The outcome
variable, treatment variable, other explanatory variables, and the
instrumental variable can be essentially any combination of continuous,
discrete, or “other” variables. In particular, it is not
necessary to have any continuous variables, none of the variables need to
have large support, and the instrument can be binary even if the
corresponding endogenous treatment variable and/or outcome is continuous.
The outcome can be mismeasured or interval-measured, and the endogenous
treatment variable need not even be observed. The identification results
are constructive, and can be empirically implemented using standard
estimation results.
Journal: Journal of Business & Economic Statistics
Pages: 176-184
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1021925
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1021925
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:176-184
Template-Type: ReDIF-Article 1.0
Author-Name: Donna Feir
Author-X-Name-First: Donna
Author-X-Name-Last: Feir
Author-Name: Thomas Lemieux
Author-X-Name-First: Thomas
Author-X-Name-Last: Lemieux
Author-Name: Vadim Marmer
Author-X-Name-First: Vadim
Author-X-Name-Last: Marmer
Title: Weak Identification in Fuzzy Regression Discontinuity Designs
Abstract:
In fuzzy regression discontinuity (FRD) designs, the treatment effect is
identified through a discontinuity in the conditional probability of
treatment assignment. We show that when identification is weak (i.e., when
the discontinuity is of a small magnitude), the usual
t-test based on the FRD estimator and its standard error
suffers from asymptotic size distortions as in a standard instrumental
variables setting. This problem can be especially severe in the FRD
setting since only observations close to the discontinuity are useful for
estimating the treatment effect. To eliminate those size distortions, we
propose a modified t-statistic that uses a
null-restricted version of the standard error of the FRD estimator. Simple
and asymptotically valid confidence sets for the treatment effect can also
be constructed using this null-restricted standard error. An extension
to testing for constancy of the regression discontinuity effect across
covariates is also discussed. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 185-196
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1024836
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1024836
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:185-196
Template-Type: ReDIF-Article 1.0
Author-Name: Angela Vossmeyer
Author-X-Name-First: Angela
Author-X-Name-Last: Vossmeyer
Title: Sample Selection and Treatment Effect Estimation of Lender of Last Resort Policies
Abstract:
This article develops a framework for estimating multivariate treatment
effect models in the presence of sample selection. The methodology deals
with several important issues prevalent in policy and program evaluation,
including application and approval stages, nonrandom treatment assignment,
endogeneity, and discrete outcomes. This article presents a
computationally efficient estimation algorithm and techniques for model
comparison and treatment effects. The framework is applied to evaluate the
effectiveness of bank recapitalization programs and their ability to
resuscitate the financial system. The analysis of lender of last resort
(LOLR) policies is not only complicated due to econometric challenges, but
also because regulator data are not easily obtainable. Motivated by these
difficulties, this article constructs a novel bank-level dataset and
employs the new methodology to jointly model a bank’s decision to
apply for assistance, the LOLR’s decision to approve or decline the
assistance, and the bank’s performance following the disbursements.
The article offers practical estimation tools to unveil new answers to
important regulatory and policy questions.
Journal: Journal of Business & Economic Statistics
Pages: 197-212
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1024837
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1024837
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:197-212
Template-Type: ReDIF-Article 1.0
Author-Name: Minya Xu
Author-X-Name-First: Minya
Author-X-Name-Last: Xu
Author-Name: Ping-Shou Zhong
Author-X-Name-First: Ping-Shou
Author-X-Name-Last: Zhong
Author-Name: Wei Wang
Author-X-Name-First: Wei
Author-X-Name-Last: Wang
Title: Detecting Variance Change-Points for Blocked Time Series and Dependent Panel Data
Abstract:
This article proposes a class of weighted differences of averages (WDA)
statistics to test and estimate possible change-points in variance for
time series with weakly dependent blocks and dependent panel data without
specific distributional assumptions. We derive the asymptotic
distributions of the test statistics for testing the existence of a single
variance change-point under the null and local alternatives. We also study
the consistency of the change-point estimator. Within the proposed class
of the WDA test statistics, a standardized WDA test is shown to have the
best consistency rate and is recommended for practical use. An iterative
binary searching procedure is suggested for estimating the locations of
possible multiple change-points in variance, whose consistency is also
established. Simulation studies are conducted to compare detection power
and the number of false rejections of the proposed procedure to those of a
cumulative sum (CUSUM)-based test and a likelihood ratio-based test.
Finally, we apply the proposed method to a stock index dataset and an
unemployment rate dataset. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 213-226
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1026438
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1026438
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:213-226
Template-Type: ReDIF-Article 1.0
Author-Name: Jason Parker
Author-X-Name-First: Jason
Author-X-Name-Last: Parker
Author-Name: Donggyu Sul
Author-X-Name-First: Donggyu
Author-X-Name-Last: Sul
Title: Identification of Unknown Common Factors: Leaders and Followers
Abstract:
This article makes the following contributions. First, it develops
a new criterion for identifying whether or not a particular time series
variable is a common factor in the conventional approximate factor model.
Second, by modeling observed factors as a set of potential factors to be
identified, this article reveals how to easily pin down the factor without
performing a large number of estimations. This allows the researcher to
check whether or not each individual in the panel is the underlying common
factor and, from there, identify which individuals best represent the
factor space by using a new clustering mechanism. Asymptotically, the
developed procedure correctly identifies the factor when
N and T jointly approach infinity. The
procedure is shown to be quite effective in the finite sample by means of
Monte Carlo simulation. The procedure is then applied to an empirical
example, demonstrating that the newly developed method identifies the
unknown common factors accurately.
Journal: Journal of Business & Economic Statistics
Pages: 227-239
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1026439
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1026439
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:227-239
Template-Type: ReDIF-Article 1.0
Author-Name: Bertrand Candelon
Author-X-Name-First: Bertrand
Author-X-Name-Last: Candelon
Author-Name: Sessi Tokpavi
Author-X-Name-First: Sessi
Author-X-Name-Last: Tokpavi
Title: A Nonparametric Test for Granger Causality in Distribution With Application to Financial Contagion
Abstract:
This article introduces a kernel-based nonparametric inferential procedure
to test for Granger causality in distribution. This test is a multivariate
extension of the kernel-based Granger causality test in tail event. The
main advantage of this test is its ability to examine a large number of
lags, with higher-order lags discounted. In addition, our test is highly
flexible because it can be used to identify Granger causality in specific
regions on the distribution supports, such as the center or tails. We
prove that the test converges asymptotically to a standard Gaussian
distribution under the null hypothesis and thus is free of parameter
estimation uncertainty. Monte Carlo simulations illustrate the excellent
small-sample size and power properties of the test. This new test is
applied to a set of European stock markets to analyze spillovers during
the recent European crisis and to distinguish contagion from
interdependence effects.
Journal: Journal of Business & Economic Statistics
Pages: 240-253
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1026774
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1026774
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:240-253
Template-Type: ReDIF-Article 1.0
Author-Name: Luc Bauwens
Author-X-Name-First: Luc
Author-X-Name-Last: Bauwens
Author-Name: Edoardo Otranto
Author-X-Name-First: Edoardo
Author-X-Name-Last: Otranto
Title: Modeling the Dependence of Conditional Correlations on Market Volatility
Abstract:
Several models have been developed to capture the dynamics of the
conditional correlations between time series of financial returns and
several studies have shown that the market volatility is a major
determinant of the correlations. We extend some models to include
explicitly the dependence of the correlations on the market volatility.
The models differ by the way—linear or nonlinear, direct or
indirect—in which the volatility influences the correlations. Using
a wide set of models with two measures of market volatility on two
datasets, we find that for some models the empirical results lend some
support to both the statistical and the economic significance of the
volatility effect on the correlations, but the presence of the
volatility effect does not improve the forecasting performance of the
extended models. Supplementary materials for this article are available
online.
Journal: Journal of Business & Economic Statistics
Pages: 254-268
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1037882
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1037882
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:254-268
Template-Type: ReDIF-Article 1.0
Author-Name: Peter Reinhard Hansen
Author-X-Name-First: Peter Reinhard
Author-X-Name-Last: Hansen
Author-Name: Zhuo Huang
Author-X-Name-First: Zhuo
Author-X-Name-Last: Huang
Title: Exponential GARCH Modeling With Realized Measures of Volatility
Abstract:
We introduce the realized exponential GARCH model that can use multiple
realized volatility measures for the modeling of a return series. The
model specifies the dynamic properties of both returns and realized
measures, and is characterized by a flexible modeling of the dependence
between returns and volatility. We apply the model to 27 stocks and an
exchange traded fund that tracks the S&P 500 index and find specifications
with multiple realized measures that dominate those that rely on a single
realized measure. The empirical analysis suggests some convenient
simplifications and highlights the advantages of the new specification.
Journal: Journal of Business & Economic Statistics
Pages: 269-287
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1038543
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1038543
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:269-287
Template-Type: ReDIF-Article 1.0
Author-Name: Bryan S. Graham
Author-X-Name-First: Bryan S.
Author-X-Name-Last: Graham
Author-Name: Cristine Campos de Xavier Pinto
Author-X-Name-First: Cristine Campos de Xavier
Author-X-Name-Last: Pinto
Author-Name: Daniel Egel
Author-X-Name-First: Daniel
Author-X-Name-Last: Egel
Title: Efficient Estimation of Data Combination Models by the Method of Auxiliary-to-Study Tilting (AST)
Abstract:
We propose a locally efficient estimator for a class of semiparametric
data combination problems. A leading estimand in this class is the average
treatment effect on the treated (ATT). Data combination problems are
related to, but distinct from, the class of missing data problems with
data missing at random (of which the average treatment effect (ATE)
estimand is a special case). Our estimator also possesses a double
robustness property. Our procedure may be used to efficiently estimate,
among other objects, the ATT, the two-sample instrumental variables model
(TSIV), counterfactual distributions, poverty maps, and semiparametric
difference-in-differences. In an empirical application, we use our
procedure to characterize residual Black--White wage inequality after
flexibly controlling for “premarket” differences in measured
cognitive achievement. Supplementary materials for this article are
available online.
Journal: Journal of Business & Economic Statistics
Pages: 288-301
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1038544
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1038544
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:288-301
Template-Type: ReDIF-Article 1.0
Author-Name: Sung Jae Jun
Author-X-Name-First: Sung Jae
Author-X-Name-Last: Jun
Author-Name: Yoonseok Lee
Author-X-Name-First: Yoonseok
Author-X-Name-Last: Lee
Author-Name: Youngki Shin
Author-X-Name-First: Youngki
Author-X-Name-Last: Shin
Title: Treatment Effects With Unobserved Heterogeneity: A Set Identification Approach
Abstract:
We propose sharp identifiable bounds on the potential outcome
distributions using panel data. We allow for the possibility that
statistical randomization of treatment assignments is not achieved until
unobserved heterogeneity is properly controlled for. We
use certain stationarity assumptions to obtain the sharp bounds. Our
approach allows for dynamic treatment decisions, where the current
treatment decisions may depend on the past treatments or the past observed
outcomes. As an empirical illustration, we study the effect of smoking
during pregnancy on infant birthweight. We find that for the group of
switchers the infant birthweight of a smoking mother is first-order
stochastically dominated by that of a nonsmoking mother.
Journal: Journal of Business & Economic Statistics
Pages: 302-311
Issue: 2
Volume: 34
Year: 2016
Month: 4
X-DOI: 10.1080/07350015.2015.1044008
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1044008
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:2:p:302-311
Template-Type: ReDIF-Article 1.0
Author-Name: Eric Jondeau
Author-X-Name-First: Eric
Author-X-Name-Last: Jondeau
Author-Name: Emmanuel Jurczenko
Author-X-Name-First: Emmanuel
Author-X-Name-Last: Jurczenko
Author-Name: Michael Rockinger
Author-X-Name-First: Michael
Author-X-Name-Last: Rockinger
Title: Moment Component Analysis: An Illustration With International Stock Markets
Abstract:
We describe a statistical technique, which we call Moment Component Analysis (MCA), that extends principal component analysis (PCA) to higher co-moments such as co-skewness and co-kurtosis. This method allows us to identify the factors that drive co-skewness and co-kurtosis structures across a large set of series. We illustrate MCA using 44 international stock markets sampled at weekly frequency from 1994 to 2014. We find that both the co-skewness and the co-kurtosis structures can be summarized with a small number of factors. Using a rolling window approach, we show that these co-moments convey useful information about market returns, for systemic risk measurement and portfolio allocation, complementary to the information extracted from a standard PCA or from an independent component analysis.
Journal: Journal of Business & Economic Statistics
Pages: 576-598
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1216851
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1216851
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:576-598
Template-Type: ReDIF-Article 1.0
Author-Name: Javier Mencía
Author-X-Name-First: Javier
Author-X-Name-Last: Mencía
Author-Name: Enrique Sentana
Author-X-Name-First: Enrique
Author-X-Name-Last: Sentana
Title: Volatility-Related Exchange Traded Assets: An Econometric Investigation
Abstract:
We develop a theoretical framework for covariance stationary but persistent positively valued processes which combines a semi-nonparametric expansion of the Gamma distribution with a component version of the multiplicative error model. Our conditional mean assumption allows for slow, possibly nonmonotonic mean-reversion, while our distributional assumption provides more flexibility than a traditional Laguerre expansion and preserves positivity of the density. We apply our framework to dynamic portfolio allocation for Exchange Traded Notes tracking short- and mid-term VIX futures indices, which are increasingly popular but risky financial instruments. We show the superior performance of the strategies based on our econometric model.
Journal: Journal of Business & Economic Statistics
Pages: 599-614
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1216852
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1216852
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:599-614
Template-Type: ReDIF-Article 1.0
Author-Name: Tao Chen
Author-X-Name-First: Tao
Author-X-Name-Last: Chen
Author-Name: Yuanyuan Ji
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Ji
Author-Name: Yahong Zhou
Author-X-Name-First: Yahong
Author-X-Name-Last: Zhou
Author-Name: Pingfang Zhu
Author-X-Name-First: Pingfang
Author-X-Name-Last: Zhu
Title: Testing Conditional Mean Independence Under Symmetry
Abstract:
Conditional mean independence (CMI) is one of the most widely used assumptions in the treatment effect literature to achieve model identification. We propose a Kolmogorov–Smirnov-type statistic to test CMI under a specific symmetry condition. We also propose a bootstrap procedure to obtain the p-values and critical values that are required to carry out the test. Results from a simulation study suggest that our test can work very well even in small to moderately sized samples. As an empirical illustration, we apply our test to a dataset that has been used in the literature to estimate the return on college education in China, to check whether the assumption of CMI is supported by the dataset and to show the plausibility of the extra symmetry condition that is necessary for this new test.
Journal: Journal of Business & Economic Statistics
Pages: 615-627
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1219263
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1219263
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:615-627
Template-Type: ReDIF-Article 1.0
Author-Name: Tom Boot
Author-X-Name-First: Tom
Author-X-Name-Last: Boot
Author-Name: Andreas Pick
Author-X-Name-First: Andreas
Author-X-Name-Last: Pick
Title: Optimal Forecasts from Markov Switching Models
Abstract:
We derive forecasts for Markov switching models that are optimal in the mean square forecast error (MSFE) sense by means of weighting observations. We provide analytic expressions of the weights conditional on the Markov states and conditional on state probabilities. This allows us to study the effect of uncertainty around states on forecasts. It emerges that, even in large samples, forecasting performance increases substantially when the construction of optimal weights takes uncertainty around states into account. Performance of the optimal weights is shown through simulations and an application to U.S. GNP, where using optimal weights leads to significant reductions in MSFE. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 628-642
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1219264
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1219264
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:628-642
Template-Type: ReDIF-Article 1.0
Author-Name: Anne Opschoor
Author-X-Name-First: Anne
Author-X-Name-Last: Opschoor
Author-Name: Pawel Janus
Author-X-Name-First: Pawel
Author-X-Name-Last: Janus
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Author-Name: Dick Van Dijk
Author-X-Name-First: Dick
Author-X-Name-Last: Van Dijk
Title: New HEAVY Models for Fat-Tailed Realized Covariances and Returns
Abstract:
We develop a new score-driven model for the joint dynamics of fat-tailed realized covariance matrix observations and daily returns. The score dynamics for the unobserved true covariance matrix are robust to outliers and incidental large observations in both types of data by assuming a matrix-F distribution for the realized covariance measures and a multivariate Student's t distribution for the daily returns. The filter for the unknown covariance matrix has a computationally efficient matrix formulation, which proves beneficial for estimation and simulation purposes. We formulate parameter restrictions for stationarity and positive definiteness. Our simulation study shows that the new model is able to deal with high-dimensional settings (50 or more) and captures unobserved volatility dynamics even if the model is misspecified. We provide an empirical application to daily equity returns and realized covariance matrices up to 30 dimensions. The model statistically and economically outperforms competing multivariate volatility models out-of-sample. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 643-657
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1245622
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1245622
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:643-657
Template-Type: ReDIF-Article 1.0
Author-Name: Fabio Sanches
Author-X-Name-First: Fabio
Author-X-Name-Last: Sanches
Author-Name: Daniel Silva Junior
Author-X-Name-First: Daniel
Author-X-Name-Last: Silva Junior
Author-Name: Sorawoot Srisuma
Author-X-Name-First: Sorawoot
Author-X-Name-Last: Srisuma
Title: Minimum Distance Estimation of Search Costs Using Price Distribution
Abstract:
It has been shown that equilibrium restrictions in a search model can be used to identify quantiles of the search cost distribution from observed prices alone. These quantiles can be difficult to estimate in practice. This article uses a minimum distance approach to estimate them that is easy to compute. A version of our estimator is the solution to a nonlinear least-squares problem that can be straightforwardly programmed in software such as Stata. We show our estimator is consistent and has an asymptotically normal distribution. Its distribution can be consistently estimated by a bootstrap. Our estimator can be used to estimate the cost distribution nonparametrically on a larger support when prices from heterogeneous markets are available. We propose a two-step sieve estimator for that case. The first step estimates quantiles from each market. They are used in the second step as generated variables to perform nonparametric sieve estimation. We derive the uniform rate of convergence of the sieve estimator that can be used to quantify the errors incurred from interpolating data across markets. To illustrate, we use online bookmaking odds for English football leagues’ matches (as prices) and find evidence that suggests search costs for consumers have fallen following a change in the British law that allows gambling operators to advertise more widely. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 658-671
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1247003
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1247003
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:658-671
Template-Type: ReDIF-Article 1.0
Author-Name: James E. Pustejovsky
Author-X-Name-First: James E.
Author-X-Name-Last: Pustejovsky
Author-Name: Elizabeth Tipton
Author-X-Name-First: Elizabeth
Author-X-Name-Last: Tipton
Title: Small-Sample Methods for Cluster-Robust Variance Estimation and Hypothesis Testing in Fixed Effects Models
Abstract:
In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and un-modeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
Journal: Journal of Business & Economic Statistics
Pages: 672-683
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1247004
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1247004
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:672-683
Template-Type: ReDIF-Article 1.0
Author-Name: Michelle Anzarut
Author-X-Name-First: Michelle
Author-X-Name-Last: Anzarut
Author-Name: Ramsés H. Mena
Author-X-Name-First: Ramsés H.
Author-X-Name-Last: Mena
Author-Name: Consuelo R. Nava
Author-X-Name-First: Consuelo R.
Author-X-Name-Last: Nava
Author-Name: Igor Prünster
Author-X-Name-First: Igor
Author-X-Name-Last: Prünster
Title: Poisson-Driven Stationary Markov Models
Abstract:
We propose a simple yet powerful method to construct strictly stationary Markovian models with given but arbitrary invariant distributions. The idea is based on a Poisson-type transform modulating the dependence structure in the model. An appealing feature of our approach is the possibility to control the underlying transition probabilities and, therefore, incorporate them within standard estimation methods. Given the resulting representation of the transition density, a Gibbs sampler algorithm based on the slice method is proposed and implemented. In the discrete-time case, special attention is placed on the class of generalized inverse Gaussian distributions. In the continuous-time case, we first provide a brief treatment of the class of gamma distributions, and then extend it to cover other invariant distributions, such as the generalized extreme value class. The proposed approach and estimation algorithm are illustrated with real financial datasets. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 684-694
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1251441
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1251441
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:684-694
Template-Type: ReDIF-Article 1.0
Author-Name: Igor Viveiros Melo Souza
Author-X-Name-First: Igor Viveiros Melo
Author-X-Name-Last: Souza
Author-Name: Valderio Anselmo Reisen
Author-X-Name-First: Valderio Anselmo
Author-X-Name-Last: Reisen
Author-Name: Glaura da Conceição Franco
Author-X-Name-First: Glaura da Conceição
Author-X-Name-Last: Franco
Author-Name: Pascal Bondon
Author-X-Name-First: Pascal
Author-X-Name-Last: Bondon
Title: The Estimation and Testing of the Cointegration Order Based on the Frequency Domain
Abstract:
This article proposes a method to estimate the degree of cointegration in bivariate series and suggests a test statistic for testing noncointegration based on the determinant of the spectral density matrix for frequencies close to zero. In the study, the series are assumed to be I(d), 0 < d ⩽ 1, with the parameter d assumed to be known. In this context, the order of integration of the error series is I(d − b), b ∈ [0, d]. In addition, the determinant of the spectral density matrix of the dth difference series is a power function of b. The proposed estimator for b is obtained by regressing the logged determinant on a set of logged Fourier frequencies. Under the null hypothesis of noncointegration, expressions for the bias and variance of the estimator are derived, and its consistency is also established. The asymptotic normality of the estimator, under Gaussian and non-Gaussian innovations, is established as well. A Monte Carlo study shows that the suggested test possesses correct size and good power for moderate sample sizes, when compared with other proposals in the literature. An advantage of the proposed method over standard methods is that it allows one to determine the order of integration of the error series without estimating a regression equation. An application is conducted to exemplify the method in a real context.
Journal: Journal of Business & Economic Statistics
Pages: 695-704
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2016.1251442
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1251442
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:695-704
Template-Type: ReDIF-Article 1.0
Author-Name: Koen Jochmans
Author-X-Name-First: Koen
Author-X-Name-Last: Jochmans
Title: Semiparametric Analysis of Network Formation
Abstract:
We consider a statistical model for directed network formation that features both node-specific parameters that capture degree heterogeneity and common parameters that reflect homophily among nodes. The goal is to perform statistical inference on the homophily parameters while treating the node-specific parameters as fixed effects. Jointly estimating all parameters leads to incidental-parameter bias and incorrect inference. As an alternative, we develop an approach based on a sufficient statistic that separates inference on the homophily parameters from estimation of the fixed effects. The estimator is easy to compute, can be applied to both dense and sparse networks, and is shown to have desirable asymptotic properties under sequences of growing networks. We illustrate the improvements of this estimator over maximum likelihood and bias-corrected estimation in a series of numerical experiments. The technique is applied to explain the import and export patterns in a dense network of countries and to estimate a more sparse advice network among attorneys in a corporate law firm.
Journal: Journal of Business & Economic Statistics
Pages: 705-713
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2017.1286242
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1286242
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:705-713
Template-Type: ReDIF-Article 1.0
Author-Name: David McKenzie
Author-X-Name-First: David
Author-X-Name-Last: McKenzie
Title: Can Business Owners Form Accurate Counterfactuals? Eliciting Treatment and Control Beliefs About Their Outcomes in the Alternative Treatment Status
Abstract:
A survey of participants in a large-scale business plan competition experiment, in which winners received an average of U.S. $50,000 each, is used to elicit ex-post beliefs about what the outcomes would have been in the alternative treatment status. Participants are asked the percent chance they would be operating a firm, and the number of employees and monthly sales they would have, had their treatment status been reversed. The study finds the control group to have reasonably accurate expectations of the large treatment effect they would experience on the likelihood of operating a firm, although this may reflect the treatment effect being close to an upper bound. The control group dramatically overestimates how much winning would help them grow the size of their firm. The treatment group overestimates how much winning helps their chance of their business surviving and also overestimates how much winning helps them grow their firms. In addition, these counterfactual expectations appear unable to generate accurate relative rankings of which groups of participants benefit most from treatment.
Journal: Journal of Business & Economic Statistics
Pages: 714-722
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2017.1305276
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1305276
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:714-722
Template-Type: ReDIF-Article 1.0
Author-Name: Ulrich K. Müller
Author-X-Name-First: Ulrich K.
Author-X-Name-Last: Müller
Title: Comment on "HAR Inference: Recommendations for Practice" by E. Lazarus, D. J. Lewis, J. H. Stock and M. W. Watson
Journal: Journal of Business & Economic Statistics
Pages: 563-564
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2018.1497502
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497502
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:563-564
Template-Type: ReDIF-Article 1.0
Author-Name: Timothy J. Vogelsang
Author-X-Name-First: Timothy J.
Author-X-Name-Last: Vogelsang
Title: Comment on "HAR Inference: Recommendations for Practice"
Journal: Journal of Business & Economic Statistics
Pages: 569-573
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2018.1497503
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497503
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:569-573
Template-Type: ReDIF-Article 1.0
Author-Name: Kenneth D. West
Author-X-Name-First: Kenneth D.
Author-X-Name-Last: West
Title: Discussion of Lazarus, Lewis, Stock, and Watson, “HAR Inference: Recommendations for Practice”
Journal: Journal of Business & Economic Statistics
Pages: 560-562
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2018.1505627
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1505627
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:560-562
Template-Type: ReDIF-Article 1.0
Author-Name: Yixiao Sun
Author-X-Name-First: Yixiao
Author-X-Name-Last: Sun
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 565-568
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2018.1505628
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1505628
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:565-568
Template-Type: ReDIF-Article 1.0
Author-Name: Eben Lazarus
Author-X-Name-First: Eben
Author-X-Name-Last: Lazarus
Author-Name: Daniel J. Lewis
Author-X-Name-First: Daniel J.
Author-X-Name-Last: Lewis
Author-Name: James H. Stock
Author-X-Name-First: James H.
Author-X-Name-Last: Stock
Author-Name: Mark W. Watson
Author-X-Name-First: Mark W.
Author-X-Name-Last: Watson
Title: HAR Inference: Recommendations for Practice
Abstract:
The classic papers by Newey and West (1987) and Andrews (1991) spurred a large body of work on how to improve heteroscedasticity- and autocorrelation-robust (HAR) inference in time series regression. This literature finds that using a larger-than-usual truncation parameter to estimate the long-run variance, combined with Kiefer-Vogelsang (2002, 2005) fixed-b critical values, can substantially reduce size distortions, at only a modest cost in (size-adjusted) power. Empirical practice, however, has not kept up. This article therefore draws on the post-Newey West/Andrews literature to make concrete recommendations for HAR inference. We derive truncation parameter rules that choose a point on the size-power tradeoff to minimize a loss function. If Newey-West tests are used, we recommend the truncation parameter rule S = 1.3T^(1/2) and (nonstandard) fixed-b critical values. For tests of a single restriction, we find advantages to using the equal-weighted cosine (EWC) test, where the long-run variance is estimated by projections onto Type II cosines, using ν = 0.4T^(2/3) cosine terms; for this test, fixed-b critical values are, conveniently, t_ν or F. We assess these rules using first an ARMA/GARCH Monte Carlo design, then a dynamic factor model design estimated using 207 quarterly U.S. macroeconomic time series.
Journal: Journal of Business & Economic Statistics
Pages: 541-559
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2018.1506926
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1506926
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:541-559
Template-Type: ReDIF-Article 1.0
Author-Name: Eben Lazarus
Author-X-Name-First: Eben
Author-X-Name-Last: Lazarus
Author-Name: Daniel J. Lewis
Author-X-Name-First: Daniel J.
Author-X-Name-Last: Lewis
Author-Name: James H. Stock
Author-X-Name-First: James H.
Author-X-Name-Last: Stock
Author-Name: Mark W. Watson
Author-X-Name-First: Mark W.
Author-X-Name-Last: Watson
Title: HAR Inference: Recommendations for Practice Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 574-575
Issue: 4
Volume: 36
Year: 2018
Month: 10
X-DOI: 10.1080/07350015.2018.1513251
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1513251
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:4:p:574-575
Template-Type: ReDIF-Article 1.0
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Alex Furger
Author-X-Name-First: Alex
Author-X-Name-Last: Furger
Author-Name: Dacheng Xiu
Author-X-Name-First: Dacheng
Author-X-Name-Last: Xiu
Title: Incorporating Global Industrial Classification Standard Into Portfolio Allocation: A Simple Factor-Based Large Covariance Matrix Estimator With High-Frequency Data
Abstract:
We document a striking block-diagonal pattern in the factor model residual covariances of the S&P 500 Equity Index constituents, after sorting the assets by their assigned Global Industry Classification Standard (GICS) codes. Cognizant of this structure, we propose combining a location-based thresholding approach based on sector inclusion with the Fama-French and SPDR sector Exchange Traded Funds (ETFs). We investigate the performance of our estimators in an out-of-sample portfolio allocation study. We find that our simple and positive-definite covariance matrix estimator yields strong empirical results under a variety of factor models and thresholding schemes. Conversely, we find that the Fama-French factor model is only suitable for covariance estimation when used in conjunction with our proposed thresholding technique. Theoretically, we provide justification for the empirical results by jointly analyzing the in-fill and diverging dimension asymptotics.
Journal: Journal of Business & Economic Statistics
Pages: 489-503
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1052458
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052458
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:489-503
Template-Type: ReDIF-Article 1.0
Author-Name: Asger Lunde
Author-X-Name-First: Asger
Author-X-Name-Last: Lunde
Author-Name: Neil Shephard
Author-X-Name-First: Neil
Author-X-Name-Last: Shephard
Author-Name: Kevin Sheppard
Author-X-Name-First: Kevin
Author-X-Name-Last: Sheppard
Title: Econometric Analysis of Vast Covariance Matrices Using Composite Realized Kernels and Their Application to Portfolio Choice
Abstract:
We propose a composite realized kernel to estimate the ex-post covariation of asset prices. These measures can in turn be used to forecast the covariation of future asset returns. Composite realized kernels are a data-efficient method, where the covariance estimate is composed of univariate realized kernels to estimate variances and bivariate realized kernels to estimate correlations. We analyze the merits of our composite realized kernels in an ultra high-dimensional environment, making asset allocation decisions every day solely based on the previous day’s data or a short moving average over very recent days. The application is a minimum variance portfolio exercise. The dataset is tick-by-tick data comprising 437 U.S. equities over the sample period 2006–2011. We show that our estimator is able to outperform its competitors, while the associated trading costs are competitive.
Journal: Journal of Business & Economic Statistics
Pages: 504-518
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1064432
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1064432
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:504-518
Template-Type: ReDIF-Article 1.0
Author-Name: Michael W. McCracken
Author-X-Name-First: Michael W.
Author-X-Name-Last: McCracken
Author-Name: Serena Ng
Author-X-Name-First: Serena
Author-X-Name-Last: Ng
Title: FRED-MD: A Monthly Database for Macroeconomic Research
Abstract:
This article describes a large, monthly frequency, macroeconomic database with the goal of establishing a convenient starting point for empirical analysis that requires “big data.” The dataset mimics the coverage of those already used in the literature but has three appealing features. First, it is designed to be updated monthly using the Federal Reserve Economic Data (FRED) database. Second, it will be publicly accessible, facilitating comparison of related research and replication of empirical work. Third, it will relieve researchers from having to manage data changes and revisions. We show that factors extracted from our dataset share the same predictive content as those based on various vintages of the so-called Stock–Watson dataset. In addition, we suggest that diffusion indexes constructed as the partial sum of the factor estimates can potentially be useful for the study of business cycle chronology. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 574-589
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1086655
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1086655
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:574-589
Template-Type: ReDIF-Article 1.0
Author-Name: Jin-Chuan Duan
Author-X-Name-First: Jin-Chuan
Author-X-Name-Last: Duan
Author-Name: Weimin Miao
Author-X-Name-First: Weimin
Author-X-Name-Last: Miao
Title: Default Correlations and Large-Portfolio Credit Analysis
Abstract:
A factor model with sparsely correlated residuals is used to model short-term probabilities of default and other corporate exits while permitting missing data, and serves as the basis for generating default correlations. This novel factor model can then be used to produce portfolio credit risk profiles (default-rate and portfolio-loss distributions) by complementing an existing credit portfolio aggregation method with a novel simulation–convolution algorithm. We apply the model and the portfolio aggregation method on a global sample of 40,560 exchange-listed firms and focus on three large portfolios (the U.S., Eurozone-12, and ASEAN-5). Our results reaffirm the critical importance of default correlations. With default correlations, both default-rate and portfolio-loss distributions become far more right-skewed, reflecting a much higher likelihood of defaulting together. Our results also reveal that portfolio credit risk profiles evaluated at two different time points can change drastically with moving economic conditions, suggesting the importance of modeling credit risks with a dynamic system. Our factor model coupled with the aggregation algorithm provides a useful tool for active credit portfolio management.
Journal: Journal of Business & Economic Statistics
Pages: 536-546
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1087855
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1087855
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:536-546
Template-Type: ReDIF-Article 1.0
Author-Name: Weiming Li
Author-X-Name-First: Weiming
Author-X-Name-Last: Li
Author-Name: Jing Gao
Author-X-Name-First: Jing
Author-X-Name-Last: Gao
Author-Name: Kunpeng Li
Author-X-Name-First: Kunpeng
Author-X-Name-Last: Li
Author-Name: Qiwei Yao
Author-X-Name-First: Qiwei
Author-X-Name-Last: Yao
Title: Modeling Multivariate Volatilities via Latent Common Factors
Abstract:
Volatility, represented in the form of conditional heteroscedasticity, plays an important role in controlling and forecasting risks in various financial operations including asset pricing, portfolio allocation, and hedging futures. However, modeling and forecasting multi-dimensional conditional heteroscedasticity are technically challenging. As the volatilities of many financial assets are often driven by a few common and latent factors, we propose in this article a dimension-reduction method to model a multivariate volatility process and to estimate a lower-dimensional space, to be called the volatility space, within which the dynamics of the multivariate volatility process is confined. The new method is simple to use, as technically it boils down to an eigenanalysis of a nonnegative definite matrix. Hence, it is applicable to cases in which the number of assets concerned is on the order of thousands (using an ordinary PC/laptop). On the other hand, the model has the capability to cater for complex conditional heteroscedasticity behavior in multi-dimensional processes. Some asymptotic properties for the new method are established. We further illustrate the new method using both simulated and real data examples.
Journal: Journal of Business & Economic Statistics
Pages: 564-573
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1092975
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1092975
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:564-573
Template-Type: ReDIF-Article 1.0
Author-Name: Alexandre Belloni
Author-X-Name-First: Alexandre
Author-X-Name-Last: Belloni
Author-Name: Victor Chernozhukov
Author-X-Name-First: Victor
Author-X-Name-Last: Chernozhukov
Author-Name: Christian Hansen
Author-X-Name-First: Christian
Author-X-Name-Last: Hansen
Author-Name: Damian Kozbur
Author-X-Name-First: Damian
Author-X-Name-Last: Kozbur
Title: Inference in High-Dimensional Panel Models With an Application to Gun Control
Abstract:
We consider estimation and inference in panel data models with additive unobserved individual specific heterogeneity in a high-dimensional setting. The setting allows the number of time-varying regressors to be larger than the sample size. To make informative estimation and inference feasible, we require that the overall contribution of the time-varying variables after eliminating the individual specific heterogeneity can be captured by a relatively small number of the available variables whose identities are unknown. This restriction allows the problem of estimation to proceed as a variable selection problem. Importantly, we treat the individual specific heterogeneity as fixed effects which allows this heterogeneity to be related to the observed time-varying variables in an unspecified way and allows that this heterogeneity may differ for all individuals. Within this framework, we provide procedures that give uniformly valid inference over a fixed subset of parameters in the canonical linear fixed effects model and over coefficients on a fixed vector of endogenous variables in panel data instrumental variable models with fixed effects and many instruments. We present simulation results in support of the theoretical developments and illustrate the use of the methods in an application aimed at estimating the effect of gun prevalence on crime rates.
Journal: Journal of Business & Economic Statistics
Pages: 590-605
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1102733
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1102733
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:590-605
Template-Type: ReDIF-Article 1.0
Author-Name: Jushan Bai
Author-X-Name-First: Jushan
Author-X-Name-Last: Bai
Author-Name: Kunpeng Li
Author-X-Name-First: Kunpeng
Author-X-Name-Last: Li
Author-Name: Lina Lu
Author-X-Name-First: Lina
Author-X-Name-Last: Lu
Title: Estimation and Inference of FAVAR Models
Abstract:
The factor-augmented vector autoregressive (FAVAR) model is now widely used in macroeconomics and finance. In this model, observable and unobservable factors jointly follow a vector autoregressive process, which further drives the comovement of a large number of observable variables. We study the identification restrictions for FAVAR models, and propose a likelihood-based two-step method to estimate the model. The estimation explicitly accounts for factors being partially observed. We then provide an inferential theory for the estimated factors, factor loadings, and the dynamic parameters in the VAR process. We show how and why the limiting distributions are different from the existing results. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 620-641
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2015.1111222
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1111222
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:620-641
Template-Type: ReDIF-Article 1.0
Author-Name: Ruey S. Tsay
Author-X-Name-First: Ruey S.
Author-X-Name-Last: Tsay
Title: Some Methods for Analyzing Big Dependent Data
Abstract:
We consider an approach to analyze big data of time series. Big dependent data are first transformed into functional time series of densities via nonparametric density estimation. We then discuss some tools for exploratory data analysis of the resulting functional time series. The tools employed include K-means cluster analysis and tree-based classification. For modeling, we propose a threshold approximate-factor model and a Hellinger distance autoregressive model for functional time series of continuous densities. The latent factors of factor models are estimated by functional principal component analysis. Cross-validation and Hellinger distance are used to select the number of principal component functions. For prediction of high-dimensional time series, we use the results of cluster analysis to obtain parsimonious models. We demonstrate the proposed analysis by considering the demand of electricity, the behavior of daily U.S. stock returns, and U.S. income distributions.
Journal: Journal of Business & Economic Statistics
Pages: 673-688
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1148040
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1148040
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:673-688
Template-Type: ReDIF-Article 1.0
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Michael B. Imerman
Author-X-Name-First: Michael B.
Author-X-Name-Last: Imerman
Author-Name: Wei Dai
Author-X-Name-First: Wei
Author-X-Name-Last: Dai
Title: What Does the Volatility Risk Premium Say About Liquidity Provision and Demand for Hedging Tail Risk?
Abstract:
This article provides a data-driven analysis of the volatility risk premium, using tools from high-frequency finance and Big Data analytics. We argue that the volatility risk premium, loosely defined as the difference between realized and implied volatility, can best be understood when viewed as a systematically priced bias. We first use ultra-high-frequency transaction data on SPDRs and a novel approach for estimating integrated volatility on the frequency domain to compute realized volatility. From that we subtract the daily VIX, our measure of implied volatility, to construct a time series of the volatility risk premium. To identify the factors behind the volatility risk premium as a priced bias, we decompose it into magnitude and direction. We find compelling evidence that the magnitude of the deviation of the realized volatility from implied volatility represents supply and demand imbalances in the market for hedging tail risk. It is difficult to conclusively accept the hypothesis that the direction or sign of the volatility risk premium reflects expectations about future levels of volatility. However, evidence supports the hypothesis that the sign of the volatility risk premium is indicative of gains or losses on a delta-hedged portfolio.
Journal: Journal of Business & Economic Statistics
Pages: 519-535
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1152968
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1152968
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:519-535
Template-Type: ReDIF-Article 1.0
Author-Name: Alexandre Belloni
Author-X-Name-First: Alexandre
Author-X-Name-Last: Belloni
Author-Name: Victor Chernozhukov
Author-X-Name-First: Victor
Author-X-Name-Last: Chernozhukov
Author-Name: Ying Wei
Author-X-Name-First: Ying
Author-X-Name-Last: Wei
Title: Post-Selection Inference for Generalized Linear Models With Many Controls
Abstract:
This article considers generalized linear models in the presence of many controls. We lay out a general methodology to estimate an effect of interest based on the construction of an instrument that immunizes against model selection mistakes and apply it to the case of the logistic binary choice model. More specifically, we propose new methods for estimating and constructing confidence regions for a regression parameter of primary interest α_0, the parameter in front of the regressor of interest, such as the treatment variable or a policy variable. These methods allow one to estimate α_0 at the root-n rate when the total number p of other regressors, called controls, potentially exceeds the sample size n, using sparsity assumptions. The sparsity assumption means that there is a subset of s < n controls which suffices to accurately approximate the nuisance part of the regression function. Importantly, the estimators and the resulting confidence regions are valid uniformly over s-sparse models satisfying s^2 log^2 p = o(n) and other technical conditions. These procedures do not rely on traditional consistent model selection arguments for their validity. In fact, they are robust with respect to moderate model selection mistakes in variable selection. Under suitable conditions, the estimators are semi-parametrically efficient in the sense of attaining the semi-parametric efficiency bounds for the class of models in this article.
Journal: Journal of Business & Economic Statistics
Pages: 606-619
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1166116
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1166116
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:606-619
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaoyi Han
Author-X-Name-First: Xiaoyi
Author-X-Name-Last: Han
Author-Name: Lung-Fei Lee
Author-X-Name-First: Lung-Fei
Author-X-Name-Last: Lee
Title: Bayesian Analysis of Spatial Panel Autoregressive Models With Time-Varying Endogenous Spatial Weight Matrices, Common Factors, and Random Coefficients
Abstract:
This article examines spatial panel autoregressive (SAR) models with dynamic, time-varying endogenous spatial weights matrices, common factors, and random coefficients. An empirical application is on the spillover effects of state Medicaid spending. Endogeneity of spatial weights matrices comes from the correlation of “economic distance” and the disturbances in the SAR equation. Common factors control for common shocks to all states, and random coefficients may capture heterogeneity in responses. The Bayesian Markov chain Monte Carlo (MCMC) estimation is developed. Identification of factors and factor loadings, and model selection issues based upon the deviance information criterion (DIC), are explored. We find that a state’s Medicaid-related spending is positively and significantly affected by those of its neighbors. Both welfare-motivated moves and yardstick competition are possible sources of strategic interactions among state governments. Welfare-motivated moves turn out to be more of a driving force for the interdependence, and states do exhibit heterogeneous responses.
Journal: Journal of Business & Economic Statistics
Pages: 642-660
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1167058
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1167058
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:642-660
Template-Type: ReDIF-Article 1.0
Author-Name: Matt Taddy
Author-X-Name-First: Matt
Author-X-Name-Last: Taddy
Author-Name: Matt Gardner
Author-X-Name-First: Matt
Author-X-Name-Last: Gardner
Author-Name: Liyun Chen
Author-X-Name-First: Liyun
Author-X-Name-Last: Chen
Author-Name: David Draper
Author-X-Name-First: David
Author-X-Name-Last: Draper
Title: A Nonparametric Bayesian Analysis of Heterogenous Treatment Effects in Digital Experimentation
Abstract:
Randomized controlled trials play an important role in how Internet companies predict the impact of policy decisions and product changes. In these “digital experiments,” different units (people, devices, products) respond differently to the treatment. This article presents a fast and scalable Bayesian nonparametric analysis of such heterogenous treatment effects and their measurement in relation to observable covariates. New results and algorithms are provided for quantifying the uncertainty associated with treatment effect measurement via both linear projections and nonlinear regression trees (CART and random forests). For linear projections, our inference strategy leads to results that are mostly in agreement with those from the frequentist literature. We find that linear regression adjustment of treatment effect averages (i.e., post-stratification) can provide some variance reduction, but that this reduction will be vanishingly small in the low-signal and large-sample setting of digital experiments. For regression trees, we provide uncertainty quantification for the machine learning algorithms that are commonly applied in tree-fitting. We argue that practitioners should look to ensembles of trees (forests) rather than individual trees in their analysis. The ideas are applied to and illustrated through an example experiment involving 21 million unique users of eBay.com.
Journal: Journal of Business & Economic Statistics
Pages: 661-672
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1172013
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1172013
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:661-672
Template-Type: ReDIF-Article 1.0
Author-Name: Jushan Bai
Author-X-Name-First: Jushan
Author-X-Name-Last: Bai
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Ruey Tsay
Author-X-Name-First: Ruey
Author-X-Name-Last: Tsay
Title: Special Issue on Big Data
Journal: Journal of Business & Economic Statistics
Pages: 487-488
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1197681
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1197681
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:487-488
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Editorial Board EOV
Journal: Journal of Business & Economic Statistics
Pages: ebi-ebi
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1221617
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1221617
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:ebi-ebi
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Editorial Collaborators
Journal: Journal of Business & Economic Statistics
Pages: 689-692
Issue: 4
Volume: 34
Year: 2016
Month: 10
X-DOI: 10.1080/07350015.2016.1221618
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1221618
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:4:p:689-692
Template-Type: ReDIF-Article 1.0
Author-Name: Tingting Cheng
Author-X-Name-First: Tingting
Author-X-Name-Last: Cheng
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Xibin Zhang
Author-X-Name-First: Xibin
Author-X-Name-Last: Zhang
Title: Bayesian Bandwidth Estimation in Nonparametric Time-Varying Coefficient Models
Abstract:
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method for the time-varying coefficient models that explain Okun’s law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 1-12
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1255216
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1255216
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:1-12
Template-Type: ReDIF-Article 1.0
Author-Name: Yunus Emre Ergemen
Author-X-Name-First: Yunus Emre
Author-X-Name-Last: Ergemen
Title: System Estimation of Panel Data Models Under Long-Range Dependence
Abstract:
A general dynamic panel data model is considered that incorporates individual and interactive fixed effects allowing for contemporaneous correlation in model innovations. The model accommodates general stationary or nonstationary long-range dependence through interactive fixed effects and innovations, removing the necessity to perform a priori unit-root or stationarity testing. Moreover, persistence in innovations and interactive fixed effects allows for cointegration; innovations can also have vector-autoregressive dynamics; deterministic trends can be featured. Estimations are performed using conditional-sum-of-squares criteria based on projected series by which latent characteristics are proxied. Resulting estimates are consistent and asymptotically normal at standard parametric rates. A simulation study supports the reliability of the estimation method. The method is then applied to the long-run relationship between debt and GDP. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 13-26
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1255217
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1255217
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:13-26
Template-Type: ReDIF-Article 1.0
Author-Name: Florian Huber
Author-X-Name-First: Florian
Author-X-Name-Last: Huber
Author-Name: Martin Feldkircher
Author-X-Name-First: Martin
Author-X-Name-Last: Feldkircher
Title: Adaptive Shrinkage in Bayesian Vector Autoregressive Models
Abstract:
Vector autoregressive (VAR) models are frequently used for forecasting and impulse response analysis. For both applications, shrinkage priors can help improve inference. In this article, we apply the Normal-Gamma shrinkage prior to the VAR with stochastic volatility and derive its relevant conditional posterior distributions. This framework imposes a set of normally distributed priors on the autoregressive coefficients and the covariance parameters of the VAR along with Gamma priors on a set of local and global prior scaling parameters. In a second step, we modify this prior setup by introducing another layer of shrinkage with scaling parameters that push certain regions of the parameter space to zero. Two simulation exercises show that the proposed framework yields more precise estimates of model parameters and impulse response functions. In addition, a forecasting exercise applied to U.S. data shows that this prior performs well relative to other commonly used specifications in terms of point and density predictions. Finally, structural inference suggests that responses to monetary policy shocks appear to be reasonable.
Journal: Journal of Business & Economic Statistics
Pages: 27-39
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1256217
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1256217
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:27-39
Template-Type: ReDIF-Article 1.0
Author-Name: Lore Dirick
Author-X-Name-First: Lore
Author-X-Name-Last: Dirick
Author-Name: Tony Bellotti
Author-X-Name-First: Tony
Author-X-Name-Last: Bellotti
Author-Name: Gerda Claeskens
Author-X-Name-First: Gerda
Author-X-Name-Last: Claeskens
Author-Name: Bart Baesens
Author-X-Name-First: Bart
Author-X-Name-Last: Baesens
Title: Macro-Economic Factors in Credit Risk Calculations: Including Time-Varying Covariates in Mixture Cure Models
Abstract:
The prediction of the time of default in a credit risk setting via survival analysis needs to take a high censoring rate into account. This high rate arises because default does not occur for the majority of debtors. Mixture cure models allow the part of the loan population that is unsusceptible to default to be modeled, distinct from the time of default for the susceptible population. In this article, we extend the mixture cure model to include time-varying covariates. We illustrate the method via simulations and by incorporating macro-economic factors as predictors for an actual bank dataset.
Journal: Journal of Business & Economic Statistics
Pages: 40-53
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1260471
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1260471
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:40-53
Template-Type: ReDIF-Article 1.0
Author-Name: Shu Shen
Author-X-Name-First: Shu
Author-X-Name-Last: Shen
Title: Estimation and Inference of Distributional Partial Effects: Theory and Application
Abstract:
This article considers nonparametric and semiparametric estimation and inference of the effects of a covariate, either discrete or continuous, on the conditional distribution of a response outcome. It also proposes various uniform tests following estimation. This type of analysis is useful in situations where the econometrician or policy-maker is interested in knowing the effect of a variable or policy on the whole distribution of the response outcome conditional on covariates and is not willing to make parametric functional form assumptions. Monte Carlo experiments show that the proposed estimators and tests are well-behaved in small samples. The empirical section studies the effect of minimum wage hikes on household labor earnings. It is found that the minimum wage has a heterogeneous impact on household earnings in the U.S. and that small hikes in the minimum wage are more effective in improving the household earnings distribution.
Journal: Journal of Business & Economic Statistics
Pages: 54-66
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1272458
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1272458
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:54-66
Template-Type: ReDIF-Article 1.0
Author-Name: Jakob J. Bosma
Author-X-Name-First: Jakob J.
Author-X-Name-Last: Bosma
Author-Name: Michael Koetter
Author-X-Name-First: Michael
Author-X-Name-Last: Koetter
Author-Name: Michael Wedow
Author-X-Name-First: Michael
Author-X-Name-Last: Wedow
Title: Too Connected to Fail? Inferring Network Ties From Price Co-Movements
Abstract:
We use extreme value theory methods to infer conventionally unobservable connections between financial institutions from joint extreme movements in credit default swap spreads and equity returns. Estimated pairwise co-crash probabilities identify significant connections among up to 186 financial institutions prior to the crisis of 2007/2008. Financial institutions that were very central prior to the crisis were more likely to be bailed out during the crisis or receive the status of systemically important institutions. This result remains intact also after controlling for indicators of too-big-to-fail concerns, systemic, systematic, and idiosyncratic risks. Both credit default swap (CDS)-based and equity-based connections are significant predictors of bailouts. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 67-80
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1272459
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1272459
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:67-80
Template-Type: ReDIF-Article 1.0
Author-Name: John M. Clapp
Author-X-Name-First: John M.
Author-X-Name-Last: Clapp
Author-Name: Stephen L. Ross
Author-X-Name-First: Stephen L.
Author-X-Name-Last: Ross
Author-Name: Tingyu Zhou
Author-X-Name-First: Tingyu
Author-X-Name-Last: Zhou
Title: Retail Agglomeration and Competition Externalities: Evidence from Openings and Closings of Multiline Department Stores in the U.S.
Abstract:
From the perspective of an existing retailer, the optimal size of a cluster of retail activity represents a trade-off between the marginal increase in consumer attraction from another store and the depletion of the customer base caused by an additional competitor. We estimate opening and closing probabilities of multi-line department stores (“anchors”) as a function of preexisting anchors by type of anchor store (low-priced, mid-priced, or high-priced), using a bias-corrected probit model with county and year fixed effects. We find strong negative competitive effects of an additional anchor of the same type, but no effect on openings of anchors of another type. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 81-96
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2016.1272460
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1272460
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:81-96
Template-Type: ReDIF-Article 1.0
Author-Name: Daniel R. Kowal
Author-X-Name-First: Daniel R.
Author-X-Name-Last: Kowal
Author-Name: David S. Matteson
Author-X-Name-First: David S.
Author-X-Name-Last: Matteson
Author-Name: David Ruppert
Author-X-Name-First: David
Author-X-Name-Last: Ruppert
Title: Functional Autoregression for Sparsely Sampled Data
Abstract:
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.
Journal: Journal of Business & Economic Statistics
Pages: 97-109
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1279058
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1279058
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:97-109
Template-Type: ReDIF-Article 1.0
Author-Name: Monica Jain
Author-X-Name-First: Monica
Author-X-Name-Last: Jain
Title: Perceived Inflation Persistence
Abstract:
This article constructs and estimates a measure called perceived inflation persistence that can be used to determine if professional forecasters’ inflation forecasts indicate there has been a change in inflation persistence. This measure is built via the implied autocorrelation function that follows from the estimates obtained using a forecaster-specific state-space model. Findings indicate that U.S. perceived inflation persistence has changed since the mid-1990s with more consensus among forecasters at lower levels of persistence. When compared to the autocorrelation function for actual inflation, forecasters typically react less to shocks to inflation than the actual inflation data would suggest.
Journal: Journal of Business & Economic Statistics
Pages: 110-120
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1281814
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1281814
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:110-120
Template-Type: ReDIF-Article 1.0
Author-Name: James W. Taylor
Author-X-Name-First: James W.
Author-X-Name-Last: Taylor
Title: Forecasting Value at Risk and Expected Shortfall Using a Semiparametric Approach Based on the Asymmetric Laplace Distribution
Abstract:
Value at Risk (VaR) forecasts can be produced from conditional autoregressive VaR models, estimated using quantile regression. Quantile modeling avoids a distributional assumption, and allows the dynamics of the quantiles to differ for each probability level. However, by focusing on a quantile, these models provide no information regarding expected shortfall (ES), which is the expectation of the exceedances beyond the quantile. We introduce a method for predicting ES corresponding to VaR forecasts produced by quantile regression models. It is well known that quantile regression is equivalent to maximum likelihood based on an asymmetric Laplace (AL) density. We allow the density's scale to be time-varying, and show that it can be used to estimate conditional ES. This enables a joint model of conditional VaR and ES to be estimated by maximizing an AL log-likelihood. Although this estimation framework uses an AL density, it does not rely on an assumption for the returns distribution. We also use the AL log-likelihood for forecast evaluation, and show that it is strictly consistent for the joint evaluation of VaR and ES. Empirical illustration is provided using stock index data. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 121-133
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1281815
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1281815
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:121-133
Template-Type: ReDIF-Article 1.0
Author-Name: Federico Carlini
Author-X-Name-First: Federico
Author-X-Name-Last: Carlini
Author-Name: Paolo Santucci de Magistris
Author-X-Name-First: Paolo
Author-X-Name-Last: Santucci de Magistris
Title: On the Identification of Fractionally Cointegrated VAR Models With the F(d) Condition
Abstract:
This article discusses identification problems in the fractionally cointegrated systems of Johansen and of Johansen and Nielsen. It is shown that several equivalent reparameterizations of the model, associated with different fractional integration and cointegration parameters, may exist for any choice of the lag length when the true cointegration rank is known. The properties of these multiple nonidentified models are studied, and a necessary and sufficient condition for the identification of the fractional parameters of the system is provided. The condition, named F(d), is a generalization of the well-known I(1) condition to the fractional case. Imposing a proper restriction on the fractional integration parameter, d, is sufficient to guarantee identification of all model parameters and the validity of the F(d) condition. The article also illustrates the indeterminacy between the cointegration rank and the lag length: it is proved that the model with rank zero and k lags may be an equivalent reparameterization of the model with full rank and k − 1 lags. This precludes testing for the cointegration rank unless a proper restriction on the fractional integration parameter is imposed.
Journal: Journal of Business & Economic Statistics
Pages: 134-146
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1294077
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1294077
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:134-146
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Lilun Du
Author-X-Name-First: Lilun
Author-X-Name-Last: Du
Title: A Factor-Adjusted Multiple Testing Procedure With Application to Mutual Fund Selection
Abstract:
In this article, we propose a factor-adjusted multiple testing (FAT) procedure based on factor-adjusted p-values in a linear factor model involving some observable and unobservable factors, for the purpose of selecting skilled funds in empirical finance. The factor-adjusted p-values are obtained after extracting the latent common factors by the principal component method. Under some mild conditions, the false discovery proportion can be consistently estimated even if the idiosyncratic errors are allowed to be weakly correlated across units. Furthermore, by appropriately setting a sequence of threshold values approaching zero, the proposed FAT procedure enjoys model selection consistency. Extensive simulation studies and a real data analysis for selecting skilled funds in the U.S. financial market are presented to illustrate the practical utility of the proposed method. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 147-157
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1294078
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1294078
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:147-157
Template-Type: ReDIF-Article 1.0
Author-Name: Zongwu Cai
Author-X-Name-First: Zongwu
Author-X-Name-Last: Cai
Author-Name: Ying Fang
Author-X-Name-First: Ying
Author-X-Name-Last: Fang
Author-Name: Ming Lin
Author-X-Name-First: Ming
Author-X-Name-Last: Lin
Author-Name: Jia Su
Author-X-Name-First: Jia
Author-X-Name-Last: Su
Title: Inferences for a Partially Varying Coefficient Model With Endogenous Regressors
Abstract:
In this article, we propose a new class of semiparametric instrumental variable models with partially varying coefficients, in which the structural function has a partially linear form and the impact of endogenous structural variables can vary over different levels of some exogenous variables. We propose a three-step estimation procedure to estimate both functional and constant coefficients. The consistency and asymptotic normality of these proposed estimators are established. Moreover, a generalized F-test is developed to test whether the functional coefficients are of particular parametric forms with some underlying economic intuitions, and furthermore, the limiting distribution of the proposed generalized F-test statistic under the null hypothesis is established. Finally, we illustrate the finite sample performance of our approach with simulations and two real data examples in economics.
Journal: Journal of Business & Economic Statistics
Pages: 158-170
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1294079
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1294079
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:158-170
Template-Type: ReDIF-Article 1.0
Author-Name: Yingying Dong
Author-X-Name-First: Yingying
Author-X-Name-Last: Dong
Title: Regression Discontinuity Designs With Sample Selection
Abstract:
This article extends the standard regression discontinuity (RD) design to allow for sample selection or missing outcomes. We deal with both treatment endogeneity and sample selection. Identification in this article does not require any exclusion restrictions in the selection equation, nor does it require specifying any selection mechanism. The results can therefore be applied broadly, regardless of how sample selection is incurred. Identification instead relies on smoothness conditions, which are empirically plausible, have readily testable implications, and are typically assumed even in the standard RD design. We first provide identification of the “extensive margin” and “intensive margin” effects. Then, based on these identification results and principal stratification, sharp bounds are constructed for the treatment effects among the group of individuals that may be of particular policy interest, that is, the always-participating compliers. These results are applied to evaluate the impacts of academic probation on college completion and final GPAs. Our analysis reveals striking gender differences at the extensive versus the intensive margin in response to this negative signal on performance.
Journal: Journal of Business & Economic Statistics
Pages: 171-186
Issue: 1
Volume: 37
Year: 2019
Month: 1
X-DOI: 10.1080/07350015.2017.1302880
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1302880
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:1:p:171-186
Template-Type: ReDIF-Article 1.0
Author-Name: Jiexing Wu
Author-X-Name-First: Jiexing
Author-X-Name-Last: Wu
Author-Name: Kate J. Li
Author-X-Name-First: Kate J.
Author-X-Name-Last: Li
Author-Name: Jun S. Liu
Author-X-Name-First: Jun S.
Author-X-Name-Last: Liu
Title: Bayesian Inference for Assessing Effects of Email Marketing Campaigns
Abstract:
Email marketing has been an increasingly important tool for today’s businesses. In this article, we propose a counting-process-based Bayesian method for quantifying the effectiveness of email marketing campaigns in conjunction with customer characteristics. Our model explicitly addresses the seasonality of data, accounts for the impact of customer characteristics on their purchasing behavior, and evaluates effects of email offers as well as their interactions with customer characteristics. Using the proposed method, together with a propensity-score-based unit-matching technique for alleviating potential confounding, we analyze a large email marketing dataset of an online ticket marketplace to evaluate the short- and long-term effectiveness of their email campaigns. It is shown that email offers can increase customer purchase rate both immediately and during a longer term. Customers’ characteristics such as length of shopping history, purchase recency, average ticket price, average ticket count, and number of genres purchased also affect customers’ purchase rate. A strong positive interaction is uncovered between email offer and purchase recency, suggesting that customers who have been inactive recently are more likely to take advantage of promotional offers. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 253-266
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1141096
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1141096
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:253-266
Template-Type: ReDIF-Article 1.0
Author-Name: Haroon Mumtaz
Author-X-Name-First: Haroon
Author-X-Name-Last: Mumtaz
Author-Name: Konstantinos Theodoridis
Author-X-Name-First: Konstantinos
Author-X-Name-Last: Theodoridis
Title: The Changing Transmission of Uncertainty Shocks in the U.S.
Abstract:
This article investigates whether the impact of uncertainty shocks on the U.S. economy has changed over time. To this end, we develop an extended factor augmented vector autoregression (VAR) model that simultaneously allows the estimation of a measure of uncertainty and its time-varying impact on a range of variables. We find that the impact of uncertainty shocks on real activity and financial variables has declined systematically over time. In contrast, the response of inflation and the short-term interest rate to this shock has remained fairly stable. Simulations from a nonlinear dynamic stochastic general equilibrium (DSGE) model suggest that these empirical results are consistent with an increase in the monetary authorities’ anti-inflation stance and a “flattening” of the Phillips curve. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 239-252
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1147357
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1147357
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:239-252
Template-Type: ReDIF-Article 1.0
Author-Name: Wayne-Roy Gayle
Author-X-Name-First: Wayne-Roy
Author-X-Name-Last: Gayle
Author-Name: Natalia Khorunzhina
Author-X-Name-First: Natalia
Author-X-Name-Last: Khorunzhina
Title: Micro-Level Estimation of Optimal Consumption Choice With Intertemporal Nonseparability in Preferences and Measurement Errors
Abstract:
This article investigates the presence of habit formation in household consumption, using data from the Panel Study of Income Dynamics. We develop an econometric model of internal habit formation of the multiplicative specification. The restrictions of the model allow for classical measurement errors in consumption without parametric assumptions on the distribution of measurement errors. We estimate the parameters by nonlinear generalized method of moments and find that habit formation is an important determinant of household food-consumption patterns. Using the parameter estimates, we develop bounds for the expectation of the implied heterogeneous intertemporal elasticity of substitution and relative risk aversion that account for measurement errors, and compute confidence intervals for these bounds. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 227-238
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1149071
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1149071
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:227-238
Template-Type: ReDIF-Article 1.0
Author-Name: George Milunovich
Author-X-Name-First: George
Author-X-Name-Last: Milunovich
Author-Name: Minxian Yang
Author-X-Name-First: Minxian
Author-X-Name-Last: Yang
Title: Simultaneous Equation Systems With Heteroscedasticity: Identification, Estimation, and Stock Price Elasticities
Abstract:
We give a set of identifying conditions for p-dimensional (p ≥ 2) simultaneous equation systems (SES) with heteroscedasticity in the framework of Gaussian quasi-maximum likelihood (QML). Our conditions rely on the presence of heteroscedasticity in the data rather than identifying restrictions traditionally employed in the literature. The QML estimator is shown to be consistent for the true parameter point and asymptotically normal. Monte Carlo experiments indicate that the QML estimator performs well in comparison to the generalized method of moments (GMM) estimator in finite samples, even when the conditional variance is mildly misspecified. We analyze the relationship between traded stock prices and volumes in the setting of SES. Based on a sample of the Russell 3000 stocks, our findings provide new evidence against perfectly elastic demand and supply schedules for equities.
Journal: Journal of Business & Economic Statistics
Pages: 288-308
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1149072
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1149072
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:288-308
Template-Type: ReDIF-Article 1.0
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: On the Use of GLS Demeaning in Panel Unit Root Testing
Abstract:
One of the most well-known facts about unit root testing in time series is that the Dickey–Fuller (DF) test based on ordinary least squares (OLS) demeaned data suffers from low power, and that the use of generalized least squares (GLS) demeaning can lead to substantial power gains. Of course, this development has not gone unnoticed in the panel unit root literature. However, while the potential of using GLS demeaning is widely recognized, oddly enough, there are still no theoretical results available to facilitate a formal analysis of such demeaning in the panel data context. The present article can be seen as a reaction to this. The purpose is to evaluate the effect of GLS demeaning when used in conjunction with the pooled OLS t-test for a unit root, resulting in a panel analog of the time series DF–GLS test. A key finding is that the success of GLS depends critically on the order in which the dependent variable is demeaned and first-differenced. If the variable is demeaned prior to taking first-differences, power is maximized by using GLS demeaning, whereas if the differencing is done first, then OLS demeaning is preferred. Furthermore, even if the former demeaning approach is used, such that GLS is preferred, the asymptotic distribution of the resulting test is independent of the tuning parameters that characterize the local alternative under which the demeaning is performed. Hence, the demeaning can just as well be performed under the unit root null hypothesis. In this sense, GLS demeaning under the local alternative is redundant.
Journal: Journal of Business & Economic Statistics
Pages: 309-320
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1152969
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1152969
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:309-320
Template-Type: ReDIF-Article 1.0
Author-Name: Michael D. Bauer
Author-X-Name-First: Michael D.
Author-X-Name-Last: Bauer
Title: Restrictions on Risk Prices in Dynamic Term Structure Models
Abstract:
Restrictions on risk pricing in dynamic term structure models (DTSMs) tighten the link between cross-sectional and time-series variation of interest rates, and make absence of arbitrage useful for inference about expectations. This article presents a new econometric framework for estimation of affine Gaussian DTSMs under restrictions on risk prices, which addresses the issues of a large model space and of model uncertainty using a Bayesian approach. A simulation study demonstrates the good performance of the proposed method. The data on U.S. Treasury yields call for tight restrictions on risk pricing: only level risk is priced, and only changes in the slope affect term premia. Incorporating the restrictions changes the model-implied short-rate expectations and term premia. Interest rate persistence is higher than in a maximally flexible model, hence expectations of future short rates are more variable; restrictions on risk prices help resolve the puzzle of implausibly stable short-rate expectations in this literature. Consistent with survey evidence and conventional macro wisdom, restricted models attribute a large share of the secular decline in long-term interest rates to expectations of future nominal short rates. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 196-211
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1164707
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1164707
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:196-211
Template-Type: ReDIF-Article 1.0
Author-Name: Chew Lian Chua
Author-X-Name-First: Chew Lian
Author-X-Name-Last: Chua
Author-Name: Sarantis Tsiaplias
Author-X-Name-First: Sarantis
Author-X-Name-Last: Tsiaplias
Title: A Bayesian Approach to Modeling Time-Varying Cointegration and Cointegrating Rank
Abstract:
A multivariate model that allows for both a time-varying cointegrating matrix and time-varying cointegrating rank is presented. The model addresses the issue that, in real data, the validity of a constant cointegrating relationship may be questionable. The model nests the submodels implied by alternative cointegrating matrix ranks and allows for transitions between stationarity and nonstationarity, and cointegrating and noncointegrating relationships in accordance with the observed behavior of the data. A Bayesian test of cointegration is also developed. The model is used to assess the validity of the Fisher effect and is also applied to equity market data.
Journal: Journal of Business & Economic Statistics
Pages: 267-277
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1166117
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1166117
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:267-277
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaojun Song
Author-X-Name-First: Xiaojun
Author-X-Name-Last: Song
Author-Name: Abderrahim Taamouti
Author-X-Name-First: Abderrahim
Author-X-Name-Last: Taamouti
Title: Measuring Nonlinear Granger Causality in Mean
Abstract:
We propose model-free measures for Granger causality in mean between random variables. Unlike the existing measures, ours are able to detect and quantify nonlinear causal effects. The new measures are based on nonparametric regressions and defined as logarithmic functions of restricted and unrestricted mean square forecast errors. They are easily and consistently estimated by replacing the unknown mean square forecast errors by their nonparametric kernel estimates. We derive the asymptotic normality of the nonparametric estimator of the causality measures, which we use to build tests for their statistical significance. We establish the validity of a smoothed local bootstrap that one can use in finite sample settings to perform statistical tests. Monte Carlo simulations reveal that the proposed test has good finite sample size and power properties for a variety of data-generating processes and different sample sizes. Finally, the empirical importance of measuring nonlinear causality in mean is also illustrated. We quantify the degree of nonlinear predictability of the equity risk premium using the variance risk premium. Our empirical results show that the variance risk premium is a very good predictor of the risk premium at horizons of less than 6 months. We also find a high degree of predictability at the 1-month horizon, which can be attributed to a nonlinear causal effect. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 321-333
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1166118
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1166118
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:321-333
Template-Type: ReDIF-Article 1.0
Author-Name: Frédérique Fève
Author-X-Name-First: Frédérique
Author-X-Name-Last: Fève
Author-Name: Jean-Pierre Florens
Author-X-Name-First: Jean-Pierre
Author-X-Name-Last: Florens
Author-Name: Ingrid Van Keilegom
Author-X-Name-First: Ingrid
Author-X-Name-Last: Van Keilegom
Title: Estimation of Conditional Ranks and Tests of Exogeneity in Nonparametric Nonseparable Models
Abstract:
Consider a nonparametric nonseparable regression model Y = ϕ(Z, U), where ϕ(Z, U) is strictly increasing in U and U ∼ U[0, 1]. We suppose that there exists an instrument W that is independent of U. The observable random variables are Y, Z, and W, all one-dimensional. We construct test statistics for the hypothesis that Z is exogenous, that is, that U is independent of Z. The test statistics are based on the observation that Z is exogenous if and only if V = FY|Z(Y|Z) is independent of W, and hence they do not require the estimation of the function ϕ. The asymptotic properties of the proposed tests are proved, and a bootstrap approximation of the critical values of the tests is shown to be consistent and to work well in finite samples via simulations. An empirical example using the U.K. Family Expenditure Survey is also given. As a byproduct of our results we obtain the asymptotic properties of a kernel estimator of the distribution of V, which equals U when Z is exogenous. We show that this estimator converges to the uniform distribution at a faster rate than the parametric n−1/2 rate.
Journal: Journal of Business & Economic Statistics
Pages: 334-345
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1166120
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1166120
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:334-345
Template-Type: ReDIF-Article 1.0
Author-Name: Lucio Barabesi
Author-X-Name-First: Lucio
Author-X-Name-Last: Barabesi
Author-Name: Andrea Cerasa
Author-X-Name-First: Andrea
Author-X-Name-Last: Cerasa
Author-Name: Andrea Cerioli
Author-X-Name-First: Andrea
Author-X-Name-Last: Cerioli
Author-Name: Domenico Perrotta
Author-X-Name-First: Domenico
Author-X-Name-Last: Perrotta
Title: Goodness-of-Fit Testing for the Newcomb-Benford Law With Application to the Detection of Customs Fraud
Abstract:
The Newcomb-Benford law for digit sequences has recently attracted interest in antifraud analysis. However, most of its applications rely either on diagnostic checks of the data, or on informal decision rules. We suggest a new way of testing the Newcomb-Benford law that turns out to be particularly attractive for the detection of frauds in customs data collected from international trade. Our approach has two major advantages. The first one is that we control the rate of false rejections at each stage of the procedure, as required in antifraud applications. The second improvement is that our testing procedure leads to exact significance levels and does not rely on large-sample approximations. Another contribution of our work is the derivation of a simple expression for the digit distribution when the Newcomb-Benford law is violated, and a bound for a chi-squared type of distance between the actual digit distribution and the Newcomb-Benford one.
Journal: Journal of Business & Economic Statistics
Pages: 346-358
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1172014
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1172014
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:346-358
Template-Type: ReDIF-Article 1.0
Author-Name: P. Richard Hahn
Author-X-Name-First: P. Richard
Author-X-Name-Last: Hahn
Author-Name: Jingyu He
Author-X-Name-First: Jingyu
Author-X-Name-Last: He
Author-Name: Hedibert Lopes
Author-X-Name-First: Hedibert
Author-X-Name-Last: Lopes
Title: Bayesian Factor Model Shrinkage for Linear IV Regression With Many Instruments
Abstract:
A Bayesian approach for the many instruments problem in linear instrumental variable models is presented. The new approach has two components. First, a slice sampler is developed, which leverages a decomposition of the likelihood function that is a Bayesian analogue to two-stage least squares. The new sampler permits nonconjugate shrinkage priors to be implemented easily and efficiently. The new computational approach permits a Bayesian analysis of problems that were previously infeasible due to computational demands that scaled poorly in the number of regressors. Second, a new predictor-dependent shrinkage prior is developed specifically for the many instruments setting. The prior is constructed based on a factor model decomposition of the matrix of observed instruments, allowing many instruments to be incorporated into the analysis in a robust way. Features of the new method are illustrated via a simulation study and three empirical examples.
Journal: Journal of Business & Economic Statistics
Pages: 278-287
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1172968
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1172968
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:278-287
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Zheng Fang
Author-X-Name-First: Zheng
Author-X-Name-Last: Fang
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Author-Name: Chih-Ling Tsai
Author-X-Name-First: Chih-Ling
Author-X-Name-Last: Tsai
Title: Covariance Matrix Estimation via Network Structure
Abstract:
In this article, we employ a regression formulation to estimate the high-dimensional covariance matrix for a given network structure. Using prior information contained in the network relationships, we model the covariance as a polynomial function of the symmetric adjacency matrix. Accordingly, the problem of estimating a high-dimensional covariance matrix is converted to one of estimating low-dimensional coefficients of the polynomial regression function, which we can accomplish using ordinary least squares or maximum likelihood. The resulting covariance matrix estimator based on the maximum likelihood approach is guaranteed to be positive definite even in finite samples. Under mild conditions, we obtain the theoretical properties of the resulting estimators. A Bayesian information criterion is also developed to select the order of the polynomial function. Simulation studies and empirical examples illustrate the usefulness of the proposed methods.
Journal: Journal of Business & Economic Statistics
Pages: 359-369
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1173558
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1173558
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:359-369
Template-Type: ReDIF-Article 1.0
Author-Name: Dong Hwan Oh
Author-X-Name-First: Dong Hwan
Author-X-Name-Last: Oh
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Title: Time-Varying Systemic Risk: Evidence From a Dynamic Copula Model of CDS Spreads
Abstract:
This article proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modeling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the estimation of the joint model in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008–2009, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the precrisis period. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 181-195
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1177535
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1177535
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:181-195
Template-Type: ReDIF-Article 1.0
Author-Name: Yan Fan
Author-X-Name-First: Yan
Author-X-Name-Last: Fan
Author-Name: Wolfgang Karl Härdle
Author-X-Name-First: Wolfgang Karl
Author-X-Name-Last: Härdle
Author-Name: Weining Wang
Author-X-Name-First: Weining
Author-X-Name-Last: Wang
Author-Name: Lixing Zhu
Author-X-Name-First: Lixing
Author-X-Name-Last: Zhu
Title: Single-Index-Based CoVaR With Very High-Dimensional Covariates
Abstract:
Systemic risk analysis reveals the interdependencies of risk factors, especially in tail event situations. In applications, the focus of interest is on capturing joint tail behavior rather than variation around the mean. Quantile and expectile regression are used here as tools of data analysis. When it comes to characterizing tail event curves, one faces a dimensionality problem, which is important for CoVaR (Conditional Value at Risk) determination. A projection-based single-index model specification may come to the rescue, but for ultrahigh-dimensional regressors one faces yet another dimensionality problem and needs to balance precision versus dimension. Such a balance is achieved by combining semiparametric ideas with variable selection techniques. In particular, we propose a projection-based single-index model specification for very high-dimensional regressors. This model is used for practical CoVaR estimates with a systemically chosen indicator. In simulations we demonstrate the practical side of the semiparametric CoVaR method. The application to the U.S. financial sector shows good backtesting results and indicates market coagulation before the crisis period. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 212-226
Issue: 2
Volume: 36
Year: 2018
Month: 4
X-DOI: 10.1080/07350015.2016.1180990
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1180990
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:2:p:212-226
Template-Type: ReDIF-Article 1.0
Author-Name: Qingliang Fan
Author-X-Name-First: Qingliang
Author-X-Name-Last: Fan
Author-Name: Wei Zhong
Author-X-Name-First: Wei
Author-X-Name-Last: Zhong
Title: Nonparametric Additive Instrumental Variable Estimator: A Group Shrinkage Estimation Perspective
Abstract:
In this article, we study a nonparametric approach to a general nonlinear reduced form equation to achieve a better approximation of the optimal instrument. Accordingly, we propose the nonparametric additive instrumental variable estimator (NAIVE) with the adaptive group Lasso. We theoretically demonstrate that the proposed estimator is root-n consistent and asymptotically normal. The adaptive group Lasso helps us select the valid instruments while the dimensionality of potential instrumental variables is allowed to be greater than the sample size. In practice, the degree and knots of the B-spline series are selected by minimizing the BIC or EBIC criteria for each nonparametric additive component in the reduced form equation. In Monte Carlo simulations, we show that the NAIVE has the same performance as the linear instrumental variable (IV) estimator for the truly linear reduced form equation. On the other hand, the NAIVE performs much better in terms of bias and mean squared errors compared to other alternative estimators under the high-dimensional nonlinear reduced form equation. We further illustrate our method in an empirical study of international trade and growth. Our findings provide stronger evidence that international trade has a significant positive effect on economic growth.
Journal: Journal of Business & Economic Statistics
Pages: 388-399
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1180991
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1180991
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:388-399
Template-Type: ReDIF-Article 1.0
Author-Name: Seojeong Lee
Author-X-Name-First: Seojeong
Author-X-Name-Last: Lee
Title: A Consistent Variance Estimator for 2SLS When Instruments Identify Different LATEs
Abstract:
Under treatment effect heterogeneity, an instrument identifies the instrument-specific local average treatment effect (LATE). With multiple instruments, the two-stage least squares (2SLS) estimand is a weighted average of different LATEs. What is often overlooked in the literature is that the postulated moment condition evaluated at the 2SLS estimand does not hold unless those LATEs are the same. In that case, the conventional heteroscedasticity-robust variance estimator would be inconsistent, and 2SLS standard errors based on such estimators would be incorrect. I derive the correct asymptotic distribution, and propose a consistent asymptotic variance estimator by using the result of Hall and Inoue (2003, Journal of Econometrics) on misspecified moment condition models. This can be used to correctly calculate the standard errors regardless of whether there is more than one LATE.
Journal: Journal of Business & Economic Statistics
Pages: 400-410
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1186555
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1186555
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:400-410
Template-Type: ReDIF-Article 1.0
Author-Name: Donald Robertson
Author-X-Name-First: Donald
Author-X-Name-Last: Robertson
Author-Name: Vasilis Sarafidis
Author-X-Name-First: Vasilis
Author-X-Name-Last: Sarafidis
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: Unit Root Inference in Generally Trending and Cross-Correlated Fixed-T Panels
Abstract:
This article proposes a new panel unit root test based on the generalized method of moments approach for panels with a possibly small number of time periods, T, and a large number of cross-sectional units, N. In the model that we consider the deterministic trend function is essentially unrestricted and the errors obey a multifactor structure that allows for rich forms of unobserved heterogeneity. In spite of these allowances, the GMM estimator considered is shown to be asymptotically unbiased, $\sqrt{N}$-consistent, and asymptotically normal for all values of the autoregressive (AR) coefficient, ρ, including unity, making it a natural candidate for unit root inference. Results from our Monte Carlo study suggest that the asymptotic properties are borne out well in small samples. The implementation is illustrated by using a large sample of US banking institutions to test Gibrat’s Law.
Journal: Journal of Business & Economic Statistics
Pages: 493-504
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1191501
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1191501
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:493-504
Template-Type: ReDIF-Article 1.0
Author-Name: Junye Li
Author-X-Name-First: Junye
Author-X-Name-Last: Li
Author-Name: Gabriele Zinna
Author-X-Name-First: Gabriele
Author-X-Name-Last: Zinna
Title: The Variance Risk Premium: Components, Term Structures, and Stock Return Predictability
Abstract:
This article examines the properties of the variance risk premium (VRP). We propose a flexible asset pricing model that captures co-jumps in prices and volatility, and self-exciting jump clustering. We estimate the model on equity returns and variance swap rates at different horizons. The total VRP is negative and has a downward-sloping term structure, while its jump component displays an upward-sloping term structure. The abrupt and persistent response of the short-term jump VRP to extreme events makes this specific premium a proxy for investors’ fear of a market crash. Furthermore, the use of the VRP level and slope, and of its components, helps improve the short-run predictability of equity excess returns.
Journal: Journal of Business & Economic Statistics
Pages: 411-425
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1191502
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1191502
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:411-425
Template-Type: ReDIF-Article 1.0
Author-Name: Guangying Liu
Author-X-Name-First: Guangying
Author-X-Name-Last: Liu
Author-Name: Bing-Yi Jing
Author-X-Name-First: Bing-Yi
Author-X-Name-Last: Jing
Title: On Estimation of Hurst Parameter Under Noisy Observations
Abstract:
It is widely accepted that some financial data exhibit long memory or long dependence, and that the observed data usually possess noise. In the continuous time situation, the fractional Brownian motion B_H and its extension are an important class of models to characterize the long memory or short memory of data, and the Hurst parameter H is an index to describe the degree of dependence. In this article, we estimate the Hurst parameter of a discretely sampled fractional integral process corrupted by noise. We use the preaverage method to diminish the impact of noise, employ the filter method to exclude the strong dependence and obtain the smoothed data, and estimate the Hurst parameter from the smoothed data. The asymptotic properties such as consistency and asymptotic normality of the estimator are established. Simulations for evaluating the performance of the estimator are conducted. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 483-492
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1191503
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1191503
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:483-492
Template-Type: ReDIF-Article 1.0
Author-Name: Yi-Ting Chen
Author-X-Name-First: Yi-Ting
Author-X-Name-Last: Chen
Title: A Unified Approach to Estimating and Testing Income Distributions With Grouped Data
Abstract:
We propose a unified approach that is flexibly applicable to various types of grouped data for estimating and testing parametric income distributions. To simplify the use of our approach, we also provide a parametric bootstrap method and show its asymptotic validity. We also compare this approach with existing methods for grouped income data, and assess their finite-sample performance by a Monte Carlo simulation. For empirical demonstrations, we apply our approach to recovering China's income/consumption distributions from a sequence of income/consumption share tables and the U.S. income distributions from a combination of income shares and sample quantiles. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 438-455
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1194762
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1194762
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:438-455
Template-Type: ReDIF-Article 1.0
Author-Name: Yi Yang
Author-X-Name-First: Yi
Author-X-Name-Last: Yang
Author-Name: Wei Qian
Author-X-Name-First: Wei
Author-X-Name-Last: Qian
Author-Name: Hui Zou
Author-X-Name-First: Hui
Author-X-Name-Last: Zou
Title: Insurance Premium Prediction via Gradient Tree-Boosted Tweedie Compound Poisson Models
Abstract:
The Tweedie GLM is a widely used method for predicting insurance premiums. However, the structure of the logarithmic mean is restricted to a linear form in the Tweedie GLM, which can be too rigid for many applications. As a better alternative, we propose a gradient tree-boosting algorithm and apply it to Tweedie compound Poisson models for pure premiums. We use a profile likelihood approach to estimate the index and dispersion parameters. Our method is capable of fitting a flexible nonlinear Tweedie model and capturing complex interactions among predictors. A simulation study confirms the excellent prediction performance of our method. As an application, we apply our method to auto-insurance claim data and show that the new method is superior to existing methods in the sense that it generates more accurate premium predictions, thus helping solve the adverse selection issue. We have implemented our method in a user-friendly R package that also includes a nice visualization tool for interpreting the fitted model.
Journal: Journal of Business & Economic Statistics
Pages: 456-470
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1200981
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1200981
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:456-470
Template-Type: ReDIF-Article 1.0
Author-Name: Peter Stüttgen
Author-X-Name-First: Peter
Author-X-Name-Last: Stüttgen
Author-Name: Peter Boatwright
Author-X-Name-First: Peter
Author-X-Name-Last: Boatwright
Author-Name: Joseph B. Kadane
Author-X-Name-First: Joseph B.
Author-X-Name-Last: Kadane
Title: Stockouts and Restocking: Monitoring the Retailer from the Supplier’s Perspective
Abstract:
Suppliers and retailers typically do not have identical incentives to avoid stockouts (lost sales due to the lack of product availability on the shelf). Thus, the supplier needs to monitor the retailer’s restocking efforts with the available data. We empirically assess stockout levels using only shipment and sales data that is readily available to the supplier. The model distinguishes between store stockouts (zero inventory in the store) and shelf stockouts (an empty shelf but some inventory in other parts of the store), thereby identifying the cause of the stockout to be either a supply chain or a restocking issue. We find that, as suspected by the supplier, the average stockout rate is much higher than published averages. In addition, stockout rates vary widely between stores. Moreover, almost all stockouts are shelf stockouts. The model identifies stores that may have restocking issues.
Journal: Journal of Business & Economic Statistics
Pages: 471-482
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1200982
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1200982
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:471-482
Template-Type: ReDIF-Article 1.0
Author-Name: Jin Seo Cho
Author-X-Name-First: Jin Seo
Author-X-Name-Last: Cho
Author-Name: Myung-Ho Park
Author-X-Name-First: Myung-Ho
Author-X-Name-Last: Park
Author-Name: Peter C. B. Phillips
Author-X-Name-First: Peter C. B.
Author-X-Name-Last: Phillips
Title: Practical Kolmogorov–Smirnov Testing by Minimum Distance Applied to Measure Top Income Shares in Korea
Abstract:
We study Kolmogorov–Smirnov goodness-of-fit tests for evaluating distributional hypotheses where unknown parameters need to be fitted. Following the work of Pollard (1980), our approach uses a Cramér–von Mises minimum distance estimator for parameter estimation. The asymptotic null distribution of the resulting test statistic is represented by invariance principle arguments as a functional of a Brownian bridge in a simple regression format for which asymptotic critical values are readily delivered by simulations. Asymptotic power is examined under fixed and local alternatives and finite sample performance of the test is evaluated in simulations. The test is applied to measure top income shares using Korean income tax return data over 2007–2012. When the data relate to estimating the upper 0.1% or higher income shares, the conventional assumption of a Pareto tail distribution cannot be rejected. But the Pareto tail hypothesis is rejected for estimating the top 1.0% or 0.5% income shares at the 5% significance level. A supplement containing proofs and data descriptions is available online.
Journal: Journal of Business & Economic Statistics
Pages: 523-537
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1200983
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1200983
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:523-537
Template-Type: ReDIF-Article 1.0
Author-Name: Joshua D. Angrist
Author-X-Name-First: Joshua D.
Author-X-Name-Last: Angrist
Author-Name: Òscar Jordà
Author-X-Name-First: Òscar
Author-X-Name-Last: Jordà
Author-Name: Guido M. Kuersteiner
Author-X-Name-First: Guido M.
Author-X-Name-Last: Kuersteiner
Title: Semiparametric Estimates of Monetary Policy Effects: String Theory Revisited
Abstract:
We develop flexible semiparametric time series methods for the estimation of the causal effect of monetary policy on macroeconomic aggregates. Our estimator captures the average causal response to discrete policy interventions in a macrodynamic setting, without the need for assumptions about the process generating macroeconomic outcomes. The proposed estimation strategy, based on propensity score weighting, easily accommodates asymmetric and nonlinear responses. Using this estimator, we show that monetary tightening has clear effects on the yield curve and on economic activity. Monetary accommodation, however, appears to generate less pronounced responses from both. Estimates for recent financial crisis years display a similarly dampened response to monetary accommodation.
Journal: Journal of Business & Economic Statistics
Pages: 371-387
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1204919
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1204919
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:371-387
Template-Type: ReDIF-Article 1.0
Author-Name: L. Bissonnette
Author-X-Name-First: L.
Author-X-Name-Last: Bissonnette
Author-Name: J. de Bresser
Author-X-Name-First: J.
Author-X-Name-Last: de Bresser
Title: Eliciting Subjective Survival Curves: Lessons from Partial Identification
Abstract:
When analyzing data on subjective expectations of continuous outcomes, researchers have access to a limited number of reported probabilities for each respondent from which to construct complete distribution functions. Moreover, reported probabilities may be rounded and thus not equal to true beliefs. Using survival expectations elicited from a representative sample from the Netherlands, we investigate what can be learned if we take these two sources of missing information into account and expectations are therefore only partially identified. We find novel evidence for rounding by checking whether reported expectations are consistent with a hazard of death that increases weakly with age. Only 39% of reported beliefs are consistent with this under the assumption that all probabilities are reported precisely, while 92% are if we allow for rounding. Using the available information to construct bounds on subjective life expectancy, we show that the data alone are not sufficiently informative to allow for useful inference in partially identified linear models, even in the absence of rounding. We propose to improve precision by interpolation between rounded probabilities. Interpolation in combination with a limited amount of rounding does yield informative intervals.
Journal: Journal of Business & Economic Statistics
Pages: 505-515
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1213635
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1213635
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:505-515
Template-Type: ReDIF-Article 1.0
Author-Name: Michael W. McCracken
Author-X-Name-First: Michael W.
Author-X-Name-Last: McCracken
Author-Name: Giorgio Valente
Author-X-Name-First: Giorgio
Author-X-Name-Last: Valente
Title: Asymptotic Inference for Performance Fees and the Predictability of Asset Returns
Abstract:
In this article, we provide analytical, simulation, and empirical evidence on a test of equal economic value from competing predictive models of asset returns. We define economic value using the concept of a performance fee—the amount an investor would be willing to pay to have access to an alternative predictive model used to make investment decisions. We establish that this fee can be asymptotically normal under modest assumptions. Monte Carlo evidence shows that our test can be accurately sized in reasonably large samples. We apply the proposed test to predictions of the U.S. equity premium.
Journal: Journal of Business & Economic Statistics
Pages: 426-437
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1215317
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1215317
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:426-437
Template-Type: ReDIF-Article 1.0
Author-Name: Daniel Melser
Author-X-Name-First: Daniel
Author-X-Name-Last: Melser
Title: Scanner Data Price Indexes: Addressing Some Unresolved Issues
Abstract:
Scanner data are increasingly being used in the calculation of price indexes such as the CPI. The preeminent approach is the RYGEKS method (Ivancic, Diewert and Fox 2011). This uses multilateral methods to construct price parities across a rolling year and then links these to construct a nonrevisable index. While this approach performs well, some issues remain unresolved, in particular the optimal window length and the linking method. In this note, these questions are addressed. A novel linking method is proposed, along with the use of weighted GEKS as opposed to a fixed window. These approaches are illustrated empirically on a large scanner dataset and perform well.
Journal: Journal of Business & Economic Statistics
Pages: 516-522
Issue: 3
Volume: 36
Year: 2018
Month: 7
X-DOI: 10.1080/07350015.2016.1218339
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1218339
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:3:p:516-522
Template-Type: ReDIF-Article 1.0
Author-Name: Jeffrey M. Wooldridge
Author-X-Name-First: Jeffrey M.
Author-X-Name-Last: Wooldridge
Author-Name: Ying Zhu
Author-X-Name-First: Ying
Author-X-Name-Last: Zhu
Title: Inference in Approximately Sparse Correlated Random Effects Probit Models With Panel Data
Abstract:
We propose a simple procedure based on an existing “debiased” l1-regularized method for inference of the average partial effects (APEs) in approximately sparse probit and fractional probit models with panel data, where the number of time periods is fixed and small relative to the number of cross-sectional observations. Our method is computationally simple and does not suffer from the incidental parameters problem that comes from attempting to estimate as a parameter the unobserved heterogeneity for each cross-sectional unit. Furthermore, it is robust to arbitrary serial dependence in the underlying idiosyncratic errors. Our theoretical results illustrate that inference concerning APEs is more challenging than inference about fixed and low-dimensional parameters, as the former concerns deriving the asymptotic normality for sample averages of linear functions of a potentially large set of components in our estimator when a series approximation for the conditional mean of the unobserved heterogeneity is considered. Insights on the applicability and implications of other existing Lasso-based inference procedures for our problem are provided. We apply the debiasing method to estimate the effects of spending on test pass rates. Our results show that spending has a positive and statistically significant average partial effect; moreover, the effect is comparable to that found using standard parametric methods.
Journal: Journal of Business & Economic Statistics
Pages: 1-18
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2019.1681276
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1681276
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:1-18
Template-Type: ReDIF-Article 1.0
Author-Name: David A. Hirshberg
Author-X-Name-First: David A.
Author-X-Name-Last: Hirshberg
Author-Name: Stefan Wager
Author-X-Name-First: Stefan
Author-X-Name-Last: Wager
Title: Debiased Inference of Average Partial Effects in Single-Index Models: Comment on Wooldridge and Zhu
Journal: Journal of Business & Economic Statistics
Pages: 19-24
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2019.1681277
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1681277
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:19-24
Template-Type: ReDIF-Article 1.0
Author-Name: Jeffrey M. Wooldridge
Author-X-Name-First: Jeffrey M.
Author-X-Name-Last: Wooldridge
Author-Name: Ying Zhu
Author-X-Name-First: Ying
Author-X-Name-Last: Zhu
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 25-26
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2019.1681278
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1681278
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:25-26
Template-Type: ReDIF-Article 1.0
Author-Name: Peter Hördahl
Author-X-Name-First: Peter
Author-X-Name-Last: Hördahl
Author-Name: Eli M. Remolona
Author-X-Name-First: Eli M.
Author-X-Name-Last: Remolona
Author-Name: Giorgio Valente
Author-X-Name-First: Giorgio
Author-X-Name-Last: Valente
Title: Expectations and Risk Premia at 8:30 a.m.: Deciphering the Responses of Bond Yields to Macroeconomic Announcements
Abstract:
What explains the sharp movements of the yield curve upon the release of major U.S. macroeconomic announcements? To answer this question, we estimate an arbitrage-free dynamic term structure model with macroeconomic fundamentals as risk factors. We assume that the yield curve reacts to announcements primarily because of the information they contain about the fundamentals of output, inflation, and the Fed’s inflation target. We model the updating process by linking the factor shocks to announcement surprises. Fitting this process to data on yield curve movements in 20-min event windows, we find that most major announcements, especially those about the labor market, are informative largely about the output gap rather than about inflation. The resulting changes in short-rate expectations account for the bulk of observed yield movements. But adjustments in risk premia are also sizable. In partly offsetting the effects of short-rate expectations, these adjustments help to account for the well-known hump-shaped pattern of yield reactions across maturities.
Journal: Journal of Business & Economic Statistics
Pages: 27-42
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1429278
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1429278
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:27-42
Template-Type: ReDIF-Article 1.0
Author-Name: Daisuke Yagi
Author-X-Name-First: Daisuke
Author-X-Name-Last: Yagi
Author-Name: Yining Chen
Author-X-Name-First: Yining
Author-X-Name-Last: Chen
Author-Name: Andrew L. Johnson
Author-X-Name-First: Andrew L.
Author-X-Name-Last: Johnson
Author-Name: Timo Kuosmanen
Author-X-Name-First: Timo
Author-X-Name-Last: Kuosmanen
Title: Shape-Constrained Kernel-Weighted Least Squares: Estimating Production Functions for Chilean Manufacturing Industries
Abstract:
In this article, we examine a novel way of imposing shape constraints on a local polynomial kernel estimator. The proposed approach is referred to as shape constrained kernel-weighted least squares (SCKLS). We prove uniform consistency of the SCKLS estimator with monotonicity and convexity/concavity constraints and establish its convergence rate. In addition, we propose a test to validate whether shape constraints are correctly specified. The competitiveness of SCKLS is shown in a comprehensive simulation study. Finally, we analyze Chilean manufacturing data using the SCKLS estimator and quantify production in the plastics and wood industries. The results show that exporting firms have significantly higher productivity.
Journal: Journal of Business & Economic Statistics
Pages: 43-54
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1431128
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1431128
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:43-54
Template-Type: ReDIF-Article 1.0
Author-Name: Giuseppe Cavaliere
Author-X-Name-First: Giuseppe
Author-X-Name-Last: Cavaliere
Author-Name: Heino Bohn Nielsen
Author-X-Name-First: Heino Bohn
Author-X-Name-Last: Nielsen
Author-Name: Anders Rahbek
Author-X-Name-First: Anders
Author-X-Name-Last: Rahbek
Title: Bootstrapping Noncausal Autoregressions: With Applications to Explosive Bubble Modeling
Abstract:
In this article, we develop new bootstrap-based inference for noncausal autoregressions with heavy-tailed innovations. This class of models is widely used for modeling bubbles and explosive dynamics in economic and financial time series. In the noncausal, heavy-tail framework, a major drawback of asymptotic inference is that it is not feasible in practice, as the relevant limiting distributions depend crucially on the (unknown) decay rate of the tails of the distribution of the innovations. In addition, even in the unrealistic case where the tail behavior is known, asymptotic inference may suffer from small-sample issues. To overcome these difficulties, we propose bootstrap inference procedures using parameter estimates obtained with the null hypothesis imposed (the so-called restricted bootstrap). We discuss three different choices of bootstrap innovations: the wild bootstrap, based on Rademacher errors; the permutation bootstrap; and a combination of the two (the “permutation wild bootstrap”). Crucially, implementation of these bootstraps does not require any a priori knowledge about the distribution of the innovations, such as the tail index or the convergence rates of the estimators. We establish sufficient conditions ensuring that, under the null hypothesis, the bootstrap statistics consistently estimate particular conditional distributions of the original statistics. In particular, we show that validity of the permutation bootstrap holds without any restrictions on the distribution of the innovations, while the permutation wild and the standard wild bootstraps require further assumptions such as symmetry of the innovation distribution. Extensive Monte Carlo simulations show that the finite sample performance of the proposed bootstrap tests is exceptionally good, both in terms of size and of empirical rejection probabilities under the alternative hypothesis. We conclude by applying the proposed bootstrap inference to Bitcoin/USD exchange rates and to crude oil price data.
We find that noncausal models with heavy-tailed innovations are indeed able to fit the data, including in periods of bubble dynamics. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 55-67
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1448830
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1448830
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:55-67
Template-Type: ReDIF-Article 1.0
Author-Name: Joshua C. C. Chan
Author-X-Name-First: Joshua C. C.
Author-X-Name-Last: Chan
Title: Large Bayesian VARs: A Flexible Kronecker Error Covariance Structure
Abstract:
We introduce a class of large Bayesian vector autoregressions (BVARs) that allows for non-Gaussian, heteroscedastic, and serially dependent innovations. To make estimation computationally tractable, we exploit a certain Kronecker structure of the likelihood implied by this class of models. We propose a unified approach for estimating these models using Markov chain Monte Carlo (MCMC) methods. In an application that involves 20 macroeconomic variables, we find that these BVARs with more flexible covariance structures outperform the standard variant with independent, homoscedastic Gaussian innovations in both in-sample model-fit and out-of-sample forecast performance.
Journal: Journal of Business & Economic Statistics
Pages: 68-79
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1451336
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1451336
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:68-79
Template-Type: ReDIF-Article 1.0
Author-Name: Chung Eun Lee
Author-X-Name-First: Chung Eun
Author-X-Name-Last: Lee
Author-Name: Xiaofeng Shao
Author-X-Name-First: Xiaofeng
Author-X-Name-Last: Shao
Title: Volatility Martingale Difference Divergence Matrix and Its Application to Dimension Reduction for Multivariate Volatility
Abstract:
In this article, we propose the so-called volatility martingale difference divergence matrix (VMDDM) to quantify the conditional variance dependence of a random vector Y in R^p given X in R^q, building on the recent work on the martingale difference divergence matrix (MDDM) that measures conditional mean dependence. We further generalize VMDDM to the time series context and apply it to dimension reduction for multivariate volatility, following the recent work by Hu and Tsay and Li et al. Unlike the latter two papers, our metric is easy to compute, can fully capture nonlinear serial dependence, and involves fewer user-chosen numbers. Furthermore, we propose a variant of VMDDM and apply it to the estimation of the conditional uncorrelated components model (Fan, Wang, and Yao 2008). Simulation and data illustration show that our method can perform well in comparison with existing ones with less computational time, and can outperform others in cases of strong nonlinear dependence.
Journal: Journal of Business & Economic Statistics
Pages: 80-92
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1458621
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1458621
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:80-92
Template-Type: ReDIF-Article 1.0
Author-Name: Carlos Daniel Santos
Author-X-Name-First: Carlos Daniel
Author-X-Name-Last: Santos
Title: Identifying Demand Shocks From Production Data
Abstract:
Standard productivity estimates contain a mixture of cost efficiency and demand conditions. I propose a method to identify the distribution of the demand shock using production data. Identification does not depend on functional form restrictions. It is also robust to dynamic demand considerations and flexible labor. In the parametric case, the ratio of intermediate inputs to the wage bill (the input ratio) contains information about the magnitude of the demand shock. The method is tested using data from Spain that contain information on prices and demand conditions. Finally, I generate Monte Carlo simulations to evaluate the method’s performance and sensitivity. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 93-106
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1458622
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1458622
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:93-106
Template-Type: ReDIF-Article 1.0
Author-Name: Jack Fosten
Author-X-Name-First: Jack
Author-X-Name-Last: Fosten
Author-Name: Daniel Gutknecht
Author-X-Name-First: Daniel
Author-X-Name-Last: Gutknecht
Title: Testing Nowcast Monotonicity with Estimated Factors
Abstract:
This article proposes a test to determine whether “big data” nowcasting methods, which have become an important tool to many public and private institutions, are monotonically improving as new information becomes available. The test is the first to formalize existing evaluation procedures from the nowcasting literature. We place particular emphasis on models involving estimated factors, since factor-based methods are a leading case in the high-dimensional empirical nowcasting literature, although our test is still applicable to small-dimensional set-ups like bridge equations and MIDAS models. Our approach extends a recent methodology for testing many moment inequalities to the case of nowcast monotonicity testing, which allows the number of inequalities to grow with the sample size. We provide results showing the conditions under which both parameter estimation error and factor estimation error can be accommodated in this high-dimensional setting when using the pseudo out-of-sample approach. The finite sample performance of our test is illustrated using a wide range of Monte Carlo simulations, and we conclude with an empirical application of nowcasting U.S. real gross domestic product (GDP) growth and five GDP sub-components. Our test results confirm monotonicity for all but one sub-component (government spending), suggesting that the factor-augmented model may be misspecified for this GDP constituent. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 107-123
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1458623
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1458623
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:107-123
Template-Type: ReDIF-Article 1.0
Author-Name: Pooyan Amir-Ahmadi
Author-X-Name-First: Pooyan
Author-X-Name-Last: Amir-Ahmadi
Author-Name: Christian Matthes
Author-X-Name-First: Christian
Author-X-Name-Last: Matthes
Author-Name: Mu-Chun Wang
Author-X-Name-First: Mu-Chun
Author-X-Name-Last: Wang
Title: Choosing Prior Hyperparameters: With Applications to Time-Varying Parameter Models
Abstract:
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 124-136
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1459302
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1459302
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:124-136
Template-Type: ReDIF-Article 1.0
Author-Name: David Gunawan
Author-X-Name-First: David
Author-X-Name-Last: Gunawan
Author-Name: Mohamad A. Khaled
Author-X-Name-First: Mohamad A.
Author-X-Name-Last: Khaled
Author-Name: Robert Kohn
Author-X-Name-First: Robert
Author-X-Name-Last: Kohn
Title: Mixed Marginal Copula Modeling
Abstract:
This article extends the literature on copulas with discrete or continuous marginals to the case where some of the marginals are a mixture of discrete and continuous components. We do so by carefully defining the likelihood as the density of the observations with respect to a mixed measure. The treatment is quite general, although we focus on mixtures of Gaussian and Archimedean copulas. The inference is Bayesian with the estimation carried out by Markov chain Monte Carlo. We illustrate the methodology and algorithms by applying them to estimate a multivariate income dynamics model. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 137-147
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1469998
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1469998
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:137-147
Template-Type: ReDIF-Article 1.0
Author-Name: Kuangyu Wen
Author-X-Name-First: Kuangyu
Author-X-Name-Last: Wen
Author-Name: Ximing Wu
Author-X-Name-First: Ximing
Author-X-Name-Last: Wu
Title: Transformation-Kernel Estimation of Copula Densities
Abstract:
The standard kernel estimator of copula densities suffers from boundary biases and inconsistency due to unbounded densities. Transforming the domain of estimation into an unbounded one remedies both problems, but also introduces an unbounded multiplier that may produce erratic boundary behaviors in the final density estimate. We propose an improved transformation-kernel estimator that employs a smooth tapering device to counter the undesirable influence of the multiplier. We establish the theoretical properties of the new estimator and its automatic higher-order improvement under Gaussian copulas. We present two practical methods of smoothing parameter selection. Extensive Monte Carlo simulations demonstrate the competence of the proposed estimator in terms of global and tail performance. Two real-world examples are provided. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 148-164
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1469999
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1469999
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:148-164
Template-Type: ReDIF-Article 1.0
Author-Name: Kurt Lavetti
Author-X-Name-First: Kurt
Author-X-Name-Last: Lavetti
Title: The Estimation of Compensating Wage Differentials: Lessons From the Deadliest Catch
Abstract:
I use longitudinal survey data from commercial fishing deckhands in the Alaskan Bering Sea to provide new insights on empirical methods commonly used to estimate compensating wage differentials and the value of statistical life (VSL). The unique setting exploits intertemporal variation in fatality rates and wages within worker-vessel pairs caused by a combination of weather patterns and policy changes, allowing identification of parameters and biases that it has only been possible to speculate about in more general settings. I show that estimation strategies common in the literature produce biased estimates in this setting, and decompose the bias components due to latent worker, establishment, and job-match heterogeneity. The estimates also remove the confounding effects of endogenous job mobility and dynamic labor market search, narrowing a conceptual gap between search-based hedonic wage theory and its empirical applications. I find that workers’ marginal aversion to fatal risk falls as risk levels rise, which suggests complementarities in the benefits of public safety policies. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 165-182
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1470000
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1470000
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:165-182
Template-Type: ReDIF-Article 1.0
Author-Name: Hugo Bodory
Author-X-Name-First: Hugo
Author-X-Name-Last: Bodory
Author-Name: Lorenzo Camponovo
Author-X-Name-First: Lorenzo
Author-X-Name-Last: Camponovo
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Author-Name: Michael Lechner
Author-X-Name-First: Michael
Author-X-Name-Last: Lechner
Title: The Finite Sample Performance of Inference Methods for Propensity Score Matching and Weighting Estimators
Abstract:
This article investigates the finite sample properties of a range of inference methods for propensity score-based matching and weighting estimators frequently applied to evaluate the average treatment effect on the treated. We analyze both asymptotic approximations and bootstrap methods for computing variances and confidence intervals in our simulation designs, which are based on German register data and U.S. survey data. We vary the design with respect to treatment selectivity, effect heterogeneity, share of treated, and sample size. The results suggest that, in general, theoretically justified bootstrap procedures (i.e., wild bootstrapping for pair matching and standard bootstrapping for “smoother” treatment effect estimators) dominate the asymptotic approximations in terms of coverage rates for both matching and weighting estimators. Most findings are robust across simulation designs and estimators.
Journal: Journal of Business & Economic Statistics
Pages: 183-200
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1476247
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1476247
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:183-200
Template-Type: ReDIF-Article 1.0
Author-Name: Natalia Nolde
Author-X-Name-First: Natalia
Author-X-Name-Last: Nolde
Author-Name: Jinyuan Zhang
Author-X-Name-First: Jinyuan
Author-X-Name-Last: Zhang
Title: Conditional Extremes in Asymmetric Financial Markets
Abstract:
The global financial crisis of 2007–2009 revealed the great extent to which systemic risk can jeopardize the stability of the entire financial system. An effective methodology to quantify systemic risk is at the heart of the process of identifying the so-called systemically important financial institutions for regulatory purposes as well as to investigate key drivers of systemic contagion. The article proposes a method for dynamic forecasting of CoVaR, a popular measure of systemic risk. As a first step, we develop a semi-parametric framework using asymptotic results in the spirit of extreme value theory (EVT) to model the conditional probability distribution of a bivariate random vector given that one of the components takes on a large value, taking into account important features of financial data such as asymmetry and heavy tails. In the second step, we embed the proposed EVT method into a dynamic framework via a bivariate GARCH process. An empirical analysis is conducted to demonstrate and compare the performance of the proposed methodology relative to a very flexible fully parametric alternative.
Journal: Journal of Business & Economic Statistics
Pages: 201-213
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1476248
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1476248
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:201-213
Template-Type: ReDIF-Article 1.0
Author-Name: Shujie Ma
Author-X-Name-First: Shujie
Author-X-Name-Last: Ma
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Chih-Ling Tsai
Author-X-Name-First: Chih-Ling
Author-X-Name-Last: Tsai
Title: Testing Alphas in Conditional Time-Varying Factor Models With High-Dimensional Assets
Abstract:
For conditional time-varying factor models with high-dimensional assets, this article proposes a high-dimensional alpha (HDA) test to assess whether there exist abnormal returns on securities (or portfolios) over the theoretical expected returns. To employ this test effectively, a constant coefficient test is also introduced. It examines the validity of constant alphas and factor loadings. Simulation studies and an empirical example are presented to illustrate the finite sample performance and the usefulness of the proposed tests. Using the HDA test, the empirical example demonstrates that the Fama–French three-factor model is better than the CAPM in explaining the mean-variance efficiency of both the Chinese and U.S. stock markets. Furthermore, our results suggest that the U.S. stock market is more efficient in terms of mean-variance efficiency than the Chinese stock market. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 214-227
Issue: 1
Volume: 38
Year: 2020
Month: 1
X-DOI: 10.1080/07350015.2018.1482758
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1482758
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:1:p:214-227
Template-Type: ReDIF-Article 1.0
Author-Name: Stelios Arvanitis
Author-X-Name-First: Stelios
Author-X-Name-Last: Arvanitis
Author-Name: Mark Hallam
Author-X-Name-First: Mark
Author-X-Name-Last: Hallam
Author-Name: Thierry Post
Author-X-Name-First: Thierry
Author-X-Name-Last: Post
Author-Name: Nikolas Topaloglou
Author-X-Name-First: Nikolas
Author-X-Name-Last: Topaloglou
Title: Stochastic Spanning
Abstract:
This study develops and implements methods for determining whether introducing new securities or relaxing investment constraints improves the investment opportunity set for all risk averse investors. We develop a test procedure for “stochastic spanning” for two nested portfolio sets based on subsampling and linear programming. The test is statistically consistent and asymptotically exact for a class of weakly dependent processes. A Monte Carlo simulation experiment shows good statistical size and power properties in finite samples of realistic dimensions. In an application to standard datasets of historical stock market returns, we accept market portfolio efficiency but reject two-fund separation, which suggests an important role for higher-order moment risk in portfolio theory and asset pricing. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 573-585
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1391099
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1391099
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:573-585
Template-Type: ReDIF-Article 1.0
Author-Name: Alexandre Poirier
Author-X-Name-First: Alexandre
Author-X-Name-Last: Poirier
Author-Name: Nicolas L. Ziebarth
Author-X-Name-First: Nicolas L.
Author-X-Name-Last: Ziebarth
Title: Estimation of Models With Multiple-Valued Explanatory Variables
Abstract:
We study estimation and inference when there are multiple values (“matches”) for the explanatory variables and only one of the matches is the correct one. This problem arises often when two datasets are linked together on the basis of information that does not uniquely identify regressor values. We offer a set of two intuitive conditions that ensure consistent inference using the average of the possible matches in a linear framework. The first condition is the exogeneity of the false match with respect to the regression error. The second condition is a notion of exchangeability between the true and false matches. Conditioning on the observed data, the probability that each match is correct is completely unrestricted. We perform a Monte Carlo study to investigate the estimator’s finite-sample performance relative to others proposed in the literature. Finally, we provide an empirical example revisiting a main area of application: the measurement of intergenerational elasticities in income. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 586-597
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1391694
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1391694
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:586-597
Template-Type: ReDIF-Article 1.0
Author-Name: Tadao Hoshino
Author-X-Name-First: Tadao
Author-X-Name-Last: Hoshino
Title: Two-Step Estimation of Incomplete Information Social Interaction Models With Sample Selection
Abstract:
This article considers linear social interaction models under incomplete information that allow for missing outcome data due to sample selection. For model estimation, assuming that each individual forms his/her belief about the other members’ outcomes based on rational expectations, we propose a two-step series nonlinear least squares estimator. Both the consistency and asymptotic normality of the estimator are established. As an empirical illustration, we apply the proposed model and method to National Longitudinal Study of Adolescent Health (Add Health) data to examine the impacts of friendship interactions on adolescents’ academic achievements. We provide empirical evidence that the interaction effects are important determinants of grade point average and that controlling for sample selection bias has certain impacts on the estimation results. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 598-612
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1394861
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1394861
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:598-612
Template-Type: ReDIF-Article 1.0
Author-Name: Yannick Hoga
Author-X-Name-First: Yannick
Author-X-Name-Last: Hoga
Title: Confidence Intervals for Conditional Tail Risk Measures in ARMA–GARCH Models
Abstract:
ARMA–GARCH models are widely used to model the conditional mean and conditional variance dynamics of returns on risky assets. Empirical results suggest heavy-tailed innovations with positive extreme value index for these models. Hence, one may use extreme value theory to estimate extreme quantiles of residuals. Using weak convergence of the weighted sequential tail empirical process of the residuals, we derive the limiting distribution of extreme conditional Value-at-Risk (CVaR) and conditional expected shortfall (CES) estimates for a wide range of extreme value index estimators. To construct confidence intervals, we propose to use self-normalization. This leads to improved coverage vis-à-vis the normal approximation, while delivering slightly wider confidence intervals. A data-driven choice of the number of upper order statistics in the estimation is suggested and shown to work well in simulations. An application to stock index returns documents the improvements of CVaR and CES forecasts.
Journal: Journal of Business & Economic Statistics
Pages: 613-624
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1401543
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1401543
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:613-624
Template-Type: ReDIF-Article 1.0
Author-Name: Zhongjun Qu
Author-X-Name-First: Zhongjun
Author-X-Name-Last: Qu
Author-Name: Jungmo Yoon
Author-X-Name-First: Jungmo
Author-X-Name-Last: Yoon
Title: Uniform Inference on Quantile Effects under Sharp Regression Discontinuity Designs
Abstract:
This study develops methods for conducting uniform inference on quantile treatment effects for sharp regression discontinuity designs. We develop a score test for the treatment significance hypothesis and Wald-type tests for the hypotheses related to treatment significance, homogeneity, and unambiguity. The bias from the nonparametric estimation is studied in detail. In particular, we show that under some conditions, the asymptotic distribution of the score test is unaffected by the bias, without under-smoothing. For situations where the conditions can be restrictive, we incorporate a bias correction into the Wald tests and account for the estimation uncertainty. We also provide a procedure for constructing uniform confidence bands for quantile treatment effects. As an empirical application, we use the proposed methods to study the effect of cash-on-hand on unemployment duration. The results reveal pronounced treatment heterogeneity and also emphasize the importance of considering the long-term unemployed.
Journal: Journal of Business & Economic Statistics
Pages: 625-647
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1407323
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1407323
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:625-647
Template-Type: ReDIF-Article 1.0
Author-Name: Virginia Lacal
Author-X-Name-First: Virginia
Author-X-Name-Last: Lacal
Author-Name: Dag Tjøstheim
Author-X-Name-First: Dag
Author-X-Name-Last: Tjøstheim
Title: Estimating and Testing Nonlinear Local Dependence Between Two Time Series
Abstract:
The most common measure of dependence between two time series is the cross-correlation function. This measure gives a complete characterization of dependence for two linear and jointly Gaussian time series, but it often fails for nonlinear and non-Gaussian time series models, such as the ARCH-type models used in finance. The cross-correlation function is a global measure of dependence. In this article, we apply to bivariate time series the nonlinear local measure of dependence called local Gaussian correlation. It generally works well also for nonlinear models, and it can distinguish between positive and negative local dependence. We construct confidence intervals for the local Gaussian correlation and develop a test based on this measure of dependence. Asymptotic properties are derived for the parameter estimates, for the test functional and for a block bootstrap procedure. For both simulated and financial index data, we construct confidence intervals and we compare the proposed test with one based on the ordinary correlation and with one based on the Brownian distance correlation. Financial indexes are examined over a long time period and their local joint behavior, including tail behavior, is analyzed prior to, during and after the financial crisis. Supplementary material for this article is available online.
Journal: Journal of Business & Economic Statistics
Pages: 648-660
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1407777
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1407777
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:648-660
Template-Type: ReDIF-Article 1.0
Author-Name: Deyuan Li
Author-X-Name-First: Deyuan
Author-X-Name-Last: Li
Author-Name: Huixia Judy Wang
Author-X-Name-First: Huixia Judy
Author-X-Name-Last: Wang
Title: Extreme Quantile Estimation for Autoregressive Models
Abstract:
A quantile autoregressive model is a useful extension of classical autoregressive models as it can capture the influences of conditioning variables on the location, scale, and shape of the response distribution. However, at the extreme tails, the standard quantile autoregression estimator is often unstable due to data sparsity. In this article, assuming quantile autoregressive models, we develop a new estimator for extreme conditional quantiles of time series data based on extreme value theory. We build the connection between the second-order conditions for the autoregression coefficients and for the conditional quantile functions, and establish the asymptotic properties of the proposed estimator. The finite sample performance of the proposed method is illustrated through a simulation study and the analysis of U.S. retail gasoline prices.
Journal: Journal of Business & Economic Statistics
Pages: 661-670
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1408469
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1408469
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:661-670
Template-Type: ReDIF-Article 1.0
Author-Name: Max Tabord-Meehan
Author-X-Name-First: Max
Author-X-Name-Last: Tabord-Meehan
Title: Inference With Dyadic Data: Asymptotic Behavior of the Dyadic-Robust t-Statistic
Abstract:
This article is concerned with inference in the linear model with dyadic data. Dyadic data are indexed by pairs of “units”; for example, trade data between pairs of countries. Because observations with a unit in common may be correlated, standard inference procedures may not perform as expected. We establish a range of conditions under which a t-statistic with the dyadic-robust variance estimator of Fafchamps and Gubert is asymptotically normal. Using our theoretical results as a guide, we perform a simulation exercise to study the validity of the normal approximation, as well as the performance of a novel finite-sample correction. We conclude with guidelines for applied researchers wishing to use the dyadic-robust estimator for inference.
Journal: Journal of Business & Economic Statistics
Pages: 671-680
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1409630
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1409630
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:671-680
Template-Type: ReDIF-Article 1.0
Author-Name: James Mitchell
Author-X-Name-First: James
Author-X-Name-Last: Mitchell
Author-Name: Donald Robertson
Author-X-Name-First: Donald
Author-X-Name-Last: Robertson
Author-Name: Stephen Wright
Author-X-Name-First: Stephen
Author-X-Name-Last: Wright
Title: R2 Bounds for Predictive Models: What Univariate Properties Tell us About Multivariate Predictability
Abstract:
A long-standing puzzle in macroeconomic forecasting has been that a wide variety of multivariate models have struggled to out-predict univariate models consistently. We seek an explanation for this puzzle in terms of population properties. We derive bounds for the predictive R2 of the true, but unknown, multivariate model from univariate ARMA parameters alone. These bounds can be quite tight, implying little forecasting gain even if we knew the true multivariate model. We illustrate using CPI inflation data. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 681-695
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1415909
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1415909
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:681-695
Template-Type: ReDIF-Article 1.0
Author-Name: Maciej Augustyniak
Author-X-Name-First: Maciej
Author-X-Name-Last: Augustyniak
Author-Name: Luc Bauwens
Author-X-Name-First: Luc
Author-X-Name-Last: Bauwens
Author-Name: Arnaud Dufays
Author-X-Name-First: Arnaud
Author-X-Name-Last: Dufays
Title: A New Approach to Volatility Modeling: The Factorial Hidden Markov Volatility Model
Abstract:
A new process—the factorial hidden Markov volatility (FHMV) model—is proposed to model financial returns or realized variances. Its dynamics are driven by a latent volatility process specified as a product of three components: a Markov chain controlling volatility persistence, an independent discrete process capable of generating jumps in the volatility, and a predictable (data-driven) process capturing the leverage effect. An economic interpretation is attached to each one of these components. Moreover, the Markov chain and jump components allow volatility to switch abruptly between thousands of states, and the transition matrix of the model is structured to generate a high degree of volatility persistence. An empirical study on six financial time series shows that the FHMV process compares favorably to state-of-the-art volatility models in terms of in-sample fit and out-of-sample forecasting performance over time horizons ranging from 1 to 100 days. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 696-709
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1415910
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1415910
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:696-709
Template-Type: ReDIF-Article 1.0
Author-Name: Eva Deuchert
Author-X-Name-First: Eva
Author-X-Name-Last: Deuchert
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Author-Name: Mark Schelker
Author-X-Name-First: Mark
Author-X-Name-Last: Schelker
Title: Direct and Indirect Effects Based on Difference-in-Differences With an Application to Political Preferences Following the Vietnam Draft Lottery
Abstract:
We propose a difference-in-differences approach for disentangling a total treatment effect within specific subpopulations into a direct effect and an indirect effect operating through a binary mediating variable. Random treatment assignment along with specific common trend and effect homogeneity assumptions identify the direct effects on the always and never takers, whose mediator is not affected by the treatment, as well as the direct and indirect effects on the compliers, whose mediator reacts to the treatment. In our empirical application, we analyze the impact of the Vietnam draft lottery on political preferences. The results suggest that a high draft risk due to the draft lottery outcome leads to an increase in mild preferences for the Republican Party, but has no effect on strong preferences for either party or on specific political attitudes. The increase in Republican support is mostly driven by the direct effect not operating through the mediator that is military service.
Journal: Journal of Business & Economic Statistics
Pages: 710-720
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1419139
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1419139
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:710-720
Template-Type: ReDIF-Article 1.0
Author-Name: Scott Cederburg
Author-X-Name-First: Scott
Author-X-Name-Last: Cederburg
Author-Name: Michael S. O’Doherty
Author-X-Name-First: Michael S.
Author-X-Name-Last: O’Doherty
Title: Understanding the Risk-Return Relation: The Aggregate Wealth Proxy Actually Matters
Abstract:
The ICAPM implies that the market’s conditional expected return is proportional to its conditional variance and that the reward-to-risk ratio equals the representative investor’s coefficient of relative risk aversion. Prior studies examine this relation using the stock market to proxy for aggregate wealth and find mixed results. We show, however, that stock-based tests suffer from low power and lead to biased estimates of the risk-return tradeoff when stocks are an imperfect market proxy. Tests designed to mitigate this bias by incorporating a more comprehensive measure of aggregate wealth produce large, positive estimates of the risk-aversion coefficient around seven to nine. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 721-735
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1419140
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1419140
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:721-735
Template-Type: ReDIF-Article 1.0
Author-Name: Markus Frölich
Author-X-Name-First: Markus
Author-X-Name-Last: Frölich
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Title: Including Covariates in the Regression Discontinuity Design
Abstract:
This article proposes a fully nonparametric kernel method to account for observed covariates in regression discontinuity designs (RDD), which may increase precision of treatment effect estimation. It is shown that conditioning on covariates reduces the asymptotic variance and allows estimating the treatment effect at the rate of one-dimensional nonparametric regression, irrespective of the dimension of the continuously distributed elements in the conditioning set. Furthermore, the proposed method may decrease bias and restore identification by controlling for discontinuities in the covariate distribution at the discontinuity threshold, provided that all relevant discontinuously distributed variables are controlled for. To illustrate the estimation approach and its properties, we provide a simulation study and an empirical application to an Austrian labor market reform. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 736-748
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1421544
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1421544
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:736-748
Template-Type: ReDIF-Article 1.0
Author-Name: Murillo Campello
Author-X-Name-First: Murillo
Author-X-Name-Last: Campello
Author-Name: Antonio F. Galvao
Author-X-Name-First: Antonio F.
Author-X-Name-Last: Galvao
Author-Name: Ted Juhl
Author-X-Name-First: Ted
Author-X-Name-Last: Juhl
Title: Testing for Slope Heterogeneity Bias in Panel Data Models
Abstract:
Standard econometric methods can overlook individual heterogeneity in empirical work, generating inconsistent parameter estimates in panel data models. We propose the use of methods that allow researchers to easily identify, quantify, and address estimation issues arising from individual slope heterogeneity. We first characterize the bias in the standard fixed effects estimator when the true econometric model allows for heterogeneous slope coefficients. We then introduce a new test to check whether the fixed effects estimation is subject to heterogeneity bias. The procedure tests the population moment conditions required for fixed effects to consistently estimate the relevant parameters in the model. We establish the limiting distribution of the test and show that it is very simple to implement in practice. Examining firm investment models to showcase our approach, we show that heterogeneity bias-robust methods identify cash flow as a more important driver of investment than previously reported. Our study demonstrates analytically, via simulations, and empirically the importance of carefully accounting for individual specific slope heterogeneity in drawing conclusions about economic behavior.
Journal: Journal of Business & Economic Statistics
Pages: 749-760
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2017.1421545
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1421545
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:749-760
Template-Type: ReDIF-Article 1.0
Author-Name: Ximing Wu
Author-X-Name-First: Ximing
Author-X-Name-Last: Wu
Title: Robust Likelihood Cross-Validation for Kernel Density Estimation
Abstract:
Likelihood cross-validation for kernel density estimation is known to be sensitive to extreme observations and heavy-tailed distributions. We propose a robust likelihood-based cross-validation method to select bandwidths in multivariate density estimation. We derive this bandwidth selector within the framework of robust maximum likelihood estimation. This method establishes a smooth transition from likelihood cross-validation for nonextreme observations to least squares cross-validation for extreme observations, thereby combining the efficiency of likelihood cross-validation and the robustness of least squares cross-validation. We also suggest a simple rule to select the transition threshold. We demonstrate the finite sample performance and practical usefulness of the proposed method via Monte Carlo simulations and a real data application on Chinese air pollution.
Journal: Journal of Business & Economic Statistics
Pages: 761-770
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2018.1424633
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1424633
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:761-770
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Editorial Collaborators
Journal: Journal of Business & Economic Statistics
Pages: 771-774
Issue: 4
Volume: 37
Year: 2019
Month: 10
X-DOI: 10.1080/07350015.2019.1670479
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1670479
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:4:p:771-774
Template-Type: ReDIF-Article 1.0
Author-Name: Kajal Lahiri
Author-X-Name-First: Kajal
Author-X-Name-Last: Lahiri
Author-Name: Liu Yang
Author-X-Name-First: Liu
Author-X-Name-Last: Yang
Title: Confidence Bands for ROC Curves With Serially Dependent Data
Abstract:
We propose serial correlation-robust asymptotic confidence bands for the receiver operating characteristic (ROC) curve and its functional, viz., the area under ROC curve (AUC), estimated by quasi-maximum likelihood in the binormal model. Our simulation experiments confirm that this new method performs fairly well in finite samples, and confers an additional measure of robustness to nonnormality. The conventional procedure is found to be markedly undersized in terms of yielding empirical coverage probabilities lower than the nominal level, especially when the serial correlation is strong. An example from macroeconomic forecasting demonstrates the importance of accounting for serial correlation when the probability forecasts for real GDP declines are evaluated using ROC. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 115-130
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1073593
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1073593
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:115-130
Template-Type: ReDIF-Article 1.0
Author-Name: Mehmet Caner
Author-X-Name-First: Mehmet
Author-X-Name-Last: Caner
Author-Name: Xu Han
Author-X-Name-First: Xu
Author-X-Name-Last: Han
Author-Name: Yoonseok Lee
Author-X-Name-First: Yoonseok
Author-X-Name-Last: Lee
Title: Adaptive Elastic Net GMM Estimation With Many Invalid Moment Conditions: Simultaneous Model and Moment Selection
Abstract:
This article develops the adaptive elastic net generalized method of moments (GMM) estimator in large-dimensional models with potentially (locally) invalid moment conditions, where both the number of structural parameters and the number of moment conditions may increase with the sample size. The basic idea is to conduct the standard GMM estimation combined with two penalty terms: the adaptively weighted lasso shrinkage and the quadratic regularization. It is a one-step procedure of valid moment condition selection, nonzero structural parameter selection (i.e., model selection), and consistent estimation of the nonzero parameters. The procedure achieves the standard GMM efficiency bound as if we know the valid moment conditions ex ante, for which the quadratic regularization is important. We also study the tuning parameter choice, with which we show that selection consistency still holds without assuming Gaussianity. We apply the new estimation procedure to dynamic panel data models, where both the time and cross-section dimensions are large. The new estimator is robust to possible serial correlations in the regression error terms.
Journal: Journal of Business & Economic Statistics
Pages: 24-46
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1129344
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1129344
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:24-46
Template-Type: ReDIF-Article 1.0
Author-Name: Lancelot F. James
Author-X-Name-First: Lancelot F.
Author-X-Name-Last: James
Author-Name: Gernot Müller
Author-X-Name-First: Gernot
Author-X-Name-Last: Müller
Author-Name: Zhiyuan Zhang
Author-X-Name-First: Zhiyuan
Author-X-Name-Last: Zhang
Title: Stochastic Volatility Models Based on OU-Gamma Time Change: Theory and Estimation
Abstract:
We consider stochastic volatility models that are defined by an Ornstein–Uhlenbeck (OU)-Gamma time change. These models are most suitable for modeling financial time series and follow the general framework of the popular non-Gaussian OU models of Barndorff-Nielsen and Shephard. One current problem of these otherwise attractive nontrivial models is, in general, the unavailability of a tractable likelihood-based statistical analysis for the returns of financial assets, which requires the ability to sample from a nontrivial joint distribution. We show that an OU process driven by an infinite activity Gamma process, which is an OU-Gamma process, exhibits unique features that allow one to explicitly describe and exactly sample from relevant joint distributions. This is a consequence of the OU structure and the calculus of Gamma and Dirichlet processes. We develop a particle marginal Metropolis–Hastings algorithm for this type of continuous-time stochastic volatility model and check its performance using simulated data. For illustration, we finally fit the model to S&P500 index data.
Journal: Journal of Business & Economic Statistics
Pages: 75-87
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1133427
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1133427
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:75-87
Template-Type: ReDIF-Article 1.0
Author-Name: Roberto Casarin
Author-X-Name-First: Roberto
Author-X-Name-Last: Casarin
Author-Name: Domenico Sartore
Author-X-Name-First: Domenico
Author-X-Name-Last: Sartore
Author-Name: Marco Tronzano
Author-X-Name-First: Marco
Author-X-Name-Last: Tronzano
Title: A Bayesian Markov-Switching Correlation Model for Contagion Analysis on Exchange Rate Markets
Abstract:
This article develops a new Markov-switching vector autoregressive (VAR) model with stochastic correlation for contagion analysis on financial markets. The correlation and the log-volatility dynamics are driven by two independent Markov chains, thus allowing for different effects such as volatility spill-overs and correlation shifts with various degrees of intensity. We outline a suitable Bayesian inference procedure based on Markov chain Monte Carlo algorithms. We then apply the model to some major and Asian-Pacific cross rates against the U.S. dollar and find strong evidence supporting the existence of contagion effects and correlation drops during crises, closely in line with the stylized facts outlined in the contagion literature. A comparison of this model with its closest competitors, such as a time-varying parameter VAR, reveals that our model has a better predictive ability. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 101-114
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1137757
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1137757
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:101-114
Template-Type: ReDIF-Article 1.0
Author-Name: Francesco Andreoli
Author-X-Name-First: Francesco
Author-X-Name-Last: Andreoli
Title: Robust Inference for Inverse Stochastic Dominance
Abstract:
The notion of inverse stochastic dominance is gaining increasing support in risk, inequality, and welfare analysis as a relevant criterion for ranking distributions, which is an alternative to the standard stochastic dominance approach. Its implementation rests on comparisons of two distributions’ quantile functions, or of their multiple partial integrals, at fixed population proportions. This article develops a novel statistical inference model for inverse stochastic dominance that is based on the influence function approach. The proposed method allows model-free evaluations that are little affected by contamination in the data. Asymptotic normality of the estimators allows us to derive tests for the restrictions implied by various forms of inverse stochastic dominance. Monte Carlo experiments and an application promote the qualities of the influence function estimator when compared with alternative dominance criteria.
Journal: Journal of Business & Economic Statistics
Pages: 146-159
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1137758
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1137758
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:146-159
Template-Type: ReDIF-Article 1.0
Author-Name: Artūras Juodis
Author-X-Name-First: Artūras
Author-X-Name-Last: Juodis
Title: Pseudo Panel Data Models With Cohort Interactive Effects
Abstract:
When genuine panel data samples are not available, repeated cross-sectional surveys can be used to form so-called pseudo panels. In this article, we investigate the properties of linear pseudo panel data estimators with a fixed number of cohorts and time observations. We extend the standard linear pseudo panel data setup to models with factor residuals by adapting the quasi-differencing approach developed for genuine panels. In a Monte Carlo study, we find that the proposed procedure has good finite sample properties in situations with endogeneity, cohort interactive effects, and near nonidentification. Finally, as an illustration, the proposed method is applied to data from Ecuador to study labor supply elasticity. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 47-61
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1137759
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1137759
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:47-61
Template-Type: ReDIF-Article 1.0
Author-Name: Knut Are Aastveit
Author-X-Name-First: Knut Are
Author-X-Name-Last: Aastveit
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Author-Name: Herman K. van Dijk
Author-X-Name-First: Herman K.
Author-X-Name-Last: van Dijk
Title: Combined Density Nowcasting in an Uncertain Economic Environment
Abstract:
We introduce a combined density nowcasting (CDN) approach to dynamic factor models (DFM) that in a coherent way accounts for time-varying uncertainty of several model and data features to provide more accurate and complete density nowcasts. The combination weights are latent random variables that depend on past nowcasting performance and other learning mechanisms. The combined density scheme is incorporated in a Bayesian sequential Monte Carlo method which rebalances the set of nowcasted densities in each period using updated information on the time-varying weights. Experiments with simulated data show that CDN works particularly well in a situation of early data releases with relatively large data uncertainty and model incompleteness. Empirical results, based on U.S. real-time data of 120 monthly variables, indicate that CDN gives more accurate density nowcasts of U.S. GDP growth than a model selection strategy and other combination strategies throughout the quarter, with relatively large gains for the first two months of the quarter. CDN also provides informative signals on model incompleteness during recent recessions. Focusing on the tails, CDN delivers probabilities of negative growth that provide good signals for calling recessions and ending economic slumps in real time.
Journal: Journal of Business & Economic Statistics
Pages: 131-145
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1137760
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1137760
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:131-145
Template-Type: ReDIF-Article 1.0
Author-Name: Qiurong Cui
Author-X-Name-First: Qiurong
Author-X-Name-Last: Cui
Author-Name: Zhengjun Zhang
Author-X-Name-First: Zhengjun
Author-X-Name-Last: Zhang
Title: Max-Linear Competing Factor Models
Abstract:
Models incorporating “latent” variables have been commonplace in the financial, social, and behavioral sciences. The factor model, the most popular latent variable model, explains the continuous observed variables through a smaller set of latent variables (factors) via a linear relationship. However, complex data often simultaneously display asymmetric dependence, asymptotic dependence, and positive (negative) dependence between random variables, features that linearity, Gaussian distributions, and many other extant distributions are not capable of modeling. This article proposes a nonlinear factor model that can model the above-mentioned dependence features but still possesses a simple factor structure. The random variables, marginally distributed as unit Fréchet distributions, are decomposed into max-linear functions of underlying Fréchet idiosyncratic risks, transformed from a Gaussian copula, and independent shared external Fréchet risks. By allowing the random variables to share underlying (latent) pervasive risks with random impact parameters, various dependence structures are created. This offers a promising new technique for generating families of distributions with simple interpretations. We study the multivariate extreme value properties of the proposed model and investigate maximum composite likelihood methods for estimating the impact parameters of the latent risks. The estimates are shown to be consistent. The estimation schemes are illustrated on several sets of simulated data, where comparisons of performance are addressed. We employ a bootstrap method to obtain standard errors in real data analysis. An application to financial data reveals inherent dependencies that previous work has not disclosed and demonstrates the model’s interpretability on real data. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 62-74
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2015.1137761
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1137761
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:62-74
Template-Type: ReDIF-Article 1.0
Author-Name: Xiangjin B. Chen
Author-X-Name-First: Xiangjin B.
Author-X-Name-Last: Chen
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Degui Li
Author-X-Name-First: Degui
Author-X-Name-Last: Li
Author-Name: Param Silvapulle
Author-X-Name-First: Param
Author-X-Name-Last: Silvapulle
Title: Nonparametric Estimation and Forecasting for Time-Varying Coefficient Realized Volatility Models
Abstract:
This article introduces a new specification for the heterogeneous autoregressive (HAR) model for the realized volatility of S&P 500 index returns. In this modeling framework, the coefficients of the HAR are allowed to be time-varying with unspecified functional forms. The local linear method with the cross-validation (CV) bandwidth selection is applied to estimate the time-varying coefficient HAR (TVC-HAR) model, and a bootstrap method is used to construct the point-wise confidence bands for the coefficient functions. Furthermore, the asymptotic distribution of the proposed local linear estimators of the TVC-HAR model is established under some mild conditions. The results of the simulation study show that the local linear estimator with CV bandwidth selection has favorable finite sample properties. The outcomes of the conditional predictive ability test indicate that the proposed nonparametric TVC-HAR model outperforms the parametric HAR and its extensions to HAR with jumps and/or GARCH in terms of multi-step out-of-sample forecasting, in particular in the post-2003 crisis and 2007 global financial crisis (GFC) periods, during which financial market volatilities were unduly high.
Journal: Journal of Business & Economic Statistics
Pages: 88-100
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2016.1138118
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1138118
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:88-100
Template-Type: ReDIF-Article 1.0
Author-Name: Tadao Hoshino
Author-X-Name-First: Tadao
Author-X-Name-Last: Hoshino
Title: Semiparametric Spatial Autoregressive Models With Endogenous Regressors: With an Application to Crime Data
Abstract:
This study considers semiparametric spatial autoregressive models that allow for endogenous regressors, as well as the heterogeneous effects of these regressors across spatial units. For the model estimation, we propose a semiparametric series generalized method of moments estimator. We establish that the proposed estimator is both consistent and asymptotically normal. As an empirical illustration, we apply the proposed model and method to Tokyo crime data to estimate how the existence of a neighborhood police substation (NPS) affects the household burglary rate. The results indicate that the presence of an NPS helps reduce household burglaries, and that the effects of some variables are heterogeneous with respect to residential distribution patterns. Furthermore, we show that using a model that does not adjust for the endogeneity of NPS does not allow us to observe the significant relationship between NPS and the household burglary rate. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 160-172
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2016.1146145
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1146145
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:160-172
Template-Type: ReDIF-Article 1.0
Author-Name: Yao Luo
Author-X-Name-First: Yao
Author-X-Name-Last: Luo
Author-Name: Yuanyuan Wan
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Wan
Title: Integrated-Quantile-Based Estimation for First-Price Auction Models
Abstract:
This article considers nonparametric estimation of first-price auction models under the monotonicity restriction on the bidding strategy. Based on an integrated-quantile representation of the first-order condition, we propose a tuning-parameter-free estimator for the valuation quantile function. We establish its cube-root-n consistency and asymptotic distribution under weaker smoothness assumptions than those typically assumed in the empirical literature. If the latter are true, we also provide a trimming-free smoothed estimator and show that it is asymptotically normal and achieves the optimal rate of Guerre, Perrigne, and Vuong (2000). We illustrate our method using Monte Carlo simulations and an empirical study of the California highway procurement auctions. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 173-180
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2016.1166119
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1166119
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:173-180
Template-Type: ReDIF-Article 1.0
Author-Name: Hyungtaik Ahn
Author-X-Name-First: Hyungtaik
Author-X-Name-Last: Ahn
Author-Name: Hidehiko Ichimura
Author-X-Name-First: Hidehiko
Author-X-Name-Last: Ichimura
Author-Name: James L. Powell
Author-X-Name-First: James L.
Author-X-Name-Last: Powell
Author-Name: Paul A. Ruud
Author-X-Name-First: Paul A.
Author-X-Name-Last: Ruud
Title: Simple Estimators for Invertible Index Models
Abstract:
This article considers estimation of the unknown linear index coefficients of a model in which a number of nonparametrically identified reduced form parameters are assumed to be smooth and invertible functions of one or more linear indices. The results extend the previous literature by allowing the number of reduced form parameters to exceed the number of indices (i.e., the indices are “overdetermined” by the reduced form parameters). The estimator of the unknown index coefficients (up to scale) is the eigenvector of a matrix (defined in terms of a first-step nonparametric estimator of the reduced form parameters) corresponding to its smallest (in magnitude) eigenvalue. Under suitable conditions, the proposed estimator is shown to be root-n-consistent and asymptotically normal, and under additional restrictions an efficient choice of a “weight matrix” is derived in the overdetermined case.
Journal: Journal of Business & Economic Statistics
Pages: 1-10
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2017.1379405
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1379405
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:1-10
Template-Type: ReDIF-Article 1.0
Author-Name: Andres Aradillas-Lopez
Author-X-Name-First: Andres
Author-X-Name-Last: Aradillas-Lopez
Title: A Comment on “Simple Estimators for Invertible Index Models”
Journal: Journal of Business & Economic Statistics
Pages: 18-21
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2017.1379406
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1379406
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:18-21
Template-Type: ReDIF-Article 1.0
Author-Name: S. Khan
Author-X-Name-First: S.
Author-X-Name-Last: Khan
Author-Name: E. Tamer
Author-X-Name-First: E.
Author-X-Name-Last: Tamer
Title: Discussion of “Simple Estimators for Invertible Index Models” by H. Ahn, H. Ichimura, J. Powell, and P. Ruud
Abstract:
This is an interesting article that considers the question of inference on unknown linear index coefficients in a general class of models where reduced form parameters are invertible functions of one or more linear indices. Interpretable sufficient conditions for the invertibility condition, such as monotonicity and/or smoothness, are provided. The results generalize some work in the previous literature by allowing the number of reduced form parameters to exceed the number of indices. The identification and estimation expand on the approach taken in previous work by the authors. Examples include the extension of Ahn, Powell, and Ichimura (2004) from monotone single-index regression models to a multi-index setting, and the extensions by Blundell and Powell (2004) and Powell and Ruud (2008) to models with endogenous regressors and multinomial response, respectively. A key property of the inference approach taken is that the estimator of the unknown index coefficients (up to scale) is computationally simple to obtain (relative to other estimators in the literature) in that it is closed form. Specifically, unifying an approach for all models considered in this article, the authors propose an estimator, which is the eigenvector of a matrix (defined in terms of a preliminary estimator of the reduced form parameters) corresponding to its smallest eigenvalue. Under suitable conditions, the proposed estimator is shown to be root-n-consistent and asymptotically normal.
Journal: Journal of Business & Economic Statistics
Pages: 11-15
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2017.1392312
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1392312
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:11-15
Template-Type: ReDIF-Article 1.0
Author-Name: Hyungtaik Ahn
Author-X-Name-First: Hyungtaik
Author-X-Name-Last: Ahn
Author-Name: Hidehiko Ichimura
Author-X-Name-First: Hidehiko
Author-X-Name-Last: Ichimura
Author-Name: James L. Powell
Author-X-Name-First: James L.
Author-X-Name-Last: Powell
Author-Name: Paul A. Ruud
Author-X-Name-First: Paul A.
Author-X-Name-Last: Ruud
Title: Rejoinder for “Simple Estimators for Invertible Index Models”
Journal: Journal of Business & Economic Statistics
Pages: 22-23
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2017.1392313
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1392313
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:22-23
Template-Type: ReDIF-Article 1.0
Author-Name: Jack Porter
Author-X-Name-First: Jack
Author-X-Name-Last: Porter
Title: Comment on “Simple Estimators for Invertible Index Models”
Journal: Journal of Business & Economic Statistics
Pages: 16-17
Issue: 1
Volume: 36
Year: 2018
Month: 1
X-DOI: 10.1080/07350015.2017.1396989
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1396989
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:36:y:2018:i:1:p:16-17
Template-Type: ReDIF-Article 1.0
Author-Name: Drew D. Creal
Author-X-Name-First: Drew D.
Author-X-Name-Last: Creal
Title: A Class of Non-Gaussian State Space Models With Exact Likelihood Inference
Abstract:
The likelihood function of a general nonlinear, non-Gaussian state space model is a high-dimensional integral with no closed-form solution. In this article, I show how to calculate the likelihood function exactly for a large class of non-Gaussian state space models that include stochastic intensity, stochastic volatility, and stochastic duration models among others. The state variables in this class follow a nonnegative stochastic process that is popular in econometrics for modeling volatility and intensities. In addition to calculating the likelihood, I also show how to perform filtering and smoothing to estimate the latent variables in the model. The procedures in this article can be used for either Bayesian or frequentist estimation of the model’s unknown parameters as well as the latent state variables. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 585-597
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1092977
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1092977
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:585-597
Template-Type: ReDIF-Article 1.0
Author-Name: Adam McCloskey
Author-X-Name-First: Adam
Author-X-Name-Last: McCloskey
Author-Name: Jonathan B. Hill
Author-X-Name-First: Jonathan B.
Author-X-Name-Last: Hill
Title: Parameter Estimation Robust to Low-Frequency Contamination
Abstract:
We provide methods to robustly estimate the parameters of stationary ergodic short-memory time series models in the potential presence of additive low-frequency contamination. The types of contamination covered include level shifts (changes in mean) and monotone or smooth time trends, both of which have been shown to bias parameter estimates toward regions of persistence in a variety of contexts. The estimators presented here minimize trimmed frequency domain quasi-maximum likelihood (FDQML) objective functions without requiring specification of the low-frequency contaminating component. When proper sample size-dependent trimmings are used, the FDQML estimators are consistent and asymptotically normal, asymptotically eliminating the presence of any spurious persistence. These asymptotic results also hold in the absence of additive low-frequency contamination, enabling the practitioner to robustly estimate model parameters without prior knowledge of whether contamination is present. Popular time series models that fit into the framework of this article include autoregressive moving average (ARMA), stochastic volatility, generalized autoregressive conditional heteroscedasticity (GARCH), and autoregressive conditional heteroscedasticity (ARCH) models. We explore the finite sample properties of the trimmed FDQML estimators of the parameters of some of these models, providing practical guidance on trimming choice. Empirical estimation results suggest that a large portion of the apparent persistence in certain volatility time series may indeed be spurious. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 598-610
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1093948
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1093948
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:598-610
Template-Type: ReDIF-Article 1.0
Author-Name: Philip L. H. Yu
Author-X-Name-First: Philip L. H.
Author-X-Name-Last: Yu
Author-Name: W. K. Li
Author-X-Name-First: W. K.
Author-X-Name-Last: Li
Author-Name: F. C. Ng
Author-X-Name-First: F. C.
Author-X-Name-Last: Ng
Title: The Generalized Conditional Autoregressive Wishart Model for Multivariate Realized Volatility
Abstract:
It is well known that in finance variances and covariances of asset returns move together over time. Recently, much interest has been aroused by an approach involving the use of the realized covariance (RCOV) matrix constructed from high-frequency returns as the ex-post realization of the covariance matrix of low-frequency returns. For the analysis of dynamics of RCOV matrices, we propose the generalized conditional autoregressive Wishart (GCAW) model. Both the noncentrality matrix and scale matrix of the Wishart distribution are driven by the lagged values of RCOV matrices, and represent two different sources of dynamics, respectively. The GCAW is a generalization of the existing models, and accounts for symmetry and positive definiteness of RCOV matrices without imposing any parametric restriction. Some important properties such as conditional moments, unconditional moments, and stationarity are discussed. Empirical examples including sequences of daily RCOV matrices from the New York Stock Exchange illustrate that our model outperforms the existing models in terms of model fitting and forecasting.
Journal: Journal of Business & Economic Statistics
Pages: 513-527
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1096788
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1096788
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:513-527
Template-Type: ReDIF-Article 1.0
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Xi Qu
Author-X-Name-First: Xi
Author-X-Name-Last: Qu
Title: Specification Test for Spatial Autoregressive Models
Abstract:
This article considers a simple test for the correct specification of linear spatial autoregressive models, assuming that the choice of the weight matrix Wn is correct. We derive the limiting distributions of the test under the null hypothesis of correct specification and under a sequence of local alternatives. We show that the test is free of nuisance parameters asymptotically under the null and prove the consistency of our test. To improve the finite sample performance of our test, we also propose a residual-based wild bootstrap and justify its asymptotic validity. We conduct a small set of Monte Carlo simulations to investigate the finite sample properties of our tests. Finally, we apply the test to two empirical datasets: the vote cast and the economic growth rate. We reject the linear spatial autoregressive model in the vote cast example but fail to reject it in the economic growth rate example. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 572-584
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1102734
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1102734
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:572-584
Template-Type: ReDIF-Article 1.0
Author-Name: Tucker McElroy
Author-X-Name-First: Tucker
Author-X-Name-Last: McElroy
Title: Multivariate Seasonal Adjustment, Economic Identities, and Seasonal Taxonomy
Abstract:
This article extends the methodology for multivariate seasonal adjustment by exploring the statistical modeling of seasonality jointly across multiple time series, using latent dynamic factor models fitted using maximum likelihood estimation. Signal extraction methods for the series then allow us to calculate a model-based seasonal adjustment. We emphasize several facets of our analysis: (i) we quantify the efficiency gain in multivariate signal extraction versus univariate approaches; (ii) we address the problem of the preservation of economic identities; (iii) we describe a foray into seasonal taxonomy via the device of seasonal co-integration rank. These contributions are developed through two empirical studies of aggregate U.S. retail trade series and U.S. regional housing starts. Our analysis identifies different seasonal subcomponents that are able to capture the transition from prerecession to postrecession seasonal patterns. We also address the topic of indirect seasonal adjustment by analyzing the regional aggregate series. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 611-625
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1123159
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1123159
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:611-625
Template-Type: ReDIF-Article 1.0
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Author-Name: Wai Keung Li
Author-X-Name-First: Wai Keung
Author-X-Name-Last: Li
Author-Name: Philip L. H. Yu
Author-X-Name-First: Philip L. H.
Author-X-Name-Last: Yu
Title: Buffered Autoregressive Models With Conditional Heteroscedasticity: An Application to Exchange Rates
Abstract:
This article introduces a new model called the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, as an extension of the BAR model in Li et al. (2015), can capture the buffering phenomena of time series in both the conditional mean and variance. Thus, it provides us with a new way to study the nonlinearity of time series. An application to several exchange rates highlights the importance of the BAR-GARCH model compared with the existing AR-GARCH and threshold AR-GARCH models.
Journal: Journal of Business & Economic Statistics
Pages: 528-542
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1123634
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1123634
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:528-542
Template-Type: ReDIF-Article 1.0
Author-Name: Bo E. Honoré
Author-X-Name-First: Bo E.
Author-X-Name-Last: Honoré
Author-Name: Michaela Kesina
Author-X-Name-First: Michaela
Author-X-Name-Last: Kesina
Title: Estimation of Some Nonlinear Panel Data Models With Both Time-Varying and Time-Invariant Explanatory Variables
Abstract:
The so-called “fixed effects” approach to the estimation of panel data models suffers from the limitation that it is not possible to estimate the coefficients on explanatory variables that are time-invariant. This is in contrast to a “random effects” approach, which achieves this by making much stronger assumptions on the relationship between the explanatory variables and the individual-specific effect. In a linear model, it is possible to obtain the best of both worlds by making random effects-type assumptions on the time-invariant explanatory variables while maintaining the flexibility of a fixed effects approach when it comes to the time-varying covariates. This article attempts to do the same for some popular nonlinear models.
Journal: Journal of Business & Economic Statistics
Pages: 543-558
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1123635
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1123635
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:543-558
Template-Type: ReDIF-Article 1.0
Author-Name: Christophe Hurlin
Author-X-Name-First: Christophe
Author-X-Name-Last: Hurlin
Author-Name: Sébastien Laurent
Author-X-Name-First: Sébastien
Author-X-Name-Last: Laurent
Author-Name: Rogier Quaedvlieg
Author-X-Name-First: Rogier
Author-X-Name-Last: Quaedvlieg
Author-Name: Stephan Smeekes
Author-X-Name-First: Stephan
Author-X-Name-Last: Smeekes
Title: Risk Measure Inference
Abstract:
We propose a bootstrap-based test of the null hypothesis of equality of two firms’ conditional risk measures (RMs) at a single point in time. The test can be applied to a wide class of conditional risk measures issued from parametric or semiparametric models. Our iterative testing procedure produces a grouped ranking of the RMs, which has direct application for systemic risk analysis. Firms within a group are statistically indistinguishable from each other, but significantly more risky than the firms belonging to lower ranked groups. A Monte Carlo simulation demonstrates that our test has good size and power properties. We apply the procedure to a sample of 94 U.S. financial institutions using ΔCoVaR, MES, and %SRISK. We find that for some periods and RMs, we cannot statistically distinguish the 40 most risky firms due to estimation uncertainty.
Journal: Journal of Business & Economic Statistics
Pages: 499-512
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1127815
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1127815
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:499-512
Template-Type: ReDIF-Article 1.0
Author-Name: Songnian Chen
Author-X-Name-First: Songnian
Author-X-Name-Last: Chen
Author-Name: Jichun Si
Author-X-Name-First: Jichun
Author-X-Name-Last: Si
Author-Name: Hanghui Zhang
Author-X-Name-First: Hanghui
Author-X-Name-Last: Zhang
Author-Name: Yahong Zhou
Author-X-Name-First: Yahong
Author-X-Name-Last: Zhou
Title: Root-n-Consistent Estimation of a Panel Data Binary Response Model With Unknown Correlated Random Effects
Abstract:
In this article, we consider the estimation of a panel data binary response model with a weak restriction imposed on the individual specific effects. Our estimator is $\sqrt{n}$-consistent and asymptotically normal under reasonable regularity conditions. Furthermore, we allow the error terms to be heteroscedastic over time. The proposed estimator has a closed form expression and thus is very easy to compute. Simulations and the empirical illustration demonstrate the usefulness of our proposed estimator.
Journal: Journal of Business & Economic Statistics
Pages: 559-571
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2015.1130635
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1130635
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:559-571
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Editorial Collaborators
Journal: Journal of Business & Economic Statistics
Pages: 642-645
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2017.1357925
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1357925
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:642-645
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Editorial Board EOV
Journal: Journal of Business & Economic Statistics
Pages: ebi-ebi
Issue: 4
Volume: 35
Year: 2017
Month: 10
X-DOI: 10.1080/07350015.2017.1357927
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1357927
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:4:p:ebi-ebi
Template-Type: ReDIF-Article 1.0
Author-Name: Pierre Hoonhout
Author-X-Name-First: Pierre
Author-X-Name-Last: Hoonhout
Author-Name: Geert Ridder
Author-X-Name-First: Geert
Author-X-Name-Last: Ridder
Title: Nonignorable Attrition in Multi-Period Panels With Refreshment Samples
Abstract:
In panel surveys, some observation units drop out before the end of the observation period. This panel attrition should not be ignored if it is related to the variables of interest. Hirano, Imbens, Ridder, and Rubin proposed the additively nonignorable (AN) attrition model to correct for the potential selectivity of the attrition in panels with two periods. If a refreshment sample is available in the second period, their model nonparametrically just-identifies the population distribution and the observation probability. We propose the sequential additively nonignorable attrition model that just-identifies the population distribution and the sequence of observation hazards for panels with more than two periods.
Journal: Journal of Business & Economic Statistics
Pages: 377-390
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1345744
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1345744
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:377-390
Template-Type: ReDIF-Article 1.0
Author-Name: Philip Liu
Author-X-Name-First: Philip
Author-X-Name-Last: Liu
Author-Name: Konstantinos Theodoridis
Author-X-Name-First: Konstantinos
Author-X-Name-Last: Theodoridis
Author-Name: Haroon Mumtaz
Author-X-Name-First: Haroon
Author-X-Name-Last: Mumtaz
Author-Name: Francesco Zanetti
Author-X-Name-First: Francesco
Author-X-Name-Last: Zanetti
Title: Changing Macroeconomic Dynamics at the Zero Lower Bound
Abstract:
This article develops a change-point VAR model that isolates four major macroeconomic regimes in the US since the 1960s. The model identifies shocks to demand, supply, monetary policy, and the yield spread using restrictions from a general equilibrium model. The analysis discloses important changes to the statistical properties of key macroeconomic variables and their responses to the identified shocks. During the crisis period, spread shocks became more important for movements in unemployment and inflation. A counterfactual exercise evaluates the importance of the lower bond-yield spread during the crises and suggests that the Fed’s large-scale asset purchases helped lower the unemployment rate by about 0.6 percentage points, while boosting inflation by about 1 percentage point.
Journal: Journal of Business & Economic Statistics
Pages: 391-404
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1350186
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1350186
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:391-404
Template-Type: ReDIF-Article 1.0
Author-Name: John M. Abowd
Author-X-Name-First: John M.
Author-X-Name-Last: Abowd
Author-Name: Kevin L. McKinney
Author-X-Name-First: Kevin L.
Author-X-Name-Last: McKinney
Author-Name: Ian M. Schmutte
Author-X-Name-First: Ian M.
Author-X-Name-Last: Schmutte
Title: Modeling Endogenous Mobility in Earnings Determination
Abstract:
We evaluate the bias from endogenous job mobility in fixed-effects estimates of worker- and firm-specific earnings heterogeneity using longitudinally linked employer–employee data from the LEHD infrastructure file system of the U.S. Census Bureau. First, we propose two new residual diagnostic tests of the assumption that mobility is exogenous to unmodeled determinants of earnings. Both tests reject exogenous mobility. We relax exogenous mobility by modeling the matched data as an evolving bipartite graph using a Bayesian latent-type framework. Our results suggest that allowing endogenous mobility increases the variation in earnings explained by individual heterogeneity and reduces the proportion due to employer and match effects. To assess external validity, we match our estimates of the wage components to out-of-sample estimates of revenue per worker. The mobility-bias-corrected estimates attribute much more of the variation in revenue per worker to variation in match quality and worker quality than the uncorrected estimates. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 405-418
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1356727
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1356727
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:405-418
Template-Type: ReDIF-Article 1.0
Author-Name: Markus Bibinger
Author-X-Name-First: Markus
Author-X-Name-Last: Bibinger
Author-Name: Nikolaus Hautsch
Author-X-Name-First: Nikolaus
Author-X-Name-Last: Hautsch
Author-Name: Peter Malec
Author-X-Name-First: Peter
Author-X-Name-Last: Malec
Author-Name: Markus Reiss
Author-X-Name-First: Markus
Author-X-Name-Last: Reiss
Title: Estimating the Spot Covariation of Asset Prices—Statistical Theory and Empirical Evidence
Abstract:
We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semimartingale log asset price process, which is subject to noise and nonsynchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM), which was recently introduced by Bibinger et al. We prove consistency and a point-wise stable central limit theorem for the proposed spot covariance estimator in a very general setup with stochastic volatility, leverage effects, and general noise distributions. Moreover, we extend the LMM estimator to be robust against autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. Based on simulations, we provide empirical guidance on the effective implementation of the estimator and apply it to high-frequency data of a cross-section of Nasdaq blue chip stocks. Employing the estimator to estimate spot covariances, correlations, and volatilities in normal but also unusual periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, and (iii) can increase strongly and nearly instantaneously if new information arrives. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 419-435
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1356728
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1356728
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:419-435
Template-Type: ReDIF-Article 1.0
Author-Name: Soojin Jo
Author-X-Name-First: Soojin
Author-X-Name-Last: Jo
Author-Name: Rodrigo Sekkel
Author-X-Name-First: Rodrigo
Author-X-Name-Last: Sekkel
Title: Macroeconomic Uncertainty Through the Lens of Professional Forecasters
Abstract:
We analyze the evolution of macroeconomic uncertainty in the United States, based on the forecast errors of consensus survey forecasts of various economic indicators. Comprehensive information contained in the survey forecasts enables us to capture a real-time measure of uncertainty surrounding subjective forecasts in a simple framework. We jointly model and estimate macroeconomic (common) and indicator-specific uncertainties of four indicators, using a factor stochastic volatility model. Our macroeconomic uncertainty estimates have three major spikes aligned with the 1973–1975, 1980, and 2007–2009 recessions, while other recessions were characterized by increases in indicator-specific uncertainties. We also show that the selection of data vintages affects the estimates and relative size of jumps in estimated uncertainty series. Finally, our macroeconomic uncertainty has a persistent negative impact on real economic activity, rather than producing “wait-and-see” dynamics.
Journal: Journal of Business & Economic Statistics
Pages: 436-446
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1356729
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1356729
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:436-446
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew Gelman
Author-X-Name-First: Andrew
Author-X-Name-Last: Gelman
Author-Name: Guido Imbens
Author-X-Name-First: Guido
Author-X-Name-Last: Imbens
Title: Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs
Abstract:
It is common in regression discontinuity analysis to control for third, fourth, or higher-degree polynomials of the forcing variable. There appears to be a perception that such methods are theoretically justified, even though they can lead to evidently nonsensical results. We argue that controlling for global high-order polynomials in regression discontinuity analysis is a flawed approach with three major problems: it leads to noisy estimates, sensitivity to the degree of the polynomial, and poor coverage of confidence intervals. We recommend researchers instead use estimators based on local linear or quadratic polynomials or other smooth functions.
Journal: Journal of Business & Economic Statistics
Pages: 447-456
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1366909
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1366909
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:447-456
Template-Type: ReDIF-Article 1.0
Author-Name: Jean-Marie Dufour
Author-X-Name-First: Jean-Marie
Author-X-Name-Last: Dufour
Author-Name: Emmanuel Flachaire
Author-X-Name-First: Emmanuel
Author-X-Name-Last: Flachaire
Author-Name: Lynda Khalaf
Author-X-Name-First: Lynda
Author-X-Name-Last: Khalaf
Title: Permutation Tests for Comparing Inequality Measures
Abstract:
Asymptotic and bootstrap tests for inequality measures are known to perform poorly in finite samples when the underlying distribution is heavy-tailed. We propose Monte Carlo permutation and bootstrap methods for the problem of testing the equality of inequality measures between two samples. Results cover the Generalized Entropy class, which includes Theil’s index, the Atkinson class of indices, and the Gini index. We analyze finite-sample and asymptotic conditions for the validity of the proposed methods, and we introduce a convenient rescaling to improve finite-sample performance. Simulation results show that size-correct inference can be obtained with our proposed methods despite heavy tails if the underlying distributions are sufficiently close in the upper tails. Substantial reduction in size distortion is achieved more generally. Studentized rescaled Monte Carlo permutation tests outperform the competing methods we consider in terms of power.
Journal: Journal of Business & Economic Statistics
Pages: 457-470
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1371027
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1371027
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:457-470
Template-Type: ReDIF-Article 1.0
Author-Name: Hans G. Bloemen
Author-X-Name-First: Hans G.
Author-X-Name-Last: Bloemen
Title: Collective Labor Supply, Taxes, and Intrahousehold Allocation: An Empirical Approach
Abstract:
Most empirical studies of the impact of labor income taxation on the labor supply behavior of households use a unitary modeling approach. In this article, we empirically analyze income taxation and the choice of working hours by combining the collective approach for household behavior and the discrete hours choice framework with fixed costs of work. We identify the sharing rule parameters with data on working hours of both the husband and the wife within a couple. Parameter estimates are used to evaluate various model outcomes, such as the wage elasticities of labor supply and the impacts of wage changes on the intrahousehold allocation of income. We also simulate the consequences of a policy change in the tax system. We find that the collective model has different empirical outcomes of income sharing than a restricted model that imposes income pooling. In particular, a specification with income pooling fails to capture asymmetries in the income sharing across spouses. These differences in outcomes have consequences for the evaluation of policy changes in the tax system and shed light on the effectiveness of certain policies.
Journal: Journal of Business & Economic Statistics
Pages: 471-483
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1379407
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1379407
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:471-483
Template-Type: ReDIF-Article 1.0
Author-Name: Rocio Alvarez
Author-X-Name-First: Rocio
Author-X-Name-Last: Alvarez
Author-Name: Maximo Camacho
Author-X-Name-First: Maximo
Author-X-Name-Last: Camacho
Author-Name: Manuel Ruiz
Author-X-Name-First: Manuel
Author-X-Name-Last: Ruiz
Title: Inference on Filtered and Smoothed Probabilities in Markov-Switching Autoregressive Models
Abstract:
We derive a statistical theory that provides useful asymptotic approximations to the distributions of the single inferences of filtered and smoothed probabilities, derived from time series characterized by Markov-switching dynamics. We show that the uncertainty in these probabilities diminishes when the states are separated, the variance of the shocks is low, and the time series or the regimes are persistent. As empirical illustrations of our approach, we analyze the U.S. GDP growth rates and the U.S. real interest rates. For both models, we illustrate the usefulness of the confidence intervals when identifying the business cycle phases and the interest rate regimes.
Journal: Journal of Business & Economic Statistics
Pages: 484-495
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1380032
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1380032
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:484-495
Template-Type: ReDIF-Article 1.0
Author-Name: Brigham R. Frandsen
Author-X-Name-First: Brigham R.
Author-X-Name-Last: Frandsen
Title: Testing Censoring Point Independence
Abstract:
Identification in censored regression analysis and hazard models of duration outcomes relies on the condition that censoring points are conditionally independent of latent outcomes, an assumption which may be questionable in many settings. This article proposes a test for this assumption based on a Cramer–von-Mises-like test statistic comparing two different nonparametric estimators for the latent outcome cdf: the Kaplan–Meier estimator, and the empirical cdf conditional on the censoring point exceeding (for right-censored data) the cdf evaluation point. The test is consistent and has power against a wide variety of alternatives. Applying the test to unemployment duration data from the NLSY, the SIPP, and the PSID suggests the assumption is frequently suspect.
Journal: Journal of Business & Economic Statistics
Pages: 496-505
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1383261
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1383261
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:496-505
Template-Type: ReDIF-Article 1.0
Author-Name: Filip Klimenka
Author-X-Name-First: Filip
Author-X-Name-Last: Klimenka
Author-Name: James Lewis Wolter
Author-X-Name-First: James Lewis
Author-X-Name-Last: Wolter
Title: Multiple Regression Model Averaging and the Focused Information Criterion With an Application to Portfolio Choice
Abstract:
We consider multiple regression (MR) model averaging using the focused information criterion (FIC). Our approach is motivated by the problem of implementing a mean-variance portfolio choice rule. The usual approach is to estimate parameters ignoring the intention to use them in portfolio choice. We develop an estimation method that focuses on the trading rule of interest. Asymptotic distributions of submodel estimators in the MR case are derived using a localization framework. The localization is of both regression coefficients and error covariances. Distributions of submodel estimators are used for model selection with the FIC. This allows comparison of submodels using the risk of portfolio rule estimators. FIC model averaging estimators are then characterized. This extension further improves risk properties. We show in simulations that applying these methods in the portfolio choice case results in improved estimates compared with several competitors. An application to futures data shows superior performance as well.
Journal: Journal of Business & Economic Statistics
Pages: 506-516
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1383262
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1383262
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:506-516
Template-Type: ReDIF-Article 1.0
Author-Name: Fang Fang
Author-X-Name-First: Fang
Author-X-Name-Last: Fang
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Jingjing Tong
Author-X-Name-First: Jingjing
Author-X-Name-Last: Tong
Author-Name: Jun Shao
Author-X-Name-First: Jun
Author-X-Name-Last: Shao
Title: Model Averaging for Prediction With Fragmentary Data
Abstract:
One main challenge for statistical prediction with data from multiple sources is that not all the associated covariate data are available for many sampled subjects. Consequently, we need new statistical methodology to handle this type of “fragmentary data” that has become more and more popular in recent years. In this article, we propose a novel method based on the frequentist model averaging that fits some candidate models using all available covariate data. The weights in model averaging are selected by delete-one cross-validation based on the data from complete cases. The optimality of the selected weights is rigorously proved under some conditions. The finite sample performance of the proposed method is confirmed by simulation studies. An example for personal income prediction based on real data from a leading e-community of wealth management in China is also presented for illustration.
Journal: Journal of Business & Economic Statistics
Pages: 517-527
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1383263
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1383263
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:517-527
Template-Type: ReDIF-Article 1.0
Author-Name: Iliyan Georgiev
Author-X-Name-First: Iliyan
Author-X-Name-Last: Georgiev
Author-Name: David I. Harvey
Author-X-Name-First: David I.
Author-X-Name-Last: Harvey
Author-Name: Stephen J. Leybourne
Author-X-Name-First: Stephen J.
Author-X-Name-Last: Leybourne
Author-Name: A. M. Robert Taylor
Author-X-Name-First: A. M. Robert
Author-X-Name-Last: Taylor
Title: A Bootstrap Stationarity Test for Predictive Regression Invalidity
Abstract:
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors and test statistics akin to ours the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006). Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 528-541
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1385467
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1385467
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:528-541
Template-Type: ReDIF-Article 1.0
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Author-Name: Julia Schaumburg
Author-X-Name-First: Julia
Author-X-Name-Last: Schaumburg
Author-Name: Bernd Schwaab
Author-X-Name-First: Bernd
Author-X-Name-Last: Schwaab
Title: Bank Business Models at Zero Interest Rates
Abstract:
We propose a novel observation-driven finite mixture model for the study of banking data. The model accommodates time-varying component means and covariance matrices, normal and Student’s t distributed mixtures, and economic determinants of time-varying parameters. Monte Carlo experiments suggest that units of interest can be classified reliably into distinct components in a variety of settings. In an empirical study of 208 European banks between 2008Q1 and 2015Q4, we identify six business model components and discuss how their properties evolve over time. Changes in the yield curve predict changes in average business model characteristics.
Journal: Journal of Business & Economic Statistics
Pages: 542-555
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1386567
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1386567
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:542-555
Template-Type: ReDIF-Article 1.0
Author-Name: Feng Yao
Author-X-Name-First: Feng
Author-X-Name-Last: Yao
Author-Name: Fan Zhang
Author-X-Name-First: Fan
Author-X-Name-Last: Zhang
Author-Name: Subal C. Kumbhakar
Author-X-Name-First: Subal C.
Author-X-Name-Last: Kumbhakar
Title: Semiparametric Smooth Coefficient Stochastic Frontier Model With Panel Data
Abstract:
We investigate the semiparametric smooth coefficient stochastic frontier model for panel data in which the distribution of the composite error term is assumed to be of known form but depends on some environmental variables. We propose multi-step estimators for the smooth coefficient functions as well as the parameters of the distribution of the composite error term and obtain their asymptotic properties. The Monte Carlo study demonstrates that the proposed estimators perform well in finite samples. We also consider an application and perform a model specification test, construct confidence intervals, and estimate efficiency scores that depend on some environmental variables. The application uses panel data on 451 large U.S. firms to explore the effects of computerization on productivity. Results show that two popular parametric models used in the stochastic frontier literature are likely to be misspecified. Compared with the parametric estimates, our semiparametric model shows a positive and larger overall effect of computer capital on productivity. The efficiency levels, however, were not much different among the models. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 556-572
Issue: 3
Volume: 37
Year: 2019
Month: 7
X-DOI: 10.1080/07350015.2017.1390467
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1390467
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:3:p:556-572
Template-Type: ReDIF-Article 1.0
Author-Name: Jinyong Hahn
Author-X-Name-First: Jinyong
Author-X-Name-Last: Hahn
Author-Name: Hyungsik Roger Moon
Author-X-Name-First: Hyungsik Roger
Author-X-Name-Last: Moon
Author-Name: Connan Snider
Author-X-Name-First: Connan
Author-X-Name-Last: Snider
Title: LM Test of Neglected Correlated Random Effects and Its Application
Abstract:
This article aims at achieving two distinct goals. The first is to extend the existing LM test of overdispersion to the situation where the alternative hypothesis is characterized by the correlated random effects model. We show that the test against the random effects model has a certain max-min type optimality property. We will call such a test the LM test of overdispersion. The second goal of the article is to draw a connection between panel data analysis and the analysis of multiplicity of equilibrium in games. Because such multiplicity can be viewed as a particular form of neglected heterogeneity, we propose an intuitive specification test for a class of two-step game estimators.
Journal: Journal of Business & Economic Statistics
Pages: 359-370
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1063426
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1063426
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:359-370
Template-Type: ReDIF-Article 1.0
Author-Name: Michael P. Clements
Author-X-Name-First: Michael P.
Author-X-Name-Last: Clements
Author-Name: Ana Beatriz Galvão
Author-X-Name-First: Ana Beatriz
Author-X-Name-Last: Galvão
Title: Predicting Early Data Revisions to U.S. GDP and the Effects of Releases on Equity Markets
Abstract:
The effects of data uncertainty on real-time decision-making can be reduced by predicting data revisions to U.S. GDP growth. We show that survey forecasts efficiently predict the revision implicit in the second estimate of GDP growth, but that forecasting models incorporating monthly economic indicators and daily equity returns provide superior forecasts of the data revision implied by the release of the third estimate. We use forecasting models to measure the impact of surprises in GDP announcements on equity markets, and to analyze the effects of anticipated future revisions on announcement-day returns. We show that the publication of better than expected third-release GDP figures provides a boost to equity markets, and if future upward revisions are expected, the effects are enhanced during recessions.
Journal: Journal of Business & Economic Statistics
Pages: 389-406
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1076726
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1076726
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:389-406
Template-Type: ReDIF-Article 1.0
Author-Name: D. S. Poskitt
Author-X-Name-First: D. S.
Author-X-Name-Last: Poskitt
Author-Name: Wenying Yao
Author-X-Name-First: Wenying
Author-X-Name-Last: Yao
Title: Vector Autoregressions and Macroeconomic Modeling: An Error Taxonomy
Abstract:
In this article, we investigate the theoretical behavior of finite lag VAR(n) models fitted to time series that in truth come from an infinite-order VAR(∞) data-generating mechanism. We show that the overall error can be broken down into two basic components, an estimation error that stems from the difference between the parameter estimates and their population ensemble VAR(n) counterparts, and an approximation error that stems from the difference between the VAR(n) and the true VAR(∞). The two sources of error are shown to be present in other performance indicators previously employed in the literature to characterize, so-called, truncation effects. Our theoretical analysis indicates that the magnitude of the estimation error exceeds that of the approximation error, but experimental results based upon a prototypical real business cycle model and a practical example indicate that the approximation error approaches its asymptotic position far more slowly than does the estimation error, their relative orders of magnitude notwithstanding. The experimental results suggest that with sample sizes and lag lengths like those commonly employed in practice VAR(n) models are likely to exhibit serious errors of both types when attempting to replicate the dynamics of the true underlying process and that inferences based on VAR(n) models can be very untrustworthy.
Journal: Journal of Business & Economic Statistics
Pages: 407-419
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1077139
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1077139
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:407-419
Template-Type: ReDIF-Article 1.0
Author-Name: Michael P. Clements
Author-X-Name-First: Michael P.
Author-X-Name-Last: Clements
Title: Assessing Macro Uncertainty in Real-Time When Data Are Subject To Revision
Abstract:
Model-based estimates of future uncertainty are generally based on the in-sample fit of the model, as when Box–Jenkins prediction intervals are calculated. However, this approach will generate biased uncertainty estimates in real time when there are data revisions. A simple remedy is suggested, and used to generate more accurate prediction intervals for 25 macroeconomic variables, in line with the theory. A simulation study based on an empirically estimated model of data revisions for U.S. output growth is used to investigate small-sample properties.
Journal: Journal of Business & Economic Statistics
Pages: 420-433
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1081596
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1081596
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:420-433
Template-Type: ReDIF-Article 1.0
Author-Name: Johannes Tang Kristensen
Author-X-Name-First: Johannes Tang
Author-X-Name-Last: Kristensen
Title: Diffusion Indexes With Sparse Loadings
Abstract:
The use of large-dimensional factor models in forecasting has received much attention in the literature with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs to be taken when choosing which variables to include in the model. A number of different approaches to determining these variables have been put forward. These are, however, often based on ad hoc procedures or abandon the underlying theoretical factor model. In this article, we will take a different approach to the problem by using the least absolute shrinkage and selection operator (LASSO) as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model that is better suited for forecasting compared to the traditional principal components (PC) approach. We provide an asymptotic analysis of the estimator and illustrate its merits empirically in a forecasting experiment based on U.S. macroeconomic data. Overall we find that compared to PC we obtain improvements in forecasting accuracy and thus find it to be an important alternative to PC. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 434-451
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1084308
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1084308
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:434-451
Template-Type: ReDIF-Article 1.0
Author-Name: Yonghong An
Author-X-Name-First: Yonghong
Author-X-Name-Last: An
Author-Name: Xun Tang
Author-X-Name-First: Xun
Author-X-Name-Last: Tang
Title: Identifying Structural Models of Committee Decisions With Heterogeneous Tastes and Ideological Bias
Abstract:
In practice, members of a committee often make different recommendations despite a common goal and shared sources of information. We study the nonparametric identification and estimation of a structural model, where such discrepancies are rationalized by the members’ unobserved types, which consist of ideological bias while weighing different sources of information, and tastes for multiple objectives announced in the policy target. We consider models with and without strategic incentives for members to make recommendations that conform to the final committee decision. We show that pure-strategy Bayesian Nash equilibria exist in both cases, and that the variation in common information recorded in the data helps us to recover the distribution of private types from the members’ choices. Building on the identification result, we estimate a structural model of interest rate decisions by the Monetary Policy Committee (MPC) at the Bank of England. We find some evidence that the external committee members are less affected by strategic incentives for conformity in their recommendations than the internal members. We also find that the difference in ideological bias between external and internal members is statistically insignificant. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 452-469
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1084309
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1084309
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:452-469
Template-Type: ReDIF-Article 1.0
Author-Name: Fabian Krüger
Author-X-Name-First: Fabian
Author-X-Name-Last: Krüger
Author-Name: Todd E. Clark
Author-X-Name-First: Todd E.
Author-X-Name-Last: Clark
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Title: Using Entropic Tilting to Combine BVAR Forecasts With External Nowcasts
Abstract:
This article shows entropic tilting to be a flexible and powerful tool for combining medium-term forecasts from BVARs with short-term forecasts from other sources (nowcasts from either surveys or other models). Tilting systematically improves the accuracy of both point and density forecasts, and tilting the BVAR forecasts based on nowcast means and variances yields slightly greater gains in density accuracy than does just tilting based on the nowcast means. Hence, entropic tilting can offer—more so for persistent variables than nonpersistent variables—some benefits for accurately estimating the uncertainty of multi-step forecasts that incorporate nowcast information.
Journal: Journal of Business & Economic Statistics
Pages: 470-485
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1087856
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1087856
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:470-485
Template-Type: ReDIF-Article 1.0
Author-Name: Tao Zou
Author-X-Name-First: Tao
Author-X-Name-Last: Zou
Author-Name: Song Xi Chen
Author-X-Name-First: Song Xi
Author-X-Name-Last: Chen
Title: Enhancing Estimation for Interest Rate Diffusion Models With Bond Prices
Abstract:
We consider improving the estimation of parameters of diffusion processes for interest rates by incorporating information in bond prices. This is designed to improve the estimation of the drift parameters, which are known to be subject to large estimation errors. It is shown that having the bond prices together with the short rates leads to more efficient estimation of all parameters for the interest rate models. It enhances the estimation efficiency of the maximum likelihood estimation based on the interest rate dynamics alone. The combined estimation based on the bond prices and the interest rate dynamics can also provide inference to the risk premium parameter. Simulation experiments were conducted to confirm the theoretical properties of the estimators concerned. We analyze the overnight Fed fund rates together with the U.S. Treasury bond prices. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 486-498
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1089773
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1089773
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:486-498
Template-Type: ReDIF-Article 1.0
Author-Name: Ying Chen
Author-X-Name-First: Ying
Author-X-Name-Last: Chen
Author-Name: Bo Li
Author-X-Name-First: Bo
Author-X-Name-Last: Li
Title: An Adaptive Functional Autoregressive Forecast Model to Predict Electricity Price Curves
Abstract:
We propose an adaptive functional autoregressive (AFAR) forecast model to predict electricity price curves. With time-varying operators, the AFAR model can be safely used in both stationary and nonstationary situations. A closed-form maximum likelihood (ML) estimator is derived under stationarity. The result is further extended to nonstationarity, where the time-dependent operators are adaptively estimated under local homogeneity. We provide theoretical results for the ML estimator and the adaptive estimator. A simulation study illustrates good finite-sample performance of the AFAR modeling. The AFAR model also exhibits superior accuracy in the forecast exercise on California electricity daily price curves compared to several alternatives.
Journal: Journal of Business & Economic Statistics
Pages: 371-388
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1092976
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1092976
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:371-388
Template-Type: ReDIF-Article 1.0
Author-Name: Pedro H. C. Sant’Anna
Author-X-Name-First: Pedro H. C.
Author-X-Name-Last: Sant’Anna
Title: Testing for Uncorrelated Residuals in Dynamic Count Models With an Application to Corporate Bankruptcy
Abstract:
This article proposes new model checks for dynamic count models. Both portmanteau and omnibus-type tests for lack of residual autocorrelation are considered. The resulting test statistics are asymptotically pivotal when innovations are uncorrelated but possibly exhibit higher order serial dependence. Moreover, the tests are able to detect local alternatives converging to the null at the parametric rate T-super-1/2, with T the sample size. The finite sample performance of the test statistics is examined by means of Monte Carlo experiments. Using a dataset on U.S. corporate bankruptcies, the proposed tests are applied to check if different risk models are correctly specified. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 349-358
Issue: 3
Volume: 35
Year: 2017
Month: 7
X-DOI: 10.1080/07350015.2015.1102732
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1102732
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:3:p:349-358
Template-Type: ReDIF-Article 1.0
Author-Name: Emily Oster
Author-X-Name-First: Emily
Author-X-Name-Last: Oster
Title: Unobservable Selection and Coefficient Stability: Theory and Evidence
Abstract:
A common approach to evaluating robustness to omitted variable bias is to observe coefficient movements after inclusion of controls. This is informative only if selection on observables is informative about selection on unobservables. Although this link is known in theory, very few empirical articles approach it formally. I develop an extension of the theory that connects bias explicitly to coefficient stability. I show that it is necessary to take into account coefficient and R-squared movements. I develop a formal bounding argument. I show two validation exercises and discuss application to the economics literature. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 187-204
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2016.1227711
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1227711
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:187-204
Template-Type: ReDIF-Article 1.0
Author-Name: Christoph Breunig
Author-X-Name-First: Christoph
Author-X-Name-Last: Breunig
Title: Testing Missing at Random Using Instrumental Variables
Abstract:
This article proposes a test for missing at random (MAR). The MAR assumption is shown to be testable given instrumental variables which are independent of response given potential outcomes. A nonparametric testing procedure based on integrated squared distance is proposed. The statistic’s asymptotic distribution under the MAR hypothesis is derived. In particular, our results can be applied to testing missing completely at random (MCAR). A Monte Carlo study examines finite sample performance of our test statistic. An empirical illustration analyzes the nonresponse mechanism in labor income questions.
Journal: Journal of Business & Economic Statistics
Pages: 223-234
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1302879
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1302879
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:223-234
Template-Type: ReDIF-Article 1.0
Author-Name: Jan Heufer
Author-X-Name-First: Jan
Author-X-Name-Last: Heufer
Author-Name: Per Hjertstrand
Author-X-Name-First: Per
Author-X-Name-Last: Hjertstrand
Title: Homothetic Efficiency: Theory and Applications
Abstract:
We provide a nonparametric revealed preference approach to demand analysis based on homothetic efficiency. Homotheticity is widely assumed (often implicitly) because it is a convenient and often useful restriction. However, this assumption is rarely tested, and data rarely satisfy the testable conditions. To overcome this, we provide a way to estimate the homothetic efficiency of consumption choices. The method provides considerably higher discriminatory power against random behavior than the commonly used Afriat efficiency. We use experimental and household survey data to illustrate how our approach is useful for different empirical applications and can provide greater predictive success.
Journal: Journal of Business & Economic Statistics
Pages: 235-247
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1319372
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1319372
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:235-247
Template-Type: ReDIF-Article 1.0
Author-Name: Lili Tan
Author-X-Name-First: Lili
Author-X-Name-Last: Tan
Author-Name: Yichong Zhang
Author-X-Name-First: Yichong
Author-X-Name-Last: Zhang
Title: M-Estimators of U-Processes With a Change-Point Due to a Covariate Threshold
Abstract:
Economic theory often predicts a “tipping point” effect due to multiple equilibria. Linear threshold regressions estimate the “tipping point” by assuming at the same time that the response variable is linear in an index of covariates. However, economic theory rarely imposes a specific functional form, but rather predicts a monotonic relationship between the response variable and the index. We propose new, rank-based, estimators for both the “tipping point” and other regression coefficients, exploiting only the monotonicity condition. We derive the asymptotic properties of these estimators by establishing a more general result for M-estimators of U-processes with a change-point due to a covariate threshold. We finally apply our method to provide new estimates of the “tipping point” of social segregation in four major cities in the United States. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 248-259
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1319373
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1319373
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:248-259
Template-Type: ReDIF-Article 1.0
Author-Name: Gaosheng Ju
Author-X-Name-First: Gaosheng
Author-X-Name-Last: Ju
Author-Name: Li Gan
Author-X-Name-First: Li
Author-X-Name-Last: Gan
Author-Name: Qi Li
Author-X-Name-First: Qi
Author-X-Name-Last: Li
Title: Nonparametric Panel Estimation of Labor Supply
Abstract:
In this article, we estimate structural labor supply with piecewise-linear budgets and nonseparable endogenous unobserved heterogeneity. We propose a two-stage method to address the endogeneity issue that comes from the correlation between the covariates and unobserved heterogeneity. In the first stage, Evdokimov's nonparametric deconvolution method serves to identify the conditional distribution of unobserved heterogeneity from the quasi-reduced model that uses panel data. In the second stage, the conditional distribution is plugged into the original structural model to estimate labor supply. We apply this methodology to estimate the labor supply of U.S. married men in 2004 and 2005. Our empirical work demonstrates that ignoring the correlation between the covariates and unobserved heterogeneity biases the estimates of wage elasticities upward. The labor elasticity estimated from a fixed effects model is less than half of that obtained from a random effects model.
Journal: Journal of Business & Economic Statistics
Pages: 260-274
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1321546
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1321546
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:260-274
Template-Type: ReDIF-Article 1.0
Author-Name: Shulin Zhang
Author-X-Name-First: Shulin
Author-X-Name-Last: Zhang
Author-Name: Qian M. Zhou
Author-X-Name-First: Qian M.
Author-X-Name-Last: Zhou
Author-Name: Dongming Zhu
Author-X-Name-First: Dongming
Author-X-Name-Last: Zhu
Author-Name: Peter X.-K. Song
Author-X-Name-First: Peter X.-K.
Author-X-Name-Last: Song
Title: Goodness-of-Fit Test in Multivariate Jump Diffusion Models
Abstract:
In this article, we develop a new goodness-of-fit test for multivariate jump diffusion models. The test statistic is constructed from a contrast between an "in-sample" likelihood (the likelihood of observed data) and an "out-of-sample" likelihood (the likelihood of predicted data). We show that under the null hypothesis that the jump diffusion process is correctly specified, the proposed test statistic converges in probability to a constant equal to the number of model parameters in the null model. We also establish the asymptotic normality of the proposed test statistic. To implement the method, we invoke a closed-form approximation to the transition density functions, which yields a computationally efficient algorithm for evaluating the test. Using Monte Carlo simulation experiments, we illustrate that both the exact and approximate versions of the proposed test perform satisfactorily. In addition, we apply the proposed testing method to several popular stochastic volatility models for the time series of the weekly S&P 500 index from January 1990 to December 2014, in which we invoke a linear affine relationship between latent stochastic volatility and the implied volatility index.
Journal: Journal of Business & Economic Statistics
Pages: 275-287
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1321547
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1321547
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:275-287
Template-Type: ReDIF-Article 1.0
Author-Name: Adriana Cornea-Madeira
Author-X-Name-First: Adriana
Author-X-Name-Last: Cornea-Madeira
Author-Name: Cars Hommes
Author-X-Name-First: Cars
Author-X-Name-Last: Hommes
Author-Name: Domenico Massaro
Author-X-Name-First: Domenico
Author-X-Name-Last: Massaro
Title: Behavioral Heterogeneity in U.S. Inflation Dynamics
Abstract:
In this article, we develop and estimate a behavioral model of inflation dynamics with heterogeneous firms. In our stylized framework there are two groups of price setters, fundamentalists and random walk believers. Fundamentalists are forward-looking in the sense that they believe in a present-value relationship between inflation and real marginal costs, while random walk believers are backward-looking, using the simplest rule of thumb, naive expectations, to forecast inflation. Agents are allowed to switch between these forecasting strategies conditional on their recent relative forecasting performance. We estimate the switching model using aggregate and survey data. Our results support behavioral heterogeneity and the significance of the evolutionary learning mechanism. We show that there is substantial time variation in the weights of forward-looking and backward-looking behavior. Although on average the majority of firms use the simple backward-looking rule, the market has phases in which it is dominated by either the fundamentalists or the random walk believers.
Journal: Journal of Business & Economic Statistics
Pages: 288-300
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1321548
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1321548
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:288-300
Template-Type: ReDIF-Article 1.0
Author-Name: Yi He
Author-X-Name-First: Yi
Author-X-Name-Last: He
Author-Name: Yanxi Hou
Author-X-Name-First: Yanxi
Author-X-Name-Last: Hou
Author-Name: Liang Peng
Author-X-Name-First: Liang
Author-X-Name-Last: Peng
Author-Name: Jiliang Sheng
Author-X-Name-First: Jiliang
Author-X-Name-Last: Sheng
Title: Statistical Inference for a Relative Risk Measure
Abstract:
For monitoring systemic risk from the regulators' point of view, this article proposes a relative risk measure that is sensitive to market comovement. The asymptotic normality of a nonparametric estimator and its smoothed version is established when the observations are independent. To construct a confidence interval without complicated asymptotic variance estimation, a jackknife empirical likelihood inference procedure based on the smoothed nonparametric estimator is provided, with a Wilks type of result in the case of independent observations. When data follow AR-GARCH models, the relative risk measure with respect to the errors becomes useful, and so we propose a corresponding nonparametric estimator. A simulation study and real-life data analysis show that the proposed relative risk measure is useful in monitoring systemic risk.
Journal: Journal of Business & Economic Statistics
Pages: 301-311
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1321549
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1321549
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:301-311
Template-Type: ReDIF-Article 1.0
Author-Name: Jia Li
Author-X-Name-First: Jia
Author-X-Name-Last: Li
Author-Name: Viktor Todorov
Author-X-Name-First: Viktor
Author-X-Name-Last: Todorov
Author-Name: George Tauchen
Author-X-Name-First: George
Author-X-Name-Last: Tauchen
Author-Name: Huidi Lin
Author-X-Name-First: Huidi
Author-X-Name-Last: Lin
Title: Rank Tests at Jump Events
Abstract:
We propose a test for the rank of a cross-section of processes at a set of jump events. The jump events are either specific known times or are random and associated with jumps of some process. The test is formed from discretely sampled data on a fixed time interval with asymptotically shrinking mesh. In the first step, we form nonparametric estimates of the jump events via thresholding techniques. We then compute the eigenvalues of the outer product of the cross-section of increments at the identified jump events. The test for rank r is based on the asymptotic behavior of the sum of the squared eigenvalues excluding the largest r. A simple resampling method is proposed for feasible testing. The test is applied to financial data spanning the period 2007–2015 at the times of stock market jumps. We find support for a one-factor model of both industry portfolio and Dow 30 stock returns at market jump times. This stands in contrast with earlier evidence for higher-dimensional factor structure of stock returns during “normal” (nonjump) times. We identify the latent factor driving the stocks and portfolios as the size of the market jump.
Journal: Journal of Business & Economic Statistics
Pages: 312-321
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1328362
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1328362
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:312-321
Template-Type: ReDIF-Article 1.0
Author-Name: Wolfgang Karl Härdle
Author-X-Name-First: Wolfgang Karl
Author-X-Name-Last: Härdle
Author-Name: Li-Shan Huang
Author-X-Name-First: Li-Shan
Author-X-Name-Last: Huang
Title: Analysis of Deviance for Hypothesis Testing in Generalized Partially Linear Models
Abstract:
In this study, we develop nonparametric analysis of deviance tools for generalized partially linear models based on local polynomial fitting. Assuming a canonical link, we propose expressions for both local and global analysis of deviance, which admit an additivity property that reduces to analysis of variance decompositions in the Gaussian case. Chi-square tests based on integrated likelihood functions are proposed to formally test whether the nonparametric term is significant. Simulation results are shown to illustrate the proposed chi-square tests and to compare them with an existing procedure based on penalized splines. The methodology is applied to German Bundesbank Federal Reserve data.
Journal: Journal of Business & Economic Statistics
Pages: 322-333
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1330693
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1330693
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:322-333
Template-Type: ReDIF-Article 1.0
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Xia Wang
Author-X-Name-First: Xia
Author-X-Name-Last: Wang
Author-Name: Sainan Jin
Author-X-Name-First: Sainan
Author-X-Name-Last: Jin
Title: Sieve Estimation of Time-Varying Panel Data Models With Latent Structures
Abstract:
We propose a heterogeneous time-varying panel data model with a latent group structure that allows the coefficients to vary over both individuals and time. We assume that the coefficients change smoothly over time and form different unobserved groups. When treated as smooth functions of time, the individual functional coefficients are heterogeneous across groups but homogeneous within a group. We propose a penalized-sieve-estimation-based classifier-Lasso (C-Lasso) procedure to identify the individuals’ membership and to estimate the group-specific functional coefficients in a single step. The classification exhibits the desirable property of uniform consistency. The C-Lasso estimators and their post-Lasso versions achieve the oracle property so that the group-specific functional coefficients can be estimated as well as if the individuals’ membership were known. Several extensions are discussed. Simulations demonstrate excellent finite sample performance of the approach in both classification and estimation. We apply our method to study the heterogeneous trending behavior of GDP per capita across 91 countries for the period 1960–2012 and find four latent groups.
Journal: Journal of Business & Economic Statistics
Pages: 334-349
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1340299
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1340299
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:334-349
Template-Type: ReDIF-Article 1.0
Author-Name: Hang Qian
Author-X-Name-First: Hang
Author-X-Name-Last: Qian
Title: Inequality Constrained State-Space Models
Abstract:
The standard Kalman filter cannot handle inequality constraints imposed on the state variables, as state truncation induces a nonlinear and non-Gaussian model. We propose a Rao-Blackwellized particle filter with the optimal importance function for forward filtering and the likelihood function evaluation. The particle filter effectively enforces the state constraints when the Kalman filter violates them. Monte Carlo experiments demonstrate excellent performance of the proposed particle filter with Rao-Blackwellization, in which the Gaussian linear sub-structure is exploited at both the cross-sectional and temporal levels.
Journal: Journal of Business & Economic Statistics
Pages: 350-362
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1340300
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1340300
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:350-362
Template-Type: ReDIF-Article 1.0
Author-Name: Robert F. Engle
Author-X-Name-First: Robert F.
Author-X-Name-Last: Engle
Author-Name: Olivier Ledoit
Author-X-Name-First: Olivier
Author-X-Name-Last: Ledoit
Author-Name: Michael Wolf
Author-X-Name-First: Michael
Author-X-Name-Last: Wolf
Title: Large Dynamic Covariance Matrices
Abstract:
Second moments of asset returns are important for risk management and portfolio selection. The problem of estimating second moments can be approached from two angles: time series and the cross-section. In time series, the key is to account for conditional heteroscedasticity; a favored model is Dynamic Conditional Correlation (DCC), derived from the ARCH/GARCH family started by Engle (1982). In the cross-section, the key is to correct in-sample biases of sample covariance matrix eigenvalues; a favored model is nonlinear shrinkage, derived from Random Matrix Theory (RMT). The present article marries these two strands of literature to deliver improved estimation of large dynamic covariance matrices. Supplementary material for this article is available online.
Journal: Journal of Business & Economic Statistics
Pages: 363-375
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2017.1345683
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1345683
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:363-375
Template-Type: ReDIF-Article 1.0
Author-Name: Zhuan Pei
Author-X-Name-First: Zhuan
Author-X-Name-Last: Pei
Author-Name: Jörn-Steffen Pischke
Author-X-Name-First: Jörn-Steffen
Author-X-Name-Last: Pischke
Author-Name: Hannes Schwandt
Author-X-Name-First: Hannes
Author-X-Name-Last: Schwandt
Title: Poorly Measured Confounders are More Useful on the Left than on the Right
Abstract:
Researchers frequently test identifying assumptions in regression-based research designs (which include instrumental variables or difference-in-differences models) by adding additional control variables on the right-hand side of the regression. If such additions do not affect the coefficient of interest (much), a study is presumed to be reliable. We caution that such invariance may result from the fact that the observed variables used in such robustness checks are often poor measures of the potential underlying confounders. In this case, a more powerful test of the identifying assumption is to put the variable on the left-hand side of the candidate regression. We provide derivations for the estimators and test statistics involved, as well as power calculations, which can help applied researchers interpret their findings. We illustrate these results in the context of estimating the returns to schooling.
Journal: Journal of Business & Economic Statistics
Pages: 205-216
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2018.1462710
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1462710
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:205-216
Template-Type: ReDIF-Article 1.0
Author-Name: Giuseppe De Luca
Author-X-Name-First: Giuseppe
Author-X-Name-Last: De Luca
Author-Name: Jan R. Magnus
Author-X-Name-First: Jan R.
Author-X-Name-Last: Magnus
Author-Name: Franco Peracchi
Author-X-Name-First: Franco
Author-X-Name-Last: Peracchi
Title: Comments on “Unobservable Selection and Coefficient Stability: Theory and Evidence” and “Poorly Measured Confounders are More Useful on the Left Than on the Right”
Abstract:
We establish a link between the approaches proposed by Oster (2019) and by Pei, Pischke, and Schwandt (2019), which contribute to the development of inferential procedures for causal effects in the challenging and empirically relevant situation where the unknown data-generation process is not included in the set of models considered by the investigator. We use the general misspecification framework recently proposed by De Luca, Magnus, and Peracchi (2018) to analyze and understand the implications of the restrictions imposed by the two approaches.
Journal: Journal of Business & Economic Statistics
Pages: 217-222
Issue: 2
Volume: 37
Year: 2019
Month: 4
X-DOI: 10.1080/07350015.2019.1575743
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1575743
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:37:y:2019:i:2:p:217-222
Template-Type: ReDIF-Article 1.0
Author-Name: Davide Pettenuzzo
Author-X-Name-First: Davide
Author-X-Name-Last: Pettenuzzo
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Forecasting Macroeconomic Variables Under Model Instability
Abstract:
We compare different approaches to accounting for parameter instability in macroeconomic forecasting models, contrasting models that assume small, frequent changes with models whose parameters exhibit large, rare changes. An empirical out-of-sample forecasting exercise for U.S. gross domestic product (GDP) growth and inflation suggests that models that allow for parameter instability generate more accurate density forecasts than constant-parameter models, although they fail to produce better point forecasts. Model combinations deliver similar gains in predictive performance, although they fail to improve on the predictive accuracy of the single best model, which is a specification that allows for time-varying parameters and stochastic volatility. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 183-201
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1051183
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1051183
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:183-201
Template-Type: ReDIF-Article 1.0
Author-Name: Laurent Callot
Author-X-Name-First: Laurent
Author-X-Name-Last: Callot
Author-Name: Mehmet Caner
Author-X-Name-First: Mehmet
Author-X-Name-Last: Caner
Author-Name: Anders Bredahl Kock
Author-X-Name-First: Anders Bredahl
Author-X-Name-Last: Kock
Author-Name: Juan Andres Riquelme
Author-X-Name-First: Juan Andres
Author-X-Name-Last: Riquelme
Title: Sharp Threshold Detection Based on Sup-Norm Error Rates in High-Dimensional Models
Abstract:
We propose a new estimator, the thresholded scaled Lasso, in high-dimensional threshold regressions. First, we establish an upper bound on the ℓ∞ estimation error of the scaled Lasso estimator of Lee, Seo, and Shin. This is a nontrivial task as the literature on high-dimensional models has focused almost exclusively on ℓ1 and ℓ2 estimation errors. We show that this sup-norm bound can be used to distinguish between zero and nonzero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent variable selection via thresholding. Our simulations show that thresholding the scaled Lasso yields substantial improvements in terms of variable selection. Finally, we use our estimator to shed further empirical light on the long-running debate on the relationship between the level of debt (public and private) and GDP growth. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 250-264
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1052461
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052461
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:250-264
Template-Type: ReDIF-Article 1.0
Author-Name: Biqing Cai
Author-X-Name-First: Biqing
Author-X-Name-Last: Cai
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Dag Tjøstheim
Author-X-Name-First: Dag
Author-X-Name-Last: Tjøstheim
Title: A New Class of Bivariate Threshold Cointegration Models
Abstract:
In this article, we introduce a new class of bivariate threshold VAR cointegration models. In these models, the processes are cointegrated outside a compact region, while inside the compact region we allow for various kinds of behavior. We show that the bivariate processes form a 1/2-null recurrent system. We also find that the convergence rate for the estimators of the coefficients in the outside regime is $\sqrt{T}$, while the convergence rate for the estimators of the coefficients in the middle regime is $T^{1/4}$. Moreover, we show that the convergence rate of the cointegrating coefficient is $T$, the same as in the linear cointegration model. The Monte Carlo simulation results suggest that the estimators perform reasonably well in finite samples. Applying the proposed model to study the dynamic relationship between the federal funds rate and the 3-month Treasury bill rate, we find that the cointegrating coefficients are the same across the two regimes, while the short-run loading coefficients differ.
Journal: Journal of Business & Economic Statistics
Pages: 288-305
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1062385
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1062385
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:288-305
Template-Type: ReDIF-Article 1.0
Author-Name: Yaxing Yang
Author-X-Name-First: Yaxing
Author-X-Name-Last: Yang
Author-Name: Shiqing Ling
Author-X-Name-First: Shiqing
Author-X-Name-Last: Ling
Title: Inference for Heavy-Tailed and Multiple-Threshold Double Autoregressive Models
Abstract:
This article develops a systematic inference procedure for heavy-tailed and multiple-threshold double autoregressive (MTDAR) models. We first study its quasi-maximum exponential likelihood estimator (QMELE). It is shown that the estimated thresholds are $n$-consistent, each of which converges weakly to the smallest minimizer of a two-sided compound Poisson process. The remaining parameters are $\sqrt{n}$-consistent and asymptotically normal. Based on this theory, a score-based test is developed to identify the number of thresholds in the model. Furthermore, we construct a mixed sign-based portmanteau test for model checking. A simulation study is carried out to assess the performance of our procedure, and a real example is given.
Journal: Journal of Business & Economic Statistics
Pages: 318-333
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1064433
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1064433
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:318-333
Template-Type: ReDIF-Article 1.0
Author-Name: Ngai Hang Chan
Author-X-Name-First: Ngai Hang
Author-X-Name-Last: Chan
Author-Name: Ching-Kang Ing
Author-X-Name-First: Ching-Kang
Author-X-Name-Last: Ing
Author-Name: Yuanbo Li
Author-X-Name-First: Yuanbo
Author-X-Name-Last: Li
Author-Name: Chun Yip Yau
Author-X-Name-First: Chun Yip
Author-X-Name-Last: Yau
Title: Threshold Estimation via Group Orthogonal Greedy Algorithm
Abstract:
A threshold autoregressive (TAR) model is an important class of nonlinear time series models that possess many desirable features such as asymmetric limit cycles and amplitude-dependent frequencies. Statistical inference for the TAR model encounters a major difficulty in the estimation of thresholds, however. This article develops an efficient procedure to estimate the thresholds. The procedure first transforms multiple-threshold detection to a regression variable selection problem, and then employs a group orthogonal greedy algorithm to obtain the threshold estimates. Desirable theoretical results are derived to lend support to the proposed methodology. Simulation experiments are conducted to illustrate the empirical performances of the method. Applications to U.S. GNP data are investigated.
Journal: Journal of Business & Economic Statistics
Pages: 334-345
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1064820
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1064820
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:334-345
Template-Type: ReDIF-Article 1.0
Author-Name: Fei Su
Author-X-Name-First: Fei
Author-X-Name-Last: Su
Author-Name: Kung-Sik Chan
Author-X-Name-First: Kung-Sik
Author-X-Name-Last: Chan
Title: Testing for Threshold Diffusion
Abstract:
The threshold diffusion model assumes a piecewise linear drift term and a piecewise smooth diffusion term, which constitutes a rich model for analyzing nonlinear continuous-time processes. We consider the problem of testing for threshold nonlinearity in the drift term. We do this by developing a quasi-likelihood test derived under the working assumption of a constant diffusion term, which circumvents the problem of generally unknown functional form for the diffusion term. The test is first developed for testing for one threshold at which the drift term breaks into two linear functions. We show that under some mild regularity conditions, the asymptotic null distribution of the proposed test statistic is given by the distribution of certain functional of some centered Gaussian process. We develop a computationally efficient method for calibrating the p-value of the test statistic by bootstrapping its asymptotic null distribution. The local power function is also derived, which establishes the consistency of the proposed test. The test is then extended to testing for multiple thresholds. We demonstrate the efficacy of the proposed test by simulations. Using the proposed test, we examine the evidence of nonlinearity in the term structure of a long time series of U.S. interest rates.
Journal: Journal of Business & Economic Statistics
Pages: 218-227
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1073594
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1073594
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:218-227
Template-Type: ReDIF-Article 1.0
Author-Name: Bruce E. Hansen
Author-X-Name-First: Bruce E.
Author-X-Name-Last: Hansen
Title: Regression Kink With an Unknown Threshold
Abstract:
This article explores estimation and inference in a regression kink model with an unknown threshold. A regression kink model (or continuous threshold model) is a threshold regression constrained to be everywhere continuous with a kink at an unknown threshold. We present methods for estimation, to test for the presence of the threshold, for inference on the regression parameters, and for inference on the regression function. A novel finding is that inference on the regression function is nonstandard since the regression function is a nondifferentiable function of the parameters. We apply recently developed methods for inference on nondifferentiable functions. The theory is illustrated by an application to the growth and debt problem introduced by Reinhart and Rogoff, using their long-span time-series for the United States.
Journal: Journal of Business & Economic Statistics
Pages: 228-240
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1073595
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1073595
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:228-240
Template-Type: ReDIF-Article 1.0
Author-Name: Guodong Li
Author-X-Name-First: Guodong
Author-X-Name-Last: Li
Author-Name: Qianqian Zhu
Author-X-Name-First: Qianqian
Author-X-Name-Last: Zhu
Author-Name: Zhao Liu
Author-X-Name-First: Zhao
Author-X-Name-Last: Liu
Author-Name: Wai Keung Li
Author-X-Name-First: Wai Keung
Author-X-Name-Last: Li
Title: On Mixture Double Autoregressive Time Series Models
Abstract:
This article proposes a mixture double autoregressive model by introducing the flexibility of mixture models to the double autoregressive model, a novel conditional heteroscedastic model recently proposed in the literature. To make it more flexible, the mixing proportions are further assumed to be time varying, and probabilistic properties including strict stationarity and higher order moments are derived. Inference tools, including maximum likelihood estimation, an expectation–maximization (EM) algorithm for computing the estimator, and an information criterion for model selection, are carefully studied for the logistic mixture double autoregressive model, which has two components and is encountered more frequently in practice. Monte Carlo experiments give further support to the new models, and the analysis of an empirical example is also reported.
Journal: Journal of Business & Economic Statistics
Pages: 306-317
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1102735
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1102735
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:306-317
Template-Type: ReDIF-Article 1.0
Author-Name: Luc Bauwens
Author-X-Name-First: Luc
Author-X-Name-Last: Bauwens
Author-Name: Jean-François Carpantier
Author-X-Name-First: Jean-François
Author-X-Name-Last: Carpantier
Author-Name: Arnaud Dufays
Author-X-Name-First: Arnaud
Author-X-Name-Last: Dufays
Title: Autoregressive Moving Average Infinite Hidden Markov-Switching Models
Abstract:
Markov-switching models are usually specified under the assumption that all the parameters change when a regime switch occurs. Relaxing this hypothesis and being able to detect which parameters evolve over time is relevant for interpreting the changes in the dynamics of the series, for specifying models parsimoniously, and may be helpful in forecasting. We propose the class of sticky infinite hidden Markov-switching autoregressive moving average models, in which we disentangle the break dynamics of the mean and the variance parameters. In this class, the number of regimes is possibly infinite and is determined when estimating the model, thus avoiding the need to set this number by a model choice criterion. We develop a new Markov chain Monte Carlo estimation method that solves the path dependence issue due to the moving average component. Empirical results on macroeconomic series illustrate that the proposed class of models dominates the model with fixed parameters in terms of point and density forecasts.
Journal: Journal of Business & Economic Statistics
Pages: 162-182
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2015.1123636
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1123636
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:162-182
Template-Type: ReDIF-Article 1.0
Author-Name: Jesús Gonzalo
Author-X-Name-First: Jesús
Author-X-Name-Last: Gonzalo
Author-Name: Jean-Yves Pitarakis
Author-X-Name-First: Jean-Yves
Author-X-Name-Last: Pitarakis
Title: Inferring the Predictability Induced by a Persistent Regressor in a Predictive Threshold Model
Abstract:
We develop tests for detecting possibly episodic predictability induced by a persistent predictor. Our framework is that of a predictive regression model with threshold effects and our goal is to develop operational and easily implementable inferences when one does not wish to impose a priori restrictions on the parameters of the model other than the slopes corresponding to the persistent predictor. Differently put, our tests for the null hypothesis of no predictability against threshold predictability remain valid without the need to know whether the remaining parameters of the model are characterized by threshold effects or not (e.g., shifting versus nonshifting intercepts). One interesting feature of our setting is that our test statistics remain unaffected by whether some nuisance parameters are identified or not. We subsequently apply our methodology to the predictability of aggregate stock returns with valuation ratios and document a robust countercyclicality in the ability of some valuation ratios to predict returns, in addition to highlighting a strong sensitivity of predictability-based results to the time period under consideration.
Journal: Journal of Business & Economic Statistics
Pages: 202-217
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2016.1164054
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1164054
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:202-217
Template-Type: ReDIF-Article 1.0
Author-Name: Young-Joo Kim
Author-X-Name-First: Young-Joo
Author-X-Name-Last: Kim
Author-Name: Myung Hwan Seo
Author-X-Name-First: Myung Hwan
Author-X-Name-Last: Seo
Title: Is There a Jump in the Transition?
Abstract:
This article develops a statistical test for the presence of a jump in an otherwise smooth transition process. In this testing, the null model is a threshold regression and the alternative model is a smooth transition model. We propose a quasi-Gaussian likelihood ratio statistic and provide its asymptotic distribution, which is defined as the maximum of a two-parameter Gaussian process with a nonzero bias term. Asymptotic critical values can be tabulated and depend on the transition function employed. A simulation method to compute empirical critical values is also developed. Finite-sample performance of the test is assessed via Monte Carlo simulations. The test is applied to investigate the dynamics of racial segregation within cities across the United States.
Journal: Journal of Business & Economic Statistics
Pages: 241-249
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2016.1164055
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1164055
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:241-249
Template-Type: ReDIF-Article 1.0
Author-Name: Steven N. Durlauf
Author-X-Name-First: Steven N.
Author-X-Name-Last: Durlauf
Author-Name: Andros Kourtellos
Author-X-Name-First: Andros
Author-X-Name-Last: Kourtellos
Author-Name: Chih Ming Tan
Author-X-Name-First: Chih Ming
Author-X-Name-Last: Tan
Title: Status Traps
Abstract:
In this article, we explore nonlinearities in the intergenerational mobility process using threshold regression models. We uncover evidence of threshold effects in children's outcomes based on parental education and cognitive and noncognitive skills as well as their interaction with offspring characteristics. We interpret these thresholds as organizing dynastic earning processes into “status traps.” Status traps, unlike poverty traps, are not absorbing states. Rather, they reduce the impact of favorable shocks for disadvantaged children and so inhibit upward mobility in ways not captured by linear models. Our evidence of status traps is based on three complementary datasets: the PSID, the NLSY, and U.S. administrative data at the commuting-zone level. Together, these suggest that the threshold-like mobility behavior we observe in the data is robust across a range of outcomes and contexts.
Journal: Journal of Business & Economic Statistics
Pages: 265-287
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2016.1189339
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1189339
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:265-287
Template-Type: ReDIF-Article 1.0
Author-Name: Kung-Sik Chan
Author-X-Name-First: Kung-Sik
Author-X-Name-Last: Chan
Author-Name: Bruce E. Hansen
Author-X-Name-First: Bruce E.
Author-X-Name-Last: Hansen
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Guest Editors’ Introduction: Regime Switching and Threshold Models
Journal: Journal of Business & Economic Statistics
Pages: 159-161
Issue: 2
Volume: 35
Year: 2017
Month: 4
X-DOI: 10.1080/07350015.2017.1236521
File-URL: http://hdl.handle.net/10.1080/07350015.2017.1236521
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:2:p:159-161
Template-Type: ReDIF-Article 1.0
Author-Name: Jiaying Gu
Author-X-Name-First: Jiaying
Author-X-Name-Last: Gu
Author-Name: Roger Koenker
Author-X-Name-First: Roger
Author-X-Name-Last: Koenker
Title: Unobserved Heterogeneity in Income Dynamics: An Empirical Bayes Perspective
Abstract:
Empirical Bayes methods for Gaussian compound decision problems involving longitudinal data are considered. The new convex optimization formulation of the nonparametric (Kiefer–Wolfowitz) maximum likelihood estimator for mixture models is employed to construct nonparametric Bayes rules for compound decisions. The methods are first illustrated with some simulation examples and then with an application to models of income dynamics. Using panel data, we estimate a simple dynamic model of earnings that incorporates bivariate heterogeneity in intercept and variance of the innovation process. Profile likelihood is employed to estimate an AR(1) parameter controlling the persistence of the innovations. We find that persistence is relatively modest, $\hat{\rho} \approx 0.48$, when we permit heterogeneity in variances. Evidence of negative dependence between individual intercepts and variances is revealed by the nonparametric estimation of the mixing distribution, and has important consequences for forecasting future income trajectories.
Journal: Journal of Business & Economic Statistics
Pages: 1-16
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1052457
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052457
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:1-16
Template-Type: ReDIF-Article 1.0
Author-Name: Joshua C. C. Chan
Author-X-Name-First: Joshua C. C.
Author-X-Name-Last: Chan
Title: The Stochastic Volatility in Mean Model With Time-Varying Parameters: An Application to Inflation Modeling
Abstract:
This article generalizes the popular stochastic volatility in mean model to allow for time-varying parameters in the conditional mean. The estimation of this extension is nontrivial since the volatility appears in both the conditional mean and the conditional variance, and its coefficient in the former is time-varying. We develop an efficient Markov chain Monte Carlo algorithm based on band and sparse matrix algorithms instead of the Kalman filter to estimate this more general variant. The methodology is illustrated with an application that involves U.S., U.K., and German inflation. The estimation results show substantial time variation in the coefficient associated with the volatility, highlighting the empirical relevance of the proposed extension. Moreover, in a pseudo out-of-sample forecasting exercise, the proposed variant also forecasts better than various standard benchmarks.
Journal: Journal of Business & Economic Statistics
Pages: 17-28
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1052459
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052459
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:17-28
Template-Type: ReDIF-Article 1.0
Author-Name: Chenxue Li
Author-X-Name-First: Chenxue
Author-X-Name-Last: Li
Author-Name: Deyuan Li
Author-X-Name-First: Deyuan
Author-X-Name-Last: Li
Author-Name: Liang Peng
Author-X-Name-First: Liang
Author-X-Name-Last: Peng
Title: Uniform Test for Predictive Regression With AR Errors
Abstract:
Testing predictability is of importance in economics and finance. Based on a predictive regression model with independent and identically distributed errors, some uniform tests have been proposed in the literature without distinguishing whether the predicting variable is stationary or nearly integrated. In this article, we extend the empirical likelihood methods of Zhu, Cai, and Peng with independent errors to the case of an AR error process. Again, the proposed new tests do not need to know whether the predicting variable is stationary or nearly integrated, or whether it has a finite or infinite variance. A simulation study shows the new methodologies perform well in finite samples.
Journal: Journal of Business & Economic Statistics
Pages: 29-39
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1052460
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052460
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:29-39
Template-Type: ReDIF-Article 1.0
Author-Name: Juan Carlos Escanciano
Author-X-Name-First: Juan Carlos
Author-X-Name-Last: Escanciano
Author-Name: Juan Carlos Pardo-Fernández
Author-X-Name-First: Juan Carlos
Author-X-Name-Last: Pardo-Fernández
Author-Name: Ingrid Van Keilegom
Author-X-Name-First: Ingrid
Author-X-Name-Last: Van Keilegom
Title: Semiparametric Estimation of Risk–Return Relationships
Abstract:
This article proposes semiparametric generalized least-squares estimation of parametric restrictions between the conditional mean and the conditional variance of excess returns given a set of parametric factors. A distinctive feature of our estimator is that it does not require a fully parametric model for the conditional mean and variance. We establish consistency and asymptotic normality of the estimates. The theory is nonstandard due to the presence of estimated factors. We provide sufficient conditions for the estimated factors not to have an impact on the asymptotic standard error of the estimators. A simulation study investigates the finite sample performance of the estimates. Finally, an application to the CRSP value-weighted excess returns highlights the merits of our approach. In contrast to most previous studies using nonparametric estimates, we find a positive and significant price of risk in our semiparametric setting.
Journal: Journal of Business & Economic Statistics
Pages: 40-52
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1052879
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052879
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:40-52
Template-Type: ReDIF-Article 1.0
Author-Name: Sílvia Gonçalves
Author-X-Name-First: Sílvia
Author-X-Name-Last: Gonçalves
Author-Name: Benoit Perron
Author-X-Name-First: Benoit
Author-X-Name-Last: Perron
Author-Name: Antoine Djogbenou
Author-X-Name-First: Antoine
Author-X-Name-Last: Djogbenou
Title: Bootstrap Prediction Intervals for Factor Models
Abstract:
We propose bootstrap prediction intervals for an observation h periods into the future and its conditional mean. We assume that these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat these factors as latent, our forecasts depend both on estimated factors and estimated regression coefficients. Under regularity conditions, asymptotic intervals have been shown to be valid under Gaussianity of the innovations. The bootstrap allows us to relax this assumption and to construct valid prediction intervals under more general conditions. Moreover, even under Gaussianity, the bootstrap leads to more accurate intervals in cases where the cross-sectional dimension is relatively small as it reduces the bias of the ordinary least-squares (OLS) estimator.
Journal: Journal of Business & Economic Statistics
Pages: 53-69
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1054492
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1054492
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:53-69
Template-Type: ReDIF-Article 1.0
Author-Name: Shih-Kang Chao
Author-X-Name-First: Shih-Kang
Author-X-Name-Last: Chao
Author-Name: Katharina Proksch
Author-X-Name-First: Katharina
Author-X-Name-Last: Proksch
Author-Name: Holger Dette
Author-X-Name-First: Holger
Author-X-Name-Last: Dette
Author-Name: Wolfgang Karl Härdle
Author-X-Name-First: Wolfgang Karl
Author-X-Name-Last: Härdle
Title: Confidence Corridors for Multivariate Generalized Quantile Regression
Abstract:
We focus on the construction of confidence corridors for multivariate nonparametric generalized quantile regression functions. This construction is based on asymptotic results for the maximal deviation between a suitable nonparametric estimator and the true function of interest, which follow after a series of approximation steps including a Bahadur representation, a new strong approximation theorem, and exponential tail inequalities for Gaussian random fields. As a byproduct we also obtain multivariate confidence corridors for the regression function in the classical mean regression. To deal with the problem of slowly decreasing error in coverage probability of the asymptotic confidence corridors, which results in meager coverage for small sample sizes, a simple bootstrap procedure is designed based on the leading term of the Bahadur representation. The finite-sample properties of both procedures are investigated by means of a simulation study and it is demonstrated that the bootstrap procedure considerably outperforms the asymptotic bands in terms of coverage accuracy. Finally, the bootstrap confidence corridors are used to study the efficacy of the National Supported Work Demonstration, which is a randomized employment enhancement program launched in the 1970s. This article has supplementary materials online.
Journal: Journal of Business & Economic Statistics
Pages: 70-85
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1054493
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1054493
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:70-85
Template-Type: ReDIF-Article 1.0
Author-Name: Jing Qin
Author-X-Name-First: Jing
Author-X-Name-Last: Qin
Author-Name: Biao Zhang
Author-X-Name-First: Biao
Author-X-Name-Last: Zhang
Author-Name: Denis H.Y. Leung
Author-X-Name-First: Denis H.Y.
Author-X-Name-Last: Leung
Title: Efficient Augmented Inverse Probability Weighted Estimation in Missing Data Problems
Abstract:
When analyzing data with missing values, a commonly used method is the inverse probability weighting (IPW) method, which reweights estimating equations with propensity scores. The popularity of the IPW method is due to its simplicity. However, it is often criticized for being inefficient because most of the information from the incomplete observations is not used. Alternatively, the regression method is known to be efficient but is nonrobust to the misspecification of the regression function. In this article, we propose a novel way of optimally combining the propensity score function and the regression model. The resulting estimating equation enjoys the properties of robustness against misspecification of either the propensity score or the regression function, as well as being locally semiparametric efficient. We demonstrate analytically situations where our method leads to a more efficient estimator than some of its competitors. In a simulation study, we show the new method compares favorably with its competitors in finite samples. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 86-97
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1058266
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1058266
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:86-97
Template-Type: ReDIF-Article 1.0
Author-Name: Weichi Wu
Author-X-Name-First: Weichi
Author-X-Name-Last: Wu
Author-Name: Zhou Zhou
Author-X-Name-First: Zhou
Author-X-Name-Last: Zhou
Title: Nonparametric Inference for Time-Varying Coefficient Quantile Regression
Abstract:
The article considers nonparametric inference for quantile regression models with time-varying coefficients. The errors and covariates of the regression are assumed to belong to a general class of locally stationary processes and are allowed to be cross-dependent. Simultaneous confidence tubes (SCTs) and integrated squared difference tests (ISDTs) are proposed for simultaneous nonparametric inference of the latter models with asymptotically correct coverage probabilities and Type I error rates. Our methodologies are shown to possess certain asymptotically optimal properties. Furthermore, we propose an information criterion that performs consistent model selection for nonparametric quantile regression models of nonstationary time series. For implementation, a wild bootstrap procedure is proposed, which is shown to be robust to the dependent and nonstationary data structure. Our method is applied to studying the asymmetric and time-varying dynamic structures of the U.S. unemployment rate since the 1940s. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 98-109
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1060884
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1060884
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:98-109
Template-Type: ReDIF-Article 1.0
Author-Name: Daniele Bianchi
Author-X-Name-First: Daniele
Author-X-Name-Last: Bianchi
Author-Name: Massimo Guidolin
Author-X-Name-First: Massimo
Author-X-Name-Last: Guidolin
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Title: Macroeconomic Factors Strike Back: A Bayesian Change-Point Model of Time-Varying Risk Exposures and Premia in the U.S. Cross-Section
Abstract:
This article proposes a Bayesian estimation framework for a typical multi-factor model with time-varying risk exposures to macroeconomic risk factors and corresponding premia to price U.S. publicly traded assets. The model assumes that risk exposures and idiosyncratic volatility follow a break-point latent process, allowing for changes at any point in time but not restricting them to change at all points. The empirical application to 40 years of U.S. data and 23 portfolios shows that the approach yields sensible results compared to previous two-step methods based on naive recursive estimation schemes, as well as a set of alternative model restrictions. A variance decomposition test shows that although most of the predictable variation comes from the market risk premium, a number of additional macroeconomic risks, including real output and inflation shocks, are significantly priced in the cross-section. A Bayes factor analysis massively favors the proposed change-point model. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 110-129
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1061436
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1061436
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:110-129
Template-Type: ReDIF-Article 1.0
Author-Name: Jing Zhou
Author-X-Name-First: Jing
Author-X-Name-Last: Zhou
Author-Name: Yundong Tu
Author-X-Name-First: Yundong
Author-X-Name-Last: Tu
Author-Name: Yuxin Chen
Author-X-Name-First: Yuxin
Author-X-Name-Last: Chen
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Estimating Spatial Autocorrelation With Sampled Network Data
Abstract:
Spatial autocorrelation is a parameter of importance for network data analysis. To estimate spatial autocorrelation, maximum likelihood has been popularly used. However, its rigorous implementation requires the whole network to be observed. This is practically infeasible if the network size is huge (e.g., Facebook, Twitter, Weibo, WeChat, etc.). In that case, one has to rely on sampled network data to infer the spatial autocorrelation. By doing so, network relationships (i.e., edges) involving unsampled nodes are overlooked. This leads to distorted network structure and underestimated spatial autocorrelation. To solve the problem, we propose here a novel solution. By temporarily assuming that the spatial autocorrelation is small, we are able to approximate the likelihood function by its first-order Taylor expansion. This leads to the method of the approximate maximum likelihood estimator (AMLE), which further inspires the development of the paired maximum likelihood estimator (PMLE). Compared with AMLE, PMLE is computationally superior and thus is particularly useful for large-scale network data analysis. Under appropriate regularity conditions (without assuming a small spatial autocorrelation), we show theoretically that PMLE is consistent and asymptotically normal. Numerical studies based on both simulated and real datasets are presented for illustration purposes.
Journal: Journal of Business & Economic Statistics
Pages: 130-138
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1061437
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1061437
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:130-138
Template-Type: ReDIF-Article 1.0
Author-Name: Dong Hwan Oh
Author-X-Name-First: Dong Hwan
Author-X-Name-Last: Oh
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Title: Modeling Dependence in High Dimensions With Factor Copulas
Abstract:
This article presents flexible new models for the dependence structure, or copula, of economic variables based on a latent factor structure. The proposed models are particularly attractive for relatively high-dimensional applications, involving 50 or more variables, and can be combined with semiparametric marginal distributions to obtain flexible multivariate distributions. Factor copulas generally lack a closed-form density, but we obtain analytical results for the implied tail dependence using extreme value theory, and we verify that simulation-based estimation using rank statistics is reliable even in high dimensions. We consider “scree” plots to aid the choice of the number of factors in the model. The model is applied to daily returns on all 100 constituents of the S&P 100 index, and we find significant evidence of tail dependence, heterogeneous dependence, and asymmetric dependence, with dependence being stronger in crashes than in booms. We also show that factor copula models provide superior estimates of some measures of systemic risk. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 139-154
Issue: 1
Volume: 35
Year: 2017
Month: 1
X-DOI: 10.1080/07350015.2015.1062384
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1062384
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:35:y:2017:i:1:p:139-154
Template-Type: ReDIF-Article 1.0
Author-Name: João D. F. Rodrigues
Author-X-Name-First: João D. F.
Author-X-Name-Last: Rodrigues
Title: Maximum-Entropy Prior Uncertainty and Correlation of Statistical Economic Data
Abstract:
Empirical estimates of source statistical economic data such as trade flows, greenhouse gas emissions, or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality) but information concerning that uncertainty is often missing. This article uses concepts from Bayesian inference and the maximum entropy principle to estimate the prior probability distribution, uncertainty, and correlations of source data when such information is not explicitly provided. In the absence of additional information, an isolated datum is described by a truncated Gaussian distribution, and if an uncertainty estimate is missing, its prior equals the best guess. When the sum of a set of disaggregate data is constrained to match an aggregate datum, it is possible to determine the prior correlations among disaggregate data. If aggregate uncertainty is missing, all prior correlations are positive. If aggregate uncertainty is available, prior correlations can be either all positive, all negative, or a mix of both. An empirical example is presented, which reports relative uncertainties and correlation priors for the County Business Patterns database. In this example, relative uncertainties range from 1% to 80% and 20% of data pairs exhibit correlations below −0.9 or above 0.9. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 357-367
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1038545
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1038545
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:357-367
Template-Type: ReDIF-Article 1.0
Author-Name: Danyang Huang
Author-X-Name-First: Danyang
Author-X-Name-Last: Huang
Author-Name: Jun Yin
Author-X-Name-First: Jun
Author-X-Name-Last: Yin
Author-Name: Tao Shi
Author-X-Name-First: Tao
Author-X-Name-Last: Shi
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: A Statistical Model for Social Network Labeling
Abstract:
We consider a social network from which one observes not only network structure (i.e., nodes and edges) but also a set of labels (or tags, keywords) for each node (or user). These labels are self-created and closely related to the user’s career status, life style, personal interests, and many others. Thus, they are of great interest for online marketing. To model their joint behavior with network structure, a complete data model is developed. The model is based on the classical p1 model but allows the reciprocation parameter to be label-dependent. By focusing on connected pairs only, the complete data model can be generalized into a conditional model. Compared with the complete data model, the conditional model specifies only the conditional likelihood for the connected pairs. As a result, it suffers less risk from model misspecification. Furthermore, because the conditional model involves connected pairs only, the computational cost is much lower. The resulting estimator is consistent and asymptotically normal. Depending on the network sparsity level, the convergence rate could be different. To demonstrate its finite sample performance, numerical studies (based on both simulated and real datasets) are presented.
Journal: Journal of Business & Economic Statistics
Pages: 368-374
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1039014
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1039014
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:368-374
Template-Type: ReDIF-Article 1.0
Author-Name: Andrea Carriero
Author-X-Name-First: Andrea
Author-X-Name-Last: Carriero
Author-Name: Todd E. Clark
Author-X-Name-First: Todd E.
Author-X-Name-Last: Clark
Author-Name: Massimiliano Marcellino
Author-X-Name-First: Massimiliano
Author-X-Name-Last: Marcellino
Title: Common Drifting Volatility in Large Bayesian VARs
Abstract:
The general pattern of estimated volatilities of macroeconomic and financial variables is often broadly similar. We propose two models in which conditional volatilities feature comovement and study them using U.S. macroeconomic data. The first model specifies the conditional volatilities as driven by a single common unobserved factor, plus an idiosyncratic component. We label this model BVAR with general factor stochastic volatility (BVAR-GFSV) and we show that the loss in terms of marginal likelihood from assuming a common factor for volatility is moderate. The second model, which we label BVAR with common stochastic volatility (BVAR-CSV), is a special case of the BVAR-GFSV in which the idiosyncratic component is eliminated and the loadings to the factor are set to 1 for all the conditional volatilities. Such restrictions permit a convenient Kronecker structure for the posterior variance of the VAR coefficients, which in turn permits estimating the model even with large datasets. While perhaps misspecified, the BVAR-CSV model is strongly supported by the data when compared against standard homoscedastic BVARs, and it can produce relatively good point and density forecasts by taking advantage of the information contained in large datasets.
Journal: Journal of Business & Economic Statistics
Pages: 375-390
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1040116
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1040116
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:375-390
Template-Type: ReDIF-Article 1.0
Author-Name: Daniel Bauer
Author-X-Name-First: Daniel
Author-X-Name-Last: Bauer
Author-Name: Florian Kramer
Author-X-Name-First: Florian
Author-X-Name-Last: Kramer
Title: The Risk of a Mortality Catastrophe
Abstract:
We develop a continuous-time model for analyzing and valuing catastrophe mortality contingent claims based on stochastic modeling of the force of mortality. We derive parameter estimates from a 105-year time series of U.S. population mortality data using a simulated maximum likelihood approach based on a particle filter. Relying on the resulting parameters, we calculate loss profiles for a representative catastrophe mortality transaction and compare them to the “official” loss profiles that are provided by the issuers to investors and rating agencies. We find that although the loss profiles are subject to great uncertainties, the official figures fall significantly below the corresponding risk statistics based on our model. In particular, we find that the annualized incidence probability of a mortality catastrophe, defined as a 15% increase in aggregated mortality probabilities, is about 1.4%—compared to about 0.1% according to the official loss profiles.
Journal: Journal of Business & Economic Statistics
Pages: 391-405
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1040117
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1040117
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:391-405
Template-Type: ReDIF-Article 1.0
Author-Name: Zacharias Psaradakis
Author-X-Name-First: Zacharias
Author-X-Name-Last: Psaradakis
Title: Using the Bootstrap to Test for Symmetry Under Unknown Dependence
Abstract:
This article considers tests for symmetry of the one-dimensional marginal distribution of fractionally integrated processes. The tests are implemented by using an autoregressive sieve bootstrap approximation to the null sampling distribution of the relevant test statistics. The sieve bootstrap allows inference on symmetry to be carried out without knowledge of either the memory parameter of the data or of the appropriate norming factor for the test statistic and its asymptotic distribution. The small-sample properties of the proposed method are examined by means of Monte Carlo experiments, and applications to real-world data are also presented.
Journal: Journal of Business & Economic Statistics
Pages: 406-415
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1043368
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1043368
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:406-415
Template-Type: ReDIF-Article 1.0
Author-Name: Michael S. Smith
Author-X-Name-First: Michael S.
Author-X-Name-Last: Smith
Author-Name: Shaun P. Vahey
Author-X-Name-First: Shaun P.
Author-X-Name-Last: Vahey
Title: Asymmetric Forecast Densities for U.S. Macroeconomic Variables from a Gaussian Copula Model of Cross-Sectional and Serial Dependence
Abstract:
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.
Journal: Journal of Business & Economic Statistics
Pages: 416-434
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1044533
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1044533
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:416-434
Template-Type: ReDIF-Article 1.0
Author-Name: Alois Kneip
Author-X-Name-First: Alois
Author-X-Name-Last: Kneip
Author-Name: Léopold Simar
Author-X-Name-First: Léopold
Author-X-Name-Last: Simar
Author-Name: Paul W. Wilson
Author-X-Name-First: Paul W.
Author-X-Name-Last: Wilson
Title: Testing Hypotheses in Nonparametric Models of Production
Abstract:
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods have existed for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 435-456
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1049747
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1049747
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:435-456
Template-Type: ReDIF-Article 1.0
Author-Name: Yoonseok Lee
Author-X-Name-First: Yoonseok
Author-X-Name-Last: Lee
Author-Name: Donggyun Shin
Author-X-Name-First: Donggyun
Author-X-Name-Last: Shin
Title: Measuring Social Tension from Income Class Segregation
Abstract:
We develop an index that effectively measures the level of social tension generated by income class segregation. We adopt the basic concepts of between-group difference (or alienation) and within-group similarity (or identification) from the income [bi]polarization literature; but we allow for asymmetric degrees of between-group antagonism in the alienation function, and construct a more effective identification function using both the relative degree of within-group clustering and the group size. To facilitate statistical inference, we derive the asymptotic distribution of the proposed measure using results from U-statistics. As the new measure is general enough to include existing income polarization indices as well as the Gini index as special cases, the asymptotic result can be readily applied to these popular indices. Evidence from the Panel Study of Income Dynamics data suggests that, while the level of social tension shows an upward trend over the sample period of 1981 to 2005, the government’s taxes and transfers have been effective in reducing the level of social tension significantly.
Journal: Journal of Business & Economic Statistics
Pages: 457-471
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1051624
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1051624
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:457-471
Template-Type: ReDIF-Article 1.0
Author-Name: Laura Coroneo
Author-X-Name-First: Laura
Author-X-Name-Last: Coroneo
Author-Name: Domenico Giannone
Author-X-Name-First: Domenico
Author-X-Name-Last: Giannone
Author-Name: Michele Modugno
Author-X-Name-First: Michele
Author-X-Name-Last: Modugno
Title: Unspanned Macroeconomic Factors in the Yield Curve
Abstract:
In this article, we extract common factors from a cross-section of U.S. macro-variables and Treasury zero-coupon yields. We find that two macroeconomic factors have an important predictive content for government bond yields and excess returns. These factors are not spanned by the cross-section of yields and are well proxied by economic growth and real interest rates.
Journal: Journal of Business & Economic Statistics
Pages: 472-485
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2015.1052456
File-URL: http://hdl.handle.net/10.1080/07350015.2015.1052456
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:472-485
Template-Type: ReDIF-Article 1.0
Author-Name: Marine Carrasco
Author-X-Name-First: Marine
Author-X-Name-Last: Carrasco
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Title: In-Sample Inference and Forecasting in Misspecified Factor Models
Abstract:
This article considers in-sample prediction and out-of-sample forecasting in regressions with many exogenous predictors. We consider four dimension-reduction devices: principal components, ridge, Landweber–Fridman, and partial least squares. We derive rates of convergence for two representative models: an ill-posed model and an approximate factor model. The theory is developed for a large cross-section and a large time-series. As all these methods depend on a tuning parameter to be selected, we also propose data-driven selection methods based on cross-validation and establish their optimality. Monte Carlo simulations and an empirical application to forecasting inflation and output growth in the U.S. show that data-reduction methods outperform conventional methods in several relevant settings, and might effectively guard against instabilities in predictors’ forecasting ability.
Journal: Journal of Business & Economic Statistics
Pages: 313-338
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2016.1186029
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1186029
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:313-338
Template-Type: ReDIF-Article 1.0
Author-Name: James H. Stock
Author-X-Name-First: James H.
Author-X-Name-Last: Stock
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 339-341
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2016.1186030
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1186030
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:339-341
Template-Type: ReDIF-Article 1.0
Author-Name: Norman R. Swanson
Author-X-Name-First: Norman R.
Author-X-Name-Last: Swanson
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 348-353
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2016.1186554
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1186554
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:348-353
Template-Type: ReDIF-Article 1.0
Author-Name: Xu Cheng
Author-X-Name-First: Xu
Author-X-Name-Last: Cheng
Author-Name: Bruce E. Hansen
Author-X-Name-First: Bruce E.
Author-X-Name-Last: Hansen
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 345-347
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2016.1189338
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1189338
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:345-347
Template-Type: ReDIF-Article 1.0
Author-Name: Domenico Giannone
Author-X-Name-First: Domenico
Author-X-Name-Last: Giannone
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 342-344
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2016.1190280
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1190280
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:342-344
Template-Type: ReDIF-Article 1.0
Author-Name: Marine Carrasco
Author-X-Name-First: Marine
Author-X-Name-Last: Carrasco
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Title: Rejoinder: In-Sample Inference and Forecasting in Misspecified Factor Models
Journal: Journal of Business & Economic Statistics
Pages: 353-356
Issue: 3
Volume: 34
Year: 2016
Month: 7
X-DOI: 10.1080/07350015.2016.1191500
File-URL: http://hdl.handle.net/10.1080/07350015.2016.1191500
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:34:y:2016:i:3:p:353-356
Template-Type: ReDIF-Article 1.0
Author-Name: Christian Conrad
Author-X-Name-First: Christian
Author-X-Name-Last: Conrad
Author-Name: Melanie Schienle
Author-X-Name-First: Melanie
Author-X-Name-Last: Schienle
Title: Testing for an Omitted Multiplicative Long-Term Component in GARCH Models
Abstract:
We consider the problem of testing for an omitted multiplicative long-term component in GARCH-type models. Under the alternative, there is a two-component model with a short-term GARCH component that fluctuates around a smoothly time-varying long-term component which is driven by the dynamics of an explanatory variable. We suggest a Lagrange multiplier statistic for testing the null hypothesis that the variable has no explanatory power. We derive the asymptotic theory for our test statistic and investigate its finite sample properties by Monte Carlo simulation. Our test also covers the mixed-frequency case in which the returns are observed at a higher frequency than the explanatory variable. The usefulness of our procedure is illustrated by empirical applications to S&P 500 return data. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 229-242
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1482759
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1482759
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:229-242
Template-Type: ReDIF-Article 1.0
Author-Name: Yi-Ting Chen
Author-X-Name-First: Yi-Ting
Author-X-Name-Last: Chen
Author-Name: Yu-Chin Hsu
Author-X-Name-First: Yu-Chin
Author-X-Name-Last: Hsu
Author-Name: Hung-Jen Wang
Author-X-Name-First: Hung-Jen
Author-X-Name-Last: Wang
Title: A Stochastic Frontier Model with Endogenous Treatment Status and Mediator
Abstract:
Government policies are frequently used to promote productivity. Some policies are designed to enhance production technology, while others are meant to improve production efficiency. An important issue to consider when designing and evaluating policies is whether a mediator is required or effective in achieving the desired final outcome. To better understand and evaluate the policies, we propose a new stochastic frontier model with a treatment status and a mediator, both of which are allowed to be endogenous. The model allows us to decompose the total program (treatment) effect into technology and efficiency components, and to investigate whether the effect is derived directly from the program or indirectly through a particular mediator. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 243-256
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1497504
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497504
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:243-256
Template-Type: ReDIF-Article 1.0
Author-Name: Aiste Ruseckaite
Author-X-Name-First: Aiste
Author-X-Name-Last: Ruseckaite
Author-Name: Dennis Fok
Author-X-Name-First: Dennis
Author-X-Name-Last: Fok
Author-Name: Peter Goos
Author-X-Name-First: Peter
Author-X-Name-Last: Goos
Title: Flexible Mixture-Amount Models Using Multivariate Gaussian Processes
Abstract:
Many products and services can be described as mixtures of components whose proportions sum to one. Specialized models have been developed for relating the mixture component proportions to response variables, such as the preference, quality, and liking of products. If only the mixture component proportions affect the response variable, mixture models suffice to analyze the data. In case the total amount of the mixture also affects the response variable, mixture-amount models are needed. The current strategy for mixture-amount models is to express the response in terms of the mixture component proportions and subsequently specify the corresponding parameters as parametric functions of the amount. Specifying the functional form for these parameters may not be straightforward, and using a flexible functional form usually comes at the cost of a large number of parameters. In this article, we present a new modeling approach that is flexible, but parsimonious in the number of parameters. This new approach uses multivariate Gaussian processes and avoids the necessity to a priori specify the nature of the dependence of the mixture model parameters on the amount of the mixture. We show that this model encompasses two commonly used model specifications as extreme cases. We consider two applications and demonstrate that the new model outperforms standard models for mixture-amount data.
Journal: Journal of Business & Economic Statistics
Pages: 257-271
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1497506
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497506
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:257-271
Template-Type: ReDIF-Article 1.0
Author-Name: Jean Boivin
Author-X-Name-First: Jean
Author-X-Name-Last: Boivin
Author-Name: Marc P. Giannoni
Author-X-Name-First: Marc P.
Author-X-Name-Last: Giannoni
Author-Name: Dalibor Stevanović
Author-X-Name-First: Dalibor
Author-X-Name-Last: Stevanović
Title: Dynamic Effects of Credit Shocks in a Data-Rich Environment
Abstract:
We examine the dynamic effects of credit shocks using a large dataset of U.S. economic and financial indicators in a structural factor model. An identified credit shock resulting in an unanticipated increase in credit spreads causes a large and persistent downturn in indicators of real economic activity, labor market conditions, expectations of future economic conditions, a gradual decline in aggregate price indices, and a decrease in short- and longer-term riskless interest rates. Our identification procedure allows us to perform counterfactual experiments which suggest that credit spread shocks have largely contributed to the deterioration in economic conditions during the Great Recession. Recursive estimation of the model reveals relevant instabilities since 2007 and provides further evidence that monetary policy has partly offset the effects of credit shocks on economic activity. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 272-284
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1497507
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497507
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:272-284
Template-Type: ReDIF-Article 1.0
Author-Name: Pierre Guérin
Author-X-Name-First: Pierre
Author-X-Name-Last: Guérin
Author-Name: Danilo Leiva-Leon
Author-X-Name-First: Danilo
Author-X-Name-Last: Leiva-Leon
Author-Name: Massimiliano Marcellino
Author-X-Name-First: Massimiliano
Author-X-Name-Last: Marcellino
Title: Markov-Switching Three-Pass Regression Filter
Abstract:
We introduce a new approach for the estimation of high-dimensional factor models with regime-switching factor loadings by extending the linear three-pass regression filter to settings where parameters can vary according to Markov processes. The new method, denoted as Markov-switching three-pass regression filter (MS-3PRF), is suitable for datasets with large cross-sectional dimensions, since estimation and inference are straightforward, as opposed to existing regime-switching factor models where computational complexity limits applicability to few variables. In a Monte Carlo experiment, we study the finite sample properties of the MS-3PRF and find that it performs favorably compared with alternative modeling approaches whenever there is structural instability in factor loadings. For empirical applications, we consider forecasting economic activity and bilateral exchange rates, finding that the MS-3PRF approach is competitive in both cases. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 285-302
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1497508
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497508
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:285-302
Template-Type: ReDIF-Article 1.0
Author-Name: Bryan S. Graham
Author-X-Name-First: Bryan S.
Author-X-Name-Last: Graham
Author-Name: Guido W. Imbens
Author-X-Name-First: Guido W.
Author-X-Name-Last: Imbens
Author-Name: Geert Ridder
Author-X-Name-First: Geert
Author-X-Name-Last: Ridder
Title: Identification and Efficiency Bounds for the Average Match Function Under Conditionally Exogenous Matching
Abstract:
Consider two heterogeneous populations of agents who, when matched, jointly produce an output, Y. For example, teachers and classrooms of students together produce achievement, parents raise children, whose life outcomes vary in adulthood, assembly plant managers and workers produce a certain number of cars per month, and lieutenants and their platoons vary in unit effectiveness. Let W ∈ 𝕎 = {w1, …, wJ} and X ∈ 𝕏 = {x1, …, xK} denote agent types in the two populations. Consider the following matching mechanism: take a random draw from the W = wj subgroup of the first population and match her with an independent random draw from the X = xk subgroup of the second population. Let β(wj, xk), the average match function (AMF), denote the expected output associated with this match. We show that (i) the AMF is identified when matching is conditionally exogenous and (ii) conditionally exogenous matching is compatible with a pairwise stable aggregate matching equilibrium under specific informational assumptions, and we (iii) calculate the AMF’s semiparametric efficiency bound.
Journal: Journal of Business & Economic Statistics
Pages: 303-316
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1497509
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497509
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:303-316
Template-Type: ReDIF-Article 1.0
Author-Name: David A. Jaeger
Author-X-Name-First: David A.
Author-X-Name-Last: Jaeger
Author-Name: Theodore J. Joyce
Author-X-Name-First: Theodore J.
Author-X-Name-Last: Joyce
Author-Name: Robert Kaestner
Author-X-Name-First: Robert
Author-X-Name-Last: Kaestner
Title: A Cautionary Tale of Evaluating Identifying Assumptions: Did Reality TV Really Cause a Decline in Teenage Childbearing?
Abstract:
Evaluating policy changes that occur everywhere at the same time is difficult because of the lack of a clear counterfactual. Hoping to address this problem, researchers often proxy for differential exposure using some observed characteristic in the pretreatment period. As a cautionary tale of how difficult identification is in such settings, we re-examine the results of an influential paper by Melissa Kearney and Phillip Levine, who found that the MTV program 16 and Pregnant had a substantial impact on teen birth rates. In what amounts to a difference-in-differences approach, they use the pretreatment levels of MTV viewership across media markets as an instrument. We show that controlling for differential time trends in birth rates by a market's pretreatment racial/ethnic composition or unemployment rate causes Kearney and Levine's results to disappear, invalidating the parallel trends assumption necessary for a causal interpretation. Extending the pretreatment period and estimating placebo tests, we find evidence of an “effect” long before 16 and Pregnant started broadcasting. Our results highlight the difficulty of drawing causal inferences from national point-in-time policy changes.
Journal: Journal of Business & Economic Statistics
Pages: 317-326
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1497510
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1497510
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:317-326
Template-Type: ReDIF-Article 1.0
Author-Name: Alejandro Bernales
Author-X-Name-First: Alejandro
Author-X-Name-Last: Bernales
Author-Name: Gonzalo Cortazar
Author-X-Name-First: Gonzalo
Author-X-Name-Last: Cortazar
Author-Name: Luka Salamunic
Author-X-Name-First: Luka
Author-X-Name-Last: Salamunic
Author-Name: George Skiadopoulos
Author-X-Name-First: George
Author-X-Name-Last: Skiadopoulos
Title: Learning and Index Option Returns
Abstract:
Little is known about the economic sources that may generate the abnormal returns observed in put index options. We show that the learning process followed by investors may be one such source. We develop an equilibrium model under partial information in which a rational Bayesian learner prices put option contracts. Our model generates put option returns similar to the empirical returns of S&P 500 put index options. This result is not obtained when we analyze alternative setups of the model in which no learning process exists.
Journal: Journal of Business & Economic Statistics
Pages: 327-339
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1505629
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1505629
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:327-339
Template-Type: ReDIF-Article 1.0
Author-Name: Marco Barassi
Author-X-Name-First: Marco
Author-X-Name-Last: Barassi
Author-Name: Lajos Horváth
Author-X-Name-First: Lajos
Author-X-Name-Last: Horváth
Author-Name: Yuqian Zhao
Author-X-Name-First: Yuqian
Author-X-Name-Last: Zhao
Title: Change-Point Detection in the Conditional Correlation Structure of Multivariate Volatility Models
Abstract:
We propose semiparametric CUSUM tests to detect a change-point in the correlation structures of nonlinear multivariate models with dynamically evolving volatilities. The asymptotic distributions of the proposed statistics are derived under mild conditions. We discuss the applicability of our method to the most often used models, including constant conditional correlation (CCC), dynamic conditional correlation (DCC), BEKK, corrected DCC, and factor models. Our simulations show that our tests have good size and power properties. Also, even though the near-unit root property distorts the size and power of tests, de-volatizing the data by means of appropriate multivariate volatility models can correct such distortions. We apply the semiparametric CUSUM tests in an attempt to date the occurrence of financial contagion from the US to emerging markets worldwide during the great recession. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 340-349
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1505630
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1505630
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:340-349
Template-Type: ReDIF-Article 1.0
Author-Name: Dante Amengual
Author-X-Name-First: Dante
Author-X-Name-Last: Amengual
Author-Name: Enrique Sentana
Author-X-Name-First: Enrique
Author-X-Name-Last: Sentana
Title: Is a Normal Copula the Right Copula?
Abstract:
We derive computationally simple and intuitive expressions for score tests of Gaussian copulas against generalized hyperbolic alternatives, including symmetric and asymmetric Student t, and many other examples. We decompose our tests into third and fourth moment components, and obtain one-sided Likelihood Ratio analogs, whose standard asymptotic distribution we provide. Our Monte Carlo exercises confirm the reliable size of parametric bootstrap versions of our tests, and their substantial power gains over alternative procedures. In an empirical application to CRSP stocks, we find that short-term reversals and momentum effects are better captured by non-Gaussian copulas, whose parameters we estimate by indirect inference. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 350-366
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1505631
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1505631
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:350-366
Template-Type: ReDIF-Article 1.0
Author-Name: Minchul Shin
Author-X-Name-First: Minchul
Author-X-Name-Last: Shin
Author-Name: Molin Zhong
Author-X-Name-First: Molin
Author-X-Name-Last: Zhong
Title: A New Approach to Identifying the Real Effects of Uncertainty Shocks
Abstract:
This article introduces the use of the sign restrictions methodology to identify uncertainty shocks. We apply our methodology to a class of vector autoregression models with stochastic volatility that allow volatility fluctuations to impact the conditional mean. We combine sign restrictions on the conditional mean and conditional second moment impulse responses to identify financial and macro uncertainty shocks. On U.S. data, we find stronger evidence that financial uncertainty shocks lead to a decline in real activity and an easing of the federal funds rate relative to macro uncertainty shocks. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 367-379
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1506342
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1506342
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:367-379
Template-Type: ReDIF-Article 1.0
Author-Name: Carsten Bormann
Author-X-Name-First: Carsten
Author-X-Name-Last: Bormann
Author-Name: Melanie Schienle
Author-X-Name-First: Melanie
Author-X-Name-Last: Schienle
Title: Detecting Structural Differences in Tail Dependence of Financial Time Series
Abstract:
An accurate assessment of tail inequalities and tail asymmetries of financial returns is key for risk management and portfolio allocation. We propose a new test procedure for detecting the full extent of such structural differences in the dependence of bivariate extreme returns. We decompose the testing problem into piecewise multiple comparisons of Cramér–von Mises distances of tail copulas. In this way, tail regions that cause differences in extreme dependence can be located and consequently be targeted by financial strategies. We derive the asymptotic properties of the test and provide a bootstrap approximation for finite samples. Moreover, we account for the multiplicity of the piecewise tail copula comparisons by adjusting individual p-values according to multiple testing techniques. Monte Carlo simulations demonstrate the test’s superior finite-sample properties for common financial tail risk models, both in the iid and the sequentially dependent case. During the last 90 years in U.S. stock markets, our test detects up to 20% more tail asymmetries than competing tests. This can be attributed to the presence of nonstandard tail dependence structures. We also find evidence for diminishing tail asymmetries during every major financial crisis—except for the 2007–2009 crisis—reflecting a risk-return trade-off for extreme returns. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 380-392
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1506343
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1506343
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:380-392
Template-Type: ReDIF-Article 1.0
Author-Name: Leif Anders Thorsrud
Author-X-Name-First: Leif Anders
Author-X-Name-Last: Thorsrud
Title: Words are the New Numbers: A Newsy Coincident Index of the Business Cycle
Abstract:
I construct a daily business cycle index based on quarterly GDP growth and textual information contained in a daily business newspaper. The newspaper data are decomposed into time series representing news topics, while the business cycle index is estimated using the topics and a time-varying dynamic factor model where dynamic sparsity is enforced upon the factor loadings using a latent threshold mechanism. The resulting index classifies the phases of the business cycle with almost perfect accuracy and provides broad-based high-frequency information about the type of news that drive or reflect economic fluctuations. In out-of-sample nowcasting experiments, the model is competitive with forecast combination systems and expert judgment, and produces forecasts with predictive power for future revisions in GDP. Thus, news reduces noise. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 393-409
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1506344
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1506344
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:393-409
Template-Type: ReDIF-Article 1.0
Author-Name: Jérôme Lahaye
Author-X-Name-First: Jérôme
Author-X-Name-Last: Lahaye
Author-Name: Christopher Neely
Author-X-Name-First: Christopher
Author-X-Name-Last: Neely
Title: The Role of Jumps in Volatility Spillovers in Foreign Exchange Markets: Meteor Shower and Heat Waves Revisited
Abstract:
This article extends the literature on geographic (heat waves) and intertemporal (meteor showers) foreign exchange volatility transmission to characterize the role of jumps and cross-rate propagation. We employ multivariate heterogeneous autoregressive (HAR) models to capture the quasi-long memory properties of volatility and both Shapley–Owen R2’s and portfolio optimization exercises to quantify the contributions of information sets. We conclude that meteor showers (MS) are substantially more influential than heat waves (HW), that jumps play a modest but significant role in volatility transmission, that cross-market propagation of volatility is important, and that allowing for differential HW and MS effects and differential parameters across intraday market segments is valuable. Finally, we illustrate what types of news weaken or strengthen heat wave, meteor shower, continuous, and jump patterns with sensitivity analysis. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 410-427
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1512865
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1512865
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:410-427
Template-Type: ReDIF-Article 1.0
Author-Name: M. Hashem Pesaran
Author-X-Name-First: M. Hashem
Author-X-Name-Last: Pesaran
Author-Name: Ida Johnsson
Author-X-Name-First: Ida
Author-X-Name-Last: Johnsson
Title: Double-Question Survey Measures for the Analysis of Financial Bubbles and Crashes
Abstract:
This article proposes a new double-question survey whereby an individual is presented with two sets of questions; one on beliefs about current asset values and another on price expectations. A theoretical asset pricing model with heterogeneous agents is advanced and the existence of a negative relationship between price expectations and asset valuations is established, and is then tested using survey results on equity, gold, and house prices. Leading indicators of bubbles and crashes are proposed and their potential value is illustrated in the context of a dynamic panel regression of realized house price changes across key Metropolitan Statistical Areas (MSAs) in the U.S. In an out-of-sample forecasting exercise, it is also shown that forecasts of house price changes (pooled across MSAs) that make use of bubble and crash indicators perform significantly better than a benchmark model that only uses lagged and expected house price changes. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 428-442
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1513845
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1513845
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:428-442
Template-Type: ReDIF-Article 1.0
Author-Name: Kaspar Wüthrich
Author-X-Name-First: Kaspar
Author-X-Name-Last: Wüthrich
Title: A Comparison of Two Quantile Models With Endogeneity
Abstract:
This article studies the relationship between the two most-used quantile models with endogeneity: the instrumental variable quantile regression (IVQR) model (Chernozhukov and Hansen 2005) and the local quantile treatment effects (LQTE) model (Abadie, Angrist, and Imbens 2002). The key condition of the IVQR model is the rank similarity assumption, a restriction on the evolution of individual ranks across treatment states, under which population quantile treatment effects (QTE) are identified. By contrast, the LQTE model achieves identification through a monotonicity assumption on the selection equation but only identifies QTE for the subpopulation of compliers. This article shows that, despite these differences, there is a close connection between both models: (i) the IVQR estimands correspond to QTE for the compliers at transformed quantile levels and (ii) the IVQR estimand of the average treatment effect is equal to a convex combination of the local average treatment effect and a weighted average of integrated QTE for the compliers. These results do not rely on the rank similarity assumption and therefore provide a characterization of IVQR in settings where this key condition is violated. Underpinning the analysis are novel closed-form representations of the IVQR estimands. I illustrate the theoretical results with two empirical applications.
Journal: Journal of Business & Economic Statistics
Pages: 443-456
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1514307
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1514307
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:443-456
Template-Type: ReDIF-Article 1.0
Author-Name: Dean R. Hyslop
Author-X-Name-First: Dean R.
Author-X-Name-Last: Hyslop
Author-Name: Wilbur Townsend
Author-X-Name-First: Wilbur
Author-X-Name-Last: Townsend
Title: Earnings Dynamics and Measurement Error in Matched Survey and Administrative Data
Abstract:
This article analyzes earnings dynamics and measurement error using a matched longitudinal sample of individuals’ survey and administrative earnings. In line with previous literature, the reported differences are characterized by both persistent and transitory factors. Estimating a model consistent with past results, survey errors are mean-reverting when administrative reports are assumed correct, but not when this assumption is relaxed. Although most reported earnings variation is true, we conclude that measurement errors dominate observed changes, and that transitory earnings contribute little to overall earnings inequality. The results imply the reliability of matched administrative data should be treated with caution.
Journal: Journal of Business & Economic Statistics
Pages: 457-469
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1514308
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1514308
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:457-469
Template-Type: ReDIF-Article 1.0
Author-Name: Rubén Loaiza-Maya
Author-X-Name-First: Rubén
Author-X-Name-Last: Loaiza-Maya
Author-Name: Michael Stanley Smith
Author-X-Name-First: Michael Stanley
Author-X-Name-Last: Smith
Title: Real-Time Macroeconomic Forecasting With a Heteroscedastic Inversion Copula
Abstract:
There is a growing interest in allowing for asymmetry in the density forecasts of macroeconomic variables. In multivariate time series, this can be achieved with a copula model, where both serial and cross-sectional dependence is captured by a copula function, and the margins are nonparametric. Yet most existing copulas cannot capture heteroscedasticity well, which is a feature of many economic and financial time series. To do so, we propose a new copula created by the inversion of a multivariate unobserved component stochastic volatility model, and show how to estimate it using Bayesian methods. We fit the copula model to real-time data on five quarterly U.S. economic and financial variables. The copula model captures heteroscedasticity, dependence in the level, time-variation in higher moments, bounds on variables and other features. Over the window 1975Q1–2016Q2, the real-time density forecasts of all the macroeconomic variables exhibit time-varying asymmetry. In particular, forecasts of GDP growth have increased negative skew during recessions. The point and density forecasts from the copula model are competitive with those from benchmark models—particularly for inflation, a short-term interest rate and current quarter GDP growth. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 470-486
Issue: 2
Volume: 38
Year: 2020
Month: 4
X-DOI: 10.1080/07350015.2018.1514309
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1514309
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:2:p:470-486
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Lin
Author-X-Name-First: Wei
Author-X-Name-Last: Lin
Author-Name: Jianhua Z. Huang
Author-X-Name-First: Jianhua Z.
Author-X-Name-Last: Huang
Author-Name: Tucker McElroy
Author-X-Name-First: Tucker
Author-X-Name-Last: McElroy
Title: Time Series Seasonal Adjustment Using Regularized Singular Value Decomposition
Abstract:
We propose a new seasonal adjustment method based on the Regularized Singular Value Decomposition (RSVD) of the matrix obtained by reshaping the seasonal time series data. The method is flexible enough to capture two kinds of seasonality: the fixed seasonality that does not change over time and the time-varying seasonality that varies from one season to another. RSVD represents the time-varying seasonality by a linear combination of several seasonal patterns. The right singular vectors capture multiple seasonal patterns, and the corresponding left singular vectors capture the magnitudes of those seasonal patterns and how they change over time. By assuming the time-varying seasonal patterns change smoothly over time, the RSVD uses penalized least squares with a roughness penalty to effectively extract the left singular vectors. The proposed method applies to seasonal time-series data with a stationary or nonstationary nonseasonal component. The method also has a variant that can handle the case that an abrupt change (i.e., break) may occur in the magnitudes of seasonal patterns. Our proposed method compares favorably with the state-of-the-art X-13ARIMA-SEATS program on both simulated and real data examples.
Journal: Journal of Business & Economic Statistics
Pages: 487-501
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1515081
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1515081
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:487-501
Template-Type: ReDIF-Article 1.0
Author-Name: Gordon Anderson
Author-X-Name-First: Gordon
Author-X-Name-Last: Anderson
Author-Name: Thierry Post
Author-X-Name-First: Thierry
Author-X-Name-Last: Post
Author-Name: Yoon-Jae Whang
Author-X-Name-First: Yoon-Jae
Author-X-Name-Last: Whang
Title: Somewhere Between Utopia and Dystopia: Choosing From Multiple Incomparable Prospects
Abstract:
In many fields of decision making, choices have to be made from multiple alternatives, but stochastic dominance rules do not yield a complete ordering due to incomparability of some or all of the prospects. For ranking incomparable prospects, a “Utopia Index” measuring the proximity to a lower envelope of integrated distribution functions is proposed. Economic interpretations in terms of Expected Utility are provided for the envelope and deviations from it. The analysis generalizes the existing Almost Stochastic Dominance concept from pairwise comparison to a joint analysis of an arbitrary number of prospects. The limit distribution for the empirical counterpart of the index for a general class of dynamic processes is derived together with a consistent and feasible inference procedure based on subsampling techniques. Empirical applications to Chinese household income data and historical investment returns data show that, in every choice set, a single prospect is ranked above all alternatives at conventional significance levels, despite the incomparability problem.
Journal: Journal of Business & Economic Statistics
Pages: 502-515
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1515765
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1515765
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:502-515
Template-Type: ReDIF-Article 1.0
Author-Name: William C. Horrace
Author-X-Name-First: William C.
Author-X-Name-Last: Horrace
Author-Name: Ian A. Wright
Author-X-Name-First: Ian A.
Author-X-Name-Last: Wright
Title: Stationary Points for Parametric Stochastic Frontier Models
Abstract:
Stationary point results on the normal–half-normal stochastic frontier model are generalized using the theory of the Dirac delta, and distribution-free conditions are established to ensure a stationary point in the likelihood as the variance of the inefficiency distribution goes to zero. Stability of the stationary point and “wrong skew” results are derived or simulated for common parametric assumptions on the model. We discuss identification and extensions to more general stochastic frontier models.
Journal: Journal of Business & Economic Statistics
Pages: 516-526
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1526088
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1526088
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:516-526
Template-Type: ReDIF-Article 1.0
Author-Name: Carlo A. Favero
Author-X-Name-First: Carlo A.
Author-X-Name-Last: Favero
Author-Name: Fulvio Ortu
Author-X-Name-First: Fulvio
Author-X-Name-Last: Ortu
Author-Name: Andrea Tamoni
Author-X-Name-First: Andrea
Author-X-Name-Last: Tamoni
Author-Name: Haoxi Yang
Author-X-Name-First: Haoxi
Author-X-Name-Last: Yang
Title: Implications of Return Predictability for Consumption Dynamics and Asset Pricing
Abstract:
Two broad classes of consumption dynamics—long-run risks and rare disasters—have proven successful in explaining the equity premium puzzle when used in conjunction with recursive preferences. We show that bounds à la Gallant, Hansen, and Tauchen that restrict the volatility of the stochastic discount factor by conditioning on a set of return predictors constitute a useful tool to discriminate between these alternative dynamics. In particular, we document that models that rely on rare disasters meet comfortably the bounds independently of the forecasting horizon and the asset returns used to construct the bounds. However, the specific nature of disasters is a relevant characteristic at the 1-year horizon: disasters that unfold over multiple years are more successful in meeting the predictors-based bounds than one-period disasters. Instead, at the 5-year horizon, the sole presence of disasters—even if one-period and permanent—is sufficient for the model to satisfy the bounds. Finally, the bounds point to multiple volatility components in consumption as a promising dimension for long-run risk models.
Journal: Journal of Business & Economic Statistics
Pages: 527-541
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1527702
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1527702
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:527-541
Template-Type: ReDIF-Article 1.0
Author-Name: S. Borağan Aruoba
Author-X-Name-First: S.
Author-X-Name-Last: Borağan Aruoba
Title: Term Structures of Inflation Expectations and Real Interest Rates
Abstract:
I use a statistical model to combine various surveys to produce a term structure of inflation expectations—inflation expectations at any horizon—and an associated term structure of real interest rates. Inflation expectations extracted from this model track realized inflation quite well, and in terms of forecast accuracy, they are at par with or superior to some popular alternatives. The real interest rates obtained from the model follow Treasury Inflation-Protected Securities rates as well.
Journal: Journal of Business & Economic Statistics
Pages: 542-553
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1529599
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1529599
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:542-553
Template-Type: ReDIF-Article 1.0
Author-Name: Hie Joo Ahn
Author-X-Name-First: Hie Joo
Author-X-Name-Last: Ahn
Author-Name: James D. Hamilton
Author-X-Name-First: James D.
Author-X-Name-Last: Hamilton
Title: Heterogeneity and Unemployment Dynamics
Abstract:
Many previous articles have studied the contribution of inflows and outflows to the cyclical variation in unemployment, but ignored the critical role of unobserved heterogeneity across workers. This article develops new estimates of unemployment inflows and outflows that allow for unobserved heterogeneity as well as direct effects of unemployment duration on unemployment-exit probabilities. With this approach, we can measure the contribution of different shocks to the short-run, medium-run, and long-run variance of unemployment as well as to specific historical episodes. We conclude that changes in the composition of new inflows into unemployment are the most important factor in economic recessions.
Journal: Journal of Business & Economic Statistics
Pages: 554-569
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1530116
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1530116
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:554-569
Template-Type: ReDIF-Article 1.0
Author-Name: Lajos Horváth
Author-X-Name-First: Lajos
Author-X-Name-Last: Horváth
Author-Name: Curtis Miller
Author-X-Name-First: Curtis
Author-X-Name-Last: Miller
Author-Name: Gregory Rice
Author-X-Name-First: Gregory
Author-X-Name-Last: Rice
Title: A New Class of Change Point Test Statistics of Rényi Type
Abstract:
A new class of change point test statistics is proposed that utilizes a weighting and trimming scheme for the cumulative sum (CUSUM) process inspired by Rényi. A thorough asymptotic analysis and simulations both demonstrate that this new class of statistics possesses superior power compared to traditional change point statistics based on the CUSUM process when the change point is near the beginning or end of the sample. Generalizations of these “Rényi” statistics are also developed to test for changes in the parameters in linear and nonlinear regression models, and in generalized method of moments estimation. In these contexts, we applied the proposed statistics, as well as several others, to test for changes in the coefficients of Fama–French factor models. We observed that the Rényi statistic was the most effective in terms of retrospectively detecting change points that occur near the endpoints of the sample.
Journal: Journal of Business & Economic Statistics
Pages: 570-579
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1537923
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1537923
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:570-579
Template-Type: ReDIF-Article 1.0
Author-Name: Eelco Kappe
Author-X-Name-First: Eelco
Author-X-Name-Last: Kappe
Author-Name: Wayne S. DeSarbo
Author-X-Name-First: Wayne S.
Author-X-Name-Last: DeSarbo
Author-Name: Marcelo C. Medeiros
Author-X-Name-First: Marcelo C.
Author-X-Name-Last: Medeiros
Title: A Smooth Transition Finite Mixture Model for Accommodating Unobserved Heterogeneity
Abstract:
While the smooth transition (ST) model has become popular in business and economics, the treatment of unobserved heterogeneity within these models has received limited attention. We propose an ST finite mixture (STFM) model that simultaneously estimates the presence of time-varying effects and unobserved heterogeneity in a panel data context. Our objective is to accurately recover the heterogeneous effects of our independent variables of interest while simultaneously allowing these effects to vary over time. Accomplishing this objective may provide valuable insights for managers and policy makers. The STFM model nests several well-known ST and threshold models. We develop the specification, estimation, and model selection criteria for the STFM model using Bayesian methods. We also provide a theoretical assessment of the flexibility of the STFM model when the number of regimes grows with the sample size. In an extensive simulation study, we show that ignoring unobserved heterogeneity can lead to distorted parameter estimates, and that the STFM model is fairly robust when underlying model assumptions are violated. Empirically, we estimate the effects of in-game promotions on game attendance in Major League Baseball. Empirical results show that the STFM model outperforms all its nested versions. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 580-592
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1543126
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1543126
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:580-592
Template-Type: ReDIF-Article 1.0
Author-Name: Marinho Bertanha
Author-X-Name-First: Marinho
Author-X-Name-Last: Bertanha
Author-Name: Guido W. Imbens
Author-X-Name-First: Guido W.
Author-X-Name-Last: Imbens
Title: External Validity in Fuzzy Regression Discontinuity Designs
Abstract:
Fuzzy regression discontinuity designs identify the local average treatment effect (LATE) for the subpopulation of compliers, and with forcing variable equal to the threshold. We develop methods that assess the external validity of LATE to other compliance groups at the threshold, and allow for identification away from the threshold. Specifically, we focus on the equality of outcome distributions between treated compliers and always-takers, and between untreated compliers and never-takers. These equalities imply continuity of expected outcomes conditional on both the forcing variable and the treatment status. We recommend that researchers plot these conditional expectations and test for discontinuities at the threshold to assess external validity. We provide new commands in STATA and MATLAB to implement our proposed procedures.
Journal: Journal of Business & Economic Statistics
Pages: 593-612
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1546590
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1546590
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:593-612
Template-Type: ReDIF-Article 1.0
Author-Name: Ariella Kahn-Lang
Author-X-Name-First: Ariella
Author-X-Name-Last: Kahn-Lang
Author-Name: Kevin Lang
Author-X-Name-First: Kevin
Author-X-Name-Last: Lang
Title: The Promise and Pitfalls of Differences-in-Differences: Reflections on 16 and Pregnant and Other Applications
Abstract:
We use the exchange between Kearney/Levine and Jaeger/Joyce/Kaestner on 16 and Pregnant to reexamine the use of DiD as a response to the failure of nature to properly design an experiment for us. We argue that (1) any DiD paper should address why the original levels of the experimental and control groups differed, and why this would not impact trends, (2) the parallel trends argument requires a justification of the chosen functional form and that the use of the interaction coefficients in probit and logit may be justified in some cases, and (3) parallel trends in the period prior to treatment is suggestive of counterfactual parallel trends, but parallel pre-trends is neither necessary nor sufficient for the parallel counterfactual trends condition to hold. Importantly, the purely statistical approach uses pretesting and thus, generates the wrong standard errors. Moreover, we underline the dangers of implicitly or explicitly accepting the null hypothesis when failing to reject the absence of a differential pre-trend.
Journal: Journal of Business & Economic Statistics
Pages: 613-620
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1546591
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1546591
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:613-620
Template-Type: ReDIF-Article 1.0
Author-Name: Lorenzo Camponovo
Author-X-Name-First: Lorenzo
Author-X-Name-Last: Camponovo
Author-Name: Yukitoshi Matsushita
Author-X-Name-First: Yukitoshi
Author-X-Name-Last: Matsushita
Author-Name: Taisuke Otsu
Author-X-Name-First: Taisuke
Author-X-Name-Last: Otsu
Title: Empirical likelihood for high frequency data
Abstract:
This paper introduces empirical likelihood methods for interval estimation and hypothesis testing on volatility measures in some high frequency data environments. We propose a modified empirical likelihood statistic that is asymptotically pivotal under infill asymptotics, where the number of high frequency observations in a fixed time interval increases to infinity. The proposed statistic is extended to be robust to the presence of jumps and microstructure noise. We also provide an empirical likelihood-based test to detect the presence of jumps. Furthermore, we study higher-order properties of a general family of nonparametric likelihood statistics and show that a particular statistic admits a Bartlett correction: a higher-order refinement to achieve better coverage or size properties. Simulation and a real data example illustrate the usefulness of our approach.
Journal: Journal of Business & Economic Statistics
Pages: 621-632
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1549051
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1549051
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:621-632
Template-Type: ReDIF-Article 1.0
Author-Name: John Ameriks
Author-X-Name-First: John
Author-X-Name-Last: Ameriks
Author-Name: Gábor Kézdi
Author-X-Name-First: Gábor
Author-X-Name-Last: Kézdi
Author-Name: Minjoon Lee
Author-X-Name-First: Minjoon
Author-X-Name-Last: Lee
Author-Name: Matthew D. Shapiro
Author-X-Name-First: Matthew D.
Author-X-Name-Last: Shapiro
Title: Heterogeneity in Expectations, Risk Tolerance, and Household Stock Shares: The Attenuation Puzzle
Abstract:
This article jointly estimates the relationship between stock share and expectations and risk preferences. The survey allows individual-level, quantitative estimates of risk tolerance and of the perceived mean and variance of stock returns. These estimates have an economically and statistically significant association with the distribution of stock shares, with relative magnitudes in proportion to the predictions of theory. Incorporating survey measurement error in the estimation model increases the estimated associations 2-fold, but they are still substantially attenuated, being only about 5% of what benchmark finance theories predict. Because of the careful attention in the estimation to measurement error, the attenuation likely arises from economic behavior rather than errors in variables.
Journal: Journal of Business & Economic Statistics
Pages: 633-646
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1549560
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1549560
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:633-646
Template-Type: ReDIF-Article 1.0
Author-Name: Gordon C. R. Kemp
Author-X-Name-First: Gordon C. R.
Author-X-Name-Last: Kemp
Author-Name: Paulo M. D. C. Parente
Author-X-Name-First: Paulo M. D. C.
Author-X-Name-Last: Parente
Author-Name: J. M. C. Santos Silva
Author-X-Name-First: J. M. C.
Author-X-Name-Last: Santos Silva
Title: Dynamic Vector Mode Regression
Abstract:
We study the semiparametric estimation of the conditional mode of a random vector that has a continuous conditional joint density with a well-defined global mode. A novel full-system estimator is proposed and its asymptotic properties are studied. We specifically consider the estimation of vector autoregressive conditional mode models and of systems of linear simultaneous equations defined by mode restrictions. The proposed estimator is easy to implement and simulations suggest that it performs reasonably well in finite samples. An empirical example illustrates the application of the proposed methods, including their use to obtain multistep forecasts and to construct impulse response functions.
Journal: Journal of Business & Economic Statistics
Pages: 647-661
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1562935
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1562935
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:647-661
Template-Type: ReDIF-Article 1.0
Author-Name: Torben G. Andersen
Author-X-Name-First: Torben G.
Author-X-Name-Last: Andersen
Author-Name: Nicola Fusari
Author-X-Name-First: Nicola
Author-X-Name-Last: Fusari
Author-Name: Viktor Todorov
Author-X-Name-First: Viktor
Author-X-Name-Last: Todorov
Title: The Pricing of Tail Risk and the Equity Premium: Evidence From International Option Markets
Abstract:
We explore the pricing of tail risk as manifest in index options across international equity markets. The risk premium associated with negative tail events displays persistent shifts, unrelated to volatility. This tail risk premium is a potent predictor of future returns for all the indices, while the option-implied volatility only forecasts the future return variation. Hence, compensation for negative jump risk is the primary driver of the equity premium, whereas the reward for pure diffusive variance risk is unrelated to future equity returns. We also document pronounced commonalities, suggesting a high degree of integration among the major global equity markets.
Journal: Journal of Business & Economic Statistics
Pages: 662-678
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2018.1564318
File-URL: http://hdl.handle.net/10.1080/07350015.2018.1564318
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:662-678
Template-Type: ReDIF-Article 1.0
Author-Name: Yoann Potiron
Author-X-Name-First: Yoann
Author-X-Name-Last: Potiron
Author-Name: Per Mykland
Author-X-Name-First: Per
Author-X-Name-Last: Mykland
Title: Local Parametric Estimation in High Frequency Data
Abstract:
We give a general time-varying parameter model, in which the multidimensional parameter possibly includes jumps. The quantity of interest is defined as the integrated value over time of the parameter process, Θ = T^{-1} ∫_0^T θ*_t dt. We provide a local parametric estimator (LPE) of Θ and conditions under which we can show the central limit theorem. Roughly speaking, those conditions correspond to some uniform limit theory in the parametric version of the problem. The framework is restricted to the specific convergence rate n^{1/2}. Several examples of LPE are studied: estimation of volatility, powers of volatility, volatility when incorporating trading information, and the time-varying MA(1).
Journal: Journal of Business & Economic Statistics
Pages: 679-692
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2019.1566731
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1566731
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:679-692
Template-Type: ReDIF-Article 1.0
Author-Name: Clifford Lam
Author-X-Name-First: Clifford
Author-X-Name-Last: Lam
Author-Name: Pedro C.L. Souza
Author-X-Name-First: Pedro C.L.
Author-X-Name-Last: Souza
Title: Estimation and Selection of Spatial Weight Matrix in a Spatial Lag Model
Abstract:
Spatial econometric models allow for interactions among variables through the specification of a spatial weight matrix. Practitioners often face the risk of misspecification of such a matrix. In many problems a number of potential specifications exist, such as geographic distances or various economic quantities among variables. We propose estimating the best linear combination of these specifications, augmented by a potentially sparse adjustment matrix. The coefficients in the linear combination, together with the sparse adjustment matrix, are subjected to variable selection through the adaptive least absolute shrinkage and selection operator (LASSO). As a special case, if no spatial weight matrices are specified, the sparse adjustment matrix becomes a sparse spatial weight matrix estimator of our model. Our method can therefore be seen as a unified framework for the estimation and selection of a spatial weight matrix. The rates of convergence of all proposed estimators are determined when the number of time series variables can grow faster than the number of time points in the data, while oracle properties for all penalized estimators are presented. Simulations and an application to stock data confirm the good performance of our procedure.
Journal: Journal of Business & Economic Statistics
Pages: 693-710
Issue: 3
Volume: 38
Year: 2020
Month: 7
X-DOI: 10.1080/07350015.2019.1569526
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1569526
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:3:p:693-710
Template-Type: ReDIF-Article 1.0
Author-Name: Isaiah Andrews
Author-X-Name-First: Isaiah
Author-X-Name-Last: Andrews
Author-Name: Matthew Gentzkow
Author-X-Name-First: Matthew
Author-X-Name-Last: Gentzkow
Author-Name: Jesse M. Shapiro
Author-X-Name-First: Jesse M.
Author-X-Name-Last: Shapiro
Title: Transparency in Structural Research
Abstract:
We propose a formal definition of transparency in empirical research and apply it to structural estimation in economics. We discuss how some existing practices can be understood as attempts to improve transparency, and we suggest ways to improve current practice, emphasizing approaches that impose a minimal computational burden on the researcher. We illustrate with examples.
Journal: Journal of Business & Economic Statistics
Pages: 711-722
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2020.1796395
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1796395
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:711-722
Template-Type: ReDIF-Article 1.0
Author-Name: Stéphane Bonhomme
Author-X-Name-First: Stéphane
Author-X-Name-Last: Bonhomme
Title: Discussion of “Transparency in Structural Research” by Isaiah Andrews, Matthew Gentzkow, and Jesse Shapiro
Journal: Journal of Business & Economic Statistics
Pages: 723-725
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2020.1790377
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1790377
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:723-725
Template-Type: ReDIF-Article 1.0
Author-Name: Christopher Taber
Author-X-Name-First: Christopher
Author-X-Name-Last: Taber
Title: Thoughts on “Transparency in Structural Research”
Journal: Journal of Business & Economic Statistics
Pages: 726-727
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2020.1796396
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1796396
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:726-727
Template-Type: ReDIF-Article 1.0
Author-Name: Elie Tamer
Author-X-Name-First: Elie
Author-X-Name-Last: Tamer
Title: Discussion on “Transparency in Structural Research” by I. Andrews, M. Gentzkow and J. Shapiro
Abstract:
We provide a complementary approach to global sensitivity analysis that should be useful for empirical work in economics.
Journal: Journal of Business & Economic Statistics
Pages: 728-730
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2020.1804917
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1804917
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:728-730
Template-Type: ReDIF-Article 1.0
Author-Name: Isaiah Andrews
Author-X-Name-First: Isaiah
Author-X-Name-Last: Andrews
Author-Name: Matthew Gentzkow
Author-X-Name-First: Matthew
Author-X-Name-Last: Gentzkow
Author-Name: Jesse M. Shapiro
Author-X-Name-First: Jesse M.
Author-X-Name-Last: Shapiro
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 731-731
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2020.1791886
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1791886
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:731-731
Template-Type: ReDIF-Article 1.0
Author-Name: Daniel L. Millimet
Author-X-Name-First: Daniel L.
Author-X-Name-Last: Millimet
Author-Name: Hao Li
Author-X-Name-First: Hao
Author-X-Name-Last: Li
Author-Name: Punarjit Roychowdhury
Author-X-Name-First: Punarjit
Author-X-Name-Last: Roychowdhury
Title: Partial Identification of Economic Mobility: With an Application to the United States
Abstract:
The economic mobility of individuals and households is of fundamental interest. While many measures of economic mobility exist, reliance on transition matrices remains pervasive due to simplicity and ease of interpretation. However, estimation of transition matrices is complicated by the well-acknowledged problem of measurement error in self-reported and even administrative data. Existing methods of addressing measurement error are complex, rely on numerous strong assumptions, and often require data from more than two periods. In this article, we investigate what can be learned about economic mobility as measured via transition matrices while formally accounting for measurement error in a reasonably transparent manner. To do so, we develop a nonparametric partial identification approach to bound transition probabilities under various assumptions on the measurement error and mobility processes. This approach is applied to panel data from the United States to explore short-run mobility before and after the Great Recession.
Journal: Journal of Business & Economic Statistics
Pages: 732-753
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1569527
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1569527
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:732-753
Template-Type: ReDIF-Article 1.0
Author-Name: Haizhen Lin
Author-X-Name-First: Haizhen
Author-X-Name-Last: Lin
Author-Name: Matthijs R. Wildenbeest
Author-X-Name-First: Matthijs R.
Author-X-Name-Last: Wildenbeest
Title: Nonparametric Estimation of Search Costs for Differentiated Products: Evidence from Medigap
Abstract:
This article develops a method to estimate search frictions as well as preference parameters in differentiated product markets. Search costs are nonparametrically identified, which means our method can be used to estimate search costs in differentiated product markets that lack a suitable search cost shifter. We apply our model to the U.S. Medigap insurance market. We find that search costs are substantial: the estimated median cost of searching for an insurer is $30. Using the estimated parameters we find that eliminating search costs could result in price decreases of as much as $71 (or 4.7%), along with increases in average consumer welfare of up to $374.
Journal: Journal of Business & Economic Statistics
Pages: 754-770
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1573683
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1573683
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:754-770
Template-Type: ReDIF-Article 1.0
Author-Name: Siddhartha Chib
Author-X-Name-First: Siddhartha
Author-X-Name-Last: Chib
Author-Name: Xiaming Zeng
Author-X-Name-First: Xiaming
Author-X-Name-Last: Zeng
Title: Which Factors are Risk Factors in Asset Pricing? A Model Scan Framework
Abstract:
A key question for understanding the cross-section of expected returns of equities is the following: which factors, from a given collection of factors, are risk factors; equivalently, which factors are in the stochastic discount factor (SDF)? Though the SDF is unobserved, assumptions about which factors (from the available set) are in the SDF restrict the joint distribution of factors in specific ways, as a consequence of the economic theory of asset pricing. A different starting collection of factors that go into the SDF leads to a different set of restrictions on the joint distribution of factors. The conditional distribution of equity returns has the same restricted form regardless of what is assumed about the factors in the SDF, as long as the factors are traded, and hence the distribution of asset returns is irrelevant for isolating the risk factors. The restricted factor models are distinct (nonnested) and do not arise by omitting or including a variable from a full model, thus precluding analysis by standard statistical variable selection methods, such as those based on the lasso and its variants. Instead, we develop what we call a Bayesian model scan strategy in which each factor is allowed to enter or not enter the SDF and the resulting restricted models (of which there are 114,674 in our empirical study) are simultaneously confronted with the data. We use a Student-t distribution for the factors, model-specific independent Student-t distributions for the location parameters, a training sample to fix prior locations, and a creative way to arrive at the joint distribution of several other model-specific parameters from a single prior distribution. This allows our method to be an essentially scalable, tuned, black-box method that can be applied across our large model space with little to no user intervention.
The model marginal likelihoods, and implied posterior model probabilities, are compared with the prior probability of 1/114,674 of each model to find the best-supported model, and thus the factors most likely to be in the SDF. We provide detailed simulation evidence of the high finite-sample accuracy of the method. Our empirical study with 13 leading factors reveals that the highest marginal likelihood model is a Student-t distributed factor model with 5 degrees of freedom and 8 risk factors.
Journal: Journal of Business & Economic Statistics
Pages: 771-783
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1573684
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1573684
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:771-783
Template-Type: ReDIF-Article 1.0
Author-Name: Jeffrey S. Racine
Author-X-Name-First: Jeffrey S.
Author-X-Name-Last: Racine
Author-Name: Ingrid Van Keilegom
Author-X-Name-First: Ingrid
Author-X-Name-Last: Van Keilegom
Title: A Smooth Nonparametric, Multivariate, Mixed-Data Location-Scale Test
Abstract:
A number of tests have been proposed for assessing the location-scale assumption that is often invoked by practitioners. Existing approaches include Kolmogorov–Smirnov and Cramér–von Mises statistics that each involve measures of divergence between unknown joint distribution functions and products of marginal distributions. In practice, the unknown distribution functions embedded in these statistics are typically approximated using nonsmooth empirical distribution functions (EDFs). In a recent article, Li, Li, and Racine establish the benefits of smoothing the EDF for inference, though their theoretical results are limited to the case where the covariates are observed and the distributions unobserved. In the current setting, some covariates and their distributions are unobserved (i.e., the test relies on population error terms from a location-scale model), which necessarily involves a separate theoretical approach. We demonstrate how replacing the nonsmooth distributions of unobservables with their kernel-smoothed sample counterparts can lead to substantial power improvements, and we extend existing approaches to the smooth multivariate and mixed continuous and discrete data setting in the presence of unobservables. Theoretical underpinnings are provided, Monte Carlo simulations are undertaken to assess finite-sample performance, and illustrative applications are provided.
Journal: Journal of Business & Economic Statistics
Pages: 784-795
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1574227
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1574227
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:784-795
Template-Type: ReDIF-Article 1.0
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Title: Comparing Possibly Misspecified Forecasts
Abstract:
Recent work has emphasized the importance of evaluating estimates of a statistical functional (such as a conditional mean, quantile, or distribution) using a loss function that is consistent for the functional of interest, of which there is an infinite number. If forecasters all use correctly specified models free from estimation error, and if the information sets of competing forecasters are nested, then the ranking induced by a single consistent loss function is sufficient for the ranking by any consistent loss function. This article shows, via analytical results and realistic simulation-based analyses, that the presence of misspecified models, parameter estimation error, or nonnested information sets, leads generally to sensitivity to the choice of (consistent) loss function. Thus, rather than merely specifying the target functional, which narrows the set of relevant loss functions only to the class of loss functions consistent for that functional, forecast consumers or survey designers should specify the single specific loss function that will be used to evaluate forecasts. An application to survey forecasts of U.S. inflation illustrates the results.
Journal: Journal of Business & Economic Statistics
Pages: 796-809
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1585256
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1585256
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:796-809
Template-Type: ReDIF-Article 1.0
Author-Name: Adam McCloskey
Author-X-Name-First: Adam
Author-X-Name-Last: McCloskey
Title: Asymptotically Uniform Tests After Consistent Model Selection in the Linear Regression Model
Abstract:
This article specializes the critical value (CV) methods that are based upon (refinements of) Bonferroni bounds, introduced by McCloskey, to a problem of inference after consistent model selection in a general linear regression model. The post-selection problem is formulated to mimic common empirical practice and is applicable to both cross-sectional and time series contexts. We provide algorithms for constructing the CVs in this setting and establish uniform asymptotic size results for the resulting tests. The practical implementation of the CVs is illustrated in an empirical application to the effect of classroom size on test scores.
Journal: Journal of Business & Economic Statistics
Pages: 810-825
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1592754
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1592754
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:810-825
Template-Type: ReDIF-Article 1.0
Author-Name: Tiziano Arduini
Author-X-Name-First: Tiziano
Author-X-Name-Last: Arduini
Author-Name: Eleonora Patacchini
Author-X-Name-First: Eleonora
Author-X-Name-Last: Patacchini
Author-Name: Edoardo Rainone
Author-X-Name-First: Edoardo
Author-X-Name-Last: Rainone
Title: Treatment Effects With Heterogeneous Externalities
Abstract:
This article proposes a new method for estimating heterogeneous externalities in policy analysis when social interactions take the linear-in-means form. We establish that the parameters of interest can be identified and consistently estimated using specific functions of the share of the eligible population. We also study the finite sample performance of the proposed estimators using Monte Carlo simulations. The method is illustrated using data on the PROGRESA program. We find that more than 50% of the effects of the program on schooling attendance are due to externalities, which are heterogeneous within and between poor and nonpoor households.
Journal: Journal of Business & Economic Statistics
Pages: 826-838
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1592755
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1592755
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:826-838
Template-Type: ReDIF-Article 1.0
Author-Name: Yuta Yamauchi
Author-X-Name-First: Yuta
Author-X-Name-Last: Yamauchi
Author-Name: Yasuhiro Omori
Author-X-Name-First: Yasuhiro
Author-X-Name-Last: Omori
Title: Multivariate Stochastic Volatility Model With Realized Volatilities and Pairwise Realized Correlations
Abstract:
Although stochastic volatility and GARCH (generalized autoregressive conditional heteroscedasticity) models have successfully described the volatility dynamics of univariate asset returns, extending them to multivariate models with dynamic correlations has been difficult due to several major problems. First, there are too many parameters to estimate if the available data are only daily returns, which results in unstable estimates. One solution to this problem is to incorporate additional observations based on intraday asset returns, such as realized covariances. Second, since multivariate asset returns are not synchronously traded, we have to use the largest time intervals such that all asset returns are observed to compute the realized covariance matrices; this fails to make full use of the available intraday information when there are less frequently traded assets. Third, it is not straightforward to guarantee that the estimated (and the realized) covariance matrices are positive definite. Our contributions are the following: (1) we obtain stable parameter estimates for the dynamic correlation models using the realized measures, (2) we make full use of intraday information by using pairwise realized correlations, (3) the covariance matrices are guaranteed to be positive definite, (4) we avoid the arbitrariness of the ordering of asset returns, (5) we propose a flexible correlation structure model (e.g., setting some correlations to zero if necessary), and (6) we propose a parsimonious specification for the leverage effect. Our proposed models are applied to the daily returns of nine U.S. stocks with their realized volatilities and pairwise realized correlations and are shown to outperform the existing models with respect to portfolio performance.
Journal: Journal of Business & Economic Statistics
Pages: 839-855
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1602048
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1602048
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:839-855
Template-Type: ReDIF-Article 1.0
Author-Name: Giacomo Bormetti
Author-X-Name-First: Giacomo
Author-X-Name-Last: Bormetti
Author-Name: Roberto Casarin
Author-X-Name-First: Roberto
Author-X-Name-Last: Casarin
Author-Name: Fulvio Corsi
Author-X-Name-First: Fulvio
Author-X-Name-Last: Corsi
Author-Name: Giulia Livieri
Author-X-Name-First: Giulia
Author-X-Name-Last: Livieri
Title: A Stochastic Volatility Model With Realized Measures for Option Pricing
Abstract:
Based on the fact that realized measures of volatility are affected by measurement errors, we introduce a new family of discrete-time stochastic volatility models having two measurement equations relating both observed returns and realized measures to the latent conditional variance. A semi-analytical option pricing framework is developed for this class of models. In addition, we provide analytical filtering and smoothing recursions for the basic specification of the model, and an effective MCMC algorithm for its richer variants. The empirical analysis shows the effectiveness of filtering and smoothing realized measures in inflating the latent volatility persistence—the crucial parameter in pricing Standard and Poor’s 500 Index options.
Journal: Journal of Business & Economic Statistics
Pages: 856-871
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1604371
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1604371
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:856-871
Template-Type: ReDIF-Article 1.0
Author-Name: Lindsay R. Berry
Author-X-Name-First: Lindsay R.
Author-X-Name-Last: Berry
Author-Name: Mike West
Author-X-Name-First: Mike
Author-X-Name-Last: West
Title: Bayesian Forecasting of Many Count-Valued Time Series
Abstract:
We develop and exemplify application of new classes of dynamic models for time series of nonnegative counts. Our novel univariate models combine dynamic generalized linear models for binary and conditionally Poisson time series, with dynamic random effects for over-dispersion. These models estimate dynamic regression coefficients in both binary and nonzero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts when data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multiscale approach—one new example of the concept of decouple/recouple in time series—enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, and hence enables scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multistep forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics, comparisons with existing methods, and broader questions of probabilistic forecast assessment.
Journal: Journal of Business & Economic Statistics
Pages: 872-887
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1604372
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1604372
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:872-887
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Luo
Author-X-Name-First: Wei
Author-X-Name-Last: Luo
Author-Name: Yeying Zhu
Author-X-Name-First: Yeying
Author-X-Name-Last: Zhu
Title: Matching Using Sufficient Dimension Reduction for Causal Inference
Abstract:
To estimate causal treatment effects, we propose a new matching approach based on the reduced covariates obtained from sufficient dimension reduction. Compared with the original covariates and the propensity score, which are commonly used for matching in the literature, the reduced covariates are nonparametrically estimable and are effective in imputing the missing potential outcomes, under a mild assumption on the low-dimensional structure of the data. Under the ignorability assumption, the consistency of the proposed approach requires a weaker common support condition. In addition, researchers are allowed to employ different reduced covariates to find matched subjects for different treatment groups. We develop relevant asymptotic results and conduct simulation studies as well as real data analysis to illustrate the usefulness of the proposed approach.
Journal: Journal of Business & Economic Statistics
Pages: 888-900
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1609974
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1609974
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:888-900
Template-Type: ReDIF-Article 1.0
Author-Name: German Blanco
Author-X-Name-First: German
Author-X-Name-Last: Blanco
Author-Name: Xuan Chen
Author-X-Name-First: Xuan
Author-X-Name-Last: Chen
Author-Name: Carlos A. Flores
Author-X-Name-First: Carlos A.
Author-X-Name-Last: Flores
Author-Name: Alfonso Flores-Lagunes
Author-X-Name-First: Alfonso
Author-X-Name-Last: Flores-Lagunes
Title: Bounds on Average and Quantile Treatment Effects on Duration Outcomes Under Censoring, Selection, and Noncompliance
Abstract:
We consider the problem of assessing the effects of a treatment on duration outcomes using data from a randomized evaluation with noncompliance. For such settings, we derive nonparametric sharp bounds for average and quantile treatment effects addressing three pervasive problems simultaneously: self-selection into the spell of interest, endogenous censoring of the duration outcome, and noncompliance with the assigned treatment. Ignoring any of these issues could yield biased estimates of the effects. Notably, the proposed bounds do not impose the independent censoring assumption—which is commonly used to address censoring but is likely to fail in important settings—or exclusion restrictions to address endogeneity of censoring and selection. Instead, they employ monotonicity and stochastic dominance assumptions. To illustrate the use of these bounds we assess the effects of the Job Corps (JC) training program on its participants’ last complete employment spell duration. Our estimated bounds suggest that JC participation may increase the average duration of the last complete employment spell before week 208 after randomization by at least 5.6 log points (5.8%) for individuals who comply with their treatment assignment and experience a complete employment spell whether or not they enrolled in JC. The estimated quantile treatment effects suggest the impacts may be heterogeneous, and strengthen our conclusions based on the estimated average effects.
Journal: Journal of Business & Economic Statistics
Pages: 901-920
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1609975
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1609975
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:901-920
Template-Type: ReDIF-Article 1.0
Author-Name: Yuriy Gorodnichenko
Author-X-Name-First: Yuriy
Author-X-Name-Last: Gorodnichenko
Author-Name: Byoungchan Lee
Author-X-Name-First: Byoungchan
Author-X-Name-Last: Lee
Title: Forecast Error Variance Decompositions with Local Projections
Abstract:
We propose and study properties of an estimator of the forecast error variance decomposition in the local projections framework. We find for empirically relevant sample sizes that, after being bias-corrected with bootstrap, our estimator performs well in simulations. We also illustrate the workings of our estimator empirically for monetary policy and productivity shocks.
Journal: Journal of Business & Economic Statistics
Pages: 921-933
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1610661
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1610661
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:921-933
Template-Type: ReDIF-Article 1.0
Author-Name: Jun Ma
Author-X-Name-First: Jun
Author-X-Name-Last: Ma
Author-Name: Hugo Jales
Author-X-Name-First: Hugo
Author-X-Name-Last: Jales
Author-Name: Zhengfei Yu
Author-X-Name-First: Zhengfei
Author-X-Name-Last: Yu
Title: Minimum Contrast Empirical Likelihood Inference of Discontinuity in Density
Abstract:
This article investigates the asymptotic properties of a simple empirical-likelihood-based inference method for discontinuity in density. The parameter of interest is a function of two one-sided limits of the probability density function at (possibly) two cut-off points. Our approach is based on the first-order conditions from a minimum contrast problem. We investigate both first-order and second-order properties of the proposed method. We characterize the leading coverage error of our inference method and propose a coverage-error-optimal (CE-optimal, hereafter) bandwidth selector. We show that the empirical likelihood ratio statistic is Bartlett correctable. An important special case is the manipulation testing problem in a regression discontinuity design (RDD), where the parameter of interest is the density difference at a known threshold. In RDD, the continuity of the density of the assignment variable at the threshold is considered as a “no-manipulation” behavioral assumption, which is a testable implication of an identifying condition for the local average treatment effect. When specialized to the manipulation testing problem, the CE-optimal bandwidth selector has an explicit form. We propose a data-driven CE-optimal bandwidth selector for use in practice. Results from Monte Carlo simulations are presented. Usefulness of our method is illustrated by an empirical example.
Journal: Journal of Business & Economic Statistics
Pages: 934-950
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2019.1617155
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1617155
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:934-950
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Editorial Collaborators
Journal: Journal of Business & Economic Statistics
Pages: 951-954
Issue: 4
Volume: 38
Year: 2020
Month: 10
X-DOI: 10.1080/07350015.2020.1826254
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1826254
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:38:y:2020:i:4:p:951-954
Template-Type: ReDIF-Article 1.0
Author-Name: Xu Han
Author-X-Name-First: Xu
Author-X-Name-Last: Han
Title: Shrinkage Estimation of Factor Models With Global and Group-Specific Factors
Abstract:
This article develops an adaptive group lasso estimator for factor models with both global and group-specific factors. The global factors can affect all variables, whereas the group-specific factors are only allowed to affect the variables within a certain group. We propose a new method to separately identify the spaces spanned by global and group-specific factors, and we develop a new shrinkage estimator that can consistently estimate the factor loadings and determine the number of factors simultaneously. The asymptotic result shows that the proposed estimator can select the true model specification with a probability approaching one. An information criterion is developed to select the optimal tuning parameters in the shrinkage estimation. Monte Carlo simulations confirm our asymptotic theory, and the proposed estimator performs well in finite samples. In an empirical application, we implement the proposed method to a dataset consisting of Eurozone, United States, and United Kingdom macroeconomic variables, and we detect one global factor, one U.S.-specific factor, and one Eurozone-specific factor.
Journal: Journal of Business & Economic Statistics
Pages: 1-17
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1617157
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1617157
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:1-17
Template-Type: ReDIF-Article 1.0
Author-Name: Simon Clinet
Author-X-Name-First: Simon
Author-X-Name-Last: Clinet
Author-Name: Yoann Potiron
Author-X-Name-First: Yoann
Author-X-Name-Last: Potiron
Title: Disentangling Sources of High Frequency Market Microstructure Noise
Abstract:
Employing tick-by-tick maximum likelihood estimation on several leading models from the financial economics literature, we find that the market microstructure noise is mostly explained by a linear model where the trade direction, that is, whether the trade is buyer or seller initiated, is multiplied by the dynamic quoted bid-ask spread. Although reasonably stable intraday, this model manifests variability across days and stocks. Among different observable high frequency financial characteristics of the underlying stocks, this variability is best explained by the tick-to-spread ratio, implying that discreteness is the first residual source of noise. We identify the bid-ask bounce effect as the next source of noise.
Journal: Journal of Business & Economic Statistics
Pages: 18-39
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1617158
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1617158
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:18-39
Template-Type: ReDIF-Article 1.0
Author-Name: Rogier Quaedvlieg
Author-X-Name-First: Rogier
Author-X-Name-Last: Quaedvlieg
Title: Multi-Horizon Forecast Comparison
Abstract:
We introduce tests for multi-horizon superior predictive ability (SPA). Rather than comparing forecasts of different models at multiple horizons individually, we propose to jointly consider all horizons of a forecast path. We define the concepts of uniform and average SPA. The former entails superior performance at each individual horizon, while the latter allows inferior performance at some horizons to be compensated by others. The article illustrates how the tests lead to more coherent conclusions, and how they are better able to differentiate between models than the single-horizon tests. We provide an extension of the previously introduced model confidence set to allow for multi-horizon comparison of more than two models. Simulations demonstrate appropriate size and high power. An illustration of the tests on a large set of macroeconomic variables demonstrates the empirical benefits of multi-horizon comparison.
Journal: Journal of Business & Economic Statistics
Pages: 40-53
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1620074
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1620074
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:40-53
Template-Type: ReDIF-Article 1.0
Author-Name: Shou-Yung Yin
Author-X-Name-First: Shou-Yung
Author-X-Name-Last: Yin
Author-Name: Chu-An Liu
Author-X-Name-First: Chu-An
Author-X-Name-Last: Liu
Author-Name: Chang-Ching Lin
Author-X-Name-First: Chang-Ching
Author-X-Name-Last: Lin
Title: Focused Information Criterion and Model Averaging for Large Panels With a Multifactor Error Structure
Abstract:
This article considers model selection and model averaging in panel data models with a multifactor error structure. We investigate the limiting distribution of the common correlated effects estimator in a local asymptotic framework and show that the trade-off between bias and variance remains in asymptotic theory. We then propose a focused information criterion and a plug-in averaging estimator for large heterogeneous panels and examine their theoretical properties. The novel feature of the proposed method is that it aims to minimize the sample analog of the asymptotic mean squared error and can be applied to cases irrespective of whether the rank condition holds or not. Monte Carlo simulations show that both proposed selection and averaging methods generally achieve lower mean squared error than other methods. The proposed methods are applied to examine possible causes that lead to the increasing wage inequality between high-skilled and low-skilled workers in the U.S. manufacturing industries.
Journal: Journal of Business & Economic Statistics
Pages: 54-68
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1623044
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1623044
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:54-68
Template-Type: ReDIF-Article 1.0
Author-Name: Markku Lanne
Author-X-Name-First: Markku
Author-X-Name-Last: Lanne
Author-Name: Jani Luoto
Author-X-Name-First: Jani
Author-X-Name-Last: Luoto
Title: GMM Estimation of Non-Gaussian Structural Vector Autoregression
Abstract:
We consider estimation of the structural vector autoregression (SVAR) by the generalized method of moments (GMM). Given non-Gaussian errors and a suitable set of moment conditions, the GMM estimator is shown to achieve local identification of the structural shocks. The optimal set of moment conditions can be found by well-known moment selection criteria. Compared to recent alternatives, our approach has the advantage that the structural shocks need not be mutually independent, but only orthogonal, provided they satisfy a number of co-kurtosis conditions that prevail under independence. According to simulation results, the finite-sample performance of our estimation method is comparable, or even superior, to that of the recently proposed pseudo maximum likelihood estimators. The two-step estimator is found to outperform the alternative GMM estimators. An empirical application to a small macroeconomic model estimated on postwar United States data illustrates the use of the methods.
Journal: Journal of Business & Economic Statistics
Pages: 69-81
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1629940
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1629940
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:69-81
Template-Type: ReDIF-Article 1.0
Author-Name: Feifei Wang
Author-X-Name-First: Feifei
Author-X-Name-Last: Wang
Author-Name: Jingyuan Liu
Author-X-Name-First: Jingyuan
Author-X-Name-Last: Liu
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Sequential Text-Term Selection in Vector Space Models
Abstract:
Text mining has recently attracted a great deal of attention with the accumulation of text documents in all fields. In this article, we focus on the use of textual information to explain continuous variables in the framework of linear regressions. To handle the unstructured texts, one common practice is to structuralize the text documents via vector space models. However, using words or phrases as the basic analysis terms in vector space models remains under debate. In addition, vector space models often lead to an extremely large term set and suffer from the curse of dimensionality, which makes term selection important and necessary. Toward this end, we propose a novel term screening method for vector space models under a linear regression setup. We first split the entire term space into different subspaces according to the length of terms and then conduct term screening in a sequential manner. We prove the screening consistency of the method and assess the empirical performance of the proposed method with simulations based on a dataset of online consumer reviews for cellphones. Then, we analyze the associated real data. The results show that the sequential term selection technique can effectively detect the relevant terms in a few steps.
Journal: Journal of Business & Economic Statistics
Pages: 82-97
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1634079
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1634079
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:82-97
Template-Type: ReDIF-Article 1.0
Author-Name: Marcelo C. Medeiros
Author-X-Name-First: Marcelo C.
Author-X-Name-Last: Medeiros
Author-Name: Gabriel F. R. Vasconcelos
Author-X-Name-First: Gabriel F. R.
Author-X-Name-Last: Vasconcelos
Author-Name: Álvaro Veiga
Author-X-Name-First: Álvaro
Author-X-Name-Last: Veiga
Author-Name: Eduardo Zilberman
Author-X-Name-First: Eduardo
Author-X-Name-Last: Zilberman
Title: Forecasting Inflation in a Data-Rich Environment: The Benefits of Machine Learning Methods
Abstract:
Inflation forecasting is an important but difficult task. Here, we explore advances in machine learning (ML) methods and the availability of new datasets to forecast U.S. inflation. Despite the skepticism in the previous literature, we show that ML models with a large number of covariates are systematically more accurate than the benchmarks. The ML method that deserves more attention is the random forest model, which dominates all other models. Its good performance is due not only to its specific method of variable selection but also to the potential nonlinearities between past key macroeconomic variables and inflation. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 98-119
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1637745
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1637745
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:98-119
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaoyi Han
Author-X-Name-First: Xiaoyi
Author-X-Name-Last: Han
Author-Name: Chih-Sheng Hsieh
Author-X-Name-First: Chih-Sheng
Author-X-Name-Last: Hsieh
Author-Name: Stanley I. M. Ko
Author-X-Name-First: Stanley I. M.
Author-X-Name-Last: Ko
Title: Spatial Modeling Approach for Dynamic Network Formation and Interactions
Abstract:
This study primarily seeks to answer the following question: How do social networks evolve over time and affect individual economic activity? To provide an adequate empirical tool to answer this question, we propose a new modeling approach for longitudinal data of networks and activity outcomes. The key features of our model are the inclusion of dynamic effects and the use of time-varying latent variables to determine unobserved individual traits in network formation and activity interactions. The proposed model combines two well-known models in the field: latent space model for dynamic network formation and spatial dynamic panel data model for network interactions. This combination reflects real situations, where network links and activity outcomes are interdependent and jointly influenced by unobserved individual traits. Moreover, this combination enables us to (1) manage the endogenous selection issue inherited in network interaction studies, and (2) investigate the effect of homophily and individual heterogeneity in network formation. We develop a Bayesian Markov chain Monte Carlo sampling approach to estimate the model. We also provide a Monte Carlo experiment to analyze the performance of our estimation method and apply the model to longitudinal student network data from Taiwan to study friendship network formation and peer effects on academic performance. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 120-135
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1639395
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1639395
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:120-135
Template-Type: ReDIF-Article 1.0
Author-Name: Shiqing Ling
Author-X-Name-First: Shiqing
Author-X-Name-Last: Ling
Author-Name: Ruey S. Tsay
Author-X-Name-First: Ruey S.
Author-X-Name-Last: Tsay
Author-Name: Yaxing Yang
Author-X-Name-First: Yaxing
Author-X-Name-Last: Yang
Title: Testing Serial Correlation and ARCH Effect of High-Dimensional Time-Series Data
Abstract:
This article proposes several tests for detecting serial correlation and ARCH effect in high-dimensional data. The dimension of data p = p(n) may go to infinity as the sample size n → ∞. It is shown that the sample autocorrelations and the sample rank autocorrelations (Spearman’s rank correlation) of the L1-norm of data are asymptotically normal. Two portmanteau tests based, respectively, on the norm and its rank are shown to be asymptotically χ2-distributed, and the corresponding weighted portmanteau tests are shown to be asymptotically distributed as a linear combination of independent χ2 random variables. These tests are dimension-free, that is, independent of p, and the norm rank-based portmanteau test and its weighted counterpart can be used for heavy-tailed time series. We further discuss two standardized norm-based tests. Simulation results show that the proposed test statistics have satisfactory sizes and are powerful even for the case of small n and large p. We apply the tests to two real datasets. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 136-147
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1647844
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1647844
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:136-147
Template-Type: ReDIF-Article 1.0
Author-Name: Pierre Perron
Author-X-Name-First: Pierre
Author-X-Name-Last: Perron
Author-Name: Yohei Yamamoto
Author-X-Name-First: Yohei
Author-X-Name-Last: Yamamoto
Title: Testing for Changes in Forecasting Performance
Abstract:
We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests should be carried out with a fixed scheme to have best power. This ensures a maximum difference between the fitted in and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi’s (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in and out-of-sample periods. To alleviate this problem, we consider a variety of tests: maximizing the GR test over values of m within a prespecified range; a Double sup-Wald (DSW) test which for each m performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald and Total Loss UDmax (TLUD) tests. Using theoretical analyses and simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes considered are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided and empirical applications illustrate the relevance of our findings in practice. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 148-165
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1641410
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1641410
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:148-165
Template-Type: ReDIF-Article 1.0
Author-Name: Dimitris Christopoulos
Author-X-Name-First: Dimitris
Author-X-Name-Last: Christopoulos
Author-Name: Peter McAdam
Author-X-Name-First: Peter
Author-X-Name-Last: McAdam
Author-Name: Elias Tzavalis
Author-X-Name-First: Elias
Author-X-Name-Last: Tzavalis
Title: Dealing With Endogeneity in Threshold Models Using Copulas
Abstract:
We suggest a new method for dealing with the problem of endogeneity of the threshold variable in structural threshold regression models based on copula theory. This method enables us to relax the assumption that the threshold variable is normally distributed and to capture the dependence structure between the threshold regression error term and the threshold variable independently of the marginal distribution of the threshold variable. For Gaussian and Student’s t copulas, this dependence structure can be captured by copula-type transformations of the distribution of the threshold variable, for each regime of the model. Augmenting the threshold model under these transformations can control for the endogeneity problem of the threshold variable. The single-factor correlation structure of the threshold regression error term with these transformations allows us to consistently estimate the threshold and the slope parameters of the model based on a least squares method. Based on a Monte Carlo study, we show that our method is robust to nonlinear dependence structures between the regression error term and the threshold variable implied by the Archimedean family of copulas. We illustrate the method by estimating a model of the foreign-trade multiplier for seven OECD economies.
Journal: Journal of Business & Economic Statistics
Pages: 166-178
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1647213
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1647213
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:166-178
Template-Type: ReDIF-Article 1.0
Author-Name: Huazhen Lin
Author-X-Name-First: Huazhen
Author-X-Name-Last: Lin
Author-Name: Wei Liu
Author-X-Name-First: Wei
Author-X-Name-Last: Liu
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Title: Regression Analysis with Individual-Specific Patterns of Missing Covariates
Abstract:
It is increasingly common to collect data from heterogeneous sources in practice. Two major challenges complicate the statistical analysis of such data. First, only a small proportion of units have complete information across all sources. Second, the missing data patterns vary across individuals. Our motivating online-loan data have 93% missing covariates where the missing pattern is individual-specific. The existing regression analyses with missing covariates are either inefficient or require additional modeling assumptions on the covariates. We propose a simple yet efficient iterative least squares estimator of the regression coefficient for the data with individual-specific missing patterns. Our method has several desirable features. First, it does not require any modeling assumptions on the covariates. Second, the imputation of the missing covariates involves feasible one-dimensional nonparametric regressions, and can maximally use the information across units and the relationship among the covariates. Third, the iterative least squares estimate is both computationally and statistically efficient. We study the asymptotic properties of our estimator and apply it to the motivating online-loan data. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 179-188
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1635486
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1635486
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:179-188
Template-Type: ReDIF-Article 1.0
Author-Name: Zhenting Sun
Author-X-Name-First: Zhenting
Author-X-Name-Last: Sun
Author-Name: Brendan K. Beare
Author-X-Name-First: Brendan K.
Author-X-Name-Last: Beare
Title: Improved Nonparametric Bootstrap Tests of Lorenz Dominance
Abstract:
One income or wealth distribution is said to Lorenz dominate another when the Lorenz curve for the former is nowhere below that of the latter, indicating a (weakly) more equitable allocation of resources. Existing tests of the null of Lorenz dominance based on pairs of samples of income or wealth achieve the nominal rejection rate asymptotically when the two Lorenz curves are equal, but are conservative at other null configurations. We propose new nonparametric bootstrap tests of Lorenz dominance based on preliminary estimation of a contact set. Our tests achieve the nominal rejection rate asymptotically on the boundary of the null; that is, when Lorenz dominance is satisfied, and the Lorenz curves coincide on some interval. Numerical simulations indicate that our tests enjoy substantially improved power compared to existing procedures at relevant sample sizes. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 189-199
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1647214
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1647214
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:189-199
Template-Type: ReDIF-Article 1.0
Author-Name: Karim Chalak
Author-X-Name-First: Karim
Author-X-Name-Last: Chalak
Author-Name: Daniel Kim
Author-X-Name-First: Daniel
Author-X-Name-Last: Kim
Title: Measurement Error Without the Proxy Exclusion Restriction
Abstract:
This article studies the identification of the coefficients in a linear equation when data on the outcome, covariates, and an error-laden proxy for a latent variable are available. We maintain that the measurement error in the proxy is classical and relax the assumption that the proxy is excluded from the outcome equation. This enables the proxy to directly affect the outcome and allows for differential measurement error. Without the proxy exclusion restriction, we first show that the effects of the latent variable, the proxy, and the covariates are not identified. We then derive the sharp identification regions for these effects under any configuration of three auxiliary assumptions. The first weakens the assumption of no measurement error by imposing an upper bound on the noise-to-signal ratio. The second imposes an upper bound on the outcome equation coefficient of determination that would obtain had there been no measurement error. The third weakens the proxy exclusion restriction by specifying whether the latent variable and its proxy affect the outcome in the same or the opposite direction, if at all. Using the College Scorecard aggregate data, we illustrate our framework by studying the financial returns to college selectivity and characteristics and student characteristics when the average SAT score at an institution may directly affect earnings and serves as a proxy for the average ability of the student cohort.
Journal: Journal of Business & Economic Statistics
Pages: 200-216
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1617156
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1617156
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:200-216
Template-Type: ReDIF-Article 1.0
Author-Name: Rajeev Dehejia
Author-X-Name-First: Rajeev
Author-X-Name-Last: Dehejia
Author-Name: Cristian Pop-Eleches
Author-X-Name-First: Cristian
Author-X-Name-Last: Pop-Eleches
Author-Name: Cyrus Samii
Author-X-Name-First: Cyrus
Author-X-Name-Last: Samii
Title: From Local to Global: External Validity in a Fertility Natural Experiment
Abstract:
We study issues related to external validity for treatment effects using over 100 replications of the Angrist and Evans natural experiment on the effects of sibling sex composition on fertility and labor supply. The replications are based on census data from around the world going back to 1960. We decompose sources of error in predicting treatment effects in external contexts in terms of macro and micro sources of variation. In our empirical setting, we find that macro covariates dominate over micro covariates for reducing errors in predicting treatment effects, an issue that past studies of external validity have been unable to evaluate. We develop methods for two applications to evidence-based decision-making, including determining where to locate an experiment and whether policy-makers should commission new experiments or rely on an existing evidence base for making a policy decision.
Journal: Journal of Business & Economic Statistics
Pages: 217-243
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1639407
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1639407
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:217-243
Template-Type: ReDIF-Article 1.0
Author-Name: Laura Forastiere
Author-X-Name-First: Laura
Author-X-Name-Last: Forastiere
Author-Name: Patrizia Lattarulo
Author-X-Name-First: Patrizia
Author-X-Name-Last: Lattarulo
Author-Name: Marco Mariani
Author-X-Name-First: Marco
Author-X-Name-Last: Mariani
Author-Name: Fabrizia Mealli
Author-X-Name-First: Fabrizia
Author-X-Name-Last: Mealli
Author-Name: Laura Razzolini
Author-X-Name-First: Laura
Author-X-Name-Last: Razzolini
Title: Exploring Encouragement, Treatment, and Spillover Effects Using Principal Stratification, With Application to a Field Experiment on Teens’ Museum Attendance
Abstract:
This article revisits results from a field experiment, conducted in Florence, Italy, to study the effects of incentives provided to high school teens to motivate them to visit art museums. In the experiment, different classes of students were randomized to three types of encouragement and were offered a free visit to a main museum in the city. Using the principal stratification framework, the article explores causal pathways that may lead students to increase future visits, as induced by the encouragement received, or by the individual experience of the proposed free museum visit, or by the spillover of classmates’ experience. We do so by estimating and interpreting the causal effects of the three forms of encouragement within the principal strata defined by compliance behaviors. Bayesian inferential methods are used to derive the posterior distributions of weakly identified causal parameters.
Journal: Journal of Business & Economic Statistics
Pages: 244-258
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1647843
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1647843
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:244-258
Template-Type: ReDIF-Article 1.0
Author-Name: Peter Reinhard Hansen
Author-X-Name-First: Peter Reinhard
Author-X-Name-Last: Hansen
Author-Name: Matthias Schmidtblaicher
Author-X-Name-First: Matthias
Author-X-Name-Last: Schmidtblaicher
Title: A Dynamic Model of Vaccine Compliance: How Fake News Undermined the Danish HPV Vaccine Program
Abstract:
Increased vaccine hesitancy presents challenges to public health and undermines efforts to eradicate diseases such as measles, rubella, and polio. The decline is partly attributed to misconceptions that are shared on social media, such as the debunked association between vaccines and autism. Perhaps more damaging to vaccine uptake are cases where trusted mainstream media run stories that exaggerate the risks associated with vaccines. It is important to understand the underlying causes of vaccine refusal, because these may be prevented, or countered, in a timely manner by educational campaigns. In this article, we develop a dynamic model of vaccine compliance that can help pinpoint events that disrupted vaccine compliance. We apply the framework to Danish HPV vaccine data, where compliance declined sharply following the broadcast of a controversial TV documentary, and we show that media coverage significantly predicts vaccine uptake.
Journal: Journal of Business & Economic Statistics
Pages: 259-271
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1623045
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1623045
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:259-271
Template-Type: ReDIF-Article 1.0
Author-Name: Qingyuan Zhao
Author-X-Name-First: Qingyuan
Author-X-Name-Last: Zhao
Author-Name: Trevor Hastie
Author-X-Name-First: Trevor
Author-X-Name-Last: Hastie
Title: Causal Interpretations of Black-Box Models
Abstract:
The fields of machine learning and causal inference have developed many concepts, tools, and theory that are potentially useful for each other. Through exploring the possibility of extracting causal interpretations from black-box machine-trained models, we briefly review the languages and concepts in causal inference that may be interesting to machine learning researchers. We start with the curious observation that Friedman’s partial dependence plot has exactly the same formula as Pearl’s back-door adjustment and discuss three requirements for making causal interpretations: a model with good predictive performance, some domain knowledge in the form of a causal diagram, and suitable visualization tools. We provide several illustrative examples and find some interesting and potentially causal relations using visualization tools for black-box models.
Journal: Journal of Business & Economic Statistics
Pages: 272-281
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1624293
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1624293
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:272-281
Template-Type: ReDIF-Article 1.0
Author-Name: Jing Tao
Author-X-Name-First: Jing
Author-X-Name-Last: Tao
Title: Empirical Likelihood Ratio Tests of Conditional Moment Restrictions With Unknown Functions
Abstract:
This article introduces empirical likelihood ratio tests for conditional moment models in which the unknown parameter contains infinite-dimensional components. We allow unknown functions to be included in the conditional moment restrictions. We discuss (1) the limiting distribution of the sieve conditional empirical likelihood ratio (SCELR) test statistic for functionals of parameters under the null hypothesis and local alternatives; and (2) the limiting distribution of the SCELR test statistic for conditional moment restrictions (a consistent specification test) under the null hypothesis and local alternatives. A Monte Carlo study examines finite sample performance. We then apply these tests in an empirical application to construct confidence intervals for Engel curves and test restrictions on the curves.
Journal: Journal of Business & Economic Statistics
Pages: 282-293
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1647845
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1647845
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:282-293
Template-Type: ReDIF-Article 1.0
Author-Name: Ignace De Vos
Author-X-Name-First: Ignace
Author-X-Name-Last: De Vos
Author-Name: Gerdie Everaert
Author-X-Name-First: Gerdie
Author-X-Name-Last: Everaert
Title: Bias-Corrected Common Correlated Effects Pooled Estimation in Dynamic Panels
Abstract:
This article extends the common correlated effects pooled (CCEP) estimator to homogeneous dynamic panels. In this setting, CCEP suffers from a large bias when the time span (T) of the dataset is fixed. We develop a bias-corrected CCEP estimator that is consistent as the number of cross-sectional units (N) tends to infinity, for T fixed or growing large, provided that the specification is augmented with a sufficient number of cross-sectional averages, and lags thereof. Monte Carlo experiments show that the correction offers strong improvements in terms of bias and variance. We apply our approach to estimate the dynamic impact of temperature shocks on aggregate output growth.
Journal: Journal of Business & Economic Statistics
Pages: 294-306
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1654879
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1654879
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:294-306
Template-Type: ReDIF-Article 1.0
Author-Name: Gergely Ganics
Author-X-Name-First: Gergely
Author-X-Name-Last: Ganics
Author-Name: Atsushi Inoue
Author-X-Name-First: Atsushi
Author-X-Name-Last: Inoue
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Title: Confidence Intervals for Bias and Size Distortion in IV and Local Projections-IV Models
Abstract:
In this article, we propose methods to construct confidence intervals for the bias of the two-stage least squares estimator, and the size distortion of the associated Wald test in instrumental variables models with heteroscedasticity and serial correlation. Importantly, our framework covers the local projections-instrumental variable model as well. Unlike tests for weak instruments, whose distributions are nonstandard and depend on nuisance parameters that cannot be consistently estimated, the confidence intervals for the strength of identification are straightforward and computationally easy to calculate, as they are obtained from inverting a chi-squared distribution. Furthermore, they provide more information to researchers on instrument strength than the binary decision offered by tests. Monte Carlo simulations show that the confidence intervals have good, albeit in some cases conservative, small-sample coverage. We illustrate the usefulness of the proposed methods in two empirical situations: the estimation of the intertemporal elasticity of substitution in a linearized Euler equation, and government spending multipliers. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 307-324
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1660175
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1660175
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:307-324
Template-Type: ReDIF-Article 1.0
Author-Name: Shunan Zhao
Author-X-Name-First: Shunan
Author-X-Name-Last: Zhao
Author-Name: Ruiqi Liu
Author-X-Name-First: Ruiqi
Author-X-Name-Last: Liu
Author-Name: Zuofeng Shang
Author-X-Name-First: Zuofeng
Author-X-Name-Last: Shang
Title: Statistical Inference on Panel Data Models: A Kernel Ridge Regression Method
Abstract:
We propose statistical inferential procedures for nonparametric panel data models with interactive fixed effects in a kernel ridge regression framework. Compared with the traditional sieve methods, our method is automatic in the sense that it does not require the choice of basis functions and truncation parameters. The model complexity is controlled by a continuous regularization parameter which can be automatically selected by generalized cross-validation. Based on empirical process theory and functional analysis tools, we derive the joint asymptotic distributions for the estimators in the heterogeneous setting. These joint asymptotic results are then used to construct the confidence intervals for the regression means and the prediction intervals for future observations, both being the first provably valid intervals in the literature. The marginal asymptotic normality of the functional estimators in a homogeneous setting is also obtained. Our estimators can also be readily modified and applied to other widely used semiparametric models, such as partially linear models. Simulation and real data analyses demonstrate the advantages of our method. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 325-337
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1660176
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1660176
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:325-337
Template-Type: ReDIF-Article 1.0
Author-Name: Marcelo Fernandes
Author-X-Name-First: Marcelo
Author-X-Name-Last: Fernandes
Author-Name: Emmanuel Guerre
Author-X-Name-First: Emmanuel
Author-X-Name-Last: Guerre
Author-Name: Eduardo Horta
Author-X-Name-First: Eduardo
Author-X-Name-Last: Horta
Title: Smoothing Quantile Regressions
Abstract:
We propose to smooth the objective function, rather than only the indicator in the check function, in a linear quantile regression context. Not only does the resulting smoothed quantile regression estimator yield a lower mean squared error and a more accurate Bahadur–Kiefer representation than the standard estimator, but it is also asymptotically differentiable. We exploit the latter to propose a quantile density estimator that does not suffer from the curse of dimensionality. This means estimating the conditional density function without worrying about the dimension of the covariate vector. It also allows for two-stage efficient quantile regression estimation. Our asymptotic theory holds uniformly with respect to the bandwidth and quantile level. Finally, we propose a rule of thumb for choosing the smoothing bandwidth that should approximate well the optimal bandwidth. Simulations confirm that our smoothed quantile regression estimator indeed performs very well in finite samples. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 338-357
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1660177
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1660177
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:338-357
Template-Type: ReDIF-Article 1.0
Author-Name: Wallice Ao
Author-X-Name-First: Wallice
Author-X-Name-Last: Ao
Author-Name: Sebastian Calonico
Author-X-Name-First: Sebastian
Author-X-Name-Last: Calonico
Author-Name: Ying-Ying Lee
Author-X-Name-First: Ying-Ying
Author-X-Name-Last: Lee
Title: Multivalued Treatments and Decomposition Analysis: An Application to the WIA Program
Abstract:
This article provides a general estimation and inference framework to study how different levels of program participation affect participants’ outcomes. We decompose differences in the outcome distribution into (i) a structure effect, arising due to the conditional outcome distributions given covariates associated with different levels of participation; and (ii) a composition effect, arising due to differences in the distributions of observable characteristics. These counterfactual differences are equivalent to the multivalued treatment effects for the treated under a conditional independence assumption. We propose efficient nonparametric estimators based on propensity score weighting together with uniform inference theory. We employ our methods to study the effects of the Workforce Investment Act (WIA) programs on participants’ earnings. We find that heterogeneity in levels of program participation is an important dimension to evaluate the WIA and other social programs in which participation varies. The results of this article, both theoretically and empirically, provide a rigorous assessment of intervention programs and relevant suggestions to improve their performance and cost-effectiveness. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 358-371
Issue: 1
Volume: 39
Year: 2021
Month: 1
X-DOI: 10.1080/07350015.2019.1660664
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1660664
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:1:p:358-371
Template-Type: ReDIF-Article 1.0
Author-Name: Gaurab Aryal
Author-X-Name-First: Gaurab
Author-X-Name-Last: Aryal
Author-Name: Maria F. Gabrielli
Author-X-Name-First: Maria F.
Author-X-Name-Last: Gabrielli
Author-Name: Quang Vuong
Author-X-Name-First: Quang
Author-X-Name-Last: Vuong
Title: Semiparametric Estimation of First-Price Auction Models
Abstract:
In this article, we propose a two-step semiparametric procedure to estimate first-price auction models. In the first step, we estimate the bid density and distribution using a local polynomial method, and recover a sample of (pseudo) private values. In the second step, we apply the method of moments to the sample of private values to estimate a finite set of parameters that characterize the density of private values. We show that our estimator attains the parametric consistency rate and is asymptotically normal, and we determine its asymptotic variance. The advantage of our approach is that it can accommodate multiple auction covariates. Monte Carlo exercises show that the estimator performs well both in estimating the value density and in choosing the revenue maximizing reserve price. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 373-385
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1665530
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1665530
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:373-385
Template-Type: ReDIF-Article 1.0
Author-Name: Heng Lian
Author-X-Name-First: Heng
Author-X-Name-Last: Lian
Author-Name: Xinghao Qiao
Author-X-Name-First: Xinghao
Author-X-Name-Last: Qiao
Author-Name: Wenyang Zhang
Author-X-Name-First: Wenyang
Author-X-Name-Last: Zhang
Title: Homogeneity Pursuit in Single Index Models based Panel Data Analysis
Abstract:
Panel data analysis is an important topic in statistics and econometrics. Traditionally, in panel data analysis, all individuals are assumed to share the same unknown parameters, e.g., the same coefficients of covariates when linear models are used, and the differences between the individuals are accounted for by cluster effects. This kind of modelling only makes sense if our main interest is in the global trend, because it cannot tell us anything about the individual attributes, which are sometimes very important. In this paper, we propose a model for panel data analysis based on single index models embedded with homogeneity, which builds the individual attributes into the model and is parsimonious at the same time. We develop a data driven approach to identify the structure of homogeneity, and estimate the unknown parameters and functions based on the identified structure. Asymptotic properties of the resulting estimators are established. Intensive simulation studies conducted in this paper also show that the resulting estimators work very well when the sample size is finite. Finally, the proposed modelling is applied to a public financial dataset and a UK climate dataset, and the results reveal some interesting findings.
Journal: Journal of Business & Economic Statistics
Pages: 386-401
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1665531
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1665531
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:386-401
Template-Type: ReDIF-Article 1.0
Author-Name: Geoffrey R. Dunbar
Author-X-Name-First: Geoffrey R.
Author-X-Name-Last: Dunbar
Author-Name: Arthur Lewbel
Author-X-Name-First: Arthur
Author-X-Name-Last: Lewbel
Author-Name: Krishna Pendakur
Author-X-Name-First: Krishna
Author-X-Name-Last: Pendakur
Title: Identification of Random Resource Shares in Collective Households Without Preference Similarity Restrictions
Abstract:
Resource shares, defined as the fraction of total household spending going to each person in a household, are important for assessing individual material well-being, inequality, and poverty. They are difficult to identify because consumption is measured typically at the household level, and many goods are jointly consumed, so that individual level consumption in multi-person households is not directly observed. We consider random resource shares, which vary across observationally identical households. We provide theorems that identify the distribution of random resource shares across households, including children’s shares. We also provide a new method of identifying the level of fixed or random resource shares that does not require previously needed preference similarity restrictions or marriage market assumptions. Our results can be applied to data with or without price variation. We apply our results to households in Malawi, estimating the distributions of child and of female poverty across households.
Journal: Journal of Business & Economic Statistics
Pages: 402-421
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1665532
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1665532
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:402-421
Template-Type: ReDIF-Article 1.0
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Author-Name: Andreas Steinmayr
Author-X-Name-First: Andreas
Author-X-Name-Last: Steinmayr
Title: A Framework for Separating Individual-Level Treatment Effects From Spillover Effects
Abstract:
This article suggests a causal framework for separating individual-level treatment effects and spillover effects such as general equilibrium, interference, or interaction effects related to treatment distribution. We relax the stable unit treatment value assumption, which assumes away treatment-dependent interaction between study participants, and permit spillover effects within aggregates, for example, regions. Based on our framework, we systematically categorize the individual-level and spillover effects considered in the previous literature and clarify the assumptions required for identification under different designs, for instance, based on randomization or selection on observables. Furthermore, we propose a novel difference-in-differences approach and apply it to a policy intervention extending unemployment benefit durations in selected regions of Austria that arguably affected ineligibles in treated regions through general equilibrium effects in local labor markets.
Journal: Journal of Business & Economic Statistics
Pages: 422-436
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1668795
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1668795
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:422-436
Template-Type: ReDIF-Article 1.0
Author-Name: Wilson Ye Chen
Author-X-Name-First: Wilson Ye
Author-X-Name-Last: Chen
Author-Name: Richard H. Gerlach
Author-X-Name-First: Richard H.
Author-X-Name-Last: Gerlach
Title: Semiparametric GARCH via Bayesian Model Averaging
Abstract:
As the dynamic structure of financial markets is subject to dramatic change, a model capable of providing consistently accurate volatility estimates should not make rigid assumptions on how prices change over time. Most volatility models impose a particular parametric functional form that relates an observed price change to a volatility forecast (news impact function). Here, a new class of functional coefficient semiparametric volatility models is proposed, where the news impact function is allowed to be any smooth function. The ability of the proposed model to estimate volatility is studied and compared to the well-known parametric proposals, in both a simulation study and an empirical study with real financial market data. The news impact function is estimated using a Bayesian model averaging approach, implemented via a carefully developed Markov chain Monte Carlo sampling algorithm. Using simulations it is shown that the proposed flexible semiparametric model is able to learn the shape of the news impact function very effectively, from observed data. When applied to real financial time series, the proposed model suggests that news impact functions have quite different shapes over different asset types, but a consistent shape within the same asset class. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 437-452
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1668796
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1668796
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:437-452
Template-Type: ReDIF-Article 1.0
Author-Name: Juan Carlos Escanciano
Author-X-Name-First: Juan Carlos
Author-X-Name-Last: Escanciano
Author-Name: Javier Hualde
Author-X-Name-First: Javier
Author-X-Name-Last: Hualde
Title: Measuring Asset Market Linkages: Nonlinear Dependence and Tail Risk
Abstract:
Traditional measures of dependence in time series are based on correlations or periodograms. These are adequate in many circumstances but, in others, especially when trying to assess market linkages and tail risk during abnormal times (e.g., financial contagion), they might be inappropriate. In particular, popular tail dependence measures based on exceedance correlations and marginal expected shortfall (MES) have large variances and also contain limited information on tail risk. Motivated by these limitations, we introduce the (tail-restricted) integrated regression function, and we show how it characterizes conditional dependence and persistence. We propose simple estimates for these measures and establish their asymptotic properties. We employ the proposed methods to analyze the dependence structure of some of the major international stock market indices before, during, and after the 2007–2009 financial crisis. Monte Carlo simulations and the application show that our new measures are more reliable and accurate than competing methods based on MES or exceedance correlations for testing tail dependence. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 453-465
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1668797
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1668797
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:453-465
Template-Type: ReDIF-Article 1.0
Author-Name: Haroon Mumtaz
Author-X-Name-First: Haroon
Author-X-Name-Last: Mumtaz
Author-Name: Alberto Musso
Author-X-Name-First: Alberto
Author-X-Name-Last: Musso
Title: The Evolving Impact of Global, Region-Specific, and Country-Specific Uncertainty
Abstract:
We develop a dynamic factor model with time-varying parameters and stochastic volatility, estimate it using a large panel of macroeconomic and financial data for 22 countries and decompose the variance of each variable in terms of contributions from uncertainty common to all countries (“global uncertainty”), region-specific uncertainty, and country-specific uncertainty. Among other findings, the estimates suggest that global uncertainty plays a primary role in explaining the volatility of inflation, interest rates, and stock prices, although to a varying extent over time, while all uncertainty components are found to play a nonnegligible role for real economic activity, credit, and money for most countries. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 466-481
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1668798
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1668798
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:466-481
Template-Type: ReDIF-Article 1.0
Author-Name: Jean-Pierre Florens
Author-X-Name-First: Jean-Pierre
Author-X-Name-Last: Florens
Author-Name: Anna Simoni
Author-X-Name-First: Anna
Author-X-Name-Last: Simoni
Title: Gaussian Processes and Bayesian Moment Estimation
Abstract:
Given a set of moment restrictions (MRs) that overidentify a parameter θ, we investigate a semiparametric Bayesian approach for inference on θ that does not restrict the data distribution F apart from the MRs. As main contribution, we construct a degenerate Gaussian process prior that, conditionally on θ, restricts the F generated by this prior to satisfy the MRs with probability one. Our prior works even in the more involved case where the number of MRs is larger than the dimension of θ. We demonstrate that the corresponding posterior for θ is computationally convenient. Moreover, we show that there exists a link between our procedure, the generalized empirical likelihood with quadratic criterion and the limited information likelihood-based procedures. We provide a frequentist validation of our procedure by showing consistency and asymptotic normality of the posterior distribution of θ. The finite sample properties of our method are illustrated through Monte Carlo experiments and we provide an application to demand estimation in the airline market.
Journal: Journal of Business & Economic Statistics
Pages: 482-492
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1668799
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1668799
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:482-492
Template-Type: ReDIF-Article 1.0
Author-Name: Dimitris Korobilis
Author-X-Name-First: Dimitris
Author-X-Name-Last: Korobilis
Title: High-Dimensional Macroeconomic Forecasting Using Message Passing Algorithms
Abstract:
This article proposes two distinct contributions to econometric analysis of large information sets and structural instabilities. First, it treats a regression model with time-varying coefficients, stochastic volatility, and exogenous predictors, as an equivalent high-dimensional static regression problem with thousands of covariates. Inference in this specification proceeds using Bayesian hierarchical priors that shrink the high-dimensional vector of coefficients either toward zero or time-invariance. Second, it introduces the frameworks of factor graphs and message passing as a means of designing efficient Bayesian estimation algorithms. In particular, a generalized approximate message passing algorithm is derived that has low algorithmic complexity and is trivially parallelizable. The result is a comprehensive methodology that can be used to estimate time-varying parameter regressions with an arbitrarily large number of exogenous predictors. In a forecasting exercise for U.S. price inflation this methodology is shown to work very well. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 493-504
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1677472
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1677472
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:493-504
Template-Type: ReDIF-Article 1.0
Author-Name: James G. MacKinnon
Author-X-Name-First: James G.
Author-X-Name-Last: MacKinnon
Author-Name: Morten Ørregaard Nielsen
Author-X-Name-First: Morten Ørregaard
Author-X-Name-Last: Nielsen
Author-Name: Matthew D. Webb
Author-X-Name-First: Matthew D.
Author-X-Name-Last: Webb
Title: Wild Bootstrap and Asymptotic Inference With Multiway Clustering
Abstract:
We study two cluster-robust variance estimators (CRVEs) for regression models with clustering in two dimensions and give conditions under which t-statistics based on each of them yield asymptotically valid inferences. In particular, one of the CRVEs requires stronger assumptions about the nature of the intra-cluster correlations. We then propose several wild bootstrap procedures and state conditions under which they are asymptotically valid for each type of t-statistic. Extensive simulations suggest that using certain bootstrap procedures with one of the t-statistics generally performs very well. An empirical example confirms that bootstrap inferences can differ substantially from conventional ones.
Journal: Journal of Business & Economic Statistics
Pages: 505-519
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1677473
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1677473
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:505-519
Template-Type: ReDIF-Article 1.0
Author-Name: Laurent Callot
Author-X-Name-First: Laurent
Author-X-Name-Last: Callot
Author-Name: Mehmet Caner
Author-X-Name-First: Mehmet
Author-X-Name-Last: Caner
Author-Name: A. Özlem Önder
Author-X-Name-First: A. Özlem
Author-X-Name-Last: Önder
Author-Name: Esra Ulaşan
Author-X-Name-First: Esra
Author-X-Name-Last: Ulaşan
Title: A Nodewise Regression Approach to Estimating Large Portfolios
Abstract:
This article investigates the large sample properties of the variance, weights, and risk of high-dimensional portfolios where the inverse of the covariance matrix of excess asset returns is estimated using a technique called nodewise regression. Nodewise regression provides a direct estimator for the inverse covariance matrix using the least absolute shrinkage and selection operator to estimate the entries of a sparse precision matrix. We show that the variance, weights, and risk of the global minimum variance portfolios and the Markowitz mean-variance portfolios are consistently estimated with more assets than observations. We show, empirically, that the nodewise regression-based approach performs well in comparison to factor models and shrinkage methods. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 520-531
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1683018
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1683018
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:520-531
Template-Type: ReDIF-Article 1.0
Author-Name: Thomas M. Russell
Author-X-Name-First: Thomas M.
Author-X-Name-Last: Russell
Title: Sharp Bounds on Functionals of the Joint Distribution in the Analysis of Treatment Effects
Abstract:
This article proposes an identification and estimation method that allows researchers to bound continuous functionals of the joint distribution of potential outcomes from the literature on treatment effects. The focus is on a model where no restrictions are imposed on treatment selection. The method can sharply bound interesting parameters when analytical bounds are difficult to derive, can be used in settings in which instruments are available, and can easily accommodate additional model constraints. However, computational considerations for the method are found to be important and are discussed in detail. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 532-546
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1684300
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1684300
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:532-546
Template-Type: ReDIF-Article 1.0
Author-Name: Jasmien De Winne
Author-X-Name-First: Jasmien
Author-X-Name-Last: De Winne
Author-Name: Gert Peersman
Author-X-Name-First: Gert
Author-X-Name-Last: Peersman
Title: The Impact of Food Prices on Conflict Revisited
Abstract:
Studies that examine the impact of food prices on conflict usually assume that (all) changes in international food prices are exogenous shocks for individual countries or local areas. By isolating strictly exogenous shifts in global food commodity prices, we show that this assumption could seriously distort estimations of the impact on conflict in African regions. Specifically, we show that increases in food prices that are caused by harvest shocks outside Africa raise conflict significantly, whereas a “naive” regression of conflict on international food prices uncovers an inverse relationship. We also find that higher food prices lead to more conflict in regions with more agricultural production. Again, we document that failing to account for exogenous price changes introduces a considerable bias in the estimated impact. In addition, we show that the conventional approach to evaluating such effects, that is, estimations that include time fixed effects, ignores an important positive baseline effect that is common to all regions. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 547-560
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1684301
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1684301
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:547-560
Template-Type: ReDIF-Article 1.0
Author-Name: Toru Kitagawa
Author-X-Name-First: Toru
Author-X-Name-Last: Kitagawa
Author-Name: Aleksey Tetenov
Author-X-Name-First: Aleksey
Author-X-Name-Last: Tetenov
Title: Equality-Minded Treatment Choice
Abstract:
The goal of many randomized experiments and quasi-experimental studies in economics is to inform policies that aim to raise incomes and reduce economic inequality. A policy maximizing the sum of individual incomes may not be desirable if it magnifies economic inequality and post-treatment redistribution of income is infeasible. This article develops a method to estimate the optimal treatment assignment policy based on observable individual covariates when the policy objective is to maximize an equality-minded rank-dependent social welfare function, which puts higher weight on individuals with lower-ranked outcomes. We estimate the optimal policy by maximizing a sample analog of the rank-dependent welfare over a properly constrained set of policies. We show that the average social welfare attained by our estimated policy converges to the maximal attainable welfare at the n−1/2 rate uniformly over a large class of data distributions when the propensity score is known. We also show that this rate is minimax optimal. We provide an application of our method using data from the National JTPA Study. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 561-574
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1688664
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1688664
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:561-574
Template-Type: ReDIF-Article 1.0
Author-Name: Santiago Pereda-Fernández
Author-X-Name-First: Santiago
Author-X-Name-Last: Pereda-Fernández
Title: Copula-Based Random Effects Models for Clustered Data
Abstract:
In a binary choice panel data framework, the probabilities of the outcomes of several individuals depend on the correlation of the unobserved heterogeneity. I propose a random effects estimator that models the correlation of the unobserved heterogeneity among individuals in the same cluster using a copula. I discuss the asymptotic efficiency of the estimator relative to standard random effects estimators, and I propose a specification test to choose the copula. The implementation of the estimator requires the numerical approximation of high-dimensional integrals, for which I propose an algorithm for Archimedean copulas that does not suffer from the curse of dimensionality. This method is illustrated with an application to labor supply in married couples, finding that about half of the difference in the probability of a woman being employed when her husband is also employed, relative to when her husband is unemployed, is explained by correlation in the unobservables. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 575-588
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1688665
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1688665
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:575-588
Template-Type: ReDIF-Article 1.0
Author-Name: Christian M. Hafner
Author-X-Name-First: Christian M.
Author-X-Name-Last: Hafner
Author-Name: Dimitra Kyriakopoulou
Author-X-Name-First: Dimitra
Author-X-Name-Last: Kyriakopoulou
Title: Exponential-Type GARCH Models With Linear-in-Variance Risk Premium
Abstract:
One of the implications of the intertemporal capital asset pricing model is that the risk premium of the market portfolio is a linear function of its variance. Yet, estimation theory of classical GARCH-in-mean models with linear-in-variance risk premium requires strong assumptions and is incomplete. We show that exponential-type GARCH models such as EGARCH or Log-GARCH are more natural in dealing with linear-in-variance risk premia. For the popular and more difficult case of EGARCH-in-mean, we derive conditions for the existence of a unique stationary and ergodic solution and invertibility following a stochastic recurrence equation approach. We then show consistency and asymptotic normality of the quasi-maximum likelihood estimator under weak moment assumptions. An empirical application estimates the dynamic risk premia of a variety of stock indices using both EGARCH-M and Log-GARCH-M models. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 589-603
Issue: 2
Volume: 39
Year: 2021
Month: 3
X-DOI: 10.1080/07350015.2019.1691564
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1691564
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:2:p:589-603
Template-Type: ReDIF-Article 1.0
Author-Name: Long Feng
Author-X-Name-First: Long
Author-X-Name-Last: Feng
Author-Name: Binghui Liu
Author-X-Name-First: Binghui
Author-X-Name-Last: Liu
Author-Name: Yanyuan Ma
Author-X-Name-First: Yanyuan
Author-X-Name-Last: Ma
Title: An Inverse Norm Sign Test of Location Parameter for High-Dimensional Data
Abstract:
We consider the one sample location testing problem for high-dimensional data, where the data dimension is potentially much larger than the sample size. We devise a novel inverse norm sign test (INST) that is consistent and has substantially higher power than many existing popular tests. We further construct a general class of weighted spatial sign tests which includes these existing tests, and show that INST is the optimal member within this class, in that it is consistent and is uniformly more powerful than all other members. We establish the asymptotic null distribution and local power property of the class of tests rigorously. Extensive numerical experiments demonstrate the superiority of INST in terms of both efficiency and robustness.
Journal: Journal of Business & Economic Statistics
Pages: 807-815
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1736084
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1736084
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:807-815
Template-Type: ReDIF-Article 1.0
Author-Name: Florian Huber
Author-X-Name-First: Florian
Author-X-Name-Last: Huber
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: Luca Onorante
Author-X-Name-First: Luca
Author-X-Name-Last: Onorante
Title: Inducing Sparsity and Shrinkage in Time-Varying Parameter Models
Abstract:
Time-varying parameter (TVP) models have the potential to be over-parameterized, particularly when the number of variables in the model is large. Global-local priors are increasingly used to induce shrinkage in such models. But the estimates produced by these priors can still have appreciable uncertainty. Sparsification has the potential to reduce this uncertainty and improve forecasts. In this article, we develop computationally simple methods which both shrink and sparsify TVP models. In a simulated data exercise, we show the benefits of our shrink-then-sparsify approach in a variety of sparse and dense TVP regressions. In a macroeconomic forecasting exercise, we find our approach to substantially improve forecast performance relative to shrinkage alone.
Journal: Journal of Business & Economic Statistics
Pages: 669-683
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1713796
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1713796
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:669-683
Template-Type: ReDIF-Article 1.0
Author-Name: Kei Miyazaki
Author-X-Name-First: Kei
Author-X-Name-Last: Miyazaki
Author-Name: Takahiro Hoshino
Author-X-Name-First: Takahiro
Author-X-Name-Last: Hoshino
Author-Name: Ulf Böckenholt
Author-X-Name-First: Ulf
Author-X-Name-Last: Böckenholt
Title: Dynamic Two Stage Modeling for Category-Level and Brand-Level Purchases Using Potential Outcome Approach With Bayes Inference
Abstract:
We propose an econometric two-stage model for category-level purchase and brand-level purchase that allows for simultaneous brand purchases in the analysis of scanner panel data. The proposed model formulation is consistent with the traditional theory of consumer behavior. We conduct Bayesian estimation with the Markov chain Monte Carlo algorithm for our proposed model. The simulation studies show that previously proposed related models can cause severe bias in predicting future brand choices, while the proposed method can effectively predict them. Additionally, in a marketing application, the proposed method can examine brand switching behaviors that existing methods cannot. Moreover, we show that the prediction accuracy of the proposed method is higher than that of existing methods.
Journal: Journal of Business & Economic Statistics
Pages: 622-635
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2019.1702047
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1702047
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:622-635
Template-Type: ReDIF-Article 1.0
Author-Name: Giuseppe Buccheri
Author-X-Name-First: Giuseppe
Author-X-Name-Last: Buccheri
Author-Name: Fulvio Corsi
Author-X-Name-First: Fulvio
Author-X-Name-Last: Corsi
Author-Name: Stefano Peluso
Author-X-Name-First: Stefano
Author-X-Name-Last: Peluso
Title: High-Frequency Lead-Lag Effects and Cross-Asset Linkages: A Multi-Asset Lagged Adjustment Model
Abstract:
Motivated by the empirical evidence of high-frequency lead-lag effects and cross-asset linkages, we introduce a multi-asset price formation model which generalizes standard univariate microstructure models of lagged price adjustment. Econometric inference on such model provides: (i) a unified statistical test for the presence of lead-lag correlations in the latent price process and for the existence of a multi-asset price formation mechanism; (ii) separate estimation of contemporaneous and lagged dependencies; (iii) an unbiased estimator of the integrated covariance of the efficient martingale price process that is robust to microstructure noise, asynchronous trading, and lead-lag dependencies. Through an extensive simulation study, we compare the proposed estimator to alternative approaches and show its advantages in recovering the true lead-lag structure of the latent price process. Our application to a set of NYSE stocks provides empirical evidence for the existence of a multi-asset price formation mechanism and sheds light on its market microstructure determinants. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 605-621
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2019.1697699
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1697699
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:605-621
Template-Type: ReDIF-Article 1.0
Author-Name: Lung-Fei Lee
Author-X-Name-First: Lung-Fei
Author-X-Name-Last: Lee
Author-Name: Xiaodong Liu
Author-X-Name-First: Xiaodong
Author-X-Name-Last: Liu
Author-Name: Eleonora Patacchini
Author-X-Name-First: Eleonora
Author-X-Name-Last: Patacchini
Author-Name: Yves Zenou
Author-X-Name-First: Yves
Author-X-Name-Last: Zenou
Title: Who is the Key Player? A Network Analysis of Juvenile Delinquency
Abstract:
This article presents a methodology for empirically identifying the key player, whose removal from the network leads to the optimal change in aggregate activity level in equilibrium [Ballester, C., Calvó-Armengol, A., and Zenou, Y. (2006), “Who’s Who in Networks. Wanted: The Key Player,” Econometrica, 74: 1403–1417], allowing the network links to rewire after the removal of the key player. First, we propose an IV-based estimation strategy for the social-interaction effect, which is needed to determine the equilibrium activity level of a network, taking into account the potential network endogeneity. Next, to simulate the network evolution process after the removal of the key player, we adopt the general network formation model in Mele [(2017), “A Structural Model of Dense Network Formation,” Econometrica, 85: 825–850] and extend it to incorporate the unobserved individual heterogeneity in link formation decisions. We illustrate the methodology by providing the key player rankings in juvenile delinquency using information on friendship networks among U.S. teenagers. We find that the key player is not necessarily the most active delinquent or the delinquent who ranks the highest in standard (not microfounded) centrality measures. We also find that, compared to a policy that removes the most active delinquent from the network, a key-player-targeted policy leads to a much higher delinquency reduction.
Journal: Journal of Business & Economic Statistics
Pages: 849-857
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1737082
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1737082
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:849-857
Template-Type: ReDIF-Article 1.0
Author-Name: Michael Stanley Smith
Author-X-Name-First: Michael Stanley
Author-X-Name-Last: Smith
Author-Name: Nadja Klein
Author-X-Name-First: Nadja
Author-X-Name-Last: Klein
Title: Bayesian Inference for Regression Copulas
Abstract:
We propose a new semiparametric distributional regression smoother that is based on a copula decomposition of the joint distribution of the vector of response values. The copula is high-dimensional and constructed by inversion of a pseudo regression, where the conditional mean and variance are semiparametric functions of covariates modeled using regularized basis functions. By integrating out the basis coefficients, an implicit copula process on the covariate space is obtained, which we call a “regression copula.” We combine this with a nonparametric margin to define a copula model, where the entire distribution—including the mean and variance—of the response is a smooth semiparametric function of the covariates. The copula is estimated using both Hamiltonian Monte Carlo and variational Bayes; the latter of which is scalable to high dimensions. Using real data examples and a simulation study, we illustrate the efficacy of these estimators and the copula model. In a substantive example, we estimate the distribution of half-hourly electricity spot prices as a function of demand and two time covariates using radial bases and horseshoe regularization. The copula model produces distributional estimates that are locally adaptive with respect to the covariates, and predictions that are more accurate than those from benchmark models. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 712-728
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1721295
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1721295
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:712-728
Template-Type: ReDIF-Article 1.0
Author-Name: Chaohua Dong
Author-X-Name-First: Chaohua
Author-X-Name-Last: Dong
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Bin Peng
Author-X-Name-First: Bin
Author-X-Name-Last: Peng
Title: Varying-Coefficient Panel Data Models With Nonstationarity and Partially Observed Factor Structure
Abstract:
In this article, we study a varying-coefficient panel data model with both nonstationarity and a partially observed factor structure. Two approaches are proposed. The first approach, presented in the main text, uses a sieve-based method to estimate the unknown coefficients as well as the factors and loading functions simultaneously, while the second approach, presented in the online supplementary document, uses principal component analysis to provide an alternative estimation method. We establish asymptotic properties for both, compare the asymptotic efficiency of the two estimation methods, and examine the theoretical findings through extensive Monte Carlo simulations. In an empirical study, we use our newly proposed model and the first method to study the returns to scale of large U.S. commercial banks, where some overlooked modeling issues in the literature of production econometrics are addressed. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 700-711
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1721294
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1721294
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:700-711
Template-Type: ReDIF-Article 1.0
Author-Name: Xiye Yang
Author-X-Name-First: Xiye
Author-X-Name-Last: Yang
Title: Semiparametric Estimation in Continuous-Time: Asymptotics for Integrated Volatility Functionals with Small and Large Bandwidths
Abstract:
This article studies the estimation of integrated volatility functionals, which is a semiparametric two-step estimation problem in the nonstationary continuous-time setting. We generalize the asymptotic normality results of Jacod and Rosenbaum to a wider range of bandwidths. Moreover, we employ matrix calculus to obtain a new analytical bias correction and variance estimation method. The proposed method gives more succinct expressions than the element-by-element analytical method of the above-cited article. In addition, it has a computational advantage over the jackknife/simulation-based method proposed by Li, Liu, and Xiu. Comprehensive simulation studies demonstrate that our method has good finite sample performance for a variety of volatility functionals, including quarticity, determinant, continuous beta, and eigenvalues.
Journal: Journal of Business & Economic Statistics
Pages: 793-806
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1733583
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1733583
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:793-806
Template-Type: ReDIF-Article 1.0
Author-Name: Yifan Xia
Author-X-Name-First: Yifan
Author-X-Name-Last: Xia
Author-Name: Ling Zhang
Author-X-Name-First: Ling
Author-X-Name-Last: Zhang
Author-Name: Iris L. Li
Author-X-Name-First: Iris L.
Author-X-Name-Last: Li
Title: Multidimensional Economic Dispersion Index and Application
Abstract:
The Gini index is widely used in economics as a measure of inequality with respect to income or wealth. However, it is not applicable when the inequality level is evaluated with respect to more than one social resource. To comprehensively evaluate the social inequality level, we propose a multidimensional economic dispersion index (MEDI) based on the Lorenz hyper-surface determined by the distributions of multiple social resources. The MEDI is a natural extension of the Gini index and is equivalent to the Gini index in the presence of only one resource. We propose an estimator for the MEDI with good statistical properties and develop an algorithm to calculate the estimate. We further apply the MEDI in an empirical analysis to evaluate the social inequality level of Chinese provincial capitals. The results reveal some interesting phenomena of Chinese social inequalities and also demonstrate how the MEDI captures more information in complex economic situations than the classical Gini index.
Journal: Journal of Business & Economic Statistics
Pages: 729-740
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1730185
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1730185
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:729-740
Template-Type: ReDIF-Article 1.0
Author-Name: Pedro H. C. Sant’Anna
Author-X-Name-First: Pedro H. C.
Author-X-Name-Last: Sant’Anna
Title: Nonparametric Tests for Treatment Effect Heterogeneity With Duration Outcomes
Abstract:
This article proposes different tests for treatment effect heterogeneity when the outcome of interest, typically a duration variable, may be right-censored. The proposed tests study whether a policy (1) has zero distributional (average) effect for all subpopulations defined by covariate values, and (2) has homogeneous average effect across different subpopulations. The proposed tests are based on two-step Kaplan–Meier integrals and do not rely on parametric distributional assumptions, shape restrictions, or on restricting the potential treatment effect heterogeneity across different subpopulations. Our framework not only accommodates exogenous treatment allocation but can also account for treatment noncompliance—an important feature in many applications. The proposed tests are consistent against fixed alternatives, and can detect nonparametric alternatives converging to the null at the parametric n−1/2 rate, n being the sample size. Critical values are computed with the assistance of a multiplier bootstrap. The finite sample properties of the proposed tests are examined by means of a Monte Carlo study and an application about the effect of labor market programs on unemployment duration. Open-source software is available for implementing all proposed tests.
Journal: Journal of Business & Economic Statistics
Pages: 816-832
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1737080
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1737080
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:816-832
Template-Type: ReDIF-Article 1.0
Author-Name: Michał Gradzewicz
Author-X-Name-First: Michał
Author-X-Name-Last: Gradzewicz
Title: What Happens After an Investment Spike—Investment Events and Firm Performance
Abstract:
Our study aims at investigating the relationship between investment spikes and subsequent productivity development at the firm level. We propose a novel identification scheme for the effects of an investment spike, using matching techniques and a tailored econometric modeling. It allows us to find efficiency differentials against matched firms in periods adjacent to the spike. We show that TFP persistently falls after an investment spike, which is consistent with learning-by-doing models of firm decisions. As a result of capital deepening, labor productivity actually rises after a spike. The capital deepening of larger firms is smaller, and although the responses of TFP across size classes are similar, the labor productivity rise of smaller firms is more pronounced. Moreover, the positive correlation of the responses of labor and K/L in periods after a spike shows that investment spikes induce complementarity between production factors.
Journal: Journal of Business & Economic Statistics
Pages: 636-651
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2019.1708369
File-URL: http://hdl.handle.net/10.1080/07350015.2019.1708369
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:636-651
Template-Type: ReDIF-Article 1.0
Author-Name: Bingduo Yang
Author-X-Name-First: Bingduo
Author-X-Name-Last: Yang
Author-Name: Xiaohui Liu
Author-X-Name-First: Xiaohui
Author-X-Name-Last: Liu
Author-Name: Liang Peng
Author-X-Name-First: Liang
Author-X-Name-Last: Peng
Author-Name: Zongwu Cai
Author-X-Name-First: Zongwu
Author-X-Name-Last: Cai
Title: Unified Tests for a Dynamic Predictive Regression
Abstract:
Testing for predictability of asset returns has a long history in economics and finance. Recently, based on a simple predictive regression, Kostakis, Magdalinos, and Stamatogiannis derived a Wald-type test based on the extended instrumental variable (IVX) methodology for testing the predictability of stock returns. Demetrescu showed that the local power of the standard IVX-based test can be improved for some range of alternative hypotheses and tuning parameters when a lagged predicted variable is deliberately added to the predictive regression, which raises the important question of whether the predictive model should include a lagged predicted variable. This article proposes novel robust procedures for testing both the existence of a lagged predicted variable and the predictability of asset returns, regardless of whether the regressors are stationary, nearly integrated, or unit-root processes, and whether the AR model for the regressors includes an intercept. A simulation study confirms the good finite sample performance of the proposed tests before illustrating their practical usefulness in analyzing real financial datasets.
Journal: Journal of Business & Economic Statistics
Pages: 684-699
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1714632
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1714632
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:684-699
Template-Type: ReDIF-Article 1.0
Author-Name: Zhanfeng Wang
Author-X-Name-First: Zhanfeng
Author-X-Name-Last: Wang
Author-Name: Xianhui Liu
Author-X-Name-First: Xianhui
Author-X-Name-Last: Liu
Author-Name: Wenlu Tang
Author-X-Name-First: Wenlu
Author-X-Name-Last: Tang
Author-Name: Yuanyuan Lin
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Lin
Title: Incorporating Graphical Structure of Predictors in Sparse Quantile Regression
Abstract:
Quantile regression in high-dimensional settings is useful in analyzing high-dimensional heterogeneous data. In this article, different from existing methods in quantile regression, which treat all the predictors equally with the same prior weight, we take advantage of the graphical structure among predictors to improve the performance of parameter estimation, model selection, and prediction in sparse quantile regression. It is shown under mild conditions that the proposed method enjoys the model selection consistency and the oracle properties. An alternating direction method of multipliers algorithm with a linearization technique is proposed to implement the proposed method numerically, and its convergence is justified. Simulation studies are conducted, showing that the proposed method is superior to existing methods in terms of estimation accuracy and predictive power. The proposed method is also applied to a real dataset.
Journal: Journal of Business & Economic Statistics
Pages: 783-792
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1730859
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1730859
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:783-792
Template-Type: ReDIF-Article 1.0
Author-Name: Degui Li
Author-X-Name-First: Degui
Author-X-Name-Last: Li
Author-Name: Qi Li
Author-X-Name-First: Qi
Author-X-Name-Last: Li
Author-Name: Zheng Li
Author-X-Name-First: Zheng
Author-X-Name-Last: Li
Title: Nonparametric Quantile Regression Estimation With Mixed Discrete and Continuous Data
Abstract:
In this article, we investigate the problem of nonparametrically estimating a conditional quantile function with mixed discrete and continuous covariates. A local linear smoothing technique combining both continuous and discrete kernel functions is introduced to estimate the conditional quantile function. We propose using a fully data-driven cross-validation approach to choose the bandwidths, and further derive the asymptotic optimality theory. In addition, we also establish the asymptotic distribution and uniform consistency (with convergence rates) for the local linear conditional quantile estimators with the data-dependent optimal bandwidths. Simulations show that the proposed approach compares well with some existing methods. Finally, an empirical application with the data taken from the IMDb website is presented to analyze the relationship between box office revenues and online rating scores. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 741-756
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1730856
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1730856
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:741-756
Template-Type: ReDIF-Article 1.0
Author-Name: Cavit Pakel
Author-X-Name-First: Cavit
Author-X-Name-Last: Pakel
Author-Name: Neil Shephard
Author-X-Name-First: Neil
Author-X-Name-Last: Shephard
Author-Name: Kevin Sheppard
Author-X-Name-First: Kevin
Author-X-Name-Last: Sheppard
Author-Name: Robert F. Engle
Author-X-Name-First: Robert F.
Author-X-Name-Last: Engle
Title: Fitting Vast Dimensional Time-Varying Covariance Models
Abstract:
Estimation of time-varying covariances is a key input in risk management and asset allocation. ARCH-type multivariate models are used widely for this purpose. Estimation of such models is computationally costly and parameter estimates are meaningfully biased when applied to a moderately large number of assets. Here, we propose a novel estimation approach that suffers from neither of these issues, even when the number of assets is in the hundreds. The theory of this new method is developed in some detail. The performance of the proposed method is investigated using extensive simulation studies and empirical examples. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 652-668
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1713795
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1713795
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:652-668
Template-Type: ReDIF-Article 1.0
Author-Name: Likai Chen
Author-X-Name-First: Likai
Author-X-Name-Last: Chen
Author-Name: Weining Wang
Author-X-Name-First: Weining
Author-X-Name-Last: Wang
Author-Name: Wei Biao Wu
Author-X-Name-First: Wei Biao
Author-X-Name-Last: Wu
Title: Dynamic Semiparametric Factor Model With Structural Breaks
Abstract:
For the change-point analysis of a high-dimensional time series, we consider a semiparametric model with dynamic structural break factors. In our model, the observations are described by a few low-dimensional factors with time-invariant loading functions of the covariates. To capture the structural break, the factors are assumed to be nonstationary and to follow a vector autoregression process with a change in the parameter values. In addition, to account for the known spatial discrepancies, we introduce discrete loading functions. We study the theoretical properties of the estimates of the loading functions and the factors. Moreover, we establish both consistency and asymptotic normality for inference on the estimated breakpoint. Importantly, our results hold for both large and small breaks in the factor dependency structure. The estimation precision is further illustrated via a simulation study. Finally, we present two empirical applications in modeling the dynamics of the minimum wage policy in China and analyzing a limit order book dataset. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 757-771
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1730857
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1730857
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:757-771
Template-Type: ReDIF-Article 1.0
Author-Name: Sascha Alexander Keweloh
Author-X-Name-First: Sascha Alexander
Author-X-Name-Last: Keweloh
Title: A Generalized Method of Moments Estimator for Structural Vector Autoregressions Based on Higher Moments
Abstract:
I propose a generalized method of moments estimator for structural vector autoregressions with independent and non-Gaussian shocks. The shocks are identified by exploiting information contained in higher moments of the data. Extending the standard identification approach, which relies on the covariance, to the coskewness and cokurtosis allows the simultaneous interaction to be identified and estimated without any further restrictions. I analyze the finite sample properties of the estimator and apply it to illustrate the simultaneous interaction between economic activity, oil, and stock prices. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 772-782
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1730858
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1730858
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:772-782
Template-Type: ReDIF-Article 1.0
Author-Name: Otávio Bartalotti
Author-X-Name-First: Otávio
Author-X-Name-Last: Bartalotti
Author-Name: Quentin Brummet
Author-X-Name-First: Quentin
Author-X-Name-Last: Brummet
Author-Name: Steven Dieterle
Author-X-Name-First: Steven
Author-X-Name-Last: Dieterle
Title: A Correction for Regression Discontinuity Designs With Group-Specific Mismeasurement of the Running Variable
Abstract:
When the running variable in a regression discontinuity (RD) design is measured with error, identification of the local average treatment effect of interest will typically fail. While the form of this measurement error varies across applications, in many cases the measurement error structure is heterogeneous across different groups of observations. We develop a novel measurement error correction procedure capable of addressing heterogeneous mismeasurement structures by leveraging auxiliary information. We also provide adjusted asymptotic variance and standard errors that take into consideration the variability introduced by the estimation of nuisance parameters, and honest confidence intervals that account for potential misspecification. Simulations provide evidence that the proposed procedure corrects the bias introduced by heterogeneous measurement error and achieves empirical coverage closer to nominal test size than “naive” alternatives. Two empirical illustrations demonstrate that correcting for measurement error can either reinforce the results of a study or provide a new empirical perspective on the data.
Journal: Journal of Business & Economic Statistics
Pages: 833-848
Issue: 3
Volume: 39
Year: 2021
Month: 7
X-DOI: 10.1080/07350015.2020.1737081
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1737081
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:3:p:833-848
Template-Type: ReDIF-Article 1.0
Author-Name: Davide Delle Monache
Author-X-Name-First: Davide
Author-X-Name-Last: Delle Monache
Author-Name: Ivan Petrella
Author-X-Name-First: Ivan
Author-X-Name-Last: Petrella
Author-Name: Fabrizio Venditti
Author-X-Name-First: Fabrizio
Author-X-Name-Last: Venditti
Title: Price Dividend Ratio and Long-Run Stock Returns: A Score-Driven State Space Model
Abstract:
In this article, we develop a general framework to analyze state space models with time-varying system matrices, where time variation is driven by the score of the conditional likelihood. We derive a new filter that allows for the simultaneous estimation of the state vector and of the time-varying matrices. We use this method to study the time-varying relationship between the price dividend ratio, expected stock returns and expected dividend growth in the United States since 1880. We find a significant increase in the long-run equilibrium value of the price dividend ratio over time, associated with a fall in the long-run expected rate of return on stocks. The latter can be attributed mainly to a decrease in the natural rate of interest, as the long-run risk premium has only slightly fallen.
Journal: Journal of Business & Economic Statistics
Pages: 1054-1065
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1763805
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1763805
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1054-1065
Template-Type: ReDIF-Article 1.0
Author-Name: Damian Kozbur
Author-X-Name-First: Damian
Author-X-Name-Last: Kozbur
Title: Inference in Additively Separable Models With a High-Dimensional Set of Conditioning Variables
Abstract:
This article studies nonparametric series estimation and inference for the effect of a single variable of interest x on an outcome y in the presence of potentially high-dimensional conditioning variables z. The context is an additively separable model E[y|x,z]=g0(x)+h0(z). The model is high-dimensional in the sense that the series of approximating functions for h0(z) can have more terms than the sample size, thereby allowing z potentially to have very many measured characteristics. The model is required to be approximately sparse: h0(z) can be approximated using only a small subset of series terms whose identities are unknown. This article proposes an estimation and inference method for g0(x) called Post-Nonparametric Double Selection, which is a generalization of Post-Double Selection. Rates of convergence and asymptotic normality for the estimator are derived and hold over a large class of sparse data-generating processes. A simulation study illustrates finite sample estimation properties of the proposed estimator and coverage properties of the corresponding confidence intervals. Finally, an empirical application to college admissions policy demonstrates the practical implementation of the proposed method.
Journal: Journal of Business & Economic Statistics
Pages: 984-1000
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1753524
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1753524
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:984-1000
Template-Type: ReDIF-Article 1.0
Author-Name: Francis J. DiTraglia
Author-X-Name-First: Francis J.
Author-X-Name-Last: DiTraglia
Author-Name: Camilo García-Jimeno
Author-X-Name-First: Camilo
Author-X-Name-Last: García-Jimeno
Title: A Framework for Eliciting, Incorporating, and Disciplining Identification Beliefs in Linear Models
Abstract:
To estimate causal effects from observational data, an applied researcher must impose beliefs. The instrumental variables exclusion restriction, for example, represents the belief that the instrument has no direct effect on the outcome of interest. Yet beliefs about instrument validity do not exist in isolation. Applied researchers often discuss the likely direction of selection and the potential for measurement error in their articles but lack formal tools for incorporating this information into their analyses. Failing to use all relevant information not only leaves money on the table; it runs the risk of leading to a contradiction in which one holds mutually incompatible beliefs about the problem at hand. To address these issues, we first characterize the joint restrictions relating instrument invalidity, treatment endogeneity, and non-differential measurement error in a workhorse linear model, showing how beliefs over these three dimensions are mutually constrained by each other and the data. Using this information, we propose a Bayesian framework to help researchers elicit their beliefs, incorporate them into estimation, and ensure their mutual coherence. We conclude by illustrating our framework in a number of examples drawn from the empirical microeconomics literature.
Journal: Journal of Business & Economic Statistics
Pages: 1038-1053
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1753528
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1753528
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1038-1053
Template-Type: ReDIF-Article 1.0
Author-Name: John H. J. Einmahl
Author-X-Name-First: John H. J.
Author-X-Name-Last: Einmahl
Author-Name: Fan Yang
Author-X-Name-First: Fan
Author-X-Name-Last: Yang
Author-Name: Chen Zhou
Author-X-Name-First: Chen
Author-X-Name-Last: Zhou
Title: Testing the Multivariate Regular Variation Model
Abstract:
In this article, we propose a test for the multivariate regular variation (MRV) model. The test is based on testing whether the extreme value indices of the radial component conditional on the angular component falling in different subsets are the same. Combining the test on the constancy across extreme value indices in different directions with testing the regular variation of the radial component, we obtain the test for testing MRV. Simulation studies demonstrate the good performance of the proposed tests. We apply this test to examine two datasets used in previous studies that are assumed to follow the MRV model.
Journal: Journal of Business & Economic Statistics
Pages: 907-919
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1737533
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1737533
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:907-919
Template-Type: ReDIF-Article 1.0
Author-Name: Giuseppe Buccheri
Author-X-Name-First: Giuseppe
Author-X-Name-Last: Buccheri
Author-Name: Giacomo Bormetti
Author-X-Name-First: Giacomo
Author-X-Name-Last: Bormetti
Author-Name: Fulvio Corsi
Author-X-Name-First: Fulvio
Author-X-Name-Last: Corsi
Author-Name: Fabrizio Lillo
Author-X-Name-First: Fabrizio
Author-X-Name-Last: Lillo
Title: A Score-Driven Conditional Correlation Model for Noisy and Asynchronous Data: An Application to High-Frequency Covariance Dynamics
Abstract:
The analysis of the intraday dynamics of covariances among high-frequency returns is challenging due to asynchronous trading and market microstructure noise. Both effects lead to significant data reduction and may severely affect the estimation of the covariances if traditional methods for low-frequency data are employed. We propose to model intraday log-prices through a multivariate local-level model with score-driven covariance matrices and to treat asynchronicity as a missing value problem. The main advantages of this approach are: (i) all available data are used when filtering the covariances, (ii) market microstructure noise is taken into account, (iii) estimation is performed by standard maximum likelihood. Our empirical analysis, performed on 1-sec NYSE data, shows that opening hours are dominated by idiosyncratic risk and that a market factor progressively emerges in the second part of the day. The method can be used as a nowcasting tool for high-frequency data, allowing one to study the real-time response of covariances to macro-news announcements and to build intraday portfolios with very short optimization horizons.
Journal: Journal of Business & Economic Statistics
Pages: 920-936
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1739530
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1739530
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:920-936
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaofei Xu
Author-X-Name-First: Xiaofei
Author-X-Name-Last: Xu
Author-Name: Ying Chen
Author-X-Name-First: Ying
Author-X-Name-Last: Chen
Author-Name: Steven Kou
Author-X-Name-First: Steven
Author-X-Name-Last: Kou
Title: Discussion on “Text Selection”
Abstract:
This is a discussion of the article "Text Selection" by Kelly et al. (2021).
Journal: Journal of Business & Economic Statistics
Pages: 883-887
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2021.1942890
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1942890
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:883-887
Template-Type: ReDIF-Article 1.0
Author-Name: Markus Pelger
Author-X-Name-First: Markus
Author-X-Name-Last: Pelger
Title: Discussion of “Text Selection” by Bryan Kelly, Asaf Manela, and Alan Moreira
Journal: Journal of Business & Economic Statistics
Pages: 880-882
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2021.1948420
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1948420
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:880-882
Template-Type: ReDIF-Article 1.0
Author-Name: Tim Bollerslev
Author-X-Name-First: Tim
Author-X-Name-Last: Bollerslev
Author-Name: Jia Li
Author-X-Name-First: Jia
Author-X-Name-Last: Li
Author-Name: Leonardo Salim Saker Chaves
Author-X-Name-First: Leonardo Salim Saker
Author-X-Name-Last: Chaves
Title: Generalized Jump Regressions for Local Moments
Abstract:
We develop new high-frequency-based inference procedures for analyzing the relationship between jumps in instantaneous moments of stochastic processes. The estimation consists of two steps: the nonparametric determination of the jumps as differences in local averages, followed by a minimum-distance type estimation of the parameters of interest under general loss functions that include both least-square and more robust quantile regressions as special cases. The resulting asymptotic distribution of the estimator, derived under an infill asymptotic setting, is highly nonstandard and generally not mixed normal. In addition, we establish the validity of a novel bootstrap algorithm for making feasible inference including bias-correction. The new methods are applied in a study of the relationship between trading intensity and spot volatility in the U.S. equity market at the time of important macroeconomic news announcements.
Journal: Journal of Business & Economic Statistics
Pages: 1015-1025
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1753526
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1753526
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1015-1025
Template-Type: ReDIF-Article 1.0
Author-Name: Zifeng Zhao
Author-X-Name-First: Zifeng
Author-X-Name-Last: Zhao
Title: Dynamic Bivariate Peak Over Threshold Model for Joint Tail Risk Dynamics of Financial Markets
Abstract:
We propose a novel dynamic bivariate peak over threshold (PoT) model to study the time-varying behavior of joint tail risk in financial markets. The proposed framework provides simultaneous modeling for the dynamics of marginal and joint tail risk, and generalizes the existing tail risk literature from the univariate to the multivariate setting. We introduce a natural and interpretable tail connectedness measure and examine the dynamics of joint tail behavior of global stock markets: empirical evidence suggests markets from the same continent have time-varying and high-level joint tail risk, and tail connectedness increases during periods of crisis. We further enrich the tail risk literature by developing a novel portfolio optimization procedure based on bivariate joint tail risk minimization, which gives promising risk-rewarding performance in backtesting.
Journal: Journal of Business & Economic Statistics
Pages: 892-906
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1737083
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1737083
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:892-906
Template-Type: ReDIF-Article 1.0
Author-Name: Anne Opschoor
Author-X-Name-First: Anne
Author-X-Name-Last: Opschoor
Author-Name: André Lucas
Author-X-Name-First: André
Author-X-Name-Last: Lucas
Author-Name: István Barra
Author-X-Name-First: István
Author-X-Name-Last: Barra
Author-Name: Dick van Dijk
Author-X-Name-First: Dick
Author-X-Name-Last: van Dijk
Title: Closed-Form Multi-Factor Copula Models With Observation-Driven Dynamic Factor Loadings
Abstract:
We develop new multi-factor dynamic copula models with time-varying factor loadings and observation-driven dynamics. The new models are highly flexible, scalable to high dimensions, and ensure positivity of covariance and correlation matrices. A closed-form likelihood expression allows for straightforward parameter estimation and likelihood inference. We apply the new model to a large panel of 100 U.S. stocks over the period 2001–2014. The proposed multi-factor structure is much better than existing (single-factor) models at describing stock return dependence dynamics in high dimensions. The new factor models also improve one-step-ahead copula density forecasts and global minimum variance portfolio performance. Finally, we investigate different mechanisms to allocate firms into groups and find that a simple industry classification outperforms alternatives based on observable risk factors, such as size, value, or momentum.
Journal: Journal of Business & Economic Statistics
Pages: 1066-1079
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1763806
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1763806
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1066-1079
Template-Type: ReDIF-Article 1.0
Author-Name: Nail Kashaev
Author-X-Name-First: Nail
Author-X-Name-Last: Kashaev
Author-Name: Bruno Salcedo
Author-X-Name-First: Bruno
Author-X-Name-Last: Salcedo
Title: Discerning Solution Concepts for Discrete Games
Abstract:
The empirical analysis of discrete complete-information games has relied on behavioral restrictions in the form of solution concepts, such as Nash equilibrium. Choosing the right solution concept is crucial not just for the identification of payoff parameters, but also for the validity and informativeness of counterfactual exercises and policy implications. We say that a solution concept is discernible if it is possible to determine whether it generated the observed data on the players’ behavior and covariates. We propose a set of conditions that make it possible to discern solution concepts. In particular, our conditions are sufficient to tell whether the players’ choices emerged from Nash equilibria. We can also discriminate between rationalizable behavior, maxmin behavior, and collusive behavior. Finally, we identify the correlation structure of unobserved shocks in our model using a novel approach.
Journal: Journal of Business & Economic Statistics
Pages: 1001-1014
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1753525
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1753525
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1001-1014
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Correction
Journal: Journal of Business & Economic Statistics
Pages: 1080-1080
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2021.1970572
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1970572
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1080-1080
Template-Type: ReDIF-Article 1.0
Author-Name: Ping Yu
Author-X-Name-First: Ping
Author-X-Name-Last: Yu
Author-Name: Xiaodong Fan
Author-X-Name-First: Xiaodong
Author-X-Name-Last: Fan
Title: Threshold Regression With a Threshold Boundary
Abstract:
This article studies computation, estimation, inference, and testing for linearity in threshold regression with a threshold boundary. We first put forward a new algorithm to ease the computation of the threshold boundary, and develop the asymptotics for the least squares estimator in both the fixed-threshold-effect framework and the small-threshold-effect framework. We also show that the inverting-likelihood-ratio method is not suitable to construct confidence sets for the threshold parameters, while the nonparametric posterior interval is still applicable. We then propose a new score-type test to test for the existence of threshold effects. Comparing with the usual Wald-type test, it is computationally less intensive, and its critical values are easier to obtain by the simulation method. Simulation studies corroborate the theoretical results. We finally conduct two empirical applications in labor economics to illustrate the nonconstancy of thresholds.
Journal: Journal of Business & Economic Statistics
Pages: 953-971
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1740712
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1740712
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:953-971
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaojun Song
Author-X-Name-First: Xiaojun
Author-X-Name-Last: Song
Author-Name: Abderrahim Taamouti
Author-X-Name-First: Abderrahim
Author-X-Name-Last: Taamouti
Title: Measuring Granger Causality in Quantiles
Abstract:
We consider measures of Granger causality in quantiles, which detect and quantify both linear and nonlinear causal effects between random variables. The measures are based on nonparametric quantile regressions and defined as logarithmic functions of restricted and unrestricted expectations of quantile check loss functions. They can consistently be estimated by replacing the unknown expectations of check loss functions by their nonparametric kernel estimates. We derive a Bahadur-type representation for the nonparametric estimator of the measures. We establish the asymptotic distribution of this estimator, which can be used to build tests for the statistical significance of the measures. Thereafter, we show the validity of a smoothed local bootstrap that can be used in finite-sample settings to perform statistical tests. A Monte Carlo simulation study reveals that the bootstrap-based test has good finite-sample size and power properties for a variety of data-generating processes and different sample sizes. Finally, we provide an empirical application to illustrate the usefulness of measuring Granger causality in quantiles. We quantify the degree of predictability of the quantiles of the equity risk premium using the variance risk premium, unemployment rate, inflation, and the effective federal funds rate. The empirical results show that the variance risk premium and the effective federal funds rate have stronger predictive power for the risk premium than the other two macro variables. In particular, the variance risk premium is able to predict the center, lower, and upper quantiles of the distribution of the risk premium; however, the effective federal funds rate predicts only the lower and upper quantiles. In contrast, the unemployment and inflation rates have no effect on the risk premium.
Journal: Journal of Business & Economic Statistics
Pages: 937-952
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1739531
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1739531
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:937-952
Template-Type: ReDIF-Article 1.0
Author-Name: Bryan Kelly
Author-X-Name-First: Bryan
Author-X-Name-Last: Kelly
Author-Name: Asaf Manela
Author-X-Name-First: Asaf
Author-X-Name-Last: Manela
Author-Name: Alan Moreira
Author-X-Name-First: Alan
Author-X-Name-Last: Moreira
Title: Text Selection
Abstract:
Text data is ultra-high dimensional, which makes machine learning techniques indispensable for textual analysis. Text is often selected—journalists, speechwriters, and others craft messages to target their audiences’ limited attention. We develop an economically motivated high-dimensional selection model that improves learning from text (and from sparse counts data more generally). Our model is especially useful when the choice to include a phrase is more interesting than the choice of how frequently to repeat it. It allows for parallel estimation, making it computationally scalable. A first application revisits the partisanship of U.S. congressional speech. We find that earlier spikes in partisanship manifested in increased repetition of different phrases, whereas the upward trend starting in the 1990s is due to distinct phrase selection. Additional applications show how our model can backcast, nowcast, and forecast macroeconomic indicators using newspaper text, and that it substantially improves out-of-sample fit relative to alternative approaches.
Journal: Journal of Business & Economic Statistics
Pages: 859-879
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2021.1947843
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1947843
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:859-879
Template-Type: ReDIF-Article 1.0
Author-Name: Fabian Krüger
Author-X-Name-First: Fabian
Author-X-Name-Last: Krüger
Author-Name: Johanna F. Ziegel
Author-X-Name-First: Johanna F.
Author-X-Name-Last: Ziegel
Title: Generic Conditions for Forecast Dominance
Abstract:
Recent studies have analyzed whether one forecast method dominates another under a class of consistent scoring functions. While the existing literature focuses on empirical tests of forecast dominance, little is known about the theoretical conditions under which one forecast dominates another. To address this question, we derive a new characterization of dominance among forecasts of the mean functional. We present various scenarios under which dominance occurs. Unlike existing results, our results allow for the case that the forecasts’ underlying information sets are not nested, and allow for uncalibrated forecasts that suffer, for example, from model misspecification or parameter estimation error. We illustrate the empirical relevance of our results via data examples from finance and economics.
Journal: Journal of Business & Economic Statistics
Pages: 972-983
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1741376
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1741376
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:972-983
Template-Type: ReDIF-Article 1.0
Author-Name: Nitish Ranjan Sinha
Author-X-Name-First: Nitish Ranjan
Author-X-Name-Last: Sinha
Title: A Discussion of “Text Selection”
Abstract:
Kelly, Manela, and Moreira provided an economic model of word choice in text. A writer is modeled as someone who first chooses whether to use a word at all (the selection problem) and then decides how often a selected word should be used (the positive counts problem). The resulting model leads to better sufficient reduction for a large number of words/phrases in the text, as demonstrated in many diverse applications that use information captured from the text of the front page of the Wall Street Journal, such as backcasting the regulatory capital ratio of banks, and forecasting and nowcasting U.S. macroeconomic variables. Researchers interested in quantifying information in text will benefit from reading the article and thinking about some of the issues it raises. In my discussion, I provide background and context from other foundational papers, give a very short summary of the article, and make some broad observations.
Journal: Journal of Business & Economic Statistics
Pages: 888-891
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2021.1961785
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1961785
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:888-891
Template-Type: ReDIF-Article 1.0
Author-Name: Juwon Seo
Author-X-Name-First: Juwon
Author-X-Name-Last: Seo
Title: Randomization Tests for Equality in Dependence Structure
Abstract:
We develop a new statistical procedure to test whether the dependence structure is identical between two groups. Rather than relying on a single index such as Pearson’s correlation coefficient or Kendall’s τ, we consider the entire dependence structure by investigating the dependence functions (copulas). The critical values are obtained by a modified randomization procedure designed to exploit asymptotic group invariance conditions. Implementation of the test is intuitive and simple, and does not require any specification of a tuning parameter or weight function. At the same time, the test exhibits excellent finite sample performance, with the null rejection rates almost equal to the nominal level even when the sample size is extremely small. Two empirical applications concerning the dependence between income and consumption, and the Brexit effect on European financial market integration are provided.
Journal: Journal of Business & Economic Statistics
Pages: 1026-1037
Issue: 4
Volume: 39
Year: 2021
Month: 10
X-DOI: 10.1080/07350015.2020.1753527
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1753527
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:39:y:2021:i:4:p:1026-1037
Template-Type: ReDIF-Article 1.0
Author-Name: Daniel Borup
Author-X-Name-First: Daniel
Author-X-Name-Last: Borup
Author-Name: Erik Christian Montes Schütte
Author-X-Name-First: Erik Christian Montes
Author-X-Name-Last: Schütte
Title: In Search of a Job: Forecasting Employment Growth Using Google Trends
Abstract:
We show that Google search activity on relevant terms is a strong out-of-sample predictor for future employment growth in the United States over the period 2004–2019 at both short and long horizons. Starting from an initial search term “jobs,” we construct a large panel of 172 variables using Google’s own algorithms to find semantically related search queries. The best Google Trends model achieves an out-of-sample R2 between 29% and 62% at horizons spanning from one month to one year ahead, strongly outperforming benchmarks based on a single search query or a large set of macroeconomic, financial, and sentiment predictors. This strong predictability is due to heterogeneity in search terms and extends to industry-level and state-level employment growth using state-level specific search activity. Encompassing tests indicate that when the Google Trends panel is exploited using a nonlinear model, it fully encompasses the macroeconomic forecasts and provides significant information in excess of those.
Journal: Journal of Business & Economic Statistics
Pages: 186-200
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1791133
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1791133
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:186-200
Template-Type: ReDIF-Article 1.0
Author-Name: Chaohui Guo
Author-X-Name-First: Chaohui
Author-X-Name-Last: Guo
Author-Name: Jialiang Li
Author-X-Name-First: Jialiang
Author-X-Name-Last: Li
Title: Homogeneity and Structure Identification in Semiparametric Factor Models
Abstract:
Factor modeling is an essential tool for exploring intrinsic dependence structures in financial and economic studies through the construction of common latent variables, including the famous Fama–French three-factor model for the description of asset returns in finance. However, most of the existing statistical methods for analyzing latent factors have been developed through a linear approach. In this article, we consider a semiparametric factor model and present a regularized estimation procedure for linear component identification on the transformed factor that combines B-spline basis function approximations and the smoothly clipped absolute deviation penalty. In addition, a binary-segmentation-based algorithm is also developed to identify the homogeneous groups in loading parameters, producing more efficient estimation by pooling information across units within the same group. We carefully derive the asymptotic properties for the proposed procedures. Finally, simulation studies and a real data analysis are conducted to evaluate the finite sample performance of our proposals.
Journal: Journal of Business & Economic Statistics
Pages: 408-422
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1831516
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1831516
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:408-422
Template-Type: ReDIF-Article 1.0
Author-Name: Maximo Camacho
Author-X-Name-First: Maximo
Author-X-Name-Last: Camacho
Author-Name: María Dolores Gadea
Author-X-Name-First: María Dolores
Author-X-Name-Last: Gadea
Author-Name: Ana Gómez Loscos
Author-X-Name-First: Ana Gómez
Author-X-Name-Last: Loscos
Title: A New Approach to Dating the Reference Cycle
Abstract:
This article proposes a new approach to the analysis of the reference cycle turning points, defined on the basis of the specific turning points of a broad set of coincident economic indicators. Each individual pair of specific peaks and troughs from these indicators is viewed as a realization of a mixture of an unspecified number of separate bivariate Gaussian distributions whose different means are the reference turning points. These dates break the sample into separate reference cycle phases, whose shifts are modeled by a hidden Markov chain. The transition probability matrix is constrained so that the specification is equivalent to a multiple change-point model. Bayesian estimation of finite Markov mixture modeling techniques is suggested to estimate the model. Several Monte Carlo experiments are used to show the accuracy of the model in dating reference cycles that suffer from short phases, uncertain turning points, small samples, and asymmetric cycles. In the empirical section, we show the high performance of our approach in identifying the US reference cycle, with little difference from the timing of the turning point dates established by the NBER. In a pseudo real-time analysis, we also show the good performance of this methodology in terms of accuracy and speed of detection of turning point dates.
Journal: Journal of Business & Economic Statistics
Pages: 66-81
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1773834
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1773834
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:66-81
Template-Type: ReDIF-Article 1.0
Author-Name: Qingliang Fan
Author-X-Name-First: Qingliang
Author-X-Name-Last: Fan
Author-Name: Yu-Chin Hsu
Author-X-Name-First: Yu-Chin
Author-X-Name-Last: Hsu
Author-Name: Robert P. Lieli
Author-X-Name-First: Robert P.
Author-X-Name-Last: Lieli
Author-Name: Yichong Zhang
Author-X-Name-First: Yichong
Author-X-Name-Last: Zhang
Title: Estimation of Conditional Average Treatment Effects With High-Dimensional Data
Abstract:
Given the unconfoundedness assumption, we propose new nonparametric estimators for the reduced dimensional conditional average treatment effect (CATE) function. In the first stage, the nuisance functions necessary for identifying CATE are estimated by machine learning methods, allowing the number of covariates to be comparable to or larger than the sample size. The second stage consists of a low-dimensional local linear regression, reducing CATE to a function of the covariate(s) of interest. We consider two variants of the estimator depending on whether the nuisance functions are estimated over the full sample or over a hold-out sample. Building on Belloni et al. and Chernozhukov et al., we derive functional limit theory for the estimators and provide an easy-to-implement procedure for uniform inference based on the multiplier bootstrap. The empirical application revisits the effect of maternal smoking on a baby’s birth weight as a function of the mother’s age.
Journal: Journal of Business & Economic Statistics
Pages: 313-327
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1811102
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1811102
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:313-327
Template-Type: ReDIF-Article 1.0
Author-Name: Yong He
Author-X-Name-First: Yong
Author-X-Name-Last: He
Author-Name: Xinbing Kong
Author-X-Name-First: Xinbing
Author-X-Name-Last: Kong
Author-Name: Long Yu
Author-X-Name-First: Long
Author-X-Name-Last: Yu
Author-Name: Xinsheng Zhang
Author-X-Name-First: Xinsheng
Author-X-Name-Last: Zhang
Title: Large-Dimensional Factor Analysis Without Moment Constraints
Abstract:
The large-dimensional factor model has drawn much attention in the big-data era as a means to reduce dimensionality and extract underlying features using a few latent common factors. Conventional methods for estimating the factor model typically require a finite fourth moment of the data, which ignores the effect of heavy-tailedness and thus may result in nonrobust or even inconsistent estimation of the factor space and common components. In this article, we propose to recover the factor space by performing principal component analysis on the spatial Kendall’s tau matrix instead of the sample covariance matrix. In a second step, we estimate the factor scores by ordinary least squares regression. Theoretically, we show that under the elliptical distribution framework the factor loadings and scores, as well as the common components, can be estimated consistently without any moment constraint. The convergence rates of the estimated factor loadings, scores, and common components are provided. The finite sample performance of the proposed procedure is assessed through thorough simulations. An analysis of a financial dataset of asset returns shows the superiority of the proposed method over the classical principal component analysis method.
Journal: Journal of Business & Economic Statistics
Pages: 302-312
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1811101
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1811101
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:302-312
Template-Type: ReDIF-Article 1.0
Author-Name: Zhewen Pan
Author-X-Name-First: Zhewen
Author-X-Name-Last: Pan
Author-Name: Xianbo Zhou
Author-X-Name-First: Xianbo
Author-X-Name-Last: Zhou
Author-Name: Yahong Zhou
Author-X-Name-First: Yahong
Author-X-Name-Last: Zhou
Title: Semiparametric Estimation of a Censored Regression Model Subject to Nonparametric Sample Selection
Abstract:
This study proposes a semiparametric estimation method for a censored regression model subject to nonparametric sample selection without the exclusion restriction. Consistency and asymptotic normality of the proposed estimator are established under mild regularity conditions. A Monte Carlo simulation study indicates that the estimator performs well in various designs and outperforms parametric maximum likelihood estimators. An empirical application to female smoking is provided to illustrate the usefulness of the estimator.
Journal: Journal of Business & Economic Statistics
Pages: 141-151
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1784746
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1784746
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:141-151
Template-Type: ReDIF-Article 1.0
Author-Name: Jing Zhou
Author-X-Name-First: Jing
Author-X-Name-Last: Zhou
Author-Name: Jin Liu
Author-X-Name-First: Jin
Author-X-Name-Last: Liu
Author-Name: Feifei Wang
Author-X-Name-First: Feifei
Author-X-Name-Last: Wang
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Autoregressive Model With Spatial Dependence and Missing Data
Abstract:
We study an autoregressive model with spatially correlated error terms and missing data. A logistic regression model with completely observed covariates is used to model the missingness mechanism. An autoregressive model is used to accommodate time series dependence, and a spatial error model is used to capture spatial dependence. To estimate the model, a weighted least squares estimator is developed for the temporal component, and a weighted maximum likelihood estimator is developed for the spatial component. The asymptotic properties of both estimators are investigated. The finite sample performance is assessed through extensive simulation studies. A real data example concerning Beijing’s PM2.5 levels illustrates the methodology.
Journal: Journal of Business & Economic Statistics
Pages: 28-34
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1766471
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1766471
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:28-34
Template-Type: ReDIF-Article 1.0
Author-Name: Dominik Bertsche
Author-X-Name-First: Dominik
Author-X-Name-Last: Bertsche
Author-Name: Robin Braun
Author-X-Name-First: Robin
Author-X-Name-Last: Braun
Title: Identification of Structural Vector Autoregressions by Stochastic Volatility
Abstract:
We propose to exploit stochastic volatility for statistical identification of structural vector autoregressive models (SV-SVAR). We discuss full and partial identification of the model and develop efficient EM algorithms for maximum likelihood inference. Simulation evidence suggests that the SV-SVAR works well in identifying structural parameters even under misspecification of the variance process, particularly when compared to alternative heteroscedastic SVARs. We apply the model to study the importance of oil supply shocks for driving oil prices. Since shocks identified by heteroscedasticity may not be economically meaningful, we exploit the framework to test instrumental variable restrictions which are overidentifying in the heteroscedastic model. Our findings suggest that conventional supply shocks are negligible, while news shocks about future supply account for almost all the variation in oil prices.
Journal: Journal of Business & Economic Statistics
Pages: 328-341
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1813588
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1813588
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:328-341
Template-Type: ReDIF-Article 1.0
Author-Name: Giuseppe Cavaliere
Author-X-Name-First: Giuseppe
Author-X-Name-Last: Cavaliere
Author-Name: Morten Ørregaard Nielsen
Author-X-Name-First: Morten Ørregaard
Author-X-Name-Last: Nielsen
Author-Name: A. M. Robert Taylor
Author-X-Name-First: A. M.
Author-X-Name-Last: Robert Taylor
Title: Adaptive Inference in Heteroscedastic Fractional Time Series Models
Abstract:
We consider estimation and inference in fractionally integrated time series models driven by shocks which can display conditional and unconditional heteroscedasticity of unknown form. Although the standard conditional sum-of-squares (CSS) estimator remains consistent and asymptotically normal in such cases, unconditional heteroscedasticity inflates its variance matrix by a scalar quantity, λ>1, thereby inducing a loss in efficiency relative to the unconditionally homoscedastic case, λ = 1. We propose an adaptive version of the CSS estimator, based on nonparametric kernel-based estimation of the unconditional volatility process. We show that adaptive estimation eliminates the factor λ from the variance matrix, thereby delivering the same asymptotic efficiency as that attained by the standard CSS estimator in the unconditionally homoscedastic case and, hence, asymptotic efficiency under Gaussianity. Importantly, the asymptotic analysis is based on a novel proof strategy, which does not require consistent estimation (in the sup norm) of the volatility process. Consequently, we are able to work under a weaker set of assumptions than those employed in the extant literature. The asymptotic variance matrices of both the standard and adaptive CSS (ACSS) estimators depend on any weak parametric autocorrelation present in the fractional model and any conditional heteroscedasticity in the shocks. Consequently, asymptotically pivotal inference can be achieved through the development of confidence regions or hypothesis tests using either heteroscedasticity-robust standard errors and/or a wild bootstrap. Monte Carlo simulations and empirical applications illustrate the practical usefulness of the methods proposed.
Journal: Journal of Business & Economic Statistics
Pages: 50-65
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1773275
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1773275
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:50-65
Template-Type: ReDIF-Article 1.0
Author-Name: Patrick Arni
Author-X-Name-First: Patrick
Author-X-Name-Last: Arni
Author-Name: Gerard J. van den Berg
Author-X-Name-First: Gerard J.
Author-X-Name-Last: van den Berg
Author-Name: Rafael Lalive
Author-X-Name-First: Rafael
Author-X-Name-Last: Lalive
Title: Treatment Versus Regime Effects of Carrots and Sticks
Abstract:
Public employment service (PES) agencies and caseworkers (CWs) often have substantial leeway in the design and implementation of active labor market policies for the unemployed, and they use policies to a varying extent. We estimate regime effects which capture how CW and PES affect outcomes through different policy intensities. These operate potentially on all forward-looking job seekers regardless of actual treatment exposure. We consider regime effects for two sets of programs, supporting (“carrots”) and restricting (“sticks”) programs, and contrast regime and treatment effects on unemployment durations, employment, and post-unemployment earnings using register data that contain PES and caseworker identifiers for about 130,000 job spells. Regime effects are important: earnings are higher in a PES if carrot-type programs are used more intensively and stick-type programs are used less intensively. Actual treatment effects on earnings have a similar order of magnitude as regime effects and are positive for participation in carrot-type programs and negative for stick-type treatments. Regime effects are economically substantial. A modest increase in the intended usage of carrots and sticks reduces the total cost of an unemployed individual by up to 7.5%.
Journal: Journal of Business & Economic Statistics
Pages: 111-127
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1784744
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1784744
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:111-127
Template-Type: ReDIF-Article 1.0
Author-Name: Xiufan Yu
Author-X-Name-First: Xiufan
Author-X-Name-Last: Yu
Author-Name: Jiawei Yao
Author-X-Name-First: Jiawei
Author-X-Name-Last: Yao
Author-Name: Lingzhou Xue
Author-X-Name-First: Lingzhou
Author-X-Name-Last: Xue
Title: Nonparametric Estimation and Conformal Inference of the Sufficient Forecasting With a Diverging Number of Factors
Abstract:
The sufficient forecasting (SF) provides a nonparametric procedure to estimate forecasting indices from high-dimensional predictors to forecast a single time series, allowing for the possibly nonlinear forecasting function. This article studies the asymptotic theory of the SF with a diverging number of factors and develops its predictive inference. First, we revisit the SF and explore its connections to Fama–MacBeth regression and partial least squares. Second, with a diverging number of factors, we derive the rate of convergence of the estimated factors and loadings and characterize the asymptotic behavior of the estimated SF directions. Third, we use the local linear regression to estimate the possibly nonlinear forecasting function and obtain the rate of convergence. Fourth, we construct the distribution-free conformal prediction set for the SF that accounts for the serial dependence. Moreover, we demonstrate the finite-sample performance of the proposed nonparametric estimation and conformal inference in simulation studies and a real application to forecast financial time series.
Journal: Journal of Business & Economic Statistics
Pages: 342-354
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1813589
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1813589
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:342-354
Template-Type: ReDIF-Article 1.0
Author-Name: Cheng Chen
Author-X-Name-First: Cheng
Author-X-Name-Last: Chen
Author-Name: Shaojun Guo
Author-X-Name-First: Shaojun
Author-X-Name-Last: Guo
Author-Name: Xinghao Qiao
Author-X-Name-First: Xinghao
Author-X-Name-Last: Qiao
Title: Functional Linear Regression: Dependence and Error Contamination
Abstract:
Functional linear regression is an important topic in functional data analysis. It is commonly assumed that samples of the functional predictor are independent realizations of an underlying stochastic process, and are observed over a grid of points contaminated by iid measurement errors. In practice, however, the dynamical dependence across different curves may exist and the parametric assumption on the error covariance structure could be unrealistic. In this article, we consider functional linear regression with serially dependent observations of the functional predictor, when the contamination of the predictor by the white noise is genuinely functional with fully nonparametric covariance structure. Inspired by the fact that the autocovariance function of observed functional predictors automatically filters out the impact from the unobservable noise term, we propose a novel autocovariance-based generalized method-of-moments estimate of the slope function. We also develop a nonparametric smoothing approach to handle the scenario of partially observed functional predictors. The asymptotic properties of the resulting estimators under different scenarios are established. Finally, we demonstrate that our proposed method significantly outperforms possible competing methods through an extensive set of simulations and an analysis of a public financial dataset.
Journal: Journal of Business & Economic Statistics
Pages: 444-457
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1832503
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1832503
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:444-457
Template-Type: ReDIF-Article 1.0
Author-Name: William C. Horrace
Author-X-Name-First: William C.
Author-X-Name-Last: Horrace
Author-Name: Hyunseok Jung
Author-X-Name-First: Hyunseok
Author-X-Name-Last: Jung
Author-Name: Shane Sanders
Author-X-Name-First: Shane
Author-X-Name-Last: Sanders
Title: Network Competition and Team Chemistry in the NBA
Abstract:
We consider a heterogeneous social interaction model where agents interact with peers within their own network but also interact with agents across other (non-peer) networks. To address potential endogeneity in the networks, we assume that each network has a central planner who makes strategic network decisions based on observable and unobservable characteristics of the peers in her charge. The model forms a simultaneous equation system that can be estimated by quasi-maximum likelihood. We apply a restricted version of our model to data on National Basketball Association games, where agents are players, networks are individual teams organized by coaches, and competition is head-to-head. That is, at any time a player only interacts with two networks: their team and the opposing team. We find significant positive within-team peer-effects and both negative and positive opposing-team competitor-effects in NBA games. The former are interpretable as “team chemistries,” which enhance the individual performances of players on the same team. The latter are interpretable as “team rivalries,” which can either enhance or diminish the individual performance of opposing players.
Journal: Journal of Business & Economic Statistics
Pages: 35-49
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1773273
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1773273
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:35-49
Template-Type: ReDIF-Article 1.0
Author-Name: Aleksey Kolokolov
Author-X-Name-First: Aleksey
Author-X-Name-Last: Kolokolov
Title: Estimating Jump Activity Using Multipower Variation
Abstract:
Realized multipower variation, originally introduced to eliminate jumps, can be extremely useful for inference in pure-jump models. This article shows how to build a simple and precise estimator of the jump activity index of a semimartingale observed at a high frequency by comparing different multipowers. The novel methodology allows one to infer whether a discretely observed process contains a continuous martingale component. The empirical part of the article undertakes a nonparametric analysis of the jump activity of bitcoin and shows that bitcoin is a pure jump process with high jump activity, which is critically different from conventional currencies that include a Brownian motion component.
Journal: Journal of Business & Economic Statistics
Pages: 128-140
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1784745
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1784745
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:128-140
Template-Type: ReDIF-Article 1.0
Author-Name: Ricardo Masini
Author-X-Name-First: Ricardo
Author-X-Name-Last: Masini
Author-Name: Marcelo C. Medeiros
Author-X-Name-First: Marcelo C.
Author-X-Name-Last: Medeiros
Title: Counterfactual Analysis and Inference With Nonstationary Data
Abstract:
Recently, there has been growing interest in developing econometric tools to conduct counterfactual analysis with aggregate data when a single “treated” unit suffers an intervention, such as a policy change, and there is no obvious control group. Usually, the proposed methods are based on the construction of an artificial/synthetic counterfactual from a pool of “untreated” peers, organized in a panel data structure. In this article, we investigate the consequences of applying such methodologies when the data comprise integrated processes of order 1, I(1), or are trend-stationary. We find that for I(1) processes without a cointegrating relationship (the spurious case) the estimator of the effects of the intervention diverges, regardless of their existence. Although spurious regression is a well-known concept in time-series econometrics, it has been ignored in most of the literature on counterfactual estimation based on artificial/synthetic controls. For the case when at least one cointegration relationship exists, we have consistent estimators for the intervention effect, albeit with a nonstandard distribution. Finally, we discuss a test based on resampling which can be applied when there is at least one cointegration relationship or when the data are trend-stationary.
Journal: Journal of Business & Economic Statistics
Pages: 227-239
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1799814
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1799814
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:227-239
Template-Type: ReDIF-Article 1.0
Author-Name: Christian Brownlees
Author-X-Name-First: Christian
Author-X-Name-Last: Brownlees
Author-Name: Guðmundur Stefán Guðmundsson
Author-X-Name-First: Guðmundur Stefán
Author-X-Name-Last: Guðmundsson
Author-Name: Gábor Lugosi
Author-X-Name-First: Gábor
Author-X-Name-Last: Lugosi
Title: Community Detection in Partial Correlation Network Models
Abstract:
We introduce a class of partial correlation network models with a community structure for large panels of time series. In the model, the series are partitioned into latent groups such that correlation is higher within groups than between them. We then propose an algorithm that allows one to detect the communities using the eigenvectors of the sample covariance matrix. We study the properties of the procedure and establish its consistency. The methodology is used to study real activity clustering in the United States.
Journal: Journal of Business & Economic Statistics
Pages: 216-226
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1798241
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1798241
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:216-226
Template-Type: ReDIF-Article 1.0
Author-Name: Nikolay Iskrev
Author-X-Name-First: Nikolay
Author-X-Name-Last: Iskrev
Title: On the Sources of Information in the Moment Structure of Dynamic Macroeconomic Models
Abstract:
What features of the data are the key sources of information about the parameters in structural macroeconomic models? As such models grow in size and complexity, the answer to this question has become increasingly difficult. This article shows how to identify the main sources of parameter information across different parts of the moment structure of macroeconomic models. In particular, we propose a measure of the relative contribution of information by a given subset of moments with respect to any parameter of interest. The measure is trivial to compute even for large-scale models with many free parameters and observed variables. We illustrate our method with an application to a news-driven business cycle model developed by Schmitt-Grohé and Uribe. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 272-284
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1803079
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1803079
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:272-284
Template-Type: ReDIF-Article 1.0
Author-Name: Zhouyu Shen
Author-X-Name-First: Zhouyu
Author-X-Name-Last: Shen
Author-Name: Yu Chen
Author-X-Name-First: Yu
Author-X-Name-Last: Chen
Author-Name: Ruxin Shi
Author-X-Name-First: Ruxin
Author-X-Name-Last: Shi
Title: Modeling Tail Index With Autoregressive Conditional Pareto Model
Abstract:
We propose an autoregressive conditional Pareto (AcP) model based on the dynamic peaks over threshold method to model a dynamic tail index in the financial markets. Unlike the score-based approach which is widely used in many articles, we use an exponential function to model the tail index process for its intuitiveness and interpretability. Probabilistic properties of the AcP model and the statistical properties of its maximum likelihood parameter estimators are studied in this article. Real data are used to show the advantages of AcP; in particular, compared to the volatility estimated by a GARCH model, the AcP results are more sensitive to turmoil. The estimated tail index of AcP can accurately reflect the risk of a stock and may even serve as an early warning of stock market turmoil. We also calculate the tail connectedness based on the estimated tail index of AcP and show that tail connectedness increases during periods of turmoil, which is consistent with the result of the score-based approach.
Journal: Journal of Business & Economic Statistics
Pages: 458-466
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1832504
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1832504
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:458-466
Template-Type: ReDIF-Article 1.0
Author-Name: Artūras Juodis
Author-X-Name-First: Artūras
Author-X-Name-Last: Juodis
Author-Name: Vasilis Sarafidis
Author-X-Name-First: Vasilis
Author-X-Name-Last: Sarafidis
Title: A Linear Estimator for Factor-Augmented Fixed-T Panels With Endogenous Regressors
Abstract:
A novel method-of-moments approach is proposed for the estimation of factor-augmented panel data models with endogenous regressors when T is fixed. The underlying methodology involves approximating the unobserved common factors using observed factor proxies. The resulting moment conditions are linear in the parameters. The proposed approach addresses several issues that arise with existing nonlinear estimators available for fixed-T panels, such as problems related to local minima, sensitivity to particular normalization schemes, and a potential lack of global identification. We apply our approach to a large panel of households and estimate the price elasticity of urban water demand. A simulation study confirms that our approach performs well in finite samples.
Journal: Journal of Business & Economic Statistics
Pages: 1-15
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1766469
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1766469
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:1-15
Template-Type: ReDIF-Article 1.0
Author-Name: Jan P. A. M. Jacobs
Author-X-Name-First: Jan P. A. M.
Author-X-Name-Last: Jacobs
Author-Name: Samad Sarferaz
Author-X-Name-First: Samad
Author-X-Name-Last: Sarferaz
Author-Name: Jan-Egbert Sturm
Author-X-Name-First: Jan-Egbert
Author-X-Name-Last: Sturm
Author-Name: Simon van Norden
Author-X-Name-First: Simon
Author-X-Name-Last: van Norden
Title: Can GDP Measurement Be Further Improved? Data Revision and Reconciliation
Abstract:
Recent years have seen many attempts to combine expenditure-side estimates of U.S. real output (GDE) growth with income-side estimates (GDI) to improve estimates of real GDP growth. We show how to incorporate information from multiple releases of noisy data to provide more precise estimates while avoiding some of the identifying assumptions required in earlier work. This relies on a new insight: using multiple data releases allows us to distinguish news and noise measurement errors in situations where a single vintage does not. We find that (a) the data prefer averaging across multiple releases instead of discarding early releases in favor of later ones, and (b) initial estimates of GDI are quite informative. Our new measure, GDP++, undergoes smaller revisions and tracks expenditure measures of GDP growth more closely than either the simple average of the expenditure and income measures published by the BEA or the GDP growth measure of Aruoba et al. published by the Federal Reserve Bank of Philadelphia.
Journal: Journal of Business & Economic Statistics
Pages: 423-431
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1831928
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1831928
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:423-431
Template-Type: ReDIF-Article 1.0
Author-Name: Matei Demetrescu
Author-X-Name-First: Matei
Author-X-Name-Last: Demetrescu
Author-Name: Benjamin Hillmann
Author-X-Name-First: Benjamin
Author-X-Name-Last: Hillmann
Title: Nonlinear Predictability of Stock Returns? Parametric Versus Nonparametric Inference in Predictive Regressions
Abstract:
Nonparametric test procedures in predictive regressions have χ2 limiting null distributions under both low and high regressor persistence, but low local power compared to misspecified linear predictive regressions. We argue that IV inference is better suited (in terms of local power) for analyzing additive predictive models with uncertain predictor persistence. Then, a two-step procedure is proposed for out-of-sample predictions. For the current estimation window, one first tests for predictability; in case of a rejection, one predicts using a nonlinear regression model, otherwise the historic average of the stock returns is used. This two-step approach performs better than competitors (though not by a large margin) in a pseudo-out-of-sample prediction exercise for the S&P 500.
Journal: Journal of Business & Economic Statistics
Pages: 382-397
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1819821
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1819821
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:382-397
Template-Type: ReDIF-Article 1.0
Author-Name: Yonghong An
Author-X-Name-First: Yonghong
Author-X-Name-Last: An
Author-Name: Le Wang
Author-X-Name-First: Le
Author-X-Name-Last: Wang
Author-Name: Ruli Xiao
Author-X-Name-First: Ruli
Author-X-Name-Last: Xiao
Title: A Nonparametric Nonclassical Measurement Error Approach to Estimating Intergenerational Mobility Elasticities
Abstract:
This article provides a framework for estimating intergenerational mobility elasticities (IGEs) of children’s income with respect to parental income. We allow the IGEs to be heterogeneous, by leaving the relationship of parental and child incomes unspecified, while acknowledging and addressing the latent nature of both child and parental permanent incomes and the resulting life-cycle bias. Our framework enables us to test the widely imposed assumption that the intergenerational mobility function is linear. Applying our method to the Panel Study of Income Dynamics data, we decisively reject the commonly imposed linearity assumption and find substantial heterogeneity in the IGEs across the population. We confirm an important finding that the IGEs with respect to parental income exhibit a U-shaped pattern, which is occasionally highlighted in analyses using transition matrices. Specifically, there is a considerable degree of mobility among the broadly defined middle class, but the children of both high- and low-income parents are more likely to be high- and low-income adults, respectively. This result also provides insights into the (intertemporal) Great Gatsby curve, suggesting that a higher level of inequality within one generation may lead to a higher level of social immobility in the next generation in the United States.
Journal: Journal of Business & Economic Statistics
Pages: 169-185
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1787176
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1787176
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:169-185
Template-Type: ReDIF-Article 1.0
Author-Name: Martin Huber
Author-X-Name-First: Martin
Author-X-Name-Last: Huber
Author-Name: Mark Schelker
Author-X-Name-First: Mark
Author-X-Name-Last: Schelker
Author-Name: Anthony Strittmatter
Author-X-Name-First: Anthony
Author-X-Name-Last: Strittmatter
Title: Direct and Indirect Effects based on Changes-in-Changes
Abstract:
We propose a novel approach for causal mediation analysis based on changes-in-changes assumptions restricting unobserved heterogeneity over time. This allows disentangling the causal effect of a binary treatment on a continuous outcome into an indirect effect operating through a binary intermediate variable (called a mediator) and a direct effect running via other causal mechanisms. We identify average and quantile direct and indirect effects for various subgroups under the condition that the outcome is monotonic in the unobserved heterogeneity and that the distribution of the latter does not change over time conditional on the treatment and the mediator. We also provide a simulation study and two empirical applications regarding a training program evaluation and a maternity leave reform.
Journal: Journal of Business & Economic Statistics
Pages: 432-443
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1831929
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1831929
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:432-443
Template-Type: ReDIF-Article 1.0
Author-Name: Mengheng Li
Author-X-Name-First: Mengheng
Author-X-Name-Last: Li
Author-Name: Marcel Scharth
Author-X-Name-First: Marcel
Author-X-Name-Last: Scharth
Title: Leverage, Asymmetry, and Heavy Tails in the High-Dimensional Factor Stochastic Volatility Model
Abstract:
We develop a factor stochastic volatility model that incorporates leverage effects, return asymmetry, and heavy tails across all systematic and idiosyncratic model components. Our model leads to a flexible high-dimensional dependence structure that allows for time-varying correlations, tail dependence, and volatility response to both systematic and idiosyncratic return shocks. We develop an efficient Markov chain Monte Carlo algorithm for posterior estimation based on particle Gibbs with ancestor sampling, particle efficient importance sampling, and an interweaving strategy. To obtain parsimonious specifications in practice, we build computationally efficient model selection directly into our estimation algorithm. We validate the performance of our proposed estimation method via simulation studies with different model specifications. An empirical study for a sample of U.S. stocks shows that return asymmetry is a systematic phenomenon and our model outperforms other factor models for value-at-risk evaluation.
Journal: Journal of Business & Economic Statistics
Pages: 285-301
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1806853
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1806853
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:285-301
Template-Type: ReDIF-Article 1.0
Author-Name: Yeqing Zhou
Author-X-Name-First: Yeqing
Author-X-Name-Last: Zhou
Author-Name: Yaowu Zhang
Author-X-Name-First: Yaowu
Author-X-Name-Last: Zhang
Author-Name: Liping Zhu
Author-X-Name-First: Liping
Author-X-Name-Last: Zhu
Title: A Projective Approach to Conditional Independence Test for Dependent Processes
Abstract:
Conditional independence is a fundamental concept in many scientific fields. In this article, we propose a projective approach to measuring and testing departure from conditional independence for dependent processes. By projecting high-dimensional dependent processes onto low-dimensional subspaces, our proposed projective approach is insensitive to the dimensions of the processes. We show that, under the common β-mixing conditions, our proposed projective test statistic is n-consistent if these processes are conditionally independent and root-n-consistent otherwise. We suggest a bootstrap procedure to approximate the asymptotic null distribution of the test statistic. The consistency of this bootstrap procedure is also rigorously established. The finite-sample performance of our proposed projective test is demonstrated through simulations against various alternatives and an economic application to test for Granger causality.
Journal: Journal of Business & Economic Statistics
Pages: 398-407
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1826952
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1826952
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:398-407
Template-Type: ReDIF-Article 1.0
Author-Name: W. Erwin Diewert
Author-X-Name-First: W. Erwin
Author-X-Name-Last: Diewert
Author-Name: Kevin J. Fox
Author-X-Name-First: Kevin J.
Author-X-Name-Last: Fox
Title: Substitution Bias in Multilateral Methods for CPI Construction
Abstract:
The use of multilateral indexes is an increasingly accepted approach for incorporating scanner data in a consumer price index. The attractiveness stems from the ability to control for chain drift bias. Consensus on two key issues has yet to be achieved: (i) the best multilateral method to use, and (ii) the best way of extending the resulting series when new observations become available. We present theoretical and simulation evidence on the extent of substitution biases in alternative methods. Our results suggest the use of the Caves–Christensen–Diewert–Inklaar index with a new method, the “mean splice,” for updating.
Journal: Journal of Business & Economic Statistics
Pages: 355-369
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1816176
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1816176
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:355-369
Template-Type: ReDIF-Article 1.0
Author-Name: Rui Li
Author-X-Name-First: Rui
Author-X-Name-Last: Li
Author-Name: Chenlei Leng
Author-X-Name-First: Chenlei
Author-X-Name-Last: Leng
Author-Name: Jinhong You
Author-X-Name-First: Jinhong
Author-X-Name-Last: You
Title: Semiparametric Tail Index Regression
Abstract:
Understanding why extreme events occur is often of major scientific interest in many fields. The occurrence of these events naturally depends on explanatory variables, but there is a severe lack of flexible models with asymptotic theory for understanding this dependence, especially when variables can affect the outcome nonlinearly. This article proposes a novel semiparametric tail index regression model to fill the gap for this purpose. We construct consistent estimators for both parametric and nonparametric components of the model, establish the corresponding asymptotic normality properties for these components that can be applied for further inference, and illustrate the usefulness of the model via extensive Monte Carlo simulation and the analysis of return on equity data and Alps meteorology data.
Journal: Journal of Business & Economic Statistics
Pages: 82-95
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1775616
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1775616
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:82-95
Template-Type: ReDIF-Article 1.0
Author-Name: Josefine Quast
Author-X-Name-First: Josefine
Author-X-Name-Last: Quast
Author-Name: Maik H. Wolters
Author-X-Name-First: Maik H.
Author-X-Name-Last: Wolters
Title: Reliable Real-Time Output Gap Estimates Based on a Modified Hamilton Filter
Abstract:
We propose a simple modification of Hamilton’s time series filter that yields reliable and economically meaningful real-time output gap estimates. The original filter relies on 8-quarter-ahead forecast errors of a simple autoregression of real GDP. While this approach yields a cyclical component that is hardly revised with new incoming data due to the one-sided filtering approach, it does not cover typical business cycle frequencies evenly, but mutes short cycles and amplifies medium-length ones. Further, as the estimated trend contains high-frequency noise, it can hardly be interpreted as potential GDP. A simple modification based on the mean of 4- to 12-quarter-ahead forecast errors shares the favorable real-time properties of the Hamilton filter but leads to much better coverage of typical business cycle frequencies and a smooth estimated trend. Based on output growth and inflation forecasts and a comparison with revised output gap estimates from policy institutions, we find that real-time output gaps based on the modified and the original Hamilton filter are economically much more meaningful measures of the business cycle than those based on other simple statistical trend-cycle decomposition techniques, such as the HP or bandpass filter, and should thus be preferred.
Journal: Journal of Business & Economic Statistics
Pages: 152-168
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1784747
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1784747
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:152-168
Template-Type: ReDIF-Article 1.0
Author-Name: Jim E. Griffin
Author-X-Name-First: Jim E.
Author-X-Name-Last: Griffin
Author-Name: Gelly Mitrodima
Author-X-Name-First: Gelly
Author-X-Name-Last: Mitrodima
Title: A Bayesian Quantile Time Series Model for Asset Returns
Abstract:
We consider jointly modeling a finite collection of quantiles over time. Formal Bayesian inference on quantiles is challenging since we need access to both the quantile function and the likelihood. We propose a flexible Bayesian time-varying transformation model, which allows the likelihood and the quantile function to be directly calculated. We derive conditions for stationarity, discuss suitable priors, and describe a Markov chain Monte Carlo algorithm for inference. We illustrate the usefulness of the model for estimation and forecasting on stock, index, and commodity returns.
Journal: Journal of Business & Economic Statistics
Pages: 16-27
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1766470
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1766470
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:16-27
Template-Type: ReDIF-Article 1.0
Author-Name: Kin Wai Chan
Author-X-Name-First: Kin Wai
Author-X-Name-Last: Chan
Title: Mean-Structure and Autocorrelation Consistent Covariance Matrix Estimation
Abstract:
We consider estimation of the asymptotic covariance matrix in nonstationary time series. A nonparametric estimator that is robust against unknown forms of trends and possibly a divergent number of change points (CPs) is proposed. It is algorithmically fast because neither a search for CPs, estimation of trends, nor cross-validation is required. Together with our proposed automatic optimal bandwidth selector, the resulting estimator is both statistically and computationally efficient. It is, therefore, useful in many statistical procedures, for example, CP detection and the construction of simultaneous confidence bands for trends. Empirical studies on four stock market indices are also discussed.
Journal: Journal of Business & Economic Statistics
Pages: 201-215
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1796397
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1796397
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:201-215
Template-Type: ReDIF-Article 1.0
Author-Name: Yuan Ke
Author-X-Name-First: Yuan
Author-X-Name-Last: Ke
Author-Name: Heng Lian
Author-X-Name-First: Heng
Author-X-Name-Last: Lian
Author-Name: Wenyang Zhang
Author-X-Name-First: Wenyang
Author-X-Name-Last: Zhang
Title: High-Dimensional Dynamic Covariance Matrices With Homogeneous Structure
Abstract:
High-dimensional covariance matrices appear in many disciplines. Much literature has been devoted to research on high-dimensional constant covariance matrices. However, constant covariance matrices are not sufficient in applications; in portfolio allocation, for example, dynamic covariance matrices would be more appropriate. As argued in this article, there are two difficulties in introducing dynamic structures into covariance matrices: (1) simply assuming that each entry of a covariance matrix is a function of time, so as to introduce the needed dynamics, would not work; (2) there is a risk of having too many unknowns to estimate due to the high dimensionality. In this article, we propose a dynamic structure embedded with a homogeneous structure. We demonstrate that the proposed dynamic structure makes more sense in applications and, owing to the embedded homogeneous structure, avoids having too many unknown parameters/functions to estimate. An estimation procedure is also proposed for the resulting high-dimensional dynamic covariance matrices, and asymptotic properties are established to justify it. Intensive simulation studies show that the proposed estimation procedure works very well when the sample size is finite. Finally, we apply the proposed high-dimensional dynamic covariance matrices to portfolio allocation. It is interesting to see that the resulting portfolio yields much better returns than some commonly used ones.
Journal: Journal of Business & Economic Statistics
Pages: 96-110
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1779079
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1779079
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:96-110
Template-Type: ReDIF-Article 1.0
Author-Name: Evan Munro
Author-X-Name-First: Evan
Author-X-Name-Last: Munro
Author-Name: Serena Ng
Author-X-Name-First: Serena
Author-X-Name-Last: Ng
Title: Latent Dirichlet Analysis of Categorical Survey Responses
Abstract:
Beliefs are important determinants of an individual’s choices and economic outcomes, so understanding how they comove and differ across individuals is of considerable interest. Researchers often rely on surveys that report individual beliefs as qualitative data. We propose using a Bayesian hierarchical latent class model to analyze the comovements and observed heterogeneity in categorical survey responses. We show that the statistical model corresponds to an economic structural model of information acquisition, which guides interpretation and estimation of the model parameters. An algorithm based on stochastic optimization is proposed to estimate a model for repeated surveys when responses follow a dynamic structure and conjugate priors are not appropriate. Guidance on selecting the number of belief types is also provided. Two examples are considered. The first shows that there is information in the Michigan survey responses beyond the consumer sentiment index that is officially published. The second shows that belief types constructed from survey responses can be used in a subsequent analysis to estimate heterogeneous returns to education.
Journal: Journal of Business & Economic Statistics
Pages: 256-271
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1802285
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1802285
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:256-271
Template-Type: ReDIF-Article 1.0
Author-Name: Holger Dette
Author-X-Name-First: Holger
Author-X-Name-Last: Dette
Author-Name: Weichi Wu
Author-X-Name-First: Weichi
Author-X-Name-Last: Wu
Title: Prediction in Locally Stationary Time Series
Abstract:
We develop an estimator for the high-dimensional covariance matrix of a locally stationary process with a smoothly varying trend and use this statistic to derive consistent predictors in nonstationary time series. In contrast to the currently available methods for this problem the predictor developed here does not rely on fitting an autoregressive model and does not require a vanishing trend. The finite sample properties of the new methodology are illustrated by means of a simulation study and a financial indices study. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 370-381
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1819296
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1819296
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:370-381
Template-Type: ReDIF-Article 1.0
Author-Name: Yu-Chin Hsu
Author-X-Name-First: Yu-Chin
Author-X-Name-Last: Hsu
Author-Name: Tsung-Chih Lai
Author-X-Name-First: Tsung-Chih
Author-X-Name-Last: Lai
Author-Name: Robert P. Lieli
Author-X-Name-First: Robert P.
Author-X-Name-Last: Lieli
Title: Counterfactual Treatment Effects: Estimation and Inference
Abstract:
This article proposes statistical methods to evaluate the quantile counterfactual treatment effect (QCTE) if one were to change the composition of the population targeted by a status quo program. QCTE enables a researcher to carry out an ex-ante assessment of the distributional impact of certain policy interventions or to investigate the possible explanations for treatment effect heterogeneity. Assuming unconfoundedness and invariance of the conditional distributions of the potential outcomes, QCTE is identified and can be nonparametrically estimated by a kernel-based method. Viewed as a random function over the continuum of quantile indices, the estimator converges weakly to a zero mean Gaussian process at the parametric rate. We propose a multiplier bootstrap procedure to construct uniform confidence bands, and provide similar results for average effects and for the counterfactually treated subpopulation. We also present Monte Carlo simulations and two counterfactual exercises that provide insight into the heterogeneous earnings effects of the Job Corps training program in the United States.
Journal: Journal of Business & Economic Statistics
Pages: 240-255
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2020.1800479
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1800479
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:240-255
Template-Type: ReDIF-Article 1.0
Author-Name: The Editors
Title: Correction
Journal: Journal of Business & Economic Statistics
Pages: 467-467
Issue: 1
Volume: 40
Year: 2022
Month: 1
X-DOI: 10.1080/07350015.2021.1971536
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1971536
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:1:p:467-467
Template-Type: ReDIF-Article 1.0
Author-Name: H. Peter Boswijk
Author-X-Name-First: H. Peter
Author-X-Name-Last: Boswijk
Author-Name: Yang Zu
Author-X-Name-First: Yang
Author-X-Name-Last: Zu
Title: Adaptive Testing for Cointegration With Nonstationary Volatility
Abstract:
This article develops a class of adaptive cointegration tests for multivariate time series with nonstationary volatility. Persistent changes in the innovation variance matrix of a vector autoregressive model lead to size distortions in conventional cointegration tests, which may be resolved using the wild bootstrap, as shown in recent work by Cavaliere, Rahbek, and Taylor. We show that it also leads to the possibility of constructing tests with higher power, by taking the time-varying volatilities and correlations into account in the formulation of the likelihood function and the resulting likelihood ratio test statistic. We find that under suitable conditions, adaptation with respect to the volatility process is possible, in the sense that nonparametric volatility matrix estimation does not lead to a loss of asymptotic local power relative to the case where the volatilities are observed. The asymptotic null distribution of the test is nonstandard and depends on the volatility process; we show that various bootstrap implementations may be used to conduct asymptotically valid inference. Monte Carlo simulations show that the resulting test has good size properties, and higher power than existing tests. Empirical analyses of the U.S. term structure of interest rates and purchasing power parity illustrate the applicability of the tests.
Journal: Journal of Business & Economic Statistics
Pages: 744-755
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1867558
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1867558
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:744-755
Template-Type: ReDIF-Article 1.0
Author-Name: Håkon Otneim
Author-X-Name-First: Håkon
Author-X-Name-Last: Otneim
Author-Name: Dag Tjøstheim
Author-X-Name-First: Dag
Author-X-Name-Last: Tjøstheim
Title: The Locally Gaussian Partial Correlation
Abstract:
It is well known in econometrics and other fields that the dependence structure for jointly Gaussian variables can be fully captured using correlations, and that the conditional dependence structure can in the same way be described using partial correlations. The partial correlation does not, however, characterize conditional dependence in many non-Gaussian populations. This article introduces the local Gaussian partial correlation (LGPC), a new measure of conditional dependence. It is a local version of the partial correlation coefficient that characterizes conditional dependence in a large class of populations. It also has some useful and novel properties: the LGPC reduces to the ordinary partial correlation for jointly normal variables, and it distinguishes between positive and negative conditional dependence. Furthermore, the LGPC can be used to study departures from conditional independence in specific parts of the distribution. We provide several examples of this, both simulated and real, and derive estimation theory under a local likelihood estimation framework. Finally, we indicate how the LGPC can be used to construct a powerful test for conditional independence, which, for example, can be used to detect nonlinear Granger causality in time series.
Journal: Journal of Business & Economic Statistics
Pages: 924-936
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1886107
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1886107
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:924-936
Template-Type: ReDIF-Article 1.0
Author-Name: Zemin Zheng
Author-X-Name-First: Zemin
Author-X-Name-Last: Zheng
Author-Name: Yang Li
Author-X-Name-First: Yang
Author-X-Name-Last: Li
Author-Name: Jie Wu
Author-X-Name-First: Jie
Author-X-Name-Last: Wu
Author-Name: Yuchen Wang
Author-X-Name-First: Yuchen
Author-X-Name-Last: Wang
Title: Sequential Scaled Sparse Factor Regression
Abstract:
Large-scale association analysis between multivariate responses and predictors is of great practical importance, as exemplified by modern business applications including social media marketing and crisis management. Despite the rapid methodological advances, how to obtain scalable estimators that are free of regularization-parameter tuning remains unclear under general noise covariance structures. In this article, we develop a new methodology called sequential scaled sparse factor regression (SESS) based on a new viewpoint that the problem of recovering a jointly low-rank and sparse regression coefficient matrix can be decomposed into several univariate-response sparse regressions through regular eigenvalue decomposition. It combines the strengths of sequential estimation and scaled sparse regression, thus sharing the scalability and the tuning-free property for sparsity parameters inherited from the two approaches. The stepwise convex formulation, sequential factor regression framework, and tuning insensitiveness make SESS highly scalable for big data applications. Comprehensive theoretical justifications with new insights into high-dimensional multi-response regressions are also provided. We demonstrate the scalability and effectiveness of the proposed method through simulation studies and a stock short interest data analysis.
Journal: Journal of Business & Economic Statistics
Pages: 595-604
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1844212
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1844212
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:595-604
Template-Type: ReDIF-Article 1.0
Author-Name: Fabrizio Iacone
Author-X-Name-First: Fabrizio
Author-X-Name-Last: Iacone
Author-Name: Morten Ørregaard Nielsen
Author-X-Name-First: Morten Ørregaard
Author-X-Name-Last: Nielsen
Author-Name: A. M. Robert Taylor
Author-X-Name-First: A. M. Robert
Author-X-Name-Last: Taylor
Title: Semiparametric Tests for the Order of Integration in the Possible Presence of Level Breaks
Abstract:
Lobato and Robinson developed semiparametric tests for the null hypothesis that a series is weakly autocorrelated, or I(0), about a constant level, against fractionally integrated alternatives. These tests have the advantage that the user is not required to specify a parametric model for any weak autocorrelation present in the series. We extend this approach in two distinct ways. First, we show that it can be generalized to allow for testing of the null hypothesis that a series is I(δ) for any δ lying in the usual stationary and invertible region of the parameter space. The second extension is the more substantive and addresses the well-known issue in the literature that long memory and level breaks can be mistaken for one another, with unmodeled level breaks rendering fractional integration tests highly unreliable. To deal with this inference problem, we extend the Lobato and Robinson approach to allow for the possibility of changes in level at unknown points in the series. We show that the resulting statistics have standard limiting null distributions, and that the tests based on these statistics attain the same asymptotic local power functions as infeasible tests based on the unobserved errors, and hence there is no loss in asymptotic local power from allowing for level breaks, even where none is present. We report results from a Monte Carlo study into the finite-sample behavior of our proposed tests, as well as several empirical examples.
Journal: Journal of Business & Economic Statistics
Pages: 880-896
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1876712
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1876712
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:880-896
Template-Type: ReDIF-Article 1.0
Author-Name: Haolei Weng
Author-X-Name-First: Haolei
Author-X-Name-Last: Weng
Author-Name: Yang Feng
Author-X-Name-First: Yang
Author-X-Name-Last: Feng
Title: Discussion of “Cocitation and Coauthorship Networks of Statisticians”
Abstract:
We congratulate the authors for their stimulating and thought-provoking work on network data analysis. In the article, the authors not only introduce a new large-scale and high-quality publication dataset that will surely become an important benchmark for further network research, but also present novel statistical methods and modeling which lead to very interesting findings about the statistics community. There is much material for thought and exploration. In this discussion, we will focus on the cocitation networks, and discuss a few points for the coauthorship networks toward the end.
Journal: Journal of Business & Economic Statistics
Pages: 486-490
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2022.2037432
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2037432
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:486-490
Template-Type: ReDIF-Article 1.0
Author-Name: Zifeng Zhao
Author-X-Name-First: Zifeng
Author-X-Name-Last: Zhao
Author-Name: Peng Shi
Author-X-Name-First: Peng
Author-X-Name-Last: Shi
Author-Name: Zhengjun Zhang
Author-X-Name-First: Zhengjun
Author-X-Name-Last: Zhang
Title: Modeling Multivariate Time Series With Copula-Linked Univariate D-Vines
Abstract:
This article proposes a novel multivariate time series model named copula-linked univariate D-vines (CuDvine), which enables the simultaneous copula-based modeling of both temporal and cross-sectional dependence for multivariate time series. To construct CuDvine, we first build a semiparametric univariate D-vine time series model (uDvine) based on a D-vine. The uDvine generalizes the existing first-order copula-based Markov chain models to Markov chains of arbitrary order. Building upon uDvine, we construct CuDvine by linking multiple uDvines via a parametric copula. As a simple and tractable model, CuDvine provides flexible models for marginal behavior and temporal dependence of time series, and can also incorporate sophisticated cross-sectional dependence such as time-varying and spatio-temporal dependence for high-dimensional applications. Robust and computationally efficient procedures, including a sequential model selection method and a two-stage MLE, are proposed for model estimation and inference, and their statistical properties are investigated. Numerical experiments are conducted to demonstrate the flexibility of CuDvine, and to examine the performance of the sequential model selection procedure and the two-stage MLE. Real data applications on the Australian electricity price data demonstrate the superior performance of CuDvine relative to traditional multivariate time series models.
Journal: Journal of Business & Economic Statistics
Pages: 690-704
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1859381
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1859381
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:690-704
Template-Type: ReDIF-Article 1.0
Author-Name: Mengya Liu
Author-X-Name-First: Mengya
Author-X-Name-Last: Liu
Author-Name: Fukang Zhu
Author-X-Name-First: Fukang
Author-X-Name-Last: Zhu
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Title: Multifrequency-Band Tests for White Noise Under Heteroscedasticity
Abstract:
This article proposes a new family of multifrequency-band tests for the white noise hypothesis by using the maximum overlap discrete wavelet packet transform. At each scale, the proposed multifrequency-band test has the chi-square asymptotic null distribution under mild conditions, which allow the data to be heteroscedastic. Moreover, an automatic multifrequency-band test is further proposed by using a data-driven method to select the scale, and its asymptotic null distribution is chi-square with one degree of freedom. Both multifrequency-band and automatic multifrequency-band tests are shown to have the desirable size and power performance by simulation studies, and their usefulness is further illustrated by two applications. As an extension, similar tests are given to check the adequacy of linear time series regression models, based on the unobserved model residuals.
Journal: Journal of Business & Economic Statistics
Pages: 799-814
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1870478
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1870478
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:799-814
Template-Type: ReDIF-Article 1.0
Author-Name: Helmut Farbmacher
Author-X-Name-First: Helmut
Author-X-Name-Last: Farbmacher
Author-Name: Raphael Guber
Author-X-Name-First: Raphael
Author-X-Name-Last: Guber
Author-Name: Sven Klaassen
Author-X-Name-First: Sven
Author-X-Name-Last: Klaassen
Title: Instrument Validity Tests With Causal Forests
Abstract:
Assumptions that are sufficient to identify local average treatment effects (LATEs) generate necessary conditions that allow instrument validity to be refuted. The degree to which instrument validity is violated, however, probably varies across subpopulations. In this article, we use causal forests to search and test for such local violations of the LATE assumptions in a data-driven way. Unlike previous instrument validity tests, our procedure is able to detect local violations. We evaluate the performance of our procedure in simulations and apply it in two different settings: parental preferences for mixed-sex composition of children and the Vietnam draft lottery.
Journal: Journal of Business & Economic Statistics
Pages: 605-614
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1847122
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1847122
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:605-614
Template-Type: ReDIF-Article 1.0
Author-Name: Yang Feng
Author-X-Name-First: Yang
Author-X-Name-Last: Feng
Author-Name: Qingfeng Liu
Author-X-Name-First: Qingfeng
Author-X-Name-Last: Liu
Author-Name: Qingsong Yao
Author-X-Name-First: Qingsong
Author-X-Name-Last: Yao
Author-Name: Guoqing Zhao
Author-X-Name-First: Guoqing
Author-X-Name-Last: Zhao
Title: Model Averaging for Nonlinear Regression Models
Abstract:
This article considers the problem of model averaging for regression models that can be nonlinear in their parameters and variables. We consider a nonlinear model averaging (NMA) framework and propose a weight-choosing criterion, the nonlinear information criterion (NIC). We show that up to a constant, NIC is an asymptotically unbiased estimator of the risk function under nonlinear settings with some mild assumptions. We also prove the optimality of NIC and show the convergence of the model averaging weights. Monte Carlo experiments reveal that NMA leads to relatively lower risks compared with alternative model selection and model averaging methods in most situations. Finally, we apply the NMA method to predicting the individual wage, where our approach leads to the lowest prediction errors in most cases.
Journal: Journal of Business & Economic Statistics
Pages: 785-798
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1870477
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1870477
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:785-798
Template-Type: ReDIF-Article 1.0
Author-Name: Nicolas Debarsy
Author-X-Name-First: Nicolas
Author-X-Name-Last: Debarsy
Author-Name: James P. LeSage
Author-X-Name-First: James P.
Author-X-Name-Last: LeSage
Title: Bayesian Model Averaging for Spatial Autoregressive Models Based on Convex Combinations of Different Types of Connectivity Matrices
Abstract:
There is a great deal of literature regarding use of nongeographically based connectivity matrices or combinations of geographic and non-geographic structures in spatial econometric models. We focus on convex combinations of weight matrices that result in a single weight matrix reflecting multiple types of connectivity, where coefficients from the convex combination can be used for inference regarding the relative importance of each type of connectivity in the global cross-sectional dependence scheme. We tackle the question of model uncertainty regarding selection of the best convex combination by Bayesian model averaging. We use Metropolis–Hastings guided Monte Carlo integration during MCMC estimation of the models to produce log-marginal likelihoods and associated posterior model probabilities. We focus on MCMC estimation, computation of posterior model probabilities, model averaged estimates of the parameters, scalar summary measures of the non-linear partial derivative impacts, and their associated empirical measures of dispersion.
Journal: Journal of Business & Economic Statistics
Pages: 547-558
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1840993
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1840993
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:547-558
Template-Type: ReDIF-Article 1.0
Author-Name: Jone Ascorbebeitia
Author-X-Name-First: Jone
Author-X-Name-Last: Ascorbebeitia
Author-Name: Eva Ferreira
Author-X-Name-First: Eva
Author-X-Name-Last: Ferreira
Author-Name: Susan Orbe
Author-X-Name-First: Susan
Author-X-Name-Last: Orbe
Title: The Effect of Dependence on European Market Risk. A Nonparametric Time Varying Approach
Abstract:
Multivariate dependence measures are crucial for risk management, where variables usually have heavy tails and non-Gaussian distributions. We propose a multivariate time varying Kendall’s tau estimator in a nonparametric context, allowing for locally stationary variables. Consistency and asymptotic normality of the estimator are provided. A simulation study is conducted which supports the idea of better performance than other related methods in many complex scenarios. The proposal is used to draw up a daily estimation of the dependence between European financial market indexes. Nonparametric conditional quantiles are estimated to detect any influence of the degree of dependence on the market returns. That dependence emerges as an important factor in the Euro Stoxx distribution. It is noteworthy that the Kendall’s tau only depends on the multivariate copula, so the effect is not due to hidden effects of the marginals. Local Granger causality is tested and evidence is found that the degree of dependence affects the Euro Stoxx returns in the left tail of the distribution. We believe that these results encourage further research into the effect of diversification in quantiles, linked to the factors behind systemic risk. Additionally, there is a noteworthy increase in dependence following the outbreak of COVID-19.
Journal: Journal of Business & Economic Statistics
Pages: 913-923
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1883439
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1883439
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:913-923
Template-Type: ReDIF-Article 1.0
Author-Name: Zhuying Xu
Author-X-Name-First: Zhuying
Author-X-Name-Last: Xu
Author-Name: Seonjin Kim
Author-X-Name-First: Seonjin
Author-X-Name-Last: Kim
Author-Name: Zhibiao Zhao
Author-X-Name-First: Zhibiao
Author-X-Name-Last: Zhao
Title: Locally Stationary Quantile Regression for Inflation and Interest Rates
Abstract:
Motivated by the potential time-varying and quantile-specific relation between inflation and interest rates, we propose a locally stationary quantile regression approach to model the inflation and interest rates relation. Large sample theory for estimation and inference of quantile-varying and time-varying coefficients is established. In empirical analysis of the inflation and interest rates relation, it is found that the estimated functional coefficients vary with time in a complicated manner. Furthermore, the relation is quantile-specific: not only do the selected orders differ for different quantiles, but also the coefficients corresponding to different quantiles can display completely different patterns.
Journal: Journal of Business & Economic Statistics
Pages: 838-851
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1874389
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1874389
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:838-851
Template-Type: ReDIF-Article 1.0
Author-Name: Ayanendranath Basu
Author-X-Name-First: Ayanendranath
Author-X-Name-Last: Basu
Author-Name: Abhik Ghosh
Author-X-Name-First: Abhik
Author-X-Name-Last: Ghosh
Author-Name: Nirian Martin
Author-X-Name-First: Nirian
Author-X-Name-Last: Martin
Author-Name: Leandro Pardo
Author-X-Name-First: Leandro
Author-X-Name-Last: Pardo
Title: A Robust Generalization of the Rao Test
Abstract:
This article presents new families of Rao-type test statistics based on the minimum density power divergence estimators which provide robust generalizations for testing simple and composite null hypotheses. The asymptotic null distributions of the proposed tests are obtained and their robustness properties are also theoretically studied. Numerical illustrations are provided to substantiate the theory developed. On the whole, the proposed tests are seen to be excellent alternatives to the classical Rao test as well as other well-known tests.
Journal: Journal of Business & Economic Statistics
Pages: 868-879
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1876711
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1876711
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:868-879
Template-Type: ReDIF-Article 1.0
Author-Name: Ying Lun Cheung
Author-X-Name-First: Ying Lun
Author-X-Name-Last: Cheung
Title: Long Memory Factor Model: On Estimation of Factor Memories
Abstract:
This article considers the estimation of the integration orders of the latent factors in an approximate factor model. Both the common factors and idiosyncratic error terms are potentially nonstationary fractionally integrated processes. We propose a two-stage approach to estimate the factor memories. We show the consistency and asymptotic normality of the proposed estimator. Applying the estimator to the log-squared returns of the U.S. financial institutions, we find evidence of long memory in the estimated factor. We also find that the factor becomes more persistent after 2007.
Journal: Journal of Business & Economic Statistics
Pages: 756-769
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1867559
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1867559
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:756-769
Template-Type: ReDIF-Article 1.0
Author-Name: Xuexin Wang
Author-X-Name-First: Xuexin
Author-X-Name-Last: Wang
Author-Name: Yixiao Sun
Author-X-Name-First: Yixiao
Author-X-Name-Last: Sun
Title: A Simple Asymptotically F-Distributed Portmanteau Test for Diagnostic Checking of Time Series Models With Uncorrelated Innovations
Abstract:
We propose a simple asymptotically F-distributed portmanteau test for diagnostically checking whether the innovations in a parametric time series model are uncorrelated while allowing them to exhibit higher-order dependence of unknown forms. A transform of sample residual autocovariances removing the influence of parameter estimation uncertainty makes the test simple. Further, by employing the orthonormal series variance estimator, a special sample autocovariances estimator that is asymptotically invariant to parameter estimation uncertainty, we show that the proposed test statistic is asymptotically F-distributed under fixed-smoothing asymptotics. The asymptotic F-theory accounts for the estimation error of the variance estimator that the asymptotic chi-squared theory ignores. Moreover, an extensive Monte Carlo study demonstrates that the F-test has more accurate finite sample size than existing tests with virtually no power loss. An application to S&P 500 returns illustrates the merits of the proposed methodology.
Journal: Journal of Business & Economic Statistics
Pages: 505-521
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1832505
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1832505
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:505-521
Template-Type: ReDIF-Article 1.0
Author-Name: David Donoho
Author-X-Name-First: David
Author-X-Name-Last: Donoho
Title: Data Come First: Discussion of “Co-citation and Co-authorship Networks of Statisticians”
Journal: Journal of Business & Economic Statistics
Pages: 491-491
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2022.2055356
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2055356
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:491-491
Template-Type: ReDIF-Article 1.0
Author-Name: Lu Yang
Author-X-Name-First: Lu
Author-X-Name-Last: Yang
Title: Nonparametric Copula Estimation for Mixed Insurance Claim Data
Abstract:
Multivariate claim data are common in insurance applications, for example, claims of each policyholder from different types of insurance coverages. Understanding the dependencies among such multivariate risks is critical to the solvency and profitability of insurers. Effectively modeling insurance claim data is challenging due to their special complexities. At the policyholder level, claim outcomes usually follow a two-part mixed distribution: a probability mass at zero corresponding to no claim and an otherwise positive claim from a skewed and long-tailed distribution. To simultaneously accommodate the complex features of the marginal distributions while flexibly quantifying the dependencies among multivariate claims, copula models are commonly used. Although a substantial body of literature focusing on copulas with continuous outcomes has emerged, some key steps do not carry over to mixed data. In particular, existing nonparametric copula estimators are not consistent for mixed data, and thus copula specification and diagnostics for mixed outcomes have been a problem. However, insurance is a closely regulated industry in which model validation is particularly important, and it is essential to develop a baseline nonparametric copula estimator to identify the underlying dependence structure. In this article, we fill in this gap by developing a nonparametric copula estimator for mixed data. We show the uniform convergence of the proposed nonparametric copula estimator. Through simulation studies, we demonstrate that the proportion of zeros plays a key role in the finite sample performance of the proposed estimator. Using the claim data from the Wisconsin Local Government Property Insurance Fund, we illustrate that our nonparametric copula estimator can assist analysts in identifying important features of the underlying dependence structure, revealing how different claims or risks are related to one another.
Journal: Journal of Business & Economic Statistics
Pages: 537-546
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1835668
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1835668
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:537-546
Template-Type: ReDIF-Article 1.0
Author-Name: Pengsheng Ji
Author-X-Name-First: Pengsheng
Author-X-Name-Last: Ji
Author-Name: Jiashun Jin
Author-X-Name-First: Jiashun
Author-X-Name-Last: Jin
Author-Name: Zheng Tracy Ke
Author-X-Name-First: Zheng Tracy
Author-X-Name-Last: Ke
Author-Name: Wanshan Li
Author-X-Name-First: Wanshan
Author-X-Name-Last: Li
Title: Co-citation and Co-authorship Networks of Statisticians
Abstract:
We collected and cleaned a large dataset on publications in statistics. The dataset consists of the co-author relationships and citation relationships of 83,331 articles published in 36 representative journals in statistics, probability, and machine learning, spanning 41 years. The dataset allows us to construct many different networks, and motivates a number of research problems about the research patterns and trends, research impacts, and network topology of the statistics community. In this article we focus on (i) using the citation relationships to estimate the research interests of authors, and (ii) using the co-author relationships to study the network topology. Using co-citation networks we constructed, we discover a “statistics triangle,” reminiscent of the statistical philosophy triangle (Efron 1998). We propose new approaches to constructing the “research map” of statisticians, as well as the “research trajectory” for a given author to visualize the evolution of his/her research interests. Using co-authorship networks we constructed, we discover a multi-layer community tree and produce a Sankey diagram to visualize the author migrations in different sub-areas. We also propose several new metrics for research diversity of individual authors. We find that “Bayes,” “Biostatistics,” and “Nonparametric” are three primary areas in statistics. We also identify 15 sub-areas, each of which can be viewed as a weighted average of the primary areas, and identify several underlying reasons for the formation of co-authorship communities. We also find that the research interests of statisticians have evolved significantly in the 41-year time window we studied: some areas (e.g., biostatistics, high-dimensional data analysis, etc.) have become increasingly more popular. The research diversity of statisticians may be lower than we might have expected. For example, for the personalized networks of most authors, the p-values of the proposed significance tests are relatively large.
Journal: Journal of Business & Economic Statistics
Pages: 469-485
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1978469
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1978469
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:469-485
Template-Type: ReDIF-Article 1.0
Author-Name: Genya Kobayashi
Author-X-Name-First: Genya
Author-X-Name-Last: Kobayashi
Author-Name: Yuta Yamauchi
Author-X-Name-First: Yuta
Author-X-Name-Last: Yamauchi
Author-Name: Kazuhiko Kakamu
Author-X-Name-First: Kazuhiko
Author-X-Name-Last: Kakamu
Author-Name: Yuki Kawakubo
Author-X-Name-First: Yuki
Author-X-Name-Last: Kawakubo
Author-Name: Shonosuke Sugasawa
Author-X-Name-First: Shonosuke
Author-X-Name-Last: Sugasawa
Title: Bayesian Approach to Lorenz Curve Using Time Series Grouped Data
Abstract:
This study is concerned with estimating the inequality measures associated with the underlying hypothetical income distribution from the time series grouped data on the income proportions. We adopt the Dirichlet likelihood approach where the parameters of the Dirichlet likelihood are set to the differences between the Lorenz curve of the hypothetical income distribution for the consecutive income classes and propose a state-space model which combines the transformed parameters of the Lorenz curve through a time series structure. The present article also studies the possibility of extending the likelihood model by considering a generalized version of the Dirichlet distribution where the mean is modeled based on the Lorenz curve with an additional hierarchical structure. The simulated data and real data on the Japanese monthly income survey confirmed that the proposed approach produces more efficient estimates of the inequality measures than the existing method that estimates the model independently without time series structures.
Journal: Journal of Business & Economic Statistics
Pages: 897-912
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1883438
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1883438
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:897-912
Template-Type: ReDIF-Article 1.0
Author-Name: Joshua Daniel Loyal
Author-X-Name-First: Joshua Daniel
Author-X-Name-Last: Loyal
Author-Name: Yuguo Chen
Author-X-Name-First: Yuguo
Author-X-Name-Last: Chen
Title: Discussion of “Co-citation and Co-authorship Networks of Statisticians”
Journal: Journal of Business & Economic Statistics
Pages: 497-498
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2022.2044828
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2044828
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:497-498
Template-Type: ReDIF-Article 1.0
Author-Name: Yuya Sasaki
Author-X-Name-First: Yuya
Author-X-Name-Last: Sasaki
Author-Name: Yulong Wang
Author-X-Name-First: Yulong
Author-X-Name-Last: Wang
Title: Fixed-k Inference for Conditional Extremal Quantiles
Abstract:
We develop a new extreme value theory for repeated cross-sectional and longitudinal/panel data to construct asymptotically valid confidence intervals (CIs) for conditional extremal quantiles from a fixed number k of nearest-neighbor tail observations. As a by-product, we also construct CIs for extremal quantiles of coefficients in linear random coefficient models. For any fixed k, the CIs are uniformly valid without parametric assumptions over a set of nonparametric data generating processes associated with various tail indices. Simulation studies show that our CIs exhibit superior small-sample coverage and length properties compared with alternative nonparametric methods based on asymptotic normality. Applying the proposed method to Natality Vital Statistics, we study factors of extremely low birth weights. We find that signs of major effects are the same as those found in preceding studies based on parametric models, but with different magnitudes.
Journal: Journal of Business & Economic Statistics
Pages: 829-837
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1870985
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1870985
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:829-837
Template-Type: ReDIF-Article 1.0
Author-Name: Michela Bia
Author-X-Name-First: Michela
Author-X-Name-Last: Bia
Author-Name: Alessandra Mattei
Author-X-Name-First: Alessandra
Author-X-Name-Last: Mattei
Author-Name: Andrea Mercatanti
Author-X-Name-First: Andrea
Author-X-Name-Last: Mercatanti
Title: Assessing Causal Effects in a Longitudinal Observational Study With “Truncated” Outcomes Due to Unemployment and Nonignorable Missing Data
Abstract:
Important statistical issues pervade the evaluation of training programs’ effects for unemployed people. In particular, the fact that offered wages are observed and well-defined only for subjects who are employed (truncation by death), and the problem that information on the individuals’ employment status and wage can be lost over time (attrition) raise methodological challenges for causal inference. We present an extended framework for simultaneously addressing the aforementioned problems, and thus answering important substantive research questions, in training evaluation observational studies with covariates, a binary treatment and longitudinal information on employment status and wage, which may be missing due to the lost to follow-up. There are two key features of this framework: we use principal stratification to properly define the causal effects of interest and to deal with nonignorable missingness, and we adopt a Bayesian approach for inference. The proposed framework allows us to answer an open issue in economics: the assessment of the trend of reservation wage over the duration of unemployment. We apply our framework to evaluate causal effects of foreign language training programs in Luxembourg, using administrative data on the labor force (IGSS-ADEM dataset). Our findings might be an incentive for the employment agencies to better design and implement future language training programs.
Journal: Journal of Business & Economic Statistics
Pages: 718-729
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1862672
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1862672
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:718-729
Template-Type: ReDIF-Article 1.0
Author-Name: Ekaterina Oparina
Author-X-Name-First: Ekaterina
Author-X-Name-Last: Oparina
Author-Name: Sorawoot Srisuma
Author-X-Name-First: Sorawoot
Author-X-Name-Last: Srisuma
Title: Analyzing Subjective Well-Being Data with Misclassification
Abstract:
We use novel nonparametric techniques to test for the presence of nonclassical measurement error in reported life satisfaction (LS) and study the potential effects from ignoring it. Our dataset comes from Wave 3 of the UK Understanding Society, which surveys about 35,000 British households. Our test finds evidence of measurement error in reported LS for the entire dataset as well as for 26 out of 32 socioeconomic subgroups in the sample. We estimate the joint distribution of reported and latent LS nonparametrically in order to understand the misreporting behavior. We show this distribution can then be used to estimate parametric models of latent LS. We find measurement error bias is not severe enough to distort the main drivers of LS. But there is an important difference that is policy relevant. We find women tend to over-report their latent LS relative to men. This may help explain the gender puzzle of why women are reportedly happier than men despite being worse off in objective outcomes such as income and employment.
Journal: Journal of Business & Economic Statistics
Pages: 730-743
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1865169
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1865169
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:730-743
Template-Type: ReDIF-Article 1.0
Author-Name: Pengsheng Ji
Author-X-Name-First: Pengsheng
Author-X-Name-Last: Ji
Author-Name: Jiashun Jin
Author-X-Name-First: Jiashun
Author-X-Name-Last: Jin
Author-Name: Zheng Tracy Ke
Author-X-Name-First: Zheng Tracy
Author-X-Name-Last: Ke
Author-Name: Wanshan Li
Author-X-Name-First: Wanshan
Author-X-Name-Last: Li
Title: Rejoinder: “Co-citation and Co-authorship Networks of Statisticians”
Journal: Journal of Business & Economic Statistics
Pages: 499-504
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2022.2055358
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2055358
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:499-504
Template-Type: ReDIF-Article 1.0
Author-Name: Leopoldo Catania
Author-X-Name-First: Leopoldo
Author-X-Name-Last: Catania
Author-Name: Roberto Di Mari
Author-X-Name-First: Roberto
Author-X-Name-Last: Di Mari
Author-Name: Paolo Santucci de Magistris
Author-X-Name-First: Paolo
Author-X-Name-Last: Santucci de Magistris
Title: Dynamic Discrete Mixtures for High-Frequency Prices
Abstract:
The tick structure of the financial markets entails discreteness of stock price changes. Based on this empirical evidence, we develop a multivariate model for discrete price changes featuring a mechanism to account for the large share of zero returns at high frequency. We assume that the observed price changes are independent conditional on the realization of two hidden Markov chains determining the dynamics and the distribution of the multivariate time series at hand. We study the properties of the model, which is a dynamic mixture of zero-inflated Skellam distributions. We develop an expectation-maximization algorithm with closed-form M-step that allows us to estimate the model by maximum likelihood. In the empirical application, we study the joint distribution of the price changes of a number of assets traded on NYSE. Particular focus is dedicated to the assessment of the quality of univariate and multivariate density forecasts, and of the precision of the predictions of moments like volatility and correlations. Finally, we look at the predictability of price staleness and its determinants in relation to the trading activity on the financial markets.
Journal: Journal of Business & Economic Statistics
Pages: 559-577
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1840994
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1840994
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:559-577
Template-Type: ReDIF-Article 1.0
Author-Name: Peter W. MacDonald
Author-X-Name-First: Peter W.
Author-X-Name-Last: MacDonald
Author-Name: Elizaveta Levina
Author-X-Name-First: Elizaveta
Author-X-Name-Last: Levina
Author-Name: Ji Zhu
Author-X-Name-First: Ji
Author-X-Name-Last: Zhu
Title: Discussion of “Co-citation and Co-authorship Networks of Statisticians” by Pengsheng Ji, Jiashun Jin, Zheng Tracy Ke, and Wanshan Li
Journal: Journal of Business & Economic Statistics
Pages: 492-493
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2022.2041423
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2041423
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:492-493
Template-Type: ReDIF-Article 1.0
Author-Name: Lu Lu
Author-X-Name-First: Lu
Author-X-Name-Last: Lu
Author-Name: Sujit K. Ghosh
Author-X-Name-First: Sujit K.
Author-X-Name-Last: Ghosh
Title: Nonparametric Estimation and Testing for Positive Quadrant Dependent Bivariate Copula
Abstract:
In many practical scenarios (e.g., finance, system reliability, etc.), it is often of interest to estimate a bivariate distribution and test for some desired association properties like positive quadrant dependence (PQD) or negative quadrant dependence (NQD). Often estimation and testing for the PQD/NQD property are performed using copula models, as this eliminates the need for estimating marginal distributions. Many parametric copula families have been used that allow for controlling the PQD/NQD property by a finite dimensional parameter (often just real-valued), and the problem reduces to the straightforward estimation and testing of a fixed dimensional parameter using standard statistical methodologies (e.g., maximum likelihood). This article extends such a line of work by dropping any parametric assumptions and provides a fully data-dependent automated approach to estimate a copula and test for the PQD property. The estimator is shown to be large-sample consistent under a set of mild regularity conditions. Numerical illustrations based on simulated data compare the performance of the proposed testing procedure with some available methods, and applications to real case studies are also provided.
Journal: Journal of Business & Economic Statistics
Pages: 664-677
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1855186
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1855186
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:664-677
Template-Type: ReDIF-Article 1.0
Author-Name: Yan Fang
Author-X-Name-First: Yan
Author-X-Name-Last: Fang
Author-Name: Lan Xue
Author-X-Name-First: Lan
Author-X-Name-Last: Xue
Author-Name: Carlos Martins-Filho
Author-X-Name-First: Carlos
Author-X-Name-Last: Martins-Filho
Author-Name: Lijian Yang
Author-X-Name-First: Lijian
Author-X-Name-Last: Yang
Title: Robust Estimation of Additive Boundaries With Quantile Regression and Shape Constraints
Abstract:
We consider the estimation of the boundary of a set when it is known to be sufficiently smooth, to satisfy certain shape constraints and to have an additive structure. Our proposed method is based on spline estimation of a conditional quantile regression and is resistant to outliers and/or extreme values in the data. This work is a desirable extension of existing works in the literature and can also be viewed as an alternative to existing estimators that have been used in empirical analysis. The results of a Monte Carlo study show that the new method outperforms the existing methods when outliers or heterogeneity are present. Our theoretical analysis indicates that our proposed boundary estimator is uniformly consistent under a set of standard assumptions. We illustrate practical use of our method by estimating two production functions using real-world datasets.
Journal: Journal of Business & Economic Statistics
Pages: 615-628
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1847123
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1847123
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:615-628
Template-Type: ReDIF-Article 1.0
Author-Name: Juan J. Dolado
Author-X-Name-First: Juan J.
Author-X-Name-Last: Dolado
Author-Name: Heiko Rachinger
Author-X-Name-First: Heiko
Author-X-Name-Last: Rachinger
Author-Name: Carlos Velasco
Author-X-Name-First: Carlos
Author-X-Name-Last: Velasco
Title: LM Tests for Joint Breaks in the Dynamics and Level of a Long-Memory Time Series
Abstract:
We consider a single-step Lagrange multiplier (LM) test for joint breaks (at known or unknown dates) in the long memory parameter, the short-run dynamics, and the level of a fractionally integrated time-series process. The regression version of this test is easily implementable and allows one to identify the specific sources of the break when the null hypothesis of parameter stability is rejected. However, its size and power properties are sensitive to the correct specification of short-run dynamics under the null. To address this problem, we propose a slight modification of the LM test (labeled LMW-type test) which also makes use of some information under the alternative (in the spirit of a Wald test). This test shares the same limiting distribution as the LM test under the null and local alternatives but achieves higher power by facilitating the correct specification of the short-run dynamics under the null and any alternative (either local or fixed). Monte Carlo simulations provide support for these theoretical results. An empirical application, concerning the origin of shifts in the long-memory properties of forward discount rates in five G7 countries, illustrates the usefulness of the proposed LMW-type test.
Journal: Journal of Business & Economic Statistics
Pages: 629-650
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1855184
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1855184
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:629-650
Template-Type: ReDIF-Article 1.0
Author-Name: Yucheng Sun
Author-X-Name-First: Yucheng
Author-X-Name-Last: Sun
Author-Name: Wen Xu
Author-X-Name-First: Wen
Author-X-Name-Last: Xu
Title: A Factor-Based Estimation of Integrated Covariance Matrix With Noisy High-Frequency Data
Abstract:
This article studies a high-dimensional factor model with sparse idiosyncratic covariance matrix in continuous time, using asynchronous high-frequency financial data contaminated by microstructure noise. We focus on consistent estimations of the number of common factors, the integrated covariance matrix and its inverse, based on the flat-top realized kernels introduced by Varneskov. Simulation results illustrate the satisfactory performance of our estimators in finite samples. We apply our methodology to the high-frequency price data on a large number of stocks traded in Shanghai and Shenzhen stock exchanges, and demonstrate its value for capturing time-varying covariations and portfolio allocation.
Journal: Journal of Business & Economic Statistics
Pages: 770-784
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1868301
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1868301
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:770-784
Template-Type: ReDIF-Article 1.0
Author-Name: Wenhao Cui
Author-X-Name-First: Wenhao
Author-X-Name-Last: Cui
Title: Laplace Estimator of Integrated Volatility When Sampling Times Are Endogenous
Abstract:
We study a class of nonparametric volatility estimators based on the Laplace transform, which are robust to the presence of endogeneity of observation times. Asymptotic properties and feasible central limit theorems are established. In the presence of time endogeneity, our bias-corrected Laplace estimator takes advantage of the informational content of time endogeneity, which leads to narrower confidence bounds. The finite sample properties of the estimator are studied through Monte Carlo simulations. Through the simulation study, we also find that, due to the presence of the kernel, the Laplace estimator can be adopted in a model with microstructure noise. The performance of the Laplace estimator is compared with other commonly used estimators through forecasting exercises employing high frequency data. We conclude that the bias-corrected Laplace estimator performs better than most estimators in terms of forecasting equity return volatility in the presence of both time endogeneity and market microstructure noise.
Journal: Journal of Business & Economic Statistics
Pages: 651-663
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1855185
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1855185
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:651-663
Template-Type: ReDIF-Article 1.0
Author-Name: Xuerong Chen
Author-X-Name-First: Xuerong
Author-X-Name-Last: Chen
Author-Name: Denis Heng-Yan Leung
Author-X-Name-First: Denis Heng-Yan
Author-X-Name-Last: Leung
Author-Name: Jing Qin
Author-X-Name-First: Jing
Author-X-Name-Last: Qin
Title: Nonignorable Missing Data, Single Index Propensity Score and Profile Synthetic Distribution Function
Abstract:
In missing data problems, missing not at random is difficult to handle since the response probability or propensity score is confounded with the outcome data model in the likelihood. Existing works often assume the propensity score is known up to a finite dimensional parameter. We relax this assumption and consider an unspecified single index model for the propensity score. A pseudo-likelihood based on the complete data is constructed by profiling out a synthetic distribution function that involves the unknown propensity score. The pseudo-likelihood gives asymptotically normal estimates. Simulations show the method compares favorably with existing methods.
Journal: Journal of Business & Economic Statistics
Pages: 705-717
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1860065
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1860065
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:705-717
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaojing Zhu
Author-X-Name-First: Xiaojing
Author-X-Name-Last: Zhu
Author-Name: Eric D. Kolaczyk
Author-X-Name-First: Eric D.
Author-X-Name-Last: Kolaczyk
Title: Discussion of “Co-citation and Co-authorship Networks of Statisticians”
Journal: Journal of Business & Economic Statistics
Pages: 494-496
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2022.2044335
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2044335
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:494-496
Template-Type: ReDIF-Article 1.0
Author-Name: Yi He
Author-X-Name-First: Yi
Author-X-Name-Last: He
Author-Name: Liang Peng
Author-X-Name-First: Liang
Author-X-Name-Last: Peng
Author-Name: Dabao Zhang
Author-X-Name-First: Dabao
Author-X-Name-Last: Zhang
Author-Name: Zifeng Zhao
Author-X-Name-First: Zifeng
Author-X-Name-Last: Zhao
Title: Risk Analysis via Generalized Pareto Distributions
Abstract:
We compute the value-at-risk (VaR) of financial losses by fitting a generalized Pareto distribution to exceedances over a threshold. Following the common practice of setting the threshold at high sample quantiles, we show that, for both independent observations and time-series data, the asymptotic variance of the maximum likelihood estimator depends on the choice of threshold, unlike existing studies that use a divergent threshold. We also propose a random weighted bootstrap method for the interval estimation of VaR, with critical values computed by the empirical distribution of the absolute differences between the bootstrapped estimators and the maximum likelihood estimator. While our asymptotic results unify the inference with nondivergent and divergent thresholds, the finite sample studies via simulation and application to real data show that the derived confidence intervals cover the true VaR well in insurance and finance.
Journal: Journal of Business & Economic Statistics
Pages: 852-867
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2021.1874390
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1874390
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:852-867
Template-Type: ReDIF-Article 1.0
Author-Name: Wen Xu
Author-X-Name-First: Wen
Author-X-Name-Last: Xu
Author-Name: Yanxi Hou
Author-X-Name-First: Yanxi
Author-X-Name-Last: Hou
Author-Name: Deyuan Li
Author-X-Name-First: Deyuan
Author-X-Name-Last: Li
Title: Prediction of Extremal Expectile Based on Regression Models With Heteroscedastic Extremes
Abstract:
The expectile has recently received much attention for its coherence as a tail risk measure. Estimation of the conditional expectile at extremal tails is of great interest in quantitative risk management. Regression analysis is a convenient and useful way to quantify the conditional effect of some predictors or risk factors on a response variable of interest. However, when it comes to the estimation of the extremal conditional expectile, traditional inference methods may suffer from considerable variation due to a lack of sufficient samples in tail regions, which makes the prediction inaccurate. In this article, we study the estimation of the extremal conditional expectile based on quantile regression and expectile regression models. We propose three methods to make extrapolation based on a second-order condition for a framework of so-called conditionally heteroscedastic and unconditionally homoscedastic extremes. In addition, we establish the asymptotic properties of the proposed methods and show their empirical behaviors through simulation studies. Finally, data analysis is conducted to illustrate the applications of the proposed methods in real problems.
Journal: Journal of Business & Economic Statistics
Pages: 522-536
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1833890
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1833890
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:522-536
Template-Type: ReDIF-Article 1.0
Author-Name: Leopoldo Catania
Author-X-Name-First: Leopoldo
Author-X-Name-Last: Catania
Title: A Stochastic Volatility Model With a General Leverage Specification
Abstract:
We introduce a new stochastic volatility model that postulates a general correlation structure between the shocks of the measurement and log volatility equations at different temporal lags. The resulting specification is able to better characterize the leverage effect and propagation in financial time series. Furthermore, it nests other asymmetric volatility models and can be used for testing and diagnostics. We derive the simulated maximum likelihood and quasi maximum likelihood estimators and investigate their finite sample performance in a simulation study. An empirical illustration shows that the postulated correlation structure improves the fit of the leverage propagation and leads to more precise volatility predictions.
Journal: Journal of Business & Economic Statistics
Pages: 678-689
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1855187
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1855187
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:678-689
Template-Type: ReDIF-Article 1.0
Author-Name: Youquan Pei
Author-X-Name-First: Youquan
Author-X-Name-Last: Pei
Author-Name: Tao Huang
Author-X-Name-First: Tao
Author-X-Name-Last: Huang
Author-Name: Heng Peng
Author-X-Name-First: Heng
Author-X-Name-Last: Peng
Author-Name: Jinhong You
Author-X-Name-First: Jinhong
Author-X-Name-Last: You
Title: Network-Based Clustering for Varying Coefficient Panel Data Models
Abstract:
In this article, we introduce a novel varying-coefficient panel-data model with locally stationary regressors and unknown group structure, in which the number of groups and the group membership are left unspecified. We develop a triple-localization approach to estimate the unknown subject-specific coefficient functions and then identify the latent group structure via community detection. To improve the efficiency of the first-stage estimator, we further propose a two-stage estimation method that enables the estimator to achieve optimal rates of convergence. In the theoretical part of the article, we derive the asymptotic theory of the resultant estimators. In the empirical part, we present several simulated examples together with an analysis of real data to illustrate the finite-sample performance of the proposed method.
Journal: Journal of Business & Economic Statistics
Pages: 578-594
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1841648
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1841648
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:578-594
Template-Type: ReDIF-Article 1.0
Author-Name: Loraine Seng
Author-X-Name-First: Loraine
Author-X-Name-Last: Seng
Author-Name: Jialiang Li
Author-X-Name-First: Jialiang
Author-X-Name-Last: Li
Title: Structural Equation Model Averaging: Methodology and Application
Abstract:
Instrumental variable (IV) methods are attractive since they can lead to a consistent answer to the main question in causal modeling, that is, the estimation of the average causal effect of an exposure on the outcome in the presence of unmeasured confounding. However, it is now acknowledged in the literature that using weak IVs might not suit the inference goal satisfactorily. In this article, we consider the problem of estimating causal effects in an observational study, allowing some IVs to be weak. In many modern learning tasks, we may face a large number of instruments whose quality could range from poor to strong. To incorporate them in a 2-stage least squares estimation procedure, we consider a model averaging technique. The proposed methods only involve a few layers of least squares estimation with closed-form solutions and thus are easy to implement in practice. Theoretical properties are carefully established, including the consistency and asymptotic normality of the estimated causal parameter. Numerical studies are carried out to assess the performance in low- and high-dimensional settings, and comparisons are made between our proposed method and a wide range of existing alternative methods. A real data example on home prices is analyzed to illustrate our methodology.
Journal: Journal of Business & Economic Statistics
Pages: 815-828
Issue: 2
Volume: 40
Year: 2022
Month: 4
X-DOI: 10.1080/07350015.2020.1870479
File-URL: http://hdl.handle.net/10.1080/07350015.2020.1870479
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:2:p:815-828
Template-Type: ReDIF-Article 1.0
Author-Name: Stephen L. Ross
Author-X-Name-First: Stephen L.
Author-X-Name-Last: Ross
Author-Name: Zhentao Shi
Author-X-Name-First: Zhentao
Author-X-Name-Last: Shi
Title: Measuring Social Interaction Effects When Instruments Are Weak
Abstract:
Studies that distinguish between exogenous and endogenous peer effects are relatively rare. To separate these effects, De Giorgi, Pellizzari, and Redaelli exploited partially overlapping peer groups where attributes of a student’s peers in one group provide instrumental variables (IV) for endogenous effects in another group. We apply this identification strategy to data from a period of transition at a Chinese university: dormitory roommate assignments were changed as students moved between campuses. This transition allows us to measure the endogenous effects between test scores of current roommates, but the traditional IV method suggests the potential for weak IV. We use weak-IV robust techniques to obtain properly sized tests. The S-test, K-test, and QCLR test all reject the null of zero endogenous effects with p-values between 0.01 and 0.05, as compared with 0.003 implied by the traditional IV estimator. The largest 95% confidence interval lower bound is 0.154 from the QCLR test, in contrast to 0.244 from traditional IV. Our findings offer unique evidence that endogenous peer effects influence academic outcomes at an empirically relevant magnitude, and an example where weak-IV robust tests are essential to quantify the relationship. Our results are robust to alternative model specifications.
Journal: Journal of Business & Economic Statistics
Pages: 995-1006
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1895811
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1895811
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:995-1006
Template-Type: ReDIF-Article 1.0
Author-Name: Ashok Kaul
Author-X-Name-First: Ashok
Author-X-Name-Last: Kaul
Author-Name: Stefan Klößner
Author-X-Name-First: Stefan
Author-X-Name-Last: Klößner
Author-Name: Gregor Pfeifer
Author-X-Name-First: Gregor
Author-X-Name-Last: Pfeifer
Author-Name: Manuel Schieler
Author-X-Name-First: Manuel
Author-X-Name-Last: Schieler
Title: Standard Synthetic Control Methods: The Case of Using All Preintervention Outcomes Together With Covariates
Abstract:
It is becoming increasingly popular in applications of standard synthetic control methods to include the entire pretreatment path of the outcome variable as economic predictors. We demonstrate both theoretically and empirically that using all outcome lags as separate predictors renders all other covariates irrelevant in such settings. This finding holds irrespective of how important these covariates are for accurately predicting posttreatment values of the outcome, threatening the estimator’s unbiasedness. We show that estimation results and corresponding policy conclusions can change considerably when the usage of outcome lags as predictors is restricted, resulting in other covariates obtaining positive weights. Monte Carlo studies examine potential bias.
Journal: Journal of Business & Economic Statistics
Pages: 1362-1376
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1930012
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1930012
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1362-1376
Template-Type: ReDIF-Article 1.0
Author-Name: Marc K. Chan
Author-X-Name-First: Marc K.
Author-X-Name-Last: Chan
Author-Name: Simon S. Kwok
Author-X-Name-First: Simon S.
Author-X-Name-Last: Kwok
Title: The PCDID Approach: Difference-in-Differences When Trends Are Potentially Unparallel and Stochastic
Abstract:
We develop a class of regression-based estimators, called Principal Components Difference-in-Differences (PCDID) estimators, for treatment effect estimation. Analogous to a control function approach, PCDID uses factor proxies constructed from control units to control for unobserved trends, assuming that the unobservables follow an interactive effects structure. We clarify the conditions under which the estimands in this regression-based approach represent useful causal parameters of interest. We establish consistency and asymptotic normality results of PCDID estimators under minimal assumptions on the specification of time trends. The PCDID approach is illustrated in an empirical exercise that examines the effects of welfare waiver programs on welfare caseloads in the United States.
Journal: Journal of Business & Economic Statistics
Pages: 1216-1233
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1914636
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1914636
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1216-1233
Template-Type: ReDIF-Article 1.0
Author-Name: Markus Pelger
Author-X-Name-First: Markus
Author-X-Name-Last: Pelger
Author-Name: Ruoxuan Xiong
Author-X-Name-First: Ruoxuan
Author-X-Name-Last: Xiong
Title: State-Varying Factor Models of Large Dimensions
Abstract:
This article develops an inferential theory for state-varying factor models of large dimensions. Unlike constant factor models, loadings are general functions of some recurrent state process. We develop an estimator for the latent factors and state-varying loadings under a large cross-section and time dimension. Our estimator combines nonparametric methods with principal component analysis. We derive the rate of convergence and limiting normal distribution for the factors, loadings, and common components. In addition, we develop a statistical test for a change in the factor structure in different states. We apply the estimator to the U.S. Treasury yields and S&P500 stock returns. The systematic factor structure in treasury yields differs in times of booms and recessions as well as in periods of high market volatility. State-varying factors based on the VIX capture significantly more variation and pricing information in individual stocks than constant factor models.
Journal: Journal of Business & Economic Statistics
Pages: 1315-1333
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1927744
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1927744
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1315-1333
Template-Type: ReDIF-Article 1.0
Author-Name: Michael W. McCracken
Author-X-Name-First: Michael W.
Author-X-Name-Last: McCracken
Author-Name: Joseph T. McGillicuddy
Author-X-Name-First: Joseph T.
Author-X-Name-Last: McGillicuddy
Author-Name: Michael T. Owyang
Author-X-Name-First: Michael T.
Author-X-Name-Last: Owyang
Title: Binary Conditional Forecasts
Abstract:
While conditional forecasting has become prevalent both in the academic literature and in practice (e.g., bank stress testing, scenario forecasting), its applications typically focus on continuous variables. In this article, we merge elements from the literature on the construction and implementation of conditional forecasts with the literature on forecasting binary variables. We use the Qual-VAR, whose joint VAR-probit structure allows us to form conditional forecasts of the latent variable which can then be used to form probabilistic forecasts of the binary variable. We apply the model to forecasting recessions in real-time and investigate the role of monetary and oil shocks on the likelihood of two U.S. recessions.
Journal: Journal of Business & Economic Statistics
Pages: 1246-1258
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1920960
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1920960
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1246-1258
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Zhong
Author-X-Name-First: Wei
Author-X-Name-Last: Zhong
Author-Name: Chuang Wan
Author-X-Name-First: Chuang
Author-X-Name-Last: Wan
Author-Name: Wenyang Zhang
Author-X-Name-First: Wenyang
Author-X-Name-Last: Zhang
Title: Estimation and Inference for Multi-Kink Quantile Regression
Abstract:
This article proposes a new Multi-Kink Quantile Regression (MKQR) model which assumes different linear quantile regression forms in different regions of the domain of the threshold covariate but remains continuous at the kink points. First, we investigate parameter estimation, kink point detection and statistical inference in MKQR models. We propose an iterative segmented quantile regression algorithm for estimating both the regression coefficients and the locations of kink points. The proposed algorithm is much more computationally efficient than the grid search algorithm and not sensitive to the selection of initial values. Second, asymptotic properties, such as selection consistency of the number of kink points and asymptotic normality of the estimators of both regression coefficients and kink effects, are established to justify the proposed method theoretically. Third, a score test based on partial subgradients is developed to verify whether the kink effects exist or not. Test-inversion confidence intervals for kink location parameters are also constructed. Monte Carlo simulations and two real data applications on the secondary industrial structure of China and the triceps skinfold thickness of Gambian females illustrate the excellent finite sample performances of the proposed MKQR model. A new R package MultiKink is developed to easily implement the proposed methods.
Journal: Journal of Business & Economic Statistics
Pages: 1123-1139
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1901720
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1901720
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1123-1139
Template-Type: ReDIF-Article 1.0
Author-Name: Andrii Babii
Author-X-Name-First: Andrii
Author-X-Name-Last: Babii
Author-Name: Eric Ghysels
Author-X-Name-First: Eric
Author-X-Name-Last: Ghysels
Author-Name: Jonas Striaukas
Author-X-Name-First: Jonas
Author-X-Name-Last: Striaukas
Title: Machine Learning Time Series Regressions With an Application to Nowcasting
Abstract:
This article introduces structured machine learning regressions for high-dimensional time series data potentially sampled at different frequencies. The sparse-group LASSO estimator can take advantage of such time series data structures and outperforms the unstructured LASSO. We establish oracle inequalities for the sparse-group LASSO estimator within a framework that allows for mixing processes and recognizes that financial and macroeconomic data may have heavier-than-exponential tails. An empirical application to nowcasting US GDP growth indicates that the estimator performs favorably compared to other alternatives and that text data can be a useful addition to more traditional numerical data. Our methodology is implemented in the R package midasml, available from CRAN.
Journal: Journal of Business & Economic Statistics
Pages: 1094-1106
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1899933
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1899933
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1094-1106
Template-Type: ReDIF-Article 1.0
Author-Name: Harold D. Chiang
Author-X-Name-First: Harold D.
Author-X-Name-Last: Chiang
Author-Name: Kengo Kato
Author-X-Name-First: Kengo
Author-X-Name-Last: Kato
Author-Name: Yukun Ma
Author-X-Name-First: Yukun
Author-X-Name-Last: Ma
Author-Name: Yuya Sasaki
Author-X-Name-First: Yuya
Author-X-Name-Last: Sasaki
Title: Multiway Cluster Robust Double/Debiased Machine Learning
Abstract:
This article investigates double/debiased machine learning (DML) under multiway clustered sampling environments. We propose a novel multiway cross-fitting algorithm and a multiway DML estimator based on this algorithm. We also develop a multiway cluster robust standard error formula. Simulations indicate that the proposed procedure has favorable finite sample performance. Applying the proposed method to market share data for demand analysis, we obtain larger two-way cluster robust standard errors for the price coefficient than nonrobust ones in the demand model.
Journal: Journal of Business & Economic Statistics
Pages: 1046-1056
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1895815
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1895815
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1046-1056
Template-Type: ReDIF-Article 1.0
Author-Name: Sofia Anyfantaki
Author-X-Name-First: Sofia
Author-X-Name-Last: Anyfantaki
Author-Name: Esfandiar Maasoumi
Author-X-Name-First: Esfandiar
Author-X-Name-Last: Maasoumi
Author-Name: Jue Ren
Author-X-Name-First: Jue
Author-X-Name-Last: Ren
Author-Name: Nikolas Topaloglou
Author-X-Name-First: Nikolas
Author-X-Name-Last: Topaloglou
Title: Evidence of Uniform Inefficiency in Market Portfolios Based on Dominance Tests
Abstract:
We find stochastic uniform inefficiency of many widely held (active) portfolios and fund strategies relative to popular benchmarks. Uniformity provides robust findings over general classes of utility (loss) functions and unknown distribution of returns. Evidence is based on statistical tests for the null of stochastic uniform inefficiency of a given portfolio. The alternative is that there is at least one portfolio that dominates it. We derive an analytical characterization of stochastic uniform inefficiency. We give the limit distribution for the empirical test statistic, and present a practical implementation with block bootstrap for consistent estimation of p-values. Our test is asymptotically exact and performs well in Monte Carlo experiments.
Journal: Journal of Business & Economic Statistics
Pages: 937-949
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1888741
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1888741
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:937-949
Template-Type: ReDIF-Article 1.0
Author-Name: Alexander Chudik
Author-X-Name-First: Alexander
Author-X-Name-Last: Chudik
Author-Name: Georgios Georgiadis
Author-X-Name-First: Georgios
Author-X-Name-Last: Georgiadis
Title: Estimation of Impulse Response Functions When Shocks Are Observed at a Higher Frequency Than Outcome Variables
Abstract:
This article proposes mixed-frequency distributed-lag (MFDL) estimators of impulse response functions in a setup where (i) the shock of interest is observed, (ii) the impact variable of interest is observed at a lower frequency (as a temporally aggregated or sequentially sampled variable), (iii) the data generating process (DGP) is given by a VAR model at the frequency of the shock, and (iv) the full set of relevant endogenous variables entering the DGP is unknown or unobserved. Consistency and asymptotic normality of the proposed MFDL estimators are established, and their small-sample performance is documented by a set of Monte Carlo experiments. The usefulness of the MFDL estimator is then illustrated in three empirical applications: (i) the daily pass-through of shocks to crude oil prices observed at the daily frequency to U.S. gasoline consumer prices observed at the weekly frequency, (ii) the impact of shocks to global investors’ risk appetite on global capital flows, and (iii) the impact of monetary policy shocks on real activity.
Journal: Journal of Business & Economic Statistics
Pages: 965-979
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1889567
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1889567
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:965-979
Template-Type: ReDIF-Article 1.0
Author-Name: Daoji Li
Author-X-Name-First: Daoji
Author-X-Name-Last: Li
Author-Name: Yinfei Kong
Author-X-Name-First: Yinfei
Author-X-Name-Last: Kong
Author-Name: Yingying Fan
Author-X-Name-First: Yingying
Author-X-Name-Last: Fan
Author-Name: Jinchi Lv
Author-X-Name-First: Jinchi
Author-X-Name-Last: Lv
Title: High-Dimensional Interaction Detection With False Sign Rate Control
Abstract:
Identifying interaction effects is fundamentally important in many scientific discoveries and contemporary applications, but it is challenging since the number of pairwise interactions increases quadratically with the number of covariates and the number of higher-order interactions grows even faster. Although there is a growing literature on interaction detection, little work has been done on prediction and the false sign rate in interaction detection for ultrahigh-dimensional regression models. This article fills such a gap. More specifically, in this article we establish some theoretical results on interaction selection for ultrahigh-dimensional quadratic regression models under random designs. We prove that the examined method enjoys the same oracle inequalities as the lasso estimator and further admits an explicit bound on the false sign rate. Moreover, the false sign rate can be asymptotically vanishing. These new theoretical characterizations are confirmed by simulation studies. The performance of our proposed approach is further illustrated through a real data application.
Journal: Journal of Business & Economic Statistics
Pages: 1234-1245
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1917419
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1917419
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1234-1245
Template-Type: ReDIF-Article 1.0
Author-Name: Yiu Lim Lui
Author-X-Name-First: Yiu Lim
Author-X-Name-Last: Lui
Author-Name: Weilin Xiao
Author-X-Name-First: Weilin
Author-X-Name-Last: Xiao
Author-Name: Jun Yu
Author-X-Name-First: Jun
Author-X-Name-Last: Yu
Title: The Grid Bootstrap for Continuous Time Models
Abstract:
This article proposes a new grid bootstrap to construct confidence intervals (CIs) for the persistence parameter in a class of continuous-time models. It differs from the standard grid bootstrap of Hansen in dealing with the initial condition. The asymptotic validity of the CI is discussed under the in-fill scheme. The modified grid bootstrap leads to uniform inferences on the persistence parameter. Its improvement over in-fill asymptotics is achieved by expanding the coefficient-based statistic around its in-fill asymptotic distribution, which is non-pivotal and depends on the initial condition. Monte Carlo studies show that the modified grid bootstrap performs better than Hansen’s grid bootstrap. Empirical applications to U.S. interest rates and volatilities suggest significant differences between the two bootstrap procedures when the initial condition is large.
Journal: Journal of Business & Economic Statistics
Pages: 1390-1402
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1930014
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1930014
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1390-1402
Template-Type: ReDIF-Article 1.0
Author-Name: Alessio Volpicella
Author-X-Name-First: Alessio
Author-X-Name-Last: Volpicella
Title: SVARs Identification Through Bounds on the Forecast Error Variance
Abstract:
This article identifies structural vector autoregressions (SVARs) through bound restrictions on the forecast error variance decomposition (FEVD). First, the article shows FEVD bounds correspond to quadratic inequality restrictions on the columns of the rotation matrix transforming reduced-form residuals into structural shocks. Second, the article establishes theoretical conditions such that bounds on the FEVD lead to a reduction in the width of the impulse response identified set relative to only imposing sign restrictions. Third, this article proposes a robust Bayesian approach to inference. Fourth, the article shows that elicitation of the bounds could be based on DSGE models with alternative parameterizations. Finally, an empirical application illustrates the potential usefulness of FEVD restrictions for obtaining informative inference in set-identified monetary SVARs and for removing unreasonable implications of models identified through sign restrictions.
Journal: Journal of Business & Economic Statistics
Pages: 1291-1301
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1927742
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1927742
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1291-1301
Template-Type: ReDIF-Article 1.0
Author-Name: Wenlu Tang
Author-X-Name-First: Wenlu
Author-X-Name-Last: Tang
Author-Name: Jinhan Xie
Author-X-Name-First: Jinhan
Author-X-Name-Last: Xie
Author-Name: Yuanyuan Lin
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Lin
Author-Name: Niansheng Tang
Author-X-Name-First: Niansheng
Author-X-Name-Last: Tang
Title: Quantile Correlation-based Variable Selection
Abstract:
This article is concerned with identifying important features in high-dimensional data analysis, especially when there are complex relationships among predictors. Without any specification of an actual model, we first introduce a multiple testing procedure based on the quantile correlation to select important predictors in high dimensionality. The quantile-correlation statistic is able to capture a wide range of dependence. A stepwise procedure is studied for further identifying important variables. Moreover, a sure independence screening based on the quantile correlation is developed for handling ultrahigh-dimensional data. It is computationally efficient and easy to implement. We establish the theoretical properties under mild conditions. Numerical studies, including simulation studies and real data analysis, provide supporting evidence that the proposal performs reasonably well in practical settings.
Journal: Journal of Business & Economic Statistics
Pages: 1081-1093
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1899932
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1899932
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1081-1093
Template-Type: ReDIF-Article 1.0
Author-Name: Xiaobin Liu
Author-X-Name-First: Xiaobin
Author-X-Name-Last: Liu
Author-Name: Thomas Tao Yang
Author-X-Name-First: Thomas Tao
Author-X-Name-Last: Yang
Author-Name: Yichong Zhang
Author-X-Name-First: Yichong
Author-X-Name-Last: Zhang
Title: Quasi-Bayesian Inference for Production Frontiers
Abstract:
This article proposes to estimate and infer the production frontier by combining multiple first-stage extreme quantile estimates via the quasi-Bayesian method. We show the asymptotic properties of the proposed estimator and the validity of the inference procedure. The finite sample performance of our method is illustrated through simulations and an empirical application.
Journal: Journal of Business & Economic Statistics
Pages: 1334-1345
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1927745
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1927745
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1334-1345
Template-Type: ReDIF-Article 1.0
Author-Name: Matteo Barigozzi
Author-X-Name-First: Matteo
Author-X-Name-Last: Barigozzi
Author-Name: Lorenzo Trapani
Author-X-Name-First: Lorenzo
Author-X-Name-Last: Trapani
Title: Testing for Common Trends in Nonstationary Large Datasets
Abstract:
We propose a testing-based procedure to determine the number of common trends in a large nonstationary dataset. Our procedure is based on a factor representation, where we determine whether there are (and how many) common factors (i) with linear trends, and (ii) with stochastic trends. Cointegration among the factors is also permitted. Our analysis is based on the fact that the largest eigenvalues of a suitably scaled covariance matrix of the data that correspond to the common factor part diverge, as the dimension N of the dataset diverges, whilst the others stay bounded. Therefore, we propose a class of randomized test statistics for the null that the pth largest eigenvalue diverges, based directly on the estimated eigenvalue. The tests require only minimal assumptions on the data-generating process. Monte Carlo evidence shows that our procedure has very good finite sample properties, clearly dominating competing approaches when no common trends are present. We illustrate our methodology through an application to the U.S. bond yields with different maturities observed over the last 30 years.
Journal: Journal of Business & Economic Statistics
Pages: 1107-1122
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1901719
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1901719
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1107-1122
Template-Type: ReDIF-Article 1.0
Author-Name: Yuehao Bai
Author-X-Name-First: Yuehao
Author-X-Name-Last: Bai
Author-Name: Andres Santos
Author-X-Name-First: Andres
Author-X-Name-Last: Santos
Author-Name: Azeem M. Shaikh
Author-X-Name-First: Azeem M.
Author-X-Name-Last: Shaikh
Title: A Two-Step Method for Testing Many Moment Inequalities
Abstract:
This article considers the problem of testing a finite number of moment inequalities. For this problem, Romano, Shaikh, and Wolf proposed a two-step testing procedure. In the first step, the procedure incorporates information about the location of moments using a confidence region. In the second step, the procedure accounts for the use of the confidence region in the first step by adjusting the significance level of the test appropriately. Its justification, however, has so far been limited to settings in which the number of moments is fixed with the sample size. In this article, we provide weak assumptions under which the same procedure remains valid even in settings in which there are “many” moments in the sense that the number of moments grows rapidly with the sample size. We confirm the practical relevance of our theoretical guarantees in a simulation study. We additionally provide both numerical and theoretical evidence that the procedure compares favorably with the method proposed by Chernozhukov, Chetverikov, and Kato, which has also been shown to be valid in such settings.
Journal: Journal of Business & Economic Statistics
Pages: 1070-1080
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1897016
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1897016
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1070-1080
Template-Type: ReDIF-Article 1.0
Author-Name: Zhuan Pei
Author-X-Name-First: Zhuan
Author-X-Name-Last: Pei
Author-Name: David S. Lee
Author-X-Name-First: David S.
Author-X-Name-Last: Lee
Author-Name: David Card
Author-X-Name-First: David
Author-X-Name-Last: Card
Author-Name: Andrea Weber
Author-X-Name-First: Andrea
Author-X-Name-Last: Weber
Title: Local Polynomial Order in Regression Discontinuity Designs
Abstract:
Treatment effect estimates in regression discontinuity (RD) designs are often sensitive to the choice of bandwidth and polynomial order, the two important ingredients of widely used local regression methods. While Imbens and Kalyanaraman and Calonico, Cattaneo, and Titiunik provided guidance on bandwidth, the sensitivity to polynomial order still poses a conundrum to RD practitioners. It is understood in the econometric literature that applying the argument of bias reduction does not help resolve this conundrum, since it would always lead to preferring higher orders. We therefore extend the frameworks of Imbens and Kalyanaraman and Calonico, Cattaneo, and Titiunik and use the asymptotic mean squared error of the local regression RD estimator as the criterion to guide polynomial order selection. We show in Monte Carlo simulations that the proposed order selection procedure performs well, particularly in large sample sizes typically found in empirical RD applications. This procedure extends easily to fuzzy regression discontinuity and regression kink designs.
Journal: Journal of Business & Economic Statistics
Pages: 1259-1267
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1920961
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1920961
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1259-1267
Template-Type: ReDIF-Article 1.0
Author-Name: David Powell
Author-X-Name-First: David
Author-X-Name-Last: Powell
Title: Synthetic Control Estimation Beyond Comparative Case Studies: Does the Minimum Wage Reduce Employment?
Abstract:
Panel data are often used in empirical work to account for fixed additive time and unit effects. The synthetic control estimator relaxes the assumption of additive fixed effects for comparative case studies in which a treated unit adopts a single policy. This article generalizes the synthetic control estimator to estimate parameters associated with multiple discrete or continuous explanatory variables, jointly estimating the parameters and synthetic controls for each unit. I apply the estimator to study the disemployment effects of the minimum wage, estimating that increases in the minimum wage reduce employment.
Journal: Journal of Business & Economic Statistics
Pages: 1302-1314
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1927743
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1927743
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1302-1314
Template-Type: ReDIF-Article 1.0
Author-Name: Sven Klaassen
Author-X-Name-First: Sven
Author-X-Name-Last: Klaassen
Author-Name: Jannis Kueck
Author-X-Name-First: Jannis
Author-X-Name-Last: Kueck
Author-Name: Martin Spindler
Author-X-Name-First: Martin
Author-X-Name-Last: Spindler
Title: Transformation Models in High Dimensions
Abstract:
Transformation models are a very important tool for applied statisticians and econometricians. In many applications, the dependent variable is transformed so that homogeneity or normal distribution of the error holds. In this article, we analyze transformation models in a high-dimensional setting, where the set of potential covariates is large. We propose an estimator for the transformation parameter and we show that it is asymptotically normally distributed using an orthogonalized moment condition where the nuisance functions depend on the target parameter. In a simulation study, we show that the proposed estimator works well in small samples. A common practice in labor economics is to transform wage with the log-function. In this study, we test if this transformation holds in American Community Survey (ACS) data from the United States.
Journal: Journal of Business & Economic Statistics
Pages: 1168-1178
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1906259
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1906259
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1168-1178
Template-Type: ReDIF-Article 1.0
Author-Name: Robert Garlick
Author-X-Name-First: Robert
Author-X-Name-Last: Garlick
Author-Name: Joshua Hyman
Author-X-Name-First: Joshua
Author-X-Name-Last: Hyman
Title: Quasi-Experimental Evaluation of Alternative Sample Selection Corrections
Abstract:
Researchers routinely use datasets where outcomes of interest are unobserved for some cases, potentially creating a sample selection problem. Statisticians and econometricians have proposed many selection correction methods to address this challenge. We use a natural experiment to evaluate different sample selection correction methods’ performance. From 2007, the state of Michigan required that all students take a college entrance exam, increasing the exam-taking rate from 64% to 99% and largely eliminating selection into exam-taking. We apply different selection correction methods, using different sets of covariates, to the selected exam score data from before 2007. We compare the estimated coefficients from the selection-corrected models to those from OLS regressions using the complete exam score data from after 2007 as a benchmark. We find that less restrictive semiparametric correction methods typically perform better than parametric correction methods but not better than simple OLS regressions that do not correct for selection. Performance is generally worse for models that use only a few discrete covariates than for models that use more covariates with less coarse distributions.
Journal: Journal of Business & Economic Statistics
Pages: 950-964
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1889566
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1889566
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:950-964
Template-Type: ReDIF-Article 1.0
Author-Name: Artūras Juodis
Author-X-Name-First: Artūras
Author-X-Name-Last: Juodis
Author-Name: Simon Reese
Author-X-Name-First: Simon
Author-X-Name-Last: Reese
Title: The Incidental Parameters Problem in Testing for Remaining Cross-Section Correlation
Abstract:
In this article, we consider the properties of the Pesaran CD test for cross-section correlation when applied to residuals obtained from panel data models with many estimated parameters. We show that the presence of period-specific parameters leads the CD test statistic to diverge as the time dimension of the sample grows. This result holds even if cross-section dependence is correctly accounted for and hence constitutes an example of the incidental parameters problem. The relevance of this problem is investigated for both the classical two-way fixed-effects estimator and the Common Correlated Effects estimator of Pesaran. We suggest a weighted CD test statistic which re-establishes standard normal inference under the null hypothesis. Given the widespread use of the CD test statistic to test for remaining cross-section correlation, our results have far reaching implications for empirical researchers.
Journal: Journal of Business & Economic Statistics
Pages: 1191-1203
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1906687
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1906687
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1191-1203
Template-Type: ReDIF-Article 1.0
Author-Name: Helmut Lütkepohl
Author-X-Name-First: Helmut
Author-X-Name-Last: Lütkepohl
Author-Name: Thore Schlaak
Author-X-Name-First: Thore
Author-X-Name-Last: Schlaak
Title: Heteroscedastic Proxy Vector Autoregressions
Abstract:
In proxy vector autoregressive models, the structural shocks of interest are identified by an instrument. Although heteroscedasticity is occasionally allowed for in inference, it is typically taken for granted that the impact effects of the structural shocks are time-invariant despite the change in their variances. We develop a test for this implicit assumption and present evidence that the assumption of time-invariant impact effects may be violated in previously used empirical models.
Journal: Journal of Business & Economic Statistics
Pages: 1268-1281
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1920962
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1920962
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1268-1281
Template-Type: ReDIF-Article 1.0
Author-Name: Sander Barendse
Author-X-Name-First: Sander
Author-X-Name-Last: Barendse
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Title: Comparing Predictive Accuracy in the Presence of a Loss Function Shape Parameter
Abstract:
We develop tests for out-of-sample forecast comparisons based on loss functions that contain shape parameters. Examples include comparisons using average utility across a range of values for the level of risk aversion, comparisons of forecast accuracy using characteristics of a portfolio return across a range of values for the portfolio weight vector, and comparisons using recently-proposed “Murphy diagrams” for classes of consistent scoring rules. An extensive Monte Carlo study verifies that our tests have good size and power properties in realistic sample sizes, particularly when compared with existing methods which break down when the number of values considered for the shape parameter grows. We present three empirical illustrations of the new test.
Journal: Journal of Business & Economic Statistics
Pages: 1057-1069
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1896527
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1896527
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1057-1069
Template-Type: ReDIF-Article 1.0
Author-Name: Kashif Yousuf
Author-X-Name-First: Kashif
Author-X-Name-Last: Yousuf
Author-Name: Yang Feng
Author-X-Name-First: Yang
Author-X-Name-Last: Feng
Title: Targeting Predictors Via Partial Distance Correlation With Applications to Financial Forecasting
Abstract:
High-dimensional time series datasets are becoming increasingly common in various fields of economics and finance. Given the ubiquity of time series data, it is crucial to develop efficient variable screening methods that use the unique features of time series. This article introduces several model-free screening methods based on partial distance correlation and developed specifically to deal with time-dependent data. Methods are developed both for univariate models, such as nonlinear autoregressive models with exogenous predictors (NARX), and multivariate models such as linear or nonlinear VAR models. Sure screening properties are proved for our methods, which depend on the moment conditions, and the strength of dependence in the response and covariate processes, amongst other factors. We show the effectiveness of our methods via extensive simulation studies and an application on forecasting U.S. market returns.
Journal: Journal of Business & Economic Statistics
Pages: 1007-1019
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1895812
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1895812
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1007-1019
Template-Type: ReDIF-Article 1.0
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Xuerong Chen
Author-X-Name-First: Xuerong
Author-X-Name-Last: Chen
Author-Name: Tao Zou
Author-X-Name-First: Tao
Author-X-Name-Last: Zou
Author-Name: Chih-Ling Tsai
Author-X-Name-First: Chih-Ling
Author-X-Name-Last: Tsai
Title: Imputations for High Missing Rate Data in Covariates Via Semi-supervised Learning Approach
Abstract:
Advancements in data collection techniques and the heterogeneity of data resources can yield high percentages of missing observations on variables, such as block-wise missing data. Under missing-data scenarios, traditional methods such as the simple average, k-nearest neighbor, multiple, and regression imputations may lead to results that are unstable or cannot be computed. Motivated by the concept of semi-supervised learning, we propose a novel approach with which to fill in missing values in covariates that have high missing rates. Specifically, we consider the missing and nonmissing subjects in any covariate as the unlabeled and labeled target outputs, respectively, and treat their corresponding responses as the unlabeled and labeled inputs. This innovative setting allows us to impute a large number of missing data without imposing any model assumptions. In addition, the resulting imputation has a closed form for continuous covariates, and it can be calculated efficiently. An analogous procedure is applicable for discrete covariates. We further employ nonparametric techniques to show the theoretical properties of imputed covariates. Simulation studies and an online consumer finance example are presented to illustrate the usefulness of the proposed method.
Journal: Journal of Business & Economic Statistics
Pages: 1282-1290
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1922120
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1922120
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1282-1290
Template-Type: ReDIF-Article 1.0
Author-Name: Mike G. Tsionas
Author-X-Name-First: Mike G.
Author-X-Name-Last: Tsionas
Title: Estimating Monotone Concave Stochastic Production Frontiers
Abstract:
Recent research shows that the search for Bayesian estimation of concave production functions is a fruitful area of investigation. In this article, we use a flexible cost function that globally satisfies the monotonicity and curvature properties to estimate features of the production function. Specification of a globally monotone concave production function is a difficult task, which we avoid here by using the first-order conditions for cost minimization from a globally monotone concave cost function. The problem of unavailable factor prices is bypassed by assuming structure for relative prices in the first-order conditions. The new technique is shown to perform well in a Monte Carlo experiment as well as in an empirical application to rice farming in India.
Journal: Journal of Business & Economic Statistics
Pages: 1403-1414
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1931240
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1931240
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1403-1414
Template-Type: ReDIF-Article 1.0
Author-Name: Min Seong Kim
Author-X-Name-First: Min Seong
Author-X-Name-Last: Kim
Title: Robust Inference for Diffusion-Index Forecasts With Cross-Sectionally Dependent Data
Abstract:
In this article, we propose the time-series average of spatial HAC estimators for the variance of the estimated common factors under the approximate factor structure. Based on this, we provide the confidence interval for the conditional mean of the diffusion-index forecasting model with cross-sectional heteroscedasticity and dependence of the idiosyncratic errors. We establish the asymptotics under very mild conditions, and no prior information about the dependence structure is needed to implement our procedure. We employ a bootstrap to select the bandwidth parameter. Simulation studies show that our procedure performs well in finite samples. We apply the proposed confidence interval to the problem of forecasting the unemployment rate using data by Ludvigson and Ng.
Journal: Journal of Business & Economic Statistics
Pages: 1153-1167
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1906258
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1906258
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1153-1167
Template-Type: ReDIF-Article 1.0
Author-Name: Jayeeta Bhattacharya
Author-X-Name-First: Jayeeta
Author-X-Name-Last: Bhattacharya
Author-Name: Nathalie Gimenes
Author-X-Name-First: Nathalie
Author-X-Name-Last: Gimenes
Author-Name: Emmanuel Guerre
Author-X-Name-First: Emmanuel
Author-X-Name-Last: Guerre
Title: Semiparametric Quantile Models for Ascending Auctions With Asymmetric Bidders
Abstract:
The article proposes a parsimonious and flexible semiparametric quantile regression specification for asymmetric bidders within the independent private value framework. Asymmetry is parameterized using powers of a parent private value distribution, which is generated by a quantile regression specification. As noted in Cantillon, this covers and extends models used for efficient collusion, joint bidding and mergers among homogeneous bidders. The specification can be estimated for ascending auctions using the winning bids and the winner’s identity. The estimation proceeds in two stages. The asymmetry parameters are estimated from the winner’s identity using a simple maximum likelihood procedure. The parent quantile regression specification can be estimated using simple modifications of Gimenes. Specification testing procedures are also considered. A timber application reveals that weaker bidders are 30% less likely to win the auction than stronger ones. It is also found that increasing participation in an asymmetric ascending auction may not be as beneficial as using an optimal reserve price, as would have been expected from a result of Bulow and Klemperer that is valid under symmetry.
Journal: Journal of Business & Economic Statistics
Pages: 1020-1033
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1895813
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1895813
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1020-1033
Template-Type: ReDIF-Article 1.0
Author-Name: Guochang Wang
Author-X-Name-First: Guochang
Author-X-Name-Last: Wang
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Author-Name: Xiaofeng Shao
Author-X-Name-First: Xiaofeng
Author-X-Name-Last: Shao
Title: Testing for the Martingale Difference Hypothesis in Multivariate Time Series Models
Abstract:
This article proposes a general class of tests to examine whether the error term is a martingale difference sequence in a multivariate time series model with parametric conditional mean. These new tests are formed based on the recently developed martingale difference divergence matrix (MDDM), and they provide formal tools to test the multivariate martingale difference hypothesis in the literature for the first time. Under suitable conditions, the asymptotic null distributions of these MDDM-based tests are established. Moreover, these MDDM-based tests are consistent to detect a broad class of fixed alternatives, and have nontrivial power against local alternatives of order n-super--1/2, where n is the sample size. Since the asymptotic null distributions depend on the data generating process and the parameter estimation, a wild bootstrap procedure is further proposed to approximate the critical values of these MDDM-based tests, and its theoretical validity is justified. Finally, the usefulness of these MDDM-based tests is illustrated by simulation studies and one real data example.
Journal: Journal of Business & Economic Statistics
Pages: 980-994
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1889568
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1889568
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:980-994
Template-Type: ReDIF-Article 1.0
Author-Name: Xin Chen
Author-X-Name-First: Xin
Author-X-Name-Last: Chen
Author-Name: Jia Zhang
Author-X-Name-First: Jia
Author-X-Name-Last: Zhang
Author-Name: Wang Zhou
Author-X-Name-First: Wang
Author-X-Name-Last: Zhou
Title: High-Dimensional Elliptical Sliced Inverse Regression in Non-Gaussian Distributions
Abstract:
Sliced inverse regression (SIR) is the most widely used sufficient dimension reduction method due to its simplicity, generality and computational efficiency. However, when the distribution of covariates deviates from the multivariate normal distribution, the estimation efficiency of SIR gets rather low, and the SIR estimator may be inconsistent and misleading, especially in the high-dimensional setting. In this article, we propose a robust alternative to SIR, called elliptical sliced inverse regression (ESIR), to analyze high-dimensional, elliptically distributed data. There are wide applications of elliptically distributed data, especially in finance and economics where the distribution of the data is often heavy-tailed. To tackle the heavy-tailed elliptically distributed covariates, we make novel use of the multivariate Kendall’s tau matrix in a generalized eigenvalue problem framework for sufficient dimension reduction. Methodologically, we present a practical algorithm for our method. Theoretically, we investigate the asymptotic behavior of the ESIR estimator under the high-dimensional setting. Extensive simulation results show ESIR significantly improves the estimation efficiency in heavy-tailed scenarios, compared with other robust SIR methods. Analysis of the Istanbul stock exchange dataset also demonstrates the effectiveness of our proposed method. Moreover, ESIR can be easily extended to other sufficient dimension reduction methods and applied to nonelliptical heavy-tailed distributions.
Journal: Journal of Business & Economic Statistics
Pages: 1204-1215
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1910041
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1910041
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1204-1215
Template-Type: ReDIF-Article 1.0
Author-Name: Angelo Mele
Author-X-Name-First: Angelo
Author-X-Name-Last: Mele
Title: A Structural Model of Homophily and Clustering in Social Networks
Abstract:
I develop and estimate a structural model of network formation with heterogeneous players and latent community structure, whose equilibrium homophily and clustering levels match those usually observed in real-world social networks. Players belong to communities unobserved by the econometrician and have community-specific payoffs, allowing preferences to have a bias for similar people. Players meet sequentially and decide whether to form bilateral links, after receiving a random matching shock. The model converges to a hierarchical exponential family random graph. Using school friendship network data from Add Health, I estimate the posterior distribution of parameters and unobserved heterogeneity, detecting high levels of racial homophily and payoff heterogeneity across communities. The posterior predictions of sufficient statistics show that the model is able to replicate the homophily levels and the aggregate clustering of the observed network, in contrast with standard exponential family network models without community structure.
Journal: Journal of Business & Economic Statistics
Pages: 1377-1389
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1930013
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1930013
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1377-1389
Template-Type: ReDIF-Article 1.0
Author-Name: Jean-Marie Dufour
Author-X-Name-First: Jean-Marie
Author-X-Name-Last: Dufour
Author-Name: Denis Pelletier
Author-X-Name-First: Denis
Author-X-Name-Last: Pelletier
Title: Practical Methods for Modeling Weak VARMA Processes: Identification, Estimation and Specification With a Macroeconomic Application
Abstract:
We consider the problem of developing practical methods for modelling weak VARMA processes. We first propose new identified VARMA representations, the diagonal MA equation form and the final MA equation form, where the MA operator is either diagonal or scalar. Both these representations have the important feature that they constitute relatively simple modifications of a VAR model (in contrast with the echelon representation). Second, for estimating VARMA models, we develop computationally simple methods which only require linear regressions. The asymptotic properties of the estimator are derived under weak hypotheses on the innovations (uncorrelated and strong mixing), in order to broaden the class of models to which it can be applied. Third, we present a modified information criterion which yields consistent estimates of the orders under the proposed representations. The estimation methods are studied by simulation. To demonstrate the importance of using VARMA models to study multivariate time series, we compare the impulse-response functions and the out-of-sample forecasts generated by VARMA and VAR models. The proposed methodology is applied to a six-variable macroeconomic model of monetary policy, based on the U.S. monthly data over the period 1962–1996. The results demonstrate the advantages of using the VARMA methodology for impulse response estimation and forecasting, in contrast with standard VAR models.
Journal: Journal of Business & Economic Statistics
Pages: 1140-1152
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1904960
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1904960
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1140-1152
Template-Type: ReDIF-Article 1.0
Author-Name: Jad Beyhum
Author-X-Name-First: Jad
Author-X-Name-Last: Beyhum
Author-Name: Jean-Pierre Florens
Author-X-Name-First: Jean-Pierre
Author-X-Name-Last: Florens
Author-Name: Ingrid Van Keilegom
Author-X-Name-First: Ingrid
Author-X-Name-Last: Van Keilegom
Title: Nonparametric Instrumental Regression With Right Censored Duration Outcomes
Abstract:
This article analyzes the effect of a discrete treatment Z on a duration T. The treatment is not randomly assigned. The confounding issue is treated using a discrete instrumental variable explaining the treatment and independent of the error term of the model. Our framework is nonparametric and allows for random right censoring. This specification generates a nonlinear inverse problem and the average treatment effect is derived from its solution. We provide local and global identification properties that rely on a nonlinear system of equations. We propose an estimation procedure to solve this system and derive rates of convergence and conditions under which the estimator is asymptotically normal. When censoring makes identification fail, we develop partial identification results. Our estimators exhibit good finite sample properties in simulations. We also apply our methodology to the Illinois Reemployment Bonus Experiment.
Journal: Journal of Business & Economic Statistics
Pages: 1034-1045
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1895814
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1895814
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1034-1045
Template-Type: ReDIF-Article 1.0
Author-Name: Timo Dimitriadis
Author-X-Name-First: Timo
Author-X-Name-Last: Dimitriadis
Author-Name: Roxana Halbleib
Author-X-Name-First: Roxana
Author-X-Name-Last: Halbleib
Title: Realized Quantiles*
Abstract:
This article proposes a simple approach to estimate quantiles of daily financial returns directly from high-frequency data. We denote the resulting estimator as the realized quantile (RQ) and use it to forecast tail risk measures, such as Value at Risk (VaR) and Expected Shortfall (ES). The RQ estimator is built on the assumption that financial logarithmic prices are subordinated self-similar processes in intrinsic time. The intrinsic time dimension stochastically transforms the clock time in order to capture the real “heartbeat” of financial markets in accordance with their trading activity and/or riskiness. The self-similarity assumption allows us to compute daily quantiles by simply scaling up their intraday counterparts, while the subordination technique can easily accommodate numerous empirical features of financial returns, such as volatility persistence and fat-tailedness. Our method, which is built on a flexible assumption, is simple to implement and exploits the rich information content of high-frequency data from another time perspective than the classical clock time. In a comprehensive empirical exercise, we show that our forecasts of VaR and ES are more accurate than those from a large set of up-to-date comparative models, for both stocks and foreign exchange rates.
Journal: Journal of Business & Economic Statistics
Pages: 1346-1361
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1929249
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1929249
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1346-1361
Template-Type: ReDIF-Article 1.0
Author-Name: Gianluca Frasso
Author-X-Name-First: Gianluca
Author-X-Name-Last: Frasso
Author-Name: Paul H.C. Eilers
Author-X-Name-First: Paul H.C.
Author-X-Name-Last: Eilers
Title: Direct Semi-Parametric Estimation of the State Price Density Implied in Option Prices
Abstract:
We present a model for direct semi-parametric estimation of the state price density (SPD) implied by quoted option prices. We treat the observed prices as expected values of possible pay-offs at maturity, weighted by the unknown probability density function. We model the logarithm of the latter as a smooth function, using P-splines, while matching the expected values of the potential pay-offs with the observed prices. This leads to a special case of the penalized composite link model. Our estimates do not rely on any parametric assumption on the underlying asset price dynamics and are consistent with no-arbitrage conditions. The model shows excellent performance in simulations and in applications to real data.
Journal: Journal of Business & Economic Statistics
Pages: 1179-1190
Issue: 3
Volume: 40
Year: 2022
Month: 6
X-DOI: 10.1080/07350015.2021.1906686
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1906686
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:3:p:1179-1190
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1970576_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Ovidijus Stauskas
Author-X-Name-First: Ovidijus
Author-X-Name-Last: Stauskas
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: Tests of Equal Forecasting Accuracy for Nested Models with Estimated CCE Factors*
Abstract:
In this article, we propose new tests of equal predictive ability between nested models when factor-augmented regressions are used to forecast. In contrast to the previous literature, the unknown factors are not estimated by principal components but by the common correlated effects (CCE) approach, which employs cross-sectional averages of blocks of variables. This makes for easy interpretation of the estimated factors, and the resulting tests are easy to implement and account for the block structure of the data. Assuming that the number of averages is larger than the true number of factors, we establish the limiting distributions of the new tests as the number of time periods and the number of variables within each block jointly go to infinity. The main finding is that the limiting distributions do not depend on the number of factors but only on the number of averages, which is known. The important practical implication of this finding is that one does not need to estimate the number of factors consistently in order to apply our tests.
Journal: Journal of Business & Economic Statistics
Pages: 1745-1758
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1970576
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1970576
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1745-1758
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1961788_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Rubén Loaiza-Maya
Author-X-Name-First: Rubén
Author-X-Name-Last: Loaiza-Maya
Author-Name: Didier Nibbering
Author-X-Name-First: Didier
Author-X-Name-Last: Nibbering
Title: Scalable Bayesian Estimation in the Multinomial Probit Model
Abstract:
The multinomial probit (MNP) model is a popular tool for analyzing choice behavior as it allows for correlation between choice alternatives. Because current model specifications employ a full covariance matrix of the latent utilities for the choice alternatives, they are not scalable to a large number of choice alternatives. This article proposes a factor structure on the covariance matrix, which makes the model scalable to large choice sets. The main challenge in estimating this structure is that the model parameters require identifying restrictions. We identify the parameters by a trace-restriction on the covariance matrix, which is imposed through a reparameterization of the factor structure. We specify interpretable prior distributions on the model parameters and develop an MCMC sampler for parameter estimation. The proposed approach significantly improves performance in large choice sets relative to existing MNP specifications. Applications to purchase data show the economic importance of including a large number of choice alternatives in consumer choice analysis.
Journal: Journal of Business & Economic Statistics
Pages: 1678-1690
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1961788
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1961788
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1678-1690
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1970574_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Shouxia Wang
Author-X-Name-First: Shouxia
Author-X-Name-Last: Wang
Author-Name: Tao Huang
Author-X-Name-First: Tao
Author-X-Name-Last: Huang
Author-Name: Jinhong You
Author-X-Name-First: Jinhong
Author-X-Name-Last: You
Author-Name: Ming-Yen Cheng
Author-X-Name-First: Ming-Yen
Author-X-Name-Last: Cheng
Title: Robust Inference for Nonstationary Time Series with Possibly Multiple Changing Periodic Structures
Abstract:
Motivated by two examples concerning global warming and monthly total import and export by China, we study time series that contain a nonparametric periodic component with an unknown period, a nonparametric trending behavior and also additive covariate effects. Further, as the amplitude function may change at some known or unknown change-point(s), we extend our model to take this dynamic periodicity into account and introduce two change-point estimators. To the best of our knowledge, this is the first work to study such a complex periodic structure. A two-step estimation procedure is proposed to estimate accurately the periodicity, trend and covariate effects. First, we estimate the period with the trend and covariate effects being approximated by B-splines rather than being ignored. To achieve robustness we employ a penalized M-estimation method which uses post model selection inference ideas. Next, given the period estimate, we estimate the amplitude, trend and covariate effects. Asymptotic properties of our estimators are derived, including consistency of the period estimator and asymptotic normality and the oracle property of the estimated periodic sequence, trend and covariate effects. Simulation studies confirm the superiority of our method and illustrate the good performance of our change-point estimators. Applications to the two motivating examples demonstrate the utility of our methods.
Journal: Journal of Business & Economic Statistics
Pages: 1718-1731
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1970574
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1970574
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1718-1731
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1990072_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Xiao Huang
Author-X-Name-First: Xiao
Author-X-Name-Last: Huang
Author-Name: Zhaoguo Zhan
Author-X-Name-First: Zhaoguo
Author-X-Name-Last: Zhan
Title: Local Composite Quantile Regression for Regression Discontinuity
Abstract:
We introduce the local composite quantile regression (LCQR) to causal inference in regression discontinuity (RD) designs. Kai, Li and Zou study the efficiency property of LCQR, while we show that its nice boundary performance translates to accurate estimation of treatment effects in RD under a variety of data generating processes. Moreover, we propose a bias-corrected and standard error-adjusted t-test for inference, which leads to confidence intervals with good coverage probabilities. A bandwidth selector is also discussed. For illustration, we conduct a simulation study and revisit a classic example from Lee. A companion R package rdcqr is developed.
Journal: Journal of Business & Economic Statistics
Pages: 1863-1875
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1990072
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1990072
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1863-1875
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1939037_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Zacharias Psaradakis
Author-X-Name-First: Zacharias
Author-X-Name-Last: Psaradakis
Author-Name: Marián Vávra
Author-X-Name-First: Marián
Author-X-Name-Last: Vávra
Title: Using Triples to Assess Symmetry Under Weak Dependence
Abstract:
The problem of assessing symmetry about an unspecified center of the one-dimensional marginal distribution of a strictly stationary random process is considered. A well-known U-statistic based on data triples is used to detect deviations from symmetry, allowing the underlying process to satisfy suitable mixing or near-epoch dependence conditions. We suggest using subsampling for inference on the target parameter, establish the asymptotic validity of the method in our setting, and discuss data-driven rules for selecting the size of subsamples. The small-sample properties of the proposed inferential procedures are examined by means of Monte Carlo simulations. Applications to time series of output growth and stock returns are also presented.
Journal: Journal of Business & Economic Statistics
Pages: 1538-1551
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1939037
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1939037
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1538-1551
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1946067_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Phillip Heiler
Author-X-Name-First: Phillip
Author-X-Name-Last: Heiler
Title: Efficient Covariate Balancing for the Local Average Treatment Effect
Abstract:
This article develops an empirical balancing approach for the estimation of treatment effects under two-sided noncompliance using a binary instrumental variable. The method weights both treatment and outcome information with inverse probabilities to impose exact finite sample balance across instrument level groups. It is free of functional form assumptions on the outcome or the treatment selection step. By tailoring the loss function for the instrument propensity scores, the resulting treatment effect estimates are automatically weight normalized and exhibit both low bias and reduced variance in finite samples compared to conventional inverse probability weighting methods. We provide conditions for asymptotic normality and semiparametric efficiency and demonstrate how to use additional information about the treatment selection step for bias reduction in finite samples. A doubly robust extension is proposed as well. Monte Carlo simulations suggest that the theoretical advantages translate well to finite samples. The method is illustrated in an empirical example.
Journal: Journal of Business & Economic Statistics
Pages: 1569-1582
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1946067
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1946067
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1569-1582
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1983438_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Cheng Hsiao
Author-X-Name-First: Cheng
Author-X-Name-Last: Hsiao
Author-Name: Zhentao Shi
Author-X-Name-First: Zhentao
Author-X-Name-Last: Shi
Author-Name: Qiankun Zhou
Author-X-Name-First: Qiankun
Author-X-Name-Last: Zhou
Title: Transformed Estimation for Panel Interactive Effects Models
Abstract:
We propose a transformed estimator for the slope coefficients of panel models with interactive effects. The transformed estimation method does not require prior knowledge of the dimension of the factor structure. It is consistent and asymptotically normally distributed under fairly general conditions when N is fixed and T→∞, or T is fixed and N→∞, or when both N and T are large and N/T→a with 0<a<∞. Moreover, because the transformation is equivalent to aggregating cross-sectional units or time units before implementing the least-squares method over time or across cross-sectional units, it can bypass the issues arising from heteroscedasticity across cross-sectional units or serial correlation over time in the idiosyncratic errors. Furthermore, in the case that the idiosyncratic errors are independent over time, there is no asymptotic bias even if the explanatory variables contain lagged dependent variables when N/T→a<∞ as T→∞. Extensive Monte Carlo simulations are also conducted to examine the finite sample performance of the transformed estimation method.
Journal: Journal of Business & Economic Statistics
Pages: 1831-1848
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1983438
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1983438
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1831-1848
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1984928_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Stéphane Bonhomme
Author-X-Name-First: Stéphane
Author-X-Name-Last: Bonhomme
Author-Name: Martin Weidner
Author-X-Name-First: Martin
Author-X-Name-Last: Weidner
Title: Posterior Average Effects
Abstract:
Economists are often interested in estimating averages with respect to distributions of unobservables, such as moments of individual fixed-effects, or average partial effects in discrete choice models. For such quantities, we propose and study posterior average effects (PAE), where the average is computed conditional on the sample, in the spirit of empirical Bayes and shrinkage methods. While the usefulness of shrinkage for prediction is well-understood, a justification of posterior conditioning to estimate population averages is currently lacking. We show that PAE have minimum worst-case specification error under various forms of misspecification of the parametric distribution of unobservables. In addition, we introduce a measure of informativeness of the posterior conditioning, which quantifies the worst-case specification error of PAE relative to parametric model-based estimators. As illustrations, we report PAE estimates of distributions of neighborhood effects in the U.S., and of permanent and transitory components in a model of income dynamics.
Journal: Journal of Business & Economic Statistics
Pages: 1849-1862
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1984928
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1984928
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1849-1862
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2102021_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Juan Rubio-Ramírez
Author-X-Name-First: Juan
Author-X-Name-Last: Rubio-Ramírez
Title: Comments on “Narrative Restrictions and Proxies” by Giacomini, Kitagawa, and Read
Journal: Journal of Business & Economic Statistics
Pages: 1426-1428
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2022.2102021
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2102021
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1426-1428
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1933502_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Hang Qian
Author-X-Name-First: Hang
Author-X-Name-Last: Qian
Title: Bayesian Inference in Common Microeconometric Models With Massive Datasets by Double Marginalized Subsampling
Abstract:
Bayesian inference with a large dataset is computationally intensive, as Markov chain Monte Carlo simulation requires a complete scan of the dataset for each proposed parameter update. To reduce the number of data points evaluated at each iteration of posterior simulation, we develop a double marginalized subsampling method, which is applicable to a wide array of microeconometric models including Tobit, Probit, regressions with non-Gaussian errors, heteroscedasticity and stochastic volatility, hierarchical longitudinal models, time-varying-parameter regressions, Gaussian mixtures, etc. We also provide an extension to double pseudo-marginalized subsampling, which has more applications beyond conditionally conjugate models. With rank-one update of the cumulative statistics, both methods target the exact posterior distribution, from which a parameter draw can be obtained with every single observation. Simulation studies demonstrate the statistical and computational efficiency of the marginalized sampler. The methods are also applied to a real-world massive dataset on the incidentally truncated mortgage rates.
Journal: Journal of Business & Economic Statistics
Pages: 1484-1497
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1933502
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1933502
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1484-1497
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1933500_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Francisco Peñaranda
Author-X-Name-First: Francisco
Author-X-Name-Last: Peñaranda
Author-Name: Juan M. Rodríguez-Poo
Author-X-Name-First: Juan M.
Author-X-Name-Last: Rodríguez-Poo
Author-Name: Stefan Sperlich
Author-X-Name-First: Stefan
Author-X-Name-Last: Sperlich
Title: Nonparametric Specification Testing of Conditional Asset Pricing Models
Abstract:
This article presents an adaptive omnibus specification test of asset pricing models where the stochastic discount factor is conditionally affine in the pricing factors. These models provide constraints that conditional moments of returns and pricing factors must satisfy, but most of them do not provide information on the functional form of those moments. Our test is robust to functional form misspecification, and also detects any relationship between pricing errors and conditioning variables. We give special emphasis to the test implementation and calibration, and extensive simulation studies demonstrate that it functions well in practice. Our empirical applications show a conditional counterpart of a well-known problem of unconditional models. The lack of rejection of consumption-based conditional models seems to be due to a poor conditional correlation between consumption and stock returns.
Journal: Journal of Business & Economic Statistics
Pages: 1455-1469
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1933500
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1933500
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1455-1469
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1971089_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Yujie Liao
Author-X-Name-First: Yujie
Author-X-Name-Last: Liao
Author-Name: Jingyuan Liu
Author-X-Name-First: Jingyuan
Author-X-Name-Last: Liu
Author-Name: Donna L. Coffman
Author-X-Name-First: Donna L.
Author-X-Name-Last: Coffman
Author-Name: Runze Li
Author-X-Name-First: Runze
Author-X-Name-Last: Li
Title: Varying Coefficient Mediation Model and Application to Analysis of Behavioral Economics Data
Abstract:
This paper is concerned with causal mediation analysis with varying indirect and direct effects. We propose a varying coefficient mediation model, which can also be viewed as an extension of moderation analysis on a causal diagram. We develop a new estimation procedure for the direct and indirect effects based on B-splines. Under mild conditions, rates of convergence and asymptotic distributions of the resulting estimates are established. We further propose an F-type test for the direct effect. We conduct a simulation study to examine the finite-sample performance of the proposed methodology, and apply the new procedures to an empirical analysis of behavioral economics data.
Journal: Journal of Business & Economic Statistics
Pages: 1759-1771
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1971089
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1971089
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1759-1771
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2096042_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Mikkel Plagborg-Møller
Author-X-Name-First: Mikkel
Author-X-Name-Last: Plagborg-Møller
Title: Discussion of “Narrative Restrictions and Proxies” by Raffaella Giacomini, Toru Kitagawa, and Matthew Read
Journal: Journal of Business & Economic Statistics
Pages: 1434-1437
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2022.2096042
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2096042
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1434-1437
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1954527_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Xianshi Yu
Author-X-Name-First: Xianshi
Author-X-Name-Last: Yu
Author-Name: Ting Li
Author-X-Name-First: Ting
Author-X-Name-Last: Li
Author-Name: Ningchen Ying
Author-X-Name-First: Ningchen
Author-X-Name-Last: Ying
Author-Name: Bing-Yi Jing
Author-X-Name-First: Bing-Yi
Author-X-Name-Last: Jing
Title: Collaborative Filtering With Awareness of Social Networks
Abstract:
In this article, we present the NetRec method to leverage the social network data of users in collaborative filtering. We formulate two new network-related terms and obtain convex optimization problems that incorporate assumptions regarding users’ social connections and preferences about products. Our theory demonstrates that this procedure leads to a sharper error bound than before, as long as the observed social network is well structured. We point out that the larger the noise magnitude in the observed user preferences, the larger the reduction in the magnitude of the error bound. Moreover, our theory shows that the combination of the network-related term and the previously used term of nuclear norm gives estimates better than those achieved by any of them alone. We provide an algorithm to solve the new optimization problem and prove that it is guaranteed to find a global optimum. Both simulations and real data experiments are carried out to validate our theoretical findings. The application of the NetRec method on the Yelp data demonstrates its superiority over a state-of-the-art social recommendation method.
Journal: Journal of Business & Economic Statistics
Pages: 1629-1641
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1954527
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1954527
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1629-1641
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2115710_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Raffaella Giacomini
Author-X-Name-First: Raffaella
Author-X-Name-Last: Giacomini
Author-Name: Toru Kitagawa
Author-X-Name-First: Toru
Author-X-Name-Last: Kitagawa
Author-Name: Matthew Read
Author-X-Name-First: Matthew
Author-X-Name-Last: Read
Title: Narrative Restrictions and Proxies: Rejoinder
Abstract:
This rejoinder addresses the discussants’ specific comments on the article “Narrative Restrictions and Proxies” (Section 2) as well as more general comments on the approach to robust Bayesian inference that we have proposed in previous work (Section 1).
Journal: Journal of Business & Economic Statistics
Pages: 1438-1441
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2022.2115710
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2115710
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1438-1441
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1961787_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Gaorong Li
Author-X-Name-First: Gaorong
Author-X-Name-Last: Li
Author-Name: Lei Huang
Author-X-Name-First: Lei
Author-X-Name-Last: Huang
Author-Name: Jin Yang
Author-X-Name-First: Jin
Author-X-Name-Last: Yang
Author-Name: Wenyang Zhang
Author-X-Name-First: Wenyang
Author-X-Name-Last: Zhang
Title: A Synthetic Regression Model for Large Portfolio Allocation
Abstract:
Portfolio allocation is an important topic in financial data analysis. In this article, based on the mean-variance optimization principle, we propose a synthetic regression model for construction of portfolio allocation, and an easy-to-implement approach to generate the synthetic sample for the model. Compared with the regression approach in the existing literature for portfolio allocation, the proposed method of generating the synthetic sample provides a more accurate approximation for the synthetic response variable when the number of assets under consideration is large. Due to the embedded leave-one-out idea, the synthetic sample generated by the proposed method has weaker within-sample correlation, which makes the resulting portfolio allocation closer to the optimal one. This intuitive conclusion is theoretically confirmed by the asymptotic properties established in this article. We have also conducted intensive simulation studies to compare the proposed method with the existing ones, and found the proposed method works better. Finally, we apply the proposed method to real datasets. The yielded returns look very encouraging.
Journal: Journal of Business & Economic Statistics
Pages: 1665-1677
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1961787
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1961787
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1665-1677
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1953509_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Yujia Wu
Author-X-Name-First: Yujia
Author-X-Name-Last: Wu
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Tao Zou
Author-X-Name-First: Tao
Author-X-Name-Last: Zou
Author-Name: Chih-Ling Tsai
Author-X-Name-First: Chih-Ling
Author-X-Name-Last: Tsai
Title: Inward and Outward Network Influence Analysis
Abstract:
Measuring heterogeneous influence across nodes in a network is critical in network analysis. This article proposes an inward and outward network influence (IONI) model to assess nodal heterogeneity. Specifically, we allow for two types of influence parameters; one measures the magnitude of influence that each node exerts on others (outward influence), while we introduce a new parameter to quantify the receptivity of each node to being influenced by others (inward influence). Accordingly, these two types of influence measures naturally classify all nodes into four quadrants (high inward and high outward, low inward and high outward, low inward and low outward, and high inward and low outward). To demonstrate our four-quadrant clustering method in practice, we apply the quasi-maximum likelihood approach to estimate the influence parameters, and we show the asymptotic properties of the resulting estimators. In addition, score tests are proposed to examine the homogeneity of the two types of influence parameters. To improve the accuracy of inferences about nodal influences, we introduce a Bayesian information criterion that selects the optimal influence model. The usefulness of the IONI model and the four-quadrant clustering method is illustrated via simulation studies and an empirical example involving customer segmentation.
Journal: Journal of Business & Economic Statistics
Pages: 1617-1628
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1953509
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1953509
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1617-1628
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1941055_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Steven F. Lehrer
Author-X-Name-First: Steven F.
Author-X-Name-Last: Lehrer
Author-Name: R. Vincent Pohl
Author-X-Name-First: R. Vincent
Author-X-Name-Last: Pohl
Author-Name: Kyungchul Song
Author-X-Name-First: Kyungchul
Author-X-Name-Last: Song
Title: Multiple Testing and the Distributional Effects of Accountability Incentives in Education
Abstract:
This article proposes bootstrap-based multiple testing procedures for quantile treatment effect (QTE) heterogeneity under the assumption of selection on observables, and shows their asymptotic validity. Our procedure can be used to detect the quantiles and subgroups exhibiting treatment effect heterogeneity. We apply the multiple testing procedures to data from a large-scale Pakistani school report card experiment, and uncover evidence of policy-relevant heterogeneous effects from information provision on child test scores. Furthermore, our analysis reinforces the importance of preventing the inflation of false positive conclusions because 63% of statistically significant QTEs become insignificant once corrections for multiple testing are applied.
Journal: Journal of Business & Economic Statistics
Pages: 1552-1568
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1941055
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1941055
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1552-1568
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1938085_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Haozhe Zhang
Author-X-Name-First: Haozhe
Author-X-Name-Last: Zhang
Author-Name: Yehua Li
Author-X-Name-First: Yehua
Author-X-Name-Last: Li
Title: Unified Principal Component Analysis for Sparse and Dense Functional Data under Spatial Dependency
Abstract:
We consider spatially dependent functional data collected under a geostatistics setting, where locations are sampled from a spatial point process. The functional response is the sum of a spatially dependent functional effect and a spatially independent functional nugget effect. Observations on each function are made on discrete time points and contaminated with measurement errors. Under the assumption of spatial stationarity and isotropy, we propose a tensor product spline estimator for the spatio-temporal covariance function. When a coregionalization covariance structure is further assumed, we propose a new functional principal component analysis method that borrows information from neighboring functions. The proposed method also generates nonparametric estimators for the spatial covariance functions, which can be used for functional kriging. Under a unified framework for sparse and dense functional data, infill and increasing domain asymptotic paradigms, we develop the asymptotic convergence rates for the proposed estimators. Advantages of the proposed approach are demonstrated through simulation studies and two real data applications representing sparse and dense functional data, respectively.
Journal: Journal of Business & Economic Statistics
Pages: 1523-1537
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1938085
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1938085
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1523-1537
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1974459_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Maddalena Cavicchioli
Author-X-Name-First: Maddalena
Author-X-Name-Last: Cavicchioli
Title: Markov Switching GARCH Models: Higher Order Moments, Kurtosis Measures, and Volatility Evaluation in Recessions and Pandemic
Abstract:
In this article, we derive neat matrix formulas in closed form for computing higher order moments and kurtosis of univariate Markov switching GARCH models. We then provide asymptotic theory for sample estimators of higher order moments and kurtosis, which can be used for testing normality. We also check our theory statements numerically via Monte Carlo simulations. Finally, we take advantage of our theoretical results to recognize different periods of high volatility stressing the stock markets, such as the financial crisis and the pandemic.
Journal: Journal of Business & Economic Statistics
Pages: 1772-1783
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1974459
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1974459
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1772-1783
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1931241_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Johannes Ruf
Author-X-Name-First: Johannes
Author-X-Name-Last: Ruf
Author-Name: Weiguan Wang
Author-X-Name-First: Weiguan
Author-X-Name-Last: Wang
Title: Hedging With Linear Regressions and Neural Networks
Abstract:
We study neural networks as nonparametric estimation tools for the hedging of options. To this end, we design a network, named HedgeNet, that directly outputs a hedging strategy. This network is trained to minimize the hedging error instead of the pricing error. Applied to end-of-day and tick prices of S&P 500 and Euro Stoxx 50 options, the network is able to reduce the mean squared hedging error of the Black-Scholes benchmark significantly. However, a similar benefit arises by simple linear regressions that incorporate the leverage effect.
Journal: Journal of Business & Economic Statistics
Pages: 1442-1454
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1931241
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1931241
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1442-1454
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1990772_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Niko Hauzenberger
Author-X-Name-First: Niko
Author-X-Name-Last: Hauzenberger
Author-Name: Florian Huber
Author-X-Name-First: Florian
Author-X-Name-Last: Huber
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: Luca Onorante
Author-X-Name-First: Luca
Author-X-Name-Last: Onorante
Title: Fast and Flexible Bayesian Inference in Time-varying Parameter Regression Models
Abstract:
In this article, we write the time-varying parameter (TVP) regression model involving K explanatory variables and T observations as a constant coefficient regression model with KT explanatory variables. In contrast with much of the existing literature, which assumes coefficients to evolve according to a random walk, a hierarchical mixture model on the TVPs is introduced. The resulting model closely mimics a random coefficients specification which groups the TVPs into several regimes. These flexible mixtures allow for TVPs that feature a small, moderate, or large number of structural breaks. We develop computationally efficient Bayesian econometric methods based on the singular value decomposition of the KT regressors. In artificial data, we find our methods to be accurate and much faster than standard approaches in terms of computation time. In an empirical exercise involving inflation forecasting using a large number of predictors, we find our models to forecast better than alternative approaches and document different patterns of parameter change than are found with approaches which assume random walk evolution of parameters.
Journal: Journal of Business & Economic Statistics
Pages: 1904-1918
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1990772
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1990772
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1904-1918
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1990771_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Xuening Zhu
Author-X-Name-First: Xuening
Author-X-Name-Last: Zhu
Author-Name: Rui Pan
Author-X-Name-First: Rui
Author-X-Name-Last: Pan
Author-Name: Shuyuan Wu
Author-X-Name-First: Shuyuan
Author-X-Name-Last: Wu
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Feature Screening for Massive Data Analysis by Subsampling
Abstract:
Modern statistical analysis often encounters massive datasets with ultrahigh-dimensional features. In this work, we develop a subsampling approach for feature screening with massive datasets. The approach is implemented by repeated subsampling of massive data and can be used for analyzing tasks with memory constraints. To conduct the procedure, we first calculate an R-squared screening measure (and related sample moments) based on subsamples. Second, we consider three methods to combine the local statistics. In addition to the simple average method, we design a jackknife debiased screening measure and an aggregated moment screening measure. Both approaches reduce the bias of the subsampling screening measure and therefore increase the accuracy of the feature screening. Last, we consider a novel sequential sampling method that is more computationally efficient than the traditional random sampling method. The theoretical properties of the three screening measures under both sampling schemes are rigorously discussed. Finally, we illustrate the usefulness of the proposed method with an airline dataset containing 32.7 million records.
Journal: Journal of Business & Economic Statistics
Pages: 1892-1903
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1990771
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1990771
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1892-1903
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1961789_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Rui Pan
Author-X-Name-First: Rui
Author-X-Name-Last: Pan
Author-Name: Tunan Ren
Author-X-Name-First: Tunan
Author-X-Name-Last: Ren
Author-Name: Baishan Guo
Author-X-Name-First: Baishan
Author-X-Name-Last: Guo
Author-Name: Feng Li
Author-X-Name-First: Feng
Author-X-Name-Last: Li
Author-Name: Guodong Li
Author-X-Name-First: Guodong
Author-X-Name-Last: Li
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: A Note on Distributed Quantile Regression by Pilot Sampling and One-Step Updating
Abstract:
Quantile regression is a method of fundamental importance. How to efficiently conduct quantile regression for a large dataset on a distributed system is of great importance. We show that the popularly used one-shot estimation is statistically inefficient if data are not randomly distributed across different workers. To fix the problem, a novel one-step estimation method is developed with the following nice properties. First, the algorithm is communication efficient. That is, the communication cost demanded is practically acceptable. Second, the resulting estimator is statistically efficient. That is, its asymptotic covariance is the same as that of the global estimator. Third, the estimator is robust against data distribution. That is, its consistency is guaranteed even if data are not randomly distributed across different workers. Numerical experiments are provided to corroborate our findings. A real example is also presented for illustration.
Journal: Journal of Business & Economic Statistics
Pages: 1691-1700
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1961789
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1961789
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1691-1700
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1933501_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Andrii Babii
Author-X-Name-First: Andrii
Author-X-Name-Last: Babii
Title: High-Dimensional Mixed-Frequency IV Regression
Abstract:
This article introduces a high-dimensional linear IV regression for data sampled at mixed frequencies. We show that the high-dimensional slope parameter of a high-frequency covariate can be identified and accurately estimated leveraging a low-frequency instrumental variable. The distinguishing feature of the model is that it allows handling high-dimensional datasets without imposing the approximate sparsity restrictions. We propose a Tikhonov-regularized estimator and study its large sample properties for time series data. The estimator has a closed-form expression that is easy to compute and demonstrates excellent performance in our Monte Carlo experiments. We also provide the confidence bands and incorporate the exogenous covariates via the double/debiased machine learning approach. In our empirical illustration, we estimate the real-time price elasticity of supply on the Australian electricity spot market. Our estimates suggest that the supply is relatively inelastic throughout the day.
Journal: Journal of Business & Economic Statistics
Pages: 1470-1483
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1933501
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1933501
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1470-1483
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1934478_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Thomas Stringham
Author-X-Name-First: Thomas
Author-X-Name-Last: Stringham
Title: Fast Bayesian Record Linkage With Record-Specific Disagreement Parameters
Abstract:
Researchers are often interested in linking individuals between two datasets that lack a common unique identifier. Matching procedures often struggle to match records with common names, birthplaces, or other field values. Computational feasibility is also a challenge, particularly when linking large datasets. We develop a Bayesian method for automated probabilistic record linkage and show it recovers more than 50% more true matches, holding accuracy constant, than comparable methods in a matching of military recruitment data to the 1900 U.S. Census for which expert-labeled matches are available. Our approach, which builds on a recent state-of-the-art Bayesian method, refines the modeling of comparison data, allowing disagreement probability parameters conditional on nonmatch status to be record-specific in the smaller of the two datasets. This flexibility significantly improves matching when many records share common field values. We show that our method is computationally feasible in practice, despite the added complexity, with an R/C++ implementation that achieves a significant improvement in speed over comparable recent methods. We also suggest a lightweight method for treatment of very common names and show how to estimate true positive rate and positive predictive value when true match status is unavailable.
Journal: Journal of Business & Economic Statistics
Pages: 1509-1522
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1934478
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1934478
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1509-1522
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1981914_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Erhao Xie
Author-X-Name-First: Erhao
Author-X-Name-Last: Xie
Title: Inference in Games Without Equilibrium Restriction: An Application to Restaurant Competition in Opening Hours
Abstract:
This article relaxes the Bayesian Nash equilibrium assumption in the estimation of discrete choice games with incomplete information. Instead of assuming unbiased/correct expectations, the model specifies a player’s belief about the behaviors of other players as an unrestricted unknown function. I then study the joint identification of belief and payoff functions in a game where players have different numbers of actions (e.g., 3 × 2 game). This asymmetry in action sets partially identifies the payoff function of the player with more actions. Moreover, if usual exclusion restrictions are satisfied, the payoff and belief functions are point identified up to a scale, and the restriction of equilibrium beliefs is testable. Finally, under a multiplicative separability condition on payoffs, the above identification results are extended to the player with fewer actions and to games with symmetric action sets. I apply this model and its identification results to study the store hours competition between McDonald’s and Kentucky Fried Chicken in China. The null hypothesis of unbiased beliefs is rejected. If researchers incorrectly impose the equilibrium assumption, then the estimated interactive effect would be biased downward by more than 50%.
Journal: Journal of Business & Economic Statistics
Pages: 1803-1816
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1981914
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1981914
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1803-1816
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2102022_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Lutz Kilian
Author-X-Name-First: Lutz
Author-X-Name-Last: Kilian
Title: Comment on Giacomini, Kitagawa, and Read’s “Narrative Restrictions and Proxies”
Journal: Journal of Business & Economic Statistics
Pages: 1429-1433
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2022.2102022
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2102022
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1429-1433
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1981915_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Wei Huang
Author-X-Name-First: Wei
Author-X-Name-Last: Huang
Author-Name: Oliver Linton
Author-X-Name-First: Oliver
Author-X-Name-Last: Linton
Author-Name: Zheng Zhang
Author-X-Name-First: Zheng
Author-X-Name-Last: Zhang
Title: A Unified Framework for Specification Tests of Continuous Treatment Effect Models
Abstract:
We propose a general framework for the specification testing of continuous treatment effect models. We assume a general residual function, which includes the average and quantile treatment effect models as special cases. The null models are identified under the unconfoundedness condition and contain a nonparametric weighting function. We propose a test statistic for the null model in which the weighting function is estimated by solving an expanding set of moment equations. We establish the asymptotic distributions of our test statistic under the null hypothesis and under fixed and local alternatives. The proposed test statistic is shown to be more efficient than that constructed from the true weighting function and can detect local alternatives that deviate from the null models at the rate of O(N^(−1/2)). A simulation method is provided to approximate the null distribution of the test statistic. Monte Carlo simulations show that our test exhibits satisfactory finite-sample performance, and an application shows its practical value.
Journal: Journal of Business & Economic Statistics
Pages: 1817-1830
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1981915
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1981915
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1817-1830
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1933991_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Zhanxiong Xu
Author-X-Name-First: Zhanxiong
Author-X-Name-Last: Xu
Author-Name: Zhibiao Zhao
Author-X-Name-First: Zhibiao
Author-X-Name-Last: Zhao
Title: Efficient Estimation for Models With Nonlinear Heteroscedasticity
Abstract:
We study efficient estimation for models with nonlinear heteroscedasticity. In two-step quantile regression for heteroscedastic models, motivated by several undesirable issues caused by the preliminary estimator, we propose an efficient estimator that weights information across quantiles subject to constraints. When the weights are optimally chosen under certain constraints, the new estimator can simultaneously eliminate the effect of the preliminary estimator and achieve good estimation efficiency. When compared to the Cramér-Rao lower bound, the relative efficiency loss of the new estimator has a conservative upper bound, regardless of the model design structure. The upper bound is close to zero for practical situations. In particular, the new estimator can asymptotically achieve the optimal Cramér-Rao lower bound if the noise has either a symmetric density or the asymmetric Laplace density. Monte Carlo studies show that the proposed method has substantial efficiency gains over existing ones. In an empirical application to GDP and inflation rate modeling, the proposed method has better prediction performance than existing methods.
Journal: Journal of Business & Economic Statistics
Pages: 1498-1508
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1933991
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1933991
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1498-1508
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1970573_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Rossella Calvi
Author-X-Name-First: Rossella
Author-X-Name-Last: Calvi
Author-Name: Arthur Lewbel
Author-X-Name-First: Arthur
Author-X-Name-Last: Lewbel
Author-Name: Denni Tommasi
Author-X-Name-First: Denni
Author-X-Name-Last: Tommasi
Title: LATE With Missing or Mismeasured Treatment
Abstract:
We provide a new estimator, MR-LATE, that consistently estimates local average treatment effects when treatment is missing, not at random, for some observations. If instead treatment is mismeasured for some observations, then MR-LATE usually has less bias than the standard LATE estimator. We discuss potential applications where an endogenous binary treatment may be unobserved or mismeasured. We apply MR-LATE to study the impact of women’s control over household resources on health outcomes in Indian families. This application illustrates the use of MR-LATE when treatment is estimated rather than observed. In these situations, treatment mismeasurement may arise from model misspecification and estimation errors.
Journal: Journal of Business & Economic Statistics
Pages: 1701-1717
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1970573
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1970573
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1701-1717
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2115496_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Raffaella Giacomini
Author-X-Name-First: Raffaella
Author-X-Name-Last: Giacomini
Author-Name: Toru Kitagawa
Author-X-Name-First: Toru
Author-X-Name-Last: Kitagawa
Author-Name: Matthew Read
Author-X-Name-First: Matthew
Author-X-Name-Last: Read
Title: Narrative Restrictions and Proxies
Abstract:
We compare two approaches to using information about the signs of structural shocks at specific dates within a structural vector autoregression (SVAR): imposing “narrative restrictions” (NR) on the shock signs in an otherwise set-identified SVAR; and casting the information about the shock signs as a discrete-valued “narrative proxy” (NP) to point-identify the impulse responses. The NP is likely to be “weak” given that the sign of the shock is typically known in a small number of periods, in which case the weak-proxy robust confidence intervals in Montiel Olea, Stock, and Watson are the natural approach to conducting inference. However, we show both theoretically and via Monte Carlo simulations that these confidence intervals have distorted coverage—which may be higher or lower than the nominal level—unless the sign of the shock is known in a large number of periods. Regarding the NR approach, we show that the prior-robust Bayesian credible intervals from Giacomini, Kitagawa, and Read deliver coverage exceeding the nominal level, but which converges toward the nominal level as the number of NR increases.
Journal: Journal of Business & Economic Statistics
Pages: 1415-1425
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2022.2115496
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2115496
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1415-1425
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1979564_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Xuan Liang
Author-X-Name-First: Xuan
Author-X-Name-Last: Liang
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Xiaodong Gong
Author-X-Name-First: Xiaodong
Author-X-Name-Last: Gong
Title: Semiparametric Spatial Autoregressive Panel Data Model with Fixed Effects and Time-Varying Coefficients
Abstract:
This article considers a semiparametric spatial autoregressive (SAR) panel data model with fixed effects and time-varying coefficients. The time-varying coefficients are allowed to follow unknown functions of time, while the other parameters are assumed to be unknown constants. We propose a local linear quasi-maximum likelihood estimation method to obtain consistent estimators for the SAR coefficient, the variance of the error term, and the nonparametric time-varying coefficients. The asymptotic properties of the proposed estimators are also established. Monte Carlo simulations are conducted to evaluate the finite sample performance of our proposed method. We apply the proposed model to study labor compensation in Chinese cities. The results show significant spatial dependence among cities and the impacts of capital, investment, and the economy’s structure on labor compensation change over time.
Journal: Journal of Business & Economic Statistics
Pages: 1784-1802
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1979564
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1979564
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1784-1802
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1952878_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Fengqing Zhang
Author-X-Name-First: Fengqing
Author-X-Name-Last: Zhang
Author-Name: Jiangtao Gou
Author-X-Name-First: Jiangtao
Author-X-Name-Last: Gou
Title: A Unified Framework for Estimation in Lognormal Models
Abstract:
Lognormal models have broad applications in various research areas such as economics, actuarial science, biology, environmental science, and psychology. In this article, we summarize all the existing estimators for lognormal models, which belong to 12 estimator families. As some estimators were only proposed for the independent and identically distributed setting, we further generalize these estimators to accommodate the general loglinear regression setting. Additionally, we propose 19 new estimators based on different optimization criteria. Most importantly, we present a unified framework for all the existing and proposed estimators. The application and comparison of the various estimators using a lognormal linear regression model are demonstrated by simulations and data from the Economic Research Service in the United States Department of Agriculture. A general recommendation for choosing an estimator in practice is discussed. An R package to implement 39 estimators is made available on CRAN.
Journal: Journal of Business & Economic Statistics
Pages: 1583-1595
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1952878
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1952878
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1583-1595
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1970575_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Baoluo Sun
Author-X-Name-First: Baoluo
Author-X-Name-Last: Sun
Author-Name: Zhiqiang Tan
Author-X-Name-First: Zhiqiang
Author-X-Name-Last: Tan
Title: High-Dimensional Model-Assisted Inference for Local Average Treatment Effects With Instrumental Variables
Abstract:
Consider the problem of estimating the local average treatment effect with an instrumental variable, where the instrument unconfoundedness holds after adjusting for a set of measured covariates. Several unknown functions of the covariates need to be estimated through regression models, such as instrument propensity score and treatment and outcome regression models. We develop a computationally tractable method in high-dimensional settings where the numbers of regression terms are close to or larger than the sample size. Our method exploits regularized calibrated estimation for estimating coefficients in these regression models, and then employs a doubly robust point estimator for the treatment parameter. We provide rigorous theoretical analysis to show that the resulting Wald confidence intervals are valid for the treatment parameter under suitable sparsity conditions if the instrument propensity score model is correctly specified, but the treatment and outcome regression models may be misspecified. In this sense, our confidence intervals are instrument propensity score model based, and treatment and outcome regression models assisted. For existing high-dimensional methods, valid confidence intervals are obtained for the treatment parameter only if all three models are correctly specified. We evaluate the proposed method via extensive simulation studies and an empirical application to estimate the returns to education. The methods are implemented in the R package RCAL.
Journal: Journal of Business & Economic Statistics
Pages: 1732-1744
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1970575
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1970575
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1732-1744
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1961786_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Markus Pelger
Author-X-Name-First: Markus
Author-X-Name-Last: Pelger
Author-Name: Ruoxuan Xiong
Author-X-Name-First: Ruoxuan
Author-X-Name-Last: Xiong
Title: Interpretable Sparse Proximate Factors for Large Dimensions
Abstract:
This article proposes sparse and easy-to-interpret proximate factors to approximate statistical latent factors. Latent factors in a large-dimensional factor model can be estimated by principal component analysis (PCA), but are usually hard to interpret. We obtain proximate factors that are easier to interpret by shrinking the PCA factor weights and setting them to zero except for the largest absolute ones. We show that proximate factors constructed with only 5%–10% of the data are usually sufficient to almost perfectly replicate the population and PCA factors without actually assuming a sparse structure in the weights or loadings. Using extreme value theory we explain why sparse proximate factors can be substitutes for non-sparse PCA factors. We derive analytical asymptotic bounds for the correlation of appropriately rotated proximate factors with the population factors. These bounds provide guidance on how to construct the proximate factors. In simulations and empirical analyses of financial portfolio and macroeconomic data, we illustrate that sparse proximate factors are close substitutes for PCA factors with average correlations of around 97.5%, while being interpretable.
Journal: Journal of Business & Economic Statistics
Pages: 1642-1664
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1961786
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1961786
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1642-1664
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1953508_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Sébastien Fries
Author-X-Name-First: Sébastien
Author-X-Name-Last: Fries
Title: Conditional Moments of Noncausal Alpha-Stable Processes and the Prediction of Bubble Crash Odds
Abstract:
Noncausal, or anticipative, heavy-tailed processes generate trajectories featuring locally explosive episodes akin to speculative bubbles in financial time series data. For (X_t) a two-sided infinite α-stable moving average (MA), conditional moments up to integer order four are shown to exist provided (X_t) is anticipative enough, despite the process featuring infinite marginal variance. Formulas of these moments at any forecast horizon under any admissible parameterization are provided. Under the assumption of errors with regularly varying tails, closed-form formulas of the predictive distribution during explosive bubble episodes are obtained and expressions of the ex ante crash odds at any horizon are available. It is found that the noncausal autoregression of order 1 (AR(1)) with AR coefficient ρ and tail exponent α generates bubbles whose survival distributions are geometric with parameter ρ^α. This property extends to bubbles with arbitrarily shaped collapse after the peak, provided the inflation phase is noncausal AR(1)-like. It appears that mixed causal–noncausal processes generate explosive episodes with dynamics à la Blanchard and Watson, which could reconcile rational bubbles with tail exponents greater than 1.
Journal: Journal of Business & Economic Statistics
Pages: 1596-1616
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1953508
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1953508
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1596-1616
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1990770_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Carsten Jentsch
Author-X-Name-First: Carsten
Author-X-Name-Last: Jentsch
Author-Name: Kurt G. Lunsford
Author-X-Name-First: Kurt G.
Author-X-Name-Last: Lunsford
Title: Asymptotically Valid Bootstrap Inference for Proxy SVARs
Abstract:
Proxy structural vector autoregressions identify structural shocks in vector autoregressions with external variables that are correlated with the structural shocks of interest but uncorrelated with all other structural shocks. We provide asymptotic theory for this identification approach under mild α-mixing conditions that cover a large class of uncorrelated, but possibly dependent innovation processes. We prove consistency of a residual-based moving block bootstrap (MBB) for inference on statistics such as impulse response functions and forecast error variance decompositions. The MBB serves as the basis for constructing confidence intervals when the proxy variables are strongly correlated with the structural shocks of interest. For the case of one proxy variable used to identify one structural shock, we show that the MBB can be used to construct confidence sets for normalized impulse responses that are valid regardless of proxy strength based on the inversion of the Anderson and Rubin statistic suggested by Montiel Olea, Stock, and Watson.
Journal: Journal of Business & Economic Statistics
Pages: 1876-1891
Issue: 4
Volume: 40
Year: 2022
Month: 10
X-DOI: 10.1080/07350015.2021.1990770
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1990770
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:40:y:2022:i:4:p:1876-1891
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2008404_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Sonja C. de New
Author-X-Name-First: Sonja C.
Author-X-Name-Last: de New
Author-Name: Stefanie Schurer
Author-X-Name-First: Stefanie
Author-X-Name-Last: Schurer
Title: Survey Response Behavior as a Proxy for Unobserved Ability: Theory and Evidence
Abstract:
An emerging literature is experimenting with using survey response behavior as a proxy for hard-to-measure abilities. We contribute to this literature by formalizing this idea and evaluating its benefits and risks. Using a standard and nationally representative survey from Australia, we demonstrate that the survey item-response rate (SIRR), a straightforward summary measure of response behavior, varies more with cognitive than with noncognitive ability. We evaluate whether SIRR is a useful proxy to reduce ability-related biases in a standard economic application. We show empirically that SIRR, although a weak and imperfect proxy, leads to omitted-variable bias reductions of up to 20%, and performs better than other proxy variables derived from paradata. Deriving the necessary and sufficient conditions for a valid proxy, we show that a strong proxy is neither a necessary nor a sufficient condition to reduce estimation biases. A critical consideration is the degree to which the proxy introduces a multicollinearity problem, a finding of general interest. We illustrate the theoretical derivations with an empirical application.
Journal: Journal of Business & Economic Statistics
Pages: 197-212
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2008404
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2008404
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:197-212
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2126479_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Kevin L. McKinney
Author-X-Name-First: Kevin L.
Author-X-Name-Last: McKinney
Author-Name: John M. Abowd
Author-X-Name-First: John M.
Author-X-Name-Last: Abowd
Title: Male Earnings Volatility in LEHD Before, During, and After the Great Recession
Abstract:
This article is part of a coordinated collection of papers on prime-age male earnings volatility. Each paper produces a similar set of statistics for the same reference population using a different primary data source. Our primary data source is the Census Bureau’s Longitudinal Employer-Household Dynamics (LEHD) infrastructure files. Using LEHD data from 1998 to 2016, we create a well-defined population frame to facilitate accurate estimation of temporal changes comparable to designed longitudinal samples of people. We show that earnings volatility, excluding increases during recessions, has declined over the analysis period, a finding robust to various sensitivity analyses.
Journal: Journal of Business & Economic Statistics
Pages: 33-39
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2022.2126479
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2126479
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:33-39
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2006668_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Rong Zhu
Author-X-Name-First: Rong
Author-X-Name-Last: Zhu
Author-Name: Xinyu Zhang
Author-X-Name-First: Xinyu
Author-X-Name-Last: Zhang
Author-Name: Alan T. K. Wan
Author-X-Name-First: Alan T. K.
Author-X-Name-Last: Wan
Author-Name: Guohua Zou
Author-X-Name-First: Guohua
Author-X-Name-Last: Zou
Title: Kernel Averaging Estimators
Abstract:
The issue of bandwidth selection is a fundamental model selection problem stemming from the uncertainty about the smoothness of the regression. In this article, we advocate a model averaging approach to circumvent the problem caused by this uncertainty. Our new approach involves averaging across a series of Nadaraya-Watson kernel estimators each under a different bandwidth, with weights for these different estimators chosen such that a least-squares cross-validation criterion is minimized. We prove that the resultant combined-kernel estimator achieves the smallest possible asymptotic aggregate squared error. The superiority of the new estimator over estimators based on widely accepted conventional bandwidth choices in finite samples is demonstrated in a simulation study and a real data example.
Journal: Journal of Business & Economic Statistics
Pages: 157-169
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2006668
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2006668
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:157-169
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2008406_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Fukang Zhu
Author-X-Name-First: Fukang
Author-X-Name-Last: Zhu
Author-Name: Mengya Liu
Author-X-Name-First: Mengya
Author-X-Name-Last: Liu
Author-Name: Shiqing Ling
Author-X-Name-First: Shiqing
Author-X-Name-Last: Ling
Author-Name: Zongwu Cai
Author-X-Name-First: Zongwu
Author-X-Name-Last: Cai
Title: Testing for Structural Change of Predictive Regression Model to Threshold Predictive Regression Model
Abstract:
This article investigates two test statistics for testing structural changes and thresholds in predictive regression models. The generalized likelihood ratio (GLR) test is proposed for the stationary predictor and the generalized F test is suggested for the persistent predictor. Under the null hypothesis of no structural change and threshold, it is shown that the GLR test statistic converges to a function of a centered Gaussian process, and the generalized F test statistic converges to a function of Brownian motions. A bootstrap method is proposed to obtain the critical values of the test statistics. Simulation studies and a real example are given to assess the performance of the proposed tests.
Journal: Journal of Business & Economic Statistics
Pages: 228-240
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2008406
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2008406
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:228-240
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2126845_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Michael D. Carr
Author-X-Name-First: Michael D.
Author-X-Name-Last: Carr
Author-Name: Robert A. Moffitt
Author-X-Name-First: Robert A.
Author-X-Name-Last: Moffitt
Author-Name: Emily E. Wiemers
Author-X-Name-First: Emily E.
Author-X-Name-Last: Wiemers
Title: Reconciling Trends in Male Earnings Volatility: Evidence from the SIPP Survey and Administrative Data
Abstract:
As part of a set of papers using the same methods and sample selection criteria to estimate trends in male earnings volatility across survey and administrative datasets, we conduct a new investigation of male earnings volatility using data from the Survey of Income and Program Participation (SIPP) survey and SIPP-linked administrative earnings data (SIPP GSF). We find that the level of volatility is higher in the administrative earnings histories in the SIPP GSF than in the SIPP survey but that the trends are similar. Between 1984 and 2012, volatility in the SIPP survey declines slightly while volatility in the SIPP GSF increases slightly. Including imputations due to unit nonresponse in the SIPP survey data increases both the level and upward trend in volatility and poses a challenge for estimating a consistent series in the SIPP survey data. Because the density of low earnings differs considerably across datasets, and volatility may vary across the earnings distribution, we also estimate trends in volatility where we hold the earnings distribution fixed across the two data sources. Differences in the underlying earnings distribution explain much of the difference in the level of and trends in volatility between the SIPP survey and SIPP GSF.
Journal: Journal of Business & Economic Statistics
Pages: 26-32
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2022.2126845
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2126845
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:26-32
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2001341_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Hung-pin Lai
Author-X-Name-First: Hung-pin
Author-X-Name-Last: Lai
Author-Name: Subal C. Kumbhakar
Author-X-Name-First: Subal C.
Author-X-Name-Last: Kumbhakar
Title: Panel Stochastic Frontier Model With Endogenous Inputs and Correlated Random Components
Abstract:
In this article, we consider a panel stochastic frontier model in which the composite error term ε_it has four components, that is, ε_it = τ_i − η_i + v_it − u_it, where η_i and u_it are persistent and transient inefficiency components, τ_i consists of the random firm effects and v_it is the random noise. Two distinguishing features of the proposed model are (i) the inputs are allowed to be correlated with one or more of the error components in the production function; (ii) time-invariant and time-varying components, that is, (τ_i − η_i) and (v_it − u_it), are allowed to be correlated. To keep the formulation general, we do not specify whether this correlation comes from the correlations between (i) η_i and u_it, (ii) τ_i and u_it, (iii) τ_i and v_it, (iv) η_i and v_it, or some other combination of them. Further, we also consider the case when the correlation in the composite error arises from the time dependence of ε_it. To estimate the model parameters and predict (in)efficiency, we propose a two-step procedure. In the first step, either the within or the first difference transformation that eliminates the time-invariant components is applied. We then use either the 2SLS or the GMM approach to obtain unbiased and consistent estimators of the parameters in the frontier function, except for the intercept. In the second step, the maximum simulated likelihood method is used to estimate the parameters associated with the distributions of τ_i and v_it, η_i and u_it, as well as the intercept. The copula approach is used in this step to model the dependence between the time-varying and time-invariant components. Formulas to predict transient and persistent (in)efficiency are also derived. Finally, results from both simulated and real data are provided.
Journal: Journal of Business & Economic Statistics
Pages: 80-96
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2001341
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2001341
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:80-96
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2002160_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Donghang Luo
Author-X-Name-First: Donghang
Author-X-Name-Last: Luo
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Author-Name: Huan Gong
Author-X-Name-First: Huan
Author-X-Name-Last: Gong
Author-Name: Dong Li
Author-X-Name-First: Dong
Author-X-Name-Last: Li
Title: Testing Error Distribution by Kernelized Stein Discrepancy in Multivariate Time Series Models
Abstract:
Knowing the error distribution is important in many multivariate time series applications. To alleviate the risk of error distribution misspecification, testing methodologies are needed to detect whether the chosen error distribution is correct. However, the majority of existing tests only deal with the multivariate normal distribution for some special multivariate time series models, and thus cannot be used for testing the often observed heavy-tailed and skewed error distributions in applications. In this article, we construct a new consistent test for general multivariate time series models, based on the kernelized Stein discrepancy. To account for the estimation uncertainty and unobserved initial values, a bootstrap method is provided to calculate the critical values. Our new test is easy to implement for a large scope of multivariate error distributions, and its importance is illustrated by simulated and real data. As an extension, we also show how to test for the error distribution in copula time series models.
Journal: Journal of Business & Economic Statistics
Pages: 111-125
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2002160
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2002160
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:111-125
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2002159_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Kosuke Imai
Author-X-Name-First: Kosuke
Author-X-Name-Last: Imai
Author-Name: Inbeom Lee
Author-X-Name-First: Inbeom
Author-X-Name-Last: Lee
Author-Name: Han Liu
Author-X-Name-First: Han
Author-X-Name-Last: Liu
Author-Name: Yang Ning
Author-X-Name-First: Yang
Author-X-Name-Last: Ning
Author-Name: Xiaolin Yang
Author-X-Name-First: Xiaolin
Author-X-Name-Last: Yang
Title: Optimal Covariate Balancing Conditions in Propensity Score Estimation
Abstract:
Inverse probability of treatment weighting (IPTW) is a popular method for estimating the average treatment effect (ATE). However, empirical studies show that the IPTW estimators can be sensitive to the misspecification of the propensity score model. To address this problem, researchers have proposed to estimate propensity score by directly optimizing the balance of pretreatment covariates. While these methods appear to empirically perform well, little is known about how the choice of balancing conditions affects their theoretical properties. To fill this gap, we first characterize the asymptotic bias and efficiency of the IPTW estimator based on the covariate balancing propensity score (CBPS) methodology under local model misspecification. Based on this analysis, we show how to optimally choose the covariate balancing functions and propose an optimal CBPS-based IPTW estimator. This estimator is doubly robust; it is consistent for the ATE if either the propensity score model or the outcome model is correct. In addition, the proposed estimator is locally semiparametric efficient when both models are correctly specified. To further relax the parametric assumptions, we extend our method by using a sieve estimation approach. We show that the resulting estimator is globally efficient under a set of much weaker assumptions and has a smaller asymptotic bias than the existing estimators. Finally, we evaluate the finite sample performance of the proposed estimators via simulation and empirical studies. An open-source software package is available for implementing the proposed methods.
Journal: Journal of Business & Economic Statistics
Pages: 97-110
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2002159
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2002159
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:97-110
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2011300_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Jad Beyhum
Author-X-Name-First: Jad
Author-X-Name-Last: Beyhum
Author-Name: Eric Gautier
Author-X-Name-First: Eric
Author-X-Name-Last: Gautier
Title: Factor and Factor Loading Augmented Estimators for Panel Regression With Possibly Nonstrong Factors
Abstract:
This article considers linear panel data models where the dependence of the regressors and the unobservables is modeled through a factor structure. The number of time periods and the sample size both go to infinity. Unlike in most existing methods for the estimation of this type of model, nonstrong factors are allowed and the number of factors can grow to infinity with the sample size. We study a class of two-step estimators of the regression coefficients. In the first step, factors and factor loadings are estimated. Then, the second step corresponds to the panel regression of the outcome on the regressors and the estimates of the factors and the factor loadings from the first step. The estimators enjoy double robustness. Different methods can be used in the first step while the second step is unique. We derive sufficient conditions on the first-step estimator and the data generating process under which the two-step estimator is asymptotically normal. Assumptions under which using an approach based on principal components analysis in the first step yields an asymptotically normal estimator are also given. The two-step procedure exhibits good finite sample properties in simulations. The approach is illustrated by an empirical application on fiscal policy.
Journal: Journal of Business & Economic Statistics
Pages: 270-281
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2011300
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2011300
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:270-281
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2004897_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Taras Bodnar
Author-X-Name-First: Taras
Author-X-Name-Last: Bodnar
Author-Name: Yarema Okhrin
Author-X-Name-First: Yarema
Author-X-Name-Last: Okhrin
Author-Name: Nestor Parolya
Author-X-Name-First: Nestor
Author-X-Name-Last: Parolya
Title: Optimal Shrinkage-Based Portfolio Selection in High Dimensions
Abstract:
In this article, we estimate the mean-variance portfolio in the high-dimensional case using the recent results from the theory of random matrices. We construct a linear shrinkage estimator which is distribution-free and optimal in the sense of maximizing, with probability 1, the asymptotic out-of-sample expected utility, that is, the mean-variance objective function for different values of the risk aversion coefficient, which in particular leads to the maximization of the out-of-sample expected utility and to the minimization of the out-of-sample variance. One of the main features of our estimator is the inclusion of the estimation risk related to the sample mean vector into the high-dimensional portfolio optimization. The asymptotic properties of the new estimator are investigated when the number of assets p and the sample size n tend simultaneously to infinity such that p/n→c∈(0,+∞). The results are obtained under weak assumptions imposed on the distribution of the asset returns, namely only the existence of moments of order 4+ε is required. Thereafter we perform numerical and empirical studies where the small- and large-sample behavior of the derived estimator is investigated. The suggested estimator shows significant improvements over existing approaches including the nonlinear shrinkage estimator and the three-fund portfolio rule, especially when the portfolio dimension is larger than the sample size. Moreover, it is robust to deviations from normality.
Journal: Journal of Business & Economic Statistics
Pages: 140-156
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2004897
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2004897
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:140-156
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2008407_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Lengyang Wang
Author-X-Name-First: Lengyang
Author-X-Name-Last: Wang
Author-Name: Efang Kong
Author-X-Name-First: Efang
Author-X-Name-Last: Kong
Author-Name: Yingcun Xia
Author-X-Name-First: Yingcun
Author-X-Name-Last: Xia
Title: Bootstrap Tests for High-Dimensional White-Noise
Abstract:
The testing of white-noise (WN) is an essential step in time series analysis. In a high dimensional set-up, most existing methods either are computationally infeasible, or suffer from highly distorted Type-I errors, or both. We propose an easy-to-implement bootstrap method for high-dimensional WN testing and prove its consistency for a variety of test statistics. Its power properties as well as extensions to WN tests based on fitted residuals are also considered. Simulation results show that compared to the existing methods, the new approach possesses much better power, while maintaining a proper control over the Type-I error. They also suggest that even in cases where our method lacks theoretical justification, it continues to outperform its competitors. The proposed method is applied to the analysis of the daily stock returns of the top 50 companies by market capitalization listed on the NYSE, and we find strong evidence that the common market factor is the main cause of cross-correlation between stocks.
Journal: Journal of Business & Economic Statistics
Pages: 241-254
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2008407
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2008407
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:241-254
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2006669_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Anna Bykhovskaya
Author-X-Name-First: Anna
Author-X-Name-Last: Bykhovskaya
Title: Time Series Approach to the Evolution of Networks: Prediction and Estimation
Abstract:
The article analyzes nonnegative multivariate time series which we interpret as weighted networks. We introduce a model where each coordinate of the time series represents a given edge across time. The number of time periods is treated as large compared to the size of the network. The model specifies the temporal evolution of a weighted network that combines classical autoregression with nonnegativity, a positive probability of vanishing, and peer effect interactions between weights assigned to edges in the process. The main results provide criteria for stationarity versus explosiveness of the network evolution process and techniques for estimation of the parameters of the model and for prediction of its future values. Natural applications arise in networks with a fixed number of agents, such as countries, large corporations, or small social communities. The article provides an empirical implementation of the approach to monthly trade data in the European Union. Overall, the results confirm that incorporating nonnegativity of dependent variables into the model matters and that incorporating peer effects leads to improved prediction power.
Journal: Journal of Business & Economic Statistics
Pages: 170-183
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2006669
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2006669
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:170-183
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1999821_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Christian Francq
Author-X-Name-First: Christian
Author-X-Name-Last: Francq
Author-Name: Genaro Sucarrat
Author-X-Name-First: Genaro
Author-X-Name-Last: Sucarrat
Title: Volatility Estimation When the Zero-Process is Nonstationary
Abstract:
Financial returns are frequently nonstationary due to the nonstationary distribution of zeros. In daily stock returns, for example, the nonstationarity can be due to an upwards trend in liquidity over time, which may lead to a downwards trend in the zero-probability. In intraday returns, the zero-probability may be periodic: It is lower in periods where the opening hours of the main financial centers overlap, and higher otherwise. A nonstationary zero-process invalidates standard estimators of volatility models, since they rely on the assumption that returns are strictly stationary. We propose a GARCH model that accommodates a nonstationary zero-process, derive a zero-adjusted QMLE for the parameters of the model, and prove its consistency and asymptotic normality under mild assumptions. The volatility specification in our model can contain higher order ARCH and GARCH terms, and past zero-indicators as covariates. Simulations verify the asymptotic properties in finite samples, and show that the standard estimator is biased. An empirical study of daily and intradaily returns illustrates our results, showing how a nonstationary zero-process induces time-varying parameters in the conditional variance representation, and how the distribution of zero returns can have a strong impact on volatility predictions.
Journal: Journal of Business & Economic Statistics
Pages: 53-66
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.1999821
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1999821
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:53-66
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2102023_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: James P. Ziliak
Author-X-Name-First: James P.
Author-X-Name-Last: Ziliak
Author-Name: Charles Hokayem
Author-X-Name-First: Charles
Author-X-Name-Last: Hokayem
Author-Name: Christopher R. Bollinger
Author-X-Name-First: Christopher R.
Author-X-Name-Last: Bollinger
Title: Trends in Earnings Volatility Using Linked Administrative and Survey Data
Abstract:
We document trends in earnings volatility separately by gender using unique linked survey data from the CPS ASEC and Social Security earnings records for the tax years spanning 1995–2015. The exact data link permits us to focus on differences in measured volatility from earnings nonresponse, survey attrition, and measurement between survey and administrative earnings data reports, while holding constant the sampling frame. Our results for both men and women suggest that the level and trend in volatility are similar in the survey and administrative data, showing substantial business-cycle sensitivity among men but no overall trend among continuous workers, while women demonstrate no change in earnings volatility over the business cycle but a declining trend. A substantive difference emerges with the inclusion of imputed earnings among survey nonrespondents, suggesting that users of the ASEC drop earnings nonrespondents.
Journal: Journal of Business & Economic Statistics
Pages: 12-19
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2022.2102023
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2102023
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:12-19
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2000418_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Rong Chen
Author-X-Name-First: Rong
Author-X-Name-Last: Chen
Author-Name: Yuanyuan Ji
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Ji
Author-Name: Guolin Jiang
Author-X-Name-First: Guolin
Author-X-Name-Last: Jiang
Author-Name: Han Xiao
Author-X-Name-First: Han
Author-X-Name-Last: Xiao
Author-Name: Ruoqing Xie
Author-X-Name-First: Ruoqing
Author-X-Name-Last: Xie
Author-Name: Pingfang Zhu
Author-X-Name-First: Pingfang
Author-X-Name-Last: Zhu
Title: Composite Index Construction with Expert Opinion
Abstract:
A composite index is a powerful and popularly used tool for providing an overall measure of a subject by summarizing a group of measurements (component indices) of different aspects of the subject. It is widely used in economics, finance, policy evaluation, performance ranking, and many other fields. Effective construction of a composite index has been studied extensively. The most widely used approach is to use a linear combination of the component indices, where the combination weights are determined by optimizing an objective function. To maximize the overall variation of the resulting composite index, the combination weights can be obtained through principal component analysis. In this article, we propose to incorporate expert opinions into the construction of the composite index. It is noted that expert opinion often provides useful information in assessing which of the component indices are more important for the overall measure of the subject. We consider the case that a group of experts have been consulted, each providing a set of importance scores for the component indices, along with a set of confidence scores that reflect the expert’s own confidence in his/her assessment. In addition, the constructor of the composite index can also provide an assessment of the expertise level of each expert. We use linear combinations to construct the composite index, where the combination weights are determined by maximizing the sum of the resulting composite index variation and the negative weighted sum of squares of deviation between the combination weights used and the experts’ scores. A data-driven approach is used to find the optimal balance between the two sources of information. Theoretical properties of the procedure are investigated. Simulation examples and an economic application to constructing a science and technology development index are carried out to illustrate the proposed method.
Journal: Journal of Business & Economic Statistics
Pages: 67-79
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2000418
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2000418
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:67-79
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2008405_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Yoshimasa Uematsu
Author-X-Name-First: Yoshimasa
Author-X-Name-Last: Uematsu
Author-Name: Takashi Yamagata
Author-X-Name-First: Takashi
Author-X-Name-Last: Yamagata
Title: Estimation of Sparsity-Induced Weak Factor Models
Abstract:
This article investigates estimation of sparsity-induced weak factor (sWF) models, with large cross-sectional and time-series dimensions (N and T, respectively). It assumes that the kth largest eigenvalue of a data covariance matrix grows proportionally to N^αk with unknown exponents 0<αk≤1 for k=1,…,r. Employing the same rotation as the principal components (PC) estimator, the growth rate αk is linked to the degree of sparsity of the kth factor loadings. This is much weaker than the typical assumption in recent factor models, in which all the r largest eigenvalues diverge proportionally to N. We apply the method of sparse orthogonal factor regression (SOFAR) by Uematsu et al. (2019) to estimate the sWF models and derive the estimation error bound. Importantly, our method also yields consistent estimation of αk. A finite sample experiment shows that the performance of the new estimator uniformly dominates that of the PC estimator. We apply our method to forecasting bond yields, and the results demonstrate that our method outperforms that based on the PC. We also analyze S&P500 firm security returns and find that the first factor is consistently near strong while the others are weak.
Journal: Journal of Business & Economic Statistics
Pages: 213-227
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2008405
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2008405
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:213-227
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2006670_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Lei Jiang
Author-X-Name-First: Lei
Author-X-Name-Last: Jiang
Author-Name: Weimin Liu
Author-X-Name-First: Weimin
Author-X-Name-Last: Liu
Author-Name: Liang Peng
Author-X-Name-First: Liang
Author-X-Name-Last: Peng
Title: Test for Market Timing Using Daily Fund Returns
Abstract:
When daily mutual fund returns are used to estimate market timing, econometric issues including heteroscedasticity, correlated errors, and heavy tails make the traditional least-squares estimates in the Treynor–Mazuy and Henriksson–Merton models biased and severely distort the size of the t-test. Using ARMA-GARCH models, a weighted least-squares estimate to ensure a normal limit, and a random weighted bootstrap method to quantify uncertainty, we find more funds with positive timing ability than the Newey–West t-test does. Empirical evidence indicates that funds with perverse timing ability have high fund turnovers and that funds trade off between timing and stock-picking skills.
Journal: Journal of Business & Economic Statistics
Pages: 184-196
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2006670
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2006670
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:184-196
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2102020_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Robert Moffitt
Author-X-Name-First: Robert
Author-X-Name-Last: Moffitt
Author-Name: John Abowd
Author-X-Name-First: John
Author-X-Name-Last: Abowd
Author-Name: Christopher Bollinger
Author-X-Name-First: Christopher
Author-X-Name-Last: Bollinger
Author-Name: Michael Carr
Author-X-Name-First: Michael
Author-X-Name-Last: Carr
Author-Name: Charles Hokayem
Author-X-Name-First: Charles
Author-X-Name-Last: Hokayem
Author-Name: Kevin McKinney
Author-X-Name-First: Kevin
Author-X-Name-Last: McKinney
Author-Name: Emily Wiemers
Author-X-Name-First: Emily
Author-X-Name-Last: Wiemers
Author-Name: Sisi Zhang
Author-X-Name-First: Sisi
Author-X-Name-Last: Zhang
Author-Name: James Ziliak
Author-X-Name-First: James
Author-X-Name-Last: Ziliak
Title: Reconciling Trends in U.S. Male Earnings Volatility: Results from Survey and Administrative Data
Abstract:
There is a large literature on earnings and income volatility in labor economics, household finance, and macroeconomics. One strand of that literature has studied whether individual earnings volatility has risen or fallen in the United States over the last several decades. There are strong disagreements in the empirical literature on this important question, with some studies showing upward trends, some showing downward trends, and some showing no trends. Some studies have suggested that the differences are the result of using flawed survey data instead of more accurate administrative data. This article summarizes the results of a project attempting to reconcile these findings with four different datasets and six different data series—three survey and three administrative data series, including two which match survey respondent data to their administrative data. Using common specifications, measures of volatility, and other treatments of the data, four of the six data series show a lack of any significant long-term trend in male earnings volatility over the last 20-to-30+ years when differences across the datasets are properly accounted for. A fifth data series (the PSID) shows a positive net trend, but one that is small in magnitude. A sixth, administrative, dataset, available only since 1998, shows no net trend over 1998–2011 and only a small decline thereafter. Many of the remaining differences across data series can be explained by differences in their cross-sectional distribution of earnings, particularly differences in the size of the lower tail. We conclude that the datasets we have analyzed, which include many of the most important available, show little evidence of any significant trend in male earnings volatility since the mid-1980s.
Journal: Journal of Business & Economic Statistics
Pages: 1-11
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2022.2102020
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2102020
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:1-11
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2008408_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: John H. J. Einmahl
Author-X-Name-First: John H. J.
Author-X-Name-Last: Einmahl
Author-Name: Yi He
Author-X-Name-First: Yi
Author-X-Name-Last: He
Title: Extreme Value Estimation for Heterogeneous Data
Abstract:
We develop a universal econometric formulation of empirical power laws possibly driven by parameter heterogeneity. Our approach extends classical extreme value theory to specifying the tail behavior of the empirical distribution of a general dataset with possibly heterogeneous marginal distributions. We discuss several model examples that satisfy our conditions and demonstrate in simulations how heterogeneity may generate empirical power laws. We observe a cross-sectional power law for the U.S. stock losses and show that this tail behavior is largely driven by the heterogeneous volatilities of the individual assets.
Journal: Journal of Business & Economic Statistics
Pages: 255-269
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2008408
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2008408
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:255-269
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_1996380_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Carlos Trucíos
Author-X-Name-First: Carlos
Author-X-Name-Last: Trucíos
Author-Name: João H. G. Mazzeu
Author-X-Name-First: João H. G.
Author-X-Name-Last: Mazzeu
Author-Name: Marc Hallin
Author-X-Name-First: Marc
Author-X-Name-Last: Hallin
Author-Name: Luiz K. Hotta
Author-X-Name-First: Luiz K.
Author-X-Name-Last: Hotta
Author-Name: Pedro L. Valls Pereira
Author-X-Name-First: Pedro L.
Author-X-Name-Last: Valls Pereira
Author-Name: Mauricio Zevallos
Author-X-Name-First: Mauricio
Author-X-Name-Last: Zevallos
Title: Forecasting Conditional Covariance Matrices in High-Dimensional Time Series: A General Dynamic Factor Approach
Abstract:
Based on a General Dynamic Factor Model with infinite-dimensional factor space and MGARCH volatility models, we develop new estimation and forecasting procedures for conditional covariance matrices in high-dimensional time series. The finite-sample performance of our approach is evaluated via Monte Carlo experiments and outperforms most alternative methods. This new approach is also used to construct one-step-ahead minimum variance portfolios for a high-dimensional panel of assets. The results are shown to match the results of recent proposals by Engle, Ledoit, and Wolf and achieve better out-of-sample portfolio performance than alternative procedures proposed in the literature.
Journal: Journal of Business & Economic Statistics
Pages: 40-52
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.1996380
File-URL: http://hdl.handle.net/10.1080/07350015.2021.1996380
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:40-52
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2102024_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Robert Moffitt
Author-X-Name-First: Robert
Author-X-Name-Last: Moffitt
Author-Name: Sisi Zhang
Author-X-Name-First: Sisi
Author-X-Name-Last: Zhang
Title: Estimating Trends in Male Earnings Volatility with the Panel Study of Income Dynamics
Abstract:
The Panel Study of Income Dynamics (PSID) has been the workhorse dataset used to estimate trends in U.S. earnings volatility at the individual level. We provide updated estimates for male earnings volatility using additional years of data. The analysis confirms prior work showing upward trends in the 1970s and 1980s, with a near doubling of the level of volatility over that period. The results also confirm prior work showing a resumption of an upward trend starting in the 2000s, but the new years of data available show volatility to be falling in recent years. By 2018, volatility had grown by a modest amount relative to the 1990s, with a growth rate only one-fifth the magnitude of that in the 1970s and 1980s. We show that neither attrition nor item nonresponse bias, nor other issues with the PSID, affect these conclusions.
Journal: Journal of Business & Economic Statistics
Pages: 20-25
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2022.2102024
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2102024
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:20-25
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2003203_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949
Author-Name: Yoshimasa Uematsu
Author-X-Name-First: Yoshimasa
Author-X-Name-Last: Uematsu
Author-Name: Takashi Yamagata
Author-X-Name-First: Takashi
Author-X-Name-Last: Yamagata
Title: Inference in Sparsity-Induced Weak Factor Models
Abstract:
In this article, we consider statistical inference for high-dimensional approximate factor models. We posit a weak factor structure, in which the factor loading matrix can be sparse and the signal eigenvalues may diverge more slowly than the cross-sectional dimension, N. We propose a novel inferential procedure to decide whether each component of the factor loadings is zero or not, and prove that this controls the false discovery rate (FDR) below a preassigned level, while the power tends to unity. This “factor selection” procedure is primarily based on a debiased version of the sparse orthogonal factor regression (SOFAR) estimator, but is also applicable to the principal component (PC) estimator. After the factor selection, the resparsified SOFAR and sparsified PC estimators are proposed and their consistency is established. Finite sample evidence supports the theoretical results. We apply our method to the FRED-MD dataset of macroeconomic variables and the monthly firm-level excess returns which constitute the S&P 500 index. The results give very strong statistical evidence of sparse factor loadings under the identification restrictions and exhibit clear associations of factors and categories of the variables. Furthermore, our method uncovers a very weak but statistically significant factor in the residuals of the Fama–French five-factor regression.
Journal: Journal of Business & Economic Statistics
Pages: 126-139
Issue: 1
Volume: 41
Year: 2022
Month: 12
X-DOI: 10.1080/07350015.2021.2003203
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2003203
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2022:i:1:p:126-139
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2013244_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Christian M. Hafner
Author-X-Name-First: Christian M.
Author-X-Name-Last: Hafner
Author-Name: Helmut Herwartz
Author-X-Name-First: Helmut
Author-X-Name-Last: Herwartz
Title: Dynamic Score-Driven Independent Component Analysis
Abstract:
A model for dynamic independent component analysis is introduced where the dynamics are driven by the score of the pseudo likelihood with respect to the rotation angle of model innovations. While conditional second moments are invariant with respect to rotations, higher conditional moments are not, which may have important implications for applications. The pseudo maximum likelihood estimator of the model is shown to be consistent and asymptotically normally distributed. A simulation study reports good finite sample properties of the estimator, including the case of a misspecification of the innovation density. In an application to a bivariate exchange rate series of the Euro and the British Pound against the U.S. Dollar, it is shown that the model-implied conditional portfolio kurtosis largely aligns with narratives on financial stress as a result of the global financial crisis in 2008, the European sovereign debt crisis (2010–2013) and early rumors (2017) that the United Kingdom would leave the European Union. These insights are consistent with a recently proposed model that associates portfolio kurtosis with a geopolitical risk factor.
Journal: Journal of Business & Economic Statistics
Pages: 298-308
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2013244
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2013244
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:298-308
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2032721_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Monica Billio
Author-X-Name-First: Monica
Author-X-Name-Last: Billio
Author-Name: Roberto Casarin
Author-X-Name-First: Roberto
Author-X-Name-Last: Casarin
Author-Name: Matteo Iacopini
Author-X-Name-First: Matteo
Author-X-Name-Last: Iacopini
Author-Name: Sylvia Kaufmann
Author-X-Name-First: Sylvia
Author-X-Name-Last: Kaufmann
Title: Bayesian Dynamic Tensor Regression
Abstract:
High- and multi-dimensional array data are becoming increasingly available. They admit a natural representation as tensors and call for appropriate statistical tools. We propose a new linear autoregressive tensor process (ART) for tensor-valued data that encompasses some well-known time series models as special cases. We study its properties and derive the associated impulse response function. We exploit the PARAFAC low-rank decomposition to provide a parsimonious parameterization and develop Bayesian inference allowing for shrinkage effects. We apply the ART model to time series of multilayer networks and study the propagation of shocks across nodes, layers and time.
Journal: Journal of Business & Economic Statistics
Pages: 429-439
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2032721
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2032721
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:429-439
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2023553_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Jingli Wang
Author-X-Name-First: Jingli
Author-X-Name-Last: Wang
Author-Name: Jialiang Li
Author-X-Name-First: Jialiang
Author-X-Name-Last: Li
Title: Multi-Threshold Structural Equation Model
Abstract:
In this article, we consider the instrumental variable estimation of causal regression parameters with multiple unknown structural changes across subpopulations. We propose a multiple change point detection method to determine the number of thresholds and estimate the threshold locations in the two-stage least squares procedure. After identifying the estimated threshold locations, we use the Wald method to estimate the parameters of interest, that is, the regression coefficients of the endogenous variable. Under some technical assumptions, we carefully establish the consistency of the estimated parameters and the asymptotic normality of the causal coefficients. Simulation studies are included to examine the performance of the proposed method. Finally, our method is illustrated via an application to Philippine farm household data, for which some new findings are discovered.
Journal: Journal of Business & Economic Statistics
Pages: 377-387
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2023553
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2023553
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:377-387
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2021922_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Laura Liu
Author-X-Name-First: Laura
Author-X-Name-Last: Liu
Title: Density Forecasts in Panel Data Models: A Semiparametric Bayesian Perspective
Abstract:
This article constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients as well as cross-sectional heteroscedasticity. The panel considered in this article features a large cross-sectional dimension N but short time series T. Due to the short T, traditional methods have difficulty in disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by combining information from the whole panel. Theoretically, I prove that in cross-sectional homoscedastic cases, both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast. Methodologically, I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. Monte Carlo simulations and an empirical application to young firm dynamics demonstrate improvements in density forecasts relative to alternative approaches.
Journal: Journal of Business & Economic Statistics
Pages: 349-363
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2021922
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2021922
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:349-363
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2013245_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Rong Jiang
Author-X-Name-First: Rong
Author-X-Name-Last: Jiang
Author-Name: Keming Yu
Author-X-Name-First: Keming
Author-X-Name-Last: Yu
Title: No-Crossing Single-Index Quantile Regression Curve Estimation
Abstract:
Single-index quantile regression (QR) models can avoid the curse of dimensionality in nonparametric problems by assuming that the response is only related to a single linear combination of the covariates. Like the standard parametric or nonparametric QR, whose estimated curves may cross, the single-index QR can also suffer from quantile crossing, leading to an invalid distribution for the response. This issue has attracted considerable attention in the literature in recent years. In this article, we consider single-index models, develop methods for QR that guarantee noncrossing quantile curves, and extend the methods and results to composite quantile regression. The asymptotic properties of the proposed estimators are derived and their advantages over existing methods are explained. Simulation studies and a real data application are conducted to illustrate the finite sample performance of the proposed methods.
Journal: Journal of Business & Economic Statistics
Pages: 309-320
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2013245
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2013245
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:309-320
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2028631_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Trong-Nghia Nguyen
Author-X-Name-First: Trong-Nghia
Author-X-Name-Last: Nguyen
Author-Name: Minh-Ngoc Tran
Author-X-Name-First: Minh-Ngoc
Author-X-Name-Last: Tran
Author-Name: David Gunawan
Author-X-Name-First: David
Author-X-Name-Last: Gunawan
Author-Name: Robert Kohn
Author-X-Name-First: Robert
Author-X-Name-Last: Kohn
Title: A Statistical Recurrent Stochastic Volatility Model for Stock Markets
Abstract:
The stochastic volatility (SV) model and its variants are widely used in the financial sector, while recurrent neural network (RNN) models are successfully used in many large-scale industrial applications of deep learning. We combine these two methods in a nontrivial way and propose a model, which we call the statistical recurrent stochastic volatility (SR-SV) model, to capture the dynamics of stochastic volatility. The proposed model captures complex volatility effects, such as nonlinearity and long-memory auto-dependence, that are overlooked by conventional SV models; it is statistically interpretable and has impressive out-of-sample forecast performance. These properties are carefully discussed and illustrated through extensive simulation studies and applications to five international stock index datasets: the German stock index DAX30, the Hong Kong stock index HSI50, the French market index CAC40, the U.S. stock market index SP500, and the Canadian market index TSX250. A user-friendly software package, together with the examples reported in the article, is available at https://github.com/vbayeslab.
Journal: Journal of Business & Economic Statistics
Pages: 414-428
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2028631
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2028631
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:414-428
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2035226_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Laura Reh
Author-X-Name-First: Laura
Author-X-Name-Last: Reh
Author-Name: Fabian Krüger
Author-X-Name-First: Fabian
Author-X-Name-Last: Krüger
Author-Name: Roman Liesenfeld
Author-X-Name-First: Roman
Author-X-Name-Last: Liesenfeld
Title: Predicting the Global Minimum Variance Portfolio
Abstract:
We propose a novel dynamic approach to forecast the weights of the global minimum variance portfolio (GMVP) for the conditional covariance matrix of asset returns. The GMVP weights are the population coefficients of a linear regression of a benchmark return on a vector of return differences. This representation enables us to derive a consistent loss function from which we can infer the GMVP weights without imposing any distributional assumptions on the returns. To capture time variation in the returns’ conditional covariance structure, we model the portfolio weights through a recursive least squares (RLS) scheme as well as through generalized autoregressive score (GAS)-type dynamics. Sparse parameterizations and targeting toward the weights of the equally weighted portfolio ensure scalability with respect to the number of assets. We apply these models to daily stock returns and find that they perform well compared to existing static and dynamic approaches in terms of both the expected loss and the unconditional portfolio variance.
Journal: Journal of Business & Economic Statistics
Pages: 440-452
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2035226
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2035226
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:440-452
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2050245_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yuanyuan Lin
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Lin
Author-Name: Jinhan Xie
Author-X-Name-First: Jinhan
Author-X-Name-Last: Xie
Author-Name: Ruijian Han
Author-X-Name-First: Ruijian
Author-X-Name-Last: Han
Author-Name: Niansheng Tang
Author-X-Name-First: Niansheng
Author-X-Name-Last: Tang
Title: Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design
Abstract:
Confidence sets are of key importance in high-dimensional statistical inference. Under the case–control design, a popular response-selective sampling scheme in medical studies and econometrics, we consider confidence intervals and statistical tests for single or low-dimensional parameters in the high-dimensional logistic regression model. The asymptotic properties of the resulting estimators are established under mild conditions. We also study statistical tests for more general and complex hypotheses on the high-dimensional parameters. The general testing procedures are proved to be asymptotically exact and to have satisfactory power. Numerical studies, including extensive simulations and a real data example, confirm that the proposed method performs well in practical settings.
Journal: Journal of Business & Economic Statistics
Pages: 624-635
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2050245
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2050245
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:624-635
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2027777_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Dixin Zhang
Author-X-Name-First: Dixin
Author-X-Name-Last: Zhang
Author-Name: Yulin Wang
Author-X-Name-First: Yulin
Author-X-Name-Last: Wang
Author-Name: Hua Liang
Author-X-Name-First: Hua
Author-X-Name-Last: Liang
Title: A Novel Estimation Method in Generalized Single Index Models
Abstract:
The single index and generalized single index models have been demonstrated to be a powerful tool for studying nonlinear interaction effects of variables in the low-dimensional case. In this article, we propose a new estimation approach for generalized single index models E(Y | θ⊤X) = ψ(g(θ⊤X)) with ψ(·) known but g(·) unknown. Specifically, we first obtain a consistent estimator of the regression function by using a local linear smoother, and then estimate the parametric components by treating ψ(ĝ(θ⊤Xi)) as our continuous response. The resulting estimators of θ are asymptotically normal. The proposed procedure can substantially overcome convergence problems encountered in generalized linear models with discrete response variables when sparseness or misspecification occurs. We conduct simulation experiments to evaluate the numerical performance of the proposed methods and analyze a financial dataset from a peer-to-peer lending platform in China as an illustration.
Journal: Journal of Business & Economic Statistics
Pages: 399-413
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2027777
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2027777
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:399-413
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2044336_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: Stuart McIntyre
Author-X-Name-First: Stuart
Author-X-Name-Last: McIntyre
Author-Name: James Mitchell
Author-X-Name-First: James
Author-X-Name-Last: Mitchell
Author-Name: Aubrey Poon
Author-X-Name-First: Aubrey
Author-X-Name-Last: Poon
Title: Reconciled Estimates of Monthly GDP in the United States
Abstract:
In the United States, income- and expenditure-side estimates of gross domestic product (GDPI and GDPE) measure “true” GDP with error and are available at a quarterly frequency. Methods exist for using these proxies to produce reconciled quarterly estimates of true GDP. In this paper, we extend these methods to provide reconciled historical true GDP estimates at a monthly frequency. We do this using a Bayesian mixed frequency vector autoregression (MF-VAR) involving GDPE, GDPI, unobserved true GDP, and monthly indicators of short-term economic activity. Our MF-VAR imposes restrictions that reflect a measurement-error perspective (i.e., the two GDP proxies are assumed to equal true GDP plus measurement error). Without further restrictions, our model is unidentified. We consider a range of restrictions that allow for point and set identification of true GDP and show that they lead to informative monthly GDP estimates. We illustrate how these new monthly data contribute to our historical understanding of business cycles and we provide a real-time application nowcasting monthly GDP over the pandemic recession.
Journal: Journal of Business & Economic Statistics
Pages: 563-577
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2044336
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2044336
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:563-577
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2039159_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Knut Are Aastveit
Author-X-Name-First: Knut Are
Author-X-Name-Last: Aastveit
Author-Name: Jamie L. Cross
Author-X-Name-First: Jamie L.
Author-X-Name-Last: Cross
Author-Name: Herman K. van Dijk
Author-X-Name-First: Herman K.
Author-X-Name-Last: van Dijk
Title: Quantifying Time-Varying Forecast Uncertainty and Risk for the Real Price of Oil
Abstract:
We propose a novel and numerically efficient approach to quantifying forecast uncertainty for the real price of oil using a combination of probabilistic individual model forecasts. Our combination method extends earlier approaches that have been applied to oil price forecasting by allowing for sequential updating of time-varying combination weights, estimation of time-varying forecast biases and facets of miscalibration of individual forecast densities, and time-varying inter-dependencies among models. To illustrate the usefulness of the method, we present an extensive set of empirical results about time-varying forecast uncertainty and risk for the real price of oil over the period 1974–2018. We show that the combination approach systematically outperforms commonly used benchmark models and combination approaches in terms of both point and density forecasts. The dynamic patterns of the estimated individual model weights are highly time-varying, reflecting large time variation in the relative performance of the various individual models. The combination approach has built-in diagnostic information measures about forecast inaccuracy and/or model set incompleteness, which provide clear signals of model incompleteness during three crisis periods. To highlight that our approach can also be useful for policy analysis, we present a basic analysis of profit-loss and hedging against price risk.
Journal: Journal of Business & Economic Statistics
Pages: 523-537
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2039159
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2039159
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:523-537
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2040520_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Ilze Kalnina
Author-X-Name-First: Ilze
Author-X-Name-Last: Kalnina
Title: Inference for Nonparametric High-Frequency Estimators with an Application to Time Variation in Betas
Abstract:
We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semidefinite. Our simulation study indicates that the subsampling method is more robust than the plug-in method based on the asymptotic expression for the variance. We use our subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas, and find that tick data captures more variation in betas than the data sampled at moderate frequencies such as every 5 or 20 min. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
Journal: Journal of Business & Economic Statistics
Pages: 538-549
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2040520
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2040520
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:538-549
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2035228_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yukitoshi Matsushita
Author-X-Name-First: Yukitoshi
Author-X-Name-Last: Matsushita
Author-Name: Taisuke Otsu
Author-X-Name-First: Taisuke
Author-X-Name-Last: Otsu
Author-Name: Keisuke Takahata
Author-X-Name-First: Keisuke
Author-X-Name-Last: Takahata
Title: Estimating Density Ratio of Marginals to Joint: Applications to Causal Inference
Abstract:
In various fields of data science, researchers often face the problem of estimating the ratio of two probability densities. In the context of causal inference in particular, the ratio of the product of marginals for a treatment variable and covariates to their joint density typically emerges in the process of constructing causal effect estimators. This article applies the general least-squares density ratio estimation methodology of Kanamori, Hido, and Sugiyama to this marginals-to-joint density ratio, and demonstrates its usefulness particularly for causal inference on continuous treatment effects and dose-response curves. The proposed method is illustrated by a simulation study and an empirical example investigating the treatment effect of political advertisements using U.S. presidential campaign data.
Journal: Journal of Business & Economic Statistics
Pages: 467-481
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2035228
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2035228
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:467-481
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2041424_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lung-Fei Lee
Author-X-Name-First: Lung-Fei
Author-X-Name-Last: Lee
Author-Name: Chao Yang
Author-X-Name-First: Chao
Author-X-Name-Last: Yang
Author-Name: Jihai Yu
Author-X-Name-First: Jihai
Author-X-Name-Last: Yu
Title: QML and Efficient GMM Estimation of Spatial Autoregressive Models with Dominant (Popular) Units
Abstract:
This article investigates QML and GMM estimation of spatial autoregressive (SAR) models in which the column sums of the spatial weights matrix might not be uniformly bounded. We develop a central limit theorem in which the number of columns with unbounded sums can be finite or infinite and the magnitude of their column sums can be O(n^δ) if δ < 1. Asymptotic distributions of QML and GMM estimators are derived under this setting, including the GMM estimators with the best linear and quadratic moments when the disturbances are not normally distributed. The Monte Carlo experiments show that these QML and GMM estimators have satisfactory finite sample performances, while cases with a column sum magnitude of O(n) might not have satisfactory performance. An empirical application to growth convergence, in which the trade flow network has the feature of dominant units, is provided. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 550-562
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2041424
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2041424
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:550-562
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2044337_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lijia Wang
Author-X-Name-First: Lijia
Author-X-Name-Last: Wang
Author-Name: Xu Han
Author-X-Name-First: Xu
Author-X-Name-Last: Han
Author-Name: Xin Tong
Author-X-Name-First: Xin
Author-X-Name-Last: Tong
Title: Skilled Mutual Fund Selection: False Discovery Control Under Dependence
Abstract:
Selecting skilled mutual funds through the multiple testing framework has received increasing attention from finance researchers and statisticians. The intercept α of the Carhart four-factor model is commonly used to measure the true performance of mutual funds, and funds with positive α’s are considered skilled. We observe that the standardized ordinary least-squares estimates of α’s across funds possess strong dependence and nonnormality structures, indicating that conventional multiple testing methods are inadequate for selecting the skilled funds. Starting from a decision-theoretic perspective, we propose an optimal multiple testing procedure to minimize a combination of the false discovery rate and the false nondiscovery rate. Our testing procedure is constructed based on the probability of each fund not being skilled conditional on the information across all of the funds in our study. To model the distribution of the information used for the testing procedure, we consider a mixture model under dependence and propose a new method called “approximate empirical Bayes” to fit the parameters. Empirical studies show that our selected skilled funds have superior long-term and short-term performance; for example, our selection strongly outperforms the S&P 500 index during the same period.
Journal: Journal of Business & Economic Statistics
Pages: 578-592
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2044337
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2044337
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:578-592
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2019046_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Bertille Antoine
Author-X-Name-First: Bertille
Author-X-Name-Last: Antoine
Author-Name: Lynda Khalaf
Author-X-Name-First: Lynda
Author-X-Name-Last: Khalaf
Author-Name: Maral Kichian
Author-X-Name-First: Maral
Author-X-Name-Last: Kichian
Author-Name: Zhenjiang Lin
Author-X-Name-First: Zhenjiang
Author-X-Name-Last: Lin
Title: Identification-Robust Inference With Simulation-Based Pseudo-Matching
Abstract:
We develop a general simulation-based inference procedure for partially specified models. Our procedure is based on matching auxiliary statistics to simulated counterparts, where nuisance parameters are calibrated without assuming either identification of the parameters of interest or a one-to-one binding function. The conditions underlying the asymptotic validity of our (pseudo-)simulators, in conjunction with appropriate bootstraps, are characterized beyond strict and exact calibration of the parameters of the simulator. Our procedure is illustrated through impulse-response (IR) matching in a simulation study of a stylized dynamic stochastic equilibrium model, and through two empirical applications, on the New Keynesian Phillips curve and on the Industrial Production index. In addition to the usual Wald-type statistics that combine structural or reduced-form IRs, we analyze local projection IRs through a factor-analytic measure of distance which eschews the need to define a weighting matrix.
Journal: Journal of Business & Economic Statistics
Pages: 321-338
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2019046
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2019046
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:321-338
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2046007_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lina Lu
Author-X-Name-First: Lina
Author-X-Name-Last: Lu
Title: Simultaneous Spatial Panel Data Models with Common Shocks
Abstract:
We consider a simultaneous spatial panel data model, jointly modeling three effects: simultaneous effects, spatial effects, and common shock effects. This joint modeling, together with the consideration of cross-sectional heteroscedasticity, results in a large number of incidental parameters. We propose two estimation approaches, a quasi-maximum likelihood method and an iterative generalized principal components method. We develop full inferential theories for the estimation approaches and study the tradeoff between the model specifications and their respective asymptotic properties. We further investigate the finite sample performance of both methods using Monte Carlo simulations. We find that both methods perform well and that the simulation results corroborate the inferential theories. Some extensions of the model are considered. Finally, we apply the model to analyze the relationship between trade and gross domestic product using panel data over time and across countries.
Journal: Journal of Business & Economic Statistics
Pages: 608-623
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2046007
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2046007
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:608-623
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2036612_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Christopher Walsh
Author-X-Name-First: Christopher
Author-X-Name-Last: Walsh
Author-Name: Michael Vogt
Author-X-Name-First: Michael
Author-X-Name-Last: Vogt
Title: Locally Stationary Multiplicative Volatility Modeling
Abstract:
In this article, we study a semiparametric multiplicative volatility model, which splits up into a nonparametric part and a parametric GARCH component. The nonparametric part is modeled as a product of a deterministic time trend component and of further components that depend on stochastic regressors. We propose a two-step procedure to estimate the model. To estimate the nonparametric components, we transform the model and apply a backfitting procedure. The GARCH parameters are estimated in a second step via quasi maximum likelihood. We show consistency and asymptotic normality of our estimators. Our results are obtained using mixing properties and local stationarity. We illustrate our method using financial data. Finally, a small simulation study illustrates a substantial bias in the GARCH parameter estimates when omitting the stochastic regressors.
Journal: Journal of Business & Economic Statistics
Pages: 497-508
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2036612
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2036612
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:497-508
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2035227_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Jilin Wu
Author-X-Name-First: Jilin
Author-X-Name-Last: Wu
Author-Name: Xiaojun Song
Author-X-Name-First: Xiaojun
Author-X-Name-Last: Song
Author-Name: Zhijie Xiao
Author-X-Name-First: Zhijie
Author-X-Name-Last: Xiao
Title: Testing for Trend Specifications in Panel Data Models
Abstract:
This article proposes a consistent nonparametric test for common trend specifications in panel data models with fixed effects. The test is general enough to allow for heteroscedasticity, cross-sectional and serial dependence in the error components, has an asymptotically normal distribution under the null hypothesis of correct trend specification, and is consistent against various alternatives that deviate from the null. In addition, the test has an asymptotic unit power against two classes of local alternatives approaching the null at different rates. We also propose a wild bootstrap procedure to better approximate the finite sample null distribution of the test statistic. Simulation results show that the proposed test implemented with bootstrap p-values performs reasonably well in finite samples. Finally, an empirical application to the analysis of the U.S. per capita personal income trend highlights the usefulness of our test in real datasets.
Journal: Journal of Business & Economic Statistics
Pages: 453-466
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2035227
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2035227
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:453-466
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2035229_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Matteo Iacopini
Author-X-Name-First: Matteo
Author-X-Name-Last: Iacopini
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Author-Name: Luca Rossini
Author-X-Name-First: Luca
Author-X-Name-Last: Rossini
Title: Proper Scoring Rules for Evaluating Density Forecasts with Asymmetric Loss Functions
Abstract:
This article proposes a novel asymmetric continuous probabilistic score (ACPS) for evaluating and comparing density forecasts. It generalizes the proposed score and defines a weighted version, which emphasizes regions of interest, such as the tails or the center of a variable’s range. The (weighted) ACPS extends the symmetric (weighted) CRPS by allowing for asymmetries in the preferences underlying the scoring rule. A test is used to statistically compare the predictive ability of different forecasts. The ACPS is of general use in any situation where the decision-maker has asymmetric preferences in the evaluation of the forecasts. In an artificial experiment, the implications of varying the level of asymmetry in the ACPS are illustrated. Then, the proposed score and test are applied to assess and compare density forecasts of macroeconomic relevant datasets (U.S. employment growth) and of commodity prices (oil and electricity prices) with particular focus on the recent COVID-19 crisis period.
Journal: Journal of Business & Economic Statistics
Pages: 482-496
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2035229
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2035229
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:482-496
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2021923_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yannick Hoga
Author-X-Name-First: Yannick
Author-X-Name-Last: Hoga
Author-Name: Timo Dimitriadis
Author-X-Name-First: Timo
Author-X-Name-Last: Dimitriadis
Title: On Testing Equal Conditional Predictive Ability Under Measurement Error
Abstract:
Loss functions are widely used to compare several competing forecasts. However, forecast comparisons are often based on mismeasured proxy variables for the true target. We introduce the concept of exact robustness to measurement error for loss functions and fully characterize this class of loss functions as the Bregman class. Hence, only conditional mean forecasts can be evaluated exactly robustly. For such exactly robust loss functions, forecast loss differences are on average unaffected by the use of proxy variables and, thus, inference on conditional predictive ability can be carried out as usual. Moreover, we show that more precise proxies give predictive ability tests higher power in discriminating between competing forecasts. Simulations illustrate the different behavior of exactly robust and nonrobust loss functions. An empirical application to U.S. GDP growth rates demonstrates the nonrobustness of quantile forecasts. It also shows that it is easier to discriminate between mean forecasts issued at different horizons if a better proxy for GDP growth is used.
Journal: Journal of Business & Economic Statistics
Pages: 364-376
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2021923
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2021923
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:364-376
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2025065_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Jingfei Zhang
Author-X-Name-First: Jingfei
Author-X-Name-Last: Zhang
Author-Name: Biao Cai
Author-X-Name-First: Biao
Author-X-Name-Last: Cai
Author-Name: Xuening Zhu
Author-X-Name-First: Xuening
Author-X-Name-Last: Zhu
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Author-Name: Ganggang Xu
Author-X-Name-First: Ganggang
Author-X-Name-Last: Xu
Author-Name: Yongtao Guan
Author-X-Name-First: Yongtao
Author-X-Name-Last: Guan
Title: Learning Human Activity Patterns Using Clustered Point Processes With Active and Inactive States
Abstract:
Modeling event patterns is a central task in a wide range of disciplines. In applications such as studying human activity patterns, events often arrive clustered with sporadic and long periods of inactivity. Such heterogeneity in event patterns poses challenges for existing point process models. In this article, we propose a new class of clustered point processes that alternate between active and inactive states. The proposed model is flexible, highly interpretable, and can provide useful insights into event patterns. A composite likelihood approach and a composite EM estimation procedure are developed for efficient and numerically stable parameter estimation. We study both the computational and statistical properties of the estimator including convergence, consistency, and asymptotic normality. The proposed method is applied to Donald Trump’s Twitter data to investigate if and how his behaviors evolved before, during, and after the presidential campaign. Additionally, we analyze large-scale social media data from Sina Weibo and identify interesting groups of users with distinct behaviors.
Journal: Journal of Business & Economic Statistics
Pages: 388-398
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2025065
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2025065
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:388-398
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2051520_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Mingjing Chen
Author-X-Name-First: Mingjing
Author-X-Name-Last: Chen
Title: Circularly Projected Common Factors for Grouped Data
Abstract:
To extract the common factors from grouped data, multilevel factor models have been put forward in the literature, and methods based on iterative principal component analysis (PCA) and canonical correlation analysis (CCA) have been proposed for estimation purposes. While iterative PCA requires iteration and is hence time-consuming, CCA can only deal with two groups of data. Herein, we develop two new methods to address these problems. We first extract the factors within groups and then project the estimated group factors into the space spanned by them in a circular manner. We propose two projection processes to estimate the common factors and determine their number. The new methods do not require iteration and are thus computationally efficient. They can estimate the common factors for multiple groups of data in a uniform way, regardless of whether the number of groups is large or small. They not only overcome the drawbacks of CCA but also nest the CCA method as a special case. Finally, we theoretically and numerically study the consistency properties of these new methods and apply them to studying international business cycles and the comovements of retail prices.
Journal: Journal of Business & Economic Statistics
Pages: 636-649
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2051520
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2051520
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:636-649
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2044829_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Kerem Tuzcuoglu
Author-X-Name-First: Kerem
Author-X-Name-Last: Tuzcuoglu
Title: Composite Likelihood Estimation of an Autoregressive Panel Ordered Probit Model with Random Effects
Abstract:
Modeling and estimating autocorrelated discrete data can be challenging. In this article, we use an autoregressive panel ordered probit model where the serial correlation in the discrete variable is driven by the autocorrelation in the latent variable. In such a nonlinear model, the presence of a lagged latent variable results in an intractable likelihood containing high-dimensional integrals. To tackle this problem, we use composite likelihoods that involve a much lower order of integration. However, parameter identification might potentially become problematic since the information employed in lower dimensional distributions may not be rich enough for identification. Therefore, we characterize types of composite likelihoods that are valid for this model and study conditions under which the parameters can be identified. Moreover, we provide consistency and asymptotic normality results for two different composite likelihood estimators and conduct Monte Carlo studies to assess their finite-sample performances. Finally, we apply our method to analyze corporate bond ratings. Supplementary materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 593-607
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2044829
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2044829
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:593-607
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2019047_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yuya Sasaki
Author-X-Name-First: Yuya
Author-X-Name-Last: Sasaki
Author-Name: Yulong Wang
Author-X-Name-First: Yulong
Author-X-Name-Last: Wang
Title: Diagnostic Testing of Finite Moment Conditions for the Consistency and Root-N Asymptotic Normality of the GMM and M Estimators
Abstract:
Common econometric analyses based on point estimates, standard errors, and confidence intervals presume the consistency and the root-n asymptotic normality of the GMM or M estimators. However, their key assumptions that data entail finite moments may not always be satisfied in applications. This article proposes a method of diagnostic testing for these key assumptions with applications to both simulated and real datasets.
Journal: Journal of Business & Economic Statistics
Pages: 339-348
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2019047
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2019047
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:339-348
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2036613_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Wenxin Huang
Author-X-Name-First: Wenxin
Author-X-Name-Last: Huang
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Yuan Zhuang
Author-X-Name-First: Yuan
Author-X-Name-Last: Zhuang
Title: Detecting Unobserved Heterogeneity in Efficient Prices via Classifier-Lasso
Abstract:
This article proposes a new measure of efficient price as a weighted average of bid and ask prices, where the weights are constructed from the bid-ask long-run relationships in a panel error-correction model (ECM). To allow for heterogeneity in the long-run relationships, we consider a panel ECM with latent group structures so that all the stocks within a group share the same long-run relationship and do not otherwise. We extend the Classifier-Lasso method to the ECM to simultaneously identify the individual’s group membership and estimate the group-specific long-run relationship. We establish the uniform classification consistency and good asymptotic properties of the post-Lasso estimators under some regularity conditions. Empirically, we find that more than 30% of the Standard & Poor’s (S&P) 1500 stocks have estimated efficient prices significantly deviating from the midpoint—a conventional measure of efficient price. Such deviations explored from our data-driven method can provide dynamic information on the extent and direction of informed trading activities.
Journal: Journal of Business & Economic Statistics
Pages: 509-522
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2022.2036613
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2036613
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:509-522
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2174123_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: The Editors
Title: Corrigendum: Small Sample Methods for Cluster-Robust Variance Estimation and Hypothesis Testing in Fixed Effects Models
Abstract:
Pustejovsky and Tipton considered how to implement cluster-robust variance estimators for fixed effects models estimated by weighted (or unweighted) least squares. Theorem 2 of the paper concerns a computational short cut for a certain cluster-robust variance estimator in models with cluster-specific fixed effects. It claimed that this short cut works for models estimated by generalized least squares, as long as the weights are taken to be the inverse of the working model. However, the theorem is incorrect. In this corrigendum, we review the CR2 variance estimator, describe the assertion of the theorem as originally stated, and demonstrate the error with a counter-example. We then provide a revised version of the theorem, which holds for the more limited set of models estimated by ordinary least squares.
Journal: Journal of Business & Economic Statistics
Pages: 650-652
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2023.2174123
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2174123
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:650-652
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2013243_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Zhewen Pan
Author-X-Name-First: Zhewen
Author-X-Name-Last: Pan
Author-Name: Jianhui Xie
Author-X-Name-First: Jianhui
Author-X-Name-Last: Xie
Title: l1-Penalized Pairwise Difference Estimation for a High-Dimensional Censored Regression Model
Abstract:
High-dimensional data are nowadays readily available and increasingly common in various fields of empirical economics. This article considers estimation and model selection for a high-dimensional censored linear regression model. We combine the l1-penalization method with the ideas of pairwise difference and propose an l1-penalized pairwise difference least absolute deviations (LAD) estimator. Estimation consistency and model selection consistency of the estimator are established under regularity conditions. We also propose a post-penalized estimator that applies unpenalized pairwise difference LAD estimation to the model selected by the l1-penalized estimator, and find that the post-penalized estimator generally can perform better than the l1-penalized estimator in terms of the rate of convergence. Novel fast algorithms for computing the proposed estimators are provided based on the alternating direction method of multipliers. A simulation study is conducted to show the great improvements of our algorithms in terms of computation time and to illustrate the satisfactory statistical performance of our estimators.
Journal: Journal of Business & Economic Statistics
Pages: 283-297
Issue: 2
Volume: 41
Year: 2023
Month: 4
X-DOI: 10.1080/07350015.2021.2013243
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2013243
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:2:p:283-297
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2097913_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Tomohiro Ando
Author-X-Name-First: Tomohiro
Author-X-Name-Last: Ando
Author-Name: Jushan Bai
Author-X-Name-First: Jushan
Author-X-Name-Last: Bai
Title: Large-Scale Generalized Linear Models for Longitudinal Data with Grouped Patterns of Unobserved Heterogeneity
Abstract:
This article provides methods for flexibly capturing unobservable heterogeneity from longitudinal data in the context of an exponential family of distributions. The group memberships of individual units are left unspecified, and their heterogeneity is influenced by group-specific unobservable factor structures. The model includes, as special cases, probit, logit, and Poisson regressions with interactive fixed effects along with unknown group membership. We discuss a computationally efficient estimation method and derive the corresponding asymptotic theory. Uniform consistency of the estimated group membership is established. To test heterogeneous regression coefficients within groups, we propose a Swamy-type test that allows for unobserved heterogeneity. We apply the proposed method to the study of market structure of the taxi industry in New York City. Our method unveils interesting and important insights from large-scale longitudinal data that consist of over 450 million data points.
Journal: Journal of Business & Economic Statistics
Pages: 983-994
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2097913
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2097913
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:983-994
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2077349_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Guowei Cui
Author-X-Name-First: Guowei
Author-X-Name-Last: Cui
Author-Name: Kazuhiko Hayakawa
Author-X-Name-First: Kazuhiko
Author-X-Name-Last: Hayakawa
Author-Name: Shuichi Nagata
Author-X-Name-First: Shuichi
Author-X-Name-Last: Nagata
Author-Name: Takashi Yamagata
Author-X-Name-First: Takashi
Author-X-Name-Last: Yamagata
Title: A Robust Approach to Heteroscedasticity, Error Serial Correlation and Slope Heterogeneity in Linear Models with Interactive Effects for Large Panel Data
Abstract:
In this article, we propose a robust approach against heteroscedasticity, error serial correlation and slope heterogeneity in linear models with interactive effects for large panel data. First, consistency and asymptotic normality of the pooled iterated principal component (IPC) estimator for random coefficient and homogeneous slope models are established. Then, we prove the asymptotic validity of the associated Wald test for slope parameter restrictions based on the panel heteroscedasticity and autocorrelation consistent (PHAC) variance matrix estimator for both random coefficient and homogeneous slope models, which does not require the Newey-West type time-series parameter truncation. These results asymptotically justify the use of the same pooled IPC estimator and the PHAC standard error for both homogeneous-slope and heterogeneous-slope models. This robust approach can significantly reduce the model selection uncertainty for applied researchers. In addition, we propose a Lagrange Multiplier (LM) test for correlated random coefficients with covariates. This test has nontrivial power against correlated random coefficients, but not for random coefficients and homogeneous slopes. The LM test is important because the IPC estimator becomes inconsistent with correlated random coefficients. The finite sample evidence and an empirical application support the reliability and the usefulness of our robust approach.
Journal: Journal of Business & Economic Statistics
Pages: 862-875
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2077349
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2077349
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:862-875
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2075000_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Carlos Velasco
Author-X-Name-First: Carlos
Author-X-Name-Last: Velasco
Title: Identification and Estimation of Structural VARMA Models Using Higher Order Dynamics
Abstract:
We use information from higher order moments to achieve identification of non-Gaussian structural vector autoregressive moving average (SVARMA) models, possibly nonfundamental or noncausal, through a frequency domain criterion based on higher order spectral densities. This allows us to identify the location of the roots of the determinantal lag matrix polynomials and to identify the rotation of the model errors leading to the structural shocks up to sign and permutation. We describe sufficient conditions for global and local parameter identification that rely on simple rank assumptions on the linear dynamics and on finite order serial and component independence conditions for the non-Gaussian structural innovations. We generalize previous univariate analysis to develop asymptotically normal and efficient estimates exploiting second and higher order cumulant dynamics given a particular structural shocks ordering without assumptions on causality or invertibility. Finite sample properties of estimates are explored with real and simulated data.
Journal: Journal of Business & Economic Statistics
Pages: 819-832
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2075000
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2075000
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:819-832
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2085726_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Shirong Xu
Author-X-Name-First: Shirong
Author-X-Name-Last: Xu
Author-Name: Yaoming Zhen
Author-X-Name-First: Yaoming
Author-X-Name-Last: Zhen
Author-Name: Junhui Wang
Author-X-Name-First: Junhui
Author-X-Name-Last: Wang
Title: Covariate-Assisted Community Detection in Multi-Layer Networks
Abstract:
Communities in multi-layer networks consist of nodes with similar connectivity patterns across all layers. This article proposes a tensor-based community detection method in multi-layer networks, which leverages available node-wise covariates to improve community detection accuracy. This is motivated by the network homophily principle, which suggests that nodes with similar covariates tend to reside in the same community. To take advantage of the node-wise covariates, the proposed method augments the multi-layer network with an additional layer constructed from the node similarity matrix with proper scaling, and conducts a Tucker decomposition of the augmented multi-layer network, yielding the spectral embedding vector of each node for community detection. Asymptotic consistencies of the proposed method in terms of community detection are established, which are also supported by numerical experiments on various synthetic networks and two real-life multi-layer networks.
Journal: Journal of Business & Economic Statistics
Pages: 915-926
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2085726
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2085726
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:915-926
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2099871_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Caio Almeida
Author-X-Name-First: Caio
Author-X-Name-Last: Almeida
Author-Name: Jianqing Fan
Author-X-Name-First: Jianqing
Author-X-Name-Last: Fan
Author-Name: Gustavo Freire
Author-X-Name-First: Gustavo
Author-X-Name-Last: Freire
Author-Name: Francesca Tang
Author-X-Name-First: Francesca
Author-X-Name-Last: Tang
Title: Can a Machine Correct Option Pricing Models?
Abstract:
We introduce a novel two-step approach to predict implied volatility surfaces. Given any fitted parametric option pricing model, we train a feedforward neural network on the model-implied pricing errors to correct for mispricing and boost performance. Using a large dataset of S&P 500 options, we test our nonparametric correction on several parametric models ranging from ad-hoc Black–Scholes to structural stochastic volatility models and demonstrate the boosted performance for each model. Out-of-sample prediction exercises in the cross-section and in the option panel show that machine-corrected models always outperform their respective original ones, often by a large extent. Our method is relatively indiscriminate, bringing pricing errors down to a similar magnitude regardless of the misspecification of the original parametric model. Even so, correcting models that are less misspecified usually leads to additional improvements in performance and also outperforms a neural network fitted directly to the implied volatility surface.
Journal: Journal of Business & Economic Statistics
Pages: 995-1009
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2099871
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2099871
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:995-1009
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2093883_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Joel L. Horowitz
Author-X-Name-First: Joel L.
Author-X-Name-Last: Horowitz
Author-Name: Sokbae Lee
Author-X-Name-First: Sokbae
Author-X-Name-Last: Lee
Title: Inference in a Class of Optimization Problems: Confidence Regions and Finite Sample Bounds on Errors in Coverage Probabilities
Abstract:
This article describes three methods for carrying out nonasymptotic inference on partially identified parameters that are solutions to a class of optimization problems. Applications in which the optimization problems arise include estimation under shape restrictions, estimation of models of discrete games, and estimation based on grouped data. The partially identified parameters are characterized by restrictions that involve the unknown population means of observed random variables in addition to structural parameters. Inference consists of finding confidence intervals for functions of the structural parameters. Our theory provides finite-sample lower bounds on the coverage probabilities of the confidence intervals under three sets of assumptions of increasing strength. With the moderate sample sizes found in most economics applications, the bounds become tighter as the assumptions strengthen. We discuss estimation of population parameters that the bounds depend on and contrast our methods with alternative methods for obtaining confidence intervals for partially identified parameters. The results of Monte Carlo experiments and empirical examples illustrate the usefulness of our method.
Journal: Journal of Business & Economic Statistics
Pages: 927-938
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2093883
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2093883
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:927-938
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2080684_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Harold D. Chiang
Author-X-Name-First: Harold D.
Author-X-Name-Last: Chiang
Author-Name: Bing Yang Tan
Author-X-Name-First: Bing Yang
Author-X-Name-Last: Tan
Title: Empirical Likelihood and Uniform Convergence Rates for Dyadic Kernel Density Estimation
Abstract:
This article studies the asymptotic properties of and alternative inference methods for kernel density estimation (KDE) for dyadic data. We first establish uniform convergence rates for dyadic KDE. Second, we propose a modified jackknife empirical likelihood procedure for inference. The proposed test statistic is asymptotically pivotal regardless of presence of dyadic clustering. The results are further extended to cover the practically relevant case of incomplete dyadic data. Simulations show that this modified jackknife empirical likelihood-based inference procedure delivers precise coverage probabilities even with modest sample sizes and with incomplete dyadic data. Finally, we illustrate the method by studying airport congestion in the United States.
Journal: Journal of Business & Economic Statistics
Pages: 906-914
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2080684
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2080684
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:906-914
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2058000_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Cem Çakmaklı
Author-X-Name-First: Cem
Author-X-Name-Last: Çakmaklı
Author-Name: Hamza Demircan
Author-X-Name-First: Hamza
Author-X-Name-Last: Demircan
Title: Using Survey Information for Improving the Density Nowcasting of U.S. GDP
Abstract:
We provide a methodology that efficiently combines the statistical models of nowcasting with the survey information for improving the (density) nowcasting of U.S. real GDP. Specifically, we use the conventional dynamic factor model together with stochastic volatility components as the baseline statistical model. We augment the model with information from the survey expectations by aligning the first and second moments of the predictive distribution implied by this baseline model with those extracted from the survey information at various horizons. Results indicate that survey information bears valuable information over the baseline model for nowcasting GDP. While the mean survey predictions deliver valuable information during extreme events such as the Covid-19 pandemic, the variation in the survey participants’ predictions, often used as a measure of “ambiguity,” conveys crucial information beyond the mean of those predictions for capturing the tail behavior of the GDP distribution.
Journal: Journal of Business & Economic Statistics
Pages: 667-682
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2058000
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2058000
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:667-682
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2076686_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yu-Ning Li
Author-X-Name-First: Yu-Ning
Author-X-Name-Last: Li
Author-Name: Degui Li
Author-X-Name-First: Degui
Author-X-Name-Last: Li
Author-Name: Piotr Fryzlewicz
Author-X-Name-First: Piotr
Author-X-Name-Last: Fryzlewicz
Title: Detection of Multiple Structural Breaks in Large Covariance Matrices
Abstract:
This article studies multiple structural breaks in large contemporaneous covariance matrices of high-dimensional time series satisfying an approximate factor model. The breaks in the second-order moment structure of the common components are due to sudden changes in either factor loadings or covariance of latent factors, requiring appropriate transformation of the factor models to facilitate estimation of the (transformed) common factors and factor loadings via the classical principal component analysis. With the estimated factors and idiosyncratic errors, an easy-to-implement CUSUM-based detection technique is introduced to consistently estimate the location and number of breaks and correctly identify whether they originate in the common or idiosyncratic error components. The algorithms of Wild Binary Segmentation for Covariance (WBS-Cov) and Wild Sparsified Binary Segmentation for Covariance (WSBS-Cov) are used to estimate breaks in the common and idiosyncratic error components, respectively. Under some technical conditions, the asymptotic properties of the proposed methodology are derived with near-optimal rates (up to a logarithmic factor) achieved for the estimated breaks. Monte Carlo simulation studies are conducted to examine the finite-sample performance of the developed method and its comparison with other existing approaches. We finally apply our method to study the contemporaneous covariance structure of daily returns of S&P 500 constituents and identify a few breaks including those occurring during the 2007–2008 financial crisis and the recent coronavirus (COVID-19) outbreak. An R package “BSCOV” is provided to implement the proposed algorithms.
Journal: Journal of Business & Economic Statistics
Pages: 846-861
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2076686
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2076686
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:846-861
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2063132_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Simon C. Smith
Author-X-Name-First: Simon C.
Author-X-Name-Last: Smith
Title: Structural Breaks in Grouped Heterogeneity
Abstract:
Generating accurate forecasts in the presence of structural breaks requires careful management of bias-variance tradeoffs. Forecasting panel data under breaks offers the possibility to reduce parameter estimation error without inducing any bias if there exists a regime-specific pattern of grouped heterogeneity. To this end, we develop a new Bayesian methodology to estimate and formally test panel regression models in the presence of multiple breaks and unobserved regime-specific grouped heterogeneity. In an empirical application to forecasting inflation rates across 20 U.S. industries, our method generates significantly more accurate forecasts relative to a range of popular methods.
Journal: Journal of Business & Economic Statistics
Pages: 752-764
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2063132
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2063132
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:752-764
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2097911_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Dongxiao Han
Author-X-Name-First: Dongxiao
Author-X-Name-Last: Han
Author-Name: Jian Huang
Author-X-Name-First: Jian
Author-X-Name-Last: Huang
Author-Name: Yuanyuan Lin
Author-X-Name-First: Yuanyuan
Author-X-Name-Last: Lin
Author-Name: Lei Liu
Author-X-Name-First: Lei
Author-X-Name-Last: Liu
Author-Name: Lianqiang Qu
Author-X-Name-First: Lianqiang
Author-X-Name-Last: Qu
Author-Name: Liuquan Sun
Author-X-Name-First: Liuquan
Author-X-Name-Last: Sun
Title: Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates
Abstract:
In this article, we propose a robust signal recovery method for high-dimensional linear log-contrast models, when the error distribution could be heavy-tailed and asymmetric. The proposed method is built on the Huber loss with ℓ1 penalization. We establish the ℓ1 and ℓ2 consistency for the resulting estimator. Under conditions analogous to the irrepresentability condition and the minimum signal strength condition, we prove that the signed support of the slope parameter vector can be recovered with high probability. The finite-sample behavior of the proposed method is evaluated through simulation studies, and applications to a GDP satisfaction dataset and an HIV microbiome dataset are provided.
Journal: Journal of Business & Economic Statistics
Pages: 957-967
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2097911
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2097911
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:957-967
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2074426_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Shuyuan Wu
Author-X-Name-First: Shuyuan
Author-X-Name-Last: Wu
Author-Name: Danyang Huang
Author-X-Name-First: Danyang
Author-X-Name-Last: Huang
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Network Gradient Descent Algorithm for Decentralized Federated Learning
Abstract:
We study a fully decentralized federated learning algorithm, which is a novel gradient descent algorithm executed on a communication-based network. For convenience, we refer to it as a network gradient descent (NGD) method. In the NGD method, only statistics (e.g., parameter estimates) need to be communicated, minimizing privacy risk. Meanwhile, different clients communicate with each other directly according to a carefully designed network structure without a central master. This greatly enhances the reliability of the entire algorithm. These nice properties inspire us to study the NGD method carefully, both theoretically and numerically. Theoretically, we start with a classical linear regression model. We find that both the learning rate and the network structure play significant roles in determining the NGD estimator’s statistical efficiency. The resulting NGD estimator can be statistically as efficient as the global estimator, if the learning rate is sufficiently small and the network structure is weakly balanced, even if the data are distributed heterogeneously. These interesting findings are then extended to general models and loss functions. Extensive numerical studies are presented to corroborate our theoretical findings. Classical deep learning models are also presented for illustration purposes.
Journal: Journal of Business & Economic Statistics
Pages: 806-818
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2074426
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2074426
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:806-818
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2061983_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Author-Name: Brian M. Weller
Author-X-Name-First: Brian M.
Author-X-Name-Last: Weller
Title: Testing for Unobserved Heterogeneity via k-means Clustering
Abstract:
Clustering methods such as k-means have found widespread use in a variety of applications. This article proposes a split-sample testing procedure to determine whether a null hypothesis of a single cluster, indicating homogeneity of the data, can be rejected in favor of multiple clusters. The test is simple to implement, valid under mild conditions (including nonnormality, and heterogeneity of the data in aspects beyond those in the clustering analysis), and applicable in a range of contexts (including clustering when the time series dimension is small, or clustering on parameters other than the mean). We verify that the test has good size control in finite samples, and we illustrate the test in applications to clustering vehicle manufacturers and U.S. mutual funds.
Journal: Journal of Business & Economic Statistics
Pages: 737-751
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2061983
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2061983
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:737-751
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2067545_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lars Spreng
Author-X-Name-First: Lars
Author-X-Name-Last: Spreng
Author-Name: Giovanni Urga
Author-X-Name-First: Giovanni
Author-X-Name-Last: Urga
Title: Combining p-values for Multivariate Predictive Ability Testing
Abstract:
In this article, we propose an intersection-union test for multivariate forecast accuracy based on the combination of a sequence of univariate tests. The testing framework evaluates a global null hypothesis of equal predictive ability using any number of univariate forecast accuracy tests under arbitrary dependence structures, without specifying the underlying multivariate distribution. An extensive Monte Carlo simulation exercise shows that our proposed test has very good size and power properties under several relevant scenarios, and performs well in both low- and high-dimensional settings. We illustrate the empirical validity of our testing procedure using a large dataset of 84 daily exchange rates running from January 1, 2011 to April 1, 2021. We show that our proposed test addresses inconclusive results that often arise in practice.
Journal: Journal of Business & Economic Statistics
Pages: 765-777
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2067545
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2067545
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:765-777
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2067546_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yousef Kaddoura
Author-X-Name-First: Yousef
Author-X-Name-Last: Kaddoura
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: Estimation of Panel Data Models with Random Interactive Effects and Multiple Structural Breaks when T is Fixed
Abstract:
In this article, we propose a new estimator of panel data models with random interactive effects and multiple structural breaks that is suitable when the number of time periods, T, is fixed and only the number of cross-sectional units, N, is large. This is done by viewing the determination of the breaks as a shrinkage problem, and estimating the regression coefficients, the number of breaks, and their locations by applying a version of the Lasso approach. We show that with probability approaching one the approach can correctly determine the number of breaks and the dates of these breaks, and that the estimator of the regime-specific regression coefficients is consistent and asymptotically normal. We also provide Monte Carlo results suggesting that the approach performs very well in small samples, and empirical results suggesting that while the coefficients of the controls are breaking, the coefficients of the main deterrence regressors in a model of crime are not.
Journal: Journal of Business & Economic Statistics
Pages: 778-790
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2067546
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2067546
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:778-790
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2061495_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Takuya Ishihara
Author-X-Name-First: Takuya
Author-X-Name-Last: Ishihara
Title: Panel Data Quantile Regression for Treatment Effect Models
Abstract:
In this study, we develop a novel estimation method for quantile treatment effects (QTE) under rank invariance and rank stationarity assumptions. Ishihara (2020) explores identification of the nonseparable panel data model under these assumptions and proposes a parametric estimation based on the minimum distance method. However, when the dimensionality of the covariates is large, the minimum distance estimation using this process is computationally demanding. To overcome this problem, we propose a two-step estimation method based on the quantile regression and minimum distance methods. We then show the uniform asymptotic properties of our estimator and the validity of the nonparametric bootstrap. The Monte Carlo studies indicate that our estimator performs well in finite samples. Finally, we present two empirical illustrations, estimating the distributional effects of insurance provision on household production and of TV watching on child cognitive development.
Journal: Journal of Business & Economic Statistics
Pages: 720-736
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2061495
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2061495
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:720-736
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2097910_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Xiye Yang
Author-X-Name-First: Xiye
Author-X-Name-Last: Yang
Title: Estimation of Leverage Effect: Kernel Function and Efficiency
Abstract:
This article proposes more efficient estimators for the leverage effect than the existing ones. The idea is to allow for nonuniform kernel functions in the spot volatility estimates or the aggregated returns. This finding highlights a critical difference between the leverage effect and integrated volatility functionals, where the uniform kernel is optimal. Another distinction between these two cases is that the overlapping estimators of the leverage effect are more efficient than the nonoverlapping ones. We offer two perspectives to explain these differences: one is based on the “effective kernel” and the other on the correlation structure of the nonoverlapping estimators. The simulation study shows that the proposed estimator with a nonuniform kernel substantially increases the estimation efficiency and testing power relative to the existing ones.
Journal: Journal of Business & Economic Statistics
Pages: 939-956
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2097910
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2097910
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:939-956
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2078332_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Anthony C. Davison
Author-X-Name-First: Anthony C.
Author-X-Name-Last: Davison
Author-Name: Simone A. Padoan
Author-X-Name-First: Simone A.
Author-X-Name-Last: Padoan
Author-Name: Gilles Stupfler
Author-X-Name-First: Gilles
Author-X-Name-Last: Stupfler
Title: Tail Risk Inference via Expectiles in Heavy-Tailed Time Series
Abstract:
Expectiles define the only law-invariant, coherent and elicitable risk measure apart from the expectation. The popularity of expectile-based risk measures is steadily growing and their properties have been studied for independent data, but further results are needed to establish that extreme expectiles can be applied with the kind of dependent time series models relevant to finance. In this article we provide a basis for inference on extreme expectiles and expectile-based marginal expected shortfall in a general β-mixing context that encompasses ARMA and GARCH models with heavy-tailed innovations. Our methods allow the estimation of marginal (pertaining to the stationary distribution) and dynamic (conditional on the past) extreme expectile-based risk measures. Simulations and applications to financial returns show that the new estimators and confidence intervals greatly improve on existing ones when the data are dependent.
Journal: Journal of Business & Economic Statistics
Pages: 876-889
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2078332
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2078332
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:876-889
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2053690_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yiannis Karavias
Author-X-Name-First: Yiannis
Author-X-Name-Last: Karavias
Author-Name: Paresh Kumar Narayan
Author-X-Name-First: Paresh Kumar
Author-X-Name-Last: Narayan
Author-Name: Joakim Westerlund
Author-X-Name-First: Joakim
Author-X-Name-Last: Westerlund
Title: Structural Breaks in Interactive Effects Panels and the Stock Market Reaction to COVID-19
Abstract:
Dealing with structural breaks is an essential step in most empirical economic research. This is particularly true in panel data comprising many cross-sectional units, which are all affected by major events. The COVID-19 pandemic has affected most sectors of the global economy; however, its impact on stock markets is still unclear. Most markets seem to have recovered while the pandemic is ongoing, suggesting that the relationship between stock returns and COVID-19 has been subject to a structural break. It is therefore important to know if a structural break has occurred and, if it has, to infer the date of the break. Motivated by this last observation, the present article develops a new break detection toolbox that is applicable to different sized panels, easy to implement and robust to general forms of unobserved heterogeneity. The toolbox, which is the first of its kind, includes a structural change test, a break date estimator, and a break date confidence interval. Application to a panel covering 61 countries from January 3 to September 25, 2020, leads to the detection of a structural break that is dated to the first week of April. The effect of COVID-19 is negative before the break and zero thereafter, implying that while markets did react, the reaction was short-lived. A possible explanation is the quantitative easing programs announced by central banks all over the world in the second half of March.
Journal: Journal of Business & Economic Statistics
Pages: 653-666
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2053690
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2053690
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:653-666
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2097912_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Dongho Song
Author-X-Name-First: Dongho
Author-X-Name-Last: Song
Author-Name: Jenny Tang
Author-X-Name-First: Jenny
Author-X-Name-Last: Tang
Title: News-Driven Uncertainty Fluctuations
Abstract:
We investigate the channels through which news influences the subjective beliefs of economic agents, with a particular focus on their subjective uncertainty. The main insight of the article is that news that is more at odds with agents’ prior beliefs generates an increase in uncertainty; news that is more consistent with their prior beliefs generates a decrease in uncertainty. We illustrate this insight theoretically and then estimate the model empirically using data on U.S. output and professional forecasts to provide novel measures of news shocks and uncertainty. We then estimate impulse responses from the identified shocks to show that news shocks can affect macroeconomic variables in ways that resemble the effects of uncertainty shocks. Our results suggest that controlling for news can potentially diminish the estimated effects of uncertainty shocks on real variables, particularly at longer horizons.
Journal: Journal of Business & Economic Statistics
Pages: 968-982
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2097912
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2097912
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:968-982
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2075370_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Gustav Alfelt
Author-X-Name-First: Gustav
Author-X-Name-Last: Alfelt
Author-Name: Taras Bodnar
Author-X-Name-First: Taras
Author-X-Name-Last: Bodnar
Author-Name: Farrukh Javed
Author-X-Name-First: Farrukh
Author-X-Name-Last: Javed
Author-Name: Joanna Tyrcha
Author-X-Name-First: Joanna
Author-X-Name-Last: Tyrcha
Title: Singular Conditional Autoregressive Wishart Model for Realized Covariance Matrices
Abstract:
Realized covariance matrices are often constructed under the assumption that the richness of intra-day return data exceeds the portfolio size, resulting in nonsingular matrix measures. However, when, for example, the portfolio size is large, assets suffer from illiquidity issues, or market microstructure noise deters sampling on very high frequencies, this relation is not guaranteed. Under these common conditions, realized covariance matrices may be singular by construction. Motivated by this situation, we introduce the Singular Conditional Autoregressive Wishart (SCAW) model to capture the temporal dynamics of time series of singular realized covariance matrices, extending the rich literature on econometric Wishart time series models to the singular case. The model is further developed with covariance targeting adapted to matrices and a sector-wise BEKK-specification, allowing excellent scalability to large and extremely large portfolio sizes. Finally, the model is estimated on a 20-year-long time series containing 50 stocks and on a 10-year-long time series containing 300 stocks, and evaluated using out-of-sample forecast accuracy. It outperforms the benchmark models with high statistical significance, and the parsimonious specifications perform better than the baseline SCAW model while using considerably fewer parameters.
Journal: Journal of Business & Economic Statistics
Pages: 833-845
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2075370
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2075370
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:833-845
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2060988_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Luca Barbaglia
Author-X-Name-First: Luca
Author-X-Name-Last: Barbaglia
Author-Name: Sergio Consoli
Author-X-Name-First: Sergio
Author-X-Name-Last: Consoli
Author-Name: Sebastiano Manzan
Author-X-Name-First: Sebastiano
Author-X-Name-Last: Manzan
Title: Forecasting with Economic News
Abstract:
The goal of this article is to evaluate the informational content of sentiment extracted from news articles about the state of the economy. We propose a fine-grained aspect-based sentiment analysis that has two main characteristics: (a) we consider only the text in the article that is semantically dependent on a term of interest (aspect-based) and (b) we assign a sentiment score to each word based on a dictionary that we develop for applications in economics and finance (fine-grained). Our dataset includes six large U.S. newspapers, for a total of over 6.6 million articles and 4.2 billion words. Our findings suggest that several measures of economic sentiment closely track business cycle fluctuations and that they are relevant predictors for four major macroeconomic variables. We find that there are significant improvements in forecasting when sentiment is considered along with macroeconomic factors. In addition, we also find that sentiment matters in explaining the tails of the probability distribution across several macroeconomic variables.
Journal: Journal of Business & Economic Statistics
Pages: 708-719
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2060988
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2060988
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:708-719
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2058949_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Sílvia Gonçalves
Author-X-Name-First: Sílvia
Author-X-Name-Last: Gonçalves
Author-Name: Ulrich Hounyo
Author-X-Name-First: Ulrich
Author-X-Name-Last: Hounyo
Author-Name: Andrew J. Patton
Author-X-Name-First: Andrew J.
Author-X-Name-Last: Patton
Author-Name: Kevin Sheppard
Author-X-Name-First: Kevin
Author-X-Name-Last: Sheppard
Title: Bootstrapping Two-Stage Quasi-Maximum Likelihood Estimators of Time Series Models
Abstract:
This article provides results on the validity of bootstrap inference methods for two-stage quasi-maximum likelihood estimation involving time series data, such as those used for multivariate volatility models or copula-based models. Existing approaches require the researcher to compute and combine many first- and second-order derivatives, which can be difficult to do and is susceptible to error. Bootstrap methods are simpler to apply, allowing the substitution of capital (CPU cycles) for labor (keeping track of derivatives). We show the consistency of the bootstrap distribution and consistency of bootstrap variance estimators, thereby justifying the use of bootstrap percentile intervals and bootstrap standard errors.
Journal: Journal of Business & Economic Statistics
Pages: 683-694
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2058949
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2058949
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:683-694
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2080683_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Joshua C. C. Chan
Author-X-Name-First: Joshua C. C.
Author-X-Name-Last: Chan
Title: Large Hybrid Time-Varying Parameter VARs
Abstract:
Time-varying parameter VARs with stochastic volatility are routinely used for structural analysis and forecasting in settings involving a few endogenous variables. Applying these models to high-dimensional datasets has proved to be challenging due to intensive computations and over-parameterization concerns. We develop an efficient Bayesian sparsification method for a class of models we call hybrid TVP-VARs—VARs with time-varying parameters in some equations but constant coefficients in others. Specifically, for each equation, the new method automatically decides whether the VAR coefficients and contemporaneous relations among variables are constant or time-varying. Using U.S. datasets of various dimensions, we find evidence that the parameters in some, but not all, equations are time varying. The large hybrid TVP-VAR also forecasts better than many standard benchmarks.
Journal: Journal of Business & Economic Statistics
Pages: 890-905
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2080683
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2080683
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:890-905
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2071903_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Jinyuan Chang
Author-X-Name-First: Jinyuan
Author-X-Name-Last: Chang
Author-Name: Zhentao Shi
Author-X-Name-First: Zhentao
Author-X-Name-Last: Shi
Author-Name: Jia Zhang
Author-X-Name-First: Jia
Author-X-Name-Last: Zhang
Title: Culling the Herd of Moments with Penalized Empirical Likelihood
Abstract:
Models defined by moment conditions are at the center of structural econometric estimation, but economic theory is mostly agnostic about moment selection. While a large pool of valid moments can potentially improve estimation efficiency, in the meantime a few invalid ones may undermine consistency. This article investigates the empirical likelihood estimation of these moment-defined models in high-dimensional settings. We propose a penalized empirical likelihood (PEL) estimation and establish its oracle property with consistent detection of invalid moments. The PEL estimator is asymptotically normally distributed, and a projected PEL procedure further eliminates its asymptotic bias and provides more accurate normal approximation to the finite sample behavior. Simulation exercises demonstrate excellent numerical performance of these methods in estimation and inference.
Journal: Journal of Business & Economic Statistics
Pages: 791-805
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2071903
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2071903
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:791-805
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2060987_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Nail Kashaev
Author-X-Name-First: Nail
Author-X-Name-Last: Kashaev
Title: Identification and Estimation of Multinomial Choice Models with Latent Special Covariates
Abstract:
Identification of multinomial choice models is often established by using special covariates that have full support. This article shows how these identification results can be extended to a large class of multinomial choice models when all covariates are bounded. I also provide a new √n-consistent asymptotically normal estimator of the finite-dimensional parameters of the model.
Journal: Journal of Business & Economic Statistics
Pages: 695-707
Issue: 3
Volume: 41
Year: 2023
Month: 7
X-DOI: 10.1080/07350015.2022.2060987
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2060987
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:3:p:695-707
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_643132_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: José Rangel
Author-X-Name-First: José
Author-X-Name-Last: Rangel
Author-Name: Robert Engle
Author-X-Name-First: Robert
Author-X-Name-Last: Engle
Title: The Factor–Spline–GARCH Model for High and Low Frequency Correlations
Abstract: We propose a new approach to model high and low frequency components of equity correlations. Our framework combines a factor asset pricing structure with other specifications capturing dynamic properties of volatilities and covariances between a single common factor and idiosyncratic returns. High frequency correlations mean revert to slowly varying functions that characterize long-term correlation patterns. We associate this long-term behavior with low frequency economic variables, including determinants of market and idiosyncratic volatilities. Flexibility in the time-varying level of mean reversion improves both the empirical fit of equity correlations in the United States and correlation forecasts at long horizons.
Journal: Journal of Business & Economic Statistics
Pages: 109-124
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.643132
File-URL: http://hdl.handle.net/10.1080/07350015.2012.643132
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:109-124
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_646575_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Charles Bellemare
Author-X-Name-First: Charles
Author-X-Name-Last: Bellemare
Author-Name: Luc Bissonnette
Author-X-Name-First: Luc
Author-X-Name-Last: Bissonnette
Author-Name: Sabine Kröger
Author-X-Name-First: Sabine
Author-X-Name-Last: Kröger
Title: Flexible Approximation of Subjective Expectations Using Probability Questions
Abstract: We propose a flexible method to approximate the subjective cumulative distribution function of an economic agent about the future realization of a continuous random variable. The method can closely approximate a wide variety of distributions while maintaining weak assumptions on the shape of distribution functions. We show how moments and quantiles of general functions of the random variable can be computed analytically and/or numerically. We illustrate the method by revisiting the determinants of income expectations in the United States. A Monte Carlo analysis suggests that a quantile-based flexible approach can be used to successfully deal with censoring and possible rounding levels present in the data. Finally, our analysis suggests that the performance of our flexible approach matches that of a correctly specified parametric approach and is clearly better than that of a misspecified parametric approach.
Journal: Journal of Business & Economic Statistics
Pages: 125-131
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1198/jbes.2011.09053
File-URL: http://hdl.handle.net/10.1198/jbes.2011.09053
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:125-131
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_646563_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Pascal Lavergne
Author-X-Name-First: Pascal
Author-X-Name-Last: Lavergne
Author-Name: Valentin Patilea
Author-X-Name-First: Valentin
Author-X-Name-Last: Patilea
Title: One for All and All for One: Regression Checks With Many Regressors
Abstract: We develop a novel approach to building checks of parametric regression models when many regressors are present, based on a class of sufficiently rich semiparametric alternatives, namely single-index models. We propose an omnibus test based on the kernel method that performs against a sequence of directional nonparametric alternatives as if there were only one regressor, whatever the number of regressors. This test can be viewed as a smooth version of the integrated conditional moment test of Bierens. Qualitative information can be easily incorporated into the procedure to enhance power. In an extensive comparative simulation study, we find that our test is not very sensitive to the smoothing parameter and performs well in multidimensional settings. We apply this test to a cross-country growth regression model.
Journal: Journal of Business & Economic Statistics
Pages: 41-52
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1198/jbes.2011.07152
File-URL: http://hdl.handle.net/10.1198/jbes.2011.07152
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:41-52
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_646573_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Qian Li
Author-X-Name-First: Qian
Author-X-Name-Last: Li
Author-Name: Pravin Trivedi
Author-X-Name-First: Pravin
Author-X-Name-Last: Trivedi
Title: Medicare Health Plan Choices of the Elderly: A Choice-With-Screening Model
Abstract: With the expansion of Medicare, increasing attention has been paid to the behavior of elderly persons in choosing health insurance. This article investigates how the elderly use plan attributes to screen their Medicare health plans to simplify a complicated choice situation. The proposed model extends the conventional random utility models by considering a screening stage. Bayesian estimation is implemented, and the results based on Medicare data show that the elderly are likely to screen according to premium, prescription drug coverage, and vision coverage. These attributes have nonlinear effects on plan choice that cannot be captured by conventional models. This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 81-93
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1198/jbes.2011.0819
File-URL: http://hdl.handle.net/10.1198/jbes.2011.0819
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:81-93
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_646582_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Todd Clark
Author-X-Name-First: Todd
Author-X-Name-Last: Clark
Author-Name: Michael McCracken
Author-X-Name-First: Michael
Author-X-Name-Last: McCracken
Title: Reality Checks and Comparisons of Nested Predictive Models
Abstract: This article develops a simple bootstrap method for simulating asymptotic critical values for tests of equal forecast accuracy and encompassing among many nested models. Our method combines elements of fixed regressor and wild bootstraps. We first derive the asymptotic distributions of tests of equal forecast accuracy and encompassing applied to forecasts from multiple models that nest the benchmark model—that is, reality check tests. We then prove the validity of the bootstrap for these tests. Monte Carlo experiments indicate that our proposed bootstrap has better finite-sample size and power than other methods designed for comparison of nonnested models. Supplementary materials are available online.
Journal: Journal of Business & Economic Statistics
Pages: 53-66
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1198/jbes.2011.10278
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10278
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:53-66
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634350_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Kenneth West
Author-X-Name-First: Kenneth
Author-X-Name-Last: West
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 34-35
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634350
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634350
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:34-35
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634340_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Dean Croushore
Author-X-Name-First: Dean
Author-X-Name-Last: Croushore
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 17-20
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634340
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634340
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:17-20
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634342_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Kajal Lahiri
Author-X-Name-First: Kajal
Author-X-Name-Last: Lahiri
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 20-25
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634342
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634342
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:20-25
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634343_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Barbara Rossi
Author-X-Name-First: Barbara
Author-X-Name-Last: Rossi
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 25-29
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634343
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634343
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:25-29
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_637876_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Ingmar Nolte
Author-X-Name-First: Ingmar
Author-X-Name-Last: Nolte
Author-Name: Valeri Voev
Author-X-Name-First: Valeri
Author-X-Name-Last: Voev
Title: Least Squares Inference on Integrated Volatility and the Relationship Between Efficient Prices and Noise
Abstract: The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for joint inference on integrated variance (IV), noise moments, and price-noise relations. In the iid noise case, we derive the asymptotic variance of the IV and noise variance estimators and show that they are consistent. The joint estimation approach is particularly attractive as it reveals important characteristics of the noise process which can be related to liquidity and market efficiency. The analysis of dependence between the price and noise processes provides an often missing link to market microstructure theory. We find substantial differences in the noise characteristics of trade and quote data arising from the effect of distinct market microstructure frictions. This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 94-108
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/10473289.2011.637876
File-URL: http://hdl.handle.net/10.1080/10473289.2011.637876
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:94-108
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634358_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Wolfgang Rinnergschwentner
Author-X-Name-First: Wolfgang
Author-X-Name-Last: Rinnergschwentner
Author-Name: Gottfried Tappeiner
Author-X-Name-First: Gottfried
Author-X-Name-Last: Tappeiner
Author-Name: Janette Walde
Author-X-Name-First: Janette
Author-X-Name-Last: Walde
Title: Multivariate Stochastic Volatility via Wishart Processes: A Comment
Abstract: This comment refers to an error in the methodology for estimating the parameters of the model developed by Philipov and Glickman for modeling multivariate stochastic volatility via Wishart processes. For estimation they used Bayesian techniques. The derived expressions for the full conditionals of the model parameters as well as the expression for the acceptance ratio of the covariance matrix are erroneous. In this erratum all necessary formulae are given to guarantee an appropriate implementation and application of the model.
Journal: Journal of Business & Economic Statistics
Pages: 164-164
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634358
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634358
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:164-164
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634337_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Andrew Patton
Author-X-Name-First: Andrew
Author-X-Name-Last: Patton
Author-Name: Allan Timmermann
Author-X-Name-First: Allan
Author-X-Name-Last: Timmermann
Title: Forecast Rationality Tests Based on Multi-Horizon Bounds
Abstract: Forecast rationality under squared error loss implies various bounds on second moments of the data across forecast horizons. For example, the mean squared forecast error should be increasing in the horizon, and the mean squared forecast should be decreasing in the horizon. We propose rationality tests based on these restrictions, including new ones that can be conducted without data on the target variable, and implement them via tests of inequality constraints in a regression framework. A new test of optimal forecast revision based on a regression of the target variable on the long-horizon forecast and the sequence of interim forecast revisions is also proposed. The size and power of the new tests are compared with those of extant tests through Monte Carlo simulations. An empirical application to the Federal Reserve's Greenbook forecasts is presented.
Journal: Journal of Business & Economic Statistics
Pages: 1-17
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634337
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634337
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:1-17
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_634348_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lennart Hoogerheide
Author-X-Name-First: Lennart
Author-X-Name-Last: Hoogerheide
Author-Name: Francesco Ravazzolo
Author-X-Name-First: Francesco
Author-X-Name-Last: Ravazzolo
Author-Name: Herman van Dijk
Author-X-Name-First: Herman
Author-X-Name-Last: van Dijk
Title: Comment
Journal: Journal of Business & Economic Statistics
Pages: 30-33
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.634348
File-URL: http://hdl.handle.net/10.1080/07350015.2012.634348
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:30-33
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_637868_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Marigee Bacolod
Author-X-Name-First: Marigee
Author-X-Name-Last: Bacolod
Author-Name: John DiNardo
Author-X-Name-First: John
Author-X-Name-Last: DiNardo
Author-Name: Mireille Jacobson
Author-X-Name-First: Mireille
Author-X-Name-Last: Jacobson
Title: Beyond Incentives: Do Schools Use Accountability Rewards Productively?
Abstract: We use a regression discontinuity design to analyze an understudied aspect of school accountability systems—how schools use financial rewards. For two years, California's accountability system financially rewarded schools based on a deterministic function of test scores. Qualifying schools received per-pupil awards amounting to about 1% of statewide per-pupil spending. Corroborating anecdotal evidence that awards were paid out as teacher bonuses, we find no evidence that winning schools purchased more instructional material, increased teacher hiring, or changed the subject-specific composition of their teaching staff. Most importantly, we find no evidence that student achievement increased in winning schools. Supplemental materials for this article are available online.
Journal: Journal of Business & Economic Statistics
Pages: 149-163
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.637868
File-URL: http://hdl.handle.net/10.1080/07350015.2012.637868
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:149-163
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_643126_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Arthur Lewbel
Author-X-Name-First: Arthur
Author-X-Name-Last: Lewbel
Title: Using Heteroscedasticity to Identify and Estimate Mismeasured and Endogenous Regressor Models
Abstract: This article proposes a new method of obtaining identification in mismeasured regressor models, triangular systems, and simultaneous equation systems. The method may be used in applications where other sources of identification, such as instrumental variables or repeated measurements, are not available. Associated estimators take the form of two-stage least squares or generalized method of moments. Identification comes from a heteroscedastic covariance restriction that is shown to be a feature of many models of endogeneity or mismeasurement. Identification is also obtained for semiparametric partly linear models, and associated estimators are provided. Set identification bounds are derived for cases where point-identifying assumptions fail to hold. An empirical application estimating Engel curves is provided.
Journal: Journal of Business & Economic Statistics
Pages: 67-80
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1080/07350015.2012.643126
File-URL: http://hdl.handle.net/10.1080/07350015.2012.643126
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:67-80
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_646567_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Randal Verbrugge
Author-X-Name-First: Randal
Author-X-Name-Last: Verbrugge
Title: Do the Consumer Price Index's Utilities Adjustments for Owners’ Equivalent Rent Distort Inflation Measurement?
Abstract: The Consumer Price Index (CPI) is an important social index number, central to monetary policy, well-being measurement, optimal pricing, and tax and contract escalation. Shelter costs have a large weight in the CPI, so their movements receive much attention. The CPI incorporates two shelter indexes: Rent, covering renters, and Owners’ Equivalent Rent (OER), covering owners. Between 1999 and 2006, Rent and OER inflation twice diverged markedly; this occurred again recently. Because these indexes share a common data source—a large sample of market rents—such divergence often prompts questions about CPI methods, particularly the OER utilities adjustment. (This adjustment is necessary to produce an unbiased OER index, because many market rents include utilities, but OER is a rent-of-shelter concept.) The utilities adjustment procedure is no smoking gun. It was not the major cause of these divergences, and it generates no long-run inflation measurement bias. Nonetheless, it increases OER inflation volatility and can drive OER inflation far from its measurement goal in the short run. This article develops a theory of utilities adjustment and outlines a straightforward improvement of Bureau of Labor Statistics procedures that eliminates their undesirable properties. The short-run impact on inflation measurement can be very sizable.
Journal: Journal of Business & Economic Statistics
Pages: 143-148
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1198/jbes.2011.08016
File-URL: http://hdl.handle.net/10.1198/jbes.2011.08016
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:143-148
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_646578_O.xml processed with: repec_from_tfja.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Xinyu Zhang
Author-X-Name-First: Xinyu
Author-X-Name-Last: Zhang
Author-Name: Alan Wan
Author-X-Name-First: Alan
Author-X-Name-Last: Wan
Author-Name: Sherry Zhou
Author-X-Name-First: Sherry
Author-X-Name-Last: Zhou
Title: Focused Information Criteria, Model Selection, and Model Averaging in a Tobit Model With a Nonzero Threshold
Abstract: Claeskens and Hjort (2003) have developed a focused information criterion (FIC) for model selection that selects different models based on different focused functions with those functions tailored to the parameters singled out for interest. Hjort and Claeskens (2003) also have presented model averaging as an alternative to model selection, and suggested a local misspecification framework for studying the limiting distributions and asymptotic risk properties of post-model selection and model average estimators in parametric models. Despite the burgeoning literature on Tobit models, little work has been done on model selection explicitly in the Tobit context. In this article we propose FICs for variable selection allowing for such measures as mean absolute deviation, mean squared error, and expected linear exponential errors in a type I Tobit model with an unknown threshold. We also develop a model average Tobit estimator using values of a smoothed version of the FIC as weights. We study the finite-sample performance of model selection and model average estimators resulting from various FICs via a Monte Carlo experiment, and demonstrate the possibility of using a model screening procedure before combining the models. Finally, we present an example from a well-known study on married women's working hours to illustrate the estimation methods discussed. This article has supplementary material online.
Journal: Journal of Business & Economic Statistics
Pages: 132-142
Issue: 1
Volume: 30
Year: 2012
X-DOI: 10.1198/jbes.2011.10075
File-URL: http://hdl.handle.net/10.1198/jbes.2011.10075
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:30:y:2012:i:1:p:132-142
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2110881_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: William C. Horrace
Author-X-Name-First: William C.
Author-X-Name-Last: Horrace
Author-Name: Hyunseok Jung
Author-X-Name-First: Hyunseok
Author-X-Name-Last: Jung
Author-Name: Yoonseok Lee
Author-X-Name-First: Yoonseok
Author-X-Name-Last: Lee
Title: LASSO for Stochastic Frontier Models with Many Efficient Firms
Abstract:
We apply the adaptive LASSO to select a set of maximally efficient firms in the panel fixed-effect stochastic frontier model. The adaptively weighted L1 penalty with sign restrictions allows simultaneous selection of a group of maximally efficient firms and estimation of firm-level inefficiency parameters with a faster rate of convergence than least squares dummy variable estimators. Our estimator possesses the oracle property. We propose a tuning parameter selection criterion and an efficient optimization algorithm based on coordinate descent. We apply the method to identify the group of police officers who are best at detecting contraband in motor vehicle stops (i.e., search efficiency) in Syracuse, NY.
Journal: Journal of Business & Economic Statistics
Pages: 1132-1142
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2110881
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2110881
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1132-1142
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2216740_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Matias D. Cattaneo
Author-X-Name-First: Matias D.
Author-X-Name-Last: Cattaneo
Author-Name: Xinwei Ma
Author-X-Name-First: Xinwei
Author-X-Name-Last: Ma
Author-Name: Yusufcan Masatlioglu
Author-X-Name-First: Yusufcan
Author-X-Name-Last: Masatlioglu
Title: Context-Dependent Heterogeneous Preferences: A Comment on Barseghyan and Molinari (2023)
Abstract:
Barseghyan and Molinari give sufficient conditions for semi-nonparametric point identification of parameters of interest in a mixture model of decision-making under risk, allowing for unobserved heterogeneity in utility functions and limited consideration. A key assumption in the model is that the heterogeneity of risk preferences is unobservable but context-independent. In this comment, we build on their insights and present identification results in a setting where the risk preferences are allowed to be context-dependent.
Journal: Journal of Business & Economic Statistics
Pages: 1030-1034
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2023.2216740
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2216740
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1030-1034
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2217870_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Elisabeth Honka
Author-X-Name-First: Elisabeth
Author-X-Name-Last: Honka
Title: Discussion of “Risk Preference Types, Limited Consideration, and Welfare” by Levon Barseghyan and Francesca Molinari
Journal: Journal of Business & Economic Statistics
Pages: 1039-1041
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2023.2217870
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2217870
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1039-1041
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2116026_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Alexander Henzi
Author-X-Name-First: Alexander
Author-X-Name-Last: Henzi
Title: Consistent Estimation of Distribution Functions under Increasing Concave and Convex Stochastic Ordering
Abstract:
A random variable Y1 is said to be smaller than Y2 in the increasing concave stochastic order if E[ϕ(Y1)] ≤ E[ϕ(Y2)] for all increasing concave functions ϕ for which the expected values exist, and smaller than Y2 in the increasing convex order if E[ψ(Y1)] ≤ E[ψ(Y2)] for all increasing convex ψ. This article develops nonparametric estimators for the conditional cumulative distribution functions Fx(y) = P(Y ≤ y | X = x) of a response variable Y given a covariate X, solely under the assumption that the conditional distributions are increasing in x in the increasing concave or increasing convex order. Uniform consistency and rates of convergence are established both for the K-sample case X ∈ {1, …, K} and for continuously distributed X.
Journal: Journal of Business & Economic Statistics
Pages: 1203-1214
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2116026
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2116026
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1203-1214
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2118127_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lajos Horváth
Author-X-Name-First: Lajos
Author-X-Name-Last: Horváth
Author-Name: Zhenya Liu
Author-X-Name-First: Zhenya
Author-X-Name-Last: Liu
Author-Name: Gregory Rice
Author-X-Name-First: Gregory
Author-X-Name-Last: Rice
Author-Name: Shixuan Wang
Author-X-Name-First: Shixuan
Author-X-Name-Last: Wang
Author-Name: Yaosong Zhan
Author-X-Name-First: Yaosong
Author-X-Name-Last: Zhan
Title: Testing Stability in Functional Event Observations with an Application to IPO Performance
Abstract:
Many sequentially observed functional data objects are available only at the times of certain events. For example, the trajectory of stock prices of companies after their initial public offering (IPO) can be observed when the offering occurs, and the resulting data may be affected by changing circumstances. It is of interest to investigate whether the mean behavior of such functions is stable over time, and if not, to estimate the times at which apparent changes occur. Since the frequency of events may fluctuate over time, we propose a change point analysis that has two steps. In the first step, we divide the series into segments in which the frequency of events is approximately homogeneous, using a new binary segmentation procedure for event frequencies. After adjusting the observed curves in each segment based on the frequency of events, we proceed in the second step by developing a method to test for and estimate change points in the mean of the observed functional data objects. We establish the consistency and asymptotic distribution of the change point detector and estimator in both steps, and study their performance using Monte Carlo simulations. An application to IPO performance data illustrates the proposed methods.
Journal: Journal of Business & Economic Statistics
Pages: 1262-1273
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2118127
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2118127
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1262-1273
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2216255_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Cristina Gualdani
Author-X-Name-First: Cristina
Author-X-Name-Last: Gualdani
Title: Discussion of “Risk Preference Types, Limited Consideration, and Welfare” by Levon Barseghyan and Francesca Molinari
Journal: Journal of Business & Economic Statistics
Pages: 1035-1038
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2023.2216255
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2216255
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1035-1038
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2110880_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Ekaterina Kazak
Author-X-Name-First: Ekaterina
Author-X-Name-Last: Kazak
Author-Name: Winfried Pohlmeier
Author-X-Name-First: Winfried
Author-X-Name-Last: Pohlmeier
Title: Bagged Pretested Portfolio Selection
Abstract:
This article exploits the idea of combining pretesting and bagging to choose between competing portfolio strategies. We propose an estimator for the portfolio weight vector, which optimally trades off Type I against Type II errors when choosing the best investment strategy. Furthermore, we accommodate the idea of bagging in the portfolio testing problem, which helps to avoid sharp thresholding and reduces turnover costs substantially. Our Bagged Pretested Portfolio Selection (BPPS) approach borrows from both the shrinkage and the forecast combination literature. The portfolio weights of our strategy are weighted averages of the portfolio weights from a set of stand-alone strategies. More specifically, the weights are generated from pseudo-out-of-sample portfolio pretesting, such that they reflect the probability that a given strategy will be overall best performing. The resulting strategy allows for a flexible and smooth switch between the underlying strategies and outperforms the corresponding stand-alone strategies. Besides yielding high point estimates of the portfolio performance measures, the BPPS approach performs exceptionally well in terms of precision and is robust against outliers resulting from the choice of the asset space.
Journal: Journal of Business & Economic Statistics
Pages: 1116-1131
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2110880
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2110880
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1116-1131
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2115498_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Jialu Li
Author-X-Name-First: Jialu
Author-X-Name-Last: Li
Author-Name: Wan Zhang
Author-X-Name-First: Wan
Author-X-Name-Last: Zhang
Author-Name: Peiyao Wang
Author-X-Name-First: Peiyao
Author-X-Name-Last: Wang
Author-Name: Qizhai Li
Author-X-Name-First: Qizhai
Author-X-Name-Last: Li
Author-Name: Kai Zhang
Author-X-Name-First: Kai
Author-X-Name-Last: Zhang
Author-Name: Yufeng Liu
Author-X-Name-First: Yufeng
Author-X-Name-Last: Liu
Title: Nonparametric Prediction Distribution from Resolution-Wise Regression with Heterogeneous Data
Abstract:
Modeling and inference for heterogeneous data have gained great interest recently due to rapid developments in personalized marketing. Most existing regression approaches are based on the conditional mean and may require additional cluster information to accommodate data heterogeneity. In this article, we propose a novel nonparametric resolution-wise regression procedure to provide an estimated distribution of the response instead of one single value. We achieve this by decomposing the information of the response and the predictors into resolutions and patterns, respectively, based on marginal binary expansions. The relationships between resolutions and patterns are modeled by penalized logistic regressions. Combining the resolution-wise prediction, we deliver a histogram of the conditional response to approximate the distribution. Moreover, we show a sure independence screening property and the consistency of the proposed method for growing dimensions. Simulations and a real estate valuation dataset further illustrate the effectiveness of the proposed method.
Journal: Journal of Business & Economic Statistics
Pages: 1157-1172
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2115498
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2115498
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1157-1172
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2110882_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Léopold Simar
Author-X-Name-First: Léopold
Author-X-Name-Last: Simar
Author-Name: Paul W. Wilson
Author-X-Name-First: Paul W.
Author-X-Name-Last: Wilson
Title: Nonparametric, Stochastic Frontier Models with Multiple Inputs and Outputs
Abstract:
Stochastic frontier models along the lines of Aigner et al. are widely used to benchmark firms’ performances in terms of efficiency. The models are typically fully parametric, with functional form specifications for the frontier as well as both the noise and the inefficiency processes. Studies such as Kumbhakar et al. have attempted to relax some of the restrictions in parametric models, but so far all such approaches are limited to a univariate response variable. Some (e.g., Simar and Zelenyuk; Kuosmanen and Johnson) have proposed nonparametric estimation of directional distance functions to handle multiple inputs and outputs, raising issues of endogeneity that are either ignored or addressed by imposing restrictive and implausible assumptions. This article extends nonparametric methods developed by Simar et al. and Hafner et al. to allow multiple inputs and outputs in an almost fully nonparametric framework while avoiding endogeneity problems. We discuss properties of the resulting estimators, and examine their finite-sample performance through Monte Carlo experiments. Practical implementation of the method is illustrated using data on U.S. commercial banks.
Journal: Journal of Business & Economic Statistics
Pages: 1391-1403
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2110882
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2110882
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1391-1403
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2120486_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Christian Gourieroux
Author-X-Name-First: Christian
Author-X-Name-Last: Gourieroux
Author-Name: Joann Jasiak
Author-X-Name-First: Joann
Author-X-Name-Last: Jasiak
Title: Generalized Covariance Estimator
Abstract:
We consider a class of semi-parametric dynamic models with iid errors, including the nonlinear mixed causal-noncausal Vector Autoregressive (VAR), Double-Autoregressive (DAR) and stochastic volatility models. To estimate the parameters characterizing the (nonlinear) serial dependence, we introduce a generic Generalized Covariance (GCov) estimator, which minimizes a residual-based multivariate portmanteau statistic. In comparison to the standard methods of moments, the GCov estimator has an interpretable objective function, circumvents the inversion of high-dimensional matrices, and achieves semi-parametric efficiency in one step. We derive the asymptotic properties of the GCov estimator and show its semi-parametric efficiency. We also prove that the associated residual-based portmanteau statistic is asymptotically chi-square distributed. The finite sample performance of the GCov estimator is illustrated in a simulation study. The estimator is then applied to a dynamic model of commodity futures.
Journal: Journal of Business & Economic Statistics
Pages: 1315-1327
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2120486
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2120486
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1315-1327
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2118125_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Xiaoyu Zhang
Author-X-Name-First: Xiaoyu
Author-X-Name-Last: Zhang
Author-Name: Di Wang
Author-X-Name-First: Di
Author-X-Name-Last: Wang
Author-Name: Heng Lian
Author-X-Name-First: Heng
Author-X-Name-Last: Lian
Author-Name: Guodong Li
Author-X-Name-First: Guodong
Author-X-Name-Last: Li
Title: Nonparametric Quantile Regression for Homogeneity Pursuit in Panel Data Models
Abstract:
Many panel data exhibit latent subgroup effects on individuals, and it is important to identify these groups correctly, since the efficiency of the resulting estimators can be improved significantly by pooling the information of individuals within each group. However, the commonly assumed parametric and semiparametric relationships between the response and predictors may be misspecified, leading to incorrect grouping; a nonparametric approach can therefore be considered to avoid such mistakes. Moreover, the response may depend on predictors in different ways at various quantile levels, and the corresponding grouping structure may also vary. To tackle these problems, this article proposes a nonparametric quantile regression method for homogeneity pursuit in panel data models with individual effects, and a pairwise fused penalty is used to automatically select the number of groups. The asymptotic properties are established, and an ADMM algorithm is also developed. The finite sample performance is evaluated by simulation experiments, and the usefulness of the proposed methodology is further illustrated by an empirical example.
Journal: Journal of Business & Economic Statistics
Pages: 1238-1250
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2118125
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2118125
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1238-1250
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2120483_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Yannick Hoga
Author-X-Name-First: Yannick
Author-X-Name-Last: Hoga
Title: Extremal Dependence-Based Specification Testing of Time Series
Abstract:
We propose a specification test for conditional location–scale models based on extremal dependence properties of the standardized residuals. We do so by comparing the left-over serial extremal dependence—as measured by the pre-asymptotic tail copula—with that arising under serial independence at different lags. Our main theoretical results show that the proposed Portmanteau-type test statistics have nuisance parameter-free asymptotic limits. The test statistics are easy to compute, as they only depend on the standardized residuals, and critical values are likewise easily obtained from the limiting distributions. This contrasts with some extant tests (based, e.g., on autocorrelations of squared residuals), where test statistics depend on the parameter estimator of the model and critical values may need to be bootstrapped. We show that our tests perform well in simulations. An empirical application to S&P 500 constituents illustrates that our tests can uncover violations of residual serial independence that are not picked up by standard autocorrelation-based specification tests, yet are relevant when the model is used for, for example, risk forecasting.
Journal: Journal of Business & Economic Statistics
Pages: 1274-1287
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2120483
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2120483
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1274-1287
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2134872_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Markku Lanne
Author-X-Name-First: Markku
Author-X-Name-Last: Lanne
Author-Name: Keyan Liu
Author-X-Name-First: Keyan
Author-X-Name-Last: Liu
Author-Name: Jani Luoto
Author-X-Name-First: Jani
Author-X-Name-Last: Luoto
Title: Identifying Structural Vector Autoregression via Leptokurtic Economic Shocks
Abstract:
We revisit the generalized method of moments (GMM) estimation of the non-Gaussian structural vector autoregressive (SVAR) model. It is shown that in the n-dimensional SVAR model, global and local identification of the contemporaneous impact matrix is achieved with as few as n^2 + n(n−1)/2 suitably selected moment conditions, when at least n − 1 of the structural errors are leptokurtic (or platykurtic). We also relax the potentially problematic assumption of mutually independent structural errors in part of the previous literature to the requirement that the errors be mutually uncorrelated. Moreover, we assume the error term to be only serially uncorrelated, not independent in time, which allows for univariate conditional heteroscedasticity in its components. A small simulation experiment highlights the good properties of the estimator and the proposed moment selection procedure. The use of the methods is illustrated by means of an empirical application to the effect of a tax increase on U.S. gasoline consumption and carbon dioxide emissions.
Journal: Journal of Business & Economic Statistics
Pages: 1341-1351
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2134872
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2134872
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1341-1351
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2120485_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Lajos Horváth
Author-X-Name-First: Lajos
Author-X-Name-Last: Horváth
Author-Name: Lorenzo Trapani
Author-X-Name-First: Lorenzo
Author-X-Name-Last: Trapani
Title: Changepoint Detection in Heteroscedastic Random Coefficient Autoregressive Models
Abstract:
We propose a family of CUSUM-based statistics to detect the presence of changepoints in the deterministic part of the autoregressive parameter in a Random Coefficient Autoregressive (RCA) sequence. Our tests can be applied irrespective of whether the sequence is stationary or not, and no prior knowledge of stationarity or lack thereof is required. Similarly, our tests can be applied even when the error term and the stochastic part of the autoregressive coefficient are non-iid, covering the cases of conditional volatility and shifts in the variance, again without requiring any prior knowledge as to the presence or type thereof. In order to ensure the ability to detect breaks at sample endpoints, we propose weighted CUSUM statistics, deriving the asymptotics for virtually all possible weighting schemes, including the standardized CUSUM process (for which we derive a Darling-Erdős theorem) and even heavier weights (so-called Rényi statistics). Simulations show that our procedures work very well in finite samples. We complement our theory with an application to several financial time series.
Journal: Journal of Business & Economic Statistics
Pages: 1300-1314
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2120485
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2120485
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1300-1314
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2115499_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Caio Almeida
Author-X-Name-First: Caio
Author-X-Name-Last: Almeida
Author-Name: Gustavo Freire
Author-X-Name-First: Gustavo
Author-X-Name-Last: Freire
Author-Name: Rafael Azevedo
Author-X-Name-First: Rafael
Author-X-Name-Last: Azevedo
Author-Name: Kym Ardison
Author-X-Name-First: Kym
Author-X-Name-Last: Ardison
Title: Nonparametric Option Pricing with Generalized Entropic Estimators
Abstract:
We propose a family of nonparametric estimators for an option price that require only the use of underlying return data, but can also easily incorporate information from observed option prices. Each estimator comes from a risk-neutral measure minimizing generalized entropy according to a different Cressie–Read discrepancy. We apply our method to price S&P 500 options and the cross-section of individual equity options, using distinct amounts of option data in the estimation. Estimators incorporating mild nonlinearities produce optimal pricing accuracy within the Cressie–Read family and outperform several benchmarks such as Black–Scholes and different GARCH option pricing models. Overall, we provide a powerful option pricing technique suitable for scenarios of limited option data availability.
Journal: Journal of Business & Economic Statistics
Pages: 1173-1187
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2115499
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2115499
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1173-1187
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2126480_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Bryan S. Graham
Author-X-Name-First: Bryan S.
Author-X-Name-Last: Graham
Author-Name: Geert Ridder
Author-X-Name-First: Geert
Author-X-Name-Last: Ridder
Author-Name: Petra Thiemann
Author-X-Name-First: Petra
Author-X-Name-Last: Thiemann
Author-Name: Gema Zamarro
Author-X-Name-First: Gema
Author-X-Name-Last: Zamarro
Title: Teacher-to-Classroom Assignment and Student Achievement
Abstract:
We study the effects of counterfactual teacher-to-classroom assignments on average student achievement in U.S. elementary and middle schools. We use the Measures of Effective Teaching (MET) experiment to semiparametrically identify the average reallocation effects (AREs) of such assignments. Our identification strategy exploits the random assignment of teachers to classrooms in MET schools. To account for noncompliance of some students and teachers to the random assignment, we develop and implement a semiparametric instrumental variables estimator. We find that changes in within-district teacher assignments could have appreciable effects on student achievement. Unlike policies that aim at changing the pool of teachers (e.g., teacher tenure policies or class-size reduction measures), alternative teacher-to-classroom assignments do not require that districts hire new teachers or lay off existing ones; they raise student achievement through a more efficient deployment of existing teachers.
Journal: Journal of Business & Economic Statistics
Pages: 1328-1340
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2126480
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2126480
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1328-1340
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2239949_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Levon Barseghyan
Author-X-Name-First: Levon
Author-X-Name-Last: Barseghyan
Author-Name: Francesca Molinari
Author-X-Name-First: Francesca
Author-X-Name-Last: Molinari
Title: Risk Preference Types, Limited Consideration, and Welfare
Abstract:
We provide sufficient conditions for semi-nonparametric point identification of a mixture model of decision making under risk, when agents make choices in multiple lines of insurance coverage (contexts) by purchasing a bundle. As a first departure from the related literature, the model allows for two preference types. In the first one, agents behave according to standard expected utility theory with CARA Bernoulli utility function, with an agent-specific coefficient of absolute risk aversion whose distribution is left completely unspecified. In the other, agents behave according to the dual theory of choice under risk combined with a one-parameter family distortion function, where the parameter is agent-specific and is drawn from a distribution that is left completely unspecified. Within each preference type, the model allows for unobserved heterogeneity in consideration sets, where the latter form at the bundle level—a second departure from the related literature. Our point identification result rests on observing sufficient variation in covariates across contexts, without requiring any independent variation across alternatives within a single context. We estimate the model on data on households’ deductible choices in two lines of property insurance, and use the results to assess the welfare implications of a hypothetical market intervention where the two lines of insurance are combined into a single one. We study the role of limited consideration in mediating the welfare effects of such intervention.
Journal: Journal of Business & Economic Statistics
Pages: 1011-1029
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2023.2239949
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2239949
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1011-1029
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2118126_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Jeffrey S. Racine
Author-X-Name-First: Jeffrey S.
Author-X-Name-Last: Racine
Author-Name: Qi Li
Author-X-Name-First: Qi
Author-X-Name-Last: Li
Author-Name: Dalei Yu
Author-X-Name-First: Dalei
Author-X-Name-Last: Yu
Author-Name: Li Zheng
Author-X-Name-First: Li
Author-X-Name-Last: Zheng
Title: Optimal Model Averaging of Mixed-Data Kernel-Weighted Spline Regressions
Abstract:
Model averaging has a rich history dating from its use for combining forecasts from time-series models (Bates and Granger) and presents a compelling alternative to model selection methods. We propose a frequentist model averaging procedure defined over categorical regression splines (Ma, Racine, and Yang) that allows for mixed-data predictors, as well as nonnested and heteroscedastic candidate models. We demonstrate the asymptotic optimality of the proposed model averaging estimator, and develop a post-averaging inference theory for it. Theoretical underpinnings are provided, finite-sample performance is evaluated, and an empirical illustration reveals that the method is capable of outperforming a range of popular model selection criteria in applied settings. An R package is available for practitioners (Racine).
Journal: Journal of Business & Economic Statistics
Pages: 1251-1261
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2118126
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2118126
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1251-1261
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2139267_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Rubén Loaiza-Maya
Author-X-Name-First: Rubén
Author-X-Name-Last: Loaiza-Maya
Author-Name: Didier Nibbering
Author-X-Name-First: Didier
Author-X-Name-Last: Nibbering
Title: Fast Variational Bayes Methods for Multinomial Probit Models
Abstract:
The multinomial probit model is often used to analyze choice behavior. However, estimation with existing Markov chain Monte Carlo (MCMC) methods is computationally costly, which limits its applicability to large choice datasets. This article proposes a variational Bayes method that is accurate and fast, even when a large number of choice alternatives and observations are considered. Variational methods usually require an analytical expression for the unnormalized posterior density and an adequate choice of variational family. Both are challenging to specify in a multinomial probit, which has a posterior that requires identifying restrictions and is augmented with a large set of latent utilities. We employ a spherical transformation on the covariance matrix of the latent utilities to construct an unnormalized augmented posterior that identifies the parameters, and use the conditional posterior of the latent utilities as part of the variational family. The proposed method is faster than MCMC, and can be made scalable to both a large number of choice alternatives and a large number of observations. The accuracy and scalability of our method are illustrated in numerical experiments and real purchase data with one million observations.
Journal: Journal of Business & Economic Statistics
Pages: 1352-1363
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2139267
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2139267
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1352-1363
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2102025_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Danny Klinenberg
Author-X-Name-First: Danny
Author-X-Name-Last: Klinenberg
Title: Synthetic Control with Time Varying Coefficients: A State Space Approach with Bayesian Shrinkage
Abstract:
Synthetic control methods are a popular tool for measuring the effects of policy interventions on a single treated unit. In practice, researchers create a counterfactual using a linear combination of untreated units that closely mimic the treated unit. Oftentimes, creating a synthetic control is not possible due to untreated units’ dynamic characteristics such as integrated processes or a time varying relationship. These are cases in which viewing the counterfactual estimation problem as a cross-sectional one fails. In this article, I investigate a new approach to estimate the synthetic control counterfactual incorporating time varying parameters to handle such situations. This is done using a state space framework and Bayesian shrinkage. The dynamics allow for a closer pretreatment fit leading to a more accurate counterfactual estimate. Monte Carlo simulations are performed showcasing the usefulness of the proposed model in a synthetic control setting. I then compare the proposed model to existing approaches in a classic synthetic control case study.
Journal: Journal of Business & Economic Statistics
Pages: 1065-1076
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2102025
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2102025
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1065-1076
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2139709_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Angelo Mele
Author-X-Name-First: Angelo
Author-X-Name-Last: Mele
Author-Name: Lingxin Hao
Author-X-Name-First: Lingxin
Author-X-Name-Last: Hao
Author-Name: Joshua Cape
Author-X-Name-First: Joshua
Author-X-Name-Last: Cape
Author-Name: Carey E. Priebe
Author-X-Name-First: Carey E.
Author-X-Name-Last: Priebe
Title: Spectral Estimation of Large Stochastic Blockmodels with Discrete Nodal Covariates
Abstract:
In many applications of network analysis, it is important to distinguish between observed and unobserved factors affecting network structure. We show that a network model with discrete unobserved link heterogeneity and binary (or discrete) covariates corresponds to a stochastic blockmodel (SBM). We develop a spectral estimator for the effect of covariates on link probabilities, exploiting the correspondence of SBMs and generalized random dot product graphs (GRDPG). We show that computing our estimator is much faster than standard variational expectation–maximization algorithms and scales well for large networks. Monte Carlo experiments suggest that the estimator performs well under different data generating processes. Our application to Facebook data shows evidence of homophily in gender, role and campus-residence, while allowing us to discover unobserved communities. Finally, we establish asymptotic normality of our estimators.
Journal: Journal of Business & Economic Statistics
Pages: 1364-1376
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2139709
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2139709
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1364-1376
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2120484_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Shaobo Li
Author-X-Name-First: Shaobo
Author-X-Name-Last: Li
Author-Name: Shaonan Tian
Author-X-Name-First: Shaonan
Author-X-Name-Last: Tian
Author-Name: Yan Yu
Author-X-Name-First: Yan
Author-X-Name-Last: Yu
Author-Name: Xiaorui Zhu
Author-X-Name-First: Xiaorui
Author-X-Name-Last: Zhu
Author-Name: Heng Lian
Author-X-Name-First: Heng
Author-X-Name-Last: Lian
Title: Corporate Probability of Default: A Single-Index Hazard Model Approach
Abstract:
Corporate probability of default (PD) prediction is vitally important for risk management and asset pricing. In search of accurate PD prediction, we propose a flexible yet easy-to-interpret default-prediction single-index hazard model (DSI). By applying it to a comprehensive U.S. corporate bankruptcy database we constructed, we discover an interesting V-shaped relationship, indicating a violation of the common linear hazard specification. Most importantly, the single-index hazard model passes the Hosmer-Lemeshow goodness-of-fit calibration test, while neither a state-of-the-art linear hazard model in finance nor a parametric class of Box-Cox transformation survival models does. In an economic value analysis, we find that this may translate to as much as three times the profit of the linear hazard model. In model estimation, we adopt a penalized-spline approximation for the unknown function and propose an efficient algorithm. With a diverging number of spline knots, we establish consistency and asymptotic theories for the penalized-spline likelihood estimators. Furthermore, we reexamine the distress risk anomaly, that is, that more financially distressed stocks deliver anomalously lower excess returns. Based on the PDs from the proposed single-index hazard model, we find that the distress risk anomaly has weakened or even disappeared during the extended period.
Journal: Journal of Business & Economic Statistics
Pages: 1288-1299
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2120484
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2120484
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1288-1299
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2116025_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Laurent Ferrara
Author-X-Name-First: Laurent
Author-X-Name-Last: Ferrara
Author-Name: Anna Simoni
Author-X-Name-First: Anna
Author-X-Name-Last: Simoni
Title: When are Google Data Useful to Nowcast GDP? An Approach via Preselection and Shrinkage
Abstract:
Alternative datasets are widely used for macroeconomic nowcasting together with machine learning–based tools. The latter are often applied without a complete picture of their theoretical nowcasting properties. Against this background, this article proposes a theoretically grounded nowcasting methodology that allows researchers to incorporate alternative Google Search Data (GSD) among the predictors and that combines targeted preselection, Ridge regularization, and Generalized Cross Validation. Breaking with most existing literature, which focuses on asymptotic in-sample theoretical properties, we establish the theoretical out-of-sample properties of our methodology and support them by Monte Carlo simulations. We apply our methodology to GSD to nowcast GDP growth rate of several countries during various economic periods. Our empirical findings support the idea that GSD tend to increase nowcasting accuracy, even after controlling for official variables, but that the gain differs between periods of recessions and of macroeconomic stability.
Journal: Journal of Business & Economic Statistics
Pages: 1188-1202
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2116025
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2116025
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1188-1202
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2116027_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Donggyu Kim
Author-X-Name-First: Donggyu
Author-X-Name-Last: Kim
Author-Name: Minseok Shin
Author-X-Name-First: Minseok
Author-X-Name-Last: Shin
Author-Name: Yazhen Wang
Author-X-Name-First: Yazhen
Author-X-Name-Last: Wang
Title: Overnight GARCH-Itô Volatility Models
Abstract:
Various parametric volatility models for financial data have been developed to incorporate high-frequency realized volatilities and better capture market dynamics. However, because high-frequency trading data are not available during the close-to-open period, the volatility models often ignore volatility information over the close-to-open period and thus may suffer from loss of important information relevant to market dynamics. In this article, to account for whole-day market dynamics, we propose an overnight volatility model based on Itô diffusions to accommodate two different instantaneous volatility processes for the open-to-close and close-to-open periods. We develop a weighted least squares method to estimate model parameters for two different periods and investigate its asymptotic properties.
Journal: Journal of Business & Economic Statistics
Pages: 1215-1227
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2116027
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2116027
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1215-1227
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2106990_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Danning Li
Author-X-Name-First: Danning
Author-X-Name-Last: Li
Author-Name: Arun Srinivasan
Author-X-Name-First: Arun
Author-X-Name-Last: Srinivasan
Author-Name: Qian Chen
Author-X-Name-First: Qian
Author-X-Name-Last: Chen
Author-Name: Lingzhou Xue
Author-X-Name-First: Lingzhou
Author-X-Name-Last: Xue
Title: Robust Covariance Matrix Estimation for High-Dimensional Compositional Data with Application to Sales Data Analysis
Abstract:
Compositional data arises in a wide variety of research areas when some form of standardization and composition is necessary. Estimating covariance matrices is of fundamental importance for high-dimensional compositional data analysis. However, existing methods require the restrictive Gaussian or sub-Gaussian assumption, which may not hold in practice. We propose a robust composition adjusted thresholding covariance procedure based on Huber-type M-estimation to estimate the sparse covariance structure of high-dimensional compositional data. We introduce a cross-validation procedure to choose the tuning parameters of the proposed method. Theoretically, by assuming a bounded fourth moment condition, we obtain the rates of convergence and signal recovery property for the proposed method and provide the theoretical guarantees for the cross-validation procedure under the high-dimensional setting. Numerically, we demonstrate the effectiveness of the proposed method in simulation studies and also a real application to sales data analysis.
Journal: Journal of Business & Economic Statistics
Pages: 1090-1100
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2106990
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2106990
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1090-1100
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2104857_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Robin Braun
Author-X-Name-First: Robin
Author-X-Name-Last: Braun
Author-Name: Ralf Brüggemann
Author-X-Name-First: Ralf
Author-X-Name-Last: Brüggemann
Title: Identification of SVAR Models by Combining Sign Restrictions With External Instruments
Abstract:
We discuss combining sign restrictions with information in external instruments (proxy variables) to identify structural vector autoregressive (SVAR) models. In one setting, we assume the availability of valid external instruments. Sign restrictions may then be used to identify further orthogonal shocks, or as an additional piece of information to pin down the shocks identified by the external instruments more precisely. In a second setting, we assume that proxy variables are only “plausibly exogenous” and suggest various types of inequality restrictions to bound the relation between structural shocks and the external variable. This can be combined with conventional sign restrictions to further narrow down the set of admissible models. Within a proxy-augmented SVAR, we conduct Bayesian inference and discuss computation of Bayes factors. They can be useful to test either the sign- or IV restrictions as overidentifying. We illustrate the usefulness of our methodology in estimating the effects of oil supply and monetary policy shocks.
Journal: Journal of Business & Economic Statistics
Pages: 1077-1089
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2104857
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2104857
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1077-1089
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2116442_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Rong Zhu
Author-X-Name-First: Rong
Author-X-Name-Last: Zhu
Author-Name: Haiying Wang
Author-X-Name-First: Haiying
Author-X-Name-Last: Wang
Author-Name: Xinyu Zhang
Author-X-Name-First: Xinyu
Author-X-Name-Last: Zhang
Author-Name: Hua Liang
Author-X-Name-First: Hua
Author-X-Name-Last: Liang
Title: A Scalable Frequentist Model Averaging Method
Abstract:
Frequentist model averaging is an effective technique to handle model uncertainty. However, calculation of the weights for averaging is extremely difficult, if not impossible, even when the dimension of the predictor vector, p, is moderate, because we may have 2^p candidate models. The exponential size of the candidate model set makes it difficult to estimate all candidate models, and brings additional numeric errors when calculating the weights. This article proposes a scalable frequentist model averaging method, which is statistically and computationally efficient, to overcome this problem by transforming the original model using the singular value decomposition. The method enables us to find the optimal weights by considering at most p candidate models. We prove that the minimum loss of the scalable model averaging estimator is asymptotically equal to that of the traditional model averaging estimator. We apply the Mallows and Jackknife criteria to the scalable model averaging estimator and prove that they are asymptotically optimal estimators. We further extend the method to the high-dimensional case (i.e., p≥n). Numerical studies illustrate the superiority of the proposed method in terms of both statistical efficiency and computational cost.
Journal: Journal of Business & Economic Statistics
Pages: 1228-1237
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2116442
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2116442
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1228-1237
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2239870_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Levon Barseghyan
Author-X-Name-First: Levon
Author-X-Name-Last: Barseghyan
Author-Name: Francesca Molinari
Author-X-Name-First: Francesca
Author-X-Name-Last: Molinari
Title: Rejoinder
Journal: Journal of Business & Economic Statistics
Pages: 1046-1049
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2023.2239870
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2239870
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1046-1049
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2127737_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Ulrich K. Müller
Author-X-Name-First: Ulrich K.
Author-X-Name-Last: Müller
Author-Name: Mark W. Watson
Author-X-Name-First: Mark W.
Author-X-Name-Last: Watson
Title: Spatial Correlation Robust Inference in Linear Regression and Panel Models
Abstract:
We consider inference about a scalar coefficient in a linear regression with spatially correlated errors. Recent suggestions for more robust inference require stationarity of both regressors and dependent variables for their large sample validity. This rules out many empirically relevant applications, such as difference-in-difference designs. We develop a robustified version of the recently suggested SCPC method that addresses this challenge. We find that the method has good size properties in a wide range of Monte Carlo designs that are calibrated to real world applications, both in a pure cross-sectional setting and for spatially correlated panel data. We provide numerically efficient methods for computing the associated spatial-correlation robust test statistics, critical values, and confidence intervals.
Journal: Journal of Business & Economic Statistics
Pages: 1050-1064
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2127737
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2127737
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1050-1064
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2115497_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Gaurab Aryal
Author-X-Name-First: Gaurab
Author-X-Name-Last: Aryal
Author-Name: Hanna Charankevich
Author-X-Name-First: Hanna
Author-X-Name-Last: Charankevich
Author-Name: Seungwon Jeong
Author-X-Name-First: Seungwon
Author-X-Name-Last: Jeong
Author-Name: Dong-Hyuk Kim
Author-X-Name-First: Dong-Hyuk
Author-X-Name-Last: Kim
Title: Procurements with Bidder Asymmetry in Cost and Risk-Aversion
Abstract:
We propose an empirical method to analyze data from first-price procurements where bidders are asymmetric in their risk-aversion (CRRA) coefficients and distributions of private costs. Our Bayesian approach evaluates the likelihood by solving type-symmetric equilibria using the boundary-value method and integrates out unobserved heterogeneity through data augmentation. We study a new dataset from Russian government procurements focusing on the category of printing papers. We find that there is no unobserved heterogeneity (presumably because the job is routine), but bidders are highly asymmetric in their cost and risk-aversion. Our counterfactual study shows that choosing a type-specific cost-minimizing reserve price marginally reduces the procurement cost; however, inviting one more bidder substantially reduces the cost, by at least 5.5%. Furthermore, incorrectly imposing risk-neutrality would severely mislead inference and policy recommendations, but the bias from imposing homogeneity in risk-aversion is small.
Journal: Journal of Business & Economic Statistics
Pages: 1143-1156
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2115497
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2115497
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1143-1156
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2223592_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Julie Holland Mortimer
Author-X-Name-First: Julie Holland
Author-X-Name-Last: Mortimer
Title: Discussion of Levon Barseghyan and Francesca Molinari’s “Risk Preference Types, Limited Consideration, and Welfare”
Journal: Journal of Business & Economic Statistics
Pages: 1042-1045
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2023.2223592
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2223592
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1042-1045
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2140158_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Huijuan Ma
Author-X-Name-First: Huijuan
Author-X-Name-Last: Ma
Author-Name: Jing Qin
Author-X-Name-First: Jing
Author-X-Name-Last: Qin
Author-Name: Yong Zhou
Author-X-Name-First: Yong
Author-X-Name-Last: Zhou
Title: From Conditional Quantile Regression to Marginal Quantile Estimation with Applications to Missing Data and Causal Inference
Abstract:
It is well known that information on the conditional distribution of an outcome variable given covariates can be used to obtain an enhanced estimate of the marginal outcome distribution. This can be done easily by integrating out the marginal covariate distribution from the conditional outcome distribution. However, to date, no analogy has been established between marginal quantile and conditional quantile regression. This article provides a link between them. We propose two novel marginal quantile and marginal mean estimation approaches through conditional quantile regression when some of the outcomes are missing at random. The first of these approaches is free from the need to choose a propensity score. The second is double robust to model misspecification: it is consistent if either the conditional quantile regression model is correctly specified or the missing mechanism of outcome is correctly specified. Consistency and asymptotic normality of the two estimators are established, and the second double robust estimator achieves the semiparametric efficiency bound. Extensive simulation studies are performed to demonstrate the utility of the proposed approaches. An application to causal inference is introduced. For illustration, we apply the proposed methods to a job training program dataset.
Journal: Journal of Business & Economic Statistics
Pages: 1377-1390
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2140158
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2140158
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1377-1390
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2110879_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20
Author-Name: Xuehu Zhu
Author-X-Name-First: Xuehu
Author-X-Name-Last: Zhu
Author-Name: Qiming Zhang
Author-X-Name-First: Qiming
Author-X-Name-Last: Zhang
Author-Name: Lixing Zhu
Author-X-Name-First: Lixing
Author-X-Name-Last: Zhu
Author-Name: Jun Zhang
Author-X-Name-First: Jun
Author-X-Name-Last: Zhang
Author-Name: Luoyao Yu
Author-X-Name-First: Luoyao
Author-X-Name-Last: Yu
Title: Specification Testing of Regression Models with Mixed Discrete and Continuous Predictors
Abstract:
This article proposes a nonparametric projection-based adaptive-to-model specification test for regressions with discrete and continuous predictors. The test statistic is asymptotically normal under the null hypothesis and omnibus against alternative hypotheses. The test behaves like a locally smoothing test as if the number of continuous predictors was one and can detect the local alternative hypotheses distinct from the null hypothesis at the rate that can be achieved by existing locally smoothing tests for regressions with only one continuous predictor. Because of the model adaptation property, the test can fully use the model structure under the null hypothesis so that the dimensionality problem can be significantly alleviated. A discretization-expectation ordinary least squares estimation approach for partial central subspace in sufficient dimension reduction is developed as a by-product in the test construction. We suggest a residual-based wild bootstrap method to give an approximation by fully using the null model and thus closer to the limiting null distribution than existing bootstrap approximations. We conduct simulation studies to compare it with existing tests and two real data examples for illustration.
Journal: Journal of Business & Economic Statistics
Pages: 1101-1115
Issue: 4
Volume: 41
Year: 2023
Month: 10
X-DOI: 10.1080/07350015.2022.2110879
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2110879
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:41:y:2023:i:4:p:1101-1115
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2151449_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Ying Lun Cheung
Author-X-Name-First: Ying Lun
Author-X-Name-Last: Cheung
Title: Identification of Time-Varying Factor Models
Abstract:
The emergence of large datasets with long time spans has cast doubt on the assumption of constant loadings in conventional factor models. Being a potential solution, the time-varying factor model (TVFM) has attracted enormous interest in the literature. However, TVFM also suffers from the well-known problem of nonidentifiability. This article considers the situations under which both the factors and factor loadings can be estimated without rotations asymptotically. Asymptotic distributions of the proposed estimators are derived. Theoretical findings are supported by simulations. Finally, we evaluate the forecasting performance of the estimated factors subject to different identification restrictions using an extensive dataset of the U.S. macroeconomic variables. Substantial differences are found among the choices of identification restrictions.
Journal: Journal of Business & Economic Statistics
Pages: 76-94
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2151449
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2151449
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:76-94
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2174547_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Haibin Zhu
Author-X-Name-First: Haibin
Author-X-Name-Last: Zhu
Author-Name: Zhi Liu
Author-X-Name-First: Zhi
Author-X-Name-Last: Liu
Title: On Bivariate Time-Varying Price Staleness
Abstract:
Price staleness refers to the extent of zero returns in price dynamics. Bandi, Pirino, and Reno introduce two types of staleness: systematic and idiosyncratic staleness. In this study, we allow price staleness to be time-varying and study the statistical inference for idiosyncratic and common price staleness between two assets. We propose consistent estimators for both time-varying idiosyncratic and systematic price staleness and derive their asymptotic theory. Moreover, we develop a feasible nonparametric test for the simultaneous constancy of idiosyncratic and common price staleness. Our inference is based on infill asymptotics. Finally, we conduct simulation studies under various scenarios to assess the finite sample performance of the proposed approaches and provide an empirical application of the proposed theory.
Journal: Journal of Business & Economic Statistics
Pages: 229-242
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2174547
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2174547
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:229-242
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2140667_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Wu Wang
Author-X-Name-First: Wu
Author-X-Name-Last: Wang
Author-Name: Zhongyi Zhu
Author-X-Name-First: Zhongyi
Author-X-Name-Last: Zhu
Title: Homogeneity and Sparsity Analysis for High-Dimensional Panel Data Models
Abstract:
In this article, we are interested in detecting latent group structures and significant covariates in a high-dimensional panel data model with both individual and time fixed effects. The slope coefficients of the model are assumed to be subject dependent, and there exist group structures where the slope coefficients are homogeneous within groups and heterogeneous between groups. We develop a penalized estimator for recovering the group structures and the sparsity patterns simultaneously. We propose a new algorithm to optimize the objective function. Furthermore, we propose a strategy to reduce the computational complexity by pruning the penalty terms in the objective function, which also improves the accuracy of group structure detection. The proposed estimator can recover the latent group structures and the sparsity patterns consistently in large samples. The finite sample performance of the proposed estimator is evaluated through Monte Carlo studies and illustrated with a real dataset.
Journal: Journal of Business & Economic Statistics
Pages: 26-35
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2140667
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2140667
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:26-35
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166052_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Arthur Lewbel
Author-X-Name-First: Arthur
Author-X-Name-Last: Lewbel
Author-Name: Susanne M. Schennach
Author-X-Name-First: Susanne M.
Author-X-Name-Last: Schennach
Author-Name: Linqi Zhang
Author-X-Name-First: Linqi
Author-X-Name-Last: Zhang
Title: Identification of a Triangular Two Equation System Without Instruments
Abstract:
We show that a standard linear triangular two equation system can be point identified, without the use of instruments or any other side information. We find that the only case where the model is not point identified is when a latent variable that causes endogeneity is normally distributed. In this nonidentified case, we derive the sharp identified set. We apply our results to Acemoglu and Johnson’s model of life expectancy and GDP, obtaining point identification and comparable estimates to theirs, without using their (or any other) instrument.
Journal: Journal of Business & Economic Statistics
Pages: 14-25
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166052
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166052
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:14-25
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2174548_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Xu Guo
Author-X-Name-First: Xu
Author-X-Name-Last: Guo
Author-Name: Runze Li
Author-X-Name-First: Runze
Author-X-Name-Last: Li
Author-Name: Jingyuan Liu
Author-X-Name-First: Jingyuan
Author-X-Name-Last: Liu
Author-Name: Mudong Zeng
Author-X-Name-First: Mudong
Author-X-Name-Last: Zeng
Title: Estimations and Tests for Generalized Mediation Models with High-Dimensional Potential Mediators
Abstract:
Motivated by an empirical analysis of stock reaction to COVID-19 pandemic, we propose a generalized mediation model with high-dimensional potential mediators to study the mediation effects of financial metrics that bridge company’s sector and stock value. We propose an estimation procedure for the direct effect via a partial penalized maximum likelihood method and establish its theoretical properties. We develop a Wald test for the indirect effect and show that the proposed test has a χ2 limiting null distribution. We also develop a partial penalized likelihood ratio test for the direct effect and show that the proposed test asymptotically follows a χ2-distribution under null hypothesis. A more efficient estimator of indirect effect under complete mediation model is also developed. Simulation studies are conducted to examine the finite sample performance of the proposed procedures and compare with some existing methods. We further illustrate the proposed methodology with an empirical analysis of stock reaction to COVID-19 pandemic via exploring the underlying mechanism of the relationship between companies’ sectors and their stock values.
Journal: Journal of Business & Economic Statistics
Pages: 243-256
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2174548
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2174548
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:243-256
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2174549_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Long Feng
Author-X-Name-First: Long
Author-X-Name-Last: Feng
Author-Name: Binghui Liu
Author-X-Name-First: Binghui
Author-X-Name-Last: Liu
Author-Name: Yanyuan Ma
Author-X-Name-First: Yanyuan
Author-X-Name-Last: Ma
Title: A One-Sided Refined Symmetrized Data Aggregation Approach to Robust Mutual Fund Selection
Abstract:
We consider the problem of identifying skilled funds among a large number of candidates under the linear factor pricing models containing both observable and latent market factors. Motivated by the existence of non-strong potential factors and diversity of error distribution types of the linear factor pricing models, we develop a distribution-free multiple testing procedure to solve this problem. The proposed procedure is established based on the statistical tool of symmetrized data aggregation, which makes it robust to the strength of potential factors and distribution type of the error terms. We then establish the asymptotic validity of the proposed procedure in terms of both the false discovery rate and true discovery proportion under some mild regularity conditions. Furthermore, we demonstrate the advantages of the proposed procedure over some existing methods through extensive Monte Carlo experiments. In an empirical application, we illustrate the practical utility of the proposed procedure in the context of selecting skilled funds, which clearly has much more satisfactory performance than its main competitors.
Journal: Journal of Business & Economic Statistics
Pages: 257-271
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2174549
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2174549
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:257-271
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2154778_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Liu Yang
Author-X-Name-First: Liu
Author-X-Name-Last: Yang
Author-Name: Kajal Lahiri
Author-X-Name-First: Kajal
Author-X-Name-Last: Lahiri
Author-Name: Adrian Pagan
Author-X-Name-First: Adrian
Author-X-Name-Last: Pagan
Title: Getting the ROC into Sync
Abstract:
Judging the conformity of binary events in macroeconomics and finance has often been done with indices that measure synchronization. In recent years, the use of Receiver Operating Characteristic (ROC) curve has become popular for this task. This article shows that the ROC and synchronization approaches are closely related, and each can be derived from a decision-making framework. Furthermore, the resulting global measures of the degree of conformity can be identified and estimated using the standard method of moments estimators. The impact of serial dependence in the underlying series upon inferences can therefore be allowed for. Such serial correlation is common in macroeconomic and financial data.
Journal: Journal of Business & Economic Statistics
Pages: 109-121
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2154778
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2154778
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:109-121
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2182309_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Tate Jacobson
Author-X-Name-First: Tate
Author-X-Name-Last: Jacobson
Author-Name: Hui Zou
Author-X-Name-First: Hui
Author-X-Name-Last: Zou
Title: High-Dimensional Censored Regression via the Penalized Tobit Likelihood
Abstract:
High-dimensional regression and regression with a left-censored response are each well-studied topics. In spite of this, few methods have been proposed which deal with both of these complications simultaneously. The Tobit model—long the standard method for censored regression in economics—has not been adapted for high-dimensional regression at all. To fill this gap and bring up-to-date techniques from high-dimensional statistics to the field of high-dimensional left-censored regression, we propose several penalized Tobit models. We develop a fast algorithm which combines quadratic majorization with coordinate descent to compute the penalized Tobit solution path. Theoretically, we analyze the Tobit lasso and Tobit with a folded concave penalty, bounding the l2 estimation loss for the former and proving that a local linear approximation estimator for the latter possesses the strong oracle property. Through an extensive simulation study, we find that our penalized Tobit models provide more accurate predictions and parameter estimates than other methods on high-dimensional left-censored data. We use a penalized Tobit model to analyze high-dimensional left-censored HIV viral load data from the AIDS Clinical Trials Group and identify potential drug resistance mutations in the HIV genome. A supplementary file contains intermediate theoretical results and technical proofs.
Journal: Journal of Business & Economic Statistics
Pages: 286-297
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2182309
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2182309
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:286-297
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2291309_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: The Editors
Title: Associate Editors
Journal: Journal of Business & Economic Statistics
Pages: i-i
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2024.2291309
File-URL: http://hdl.handle.net/10.1080/07350015.2024.2291309
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:i-i
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166048_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Sium Bodha Hannadige
Author-X-Name-First: Sium Bodha
Author-X-Name-Last: Hannadige
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Mervyn J. Silvapulle
Author-X-Name-First: Mervyn J.
Author-X-Name-Last: Silvapulle
Author-Name: Param Silvapulle
Author-X-Name-First: Param
Author-X-Name-Last: Silvapulle
Title: Forecasting a Nonstationary Time Series Using a Mixture of Stationary and Nonstationary Factors as Predictors
Abstract:
We develop a method for constructing prediction intervals for a nonstationary variable, such as GDP. The method uses a Factor Augmented Regression (FAR) model. The predictors in the model include a small number of factors generated to extract most of the information in a set of panel data on a large number of macroeconomic variables that are considered to be potential predictors. The novelty of this article is that it provides a method and justification for a mixture of stationary and nonstationary factors as predictors in the FAR model; we refer to this as the mixture-FAR method. This method is important because typically such a large set of panel data, for example the FRED-QD, is likely to contain a mixture of stationary and nonstationary variables. In our simulation study, we observed that the proposed mixture-FAR method performed better than its competitor that requires all the predictors to be nonstationary; the MSE of prediction was at least 33% lower for mixture-FAR. Using the data in FRED-QD for the United States, we evaluated the aforementioned methods for forecasting the nonstationary variables, GDP and Industrial Production. We observed that the mixture-FAR method performed better than its competitors.
Journal: Journal of Business & Economic Statistics
Pages: 122-134
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166048
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166048
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:122-134
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2146696_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Shen-Da Chang
Author-X-Name-First: Shen-Da
Author-X-Name-Last: Chang
Author-Name: Philip E. Cheng
Author-X-Name-First: Philip E.
Author-X-Name-Last: Cheng
Author-Name: Michelle Liou
Author-X-Name-First: Michelle
Author-X-Name-Last: Liou
Title: Likelihood Ratio Tests for Lorenz Dominance
Abstract:
In testing hypotheses pertaining to Lorenz dominance (LD), researchers have examined second- and third-order stochastic dominance using empirical Lorenz processes and integrated stochastic processes with the aid of bootstrap analysis. Among these topics, third-order stochastic dominance (TSD), based on the notion of risk aversion, has been examined using crossing (generalized) Lorenz curves. These facts motivated the present study to characterize distribution pairs that display TSD without second-order (generalized Lorenz) dominance. They further motivated the development of likelihood ratio (LR) goodness-of-fit tests for examining the respective hypotheses of LD, crossing (generalized) Lorenz curves, and TSD through approximate Chi-squared distributions. The proposed LR tests were assessed using simulated distributions and applied to examine the COVID-19 regional death counts of bivariate samples collected by the World Health Organization between March 2020 and February 2021.
Journal: Journal of Business & Economic Statistics
Pages: 64-75
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2146696
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2146696
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:64-75
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166514_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Yingying Ma
Author-X-Name-First: Yingying
Author-X-Name-Last: Ma
Author-Name: Chenlei Leng
Author-X-Name-First: Chenlei
Author-X-Name-Last: Leng
Author-Name: Hansheng Wang
Author-X-Name-First: Hansheng
Author-X-Name-Last: Wang
Title: Optimal Subsampling Bootstrap for Massive Data
Abstract:
The bootstrap is a widely used procedure for statistical inference because of its simplicity and attractive statistical properties. However, the vanilla version of the bootstrap is no longer computationally feasible for many modern massive datasets due to the need to repeatedly resample the entire dataset. Therefore, several improvements to the bootstrap method have been made in recent years, which assess the quality of estimators by subsampling the full dataset before resampling the subsamples. Naturally, the performance of these modern subsampling methods is influenced by tuning parameters such as the size of subsamples, the number of subsamples, and the number of resamples per subsample. In this article, we develop a novel methodology for selecting these tuning parameters. Formulated as an optimization problem that finds the optimal value of some measure of accuracy of an estimator subject to a computational cost constraint, our framework provides closed-form solutions for the optimal hyperparameter values of the subsampled bootstrap, the subsampled double bootstrap, and the bag of little bootstraps, at little or no extra time cost. Using the mean squared error as a proxy for the accuracy measure, we apply our methodology to study, compare, and improve the performance of these modern versions of the bootstrap developed for massive data through numerical studies. The results are promising.
Journal: Journal of Business & Economic Statistics
Pages: 174-186
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166514
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166514
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:174-186
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166050_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Sami Umut Can
Author-X-Name-First: Sami Umut
Author-X-Name-Last: Can
Author-Name: John H. J. Einmahl
Author-X-Name-First: John H. J.
Author-X-Name-Last: Einmahl
Author-Name: Roger J. A. Laeven
Author-X-Name-First: Roger J. A.
Author-X-Name-Last: Laeven
Title: Two-Sample Testing for Tail Copulas with an Application to Equity Indices
Abstract:
A novel, general two-sample hypothesis testing procedure is established for testing the equality of tail copulas associated with bivariate data. More precisely, using a martingale transformation of a natural two-sample tail copula process, a test process is constructed, which is shown to converge in distribution to a standard Wiener process. Hence, from this test process a myriad of asymptotically distribution-free two-sample tests can be obtained. The good finite-sample behavior of our procedure is demonstrated through Monte Carlo simulations. Using the new testing procedure, no evidence of a difference in the respective tail copulas is found for pairs of negative daily log-returns of equity indices during and after the global financial crisis.
Journal: Journal of Business & Economic Statistics
Pages: 147-159
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166050
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166050
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:147-159
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166515_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Xinyu Zhang
Author-X-Name-First: Xinyu
Author-X-Name-Last: Zhang
Author-Name: Huihang Liu
Author-X-Name-First: Huihang
Author-X-Name-Last: Liu
Author-Name: Yizheng Wei
Author-X-Name-First: Yizheng
Author-X-Name-Last: Wei
Author-Name: Yanyuan Ma
Author-X-Name-First: Yanyuan
Author-X-Name-Last: Ma
Title: Prediction Using Many Samples with Models Possibly Containing Partially Shared Parameters
Abstract:
We consider prediction based on a main model. When the main model shares partial parameters with several other helper models, we make use of the additional information. Specifically, we propose a Model Averaging Prediction (MAP) procedure that takes into account data related to the main model as well as data related to the helper models. We allow the data related to different models to follow different structures, as long as they share some common covariate effect. We show that when the main model is misspecified, MAP yields the optimal weights in terms of prediction. Further, if the main model is correctly specified, then MAP will automatically exclude all incorrect helper models asymptotically. Simulation studies are conducted to demonstrate the superior performance of MAP. We further implement MAP to analyze a dataset related to the probability of credit card default.
Journal: Journal of Business & Economic Statistics
Pages: 187-196
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166515
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166515
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:187-196
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2191672_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Wei Liu
Author-X-Name-First: Wei
Author-X-Name-Last: Liu
Author-Name: Huazhen Lin
Author-X-Name-First: Huazhen
Author-X-Name-Last: Lin
Author-Name: Jin Liu
Author-X-Name-First: Jin
Author-X-Name-Last: Liu
Author-Name: Shurong Zheng
Author-X-Name-First: Shurong
Author-X-Name-Last: Zheng
Title: Two-Directional Simultaneous Inference for High-Dimensional Models
Abstract:
This article proposes a general two-directional simultaneous inference (TOSI) framework for high-dimensional models with a manifest-variable or latent-variable structure, for example, high-dimensional mean models, high-dimensional sparse regression models, and high-dimensional latent factor models. TOSI performs simultaneous inference on a set of parameters from two directions: one tests whether the parameters assumed to be zero are indeed zero, and the other tests whether zeros exist among the parameters assumed to be nonzero. As a result, we can better identify which parameters are zero, thereby keeping the data structure fully and parsimoniously expressed. We theoretically prove that the single-split TOSI is asymptotically unbiased and that the multi-split version of TOSI controls the Type I error below the prespecified significance level. Simulations are conducted to examine the performance of the proposed method in finite-sample situations, and two real datasets are analyzed. The results show that the TOSI method provides more predictive and more interpretable estimators than existing methods.
Journal: Journal of Business & Economic Statistics
Pages: 298-309
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2191672
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2191672
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:298-309
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2200458_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Zhongfang He
Author-X-Name-First: Zhongfang
Author-X-Name-Last: He
Title: A Dynamic Binary Probit Model with Time-Varying Parameters and Shrinkage Prior
Abstract:
This article studies a time series binary probit model in which the underlying latent variable depends on its lag and exogenous regressors. The regression coefficients for the latent variable are allowed to vary over time to capture possible model instability. Bayesian shrinkage priors are applied to automatically differentiate fixed and truly time-varying coefficients and thus avoid unnecessary model complexity. I develop an MCMC algorithm for model estimation that exploits parameter blocking to boost sampling efficiency. An efficient Monte Carlo approximation based on the Kalman filter is developed to improve the numerical stability for computing the predictive likelihood of the binary outcome. Benefits of the proposed model are illustrated in a simulation study and an application to forecast economic recessions.
Journal: Journal of Business & Economic Statistics
Pages: 335-346
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2200458
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2200458
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:335-346
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2191676_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Yong He
Author-X-Name-First: Yong
Author-X-Name-Last: He
Author-Name: Xinbing Kong
Author-X-Name-First: Xinbing
Author-X-Name-Last: Kong
Author-Name: Long Yu
Author-X-Name-First: Long
Author-X-Name-Last: Yu
Author-Name: Xinsheng Zhang
Author-X-Name-First: Xinsheng
Author-X-Name-Last: Zhang
Author-Name: Changwei Zhao
Author-X-Name-First: Changwei
Author-X-Name-Last: Zhao
Title: Matrix Factor Analysis: From Least Squares to Iterative Projection
Abstract:
In this article, we study large-dimensional matrix factor models and estimate the factor loading matrices and the factor score matrix by minimizing a squared loss function. Interestingly, the resultant estimators coincide with the Projected Estimators (PE) of Yu et al., which were proposed from the perspective of simultaneously reducing the dimensionality and the magnitudes of the idiosyncratic error matrix. In other words, we provide a least-squares interpretation of the PE for the matrix factor model, which parallels the least-squares interpretation of the PCA for the vector factor model. We derive the convergence rates of the theoretical minimizers under sub-Gaussian tails. For robustness to heavy tails of the idiosyncratic errors, we extend the least-squares approach to minimizing the Huber loss function, which leads to a weighted iterative projection approach to compute and learn the parameters. We also derive the convergence rates of the theoretical minimizers of the Huber loss function under a bounded fourth or even (2+ϵ)th moment of the idiosyncratic errors. We conduct extensive numerical studies to investigate the empirical performance of the proposed Huber estimators relative to state-of-the-art ones. The Huber estimators perform robustly and much better than existing ones when the data are heavy-tailed, and as a result can be used as a safe replacement in practice. An application to a Fama-French financial portfolio dataset demonstrates the empirical advantage of the Huber estimator.
Journal: Journal of Business & Economic Statistics
Pages: 322-334
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2191676
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2191676
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:322-334
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2183212_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Matthew A. Masten
Author-X-Name-First: Matthew A.
Author-X-Name-Last: Masten
Author-Name: Alexandre Poirier
Author-X-Name-First: Alexandre
Author-X-Name-Last: Poirier
Author-Name: Linqi Zhang
Author-X-Name-First: Linqi
Author-X-Name-Last: Zhang
Title: Assessing Sensitivity to Unconfoundedness: Estimation and Inference
Abstract:
This article provides a set of methods for quantifying the robustness of treatment effects estimated using the unconfoundedness assumption. Specifically, we estimate and do inference on bounds for various treatment effect parameters, like the Average Treatment Effect (ATE) and the average effect of treatment on the treated (ATT), under nonparametric relaxations of the unconfoundedness assumption indexed by a scalar sensitivity parameter c. These relaxations allow for limited selection on unobservables, depending on the value of c. For large enough c, these bounds equal the no assumptions bounds. Using a nonstandard bootstrap method, we show how to construct confidence bands for these bound functions which are uniform over all values of c. We illustrate these methods with an empirical application to the National Supported Work Demonstration program. We implement these methods in the companion Stata module tesensitivity for easy use in practice.
Journal: Journal of Business & Economic Statistics
Pages: 1-13
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2183212
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2183212
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:1-13
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166049_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Roberto Casarin
Author-X-Name-First: Roberto
Author-X-Name-Last: Casarin
Author-Name: Mauro Costantini
Author-X-Name-First: Mauro
Author-X-Name-Last: Costantini
Author-Name: Anthony Osuntuyi
Author-X-Name-First: Anthony
Author-X-Name-Last: Osuntuyi
Title: Bayesian Nonparametric Panel Markov-Switching GARCH Models
Abstract:
This article proposes Bayesian nonparametric inference for panel Markov-switching GARCH models. The model incorporates series-specific hidden Markov chain processes that drive the GARCH parameters. To cope with the high-dimensionality of the parameter space, the article assumes soft parameter pooling through a hierarchical prior distribution and introduces cross sectional clustering through a Bayesian nonparametric prior distribution. An MCMC posterior approximation algorithm is developed and its efficiency is studied in simulations under alternative settings. An empirical application to financial returns data in the United States is offered with a portfolio performance exercise based on forecasts. A comparison shows that the Bayesian nonparametric panel Markov-switching GARCH model provides good forecasting performances and economic gains in optimal asset allocation.
Journal: Journal of Business & Economic Statistics
Pages: 135-146
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166049
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166049
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:135-146
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2174124_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Xinyu Zhang
Author-X-Name-First: Xinyu
Author-X-Name-Last: Zhang
Author-Name: Dong Li
Author-X-Name-First: Dong
Author-X-Name-Last: Li
Author-Name: Howell Tong
Author-X-Name-First: Howell
Author-X-Name-Last: Tong
Title: On the Least Squares Estimation of Multiple-Threshold-Variable Autoregressive Models
Abstract:
Most threshold models to date contain a single threshold variable. However, in many empirical applications, models with multiple threshold variables may be needed; they are the focus of this article. For the sake of readability, we start with the Two-Threshold-Variable Autoregressive (2-TAR) model and study its Least Squares Estimation (LSE). Among other results, we show that the respective estimated thresholds are asymptotically independent. We propose a new method, namely the weighted Nadaraya-Watson method, to construct confidence intervals for the threshold parameters; it turns out to be, as far as we know, the only method to date that enjoys good probability coverage, regardless of whether the threshold variables are endogenous or exogenous. Finally, we describe in some detail how our results can be extended to the K-Threshold-Variable Autoregressive (K-TAR) model, K > 2. We assess the finite-sample performance of the LSE by simulation and present two real examples to illustrate the efficacy of our modeling.
Journal: Journal of Business & Economic Statistics
Pages: 215-228
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2174124
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2174124
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:215-228
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2154777_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Yaein Baek
Author-X-Name-First: Yaein
Author-X-Name-Last: Baek
Title: Estimation of a Structural Break Point in Linear Regression Models
Abstract:
This study proposes a point estimator of the break location for a one-time structural break in linear regression models. If the break magnitude is small, the least-squares estimator of the break date has two modes at the ends of the finite sample period, regardless of the true break location. To solve this problem, I suggest an alternative estimator based on a modification of the least-squares objective function. The modified objective function incorporates estimation uncertainty that varies across potential break dates. The new break point estimator is consistent and has a unimodal finite sample distribution under small break magnitudes. A limit distribution is provided under an in-fill asymptotic framework. Monte Carlo simulation results suggest that the new estimator outperforms the least-squares estimator. I apply the method to estimate the break date in U.S. and U.K. stock return prediction models.
Journal: Journal of Business & Economic Statistics
Pages: 95-108
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2154777
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2154777
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:95-108
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2166513_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Jungbin Hwang
Author-X-Name-First: Jungbin
Author-X-Name-Last: Hwang
Author-Name: Gonzalo Valdés
Author-X-Name-First: Gonzalo
Author-X-Name-Last: Valdés
Title: Low Frequency Cointegrating Regression with Local to Unity Regressors and Unknown Form of Serial Dependence
Abstract:
This article develops new t and F tests in a low-frequency transformed triangular cointegrating regression when one may not be certain that the economic variables are exact unit root processes. We first show that the low-frequency transformed and augmented OLS (TA-OLS) method exhibits an asymptotic bias term in its limiting distribution. As a result, the test for the cointegration vector can have substantially large size distortion, even with minor deviations from the unit root regressors. To correct the asymptotic bias of the TA-OLS statistics for the cointegration vector, we develop modified TA-OLS statistics that adjust the bias and take account of the estimation uncertainty of the long-run endogeneity arising from the bias correction. Based on the modified test statistics, we provide Bonferroni-based tests of the cointegration vector using standard t and F critical values. Monte Carlo results show that our approach has the correct size and reasonable power for a wide range of local-to-unity parameters. Additionally, our method has advantages over the IVX approach when the serial dependence and the long-run endogeneity in the cointegration system are important.
Journal: Journal of Business & Economic Statistics
Pages: 160-173
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2166513
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2166513
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:160-173
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2142593_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Xinyan Fan
Author-X-Name-First: Xinyan
Author-X-Name-Last: Fan
Author-Name: Wei Lan
Author-X-Name-First: Wei
Author-X-Name-Last: Lan
Author-Name: Tao Zou
Author-X-Name-First: Tao
Author-X-Name-Last: Zou
Author-Name: Chih-Ling Tsai
Author-X-Name-First: Chih-Ling
Author-X-Name-Last: Tsai
Title: Covariance Model with General Linear Structure and Divergent Parameters
Abstract:
For estimating a large covariance matrix with a limited sample size, we propose the covariance model with general linear structure (CMGL), which employs a general link function to connect the covariance of the continuous response vector to a linear combination of weight matrices. Without assuming the distribution of the responses, and allowing the number of parameters associated with the weight matrices to diverge, we obtain the quasi-maximum likelihood estimators (QMLE) of the parameters and show their asymptotic properties. In addition, an extended Bayesian information criterion (EBIC) is proposed to select relevant weight matrices, and the consistency of EBIC is demonstrated. Under the identity link function, we introduce the ordinary least squares estimator (OLS), which has a closed form. Hence, its computational burden is reduced compared to that of the QMLE, and the theoretical properties of the OLS are also investigated. To assess the adequacy of the link function, we further propose a quasi-likelihood ratio test and obtain its limiting distribution. Simulation studies are presented to assess the performance of the proposed methods, and the usefulness of generalized covariance models is illustrated by an analysis of the U.S. stock market.
Journal: Journal of Business & Economic Statistics
Pages: 36-48
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2142593
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2142593
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:36-48
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2143784_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Yimeng Ren
Author-X-Name-First: Yimeng
Author-X-Name-Last: Ren
Author-Name: Xuening Zhu
Author-X-Name-First: Xuening
Author-X-Name-Last: Zhu
Author-Name: Xiaoling Lu
Author-X-Name-First: Xiaoling
Author-X-Name-Last: Lu
Author-Name: Guanyu Hu
Author-X-Name-First: Guanyu
Author-X-Name-Last: Hu
Title: Graphical Assistant Grouped Network Autoregression Model: A Bayesian Nonparametric Recourse
Abstract:
The vector autoregression model is ubiquitous in classical time series analysis. With the rapid advance of social network sites, time series data over a latent graph are becoming increasingly popular. In this article, we develop a novel Bayesian grouped network autoregression model that can simultaneously estimate the group information (the number of groups and the group configurations) and the group-wise parameters. Specifically, a graphically assisted Chinese restaurant process is incorporated under the framework of the network autoregression model to improve statistical inference performance. An efficient Markov chain Monte Carlo sampling algorithm is used to sample from the posterior distribution. Extensive studies are conducted to evaluate the finite-sample performance of the proposed methodology. Additionally, we analyze two real datasets as illustrations of the effectiveness of our approach.
Journal: Journal of Business & Economic Statistics
Pages: 49-63
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2022.2143784
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2143784
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:49-63
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2181176_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Shanika L. Wickramasuriya
Author-X-Name-First: Shanika L.
Author-X-Name-Last: Wickramasuriya
Title: Probabilistic Forecast Reconciliation under the Gaussian Framework
Abstract:
Forecast reconciliation of multivariate time series maps a set of incoherent forecasts into coherent forecasts that satisfy a given set of linear constraints. Available methods in the literature either follow a projection matrix-based approach or an empirical copula-based reordering approach to revise the incoherent future sample paths and obtain reconciled probabilistic forecasts. The projection matrices are estimated either by optimizing a scoring rule, such as the energy or variogram score, or simply by using a projection matrix derived for point forecast reconciliation. This article proves that (a) if the incoherent predictive distribution is jointly Gaussian, then MinT (minimum trace) minimizes the logarithmic scoring rule for the hierarchy; and (b) the logarithmic score of MinT for each marginal predictive density is smaller than that of OLS (ordinary least squares). We illustrate these theoretical results using a set of simulation studies and the Australian domestic tourism dataset. The estimation of MinT requires an estimate of the covariance matrix of the base forecast errors; we evaluated performance using the sample covariance matrix and a shrinkage estimator. We observed that the theoretical properties noted above are greatly affected by the covariance matrix used, highlighting the importance of estimating it reliably, especially with high-dimensional data.
Journal: Journal of Business & Economic Statistics
Pages: 272-285
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2181176
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2181176
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:272-285
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2173206_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Giuseppe Cavaliere
Author-X-Name-First: Giuseppe
Author-X-Name-Last: Cavaliere
Author-Name: Indeewara Perera
Author-X-Name-First: Indeewara
Author-X-Name-Last: Perera
Author-Name: Anders Rahbek
Author-X-Name-First: Anders
Author-X-Name-Last: Rahbek
Title: Specification Tests for GARCH Processes with Nuisance Parameters on the Boundary
Abstract:
This article develops tests for the correct specification of the conditional variance function in GARCH models when the true parameter may lie on the boundary of the parameter space. The test statistics considered are of Kolmogorov-Smirnov and Cramér-von Mises type, and are based on empirical processes marked by centered squared residuals. The limiting distributions of the test statistics depend on unknown nuisance parameters in a nontrivial way, making the tests difficult to implement. We therefore introduce a novel bootstrap procedure which is shown to be asymptotically valid under general conditions, irrespective of the presence of nuisance parameters on the boundary. The proposed bootstrap approach is based on shrinking of the parameter estimates used to generate the bootstrap sample toward the boundary of the parameter space at a proper rate. It is simple to implement and fast in applications, as the associated test statistics have simple closed form expressions. Although the bootstrap test is designed for a data generating process with fixed parameters (i.e., independent of the sample size n), we also discuss how to obtain valid inference for sequences of DGPs with parameters approaching the boundary at the n^(-1/2) rate. A simulation study demonstrates that the new tests: (i) have excellent finite sample behavior in terms of empirical rejection probabilities under the null as well as under the alternative; (ii) provide a useful complement to existing procedures based on Ljung-Box type approaches. Two data examples illustrate the implementation of the proposed tests in applications.
Journal: Journal of Business & Economic Statistics
Pages: 197-214
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2173206
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2173206
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:197-214
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2191673_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857
Author-Name: Jiti Gao
Author-X-Name-First: Jiti
Author-X-Name-Last: Gao
Author-Name: Bin Peng
Author-X-Name-First: Bin
Author-X-Name-Last: Peng
Author-Name: Yayi Yan
Author-X-Name-First: Yayi
Author-X-Name-Last: Yan
Title: Estimation, Inference, and Empirical Analysis for Time-Varying VAR Models
Abstract:
Vector autoregressive (VAR) models are widely used in practical studies, for example, for forecasting, modeling policy transmission mechanisms, and measuring the connectedness of economic agents. To better capture the dynamics, this article introduces a new class of time-varying VAR models in which the coefficients and the covariance matrix of the error innovations are allowed to change smoothly over time. Accordingly, we establish a set of asymptotic properties, including impulse response analyses subject to structural VAR identification conditions, an information criterion to select the optimal lag, and a Wald-type test to identify the constant coefficients. Simulation studies are conducted to evaluate the theoretical findings. Finally, we demonstrate the empirical relevance and usefulness of the proposed methods through an application to U.S. government spending multipliers.
Journal: Journal of Business & Economic Statistics
Pages: 310-321
Issue: 1
Volume: 42
Year: 2024
Month: 1
X-DOI: 10.1080/07350015.2023.2191673
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2191673
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:1:p:310-321
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2146695_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Li Guo
Author-X-Name-First: Li
Author-X-Name-Last: Guo
Author-Name: Wolfgang Karl Härdle
Author-X-Name-First: Wolfgang Karl
Author-X-Name-Last: Härdle
Author-Name: Yubo Tao
Author-X-Name-First: Yubo
Author-X-Name-Last: Tao
Title: A Time-Varying Network for Cryptocurrencies
Abstract:
Cryptocurrencies' return cross-predictability and technological similarity yield information on risk propagation and market segmentation. To investigate these effects, we build a time-varying network for cryptocurrencies, based on the evolution of return cross-predictability and technological similarities. We develop a dynamic covariate-assisted spectral clustering method to consistently estimate the latent community structure of the cryptocurrency network that accounts for both sets of information. We demonstrate that investors can achieve better risk diversification by investing in cryptocurrencies from different communities. A cross-sectional portfolio that implements an inter-crypto momentum trading strategy earns a 1.08% daily return. By dissecting the portfolio returns on behavioral factors, we confirm that our results are not driven by behavioral mechanisms.
Journal: Journal of Business & Economic Statistics
Pages: 437-456
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2022.2146695
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2146695
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:437-456
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2201313_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Sam Astill
Author-X-Name-First: Sam
Author-X-Name-Last: Astill
Author-Name: David I. Harvey
Author-X-Name-First: David I.
Author-X-Name-Last: Harvey
Author-Name: Stephen J. Leybourne
Author-X-Name-First: Stephen J.
Author-X-Name-Last: Leybourne
Author-Name: A. M. Robert Taylor
Author-X-Name-First: A. M. Robert
Author-X-Name-Last: Taylor
Title: Bonferroni Type Tests for Return Predictability and the Initial Condition
Abstract:
We develop tests for predictability that are robust to both the magnitude of the initial condition and the degree of persistence of the predictor. While the popular Bonferroni Q test of Campbell and Yogo displays excellent power properties for strongly persistent predictors with an asymptotically negligible initial condition, it can suffer from severe size distortions and power losses when either the initial condition is asymptotically non-negligible or the predictor is weakly persistent. The Bonferroni t test of Elliott and Stock, although displaying power well below that of the Bonferroni Q test for strongly persistent predictors with an asymptotically negligible initial condition, displays superior size control and power when the initial condition is asymptotically non-negligible. In the case where the predictor is weakly persistent, a conventional regression t test compared to standard normal quantiles is known to be asymptotically optimal under Gaussianity. Based on these properties, we propose two asymptotically size-controlled hybrid tests that are functions of the Bonferroni Q, Bonferroni t, and conventional t tests. Our proposed hybrid tests exhibit very good power regardless of the magnitude of the initial condition or the degree of persistence of the predictor. An empirical application to the data originally analyzed by Campbell and Yogo shows our new hybrid tests are much more likely to find evidence of predictability than the Bonferroni Q test when the initial condition of the predictor is estimated to be large in magnitude.
Journal: Journal of Business & Economic Statistics
Pages: 499-515
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2201313
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2201313
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:499-515
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2221974_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: James Morley
Author-X-Name-First: James
Author-X-Name-Last: Morley
Author-Name: Trung Duc Tran
Author-X-Name-First: Trung Duc
Author-X-Name-Last: Tran
Author-Name: Benjamin Wong
Author-X-Name-First: Benjamin
Author-X-Name-Last: Wong
Title: A Simple Correction for Misspecification in Trend-Cycle Decompositions with an Application to Estimating r*
Abstract:
We propose a simple correction for misspecification in trend-cycle decompositions when the stochastic trend is assumed to be a random walk process but the estimated trend displays some serial correlation in first differences. Possible sources of misspecification that would otherwise be hard to detect and correct for include a small amount of measurement error, omitted variables, or minor approximation errors in model dynamics when estimating trend. Our proposed correction is conducted via application of a univariate Beveridge-Nelson decomposition to the preliminary estimated trend and we show with Monte Carlo analysis that our approach can work as well as if the original model used to estimate trend were correctly specified. We demonstrate the empirical relevance of the correction in an application to estimating r* as the trend of a risk-free short-term real interest rate. We find that our corrected estimate of r* is considerably smoother than the preliminary estimate from a multivariate Beveridge-Nelson decomposition based on a vector error correction model, consistent with the presence of at least a small amount of measurement error in some of the variables included in the multivariate model.
Journal: Journal of Business & Economic Statistics
Pages: 665-680
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2221974
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2221974
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:665-680
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2099870_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Shi Chen
Author-X-Name-First: Shi
Author-X-Name-Last: Chen
Author-Name: Melanie Schienle
Author-X-Name-First: Melanie
Author-X-Name-Last: Schienle
Title: Large Spillover Networks of Nonstationary Systems
Abstract:
This article proposes a vector error correction framework for constructing large consistent spillover networks of nonstationary systems grounded in the network theory of Diebold and Yilmaz. We aim to provide a methodology tailored to large nonstationary (macro)economic and financial system applications, avoiding the technical and often hard-to-verify assumptions of general statistical high-dimensional approaches where the dimension can also increase with sample size. To achieve this, we propose an elementwise Lasso-type technique for consistent and numerically efficient model selection of the VECM, and relate the resulting forecast error variance decomposition to the network topology representation. We also derive the corresponding asymptotic results for model selection and network estimation under standard assumptions. Moreover, we develop a refinement strategy for efficient estimation and show implications and modifications for general dependent innovations. In a comprehensive simulation study, we show convincing finite sample performance of our technique in all cases of moderate and low dimensions. In an application to a system of FX rates, the proposed method leads to novel insights on the connectedness and spillover effects in the FX market among the OECD countries.
Journal: Journal of Business & Economic Statistics
Pages: 422-436
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2022.2099870
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2099870
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:422-436
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2252039_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Joshua C. C. Chan
Author-X-Name-First: Joshua C. C.
Author-X-Name-Last: Chan
Author-Name: Gary Koop
Author-X-Name-First: Gary
Author-X-Name-Last: Koop
Author-Name: Xuewen Yu
Author-X-Name-First: Xuewen
Author-X-Name-Last: Yu
Title: Large Order-Invariant Bayesian VARs with Stochastic Volatility
Abstract:
Many popular specifications for Vector Autoregressions (VARs) with multivariate stochastic volatility are not invariant to the way the variables are ordered due to the use of a lower triangular parameterization of the error covariance matrix. We show that the order invariance problem in existing approaches is likely to become more serious in large VARs. We propose the use of a specification which avoids the use of this lower triangular parameterization. We show that the presence of multivariate stochastic volatility allows for identification of the proposed model and prove that it is invariant to ordering. We develop a Markov chain Monte Carlo algorithm which allows for Bayesian estimation and prediction. In exercises involving artificial and real macroeconomic data, we demonstrate that the choice of variable ordering can have non-negligible effects on empirical results when using the non-order-invariant approach. In a macroeconomic forecasting exercise involving VARs with 20 variables we find that our order-invariant approach leads to the best forecasts and that some choices of variable ordering can lead to poor forecasts using a conventional, non-order-invariant approach.
Journal: Journal of Business & Economic Statistics
Pages: 825-837
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2252039
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2252039
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:825-837
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2241529_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Michael Gechter
Author-X-Name-First: Michael
Author-X-Name-Last: Gechter
Title: Generalizing the Results from Social Experiments: Theory and Evidence from India
Abstract:
How informative are treatment effects estimated in one region or time period for another region or time? In this article, I derive bounds on the average treatment effect in a context of interest using experimental evidence from another context. The bounds are based on (a) the information identified about treatment effect heterogeneity due to unobservables in the experiment and (b) using differences in outcome distributions across contexts to learn about differences in distributions of unobservables. Empirically, using data from a pair of remedial education experiments carried out in India, I show the bounds are able to recover average treatment effects in one location using results from the other while the benchmark method cannot.
Journal: Journal of Business & Economic Statistics
Pages: 801-811
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2241529
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2241529
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:801-811
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2238788_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Lea Bottmer
Author-X-Name-First: Lea
Author-X-Name-Last: Bottmer
Author-Name: Guido W. Imbens
Author-X-Name-First: Guido W.
Author-X-Name-Last: Imbens
Author-Name: Jann Spiess
Author-X-Name-First: Jann
Author-X-Name-Last: Spiess
Author-Name: Merrill Warnick
Author-X-Name-First: Merrill
Author-X-Name-Last: Warnick
Title: A Design-Based Perspective on Synthetic Control Methods
Abstract:
Since their introduction by Abadie and Gardeazabal, Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies in settings with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator for this variance. We document in settings with real data that under random assignment, SC-type estimators can have root mean-squared errors that are substantially lower than that of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected. While our results only directly apply in settings where treatment is assigned randomly, we believe that they can complement model-based approaches even for observational studies.
Journal: Journal of Business & Economic Statistics
Pages: 762-773
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2238788
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2238788
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:762-773
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2249509_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Yuya Sasaki
Author-X-Name-First: Yuya
Author-X-Name-Last: Sasaki
Author-Name: Yulong Wang
Author-X-Name-First: Yulong
Author-X-Name-Last: Wang
Title: Extreme Changes in Changes
Abstract:
Policy analysts are often interested in treating the units with extreme outcomes, such as infants with extremely low birth weights. Existing changes-in-changes (CIC) estimators are tailored to middle quantiles and do not work well for such subpopulations. This article proposes a new CIC estimator to accurately estimate treatment effects at extreme quantiles. With its asymptotic normality, we also propose a method of statistical inference, which is simple to implement. Based on simulation studies, we propose to use our extreme CIC estimator for extreme quantiles, while the conventional CIC estimator should be used for intermediate quantiles. Applying the proposed method, we study the effects of income gains from the 1993 EITC reform on infant birth weights for those in the most critical conditions. This article is accompanied by a Stata command.
Journal: Journal of Business & Economic Statistics
Pages: 812-824
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2249509
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2249509
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:812-824
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2016425_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Cathy Yi-Hsuan Chen
Author-X-Name-First: Cathy Yi-Hsuan
Author-X-Name-Last: Chen
Author-Name: Yarema Okhrin
Author-X-Name-First: Yarema
Author-X-Name-Last: Okhrin
Author-Name: Tengyao Wang
Author-X-Name-First: Tengyao
Author-X-Name-Last: Wang
Title: Monitoring Network Changes in Social Media
Abstract:
Econometricians are increasingly working with high-dimensional networks and their dynamics, but are often confronted with unforeseen changes in network dynamics. In this article, we develop a method and the corresponding algorithm for monitoring changes in dynamic networks. We characterize two types of changes, edge-initiated and node-initiated, to capture the complexity of networks. The proposed approach accounts for three potential challenges in the analysis of networks. First, networks are high-dimensional objects, causing standard statistical tools to suffer from the curse of dimensionality. Second, any potential changes in social networks are likely driven by a few nodes or edges in the network. Third, in many dynamic network applications, such as monitoring network connectedness or centrality, it is more practical to detect changes online than offline. The proposed detection method at each time point projects the entire network onto a low-dimensional vector by taking the sparsity into account, then sequentially detects the change by comparing consecutive estimates of the optimal projection direction. As long as the change is sizeable and persistent, the projected vectors will converge to the optimal one, leading to a jump in the sine angle distance between them. A change is therefore declared. Strong theoretical guarantees on both the false alarm rate and detection delays are derived in a sub-Gaussian setting, even under spatial and temporal dependence in the data stream. Numerical studies and an application to a social media message network support the effectiveness of our method.
Journal: Journal of Business & Economic Statistics
Pages: 391-406
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2021.2016425
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2016425
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:391-406
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2203768_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: JoonHwan Cho
Author-X-Name-First: JoonHwan
Author-X-Name-Last: Cho
Author-Name: Thomas M. Russell
Author-X-Name-First: Thomas M.
Author-X-Name-Last: Russell
Title: Simple Inference on Functionals of Set-Identified Parameters Defined by Linear Moments
Abstract:
This article proposes a new approach to obtain uniformly valid inference for linear functionals or scalar subvectors of a partially identified parameter defined by linear moment inequalities. The procedure amounts to bootstrapping the value functions of randomly perturbed linear programming problems, and does not require the researcher to grid over the parameter space. The low-level conditions for uniform validity rely on genericity results for linear programs. The unconventional perturbation approach produces a confidence set with a coverage probability of 1 over the identified set, but obtains exact coverage on an outer set, is valid under weak assumptions, and is computationally simple to implement.
Journal: Journal of Business & Economic Statistics
Pages: 563-578
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2203768
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2203768
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:563-578
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2205918_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Arkadev Ghosh
Author-X-Name-First: Arkadev
Author-X-Name-Last: Ghosh
Author-Name: Sam Il Myoung Hwang
Author-X-Name-First: Sam Il Myoung
Author-X-Name-Last: Hwang
Author-Name: Munir Squires
Author-X-Name-First: Munir
Author-X-Name-Last: Squires
Title: Links and Legibility: Making Sense of Historical U.S. Census Automated Linking Methods
Abstract:
How does handwriting legibility affect the performance of algorithms that link individuals across census rounds? We propose a measure of legibility, which we implement at scale for the 1940 U.S. Census, and find strikingly wide variation in enumeration-district-level legibility. Using boundary discontinuities in enumeration districts, we estimate the causal effect of low legibility on the quality of linked samples, measured by linkage rates and share of validated links. Our estimates imply that, across eight linking algorithms, perfect legibility would increase the linkage rate by 5–10 percentage points. Improvements in transcription could substantially increase the quality of linked samples.
Journal: Journal of Business & Economic Statistics
Pages: 579-590
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2205918
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2205918
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:579-590
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2210181_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Juwon Seo
Author-X-Name-First: Juwon
Author-X-Name-Last: Seo
Title: Tie-Break Bootstrap for Nonparametric Rank Statistics
Abstract:
In this article, we propose a new bootstrap procedure for the empirical copula process. The procedure involves taking pseudo samples of normalized ranks in the same fashion as the classical bootstrap and applying small perturbations to break ties in the normalized ranks. Our procedure is a simple modification of the usual bootstrap based on sampling with replacement, yet it provides noticeable improvement in the finite sample performance. We also discuss how to incorporate our procedure into the time series framework. Since nonparametric rank statistics can be treated as functionals of the empirical copula, our proposal is useful in approximating the distribution of rank statistics in general. As an empirical illustration, we apply our bootstrap procedure to test the null hypotheses of positive quadrant dependence, tail monotonicity, and stochastic monotonicity, using U.S. Census data on spousal incomes in the past 15 years.
Journal: Journal of Business & Economic Statistics
Pages: 615-627
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2210181
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2210181
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:615-627
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2223683_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Xuanling Yang
Author-X-Name-First: Xuanling
Author-X-Name-Last: Yang
Author-Name: Zhoufan Zhu
Author-X-Name-First: Zhoufan
Author-X-Name-Last: Zhu
Author-Name: Dong Li
Author-X-Name-First: Dong
Author-X-Name-Last: Li
Author-Name: Ke Zhu
Author-X-Name-First: Ke
Author-X-Name-Last: Zhu
Title: Asset Pricing via the Conditional Quantile Variational Autoencoder
Abstract:
We propose a new asset pricing model that is applicable to big panels of return data. The main idea of this model is to learn the conditional distribution of the return, which is approximated by a step distribution function constructed from conditional quantiles of the return. To study conditional quantiles of the return, we propose a new conditional quantile variational autoencoder (CQVAE) network. The CQVAE network specifies a factor structure for conditional quantiles with latent factors learned from a VAE network and nonlinear factor loadings learned from a “multi-head” network. Under the CQVAE network, we allow the observed covariates such as asset characteristics to guide the structure of latent factors and factor loadings. Furthermore, we provide a two-step estimation procedure for the CQVAE network. Using the learned conditional distribution of return from the CQVAE network, we propose our asset pricing model from the mean of this distribution, and additionally, we use both the mean and variance of this distribution to select portfolios. Finally, we apply our CQVAE asset pricing model to analyze a large 60-year US equity return dataset. Compared with the benchmark conditional autoencoder model, the CQVAE model not only delivers much larger values of out-of-sample total and predictive R2’s, but also earns at least 30.9% higher values of Sharpe ratios for both long-short and long-only portfolios.
Journal: Journal of Business & Economic Statistics
Pages: 681-694
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2223683
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2223683
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:681-694
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2326778_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Wolfgang Härdle
Author-X-Name-First: Wolfgang
Author-X-Name-Last: Härdle
Author-Name: Melanie Schienle
Author-X-Name-First: Melanie
Author-X-Name-Last: Schienle
Title: Introduction to the Special Issue on Statistics of Dynamic Networks
Journal: Journal of Business & Economic Statistics
Pages: 347-348
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2024.2326778
File-URL: http://hdl.handle.net/10.1080/07350015.2024.2326778
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:347-348
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2203207_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Aleksey Kolokolov
Author-X-Name-First: Aleksey
Author-X-Name-Last: Kolokolov
Author-Name: Roberto Renò
Author-X-Name-First: Roberto
Author-X-Name-Last: Renò
Title: Jumps or Staleness?
Abstract:
Even moderate amounts of zero returns in financial data, associated with stale prices, are heavily detrimental to reliable jump inference. We harness staleness-robust estimators to reappraise the statistical features of jumps in financial markets. We find that jumps are much less frequent, and contribute much less to price variation, than the empirical literature has found so far. In particular, the empirical finding that volatility is driven by a pure jump process is shown to be an artifact of staleness.
Journal: Journal of Business & Economic Statistics
Pages: 516-532
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2203207
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2203207
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:516-532
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2231041_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Cheuk Hin Cheng
Author-X-Name-First: Cheuk Hin
Author-X-Name-Last: Cheng
Author-Name: Kin Wai Chan
Author-X-Name-First: Kin Wai
Author-X-Name-Last: Chan
Title: A General Framework for Constructing Locally Self-Normalized Multiple-Change-Point Tests
Abstract:
We propose a general framework to construct self-normalized multiple-change-point tests with time series data. The only building block is a user-specified single-change-detecting statistic, which covers a large class of popular methods, including the cumulative sum process, outlier-robust rank statistics, and order statistics. The proposed test statistic does not require robust and consistent estimation of nuisance parameters, selection of bandwidth parameters, nor pre-specification of the number of change points. The finite-sample performance shows that the proposed test is size-accurate, robust against misspecification of the alternative hypothesis, and more powerful than existing methods. Case studies of the Shanghai-Hong Kong Stock Connect turnover are provided.
Journal: Journal of Business & Economic Statistics
Pages: 719-731
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2231041
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2231041
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:719-731
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2093882_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Xiu Xu
Author-X-Name-First: Xiu
Author-X-Name-Last: Xu
Author-Name: Weining Wang
Author-X-Name-First: Weining
Author-X-Name-Last: Wang
Author-Name: Yongcheol Shin
Author-X-Name-First: Yongcheol
Author-X-Name-Last: Shin
Author-Name: Chaowen Zheng
Author-X-Name-First: Chaowen
Author-X-Name-Last: Zheng
Title: Dynamic Network Quantile Regression Model
Abstract:
We propose a dynamic network quantile regression model to investigate quantile connectedness using predetermined network information. We extend the existing network quantile autoregression model of Zhu et al. by explicitly allowing for contemporaneous network effects and controlling for common factors across quantiles. To cope with the endogeneity issue due to simultaneous network spillovers, we adopt instrumental variable quantile regression (IVQR) estimation and derive the consistency and asymptotic normality of the IVQR estimator using the near epoch dependence property of the network process. Via Monte Carlo simulations, we confirm the satisfactory performance of the IVQR estimator across different quantiles under different network structures. Finally, we demonstrate the usefulness of our proposed approach with an application to a dataset on stocks traded on the NYSE and NASDAQ in 2016.
Journal: Journal of Business & Economic Statistics
Pages: 407-421
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2022.2093882
File-URL: http://hdl.handle.net/10.1080/07350015.2022.2093882
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:407-421
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2011299_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Xiaofei Xu
Author-X-Name-First: Xiaofei
Author-X-Name-Last: Xu
Author-Name: Ying Chen
Author-X-Name-First: Ying
Author-X-Name-Last: Chen
Author-Name: Ge Zhang
Author-X-Name-First: Ge
Author-X-Name-Last: Zhang
Author-Name: Thorsten Koch
Author-X-Name-First: Thorsten
Author-X-Name-Last: Koch
Title: Modeling Functional Time Series and Mixed-Type Predictors With Partially Functional Autoregressions
Abstract:
In many business and economics studies, researchers have sought to measure the dynamic dependence of curves with high-dimensional mixed-type predictors. We propose a partially functional autoregressive model (pFAR) in which the serial dependence of curves is controlled by coefficient operators defined on a two-dimensional surface, and the individual and group effects of mixed-type predictors are estimated with a two-layer regularization. We develop an efficient estimation procedure with proven asymptotic properties of consistency and sparsity. We show how to choose the sieve and tuning parameters in regularization based on a forward-looking criterion. In addition to the asymptotic properties, numerical validation suggests that the dependence structure is accurately detected. The implementation of the pFAR within a real-world analysis of dependence in German daily natural gas flow curves, with seven lagged curves and 85 scalar predictors, produces superior forecast accuracy and an insightful understanding of the dynamics of natural gas supply and demand for the municipal, industry, and border nodes, respectively.
Journal: Journal of Business & Economic Statistics
Pages: 349-366
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2021.2011299
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2011299
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:349-366
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2208183_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Qixian Zhong
Author-X-Name-First: Qixian
Author-X-Name-Last: Zhong
Author-Name: Jane-Ling Wang
Author-X-Name-First: Jane-Ling
Author-X-Name-Last: Wang
Title: Neural Networks for Partially Linear Quantile Regression
Abstract:
Deep learning has enjoyed tremendous success in a variety of applications, but its application to quantile regression remains scarce. A major advantage of the deep learning approach is its flexibility to model complex data more parsimoniously than nonparametric smoothing methods. However, while deep learning has brought breakthroughs in prediction, it is not well suited for statistical inference due to its black-box nature. In this article, we leverage the advantages of deep learning and apply it to quantile regression, where the goal is to produce interpretable results and perform statistical inference. We achieve this by adopting a semiparametric approach based on the partially linear quantile regression model, in which covariates of primary interest for statistical inference are modeled linearly and all other covariates are modeled nonparametrically by means of a deep neural network. In addition to the new methodology, we provide theoretical justification for the proposed model by establishing the root-n consistency and asymptotic normality of the parametric coefficient estimator and the minimax optimal convergence rate of the neural nonparametric function estimator. Across several simulated and real data examples, the proposed model empirically produces superior estimates and more accurate predictions than various alternative approaches.
Journal: Journal of Business & Economic Statistics
Pages: 603-614
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2208183
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2208183
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:603-614
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2217871_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Qiang Xia
Author-X-Name-First: Qiang
Author-X-Name-Last: Xia
Author-Name: Xianyang Zhang
Author-X-Name-First: Xianyang
Author-X-Name-Last: Zhang
Title: Adaptive Testing for Alphas in High-Dimensional Factor Pricing Models
Abstract:
This article proposes a new procedure to validate the multi-factor pricing theory by testing for the presence of alpha in linear factor pricing models with a large number of assets. Because the market’s inefficient pricing is likely to occur in a small fraction of exceptional assets, we develop a testing procedure that is particularly powerful against sparse signals. Based on high-dimensional Gaussian approximation theory, we propose a simulation-based approach to approximate the limiting null distribution of the test. Our numerical studies show that the new procedure delivers a reasonable size and achieves substantial power improvements over existing tests under sparse alternatives, especially for weak signals.
Journal: Journal of Business & Economic Statistics
Pages: 640-653
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2217871
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2217871
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:640-653
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2239869_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Bruno Feunou
Author-X-Name-First: Bruno
Author-X-Name-Last: Feunou
Title: Generalized Autoregressive Positive-valued Processes
Abstract:
We introduce generalized autoregressive positive-valued (GARP) processes, a class of autoregressive and moving-average processes that extends the class of existing autoregressive positive-valued (ARP) processes in one important dimension: each conditional moment dynamic is driven by a different and identifiable moving average of the variable of interest. The article provides ergodicity conditions for GARP processes and derives closed-form conditional and unconditional moments. The article also presents estimation and inference methods, illustrated by an application to European option pricing where the daily realized variance follows a GARP dynamic. Our results show that using GARP processes reduces pricing errors by substantially more than using ARP processes.
Journal: Journal of Business & Economic Statistics
Pages: 786-800
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2239869
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2239869
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:786-800
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2011736_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Shuyi Ge
Author-X-Name-First: Shuyi
Author-X-Name-Last: Ge
Author-Name: Shaoran Li
Author-X-Name-First: Shaoran
Author-X-Name-Last: Li
Author-Name: Oliver Linton
Author-X-Name-First: Oliver
Author-X-Name-Last: Linton
Title: Dynamic Peer Groups of Arbitrage Characteristics
Abstract:
We propose an asset pricing factor model constructed with semiparametric characteristics-based mispricing and factor loading functions. We approximate the unknown functions by a B-spline sieve in which the number of B-spline coefficients is diverging. We estimate this model and test the existence of the mispricing function by a power-enhanced hypothesis test. The enhanced test solves the low-power problem caused by the diverging number of B-spline coefficients, with the strengthened power approaching one asymptotically. We also investigate the structure of the mispricing components through hierarchical K-means clustering. We apply our methodology to CRSP (Center for Research in Security Prices) and Compustat data for the U.S. stock market with one-year rolling windows during 1967–2017. This empirical study shows the presence of mispricing functions in certain time blocks. We also find that distinct clusters of the same characteristics lead to similar arbitrage returns, forming a “peer group” of arbitrage characteristics.
Journal: Journal of Business & Economic Statistics
Pages: 367-390
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2021.2011736
File-URL: http://hdl.handle.net/10.1080/07350015.2021.2011736
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:367-390
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2263537_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Alexander Kreiss
Author-X-Name-First: Alexander
Author-X-Name-Last: Kreiss
Author-Name: Enno Mammen
Author-X-Name-First: Enno
Author-X-Name-Last: Mammen
Author-Name: Wolfgang Polonik
Author-X-Name-First: Wolfgang
Author-X-Name-Last: Polonik
Title: Testing For Global Covariate Effects in Dynamic Interaction Event Networks
Abstract:
In statistical network analysis it is common to observe so-called interaction data. Such data are characterized by actors, forming the vertices, who interact along the edges of the network, where edges are randomly formed and dissolved over the observation horizon. In addition, covariates are observed, and the goal is to model the impact of the covariates on the interactions. We distinguish two types of covariates: global, system-wide covariates (i.e., covariates taking the same value for all individuals, such as seasonality) and local, dyadic covariates modeling interactions between two individuals in the network. Existing continuous-time network models are extended to allow for comparing a completely parametric model with a model that is parametric only in the local covariates but has a global nonparametric time component. This allows one, for instance, to test whether global time dynamics can be explained by simple global covariates such as weather, seasonality, etc. The procedure is applied to a bike-sharing network, using weather and weekdays as global covariates and distances between the bike stations as local covariates.
Journal: Journal of Business & Economic Statistics
Pages: 457-468
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2263537
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2263537
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:457-468
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2200514_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Tobias Fissler
Author-X-Name-First: Tobias
Author-X-Name-Last: Fissler
Author-Name: Yannick Hoga
Author-X-Name-First: Yannick
Author-X-Name-Last: Hoga
Title: Backtesting Systemic Risk Forecasts Using Multi-Objective Elicitability
Abstract:
Systemic risk measures such as CoVaR, CoES, and MES are widely used in finance and macroeconomics and by regulatory bodies. Despite their importance, we show that they fail to be elicitable and identifiable. This renders forecast comparison and validation, commonly summarized as “backtesting,” impossible. The novel notion of multi-objective elicitability solves this problem by relying on bivariate scores equipped with the lexicographic order. Based on this concept, we propose Diebold–Mariano type tests with suitable bivariate scores to compare systemic risk forecasts. We illustrate the test decisions by an easy-to-apply traffic-light approach. Finally, we apply our traffic-light approach to DAX 30 and S&P 500 returns, and infer some recommendations for regulators.
Journal: Journal of Business & Economic Statistics
Pages: 485-498
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2200514
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2200514
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:485-498
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2238790_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Christian M. Hafner
Author-X-Name-First: Christian M.
Author-X-Name-Last: Hafner
Author-Name: Oliver B. Linton
Author-X-Name-First: Oliver B.
Author-X-Name-Last: Linton
Author-Name: Linqi Wang
Author-X-Name-First: Linqi
Author-X-Name-Last: Wang
Title: Dynamic Autoregressive Liquidity (DArLiQ)
Abstract:
We introduce a new class of semiparametric dynamic autoregressive models for the Amihud illiquidity measure, which captures both the long-run trend in the illiquidity series with a nonparametric component and the short-run dynamics with an autoregressive component. We develop a generalized method of moments (GMM) estimator based on conditional moment restrictions and an efficient semiparametric maximum likelihood (ML) estimator based on an iid assumption. We derive large sample properties for our estimators. Finally, we demonstrate the model’s fitting performance and its empirical relevance in an application. We investigate how the different components of the illiquidity process obtained from our model relate to the stock market risk premium, using data on the S&P 500 stock market index.
Journal: Journal of Business & Economic Statistics
Pages: 774-785
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2238790
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2238790
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:774-785
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2203756_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Dachuan Chen
Author-X-Name-First: Dachuan
Author-X-Name-Last: Chen
Author-Name: Chenxu Li
Author-X-Name-First: Chenxu
Author-X-Name-Last: Li
Author-Name: Cheng Yong Tang
Author-X-Name-First: Cheng Yong
Author-X-Name-Last: Tang
Author-Name: Jun Yan
Author-X-Name-First: Jun
Author-X-Name-Last: Yan
Title: The Leverage Effect Puzzle under Semi-nonparametric Stochastic Volatility Models
Abstract:
This article extends the solution proposed by Aït-Sahalia, Fan, and Li for the leverage effect puzzle, which refers to the fact that the empirical correlation between daily asset returns and changes in daily volatility estimated from high-frequency data is nearly zero. Complementing the analysis in Aït-Sahalia, Fan, and Li via the Heston model, we work with a generic semi-nonparametric stochastic volatility model via an operator-based expansion method. Under such a general setup, we identify a new source of bias due to the flexibility of the variance dynamics, distinguishing the leverage effect parameter from the instantaneous correlation parameter. For estimating the leverage effect parameter, we show that the main results on analyzing the various sources of bias, as well as the resulting statistical procedures for bias correction, in Aït-Sahalia, Fan, and Li hold true and are thus indeed theoretically robust. For estimating the instantaneous correlation parameter, we develop a new nonparametric estimation method.
Journal: Journal of Business & Economic Statistics
Pages: 548-562
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2203756
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2203756
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:548-562
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2224850_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Marvin Borsch
Author-X-Name-First: Marvin
Author-X-Name-Last: Borsch
Author-Name: Alexander Mayer
Author-X-Name-First: Alexander
Author-X-Name-Last: Mayer
Author-Name: Dominik Wied
Author-X-Name-First: Dominik
Author-X-Name-Last: Wied
Title: Consistent Estimation of Multiple Breakpoints in Dependence Measures
Abstract:
This article proposes different methods to consistently detect multiple breaks in copula-based dependence measures. In addition to classical binary segmentation, the more recent wild binary segmentation (WBS) is also considered. For binary segmentation, consistency of the estimators for the location of the breakpoints as well as the number of breaks is proved, taking filtering effects from AR-GARCH models explicitly into account. Monte Carlo simulations based on a factor copula as well as on a Clayton copula model illustrate the strengths and limitations of the procedures. A real data application to recent Euro Stoxx 50 data reveals some interpretable breaks in the dependence structure.
Journal: Journal of Business & Economic Statistics
Pages: 695-706
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2224850
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2224850
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:695-706
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2207617_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Tadao Hoshino
Author-X-Name-First: Tadao
Author-X-Name-Last: Hoshino
Title: Estimating a Continuous Treatment Model with Spillovers: A Control Function Approach
Abstract:
We study a continuous treatment effect model in the presence of treatment spillovers through social networks. We assume that one’s outcome is affected not only by his/her own treatment but also by a (weighted) average of his/her neighbors’ treatments, both of which are treated as endogenous variables. Using a control function approach with appropriate instrumental variables, we show that the conditional mean potential outcome can be nonparametrically identified. We also consider a more empirically tractable semiparametric model and develop a three-step estimation procedure for this model. As an empirical illustration, we investigate the causal effect of the regional unemployment rate on the crime rate.
Journal: Journal of Business & Economic Statistics
Pages: 591-602
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2207617
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2207617
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:591-602
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2210189_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Antonio F. Galvao
Author-X-Name-First: Antonio F.
Author-X-Name-Last: Galvao
Author-Name: Thomas Parker
Author-X-Name-First: Thomas
Author-X-Name-Last: Parker
Author-Name: Zhijie Xiao
Author-X-Name-First: Zhijie
Author-X-Name-Last: Xiao
Title: Bootstrap Inference for Panel Data Quantile Regression
Abstract:
This article develops bootstrap methods for practical statistical inference in panel data quantile regression models with fixed effects. We consider random-weighted bootstrap resampling and formally establish its validity for asymptotic inference. The bootstrap algorithm is simple to implement in practice by using a weighted quantile regression estimation for fixed effects panel data. We provide results under conditions that allow for temporal dependence of observations within individuals, thereby encompassing a large class of possible empirical applications. Monte Carlo simulations provide numerical evidence that the proposed bootstrap methods have correct finite-sample properties. Finally, we provide an empirical illustration using the environmental Kuznets curve.
Journal: Journal of Business & Economic Statistics
Pages: 628-639
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2210189
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2210189
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:628-639
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2203726_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Zhonghao Fu
Author-X-Name-First: Zhonghao
Author-X-Name-Last: Fu
Author-Name: Liangjun Su
Author-X-Name-First: Liangjun
Author-X-Name-Last: Su
Author-Name: Xia Wang
Author-X-Name-First: Xia
Author-X-Name-Last: Wang
Title: Estimation and Inference on Time-Varying FAVAR Models
Abstract:
We introduce a time-varying (TV) factor-augmented vector autoregressive (FAVAR) model to capture the TV behavior in the factor loadings and the VAR coefficients. To consistently estimate the TV parameters, we first obtain the unobserved common factors via the local principal component analysis (PCA) and then estimate the TV-FAVAR model via a local smoothing approach. The limiting distribution of the proposed estimators is established. To gauge possible sources of TV features in the FAVAR model, we propose three L2-distance-based test statistics and study their asymptotic properties under the null and local alternatives. Simulation studies demonstrate the excellent finite sample performance of the proposed estimators and tests. In an empirical application to the U.S. macroeconomic dataset, we document overwhelming evidence of structural changes in the FAVAR model and show that the TV-FAVAR model outperforms the conventional time-invariant FAVAR model in predicting certain key macroeconomic series.
Journal: Journal of Business & Economic Statistics
Pages: 533-547
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2203726
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2203726
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:533-547
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2238774_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: George Kapetanios
Author-X-Name-First: George
Author-X-Name-Last: Kapetanios
Author-Name: Laura Serlenga
Author-X-Name-First: Laura
Author-X-Name-Last: Serlenga
Author-Name: Yongcheol Shin
Author-X-Name-First: Yongcheol
Author-X-Name-Last: Shin
Title: An LM Test for the Conditional Independence between Regressors and Factor Loadings in Panel Data Models with Interactive Effects
Abstract:
A large literature on modeling cross-sectional dependence in panels has been developed using interactive effects (IE). One area of contention is whether the regressors and factor loadings are correlated. Under the null hypothesis that they are conditionally independent, the consistent and robust two-way fixed effects estimator can still be applied. As an important specification test, we develop an LM test for both static and dynamic panels with IE. Simulation results confirm the satisfactory performance of the LM test in small samples. We demonstrate its usefulness with an application to a total of 22 datasets, including static panels with a small T and dynamic panels with serially correlated factors, providing convincing evidence that the null hypothesis is not rejected in these applications.
Journal: Journal of Business & Economic Statistics
Pages: 743-761
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2238774
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2238774
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:743-761
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2224851_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Jiyang Ren
Author-X-Name-First: Jiyang
Author-X-Name-Last: Ren
Title: Model-Assisted Complier Average Treatment Effect Estimates in Randomized Experiments with Noncompliance
Abstract:
Noncompliance is a common problem in randomized experiments in various fields. Under certain assumptions, the complier average treatment effect is identifiable and equal to the ratio of the intention-to-treat effects of the potential outcomes to that of the treatment received. To improve the estimation efficiency, we propose three model-assisted estimators for the complier average treatment effect in randomized experiments with a binary outcome. We study their asymptotic properties, compare their efficiencies with that of the Wald estimator, and propose the Neyman-type conservative variance estimators to facilitate valid inferences. Moreover, we extend our methods and theory to estimate the multiplicative complier average treatment effect. Our analysis is randomization-based, allowing the working models to be misspecified. Finally, we conduct simulation studies to illustrate the advantages of the model-assisted methods and apply these analysis methods in a randomized experiment to evaluate the effect of academic services or incentives on academic performance.
Journal: Journal of Business & Economic Statistics
Pages: 707-718
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2224851
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2224851
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:707-718
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2200486_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Ye Yang
Author-X-Name-First: Ye
Author-X-Name-Last: Yang
Author-Name: Osman Doğan
Author-X-Name-First: Osman
Author-X-Name-Last: Doğan
Author-Name: Süleyman Taşpınar
Author-X-Name-First: Süleyman
Author-X-Name-Last: Taşpınar
Title: Estimation of Matrix Exponential Unbalanced Panel Data Models with Fixed Effects: An Application to US Outward FDI Stock
Abstract:
In this article, we consider a matrix exponential unbalanced panel data model that allows for (i) spillover effects using matrix exponential terms, (ii) unobserved heterogeneity across entities and time, and (iii) potential heteroscedasticity in the error terms across entities and time. We adopt a likelihood-based direct estimation approach in which we jointly estimate the common parameters and fixed effects. To ensure that our estimator has the standard large-sample properties, we show how the score functions should be suitably adjusted under both homoscedasticity and heteroscedasticity. We define our suggested estimator as the root of the adjusted score functions, so our approach can be called an M-estimation approach. For inference, we suggest an analytical bias correction approach involving the sample counterpart and plug-in methods to consistently estimate the variance-covariance matrix of the suggested M-estimator. Through an extensive Monte Carlo study, we show that the suggested M-estimator has good finite-sample properties. In an empirical application, we use our model to investigate third-country effects on the U.S. outward foreign direct investment (FDI) stock at the industry level.
Journal: Journal of Business & Economic Statistics
Pages: 469-484
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2200486
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2200486
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:469-484
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2231053_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Jad Beyhum
Author-X-Name-First: Jad
Author-X-Name-Last: Beyhum
Author-Name: Samuele Centorrino
Author-X-Name-First: Samuele
Author-X-Name-Last: Centorrino
Author-Name: Jean-Pierre Florens
Author-X-Name-First: Jean-Pierre
Author-X-Name-Last: Florens
Author-Name: Ingrid Van Keilegom
Author-X-Name-First: Ingrid
Author-X-Name-Last: Van Keilegom
Title: Instrumental Variable Estimation of Dynamic Treatment Effects on a Duration Outcome
Abstract:
This article considers identification and estimation of the causal effect of the time Z until a subject is treated on a duration T. The time-to-treatment is not randomly assigned, T is randomly right censored by a random variable C, and the time-to-treatment Z is right censored by T∧C. The endogeneity issue is treated using an instrumental variable explaining Z and independent of the error term of the model. We study identification in a fully nonparametric framework. We show that our specification generates an integral equation, of which the regression function of interest is a solution. We provide identification conditions that rely on this identification equation. We assume that the regression function follows a parametric model for estimation purposes. We propose an estimation procedure and give conditions under which the estimator is asymptotically normal. The estimators exhibit good finite sample properties in simulations. Our methodology is applied to evaluate the effect of the timing of a therapy for burnout.
Journal: Journal of Business & Economic Statistics
Pages: 732-742
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2231053
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2231053
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:732-742
Template-Type: ReDIF-Article 1.0
# input file: UBES_A_2219283_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a
Author-Name: Jia Li
Author-X-Name-First: Jia
Author-X-Name-Last: Li
Author-Name: Zhipeng Liao
Author-X-Name-First: Zhipeng
Author-X-Name-Last: Liao
Author-Name: Wenyu Zhou
Author-X-Name-First: Wenyu
Author-X-Name-Last: Zhou
Title: Uniform Nonparametric Inference for Spatially Dependent Panel Data
Abstract:
This article proposes a uniform functional inference method for nonparametric regressions in a panel-data setting that features general unknown forms of spatio-temporal dependence. The method requires a long time span, but does not impose any restriction on the size of the cross section or the strength of spatial correlation. The uniform inference is justified via a new growing-dimensional Gaussian coupling theory for spatio-temporally dependent panels. We apply the method in two empirical settings. One concerns the nonparametric relationship between asset price volatility and trading volume as depicted by the mixture of distribution hypothesis. The other pertains to testing the rationality of survey-based forecasts, in which we document nonparametric evidence for information rigidity among professional forecasters, offering new support for sticky-information and noisy-information models in macroeconomics.
Journal: Journal of Business & Economic Statistics
Pages: 654-664
Issue: 2
Volume: 42
Year: 2024
Month: 4
X-DOI: 10.1080/07350015.2023.2219283
File-URL: http://hdl.handle.net/10.1080/07350015.2023.2219283
File-Format: text/html
File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:jnlbes:v:42:y:2024:i:2:p:654-664