Multivariate tobit models estimated by maximum simulated likelihood
mvtobit equation1 equation2 ... equationM [weight] [if exp] [in range] [, draws(#) an seed(#) beta0 atrho0(matrix_name) prefix(string) burn(integer) random hrandom shuffle adoonly primes(matrix_name) init(matrix_name) robust cluster(varname) constraints(numlist) level(#) maximize_options ]
where each equation is specified as
( [eqname:] depvar [=] [varlist] [, noconstant] )
by ... : may be used with mvtobit; see help by.
pweights, fweights, aweights, and iweights are allowed; see help weights.
mvtobit is likely to produce a series of "(not concave)" messages at the beginning of the estimation process. It is recommended to specify the difficult option; see help maximize.
mvtobit shares the features of all estimation commands; see help est.
mvtobit typed without arguments redisplays the last estimates. The level option may be used.
mvtobit requires mdraws to be installed.
Note: much code in this routine is hacked from or inspired by Cappellari and Jenkins' mvprobit and mdraws commands (see mvprobit and mdraws if installed). This in particular applies to the help and syntax handling files. mdraws must be installed for mvtobit to work. The shuffle option requires installation of _gclsort. Both are available from SSC.
Using Stata version 9 or above? Take a look at cmp and Roodman (2009).
mvtobit estimates M-equation tobit models (including bivariate models) by the method of maximum simulated likelihood (MSL). Bivariate tobit models are estimated without simulation (see also Daniel Lawson's bitobit if installed). A limitation is that only models left-censored at zero can be estimated, i.e.
y(i) = max[xb(i)+e(i),0]
where e is M-variate normally distributed. Along with the coefficients for each equation, mvtobit estimates the cross-equation error correlations and the variances of the error terms.
mvtobit uses the Geweke-Hajivassiliou-Keane (GHK) simulator implemented in the egen function mvnp (if installed) and the related mdraws function to draw random numbers for evaluation of the multi-dimensional Normal integrals in the likelihood function. For each observation, a likelihood contribution is calculated for each replication, and the simulated likelihood contribution is the average of the values derived from all the replications. The simulated likelihood function for the sample as a whole is then maximized using standard methods (ml in this case). For a brief description of the GHK smooth recursive simulator, see Greene (2003, 931-933), who also provides references to the literature. See Cappellari and Jenkins (2006) for detailed information on the implementation of MSL in Stata and the workings of mvnp and mdraws. Also see Train (2003).
Under standard conditions, the MSL estimator is consistent as the number of observations and the number of draws tend to infinity and is asymptotically equivalent to the true maximum likelihood estimator as the ratio of the square root of the sample size to the number of draws tends to zero. Thus, other things equal, the more draws, the better. In practice, however, it has been observed that a relatively small number of draws may work well for `smooth' likelihoods in the sense that the change in estimates as the number of draws is increased is negligible. It is the responsibility of the user to check that this is the case. Simulation variance may be reduced using antithetic draws in addition to the pseudo-random uniform variates used in the calculation of the simulated likelihood. The antithetic draws for a vector of pseudo-random uniform draws, z, are 1-z.
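As a minimal illustration of the antithetic principle described above (variable names here are for exposition only, not part of mvtobit), the antithetic counterpart of a vector of uniform draws z is simply 1-z:

```stata
* Sketch: antithetic counterparts of pseudo-random uniform draws
clear
set obs 5
set seed 123456789
generate z = runiform()
generate z_anti = 1 - z      // antithetic draw: 1 - z
list z z_anti
```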
Estimation is numerically intensive and may be very slow if the data set is large, if the number of draws is large, or (especially) if the number of equations is large. Users may also need to set matsize and set memory to values above the default ones. (See help for matsize and memory.) Use of the atrho0 option may speed up convergence.
Models for which the error variance-covariance matrix is close to not being positive definite are likely to be difficult to maximize. (The Cholesky factorization used by MSL requires positive definiteness.) In these cases, ml may report difficulties calculating numerical derivatives and a non-concave log likelihood. In difficult maximization problems, the message "Warning: cannot do Cholesky factorization of rho matrix" may appear between iterations. It may be safely ignored if the maximization proceeds to a satisfactory conclusion. Results may differ depending on the sort order of the data, because the sort order affects which values of the random variable(s) are allocated to which observation. (Note, mvtobit does not change the sort order of the data.) This potential problem diminishes as the number of random draws increases.
beta0 specifies that the estimates of the marginal tobit regressions (used to provide starting values) are reported.
atrho0(matrix_name) allows users to specify starting values for the standard deviations and correlations that are different from the default values (zeroes and ones, respectively). The matrix matrix_name contains values of the incidental parameters, /lnsigmai and /atrhoij, for the M equations. Matrix matrix_name must have properly named columns. E.g., if a starting value in /atrho12 is being set, one would first use the command matrix matrix_name = (value), followed by matrix colnames matrix_name = atrho12:_cons. Between 1 and M(M-1)/2 /atrhoij, and between 1 and M /lnsigmai starting values may be specified, where i = 1,...,M-1, and j > i. One likely source for a non-default starting value for atrhoij is the /athrho parameter estimate from a bivariate model corresponding to equations i and j of the full mvtobit model.
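For example (a sketch with hypothetical starting values; the lnsigma1:_cons column name follows the /lnsigmai naming pattern by analogy with atrho12:_cons):

```stata
* Hypothetical starting values for /atrho12 and /lnsigma1
matrix start = (0.3, 0.5)
matrix colnames start = atrho12:_cons lnsigma1:_cons
mvtobit (y1 = x11 x12) (y2 = x21 x22), atrho0(start)
```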
robust specifies that the Huber/White/sandwich estimator of variance is to be used in place of the traditional calculation; see [U] 23.11 Obtaining robust variance estimates. robust combined with cluster() allows observations that are not independent within clusters (although they must be independent between clusters). If you specify pweights, robust is implied.
cluster(varname) specifies that the observations are independent across groups (clusters) but not necessarily within groups. varname specifies to which group each observation belongs; e.g., cluster(personid) in data with repeated observations on individuals. See [U] 23.11 Obtaining robust variance estimates. cluster() can be used with pweights to produce estimates for unstratified cluster-sampled data. Specifying cluster() implies robust.
noconstant suppresses the constant term (intercept) in the relevant regression.
constraints(numlist) specifies the linear constraints to be applied during estimation. Constraints are defined using the constraint command and are numbered; see help constraint. The default is to perform unconstrained estimation.
level(#) specifies the confidence level, in percent, for the confidence intervals of the coefficients; see help level.
init(matrix_name) specifies a matrix of starting values. Options from ml init can be specified inside the parentheses.
maximize_options control the maximization process; see help maximize. Use of them is likely to be rare.
Options related to random number generation (see also mdraws). The explanations below are adapted from the mdraws help file.
draws(#) specifies the number of pseudo-random standard uniform variates drawn when calculating the simulated likelihood. The default is 5. (See the discussion above concerning the choice of the number of draws.) If the an option is specified, the total number of draws used in the calculations is twice the number specified in draws(#).
prefix(string) specifies the prefix common to the names of each of the created variables containing the random numbers used by the egen function mvnp(). The default prefix is X_MVT.
an specifies that antithetic draws are to be used by mdraws. The antithetic draw for a vector of uniform draws, z, is 1-z.
random specifies that pseudo-random number sequences are created rather than Halton sequences (the default).
seed(#) specifies the initial value of the (pseudo-)random-number seed used by the mdraws function in the simulation process. The value should be an integer (the default is 123456789). Warning: if the number of draws is 'small', changes in the seed value may lead to surprisingly large changes in the estimates. seed(#) has an effect only when random, hrandom, or shuffle is specified.
primes(matrix_name) specifies the name of an existing 1 x M or M x 1 matrix containing M different prime numbers. If the option is not specified and as long as M <= 20, the program uses the first M prime numbers in ascending order to generate the Halton sequences.
burn(#) specifies the number of initial sequence elements to drop for each equation when creating Halton sequences. The default is zero, and the option is ignored if random is specified. Specification of this option reduces the correlation between the sequences in each dimension. Train (2003, 230) recommends that # should be at least as large as the prime number used to generate the sequences.
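For instance (a sketch using the hypothetical variables from the Examples section), a three-equation model uses the primes 2, 3, and 5 by default, so Train's recommendation suggests dropping at least the first 5 elements of each sequence:

```stata
* Three equations use primes 2, 3, and 5 by default;
* Train (2003, 230) suggests burn(#) at least as large as the prime used.
mvtobit (y1 = x11 x12) (y2 = x21 x22) (y3 = x31 x32), dr(20) burn(5)
```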
hrandom specifies that each Halton sequence should be transformed by a random perturbation. For each dimension, a draw, u, is taken from the standard uniform distribution. Each sequence element has u added to it. If the sum is greater than 1, the element is transformed to the sum minus 1; otherwise, the element is transformed to the sum. See Train (2003, 234).
shuffle specifies that "shuffled" Halton draws should be created, as proposed by Hess and Polak (2003). Each Halton sequence in each dimension is randomly shuffled before sequence elements are allocated to observations. Philippe Van Kerm's program _gclsort, available via SSC, must be installed for this option to work.
adoonly prevents use of the Stata plugin to perform the intensive numerical calculations. Specifying this option results in slower-running code but may be necessary if the plugin is not available for your platform. This option is also useful if you would like to do speed comparisons.
In addition to the usual results saved after ml, mvtobit also saves the following:
e(draws) is the number of pseudo-random draws used when simulating probabilities. If the an option is specified, e(draws) is twice the number specified in draws(#), rather than equal to the number.
e(an) is a local macro containing "yes" if the an option is specified, and containing "no" otherwise.
e(seed) is the initial seed value used by the random-number generator.
e(neqs) is the number of equations in the M-equation model.
e(ll0) is the log likelihood for the comparison model (the sum of the log likelihoods from the marginal univariate tobit models corresponding to each equation).
e(chi2_c) is the chi-squared test statistic for the likelihood-ratio test of the multivariate tobit model against the comparison model.
e(nrho) is the number of estimated rhos (the degrees of freedom for the likelihood ratio test against the comparison model).
e(sigmai) is the estimate of the standard deviation of the i'th error term.
e(sesigmai) is the estimated standard error of sigmai.
e(rhoji) is the estimate of correlation ji in the variance-covariance matrix of cross-equation error terms.
e(serhoji) is the estimated standard error of correlation ji.
e(rhsi) is the list of explanatory variables used in equation i. This list does not include the constant term, regardless of whether one is implied by equation i.
e(nrhsi) is the number of explanatory variables in equation i. This number includes the constant term if one is implied by equation i.
. mvtobit (y1 = x11 x12) (y2 = x21 x22)
. mvtobit (y1 = x11 x12) (y2 = x21 x22) (y3 = x31 x32), dr(20) an
. constraint define 1 [y1]x11 = [y2]x22
. mvtobit (y1 = x11 x12) (y2 = x21 x22) (y3 = x31 x32), dr(20) an constraints(1)
Mikkel Barslund, Danish Economic Councils, Denmark <email@example.com>
I have hacked a large amount of code from Cappellari and Jenkins' mvprobit (Cappellari and Jenkins, 2003). In addition, most of the heavy work in this routine is performed by their mdraws command. All errors are, of course, mine.
Version 1.0, August, 2007.
Cappellari, L. and S.P. Jenkins. 2003. Multivariate probit regression using simulated maximum likelihood. The Stata Journal 3(3): 278-294.
Cappellari, L. and S.P. Jenkins. 2006. Calculation of multivariate normal probabilities by simulation, with applications to maximum simulated likelihood estimation. The Stata Journal 6(2): 156-189.
Greene, W.H. 2003. Econometric Analysis, 5th ed. Upper Saddle River, NJ: Prentice-Hall.
Roodman, D. 2009. Estimating Fully Observed Recursive Mixed-Process Models with cmp. Working Paper 168. Center for Global Development.
Train, K.E. 2003. Discrete Choice Methods with Simulation. Cambridge: Cambridge University Press.
Manual: [R] intreg, [R] tobit
Online: help for constraint, est, ereturn, postest, ml, tobit, and (if installed) bitobit, mdraws, mvnp.