From: maiser@efs.mq.edu.au
To: baum@bc.edu
Date: Thu, 2 Apr 98 3:56:02 GMT+1000
Subject: Re:
Message-ID: <5CD18EC3720@efs1.efs.mq.edu.au>

From: "Hwang, Jae-kwang"
To: "RATS Discussion List"
Subject: ADF.SRC PROCEDURE
Date: Sun, 1 Mar 1998 17:47:52 -0600
Reply-to: "RATS Discussion List"
Sender: Maiser@efs1.efs.mq.edu.au

Hello, everybody!

I would like to run a unit root test using the ADF test, but I do not have ADF.SRC on my copy of RATS. If anybody has an ADF.SRC procedure, would you please send me that file? Thank you very much.

JK Hwang.

---------- End of message ----------

From: "Estima"
To: "RATS Discussion List"
Subject: Re: ADF.SRC PROCEDURE
Date: Mon, 2 Mar 1998 10:44:52 -0600
Reply-to: "RATS Discussion List"
Sender: Maiser@efs1.efs.mq.edu.au

> Hello, everybody!
> I would like to run a unit root test using the ADF test, but I do not
> have ADF.SRC on my copy of RATS. If anybody has an ADF.SRC procedure,
> would you please send me that file? Thank you very much.

This procedure is available on our Web site (www.estima.com), but I would recommend that you download a newer version of this file, which is called URADF.SRC.

Tom Maycock
Estima

--
+-----------------------------+-----------------------------------------+
| Estima                      |                                         |
| P.O. Box 1818               | Voice: (847) 864-8772                   |
| Evanston, IL 60204-1818     | Fax: (847) 864-6221                     |
| U.S.A.                      | BBS: (847) 864-8816                     |
| e-mail: estima@estima.com   | CompuServe: 73140,2202                  |
|-----------------------------------------------------------------------|
| Web Site: http://www.estima.com                                       |
| RATS Internet Mailing List: New members can join by sending e-mail to |
| MAISER@EFS.MQ.EDU.AU with the message: SUBSCRIBE RATS-L               |
+-----------------------------------------------------------------------+

---------- End of message ----------

From: Jetro Siekkinen
To: "RATS Discussion List"
Subject: Problems with GMM estimation
Date: Tue, 3 Mar 1998 12:36:19 +0200 (EET)

Hello everybody!

I have a problem with GMM estimation. I am writing my master's thesis on option pricing models (Black-Scholes and Amin-Jarrow). I have panel data with 46 days, 4 observations per day (intra-day), and 5 variables (option price, interest rate, t

My problem is that I can't get GMM estimation to work properly on my data (the estimated coefficients are unrealistic). For example, in the Black-Scholes model I have only one unknown parameter (volatility), which should be around 0.2, but GMM estimation gives me -0.08

Please help me to solve this problem!
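The Black-Scholes call price inside Jetro's FRML can be written compactly outside RATS. The sketch below is a minimal Python illustration (the function names are mine, not from the thread), using math.erf for the standard normal CDF that %CDF provides in RATS:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (the role %CDF plays in RATS)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes European call price; sigma plays the role of A1 in the thread."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)
```

Since the call price is strictly increasing in the volatility, a negative volatility estimate usually points to a data or scaling problem (for instance, rates in percent rather than decimals, or maturity in days rather than years) rather than to the optimizer itself.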
Here is my code:

cal(panelobs=46)
all 0 4//46
open data ...
data(...) / s k c rate maturit
NONLIN A1
FRML Z = C-(S*%CDF((LOG(S/K) + (RATE + .5*A1**2)* $
  MATURIT)/(A1*SQRT(MATURIT))) - K*EXP(-RATE*MATURIT)* $
  %CDF(((LOG(S/K) + (RATE + .5*A1**2)*MATURIT)/(A1*SQRT $
  (MATURIT))) - A1*SQRT(MATURIT)))
COMPUTE A1=.135
INSTRUMENTS ...
NLSYSTEM(TRACE,INSTR,ITERATION=...)
CDF CHISQR %UZWZU d.o.f

I have also tried the NLPAR instruction and several initial values to find a proper estimate of the unknown parameter (A1).

THANK YOU!

JETRO SIEKKINEN
UNIVERSITY OF TAMPERE, FINLAND

---------- End of message ----------

From: Francisco Jose Climent Diranzo
To: "RATS Discussion List"
Subject: Re: ADF.SRC PROCEDURE
Date: Tue, 3 Mar 1998 16:38:30 +0100 (MET)

At 17:47 1/03/98 -0600, you wrote:
>Hello, everybody!
>I would like to run a unit root test using the ADF test, but I do not
>have ADF.SRC on my copy of RATS. If anybody has an ADF.SRC procedure,
>would you please send me that file? Thank you very much.
>
>JK Hwang.

Dear partner: I am sending you adf.src.

[Attachment: ADF.SRC]

*****************************************************************************
*
*  ADF series start end
*
*  ADF computes the Augmented Dickey-Fuller unit root "t-tests".
*  For these tests, one adds lagged differences of the series until
*  the residuals of the regression:
*
*     y(t) = rho*y(t-1) + b1*dely(t-1) + ... + bp*dely(t-p) + eps(t)
*
*  are white noise.
*  This is a NECESSARY condition for the tests to be valid.
*
*  ADF determines the appropriate number of lagged differences by any
*  of four methods: the AIC criterion, the BIC criterion (the default),
*  by adding lags until the Ljung-Box test fails to reject no serial
*  correlation at a user-defined level, or by adding lags until a
*  Lagrange Multiplier test fails to reject no serial correlation at a
*  user-defined level.
*
*  Critical values are displayed for the 1%, 5%, or 10% level. The
*  critical values are those computed/simulated VERY carefully by James
*  MacKinnon. The other critical values one runs across are generally for
*  about 10,000 replications, whereas the response surface regressions
*  simulated by MacKinnon are for fifteen million replications (15
*  different sample sizes, 40 different models for each sample size, with
*  25,000 replications for each model). These critical values seem more
*  robust to sample-size variation than the others (Dickey and Fuller,
*  Yoo, Phillips and Ouliaris).
*
*  The three procedures nested above ADF compute the LM test and the
*  Ljung-Box tests, and find the minimum AIC and BIC.
*
*  References:
*    Fuller, Introduction to Statistical Time Series, New York, Wiley, 1976.
*    Dickey and Fuller, "Distribution of the Estimators for Time Series
*      Regressions with a Unit Root", J.A.S.A., 1979, pp 427-431.
*    MacKinnon, "Critical Values for Cointegration Tests", in Long-Run
*      Economic Relationships, R.F. Engle and C.W.J. Granger, eds,
*      London, Oxford, 1991, pp 267-276.
*
*  Revision Schedule:
*    Written November, 1989
*    Updated January, 1992 to Version 4.0
*    Dfunit.src -> ADF.src by Norman Morin, February/March, 1994
*
*  Thanks to Clive Granger and Scott Spear at UCSD, and the tech-gurus
*  at Estima for helpful comments.
*
*  PROCEDURE ADF series start end residuals
*
*  Parameters:
*    series      The series to be analyzed
*    start end   The range of the series to use; defaults to the entire range
*
*  Options:
*    DET       = NONE/[CONSTANT]/TREND   Note: TREND includes a constant term, too.
*    CRITERION = AIC/[BIC]/LBTEST/LMTEST
*    MAXLAG    = [12]    The maximum lag considered in the AIC and BIC formulations
*    ARSIGNIF  = [0.05]  The significance level for the Ljung-Box and LM tests
*    SCLAG     = [8]     The number of lags to use in the Ljung-Box and LM tests
*
*****************************************************************************
*****************************************************************************
PROCEDURE LJUNGBOX2 depvar start end
TYPE SERIES depvar
TYPE INTEGER start end
OPTION INTEGER lags 4
OPTION INTEGER span 0
OPTION INTEGER dfc 0
LOCAL INTEGER startl endl qspan
INQUIRE(SERIES=depvar) startl>>start endl>>end
IF (span.eq.0)
   COMPUTE qspan = lags
ELSE
   COMPUTE qspan = span
CORRELATE(number=lags,noprint,qstat,span=qspan,dfc=dfc) depvar startl endl
COMPUTE QSIG = %SIGNIF
END LJUNGBOX2
*****************************************************************************
*****************************************************************************
PROCEDURE SCTEST depvar start end
TYPE SERIES depvar
TYPE INTEGER start end
OPTION INTEGER lags 4
OPTION SWITCH constant 1
OPTION SWITCH print 0
LOCAL SERIES resids
LOCAL INDEX reglist
LOCAL INTEGER startl endl
ENTER(varying) reglist
INQUIRE(regressorlist) startl>>start endl>>end
#reglist depvar
LINREG(print=print) depvar startl endl resids
#reglist
LINREG(noprint,dfc=%nreg) resids startl+lags endl
#reglist resids{1 to lags}
DISPLAY ' '
COMPUTE SCSIGNIF=%CHISQR(%TRSQ,LAGS)
DISPLAY 'LM Test for Serial Correlation of Order ' Lags ' for ' %LABEL(depvar)
DISPLAY 'Test Statistic:' %TRSQ ' Significance Level:' #.##### SCSIGNIF
DISPLAY ' '
END SCTEST
*****************************************************************************
*****************************************************************************
PROCEDURE SELECTUNI series start end
TYPE series series
OPTION integer number 24
OPTION choice det 2 none constant trend
LOCAL integer maxlag lagnum
LOCAL series aic bic trend delseries
LOCAL REAL aico bico
INQUIRE(series=series) startl>>start endl>>end
COMPUTE nobs = endl-startl+1
SET delseries startl endl = series - series{1}
COMPUTE maxlag = number
SMPL maxlag nobs
IF (det.eq.1) {
   CMOM
   # delseries{1 to maxlag} series{0 to 1}
   SET aic 1 maxlag = 0.
   SET bic 1 maxlag = 0.
   LINREG(cmom,noprint) series
   # series{1}
   COMPUTE aico = log(%rss/%nobs) + 2.*%nreg/%nobs
   COMPUTE bico = log(%rss/%nobs) + (1.*%nreg/%nobs)*log(%nobs)
   DO lagnum = 1,maxlag
      LINREG(cmom,noprint) series
      # series{1} delseries{1 to lagnum}
      COMPUTE aic(lagnum) = log(%rss/%nobs) + 2.*%nreg/%nobs
      COMPUTE bic(lagnum) = log(%rss/%nobs) + (1.*%nreg/%nobs)*log(%nobs)
   END DO
}
ELSE IF (det.eq.2) {
   CMOM
   # constant delseries{1 to maxlag} series{0 to 1}
   SET aic 1 maxlag = 0.
   SET bic 1 maxlag = 0.
   LINREG(noprint,cmom) series
   # constant series{1}
   COMPUTE aico = log(%rss/%nobs) + 2.*%nreg/%nobs
   COMPUTE bico = log(%rss/%nobs) + (1.*%nreg/%nobs)*log(%nobs)
   DO lagnum = 1,maxlag
      LINREG(cmom,noprint) series
      # constant series{1} delseries{1 to lagnum}
      COMPUTE aic(lagnum) = log(%rss/%nobs) + 2.*%nreg/%nobs
      COMPUTE bic(lagnum) = log(%rss/%nobs) + (1.*%nreg/%nobs)*log(%nobs)
   END DO
}
ELSE {
   SET TREND = T
   CMOM
   # constant trend delseries{1 to maxlag} series{0 to 1}
   SET aic 1 maxlag = 0.
   SET bic 1 maxlag = 0.
   LINREG(noprint,cmom) series
   # constant trend series{1}
   COMPUTE aico = log(%rss/%nobs) + 2.*%nreg/%nobs
   COMPUTE bico = log(%rss/%nobs) + (1.*%nreg/%nobs)*log(%nobs)
   DO lagnum = 1,maxlag
      LINREG(cmom,noprint) series
      # constant trend series{1} delseries{1 to lagnum}
      COMPUTE aic(lagnum) = log(%rss/%nobs) + 2.*%nreg/%nobs
      COMPUTE bic(lagnum) = log(%rss/%nobs) + (1.*%nreg/%nobs)*log(%nobs)
   END DO
}
DISPLAY 'INFORMATION CRITERIA'
DISPLAY ' '
EXTREMUM(noprint) aic 1 maxlag
IF (%MINIMUM.lt.aico) {
   DISPLAY '  Minimum AIC at lag: ' %MINENT
   COMPUTE aicmin = %MINENT
}
ELSE {
   DISPLAY '  Minimum AIC at lag: 0'
   COMPUTE aicmin = 0
}
EXTREMUM(noprint) bic 1 maxlag
IF (%MINIMUM.lt.bico) {
   DISPLAY '  Minimum BIC at lag: ' %MINENT
   COMPUTE bicmin = %MINENT
}
ELSE {
   DISPLAY '  Minimum BIC at lag: 0'
   COMPUTE bicmin = 0
}
DISPLAY ' '
END SELECTUNI
*****************************************************************************
*****************************************************************************
*****************************************************************************
PROCEDURE ADF series start end
TYPE series series
TYPE integer start end
OPTION CHOICE DET 2 NONE CONSTANT TREND
OPTION CHOICE CRITERION 2 AIC BIC LBtest LMtest
OPTION INTEGER MAXLAG 12
OPTION REAL ARSIGNIF 0.05
OPTION REAL SIGNIF 0.05
OPTION INTEGER SCLAG 8
LOCAL real signf sig dfsig
LOCAL integer lag maxlag sclag
LOCAL series trend
INQUIRE(series=series) startl>>start endl>>end
DISPLAY ' ' ; DISPLAY ' '
DISPLAY '**************************************************************'
DISPLAY '* TESTING THE NULL HYPOTHESIS OF A UNIT ROOT IN' %LABEL(series) @61 '*'
DISPLAY '* Choosing the optimal lag length for the ADF regression' @61 '*'
IF (criterion.eq.1)
   DISPLAY '* using the AIC selection criterion.' @61 '*'
ELSE IF (criterion.eq.2)
   DISPLAY '* using the BIC selection criterion.' @61 '*'
ELSE IF (criterion.eq.3) {
   DISPLAY '* by adding lags until the Ljung-Box test rejects' @61 '*'
   DISPLAY '* residual serial correlation at level' @+0 #.### arsignif @-1 '.' @61 '*'
}
ELSE {
   DISPLAY '* by adding lags until a Lagrange Multiplier test rejects' @61 '*'
   DISPLAY '* residual serial correlation at level' @+0 #.### arsignif @-1 '.' @61 '*'
}
DISPLAY '**************************************************************'
DISPLAY ' '
DISPLAY 'Using data from ' %datelabel(startl) 'to' %datelabel(endl)
DISPLAY
DISPLAY
COMPUTE signdf = 0.05
IF %defined(Signif) {
   IF (signif.ne.0.01.and.signif.ne.0.05.and.signif.ne.0.10) {
      DISPLAY 'YOU MUST CHOOSE A SIGNIFICANCE LEVEL OF 0.01, 0.05, OR 0.10 FOR THE ADF CRITICAL VALUE'
      DISPLAY ' DEFAULTING TO 0.05'
      DISPLAY
   }
   ELSE
      COMPUTE signdf = Signif
}
***************************************************************************
SET delseries startl endl = series - series{1}
IF (criterion.lt.3) {
   IF (det.eq.1) {
      @SELECTUNI(number=maxlag,det=none) series startl endl
      IF (criterion.eq.1) {
         IF (aicmin.gt.0) {
            LINREG(noprint) series startl endl resids
            # series{1} delseries{1 to aicmin}
            COMPUTE lag = aicmin
         }
         ELSE {
            LINREG(noprint) series startl endl resids
            # series{1}
            COMPUTE lag = 0
         }
      }
      ELSE {
         IF (bicmin.gt.0) {
            LINREG(noprint) series startl endl resids
            # series{1} delseries{1 to bicmin}
            COMPUTE lag = bicmin
         }
         ELSE {
            LINREG(noprint) series startl endl resids
            # series{1}
            COMPUTE lag = 0
         }
      }
   }
   ELSE IF (det.eq.2) {
      @SELECTUNI(number=maxlag,det=constant) series startl endl
      IF (criterion.eq.1) {
         IF (aicmin.gt.0) {
            LINREG(noprint) series startl endl resids
            # series{1} constant delseries{1 to aicmin}
            COMPUTE lag = aicmin
         }
         ELSE {
            LINREG(noprint) series startl endl resids
            # series{1} constant
            COMPUTE lag = 0
         }
      }
      ELSE {
         IF (bicmin.gt.0) {
            LINREG(noprint) series startl endl resids
            # series{1} constant delseries{1 to bicmin}
            COMPUTE lag = bicmin
         }
         ELSE {
            LINREG(noprint) series startl endl resids
            # series{1} constant
            COMPUTE lag = 0
         }
      }
   }
   ELSE IF (det.eq.3) {
      SET TREND = T
      @SELECTUNI(number=maxlag,det=trend) series startl endl
      IF (criterion.eq.1) {
         IF (aicmin.gt.0) {
            LINREG(noprint) series startl endl resids
            # series{1} constant trend delseries{1 to aicmin}
            COMPUTE lag = aicmin
         }
         ELSE {
            LINREG(noprint) series startl endl resids
            # series{1} constant trend
            COMPUTE lag = 0
         }
      }
      ELSE {
         IF (bicmin.gt.0) {
            LINREG(noprint) series startl endl resids
            # series{1} constant trend delseries{1 to bicmin}
            COMPUTE lag = bicmin
         }
         ELSE {
            LINREG(noprint) series startl endl resids
            # series{1} constant trend
            COMPUTE lag = 0
         }
      }
   }
}
***************************************************************************
ELSE {
   IF %DEFINED(arsignif)
      COMPUTE signf = arsignif
   ELSE
      COMPUTE signf = 0.05
   COMPUTE lag = 0 ; COMPUTE sig = 0.0
   ***
   IF (det.eq.2) {
      WHILE (sig.lt.signf) {
         DISPLAY 'Adding lag' lag
         IF (lag.gt.0)
            LINREG(noprint) series startl endl resids
            # series{1} constant delseries{1 to lag}
         ELSE
            LINREG(noprint) series startl endl resids
            # series{1} constant
         IF (criterion.eq.3) {
            @LJUNGBOX2(lags=sclag) resids startl endl
            COMPUTE sig = qsig
         }
         ELSE IF (criterion.eq.4) {
            IF (lag.gt.0) {
               @SCTEST(lags=sclag) series startl endl
               # series{1} constant delseries{1 to lag}
               COMPUTE sig = scsignif
            }
            ELSE {
               @SCTEST(lags=sclag) series startl endl
               # series{1} constant
               COMPUTE sig = scsignif
            }
         }
         IF (sig.gt.signf)
            IF (lag.gt.0)
               LINREG(noprint) series startl endl resids
               # series{1} constant delseries{1 to lag}
            ELSE
               LINREG(noprint) series startl endl resids
               # series{1} constant
         ELSE
            COMPUTE lag = lag + 1
      }
   }
   ***
   ELSE IF (det.eq.3) {
      SET TREND startl endl = T
      WHILE (sig.lt.signf) {
         DISPLAY 'Adding lag' lag
         IF (lag.gt.0)
            LINREG(noprint) series startl endl resids
            # series{1} constant trend delseries{1 to lag}
         ELSE
            LINREG(noprint) series startl endl resids
            # series{1} constant trend
         IF (criterion.eq.3) {
            @LJUNGBOX2(lags=sclag) resids startl endl
            COMPUTE sig = qsig
         }
         ELSE IF (criterion.eq.4) {
            IF (lag.gt.0) {
               @SCTEST(lags=sclag) series startl endl
               # series{1} constant trend delseries{1 to lag}
               COMPUTE sig = scsignif
            }
            ELSE {
               @SCTEST(lags=sclag) series startl endl
               # series{1} constant trend
               COMPUTE sig = scsignif
            }
         }
         IF (sig.gt.signf)
            IF (lag.gt.0)
               LINREG(noprint) series startl endl resids
               # series{1} constant trend delseries{1 to lag}
            ELSE
               LINREG(noprint) series startl endl resids
               # series{1} constant trend
         ELSE
            COMPUTE lag = lag + 1
      }
   }
   **
   ELSE IF (det.eq.1) {
      WHILE (sig.lt.signf) {
         DISPLAY 'Adding lag' lag
         IF (lag.gt.0)
            LINREG(noprint) series startl endl resids
            # series{1} delseries{1 to lag}
         ELSE
            LINREG(noprint) series startl endl resids
            # series{1}
         IF (criterion.eq.3) {
            @LJUNGBOX2(lags=sclag) resids startl endl
            COMPUTE sig = qsig
         }
         ELSE IF (criterion.eq.4) {
            IF (lag.gt.0) {
               @SCTEST(lags=sclag,noconstant) series startl endl
               # series{1} delseries{1 to lag}
               COMPUTE sig = scsignif
            }
            ELSE {
               @SCTEST(lags=sclag,noconstant) series startl endl
               # series{1}
               COMPUTE sig = scsignif
            }
         }
         IF (sig.gt.signf)
            IF (lag.gt.0)
               LINREG(noprint) series startl endl resids
               # series{1} delseries{1 to lag}
            ELSE
               LINREG(noprint) series startl endl resids
               # series{1}
         ELSE
            COMPUTE lag = lag + 1
      }
   }
   **
}
COMPUTE teststat = (%beta(1)-1.)/sqrt(%seesq*%xx(1,1))
DISPLAY ' '
DISPLAY '**************************************************************'
DISPLAY '* AUGMENTED DICKEY-FULLER TEST FOR' %LABEL(series) 'WITH' lag 'LAGS:' @52 ##.#### teststat @61 '*'
COMPUTE nobs = endl - (startl+lag)
IF (DET.eq.1) {
   IF (signdf.eq.0.01)
      COMPUTE cval = -2.5658 - 1.960/nobs - 10.04/(nobs**2)
   ELSE IF (signdf.eq.0.05)
      COMPUTE cval = -1.9393 - 0.398/nobs
   ELSE
      COMPUTE cval = -1.6156 - 0.181/nobs
}
ELSE IF (DET.eq.2) {
   IF (signdf.eq.0.01)
      COMPUTE cval = -3.4335 - 5.999/nobs - 29.25/(nobs**2)
   ELSE IF (signdf.eq.0.05)
      COMPUTE cval = -2.8621 - 2.738/nobs - 8.36/(nobs**2)
   ELSE
      COMPUTE cval = -2.5671 - 1.438/nobs - 4.48/(nobs**2)
}
ELSE {
   IF (signdf.eq.0.01)
      COMPUTE cval = -3.9638 - 8.353/nobs - 47.44/(nobs**2)
   ELSE IF (signdf.eq.0.05)
      COMPUTE cval = -3.4126 - 4.039/nobs - 17.83/(nobs**2)
   ELSE
      COMPUTE cval = -3.1279 - 2.418/nobs - 7.58/(nobs**2)
}
DISPLAY '* AT LEVEL' #.## signdf 'THE TABULATED CRITICAL VALUE:' @52 ##.#### cval @61 '*'
IF (DET.eq.2) {
   DISPLAY '*' @61 '*'
   DISPLAY '* Coefficient and T-Statistic on the Constant:' @61 '*'
   DISPLAY '*' @3 ###.##### %beta(2) @20 ###.#### %beta(2)/sqrt(%seesq*%xx(2,2)) @61 '*'
}
IF (DET.eq.3) {
   DISPLAY '*' @61 '*'
   DISPLAY '* Coefficient and T-Statistic on the Constant:' @61 '*'
   DISPLAY '*' @3 ###.##### %beta(2) @20 ###.#### %beta(2)/sqrt(%seesq*%xx(2,2)) @61 '*'
   DISPLAY '* Coefficient and T-Statistic on the Linear Trend:' @61 '*'
   DISPLAY '*' @3 ###.##### %beta(3) @20 ###.#### %beta(3)/sqrt(%seesq*%xx(3,3)) @61 '*'
}
DISPLAY '**************************************************************'
DISPLAY
END

Francisco Jose Climent Diranzo          E-Mail: fcliment@uv.es
Profesor del Departamento de Economia Financiera y Matematica
Tlf: +34 6 382 83 69    Fax: +34 6 382 83 70
Facultad de Ciencias Economicas y Empresariales
Edificio departamental Oriental
Avd. Els Tarongers s/n
46022 Valencia
Spain

---------- End of message ----------

From: "Philippe PROTIN"
To: "RATS Discussion List"
Subject: generalized Wald test
Date: Tue, 3 Mar 1998 17:38:42 +0200
Organization: Ecole Superieure des Affaires

Dear RATS users,

I am estimating a multivariate GARCH-M system and need to test the equality of 7 parameters: b1=b2=b3=...=b7=b.
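The restriction b1 = b2 = ... = b7 can be expressed as R·b = 0 with a 6x7 contrast matrix, giving the Wald statistic W = (Rb)'(R V R')^(-1)(Rb), asymptotically chi-squared with 6 degrees of freedom. A generic numpy sketch (illustrative only; in practice b and its covariance V would come from the estimated GARCH-M system):

```python
import numpy as np

def wald_equality(b, V):
    """Wald statistic for H0: b[0] = b[1] = ... = b[k-1].

    b: (k,) coefficient vector; V: (k,k) estimated covariance of b.
    Returns W, to be compared with a chi-square(k-1) critical value.
    """
    k = len(b)
    R = np.zeros((k - 1, k))          # contrast matrix: row i encodes b[i] - b[i+1]
    for i in range(k - 1):
        R[i, i], R[i, i + 1] = 1.0, -1.0
    rb = R @ np.asarray(b, dtype=float)
    return float(rb @ np.linalg.solve(R @ V @ R.T, rb))
```

With equal coefficients the statistic is exactly zero; in RATS the same quadratic form can be built from the coefficient vector and covariance matrix saved after estimation.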
I know the generalized Wald test is robust and wonder how to compute it in RATS. Does anybody have any advice?

Thanks for your help.

*********************
Philippe PROTIN
ESA-CERAG
BP 47X
38040 GRENOBLE CEDEX 09
FRANCE
04.76.82.57.48.
protin@esa.upmf-grenoble.fr

---------- End of message ----------

From: "Christopher F Baum"
To: "RATS Discussion List"
Subject: GPH_SEAS.SRC
Date: Tue, 03 Mar 1998 21:40:57 -0500

GPH_SEAS.SRC: This procedure is a generalization of Estima's GPH.SRC procedure, which performs the Geweke/Porter-Hudak log-periodogram regression on a time series to estimate the order of fractional integration. As recently noted by Ooms and Hassler (Econ. Letters, 56:2, 1997), this regression will inappropriately include ordinates corresponding to seasonal frequencies when the data series have been seasonally adjusted.
The modified procedure 'zero-pads' the series and removes those ordinates, following the recommendations of Ooms and Hassler. An option permits disabling these features for ready comparison with the standard GPH regression. The procedure is available from the Boston College Department of Economics Statistical Software Component Archive at

http://ideas.uqam.ca/ideas/data/bocbocode.html

Look near the top of the list for GPH_SEAS. There is also another package (at the bottom of the list) of tools for long-memory estimation: GPHROB. Each package has an 'abstract' describing its function.

You are welcome to contribute to the S.S.C. Archive any RATS code that you would like to make available to other users; the Archive (despite the preponderance of Stata modules) hosts RATS, Stata and Mathematica code (and is open to GAUSS, MATLAB, etc.). If you have a documented RATS procedure you'd like to share, email it to me (baum@bc.edu); I will notify you when it is available on IDEAS (usually within 24 hours; it depends on an automated update procedure), and you can then inform the list. It will be located by any RePEc patron searching for 'RATS'.

Kit Baum
Statistical Software Component Archive maintainer

---------- End of message ----------

From: "Deitch, Jonathan"
To: "RATS Discussion List"
Subject: What would you do?
Date: Wed, 4 Mar 1998 09:45:39 -0500

I have a question that relates more to methodology than to RATS code, and I'm hoping I can get your opinions.

I have constructed a VAR model that examines budgets and enforcement outcomes for clean air policy. Several of my research hypotheses concern the relationships of the endogenous variables. While I know that Granger causality tests are directionless, my question is this:

How bad would it be to report the direction of these relationships?
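For readers who want the mechanics rather than the methodology, the block (Granger) causality F-test under discussion can be sketched outside RATS. This is an illustrative Python/numpy version for a hypothetical bivariate case (synthetic data, not Jonathan's clean-air model): it tests whether p lags of x improve an AR(p) regression for y.

```python
import numpy as np

def granger_f(y, x, p):
    """F-statistic for H0: lags 1..p of x add nothing to an AR(p) for y
    (i.e. "x does not Granger-cause y")."""
    T = len(y)
    Y = y[p:]
    lags = lambda s: np.column_stack([s[p - i: T - i] for i in range(1, p + 1)])
    Zr = np.column_stack([np.ones(T - p), lags(y)])   # restricted: own lags only
    Zu = np.column_stack([Zr, lags(x)])               # unrestricted: add lags of x
    rss = lambda Z: float(np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2))
    rss_r, rss_u = rss(Zr), rss(Zu)
    df = T - p - Zu.shape[1]                          # residual df, unrestricted model
    return ((rss_r - rss_u) / p) / (rss_u / df)

# Synthetic example in which x clearly leads y
rng = np.random.default_rng(1)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * x[t - 1] + 0.1 * rng.normal()
```

Note that the test only asks whether the lags have incremental predictive content in each direction; it does not, by itself, settle the structural question raised in the message above.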
My reasons for asking are several. First, the direction of these relationships is of substantive interest. Second, it seems easy enough to determine direction given that: 1) only two lags are required to ensure i.i.d. residuals, and 2) all of the coefficients of the block of variables in which I am interested are significantly different from zero in the same direction. Again, I know that the block causality tests are not meant to indicate direction, but needless to say, I am sorely tempted (even though I am pretty sure it would be the wrong thing to do).

Can anyone comment on my dilemma? A cite or two would be helpful as well.

J.D. Deitch
mailto:b0deitch@hq.penfed.org (that's a zero between the 'b' and the 'd')
The American University
Washington, DC USA

---------- End of message ----------

From: "J.E. Sturm"
To: "RATS Discussion List"
Subject: Kendall's rank correlation coefficient
Date: Wed, 4 Mar 1998 16:39:26 GMT+0200
Organization: Economische Faculteit - RuG

Has somebody already written a program in RATS to calculate Kendall's rank correlation coefficient and its significance level? If so, could you please send me a copy of that program?

Thanks,

Jan-Egbert Sturm
University of Groningen
Faculty of Economics
Department of General Economics
P.O. Box 800
9700 AV Groningen
The Netherlands
Phone: +31 50 3634538
Fax: +31 50 3637337
E-mail: J.E.Sturm@eco.RuG.Nl
Web: http://www.eco.rug.nl/medewerk/sturm/

"Der Mensch wächst mit seinen Aufgaben"

---------- End of message ----------

From: Peter Summers
To: "RATS Discussion List"
Subject: Re: What would you do?
Date: Thu, 05 Mar 1998 09:39:59 +1000

Dear J.D.,

You might want to have a look at some papers by John Geweke which develop measures of directional 'causality' (he uses the term 'feedback') between time series. The techniques are developed in "The measurement of linear dependence and feedback between multiple time series," Journal of the American Statistical Association 77, 304-313 (1982) (with discussion); and "Measures of conditional linear dependence and feedback between time series," JASA 79, 907-915 (1984). There's an application of the techniques in the first paper in "The superneutrality of money in the United States: an interpretation of the evidence," Econometrica 54, 1-21.

I've written RATS code to implement these measures, and would be happy to share it with anyone who's interested. We used this code in Riezman, Whiteman & Summers, "The engine of growth or its handmaiden? A time series assessment of export-led growth," Empirical Economics 21 (1), 77-113.

Hope this is useful.

Pete Summers

At 09:45 4/03/98 -0500, you wrote:
>I have a question that relates more to methodology than to RATS code,
>and I'm hoping I can get your opinions.
>
>I have constructed a VAR model that examines budgets and enforcement
>outcomes for clean air policy. Several of my research hypotheses
>concern the relationships of the endogenous variables. While I know the
>Granger causality tests are directionless, my question is this:
>
>How bad would it be to report direction of these relationships?
>
>My reasons for asking are several. First, the direction of these
>relationships is of substantive interest. Second, it seems easy enough
>to determine direction given that: 1) only two lags are required to
>ensure i.i.d.
>residuals and 2) all of the coefficients of the block of
>variables in which I am interested are significantly different from zero
>in the same direction. Again, I know that the block causality tests are
>not meant to indicate direction, but needless to say, I am sorely
>tempted (even though I am pretty sure it would be the wrong thing to
>do.)
>
>Can anyone comment on my dilemma? A cite or two would be helpful as
>well.
>
>J.D. Deitch
>mailto:b0deitch@hq.penfed.org (that's a zero between the 'b' and the 'd')
>The American University
>Washington, DC USA

==============================================================================
Melbourne Institute of Applied Economic and Social Research
University of Melbourne
Parkville, VIC 3052
AUSTRALIA
ph: (03) 9344-5313
fax: (03) 9344-5630

---------- End of message ----------

From: "Frieder Knüpling"
To: "RATS Discussion List"
Subject: GLS with nonscalar covariance matrix
Date: Thu, 05 Mar 1998 17:45:13 +0100

I want to estimate a single-equation linear regression model with an unknown covariance matrix of the (normal) disturbances by maximum likelihood. I make strong assumptions about the covariance matrix, namely that it is diagonal with only two free parameters on the diagonal. Instead of using NLLS or related procedures, I first want to estimate the parameters of the covariance matrix by maximizing the concentrated log-likelihood function, and then use them to compute the ML estimators of the remaining parameters (cf. Judge/Griffiths/Hill/Luetkepohl/Lee 1985, p. 180 ff.). Does anyone have RATS code for this problem, or can anyone give me hints on how to program it efficiently?
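The two-variance diagonal-covariance problem described here can be sketched numerically. Assuming the observations split into two known groups, each with its own error variance (my assumption for illustration; the data, names, and grouping below are made up and do not come from the message or from Judge et al.), alternating between the GLS estimate of beta and the group variance estimates converges to the ML solution for this kind of model:

```python
import numpy as np

def two_variance_ml(y, X, g, iters=50):
    """ML for y = X b + e, Var(e_i) = s1 for group g, s2 for the rest.

    Alternates weighted least squares for b with the variance updates;
    for this model the zigzag iteration converges to the ML estimates.
    """
    w = np.ones(len(y))
    s1 = s2 = 1.0
    for _ in range(iters):
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
        r = y - X @ beta
        s1, s2 = float(np.mean(r[g] ** 2)), float(np.mean(r[~g] ** 2))
        w = np.where(g, 1.0 / np.sqrt(s1), 1.0 / np.sqrt(s2))
    return beta, s1, s2

# Made-up example: two variance regimes, true beta = (1, 2)
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
g = np.arange(n) < n // 2                      # first half: low-variance group
e = rng.normal(size=n) * np.where(g, 0.5, 2.0)
y = X @ np.array([1.0, 2.0]) + e
beta, s1, s2 = two_variance_ml(y, X, g)
```

The same alternation could be written in RATS with LINREG using weighted observations, concentrating the likelihood over the variance ratio as the message suggests.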
Thanks, yours sincerely

Frieder

--
Frieder Knuepling
Albert-Ludwigs-Universitaet Freiburg
Institut fuer Allgemeine Wirtschaftsforschung
Abteilung Statistik und Oekonometrie
Belfortstr. 24
D-79098 Freiburg
Tel +49 761 / 203 - 2341
Fax +49 761 / 203 - 2340

---------- End of message ----------

From: jjulioro@banrep.gov.co (Juan Julio Roman)
To: "RATS Discussion List"
Subject: TESTING FOR ZERO DRIFT IN THE CATS/CIDRIFT MODEL
Date: Thu, 05 Mar 1998 11:51:55 -0500
Organization: Banco de la Republica

Dear RATS-lers,

When estimating a VECM in CATS using the CIDRIFT model, the intercept term splits into two (unidentifiable?) parts: one that becomes a drift parameter, and another that becomes some sort of mean of the cointegrating relation. Unfortunately, CATS does not provide a model in which the mean and trend are restricted to the cointegrating relation only, and hence I cannot test the restriction using CATS.

I have two questions:

1. Does the condition that the intercept part that goes to the cointegrating relation IS THE MEAN of the cointegration error identify the decomposition?

2. If not, how can I test such a restriction?
Regards,

Juan Manuel
--
Juan Manuel Julio
Unidad de Inflacion y Programacion / Inflation and Financial Programming Unit
Subgerencia de Estudios Economicos / Economic Research Department
Banco de la Republica / The Central Bank of Colombia

---------- End of message ----------

From: monia
To: "RATS Discussion List"
Subject: Re: TESTING FOR ZERO DRIFT IN CATS/CIDRIFT MODEL
Date: Fri, 06 Mar 1998 16:36:39 +0100

Hello Juan,

>When estimating a VECM in CATS using the model CIDRIFT, the intercept
>term splits into two (unidentifiable?) parts [...]
>
>1. Does the condition that the intercept part that goes to the coint
>relation IS THE MEAN of the cointegration error identify the
>decomposition?

I'll answer in Spanish since you are from Colombia and I assume you understand it.

You can go into the CATS program itself and modify the restriction so that the constant is restricted to the cointegration space; it is very easy to do if you know some programming. If you can't manage it, I can do it and send you copies of the programs.

>2. If not, how can I test for such a restriction?

Once you define the constant and the trend within the cointegration space, you can run the significance tests. The real problem is the critical values: for this type of model no critical values exist.

Second, I don't know whether such a model makes sense economically. You may be more interested in checking whether your series show a structural break; in that case you would have to introduce dummy variables rather than a constant.

Well, I'm not sure I have understood your problem correctly. If you have any doubts, get in touch; I'd be happy to help. Modesty aside, when it comes to cointegration and programming I think I hold my own.

MONIA

Monia Ben Kaabia
Unidad de Economia Agraria
Servicio de Investigacion Agroalimentaria
Diputacion General de Aragon
Apdo. 727, E-50080-Zaragoza (Spain)
Tel: ++34-976-576361 Fax: ++34-976-575501
monia@mizar.csic.es

---------- End of message ----------

From: Rob Trevor
To: "RATS Discussion List"
Subject: Re: TESTING FOR ZERO DRIFT IN CATS/CIDRIFT MODEL
Date: Sat, 7 Mar 1998 08:02:42 +1100

Hi folks,

If that was a helpful response, could someone please translate for the benefit of the majority of our readers? Thanks

Rob Trevor

At 2:36 AM +1100 7/3/98, monia wrote:
> [...]

---------- End of message ----------

From: jtebeka@oddo.fr
To: "RATS Discussion List"
Subject: Special topics
Date: Mon, 9 Mar 1998 14:28:59 +0100

I have a problem. I would like to estimate something very particular and I cannot find a solution in the RATS manual. I have N time series; for each date I want to estimate a nonlinear model and store the estimated coefficients in time series. Thanks for your help.

Jacques.

---------- End of message ----------

From: "Christopher F Baum"
To: "RATS Discussion List"
Subject: Re: Special topics
Date: Mon, 09 Mar 1998 09:24:24 -0500

(1) Put the data into 'veced' or panel form, in which you have N*T observations. Before doing this, ensure that you have an explicit 't' variable (call it tau) associated with each time series. This will make it possible to handle an unbalanced case as well.
(2) Create a matrix (dec rect coeff(T,k)) to hold the coefficients.
(3) Run a loop (do i= ) over the T time periods, in which you create a dummy variable (thisper) for i=tau, and then use the SMPL=thisper option to select the 'observations' (which should then pick out only the tau'th obs on each unit) and estimate the nonlinear model.
(4) Within the loop, use COMPUTE to place each resulting coefficient into the appropriate cell of the coeff matrix.

When you are done, the coeff matrix will have T rows, each with the k estimated coefficients from that period's model.
You may make it into time series if you wish, or just write it out to a copy unit for further analysis (presuming that you wish to treat the results as k time series of length T).

Kit Baum
Boston College

--On Mon, Mar 9, 1998 14:28 +0100 jtebeka@oddo.fr wrote:
> I have a problem. I would like to estimate something very particular and i
> do not find a solution in the rats manual. [...]

---------- End of message ----------

From: jtebeka@oddo.fr
To: "RATS Discussion List"
Subject: Réf. : Re: Special topics
Date: Mon, 9 Mar 1998 15:58:05 +0100

I am not sure your third step is really clear. I effectively have N*T observations: for t fixed, I have N observations, and I would like to fit a nonlinear model depending on some parameters. Then I want to store these parameters into a series as t is allowed to change. So I do not understand the dummy variable in your third step.

Jacques.

"Christopher F Baum" wrote on 09/03/98 15:24:24:
> (1) Put the data into 'veced' or panel form [...]

---------- End of message ----------

From: "Christopher F Baum"
To: "RATS Discussion List"
Subject: Re: Réf. : Re: Special topics
Date: Mon, 09 Mar 1998 10:09:59 -0500

Jacques, using a dummy in RATS is often the way to deal with such a problem. E.g.
$ cat jacques.rat
comp n=3;comp tee=5
cal(panelobs=tee)
all n//tee
set z = log(t)
set z2 = sqrt(t)
set tau = %period(t)
tab
print
dec rect coeff(tee,2)
do i=1,tee
set thisper = (i==tau)
linreg(smpl=thisper) z
# constant z2
comp coeff(i,1)=%beta(1)
comp coeff(i,2)=%beta(2)
enddo
write coeff
end

Kit

--On Mon, Mar 9, 1998 15:58 +0100 jtebeka@oddo.fr wrote:
> I am not sure your third step is really clear. [...]

---------- End of message ----------

From: "Christopher F Baum"
To: "RATS Discussion List"
Subject: Re: Réf. : Re: Special topics
Date: Mon, 09 Mar 1998 10:12:41 -0500

I did the example for a balanced panel, but it would work equally well for an unbalanced panel... as long as you can define the variable tau identifying each observation by its time period, the SMPL will pick out the obs. that belong to a specific time period. Note also that this would work for cases where you wanted to include a 'window' of observations in the estimation; just modify the dummy variable to ==1 when tau .ge. i .and. tau .le. (i+2), for instance. That would create a moving window.
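[Editor's sketch: the same per-period and moving-window selection trick, outside RATS, in Python for illustration only; the data, the slope path, and all names are invented.]

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 6                                    # units and time periods
tau = np.tile(np.arange(T), N)                  # period index of each stacked obs
x = rng.normal(size=N * T)
beta_true = np.linspace(1.0, 2.0, T)            # a different slope each period
y = beta_true[tau] * x + 0.1 * rng.normal(size=N * T)

def fit(mask):
    """OLS slope (no intercept) on the selected observations."""
    return float(x[mask] @ y[mask] / (x[mask] @ x[mask]))

# Per-period estimates: the boolean mask plays the role of SMPL=thisper.
per_period = np.array([fit(tau == i) for i in range(T)])

# Moving window: tau >= i and tau <= i+2, as in the post above.
windows = np.array([fit((tau >= i) & (tau <= i + 2)) for i in range(T - 2)])

print(per_period)
print(windows)
```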
Kit Baum

--On Mon, Mar 9, 1998 15:58 +0100 jtebeka@oddo.fr wrote:
> I am not sure your third step is really clear. [...]

---------- End of message ----------

From: "Philippe PROTIN"
To: "RATS Discussion List"
Subject: residual variance-covariance matrix in MAXIMIZE
Date: Thu, 12 Mar 1998 09:53:56 +0200
Organization: Ecole Superieure des Affaires

Dear RATS users,

I am estimating a multivariate GARCH model and need to compute the covariance matrix of the residuals at the end of each iteration. Is there any way to do it using MAXIMIZE, or should I modify the log-likelihood function? Thanks for your help.

*********************
Philippe PROTIN
ESA-CERAG
BP 47X, 38040 GRENOBLE CEDEX 09, FRANCE
04.76.82.57.48.
protin@esa.upmf-grenoble.fr

---------- End of message ----------

From: Laurent Sebastien
To: "RATS Discussion List"
Subject: Variance-covariance matrix and GMM
Date: Thu, 12 Mar 1998 10:09:23 +0100

Hi everybody,

We estimate a two-equation system by GMM along the lines sketched out in section 5.11 of the manual. Can anybody tell me how to compute the variance-covariance matrix of the coefficients?

Laurent Sebastien
Assistant, FACULTE ECONOMIE, GESTION ET SCIENCES SOCIALES
UNIVERSITE DE LIEGE
Tel: 0032 (0)4/366.31.26 Fax: 0032 (0)4/366.28.51
E-Mail: S.Laurent@ulg.ac.be

---------- End of message ----------

From: Peter Mansfield
To: "RATS Discussion List"
Subject: The RESTRICT Command
Date: Mon, 16 Mar 1998 10:29:31 +1100

Hello Colleagues,

Okay, here's the problem. Estimate some parameters (e.g. alpha and beta of a GARCH(1,1) model). What is the difference between using the RESTRICT command (alpha + beta = 1) versus re-estimating the model with the restriction imposed by hand (removing beta and replacing it with 1-alpha)?
Using the first method, RATS reports a statistic (which it claims has a chi-squared distribution) following the RESTRICT command. Question: how does RATS compute this statistic?

Using the second method, I can compute by hand 2*(maxloglikelihood(original) - maxloglikelihood(re-estimated model)), which I believe has a chi-squared distribution.

The conclusions one might draw from these two different methods are not necessarily the same. Comments are welcome. Thanks in advance,

Peter Mansfield

Peter J Mansfield, School of Accounting and Finance, University of Tasmania
GPO Box 252-86, Hobart, Tasmania 7001, Australia
EMail: peter.mansfield@utas.edu.au Phone: (03) 6226 7591 Fax: (03) 6226 7845

---------- End of message ----------

From: "Wai Lee"
To: "RATS Discussion List"
Subject: FW: The RESTRICT Command
Date: Mon, 16 Mar 1998 08:53:16 -0400

Peter:

About two months ago, someone asked how to test a restriction in GARCH estimation, and I replied with the suggestion of using the likelihood ratio test (your "by hand" method). In the same message, I also asked what the difference would be if I used RESTRICT, and how RATS computed the statistic in that case (with MLE). Tom Maycock replied and suggested I read Section 6.1 of the RATS manual. I asked the question precisely because I had read that section. I read it again, but I still cannot answer your question, and thus I am doing it "by hand."

W. Lee

Peter.Mansfield@utas.edu.au on 03/15/98 06:29:31 PM wrote:
> [...]

---------- End of message ----------

From: JASON LAU
To: "RATS Discussion List"
Subject: BIVARIATE GARCH
Date: Mon, 16 Mar 1998 22:08:41 +0800 (EAT)

Dear RATS users,

I am doing a project on how the futures market affects the stock market using a bivariate GARCH model. If I don't allow for cross effects in the variance equations, most of the coefficients are significant. However, if I allow for the cross effects in the variance equations, almost none of the coefficients except two are significant. I don't know whether I have made any mistakes in my code. I appreciate any advice. Thank you very much!
Jason Lau
Economics and Finance
University of Hong Kong

****************************************************
The following code allows for cross effects in the variance equations
****************************************************

calendar(irregular)
allocate 72
open data FRCR.txt
data(format=prn,org=obs) / FR CR
NONLIN B11 B21
FRML RESID1 = FR-B11
FRML RESID2 = CR-B21
LINREG FR / R1
# CONSTANT
COMPUTE B11=%BETA(1)
LINREG CR / R2
# CONSTANT
COMPUTE B21=%BETA(1)
VCV(MATRIX=RR,NOPRINT)
# R1 R2
DECLARE SERIES U1 U2
DECLARE SERIES H11 H12 H22
SET U1 = R1
SET U2 = R2
SET H11 = RR(1,1)
SET H22 = RR(2,2)
SET H12 = RR(1,2)
DECLARE SYMMETRIC H
DECLARE VECTOR U
DECLARE FRML H11F H22F H12F
FRML LOGL = $
  H11(T)=H11F(T), H22(T)=H22F(T), H12(T)=H12F(T), $
  U1(T)=RESID1(T), U2(T)=RESID2(T), $
  H=||H11(T)|H12(T),H22(T)||, $
  U=||U1(T),U2(T)||, $
  %LOGDENSITY(H,U)
NONLIN(ADD) VC11 VC12 VC22 VA11 VA12 VA21 VA22 VB11 VB12 VB21 VB22
*
FRML H11F = (VC=||VC11,VC12|0.0,VC22||), $
  (VA=||VA11,VA12|VA21,VA22||), $
  (VB=||VB11,VB12|VB21,VB22||), $
  (H=||H11{1}|H12{1},H22{1}||), $
  (UB=||U1{1},U2{1}||*VB), $
  (H=TR(VC)*VC+%MQFORM(H,VA)+TR(UB)*UB), $
  H(1,1)
FRML H12F = H(1,2)
FRML H22F = H(2,2)
COMPUTE CINIT = %DECOMP(RR)
COMPUTE VC11=CINIT(1,1) , VC12 = CINIT(1,2) , VC22=CINIT(2,2)
COMPUTE VA11 = VA22 = 0.05 , VA12 = VA21 = 0.0
COMPUTE VB11 = VB22 = 0.05 , VB12 = VB21 = 0.0
NLPAR(SUBITS=50)
maximize(method=simplex,recursive,iters=30,noprint) logl 2 *
maximize(iters=300,method=bhhh,recursive) logl 2 *

Dependent Variable FR - Estimation by Least Squares
Usable Observations 72   Degrees of Freedom 71
Centered R**2 -0.000000   R Bar **2 -0.000000
Uncentered R**2 0.007632   T x R**2 0.549
Mean of Dependent Variable -0.004108458
Std Error of Dependent Variable 0.047177866
Standard Error of Estimate 0.047177866
Sum of Squared Residuals 0.1580283239
Durbin-Watson Statistic 2.486487
Q(18-0) 34.578792   Significance Level of Q 0.01067441

Variable        Coeff          Std Error     T-Stat     Signif
*******************************************************************************
1. Constant    -0.004108458   0.005559965   -0.73894   0.46238174

Dependent Variable CR - Estimation by Least Squares
Usable Observations 72   Degrees of Freedom 71
Centered R**2 0.000000   R Bar **2 0.000000
Uncentered R**2 0.010734   T x R**2 0.773
Mean of Dependent Variable -0.004162750
Std Error of Dependent Variable 0.040242528
Standard Error of Estimate 0.040242528
Sum of Squared Residuals 0.1149817354
Durbin-Watson Statistic 2.362286
Q(18-0) 33.151409   Significance Level of Q 0.01599989

Variable        Coeff          Std Error     T-Stat     Signif
*******************************************************************************
1. Constant    -0.004162750   0.004742627   -0.87773   0.38305037

Estimation by BHHH
Iterations Taken 52
Usable Observations 71   Degrees of Freedom 58
Function Value 505.17983210

Variable     Coeff          Std Error      T-Stat          Signif
*******************************************************************************
1.  B11     -0.00073061    0.00393602    -0.18562         0.85274203
2.  B21     -0.00125022    0.00338280    -0.36958         0.71169434
3.  VC11     0.00554130    0.00978094     0.56654         0.57102584
4.  VC12     0.01111306    0.01118002     0.99401         0.32021770
5.  VC22    -0.00000358   87.62464920    -4.08884e-008    0.99999997
6.  VA11    -0.84484607    1.42382707    -0.59336         0.55293836
7.  VA12    -0.59525494    1.15828814    -0.51391         0.60731547
8.  VA21     0.01286889    1.84856784     0.00696         0.99444553
9.  VA22    -0.01323576    1.48370859    -0.00892         0.99288239
10. VB11    -0.91914389    0.62479777    -1.47111         0.14126239
11. VB12    -0.70711591    0.46061704    -1.53515         0.12474717
12. VB21     1.60237729    0.80405926     1.99286         0.04627681
13. VB22     1.45423354    0.64085231     2.26922         0.02325508

**************************************************************
The following code does not allow for the cross effects in the variance equations
**************************************************************

calendar(irregular)
allocate 72
open data FRCR.TXT
data(format=prn,org=obs) / FR CR
NONLIN B11 B21
FRML RESID1 = FR-B11
FRML RESID2 = CR-B21
LINREG FR / R1
# CONSTANT
COMPUTE B11=%BETA(1)
LINREG CR / R2
# CONSTANT
COMPUTE B21=%BETA(1)
VCV(MATRIX=RR,NOPRINT)
# R1 R2
DECLARE SERIES U1 U2
DECLARE SERIES H11 H12 H22
SET U1 = R1
SET U2 = R2
SET H11 = RR(1,1)
SET H22 = RR(2,2)
SET H12 = RR(1,2)
DECLARE SYMMETRIC H
DECLARE VECTOR U
DECLARE FRML H11F H22F H12F
FRML LOGL = $
  H11(T)=H11F(T), H22(T)=H22F(T), H12(T)=H12F(T), $
  U1(T)=RESID1(T), U2(T)=RESID2(T), $
  H=||H11(T)|H12(T),H22(T)||, $
  U=||U1(T),U2(T)||, $
  %LOGDENSITY(H,U)
NONLIN(ADD) VC11 VC12 VC22 VB11 VB12 VB22 VA11 VA12 VA22
FRML H11F = VC11+VA11*H11(T-1)+VB11*U1(T-1)**2
FRML H22F = VC22+VA22*H22(T-1)+VB22*U2(T-1)**2
FRML H12F = VC12+VA12*H12(T-1)+VB12*U1(T-1)*U2(T-1)
COMPUTE VC11 = RR(1,1), VC22 = RR(2,2), VC12 = RR(1,2)
COMPUTE VB11 = VB22 = VA11 = VA22 = 0.05, VB12 = VA12 = 0.0
NLPAR(SUBITS=50)
maximize(method=simplex,recursive,iters=5,noprint) logl 2 *
maximize(iters=300,method=bhhh,recursive) logl 2 *

[Least squares output for FR and CR identical to that shown above]

Estimation by BHHH
Iterations Taken 181
Usable Observations 71   Degrees of Freedom 60
Function Value 498.84293964

Variable     Coeff           Std Error      T-Stat    Signif
*******************************************************************************
1.  B11     0.0016805384   0.0038531742   0.43614   0.66273234
2.  B21     0.0016646038   0.0035301202   0.47154   0.63725296
3.  VC11    0.0007146801   0.0002514319   2.84244   0.00447697
4.  VC12    0.0006191391   0.0002274929   2.72158   0.00649714
5.  VC22    0.0005745600   0.0002110542   2.72233   0.00648226
6.  VB11    0.3649773495   0.1404648440   2.59835   0.00936719
7.  VB12    0.3313974988   0.1298914055   2.55134   0.01073087
8.  VB22    0.3831216584   0.1320576303   2.90117   0.00371772
9.  VA11    0.1055983589   0.1132997831   0.93203   0.35132302
10. VA12    0.0956154745   0.1305039087   0.73266   0.46376358
11. VA22    0.0539295732   0.1180810436   0.45672   0.64787476

---------- End of message ----------

From: DRTEF@jazz.ucc.uno.edu
To: "RATS Discussion List"
Subject: Re: The RESTRICT Command
Date: Mon, 16 Mar 1998 13:04:51 -0600 (CST)

RESTRICT calculates a Wald test. Asymptotically, this is equivalent to a likelihood ratio or LM test.
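[Editor's illustration of the distinction: a small numerical example, in Python rather than RATS, with an invented linear model, comparing a Wald statistic for a + b = 1 against the "by hand" LR statistic obtained by substituting b = 1 - a and re-estimating.]

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
x1, x2 = rng.normal(size=(2, n))
y = 0.6 * x1 + 0.4 * x2 + 0.5 * rng.normal(size=n)   # truth satisfies a + b = 1

X = np.column_stack([x1, x2])

# Unrestricted ML (= OLS here).
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
ssr_u = e @ e

# Wald test of R*beta = r with R = [1 1], r = 1.
s2 = ssr_u / n                                       # ML variance estimate
V = s2 * np.linalg.inv(X.T @ X)                      # covariance of beta-hat
R = np.array([[1.0, 1.0]])
diff = R @ beta - 1.0
wald = float(diff @ np.linalg.solve(R @ V @ R.T, diff))

# LR test: impose b = 1 - a by hand and re-estimate, then 2*(llf_u - llf_r).
# With Gaussian errors and concentrated variance, that equals n*log(ssr_r/ssr_u).
z = x1 - x2                                          # y - x2 = a*(x1 - x2) + e
a_r = float(z @ (y - x2) / (z @ z))
er = (y - x2) - a_r * z
ssr_r = er @ er
lr = n * np.log(ssr_r / ssr_u)

print(wald, lr)   # both asymptotically chi-squared(1); close but not equal
```

In this linear Gaussian setting the two statistics differ only in small samples, with wald >= lr always, which is the Berndt-Savin ordering mentioned in the reply below the code.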
In small samples, Berndt and Savin (Econometrica, 1977) showed that W > LR > LM, which may affect one's conclusions. Computationally, the difficulty of calculation increases from W to LM to LR (see Kennedy's Guide to Econometrics, chapter 4). These are probably the most important criteria for choosing a test (although I've found that referees complain less if you report LR tests).

> From: IN%"RATS-L@efs.mq.edu.au" "RATS Discussion List" 15-MAR-1998 17:56:46.26
> Subj: The RESTRICT Command
> [...]

David Tufte
Assistant Professor
Department of Economics and Finance
University of New Orleans
New Orleans, LA 70148
(504) 280-7094 (office) (504) 280-6397 (fax)
DRTEF@UNO.EDU

---------- End of message ----------

From: JASON LAU
To: "RATS Discussion List"
Subject: BIVARIATE GARCH
Date: Tue, 17 Mar 1998 07:06:10 +0800 (EAT)

Dear RATS users,

I am doing a project on how the futures market affects the stock market, using a switching GARCH model where the dummy is equal to 0 before time t* and equal to 1 from time t* on; the return equation and variance equation are for the stock returns. I don't know whether I have written the code correctly. I appreciate any advice. Thank you very much!
Jason Lau
Economics and Finance
University of Hong Kong

****************************************************

calendar(irregular)
allocate 168
open data CR.txt
data(format=prn,org=obs) / CR DUMMY
declare series v
nonlin b0 a0 a1 a2 a3 a4 a5
frml resid = CR-b0
frml hf = a0+a1*resid(t-1)**2+a2*v(t-1)+a3*DUMMY+a4*DUMMY*resid(t-1)**2+a5*DUMMY*v(t-1)
frml logl = (v=hf(t)), -.5*(log(v)+resid(t)**2/v)
linreg CR
# constant
compute b0=%beta(1)
compute a0=%seesq,a1=a2=a3=a4=a5=0.05
set v = %seesq
maximize(method=simplex, iters=5,noprint) logl 2 *
maximize(iters=300, method=bhhh, recursive) logl 2 *

---------- End of message ----------

From: Peter Mansfield
To: "RATS Discussion List"
Subject: 3000 is a large number ... isn't it?
Date: Tue, 17 Mar 1998 13:19:45 +1100

Colleagues,

I have a data set with approx 3000 data points. I make a first estimate of GARCH(1,1) parameters. Let maxlike(1) = the resulting maximum of the log-likelihood function. (It is about -7000.) Using the same data, I make an estimate of GARCH(1,2) parameters. maxlike(2) is also about -7000, as expected. Presumably 2*( maxlike(2) - maxlike(1) ) is chi-squared(1). My concern: the number of data points contributing to maxlike(2) is one less than the number of data points contributing to maxlike(1). For both, the average data point contributes approx 7000 / 3000 = 2.33 or so to the likelihood function.
The 5% cri

Thanks in advance,

Peter Mansfield

Peter J Mansfield, School of Accounting and Finance, University of Tasmania
GPO Box 252-86, Hobart, Tasmania 7001, Australia
EMail: peter.mansfield@utas.edu.au Phone: (03) 6226 7591 Fax: (03) 6226 7845

---------- End of message ----------

From: Hyginus Leon
To: "RATS Discussion List"
Subject: 3000 is a large number ... isn't it? -Reply
Date: Tue, 17 Mar 1998 09:09:07 -0500

>>> Peter Mansfield 03/16/98 09:19pm >>>

Your message ended abruptly and may have been truncated. However, why didn't you estimate both the GARCH(1,1) and the GARCH(1,2) on the same sample? That would be the valid basis for a test of the null that the second parameter in the GARCH(1,2) is zero.

---------- End of message ----------

From: Eric.Weigel@lgtna.com (Eric Weigel)
To: "RATS Discussion List"
Date: Tue, 17 Mar 1998 12:20:14 -0500

I am wondering if anybody has experimented with the QP optimizer in RATS, and whether anybody has written code to allow for a Markowitz type of portfolio optimization allowing for transaction costs.

Eric Weigel
eric.weigel@lgtna.na

---------- End of message ----------

From: Peter Mansfield
To: "RATS Discussion List"
Subject: Re-broadcast of: 3000 is a large number ... isn't it?
Date: Wed, 18 Mar 1998 08:46:18 +1100

Colleagues,

Apparently the following query was only partially broadcast ... so I
post it again ... with some re-phrasing.

I have a data set with approx 3000 data points. I make a first estimate
of GARCH(1,1) parameters. Let maxlike(1) = the resulting maximum of the
log-likelihood function. (It is about -7000.) Using the same data, I make
an estimate of GARCH(1,2) parameters. maxlike(2) is also about -7000, as
expected. Presumably 2*( maxlike(2) - maxlike(1) ) is chi-squared(1).

My concern: the number of data points contributing to maxlike(2) is one
less than the number contributing to maxlike(1) because there is one
more parameter to estimate (short of using the SMPL command, and then
having to experiment with

Thanks in advance,
Peter Mansfield

Peter J Mansfield                  EMail: peter.mansfield@utas.edu.au
School of Accounting and Finance
University of Tasmania
GPO Box 252-86                     Phone: (03) 6226 7591
Hobart, Tasmania 7001
Australia                          Fax: (03) 6226 7845

---------- End of message ----------

From: "Estima"
To: "RATS Discussion List"
Subject: Re: Re-broadcast of: 3000 is a large number ... isn't it?
Date: Tue, 17 Mar 1998 16:43:46 -0600

> Apparently the following query was only partially broadcast ... so I
> post it again ... with some re-phrasing.
Peter: I think your message was cut off again. You need to set the
line-length parameter on your e-mail program to something much shorter
(for most, something like 60 to 80 characters should work).

Sincerely,
Tom Maycock
Estima

------------------------------------------------------------
| Estima                    | Sales:   (800) 822-8038      |
| P.O. Box 1818             | Support: (847) 864-1910      |
| Evanston, IL 60204-1818   | Fax:     (847) 864-6221      |
| USA                       | estima@estima.com            |
|                           | http://www.estima.com        |
------------------------------------------------------------

---------- End of message ----------

From: jescobal@grade.org.pe (Javier Escobal)
To: "RATS Discussion List"
Subject: Kernel estimators
Date: Fri, 20 Mar 1998 11:20:46 -0500

Hi:

I am trying to estimate a Nadaraya-Watson kernel estimator for
transition probabilities in a panel data framework. Does anyone know
about a RATS program that can do the job?

Thanks a lot.
Javier A. Escobal

---------- End of message ----------

From: Hung-Jen Wang
To: "RATS Discussion List"
Subject: read in a large data set
Date: Sun, 29 Mar 1998 16:52:12 -0500 (EST)

Hi,

I have a huge MxN dataset with M observations and N (>700) variables,
most of them dummies. The 1st column contains the dependent variable,
the next n columns are the independent variables, and the remaining
N-n-1 are the instrument variables.
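The column layout just described (dependent variable, then n regressors, then instruments) splits naturally by position once the file is read as a single matrix. A Python/numpy illustration of the idea; the tiny inline dataset and n = 2 are made up, and with a real file you would pass its name to np.loadtxt instead of the StringIO stand-in:

```python
# Read an M x N free-format file as one matrix and slice it by column
# position, instead of naming 700+ series one by one.
import io
import numpy as np

raw = io.StringIO(          # stand-in for the real data file: M = 4, N = 7
    "1.0 0.5 0.2 1 0 1 0\n"
    "2.0 0.6 0.1 0 1 0 1\n"
    "1.5 0.4 0.3 1 1 0 0\n"
    "2.2 0.7 0.2 0 0 1 1\n"
)
n = 2                       # number of regressors (assumed)

A = np.loadtxt(raw)         # M x N matrix, one observation per row
y = A[:, 0]                 # 1st column: the dependent variable
X = A[:, 1:n + 1]           # next n columns: the regressors
Z = A[:, n + 1:].astype(np.int8)   # remaining N-n-1 columns: dummy
                                   # instruments, stored as integers

print(y.shape, X.shape, Z.shape)   # (4,) (4, 2) (4, 4)
```

The astype(np.int8) step answers the postscript about storing dummies as integers rather than reals, at one byte per entry.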
Because the number of variables is so large, it is nearly impossible to
specify each variable name and read in the dataset by

open data datasetname
data(format=free,org=obs) / variable_names

Is there any way to read in the dataset as three matrices/vectors, such
as V1 as the 1st column of the dataset (the dependent variable), V2 as
an Mxn matrix (the independent variables), and V3 as an Mx(N-n-1) matrix
(the instrument variables)? Which, I hope, will allow me to run
regressions by:

instrument V3
linreg(inst,robusterrors) V1
# V2

I tried to use commands like "dec rectangular A(_dimension_)" and "dec
rectangular[series] A(_dimension_)", followed by "read", without
success. Any suggestion will be much appreciated!

ps. Any tip on reading in the dummies as INTEGER rather than the default
REAL will also be welcome.

__________________________________________________________________
Hung-Jen Wang                internet: hungjen@umich.edu
Department of Economics      telephone: 313 764-2182
University of Michigan       FAX: 313 764-2769
Ann Arbor, MI 48109-1220     http://www.econ.lsa.umich.edu/~hungjen

---------- End of message ----------

From: Hakan Berument
To: "RATS Discussion List"
Subject: Re: residual variance-covariance matrix in maximize
Date: Mon, 30 Mar 1998 19:30:02 +0300

Dear Philippe,

I saw your earlier message and realized that you do some multivariate
GARCH models. I have a question for you. Do you assume that the
residuals are normal? If you have some other distributional formula,
like a multivariate t-distribution or a multivariate generalized error
distribution, could you share it with me? If you assume that the
residuals are normal, then do you calculate the standard errors the way
Bollerslev and Wooldridge (1992) do?
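The Bollerslev-Wooldridge calculation being asked about is the quasi-ML "sandwich" covariance: inv(H) S inv(H), with H the Hessian of the log-likelihood and S the outer product of the per-observation scores. A toy Python sketch for the mean of a normal sample, not a GARCH model; in a GARCH setting H and S would come from the full likelihood:

```python
# Quasi-ML ("sandwich") variance in the spirit of Bollerslev and
# Wooldridge (1992): robust_var = inv(H) * S * inv(H).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)   # synthetic sample

mu_hat = x.mean()                 # ML estimate of the mean
sigma2 = x.var()                  # ML estimate of the variance

scores = (x - mu_hat) / sigma2    # per-observation score for mu
S = np.sum(scores ** 2)           # sum of squared scores
H = -len(x) / sigma2              # Hessian of the log-likelihood in mu
robust_var = S / H ** 2           # inv(H) * S * inv(H)

print(f"robust s.e. of mu: {np.sqrt(robust_var):.4f}")
```

For this correctly specified toy model the sandwich collapses to the usual sigma^2/n; the two estimates differ when the assumed normal likelihood is misspecified, which is the point of the robust correction.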
If you do, could you also send a copy of a program that you use?

Thanks in advance,
Hakan

Philippe PROTIN wrote:
> Dear RATS users,
>
> I am estimating a multivariate GARCH model and need to compute the
> covariance matrix of residuals at the end of each iteration. Is there
> any way to do it using maximize? Or should I modify the log-likelihood
> function?
>
> Thanks for help
>
> *********************
> Philippe PROTIN
> ESA-CERAG
> BP 47X
> 38040 GRENOBLE CEDEX 09
> FRANCE
>
> 04.76.82.57.48.
> protin@esa.upmf-grenoble.fr

--
Hakan Berument
Department of Economics    e-mail: berument@bilkent.edu.tr
Bilkent University         Phone: + 90-312-241-1224
06533 Bilkent Ankara       Fax:   + 90-312-266-5140
Turkey                     Homepage: http://www.bilkent.edu.tr/~berument

---------- End of message ----------
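The Markowitz problem Eric Weigel raised earlier in the digest has a simple closed-form core when transaction costs are ignored: minimizing w'Cw subject to the weights summing to one gives w proportional to inv(C)1. A Python sketch with a hypothetical covariance matrix; once proportional transaction costs are added, the closed form is lost and a QP or nonlinear solver is needed:

```python
# Minimum-variance Markowitz weights, the closed-form core of the QP:
# minimize w' C w  subject to  sum(w) = 1  gives  w proportional to
# inv(C) 1.  The covariance matrix below is made up for illustration.
import numpy as np

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

ones = np.ones(len(cov))
w = np.linalg.solve(cov, ones)   # inv(C) 1, without forming the inverse
w /= w.sum()                     # rescale so the weights sum to one

print("min-variance weights:", np.round(w, 3))
```

By construction the resulting portfolio variance w'Cw is no larger than that of the equal-weight portfolio, which makes a convenient sanity check on any solver used for the full problem with costs.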