Return-Path: Received: from zaphod.bc.edu (zaphod.bc.edu [136.167.2.207]) by monet.bc.edu (8.8.7/8.8.7) with ESMTP id UAA47246 for ; Sun, 3 Sep 2000 20:08:49 -0400 From: maiser@efs.mq.edu.au Received: (from root@localhost) by zaphod.bc.edu (8.8.7/8.8.7) with X.500 id UAA939254 for baum@mail1.bc.edu; Sun, 3 Sep 2000 20:08:49 -0400 Received: from sunb.ocs.mq.edu.au (sunb.ocs.mq.edu.au [137.111.1.11]) by zaphod.bc.edu (8.8.7/8.8.7) with ESMTP id UAA891858 for ; Sun, 3 Sep 2000 20:08:14 -0400 Received: from efs01.efs.mq.edu.au (EFS01.efs.mq.edu.au [137.111.64.21]) by sunb.ocs.mq.edu.au (8.10.2/8.10.2) with ESMTP id e84086X19467 for ; Mon, 4 Sep 2000 11:08:06 +1100 (EST) Received: from EFS01/SpoolDir by efs01.efs.mq.edu.au (Mercury 1.40); 4 Sep 100 11:07:44 GMT+1000 Received: from SpoolDir by EFS01 (Mercury 1.40); 4 Sep 100 11:07:43 GMT+1000 To: baum@bc.edu Date: Mon, 4 Sep 100 11:07:42 GMT+1000 Subject: Re: Message-ID: <5DFC0A7638@efs01.efs.mq.edu.au> From: cipolla@sbu.ac.uk To: "RATS Discussion List" Subject: Bivariate Garch Date: Tue, 6 Jun 2000 11:26:34 +0000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: South Bank University MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7BIT X-Mailer: Mercury MTS (Bindery) v1.40 Dear Rats users, I am estimating a bivariate GARCH model, by using Rob Trevor code. I am trying to modify it by allowing for errors with a GED or t-student distribution, but I get as an error message: "failed to invert the covariance matrix, trying the generalised inverse". For some dataset the generalised inversion procedure works, and I can compute point estimates and standard errors. For other dataset, the generalised inversion procedure does not work: I end up with NA standard errors and NA saved residuals. Does anyone have a code which allows for errors with GED or t-distribution distributions? Many thanks Andrea Cipollini Andrea Cipollini South Bank University Business School Southwark Campus 103 Borough Road London SE1 0AA Direct Line: +44 (0)171-8157077 E-mail: cipolla@sbu.ac.uk ---------- End of message ---------- From: "Estima" To: "RATS Discussion List" Subject: Re: Bivariate Garch Date: Tue, 6 Jun 2000 09:46:04 -0500 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Estima MIME-Version: 1.0 Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT X-mailer: Pegasus Mail for Win32 (v3.11) (via Mercury MTS (Bindery) v1.40) On 6 Jun 00, at 11:26, cipolla@sbu.ac.uk wrote: > Dear Rats users, > I am estimating a bivariate GARCH model, by using Rob > Trevor code. I am trying to modify it by allowing for errors with a > GED or t-student distribution, but I get as an error message: "failed > to invert the covariance matrix, trying the generalised inverse". > For some dataset the generalised inversion procedure works, and I > can compute point estimates and standard errors. For other dataset, > the generalised inversion procedure does not work: I end up with NA > standard errors and NA saved residuals. Does anyone have a code which > allows for errors with GED or t-distribution distributions? Andrea: I don't think you are likely to find a "better" implementation of these models--the parameterizations in that example program are correct as far as I know. Rather, the problem you are experiencing most likely has to do with the initial conditions you are using (although there is always the possibility that your model simply doesn't fit the data). Thus the only likely improvement would be a method that produces better initial values for your particular model(s). If you haven't already done so, be sure to include the TRACE option on your MAXIMIZE statement. This is always a good practice, but is vital in this case, because you need to determine which iterations are producing the "Non-invertible" warning message. If you're just getting "Non-invertible" warnings on a few early iterations, but then things proceed smoothly (i.e. no such warnings on later iterations and the parameters/function value seem to converge nicely), then you are probably fine--the program has successfully moved away from bad initial conditions to an area where the model and parameter values produce valid covariance matrices. Obviously you'll want to experiment with some slightly different initial conditions to verify your results. If, on the other hand, you are seeing these warnings at later iterations (i.e. at or shortly before the last iteration performed), then any results you might get are probably meaningless--even if you don't get NA's for your coefficients (you need the covariance matrix to be invertible). You'll need to go back to the drawing board in terms of your initial guesses for the parameters. Possibly try a simpler model first to get a better idea of what to expect, etc. Sincerely, Tom Maycock Estima -- ------------------------------------------------------------ | Estima | Sales: (800) 822-8038 | | P.O. Box 1818 | (847) 864-8772 | | Evanston, IL 60204-1818 | Support: (847) 864-1910 | | USA | Fax: (847) 864-6221 | | http://www.estima.com | estima@estima.com | ------------------------------------------------------------ ---------- End of message ---------- From: Lana Poukliakova To: "RATS Discussion List" Subject: GARCHM with MA Date: Tue, 6 Jun 2000 14:27:13 -0700 (PDT) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Mailer: Excite Inbox (via Mercury MTS (Bindery) v1.40) Dear RATS users, This is my third message already. I am a new user of RATS and I try to estimate three variable GARCHM process with two moving averages. Can I use available codes form the website for two variables and change them? How can one possibly include MA in the program for GARCHM? Thanks in advance. Lana, University of Alberta, Canada _______________________________________________________ Get 100% FREE Internet Access powered by Excite Visit http://freelane.excite.com/freeisp ---------- End of message ---------- From: Yuen Phui Ling Hazel To: "RATS Discussion List" Subject: DISPLAY OF RESIDUALS CORRELATION Date: Fri, 9 Jun 2000 12:59:27 +0800 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain; Hi everyone, Does someone know how to display the correlations of residuals/ errors of a VAR system? Thanks very much. Have a Nice day. Hazel -----Original Message----- From: Yuen Phui Ling Hazel Sent: Tuesday, May 02, 2000 7:37 PM To: 'RATS-L@EFS.MQ.EDU.AU' Subject: VMA Syntax Importance: High Does anyone know the VMA syntax and procedure in RATS, apart from the one posted on the web? Hazel fbap8383@nus.edu.sg University of Singapore ---------- End of message ---------- From: =?iso-8859-1?q?Andreas=20Faust?= To: "RATS Discussion List" Subject: Re: DISPLAY OF RESIDUALS CORRELATION Date: Fri, 9 Jun 2000 05:29:56 -0700 (PDT) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Hi Hazel, if you will use finally the VAR.SRC you have the possibilty to see the correlation/error after estimation. Also you can display the ACF of the residulas. Try it! Andreas --- Yuen Phui Ling Hazel schrieb: >=20 > Hi everyone, >=20 > Does someone know how to display the correlations > of > residuals/ errors of a VAR system?=20 >=20 > Thanks very much.=20 >=20 > Have a Nice day.=20 > Hazel =20 >=20 > =20 > -----Original Message----- > From: Yuen Phui Ling Hazel =20 > Sent: Tuesday, May 02, 2000 7:37 PM > To: 'RATS-L@EFS.MQ.EDU.AU' > Subject: VMA Syntax > Importance: High >=20 > Does anyone know the VMA syntax and > procedure in RATS, apart from the one posted on the > web? >=20 >=20 > Hazel fbap8383@nus.edu.sg > University of Singapore=20 =3D=3D=3D=3D=3D Andreas Faust Av. Car=FApano N=B0 23 Qta. El Rosedal Las Palmas, Caracas Venezuela Tel.: +58 2 7818621 andreasfaust@yahoo.com __________________________________________________ Do You Yahoo!? Yahoo! Photos -- now, 100 FREE prints! http://photos.yahoo.com ---------- End of message ---------- From: "Estima" To: "RATS Discussion List" Subject: Re: DISPLAY OF RESIDUALS CORRELATION Date: Fri, 9 Jun 2000 10:18:59 -0500 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Estima MIME-Version: 1.0 Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT X-mailer: Pegasus Mail for Win32 (v3.11) (via Mercury MTS (Bindery) v1.40) > Hi everyone, > > Does someone know how to display the correlations of > residuals/ errors of a VAR system? > Just use the SIGMA option on ESTIMATE if you want to see the covariance and correlation information for the residuals: ESTIMATE(OUTSIGMA=V,sigma) You can use CMOM(CORR) on the set of residuals series: SYSTEM 1 TO NEQN VARIABLES USARGNP CANTBILL CANM1S CANRGNP CANCPINF CANUSXSR LAGS 1 TO NLAGS DET CONSTANT END(SYSTEM) dec vec[series] resids(neqn) ESTIMATE(OUTSIGMA=V,sigma) / resids(1) cmom(corr,print) # resids Sincerely, Tom Maycock Estima -- ------------------------------------------------------------ | Estima | Sales: (800) 822-8038 | | P.O. Box 1818 | (847) 864-8772 | | Evanston, IL 60204-1818 | Support: (847) 864-1910 | | USA | Fax: (847) 864-6221 | | http://www.estima.com | estima@estima.com | ------------------------------------------------------------ ---------- End of message ---------- From: "Ana Timberlake" To: "RATS Discussion List" Subject: Newsflash: Special Offers on Training Date: Fri, 09 Jun 2000 19:35:30 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Dear Colleague As a user of statistical packages, you may be interested in the news belo= w. For=20 full details of our products, training courses and other services http://= www.timberlake.co.uk=20 or http://www.timberlake-consultancy.com CONTENTS 1. Training courses in the USA 2. Internet Training courses (SPECIAL OFFER) 3. Visit our stands at conferences (SPECIAL OFFER) 4. A new course: QUANTITATIVE METHODS IN FINANCIAL ECONOMICS: The Stock M= arket,=20 the Bond Market and the Foreign Exchange Market, 26 - 29 September 2000, = University=20 of London Computer Centre (ULCC), 20 Guilford Street, London WC1N 1DZ, UK. 5. Gauss version 3.5 1. Training courses in the USA In addition to distribute third-party statistical and econometric softwar= e, we=20 aim to provide the user, in addition to our technical support, with a var= iety=20 of training courses. In the U.S.A. we are offering the following courses/= workshops=20 : We are offering 4 workshops before/after the World Conference of the Econ= ometric=20 Society, 11 - 16 August 2000, Seattle, U.S.A. During the workshops, the = delegates=20 have access to computers so that they gain hands-on experience in the sof= tware.=20 The price of the 1-day workshop is 245 USD and the 1/2 day workshop is 1= 75 USD.=20 The titles for the workshops, dates and names of the lecturers can be fou= nd below. - Modelling Dynamic Econometric Systems Using PcGive, Principal lecturer:= Prof.=20 David F. Hendry,1-day workshop, 10 August 2000, Seattle, U.S.A. - Ox Course: An Introduction to Modern Econometric Programming, Principal= lecturers:=20 Jurgen Doornik and Marius Ooms, 1-day workshop, afternoon of 16 August & = morning=20 of 17 August 2000, Seattle, U.S.A. - STAMP 6: Structural time series analyser modeller and predictor Principal lecturers: Prof. Andrew Harvey and Siem Jan Koopman, 1-day work= shop,=20 10 August 2000, Seattle, U.S.A. - Modelling Markov Switching Processes Using MSVAR for Ox, Principal lect= urer:=20 Hans-Martin Krolzig, 1/2 day workshop, 17 afternoon, August 2000, Seattle= ,=20 U.S.A. In addition we are also organising a 4-day course in New York, 7-10 Novem= ber=20 2000 - The Practice of Econometrics with EViews, Principal lecturers: Sean Hol= ly and=20 Paul Turner Details at www.timberlake.co.uk 2. Internet Training courses We run several Internet courses during the year. All courses are four ses= sions=20 and the price is: 150 GBP, 245 USD or 240 Euros. Register in 2 or more co= urses=20 before 1st July 2000 and you will get 20% discount on the total cost of c= ourses=20 - mention in your order OFFER INT JULY2000. These are some of the courses= : - Introduction to Medical Statistics with Stata - 19/Jun - 28/Jul/2000 = =09 - Advanced Medical Statistics with Stata - 2/Oct- 10/Nov/2000 =09 - Sample Size Calculations - 13/Nov - 15/Dec/2000 =09 - An Introduction to Time Series using Stata - 4/Sep - 13/Oct/2000 - Meta-analysis techniques using Stata - 6/Nov - 15/Dec/2000 =09 - Panel Data Analysis using Stata - 25/Sept - 3/Nov/2000=09 - Statistical Quality Control with Statgraphics - 4/Sept - 13/Oct/2000 = =09 3. Visit our stands at conferences We will be present at various conferences. Please visit us. We offer spec= ial=20 prices for some of the software packages and training courses during thes= e conferences.=20 If you order software prior to the conference and collect it at the confe= rence,=20 you get a 15% discount. E-mail us conf@timberlake.co.uk to discuss your o= rder.=20 You will also be saving the shipping costs. Please contact us 3 weeks pri= or to=20 the start of the conference. - The 20th International Symposium on Forecasting, Lisbon, Portugal, 21-2= 6 June=20 2000 - CEF2000 Sixth International Conference of the Society for Computation E= conomics,=20 Barcelona, Spain, 6-8 July 2000 - Third European Congress of Mathematics, Barcelona, Spain, 10 - 14 July = 2000 - World Conference of the Econometric Society, Seattle, USA, 11 -16 Augus= t 2000 - COMDEX Brazil 2000, Sao Paulo, Brazil, 22-25 August 2000 - International conference of the Royal Statistical Society, Reading, UK= , 11-15=20 September 1000 - VII Annual Conference of the Sociedade Portuguesa de Estat=EDstica, Pen= iche,=20 Portugal, 4 - 7 October 2000 - Annual meeting of the American Economics Association, New Orleans, U.S.= A.,=20 5-7 January 2000 4. A new course: QUANTITATIVE METHODS IN FINANCIAL ECONOMICS: The Stock M= arket,=20 the Bond Market and the Foreign Exchange Market, 26 - 29 September 2000, = University=20 of London Computer Centre (ULCC), 20 Guilford Street, London WC1N 1DZ, UK. This course aims to provide delegates with a background to econometric mo= delling=20 methods, using real-life business and financial data, while introducing d= elegates=20 to a variety of modern Econometric Packages. Several software packages wi= ll be=20 used throughout the course, including among others PcGive Professional, S= tamp=20 and RATS. Who is the course for? People who have studied economics and finance to a good level but increas= ingly=20 find that their skills are being overtaken by modern developments in the = field.=20 Our four-day course, a synthesis of similar courses taught at Cambridge = and=20 Oxford Universities, is designed to meet at least the following four obje= ctives: =B7 Increase attendees' familiarity with recent developments in financial= economics=20 and econometric practice; =B7 Encourage modelling of a variety of models: noise traders; bubbles; n= on-linearities;=20 time-varying risk; cointegration methods; rational expectations and heter= oscedasticity; =B7 Provide real-world and worked examples for attendees to use; =B7 Introduce users to several modern econometric packages. The Principal Lecturers - The principal lecturers are: Jagjit S. Chadha. Currently Fellow in Economics at Clare College, Univer= sity=20 of Cambridge. Previously holding teaching and research positions at the = University=20 of Southampton, the London School of Economics, and the Monetary Analysis= Division=20 of the Bank of England, in addition to various research and consulting po= sitions=20 in financial companies of the City. Lucio Sarno. Currently Fellow in Economics at University College, Univer= sity=20 of Oxford; Research Affiliate, Centre for Economic Policy Research, Londo= n; Consultant=20 to the Research Department, Federal Reserve Bank of St. Louis, USA. Prev= iously=20 holding teaching and research positions at Columbia University, Brunel Un= iversity,=20 and the University of Liverpool, in addition to research and consulting p= ositions=20 at the World Bank and the European Commission. Further details: www.timberlake.co.uk 5. Gauss version 3.5 There are good news for those awaiting version 3.5 of Gauss. It can be do= wnloaded=20 from Aptech's Web site. The instructions are below: Access Aptech's FTP site and download Gauss for Windows 3.5 free of charg= e.=20 FTP instructions: ftp site: 199.29.185.98 u s e r n a m e: winship p a s s w o r d: abx47qkz If you are using a browser to ftp to this site you must supply a username= in=20 the ftp command line: ftp://u s e r n a m e:p a s s w o r d@199.29.185.98 Once logged on, you will be placed in your home directory and will have a= ccess=20 to all file contained in that subdirectory and no others. Remember to set= the=20 transfer protocol to "bin" before downloading binary files. Use the "get"= command=20 to get the files you want off the ftp site. You will get 120 days FREE T= ech=20 Support. If you want continuing support, please request our Sales Departm= ent=20 for pricing information. Alternatively, you may prefer to receive a CD Rom (only available around = mid=20 July). If so, request it at Gauss35@timberlake.co.uk. Please send us your= Gauss=20 serial number and full address details. Please note that this release of Gauss for Windows does contain a license= manager=20 and at a certain stage you will need to contact us with the following inf= ormation.=20 You can fax, email or post this form back to us. For orders or further details in any of our services contact: Head Office: info@timberlake.co.uk U.S.A Office: info@timberlake-consultancy.com Italian Office: timberlake@arc.it Portuguese Office: timberlake.co@mail.telepac.pt Spanish Office: timberlake@zoom.es I look forward to hearing from you, Ana Timberlake, tel: +44 (0)20 86973377 Timberlake Consultants Ltd., fax:+44 (0)20 86973388 Unit B3, Broomsleigh Business Park, e-mail: info@timberlake.co.uk Worsley Bridge Road, http://www.timberlake.co.uk London SE26 5BN, http://www.timberlake-consultancy.com U.K. The information we will be sending you is relevant to all statisticians a= nd econometricians.=20 However, if you do not want to continue receiving our e-mails, please ret= urn=20 this e-mail and add the text REMOVE to the first line. ---------- End of message ---------- From: Rob Trevor To: "RATS Discussion List" Subject: Re: Newsflash: Special Offers on Training Date: Sat, 10 Jun 2000 09:54:51 +1000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" ; format="flowed" X-Mailer: Mercury MTS (Bindery) v1.40 Ana (and anybody else who may be tempted to do the same thing) Please do not use this list to advertise commercial products, especially when it is only peripherally related to RATS. Otherwise I will be forced to block your whole domain from this list. This is meant to be a high signal to noise ratio list for discussing/helping each other with the RATS econometric software. We have over 500 subscribers. If you want to keep those people as part of this list, then please keep your postings "pure". I'm sure that your posting was an oversight. Please do not let it happen again. Thank you for your understanding Rob Trevor RATS-L Administrator ---------- End of message ---------- From: Ricardo Chaves Lima To: "RATS Discussion List" Subject: Help on Seazonal Unit Root test Date: Thu, 08 Jun 2000 00:30:59 -0300 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mozilla 4.5 [en] (Win98; I) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Please Daoes anyone knows any RATS procedure for seazonal unit roots test. Thanks a lot. Ricardo Chaves Lima rcl@netpe.com.br rlima@npd.ufpe.br ---------- End of message ---------- From: Yuen Phui Ling Hazel To: "RATS Discussion List" Subject: RE: DISPLAY OF RESIDUALS CORRELATION Date: Sat, 10 Jun 2000 13:08:32 +0800 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain; Content-Transfer-Encoding: quoted-printable HI Andreas!=20 Thanks for your kind reply once again!! Hazel=20 -----Original Message----- From: Andreas Faust [mailto:andreasfaust@yahoo.com]=20 Sent: Friday, June 09, 2000 8:30 PM To: RATS Discussion List Subject: Re: DISPLAY OF RESIDUALS CORRELATION Hi Hazel, if you will use finally the VAR.SRC you have the possibility to see the correlation/error after estimation. Also you can display the ACF of the residuals. Try it! Andreas --- Yuen Phui Ling Hazel schrieb: >=20 > Hi everyone, >=20 > Does someone know how to display the correlations > of > residuals/ errors of a VAR system?=20 >=20 > Thanks very much.=20 >=20 > Have a Nice day.=20 > Hazel =20 >=20 > =20 > -----Original Message----- > From: Yuen Phui Ling Hazel =20 > Sent: Tuesday, May 02, 2000 7:37 PM > To: 'RATS-L@EFS.MQ.EDU.AU' > Subject: VMA Syntax > Importance: High >=20 > Does anyone know the VMA syntax and > procedure in RATS, apart from the one posted on the > web? >=20 >=20 > Hazel fbap8383@nus.edu.sg > University of Singapore=20 =3D=3D=3D=3D=3D Andreas Faust Av. Car=FApano N=B0 23 Qta. El Rosedal Las Palmas, Caracas Venezuela Tel.: +58 2 7818621 andreasfaust@yahoo.com __________________________________________________ Do You Yahoo!? Yahoo! Photos -- now, 100 FREE prints! http://photos.yahoo.com ---------- End of message ---------- From: Carlos Quintanilla To: "RATS Discussion List" Subject: Re: Help on Seazonal Unit Root test Date: Sat, 10 Jun 2000 12:54:36 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: QUALCOMM Windows Eudora Version 4.3.1 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii"; format=flowed Ricardo: You can find a procedure at the Estima Web Page: http://www.estima.com/procindx.htm The name of the file is HEGY.SRC \Carlos At 12:30 AM 6/8/00 -0300, you wrote: >Please > >Daoes anyone knows any RATS procedure for seazonal unit roots >test. Thanks a lot. > >Ricardo Chaves Lima >rcl@netpe.com.br >rlima@npd.ufpe.br ------------------------------------------------------------------------- Carlos Quintanilla University of Michigan phone : (313) 747 7956 e-mail: carlosq@umich.edu ------------------------------------------------------------------------- ---------- End of message ---------- From: Ricardo Chaves Lima To: "RATS Discussion List" Subject: Re: Help on Seazonal Unit Root test Date: Thu, 08 Jun 2000 23:31:05 -0300 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mozilla 4.5 [en] (Win98; I) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Thanks Carlos Carlos Quintanilla wrote: > Ricardo: > You can find a procedure at the Estima Web Page: > http://www.estima.com/procindx.htm > > The name of the file is HEGY.SRC > > \Carlos > > At 12:30 AM 6/8/00 -0300, you wrote: > >Please > > > >Daoes anyone knows any RATS procedure for seazonal unit roots > >test. Thanks a lot. > > > >Ricardo Chaves Lima > >rcl@netpe.com.br > >rlima@npd.ufpe.br > > ------------------------------------------------------------------------- > Carlos Quintanilla > University of Michigan > phone : (313) 747 7956 > e-mail: carlosq@umich.edu > ------------------------------------------------------------------------- ---------- End of message ---------- From: "Timberlake Consultants Ltd" To: "RATS Discussion List" Subject: Re: Newsflash: Special Offers on Training Date: Mon, 12 Jun 2000 07:52:37 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Outlook Express 5.00.2014.211 (via Mercury MTS (Bindery) v1.40) I am sorry. The mailing wans not intended to be sent to the a whole mailing. I will remove the address from our database. LOOK IN http://www.timberlake.co.uk/ FOR DETAILS ON OUR INTERNET AND PUBLIC ATTENDANCE COURSES Regards Ana Timberlake For and on behalf of Timberlake Consultants Ltd Unit B3, Broomsleigh Business Park, Worsley Bridge Road, London SE26 5BN, United Kingdom Tel: +44 (0)208 6973377, Fax: +44 (0)208 6973388 e-mail: ana@timberlake.co.uk URL: http://www.timberlake.co.uk/ ----- Original Message ----- From: Rob Trevor To: RATS Discussion List Sent: 10 June 2000 00:54 Subject: Re: Newsflash: Special Offers on Training > Ana (and anybody else who may be tempted to do the same thing) > > Please do not use this list to advertise commercial products, > especially when it is only peripherally related to RATS. Otherwise I > will be forced to block your whole domain from this list. > > This is meant to be a high signal to noise ratio list for > discussing/helping each other with the RATS econometric software. We > have over 500 subscribers. If you want to keep those people as part > of this list, then please keep your postings "pure". > > I'm sure that your posting was an oversight. Please do not let it happen again. > > Thank you for your understanding > > Rob Trevor > RATS-L Administrator > ---------- End of message ---------- From: David Teall To: "RATS Discussion List" Subject: GED and Student-t distribution Date: Mon, 12 Jun 2000 09:06:43 -0700 (PDT) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mercury MTS (Bindery) v1.40 Dear RATS users: I am a new RATS user and wonder if anyone knows how to write RATS codes for the log-likelihood functions of generalized error distribution (GED) and Student-t distribution in matrix forms (i.e., more than one variable). Thank you very much for your help. Sincerely, David __________________________________________________ Do You Yahoo!? Yahoo! Photos -- now, 100 FREE prints! http://photos.yahoo.com ---------- End of message ---------- From: =?iso-8859-1?q?slim=20skandes?= To: "RATS Discussion List" Subject: request for a program Date: Tue, 13 Jun 2000 17:30:50 +0200 (CEST) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Hello=20 I m a student in PARIS X university (France).I m preparing a working paper dealing with the presence of long memory in finantial markets. can you send me the program (for RATS ) computing the hurst exponent with the modified rescaled range method or tell me where can i find it. thank you ___________________________________________________________ Do You Yahoo!? Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ---------- End of message ---------- From: "Stephan Kohns" To: "RATS Discussion List" Subject: choice of functional form in NLLS-estimation Date: Tue, 13 Jun 2000 17:39:32 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Microsoft Outlook Express 5.00.2919.6600 (via Mercury MTS (Bindery) v1.40) Dear RATS users, I am estimating a nonlinear Euler equation with GMM, using the method recommended in the manual on page 5-23 (i.e. NLLS in two steps with an adjusted weighting-matrix computed by MCOV for the second step). There are two alternative versions for the Euler equation: The first looks like gam*L(t)+lam*Z(t)+..., where gam and lam are two of the parameters to be estimated and L(t) and Z(t) variables. In the second version, the whole equation has been divided by gam, so that it looks like L(t)+(lam/gam)*Z(t)+... . I intend to extend the analysis to a multivariate version, so that I would prefer the first version which is "less nonlinear" and should therefore be computationally less demanding. Unfortunately, the estimates of the parameters differ between the two versions, most of them by an order of about 20%, even though I expected them to be roughly identical. I experimented with the starting values and the convergence criterion, but the phenomenon did not disappear. Can somebody give me a clue how to solve this problem? Maybe that my logic is flawed, i.e. the estimates should be different, but in this case I am wondering which is the specification I should use when defining my FRML. Thank you very much for your help, Stephan Kohns ---------- End of message ---------- From: "Christopher F. Baum" To: "RATS Discussion List" Subject: Re: request for a program Date: Tue, 13 Jun 2000 12:08:49 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mulberry/2.0.0 (MacOS) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable I suggest you consider use of the GPH (Geweke/Porter-Hudak) estimation technique (code for which is available at Estima's web site). I also have RATS code for some of P.M. Robinson's semiparametric routines. Either would = be an improvement over R/S. Kit Baum Boston College Economics --On Tuesday, June 13, 2000 17:30 +0200 slim skandes wrote: > Hello > > I m a student in PARIS X university (France).I m > preparing a working paper dealing with the presence of > long memory in finantial markets. > > can you send me the program (for RATS ) computing the > hurst exponent with the modified rescaled range method > or tell me where can i find it. > > thank you > > > > ___________________________________________________________ > Do You Yahoo!? > Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ----------------------------------------------------------------- Kit Baum baum@bc.edu http://fmwww.bc.edu/ec-v/baum.fac.html ---------- End of message ---------- From: Choyleva Diana To: "RATS Discussion List" Subject: Date: Wed, 14 Jun 2000 12:13:02 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain Hallo, I would appreciate it if someone could help me with a problem I incurred. I was using a program in Rats to estimate potential output using the structural approach and then smoothing with an HPfilter. I find that when I use different lambdas in the HPfilter i tend to get the same potential output growth rates for the last four years and different ones for the rest. It seems strange to me. I was wondering whether this is a feature of the HPfilter or there must be something wrong with the program. Thank you! diana.choyleva@lombard-st.co.uk Lombard Street Research's material is intended to encourage better understanding of economic policy and financial markets. It does not constitute a solicitation for the purchase or sale of any commodities, securities or investments. Although the information compiled herein is considered reliable, its accuracy is not guaranteed. Any person using Lombard Street Research's material does so solely at his own risk and Lombard Street Research shall be under no liability whatsoever in respect thereof. ---------- End of message ---------- From: "Torben Mark Pedersen" To: "RATS Discussion List" Subject: HP-filter Date: Wed, 14 Jun 2000 13:53:25 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Novell GroupWise 5.5.2 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Dear Diana Choyleva 1. From your description of the problem, it does seem like there is = something wrong. The deviation of output from trend should depend = critically on the value of the smoothing parameter, lambda, in the = HP-filter, though you should be aware of the fact that it takes large = changes in the value of lambda to get quantitatively large differences.=20 2. The HP-filter works like a symmetric, infinite time horizon weighted = moving average filter in the time domain, so there are some serious = end-point problems with the HP-filter. Near the end of the sample, the = HP-filter works like a one-sided filter and is consequently distorting = near the ends of the sample. Noone seems to have analyzed the endpoint = problems in the time domain in a quantitative way, but it can be seriously = misleading to use the deviation of output from the HP-trend as a measure = of potential output. That measure may be seriously distorted by the = endpoint problems.=20 Sincerely, Torben Mark Pedersen Ministry of Economic Affairs Ved Stranden DK-1061 Copenhagen K. Denmark E-mail: tmp@oem.dk >>> Diana.Choyleva@lombard-st.co.uk 14-06-00 13:13 >>> Hallo, I would appreciate it if someone could help me with a problem I incurred. = I was using a program in Rats to estimate potential output using the structural approach and then smoothing with an HPfilter. I find that when = I use different lambdas in the HPfilter i tend to get the same potential output growth rates for the last four years and different ones for the = rest. It seems strange to me. I was wondering whether this is a feature of the HPfilter or there must be something wrong with the program. Thank you! diana.choyleva@lombard-st.co.uk=20 =20 Lombard Street Research's material is intended to encourage better understanding of economic policy and financial markets. It does not constitute a solicitation for the purchase or sale of any commodities, securities or investments. Although the information compiled herein is considered reliable, its accuracy is not guaranteed. Any person using Lombard Street Research's material does so solely at his own risk and Lombard Street Research shall be under no liability whatsoever in respect thereof. ---------- End of message ---------- From: Simon van Norden To: "RATS Discussion List" Subject: Re: HP-filter Date: Wed, 14 Jun 2000 09:11:34 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Ecole des Hautes Etudes Commerciales X-Mailer: Mozilla 4.61 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Actually, we recently did a quantitative study of the end point problems = with the HP filter and several other popular univariate detrending methods in = the context of measuring potential output. Our results confirm what Dr. Peder= sen wrote. We also have several references to other papers on this and relate= d issues. The paper is "The Reliability of Output Gap Estimates in Real Time" by Orphanides and van Norden. You can find it as FEDS working paper 1999-38, available on my homepage (see below) or from the Federal Reserve Board. Regards, SvN Torben Mark Pedersen wrote: >=20 > Dear Diana Choyleva >=20 > 1. From your description of the problem, it does seem like there is som= ething wrong. The deviation of output from trend should depend critically= on the value of the smoothing parameter, lambda, in the HP-filter, thoug= h you should be aware of the fact that it takes large changes in the valu= e of lambda to get quantitatively large differences. >=20 > 2. The HP-filter works like a symmetric, infinite time horizon weighted= moving average filter in the time domain, so there are some serious end-= point problems with the HP-filter. Near the end of the sample, the HP-fil= ter works like a one-sided filter and is consequently distorting near the= ends of the sample. Noone seems to have analyzed the endpoint problems = in the time domain in a quantitative way, but it can be seriously mislead= ing to use the deviation of output from the HP-trend as a measure of pote= ntial output. That measure may be seriously distorted by the endpoint pro= blems. >=20 > Sincerely, >=20 > Torben Mark Pedersen > Ministry of Economic Affairs > Ved Stranden > DK-1061 Copenhagen K. > Denmark >=20 > E-mail: tmp@oem.dk >=20 > >>> Diana.Choyleva@lombard-st.co.uk 14-06-00 13:13 >>> > Hallo, >=20 > I would appreciate it if someone could help me with a problem I incurre= d. I > was using a program in Rats to estimate potential output using the > structural approach and then smoothing with an HPfilter. I find that wh= en I > use different lambdas in the HPfilter i tend to get the same potential > output growth rates for the last four years and different ones for the = rest. > It seems strange to me. I was wondering whether this is a feature of th= e > HPfilter or there must be something wrong with the program. > Thank you! >=20 > diana.choyleva@lombard-st.co.uk >=20 > Lombard Street Research's material is intended to encourage better > understanding of economic policy and financial markets. It does not > constitute a solicitation for the purchase or sale of any commodities, > securities or investments. Although the information compiled herein is > considered reliable, its accuracy is not guaranteed. Any person using > Lombard Street Research's material does so solely at his own risk and > Lombard Street Research shall be under no liability whatsoever in > respect thereof. --=20 Simon van Norden, Prof. agr=E9g=E9, www.hec.ca/pages/simon.van-norden Service de l'enseignement de la finance, =C9cole des H.E.C. 3000 Cote-Sainte-Catherine, Montreal QC, CANADA H3T 2A7 simon.van-norden@hec.ca or (514)340-6781 or fax:(514)340-5632 ---------- End of message ---------- From: =?iso-8859-1?q?slim=20skandes?= To: "RATS Discussion List" Subject: request for a program Date: Wed, 14 Jun 2000 16:47:51 +0200 (CEST) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Hello=20 I m a student in PARIS X university (France).I m preparing a working paper dealing with the presence of long memory in finantial markets. I have programs computing the hurst exponent by the rescaled range method and the GPH method (Geweke Porter-Hudak). I want to apply the modified rescaled range method proposed by A.Lo (1991). AS i m beggining with RATS i couldn't do the progam . can you send me the program (for RATS ) computing the hurst exponent with the modified rescaled range methodor tell me where can i find them. please help me thank you ___________________________________________________________ Do You Yahoo!? Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ---------- End of message ---------- From: "Timberlake Consultants Ltd" To: "RATS Discussion List" Subject: Re: request for a program Date: Wed, 14 Jun 2000 16:18:06 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; X-Mailer: Microsoft Outlook Express 5.00.2014.211 (via Mercury MTS (Bindery) v1.40) Content-Transfer-Encoding: quoted-printable We are working on it. Please visit our Web for details on our Internet and Public Attendance courses Regards Ana Timberlake Timberlake Consultants Limited http://www.timberlake.co.uk Unit B3, Broomsleigh Business Park, Worsley Bridge Road, London SE26 5BN, U.K. Tel: +44 (0)20 86973377 Fax: +44 (0)20 86973388 e-mail: ana@timberlake.co.uk ----- Original Message ----- From: slim skandes To: RATS Discussion List Sent: 14 June 2000 15:47 Subject: request for a program > Hello > I m a student in PARIS X university (France).I m > preparing a working paper dealing with the presence of > long memory in finantial markets. > I have programs computing the hurst exponent by the > rescaled range method and the GPH method (Geweke > Porter-Hudak). I want to apply the modified rescaled > range method proposed by A.Lo (1991). > AS i m beggining with RATS i couldn't do the progam . > can you send me the program (for RATS ) computing the > hurst exponent with the modified rescaled range > methodor tell me where can i find them. > please help me > > thank you > > > > > > ___________________________________________________________ > Do You Yahoo!? > Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr > ---------- End of message ---------- From: Klaus Fischer To: "RATS Discussion List" Subject: Timberlake Consultants Date: Wed, 14 Jun 2000 12:52:36 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mozilla 4.7 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Dear Ana: I don not know what the the answer is to Slim's question about long memor= y in financial markets, but I am sure that there are probably better answer= s than to sending him/her to take a course with your consulting group. What's next, the price list for your courses? I think you should try to stop exploiting this list for your business purposes. We got the message that you exist. If you do not have a direct answer to a direct question leave us alone. And I suspect I am being kind in the way I said it. Yours, Klaus Fischer Timberlake Consultants Ltd wrote: > We are working on it. > > Please visit our Web for details on our Internet and Public Attendance > courses > > Regards > Ana Timberlake > Timberlake Consultants Limited > http://www.timberlake.co.uk > Unit B3, Broomsleigh Business Park, > Worsley Bridge Road, London SE26 5BN, U.K. > Tel: +44 (0)20 86973377 Fax: +44 (0)20 86973388 e-mail: > ana@timberlake.co.uk > ----- Original Message ----- > From: slim skandes > To: RATS Discussion List > Sent: 14 June 2000 15:47 > Subject: request for a program > > > Hello > > I m a student in PARIS X university (France).I m > > preparing a working paper dealing with the presence of > > long memory in finantial markets. > > I have programs computing the hurst exponent by the > > rescaled range method and the GPH method (Geweke > > Porter-Hudak). I want to apply the modified rescaled > > range method proposed by A.Lo (1991). > > AS i m beggining with RATS i couldn't do the progam . > > can you send me the program (for RATS ) computing the > > hurst exponent with the modified rescaled range > > methodor tell me where can i find them. > > please help me > > > > thank you > > > > > > > > > > > > ___________________________________________________________ > > Do You Yahoo!? > > Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr > > ---------- End of message ---------- From: SsnaithIPR@aol.com To: "RATS Discussion List" Subject: Re: Timberlake Consultants Date: Wed, 14 Jun 2000 15:17:43 EDT Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset="US-ASCII" Content-Transfer-Encoding: 7bit X-Mailer: AOL 4.0 for Windows 95 sub 100 (via Mercury MTS (Bindery) v1.40) In a message dated 6/14/2000 1:10:59 PM Eastern Daylight Time, Klaus.Fischer@fas.ulaval.ca writes: I think it may be time to block e-mails from this domain. Rob warned her politely but very clearly and her she is again pitching her site? What's next on the list adult web sites? Further if this person is so slow on the uptake, one wonders about the quality of the services being offered... This list is a valuable resource let's not let someone like this ruin it. > Dear Ana: > > I don not know what the the answer is to Slim's question about long memory > in financial markets, but I am sure that there are probably better answers > than to sending him/her to take a course with your consulting group. > What's next, the price list for your courses? I think you should try to > stop exploiting this list for your business purposes. We got the message > that you exist. If you do not have a direct answer to a direct question > leave us alone. > > And I suspect I am being kind in the way I said it. > > Yours, > > Klaus Fischer > ---------- End of message ---------- From: Jason Morris To: "RATS Discussion List" Subject: Re: Timberlake Consultants Date: Wed, 14 Jun 2000 16:08:44 -0400 (EDT) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Mime-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: 7bit X-Mailer: mail.com (via Mercury MTS (Bindery) v1.40) Hi! I'm curious. Is this Dr Snaith who Lectured Econometrics at UWI Jamaica, last year? If not, I'm sorry to bother you. Jason ----------------------------------------------- FREE! The World's Best Email Address @email.com Reserve your name now at http://www.email.com ---------- End of message ---------- From: =?iso-8859-1?q?slim=20skandes?= To: "RATS Discussion List" Subject: request for a program Date: Thu, 15 Jun 2000 13:11:25 +0200 (CEST) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Hello=20 I m a student in PARIS X university (France).I m preparing a working paper dealing with the presence of long memory in finantial markets. I have programs computing the hurst exponent by the rescaled range method and the GPH method (Geweke Porter-Hudak). I want to apply the modified rescaled range method proposed by A.Lo (1991). AS i m beggining with RATS i couldn't do the progam . can you send me the program (for RATS ) computing the hurst exponent with the modified rescaled range methodor tell me where can i find them. please help me thank you ___________________________________________________________ Do You Yahoo!? Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ---------- End of message ---------- From: "Crowley, Patrick" To: "RATS Discussion List" Subject: RE: HP-filter Date: Thu, 15 Jun 2000 12:00:02 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain; Content-Transfer-Encoding: quoted-printable Dear Rats users, While we're on with the HP filter, I'm trying to use the HP filter on = annual data for Canadian provinces, as that's all there is available, and = there aren't many datapoints - 26 to be exact. How many datapoints do you = think are necessary to run an HP filter, and what value of lambda would you = advise for annual data? Patrick Crowley Dept of Economics, Middlebury College. =20 -----Original Message----- From: Simon van Norden To: RATS Discussion List Sent: 6/14/00 9:11 AM Subject: Re: HP-filter Actually, we recently did a quantitative study of the end point = problems with the HP filter and several other popular univariate detrending methods = in the context of measuring potential output. Our results confirm what Dr. Pedersen wrote. We also have several references to other papers on this and related issues. The paper is "The Reliability of Output Gap Estimates in Real Time" by Orphanides and van Norden. You can find it as FEDS working paper 1999-38, available on my homepage (see below) or from the Federal Reserve Board. Regards, SvN Torben Mark Pedersen wrote: >=20 > Dear Diana Choyleva >=20 > 1. From your description of the problem, it does seem like there is something wrong. The deviation of output from trend should depend critically on the value of the smoothing parameter, lambda, in the HP-filter, though you should be aware of the fact that it takes large changes in the value of lambda to get quantitatively large differences. >=20 > 2. The HP-filter works like a symmetric, infinite time horizon weighted moving average filter in the time domain, so there are some serious end-point problems with the HP-filter. Near the end of the sample, the HP-filter works like a one-sided filter and is consequently distorting near the ends of the sample. Noone seems to have analyzed the endpoint problems in the time domain in a quantitative way, but it can be seriously misleading to use the deviation of output from the HP-trend as a measure of potential output. That measure may be = seriously distorted by the endpoint problems. >=20 > Sincerely, >=20 > Torben Mark Pedersen > Ministry of Economic Affairs > Ved Stranden > DK-1061 Copenhagen K. > Denmark >=20 > E-mail: tmp@oem.dk >=20 > >>> Diana.Choyleva@lombard-st.co.uk 14-06-00 13:13 >>> > Hallo, >=20 > I would appreciate it if someone could help me with a problem I incurred. I > was using a program in Rats to estimate potential output using the > structural approach and then smoothing with an HPfilter. I find that when I > use different lambdas in the HPfilter i tend to get the same = potential > output growth rates for the last four years and different ones for = the rest. > It seems strange to me. I was wondering whether this is a feature of the > HPfilter or there must be something wrong with the program. > Thank you! >=20 > diana.choyleva@lombard-st.co.uk >=20 > Lombard Street Research's material is intended to encourage better > understanding of economic policy and financial markets. It does not > constitute a solicitation for the purchase or sale of any = commodities, > securities or investments. Although the information compiled herein = is > considered reliable, its accuracy is not guaranteed. Any person using > Lombard Street Research's material does so solely at his own risk and > Lombard Street Research shall be under no liability whatsoever in > respect thereof. --=20 Simon van Norden, Prof. agr=E9g=E9, www.hec.ca/pages/simon.van-norden Service de l'enseignement de la finance, =C9cole des H.E.C. 3000 Cote-Sainte-Catherine, Montreal QC, CANADA H3T 2A7 simon.van-norden@hec.ca or (514)340-6781 or fax:(514)340-5632 ---------- End of message ---------- From: Simon van Norden To: "RATS Discussion List" Subject: Re: HP-filter Date: Thu, 15 Jun 2000 13:49:55 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Ecole des Hautes Etudes Commerciales X-Mailer: Mozilla 4.61 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable Pat; It's hard to give a precise answer, since it's not clear why and how the = HP filter is being used. Hodrick & Prescott argued that their filter gave "reasonable-looking resu= lts." On that basis, you should choose any lambda that gives "reasonable-lookin= g results." This will probably be 100 or less. (You can also formally define a state-= space model whose optimal smoother is the HP filter and then estimate the optim= al lambda by maximum-likelihood methods. However, this is rarely done and do= esn't give anything close to 1600 on quarterly data.) As for the number of data points, the same problem applies. Technically, = I think you only need 3 or 4. Realistically, I assume you're worried about endpoi= nt problems. The severity of this problem will depend on the lambda you choo= se, so it's hard to give a simple rule. I'd suggest that you look at the MA representation of the filter weights and make sure they are still fairly symmetric around the points you choose to include. A simple alternative might be to use the BPFILTER.SRC proc from Estima to= do band-pass filtering instead.=20 Another would be to detrend linearly, using recursive breakpoint tests to= allow for changes in the slope of the trend. Simon "Crowley, Patrick" wrote: >=20 > Dear Rats users, >=20 > While we're on with the HP filter, I'm trying to use the HP filter on a= nnual > data for Canadian provinces, as that's all there is available, and ther= e > aren't many datapoints - 26 to be exact. How many datapoints do you th= ink > are necessary to run an HP filter, and what value of lambda would you a= dvise > for annual data? >=20 > Patrick Crowley > Dept of Economics, > Middlebury College. >=20 >=20 > -----Original Message----- > From: Simon van Norden > To: RATS Discussion List > Sent: 6/14/00 9:11 AM > Subject: Re: HP-filter >=20 > Actually, we recently did a quantitative study of the end point problem= s > with > the HP filter and several other popular univariate detrending methods i= n > the > context of measuring potential output. Our results confirm what Dr. > Pedersen > wrote. We also have several references to other papers on this and > related > issues. >=20 > The paper is "The Reliability of Output Gap Estimates in Real Time" by > Orphanides and van Norden. You can find it as FEDS working paper > 1999-38, > available on my homepage (see below) or from the Federal Reserve Board. >=20 > Regards, >=20 > SvN >=20 > Torben Mark Pedersen wrote: > > > > Dear Diana Choyleva > > > > 1. From your description of the problem, it does seem like there is > something wrong. The deviation of output from trend should depend > critically on the value of the smoothing parameter, lambda, in the > HP-filter, though you should be aware of the fact that it takes large > changes in the value of lambda to get quantitatively large differences. > > > > 2. The HP-filter works like a symmetric, infinite time horizon > weighted moving average filter in the time domain, so there are some > serious end-point problems with the HP-filter. Near the end of the > sample, the HP-filter works like a one-sided filter and is consequently > distorting near the ends of the sample. Noone seems to have analyzed > the endpoint problems in the time domain in a quantitative way, but it > can be seriously misleading to use the deviation of output from the > HP-trend as a measure of potential output. That measure may be seriousl= y > distorted by the endpoint problems. > > > > Sincerely, > > > > Torben Mark Pedersen > > Ministry of Economic Affairs > > Ved Stranden > > DK-1061 Copenhagen K. > > Denmark > > > > E-mail: tmp@oem.dk > > > > >>> Diana.Choyleva@lombard-st.co.uk 14-06-00 13:13 >>> > > Hallo, > > > > I would appreciate it if someone could help me with a problem I > incurred. I > > was using a program in Rats to estimate potential output using the > > structural approach and then smoothing with an HPfilter. I find that > when I > > use different lambdas in the HPfilter i tend to get the same potentia= l > > output growth rates for the last four years and different ones for th= e > rest. > > It seems strange to me. I was wondering whether this is a feature of > the > > HPfilter or there must be something wrong with the program. > > Thank you! > > > > diana.choyleva@lombard-st.co.uk > > > > Lombard Street Research's material is intended to encourage better > > understanding of economic policy and financial markets. It does not > > constitute a solicitation for the purchase or sale of any commodities= , > > securities or investments. Although the information compiled herein i= s > > considered reliable, its accuracy is not guaranteed. Any person using > > Lombard Street Research's material does so solely at his own risk and > > Lombard Street Research shall be under no liability whatsoever in > > respect thereof. >=20 > -- > Simon van Norden, Prof. agr=E9g=E9, www.hec.ca/pages/simon.van-norden > Service de l'enseignement de la finance, =C9cole des H.E.C. > 3000 Cote-Sainte-Catherine, Montreal QC, CANADA H3T 2A7 > simon.van-norden@hec.ca or (514)340-6781 or fax:(514)340-5632 --=20 Simon van Norden, Prof. agr=E9g=E9, www.hec.ca/pages/simon.van-norden Service de l'enseignement de la finance, =C9cole des H.E.C. 3000 Cote-Sainte-Catherine, Montreal QC, CANADA H3T 2A7 simon.van-norden@hec.ca or (514)340-6781 or fax:(514)340-5632 ---------- End of message ---------- From: Hermanto Siregar To: "RATS Discussion List" Subject: RE: HP-filter Date: Fri, 16 Jun 2000 09:21:10 +1200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Lincoln University MIME-version: 1.0 X-Mailer: Pegasus Mail for Win32 (v3.12c) (via Mercury MTS (Bindery) v1.40) Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT On Thu, 15 Jun 2000 12:00 "Crowley, Patrick" wrote: > are necessary to run an HP filter, and what value of lambda would you advise > for annual data? Dear Patrick, Harvey and Jaeger (1993) demonstrate that for lambda=1600 the transfer function for the HP filter peaks around 30.1 quarters (about 7.5 years), suggesting to use lambda=7 for annual data. In general however the optimal choice of lambda depends on particular series. Regards, Hermanto Siregar ---------- End of message ---------- From: anjun zhou To: "RATS Discussion List" Subject: Unsubscribe Date: Thu, 15 Jun 2000 19:49:36 -0500 (CDT) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Mailer: Mercury MTS (Bindery) v1.40 Please take me off the list. Thanks. Anjun Zhou ---------- End of message ---------- From: "Torben Mark Pedersen" To: "RATS Discussion List" Subject: Vedr.: RE: HP-filter Date: Fri, 16 Jun 2000 08:39:28 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Novell GroupWise 5.5.2 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Dear Patrick Crowley In a forthcoming paper in Journal of Economic Dynamics and Control, I show = how to compute the optimal value of lambda when filtering different time = series depending on the definition of the cyclical component (cycles = shorter than 8 years or shorter than 9 years etc.) and based on annual, = quarterly, or monthly data. The optimal value of lambda is the value which = is least distorting compared to filtering with an ideal high-pass filter. In general, the optimal value of lambda depends on the time series being = filtered (the spectral shape of the series).=20 For US real GDP, I find that a value of 4-5 is optimal when defining the = cyclical component as cycles with a duration shorter than 8 years. You can find the paper on http://www.econ.ku.dk/tmp/manuscripts.htm#Slutzky= =20 and more details on the use with annual and monthly data on http://www.econ.ku.dk/tmp/manuscripts.htm#Spectral Sincerely, Torben Mark Pedersen Ministry of Economic Affairs Ved Stranden 8 DK-1061 Copenhagen K. Denmark Tel.: +45 33924168 E-mail: tmp@oem.dk ---------- End of message ---------- From: Choyleva Diana To: "RATS Discussion List" Subject: Date: Fri, 16 Jun 2000 17:04:21 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain Hallo, Is a value of lambda in the HPfilter around 10000 or 4000 plausible for quarterly data? diana.choyleva@lombard-st.co.uk Lombard Street Research's material is intended to encourage better understanding of economic policy and financial markets. It does not constitute a solicitation for the purchase or sale of any commodities, securities or investments. Although the information compiled herein is considered reliable, its accuracy is not guaranteed. Any person using Lombard Street Research's material does so solely at his own risk and Lombard Street Research shall be under no liability whatsoever in respect thereof. ---------- End of message ---------- From: adolpus laidlow To: "RATS Discussion List" Subject: Re: Timberlake Consultants Date: Fri, 16 Jun 2000 13:14:42 -0700 (PDT) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailer: Mercury MTS (Bindery) v1.40 --- Jason Morris wrote: > Hi! > > I'm curious. Is this Dr Snaith who Lectured > Econometrics at UWI Jamaica, > last year? > > If not, I'm sorry to bother you. > > Jason > jason i think his name is Micheal so maybe not. tell kim to give you the grade and you'll send it because am receiving your mails...congrats on your grade. anyway take care Laidlow > ----------------------------------------------- > FREE! The World's Best Email Address @email.com > Reserve your name now at http://www.email.com > > __________________________________________________ Do You Yahoo!? Send instant messages with Yahoo! Messenger. http://im.yahoo.com/ ---------- End of message ---------- From: "Torben Mark Pedersen" To: "RATS Discussion List" Subject: Vedr.: Date: Sat, 17 Jun 2000 11:24:37 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Novell GroupWise 5.5.2 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable No! It should be around 1000-1050 for quarterly data when defining the = cyclical component as cycles with a duration shorter than 8 years. The = value of lambda depends on the time series being filtered. Sincerely, Torben Mark Pedersen Ministry of Economic Affairs Ved Stranden 8 DK-1061 Copenhagen K Denmark >>> Diana.Choyleva@lombard-st.co.uk 16-06-00 18:04 >>> Hallo, Is a value of lambda in the HPfilter around 10000 or 4000 plausible for quarterly data? diana.choyleva@lombard-st.co.uk=20 =20 Lombard Street Research's material is intended to encourage better understanding of economic policy and financial markets. It does not constitute a solicitation for the purchase or sale of any commodities, securities or investments. Although the information compiled herein is considered reliable, its accuracy is not guaranteed. Any person using Lombard Street Research's material does so solely at his own risk and Lombard Street Research shall be under no liability whatsoever in respect thereof. ---------- End of message ---------- From: Choyleva Diana To: "RATS Discussion List" Subject: RE: Vedr.: Date: Mon, 19 Jun 2000 10:05:52 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain What if the cyclical duration is greater than 8-9 years ( I am filtering total productivity)? > -----Original Message----- > From: Torben Mark Pedersen [SMTP:TMP@OEM.DK] > Sent: Saturday, June 17, 2000 10:25 AM > To: RATS Discussion List > Subject: Vedr.: > > No! > > It should be around 1000-1050 for quarterly data when defining the > cyclical component as cycles with a duration shorter than 8 years. The > value of lambda depends on the time series being filtered. > > Sincerely, > > Torben Mark Pedersen > Ministry of Economic Affairs > Ved Stranden 8 > DK-1061 Copenhagen K > Denmark > > > > >>> Diana.Choyleva@lombard-st.co.uk 16-06-00 18:04 >>> > Hallo, > > Is a value of lambda in the HPfilter around 10000 or 4000 plausible for > quarterly data? > > > diana.choyleva@lombard-st.co.uk > > Lombard Street Research's material is intended to encourage better > understanding of economic policy and financial markets. It does not > constitute a solicitation for the purchase or sale of any commodities, > securities or investments. Although the information compiled herein is > considered reliable, its accuracy is not guaranteed. Any person using > Lombard Street Research's material does so solely at his own risk and > Lombard Street Research shall be under no liability whatsoever in > respect thereof. > > ---------- End of message ---------- From: "Torben Mark Pedersen" To: "RATS Discussion List" Subject: HP-filtering Date: Mon, 19 Jun 2000 16:04:43 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Novell GroupWise 5.5.2 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable I have computed the following optimal values of the smoothing parameter of = the HP-filter (lambda) when filtering log of US real GDP for the period = 1947-97 for quarterly, annual, and monthly time series and for hte = following combination of cycle length: There is some uncertainty about the estimates since there is some = uncertainty about the true spectral shape of the US real GDP series. I = would personally use the lower numbers in the following ranges: cycles Quarterly =20 3 years 19-22 4 years 60-68 5 years 150-160 6 years 315-335 7 years 585-610 8 years 1005-1040 9 years 1620-1665 10 years 2450-2505 12 years 5215-5315 16 years 16130-16410 cycles Annual =20 3 years .09-.1 4 years .3-.5 5 years .7-1 6 years 1.3-1.8 7 years 2.2-3.1 8 years 3.7-4.9 9 years 5.9-7.6 10 years 9-11.2 12 years 18.9-22.2 16 years 60.9-67.6 cycles Monthly =20 3 years 1620-1665 4 years 5210-5310 5 years 12725-12950 6 years 25900-26400 7 years 49400-50400 8 years 82800-84800 9 years 130500-134000 10 years 201000-207750 12 years 427500-447000 16 years 1378000-1482500 The technique for computing the optimal value is found in=20 http://www.econ.ku.dk/tmp/manuscripts.htm#Slutzky=20 which will be published in the Journal of Economic Dynamics and Control = later this year. Torben Mark Pedersen, specialkonsulent =D8konomiministeriet Ved Stranden 8 1061 K=F8benhavn K. Tlf.: 33 92 41 68 E-mail: tmp@oem.dk URL: http://www.econ.ku.dk/tmp/default.htm Torben Mark Pedersen Ministry of Economic Affairs Ved Stranden 8 DK-1061 Copenhagen K. Denmark Tel: +45 33 92 41 68 E-mail: tmp@oem.dk URL: http://www.econ.ku.dk/tmp/default.htm >>> Diana.Choyleva@lombard-st.co.uk 19-06-00 11:05 >>> What if the cyclical duration is greater than 8-9 years ( I am filtering total productivity)? > -----Original Message----- > From: Torben Mark Pedersen [SMTP:TMP@OEM.DK]=20 > Sent: Saturday, June 17, 2000 10:25 AM > To: RATS Discussion List > Subject: Vedr.: >=20 > No! >=20 > It should be around 1000-1050 for quarterly data when defining the > cyclical component as cycles with a duration shorter than 8 years. The > value of lambda depends on the time series being filtered. >=20 > Sincerely, >=20 > Torben Mark Pedersen > Ministry of Economic Affairs > Ved Stranden 8 > DK-1061 Copenhagen K > Denmark >=20 >=20 >=20 > >>> Diana.Choyleva@lombard-st.co.uk 16-06-00 18:04 >>> > Hallo, >=20 > Is a value of lambda in the HPfilter around 10000 or 4000 plausible for > quarterly data? >=20 >=20 > diana.choyleva@lombard-st.co.uk=20 > =20 > Lombard Street Research's material is intended to encourage better > understanding of economic policy and financial markets. It does not > constitute a solicitation for the purchase or sale of any commodities, > securities or investments. Although the information compiled herein is > considered reliable, its accuracy is not guaranteed. Any person using > Lombard Street Research's material does so solely at his own risk and > Lombard Street Research shall be under no liability whatsoever in > respect thereof. >=20 >=20 ---------- End of message ---------- From: cipolla@sbu.ac.uk To: "RATS Discussion List" Subject: Inequality constraints Date: Mon, 19 Jun 2000 16:34:58 +0000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: South Bank University MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7BIT X-Mailer: Mercury MTS (Bindery) v1.40 Dear RATS users, I am estimating a GARCH(1,1) process: y(t)=c0 + u(t) h(t) = (1-c1-c2) + c1*h(t-1)+c2*u{1}**2 where u{1}**2, is the squared residual lagged once and the unconditional variance is normalised to unity. From the estimation results I obtain that both c1 and c2 are less than unity and positive, but their sum exceeds unity. Can anyone help me to impose the constraint c1+c2 < 1, in order to achieve stability in the conditional variance equation? Many thanks Andrea Cipollini South Bank University Business School Southwark Campus 103 Borough Road London SE1 0AA Direct Line: +44 (0)171-8157077 E-mail: cipolla@sbu.ac.uk ---------- End of message ---------- From: Brian Lucey To: "RATS Discussion List" Subject: stochastic dominance Date: Mon, 19 Jun 2000 18:14:14 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: TCD X-Mailer: Mozilla 3.01Gold (Win95; I) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit anybody with experience of stochastic dominance in rats ? -- The comments etc above are my own personal views and should not be taken as representing the official views of Trinity College nor of the School of Business Brian M Lucey Lecturer in Finance School of Business Studies Trinity College Dublin 2, Ireland 353-1-6081552 (Phone) 353-1-6799503 (Fax) ---------- End of message ---------- From: JohnVelis@intesasgr.it To: "RATS Discussion List" Subject: Oggetto: Inequality constraints Date: Tue, 20 Jun 2000 10:50:38 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Mime-Version: 1.0 Content-type: text/plain; charset=us-ascii X-Mailer: Mercury MTS (Bindery) v1.40 Ciao, Andrea: You could run an IGARCH model (integrated-GARCH) as suggested by Engle and Bollerslev (Econometric Reviews, 1986). in which the variance equation looks like the following: h(t)=alpha(epsilon(t)^2) + (1-alpha)h(t-1). There is a nicer exposition of the IGARCH model in Bera and Higgens (J Econ Surveys, 1993). It isn't unusual for series which are very non-stationary to have level shifts which give you explosive variance estimates (exchange rates which get devalued, for example -- see Baille and Bollerslve, JBES, 1989.) Good luck. John Velis Chief Economist Intesa Asset Management, SGR Foro Buonaparte 35 20121 Milano ITALIA Tel: 39-02-88102525 Fax: 39-02-88102500 email: johnvelis@intesasgr.it ---------- End of message ---------- From: Yuen Phui Ling Hazel To: "RATS Discussion List" Subject: Matrix Multiplication with residuals Date: Tue, 20 Jun 2000 17:51:55 +0800 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain; Hi everyone, Does someone know the syntax or has any clue as how to do matrix multiplication of which one matrix is the residuals series, i.e. to multiply the rectangular matrix of the residuals series with another matrix of compatible dimension. Thanks so much in advance. Have a nice day. Kind Rgds, Hazel ---------- End of message ---------- From: Allison To: "RATS Discussion List" Subject: question about BVAR Date: Tue, 20 Jun 2000 16:28:01 -1000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-version: 1.0 X-Mailer: Mozilla 4.7 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) Content-type: multipart/alternative; --------------E444FCC7106BBD4BFD6B362A Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Hi: I am a user of WINRATS 4.3. Lately I have been trying to reproduce the result of Bayesian VAR (with Minnesota prior) through a Mixed Regression program. I am able to get really close resemblance in the coefficients, however, they are not identical. In what follows, please find my program code and output. My main questions are: 1) Does built-in BVAR procedure first demean variables? 2) If not, how to handle deterministic variables? Is my way correct? 3) What is the reason for close, however not identical, regression coefficients? I look forward to your response. Thank you very much. Allison A) Code for mixreg.src ( identical to the example you give on page 5-13 of manual version 4, except that I adjust the degree of freedom from T-K to T-NDET with NDET being number of deterministic variables) PROCEDURE MIXED DEPVAR NDET NBEG NEND CAPR LOWR V TYPE SERIES DEPVAR TYPE REAL NDET TYPE INTEGER NBEG NEND ; * Estimation Range TYPE RECTANGULAR CAPR ; * R matrix: m x NREG TYPE VECTOR LOWR ; * r vector: m x 1 TYPE SYMMETRIC V ; * V matrix: m x m * LOCAL SYMMETRIC XXMIXED LOCAL VECTOR XYMIXED LOCAL INDEX REGSUPP ; * Array for supplementary card * ENTER(VARYING) REGSUPP ; * Bring in supplementary card CMOMENT NBEG NEND ; * CMOM including depvar # REGSUPP DEPVAR LINREG(CMOM, NOPRINT) DEPVAR # REGSUPP OVERLAY %CMOM(1,1) WITH XXMIXED(%NREG,%NREG) OVERLAY %CMOM(%NREG+1,1) WITH XYMIXED(%NREG) COMPUTE ADJ=(%NOBS-%NREG)/(%NOBS-NDET) DISPLAY ADJ COMPUTE XXMIXED=XXMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*CAPR) COMPUTE XYMIXED=XYMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*LOWR) LINREG(CMOM) DEPVAR #REGSUPP END B) Program code for doing mixed estimation regression. source(noecho) I:\winrats\uhero\mixreg.src cal 60 1 4 allocate 85:4 open data I:\winrats\usadata.rat data(format=rats) / * Using built-in procedure to do BVAR regression with Minnesota priors COMPUTE GAMMA=0.2 COMPUTE FIJ=0.5 system 1 to 2 variables usaprice usargnp lags 1 to 2 det constant specify(type=symmetric,tight=GAMMA) FIJ end(system) estimate * Mixed regression by hand * Compute S1, the standard error of auto regression on equation 1 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE LINREG(CMOMENT) USAPRICE # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} * Two alternative ways to calculate S1, one with Degree of Freedom T-K, the other T-NDET COMPUTE S1=SQRT(%SEESQ) *COMPUTE TEST1=%RSS/(%NOBS-1) *COMPUTE S1=SQRT(TEST1) DISPLAY S1 %SEESQ * Compute S2, the standard error of auto regression on equation 2 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP LINREG(CMOMENT) USARGNP # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} COMPUTE S2=SQRT(%SEESQ) *COMPUTE TEST2=%RSS/(%NOBS-1) *COMPUTE S2=SQRT(TEST2) DISPLAY S2 %SEESQ * Compute the prior information for mixed regression COMPUTE [RECTANGULAR] R = ||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0|| COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0|| COMPUTE [SYMMETRIC] V = ||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2|| WRITE R LITTLER V @mixed USAPRICE 1 60:1 85:4 R LITTLER V # USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT C) Output. source(noecho) I:\winrats\uhero\mixreg.src cal 60 1 4 allocate 85:4 open data I:\winrats\usadata.rat data(format=rats) / * Using built-in procedure to do BVAR regression with Minnesota priors COMPUTE GAMMA=0.2 COMPUTE FIJ=0.5 system 1 to 2 variables usaprice usargnp lags 1 to 2 det constant specify(type=symmetric,tight=GAMMA) FIJ end(system) Summary of the Prior... Tightness Parameter 0.200000 Harmonic Lag Decay with Parameter 0.000000 Standard Deviations as Fraction of Tightness and Prior Means Listed Under the Dependent Variable USAPRICE USARGNP USAPRICE 1.00000000 0.50000000 USARGNP 0.50000000 1.00000000 Mean 1.00000000 1.00000000 estimate Dependent Variable USAPRICE - Estimation by Mixed Estimation Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 101 Centered R**2 0.999907 R Bar **2 0.999907 Uncentered R**2 0.999984 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.231678752 Sum of Squared Residuals 5.4211794375 Durbin-Watson Statistic 2.032795 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} 1.558718576 0.063276664 24.63339 0.00000000 2. USAPRICE{2} -0.564449553 0.062927823 -8.96979 0.00000000 3. USARGNP{1} 0.000036688 0.000490931 0.07473 0.94057554 4. USARGNP{2} 0.000506875 0.000501614 1.01049 0.31467619 5. Constant -0.796232629 0.201750915 -3.94661 0.00014664 F-Tests, Dependent Variable USAPRICE Variable F-Statistic Signif USAPRICE 68851.8400 0.0000000 USARGNP 8.8141 0.0002963 Dependent Variable USARGNP - Estimation by Mixed Estimation Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 101 Centered R**2 0.997654 R Bar **2 0.997654 Uncentered R**2 0.999903 T x R**2 101.990 Mean of Dependent Variable 2639.7245098 Std Error of Dependent Variable 551.4344454 Standard Error of Estimate 26.7063979 Sum of Squared Residuals 72036.400834 Durbin-Watson Statistic 1.949370 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} -8.52617351 5.78265018 -1.47444 0.14347331 2. USAPRICE{2} 8.88189778 5.75035220 1.54458 0.12557472 3. USARGNP{1} 1.16968123 0.07879125 14.84532 0.00000000 4. USARGNP{2} -0.17729213 0.07989428 -2.21908 0.02871856 5. Constant 23.73450138 22.32510528 1.06313 0.29025804 F-Tests, Dependent Variable USARGNP Variable F-Statistic Signif USAPRICE 2.0012 0.1404944 USARGNP 2486.7562 0.0000000 * Mixed regression by hand * Compute S1, the standard error of auto regression on equation 1 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE LINREG(CMOMENT) USAPRICE # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} Dependent Variable USAPRICE - Estimation by Least Squares Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 97 Centered R**2 0.999911 R Bar **2 0.999907 Uncentered R**2 0.999985 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.231605150 Sum of Squared Residuals 5.2031717209 Regression F(4,97) 272586.2337 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.463235 Q(25-0) 25.940142 Significance Level of Q 0.41079860 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. Constant -0.584689219 0.209530831 -2.79047 0.00633768 2. USAPRICE{1} 1.702187514 0.073717462 23.09070 0.00000000 3. USAPRICE{2} -0.707202173 0.073306405 -9.64721 0.00000000 4. USARGNP{1} 0.000080177 0.000845945 0.09478 0.92468698 5. USARGNP{2} 0.000328262 0.000869452 0.37755 0.70658905 * Two alternative ways to calculate S1, one with Degree of Freedom T-K, the other T-NDET COMPUTE S1=SQRT(%SEESQ) *COMPUTE TEST1=%RSS/(%NOBS-1) *COMPUTE S1=SQRT(TEST1) DISPLAY S1 %SEESQ 0.23161 0.05364 * Compute S2, the standard error of auto regression on equation 2 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP LINREG(CMOMENT) USARGNP # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} Dependent Variable USARGNP - Estimation by Least Squares Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 97 Centered R**2 0.997688 R Bar **2 0.997593 Uncentered R**2 0.999904 T x R**2 101.990 Mean of Dependent Variable 2639.7245098 Std Error of Dependent Variable 551.4344454 Standard Error of Estimate 27.0542379 Sum of Squared Residuals 70997.383388 Regression F(4,97) 10465.8241 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.127956 Q(25-0) 21.316421 Significance Level of Q 0.67483977 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. Constant 11.62277727 24.47569471 0.47487 0.63594696 2. USAPRICE{1} -15.95218521 8.61107693 -1.85252 0.06699205 3. USAPRICE{2} 16.26287458 8.56306055 1.89919 0.06051095 4. USARGNP{1} 1.22350624 0.09881640 12.38161 0.00000000 5. USARGNP{2} -0.22393299 0.10156234 -2.20488 0.02982563 COMPUTE S2=SQRT(%SEESQ) *COMPUTE TEST2=%RSS/(%NOBS-1) *COMPUTE S2=SQRT(TEST2) DISPLAY S2 %SEESQ 27.05424 731.93179 * Compute the prior information for mixed regression COMPUTE [RECTANGULAR] R = ||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0|| COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0|| COMPUTE [SYMMETRIC] V = ||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2|| WRITE R LITTLER V 1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 233.6238 0.0000 0.0000 0.0000 0.0000 0.0000 233.6238 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0000 0.0000 0.0400 0.0000 0.0400 0.0000 0.0000 0.0400 0.0000 0.0000 0.0000 0.0400 0.0000 0.0000 0.0000 0.0000 0.0400 @mixed USAPRICE 1 60:1 85:4 R LITTLER V # USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT 0.96040 Dependent Variable USAPRICE - Estimation by Least Squares Quarterly Data From 60:01 To 85:04 Usable Observations 102 Degrees of Freedom 97 Total Observations 104 Skipped/Missing 2 Centered R**2 0.999908 R Bar **2 0.999904 Uncentered R**2 0.999984 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.235824529 Sum of Squared Residuals 5.3944812376 Regression F(4,97) 262918.3940 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.062554 Q(26-0) 32.607800 Significance Level of Q 0.17378298 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} 1.567767815 0.064966164 24.13207 0.00000000 2. USAPRICE{2} -0.573452845 0.064607630 -8.87593 0.00000000 3. USARGNP{1} 0.000038387 0.000506590 0.07577 0.93975458 4. USARGNP{2} 0.000496622 0.000517796 0.95911 0.33988919 5. Constant -0.782826295 0.205787277 -3.80406 0.00024902 --------------E444FCC7106BBD4BFD6B362A Content-Type: text/html; charset=us-ascii Content-Transfer-Encoding: 7bit Hi:

    I am a user of WINRATS 4.3. Lately I have been trying to reproduce the result of
Bayesian VAR (with Minnesota prior) through a Mixed Regression program. I am able
to get really close resemblance in the coefficients, however, they are not identical. In
what follows, please find my program code and output. My main questions are:
1) Does built-in BVAR procedure first demean variables?
2) If not, how to handle deterministic variables? Is my way correct?
3) What is the reason for close, however not identical, regression coefficients?

    I look forward to your response. Thank you very much.
Allison

A) Code for mixreg.src ( identical to the example you give on page 5-13 of manual
version 4, except that I adjust the degree of freedom from T-K to T-NDET with NDET
being number of deterministic variables)

PROCEDURE MIXED DEPVAR NDET NBEG NEND CAPR LOWR V
TYPE  SERIES      DEPVAR
TYPE  REAL        NDET
TYPE  INTEGER     NBEG  NEND ; * Estimation Range
TYPE  RECTANGULAR CAPR       ; * R matrix: m x NREG
TYPE  VECTOR      LOWR       ; * r vector: m x 1
TYPE  SYMMETRIC   V          ; * V matrix: m x m
*
LOCAL SYMMETRIC   XXMIXED
LOCAL VECTOR      XYMIXED
LOCAL INDEX       REGSUPP    ; * Array for supplementary card
*
ENTER(VARYING) REGSUPP       ; * Bring in supplementary card
CMOMENT NBEG NEND            ; * CMOM including depvar
# REGSUPP DEPVAR
LINREG(CMOM, NOPRINT) DEPVAR
# REGSUPP
OVERLAY  %CMOM(1,1)       WITH XXMIXED(%NREG,%NREG)
OVERLAY  %CMOM(%NREG+1,1) WITH XYMIXED(%NREG)
COMPUTE ADJ=(%NOBS-%NREG)/(%NOBS-NDET)
DISPLAY ADJ
COMPUTE XXMIXED=XXMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*CAPR)
COMPUTE XYMIXED=XYMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*LOWR)
LINREG(CMOM) DEPVAR
#REGSUPP
END
 
 

B) Program code for doing mixed estimation regression.

source(noecho) I:\winrats\uhero\mixreg.src
cal 60 1 4
allocate 85:4
open data I:\winrats\usadata.rat
data(format=rats) /

* Using built-in procedure to do BVAR regression with Minnesota priors
COMPUTE GAMMA=0.2
COMPUTE FIJ=0.5
system 1 to 2
variables usaprice usargnp
lags 1 to 2
det constant
specify(type=symmetric,tight=GAMMA)  FIJ
end(system)
estimate

* Mixed regression by hand

* Compute S1, the standard error of auto regression on equation 1
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE
LINREG(CMOMENT) USAPRICE
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

* Two alternative ways to calculate S1, one with Degree of Freedom T-K,
the other T-NDET
COMPUTE S1=SQRT(%SEESQ)
*COMPUTE TEST1=%RSS/(%NOBS-1)
*COMPUTE S1=SQRT(TEST1)
DISPLAY S1 %SEESQ

* Compute S2, the standard error of auto regression on equation 2
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP
LINREG(CMOMENT) USARGNP
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}
COMPUTE S2=SQRT(%SEESQ)
*COMPUTE TEST2=%RSS/(%NOBS-1)
*COMPUTE S2=SQRT(TEST2)
DISPLAY S2 %SEESQ

* Compute the prior information for mixed regression

COMPUTE [RECTANGULAR] R =
||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0||
COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0||
COMPUTE [SYMMETRIC] V =
||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2||
WRITE R LITTLER V

@mixed USAPRICE  1  60:1 85:4    R   LITTLER   V
# USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT
 
 
 

C) Output.
source(noecho) I:\winrats\uhero\mixreg.src
cal 60 1 4
allocate 85:4
open data I:\winrats\usadata.rat
data(format=rats) /

* Using built-in procedure to do BVAR regression with Minnesota priors
COMPUTE GAMMA=0.2
COMPUTE FIJ=0.5
system 1 to 2
variables usaprice usargnp
lags 1 to 2
det constant
specify(type=symmetric,tight=GAMMA)  FIJ
end(system)
 

Summary of the Prior...
Tightness Parameter 0.200000
Harmonic Lag Decay with Parameter 0.000000
Standard Deviations as Fraction of Tightness and Prior Means
  Listed Under the Dependent Variable
          USAPRICE    USARGNP
USAPRICE  1.00000000 0.50000000
USARGNP   0.50000000 1.00000000
Mean      1.00000000 1.00000000

estimate

Dependent Variable USAPRICE - Estimation by Mixed Estimation
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom   101
Centered R**2     0.999907      R Bar **2   0.999907
Uncentered R**2   0.999984      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.231678752
Sum of Squared Residuals        5.4211794375
Durbin-Watson Statistic             2.032795

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               1.558718576  0.063276664     24.63339
0.00000000
2.  USAPRICE{2}              -0.564449553  0.062927823     -8.96979
0.00000000
3.  USARGNP{1}                0.000036688  0.000490931      0.07473
0.94057554
4.  USARGNP{2}                0.000506875  0.000501614      1.01049
0.31467619
5.  Constant                 -0.796232629  0.201750915     -3.94661
0.00014664

F-Tests, Dependent Variable USAPRICE
Variable            F-Statistic       Signif
USAPRICE              68851.8400     0.0000000
USARGNP                   8.8141     0.0002963
 

Dependent Variable USARGNP - Estimation by Mixed Estimation
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom   101
Centered R**2     0.997654      R Bar **2   0.997654
Uncentered R**2   0.999903      T x R**2     101.990
Mean of Dependent Variable      2639.7245098
Std Error of Dependent Variable  551.4344454
Standard Error of Estimate        26.7063979
Sum of Squared Residuals        72036.400834
Durbin-Watson Statistic             1.949370

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               -8.52617351   5.78265018     -1.47444
0.14347331
2.  USAPRICE{2}                8.88189778   5.75035220      1.54458
0.12557472
3.  USARGNP{1}                 1.16968123   0.07879125     14.84532
0.00000000
4.  USARGNP{2}                -0.17729213   0.07989428     -2.21908
0.02871856
5.  Constant                  23.73450138  22.32510528      1.06313
0.29025804

F-Tests, Dependent Variable USARGNP
Variable            F-Statistic       Signif
USAPRICE                  2.0012     0.1404944
USARGNP                2486.7562     0.0000000
 

* Mixed regression by hand

* Compute S1, the standard error of auto regression on equation 1
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE
LINREG(CMOMENT) USAPRICE
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

Dependent Variable USAPRICE - Estimation by Least Squares
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom    97
Centered R**2     0.999911      R Bar **2   0.999907
Uncentered R**2   0.999985      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.231605150
Sum of Squared Residuals        5.2031717209
Regression F(4,97)               272586.2337
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.463235
Q(25-0)                            25.940142
Significance Level of Q           0.41079860

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  Constant                 -0.584689219  0.209530831     -2.79047
0.00633768
2.  USAPRICE{1}               1.702187514  0.073717462     23.09070
0.00000000
3.  USAPRICE{2}              -0.707202173  0.073306405     -9.64721
0.00000000
4.  USARGNP{1}                0.000080177  0.000845945      0.09478
0.92468698
5.  USARGNP{2}                0.000328262  0.000869452      0.37755
0.70658905
 

* Two alternative ways to calculate S1, one with Degree of Freedom T-K,
the other T-NDET
COMPUTE S1=SQRT(%SEESQ)
*COMPUTE TEST1=%RSS/(%NOBS-1)
*COMPUTE S1=SQRT(TEST1)
DISPLAY S1 %SEESQ
      0.23161       0.05364

* Compute S2, the standard error of auto regression on equation 2
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP
LINREG(CMOMENT) USARGNP
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

Dependent Variable USARGNP - Estimation by Least Squares
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom    97
Centered R**2     0.997688      R Bar **2   0.997593
Uncentered R**2   0.999904      T x R**2     101.990
Mean of Dependent Variable      2639.7245098
Std Error of Dependent Variable  551.4344454
Standard Error of Estimate        27.0542379
Sum of Squared Residuals        70997.383388
Regression F(4,97)                10465.8241
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.127956
Q(25-0)                            21.316421
Significance Level of Q           0.67483977

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  Constant                  11.62277727  24.47569471      0.47487
0.63594696
2.  USAPRICE{1}              -15.95218521   8.61107693     -1.85252
0.06699205
3.  USAPRICE{2}               16.26287458   8.56306055      1.89919
0.06051095
4.  USARGNP{1}                 1.22350624   0.09881640     12.38161
0.00000000
5.  USARGNP{2}                -0.22393299   0.10156234     -2.20488
0.02982563

COMPUTE S2=SQRT(%SEESQ)
*COMPUTE TEST2=%RSS/(%NOBS-1)
*COMPUTE S2=SQRT(TEST2)
DISPLAY S2 %SEESQ
     27.05424     731.93179

* Compute the prior information for mixed regression

COMPUTE [RECTANGULAR] R =
||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0||
COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0||
COMPUTE [SYMMETRIC] V =
||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2||
WRITE R LITTLER V
      1.0000         0.0000         0.0000         0.0000         0.0000
      0.0000         1.0000         0.0000         0.0000         0.0000
      0.0000         0.0000       233.6238         0.0000         0.0000
      0.0000         0.0000         0.0000       233.6238         0.0000
      0.0000         0.0000         0.0000         0.0000         0.0000
 

      1.0000         0.0000         0.0000         0.0000         0.0000
 

      0.0400
      0.0000         0.0400
      0.0000         0.0000         0.0400
      0.0000         0.0000         0.0000         0.0400
      0.0000         0.0000         0.0000         0.0000         0.0400
 
 

@mixed USAPRICE  1  60:1 85:4    R   LITTLER   V
# USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT
      0.96040

Dependent Variable USAPRICE - Estimation by Least Squares
Quarterly Data From 60:01 To 85:04
Usable Observations    102      Degrees of Freedom    97
 Total Observations    104      Skipped/Missing        2
Centered R**2     0.999908      R Bar **2   0.999904
Uncentered R**2   0.999984      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.235824529
Sum of Squared Residuals        5.3944812376
Regression F(4,97)               262918.3940
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.062554
Q(26-0)                            32.607800
Significance Level of Q           0.17378298

   Variable                     Coeff       Std Error      T-Stat    Signif
*******************************************************************************

1.  USAPRICE{1}               1.567767815  0.064966164     24.13207 0.00000000
2.  USAPRICE{2}              -0.573452845  0.064607630     -8.87593 0.00000000
3.  USARGNP{1}                0.000038387  0.000506590      0.07577 0.93975458
4.  USARGNP{2}                0.000496622  0.000517796      0.95911 0.33988919
5.  Constant                 -0.782826295  0.205787277     -3.80406 0.00024902 --------------E444FCC7106BBD4BFD6B362A-- ---------- End of message ---------- From: Allison To: "RATS Discussion List" Subject: question about BVAR Date: Tue, 20 Jun 2000 20:49:06 -1000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-version: 1.0 X-Mailer: Mozilla 4.7 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) Content-type: multipart/alternative; --------------B977848CA3CDD97EA1E5FB95 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Hi: I am a user of WINRATS 4.3. Lately I have been trying to reproduce the result of Bayesian VAR (with Minnesota prior) through a Mixed Regression program. I am able to get really close resemblance in the coefficients, however, they are not identical. In what follows, please find my program code and output. My main questions are: 1) Does built-in BVAR procedure first demean variables? 2) If not, how to handle deterministic variables? Is my way correct? 3) What is the reason for close, however not identical, regression coefficients? I look forward to your response. Thank you very much. Allison A) Code for mixreg.src ( identical to the example on page 5-13 of manual version 4, except that I adjust the degree of freedom from T-K to T-NDET with NDET being number of deterministic variables) PROCEDURE MIXED DEPVAR NDET NBEG NEND CAPR LOWR V TYPE SERIES DEPVAR TYPE REAL NDET TYPE INTEGER NBEG NEND ; * Estimation Range TYPE RECTANGULAR CAPR ; * R matrix: m x NREG TYPE VECTOR LOWR ; * r vector: m x 1 TYPE SYMMETRIC V ; * V matrix: m x m * LOCAL SYMMETRIC XXMIXED LOCAL VECTOR XYMIXED LOCAL INDEX REGSUPP ; * Array for supplementary card * ENTER(VARYING) REGSUPP ; * Bring in supplementary card CMOMENT NBEG NEND ; * CMOM including depvar # REGSUPP DEPVAR LINREG(CMOM, NOPRINT) DEPVAR # REGSUPP OVERLAY %CMOM(1,1) WITH XXMIXED(%NREG,%NREG) OVERLAY %CMOM(%NREG+1,1) WITH XYMIXED(%NREG) COMPUTE ADJ=(%NOBS-%NREG)/(%NOBS-NDET) DISPLAY ADJ COMPUTE XXMIXED=XXMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*CAPR) COMPUTE XYMIXED=XYMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*LOWR) LINREG(CMOM) DEPVAR #REGSUPP END B) Program code for doing mixed estimation regression. source(noecho) I:\winrats\uhero\mixreg.src cal 60 1 4 allocate 85:4 open data I:\winrats\usadata.rat data(format=rats) / * Using built-in procedure to do BVAR regression with Minnesota priors COMPUTE GAMMA=0.2 COMPUTE FIJ=0.5 system 1 to 2 variables usaprice usargnp lags 1 to 2 det constant specify(type=symmetric,tight=GAMMA) FIJ end(system) estimate * Mixed regression by hand * Compute S1, the standard error of auto regression on equation 1 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE LINREG(CMOMENT) USAPRICE # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} * Two alternative ways to calculate S1, one with Degree of Freedom T-K, the other T-NDET COMPUTE S1=SQRT(%SEESQ) *COMPUTE TEST1=%RSS/(%NOBS-1) *COMPUTE S1=SQRT(TEST1) DISPLAY S1 %SEESQ * Compute S2, the standard error of auto regression on equation 2 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP LINREG(CMOMENT) USARGNP # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} COMPUTE S2=SQRT(%SEESQ) *COMPUTE TEST2=%RSS/(%NOBS-1) *COMPUTE S2=SQRT(TEST2) DISPLAY S2 %SEESQ * Compute the prior information for mixed regression COMPUTE [RECTANGULAR] R = ||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0|| COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0|| COMPUTE [SYMMETRIC] V = ||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2|| WRITE R LITTLER V @mixed USAPRICE 1 60:1 85:4 R LITTLER V # USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT C) Output. source(noecho) I:\winrats\uhero\mixreg.src cal 60 1 4 allocate 85:4 open data I:\winrats\usadata.rat data(format=rats) / * Using built-in procedure to do BVAR regression with Minnesota priors COMPUTE GAMMA=0.2 COMPUTE FIJ=0.5 system 1 to 2 variables usaprice usargnp lags 1 to 2 det constant specify(type=symmetric,tight=GAMMA) FIJ end(system) Summary of the Prior... Tightness Parameter 0.200000 Harmonic Lag Decay with Parameter 0.000000 Standard Deviations as Fraction of Tightness and Prior Means Listed Under the Dependent Variable USAPRICE USARGNP USAPRICE 1.00000000 0.50000000 USARGNP 0.50000000 1.00000000 Mean 1.00000000 1.00000000 estimate Dependent Variable USAPRICE - Estimation by Mixed Estimation Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 101 Centered R**2 0.999907 R Bar **2 0.999907 Uncentered R**2 0.999984 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.231678752 Sum of Squared Residuals 5.4211794375 Durbin-Watson Statistic 2.032795 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} 1.558718576 0.063276664 24.63339 0.00000000 2. USAPRICE{2} -0.564449553 0.062927823 -8.96979 0.00000000 3. USARGNP{1} 0.000036688 0.000490931 0.07473 0.94057554 4. USARGNP{2} 0.000506875 0.000501614 1.01049 0.31467619 5. Constant -0.796232629 0.201750915 -3.94661 0.00014664 F-Tests, Dependent Variable USAPRICE Variable F-Statistic Signif USAPRICE 68851.8400 0.0000000 USARGNP 8.8141 0.0002963 Dependent Variable USARGNP - Estimation by Mixed Estimation Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 101 Centered R**2 0.997654 R Bar **2 0.997654 Uncentered R**2 0.999903 T x R**2 101.990 Mean of Dependent Variable 2639.7245098 Std Error of Dependent Variable 551.4344454 Standard Error of Estimate 26.7063979 Sum of Squared Residuals 72036.400834 Durbin-Watson Statistic 1.949370 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} -8.52617351 5.78265018 -1.47444 0.14347331 2. USAPRICE{2} 8.88189778 5.75035220 1.54458 0.12557472 3. USARGNP{1} 1.16968123 0.07879125 14.84532 0.00000000 4. USARGNP{2} -0.17729213 0.07989428 -2.21908 0.02871856 5. Constant 23.73450138 22.32510528 1.06313 0.29025804 F-Tests, Dependent Variable USARGNP Variable F-Statistic Signif USAPRICE 2.0012 0.1404944 USARGNP 2486.7562 0.0000000 * Mixed regression by hand * Compute S1, the standard error of auto regression on equation 1 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE LINREG(CMOMENT) USAPRICE # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} Dependent Variable USAPRICE - Estimation by Least Squares Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 97 Centered R**2 0.999911 R Bar **2 0.999907 Uncentered R**2 0.999985 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.231605150 Sum of Squared Residuals 5.2031717209 Regression F(4,97) 272586.2337 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.463235 Q(25-0) 25.940142 Significance Level of Q 0.41079860 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. Constant -0.584689219 0.209530831 -2.79047 0.00633768 2. USAPRICE{1} 1.702187514 0.073717462 23.09070 0.00000000 3. USAPRICE{2} -0.707202173 0.073306405 -9.64721 0.00000000 4. USARGNP{1} 0.000080177 0.000845945 0.09478 0.92468698 5. USARGNP{2} 0.000328262 0.000869452 0.37755 0.70658905 * Two alternative ways to calculate S1, one with Degree of Freedom T-K, the other T-NDET COMPUTE S1=SQRT(%SEESQ) *COMPUTE TEST1=%RSS/(%NOBS-1) *COMPUTE S1=SQRT(TEST1) DISPLAY S1 %SEESQ 0.23161 0.05364 * Compute S2, the standard error of auto regression on equation 2 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP LINREG(CMOMENT) USARGNP # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} Dependent Variable USARGNP - Estimation by Least Squares Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 97 Centered R**2 0.997688 R Bar **2 0.997593 Uncentered R**2 0.999904 T x R**2 101.990 Mean of Dependent Variable 2639.7245098 Std Error of Dependent Variable 551.4344454 Standard Error of Estimate 27.0542379 Sum of Squared Residuals 70997.383388 Regression F(4,97) 10465.8241 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.127956 Q(25-0) 21.316421 Significance Level of Q 0.67483977 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. Constant 11.62277727 24.47569471 0.47487 0.63594696 2. USAPRICE{1} -15.95218521 8.61107693 -1.85252 0.06699205 3. USAPRICE{2} 16.26287458 8.56306055 1.89919 0.06051095 4. USARGNP{1} 1.22350624 0.09881640 12.38161 0.00000000 5. USARGNP{2} -0.22393299 0.10156234 -2.20488 0.02982563 COMPUTE S2=SQRT(%SEESQ) *COMPUTE TEST2=%RSS/(%NOBS-1) *COMPUTE S2=SQRT(TEST2) DISPLAY S2 %SEESQ 27.05424 731.93179 * Compute the prior information for mixed regression COMPUTE [RECTANGULAR] R = ||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0|| COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0|| COMPUTE [SYMMETRIC] V = ||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2|| WRITE R LITTLER V 1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 233.6238 0.0000 0.0000 0.0000 0.0000 0.0000 233.6238 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0000 0.0000 0.0400 0.0000 0.0400 0.0000 0.0000 0.0400 0.0000 0.0000 0.0000 0.0400 0.0000 0.0000 0.0000 0.0000 0.0400 @mixed USAPRICE 1 60:1 85:4 R LITTLER V # USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT 0.96040 Dependent Variable USAPRICE - Estimation by Least Squares Quarterly Data From 60:01 To 85:04 Usable Observations 102 Degrees of Freedom 97 Total Observations 104 Skipped/Missing 2 Centered R**2 0.999908 R Bar **2 0.999904 Uncentered R**2 0.999984 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.235824529 Sum of Squared Residuals 5.3944812376 Regression F(4,97) 262918.3940 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.062554 Q(26-0) 32.607800 Significance Level of Q 0.17378298 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} 1.567767815 0.064966164 24.13207 0.00000000 2. USAPRICE{2} -0.573452845 0.064607630 -8.87593 0.00000000 3. USARGNP{1} 0.000038387 0.000506590 0.07577 0.93975458 4. USARGNP{2} 0.000496622 0.000517796 0.95911 0.33988919 5. Constant -0.782826295 0.205787277 -3.80406 0.00024902 --------------B977848CA3CDD97EA1E5FB95 Content-Type: text/html; charset=us-ascii Content-Transfer-Encoding: 7bit  
Hi:

    I am a user of WINRATS 4.3. Lately I have been trying to reproduce the result of
Bayesian VAR (with Minnesota prior) through a Mixed Regression program. I am able
to get really close resemblance in the coefficients, however, they are not identical. In
what follows, please find my program code and output. My main questions are:
1) Does built-in BVAR procedure first demean variables?
2) If not, how to handle deterministic variables? Is my way correct?
3) What is the reason for close, however not identical, regression coefficients?

    I look forward to your response. Thank you very much.
Allison

A) Code for mixreg.src ( identical to the example on page 5-13 of manual
version 4, except that I adjust the degree of freedom from T-K to T-NDET with NDET
being number of deterministic variables)

PROCEDURE MIXED DEPVAR NDET NBEG NEND CAPR LOWR V
TYPE  SERIES      DEPVAR
TYPE  REAL        NDET
TYPE  INTEGER     NBEG  NEND ; * Estimation Range
TYPE  RECTANGULAR CAPR       ; * R matrix: m x NREG
TYPE  VECTOR      LOWR       ; * r vector: m x 1
TYPE  SYMMETRIC   V          ; * V matrix: m x m
*
LOCAL SYMMETRIC   XXMIXED
LOCAL VECTOR      XYMIXED
LOCAL INDEX       REGSUPP    ; * Array for supplementary card
*
ENTER(VARYING) REGSUPP       ; * Bring in supplementary card
CMOMENT NBEG NEND            ; * CMOM including depvar
# REGSUPP DEPVAR
LINREG(CMOM, NOPRINT) DEPVAR
# REGSUPP
OVERLAY  %CMOM(1,1)       WITH XXMIXED(%NREG,%NREG)
OVERLAY  %CMOM(%NREG+1,1) WITH XYMIXED(%NREG)
COMPUTE ADJ=(%NOBS-%NREG)/(%NOBS-NDET)
DISPLAY ADJ
COMPUTE XXMIXED=XXMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*CAPR)
COMPUTE XYMIXED=XYMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*LOWR)
LINREG(CMOM) DEPVAR
#REGSUPP
END
 
 

B) Program code for doing mixed estimation regression.

source(noecho) I:\winrats\uhero\mixreg.src
cal 60 1 4
allocate 85:4
open data I:\winrats\usadata.rat
data(format=rats) /

* Using built-in procedure to do BVAR regression with Minnesota priors
COMPUTE GAMMA=0.2
COMPUTE FIJ=0.5
system 1 to 2
variables usaprice usargnp
lags 1 to 2
det constant
specify(type=symmetric,tight=GAMMA)  FIJ
end(system)
estimate

* Mixed regression by hand

* Compute S1, the standard error of auto regression on equation 1
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE
LINREG(CMOMENT) USAPRICE
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

* Two alternative ways to calculate S1, one with Degree of Freedom T-K,
the other T-NDET
COMPUTE S1=SQRT(%SEESQ)
*COMPUTE TEST1=%RSS/(%NOBS-1)
*COMPUTE S1=SQRT(TEST1)
DISPLAY S1 %SEESQ

* Compute S2, the standard error of auto regression on equation 2
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP
LINREG(CMOMENT) USARGNP
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}
COMPUTE S2=SQRT(%SEESQ)
*COMPUTE TEST2=%RSS/(%NOBS-1)
*COMPUTE S2=SQRT(TEST2)
DISPLAY S2 %SEESQ

* Compute the prior information for mixed regression

COMPUTE [RECTANGULAR] R =
||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0||
COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0||
COMPUTE [SYMMETRIC] V =
||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2||
WRITE R LITTLER V

@mixed USAPRICE  1  60:1 85:4    R   LITTLER   V
# USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT
 
 
 

C) Output.
source(noecho) I:\winrats\uhero\mixreg.src
cal 60 1 4
allocate 85:4
open data I:\winrats\usadata.rat
data(format=rats) /

* Using built-in procedure to do BVAR regression with Minnesota priors
COMPUTE GAMMA=0.2
COMPUTE FIJ=0.5
system 1 to 2
variables usaprice usargnp
lags 1 to 2
det constant
specify(type=symmetric,tight=GAMMA)  FIJ
end(system)
 

Summary of the Prior...
Tightness Parameter 0.200000
Harmonic Lag Decay with Parameter 0.000000
Standard Deviations as Fraction of Tightness and Prior Means
  Listed Under the Dependent Variable
          USAPRICE    USARGNP
USAPRICE  1.00000000 0.50000000
USARGNP   0.50000000 1.00000000
Mean      1.00000000 1.00000000

estimate

Dependent Variable USAPRICE - Estimation by Mixed Estimation
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom   101
Centered R**2     0.999907      R Bar **2   0.999907
Uncentered R**2   0.999984      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.231678752
Sum of Squared Residuals        5.4211794375
Durbin-Watson Statistic             2.032795

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               1.558718576  0.063276664     24.63339
0.00000000
2.  USAPRICE{2}              -0.564449553  0.062927823     -8.96979
0.00000000
3.  USARGNP{1}                0.000036688  0.000490931      0.07473
0.94057554
4.  USARGNP{2}                0.000506875  0.000501614      1.01049
0.31467619
5.  Constant                 -0.796232629  0.201750915     -3.94661
0.00014664

F-Tests, Dependent Variable USAPRICE
Variable            F-Statistic       Signif
USAPRICE              68851.8400     0.0000000
USARGNP                   8.8141     0.0002963
 

Dependent Variable USARGNP - Estimation by Mixed Estimation
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom   101
Centered R**2     0.997654      R Bar **2   0.997654
Uncentered R**2   0.999903      T x R**2     101.990
Mean of Dependent Variable      2639.7245098
Std Error of Dependent Variable  551.4344454
Standard Error of Estimate        26.7063979
Sum of Squared Residuals        72036.400834
Durbin-Watson Statistic             1.949370

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               -8.52617351   5.78265018     -1.47444
0.14347331
2.  USAPRICE{2}                8.88189778   5.75035220      1.54458
0.12557472
3.  USARGNP{1}                 1.16968123   0.07879125     14.84532
0.00000000
4.  USARGNP{2}                -0.17729213   0.07989428     -2.21908
0.02871856
5.  Constant                  23.73450138  22.32510528      1.06313
0.29025804

F-Tests, Dependent Variable USARGNP
Variable            F-Statistic       Signif
USAPRICE                  2.0012     0.1404944
USARGNP                2486.7562     0.0000000
 

* Mixed regression by hand

* Compute S1, the standard error of auto regression on equation 1
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE
LINREG(CMOMENT) USAPRICE
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

Dependent Variable USAPRICE - Estimation by Least Squares
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom    97
Centered R**2     0.999911      R Bar **2   0.999907
Uncentered R**2   0.999985      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.231605150
Sum of Squared Residuals        5.2031717209
Regression F(4,97)               272586.2337
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.463235
Q(25-0)                            25.940142
Significance Level of Q           0.41079860

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  Constant                 -0.584689219  0.209530831     -2.79047
0.00633768
2.  USAPRICE{1}               1.702187514  0.073717462     23.09070
0.00000000
3.  USAPRICE{2}              -0.707202173  0.073306405     -9.64721
0.00000000
4.  USARGNP{1}                0.000080177  0.000845945      0.09478
0.92468698
5.  USARGNP{2}                0.000328262  0.000869452      0.37755
0.70658905
 

* Two alternative ways to calculate S1, one with Degree of Freedom T-K,
the other T-NDET
COMPUTE S1=SQRT(%SEESQ)
*COMPUTE TEST1=%RSS/(%NOBS-1)
*COMPUTE S1=SQRT(TEST1)
DISPLAY S1 %SEESQ
      0.23161       0.05364

* Compute S2, the standard error of auto regression on equation 2
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP
LINREG(CMOMENT) USARGNP
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

Dependent Variable USARGNP - Estimation by Least Squares
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom    97
Centered R**2     0.997688      R Bar **2   0.997593
Uncentered R**2   0.999904      T x R**2     101.990
Mean of Dependent Variable      2639.7245098
Std Error of Dependent Variable  551.4344454
Standard Error of Estimate        27.0542379
Sum of Squared Residuals        70997.383388
Regression F(4,97)                10465.8241
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.127956
Q(25-0)                            21.316421
Significance Level of Q           0.67483977

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  Constant                  11.62277727  24.47569471      0.47487
0.63594696
2.  USAPRICE{1}              -15.95218521   8.61107693     -1.85252
0.06699205
3.  USAPRICE{2}               16.26287458   8.56306055      1.89919
0.06051095
4.  USARGNP{1}                 1.22350624   0.09881640     12.38161
0.00000000
5.  USARGNP{2}                -0.22393299   0.10156234     -2.20488
0.02982563

COMPUTE S2=SQRT(%SEESQ)
*COMPUTE TEST2=%RSS/(%NOBS-1)
*COMPUTE S2=SQRT(TEST2)
DISPLAY S2 %SEESQ
     27.05424     731.93179

* Compute the prior information for mixed regression

COMPUTE [RECTANGULAR] R =
||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0||
COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0||
COMPUTE [SYMMETRIC] V =
||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2||
WRITE R LITTLER V
      1.0000         0.0000         0.0000         0.0000         0.0000
      0.0000         1.0000         0.0000         0.0000         0.0000
      0.0000         0.0000       233.6238         0.0000         0.0000
      0.0000         0.0000         0.0000       233.6238         0.0000
      0.0000         0.0000         0.0000         0.0000         0.0000
 

      1.0000         0.0000         0.0000         0.0000         0.0000
 

      0.0400
      0.0000         0.0400
      0.0000         0.0000         0.0400
      0.0000         0.0000         0.0000         0.0400
      0.0000         0.0000         0.0000         0.0000         0.0400
 
 

@mixed USAPRICE  1  60:1 85:4    R   LITTLER   V
# USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT
      0.96040

Dependent Variable USAPRICE - Estimation by Least Squares
Quarterly Data From 60:01 To 85:04
Usable Observations    102      Degrees of Freedom    97
 Total Observations    104      Skipped/Missing        2
Centered R**2     0.999908      R Bar **2   0.999904
Uncentered R**2   0.999984      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.235824529
Sum of Squared Residuals        5.3944812376
Regression F(4,97)               262918.3940
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.062554
Q(26-0)                            32.607800
Significance Level of Q           0.17378298

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               1.567767815  0.064966164     24.13207
0.00000000
2.  USAPRICE{2}              -0.573452845  0.064607630     -8.87593
0.00000000
3.  USARGNP{1}                0.000038387  0.000506590      0.07577
0.93975458
4.  USARGNP{2}                0.000496622  0.000517796      0.95911
0.33988919
5.  Constant                 -0.782826295  0.205787277     -3.80406
0.00024902 --------------B977848CA3CDD97EA1E5FB95-- ---------- End of message ---------- From: Allison To: "RATS Discussion List" Subject: [Fwd: question about BVAR] Date: Tue, 20 Jun 2000 21:25:36 -1000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-version: 1.0 X-Mailer: Mozilla 4.7 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) Content-type: multipart/mixed; boundary="------------9090E4371E79398487DF3633" This is a multi-part message in MIME format. --------------9090E4371E79398487DF3633 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit --------------9090E4371E79398487DF3633 Content-Type: message/rfc822 Content-Transfer-Encoding: 7bit Content-Disposition: inline X-Mozilla-Status2: 00000000 Message-ID: <39502830.4C7956D2@hawaii.edu> Date: Tue, 20 Jun 2000 16:28:01 -1000 From: Allison X-Mailer: Mozilla 4.7 [en] (Win98; U) X-Accept-Language: en MIME-Version: 1.0 To: RATS-L@EFS.MQ.EDU.AU Subject: question about BVAR Content-Type: multipart/alternative; boundary="------------E444FCC7106BBD4BFD6B362A" --------------E444FCC7106BBD4BFD6B362A Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Hi: I am a user of WINRATS 4.3. Lately I have been trying to reproduce the result of Bayesian VAR (with Minnesota prior) through a Mixed Regression program. I am able to get really close resemblance in the coefficients, however, they are not identical. In what follows, please find my program code and output. My main questions are: 1) Does built-in BVAR procedure first demean variables? 2) If not, how to handle deterministic variables? Is my way correct? 3) What is the reason for close, however not identical, regression coefficients? I look forward to your response. Thank you very much. Allison A) Code for mixreg.src ( identical to the example you give on page 5-13 of manual version 4, except that I adjust the degree of freedom from T-K to T-NDET with NDET being number of deterministic variables) PROCEDURE MIXED DEPVAR NDET NBEG NEND CAPR LOWR V TYPE SERIES DEPVAR TYPE REAL NDET TYPE INTEGER NBEG NEND ; * Estimation Range TYPE RECTANGULAR CAPR ; * R matrix: m x NREG TYPE VECTOR LOWR ; * r vector: m x 1 TYPE SYMMETRIC V ; * V matrix: m x m * LOCAL SYMMETRIC XXMIXED LOCAL VECTOR XYMIXED LOCAL INDEX REGSUPP ; * Array for supplementary card * ENTER(VARYING) REGSUPP ; * Bring in supplementary card CMOMENT NBEG NEND ; * CMOM including depvar # REGSUPP DEPVAR LINREG(CMOM, NOPRINT) DEPVAR # REGSUPP OVERLAY %CMOM(1,1) WITH XXMIXED(%NREG,%NREG) OVERLAY %CMOM(%NREG+1,1) WITH XYMIXED(%NREG) COMPUTE ADJ=(%NOBS-%NREG)/(%NOBS-NDET) DISPLAY ADJ COMPUTE XXMIXED=XXMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*CAPR) COMPUTE XYMIXED=XYMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*LOWR) LINREG(CMOM) DEPVAR #REGSUPP END B) Program code for doing mixed estimation regression. source(noecho) I:\winrats\uhero\mixreg.src cal 60 1 4 allocate 85:4 open data I:\winrats\usadata.rat data(format=rats) / * Using built-in procedure to do BVAR regression with Minnesota priors COMPUTE GAMMA=0.2 COMPUTE FIJ=0.5 system 1 to 2 variables usaprice usargnp lags 1 to 2 det constant specify(type=symmetric,tight=GAMMA) FIJ end(system) estimate * Mixed regression by hand * Compute S1, the standard error of auto regression on equation 1 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE LINREG(CMOMENT) USAPRICE # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} * Two alternative ways to calculate S1, one with Degree of Freedom T-K, the other T-NDET COMPUTE S1=SQRT(%SEESQ) *COMPUTE TEST1=%RSS/(%NOBS-1) *COMPUTE S1=SQRT(TEST1) DISPLAY S1 %SEESQ * Compute S2, the standard error of auto regression on equation 2 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP LINREG(CMOMENT) USARGNP # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} COMPUTE S2=SQRT(%SEESQ) *COMPUTE TEST2=%RSS/(%NOBS-1) *COMPUTE S2=SQRT(TEST2) DISPLAY S2 %SEESQ * Compute the prior information for mixed regression COMPUTE [RECTANGULAR] R = ||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0|| COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0|| COMPUTE [SYMMETRIC] V = ||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2|| WRITE R LITTLER V @mixed USAPRICE 1 60:1 85:4 R LITTLER V # USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT C) Output. source(noecho) I:\winrats\uhero\mixreg.src cal 60 1 4 allocate 85:4 open data I:\winrats\usadata.rat data(format=rats) / * Using built-in procedure to do BVAR regression with Minnesota priors COMPUTE GAMMA=0.2 COMPUTE FIJ=0.5 system 1 to 2 variables usaprice usargnp lags 1 to 2 det constant specify(type=symmetric,tight=GAMMA) FIJ end(system) Summary of the Prior... Tightness Parameter 0.200000 Harmonic Lag Decay with Parameter 0.000000 Standard Deviations as Fraction of Tightness and Prior Means Listed Under the Dependent Variable USAPRICE USARGNP USAPRICE 1.00000000 0.50000000 USARGNP 0.50000000 1.00000000 Mean 1.00000000 1.00000000 estimate Dependent Variable USAPRICE - Estimation by Mixed Estimation Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 101 Centered R**2 0.999907 R Bar **2 0.999907 Uncentered R**2 0.999984 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.231678752 Sum of Squared Residuals 5.4211794375 Durbin-Watson Statistic 2.032795 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} 1.558718576 0.063276664 24.63339 0.00000000 2. USAPRICE{2} -0.564449553 0.062927823 -8.96979 0.00000000 3. USARGNP{1} 0.000036688 0.000490931 0.07473 0.94057554 4. USARGNP{2} 0.000506875 0.000501614 1.01049 0.31467619 5. Constant -0.796232629 0.201750915 -3.94661 0.00014664 F-Tests, Dependent Variable USAPRICE Variable F-Statistic Signif USAPRICE 68851.8400 0.0000000 USARGNP 8.8141 0.0002963 Dependent Variable USARGNP - Estimation by Mixed Estimation Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 101 Centered R**2 0.997654 R Bar **2 0.997654 Uncentered R**2 0.999903 T x R**2 101.990 Mean of Dependent Variable 2639.7245098 Std Error of Dependent Variable 551.4344454 Standard Error of Estimate 26.7063979 Sum of Squared Residuals 72036.400834 Durbin-Watson Statistic 1.949370 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} -8.52617351 5.78265018 -1.47444 0.14347331 2. USAPRICE{2} 8.88189778 5.75035220 1.54458 0.12557472 3. USARGNP{1} 1.16968123 0.07879125 14.84532 0.00000000 4. USARGNP{2} -0.17729213 0.07989428 -2.21908 0.02871856 5. Constant 23.73450138 22.32510528 1.06313 0.29025804 F-Tests, Dependent Variable USARGNP Variable F-Statistic Signif USAPRICE 2.0012 0.1404944 USARGNP 2486.7562 0.0000000 * Mixed regression by hand * Compute S1, the standard error of auto regression on equation 1 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE LINREG(CMOMENT) USAPRICE # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} Dependent Variable USAPRICE - Estimation by Least Squares Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 97 Centered R**2 0.999911 R Bar **2 0.999907 Uncentered R**2 0.999985 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.231605150 Sum of Squared Residuals 5.2031717209 Regression F(4,97) 272586.2337 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.463235 Q(25-0) 25.940142 Significance Level of Q 0.41079860 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. Constant -0.584689219 0.209530831 -2.79047 0.00633768 2. USAPRICE{1} 1.702187514 0.073717462 23.09070 0.00000000 3. USAPRICE{2} -0.707202173 0.073306405 -9.64721 0.00000000 4. USARGNP{1} 0.000080177 0.000845945 0.09478 0.92468698 5. USARGNP{2} 0.000328262 0.000869452 0.37755 0.70658905 * Two alternative ways to calculate S1, one with Degree of Freedom T-K, the other T-NDET COMPUTE S1=SQRT(%SEESQ) *COMPUTE TEST1=%RSS/(%NOBS-1) *COMPUTE S1=SQRT(TEST1) DISPLAY S1 %SEESQ 0.23161 0.05364 * Compute S2, the standard error of auto regression on equation 2 CMOMENT # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP LINREG(CMOMENT) USARGNP # CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} Dependent Variable USARGNP - Estimation by Least Squares Quarterly Data From 60:03 To 85:04 Usable Observations 102 Degrees of Freedom 97 Centered R**2 0.997688 R Bar **2 0.997593 Uncentered R**2 0.999904 T x R**2 101.990 Mean of Dependent Variable 2639.7245098 Std Error of Dependent Variable 551.4344454 Standard Error of Estimate 27.0542379 Sum of Squared Residuals 70997.383388 Regression F(4,97) 10465.8241 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.127956 Q(25-0) 21.316421 Significance Level of Q 0.67483977 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. Constant 11.62277727 24.47569471 0.47487 0.63594696 2. USAPRICE{1} -15.95218521 8.61107693 -1.85252 0.06699205 3. USAPRICE{2} 16.26287458 8.56306055 1.89919 0.06051095 4. USARGNP{1} 1.22350624 0.09881640 12.38161 0.00000000 5. USARGNP{2} -0.22393299 0.10156234 -2.20488 0.02982563 COMPUTE S2=SQRT(%SEESQ) *COMPUTE TEST2=%RSS/(%NOBS-1) *COMPUTE S2=SQRT(TEST2) DISPLAY S2 %SEESQ 27.05424 731.93179 * Compute the prior information for mixed regression COMPUTE [RECTANGULAR] R = ||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0|| COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0|| COMPUTE [SYMMETRIC] V = ||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2|| WRITE R LITTLER V 1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 233.6238 0.0000 0.0000 0.0000 0.0000 0.0000 233.6238 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000 0.0000 0.0000 0.0400 0.0000 0.0400 0.0000 0.0000 0.0400 0.0000 0.0000 0.0000 0.0400 0.0000 0.0000 0.0000 0.0000 0.0400 @mixed USAPRICE 1 60:1 85:4 R LITTLER V # USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT 0.96040 Dependent Variable USAPRICE - Estimation by Least Squares Quarterly Data From 60:01 To 85:04 Usable Observations 102 Degrees of Freedom 97 Total Observations 104 Skipped/Missing 2 Centered R**2 0.999908 R Bar **2 0.999904 Uncentered R**2 0.999984 T x R**2 101.998 Mean of Dependent Variable 53.205882353 Std Error of Dependent Variable 24.065162461 Standard Error of Estimate 0.235824529 Sum of Squared Residuals 5.3944812376 Regression F(4,97) 262918.3940 Significance Level of F 0.00000000 Durbin-Watson Statistic 2.062554 Q(26-0) 32.607800 Significance Level of Q 0.17378298 Variable Coeff Std Error T-Stat Signif ******************************************************************************* 1. USAPRICE{1} 1.567767815 0.064966164 24.13207 0.00000000 2. USAPRICE{2} -0.573452845 0.064607630 -8.87593 0.00000000 3. USARGNP{1} 0.000038387 0.000506590 0.07577 0.93975458 4. USARGNP{2} 0.000496622 0.000517796 0.95911 0.33988919 5. Constant -0.782826295 0.205787277 -3.80406 0.00024902 --------------E444FCC7106BBD4BFD6B362A Content-Type: text/html; charset=us-ascii Content-Transfer-Encoding: 7bit Hi:

    I am a user of WINRATS 4.3. Lately I have been trying to reproduce the result of
Bayesian VAR (with Minnesota prior) through a Mixed Regression program. I am able
to get really close resemblance in the coefficients, however, they are not identical. In
what follows, please find my program code and output. My main questions are:
1) Does built-in BVAR procedure first demean variables?
2) If not, how to handle deterministic variables? Is my way correct?
3) What is the reason for close, however not identical, regression coefficients?

    I look forward to your response. Thank you very much.
Allison

A) Code for mixreg.src ( identical to the example you give on page 5-13 of manual
version 4, except that I adjust the degree of freedom from T-K to T-NDET with NDET
being number of deterministic variables)

PROCEDURE MIXED DEPVAR NDET NBEG NEND CAPR LOWR V
TYPE  SERIES      DEPVAR
TYPE  REAL        NDET
TYPE  INTEGER     NBEG  NEND ; * Estimation Range
TYPE  RECTANGULAR CAPR       ; * R matrix: m x NREG
TYPE  VECTOR      LOWR       ; * r vector: m x 1
TYPE  SYMMETRIC   V          ; * V matrix: m x m
*
LOCAL SYMMETRIC   XXMIXED
LOCAL VECTOR      XYMIXED
LOCAL INDEX       REGSUPP    ; * Array for supplementary card
*
ENTER(VARYING) REGSUPP       ; * Bring in supplementary card
CMOMENT NBEG NEND            ; * CMOM including depvar
# REGSUPP DEPVAR
LINREG(CMOM, NOPRINT) DEPVAR
# REGSUPP
OVERLAY  %CMOM(1,1)       WITH XXMIXED(%NREG,%NREG)
OVERLAY  %CMOM(%NREG+1,1) WITH XYMIXED(%NREG)
COMPUTE ADJ=(%NOBS-%NREG)/(%NOBS-NDET)
DISPLAY ADJ
COMPUTE XXMIXED=XXMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*CAPR)
COMPUTE XYMIXED=XYMIXED+ADJ*%SEESQ*TR(CAPR)*(INV(V)*LOWR)
LINREG(CMOM) DEPVAR
#REGSUPP
END
 
 

B) Program code for doing mixed estimation regression.

source(noecho) I:\winrats\uhero\mixreg.src
cal 60 1 4
allocate 85:4
open data I:\winrats\usadata.rat
data(format=rats) /

* Using built-in procedure to do BVAR regression with Minnesota priors
COMPUTE GAMMA=0.2
COMPUTE FIJ=0.5
system 1 to 2
variables usaprice usargnp
lags 1 to 2
det constant
specify(type=symmetric,tight=GAMMA)  FIJ
end(system)
estimate

* Mixed regression by hand

* Compute S1, the standard error of auto regression on equation 1
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE
LINREG(CMOMENT) USAPRICE
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

* Two alternative ways to calculate S1, one with Degree of Freedom T-K,
the other T-NDET
COMPUTE S1=SQRT(%SEESQ)
*COMPUTE TEST1=%RSS/(%NOBS-1)
*COMPUTE S1=SQRT(TEST1)
DISPLAY S1 %SEESQ

* Compute S2, the standard error of auto regression on equation 2
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP
LINREG(CMOMENT) USARGNP
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}
COMPUTE S2=SQRT(%SEESQ)
*COMPUTE TEST2=%RSS/(%NOBS-1)
*COMPUTE S2=SQRT(TEST2)
DISPLAY S2 %SEESQ

* Compute the prior information for mixed regression

COMPUTE [RECTANGULAR] R =
||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0||
COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0||
COMPUTE [SYMMETRIC] V =
||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2||
WRITE R LITTLER V

@mixed USAPRICE  1  60:1 85:4    R   LITTLER   V
# USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT
 
 
 

C) Output.
source(noecho) I:\winrats\uhero\mixreg.src
cal 60 1 4
allocate 85:4
open data I:\winrats\usadata.rat
data(format=rats) /

* Using built-in procedure to do BVAR regression with Minnesota priors
COMPUTE GAMMA=0.2
COMPUTE FIJ=0.5
system 1 to 2
variables usaprice usargnp
lags 1 to 2
det constant
specify(type=symmetric,tight=GAMMA)  FIJ
end(system)
 

Summary of the Prior...
Tightness Parameter 0.200000
Harmonic Lag Decay with Parameter 0.000000
Standard Deviations as Fraction of Tightness and Prior Means
  Listed Under the Dependent Variable
          USAPRICE    USARGNP
USAPRICE  1.00000000 0.50000000
USARGNP   0.50000000 1.00000000
Mean      1.00000000 1.00000000

estimate

Dependent Variable USAPRICE - Estimation by Mixed Estimation
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom   101
Centered R**2     0.999907      R Bar **2   0.999907
Uncentered R**2   0.999984      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.231678752
Sum of Squared Residuals        5.4211794375
Durbin-Watson Statistic             2.032795

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               1.558718576  0.063276664     24.63339
0.00000000
2.  USAPRICE{2}              -0.564449553  0.062927823     -8.96979
0.00000000
3.  USARGNP{1}                0.000036688  0.000490931      0.07473
0.94057554
4.  USARGNP{2}                0.000506875  0.000501614      1.01049
0.31467619
5.  Constant                 -0.796232629  0.201750915     -3.94661
0.00014664

F-Tests, Dependent Variable USAPRICE
Variable            F-Statistic       Signif
USAPRICE              68851.8400     0.0000000
USARGNP                   8.8141     0.0002963
 

Dependent Variable USARGNP - Estimation by Mixed Estimation
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom   101
Centered R**2     0.997654      R Bar **2   0.997654
Uncentered R**2   0.999903      T x R**2     101.990
Mean of Dependent Variable      2639.7245098
Std Error of Dependent Variable  551.4344454
Standard Error of Estimate        26.7063979
Sum of Squared Residuals        72036.400834
Durbin-Watson Statistic             1.949370

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  USAPRICE{1}               -8.52617351   5.78265018     -1.47444
0.14347331
2.  USAPRICE{2}                8.88189778   5.75035220      1.54458
0.12557472
3.  USARGNP{1}                 1.16968123   0.07879125     14.84532
0.00000000
4.  USARGNP{2}                -0.17729213   0.07989428     -2.21908
0.02871856
5.  Constant                  23.73450138  22.32510528      1.06313
0.29025804

F-Tests, Dependent Variable USARGNP
Variable            F-Statistic       Signif
USAPRICE                  2.0012     0.1404944
USARGNP                2486.7562     0.0000000
 

* Mixed regression by hand

* Compute S1, the standard error of auto regression on equation 1
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USAPRICE
LINREG(CMOMENT) USAPRICE
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

Dependent Variable USAPRICE - Estimation by Least Squares
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom    97
Centered R**2     0.999911      R Bar **2   0.999907
Uncentered R**2   0.999985      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.231605150
Sum of Squared Residuals        5.2031717209
Regression F(4,97)               272586.2337
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.463235
Q(25-0)                            25.940142
Significance Level of Q           0.41079860

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  Constant                 -0.584689219  0.209530831     -2.79047
0.00633768
2.  USAPRICE{1}               1.702187514  0.073717462     23.09070
0.00000000
3.  USAPRICE{2}              -0.707202173  0.073306405     -9.64721
0.00000000
4.  USARGNP{1}                0.000080177  0.000845945      0.09478
0.92468698
5.  USARGNP{2}                0.000328262  0.000869452      0.37755
0.70658905
 

* Two alternative ways to calculate S1, one with Degree of Freedom T-K,
the other T-NDET
COMPUTE S1=SQRT(%SEESQ)
*COMPUTE TEST1=%RSS/(%NOBS-1)
*COMPUTE S1=SQRT(TEST1)
DISPLAY S1 %SEESQ
      0.23161       0.05364

* Compute S2, the standard error of auto regression on equation 2
CMOMENT
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2} USARGNP
LINREG(CMOMENT) USARGNP
# CONSTANT USAPRICE{1 TO 2} USARGNP{1 TO 2}

Dependent Variable USARGNP - Estimation by Least Squares
Quarterly Data From 60:03 To 85:04
Usable Observations    102      Degrees of Freedom    97
Centered R**2     0.997688      R Bar **2   0.997593
Uncentered R**2   0.999904      T x R**2     101.990
Mean of Dependent Variable      2639.7245098
Std Error of Dependent Variable  551.4344454
Standard Error of Estimate        27.0542379
Sum of Squared Residuals        70997.383388
Regression F(4,97)                10465.8241
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.127956
Q(25-0)                            21.316421
Significance Level of Q           0.67483977

   Variable                     Coeff       Std Error      T-Stat
Signif
*******************************************************************************

1.  Constant                  11.62277727  24.47569471      0.47487
0.63594696
2.  USAPRICE{1}              -15.95218521   8.61107693     -1.85252
0.06699205
3.  USAPRICE{2}               16.26287458   8.56306055      1.89919
0.06051095
4.  USARGNP{1}                 1.22350624   0.09881640     12.38161
0.00000000
5.  USARGNP{2}                -0.22393299   0.10156234     -2.20488
0.02982563

COMPUTE S2=SQRT(%SEESQ)
*COMPUTE TEST2=%RSS/(%NOBS-1)
*COMPUTE S2=SQRT(TEST2)
DISPLAY S2 %SEESQ
     27.05424     731.93179

* Compute the prior information for mixed regression

COMPUTE [RECTANGULAR] R =
||1,0,0,0,0|0,1,0,0,0|0,0,S2/(S1*FIJ),0,0|0,0,0,S2/(S1*FIJ),0|0,0,0,0,0||
COMPUTE [VECTOR] LITTLER = ||1,0,0,0,0||
COMPUTE [SYMMETRIC] V =
||GAMMA**2|0,GAMMA**2|0,0,GAMMA**2|0,0,0,GAMMA**2|0,0,0,0,GAMMA**2||
WRITE R LITTLER V
      1.0000         0.0000         0.0000         0.0000         0.0000
      0.0000         1.0000         0.0000         0.0000         0.0000
      0.0000         0.0000       233.6238         0.0000         0.0000
      0.0000         0.0000         0.0000       233.6238         0.0000
      0.0000         0.0000         0.0000         0.0000         0.0000
 

      1.0000         0.0000         0.0000         0.0000         0.0000
 

      0.0400
      0.0000         0.0400
      0.0000         0.0000         0.0400
      0.0000         0.0000         0.0000         0.0400
      0.0000         0.0000         0.0000         0.0000         0.0400
 
 

@mixed USAPRICE  1  60:1 85:4    R   LITTLER   V
# USAPRICE{1 TO 2} USARGNP{1 TO 2} CONSTANT
      0.96040

Dependent Variable USAPRICE - Estimation by Least Squares
Quarterly Data From 60:01 To 85:04
Usable Observations    102      Degrees of Freedom    97
 Total Observations    104      Skipped/Missing        2
Centered R**2     0.999908      R Bar **2   0.999904
Uncentered R**2   0.999984      T x R**2     101.998
Mean of Dependent Variable      53.205882353
Std Error of Dependent Variable 24.065162461
Standard Error of Estimate       0.235824529
Sum of Squared Residuals        5.3944812376
Regression F(4,97)               262918.3940
Significance Level of F           0.00000000
Durbin-Watson Statistic             2.062554
Q(26-0)                            32.607800
Significance Level of Q           0.17378298

   Variable                     Coeff       Std Error      T-Stat    Signif
*******************************************************************************

1.  USAPRICE{1}               1.567767815  0.064966164     24.13207 0.00000000
2.  USAPRICE{2}              -0.573452845  0.064607630     -8.87593 0.00000000
3.  USARGNP{1}                0.000038387  0.000506590      0.07577 0.93975458
4.  USARGNP{2}                0.000496622  0.000517796      0.95911 0.33988919
5.  Constant                 -0.782826295  0.205787277     -3.80406 0.00024902 --------------E444FCC7106BBD4BFD6B362A-- --------------9090E4371E79398487DF3633-- ---------- End of message ---------- From: Rob Trevor To: "RATS Discussion List" Subject: Re: [Fwd: question about BVAR] Date: Wed, 21 Jun 2000 17:35:05 +1000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" ; format="flowed" X-Mailer: Mercury MTS (Bindery) v1.40 Allison Posting your message to the list 3 times in 5 hours is more likely to ensure that you do NOT get an answer to your question. If someone is interested in what you are trying to do, they will eventually try to help. If you are in urgent need of help, PHONE Estima's tech support line (see their web site). Rob ---------- End of message ---------- From: Yuen Phui Ling Hazel To: "RATS Discussion List" Subject: Matrix Multiplication Date: Wed, 21 Jun 2000 15:49:04 +0800 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain; Hello everyone, Does someone know the syntax to matrix multiplication? Thanks in advance. ---------- End of message ---------- From: =?iso-8859-1?q?slim=20skandes?= To: "RATS Discussion List" Subject: modified R/S Date: Wed, 21 Jun 2000 11:47:45 +0200 (CEST) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Hello=20 I m a student in PARIS X university (France).I m preparing a working paper dealing with the presence of long memory in finantial markets. I have programs computing the hurst exponent by the rescaled range method and the GPH method (Geweke Porter-Hudak). I want to apply the modified rescaled range method proposed by A.Lo (1991). AS i m beggining with RATS i couldn't do the progam . can you send me the program (for RATS ) computing the hurst exponent with the modified rescaled range method or tell me where can i find them. please help me i can't find it anywhere. thank you ___________________________________________________________ Do You Yahoo!? Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ---------- End of message ---------- From: virginie traclet To: "RATS Discussion List" Subject: kolmogorov-smirnov statistics Date: Wed, 21 Jun 2000 14:51:11 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: QUALCOMM Windows Eudora Pro Version 4.0.2 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Dear RATS users, Do you know if the Kolmogorov-Smirnov statistics, which is necessary to compute the interval for the CUSUM=B2 test, is in RATS ? I have looked for this in the RATS manual and in the RATS help but haven't found it . If you have some insights about this subject . Thanks for your help Virginie *********************************************** Virginie Traclet Universit=E9 de Rennes 1 France e-mail : virginie.traclet@univ-rennes1.fr *********************************************** ---------- End of message ---------- From: cfb To: "RATS Discussion List" Subject: Re: Matrix Multiplication Date: Wed, 21 Jun 2000 09:20:55 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mulberry/2.0.0 (MacOS) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit I think you will find it quite easily if you Read The Fine Manual. Kit Baum --On Wednesday, June 21, 2000 3:49 PM +0800 Yuen Phui Ling Hazel wrote: > > Hello everyone, > > Does someone know the syntax to matrix multiplication? > > Thanks in advance. ---------- End of message ---------- From: "Sergio Zuniga" To: "RATS Discussion List" Subject: RE: Matrix Multiplication Date: Wed, 21 Jun 2000 10:22:23 -0600 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; Content-Transfer-Encoding: base64 X-Mailer: Microsoft Outlook Express 5.00.2013.1300 (via Mercury MTS (Bindery) v1.40) PiBIZWxsbyBldmVyeW9uZSwgDQo+IA0KPiBEb2VzIHNvbWVvbmUga25vdyB0aGUgc3ludGF4IHRv IG1hdHJpeCBtdWx0aXBsaWNhdGlvbj8gDQo+IA0KPiBUaGFua3MgaW4gYWR2YW5jZS4NCg0KDQpk ZWMgcmVjdCBYIFkNCmNvbXAgWSA9IHx8MTAwfDEwNnwxMDd8MTIwfDExMHwxMTZ8MTIzfDEzM3wx Mzd8fA0KY29tcCBYID0gfHwxLDEwMCwxMDB8MSwxMDQsOTl8MSwxMDYsMTEwfDEsMTExLDEyNnwx LDExMSwxMTN8MSwxMTUsMTAzfDEsMTIwLDEwMnwkDQogMSwxMjQsMTAzfDEsMTI2LDk4fHwNCg0K Y29tIGJldGEgPSBpbnYodHIoWCkqWCkqKHRyKFgpKlkpDQp3cml0ZSBiZXRhDQogICAgLTQ5LjM0 MTMNCiAgICAgIDEuMzY0Mg0KICAgICAgMC4xMTM5DQoNCg0KDQpDaGVlcnMsDQoNCioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KU2VyZ2lvIFp1bmln YSAgICAgICBzenVuaWdhQHVjbi5jbCAgICAgICAgICAgICANClVuaXZlcnNpZGFkIENhdG9saWNh IGRlbCBOb3J0ZSAgDQpDb3F1aW1ibyAtIENoaWxlIC0gVGVsLjogKDA5KTQxOTkxOTcgICAgICAg ICAgICAgICAgICAgICAgICANCioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKg0K ---------- End of message ---------- From: "Estima" To: "RATS Discussion List" Subject: Re: question about BVAR Date: Wed, 21 Jun 2000 15:32:22 -0500 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Estima MIME-Version: 1.0 Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT X-mailer: Pegasus Mail for Win32 (v3.11) (via Mercury MTS (Bindery) v1.40) On 20 Jun 00, at 12:51, Allison wrote: > Dear Staff: > > I am a user of WINRATS 4.3. Lately I have been trying to reproduce > the result of Bayesian VAR (with Minnesota prior) through a Mixed > Regression program.  Allison: I've cc'd the RATS mailing list on this message because I think it applies there as well. Please send any responses intended directly for Estima to estima@estima.com, not to the list address. First, in order for us to provide support, I need you to send include your name and your RATS serial number so we know who you are. Second, As a reminder to you and everyone else on the RATS list, please don't send queries both to Estima _and_ to the RATS mailing list. If both we at Estima and some other helpful "private citizens" on the mailing list are spending time trying to come up with an answer your query, then someone is wasting time they could be spending on other things. So, one target at a time please. Thanks, Tom Maycock Estima -- ------------------------------------------------------------ | Estima | Sales: (800) 822-8038 | | P.O. Box 1818 | (847) 864-8772 | | Evanston, IL 60204-1818 | Support: (847) 864-1910 | | USA | Fax: (847) 864-6221 | | http://www.estima.com | estima@estima.com | ------------------------------------------------------------ ---------- End of message ---------- From: "Estima" To: "RATS Discussion List" Subject: Re: Matrix Multiplication Date: Wed, 21 Jun 2000 15:38:52 -0500 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Estima MIME-Version: 1.0 Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT X-mailer: Pegasus Mail for Win32 (v3.11) (via Mercury MTS (Bindery) v1.40) On 21 Jun 00, at 9:20, cfb wrote: > I think you will find it quite easily if you Read The Fine Manual. > > Kit Baum > My thoughs exactly. The RATS manual isn't perfect, but I'm pretty sure it covers matrix multiplication. In the midst of many hours spent working on the v. 5 manual, with the end nearly in sight, sometimes I just have to wonder why we even bother.... Tom Maycock Estima -- ------------------------------------------------------------ | Estima | Sales: (800) 822-8038 | | P.O. Box 1818 | (847) 864-8772 | | Evanston, IL 60204-1818 | Support: (847) 864-1910 | | USA | Fax: (847) 864-6221 | | http://www.estima.com | estima@estima.com | ------------------------------------------------------------ ---------- End of message ---------- From: Rob Trevor To: "RATS Discussion List" Subject: Re: Matrix Multiplication Date: Thu, 22 Jun 2000 07:34:39 +1000 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" ; format="flowed" X-Mailer: Mercury MTS (Bindery) v1.40 Tom At 3:38 PM -0500 21/6/00, Estima wrote: >... >In the midst of many hours spent working on the v. 5 manual, with the >end nearly in sight, sometimes I just have to wonder why we even >bother.... It IS extremely useful to those of us who have legit copies :) Mind you, I've always figured that those who don't have their manual readily available (it is a bit inconvenient to carry everywhere) and can't figure how to use the online help or look through some of the provided examples and procedures, shouldn't be allowed to use a computer, let alone something as powerful as RATS :) Looking forward to V5... Rob ---------- End of message ---------- From: Yuen Phui Ling Hazel To: "RATS Discussion List" Subject: RE: Matrix Multiplication Date: Thu, 22 Jun 2000 10:34:21 +0800 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain; Thanks Sergio and everyone for your kind replies. Warm Rgds. -----Original Message----- From: Sergio Zuniga [mailto:szuniga@socompa.ucn.cl] Sent: Thursday, June 22, 2000 12:22 AM To: RATS Discussion List Subject: RE: Matrix Multiplication > Hello everyone, > > Does someone know the syntax to matrix multiplication? > > Thanks in advance. dec rect X Y comp Y = ||100|106|107|120|110|116|123|133|137|| comp X = ||1,100,100|1,104,99|1,106,110|1,111,126|1,111,113|1,115,103|1,120,102|$ 1,124,103|1,126,98|| com beta = inv(tr(X)*X)*(tr(X)*Y) write beta -49.3413 1.3642 0.1139 Cheers, *************************************************** Sergio Zuniga szuniga@ucn.cl Universidad Catolica del Norte Coquimbo - Chile - Tel.: (09)4199197 *************************************************** ---------- End of message ---------- From: Klaus Fischer To: "RATS Discussion List" Subject: Manual Date: Thu, 22 Jun 2000 10:02:12 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mozilla 4.7 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Hi Tom: > In the midst of many hours spent working on the v. 5 manual, with the > end nearly in sight, sometimes I just have to wonder why we even > bother.... > Keep up the good work. The manual IS very much appreciated in its balance of not too technical and not too dumb..... by some at least. And if it gets improved, well that's equally appreciated! Klaus Fischer ---------- End of message ---------- From: Simon.Van-Norden@hec.ca To: "RATS Discussion List" Subject: Re: Manual Date: Thu, 22 Jun 2000 10:58:13 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Ecole des Hautes Etudes Commerciales X-Mailer: Mozilla 4.61 [en] (Win98; U) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable I wanted to try to make some constructive suggestions regarding the manua= l (if its not too late already.) I agree with Klaus that RATS usually gets it r= ight in the balance between technical/dumb, the use of examples, etc. However, I'= d wish for two changes; 1) Consolidation; Some topics are just too fragmented. Part of the descri= ption might be in Chapter 1, some in Chapter 4, some in Chap. 14 and some in th= e appendices. Or it might be only one of these places, but I can't tell whi= ch and have to check them all. I think I'm wishing that Chap. 14 were more compl= ete, even if this meant more duplication of material found elsewhere. 2) Chapter 9: I've always had trouble wrapping my mind around RATS's feat= ures for defining, manipulating and solving equations (and systems of equation= s.) It leaves me wishing this chapter were not so terse. Just my 2 cents. SvN Klaus Fischer wrote: >=20 > Hi Tom: >=20 > > In the midst of many hours spent working on the v. 5 manual, with the > > end nearly in sight, sometimes I just have to wonder why we even > > bother.... > > >=20 > Keep up the good work. The manual IS very much appreciated in its balan= ce > of not too technical and not too dumb..... by some at least. And if it > gets improved, well that's equally appreciated! >=20 > Klaus Fischer --=20 Simon van Norden, Prof. agr=E9g=E9, www.hec.ca/pages/simon.van-norden Service de l'enseignement de la finance, =C9cole des H.E.C. 3000 Cote-Sainte-Catherine, Montreal QC, CANADA H3T 2A7 simon.van-norden@hec.ca or (514)340-6781 or fax:(514)340-5632 ---------- End of message ---------- From: Luca Cazzulani To: "RATS Discussion List" Subject: unsubscribe Date: Thu, 22 Jun 2000 18:03:12 +0100 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: QUALCOMM Windows Eudora Light Version 3.0.6 (32) (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" unsubscribe ---------- End of message ---------- From: "Estima" To: "RATS Discussion List" Subject: Re: Manual Date: Thu, 22 Jun 2000 11:06:11 -0500 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: Organization: Estima MIME-Version: 1.0 Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT X-mailer: Pegasus Mail for Win32 (v3.11) (via Mercury MTS (Bindery) v1.40) > I wanted to try to make some constructive suggestions regarding the manual (if > its not too late already.) I agree with Klaus that RATS usually gets it right in > the balance between technical/dumb, the use of examples, etc. However, I'd wish > for two changes; We always appreciate constructive criticism! It is close to being too late, but we are still finalizing some things. I think we've already addressed some of your ideas though. > > 1) Consolidation; Some topics are just too fragmented. Part of the description > might be in Chapter 1, some in Chapter 4, some in Chap. 14 and some in the > appendices. Or it might be only one of these places, but I can't tell which and > have to check them all. I think I'm wishing that Chap. 14 were more complete, > even if this meant more duplication of material found elsewhere. I think you'll find Version 5 to improve on this. First, we've split into two books, using a smaller 7x9 format. This is primarily in response to people who wanted to have the "reference" section (Chapters 14 and 15 of current edition) in a more portable format, and because the manual would just have gotten too big and unwieldy otherwise. I don't know if you'll find that the reference section is more complete (although it does document functions much more extensively), and it some ways it may even be a bit more brief. However, I think we've gotten more consistent about where information appears (i.e. in the User's Guide section vs. the Reference section), and we are _much_ more consistent about providing technical details/algorithm info, etc. Chapter 1 has been expanded somewhat, with a revised and expanded tutorial example, and covering things like functions, computations, and procedures in a bit more detail. I think it makes for a more complete introduction to the program. Chapter 4 has been split into two parts--one chapter on scalar/matrix manipulations, data types, etc., which most people will want to know, and a separate "programmer's" chapter with details on writing procedures, repetitive analysis routines, interactive programs, and the like--which only interest a portion of the user base. > 2) Chapter 9: I've always had trouble wrapping my mind around RATS's features > for defining, manipulating and solving equations (and systems of equations.) It > leaves me wishing this chapter were not so terse. The chapter 9 (now Chapter 11) contents haven't really changed all that much, but, due to significant changes to the program itself, an expanded example, and more technical details, I think you'll be happy with the changes. For example, a big change in RATS 5 is that you can now create MODELS that contain linear equations, non-linear equations, or a mix of both- -previously MODELS could only contain non-linear FRMLS, so "models" of equations required repeating long lists of supplementary cards. It is also much easier to define a MODEL, and you can even combine MODELS into a bigger MODEL. This, along with improved options for saving results, makes handling VAR analysis, multivariate forecasting, etc., much easier. OK, that's enough hints for now. I'll save details on constrained optimization, "parameter sets", new instructions, new functions, etc. for later! No, I don't know when it will ship yet, so please don't ask. But, we should be posting details on new features fairly soon on the web site, along with an expected release date (newsletter to follow). And we always need more beta testers! Thanks, Tom Maycock Estima -- ------------------------------------------------------------ | Estima | Sales: (800) 822-8038 | | P.O. Box 1818 | (847) 864-8772 | | Evanston, IL 60204-1818 | Support: (847) 864-1910 | | USA | Fax: (847) 864-6221 | | http://www.estima.com | estima@estima.com | ------------------------------------------------------------ ---------- End of message ---------- From: gkoutmos@FAIR1.FAIRFIELD.EDU To: "RATS Discussion List" Subject: unsubscribe Date: Fri, 23 Jun 2000 15:01:36 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-version: 1.0 X-Mailer: Pegasus Mail for Win32 (v3.12b) (via Mercury MTS (Bindery) v1.40) Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT unsubscribe ---------- End of message ---------- From: =?iso-8859-1?q?slim=20skandes?= To: "RATS Discussion List" Subject: modified R/S Date: Fri, 23 Jun 2000 13:00:31 +0200 (CEST) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 Hello=20 I m a student in PARIS X university (France).I m preparing a working paper dealing with the presence of long memory in finantial markets. I have programs computing the hurst exponent by the rescaled range method and the GPH method (Geweke Porter-Hudak). I want to apply the modified rescaled range method proposed by A.Lo (1991). AS i m beggining with RATS i couldn't do the progam . can you send me the program (for RATS ) computing the hurst exponent with the modified rescaled range method or tell me where can i find them. please help me i can't find it anywhere. thank you ___________________________________________________________ Do You Yahoo!? Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ---------- End of message ---------- From: =?iso-8859-1?q?slim=20skandes?= To: "RATS Discussion List" Subject: run test Date: Mon, 26 Jun 2000 17:08:54 +0200 (CEST) Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailer: Mercury MTS (Bindery) v1.40 does someone has the code for a run test (non parametric test of serial correlation) thank you ___________________________________________________________ Do You Yahoo!? Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ---------- End of message ---------- From: Binelli Maurizio To: "RATS Discussion List" Subject: Quantile regression Date: Wed, 28 Jun 2000 16:07:42 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: MIME-Version: 1.0 X-Mailer: Internet Mail Service (5.5.2650.21) (via Mercury MTS (Bindery) v1.40) Content-Type: text/plain Dear RATS users, I got in trouble using the little flexible quantile regression procedure RQ. Does someone have one more flexible? Many thanks in advance. Maurizio. ---------- End of message ---------- From: baum To: "RATS Discussion List" Subject: Re: modified R/S Date: Thu, 29 Jun 2000 08:48:59 -0400 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: Mulberry/2.0.0 (MacOS) (via Mercury MTS (Bindery) v1.40) MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable Code to calculate the Lo modified R/S test (and, as a special case, the classical Hurst-Mandelbrot R/S test) is available at http://ideas.uqam.ca/ideas/data/Softwares/bocbocodeS412601.html This is written in Stata, but it should be fairly straightforward to translate to RATS, especially since RATS has the MCOV facility to do the Newey-West piece more succinctly than it may be done in Stata. The code is reasonably well documented and comes with a help file explaining how it works. The data may be accessed over the Internet, since Stata can open a dataset from a URL. Please note that the SSC-IDEAS archive is open to contributions of well-documented code for all major packages, and contains RATS, MATLAB, Ox, = and Mathematica modules as well as (predominantly) Stata among its 483 components. The predominance of Stata modules reflects the heavy traffic among StataList users, and the manner in which Stata modules may be installed to appear as native commands, extending the language accordingly. Kit Baum Boston College Economics SSC-IDEAS maintainer http://ideas.uqam.ca/ideas/data/bocbocode.html --On Friday, June 23, 2000 13.00 +0200 slim skandes = wrote: > Hello > I m a student in PARIS X university (France).I m > preparing a working paper dealing with the presence of > long memory in finantial markets. > I have programs computing the hurst exponent by the > rescaled range method and the GPH method (Geweke > Porter-Hudak). I want to apply the modified rescaled > range method proposed by A.Lo (1991). > AS i m beggining with RATS i couldn't do the progam . > can you send me the program (for RATS ) computing the > hurst exponent with the modified rescaled range method > or tell me where can i find them. > please help me i can't find it anywhere. > > thank you > > > > > > > ___________________________________________________________ > Do You Yahoo!? > Achetez, vendez! =C0 votre prix! Sur http://encheres.yahoo.fr ----------------------------------------------------------------- Kit Baum baum@bc.edu http://fmwww.bc.edu/ec-v/baum.fac.html ---------- End of message ---------- From: virginie traclet To: "RATS Discussion List" Subject: Chow tests Date: Fri, 30 Jun 2000 12:16:19 +0200 Errors-to: Reply-to: "RATS Discussion List" Sender: Maiser@efs01.efs.mq.edu.au X-listname: X-Mailer: QUALCOMM Windows Eudora Pro Version 4.0.2 (via Mercury MTS (Bindery) v1.40) Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Dear RATS users, I want to make Chow tests without knowing the break date a priori, so I make Chow test with all possible dates of my sample period ( see the example of program below) : is it possible to do it ? *******************Chow test for the period 1970:1 1997:4 for a bivariate VAR including 5 lags on nominal GDP and adjusted monetary base (both in first differences of logarithms) ************** do time = 1972:1, 1997:4 linreg dlpibn 1970:1 time #constant dlpibn{1 to 5} dlbasea{1 to 5} compute rss1=%rss linreg dlpibn time 1997:4 #constant dlpibn{1 to 5} dlbasea{1 to 5} compute rss2=%rss linreg dlpibn 1970:1 1997:4 #constant dlpibn{1 to 5} dlbasea{1 to 5} compute rsstotal=%rss compute nobstotal=%nobs compute F = ((rsstotal - rss1 - rss2) / %nreg ) / ((rss1 + rss2)/(nobstotal - 2*%nreg)) cdf ftest F %nreg nobstotal-2*%nreg end do time *************************************** Thanks for your help Regards, Virginie ---------- End of message ----------