-------------------------------------------------------------------------------
help paran
-------------------------------------------------------------------------------

Title

paran -- Horn's Test of Principal Components/Factors

Syntax

Parallel analysis of data

paran [varlist] [weight] [if exp] [in range] [, options]

options                  Description
-------------------------------------------------------------------------
Model
  iterations(#)          specify the number of iterations
  centile(#)             specify using a centile value instead of the mean
  factor(factor_type)    use factor instead of pca (defaults to pca if
                           left blank)
  citerate(#)            communality re-estimation iterations (ipf only)

Reporting
  quietly                suppresses pca or factor output
  nostatus               suppresses the status indicator
  all                    report all eigenvalues (default reports only
                           those retained)

Graphing
  graph                  graphs unadjusted, adjusted, and random
                           eigenvalues
  color                  renders graph in color (default is black and
                           white)
  lcolors(3 x rgb)       specifies colors using three rgb triples for
                           observed, random, and adjusted eigenvalues
                           (overrides the color option)
  saving(filename)       saves graph as a .gph file
  replace                replaces an existing file when using saving()

Miscellaneous
  protect(#)             perform # optimizations and report the best
                           solution (ml factor only)
  seed(#)                seed the random number generator with the
                           supplied integer
  mat(matrix name)       provide a correlation matrix instead of the
                           varlist
  n(#)                   specifies the required sample size when using
                           the mat() option
  copyleft               displays the GPL license for paran
-------------------------------------------------------------------------
fweights and aweights are allowed when using varlist; see help weights.

Description

paran is an implementation of Horn's technique for evaluating the components or common factors retained in a principal component analysis (PCA) or a common factor analysis (FA). According to Horn, a common interpretation of non-correlated data is that they are perfectly non-collinear, and one would therefore expect to see eigenvalues equal to 1 in a PCA of such data (or equal to 0 in the case of a common factor analysis, as with pf). However, Horn notes that multicollinearity occurs due to sampling error and least-squares "bias," even in uncorrelated data, and therefore actual PCAs of such data will reveal components with eigenvalues both greater than and less than 1. His strategy is to contrast eigenvalues produced through PCAs of random datasets (uncorrelated variables) with the same number of variables and observations as the experimental or observational dataset, to produce eigenvalues for components or factors that are adjusted for the sampling error-induced inflation. Components or factors with adjusted eigenvalues greater than 1 (for PCA) or greater than 0 (for FA) are retained, with the adjustment given by:

For principal component analysis:

Observed Data Eigenvalue_p - (Random Data Eigenvalue_p - 1)

For common factor analysis:

Observed Data Eigenvalue_p - Random Data Eigenvalue_p

paran is used in place of a pca varlist command (or factor). The user may also specify how many times to make the contrast with a random dataset (the default is 30 per variable); values less than 1 are ignored, and the default value assumed. Random datasets are generated using the uniform() function. The program returns the estimated mean eigenvalues of random data if the centile option is unspecified; otherwise it returns the specified centile. Estimated biases for each eigenvalue are also returned. paran may thus be used to conduct parallel analysis following Glorfeld's suggestions to reduce the likelihood of over-retention (Glorfeld, 1995).

When the all option is not used, only unadjusted eigenvalues greater than 1 (for principal components) or 0 (for factors) are reported, with retained adjusted eigenvalues printed in yellow, and unretained adjusted eigenvalues printed in red.
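The procedure described above can be sketched outside of Stata. The following is a minimal Python illustration of Horn's parallel analysis for PCA, not paran's implementation; the function name parallel_analysis_pca and its interface are hypothetical, chosen only for this sketch.

```python
import numpy as np

def parallel_analysis_pca(X, iterations=None, seed=None):
    """Illustrative Horn's parallel analysis for PCA (not paran itself).

    adjusted_p = observed_p - (random_p - 1); components with an
    adjusted eigenvalue greater than 1 are retained.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    if iterations is None:
        iterations = 30 * p  # paran's default: 30 iterations per variable

    # Observed eigenvalues of the correlation matrix, descending
    # (PCA of standardized variables).
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    # Mean eigenvalues across PCAs of uncorrelated random data with the
    # same dimensions (paran draws these with Stata's uniform()).
    random_ev = np.zeros(p)
    for _ in range(iterations):
        R = rng.uniform(size=(n, p))
        random_ev += np.sort(
            np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    random_ev /= iterations

    adjusted = observed - (random_ev - 1)
    retained = int(np.sum(adjusted > 1))
    return observed, random_ev, adjusted, retained
```

For data with one strong common factor, the first adjusted eigenvalue remains well above 1 and at least one component is retained, while later components are penalized by the sampling-error bias.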

Options

iterations(#) sets the number of contrast datasets to evaluate. The default value is 30 times the number of variables, and values less than 1 are ignored. For large datasets with large numbers of variables, many iterations may be time consuming. The greater the number of iterations, the more accurate the estimates of sampling bias will be.

centile(#) specifies that the supplied centile value is to be used instead of the mean (effectively the median, since the distribution is symmetrical) in estimating bias. Values above the mean/median, such as the 95th centile, give more conservative estimates of the chance bias in the eigenvalues from a PCA of sample data (see Glorfeld, 1995). This option supersedes the older pnf option, which was equivalent to centile(95). Values of centile() must be greater than 0 and less than 100; non-integer values are rounded to the nearest integer. Running paran without this option uses the mean value (very close to centile(50)).

quietly suppresses output for the PCA or factor analysis. This option is only used if a varlist is specified in the paran command.

nostatus suppresses the status indicator. By default, paran indicates when each ten percent of the computation has been completed; nostatus eliminates this behavior.

factor(factor_type) selects one of the factor estimation types: pf, pcf, ipf, or ml (for principal factors, principal component factors, iterated principal factors, or maximum likelihood factors, respectively). If you specify anything but one of these four abbreviations, you will be warned and the program will halt. CAVEAT: Conducting parallel analysis using factor methods other than pf is unorthodox; interpret such results at your own risk. If factor() is not used, or if none of these factor estimation types is given, paran performs parallel analysis using pca by default.

citerate(#) sets how many iterations will be used to re-estimate communalities for the iterated principal factor type (see factor).

protect(#) sets the number of optimizations of starting values for the maximum likelihood factor type (see factor).

all reports all components or factors, not just those with unadjusted eigenvalues greater than one (or greater than zero for factor). The default is not to report all components or factors.

graph draws a graph of the observed eigenvalues, the random eigenvalues, and the adjusted eigenvalues, much like the graphs presented by Horn in his 1965 paper.

color renders the graph in color (only with graph), with unadjusted eigenvalues drawn in red, adjusted eigenvalues drawn in black, and random eigenvalues drawn in blue, and all lines drawn solid. Without the color option, the graph is rendered in black and white: the line connecting the unadjusted eigenvalues is dashed, the line connecting the random eigenvalues is dotted, and the line connecting the adjusted eigenvalues is solid.

lcolors(# # # # # # # # #) specifies the colors of each line on the graph using three rgb triples (only with graph). The first triple sets the R, G, and B components for the observed eigenvalues, the second triple sets the values for the mean or centile random eigenvalues, and the third triple sets the values for the adjusted eigenvalues. These settings override the default (red, blue, and black) colors of the color option.

saving(filename) outputs the graph to the specified filename as a .gph file (only with graph).

replace overwrites an existing filename when the saving() option is used with graph.

seed(#) specifies an integer seed for the random number generator (see set seed) so that the results of paran for a specific dataset can be exactly reproduced. The default behavior of paran is not to specify a seed.

mat(matrix name) specifies an optional correlation matrix to be used instead of the varlist; it requires that the n(#) option also be specified. This option is not compatible with aweights or fweights.

n(#) specifies the sample size when using the mat(matrix name) option.

copyleft displays the copying permission statement for paran. paran is free software, licensed under the GPL. The full license can be obtained by typing:

    . net describe paran, from(http://www.doyenne.com/stata)

and clicking on the "click here to get" link for the ancillary file.
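The conservatism of the centile() option can be illustrated outside of Stata. The following Python sketch (illustrative only, not paran's code) shows that the 95th centile of the largest random-data eigenvalue exceeds its mean, so subtracting it yields a smaller adjusted eigenvalue and thus a more conservative retention decision; the dimensions and iteration count here are arbitrary.

```python
import numpy as np

# Collect the largest eigenvalue across PCAs of uncorrelated random
# data (100 observations, 5 variables), as parallel analysis does.
rng = np.random.default_rng(0)
first_evs = []
for _ in range(200):
    R = rng.uniform(size=(100, 5))
    first_evs.append(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)).max())

mean_ev = np.mean(first_evs)       # bias estimate used by default
p95_ev = np.percentile(first_evs, 95)  # bias estimate with centile(95)

# The 95th centile is a larger bias estimate than the mean, so it
# produces a smaller (more conservative) adjusted eigenvalue.
assert p95_ev > mean_ev > 1.0
```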

Remarks

Hayton, et al. (2004) urge a parameterization of the random data to approximate the distribution of the observed data with respect to the middle ("mid-point") and the observed minimum and maximum. However, PCA as I understand it is insensitive to standardizing transformations of each variable, and to any linear transformation of all variables, and produces the same eigenvalues used in component or factor retention decisions. This is borne out by the notable lack of difference between analyses conducted using a variety of simulated distributional assumptions (Dinno, 2009). The central limit theorem would seem to make the selection of a distributional form for the random data moot with any sizable number of iterations. Former functionality implementing the recommendation by Hayton et al. (2004) has been removed, since parallel analysis is insensitive to it, and it only adds to the computation time required to conduct parallel analysis.
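The insensitivity claimed above can be checked directly. This Python sketch (an illustration, not part of paran; the shifts and scalings are arbitrary) shows that the correlation-matrix eigenvalues driving retention decisions are unchanged by per-variable linear rescaling, which is why matching the observed mid-point and range cannot alter the result.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))

# Apply an arbitrary linear transformation to each variable:
# a different positive scaling and shift per column.
Y = X * np.array([2.0, 0.5, 10.0, 1.0, 3.7]) + np.array([5, -3, 0, 100, 0.1])

ev_x = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))
ev_y = np.sort(np.linalg.eigvalsh(np.corrcoef(Y, rowvar=False)))

# Eigenvalues are identical: correlation is scale- and shift-invariant.
assert np.allclose(ev_x, ev_y)
```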

Examples

. paran var1-var16

. paran var1-var26, iter(5000) q centile(95)

. paran var1-var10, iter(1) factor(ipf) cit(50)

Author

Alexis Dinno
alexis dot dinno at pdx dot edu

I am receptive to comments and requests.

References

Dinno A. 2009. "Exploring the Sensitivity of Horn's Parallel Analysis to the
    Distributional Form of Simulated Data." Multivariate Behavioral Research.
    44: 362-388.

Glorfeld LW. 1995. "An Improvement on Horn's Parallel Analysis Methodology
    for Selecting the Correct Number of Factors to Retain." Educational and
    Psychological Measurement. 55: 377-393.

Hayton JC, Allen DG, and Scarpello V. 2004. "Factor Retention Decisions in
    Exploratory Factor Analysis: A Tutorial on Parallel Analysis."
    Organizational Research Methods. 7: 191-205.

Horn JL. 1965. "A Rationale and a Test for the Number of Factors in Factor
    Analysis." Psychometrika. 30: 179-185.

Zwick WR, Velicer WF. 1986. "Comparison of Five Rules for Determining the
    Number of Components to Retain." Psychological Bulletin. 99: 432-442.

Saved results

paran saves the following 1 by P matrices in e():

Matrices
  e(UnadjustedEv)   Unadjusted eigenvalues from the pca or factor command
  e(AdjustedEv)     Eigenvalues from the analysis adjusted by subtracting
                      the estimated bias
  e(MeanRandomEv)   The mean of the eigenvalues of random data sets of
                      size N by P (only if the centile() option is
                      unspecified)
  e(CentRandomEv)   The centile of the eigenvalues of random data sets of
                      size N by P as given by the centile() option (only
                      if that option is specified)
  e(Bias)           The estimated bias (which is the mean of the
                      eigenvalues of random data sets of size N by P when
                      using the factor option)

Also See

On-line: help for pca, factor