-------------------------------------------------------------------------------
help for kapci, kappaci                               (Author:  David Harrison)
-------------------------------------------------------------------------------

Confidence intervals for kappa

Two unique raters, two ratings:

kapci varname1 varname2 [if exp] [in range] [, positive(exp) [exact|wilson|agresti|jeffreys] level(#) ]

Two or more (non-unique) raters, two ratings:

kapci varname1 varname2 varname3 [...] [if exp] [in range] [, positive(exp) level(#) ]

kappaci varname1 varname2 [if exp] [in range] [, level(#) ]

Description

kapci (first syntax) calculates the kappa-statistic measure of interrater agreement when there are two unique raters and two ratings, with a confidence interval calculated using the goodness-of-fit approach of Donner and Eliasziw (1992).
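
In outline (a sketch only; see the reference for details), this approach models the two ratings on each subject with the common correlation model, under which, writing pi for the prevalence of a positive rating,

    Pr(both positive)  = pi^2 + pi*(1 - pi)*kappa
    Pr(ratings differ) = 2*pi*(1 - pi)*(1 - kappa)
    Pr(both negative)  = (1 - pi)^2 + pi*(1 - pi)*kappa

The confidence limits for kappa are the values at which the chi-squared goodness-of-fit statistic comparing the observed and expected counts of these three outcomes reaches the critical value on 1 degree of freedom.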

kapci (second syntax) and kappaci calculate the kappa statistic in the case of two or more (non-unique) raters and two ratings, with a confidence interval calculated by inverting a modified Wald test applied to the Fleiss-Cuzick estimate of kappa, as recommended by Zou and Donner (2004).
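
In both cases the quantity being estimated is the kappa statistic

    kappa = (p_o - p_e) / (1 - p_e)

where p_o is the observed proportion of agreement and p_e the proportion of agreement expected by chance; kappa = 1 indicates perfect agreement and kappa = 0 indicates agreement no better than chance.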

kapci (second syntax) and kappaci produce the same results; they merely assume the data are organized differently. Both commands assume each observation is a subject. For kapci, varname1 contains the ratings by the first rater, varname2 the ratings by the second rater, and so on. kappaci, on the other hand, assumes each variable records the frequency with which a rating was assigned: the first variable records the number of times a positive rating was assigned to the subject, and the second the number of times a negative rating was assigned. These definitions follow the same patterns as kap and kappa (see help kappa); the two layouts are sketched below.
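
A minimal sketch of the two layouts, assuming three raters, ratings coded 0/1, and hypothetical variable names rater1, rater2, rater3, npos, and nneg. With one observation per subject and one rating variable per rater:

. kapci rater1 rater2 rater3

With counts of positive and negative ratings per subject:

. egen npos = rowtotal(rater1 rater2 rater3)
. generate nneg = 3 - npos
. kappaci npos nneg

Both calls should produce the same estimate and confidence interval.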

Options

positive(exp) specifies an expression identifying the ratings that should be considered positive; by default, nonzero (and nonmissing) ratings are treated as positive and 0 as negative.

exact, wilson, agresti, and jeffreys specify how the binomial confidence interval for the observed proportion of agreement is calculated (see help ci); the default is exact. An example combining these options follows the options list.

level(#) specifies the confidence level, in percent, for confidence intervals; see help level.
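
For illustration only, with hypothetical variable names and assuming ratings coded 1/2 with 2 the positive rating, the options may be combined as in

. kapci rater1 rater2, positive(2) jeffreys level(90)

which requests a Jeffreys interval for the observed agreement and a 90% confidence level.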

Examples

Two raters, rating variables coded 0/1.

. kapci rada radb

Two raters, rating variables coded Y/N, Wilson confidence interval on observed agreement.

. kapci rada radb, pos("Y") wilson

More than two raters, 99% confidence interval.

. kappaci pos neg, level(99)

References

Donner, A. and Eliasziw, M. 1992. A goodness-of-fit approach to inference procedures for the kappa statistic: confidence interval construction, significance-testing and sample size estimation. Statistics in Medicine 11: 1511-1519.

Zou, G. and Donner, A. 2004. Confidence interval estimation of the intraclass correlation coefficient for binary outcome data. Biometrics 60: 807-811.

Maintainer

David A. Harrison
Intensive Care National Audit & Research Centre
david@icnarc.org

Also see

Online: help for kappa, ci