Template-Type: ReDIF-Article 1.0 Author-Name: Abhirup Datta Author-X-Name-First: Abhirup Author-X-Name-Last: Datta Author-Name: Wenyi Lin Author-X-Name-First: Wenyi Author-X-Name-Last: Lin Author-Name: Amrita Rao Author-X-Name-First: Amrita Author-X-Name-Last: Rao Author-Name: Daouda Diouf Author-X-Name-First: Daouda Author-X-Name-Last: Diouf Author-Name: Abo Kouame Author-X-Name-First: Abo Author-X-Name-Last: Kouame Author-Name: Jessie K. Edwards Author-X-Name-First: Jessie K. Author-X-Name-Last: Edwards Author-Name: Le Bao Author-X-Name-First: Le Author-X-Name-Last: Bao Author-Name: Thomas A. Louis Author-X-Name-First: Thomas A. Author-X-Name-Last: Louis Author-Name: Stefan Baral Author-X-Name-First: Stefan Author-X-Name-Last: Baral Title: Bayesian Estimation of MSM Population Size in Côte d’Ivoire Abstract: Côte d’Ivoire has among the most generalized HIV epidemics in West Africa with an estimated half million people living with HIV. Across West Africa, key populations, including gay men and other men who have sex with men (MSM), are often disproportionately burdened with HIV due to specific acquisition and transmission risks. Quantifying population sizes of MSM at the subnational level is critical to ensuring evidence-based decisions regarding the scale and content of HIV prevention interventions. While survey-based direct estimates of MSM numbers are available in a few urban centers across Côte d’Ivoire, no data on MSM population size exists in other areas without any community group infrastructure to facilitate sufficient access to communities of MSM. The data are used in a Bayesian regression setup to produce estimates of the numbers of MSM in areas of Côte d’Ivoire prioritized in the HIV response. Our hierarchical model imputes missing covariates using geo-spatial information and allows for proper uncertainty quantification leading to confidence bounds for predicted MSM population size estimates. This process provided population size estimates where there are no empirical data, to guide the prioritization of further collection of empirical data on MSM and inform evidence-based scaling of HIV prevention and treatment programs for MSM across Côte d’Ivoire. Journal: Statistics and Public Policy Pages: 1-13 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2018.1546634 File-URL: http://hdl.handle.net/10.1080/2330443X.2018.1546634 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:1-13 Template-Type: ReDIF-Article 1.0 Author-Name: James Nguyen Author-X-Name-First: James Author-X-Name-Last: Nguyen Author-Name: Carmen Tiu Author-X-Name-First: Carmen Author-X-Name-Last: Tiu Author-Name: Jane Stewart Author-X-Name-First: Jane Author-X-Name-Last: Stewart Author-Name: David Miller Author-X-Name-First: David Author-X-Name-Last: Miller Title: Global Zoning and Exchangeability of Field Trial Residues Between Zones: Are There Systematic Differences in Pesticide Residues Across Geographies? Abstract: Mixed-effects models were used to evaluate the global zoning concept using residue data from a comprehensive database of supervised field trials performed in various countries and regions on a variety of pesticide–crop combinations. No statistically significant systematic differences in pesticide residues were found between zones among the pesticide uses examined. 
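The following is an illustrative sketch (not the authors' code) of the kind of mixed-effects comparison described in the Nguyen, Tiu, Stewart, and Miller abstract above: a random intercept for each field trial and a fixed effect of zone, fit here to simulated data with hypothetical column names (log_residue, zone, trial_id).

```python
# Illustrative sketch only: a mixed-effects model of the general kind described
# in the Nguyen et al. abstract, fit to simulated data. Column names (zone,
# trial_id, log_residue) are hypothetical, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_trials, reps = 60, 4
trial_id = np.repeat(np.arange(n_trials), reps)
zone = np.repeat(rng.choice(["EU", "NAFTA", "Other"], size=n_trials), reps)
trial_effect = np.repeat(rng.normal(0, 0.3, n_trials), reps)  # trial-to-trial variation
log_residue = -1.0 + trial_effect + rng.normal(0, 0.2, n_trials * reps)  # no true zone effect

df = pd.DataFrame({"trial_id": trial_id, "zone": zone, "log_residue": log_residue})

# Random intercept per field trial; the fixed effect of zone tests for a
# systematic between-zone difference in (log) residues.
model = smf.mixedlm("log_residue ~ C(zone)", data=df, groups=df["trial_id"])
result = model.fit()
print(result.summary())  # zone coefficients near zero here, as simulated
```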
In addition, we conducted a simulation to assess the impact of using regional versus global datasets for calculating maximum residue limits (MRLs). The conclusion of this assessment supports the concept of exchangeability of pesticide residue values across geographic regions and opens the possibility of improving harmonization of pesticide regulatory standards by establishing more globally aligned MRLs. Supplemental material for this article is available online. Journal: Statistics and Public Policy Pages: 14-23 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2018.1555068 File-URL: http://hdl.handle.net/10.1080/2330443X.2018.1555068 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:14-23 Template-Type: ReDIF-Article 1.0 Author-Name: Raid W. Amin Author-X-Name-First: Raid W. Author-X-Name-Last: Amin Author-Name: Alexander Bohnert Author-X-Name-First: Alexander Author-X-Name-Last: Bohnert Author-Name: David Banks Author-X-Name-First: David Author-X-Name-Last: Banks Title: Patterns of Pediatric Cancers in Florida: 2000–2015 Abstract: This study identifies pediatric cancer clusters in Florida for the years 2000–2015. Unlike previous publications on pediatric cancers in Florida, it draws upon an Environmental Protection Agency dataset on carcinogenic air pollution, the National Air Toxics Assessment, as well as more customary demographic variables (age, sex, race). The focus is upon the three most widely seen pediatric cancer types in the USA: brain tumors, leukemia, and lymphomas. The covariates are used in a Poisson regression to predict cancer incidence. The adjusted cluster analysis quantifies the role of each covariate. Using Florida Association of Pediatric Tumor Programs data for 2000–2015, we find statistically significant pediatric cancer clusters, but we cannot associate air pollution with the cancer incidence. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 24-35 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1574686 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1574686 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:24-35 Template-Type: ReDIF-Article 1.0 Author-Name: Steven P. Millard Author-X-Name-First: Steven P. Author-X-Name-Last: Millard Title: EPA is Mandating the Normal Distribution Abstract: The United States Environmental Protection Agency (USEPA) is responsible for overseeing the cleanup of sites that fall within the jurisdiction of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA; also known as “Superfund”). This process almost always involves a remedial investigation/feasibility study (RI/FS), including deriving upper confidence, prediction, and/or tolerance limits based on concentrations from a designated “background” area which are subsequently used to determine whether a remediated site has achieved compliance. Past USEPA guidance states outlying observations in the background data should not be removed based solely on statistical tests, but rather on some scientific or quality assurance basis. 
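As a hedged illustration of the Poisson-regression step named in the Amin, Bohnert, and Banks abstract above, the sketch below fits a Poisson GLM with a log-population offset to simulated area-level data; the variable names (cases, population, air_toxics, pct_minority, median_age) are placeholders, not the study's actual covariates.

```python
# Illustrative sketch of the Poisson-regression step described in the
# Amin, Bohnert, and Banks abstract, using simulated data. Variable names
# (cases, population, air_toxics, pct_minority, median_age) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200  # e.g., small areas such as ZIP-code regions
df = pd.DataFrame({
    "population": rng.integers(1_000, 50_000, n),
    "air_toxics": rng.gamma(2.0, 1.0, n),
    "pct_minority": rng.uniform(0, 1, n),
    "median_age": rng.normal(9, 2, n),
})
rate = np.exp(-9.0 + 0.0 * df["air_toxics"] + 0.3 * df["pct_minority"])  # per-person rate
df["cases"] = rng.poisson(rate * df["population"])

# Poisson GLM with a log-population offset models incidence rates rather than raw counts.
fit = smf.glm(
    "cases ~ air_toxics + pct_minority + median_age",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),
).fit()
print(fit.summary())
```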
However, recent USEPA guidance states “extreme” outliers, based on tests that assume a normal (Gaussian) distribution, should always be removed from background data, and because “extreme” is not defined, USEPA has interpreted this to mean all outliers identified by a test should be removed. This article discusses problems with current USEPA guidance and how it contradicts past guidance, and illustrates USEPA’s current policy via a case study of the Portland, Oregon Harbor Superfund site. Additional materials, including R code, data, and documentation of correspondence are available in the online supplement. Journal: Statistics and Public Policy Pages: 36-43 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2018.1564639 File-URL: http://hdl.handle.net/10.1080/2330443X.2018.1564639 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:36-43 Template-Type: ReDIF-Article 1.0 Author-Name: Wendy K. Tam Cho Author-X-Name-First: Wendy K. Author-X-Name-Last: Tam Cho Author-Name: Simon Rubinstein-Salzedo Author-X-Name-First: Simon Author-X-Name-Last: Rubinstein-Salzedo Title: Understanding Significance Tests From a Non-Mixing Markov Chain for Partisan Gerrymandering Claims Abstract: Recently, Chikina, Frieze, and Pegden proposed a way to assess significance in a Markov chain without requiring that Markov chain to mix. They presented their theorem as a rigorous test for partisan gerrymandering. We clarify that their ε-outlier test is distinct from a traditional global outlier test and does not indicate, as they imply, that a particular electoral map is associated with an extreme level of “partisan unfairness.” In fact, a map could simultaneously be an ε-outlier and have a typical partisan fairness value. That is, their test identifies local outliers but has no power for assessing whether that local outlier is a global outlier. How their specific definition of local outlier is related to a legal gerrymandering claim is unclear given Supreme Court precedent. Journal: Statistics and Public Policy Pages: 44-49 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1574687 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1574687 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:44-49 Template-Type: ReDIF-Article 1.0 Author-Name: Maria Chikina Author-X-Name-First: Maria Author-X-Name-Last: Chikina Author-Name: Alan Frieze Author-X-Name-First: Alan Author-X-Name-Last: Frieze Author-Name: Wesley Pegden Author-X-Name-First: Wesley Author-X-Name-Last: Pegden Title: Understanding Our Markov Chain Significance Test: A Reply to Cho and Rubinstein-Salzedo Abstract: The article of Cho and Rubinstein-Salzedo seeks to cast doubt on our previous paper, which described a rigorous statistical test which can be applied to reversible Markov chains. In particular, Cho and Rubinstein-Salzedo seem to suggest that the test we describe might not be a reliable indicator of gerrymandering, when the test is applied to certain redistricting Markov chains. However, the examples constructed by Cho and Rubinstein-Salzedo in fact demonstrate a different point: that our test is not the same as another class of gerrymandering tests, which Cho and Rubinstein-Salzedo prefer. But we agree and emphasized this very distinction in our original paper. 
In this reply, we respond to the criticisms of Cho and Rubinstein-Salzedo, and discuss, more generally, the advantages of the various tests available in the context of detecting gerrymandering of political districtings. Journal: Statistics and Public Policy Pages: 50-53 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1615396 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1615396 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:50-53 Template-Type: ReDIF-Article 1.0 Author-Name: Wendy K. Tam Cho Author-X-Name-First: Wendy K. Author-X-Name-Last: Tam Cho Author-Name: Simon Rubinstein-Salzedo Author-X-Name-First: Simon Author-X-Name-Last: Rubinstein-Salzedo Title: Rejoinder to “Understanding our Markov Chain Significance Test” Journal: Statistics and Public Policy Pages: 54-54 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1619427 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1619427 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:54-54 Template-Type: ReDIF-Article 1.0 Author-Name: James Normington Author-X-Name-First: James Author-X-Name-Last: Normington Author-Name: Eric Lock Author-X-Name-First: Eric Author-X-Name-Last: Lock Author-Name: Caroline Carlin Author-X-Name-First: Caroline Author-X-Name-Last: Carlin Author-Name: Kevin Peterson Author-X-Name-First: Kevin Author-X-Name-Last: Peterson Author-Name: Bradley Carlin Author-X-Name-First: Bradley Author-X-Name-Last: Carlin Title: A Bayesian Difference-in-Difference Framework for the Impact of Primary Care Redesign on Diabetes Outcomes Abstract: Although national measures of the quality of diabetes care delivery demonstrate improvement, progress has been slow. In 2008, the Minnesota legislature endorsed the patient-centered medical home (PCMH) as the preferred model for primary care redesign. In this work, we investigate the effect of PCMH-related clinic redesign and resources on diabetes outcomes from 2008 to 2012 among Minnesota clinics certified as PCMHs by 2011 by using a Bayesian framework for a continuous difference-in-differences model. Data from the Physician Practice Connections-Research Survey were used to assess a clinic’s maturity in primary care transformation, and diabetes outcomes were obtained from the MN Community Measurement program. These data have several characteristics that must be carefully considered from a modeling perspective, including the inability to match patients over time, the potential for dynamic confounding, and the hierarchical structure of clinics. An ad-hoc analysis suggests a significant effect of PCMH-related clinic redesign and resources on diabetes outcomes; however, this effect is not detected after properly accounting for different sources of variability and confounding. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 55-66 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1626310 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1626310 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:55-66 Template-Type: ReDIF-Article 1.0 Author-Name: Michael D. Collins Author-X-Name-First: Michael D. 
Author-X-Name-Last: Collins Title: Statistics, Probability, and a Failed Conservation Policy Abstract: Many sightings of the Ivory-billed Woodpecker (Campephilus principalis) have been reported during the past several decades, but nobody has managed to obtain the clear photo that is regarded as the standard form of evidence for documenting birds. Despite reports of sightings by teams of ornithologists working independently in Arkansas and Florida, doubts cast on the persistence of this iconic species have impeded the establishment of a meaningful conservation program. An analysis of the expected waiting time for obtaining a photo provides insights into why the policy of insisting upon ideal evidence has failed for this species. Concepts in statistics and probability are used to analyze video footage that was obtained during encounters with birds that were identified in the field as Ivory-billed Woodpeckers. One of the videos shows a series of events that are consistent with that species and are believed to be inconsistent with every other species of the region. Another video shows a large bird in flight with the distinctive wing motion of a large woodpecker. Only two large woodpeckers occur in the region, and the flap rate is about ten standard deviations greater than the mean flap rate of the Pileated Woodpecker (Dryocopus pileatus). Supplemental materials for this article are available online. Journal: Statistics and Public Policy Pages: 67-79 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1637802 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1637802 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:67-79 Template-Type: ReDIF-Article 1.0 Author-Name: Yiwen Tang Author-X-Name-First: Yiwen Author-X-Name-Last: Tang Author-Name: Nicole Dalzell Author-X-Name-First: Nicole Author-X-Name-Last: Dalzell Title: Classifying Hate Speech Using a Two-Layer Model Abstract: Social media and other online sites are being increasingly scrutinized as platforms for cyberbullying and hate speech. Many machine learning algorithms, such as support vector machines, have been adopted to create classification tools to identify and potentially filter patterns of negative speech. While effective for prediction, these methodologies yield models that are difficult to interpret. In addition, many studies focus on classifying comments as either negative or neutral, rather than further separating negative comments into subcategories. To address both of these concerns, we introduce a two-stage model for classifying text. With this model, we illustrate the use of internal lexicons, collections of words generated from a pre-classified training dataset of comments that are specific to several subcategories of negative comments. In the first stage, a machine learning algorithm classifies each comment as negative or neutral, or more generally target or nontarget. The second stage of model building leverages the internal lexicons (called L2CLs) to create features specific to each subcategory. These features, along with others, are then used in a random forest model to classify the comments into the subcategories of interest. We demonstrate our approach using two sets of data. Supplementary materials for this article are available online. 
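A toy sketch of a two-stage classifier in the spirit of the Tang and Dalzell abstract above, assuming made-up comments and made-up lexicons: stage 1 flags target comments with a generic bag-of-words model, and stage 2 feeds lexicon-count features to a random forest to assign subcategories. It is not the authors' L2CL implementation.

```python
# Toy sketch of a two-stage text classifier in the spirit of the Tang and
# Dalzell abstract: stage 1 separates target from nontarget comments; stage 2
# uses small lexicon-count features in a random forest to assign subcategories.
# The comments and lexicons below are invented; this is not the authors' code.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

comments = ["you are awful and stupid", "have a nice day", "go back home loser",
            "great game last night", "nobody wants you here", "thanks for sharing"]
is_target = [1, 0, 1, 0, 1, 0]                                        # stage-1 labels
subcategory = ["insult", None, "exclusion", None, "exclusion", None]  # stage-2 labels

# Stage 1: generic bag-of-words model flags target comments.
vec = TfidfVectorizer()
X1 = vec.fit_transform(comments)
stage1 = LogisticRegression().fit(X1, is_target)

# Stage 2: hypothetical internal lexicons provide subcategory-specific counts.
lexicons = {"insult": {"awful", "stupid", "loser"},
            "exclusion": {"go", "back", "nobody", "here"}}

def lexicon_features(text):
    words = text.lower().split()
    return [sum(w in lex for w in words) for lex in lexicons.values()]

target_idx = [i for i, y in enumerate(is_target) if y == 1]
X2 = np.array([lexicon_features(comments[i]) for i in target_idx])
y2 = [subcategory[i] for i in target_idx]
stage2 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X2, y2)

new = "you are such a loser"
if stage1.predict(vec.transform([new]))[0] == 1:
    print(stage2.predict([lexicon_features(new)])[0])
```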
Journal: Statistics and Public Policy Pages: 80-86 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1660285 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1660285 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:80-86 Template-Type: ReDIF-Article 1.0 Author-Name: Paul Harmon Author-X-Name-First: Paul Author-X-Name-Last: Harmon Author-Name: Sarah McKnight Author-X-Name-First: Sarah Author-X-Name-Last: McKnight Author-Name: Laura Hildreth Author-X-Name-First: Laura Author-X-Name-Last: Hildreth Author-Name: Ian Godwin Author-X-Name-First: Ian Author-X-Name-Last: Godwin Author-Name: Mark Greenwood Author-X-Name-First: Mark Author-X-Name-Last: Greenwood Title: An Alternative to the Carnegie Classifications: Identifying Similar Doctoral Institutions With Structural Equation Models and Clustering Abstract: The Carnegie Classification of Institutions of Higher Education is a commonly used framework for institutional classification that classifies doctoral-granting schools into three groups based on research productivity. Despite its wide use, the Carnegie methodology involves several shortcomings, including a lack of thorough documentation, subjectively placed thresholds between institutions, and a methodology that is not completely reproducible. We describe the methodology of the 2015 and 2018 updates to the classification and propose an alternative method of classification using the same data that relies on structural equation modeling (SEM) of latent factors rather than principal component-based indices of productivity. In contrast to the Carnegie methodology, we use SEM to obtain a single factor score for each school based on latent metrics of research productivity. Classifications are then made using a univariate model-based clustering algorithm as opposed to subjective thresholding, as is done in the Carnegie methodology. Finally, we present a Shiny web application that demonstrates sensitivity of both the Carnegie Classification and SEM-based classification of a selected university and generates a table of peer institutions in line with the stated goals of the Carnegie Classification. Journal: Statistics and Public Policy Pages: 87-97 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1666761 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1666761 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:87-97 Template-Type: ReDIF-Article 1.0 Author-Name: Diane Hu Author-X-Name-First: Diane Author-X-Name-Last: Hu Author-Name: Andrew Cooper Author-X-Name-First: Andrew Author-X-Name-Last: Cooper Author-Name: Neel Desai Author-X-Name-First: Neel Author-X-Name-Last: Desai Author-Name: Sophie Guo Author-X-Name-First: Sophie Author-X-Name-Last: Guo Author-Name: Steven Shi Author-X-Name-First: Steven Author-X-Name-Last: Shi Author-Name: David Banks Author-X-Name-First: David Author-X-Name-Last: Banks Title: Cost-Benefit Analysis of Discretionary Wars Abstract: Policy-makers should perform a cost-benefit analysis before initiating a war. This article describes a methodology for such assessment, and applies it post hoc to five military actions undertaken by the United States between 1950 and 2000 (the Korean War, the Vietnam War, the invasion of Grenada, the invasion of Panama, and the First Gulf War). 
The analysis identifies three broad categories of value: human capital, economic outcomes, and national influence. Different stakeholders (politicians, generals, industry, etc.) may assign different weights to these three categories, so this analysis tabulates each separately, and then, as may sometimes be necessary, monetizes them for unified comparison. Journal: Statistics and Public Policy Pages: 98-106 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1688740 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1688740 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:98-106 Template-Type: ReDIF-Article 1.0 Author-Name: Jonathan Ratner Author-X-Name-First: Jonathan Author-X-Name-Last: Ratner Title: Discretionary Wars, Cost-Benefit Analysis, and the Rashomon Effect: Searching for an Analytical Engine for Avoiding War Journal: Statistics and Public Policy Pages: 107-121 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1688742 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1688742 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:107-121 Template-Type: ReDIF-Article 1.0 Author-Name: David Banks Author-X-Name-First: David Author-X-Name-Last: Banks Title: Response to “Discretionary Wars, Cost-Benefit Analysis, and the Rashomon Effect” Journal: Statistics and Public Policy Pages: 122-123 Issue: 1 Volume: 6 Year: 2019 Month: 1 X-DOI: 10.1080/2330443X.2019.1688741 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1688741 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:6:y:2019:i:1:p:122-123 Template-Type: ReDIF-Article 1.0 Author-Name: Qing Pan Author-X-Name-First: Qing Author-X-Name-Last: Pan Author-Name: Weiwen Miao Author-X-Name-First: Weiwen Author-X-Name-Last: Miao Author-Name: Joseph L. Gastwirth Author-X-Name-First: Joseph L. Author-X-Name-Last: Gastwirth Title: Statistical Procedures for Assessing the Need for an Affirmative Action Plan: A Reanalysis of Shea v. Kerry Abstract: In the 1980s, reports from Congress and the Government Accountability Office (GAO) presented statistical evidence showing that employees in the Foreign Service were overwhelmingly White male, especially in the higher positions. To remedy this historical discrimination, the State Department instituted an affirmative action plan during 1990–1992 that allowed females and race-ethnic minorities to apply directly for mid-level positions. A White male employee claimed that he had been disadvantaged by the plan. The appellate court unanimously held that the manifest statistical imbalance supported the Department’s instituting the plan. One judge identified two statistical issues in the analysis of the data that neither party brought up. This article provides an empirical guideline for sample size and a one-sided Hotelling’s T2 test to answer these problems. First, an approximate rule is developed for the minimum number of expected minority appointments needed for a meaningful statistical analysis of under-representation. To avoid the multiple comparison issue when several protected groups are involved, a modification of Hotelling’s T2 test is developed for testing the null hypothesis of fair representation against a one-sided alternative of under-representation in at least one minority group. 
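For background on the Pan, Miao, and Gastwirth abstract above, the sketch below computes the classical (two-sided) Hotelling's T2 statistic and its F reference distribution on simulated shortfall data; the article's one-sided modification is not reproduced here, and all numbers are invented.

```python
# Background sketch: the classical (two-sided) Hotelling's T-squared test that the
# Pan, Miao, and Gastwirth article modifies into a one-sided test. This is not
# the authors' procedure; the data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical shortfalls (expected minus observed appointment proportions)
# for p = 3 protected groups measured across n = 12 job grades.
p, n = 3, 12
X = rng.normal(loc=[0.02, 0.04, 0.01], scale=0.03, size=(n, p))

mu0 = np.zeros(p)                      # H0: no under-representation in any group
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)
d = xbar - mu0
T2 = n * d @ np.linalg.solve(S, d)     # Hotelling's T-squared statistic

F = (n - p) / (p * (n - 1)) * T2       # exact F transformation under normality
p_value = stats.f.sf(F, p, n - p)
print(f"T2 = {T2:.2f}, F = {F:.2f}, two-sided p-value = {p_value:.4f}")
```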
The test yields p-values less than 1 in 10,000 indicating that minorities were substantially under-represented. Excluding secretarial and clerical jobs led to even larger disparities. Supplemental materials for this article are available online. Journal: Statistics and Public Policy Pages: 1-8 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2019.1693313 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1693313 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:1-8 Template-Type: ReDIF-Article 1.0 Author-Name: Lucas Mentch Author-X-Name-First: Lucas Author-X-Name-Last: Mentch Title: On Racial Disparities in Recent Fatal Police Shootings Abstract: Fatal police shootings in the United States continue to be a polarizing social and political issue. Clear disagreement between racial proportions of victims and nationwide racial demographics together with graphic video footage has created fertile ground for controversy. However, simple population level summary statistics fail to take into account fundamental local characteristics such as county-level racial demography, local arrest demography, and law enforcement density. Using data on fatal police shootings between January 2015 and July 2016, I implement a number of straightforward resampling procedures designed to carefully examine how unlikely the victim totals from each race are with respect to these local population characteristics if no racial bias were present in the decision to shoot by police. I present several approaches considering the shooting locations both as fixed and also as a random sample. In both cases, I find overwhelming evidence of a racial disparity in shooting victims with respect to local population demographics but substantially less disparity after accounting for local arrest demographics. I conclude the analyses by examining the effect of police-worn body cameras and find no evidence that the presence of such cameras impacts the racial distribution of victims. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 9-18 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2019.1704330 File-URL: http://hdl.handle.net/10.1080/2330443X.2019.1704330 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:9-18 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Carter Author-X-Name-First: Daniel Author-X-Name-Last: Carter Author-Name: Zach Hunter Author-X-Name-First: Zach Author-X-Name-Last: Hunter Author-Name: Dan Teague Author-X-Name-First: Dan Author-X-Name-Last: Teague Author-Name: Gregory Herschlag Author-X-Name-First: Gregory Author-X-Name-Last: Herschlag Author-Name: Jonathan Mattingly Author-X-Name-First: Jonathan Author-X-Name-Last: Mattingly Title: Optimal Legislative County Clustering in North Carolina Abstract: North Carolina’s constitution requires that state legislative districts should not split counties. However, counties must be split to comply with the “one person, one vote” mandate of the U.S. Supreme Court. Given that counties must be split, the North Carolina legislature and the courts have provided guidelines that seek to reduce counties split across districts while also complying with the “one person, one vote” criterion. 
Under these guidelines, the counties are separated into clusters; each cluster contains a specified number of districts that are drawn independently of those in other clusters. The primary goal of this work is to develop, present, and publicly release an algorithm to optimally cluster counties according to the guidelines set by the court in 2015. We use this tool to investigate the optimality and uniqueness of the enacted clusters under the 2017 redistricting process. We verify that the enacted clusters are optimal, but find other optimal choices. We emphasize that the tool we provide lists all possible optimal county clusterings. We also explore the stability of clustering under changing statewide populations and project what the county clusters may look like in the next redistricting cycle beginning in 2020/2021. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 19-29 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1748552 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1748552 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:19-29 Template-Type: ReDIF-Article 1.0 Author-Name: Gregory Herschlag Author-X-Name-First: Gregory Author-X-Name-Last: Herschlag Author-Name: Han Sung Kang Author-X-Name-First: Han Sung Author-X-Name-Last: Kang Author-Name: Justin Luo Author-X-Name-First: Justin Author-X-Name-Last: Luo Author-Name: Christy Vaughn Graves Author-X-Name-First: Christy Vaughn Author-X-Name-Last: Graves Author-Name: Sachet Bangia Author-X-Name-First: Sachet Author-X-Name-Last: Bangia Author-Name: Robert Ravier Author-X-Name-First: Robert Author-X-Name-Last: Ravier Author-Name: Jonathan C. Mattingly Author-X-Name-First: Jonathan C. Author-X-Name-Last: Mattingly Title: Quantifying Gerrymandering in North Carolina Abstract: By comparing a specific redistricting plan to an ensemble of plans, we evaluate whether the plan translates individual votes to election outcomes in an unbiased fashion. Explicitly, we evaluate if a given redistricting plan exhibits extreme statistical properties compared to an ensemble of nonpartisan plans satisfying all legal criteria. Thus, we capture how unbiased redistricting plans interpret individual votes via a state’s geo-political landscape. We generate the ensemble of plans through a Markov chain Monte Carlo algorithm coupled with simulated annealing based on a reference distribution that does not include partisan criteria. Using the ensemble and historical voting data, we create a null hypothesis for various election results, free from partisanship, accounting for the state’s geo-politics. We showcase our methods on two recent congressional districting plans of NC, along with a plan drawn by a bipartisan panel of retired judges. We find the enacted plans are extreme outliers whereas the bipartisan judges’ plan does not give rise to extreme partisan outcomes. Equally important, we illuminate anomalous structures in the plans of interest by developing graphical representations which help identify and understand instances of cracking and packing associated with gerrymandering. These methods were successfully used in recent court cases. Supplementary materials for this article are available online. 
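The ensemble-comparison logic in the Herschlag et al. abstract above can be illustrated schematically: locate an enacted plan's partisan metric within the distribution of that metric over an ensemble of nonpartisan plans. In the sketch below the ensemble is faked with random draws; in the actual analysis it comes from an MCMC sample of legal plans.

```python
# Schematic of the ensemble-comparison idea in the Herschlag et al. abstract:
# place an enacted plan's partisan metric within the distribution of the same
# metric over an ensemble of nonpartisan plans. The ensemble here is a stand-in
# made of random draws; it is not a real MCMC sample of redistricting plans.
import numpy as np

rng = np.random.default_rng(3)
ensemble_seats = rng.binomial(n=13, p=0.5, size=20_000)   # stand-in for ensemble metric values
enacted_seats = 3                                          # hypothetical enacted-plan value

# Rank-based tail probabilities: how often a nonpartisan plan is at least as extreme.
frac_as_low = np.mean(ensemble_seats <= enacted_seats)
frac_as_high = np.mean(ensemble_seats >= enacted_seats)
print(f"ensemble median = {np.median(ensemble_seats):.0f} seats")
print(f"fraction of ensemble <= enacted: {frac_as_low:.4f}")
print(f"fraction of ensemble >= enacted: {frac_as_high:.4f}")
```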
Journal: Statistics and Public Policy Pages: 30-38 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1796400 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1796400 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:30-38 Template-Type: ReDIF-Article 1.0 Author-Name: Sophia Caldera Author-X-Name-First: Sophia Author-X-Name-Last: Caldera Author-Name: Daryl DeFord Author-X-Name-First: Daryl Author-X-Name-Last: DeFord Author-Name: Moon Duchin Author-X-Name-First: Moon Author-X-Name-Last: Duchin Author-Name: Samuel C. Gutekunst Author-X-Name-First: Samuel C. Author-X-Name-Last: Gutekunst Author-Name: Cara Nix Author-X-Name-First: Cara Author-X-Name-Last: Nix Title: Mathematics of Nested Districts: The Case of Alaska Abstract: In eight states, a “nesting rule” requires that each state Senate district be exactly composed of two adjacent state House districts. In this article, we investigate the potential impacts of these nesting rules with a focus on Alaska, where Republicans have a 2/3 majority in the Senate while a Democratic-led coalition controls the House. Treating the current House plan as fixed and considering all possible pairings, we find that the choice of pairings alone can create a swing of 4–5 seats out of 20 against recent voting patterns, which is similar to the range observed when using a Markov chain procedure to generate plans without the nesting constraint. The analysis enables other insights into Alaska districting, including the partisan latitude available to districters with and without strong rules about nesting and contiguity. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 39-51 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1774452 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1774452 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:39-51 Template-Type: ReDIF-Article 1.0 Author-Name: Benjamin Fifield Author-X-Name-First: Benjamin Author-X-Name-Last: Fifield Author-Name: Kosuke Imai Author-X-Name-First: Kosuke Author-X-Name-Last: Imai Author-Name: Jun Kawahara Author-X-Name-First: Jun Author-X-Name-Last: Kawahara Author-Name: Christopher T. Kenny Author-X-Name-First: Christopher T. Author-X-Name-Last: Kenny Title: The Essential Role of Empirical Validation in Legislative Redistricting Simulation Abstract: As granular data about elections and voters become available, redistricting simulation methods are playing an increasingly important role when legislatures adopt redistricting plans and courts determine their legality. These simulation methods are designed to yield a representative sample of all redistricting plans that satisfy statutory guidelines and requirements such as contiguity, population parity, and compactness. A proposed redistricting plan can be considered gerrymandered if it constitutes an outlier relative to this sample according to partisan fairness metrics. Despite their growing use, an insufficient effort has been made to empirically validate the accuracy of the simulation methods. We apply a recently developed computational method that can efficiently enumerate all possible redistricting plans and yield an independent sample from this population. We show that this algorithm scales to a state with a couple of hundred geographical units. 
Finally, we empirically examine how existing simulation methods perform on realistic validation datasets. Journal: Statistics and Public Policy Pages: 52-68 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1791773 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1791773 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:52-68 Template-Type: ReDIF-Article 1.0 Author-Name: Daryl DeFord Author-X-Name-First: Daryl Author-X-Name-Last: DeFord Author-Name: Moon Duchin Author-X-Name-First: Moon Author-X-Name-Last: Duchin Author-Name: Justin Solomon Author-X-Name-First: Justin Author-X-Name-Last: Solomon Title: A Computational Approach to Measuring Vote Elasticity and Competitiveness Abstract: The recent wave of attention to partisan gerrymandering has come with a push to refine or replace the laws that govern political redistricting around the country. A common element in several states’ reform efforts has been the inclusion of competitiveness metrics, or scores that evaluate a districting plan based on the extent to which district-level outcomes are in play or are likely to be closely contested. In this article, we examine several classes of competitiveness metrics motivated by recent reform proposals and then evaluate their potential outcomes across large ensembles of districting plans at the Congressional and state Senate levels. This is part of a growing literature using MCMC techniques from applied statistics to situate plans and criteria in the context of valid redistricting alternatives. Our empirical analysis focuses on five states—Utah, Georgia, Wisconsin, Virginia, and Massachusetts—chosen to represent a range of partisan attributes. We highlight situation-specific difficulties in creating good competitiveness metrics and show that optimizing competitiveness can produce unintended consequences on other partisan metrics. These results demonstrate the importance of (1) avoiding writing detailed metric constraints into long-lasting constitutional reform and (2) carrying out careful mathematical modeling on real geo-electoral data in each redistricting cycle. Journal: Statistics and Public Policy Pages: 69-86 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1777915 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1777915 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:69-86 Template-Type: ReDIF-Article 1.0 Author-Name: Nicholas Eubank Author-X-Name-First: Nicholas Author-X-Name-Last: Eubank Author-Name: Jonathan Rodden Author-X-Name-First: Jonathan Author-X-Name-Last: Rodden Title: Who Is My Neighbor? The Spatial Efficiency of Partisanship Abstract: Relative to its overall statewide support, the Republican Party has been over-represented in congressional delegations and state legislatures over the last decade in a number of US states. A challenge is to determine the extent to which this can be explained by intentional gerrymandering as opposed to an underlying inefficient distribution of Democrats in cities. We explain the “spatial inefficiency” of support for Democrats, and demonstrate that it varies substantially both across states and also across legislative chambers within states. We introduce a simple method for measuring this inefficiency by assessing the partisanship of the nearest neighbors of each voter in each US state. 
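A minimal sketch of a nearest-neighbor partisanship summary of the general kind the Eubank and Rodden abstract describes, assuming simulated voter coordinates and party labels; it is not the authors' exact spatial-efficiency measure.

```python
# Sketch of a nearest-neighbor partisanship summary of the general kind the
# Eubank and Rodden abstract describes: for each voter, the share of their k
# nearest neighbors who support the same party. Coordinates and party labels
# are simulated; this is not the authors' exact spatial-efficiency measure.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
n, k = 5_000, 100
# Democrats clustered near a city center, Republicans more dispersed.
dem_xy = rng.normal(loc=0.0, scale=0.5, size=(n // 2, 2))
rep_xy = rng.uniform(low=-3.0, high=3.0, size=(n // 2, 2))
xy = np.vstack([dem_xy, rep_xy])
party = np.array([0] * (n // 2) + [1] * (n // 2))   # 0 = Dem, 1 = Rep

tree = cKDTree(xy)
_, idx = tree.query(xy, k=k + 1)          # the first neighbor is the voter itself
neighbors = idx[:, 1:]
same_party_share = (party[neighbors] == party[:, None]).mean(axis=1)

for label, name in [(0, "Democrats"), (1, "Republicans")]:
    print(f"{name}: mean same-party share among {k} nearest neighbors = "
          f"{same_party_share[party == label].mean():.3f}")
```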
Our measure of spatial efficiency helps explain cross-state patterns in legislative representation, and allows us to verify that political geography contributes substantially to inequalities in political representation. At the same time, however, we also show that even after controlling for spatial efficiency, partisan control of the redistricting process has had a substantial impact on the parties’ seat shares. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 87-100 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1806762 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1806762 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:87-100 Template-Type: ReDIF-Article 1.0 Author-Name: Maria Chikina Author-X-Name-First: Maria Author-X-Name-Last: Chikina Author-Name: Alan Frieze Author-X-Name-First: Alan Author-X-Name-Last: Frieze Author-Name: Jonathan C. Mattingly Author-X-Name-First: Jonathan C. Author-X-Name-Last: Mattingly Author-Name: Wesley Pegden Author-X-Name-First: Wesley Author-X-Name-Last: Pegden Title: Separating Effect From Significance in Markov Chain Tests Abstract: We give qualitative and quantitative improvements to theorems which enable significance testing in Markov chains, with a particular eye toward the goal of enabling strong, interpretable, and statistically rigorous claims of political gerrymandering. Our results can be used to demonstrate at a desired significance level that a given Markov chain state (e.g., a districting) is extremely unusual (rather than just atypical) with respect to the fragility of its characteristics in the chain. We also provide theorems specialized to leverage quantitative improvements when there is a product structure in the underlying probability space, as can occur due to geographical constraints on districtings. Journal: Statistics and Public Policy Pages: 101-114 Issue: 1 Volume: 7 Year: 2020 Month: 1 X-DOI: 10.1080/2330443X.2020.1806763 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1806763 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:7:y:2020:i:1:p:101-114 Template-Type: ReDIF-Article 1.0 Author-Name: Katy Klauenberg Author-X-Name-First: Katy Author-X-Name-Last: Klauenberg Author-Name: Cord A. Müller Author-X-Name-First: Cord A. Author-X-Name-Last: Müller Author-Name: Clemens Elster Author-X-Name-First: Clemens Author-X-Name-Last: Elster Title: Hypothesis-based Acceptance Sampling for Modules F and F1 of the European Measuring Instruments Directive Abstract: Millions of measuring instruments are verified each year before being placed on the markets worldwide. In the EU, such initial conformity assessments are regulated by the Measuring Instruments Directive (MID). The MID modules F and F1 on product verification allow for statistical acceptance sampling, whereby only random subsets of instruments need to be inspected. This article re-interprets the acceptance sampling conditions formulated by the MID. The new interpretation is contrasted with the one advanced in WELMEC guide 8.10, and three advantages have become apparent. First, an economic advantage of the new interpretation is a producers’ risk bounded from above, such that measuring instruments with sufficient quality are accepted with a guaranteed probability of no less than 95%. 
Second, a conceptual advantage is that the new MID interpretation fits into the well known, formal framework of statistical hypothesis testing. Thirdly, the new interpretation applies unambiguously to finite-sized lots, even very small ones. We conclude that the new interpretation is to be preferred and suggest re-formulating the statistical sampling conditions in the MID. Re-interpreting the MID conditions implies that currently available sampling plans are either not admissible or not optimal. We derive a new acceptance sampling scheme and recommend its application. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 9-17 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2021.1900762 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.1900762 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:9-17 Template-Type: ReDIF-Article 1.0 Author-Name: Andrew Gelman Author-X-Name-First: Andrew Author-X-Name-Last: Gelman Title: Failure and Success in Political Polling and Election Forecasting Abstract: The recent successes and failures of political polling invite several questions: Why did the polls get it wrong in some high-profile races? Conversely, how is it that polls can perform so well, even given all the evident challenges of conducting and interpreting them? Journal: Statistics and Public Policy Pages: 67-72 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2021.1971126 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.1971126 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:67-72 Template-Type: ReDIF-Article 1.0 Author-Name: Ruoqi Yu Author-X-Name-First: Ruoqi Author-X-Name-Last: Yu Author-Name: Dylan S. Small Author-X-Name-First: Dylan S. Author-X-Name-Last: Small Author-Name: David Harding Author-X-Name-First: David Author-X-Name-Last: Harding Author-Name: José Aveldanes Author-X-Name-First: José Author-X-Name-Last: Aveldanes Author-Name: Paul R. Rosenbaum Author-X-Name-First: Paul R. Author-X-Name-Last: Rosenbaum Title: Optimal Matching for Observational Studies That Integrate Quantitative and Qualitative Research Abstract: A quantitative study of treatment effects may form many matched pairs of a treated subject and an untreated control who look similar in terms of covariates measured prior to treatment. When treatments are not randomly assigned, one inevitable concern is that individuals who look similar in measured covariates may be dissimilar in unmeasured covariates. Another concern is that quantitative measures may be misinterpreted by investigators in the absence of context that is not recorded in quantitative data. When text information is automatically coded to form quantitative measures, examination of the narrative context can reveal the limitations of initial coding efforts. An existing proposal entails a narrative description of a subset of matched pairs, hoping in a subset of pairs to observe quite a bit more of what was not quantitatively measured or automatically encoded. A subset of pairs cannot rule out subtle biases that materially affect analyses of many pairs, but perhaps a subset of pairs can inform discussion of such biases, perhaps leading to a reinterpretation of quantitative data, or perhaps raising new considerations and perspectives. 
The large literature on qualitative research contends that open-ended, narrative descriptions of a subset of people can be informative. Here, we discuss and apply a form of optimal matching that supports such an integrated, quantitative-plus-qualitative study. The optimal match provides many closely matched pairs plus a subset of exceptionally close pairs suitable for narrative interpretation. We illustrate the matching technique using data from a recent study of police responses to domestic violence in Philadelphia, where the police report includes both quantitative and narrative information. Journal: Statistics and Public Policy Pages: 42-52 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2021.1919260 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.1919260 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:42-52 Template-Type: ReDIF-Article 1.0 Author-Name: James Alan Fox Author-X-Name-First: James Alan Author-X-Name-Last: Fox Author-Name: Nathan E. Sanders Author-X-Name-First: Nathan E. Author-X-Name-Last: Sanders Author-Name: Emma E. Fridel Author-X-Name-First: Emma E. Author-X-Name-Last: Fridel Author-Name: Grant Duwe Author-X-Name-First: Grant Author-X-Name-Last: Duwe Author-Name: Michael Rocque Author-X-Name-First: Michael Author-X-Name-Last: Rocque Title: The Contagion of Mass Shootings: The Interdependence of Large-Scale Massacres and Mass Media Coverage Abstract: Mass public shootings have generated significant levels of fear in recent years, with many observers criticizing the media for fostering a moral panic, if not an actual rise in the frequency of such attacks. Scholarly research suggests that the media can potentially impact the prevalence of mass shootings in two respects: (i) some individuals may be inspired to mimic the actions of highly publicized offenders; and (ii) a more general contagion process may manifest as a temporary increase in the likelihood of shootings associated with a triggering event. In this study of mass shootings since 2000, we focus on short-term contagion, rather than imitation that can traverse years. Specifically, after highlighting the sequencing of news coverage prior and subsequent to mass shootings, we apply multivariate point process models to disentangle the correlated incidence of mass public shootings and news coverage of such events. The findings suggest that mass public shootings have a strong effect on the level of news reporting, but that news reporting on the topic has little impact, at least in the relatively short term, on the subsequent prevalence of mass shootings. Finally, the results appear to rule out the presence of strong self-excitation of mass shootings, placing clear limits on generalized short-term contagion effects. Supplementary files for this article are available online. Journal: Statistics and Public Policy Pages: 53-66 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2021.1932645 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.1932645 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:53-66 Template-Type: ReDIF-Article 1.0 Author-Name: George Mohler Author-X-Name-First: George Author-X-Name-Last: Mohler Author-Name: Martin B. Short Author-X-Name-First: Martin B. 
Author-X-Name-Last: Short Author-Name: Frederic Schoenberg Author-X-Name-First: Frederic Author-X-Name-Last: Schoenberg Author-Name: Daniel Sledge Author-X-Name-First: Daniel Author-X-Name-Last: Sledge Title: Analyzing the Impacts of Public Policy on COVID-19 Transmission: A Case Study of the Role of Model and Dataset Selection Using Data from Indiana Abstract: Dynamic estimation of the reproduction number of COVID-19 is important for assessing the impact of public health measures on virus transmission. State and local decisions about whether to relax or strengthen mitigation measures are being made in part based on whether the reproduction number, Rt, falls below the self-sustaining value of 1. Employing branching point process models and COVID-19 data from Indiana as a case study, we show that estimates of the current value of Rt, and whether it is above or below 1, depend critically on choices about data selection and model specification and estimation. In particular, we find a range of Rt values from 0.47 to 1.20 as we vary the type of estimator and input dataset. We present methods for model comparison and evaluation and then discuss the policy implications of our findings. Journal: Statistics and Public Policy Pages: 1-8 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2020.1859030 File-URL: http://hdl.handle.net/10.1080/2330443X.2020.1859030 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:1-8 Template-Type: ReDIF-Article 1.0 Author-Name: Jonathan Auerbach Author-X-Name-First: Jonathan Author-X-Name-Last: Auerbach Author-Name: Steve Pierson Author-X-Name-First: Steve Author-X-Name-Last: Pierson Title: Does Voting by Mail Increase Fraud? Estimating the Change in Reported Voter Fraud When States Switch to Elections By Mail Abstract: We estimate the change in the reported number of voter fraud cases when states switch to conducting elections by mail. We consider two types of states in which voting is facilitated by mail: states where a large number of voters receive ballots by mail (receive-by-mail states, RBM) and a subset of these states where registered voters are automatically sent ballots by mail (vote-by-mail states, VBM). We then compare the number of voter fraud cases in RBM (VBM) states to the number of cases in non-RBM (non-VBM) states, using two approaches standard in the social sciences. We find no evidence that voting by mail increases the risk of voter fraud overall. Between 2016 and 2019, RBM (VBM) states reported similar fraud rates to non-RBM (non-VBM) states. Moreover, we estimate Washington would have reported 73 more cases of fraud between 2011 and 2019 had it not introduced its VBM law. While our analysis of the data considers only two of many possible approaches, we argue our findings are unlikely were fraud more common when elections are held by mail. Journal: Statistics and Public Policy Pages: 18-41 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2021.1906806 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.1906806 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:18-41 Template-Type: ReDIF-Article 1.0 Author-Name: Greg Ridgeway Author-X-Name-First: Greg Author-X-Name-Last: Ridgeway Author-Name: James L. Rosenberger Author-X-Name-First: James L. 
Author-X-Name-Last: Rosenberger Author-Name: Lingzhou Xue Author-X-Name-First: Lingzhou Author-X-Name-Last: Xue Title: Statisticians Engage in Gun Violence Research Abstract: Government reports document more than 14,000 homicides and more than 195,000 aggravated assaults with firearms in 2017. In addition, there were 346 mass shootings, with 4 or more victims, including over 2000 people shot. These statistics do not include suicides (two-thirds of gun deaths) or accidents (5% of gun deaths). This article describes statistical issues discussed at a national forum to stimulate collaboration between statisticians and criminologists. Topics include: (i) available data sources and their shortcomings and efforts to improve the quality, and alternative new data registers of shootings; (ii) gun violence patterns and trends, with statistical models and clustering effects in urban areas; (iii) research for understanding effective strategies for gun violence prevention and the role of the police in solving gun homicides; (iv) the role of reliable forensic science in solving cases involving shootings; and (v) the topic of police shootings, where they are more prevalent and the characteristics of the officers involved. The final section calls the statistical community to engage in collaborations with social scientists to provide the most effective methodological tools for understanding and mitigating the societal problem of gun violence. Journal: Statistics and Public Policy Pages: 73-79 Issue: 1 Volume: 8 Year: 2021 Month: 1 X-DOI: 10.1080/2330443X.2021.1978354 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.1978354 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:8:y:2021:i:1:p:73-79 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2038744_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Lauren Ice Author-X-Name-First: Lauren Author-X-Name-Last: Ice Author-Name: James Scouras Author-X-Name-First: James Author-X-Name-Last: Scouras Author-Name: Edward Toton Author-X-Name-First: Edward Author-X-Name-Last: Toton Title: Wartime Fatalities in the Nuclear Era Abstract: Senior leaders in the U.S. Department of Defense, as well as nuclear strategists and academics, have argued that the advent of nuclear weapons is associated with a dramatic decrease in wartime fatalities. This assessment is often supported by an evolving series of figures that show a marked drop in wartime fatalities as a percentage of world population after 1945 to levels well below those of the prior centuries. The goal of this article is not to ascertain whether nuclear weapons are associated with or have led to a decrease in wartime fatalities, but rather to critique the supporting statistical evidence. We assess these wartime fatality figures and find that they are both irreproducible and misleading. We perform a more rigorous and traceable analysis and discover that post-1945 wartime fatalities as a percentage of world population are consistent with those of many other historical periods. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 49-57 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2038744 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2038744 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:49-57 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2050327_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Weiwen Miao Author-X-Name-First: Weiwen Author-X-Name-Last: Miao Author-Name: Qing Pan Author-X-Name-First: Qing Author-X-Name-Last: Pan Author-Name: Joseph L. Gastwirth Author-X-Name-First: Joseph L. Author-X-Name-Last: Gastwirth Title: A Misuse of Statistical Reasoning: The Statistical Arguments Offered by Texas to the Supreme Court in an Attempt to Overturn the Results of the 2020 Election Abstract: In December 2020, Texas filed a motion to the U.S. Supreme Court claiming that the four battleground states: Pennsylvania, Georgia, Michigan, and Wisconsin did not conduct their 2020 presidential elections in compliance with the Constitution. Texas supported its motion with a statistical analysis purportedly demonstrating that it was highly improbable that Biden had more votes than Trump in the four battleground states. This article points out that Texas’s claim is logically flawed and the analysis submitted violated several fundamental principles of statistics. Journal: Statistics and Public Policy Pages: 67-73 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2050327 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2050327 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:67-73 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2050328_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Akisato Suzuki Author-X-Name-First: Akisato Author-X-Name-Last: Suzuki Title: Policy Implications of Statistical Estimates: A General Bayesian Decision-Theoretic Model for Binary Outcomes Abstract: How should we evaluate the effect of a policy on the likelihood of an undesirable event, such as conflict? The significance test has three limitations. First, relying on statistical significance misses the fact that uncertainty is a continuous scale. Second, focusing on a standard point estimate overlooks the variation in plausible effect sizes. Third, the criterion of substantive significance is rarely explained or justified. A new Bayesian decision-theoretic model, “causal binary loss function model,” overcomes these issues. It compares the expected loss under a policy intervention with the one under no intervention. These losses are computed based on a particular range of the effect sizes of a policy, the probability mass of this effect size range, the cost of the policy, and the cost of the undesirable event the policy intends to address. The model is more applicable than common statistical decision-theoretic models using the standard loss functions or capturing costs in terms of false positives and false negatives. I exemplify the model’s use through three applications and provide an R package. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 85-96 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2050328 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2050328 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:85-96 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2050326_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Neil Hwang Author-X-Name-First: Neil Author-X-Name-Last: Hwang Author-Name: Shirshendu Chatterjee Author-X-Name-First: Shirshendu Author-X-Name-Last: Chatterjee Author-Name: Yanming Di Author-X-Name-First: Yanming Author-X-Name-Last: Di Author-Name: Sharmodeep Bhattacharyya Author-X-Name-First: Sharmodeep Author-X-Name-Last: Bhattacharyya Title: Observational Study of the Effect of the Juvenile Stay-At-Home Order on SARS-CoV-2 Infection Spread in Saline County, Arkansas Abstract: We assess the treatment effect of juvenile stay-at-home orders (JSAHO) on reducing the rate of SARS-CoV-2 infection spread in Saline County (“Saline”), Arkansas, by examining the difference between Saline’s and control Arkansas counties’ changes in daily and mean log infection rates of pretreatment (March 28–April 5, 2020) and treatment periods (April 6–May 6, 2020). A synthetic control county is constructed based on the parallel-trends assumption, least-squares fitting on pretreatment and socio-demographic covariates, and elastic-net-based methods, from which the counterfactual outcome is predicted and the treatment effect is estimated using the difference-in-differences, the synthetic control, and the changes-in-changes methodologies. Both the daily and average treatment effects of JSAHO are shown to be significant. Despite its narrow scope and lack of enforcement for compliance, JSAHO reduced the rate of infection spread in Saline. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 74-84 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2050326 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2050326 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:74-84 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2120137_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Alan H. Dorfman Author-X-Name-First: Alan H. Author-X-Name-Last: Dorfman Author-Name: Richard Valliant Author-X-Name-First: Richard Author-X-Name-Last: Valliant Title: A Re-Analysis of Repeatability and Reproducibility in the Ames-USDOE-FBI Study Abstract: Forensic firearms identification, the determination by a trained firearms examiner as to whether or not bullets or cartridges came from a common weapon, has long been a mainstay in the criminal courts. Reliability of forensic firearms identification has been challenged in the general scientific community, and, in response, several studies have been carried out aimed at showing that firearms examination is accurate, that is, has low error rates. Less studied has been the question of consistency, of whether two examinations of the same bullets or cartridge cases come to the same conclusion, carried out by an examiner on separate occasions—intrarater reliability or repeatability—or by two examiners—interrater reliability or reproducibility. One important study, described in a 2020 Report by the Ames Laboratory-USDOE to the Federal Bureau of Investigation, went beyond considerations of accuracy to investigate firearms examination repeatability and reproducibility. The Report’s conclusions were paradoxical.
The observed agreement of examiners with themselves or with other examiners appears mediocre. However, the study concluded repeatability and reproducibility are satisfactory, on grounds that the observed agreement exceeds a quantity called the expected agreement. We find that appropriately employing expected agreement as it was intended does not suggest satisfactory repeatability and reproducibility, but the opposite. Journal: Statistics and Public Policy Pages: 175-184 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2120137 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2120137 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:175-184 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2016083_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Benjamin J. Lobo Author-X-Name-First: Benjamin J. Author-X-Name-Last: Lobo Author-Name: Denise E. Bonds Author-X-Name-First: Denise E. Author-X-Name-Last: Bonds Author-Name: Karen Kafadar Author-X-Name-First: Karen Author-X-Name-Last: Kafadar Title: Estimating Local Prevalence of Obesity Via Survey Under Cost Constraints: Stratifying ZCTAs in Virginia’s Thomas Jefferson Health District Abstract: Currently, the most reliable estimate of the prevalence of obesity in Virginia’s Thomas Jefferson Health District (TJHD) comes from an annual telephone survey conducted by the Centers for Disease Control and Prevention. This district-wide estimate has limited use to decision makers who must target health interventions at a more granular level. A survey is one way of obtaining more granular estimates. This article describes the process of stratifying targeted geographic units (here, ZIP Code Tabulation Areas, or ZCTAs) prior to conducting the survey for those situations where cost considerations make it infeasible to sample each geographic unit (here, ZCTA) in the region (here, TJHD). Feature selection, allocation factor analysis, and hierarchical clustering were used to stratify ZCTAs. We describe the survey sampling strategy that we developed, by creating strata of ZCTAs; the data analysis using the R survey package; and the results. The resulting maps of obesity prevalence show stark differences in prevalence depending on the area of the health district, highlighting the importance of assessing health outcomes at a granular level. Our approach is a detailed and reproducible set of steps that can be used by others who face similar scenarios. Supplementary files for this article are available online. Journal: Statistics and Public Policy Pages: 8-19 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2021.2016083 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.2016083 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:8-19 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2105770_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Annika King Author-X-Name-First: Annika Author-X-Name-Last: King Author-Name: Jacob Murri Author-X-Name-First: Jacob Author-X-Name-Last: Murri Author-Name: Jake Callahan Author-X-Name-First: Jake Author-X-Name-Last: Callahan Author-Name: Adrienne Russell Author-X-Name-First: Adrienne Author-X-Name-Last: Russell Author-Name: Tyler J. Jarvis Author-X-Name-First: Tyler J. 
Author-X-Name-Last: Jarvis Title: Mathematical Analysis of Redistricting in Utah Abstract: We discuss difficulties of evaluating partisan gerrymandering in Utah’s congressional districts and the failure of many common metrics there. We explain why the Republican vote share in the least-Republican district (LRVS) is a good indicator of the advantage or disadvantage each party has in the Utah congressional districts. Although the LRVS only makes sense in settings with at most one competitive district, in that setting it directly captures the extent to which a given redistricting plan gives advantage or disadvantage to the Republican and Democratic parties. We use the LRVS to evaluate the most common measures of partisan gerrymandering in the context of Utah’s 2011 congressional districts. We do this by generating large ensembles of alternative redistricting plans using Markov chain Monte Carlo methods. We also discuss the implications of this new metric and our results on the question of whether the 2011 Utah congressional plan was gerrymandered. Journal: Statistics and Public Policy Pages: 136-148 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2105770 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2105770 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:136-148 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2086191_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: David Puelz Author-X-Name-First: David Author-X-Name-Last: Puelz Author-Name: Robert Puelz Author-X-Name-First: Robert Author-X-Name-Last: Puelz Title: Financial Literacy and Perceived Economic Outcomes Abstract: We explore the relationship between financial literacy and self-reported, reflective economic outcomes from respondents using survey data from the United States. Our dataset includes a large number of covariates from the National Financial Capability Study (NFCS), widely used by literacy researchers, and we use a new econometric technique developed by Hahn et al., designed specifically for causal inference from observational data, to test whether changes in financial literacy imply meaningful changes in self-perceived economic outcomes. We find a negative treatment parameter on financial literacy consistent with the recent work of Netemeyer et al. and contrary to the presumption in many empirical studies that associate standard financial outcome measures with financial literacy. We conclude with a discussion of heterogeneity of the financial literacy treatment effect on household income, gender, and education level sub-populations. Our findings on the relationship between financial literacy and reflective economic outcomes also raise questions about its importance to an individual’s financial well-being. Journal: Statistics and Public Policy Pages: 122-135 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2086191 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2086191 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:122-135 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2071369_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Mikaela Meyer Author-X-Name-First: Mikaela Author-X-Name-Last: Meyer Author-Name: Ahmed Hassafy Author-X-Name-First: Ahmed Author-X-Name-Last: Hassafy Author-Name: Gina Lewis Author-X-Name-First: Gina Author-X-Name-Last: Lewis Author-Name: Prasun Shrestha Author-X-Name-First: Prasun Author-X-Name-Last: Shrestha Author-Name: Amelia M. Haviland Author-X-Name-First: Amelia M. Author-X-Name-Last: Haviland Author-Name: Daniel S. Nagin Author-X-Name-First: Daniel S. Author-X-Name-Last: Nagin Title: Changes in Crime Rates during the COVID-19 Pandemic Abstract: We estimate changes in the rates of five FBI Part 1 crimes during the 2020 spring COVID-19 pandemic lockdown period and the period after the killing of George Floyd through December 2020. We use weekly crime rate data from 28 of the 70 largest cities in the United States from January 2018 to December 2020. Homicide rates were higher throughout 2020, including during early 2020 prior to March lockdowns. Auto thefts increased significantly during the summer and remainder of 2020. In contrast, robbery and larceny significantly declined during all three post-pandemic periods. Point estimates of burglary rates pointed to a decline for all four periods of 2020, but only the pre-pandemic period was statistically significant. We construct a city-level openness index to examine whether the degree of openness just prior to and during the lockdowns was associated with changing crime rates. Larceny and robbery rates both had a positive and significant association with the openness index implying lockdown restrictions reduced offense rates whereas the other three crime types had no detectable association. While opportunity theory is a tempting post hoc explanation of some of these findings, no single crime theory provides a plausible explanation of all the results. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Pages: 97-109 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2071369 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2071369 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:97-109 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2019152_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Andrew Gelman Author-X-Name-First: Andrew Author-X-Name-Last: Gelman Author-Name: Shira Mitchell Author-X-Name-First: Shira Author-X-Name-Last: Mitchell Author-Name: Jeffrey Sachs Author-X-Name-First: Jeffrey Author-X-Name-Last: Sachs Author-Name: Sonia Sachs Author-X-Name-First: Sonia Author-X-Name-Last: Sachs Title: Reconciling Evaluations of the Millennium Villages Project Abstract: The Millennium Villages Project was an integrated rural development program carried out for a decade in 10 clusters of villages in sub-Saharan Africa starting in 2005, and in a few other sites for shorter durations. An evaluation of the 10 main sites compared to retrospectively chosen control sites estimated positive effects on a range of economic, social, and health outcomes (Mitchell et al. 2018). 
More recently, an outside group performed a prospective controlled (but also nonrandomized) evaluation of one of the shorter-duration sites and reported smaller or null results (Masset et al. 2020). Although these two conclusions seem contradictory, the differences can be explained by the fact that Mitchell et al. studied 10 sites where the project was implemented for 10 years, and Masset et al. studied one site with a program lasting less than 5 years, as well as differences in inference and framing. Insights from both evaluations should be valuable in considering future development efforts of this sort. Both studies are consistent with a larger picture of positive average impacts (compared to untreated villages) across a broad range of outcomes, but with effects varying across sites or requiring an adequate duration for impacts to be manifested. Journal: Statistics and Public Policy Pages: 1-7 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2021.2019152 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.2019152 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:1-7 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2086190_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Rachel Heyard Author-X-Name-First: Rachel Author-X-Name-Last: Heyard Author-Name: Manuela Ott Author-X-Name-First: Manuela Author-X-Name-Last: Ott Author-Name: Georgia Salanti Author-X-Name-First: Georgia Author-X-Name-Last: Salanti Author-Name: Matthias Egger Author-X-Name-First: Matthias Author-X-Name-Last: Egger Title: Rethinking the Funding Line at the Swiss National Science Foundation: Bayesian Ranking and Lottery Abstract: Funding agencies rely on peer review and expert panels to select the research deserving funding. Peer review has limitations, including bias against risky proposals or interdisciplinary research. The inter-rater reliability between reviewers and panels is low, particularly for proposals near the funding line. Funding agencies are also increasingly acknowledging the role of chance. The Swiss National Science Foundation (SNSF) introduced a lottery for proposals in the middle group of good but not excellent proposals. In this article, we introduce a Bayesian hierarchical model for the evaluation process. To rank the proposals, we estimate their expected ranks (ER), which incorporates both the magnitude and uncertainty of the estimated differences between proposals. A provisional funding line is defined based on ER and budget. The ER and its credible interval are used to identify proposals with similar quality and credible intervals that overlap with the provisional funding line. These proposals are entered into a lottery. We illustrate the approach for two SNSF grant schemes in career and project funding. We argue that the method could reduce bias in the evaluation process. R code, data and other materials for this article are available online. Journal: Statistics and Public Policy Pages: 110-121 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2086190 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2086190 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:110-121 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2120136_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Willem M. Van Der Wal Author-X-Name-First: Willem M. Author-X-Name-Last: Van Der Wal Title: Marginal Structural Models to Estimate Causal Effects of Right-to-Carry Laws on Crime Abstract: Right-to-carry (RTC) laws allow the legal carrying of concealed firearms for defense in certain states in the United States. I used modern causal inference methodology from epidemiology to examine the effect of RTC laws on crime over a period from 1959 up to 2016. I fitted marginal structural models (MSMs), using inverse probability weighting (IPW) to correct for criminological, economic, political and demographic confounders. Results indicate that RTC laws significantly increase violent crime by 7.5% and property crime by 6.1%. RTC laws significantly increase murder and manslaughter, robbery, aggravated assault, burglary, larceny theft and motor vehicle theft rates. Applying this method to this topic for the first time addresses methodological shortcomings in previous studies, such as conditioning away the effect, overfitting, and the inappropriate use of county-level measurements. Data and analysis code for this article are available online. Journal: Statistics and Public Policy Pages: 163-174 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2120136 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2120136 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:163-174 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2024778_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Johann Gaebler Author-X-Name-First: Johann Author-X-Name-Last: Gaebler Author-Name: William Cai Author-X-Name-First: William Author-X-Name-Last: Cai Author-Name: Guillaume Basse Author-X-Name-First: Guillaume Author-X-Name-Last: Basse Author-Name: Ravi Shroff Author-X-Name-First: Ravi Author-X-Name-Last: Shroff Author-Name: Sharad Goel Author-X-Name-First: Sharad Author-X-Name-Last: Goel Author-Name: Jennifer Hill Author-X-Name-First: Jennifer Author-X-Name-Last: Hill Title: A Causal Framework for Observational Studies of Discrimination Abstract: In studies of discrimination, researchers often seek to estimate a causal effect of race or gender on outcomes. For example, in the criminal justice context, one might ask whether arrested individuals would have been subsequently charged or convicted had they been a different race. It has long been known that such counterfactual questions face measurement challenges related to omitted-variable bias, and conceptual challenges related to the definition of causal estimands for largely immutable characteristics. Another concern, which has been the subject of recent debates, is post-treatment bias: many studies of discrimination condition on apparently intermediate outcomes, like being arrested, that themselves may be the product of discrimination, potentially corrupting statistical estimates. There is, however, reason to be optimistic. By carefully defining the estimand—and by considering the precise timing of events—we show that a primary causal quantity of interest in discrimination studies can be estimated under an ignorability condition that may hold approximately in some observational settings.
We illustrate these ideas by analyzing both simulated data and the charging decisions of a prosecutor’s office in a large county in the United States. Journal: Statistics and Public Policy Pages: 26-48 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2021.2024778 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.2024778 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:26-48 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2033654_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Christine Oehlert Author-X-Name-First: Christine Author-X-Name-Last: Oehlert Author-Name: Evan Schulz Author-X-Name-First: Evan Author-X-Name-Last: Schulz Author-Name: Anne Parker Author-X-Name-First: Anne Author-X-Name-Last: Parker Title: NAICS Code Prediction Using Supervised Methods Abstract: When compiling industry statistics or selecting businesses for further study, researchers often rely on North American Industry Classification System (NAICS) codes. However, codes are self-reported on tax forms and reporting incorrect codes or even leaving the code blank has no tax consequences, so they are often unusable. The IRS’s Statistics of Income (SOI) program validates NAICS codes for businesses in the statistical samples used to produce official tax statistics for various filing populations, including sole proprietorships (those filing Form 1040 Schedule C) and corporations (those filing Forms 1120). In this article we leverage these samples to explore ways to improve NAICS code reporting for all filers in the relevant populations. For sole proprietorships, we overcame several record linkage complications to combine data from SOI samples with other administrative data. Using the SOI-validated NAICS code values as ground truth, we trained classification-tree-based models (randomForest) to predict NAICS industry sector from other tax return data, including text descriptions, for businesses which did or did not initially report a valid NAICS code. For both sole proprietorships and corporations, we were able to improve slightly on the accuracy of valid self-reported industry sector and correctly identify sector for over half of businesses with no informative reported NAICS code. Journal: Statistics and Public Policy Pages: 58-66 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2033654 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2033654 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:58-66 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2016084_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Joshua Landon Author-X-Name-First: Joshua Author-X-Name-Last: Landon Author-Name: Joseph Gastwirth Author-X-Name-First: Joseph Author-X-Name-Last: Gastwirth Title: Graphical Measures Summarizing the Inequality of Income of Two Groups Abstract: Recently, Gastwirth proposed two transformations p∗(q) and m∗(q) of the Lorenz curve, which calculate the proportion of a population, cumulated from the poorest or middle, respectively, needed to have the same amount of income as the top 100q%. Economists and policy makers are often interested in the comparative status of two groups, for example, females versus males or minority versus majority.
This article adapts and extends the concept underlying the p∗(q) and m∗(q) curves to provide analogous curves comparing the relative status of two groups. Now one calculates the proportion of the minority group, cumulated from the bottom or middle needed to have the same total income as the top qth fraction of the majority group (after adjusting for sample size). The areas between these curves and the line of equality are analogous to the Gini index. The methodology is used to illustrate the change in the degree of inequality between males and females, as well as between black and white males, in the United States between 2000 and 2017, and can be used to examine disparities between the expenditures on health of minorities and white people. Journal: Statistics and Public Policy Pages: 20-25 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2021.2016084 File-URL: http://hdl.handle.net/10.1080/2330443X.2021.2016084 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:20-25 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2105769_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Edward J. Kim Author-X-Name-First: Edward J. Author-X-Name-Last: Kim Title: Signal Weighted Teacher Value-Added Models Abstract: This study introduces the signal weighted teacher value-added model (SW VAM), a value-added model that weights student-level observations based on each student’s capacity to signal their assigned teacher’s quality. Specifically, the model leverages the repeated appearance of a given student to estimate student reliability and sensitivity parameters, whereas traditional VAMs represent a special case where all students exhibit identical parameters. Simulation study results indicate that SW VAMs outperform traditional VAMs at recovering true teacher quality when the assumption of student parameter invariance is met but have mixed performance under alternative assumptions of the true data generating process depending on data availability and the choice of priors. Evidence using an empirical dataset suggests that SW VAM and traditional VAM results may disagree meaningfully in practice. These findings suggest that SW VAMs have promising potential to recover true teacher value-added in practical applications and, as a version of value-added models that attends to student differences, can be used to test the validity of traditional VAM assumptions in empirical contexts. Journal: Statistics and Public Policy Pages: 149-162 Issue: 1 Volume: 9 Year: 2022 Month: 12 X-DOI: 10.1080/2330443X.2022.2105769 File-URL: http://hdl.handle.net/10.1080/2330443X.2022.2105769 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:9:y:2022:i:1:p:149-162 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2218448_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Qianyu Dong Author-X-Name-First: Qianyu Author-X-Name-Last: Dong Author-Name: David Kline Author-X-Name-First: David Author-X-Name-Last: Kline Author-Name: Staci A. Hepler Author-X-Name-First: Staci A. Author-X-Name-Last: Hepler Title: A Bayesian Spatio-temporal Model to Optimize Allocation of Buprenorphine in North Carolina Abstract: The opioid epidemic is an ongoing public health crisis. 
In North Carolina, deaths due to illicit opioid overdose have sharply increased over the last 5–7 years. Buprenorphine is a U.S. Food and Drug Administration approved medication for treatment of opioid use disorder and is obtained by prescription. Prior to January 2023, providers had to obtain a waiver and were limited in the number of patients to whom they could prescribe buprenorphine. Thus, identifying counties where increasing buprenorphine would yield the greatest overall reduction in overdose deaths can help policymakers target certain geographical regions to inform an effective public health response. We propose a Bayesian spatio-temporal model that relates yearly, county-level changes in illicit opioid overdose death rates to changes in buprenorphine prescriptions. We use our model to forecast the statewide count and rate of illicit opioid overdose deaths in future years, and we use nonlinear constrained optimization to identify the optimal buprenorphine increase in each county under a set of constraints on available resources. Our model estimates a negative relationship between death rate and increasing buprenorphine after accounting for other covariates, and our identified optimal single-year allocation strategy is estimated to reduce opioid overdose deaths by over 5%. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2218448 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2218448 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2218448 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2188069_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Max D. Morris Author-X-Name-First: Max D. Author-X-Name-Last: Morris Title: Comments on: A Re-analysis of Repeatability and Reproducibility in the Ames-USDOE-FBI Study, by Dorfman and Valliant Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2188069 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2188069 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2188069 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2239306_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Nicholas Scurich Author-X-Name-First: Nicholas Author-X-Name-Last: Scurich Author-Name: Richard S. John Author-X-Name-First: Richard S. Author-X-Name-Last: John Title: Three-Way ROCs for Forensic Decision Making Abstract: Firearm examiners use a comparison microscope to judge whether bullets or cartridge cases were fired by the same gun. Examiners can reach one of three possible conclusions: Identification (a match), Elimination (not a match), or Inconclusive. Numerous error rate studies report that firearm examiners commit few errors when they conduct these examinations. However, the studies also report many inconclusive judgments (> 50%), and how to score these responses is controversial. There have recently been three Signal Detection Theory (SDT) primers in this domain. Unfortunately, these analyses rely on hypothetical data and fail to address the inconclusive response issue adequately.
This article reports an SDT analysis using data from a large error rate study of practicing firearm examiners. First, we demonstrate the problem of relying on the traditional two-way SDT model, which either drops or combines inconclusive responses; in addition to lacking ecological validity, this approach leads to implausible results. Second, we introduce readers to the three-way SDT model. We demonstrate this approach in the forensic firearms domain. While the three-way approach is statistically complicated, it is well suited to evaluate performance for any forensic domain in which three possible decision categories exist. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2239306 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2239306 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2239306 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2188062_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Constance F. Citro Author-X-Name-First: Constance F. Author-X-Name-Last: Citro Author-Name: Jonathan Auerbach Author-X-Name-First: Jonathan Author-X-Name-Last: Auerbach Author-Name: Katherine Smith Evans Author-X-Name-First: Katherine Smith Author-X-Name-Last: Evans Author-Name: Erica L. Groshen Author-X-Name-First: Erica L. Author-X-Name-Last: Groshen Author-Name: J. Steven Landefeld Author-X-Name-First: J. Steven Author-X-Name-Last: Landefeld Author-Name: Jeri Mulrow Author-X-Name-First: Jeri Author-X-Name-Last: Mulrow Author-Name: Thomas Petska Author-X-Name-First: Thomas Author-X-Name-Last: Petska Author-Name: Steve Pierson Author-X-Name-First: Steve Author-X-Name-Last: Pierson Author-Name: Nancy Potok Author-X-Name-First: Nancy Author-X-Name-Last: Potok Author-Name: Charles J. Rothwell Author-X-Name-First: Charles J. Author-X-Name-Last: Rothwell Author-Name: John Thompson Author-X-Name-First: John Author-X-Name-Last: Thompson Author-Name: James L. Woodworth Author-X-Name-First: James L. Author-X-Name-Last: Woodworth Author-Name: Edward Wu Author-X-Name-First: Edward Author-X-Name-Last: Wu Title: What Protects the Autonomy of the Federal Statistical Agencies? An Assessment of the Procedures in Place to Protect the Independence and Objectivity of Official U.S. Statistics Abstract: We assess the professional autonomy of the 13 principal U.S. federal statistical agencies. We define six components or measures of such autonomy and evaluate each of the 13 principal statistical agencies according to each measure. Our assessment yields three main findings: (a) Challenges to the objectivity, credibility, and utility of federal statistics arise largely as a consequence of insufficient autonomy. (b) There is remarkable variation in autonomy protections and a surprising lack of statutory protections for many agencies for many of the proposed measures. (c) Many existing autonomy rules and guidelines are weakened by unclear or unactionable language. We conclude that a lack of professional autonomy unduly exposes the principal federal statistical agencies to efforts to undermine the objectivity of their products and that agencies cannot completely rebuff these efforts. 
Our main recommendations are to strengthen the role of the OMB Chief Statistician and to legislate new statutory autonomy protections, including explicit authorization for the principal federal statistical agencies that currently have no recognition in statute. We also recommend periodic assessments of the health of the federal statistical system, including not only autonomy protections and resources, but also how well agencies are satisfying data needs for the public good and using best methods to do so. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2188062 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2188062 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2188062 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2188056_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Chris R. Surfus Author-X-Name-First: Chris R. Author-X-Name-Last: Surfus Title: A Statistical Understanding of Disability in the LGBT Community Abstract: For the first time, the United States Census Bureau began collecting data on the LGBT community with Phase 3.2 of the Household Pulse Survey. The Household Pulse Survey assesses how residents of the United States are doing during the COVID-19 pandemic. Data from Weeks 34 through 39 of the Household Pulse Survey provide information for understanding the lives of LGBT residents of the United States and how the LGBT community as a whole is faring economically. This study merges six weeks of the Household Pulse Survey, for a total of 382,908 survey responses. The sample represents a population of 250,265,449 adult residents aged 18 and older in the United States. This study provides the first nationally representative sample of residents of the United States who identify as transgender. This study specifically focuses on LGBT people with disabilities but highlights disparities facing transgender disabled U.S. adult residents. Disability is defined in the Household Pulse Survey as a severe or total impairment in seeing, hearing, remembering, or mobility. The data indicate significant disparities for LGBT people compared to non-LGBT people, specifically in terms of economic considerations like work loss, household finances, and mental health. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2188056 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2188056 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2188056 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2244026_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Michael Cohen Author-X-Name-First: Michael Author-X-Name-Last: Cohen Title: Discussion of “What Protects the Autonomy of the Federal Statistical Agencies? An Assessment of the Procedures in Place to Protect the Independence and Objectivity of Official U.S. Statistics” by Citro et al. (2023) Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2244026 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2244026 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2244026 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2216748_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Kori Khan Author-X-Name-First: Kori Author-X-Name-Last: Khan Author-Name: Alicia L. Carriquiry Author-X-Name-First: Alicia L. Author-X-Name-Last: Carriquiry Title: Shining a Light on Forensic Black-Box Studies Abstract: Forensic science plays a critical role in the United States criminal legal system. For decades, many feature-based fields of forensic science, such as firearm and toolmark identification, developed outside the scientific community’s purview. The results of these studies are widely relied on by judges nationwide. However, this reliance is misplaced. Black-box studies to date suffer from inappropriate sampling methods and high rates of missingness. Current black-box studies ignore both problems in arriving at the error rate estimates presented to courts. We explore the impact of each type of limitation using available data from black-box studies and court materials. We show that black-box studies rely on unrepresentative samples of examiners. Using a case study of a popular ballistics study, we find evidence that these nonrepresentative samples may commit fewer errors than the wider population from which they came. We also find evidence that the missingness in black-box studies is non-ignorable. Using data from a recent latent print study, we show that ignoring this missingness likely results in systematic underestimates of error rates. Finally, we offer concrete steps to overcome these limitations. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2216748 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2216748 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2216748 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2199809_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Arnold Barnett Author-X-Name-First: Arnold Author-X-Name-Last: Barnett Author-Name: Arnaud Sarfati Author-X-Name-First: Arnaud Author-X-Name-Last: Sarfati Title: The Polls and the U.S. Presidential Election in 2020 …. and 2024 Abstract: Arguably, the single greatest determinant of U.S. public policy is the identity of the president. And if trusted, polls not only provide forecasts about presidential-election outcomes but can act to shape those outcomes. Looking ahead to the 2024 U.S. presidential election and recognizing that polls before the 2020 presidential election were sharply criticized, we consider whether such harsh assessments are warranted. Initially, we explore whether such polls as processed by the sophisticated aggregator FiveThirtyEight successfully forecast actual 2020 state-by-state outcomes. We evaluate FiveThirtyEight’s forecasts using customized statistical methods not used previously, methods that take account of likely correlations among election outcomes in similar states. We find that, taken together, the pollsters and FiveThirtyEight did an excellent job in predicting who would win in individual states, even those “tipping point” states where forecasting is more difficult.
However, we also find that FiveThirtyEight underestimated Donald Trump’s vote shares by state to a modest but statistically significant extent. We further consider how the polls performed when the more primitive aggregator Real Clear Politics combined their results, and then how well single statewide polls performed without aggregation. It emerges that both Real Clear Politics and the individual polls fared surprisingly well. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2199809 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2199809 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2199809 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2190008_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Max Rubinstein Author-X-Name-First: Max Author-X-Name-Last: Rubinstein Author-Name: Amelia Haviland Author-X-Name-First: Amelia Author-X-Name-Last: Haviland Author-Name: Joshua Breslau Author-X-Name-First: Joshua Author-X-Name-Last: Breslau Title: The Effect of COVID-19 Vaccinations on Self-Reported Depression and Anxiety During February 2021 Abstract: Using the COVID-19 Trends and Impact Survey, we estimate the average effect of COVID-19 vaccinations on self-reported feelings of depression and anxiety, isolation, and worries about health among vaccine-accepting respondents in February 2021, and find 3.7, 3.3, and 4.3 percentage point reductions in the probability of each outcome, respectively, with particularly large reductions among respondents aged 18 to 24 years old. We show that interventions targeting social isolation account for 39.1% of the total effect of COVID-19 vaccinations on depression, while interventions targeting worries about health account for 8.3%. This suggests that social isolation is a stronger mediator of the effect of COVID-19 vaccinations on depression than worries about health. We caution that these causal interpretations rely on strong assumptions. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2190008 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2190008 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2190008 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2190368_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Banafsheh Behzad Author-X-Name-First: Banafsheh Author-X-Name-Last: Behzad Author-Name: Bhavana Bheem Author-X-Name-First: Bhavana Author-X-Name-Last: Bheem Author-Name: Daniela Elizondo Author-X-Name-First: Daniela Author-X-Name-Last: Elizondo Author-Name: Susan Martonosi Author-X-Name-First: Susan Author-X-Name-Last: Martonosi Title: Prevalence and Propagation of Fake News Abstract: In recent years, scholars have raised concerns about the effects that unreliable news, or “fake news,” has on our political sphere and our democracy as a whole. For example, the propagation of fake news on social media is widely believed to have influenced the outcome of national elections, including the 2016 U.S. Presidential Election, as well as the 2020 COVID-19 pandemic.
What drives the propagation of fake news on an individual level, and which interventions could effectively reduce the propagation rate? Our model disentangles an article’s bias from its truthfulness and examines the relationship between these two parameters and a reader’s own beliefs. Using the model, we create policy recommendations for both social media platforms and individual social media users to reduce the spread of untruthful or highly biased news. We recommend that platforms sponsor unbiased truthful news, focus fact-checking efforts on mild to moderately biased news, recommend friend suggestions across the political spectrum, and provide users with reports about the political alignment of their feed. We recommend that individual social media users fact check news that strongly aligns with their political beliefs and read articles of opposing political bias. Supplementary materials for this article are available online. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2190368 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2190368 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2190368 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2221314_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Hermann Habermann Author-X-Name-First: Hermann Author-X-Name-Last: Habermann Author-Name: Thomas A. Louis Author-X-Name-First: Thomas A. Author-X-Name-Last: Louis Author-Name: Franklin Reeder Author-X-Name-First: Franklin Author-X-Name-Last: Reeder Title: Is Autonomy Possible and Is It a Good Thing? Abstract: Recently, Citro et al. published an article focusing on the autonomy, or lack of same, of the 13 principal statistical agencies of the United States. The authors are to be congratulated for raising an important topic—the concept of autonomy. Among their conclusions are: (a) existing autonomy protections are inadequate, (b) a lack of professional autonomy unduly exposes the principal federal statistical agencies to efforts to undermine the objectivity of their products, and (c) agencies cannot completely rebuff these efforts. Their main recommendations are that the role of the Chief Statistician be strengthened and new statutory autonomy protections be legislated. Here, we consider the meaning of autonomy for a federal agency in general and for federal statistical agencies in particular. Additionally, we consider the benefits and limitations of autonomy for federal statistical agencies. We note that while additional legislation is useful to produce required autonomy, a powerful tool—and one which is possibly more readily available—is effective leadership. Finally, we suggest that the process used to select the leaders of the statistical system needs to be fundamentally changed. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2221314 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2221314 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2221314 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2221324_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Claire McKay Bowen Author-X-Name-First: Claire McKay Author-X-Name-Last: Bowen Title: The Autonomy Gap: Response to Citro et al. and the statistical community Abstract: While the threat of biased AI has received considerable attention, another invisible threat to data democracy exists that has not received scientific or media attention. This threat is the lack of autonomy for the 13 principal United States federal statistical agencies. These agencies collect data that informs the United States federal government’s critical decisions, such as allocating resources and providing essential services. The lack of agency-specific statutory autonomy protections leaves the agencies vulnerable to political influence, which could have lasting ramifications without the public’s knowledge. Citro et al. evaluate the professional autonomy of the 13 federal statistical agencies and found that they lacked sufficient autonomy due to the absence of statutory protections (among other things). They provided three recommendations to enhance the strength of the federal statistical agency’s leadership and its autonomy to address each measure of autonomy for all 13 principal federal statistical agencies. Implementing these recommendations is an initial and crucial step toward preventing future erosion of the federal statistical system. Further, statisticians must take an active role in initiating and engaging in open dialogues with various scientific fields to protect and promote the vital work of federal statistical agencies. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2221324 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2221324 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2221324 Template-Type: ReDIF-Article 1.0 # input file: USPP_A_2221320_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Wayne Smith Author-X-Name-First: Wayne Author-X-Name-Last: Smith Title: Comment on “What Protects the Autonomy of the Federal Statistical Agencies? An Assessment of the Procedures in Place That Protect the Independence and Objectivity of Official Statistics” by Pierson et al. Journal: Statistics and Public Policy Issue: 1 Volume: 10 Year: 2023 Month: 12 X-DOI: 10.1080/2330443X.2023.2221320 File-URL: http://hdl.handle.net/10.1080/2330443X.2023.2221320 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:usppxx:v:10:y:2023:i:1:p:2221320