Contents
- Abstract
- GGM Estimation From Ordered Categorical Data
- Empirical Questions in Psychological Research
- Aim of the Article
- Method
- Data Transformation
- Network Estimation
- qgraph
- psychonetrics
- mgm
- BGGM
- GGMnonreg
- Network Model Construction
- Data Generation
- Quantifying Network Estimation Accuracy
- Sensitivity: Ability to Identify True Edges
- Specificity and Precision: Ability to Not Include False Edges
- Edge Weight Accuracy: Ability to Estimate Precise Edge Weights
- Centrality Index Accuracy: Ability to Identify Important Nodes
- Bridge Edges Detection: Ability to Detect Edges That Connect Clusters
- Network Replicability: Ability to Replicate Features in an Independent Dataset
- Simulation Setup Summary
- Results
- Simulation Exploration App
- Overall Results
- Desirable Asymptotic Properties
- Low Sample Size Discovery
- Specific Research Questions
- Visual Network Alignment
- Centrality
- Bridge Edges
- Network Replicability
- Discussion
- Limitations
- Conclusion
- Appendix A
- Appendix B
Abstract
The Gaussian graphical model (GGM) has recently grown popular in psychological research, with a large body of estimation methods being proposed and discussed across various fields of study, and several algorithms being identified and recommended as applicable to psychological data sets. Such high-dimensional model estimation, however, is not trivial, and algorithms tend to perform differently in different settings. In addition, psychological research poses unique challenges, including a strong focus on weak edges (e.g., bridge edges), data measured on ordered categorical scales, and relatively limited sample sizes. As a result, there is currently no consensus regarding which estimation procedure performs best in which setting. In this large-scale simulation study, we aimed to close this gap in the literature by comparing the performance of several estimation algorithms suitable for Gaussian and skewed ordered categorical data across a multitude of settings, so as to arrive at concrete guidelines for applied researchers. In total, we investigated 60 different metrics across 564,000 simulated data sets. We summarized our findings through a platform that allows for manually exploring simulation results. Overall, we found that a trade-off between discovery (e.g., sensitivity, edge weight correlation) and caution (e.g., specificity, precision) should always be expected, and achieving both—which is a requirement for perfect replicability—is difficult. Further, we identified that the estimation method is best chosen in light of each research question and have highlighted, alongside desirable...
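The discovery and caution metrics named in the abstract (sensitivity, specificity, precision) compare an estimated network's edge set against the true, data-generating network. As a minimal sketch of how these are computed, assuming two hypothetical binary adjacency matrices for a 5-node undirected network (the matrices below are illustrative, not from the study):

```python
import numpy as np

# Hypothetical adjacency matrices (1 = edge present) for a 5-node network:
# true_net is the data-generating GGM, est_net is an estimated network.
true_net = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
])
est_net = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
])

# Only the upper triangle is informative in an undirected network.
iu = np.triu_indices_from(true_net, k=1)
t, e = true_net[iu], est_net[iu]

tp = np.sum((t == 1) & (e == 1))  # true edges recovered
fp = np.sum((t == 0) & (e == 1))  # spurious edges included
fn = np.sum((t == 1) & (e == 0))  # true edges missed
tn = np.sum((t == 0) & (e == 0))  # absent edges correctly excluded

sensitivity = tp / (tp + fn)  # discovery: share of true edges found
specificity = tn / (tn + fp)  # caution: share of absent edges excluded
precision = tp / (tp + fp)    # caution: share of included edges that are true

print(sensitivity, specificity, precision)  # → 0.8 0.8 0.8
```

The discovery–caution trade-off described in the abstract appears directly in these formulas: a more liberal estimator includes more edges, raising sensitivity but lowering specificity and precision, and vice versa.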