Machine learning to analyze single-case graphs: a comparison to visual inspection
Abstract
Behavior analysts commonly use visual inspection to analyze single-case graphs, but studies of its reliability have produced mixed results. To examine this issue, we compared the Type I error rate and power of visual inspection with a novel approach: machine learning. Five expert visual raters analyzed 1,024 simulated AB graphs, which differed in the number of points per phase, autocorrelation, trend, variability, and effect size. Their ratings were compared with those obtained by the conservative dual-criteria method and by two models derived from machine learning. On average, the visual raters agreed with one another on only 75% of graphs. In contrast, both machine learning models showed the best balance between Type I error rate and power and produced more consistent results across graph characteristics. These results suggest that machine learning may help researchers and practitioners make fewer errors when analyzing single-case graphs, but replications remain necessary.
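The sketch below illustrates, in Python, the kind of pipeline the abstract describes: simulating one AB graph with a given number of points per phase, lag-1 autocorrelation, trend, variability, and effect size, then scoring it with the conservative dual-criteria (CDC) method. It is not the authors' code; all function names, parameter values, and the binomial cutoff (used here in place of the published CDC tables) are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of simulating an AB graph
# and applying the conservative dual-criteria (CDC) method to it.
import numpy as np
from scipy.stats import binom


def simulate_ab_graph(n_a=5, n_b=5, autocorr=0.2, trend=0.0,
                      sd=1.0, effect_size=1.0, rng=None):
    """Simulate one AB graph as an AR(1) series with an optional linear trend,
    adding `effect_size` (in SD units) to the intervention (B) phase."""
    rng = np.random.default_rng(rng)
    n = n_a + n_b
    noise = np.empty(n)
    noise[0] = rng.normal(scale=sd)
    for t in range(1, n):                      # lag-1 autocorrelated errors
        noise[t] = autocorr * noise[t - 1] + rng.normal(scale=sd)
    series = trend * np.arange(n) + noise
    series[n_a:] += effect_size * sd           # level change in phase B
    return series[:n_a], series[n_a:]


def cdc_detects_change(phase_a, phase_b):
    """CDC check for an expected increase: project the baseline mean line and
    baseline trend line into phase B, raise both by 0.25 baseline SD, and
    count how many B points fall above both lines."""
    a, b = np.asarray(phase_a, float), np.asarray(phase_b, float)
    shift = 0.25 * a.std(ddof=1)
    slope, intercept = np.polyfit(np.arange(len(a)), a, 1)   # baseline trend
    x_b = np.arange(len(a), len(a) + len(b))
    trend_line = intercept + slope * x_b + shift
    mean_line = a.mean() + shift
    above_both = np.sum((b > trend_line) & (b > mean_line))
    # Approximate binomial criterion (p = .5, alpha = .05); the published CDC
    # method uses tabled cutoffs, which this mimics.
    required = binom.ppf(0.95, len(b), 0.5) + 1
    return above_both >= required


phase_a, phase_b = simulate_ab_graph(effect_size=2.0, rng=42)
print(cdc_detects_change(phase_a, phase_b))
```

Repeating such simulations across the graph characteristics listed above, and comparing the CDC decisions to expert ratings and to machine learning predictions, gives the Type I error rate (false detections when effect size is zero) and power (correct detections when it is not) that the study contrasts.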