Category: Translational

Symposium

Building From Solid Ground: Establishing the Test-Retest Reliability of Computational Modeling Parameters

Friday, November 17
12:00 PM - 1:30 PM
Location: Sapphire Ballroom M & N, Level 4, Sapphire Level

Keywords: Psychometrics | Translational Research | Cognitive Processes
Presentation Type: Symposium

Background: Computational modeling can reproduce long sequences of behavior with just a few parameters. In theory, model parameters can capture precise individual differences that are obscured in raw behavior. Such parameters should therefore be more stable within-subject than measures summarizing raw behavior (Huys, Maia, & Frank, 2016). Improved stability matters because the test-retest reliability of behavioral measures is often very poor (Lilienfeld, 2014), which poses a serious threat to progress in clinical science (Rodebaugh, Scullin, et al., 2016). However, no data are available on the comparative reliability of model-based and behavioral measures. We therefore performed a head-to-head comparison of reliabilities in a trial-and-error learning task in which behavior can be modeled using a well-developed approach (Niv, Radulescu, et al., 2015).


Methods: Comparisons were performed in two datasets (Niv et al., 2015, N = 22, 500 trials/participant, hereafter D1; Radulescu et al., 2016, study 2, N = 54, ~1,400 trials/participant, hereafter D2). Test-retest reliability was assessed via 'A-1' intraclass correlation coefficients (ICCs; McGraw & Wong, 1996), that is, two-way random-effects, absolute-agreement, single-measure ICCs, computed on behavioral and computational modeling measures estimated in the first (test) and second (retest) halves of the task.
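The abstract does not include analysis code, so the following is a minimal NumPy sketch of how an ICC(A,1) might be computed from per-participant test and retest scores, using the two-way ANOVA mean squares in McGraw & Wong's (1996) definition. The function name, data layout, and the toy example are illustrative assumptions, not the authors' code.

```python
import numpy as np

def icc_a1(test, retest):
    """ICC(A,1): two-way random-effects, absolute-agreement, single-measure
    intraclass correlation (McGraw & Wong, 1996).

    test, retest: 1-D arrays holding one measure (e.g., a fitted model
    parameter) estimated in the first and second halves of the task,
    one value per participant.
    """
    x = np.column_stack([np.asarray(test, float), np.asarray(retest, float)])
    n, k = x.shape                      # participants, occasions (k = 2)
    grand = x.mean()
    row_means = x.mean(axis=1)          # per-participant means
    col_means = x.mean(axis=0)          # per-occasion means

    # Two-way ANOVA mean squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # ICC(A,1) = (MSR - MSE) / (MSR + (k-1)*MSE + (k/n)*(MSC - MSE))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Toy example with simulated data (not the study's data):
rng = np.random.default_rng(1)
true_param = rng.normal(size=20)
test_half = true_param + rng.normal(scale=0.5, size=20)
retest_half = true_param + rng.normal(scale=0.5, size=20)
print(icc_a1(test_half, retest_half))
```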


Results: Model parameters (d and β*η) showed higher test-retest reliability than measures summarizing raw behavior (accuracy and games learned) in both datasets (ICCs in D1: d = .17, β*η = .35, accuracy = .09, games learned = .10; ICCs in D2: d = .68, β*η = .49, accuracy = .16, games learned = -.12). Secondary bootstrapping analyses in D2 (which had many more trials than D1) revealed that reliability improved with the number of trials performed.
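The abstract does not detail the bootstrapping procedure; the sketch below shows one plausible way to trace reliability as a function of trial count, reusing icc_a1 from the sketch above. The estimate_measure hook, the split-half scoring, and the resampling scheme are all assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def reliability_curve(estimate_measure, trials_by_subject, trial_counts,
                      n_boot=1000):
    """Bootstrap test-retest reliability as a function of trials performed.

    estimate_measure: hypothetical hook mapping one participant's trial
        data to a scalar measure (e.g., a fitted model parameter).
    trials_by_subject: list of per-participant trial arrays.
    trial_counts: trial budgets at which to evaluate reliability.
    """
    curves = {}
    for m in trial_counts:
        # Score each half of the first m trials for every participant
        test = np.array([estimate_measure(t[: m // 2])
                         for t in trials_by_subject])
        retest = np.array([estimate_measure(t[m // 2 : m])
                           for t in trials_by_subject])
        # Resample participants with replacement to get an ICC distribution
        n = len(test)
        boots = [icc_a1(test[idx], retest[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
        curves[m] = (np.median(boots), np.percentile(boots, [2.5, 97.5]))
    return curves
```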


Conclusions: The results support the premise that modeling yields more stable measurement of individual differences than behavioral summary statistics. Consistent with findings in other tasks (e.g., the dot probe; Rodebaugh et al., 2016), the behavioral measures exhibited near-zero test-retest reliability. The reliability of the model parameters exceeded that of the most reliable behavioral measure by factors ranging from 1.1 to 4.3, and the most reliable computational measure approached the level considered adequate for questionnaires. Questionnaires often have high reliability, but their validity is arguably marred by response biases (Podsakoff, MacKenzie, et al., 2003). Thus, our early results offer hope that, as modeling approaches are refined, model parameters may eventually achieve the psychometric holy grail: high reliability and high validity.

Peter F. Hitchcock

Graduate Student
Drexel University
