BBC - Why Studying Individual Differences In Cognition Is Hard And How To Make It Easier
The promise of studying individual differences in cognition is straightforward: by asking participants to perform a battery of tasks, and by considering how performance covaries across the battery, we may uncover underlying relations. Although this approach has lived up to its promise in personality psychology, the picture for cognition is far less encouraging. Tasks that purportedly measure the same construct correlate weakly if at all; there is no agreement among researchers on the underlying factor structure; and there is widespread suspicion that cognitive performance measures are unreliable and unreplicable. The core of these problems is statistical: cognitive measures vary greatly from trial to trial, necessitating a means of modeling this noise. I develop Bayesian hierarchical exploratory and confirmatory factor models for cognitive experiments. By modeling trial noise, these models allow for far more accurate assessment of covariation and of the underlying latent factor structure than conventional methods. Most importantly, they allow for an accurate assessment of the uncertainty in estimated relations. The insights from these models are humbling. It seems that for many domains no statistical wizardry can save the day: there are severe limitations in the type of data we collect, and we should be more humble about what we may learn from them. Moreover, the limiting factor on certainty is the number of trials per person per task, rather than the number of tasks or the number of people. Designs with too few trials will fail to uncover structure no matter how many people participate. I apply the models to data sets in cognitive control and in visual illusions to highlight when relations among tasks may be uncovered.
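The attenuation described above can be illustrated with a short simulation (a hypothetical sketch, not the author's hierarchical models). Two tasks whose true person-level effects correlate at 0.7 are observed through trial-averaged scores; the per-person effect size (here 50 ms) and per-trial noise (here 250 ms) are illustrative values. With few trials per person, trial noise swamps the person-level signal and the observed correlation shrinks sharply, no matter how many people participate.

```python
import numpy as np

rng = np.random.default_rng(0)

def observed_correlation(n_people=200, n_trials=50, rho=0.7,
                         sd_person=0.05, sd_trial=0.25):
    """Simulate two tasks whose true per-person effects correlate at
    `rho`, then estimate the correlation from trial-averaged scores."""
    # True per-person effects for the two tasks (bivariate normal).
    cov = rho * sd_person ** 2
    true_effects = rng.multivariate_normal(
        mean=[0.0, 0.0],
        cov=[[sd_person ** 2, cov], [cov, sd_person ** 2]],
        size=n_people)
    # Each observed score is the mean of n_trials noisy trials, so the
    # residual noise on the mean has sd = sd_trial / sqrt(n_trials).
    observed = true_effects + rng.normal(
        0.0, sd_trial / np.sqrt(n_trials), size=(n_people, 2))
    return np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]

# Observed correlation rises toward the true 0.7 only as trials accrue.
for n_trials in (10, 100, 1000):
    print(n_trials, round(observed_correlation(n_trials=n_trials), 2))
```

In closed form, the expected attenuation is rho multiplied by the reliability, sd_person^2 / (sd_person^2 + sd_trial^2 / n_trials), which with these values is roughly 0.29 at 10 trials but 0.98 at 1000, matching the abstract's point that trials per person, not people, limit what can be learned.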