
Professional Skeptics in the Social Sciences: Responding to Cognitive Bias

Has this ever happened to you? 

As a kid walking down the street at dusk, I was a little startled to see a streetlight go on at the exact moment that I passed it. When it happened again some time later, I really took notice. After the fourth or fifth time, I was really starting to wonder whether I had some kind of aura…



We researchers are only human. Yet the goal of this noble profession is to obtain objective data that lead to meaningful and practical findings. The writings of Cook, Campbell, Stanley, and Shadish provide essential guidance on threats to validity and reliability. However, most of that body of literature addresses methodological conditions; what seems to receive less attention in the research literature is the set of cognitive biases that we humans bring to our work, and to everything we do.

Such challenges to our objectivity are legion. To name but a few: Shermer and Guild describe "confirmation bias," the tendency to pay more attention to evidence that confirms our beliefs while dismissing or ignoring evidence that contradicts them. Shermer also describes the "belief engine": our human predilection toward credulity. In his delightfully sobering blog, Dvorsky lists no fewer than 12 such biases, "those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions." In addition to confirmation bias, for example, Dvorsky includes the "Ingroup Bias," "a manifestation of our innate tribalistic tendencies…[that] causes us to overestimate the abilities and value of our immediate group."

The reason I find these tendencies so compelling is that they cannot be explained away by naiveté or lack of education; indeed, many appear to be largely hard-wired. In his discussion of confirmation bias, Shermer describes subjects on opposite sides of the political divide observing a controversial debate designed to elicit that frame of mind while their brain patterns were recorded. During this exercise, neuroimaging results "revealed that the part of the brain most associated with reasoning…was quiescent." In addition, while the human brain has apparently evolved to be highly adept at recognizing patterns, evolution has not always selected for accuracy in interpreting them. Discussing the belief engine, Shermer explains one example of why this should be: when our ancestors were at the watering hole, those who believed that every rustling of the leaves was a tiger were less likely to be eaten than those who were skeptical of all that leaf-rustling superstition.

With all of our human predilections toward bias, how can researchers ensure that findings are objective and meaningful? Shermer points out that science provides us with "built-in, self-correcting machinery": randomization, double-blind controls, peer review, the expectation that results be replicated, and so forth. However, many of these corrections are less prevalent in the social sciences. Randomization is difficult to achieve for myriad reasons; double-blind controls are virtually unheard of; peer review panels, being costly, are uncommon unless they are built into the grant requirements; and funding opportunities often prioritize variations and enrichments over replication of findings.

All of this means that conscious skepticism is even more important in the social sciences. Indeed, as researchers we are trained to question; we are "professional skeptics." (This training may put us at greater risk on the Serengeti, but it makes us better at producing unbiased research.) And we do have concrete tools available to us:

  • When possible, get involved in the program design phase and help developers recognize assumptions inherent in the design.

  • Discuss your methods with people outside the research team who can offer a fresh perspective.

  • Ask others—including not only your colleagues but the clients themselves—to take a critical look at instrument design and conclusions.

Less rigorously designed studies remain invaluable for gaining insights into programs that are not fully fleshed out, or when resources are limited; just remain cognizant that such studies demand even greater vigilance.

About those streetlights: I confess that I was actually a young adult. I didn't really think I had an aura, but I was troubled by how perplexing the experience seemed, until I took the time to think the situation through. I've probably passed thousands of streetlights in my life, and almost none of them magically turned on; I was only noticing the rare occasions when one did.
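For readers who like to see the arithmetic, here is a minimal sketch in Python; the pass count and per-pass switching probability are invented for illustration, not drawn from anything above. With thousands of passes, even a tiny chance per pass produces a handful of memorable coincidences, while the thousands of non-events go unrecorded.

```python
import random

# Toy illustration only: both numbers are assumptions, not data.
N_PASSES = 5000      # streetlights passed over the years (assumed)
P_SWITCH = 1 / 500   # chance a light switches just as you pass (assumed)

random.seed(42)
coincidences = sum(random.random() < P_SWITCH for _ in range(N_PASSES))

print(f"{coincidences} of {N_PASSES} streetlights switched on just as you passed")
# Expected value: N_PASSES * P_SWITCH = 10 memorable "hits"; the roughly
# 4,990 non-events are never noticed, so the hits feel like a pattern.
```

Ten or so "hits" scattered across years of walks is exactly what chance predicts, and it is more than enough raw material for an aura story.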

____________________________________________________

Jonathan Tunik is a Senior Research Associate for the Program Evaluation and School Improvement Services Division at Measurement Incorporated.

Please learn more about our program evaluation and professional development services on this website.
