Tuesday, February 20, 2018

Intuitive and Impossible: What do Short-Term and Long-Term Relationships Look Like?

People have long-term relationships and short-term relationships. In what ways do these two kinds of relationships differ?

You may find the answer to be extremely intuitive—or extremely counterintuitive—depending on your lay theories about relationships, or depending on which segment of the literature on human mating is more familiar to you.

[Figure: The ReCAST model. Double lines are long-term relationships; the single line is a short-term relationship.]
In a recent paper, we collected data on people’s real-life relationships over time—beginning at the first moment they met a partner—to compare the relationships that people think of as “long-term” and “short-term.” There is a vast literature that asks people what they want in these kinds of relationships, but there is far less data on people’s real-life experiences with short-term and long-term relationships and partners. We wanted to know: How exactly do these types of relationships differ, and when do these differences become apparent? It took us about four years to collect and publish these data, and they helped us develop something we call the ReCAST model.

Perhaps the most important finding was this one: Differences did not emerge right away. That is, it took a considerable period of time—typically weeks or months—for short-term and long-term relationships to diverge. Put another way: You can’t tell, early on, whether a relationship is short-term or long-term; the trajectories only pull apart once you’ve known someone for quite a while.

We have a high degree of confidence in these findings.[1] But here is today’s question: Are these findings intuitive and obvious? 

According to one type of reviewer (we had two reviewers like this), these data are extremely intuitive. These reviewers said: Researchers studying close relationships already know that relationships unfold gradually over time. Of course you cannot predict how long a relationship will last until two people have a chance to interact, assess interpersonal chemistry, and (preferably) have a few make-out sessions. These assumptions are built into the fabric of everything we have done for the past 30 years. Why would you try to test or publish something so obvious?

To another type of reviewer (we had four reviewers like this), these results were highly implausible. These reviewers said: Researchers studying evolved strategies know that people approach relationships very differently depending on whether that relationship is short-term or long-term. For example, women can view a photograph of a man and know from his chiseled features that he is good for a short-term but not a long-term relationship. Your data are at odds with the assumptions that are built into the fabric of everything we have done for the past 30 years. You can’t possibly be testing these predictions correctly—if your methods were right, you would have gotten different results. Therefore, these data shouldn’t be published.

Together, these reviews characterized our data as simultaneously obvious and implausible. And this juxtaposition highlights the risk of drawing on intuition when making scientific critiques.

=================

Here is a short history of the Pendulum of Intuitiveness in psychological journals.

When I was in graduate school in the early-to-mid 2000s, the easiest way to get rejected from a journal was to try to publish something that felt obvious and familiar. One way that people would try to combat this pressure: Find a result that was counterintuitive. Hopefully, very counterintuitive. Like “wow, can you believe it?!” counterintuitive.

Sometimes, though, that counterintuitive finding didn’t emerge from a deep dive into two theories to discover where they made divergent predictions. Rather, the finding was something flashy—something a lay person wouldn’t have expected. Conducting data analysis felt more like gambling than detective work; ten obvious p < .05s were worth a lot less than one shocking (and perhaps “lucky”) p < .05. These pressures and strategies probably led to the publication of some counterintuitive findings that would be tough to replicate, at the expense of some intuitive but easily replicable ones.

But within the last few years, terms like “counterintuitive” have become radioactive in the wake of recent methodological advances in our field. In other words, if a result seems surprising to you, there is now reason to suspect that it might be “too good to be true.”
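One way to see why is to treat that suspicion as a base-rate question. Below is a minimal sketch of the standard positive-predictive-value calculation; the prior, alpha, and power values are hypothetical numbers chosen purely for illustration, not estimates from our paper or anyone else’s. The point: when a hypothesis is surprising precisely because it has low prior probability, a significant result for it is much more likely to be a false positive, even if the study itself was run correctly.

```python
# Minimal sketch: how often is a significant result a true positive?
# The prior, power, and alpha values below are illustrative assumptions,
# not estimates from any particular study or literature.

def ppv(prior, power=0.80, alpha=0.05):
    """Probability that a significant finding reflects a true effect."""
    true_pos = power * prior          # true effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects that pass p < .05
    return true_pos / (true_pos + false_pos)

# An "intuitive" hypothesis with decent prior odds vs. a "shocking" one.
print(f"prior = .50 -> PPV = {ppv(0.50):.2f}")  # ~0.94
print(f"prior = .05 -> PPV = {ppv(0.05):.2f}")  # ~0.46
```

Under these made-up numbers, roughly half of the significant results for the low-prior hypothesis would be false positives, which is the statistical core of the “too good to be true” worry.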

The counterintuitive backlash makes sense. But it’s not a sufficient place to stop: Unless we want to keep swinging with the pendulum, we also have to keep questioning our intuitions. If we’re not willing to test our intuitions and publish the results—whether those results are themselves intuitive or counterintuitive—we sound more like advocates for “stuff we already know” than scientists asking questions about the world.

So intuition may be great for inspiring study ideas and informing your personal Bayesian priors about whether a study is likely to work or replicate. But it is not a substitute for actual empirical research. And if that research is appropriately powered, theoretically grounded, and well conducted, the findings have value regardless of whether they happened to confirm or disconfirm your intuitions. After all, one scholar’s intuitive may be another scholar’s impossible.

---------------------------

[1] Please, please replicate us! The materials and preregistration can be found here. And don’t hesitate to email me if you have questions.