Monday, June 18, 2018

How Critical Are We?

One perennial issue in the best-practices discussion is whether our discipline is overly critical or not critical enough. When we evaluate other people’s research, should we focus more on the positive aspects or the negative ones?

My current view is “yes, both,” and the moderator [1] is whether we are talking about criticism that takes place pre- or post-publication. Pre-publication, I think we need to dial up the positivity; post-publication, I think we need to dial up the criticism.


Pre-publication peer-review: We can afford to emphasize the positives

Lopsided criticism in peer-review
Here are two ways of envisioning the reviewer’s job. One way: The reviewer is the firewall that protects the world from weak manuscripts by pointing out all their flaws. A second way: The reviewer is a knowledgeable colleague who has been asked to offer input on ways to strengthen the manuscript.

As an editor, I find that—nine times out of ten—the latter approach ultimately makes for a better published literature. Here are two specific reviewer tactics that help tip the pre-publication peer review balance in a more constructive direction:

1. Reviews are especially helpful when they explain what a particular subfield can gain from the manuscript. Given that the manuscripts I handle are rarely (if ever) examining a topic that I myself study, I need reviewers to tell me about the value that the manuscript will have for them and their colleagues. Does the manuscript help to define or organize a problem? Will the findings be useful for other scholars when planning their own studies? Does the manuscript properly situate the findings in the literature they are trying to inform? It is extraordinarily informative when a reviewer says “Wow, my subfield really needs an article that does what this manuscript is trying to do.”

2. Reviews are especially helpful when they avoid getting hung up on small imperfections and inconsistencies that make a story less pretty and glossy. These small imperfections (e.g., a simple main effect was not significant on one of the five tests, different meta-analytic publication bias analyses reveal different conclusions) are very real parts of science, and all articles have some. Indeed, I would argue that the picture-perfect (but impossible) articles of the past emerged because authors and reviewers pushed each other to scrub away the imperfections. [2] As an editor, I prefer reviews that focus in depth on a few big-picture concerns, if they exist (e.g., missing a large segment of the literature in a review, using the incorrect statistical test, drawing a conclusion that is not supported by the data). And if there are no big-picture concerns, the reviews should say so.

Post-publication peer-review: We can afford to be more critical

I think we have a natural tendency to assume that findings enter an “official canon” when they are accepted for publication. Canon is for fiction, like Star Wars and Marvel. As scientists, we must fight the urge to canonize.

[Image caption: Fictional scientist Bruce Banner fights the urge to transform into the Hulk in two eponymous movies, but only the second one is canon and "counts." Real scientists have to fight the urge to canonize articles just because they have crossed the threshold from unpublished to published.]

After all, good science should spark debates. I study the psychology of mating and relationships because I think much of this literature is debatable and debate-worthy. I criticize others’ approaches; others criticize mine. And I have served as a reviewer of productive back-and-forth debates between other scholars. These experiences were sometimes stressful, but in my view, these criticisms all served to advance the science.

I find it bewildering that some journals and editors are reluctant to devote page space to debates and criticism of previously published work. I have heard people express the opinion that criticism belongs only in the review process; if an article survives this “due process,” it earns a shield against any further published criticism. This attitude has a perverse effect: It prevents debates from moving forward openly for all to evaluate and confines them to a closed review process. I would posit that blogs, Facebook, and Twitter have become popular means of scientific criticism and debate in part because journals do not commonly offer opportunities for the ongoing, post-publication peer-review that is an essential part of science.

I would love to see journals embrace post-publication criticism—especially the thoughtful and productive kind of criticism that could even merit publication on its own. Indeed, I would love to see all journals operate like Behavioral and Brain Sciences or PNAS, where post-publication criticism is encouraged (or even solicited) shortly after the initial release of an article. If we create additional avenues for post-publication peer review, I think we will see a much-needed shift in the balance of criticism in our field.

[1] Hidden?
[2] I have always liked this piece about how changes in our scientific practices require changes on the part of reviewers, too.