Hey, Mom! The Explanation.

Here's the permanent dedicated link to my first Hey, Mom! post and the explanation of the feature it contains.

Thursday, June 25, 2020

A Sense of Doubt blog post #1955 - Peer-Reviewed Scientific Journals Don't Really Do Their Job

[Header image: a collage of a scientist writing, a microscope, and journal pages]

I always say that material will present itself.

Because, the Internet.

I wanted something quick to complete today after the last two days of pretty intense writing and gathering of a lot of material, which takes quite a bit of time.

I had planned to create more videos for school, but I did not have time in my day for that yet, so those will get pushed off to next week.

I was checking my email and spotted this article from WIRED, which is a meaningful coincidence: just this morning, I was explaining to a student who claimed to have cited scholarly articles in their research project that they had, in fact, cited none. And so, spotting this article about the inherent flaws in the peer review process, which do not surprise me, seems like the perfect thing to post after weeks and weeks of mostly SJW material, either on COVID-19, "I Can't Breathe" and antiracism, or the sexual misconduct of one of my favorite writers. See, there, I have not broken my trend of mentioning it, even though I am not tagging him or linking today.

I like that this article feeds into the critical thinking about sources that I have been steering my students to do. Because students do as students do, the challenge lately has been just to get them to list scholarly articles at all in their projects, let alone really read and grapple with the material. I have a long road to travel to really do what I want to do with that curriculum, but it's an evolving process.

So, here is an article that thinks critically about peer-reviewed journals, because pandemic research is now more about speed than anything else, even though this is a time when, more than ever, we must employ really incisive critical thinking. And yet, apparently, the peer-review process has always been flawed, and bad science gets through anyway, because, well, you know, HUMANS. And we're back to the issue raised in the last week re: sexual misconduct, etc. Same types of things.

So, this will be a good one to teach.

Here's the article.

Thanks for tuning in.


https://www.wired.com/story/peer-reviewed-scientific-journals-dont-really-do-their-job/



Peer-Reviewed Scientific Journals Don't Really Do Their Job
The rapid sharing of pandemic research shows there is a better way to filter good science from bad.


The rush for scientific cures and treatments for Covid-19 has opened the floodgates of direct communication between scientists and the public. Instead of waiting for their work to go through the slow process of peer review at scientific journals, scientists are now often going straight to print themselves, posting write-ups of their work to public servers as soon as they’re complete. This disregard for the traditional gatekeepers has led to grave concerns among both scientists and commentators: Might not shoddy science—and dangerous scientific errors—make their way into the media, and spread before an author’s fellow experts can correct it? As two journalism professors suggested in an op-ed last month for The New York Times, it’s possible the recent spread of so-called preprints has only “sown confusion and discord with a general public not accustomed to the high level of uncertainty inherent in science.”

There’s another way to think about this development, however. Instead of showing (once again) that formal peer review is vital for good science, the last few months could just as well suggest the opposite. To me, at least—someone who’s served as an editor at seven different journals, and editor in chief at two—the recent spate of decisions to bypass traditional peer review gives the lie to a pair of myths that researchers have encouraged the public to believe for years: First, that peer-reviewed journals publish only trustworthy science; and second, that trustworthy science is published only in peer-reviewed journals.

Scientists allowed these myths to spread because it was convenient for us. Peer-reviewed journals came into existence largely to keep government regulators off our backs. Scientists believe that we are the best judges of the validity of each other's work. That's very likely true, but it's a huge leap from that to "peer-reviewed journals publish only good science." The most selective journals still allow flawed studies—even really terribly flawed ones—to be published all the time. Earlier this month, for instance, the journal Proceedings of the National Academy of Sciences put out a paper claiming that mandated face coverings are “the determinant in shaping the trends of the pandemic.” PNAS is a very prestigious journal, and their website claims that they are an “authoritative source” that works “to publish only the highest quality scientific research.” However, this paper was quickly and thoroughly criticized on social media; by last Thursday, 45 researchers had signed a letter formally calling for its retraction.

Now the jig is up. Scientists are writing papers that they want to share as quickly as possible, without waiting the months or sometimes years it takes to go through journal peer review. So they're ditching the pretense that journals are a sure-fire quality control filter, and sharing their papers as self-published PDFs. This might be just the shakeup that peer review needs.

The idea that journals have a special way to tell what’s good science and what’s bad has always been an illusion. In fact, the peer review process at journals leaves much to be desired. When a paper goes through, only those reviewers invited by the editor can weigh in on its quality, and their comments almost never get shared with readers. Journal peer review typically means that authors get a small dose of vetting—a few drops of criticism—on the way to publication. In contrast, when a paper is posted as a preprint, the authors’ peers still review it, but their vetting isn’t forced through the tip of a pipette. Instead, a firehose of criticism gets turned on. Because a preprint is public, any scientist can review the paper, and their comments may be posted to it using annotation software such as hypothes.is, or shared on social media for all readers to consider. That tends to make for better science, in the end.
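
To make that concrete: because preprint annotations are public, anyone can pull the commentary attached to a paper programmatically. Here is a minimal sketch in Python against the public Hypothesis search API; the endpoint and response fields reflect Hypothesis's documented public API, and the preprint URL below is just a hypothetical placeholder.

import requests

def fetch_annotations(preprint_url, limit=20):
    """Return public Hypothesis annotations anchored to the given document URL."""
    # The public search endpoint requires no API key for public annotations.
    resp = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": preprint_url, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("rows", [])

# Placeholder URL; substitute any preprint's canonical address.
for note in fetch_annotations("https://example.org/some-preprint"):
    print(note["user"], note["created"])
    print(note.get("text", "(highlight only)"))
    print("---")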

In reality, it’s still quite rare for a preprint to get a lot of reviews. The firehose may be open for criticism to flow, but often no one bothers to turn it on. The use of preprints is still a new development in most fields, and it’s important to keep in mind that many such papers have been read by literally no one in the world besides their authors. But controversial findings about important issues, such as Covid-19, are a clear exception, especially when they get picked up by (or peddled to) the media. Those papers will almost certainly get more thorough vetting as a preprint than they would by going through journal peer review. That may be why one of the authors of the PNAS paper told BuzzFeed last week that he and his colleagues would “prefer not to engage in scientific debates via social media platform.”

One of the advantages of preprints is that they make the process of peer review more flexible. Indeed, it never really ends: A paper can be subjected to another round of scrutiny, for example, if it’s being picked up by policymakers, or if we later learn that the methods are flawed. At journals, peer review is almost always limited to just three or four reviewers whose work is over once the paper is accepted for publication.





That’s not to say that moving to preprints and public peer review will solve all our problems. There are many problems this shift won’t fix, and new ones it will create. But it’s clear that the old system doesn’t live up to the credibility we’ve bestowed on it. Journal peer review is full of holes, and the idea that scientific journals—and they alone—can tell us what’s trustworthy and what’s not is a fantasy.

In many ways, journals don't even pretend to ensure the validity of scientific findings. If that were their primary goal, journal policies would require authors to share their data and analysis code with peer reviewers, and would ask reviewers to double-check results. In practice, reviewers can only judge the science based on what’s reported in the writeup, and they usually can’t see the details of the process that led to the findings. (This is kind of like asking a mechanic to evaluate a car without looking under the hood.) And for really important discoveries, you might expect journals to recruit an independent team of scientists to try to replicate a study from scratch. This basically never happens.

Journals do ask reviewers to weigh in on a study’s quality, but also on its novelty and drama. Most peer-reviewed journals aren't simply trying to filter out inaccurate findings; they're also trying to select the stuff that will boost their "impact factor"—a public ranking based on how many times a journal's articles get cited in the few years after they've been published. Accuracy matters, but many other aspects of a study also play important roles: whether the authors are eminent scientists, for example, or whether they're from prestigious universities, or whether the discovery is likely to get media attention. (Journal peer review also makes no attempt to ferret out deliberate fraud.)
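
For the curious, the metric itself is simple arithmetic: the standard two-year impact factor for year Y is the number of citations in Y to a journal's items from years Y-1 and Y-2, divided by the number of citable items it published in those two years. A sketch with made-up numbers:

citations_to_recent_items = 1500  # hypothetical: citations in year Y to items from Y-1 and Y-2
citable_items_published = 500     # hypothetical: citable items published in Y-1 and Y-2
impact_factor = citations_to_recent_items / citable_items_published
print(impact_factor)  # 3.0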

Scientists know all of this, in principle. I knew all of it myself. But I didn't know the full extent until I became editor in chief of a peer-reviewed journal, Social Psychological and Personality Science, in 2015. I should never have gotten the job: I was young, barely tenured, and a bit rebellious. But the gatekeepers took a chance on me, and, as obstreperous as I was, I knew this job was a big responsibility and I had to fulfill my duties according to professional norms and ethics. I took this to mean that I should evaluate the scientific merits of each manuscript submitted to the journal, and decide whether to publish it based only on considerations of quality. In fact, I chose to hide the authors' names from myself as much as possible (sometimes called “triple-blind” review), so that I wouldn't be swayed or intimidated by how famous they were.

Less than a month later, this got me into trouble. Apparently I had upset some Very Important People by “desk-rejecting” their papers, which means I turned them down on the basis of serious methodological flaws before sending out the work to other reviewers. (This practice historically accounted for about 30 percent of the rejections at this journal.) My bosses—the committee that hires the editor in chief and sets journal policy—sent me a warning via email. After expressing concern about “toes being stepped on,” especially the toes of "visible ... scholars whose disdain will have a greater impact on the journal's reputation," they forwarded a message from someone whom they called "a senior, highly respected, award-winning social psychologist." That psychologist had written them to say that my decision to reject a certain manuscript was "distasteful." I asked for a discussion of the scientific merits of that editorial decision and others, but got nowhere.

In the end, no one backed down. I kept doing what I was doing, and they stood by their concerns about how I was damaging the journal’s reputation. It’s not hard to imagine how things might have gone differently, though. Without the persistent support of the associate editors and the colleagues I called on for advice during this episode, I very likely would have caved and just agreed to keep the famous people happy.

This is the seedy underbelly of peer-reviewed journals. Award-winning scientists are so used to getting their way that they can email the editor's boss and complain that they find rejection "distasteful." Then the editor is pressured to be nicer to the award-winning scientists.





I heard later that the person who had hired me as editor in chief described the decision as "an experiment gone terribly, terribly wrong." Fair enough: That's basically what I think about the whole system of peer-reviewed science journals. It was once a good idea—even a necessary one—but it isn’t working anymore.

It's not that peer review can't work; indeed, as the old saying goes, it's the worst form of quality control, except for all the other ones that have been tried. But there are new ways of doing peer review that we haven't yet tried, and that's where preprints come into play.

Many of the problems with peer-reviewed journals are problems with the journals, rather than with peer review per se. Preprints allow peer review to be taken out of the journals’ hands, which opens up dramatic new opportunities to improve it. There’s no guarantee that the freewheeling, open-ended peer review of preprints will be rigorous and just, but everyone can see the process: Was it thorough? Do the reviews seem detailed and fair? We get to judge the judges. Journals don't let us do that. We just have to take their word that their peer review process is rigorous and just.

For now, most preprints will get very few, if any, reviews. That needs to change, but even just knowing that a paper has not been thoroughly reviewed is a huge improvement over the black box of journal-based peer review. As these public reviews become more commonplace, there is reason to hope that preprints will elicit more piercing criticism than journals typically provide, particularly for sensationalistic papers by famous people. Journal editors and reviewers may be blinded by the flashiness of a paper’s claims, or the prominence of its authors; or else they may notice a study’s flaws but choose to publish it anyway for the “impact.” Either way, they can be confident that they will not be held accountable for the stringency of the peer review process. In a preprint, though, a famous scientist’s exaggerated or unwarranted claims may be more likely to be called out, not less.

Preprints also introduce new challenges, such as how to guarantee that unknown authors can get attention, or prevent friends from writing glowing reviews of one another’s work. But the most frequent concern I’ve heard—that preprints allow bad science to get into the hands of policymakers and practitioners—rings hollow. Peer-reviewed journals have been disastrously ineffective at preventing that very outcome. Indeed, some of the papers we published under my editorship at Social Psychological and Personality Science have been convincingly, and quite devastatingly, criticized. Editors and reviewers are fallible, and the journal peer review process is far too flimsy to live up to its reputation. It’s time we stop putting so much faith in journals, and look for more transparent and effective ways to peer review scientific claims.


Simine Vazire

ABOUT
Simine Vazire (@siminevazire) is a professor in the Melbourne School of Psychological Sciences at the University of Melbourne. She is the co-founder of the Society for the Improvement of Psychological Science and co-lead of MetaMelb, an interdisciplinary group doing metaresearch and metascience at the University of Melbourne. She is Editor in Chief of Collabra: Psychology.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


- Bloggery committed by chris tower - 2006.25 - 10:10

- Days ago = 1819 days ago

- New note - On 1807.06, I ceased daily transmission of my Hey Mom feature after three years of daily conversations. I plan to continue Hey Mom posts at least twice per week but will continue to post the days since ("Days Ago") count on my blog each day. The blog entry numbering in the title has changed to reflect total Sense of Doubt posts since I began the blog on 0705.04, which include Hey Mom posts, Daily Bowie posts, and Sense of Doubt posts. Hey Mom posts will still be numbered sequentially. New Hey Mom posts will use the same format as all the other Hey Mom posts; all other posts will feature this format seen here.
