It’s tempting to believe that facts speak for themselves. That, no matter how it is presented, data is data.
Science communication reveals that this is not the case. How facts are communicated influences their reception, their critique, and the (in)action they spur. COVID-19 reporting is a clear example of how rhetoric can deeply change a reader’s perception of scientific fact. Compare, for example, two pieces by former Washington Post editor Andrew Freedman, published a year apart.
As the Northern hemisphere began to thaw in March of 2020, questions circulated about how the pandemic would evolve over the warm summer months. The Post took a look at a preprint uploaded to the research site SSRN that suggested rates of infection from COVID-19 might drop over the summer. The preprint’s authors, both affiliated with the Massachusetts Institute of Technology, postulated that the virus might spread more slowly in areas warmer than 63º F with high humidity. In the same article, Freedman and his colleague Simon Denyer also spoke to researchers who cautioned that summer would offer little respite, shrouding the preprint’s analysis in a veil of skepticism.
By March of 2021, scientific consensus had changed about the virus’ seasonality. Freedman reported on initial findings from a World Meteorological Organization panel showing that warm weather alone won’t decrease COVID-19 rates. The panel of 16 interdisciplinary experts from five continents analyzed peer-reviewed research papers published through January 2021, just as new variants began to emerge. They found that human behavior, such as mask-wearing and social distancing, impacts virus transmission more than environmental factors do.
Is the change in tone due solely to the growing body of scientific research, or could there be other factors at play? In the year between the two articles, many aspects of science reporting didn’t change. Freedman turns to scientific researchers to interpret the findings of studies rather than conducting his own analyses. He cites individuals from five institutions of higher education in the first piece and sixteen in the second, nodding to the rigorous body of COVID-19 research they had conducted. Names like MIT and Johns Hopkins University, along with titles like immunologist, virologist, and earth scientist, encourage the reader to have faith in the article’s assertions by implicitly drawing upon the centuries of prestige held by these institutions.
Both pieces also express doubt in their findings, noting in 2020 that, “research is only just getting underway” and in 2021 that “no firm conclusions [about environmental effects on virus survival] can be drawn for COVID-19 at this time.” Science is an iterative process, so it follows that even years into a pandemic, questions remain.
But a deeper analysis of the news stories reveals key differences in the tools used to frame scientific research.
First, the articles follow the Dunning-Kruger effect: essentially, when we’re not good at a task, we don’t know enough to accurately assess our ability. When SARS-CoV-2 emerged, most researchers already had enough expertise from past work to understand how pandemics spread in general, so they skipped the stage of overconfidence and quickly realized how little they knew about the new virus, landing in what is called “the valley of despair”. The 2020 piece puts this into action. After the initial shock and only a few weeks of research, scientists cautioned that preliminary data was only speculative. Verbs like “could”, “may”, and “might” signal little certainty. Jeffrey Shaman, the director of the climate and health program at Columbia University’s Mailman School of Public Health, who was not involved in the study, asserts in the 2020 interview, “You can’t put a lot of stock in that. [...] It’s not a smart study from my perspective.” The authors emphasize that the study they reference has not been peer-reviewed. Science is used cautiously to preserve its perceived authority.
The second piece, on the other hand, illustrates the “slope of enlightenment”, in which confidence grows as doubt about the basic science of the virus decreases. Freedman calls back to his prior rhetoric of uncertainty: “Early on in the coronavirus pandemic [...] it appeared there might be connections between a country’s weather and climate and virus transmission (emphasis mine).” He refutes this hypothesis in the 2021 publication, saying that researchers have “come to see” that weather is only a minor factor in COVID-19 transmission. Instead of skeptical modal verbs like “may” or “could”, Freedman uses “have to”, “would”, and “won’t”. Experts quoted in the piece use the adverbs “clearly” and “definitely” when describing COVID-19 transmission patterns and precautions, suggesting that some uncertainties about the virus from 2020 have been answered. This increase in confidence reflects the increase in peer-reviewed publications and COVID-19 knowledge.
Second, the two pieces differ in terms of their similarity to scientific papers. The 2020 article begins with an emotional hook: “[COVID-19] has killed thousands, sickened more than 350,000 and sent major economies into a tailspin.” It uses the pronouns “you” and “we” throughout the piece, drawing the reader into the story. These rhetorical strategies, more characteristic of journalism than science, reflect the uncertainty of the beginning of the pandemic. With an understanding of the public’s fear, Freedman and Denyer report on the studies not with scientific detachment but with emotion.
By 2021, however, the constant barrage of rapidly changing and often contradictory COVID-19 updates had caused emotional exhaustion and skepticism in many. Some science journalists adapted their rhetoric to mirror that of scientific research papers rather than conventional journalism. For Freedman’s 2021 article, this meant omitting second-person pronouns, using the word “experts” instead of “scientists” or “researchers”, and replacing an emotional hook with passive observation. In March of 2020, “our knowledge of the virus was limited”, he says, citing studies “released before peer review.” The passive voice serves both to establish authority by subscribing to a scientific convention and to emphasize an increase in certainty about the virus’ behavior since 2020.
But just how much more certain are the findings from 2021?
This question brings us to the third difference between the articles: comfort with uncertainty. In the 2020 article, Freedman and Denyer suggest that government action and individual behavior “doesn’t fully explain why Cambodia, Thailand, Vietnam, and the Philippines have largely been spared mass outbreaks of the disease.” They position the studies of climate’s impact on COVID-19 transmission as a possible way to resolve this uncertainty. A subtitle of the piece states in bold: “uncertainties will take time to resolve.”
Yet, by 2021, scientists hadn’t eliminated all uncertainty about the virus. Rather, they had become comfortable with it, acknowledging that while there would always be more to learn about COVID-19, we know enough to act: to do the right thing and save as many lives as possible. While studies strongly suggest that weather and climate factors aren’t good predictors of COVID-19 transmission, the exact environmental mechanisms that affect the virus remain unknown. Freedman points out that the study concluded in January of 2021, before more transmissible variants emerged. He presents this limitation of the study not as a worry about knowing too little but as an opportunity to learn more, closing the piece with a quote from one of the researchers: “We’ve learned a lot about how to do this research. We’ve caught our breath and said, ‘okay, let’s do this right.’”
The increase in confidence in scientific reporting from 2020 to 2021 is noticeable. The latter article communicates research in a way that makes it seem more certain, exact, and indisputable. It’s not that the 2021 study necessarily had “better” science or more reliable results than the 2020 research: any research built on sound experiments, clear and reproducible results, and logical, justified conclusions, untainted by conflicts of interest, is generally “good” research. Rather, the accumulation of “good” science over the year allowed for more clarity, in turn enabling science journalism to appropriately present these findings with more confidence, making them more palatable and actionable to the public.
Science journalism is a necessity for public knowledge. It provides hope for new treatments for diseases, inspires people to enter the field, and instills a spirit of curiosity in its readers. But (thankfully), articles aren’t just smatterings of facts thrown together. They are embedded with rhetorical decisions, value judgments, and stylistic flourishes. So, while it is important to read about scientific developments, it is equally important to reflect on them. How much of the piece’s confidence comes from the science and how much from the delivery?