What can we learn from misinformation on scientific matters in the water space?

November 2020. For the International Water Association (IWA).


Science and engineering help us understand the world and change it for the better. For example, advancements in drinking water, from germ theory and handwashing to filtration and chlorine disinfection, have substantially reduced deaths and disease, and improved public health. The results are self-evident for many of us at our kitchen taps.

However, despite growing public trust in scientists, there also appears to be more scientific misinformation and even more attacks on legitimate expertise. Informed criticism, whatever its origin, is valuable (think peer review, even when it is of poor quality), but attacks on scientific findings unsupported by evidence are usually counterproductive. Here, I share my experiences with the latter kind, specifically after publishing research in an IWA journal.

I do this for two reasons: a) to throw light on the common yet mostly unspoken phenomenon of baseless attacks on science, and b) to open a conversation that may help others, especially young professionals and untenured faculty, who have suffered similar attacks, or who are self-censoring high-risk or potentially unpopular scientific findings for fear of retaliation or public shaming by Twitter mobs.

With collaborators Drs. Marc Edwards and Min Tang, I recently published two peer-reviewed articles quantifying waterborne lead exposure in Flint, Michigan during the city’s water crisis using wastewater-based epidemiology. Specifically, we used datasets of lead in sewage sludge (or biosolids), drinking water, and children’s blood to estimate water lead trends over the past decade.

Two major findings from this work, first reported in the IWA journal Water Research in May 2019, were: a) lead in water and in children’s blood in Flint peaked for only a few months immediately after the city switched to Flint River water in April 2014 and suspended corrosion control treatment, then dropped, though levels remained worrisome; and b) a lead exposure event, much worse than any seen during the entire crisis, occurred in mid-2011.

Having helped uncover the Flint Water Crisis with residents as part of the Virginia Tech research team in August 2015, we saw this work as a valuable addition to our and society’s knowledge base. To give everyone, especially Flint residents, full access to the science, we made these papers open access. Two months ago, I shared these findings with a broader audience in a scientific opinion article co-authored with Dr. Edwards, writing that “lead levels in Flint were not as bad as first feared,” contrary to numerous speculations made in 2015 that lead levels grew progressively worse over the crisis’ 18 months.

With many now pursuing wastewater-based epidemiology to track the spread of the novel coronavirus, our article generated expected excitement and engagement among water/wastewater experts. However, it also started a small storm.

A journalist vented their anger on Twitter, emphasizing that, while they were “not qualified” to assess the scientific merits of our work, my sharing of the findings was insensitive. Repeated polite emails asking which specific lines from our article were “insensitive” have gone unanswered. An activist described the reporting of these data as a “betrayal to the long suffering residents of Flint.” A lawyer made an understandable feces quip about wastewater data.

However, it was the responses from academics that gave me pause. A mechanical engineering professor told a newspaper that our findings, whatever they may be (she had not actually read our lead papers), did not “eliminate the fact that there was high exposure.” A geography professor opined that because we did not live in Flint or drink its water, we should keep quiet. By that logic, most research conducted since the beginning of time should not have been done or shared, including our original testing of Flint’s water that exposed the crisis. A social science professor pushed the theory that the timing of my article, which was completed before but published weeks after news leaked of a historic $600 million settlement for Flint children, was suspect.

What if my article had been released before the news broke? The same professor also equated our support for Flint pediatricians’ call not to label all Flint children “lead poisoned,” based on actual water and blood lead data, with attempts to rewrite the gruesome killing of Mr. George Floyd. I sent a polite email asking for clarification, but never heard back. The tweet-storm gained significant traction among US professors working in the water space, many of whom shared the tweets with other academics without fact-checking. How are such actions different from spreading fake news on Facebook?

I highlight these examples to show how easily scientific work can be mischaracterized online. The lack of fact-checking and the click-bait nature of social media allow falsehoods and outrage to fly in our “misinformation age.” The apparent rush to embrace subjectivity and share what one feels about a study, while disregarding the thousands of data points on which that work stands, is worrying. While I personally found many of these comments amusing, the narcissism and lack of due diligence displayed by academics who, for example, comment without reading studies or conflate recommending accurate, data-based labels with a horrific murder, strike me as dangerous, and possibly not that uncommon. Why then should the public trust academia?

In an era of populism, hyper-polarization, and even reckless world leaders, scientific experts are still highly trusted. The general public looks to academics for knowledge that is, above all, evidence-based. Science is a matter of seeking the truth, not of building consensus on Twitter or through open letters. We should not fall in love with our pet scientific theories and political beliefs; in fact, we should actively try to disprove them. The answer to evidence is more compelling evidence, not thinking conspiratorially out loud on Twitter.

We all have behavioral and knowledge blind spots, but academics should do better. I am neither arguing against healthy scientific debate nor pushing to suppress speech, only asking that we exercise care when sharing information online. Social media platforms are rage machines. Let’s not knowingly make things worse.