
Navigating the wild west of scicomm: is your online content actually any good?

Title: Exploring ‘quality’ in science communication online: Expert thoughts on how to assess and promote science communication quality in digital media contexts

Author(s) and Year: Birte Fähnrich, Emma Weitkamp and J. Frank Kupper, 2023

Journal: Public Understanding of Science (open access)

 

TL;DR: Fähnrich, Weitkamp and Kupper address the lack of rules in online science communication and present a framework to help scicommers evaluate the quality of their work. However, they found disagreement amongst experts over criteria such as accuracy and effectiveness, and scicomm scholars also appear to neglect emerging digital mediums, such as social media and blogs.


Why I chose this paper: In a previous SciCommBites post, I explored the challenges traditional science journalism and communication face in the digital age. But if even well-established media struggle to adapt, how are influencers and bloggers, who lack similar support, expected to cope? I had hoped this paper would guide these creators; instead, it revealed a significant gap in academic guidance for a substantial portion of online scicomm.

 

The Issue: Where is the rulebook?

Content that you make whilst sitting in bed could reach millions at the push of a button, but as any science communicator knows, communicating online isn’t as easy as it sounds. Whereas traditional media often have rules and regulations to guide communicators, online creators such as bloggers and science influencers often act independently, without support or regulation.


Meanwhile, many online consumers fail to distinguish between journalistic and non-journalistic content, and some studies suggest that certain publics prefer non-traditional media as a source of information, with apps like TikTok becoming a go-to. This means that, despite the lack of a rulebook, emerging digital mediums such as social media and blogs are competing with traditional ones.


Fähnrich, Weitkamp, and Kupper recognised this as an issue, highlighting how unregulated emerging digital mediums have created a "wild west" of science communication. Thus, they set out to create a framework informed by academic theory to help scicommers assess the quality of their and others’ work and distinguish between the good, the bad and the ugly.


The Method: A two-pronged approach

To develop this framework, Fähnrich, Weitkamp and Kupper contacted 31 prominent science communication researchers, whom they deemed science communication “experts” capable of judging quality scicomm. These scholars were surveyed twice. The first survey was composed of open-ended questions assessing individual scholars’ views on:

  • the definition of online scicomm,

  • the most important quality criteria for online scicomm,

  • the differences between online scicomm and other mediums, and

  • how scicomm quality could be assessed, and whether this was even possible.

The second survey built on the results of the first, attempting to gain further input on these initial findings to help finalise a single framework that could be used to independently assess the quality of a piece of scicomm.


A mixed bag of results

However, despite hoping for consistency in responses from science communication academics, the authors found significant disagreement, a fact they said echoed the conceptual conflicts currently plaguing science communication research.


A key finding was the varying stress placed on accuracy as a quality indicator. Some scholars held the “traditional/science-centric” view that informational accuracy was the primary, if not the only, criterion for identifying quality scicomm. Conversely, many scholars stressed effectiveness as a primary indicator of quality. This term was left undefined but was often said to depend on a piece of scicomm’s “objectives and target audience.”


An intriguing notion highlighted by the study was the association between effectiveness and competition. Many scholars stressed that competing for consumers’ attention becomes more important as the quantity of science communication content increases. As one participant put it,

"...if audiences don’t pay attention to something, it kind of doesn’t exist,"

suggesting accuracy alone isn’t sufficient. Engagement was also spotlighted as a primary indicator of quality, though it too was often left undefined. The tension between accuracy and engagement was highlighted nicely by one participant, who suggested,

“Being wrong could motivate certain audiences to engage with the material more stridently than being correct,”

demonstrating a conflict between wanting to provide correct information and the inherent value of starting and maintaining a discursive scientific dialogue. This statement shows how differing beliefs about the goals and purpose of science communication can affect what is considered an indicator of quality.


To acknowledge this variation in opinion, Fähnrich, Weitkamp, and Kupper grouped their results into a set of five “meta-criteria for quality assessment,” as follows:

Meta-criterion: Most important notions

  • Content: Relevance, accuracy, completeness, objectivity, truthfulness

  • Presentation: Accessible language and style, engaging communication

  • Technical: Opportunities for dialogue and feedback, technical accessibility

  • Context: Clear purpose/motivation, the expertise of sources, transparency, reliability of evidence

  • Process: Effective, having defined goals, adhering to standards, and capacity for evaluation

Table 1: Meta-criteria for scicomm quality chosen by Fähnrich, Weitkamp and Kupper. All wording has been taken directly from the paper, except for the criterion “process,” which has been adapted here for better comprehensibility.


A missed opportunity (maybe academia is a bit old-fashioned…)

Fähnrich, Weitkamp, and Kupper clearly hoped their framework would help navigate the newly emerging, and thus “more challenging,” digital mediums, such as social media. However, they found that scholars preferred to focus on already well-studied mediums such as journalism, PR, and scientists’ public engagement, passing up opportunities to discuss emerging digital ones.


This neglect was particularly evident in the second survey, where participants were asked to discuss the relevance of the five meta-criteria in six different situational settings. Here, all except two of the 31 scholars chose not to comment on either


An influencer’s post on Instagram presenting spectacular scientific experiments,


or


The blog of environmental activists citing scientific studies to strengthen their argument.


The authors stated that this was significant, as these were the two situations that differed most from traditional journalistic and academic practices, making the failure to comment on them a “missed opportunity.” They called this result “astonishing” and indicative of a broader knowledge gap within the field, raising the question of whether science communication experts are struggling to keep pace with developments in the digital media world.


What does this mean?

Fähnrich, Weitkamp, and Kupper’s paper opened with a discussion of how “emerging” digital content is changing the face of science communication and increasingly shaping societal perceptions of science. Although the participants of this study neglected these mediums, the framework created is a massive stepping stone towards helping science communication practitioners simplify the complex process of generating online communication.


Influencers and bloggers could also still use the framework to add credibility to their content by showing that they meet the same standards expected of traditional media. Furthermore, by illuminating these knowledge gaps, the study demonstrated the need, and the space, for emerging researchers to study emerging science communication practices and help bring science communication research into the 21st century. But our scholars had better hurry because, at this rate, by the time we work out what makes a “good” Instagram post, everyone will have moved on to TikTok.


Edited by: Sarah Ferguson and Niveen AbiGhannam

Cover image credit: Leah Kelley via Pexels
