Managing misinformation in food and drink

2023 research from Ipsos found that across 16 countries, 56% of internet users use social media as their primary source of news, and 68% report disinformation to be most widespread there. (Getty Images)

The last few months have raised pressing questions around accuracy and truth, prompting scrutiny of the influence of social media platforms, podcasts and misleading headlines.

You’ll no doubt recall that the end of 2024 saw some UK consumers tipping milk away after misguided concerns were spread online over the safety of the feed additive Bovaer. Meanwhile, entrepreneur and Diary of a CEO host Steven Bartlett was challenged by a BBC investigation alleging his podcast was spreading harmful misinformation.

The following month, which marked the start of Veganuary, a number of media outlets published headlines that appeared to promote the idea that animal-based food is better for you. In response, several representatives from the plant-based sector took to social media to air their dissatisfaction and dispute the claims.

More recently, we’ve seen the emergence of a new report from The Animal Law Foundation which claims “pervasive misinformation” is misleading consumers about farming standards in the UK.

What are misinformation and disinformation?

The UK Government defines disinformation as the ‘deliberate creation and spreading of false and/or manipulated information that is intended to deceive and mislead people, either for the purposes of causing harm, or for political, personal or financial gain’.

Misinformation is defined as the inadvertent spread of false information.

In its Global Risks Report 2025, the World Economic Forum pinpointed these two issues as top short-term risks for the second consecutive year.

But identifying inaccurate information can be tricky, as Harith Alani, a professor of web science at The Open University and director of the Knowledge Media Institute, explained: “Although the science for detecting and tracking misinformation is maturing, predicting the emergence of misinformation and how it will spread and where is far more challenging.

“We still don’t have the appropriate tools and technologies to help us foresee misinformation and be more proactive in combatting it.”

Plant-based vs the media

Within the food sector, plant-based has been subject to particular debate, having recently been caught up in the backlash against ultra-processed foods (UPFs).

“If you look at the current attitudes towards plant-based products and brands, the narrative is that they are all UPFs – and this is down to constant media attacks, misleading headlines and influencers being able to say what they want with very little fact checking needed,” Mitch Lee, senior national account manager for frozen fruit and veg company Pack’d, told Food Manufacture.

He pointed out how generalisations can cause ripples across a category, tarring all products (no matter their nutritional value) with the same brush: “Tofu and tempeh are natural protein powerhouses and compared to some vegan burgers or sausages they are vastly different in terms of ingredients and nutrition.”

Lee has been among those in the plant-based community calling out recent media headlines. In a January post on LinkedIn, he flagged particular concern over a BBC headline, ‘A glass of milk a day cuts bowel cancer’, which he described as timely ‘clickbait’.

“The first sentence is ‘A large UK study has found further evidence that people with more calcium in their diet – equivalent to a glass of milk a day – can help reduce their risk of bowel cancer. The researchers analysed the diets of more than half a million women over 16 years and found dark leafy greens, bread and non-dairy milks containing calcium also had a protective effect’,” his post read.

In short, if you were only to read the headline you would be led to assume that dairy is better for you; yet as the text points out, alternative milks enriched with calcium also had a positive outcome.

“Leading with the milk statement is baffling,” Lee told Food Manufacture.

Other headlines were also queried within the LinkedIn community during January, by Alex Robinson, CEO at Hubbub, and Dr A. Driando Ahnan-Winarno, co-founder and CTO of Better Nature. These included ‘Drinking plant-based milk increases risk of depression’ and ‘The real cost of those vegan staples’ from The Times, and ‘Vegans are more likely to be depressed, study suggests’ from The Telegraph.

Both industry representatives made similar arguments to Lee, claiming that the information pulled from their respective research studies had been misinterpreted.

“My takeaway is to always double-check the ‘study’ cited,” wrote Ahnan-Winarno. “If journal articles seem too overwhelming due to them containing lots of words, at least read the conclusion part. Or… Listen to the researcher.”

“Knowing how to read into the data and studies being reported is important,” agreed Lee – although he acknowledged that isn’t always easy.

The social network

But it’s social media where concerns are arguably most keenly felt. In November 2023, market research company Ipsos surveyed internet users across 16 countries and found that 56% use social media as their primary source of news. Sixty-eight percent reported that disinformation was most widespread through this medium.

A 2024 parliamentary report, ‘Disinformation: sources, spread and impact’, outlines how media algorithms can increase the spread of disinformation by amplifying content with high user engagement. One study tracked the spread of fact-checked false news stories on Twitter (now X) between 2006 and 2017. Within the sample, inaccurate news stories spread faster and more widely than credible information.

“Social media platforms really need to have a higher standard of responsibility in terms of the information which they’re publishing, or allowing to be published on their platforms,” said food policy expert and content creator Gavin Wren.

“We already have standards for health claims in advertising with the ASA which limits the kind of claims that can be made about specific ingredients [although as Food Manufacture flagged recently that has its limitations too].

“There has to be demonstrable evidence that science supports the claims being made before they are allowed. Social media has rapidly become so influential in terms of food and diet, while policy and regulation takes a long time to catch up.

“Big tech has been able to develop in a regulatory wild west for much of the last 25 years and created a new environment under their own rules; however, we’re reaching a point whereby the responsibility of those platforms to their users is becoming more apparent and we’re starting to see more legal challenges in that respect.

“I feel that social media platforms largely shirk responsibility for the content which users are posting, even when it’s harmful. However, publishing comes with responsibility and Meta’s Musk-esque retreat from fact checking is only going to allow even more misinformation to spread.”

Wren is referring to the announcement made at the start of the year by Meta chief executive Mark Zuckerberg – whose company owns the likes of Facebook and Instagram – that fact checkers will be ditched across its platforms.

Zuckerberg had previously described this system as ‘industry-leading’ during a testimony to Congress following the 2021 US Capitol riots over false claims the presidential election had been rigged.

In place of the original fact checking programme will be a new system inspired by X’s ‘community notes’, which passes the power of adjudication to users.

“People will have the ability to share any narrative, news, or information they want as if it’s truth with no repercussions,” said Lee. “It’s a scary time with the rise of AI too, you don’t even need a physical person to be recording a video now, you can have an AI avatar that looks and sounds like a real person share the news for you!”

“It is deeply concerning that companies like Meta and X have relinquished their commitment to distinguishing fact from fiction,” agreed Alice Johnson, co-founder and chief scientific officer at Rooted Research Collective.

“Instead, they promote an entirely online existence detached from our shared, factual reality – exemplified by the metaverse. These platforms have an essential role to play in addressing the flow of misinformation and their retreat from this responsibility only exacerbates the issue.”

What can businesses do to protect against misinformation?

The key to beating misinformation is “transparency in communication” according to Wren.

Misinformation thrives on a lack of detail or nuance about a topic; it exploits the doubts and fears of people to make them follow it.

Gavin Wren, food policy expert and content creator

“As a business I’d be considering how to rebut common misinformation and have that available on websites or socials.”

But Wren acknowledged that it’s a “hard fight”, with misinformation sometimes taking on a life of its own, as it did with Bovaer, despite plenty of data demonstrating the additive’s safety.

“Prevention is always better than addressing the aftermath of misinformation,” added Johnson. “A collaborative approach is key. Building trust means working with social media users and the wider public, promoting a more open and inclusive response to misinformation. Instead of simply telling people to accept ‘the facts’, we should engage them as active participants in creating solutions.”

One such approach is already in action – the Freedom Food Alliance, which focuses on challenging misleading headlines using referenced sources.

For Lee, it’s important not to get too bogged down and focus on what we can do: “We can spend all our time worrying about the ‘what ifs’ but, ultimately, food and drink businesses should just focus on what’s in their control: sharing transparency around supply chain, ingredients, packaging, emissions, nutrition – and stick to their truths.”

Whilst Meta may have decided to take a U-turn, other work is underway to help protect against mis- and disinformation. Professor Alani’s team, for example, has just been awarded a sizeable research grant to look into detecting and tracking misinformation. The project will explore whether AI can be used to predict misinformation based on past patterns and templates.

But he also acknowledged there is more work that can be done, including: “implementing credibility tracking for accounts, promoting individuals who consistently share accurate information, and demoting those who propagate known falsehoods”.

Accountability must extend to everyone, including influencers, politicians and other public figures.

Harith Alani, professor of web science for The Open University and director of the Knowledge Media Institute

He continued: “Tools are needed to empower people to verify information more easily and to deliver legitimate fact-checks to those who share misinformation, precisely at the time and place it is disseminated.

“These steps could foster a healthier information environment and reduce the opportunities for individuals and groups to exploit misinformation as a lucrative business model, a practice that has unfortunately become prevalent on some social media platforms today.”