
Your opinion is the new media

Influencing narratives, shaping discourse and generating headlines around the globe, “fake news” has become an unavoidable feature of today’s media landscape. It’s a phrase that has only entered common vernacular over the past year – not least as a favourite term of President Donald Trump – with Collins Dictionary declaring “fake news” its 2017 Word of the Year. So in an era where conversations around “fake news” are mainstays of public dialogue, how do we know what is real and what is fake, and which stories to trust? The answer, according to Dr Mattia Fosci (PhD International Law, 2014), lies with the people. With the support of the University’s Ingenuity Lab, his team has developed an app which uses the wisdom of the crowd to verify the quality of the news we consume. We caught up with Mattia to discover how Yoop aims to fight the “fake news” phenomenon.

How does Yoop work, and what is the vision behind the platform?

“We created a new way to discover and engage with digital content, which is based on people’s opinions. Yoop (which stands for Your Opinion) lets users tag content with a category, such as ‘important’ or ‘interesting’, and pools all the tags together to determine the quality or value of a story. Tagged stories are automatically shared on the app platform and distributed based on how good people thought they were. The idea is that people’s collective opinion will help good content to surface and bad content to naturally sink.”
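Yoop hasn’t published the details of its scoring, but the idea Mattia describes – pooling everyone’s tags so that well-rated stories surface – can be sketched roughly like this. The tag names and weights below are purely illustrative assumptions, not Yoop’s actual system.

```python
from collections import defaultdict

# Hypothetical weights: positive tags push a story up, negative tags push it down.
TAG_WEIGHTS = {"important": 2, "interesting": 1, "biased": -1, "waste of time": -2}

def rank_stories(tags_by_story):
    """tags_by_story maps a story id to the list of tags users applied to it."""
    scores = defaultdict(int)
    for story, tags in tags_by_story.items():
        for tag in tags:
            scores[story] += TAG_WEIGHTS.get(tag, 0)
    # Good content surfaces, bad content sinks.
    return sorted(scores, key=scores.get, reverse=True)

feed = rank_stories({
    "story-a": ["important", "interesting", "interesting"],
    "story-b": ["waste of time", "biased"],
})
print(feed)  # ['story-a', 'story-b']
```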

With Yoop reliant on the opinion of your users, how will you ensure that the app is inclusive of a broad range of views?

“The mechanism behind Yoop is not based on ‘friends’. Yes, you can share content with friends privately, but the stories you see on your feed are crowdsourced. All Yoop users automatically share content with everyone else, and then the collective ratings determine what’s worthy of being seen. The more you use the app, the more it learns your content preferences and displays the most positively tagged stories on your favourite topics. So if you’re interested in an issue, say Donald Trump or Brexit, it will display the best-rated stories from all sides of the spectrum. You still have an ‘echo chamber’ of issues that you’re interested in, but at least you’ll see different viewpoints on those issues.”

Much of the media content we consume today is opinion-based. How will you ensure that opinion or satirical pieces are not labelled as “fake news” by your users?

“Yoop allows users to report “fake news”, and we’re working on a piece of software to help us verify these stories quickly. But we take a narrow definition of what fake means – it’s the entirely fabricated stories, which are not difficult to fact check. We don’t label as ‘fake’ those stories that are biased or inaccurate. Users have the option to tag those stories as ‘biased’ or a ‘waste of time’, and essentially say “watch out, do not trust everything this story says”. We’ve had a couple of posts from The Onion and other satirical websites that people rated as both ‘inaccurate’ and ‘entertaining’ – if you get both types of rating, there’s a good chance it’s satirical. We love entertaining content, but the fake stories are written to mislead people. And there’s nothing funny about that.”
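The satire heuristic Mattia mentions – a story rated both ‘inaccurate’ and ‘entertaining’ is probably satire rather than a deliberate fabrication – is simple enough to illustrate. The tag names echo the interview, but the threshold and the function itself are assumptions for the sake of example.

```python
def looks_satirical(tag_fractions, threshold=0.2):
    """tag_fractions maps a tag to the share of users who applied it to a story."""
    # A story flagged heavily as both inaccurate AND entertaining is likely satire,
    # whereas 'inaccurate' on its own points towards misleading content.
    return (tag_fractions.get("inaccurate", 0) >= threshold
            and tag_fractions.get("entertaining", 0) >= threshold)

print(looks_satirical({"inaccurate": 0.4, "entertaining": 0.5}))  # True
print(looks_satirical({"inaccurate": 0.6, "entertaining": 0.0}))  # False
```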

We say that collective intelligence plus artificial intelligence equals success – we want our machines to help humans make better decisions

In our efforts to challenge “fake news” and disinformation, how do you think we can ensure that content is accurate but avoid censorship?  

“I think the point is not censoring bad content but giving more prominence to good content. The problem, in my view, has always been one of ‘gatekeeping’: who makes the decision? We were used to a world where editors controlled the flow of information, and now we live in a world where information is filtered by invisible algorithms. If one believes in democracy, neither solution is good enough. Only the people have a right to make that decision, and the role of technology is to make the process as simple, engaging and transparent as possible. There’s evidence that this approach can work. Some people will get it wrong some of the time, but most people will get it right most of the time. I trust a crowd with enough information to make a better decision than an editor sat in a newsroom or an algorithm whose main purpose is to generate revenue.”

How do you think crowdsourcing systems such as Yoop can compete with the interests and resources of groups that are focused on promoting “fake news”?

“Whatever system you try to build, there will always be someone who tries to trick it. Malicious bots and software that have been developed to purposefully skew public opinion and push narratives are a big problem. To combat this, we rely in part on larger platforms like Facebook and Twitter getting better at spotting bots, but we’re also developing software of our own on top of that.

Most importantly, Yoop users have a score, which works as a reputation system and is meant to encourage the community to self-regulate. Users are great at spotting anomalous behaviour, so they will be our best allies.”
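One way a reputation score like the one Mattia describes could feed back into the ratings is by weighting each user’s vote by their standing, so that established users count for more and suspect accounts count for less. The weighting scheme below is an assumption for illustration, not Yoop’s own mechanism.

```python
def weighted_score(ratings):
    """ratings is a list of (rating_value, user_reputation) pairs."""
    total_weight = sum(rep for _, rep in ratings)
    if total_weight == 0:
        return 0.0
    # Each rating contributes in proportion to the rater's reputation.
    return sum(value * rep for value, rep in ratings) / total_weight

# A high-reputation user's 'inaccurate' rating (-1) outweighs two upvotes (+1)
# from low-reputation accounts that might be bots.
print(weighted_score([(-1, 0.9), (1, 0.1), (1, 0.1)]))  # ≈ -0.64
```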

We’re increasingly reliant on artificial intelligence through the systems and tools that we use. What do you think the future holds for human interaction with technological tools such as Yoop?

“We say that collective intelligence plus artificial intelligence equals success. We created artificial intelligence that users can easily control and customise: “I don’t want stories about Bitcoin, I want to read about climate change”. The artificial intelligence may suggest the sort of content you might like, but you can always review and change it. Google and Facebook want humans to help their machines make better decisions; we want our machines to help humans make better decisions.”

Yoop will launch this April – you can download a trial version of the app now and discover more about the story behind Yoop online.