Dublin's Kinzen brings its ‘algorithms for editors’ to the disinformation battlefield (armed with $2 million in seed funding)

Covid-19 exacerbated an already vast problem online: disinformation. The problem, which can seem insurmountable at times, has forced large platforms like Facebook and Twitter to take more and more action (with mixed results).

The problem has also given birth to a slew of startups that believe they can help. One of them is Dublin’s Kinzen, headed by Storyful co-founder Mark Little, which has just raised $2 million in a seed round for its “algorithms for editors”.

There’s a common thread in Little’s career of verifying information and parsing the signal from the noise. A former journalist with Irish public broadcaster RTE, he founded news and social media monitoring agency Storyful in 2010, which he would go on to sell to News Corp in 2013 for €18 million.

Storyful’s mission in 2010 was to verify news sources and social media chatter, whether it be in battle zones or the sites of natural disasters.

After selling the company and a sojourn at Twitter, Little’s mission has remained the same at Kinzen, where he is joined by co-founder Áine Kerr.

Kinzen began life on a different path. In 2018, it aspired to be a new incarnation of the curated news app, helping users make sense of their news feed through a subscription service that plucked the best from the worst. But from late 2018 into 2019, Kinzen learned that people “didn’t really want another news app”.

“It’s back to that old Jeff Bezos line: be stubborn on the vision but flexible on the detail. Today it’s the same vision. How do we help citizens become confident of the information in their news feed?” Little told tech.eu.

That flexible detail has seen Kinzen shift toward an enterprise software model rather than going after the everyday news consumer.

It provides its dual model of human editors and AI to the likes of content platforms and online marketplaces to weed out questionable or outright false information and predict trends in future content.

“We realised that these tools we were building had an application not just for the big tech platforms but essentially anybody who has an online conversation or has an online community. That could be an online marketplace, it could be a gaming platform and obviously it could be a big content distribution platform.”

2020 proved to be a pivotal time for the move, as disinformation and the threats it poses could not have been more pronounced.

The wave of falsehoods, fabrications and forgeries that have erupted during the Covid-19 pandemic, from dubious medical commentary to anti-mask rhetoric, has made the lives of editors and fact checkers more difficult. Meanwhile, the US presidential election campaign presented its own brand of deceptions.

Each issue on its own would have been an overwhelming hurdle to clear - but this is 2020 and the hits keep coming. The year presented Kinzen and its clients with real-life practical challenges that brought the startup’s work out of the theory stage and into practice.

“[The platforms] really haven't figured out the way to use human decisions but scale them through AI, and that's where we land.”

Much has been made about AI’s ability to moderate content in place of humans but the reality is that the technology is only as good as the people that create it.

“You can't have a fully automated solution,” Little said.

AI and machine learning have been necessary to collate and segregate vast swathes of information into comprehensible silos, but the human eye is still needed to steer the ship.

Kinzen’s tech and its editors have become adept at analysing text-based information but it’s now pouring more resources into analysing video and audio to identify phrases, images and footage that can set off alarm bells.

“During the election we would have alerted a particular platform to examples of false information around electoral fraud surfacing in the content,” Little said.

“Where we’re supporting is not necessarily for them to take down content,” he said. “We're basically saying, hey, something's going on here. You should either not recommend that piece of content or you might have to consider taking it down.”

The human touch is vital to keep AI in check, he added, especially when it comes to biases baked into algorithms by their creators. By that same token, humans come rife with their own biases.

“If you’re a 25-year-old Irish white guy living in Dublin, how do you judge a conversation between 15-year-olds in Brazil, in their language, in a different culture? That's where we want to be way more transparent and I think that’s where we break new ground by trying to weed out all those biases from these algorithmic solutions and build better algorithms for recommendation systems.”

This dual strategy serves to build a knowledge graph of information, from nuances spotted by the human eye to trends identified by algorithms.

“We're annotating, tagging, tracking the evolution of language and bad actors and dog whistles and putting it into a database that essentially is acting as high-quality training data for AI and ML models,” he said.
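The pipeline Little describes - editors annotating phrases and tracking bad actors, with those annotations feeding a database that doubles as training data - can be sketched in a few lines. This is an illustrative mock-up, not Kinzen's actual schema or code: all field names, labels, and the `to_training_examples` helper are assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One editor annotation: a phrase with its context and a risk label.
    All fields here are illustrative, not Kinzen's real data model."""
    text: str        # the flagged phrase or claim
    language: str    # e.g. "en", "pt"
    topic: str       # e.g. "health", "elections"
    label: str       # e.g. "misinfo", "dog_whistle", "benign"
    source: str      # where the editor spotted it

def to_training_examples(annotations):
    """Flatten reviewed annotations into (text, label) pairs that could
    serve as supervised training data for a text classifier."""
    allowed = {"misinfo", "dog_whistle", "benign"}
    return [(a.text, a.label) for a in annotations if a.label in allowed]

annotations = [
    Annotation("miracle cure kills the virus", "en", "health", "misinfo", "forum"),
    Annotation("great weather today", "en", "other", "benign", "forum"),
]
examples = to_training_examples(annotations)
```

The key design idea in the quote is that the human layer (the annotations) and the machine layer (the models trained on them) share one evolving database, so each new editorial judgment improves future automated detection.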

Little said that these tools are increasingly necessary because the problem isn’t just limited to Facebook, Twitter and the large players with millions of users.

“We’ve seen QAnon, the deep-state conspiracy in the US, hijacking new age yoga and spirituality communities on marketplaces where they're selling things, and we’ve spotted QAnon selling t-shirts,” he said.

“If you have an online community or a conversation that you're a part of, you're exposed to some form of risk from people like far right networks or QAnon, and that's the way we're approaching this.”

As vaccines for Covid-19 begin to reach the public, the disinformation wars are only going to ramp up with anti-vaccine content and conspiracy theories, and Kinzen’s knowledge graph will likely balloon with new content.

“We have the in-house editors that are full time with us or freelancers working for us in different languages and markets,” Little explained. “We then have another editorial network, and these are freelancers or people who are researchers in particular areas like health or tracking neo-nazi groups. We're getting this data from this network, which is building a better knowledge graph, which in turn is now better training data.”

Amid the coronavirus lockdowns this year, Kinzen closed its $2 million funding round, led by Danish media accelerator FST Growth, part of media group Jysk Fynske Medier.

Other investors in the round include Irish investment firm Business Venture Partners (BVP), Traveloka co-founder Derianto Kusuma and Hostelworld co-founder Ray Nolan.

Little said that he and his co-founder Kerr will be keeping the Kinzen team lean in 2021; it currently has 10 staff in Dublin, but will be hiring in engineering and development while expanding its freelance network of editors globally in different languages.

The next year will be focused heavily on securing new clients. Little is tight-lipped on who it is working with other than “two major content platforms” currently. He added that the startup is in talks with other potential clients, such as communications agencies and e-commerce marketplaces.
