Robots in the newsroom: The good, the bad and the (un)ethical

Thomas Kent, journalism professor at Columbia University

News organisations of all sizes are using artificial intelligence (AI) to automate elements of the news gathering and story-writing process. From algorithms that sort data in search of scoops, to machine learning that makes transcribing interviews a snap, AI is transforming how journalists work, and what audiences see.

To technologists, and some news executives, these innovations are essential for saving a struggling industry, as they promise to deliver news faster and in more targeted ways than ever before. But some experts, including Thomas Kent, a journalism professor at Columbia University and a former CEO of Radio Free Europe/Radio Liberty, caution against such black and white assessments. Automation has many advantages, Thomas says, but it also has the potential to do great harm. Sourcefabric caught up with Thomas at a recent news-industry conference in Prague to learn what he sees as AI’s benefits, and risks, to the news business.

Depending on who you talk to, AI is either a silver bullet for journalism’s troubles, or the final nail in the coffin. Which is it?

AI isn't sentimental, and it isn’t ideological. It can be used for good or for evil. It can be used to replace journalists or to strengthen journalists. It can be used to get the truth out, and it can be used for disinformation. It’s a neutral piece of technology that can be adapted for almost anything.

Meaning, for the news industry, the technology is only as good as the data that’s feeding it?

That’s one key thing. As they say in the computer world: garbage in, garbage out.

Given that, can you walk us through the current trends – what are news organisations doing with AI, and where are they headed?

There are all sorts of ways that AI is developing [in the news industry]. One common thing news organisations are doing is using AI to write stories, but in that respect, I think we may have hit a stopping point. Many news organisations are engaged in this form of journalism, generating thousands of personalised or regionalised news stories from structured data, such as company earnings reports and sports results. These are very simple stories, where the data is relatively easy to manipulate without too many ethical problems.

But there are limits to it. While there is certainly no limit to the volume of stories machines can write, when it comes to writing them ethically and in a balanced way, I don't think the technology is evolved enough, or likely to be very soon.

AI isn't sentimental, and it isn’t ideological. It’s a neutral piece of technology that can be adapted for almost anything.

How else are journalists using AI?

The most promising AI tools aid journalists in doing their regular work, such as tools that go through large amounts of data and find facts that can be compared. For example, if you’re looking at statistics in a speech by the president of any country, you can immediately find, through AI, how those numbers have been used before, and essentially deduce if the president is telling the truth or if he or she is using skewed statistics, or statistics that are contradicted by other statistics.

This is one example of the use of AI in fact-checking. It is already extremely common, but it is also an area that still has a long way to go. For instance, most of these programs only allow you to check numbers. Soon, however, it will be possible to check whole assertions through natural language analysis.
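To make the numbers-only checking Kent describes a little more concrete, here is a minimal, hypothetical sketch (not any particular vendor's tool): pull the figures out of a statement and compare them against a reference dataset of previously reported values. The topics, reference figures and tolerance below are invented purely for illustration.

```python
import re

# Hypothetical reference data: figures as previously reported by official sources.
REFERENCE = {
    "unemployment rate": 5.2,   # percent
    "annual growth": 1.8,       # percent
}

def extract_numbers(text):
    """Return every number mentioned in a statement, as floats."""
    return [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]

def check_claim(topic, statement, tolerance=0.1):
    """Compare the first figure in a statement with the reference value for a topic."""
    numbers = extract_numbers(statement)
    if topic not in REFERENCE or not numbers:
        return "no reference data or no figure found"
    reference, claimed = REFERENCE[topic], numbers[0]
    if abs(claimed - reference) <= tolerance:
        return f"consistent: claimed {claimed}, previously reported {reference}"
    return f"flag for review: claimed {claimed}, previously reported {reference}"

# Example: the claimed figure differs from the reference value, so it is flagged.
print(check_claim("unemployment rate", "The unemployment rate has fallen to 3.9 percent."))
```

Checking whole assertions, as Kent anticipates, would require natural language analysis rather than this kind of simple number matching.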

Then there’s AI that helps a journalist prepare a story by bringing up all sorts of photos, videos and related content, so they can see more broadly the landscape of the subject they're writing about. Finally, there are many new forms of distribution, such as by voice request, or glasses that project an image of news or video in front of the viewer. These technologies rely very heavily on AI to function.

It's great that news organisations can produce all of these stories with AI, but can they distribute them and make money doing so?

In some ways, the distribution is easy. For example, it’s possible to personalise what people see in their social-media feeds, or personalise what people hear on their smart speaker, or what they see on their glasses or smartwatch. We have many intensely personal platforms now, so reaching people individually is much less difficult than it was in the past.

Monetisation is another issue entirely. It's all fine to distribute content to audio-visual social networks and platforms in a tagged way, but if you are trying to make money, then the material would have to be valuable enough to the platform that consumers would be willing to pay for it or at least spend significantly more time on the platform. This is the same conundrum that we face with news organisations’ articles being on Google News. Sure, Google News can do a great job of formatting and sorting articles and making them visible to various readers, depending on their interest. But how, exactly, do news producers get money out of this? That is the problem.

You mentioned that there are ethical aspects of AI-assisted journalism. Can you explain?

The ethical problems arise when the software is tasked with writing more sophisticated stories than it is capable of, or when news organisations decide to use AI to process data and generate stories from sources that are not reliable. Say you allowed political parties to give you structured data on the appearances of their candidates. You would wind up with a lot of stories, but they would essentially be digital press releases. There's nothing in that structure that would provide balancing material or point out lies the politician may tell.

There are all sorts of ways that the mischievous could game information, and people, unfortunately, tend to believe the first thing they hear. If the initial information that somebody gets is incorrect, it will take a long time for the truth to catch up. For example, if someone figured out how news organisations processed results – of a sporting event, or even an election – they could hack the system so that the first report most people see on their phones is completely incorrect. Later on, the mistake will be seen and corrected, but by then people won’t know what to believe. It can be very dangerous.

There are all sorts of ways that the mischievous could game information, and people ... tend to believe the first thing they hear.

Is it possible to defend against that kind of abuse?

It’s hard because, on the one hand, the ethical problem is obvious, but on the other hand, the siren song of immediate, personalised and high-volume content is very, very attractive. The world is full of news organisations and websites that desperately need content, the more local the better. It's very hard for them to resist taking something [generated by AI] that is local and regionalised, even if it isn’t very reputable. But just because it's unethical doesn't mean a lot of people aren't going to do it, which could lead to a cheapening of the quality of content.

It sounds like a bit of a Catch-22. With so many local news outlets shrinking or closing, AI news seems like the perfect solution to their content needs and staffing struggles.

Right, but I wouldn't assume that larger organisations are immune from the temptation, either. They all have or want to have regional editions and individualised content. At the margins, they may well accept just about anything. A good example is youth sports. Sometimes news organisations have robotic systems where parents or coaches can enter data about a local game or match, and it gets written up as a story. Now, you're assuming that the coach or the parent involved is going to put in the right score, but you don't know that. That's a risk that many news organisations feel like they can live with, but really? It's outsourcing a certain amount of editorial discrimination.

But isn't there something about automation making things "easy" that makes human journalists less inclined to look into the sources of the data?

That’s what we have to struggle against. It’s "easy" and "cool" versus the right thing to do.

Assuming that these ethical quandaries can be addressed, who should develop these technologies? We've seen a lot of media outlets selling the technology they use internally to their own rivals, but is this a healthy trend, or should we leave technology to the tech companies?

I would look at it from the standpoint of the end user who is buying another company’s system. If a large news company is developing a system basically for itself and will sell it out the back door to anybody who wants it, that’s fine. But if you're the purchaser, and you need a certain change made for your own purposes, what priority will that request have? If the seller’s tech team is basically devoted to the main publication, what kind of service are you going to get?

If you're dealing with somebody whose heart and soul is devoted to their own publication, as a buyer you would want to be very certain that the arrangements you have in place ensure your needs get appropriate attention – not only when you first state them, but as future versions of the software are rolled out. That said, if another publication’s system exactly fits your needs and you feel that will continue, it may make sense.
