How AI is changing journalism

By Lewis Liu | Published 11th April 2026

Artificial intelligence is already part of everyday journalism, even if many people still talk about it as if it were something new. In fact, 56% of journalists use AI tools every week. This means AI is no longer experimental: it is already shaping how stories are written, edited, and distributed. For early-career journalists, this shift can feel confusing at first, because the rules of the job are starting to change in ways that are not always clear.

AI is already being used in real newsrooms, not just in theory. For example, the Associated Press uses AI to automatically produce financial reports from data, which allows journalists to focus more on analysis and storytelling. This kind of automation is especially useful for routine, data-heavy stories, where speed and accuracy both matter. At the same time, many reporters also use AI tools for tasks like summarising documents or generating ideas, which can make their workflow more efficient but also raises questions about how much of the process is still "human".

What are the risks?

While AI can make journalism more efficient, it also introduces new problems that are not always easy to see at first. Some of these risks are already affecting how news is produced and trusted.

One of the biggest concerns is that AI can generate information that sounds convincing but is actually incorrect. In journalism, this is especially dangerous because false details can easily be published if they are not carefully checked. Unlike human errors, AI mistakes can appear confident and complete, which makes them harder to detect and more likely to be trusted.

AI systems are trained on existing data, which means they can reproduce and even amplify bias. This can shape how stories are written, what perspectives are included, and which voices are prioritised. Over time, this may lead to less balanced reporting, even if journalists are not intentionally introducing bias into their work.

There is also a growing concern that AI may weaken public trust in journalism. Research shows that 42% of people are less likely to trust news when AI use is disclosed. This creates a difficult situation for journalists: being transparent about AI use can actually reduce credibility. As a result, reporters may feel caught between adopting new tools and maintaining audience trust.

For those just entering the newsroom

For early-career journalists, these changes are not optional. They are entering the industry at a time when AI tools are already part of everyday work, and younger reporters are also more likely to use them frequently. This creates a situation where AI becomes a default tool rather than a carefully considered one.

The risk is not simply using AI, but using it without enough experience or judgment. Research shows that journalists sometimes publish AI-generated content with limited intervention. For early-career journalists, this is especially risky, because they may rely on AI outputs too quickly, without fully checking accuracy or context. Over time, this can lead to mistakes that damage credibility, even if the intention was to work more efficiently.

What does this mean going forward?

AI is already part of journalism, and it is not going away. The real challenge is not whether to use it, but how to use it well. As shown earlier, AI can help with speed and efficiency, but it also creates risks around accuracy and trust.

For early-career journalists, this means learning to use AI carefully rather than relying on it too quickly. In the end, AI does not replace journalism, but it does change how journalists work, and many are still figuring out what that looks like.
