The Promise And Perils Of AI In The Media
Experts weigh in on artificial intelligence and the press at the 2024 EWC International Media Conference in Manila
Quick Take:
- Human judgment will be crucial to ensure journalistic integrity and safeguard against the spread of disinformation as the AI landscape evolves rapidly, experts say.
- On the upside, AI offers powerful tools for digital searches, data analysis, and content repurposing, which can be game-changing for newsrooms.
- On the other hand, regulating online content poses numerous challenges, and tech companies face little accountability for their role in disseminating disinformation.
- Online influencers can sway public opinion and elections, sometimes more effectively than traditional journalism, which can undermine trust in media institutions.
MANILA — (June 28, 2024) Data and deepfakes. Influencers and elections. Hopes and fears. Those were just a few of the topics explored during the East-West Center’s biennial International Media Conference this week in Manila, Philippines.
The future of journalism hinges on how responsibly AI can be integrated in newsrooms, speakers told the more than 400 journalists and media professionals from 30 countries who gathered at the Philippine International Convention Center to attend the conference focused on the theme “The Future of Facts.” While AI offers powerful tools, they said, it needs to be deployed carefully.
The Positives: Data Analysis and Research Efficiencies
News outlets are already finding advantages in using AI, whether for digital searches or dissecting data. The Associated Press’ AI-powered image search function, which can quickly sift through millions of images and videos, is just one example. AI is also particularly useful for data analysis, according to experts like Jaemark Tordecilla, a 2024 Nieman Fellow for Journalism at Harvard University, and Don Kevin Hapal, who heads data and innovation at Rappler, an online newspaper based in Manila.
Hapal said Rappler leverages AI for “civic engagement wherever possible,” while making sure to disclose its AI-powered content to its readers. During the 2022 Philippine elections, for example, Rappler used ChatGPT to create profiles for 50,000 candidates running for public posts. Tasks like these can be delegated to technology, freeing up reporters’ time for other important work, he said.
“We believe that human critical thinking and creativity is the supreme,” Hapal said. “Nothing comes out without being reviewed by humans.”
Generative artificial intelligence also allows newsrooms to repurpose their articles for different demographics, according to Khalil A. Cassimally, the head of audience insights at the nonprofit newsroom The Conversation in Melbourne, Australia. The Conversation’s editorial team recently used AI to repurpose stories written by human reporters into a “microsite,” or landing page website, with information about the recent Indonesian election tailored to a younger audience. Because the tool drew only on the outlet’s own content, the editors found few instances of “hallucination,” or inaccurate information, when conducting fact checks, Cassimally said.
The Perils: Who is Responsible?
At the same time, panelists voiced concerns about tech companies' role in perpetuating disinformation and the profit-driven nature of fake news. They also discussed whether governments have a role in regulating such a phenomenon.
A few recent examples came up in the conversation, including:
- In Sri Lanka, the parliament’s decision to pass a controversial bill regulating online content and internet use among its citizens raised international concerns about restricting free speech.
- In Canada, when the government passed its Online News Act in June, the company Meta avoided paying media companies newly mandated fees by simply blocking news on Facebook in the country. “Facebook said, okay, no more news for Canadians,” said Doc Ligot, CEO of CirroLytix, a technology company based in the Philippines. “So a flip of a switch in Facebook; suddenly an entire country goes dark.”
Ligot likened the power held by AI platforms and technology companies to “the equivalent of a veto in the UN Security Council.” Meanwhile, reputable news websites often sit behind paywalls, while fake news websites are easy to find online and free to access, he said, calling the current media landscape “lopsided.”
Influencer power
Panelists were generally more concerned about the potential power of AI in human hands than about AI replacing traditional journalism jobs. Irene Jay Liu, the regional director for Asia and the Pacific at the International Fund for Public Interest Media in Singapore and former head of Google News Lab for the region, said she is most wary of human use of AI as a tool to spread misinformation.
“I'm mostly skeptical of people,” she said. “People lie all the time. I probably share incorrect information 20, 50 times a day. We all do 'cause we're humans. We should not pretend like humans are the ones that actually are infallible.”
On the other hand, online influencers may now sometimes serve in the roles of traditional journalists, for better or for worse. And that new dynamic has already influenced several elections across the globe, she said, pointing to the Philippines as a recent example. “In the Philippines, we saw how nano influencers were able to shape the voter and connect with people in a way that traditional news wasn't necessarily able to do.”
In general, Liu said, people will likely increasingly rely on their family and friends, chatbots, influencers and content creators for information, rather than the traditional news media. Back in 2016, the former Duterte government in the Philippines even gave online social media influencers presidential press accreditations, providing them the same access as professional journalists.
Syed Nazakat, founder and CEO of DataLEADS in New Delhi, India, said he sees AI as a monstrous threat to democracy due to its power to fuel disinformation and propaganda. But Nazakat also highlighted AI's dual role as both a tool for disinformation in political elections and a means to combat it. “Information warfare is happening at an unprecedented scale where everyone is trying to manipulate each other's thinking,” said Nazakat.
However, during the recent Indian general election, he said, approximately 300 editors, journalists, and fact checkers worked together for four months to form a fact-checking collective called “Project Shakti” to ensure their news was factual and to fight against the machine of disinformation. The collective used an early-warning system to flag content that could go viral. “You can see the difference of that collaboration in the [election] results,” Nazakat said, after the ruling party of Prime Minister Narendra Modi won far fewer parliamentary seats than expected.
Liu said more collaboration like that is needed. “Journalism is hard,” she said. “It's expensive in many countries. It's dangerous. So we need to make sure that that model can thrive into the future.”