AI and Journalism
Artificial Intelligence and the Future of Journalism in the Philippines

The media landscape is at a turning point: artificial intelligence has spread rapidly across the industry and is increasingly taking over tasks once performed by journalists. Declining readership and revenues, together with the growing spread of fake news, make it necessary to examine AI applications in the media from an ethical standpoint and to ask how they can be used effectively and responsibly.

To discuss the potential and risks of artificial intelligence (AI), the Hanns Seidel Foundation (HSF), in cooperation with the Philippine Press Institute (PPI), organized the seminar "A Race with Machines: Artificial Intelligence and the Future of Journalism" from 22-24 November 2023 for around 30 journalists from different regions of the Philippines.

"Data is the oil, Artificial Intelligence is the engine, and disinformation is the pollution"

Dominic Ligot

Journalists during the workshop to create guidelines for the responsible use of AI

HSF

AI is not a new phenomenon

To use AI effectively as a tool, rules for its management and use are required. Critical thinking in recognizing misinformation and disinformation is particularly important when using social media. AI is not a new phenomenon in Philippine journalism, and its use accelerated during the pandemic; it is employed throughout the news process, from research and analysis to content creation. During the three-day seminar, the journalists learned about the potential of AI applications for their own work and were sensitized to the risks of using artificial intelligence.


Protection for investigative journalists

Investigative journalists in the Philippines are exposed to cyberattacks, and digital authoritarianism is also used as a weapon against them.

Dominic Ligot discusses the functions of algorithms in social media

PPI

Dominic Ligot, member of the Board of Trustees of the Philippine Center for Investigative Journalism (PCIJ) and co-founder of Data Ethics PH, therefore urged journalists to take the protection of their digital identities and information more seriously: to better secure their websites and email accounts against attacks, and to avoid entering confidential or sensitive information into AI tools.


Algorithms in social media

Ligot then shed light on how algorithms work in social media. Algorithms learn the patterns behind data in order to generate and disseminate new content. One problem with AI is the spread of false content, which is amplified by platform reach and algorithmic recommendation. "Today's reality no longer knows any real right and wrong due to the new possibilities of AI," says Ligot. This "infodemic", the flood of often false and unverified information, is becoming increasingly problematic. AI applications are not always reliable, especially on political issues, which is why their output must be used with caution and verified. AI already gained influence in the last presidential election (2022), and the next president (2028) could well be elected with the help of AI.


Q&A session with the participating journalists

PPI

AI applications can make journalistic work easier

AI applications can make the work of journalists easier, especially when analyzing large amounts of data, as they offer a wide range of possible uses. Generative AI applications such as ChatGPT and Bing Image Creator are fast, cost-saving research tools that can support journalists' work by increasing the effectiveness of data analysis. However, the general consensus among the journalists was that AI is not yet a reliable tool for fact-checking.


Guidelines for the responsible use of AI

Data ethics is an important aspect of the use of AI, but social media platforms rarely prioritize it. It is therefore vital that journalists agree on transparency and ethical standards and develop strategies for the use of AI. In a workshop, the participants considered how to use AI responsibly in their journalistic work.

It was agreed that AI should remain under human supervision: journalists should prioritize verification when using AI, ensure that fact-checking is part of the news creation process, and check AI-generated content for factual and contextual accuracy before publication. Journalists and newsrooms should commit to the core values of responsible journalism, applying AI algorithms with integrity, competence, independence, objectivity, fairness and humanity. Based on these considerations, a code of ethics with standards for the use of AI by journalists and newsrooms of the PPI community is to be adopted by April 2024.