Exercise caution when relying on AI-generated information, says lawyer
KUALA LUMPUR - Media entities operating in Malaysia are urged to exercise caution and critical discernment when relying on information produced by ChatGPT or similar Artificial Intelligence (AI) language models, says Selangor Bar representative Kokila Vaani Vadiveloo.
The advice stems from the fact that such output is generated solely by algorithms, without human verification.
The lawyer said that while AI models like ChatGPT can furnish valuable insights and information, they should be viewed as tools rather than infallible sources of truth.
"News organisations often exercise heightened caution and diligence when publishing articles concerning court decisions or matters involving presiding officers, such as Magistrates or judges.
"The application of such care and caution may be missing when news reports are being churned out by an algorithm such as ChatGPT,” she told Bernama in an exclusive interview at Wisma Bernama recently.
ChatGPT, an AI-powered chatbot released by OpenAI in November 2022, is built on a large language model that uses algorithms to identify patterns and structures in vast amounts of raw, unclassified text data.
In response to inquiries about how Malaysian copyright law addresses AI-generated news content, Kokila Vaani, former Selangor Bar chairman, stated that the legal landscape in Malaysia lacks clarity on whether AI-generated news content is safeguarded by copyright laws.
She noted that several factors come into play when determining its eligibility for protection.
"Human intervention in the content creation process is one factor under consideration. If AI-generated news content is primarily the result of algorithmic processes, it is less likely to enjoy copyright protection.
"Originality is also a key consideration; if the content merely replicates existing information, copyright protection is unlikely,” she said.
Kokila Vaani cited a notable case from American jurisprudence involving Kris Kashtanova, which delved into the concept of 'human intervention'.
"In this case, Kashtanova typed instructions for a graphic novel into an AI programme, sparking a heated debate over who created the artwork: a human or an algorithm. Kashtanova received a copyright initially, but it was later revoked by the US Copyright Office, which found that Kashtanova’s work was ‘not the product of human authorship’,” she added.
Regarding regulations and guidelines governing AI in news reporting in Malaysia, Kokila Vaani noted the absence of specific provisions.
Nevertheless, she highlighted several existing laws and principles applicable to AI-powered news reporting, including the Malaysian Communications and Multimedia Act 1998 (MCMA), the Personal Data Protection Act 2010 (PDPA), and the Penal Code.
These laws, she emphasised, are crucial for news publishers to consider when employing AI-powered news reporting.
The MCMA prohibits the dissemination of false or misleading news, the PDPA mandates obtaining consent before collecting personal data, and the Penal Code addresses criminal defamation.
The lawyer also stressed that transparency is paramount for news organisations in ensuring compliance.
"When utilising AI for news reporting, entities should openly communicate how they collect and utilise personal data. They are advised to disclose the data collection process, its application and the measures in place to safeguard privacy.
"Obtaining explicit and informed consent from individuals before collecting their personal data for AI-powered news reporting is of utmost importance," she emphasised.
Prior to this, Malaysian Communications and Digital Minister Fahmi Fadzil had said the government is looking into the need to establish a regulatory framework for AI to address ethical issues related to the use of the technology.
Fahmi said the establishment of the framework would help the government understand some of the challenges of using the new technology. - BERNAMA