Govt needs to draft laws to govern deepfake porn, experts say

DIANA OTHMAN
09 Feb 2023 07:30am
Photo for illustrative purposes only. Photo: 123RF
SHAH ALAM - Legislation to govern the issue of deepfake porn is urgently needed, as the dissemination of false information could cause psychological harm, experts say.

Deepfake pornography is pornographic material in which advanced video-altering techniques are used to superimpose someone's face onto another person's body.

Lawyer Megat Syazlee said the government should establish a comprehensive legislative framework to prevent the malicious use of deepfakes.

"This law could include Artificial Intelligence (AI) technology regulation, criminalisation of the creation and dissemination of deepfake content that spreads false information, defames individuals, or violates privacy, and imposing liability on technology companies for hosting such content on their platforms," he told Sinar Daily.

He said it would be difficult to say whether Malaysia's current laws are sufficient to counter deepfakes, as the issue is complex and spans technology, politics and privacy.

Megat explained that Malaysia has made efforts to address this issue, with the government announcing plans in 2020 to draw up laws that govern the use of deepfakes with an emphasis on preventing the spread of false information and safeguarding individuals' privacy.

However, he said that the specifics of these laws, as well as their effectiveness, remain unclear.

AI-generated deepfake pornography is likely known to some Malaysians, as the technology has gained international attention and the problem of manipulated media is not limited to any one country; however, awareness and understanding of the topic may differ between individuals and communities.

"Disseminating false information can destroy reputations and mislead the public, while deepfakes can create a sense of distrust and uneasiness, inflicting psychological trauma on those who are affected," he said.

"This emphasises the importance of people being aware of the possible dangers of deepfakes and verifying the legitimacy of internet content before sharing it," he added.

According to criminologist Nadiah Syariani, the act of creating AI-generated deepfake pornography is not only a criminal offence, but it also has civil implications.

"Firstly, creating such content without permission or knowledge of the individual whom face is being used or exploited, is a crime of stealing and faking one’s identity," she said.

She said the circulation of such content is a further offence relating to the wrongful use of technology and multimedia, which in Malaysia falls under the Communications and Multimedia Act.

"For example, if the images or videos used the face or any body parts of a child, it’s a sexual offence against children for creating child pornography content, as recognised in the Sexual Offence Against Children Act 2017 in Malaysia," she added.

She also said that being featured in such videos or images is gravely embarrassing, and that it can also amount to criminal harassment by damaging a person's reputation.

Nadiah agreed that misuse of AI technology can lead to more cybercrime cases, saying that nothing is impossible with today's technological advances and that AI learns quickly, improving each time a weakness is addressed.

"It would be challenging not only to detect the false information and gather cyber-related evidence but also open more room for the perpetrators to escape," she pointed out.

She also said AI technology can be misused to "improve" and get "creative" with the modus operandi of existing cybercrimes such as scams, harassment, identity fraud and the circulation of fake news, to the point of challenging personal and national security systems that rely on biometrics.

She said everything shared online is effectively permanent, as it can be spread through shares, likes, comments and screenshots; the incident may end, but the damaging effects last forever.

"This worrying trend is not only directed against women, who are being sexually exploited in a way that they do not want, but children too are at risk of being vulnerable targets of this scheme," she stressed.

Nadiah was of the opinion that most people are unaware of this issue, especially those who are not technologically savvy.

"My concern goes for younger children who may have basic skills using technology but lack comprehension of the consequences of their online behaviour and activities," she said.

"They have a poor understanding or judgement of the consequences of simply sharing their personal information, images, and videos with good intention, not knowing that these are being used for making AI-generated porn," she added.

Earlier this week, a Twitch streamer named Atrioc came under fire after he inadvertently revealed that he was subscribed to a website that can create explicit AI images.

The site apparently offers deepfake explicit images of several prominent streamers, including Pokimane, Maya Higa, and QTCinderella.

As the backlash against Atrioc began to gain traction online, he issued a tearful apology video with his wife in which he addressed the situation.