KUALA LUMPUR, March 5 — The misuse of artificial intelligence (AI) technology, such as deepfakes, to produce inappropriate content, especially those targeting women, is worrying, with nearly 7,000 content takedown applications submitted to social media platforms since 2024.

Deputy Communications Minister Teo Nie Ching said the Malaysian Communications and Multimedia Commission (MCMC) had submitted a total of 6,987 content takedown requests from Jan 1, 2024, to March 1, 2026, with 95 per cent, or 6,657 pieces of content, removed.

“In 2024, 817 pieces of content were taken down, and in 2025, this increased to 3,389. As of March 1 this year, we have removed 2,451 pieces of inappropriate content created using deepfake and AI technologies.

“This has become a trend. With technologies such as deepfakes, we are seeing more women becoming victims. So, this is an increasingly worrying trend,” she told reporters after attending the 115th International Women’s Day celebration at UCSI University here today.

Teo said the MCMC will start enforcing a framework related to online safety this year under the Online Safety Act 2025 (ONSA) to strengthen user protection, particularly for women and children.

“The MCMC has tried to develop regulatory instruments to ensure we can hold platform providers or service providers accountable. This is so that inappropriate content, pornographic content or content that has a tendency to victimise women or children can be taken down faster,” she said.

At the same time, she encouraged students to continue to capitalise on technology creatively and responsibly to contribute to a more inclusive and fair society.

“I hope they (the students) will continue to showcase their faith and creativity in this industry so that we can see a world with more gender equality, where we can see that men and women are all equal,” she said.

Earlier, Teo, in her speech, said AI tools trained on existing gender biases are enabling violence against women to spread further, faster and in more complex ways, worsening technology-facilitated abuse globally.

She said the world already lives in a reality where at least one in three women experiences physical or sexual violence, and the emergence of extremely powerful AI tools has created what she described as a “perfect storm”.

“While technology-facilitated violence against women and girls has been intensifying, with studies showing 16 to 58 per cent of women worldwide impacted, AI is creating new forms of abuse and amplifying existing ones at alarming rates.

“That’s why under the Madani Government, we have strengthened safeguards for women both online and offline through the Online Safety Act 2025 and the Anti-Bullying Act 2025, which will enhance enforcement mechanisms against harmful content, online harassment, cyberbullying, scams and digital exploitation.

“They provide clearer compliance obligations for platforms, stronger investigatory powers for regulators, and firmer penalties for offenders,” she added. — Bernama