DECEMBER 27 — Malaysia’s proposal to prohibit under-16s from creating social media accounts by 2026 is a headline that instantly comforts parents: at last, action is being taken. The intent is clear: to protect young people from cyberbullying, grooming, scams and the wider ecosystem of online harm that has grown more sophisticated and more relentless. However, the announcement alone will not determine the success of this policy. It will be decided by the uncomfortable mechanics of enforcement: how we verify age, how much personal data we collect in the process, and whether the policy reduces real harm or merely pushes it into darker corners of the internet.
The Cabinet of Malaysia has approved raising the minimum age for social media accounts to 16, with implementation expected in 2026. Platforms may be expected to use electronic know-your-customer (eKYC) checks, potentially involving official IDs such as MyKad, passports and MyDigital ID, to verify age at registration. This is paired with a broader push to tighten platform accountability. Malaysia’s communications regulator has stated that major internet messaging and social media service providers meeting the user threshold will be “deemed” registered as ASP(C) class licensees effective January 1, 2026, covering platforms such as WhatsApp, Telegram, Facebook, Instagram, TikTok and YouTube.
Unmistakably, Malaysia wants platforms to assume more responsibility, not just users. That’s the right instinct. Yet the under-16 rule will fail if we confuse a bold headline with a workable system.
The first risk is that teens don’t vanish when you tell them they can’t make an account. They route around restrictions. If official sign-ups become impossible, underage users can still access the same platforms through older siblings’ accounts, borrowed identities, informal “account rental” practices, or by shifting to services and communities with weaker enforcement. A policy that simply changes the registration screen may reduce visible accounts while leaving behaviours and harms largely intact.
The second risk is more dangerous because it feels like “doing something”: identity-heavy age verification. If the main solution becomes “show your ID to go online”, Malaysia could accidentally build a new vulnerability surface for fraud, data leaks and surveillance-style data accumulation. Despite the initial focus on age checks, identity systems often broaden their scope once they are in place. Civil society voices have already raised concerns that mandating eKYC specifically could undermine privacy and legitimate anonymity needs. The question is not whether we should verify age. The question is: can we do so without normalising excessive data collection and retention?
This is where Australia’s recent move matters, because it shows both the momentum behind youth protection policies and the implementation dilemmas they create. From 10 December 2025, Australia’s rules require age-restricted social media platforms to take “reasonable steps” to prevent under-16s from having accounts. Australia’s eSafety regulator frames it as a delay to having accounts rather than a punitive regime, with penalties targeting platforms rather than children or parents. The Australian move was described as a world-first “ban” on social media for under-16s, which immediately triggered debates about its feasibility, free speech, and technology enforcement. The key lesson for Malaysia is that age rules become politically easy to announce but technically hard to execute in a way that is both effective and rights-respecting.
A third risk is definitional confusion. What counts as “social media” in 2026? TikTok and Instagram, certainly. But what about YouTube, messaging platforms, community servers, livestream platforms, or multiplayer games with social features? Malaysia’s own licensing and regulation discussions already bring major messaging and video platforms into scope under certain criteria, reflecting how blurred these categories have become. When the government, parents, and platforms do not share a common understanding of what is covered, enforcement becomes inconsistent, and public trust erodes with it.
Finally, focusing only on age gates can miss how online harm actually happens. Many of the most damaging behaviours are not caused by being allowed to register at 15. Harm happens through direct messages, group chats, algorithmic rabbit holes, impersonation, doxxing, coercion, and the speed at which abuse can scale. Even if we push some users off official on-ramps, the harm engine remains intact without stronger product safeguards and faster response systems.
So what does “making it work” look like? Malaysia should treat the under-16 rule not as a single ban, but as a child safety programme with measurable outcomes. That means scope clarity, so everyone knows what is covered and why. It means age assurance that is privacy-preserving, minimising data collection, limiting retention, and preventing identity documents from becoming the price of participation in modern life. It means placing primary responsibilities on platforms, not criminalising children or turning parents into enforcement officers. And it means pairing the age floor with “safety-by-default” expectations: reduced exposure to unknown messages, better reporting tools, quicker response times, and transparency reporting on what harms are occurring and how they are being reduced.
Most importantly, Malaysia should publish success metrics and be willing to be judged by them. Are grooming attempts down? Are cyberbullying incidents reduced? Are scam targeting patterns shifting? A policy that produces cleaner compliance reports but unchanged harm levels is not a success; it is administrative theatre.
Bans create headlines. Systems create safety. If Malaysia gets this right, it can become a regional reference point for child protection that does not trade safety for surveillance. If it gets it wrong, we may simply push young people into less visible spaces while collecting more sensitive data than ever before. The question for 2026 is not whether Malaysia can announce a rule. It is whether we can implement one that actually protects children.
* Galvin Lee Kuan Sian is a PhD Researcher in Marketing at the Asia-Europe Institute, Universiti Malaya and serves as a lecturer and programme coordinator in Business at a private college in Malaysia.
** This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.