MCMC Online Safety Act Proposals Draw Industry Concerns Over Age Verification, Content Rules

14 May 2026 • 2:35 AM MYT
Pokde.Net

The Malaysian Communications and Multimedia Commission’s (MCMC) proposed online safety framework is receiving broad support from industry players and civil society groups, although stakeholders are urging the regulator to adopt a more proportionate and flexible approach to enforcement. The concerns were outlined in MCMC’s Public Consultation Report on Proposed Draft Codes Under the Online Safety Act (ONSA) 2025, released on 29 April 2026.

MCMC: ONSA Receives Broad Support, Stakeholders Say Privacy Concerns Remain

The report covers two proposed codes under ONSA: the Risk Mitigation Code (RMC) aimed at reducing exposure to harmful online content, and the Child Protection Code (CPC) which focuses on online safety measures for children. According to MCMC, respondents generally supported efforts to strengthen online safety protections in Malaysia, especially for children and vulnerable users; at the same time, many submissions emphasized the need for clearer definitions, practical implementation timelines, and safeguards for privacy and freedom of expression.

The consultation attracted submissions from technology companies including Meta, Google, TikTok, Roblox, X Corp (Twitter), and Rednote (Xiaohongshu / 小红书), alongside local telecommunications providers (Time, TM, CelcomDigi, Maxis), civil society organizations, UNICEF, the Human Rights Commission of Malaysia (SUHAKAM), the Malaysian Bar, and various other groups.

Feedback On Risk Mitigation Code (RMC)

Wording & Enforcement Mechanism

One of the main concerns raised involved the definition of “harmful content” under the proposed Risk Mitigation Code. Respondents warned that categories that are too broad or insufficiently defined could “unintentionally capture lawful and socially valuable expression, including legitimate journalism such as investigative reporting on public wrongdoing or coverage of racial, religious and political tensions.”

Respondents also argued that “content-centric obligations” such as content moderation, algorithm testing, advertiser verification and user-level controls are more suitable for platforms that host or recommend user-generated content, rather than network or infrastructure providers (such as ISPs) with limited visibility over content.

Industry players also pushed for a risk-based regulatory framework that takes into account the size, role, and technical capabilities of different services, while several submissions also called for a 12-month transition period after the codes are finalized to allow companies to update systems and compliance processes.

Advertiser Verification

Advertiser verification requirements also raised concerns among platforms and business groups, particularly for international advertising operations which involve multiple jurisdictions and legal regimes. While stakeholders acknowledged that verification could help address scams and fraudulent advertisements, many favored targeted, risk-based measures instead of blanket one-size-fits-all requirements.

One example is requiring financial advertisers (i.e. investment-related services) to "demonstrate that they are appropriately licensed and legitimate entities." That said, respondents argued that such an approach raises questions about how regulatory oversight would be carried out, including verifying license validity, monitoring activity, and aligning with existing regulatory frameworks. Increased compliance costs were also cited as a potential downside that could affect foreign trade and business activity conducted within the country.

AI-Generated Content Disclosure

On AI-generated content, respondents said current technologies are still unable to reliably detect and label all AI-generated material, especially when content is reposted across different services.

The report notes that some stakeholders warned visible labels "may easily be edited out or stripped by bad actors," while excessive labeling could become a boy-who-cried-wolf situation as users grow desensitized to such disclosures. Similarly, government-mandated labeling could "generate false confidence among users," with incorrectly unlabeled content slipping past checks.

Feedback On Child Protection Code (CPC)

Age Verification Laws

The most debated issue during the consultation involved age verification requirements under the proposed Child Protection Code. While several jurisdictions around the world have already implemented age verification laws, current approaches mainly revolve around mandatory identity verification tied to government-issued documents, an approach that a significant number of respondents opposed due to privacy and cybersecurity concerns.

On top of that, respondents also warned that strict verification requirements could exclude users who lack official identification documents or face accessibility and socioeconomic barriers, even if they are technically eligible for online access by age. Instead of rigid identity checks, many stakeholders supported “age assurance” approaches (similar to Discord’s reworked system) that rely on a combination of identifiers to estimate the age of the user.

Regarding the proposed social media ban for those below the age of 16, some submissions favored applying stronger protections for all users under 18 instead of implementing outright restrictions, as such an approach allows for "stronger automated protections without requiring invasive data collection," thus reducing the need for mandatory age verification systems. Respondents also believed a blanket prohibition could interfere with children's "rights to access information and freedom to express themselves as protected by the Federal Constitution."

Content Moderation

Content moderation obligations also emerged as a major concern. While respondents supported stronger protections against harmful content, many cautioned against obligations that would effectively require platforms to prevent all harmful material from appearing online. Instead, they proposed that platforms be required to take "reasonable and proportionate steps" to reduce risks rather than guarantee complete prevention, noting that moderation systems cannot eliminate all harmful content in practice.

Some respondents also warned that overly aggressive moderation standards (i.e. content moderation before publishing) could result in over-blocking of lawful content and "disproportionate restrictions" on freedom of expression. For example, users sharing personal experiences of bullying or abuse could see their posts inadvertently blocked, as automated systems (AI-assisted or otherwise) cannot always understand the context behind such content.

Parental Controls, Privacy and Safety, Safe Search & Recommendation Systems

Measures relating to parental controls, safe search settings, recommendation systems, and child privacy protections generally received positive feedback during the consultation. However, respondents urged MCMC to avoid imposing “most restrictive” settings by default for all child users, arguing that protections should instead be age-appropriate and flexible enough to reflect different maturity levels.

MCMC stated that it acknowledges the concerns raised and will consider stakeholder feedback in finalizing the codes. The regulator said the final framework is intended to ensure the measures are “clear, practical and enforceable, while balancing the public interest with operational realities for industry.”

Pokdepinion: Hopefully the concerns are taken into account before the laws are enacted.
