Indonesia summons Meta and Google over non-compliance with child social media curbs, minister says
Indonesia requires social media companies with platforms it deems high risk to deactivate accounts belonging to children under 16
Context
Indonesia has summoned tech giants Meta and Google for non-compliance with a new regulation that mandates the deactivation of social media accounts for children under 16 on 'high-risk' platforms. The regulation, aimed at curbing cyberbullying and digital addiction, has now come into full effect. This move highlights a growing global trend of governments attempting to assert regulatory control over digital platforms to protect minors.
UPSC Perspectives
Governance
Indonesia's action is a classic example of a state asserting digital sovereignty and regulating 'Big Tech' entities that operate transnationally. For India, it provides a comparative framework for its own regulatory challenges. India's primary legal instrument is the Digital Personal Data Protection Act, 2023 (DPDP Act). The Act defines a 'child' as anyone under 18, and Section 9 is particularly stringent: it requires verifiable parental consent before processing a child's data and prohibits tracking, behavioural monitoring, and targeted advertising directed at children, making India's regime one of the strictest globally. The proposed Digital India Act, intended to replace the outdated Information Technology Act, 2000, is expected to further clarify intermediary liability, moving away from the blanket 'safe harbour' principle and holding platforms more accountable for content. UPSC aspirants should analyze the challenges in enforcing such laws, including the difficulty of effective age verification and the potential for a 'splinternet' in which global platforms must navigate a patchwork of national regulations.
Social
The core social issue is the protection of child rights in the digital era, specifically the right to health and to protection from exploitation. The Indonesian government cites risks such as cyberbullying and addiction, which are well-documented problems in India as well: studies link excessive social media use among adolescents to increased anxiety, depression, and sleep disruption. The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in India already mandate age-based content classification for OTT platforms and require intermediaries to remove content harmful to children. However, the effectiveness of these rules is debated because age verification mechanisms remain weak and easily bypassed. The social perspective requires a nuanced debate: how can the state balance protecting children from online harms with their right to access information and express themselves online? This question is central to child development in an increasingly digital Bharat.
Internal Security
From an internal security perspective, unregulated social media poses significant threats, including its role in radicalization, the spread of misinformation, and online grooming. While the Indonesian case focuses on child well-being, the underlying principle of holding platforms accountable extends to security concerns. In India, the IT Rules, 2021 aim to make intermediaries more responsible for the content on their platforms, including provisions for traceability of the first originator of a message in certain cases. The rise of AI-generated misinformation and deepfakes has added another layer of complexity, prompting discussions about stricter content moderation and platform liability under the proposed Digital India Act. A key challenge for security agencies is balancing the monitoring of threats with citizens' right to privacy. The debate over age verification technologies is also relevant here: robust systems could help mitigate risks but would raise surveillance concerns if linked to national identity databases like Aadhaar.