Australia investigates tech giants over social media ban compliance
The announcement marks the government's first public assessment of compliance with a law that policymakers around the world are watching closely.
Context
Australia's internet safety regulator, the eSafety Commissioner, is investigating major social media platforms, including Meta, TikTok, and Google, for failing to comply with a new law banning users under 16. The investigation signals a shift towards stricter enforcement, with potential fines of up to A$49.5 million. The action is part of a global trend of regulating tech giants to protect children from online harms such as addictive algorithms, and of scrutinising platforms for inadequate age-verification processes.
UPSC Perspectives
Governance & Technology Regulation
The Australian case highlights the challenge of digital governance and of enforcing national laws on transnational tech corporations. In India, the legal framework for this is primarily the Information Technology Act, 2000 and the IT Rules, 2021. These rules define intermediaries and mandate due diligence, including the swift removal of unlawful content. Specifically, Section 79 of the IT Act provides a 'safe harbour' for platforms, protecting them from liability for third-party content, but this protection is conditional on their compliance with government regulations. The Australian regulator's findings of non-compliance, such as allowing children to bypass age checks, resonate with challenges faced globally. For India, this underscores the need for robust enforcement mechanisms and raises the question of whether the current 'safe harbour' provisions are sufficient, or whether a more proactive regulatory model, like Australia's, is needed to hold platforms accountable.
Social Issues & Child Rights
The core issue is the protection of child rights in the digital age, a key topic under Social Justice. The UN Convention on the Rights of the Child recognizes the need to protect children from all forms of harm, which now extends to the digital realm. In India, the Digital Personal Data Protection Act, 2023 (DPDP Act) provides a stringent framework. It defines a 'child' as anyone under 18 and mandates 'verifiable parental consent' before processing their personal data. The Act also prohibits tracking, behavioural monitoring, and targeted advertising directed at children, which is stricter than the GDPR in the EU and COPPA in the US. The Australian experience, where platforms allegedly use 'big-tech playbook' tactics to undermine laws, serves as a crucial case study. It highlights the technological and policy challenges in implementing age-gating and obtaining meaningful consent, which are central to the operationalization of India's DPDP Act.
Internal Security & Cyber Space
From an Internal Security perspective, unregulated access of children to social media poses several threats, including exposure to harmful content, cyberbullying, grooming, and manipulation through misinformation. The role of social media as a vector for threats to national security is a well-established concern. While the Australian law focuses on age-based restrictions for well-being, the underlying principle is controlling access to the digital ecosystem. In India, the IT Rules, 2021 empower the government to mandate content removal and traceability. The failure of age-assurance technologies reported in Australia is relevant to India's security apparatus, as malign actors can exploit similar loopholes to create fake profiles for influence operations or radicalization. Ensuring that platforms have robust identity and age verification mechanisms is therefore not just a matter of child safety; it is also about maintaining the integrity of the information ecosystem and preventing the misuse of cyberspace for activities that threaten internal security.