Social media giants including Instagram, Facebook, TikTok, and X are now legally required to safeguard users from harmful content under Ireland’s Online Safety Code.
Companies that fail to comply face penalties of up to €20 million or 10% of their global annual turnover, whichever is higher. The new rules target harmful material such as cyberbullying, content promoting self-harm, suicide, or eating disorders, dangerous challenges, and pornography.
Coimisiún na Meán, Ireland’s media regulator, formally adopted the Online Safety Code in October 2024. Platforms were given a nine-month grace period to adjust technical systems before enforcement began. Now, they must have robust age assurance tools and parental controls in place.
Platforms hosting violent or explicit content must use robust age verification methods rather than relying on user-declared ages. Acceptable verification could include facial recognition, cognitive testing, or uploading official ID such as a passport or driving licence. However, privacy advocates have raised concerns over how such sensitive data will be stored and handled.
The Code applies to platforms with EU headquarters in Ireland. As of December 2023, ten platforms were designated under its remit, including YouTube, Udemy, LinkedIn, Pinterest, Tumblr, and Reddit. Although Reddit later successfully challenged its designation, Tumblr remains covered. Snapchat is excluded because its EU headquarters are outside Ireland.
The Code does not currently regulate recommender algorithms, the systems that shape users’ feeds based on personal data. These algorithms have been criticised for pushing harmful content, but Coimisiún na Meán plans to address the issue through the broader EU Digital Services Act (DSA), which complements Ireland’s online safety framework. Disinformation, likewise, is regulated only when it meets the DSA’s criteria or constitutes illegal content.