Mandatory insurance for Russian entities using AI
Russia’s President signed into law a measure requiring civil liability insurance for entities using AI. The law mandates insurance against harm caused by AI, the maintenance of an insurance register, and the creation of a commission to investigate and address damages within 30 days.
EC announces AI Pact
The European Commission (“EC”) unveiled the draft AI Pact, a voluntary initiative to preemptively meet EU AI Act requirements featuring two pillars: Facilitating Best Practices and Compliance Actions. Core commitments include AI governance and risk mapping, while specific pledges target developers and deployers to improve AI oversight and transparency.
South Korean AI guide released
South Korea’s data protection authority released a guide for the use of public data in AI development to address legal and safety concerns. It clarifies the use of public data, including personal data, under existing regulations and provides minimum safety and security measures for AI companies. The guide will be updated regularly.
UK complaint regarding non-consensual use of data for AI training
Open Rights Group (“ORG”) filed a complaint with the UK Information Commissioner’s Office (“ICO”) against Meta for allegedly violating the UK General Data Protection Regulation (“UK GDPR”) through its planned use of UK user data for AI training without a clear purpose or consent. ORG seeks a binding decision to halt Meta’s research and ensure compliance with the UK GDPR.
Germany uses criminal law in fight against deepfakes
Germany’s Federal Council released a draft law to protect personal rights against deepfakes by amending existing criminal law. It proposes a specific Criminal Code provision to address deepfakes and similar technological manipulations, and includes measures to protect deceased individuals, enhance penalties in aggravated circumstances, and exempt socially acceptable actions from punishment. The Council also proposes amendments to the law on criminal charges and to the Code of Criminal Procedure.
Report highlights dark patterns and transparency
The European Commission’s consumer protection report highlighted dark patterns affecting 37% of websites, as well as AI risks arising from a lack of chatbot transparency. The Consumer Protection Cooperation Network’s reviews of a TikTok ad targeting children and of WhatsApp’s updated privacy policies reflect ongoing enforcement and regulatory efforts.
The report also examined the use of generative AI systems and chatbots, finding that consumers felt inadequately informed about potential risks, including those arising from the lack of transparency in chatbot algorithms.
FTC, DOJ, CMA and EC issue joint statement on AI competition risks
The US Federal Trade Commission, the US Department of Justice, the UK Competition and Markets Authority and the EC have issued a joint statement outlining competition risks in the AI sector.
Dutch regulator calls for caution on AI
The Netherlands’ data protection authority issued a public call for caution in the deployment of AI. Its “AI and algorithmic risks” report, which highlights misinformation and how little control governments have over their own AI systems as two primary concerns, recommends the creation of a national AI strategy.