EU kicks off an inquiry into Google's AI model


The European Union's key regulator for data privacy, Ireland's Data Protection Commission (DPC), has launched a cross-border inquiry into Google's AI model to ascertain if it complies with the bloc's rules.

The probe is part of broader efforts by the DPC and its peers across the European Union (EU) and European Economic Area (EEA) to regulate how the personal data of EU and EEA subjects is collected and used to build AI models.

The DPC is concerned about whether Google fully complied with its obligation to carry out a Data Protection Impact Assessment (DPIA), an evaluation that EU regulators require data controllers to perform before they ingest large amounts of personal data in a systematic way. A DPIA defines the scope, context, and purposes of data processing and assesses whether that processing might result in a high risk to the rights and freedoms of individuals.

According to the DPC: "A DPIA assessment is a key process for building and demonstrating compliance, which ensures that data controllers identify and mitigate against any data protection risks arising from a type of processing that entails a high risk.

"It seeks to ensure, among other things, that the processing is necessary and proportionate and that appropriate safeguards are in place in light of the risks."

The obligation to carry out the assessment falls under the umbrella of the General Data Protection Regulation (GDPR), and the probe relates to Google's processing of personal data in developing its foundational AI model, Pathways Language Model 2 (PaLM 2).

A Google spokesperson told El Reg: "We take seriously our obligations under the GDPR and will work constructively with the DPC to answer their questions."


Google is not alone in having its AI ambitions come under regulatory scrutiny. In August, X agreed to suspend the processing of personal data from posts of EU and EEA users to train its Grok AI, against the backdrop of an urgent High Court application. In June, Meta paused its plans to train AI models on EU users' Facebook and Instagram posts in response to a request from the Irish DPC.

Using personal data to train models and process prompts is a potential privacy minefield for AI companies as far as the EU is concerned. Yet without that data, AI models will be of little use to EU and EEA users: what is culturally significant in the US, for example, might not apply in Germany.

As this latest inquiry shows, EU and EEA regulators are closely monitoring how the tech giants are training their models and using citizen data. ®
