ChatGPT resumes service in Italy after adding privacy disclosures and controls

A few days after OpenAI announced a suite of privacy controls for its generative AI chatbot, ChatGPT, the service is once again available to users in Italy — resolving (for now) an early regulatory suspension in one of the EU's 27 Member States, even as a local investigation into its compliance with the region's data protection rules continues.

At the time of writing, web users browsing to ChatGPT from an Italian IP address are no longer greeted with a notification telling them the service is “Disabled for users in Italy”. Instead, they see a note saying OpenAI is “delighted to resume offering ChatGPT in Italy”.

The pop-up also requires users to confirm they are 18+, or aged 13+ with parental or guardian consent, in order to use the service — by clicking a button stating “I meet OpenAI’s age requirements”.

The notice text also draws attention to OpenAI’s privacy policy and links to a Help Center article where the company says it provides information about how it develops and trains ChatGPT.

Changes to how OpenAI offers ChatGPT to users in Italy are intended to satisfy an initial set of conditions set by the local data protection authority (DPA) for service to resume while it manages regulatory risk.

A quick recap of the backstory here: Late last month, Italy’s data protection watchdog, the Garante, ordered a temporary halt to ChatGPT’s processing of local users’ data, saying it was concerned the service was breaching EU data protection law. It also opened an investigation into suspected breaches of the General Data Protection Regulation (GDPR).

OpenAI quickly responded to the intervention by geo-blocking users with Italian IP addresses at the beginning of this month.

Two weeks later, the Garante followed up by issuing a list of actions it said OpenAI must implement in order for the suspension order to be lifted by the end of April — including adding an age gate to prevent minors from accessing the service and amending the claimed legal basis for processing local users’ data.

The regulator has faced some political criticism in Italy and elsewhere in Europe for the intervention. It is not the only data protection authority raising concerns, though — earlier this month, regulators across the bloc agreed to launch a ChatGPT-focused working group, with the aim of supporting investigations and cooperating on any enforcement actions.

In a press release issued today announcing the resumption of service in Italy, the Garante said OpenAI sent it a letter detailing the measures implemented in response to its earlier order — writing: “OpenAI explained that it had expanded the information to European users and non-users, that it amended and clarified several mechanisms and deployed appropriate solutions to enable users and non-users to exercise their rights. Based on these improvements, OpenAI has restored access to ChatGPT for Italian users.”

Expanding on the steps OpenAI has taken in more detail, the DPA says the company has broadened its privacy policy and provided users and non-users with more information about the personal data being processed to train its algorithms, including stipulating that everyone has the right to opt out of such processing — which suggests the company is now relying on a claim of legitimate interests as the legal basis for processing data to train its algorithms (since that basis requires it to offer an opt-out).

In addition, the Garante reveals that OpenAI has taken steps to provide Europeans with a way to ask for their data not to be used to train the AI (requests can be made via an online form) — and to provide them with “mechanisms” to get their data deleted.

The regulator also notes that OpenAI is unable, at this point, to fix the flaw of the chatbot making up false information about named individuals. Hence the introduction of “mechanisms to enable data subjects to delete information deemed inaccurate”.

European users wishing to opt out of the processing of their personal data to train its AI can also do so via a form OpenAI has made available — which the DPA says will “thus filter their conversations and chat history from data used to train algorithms”.

So the intervention of the Italian DPA has led to some notable changes in the level of control that ChatGPT offers Europeans.

However, it is not yet clear whether the changes OpenAI has rushed to implement go (or can go) far enough to resolve all of the GDPR concerns that have been raised.

For example, it is not clear whether Italians’ personal data that was used historically to train its GPT models, i.e. when the company scraped public data off the internet, was processed on a valid legal basis – or, indeed, whether data used to train models previously will or even can be deleted if users request deletion of their data now.

The big question remains what legal basis OpenAI had to process people’s information in the first place, when the company wasn’t very open about what data it was using.

The US company appears to be hoping to contain objections over what it does with Europeans’ information by providing some limited controls now, applied to new incoming personal data, in the hope that this blurs the wider issue of all the regional personal data processing it has done historically.

When asked about the changes that have been implemented, an OpenAI spokesperson emailed TechCrunch this brief statement:

ChatGPT is again available to our users in Italy. We are excited to welcome them back, and we remain committed to protecting their privacy. We have addressed or clarified the issues raised by the Garante, including:

We appreciate the Garante for being collaborative, and we look forward to continued constructive discussions.

In the Help Center article, OpenAI admits it has processed personal data to train ChatGPT, while trying to claim that it didn’t really intend to do so; the stuff was just out there on the internet — or as it puts it: “A great deal of data on the internet relates to people, so our training information does incidentally include personal information. We do not actively seek out personal information to train our models.”

It is a neat attempt to sidestep the GDPR’s requirement that it have a valid legal basis for processing whatever personal data it happens to find.

OpenAI expands further on its defense in a section titled “How does ChatGPT development comply with privacy laws?” – in which it suggests it used people’s data lawfully because a) it intended the chatbot to be beneficial; b) it had no choice, since a lot of data was required to build the AI technology; and c) it claims it did not mean to negatively impact individuals.

“For these reasons, we base our collection and use of personal information included in training information on legitimate interests in accordance with privacy laws such as the GDPR,” it also writes, adding: “To meet our compliance obligations, we also completed a data protection impact assessment to help ensure that we collect and use this information legally and responsibly.”

So, again, OpenAI’s defense to an accusation of data protection law-breaking essentially boils down to: “But we didn’t mean anything bad, officer!”

Its text also deploys some bold type to underscore the claim that it does not use this data to build profiles of individuals; contact or advertise to them; or try to sell them anything. None of which has any bearing on the question of whether its data processing activities violated the GDPR.

The Italian DPA has confirmed that its investigation into this high-profile case remains ongoing.

In its update, the Garante also notes that it expects OpenAI to comply with the additional requests laid down in its April 11 order — namely the requirement to implement an age verification system (to more robustly prevent minors from accessing the service) and to conduct a local information campaign to inform Italians of how their data is processed and of their right to opt out of the processing of their personal data to train its algorithms.

“The Italian supervisory authority (SA) acknowledges the steps taken by OpenAI to reconcile technological advancement with respect for the rights of individuals, and hopes that the company will continue its efforts to comply with European data protection legislation,” it adds, before emphasizing that this is just the first pass in this regulatory dance.

Ergo, all of OpenAI’s various claims of 100% good faith remain to be robustly tested.
