Düsseldorf. Artificial intelligence (AI) is an important technology for Samsung – yet the electronics manufacturer recently banned its employees from using ChatGPT at work. The group fears that confidential data could end up on the servers of operator OpenAI and could no longer be deleted. Banks such as JP Morgan Chase and Citigroup also restrict its use.
The concerns are understandable: OpenAI stores personal data to improve the chatbot ChatGPT and to train the language model in the background. It is therefore important to protect private and business secrets.
Companies and private users alike should consider data protection when using ChatGPT and other AI chatbots – a few simple steps go a long way.
In its privacy policy, OpenAI explains which data it uses for ChatGPT. This includes account information such as name and contact details, but also entries in the chat window, including uploaded files and feedback. The US company also logs user behavior.
In the same policy, OpenAI reserves far-reaching rights to use this data. The company may use personal data to operate and improve the service, conduct research and develop new products, prevent abuse, and comply with legal requirements.
Is there a privacy mode for ChatGPT?
Browsers that offer a private mode promise more privacy with one click: browsing history, cookies and other data are then not stored on the device.
There is no such option for ChatGPT. However, users can do a lot to protect their privacy. First, there is no need to identify yourself for a free account – an e-mail alias will do. Second, users can prevent OpenAI from using their data for AI training in the settings.
How can you protect your data in ChatGPT?
By default, OpenAI is allowed to process ChatGPT users' data. However, users can tighten the data protection settings afterwards on chat.openai.com. In desktop mode, this is possible by clicking on the login name at the bottom left and selecting "Settings".
Under "General", users can delete all chats with one click. Other important settings are found under "Data Controls", where users can determine whether their data may be used to train the model. Exporting data and deleting the user account are also possible via this menu. Thirty days must pass before deleted chats disappear for good.
In the iPhone app, available since the end of May, users can also make adjustments: the settings are accessed via the three dots in the top right corner.
How do privacy advocates rate ChatGPT?
Data protection authorities in several countries are currently investigating whether OpenAI complies with the law with ChatGPT; proceedings are also underway in Germany. In Italy, the chatbot was even banned temporarily until the company made several changes to the service at the regulator's request.
Two key questions arise here. First: to what extent does the data OpenAI uses to train its models contain personal information such as names or e-mail addresses – without the consent of those concerned? Second: may the company use the data generated during use for further training?
“Unfortunately, I can’t make a judgment about the legality of ChatGPT at the moment,” says Dieter Kugelmann, the data protection officer of Rhineland-Palatinate, who leads the German data protection authorities' task force on artificial intelligence.
One has to wait and see how OpenAI responds to the authorities' inquiry, the lawyer said at the beginning of June. The situation is also in flux because the provider is apparently already reacting to the supervisory authorities.
Is ChatGPT GDPR Compliant?
Despite concerns from regulators, users and businesses can still use ChatGPT and other chatbots. “I think it is wrong to generally regard the use of ChatGPT as violating data protection,” says Carsten Ulbricht, a lawyer at the law firm Menold Bezler in Stuttgart. “There are usage scenarios that are completely unproblematic.”
The lawyer, who specializes in the implementation of digital processes and business models, advises differentiation. As long as users do not enter any personal data into ChatGPT, they do not violate the GDPR. The situation is different as soon as, for example, employee names or customer information are involved.
What should companies consider when it comes to data protection?
Ulbricht advises companies to issue guidelines for the use of generative AI. For example, they should define permissible purposes and set rules for handling personal data and trade secrets.
There are also "organizational and technical measures" that organizations should take – for example, configuring by default that ChatGPT may not use employee data for training.
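One such technical measure is screening prompts for obvious personal data before they leave the company. The following is a deliberately minimal sketch of the idea – the regex pattern, placeholder labels and name list are illustrative assumptions; real deployments would use dedicated PII-detection tooling:

```python
import re

# Illustrative sketch: mask obvious personal data before a prompt is sent
# to an external chatbot. The e-mail pattern and the employee name list
# are assumptions for demonstration, not a production-grade PII filter.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(prompt: str, known_names: list[str]) -> str:
    """Replace e-mail addresses and known names with placeholders."""
    masked = EMAIL_RE.sub("[EMAIL]", prompt)
    for name in known_names:
        masked = masked.replace(name, "[NAME]")
    return masked

print(redact("Send the offer to Anna Schmidt at anna.schmidt@example.com.",
             ["Anna Schmidt"]))
# → Send the offer to [NAME] at [EMAIL].
```

A hook like this could sit in an internal proxy in front of the chatbot, so that guideline violations are caught centrally rather than left to each employee.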
Why is user data so important for ChatGPT?
ChatGPT is the user interface for a powerful technology: when users ask the program to summarize texts, write program code or draft e-mails, the language model GPT-4 does the work in the background – in technical jargon, a Large Language Model (LLM).
Operator OpenAI has trained this model on huge amounts of text – from the internet and other sources. When ChatGPT answers user questions, the program uses statistical methods, based on this data, to calculate which words are likely to follow one another.
This procedure is purely associative: the artificial intelligence has no human understanding of the world and regularly produces errors. OpenAI therefore uses user data to improve the service, for example to evaluate the quality of the answers. The technical term for this procedure is "Reinforcement Learning from Human Feedback".
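The "which words follow one another" idea can be illustrated with a toy word-frequency table. This is a deliberately simplified sketch of the statistical principle only – GPT-4 itself uses neural networks trained on vast corpora, not a lookup table like this:

```python
from collections import Counter, defaultdict

# Toy illustration: count which word follows which in a tiny corpus,
# then pick the most frequent successor. This demonstrates the
# statistical "next word" idea only, not how a real LLM works.
def build_bigrams(corpus: str) -> dict:
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def most_likely_next(counts: dict, word: str):
    successors = counts.get(word.lower())
    return successors.most_common(1)[0][0] if successors else None

table = build_bigrams("the model predicts the next word the model learns")
print(most_likely_next(table, "the"))  # → model
```

The purely associative nature of this kind of prediction is also why such systems can produce fluent but factually wrong output: the table knows frequencies, not meaning.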
First publication: 05.06.2023, 04:00 a.m.