Bonn All of this can be entertaining or useful. At the same time, the use of this technology harbors “new IT security risks and increases the threat potential of some known IT security threats”. This is the conclusion of the Federal Office for Information Security (BSI) in a position paper.
Behind every AI chatbot is a so-called language model. Large AI language models, also called Large Language Models (LLMs), are computer programs that are able to automatically process natural language in written form.
Well-known models are GPT from OpenAI or Palm from Google. Palm uses Google for its chatbot Bard. And GPT is used by ChatGPT or Microsoft’s Bing chat.
The BSI names the following known threats that can further strengthen AI language models:
– Creating or improving malware.
– The generation of spam and phishing mails using human characteristics such as helpfulness, trust or fear (social engineering).
– Language models can adapt the writing style of the texts in the mails so that it resembles that of a specific organization or person.
– The spelling or grammatical mistakes that have often been found in spam and phishing emails, which can help to identify such messages, are now hardly found in the automatically generated texts.
– Not only numerically, e-mail attacks are likely to increase with relatively little effort using AI language models. The messages can also be made even more convincing by means of the models.
And these are completely new problems and threats from AI language models that the BSI has identified:
– A major risk is that attackers secretly redirect user input into a language model in order to manipulate the chat and access data or information.
– In any case, there is always a risk that the data entered will not go unnoticed, but will be analyzed by the chatbot operator or passed on to unknown third parties.
– It is possible that language models are misused to produce fake news, propaganda or hate messages in order to influence public opinion.
– The ability to imitate writing style harbors a particular danger here: misinformation could be spread with a style adapted to specific people or organizations.
– According to the information, machine-generated ratings that can be used to advertise or discredit services or products are also conceivable.
A basic problem of chatbots is that the data used to train the language model and its quality significantly influence the functionality. According to the BSI, this results in the following risks:
– Questionable content such as disinformation, propaganda or hate speech in the training set of the language model can flow into the AI-generated text in a linguistically similar way.
– It is never certain that AI-generated content is current or factually correct. Because a language model can only derive information from the already “seen” texts. All classifications that go beyond this and that are known to people from the real world cannot be made by models. Therefore, it can even lead to the invention of content, the so-called hallucinations.
Conclusion: Users should remain critical. Text that is often generated without any linguistic errors when using AI language models often gives the impression of human-like performance – and thus too much trust in AI-generated content, even though it may be inappropriate, factually wrong or manipulated.