Many WhatsApp users have already noticed it: the blue circle in the chat overview. Behind it is a new feature, Meta AI, an AI-powered assistant integrated directly into the messenger.
The introduction of this artificial intelligence is not just a technical innovation; it also raises significant legal questions, above all from a data protection perspective. Business use of WhatsApp in particular creates new risks that companies should be aware of.
Function and scope of Meta AI
Meta AI is a generative AI chatbot from Meta, the company behind Facebook, that can be used within WhatsApp. With simple prompts, users can draft texts, ask questions or retrieve information. The chatbot is designed as an interactive conversation partner and is technically comparable to other large language models such as ChatGPT.
In contrast to conventional WhatsApp communication, which is protected by end-to-end encryption, interaction with Meta AI runs through Meta's servers. The content entered there is processed and stored and may be used to further develop the AI. This also applies to personal data and sensitive information entered by users.
Relevance under data protection law
The use of Meta AI raises several data protection issues:
- No end-to-end encryption: Content transmitted to Meta AI is not subject to WhatsApp’s conventional protection mechanism.
- Unclear data processing: There is currently little transparency about the extent to which, for what purposes and for how long entries are processed.
- No opt-out: The function currently cannot be fully deactivated. The AI is available by default, and users are not asked for explicit consent.
Using the service with personal data or company information is therefore problematic. Particularly in a professional context, for example when communicating with customers or in internal exchanges, there is a risk that personal or business-critical information could be unknowingly transmitted to third parties or even analyzed.
It is true that the tool can be useful for quickly clarifying questions and producing usable results. However, personal data from a business context may only be entered there if there is a legal basis for doing so (e.g. consent). These requirements do not apply to the same extent to purely private use. The temptation of quick in-app AI processing is great, and so is the risk of a data breach.
Special considerations for companies
Companies are subject to the accountability principle of Art. 5 para. 2 GDPR: they must ensure, and be able to demonstrate, that all communication channels they use comply with data protection requirements. Uncontrolled use of artificial intelligence by employees, including Meta AI, can lead to data protection violations and thus to fines or claims for damages. This applies in particular if personal data is disclosed without a sufficient legal basis.
There is also a risk to the confidentiality and integrity of company information. Using AI-supported functions without control over the underlying data processing is hard to reconcile with the responsible handling of company-related data and can likewise give rise to claims for damages, among other consequences.
Conclusion
The use of Meta AI in WhatsApp in a business context poses a considerable risk from a data protection perspective, especially for companies that use the messenger for internal or external communication.
Our recommendations:
- Adapt existing company guidelines on the use of messenger services. If any data is entered at all, it should be anonymized.
- Educate employees about the workings and risks of AI, especially Meta AI, to protect your company information.
- Evaluate alternative communication solutions that guarantee data protection-compliant use.