Artificial intelligence (AI) has rapidly found its way into modern sales practice. Medium-sized companies, too, increasingly use AI-supported tools such as ChatGPT, DeepL or automated chatbots to optimize marketing and product sales. With the entry into force of the EU AI Act (AI Regulation) on August 1, 2024, however, new legal requirements apply that companies must observe, and further obligations will take effect in stages. This article therefore provides an overview of the most important rules and practical tips for the legally compliant use of AI in marketing.
1. The AI Regulation
The AI Act creates, for the first time, a uniform legal framework for the use of AI in Europe. It classifies AI systems by risk into four categories: minimal, limited, high and unacceptable risk. While the strictest requirements apply to high-risk systems, the regulation also covers the use of AI-supported content generators and chatbots in marketing.
Most of the provisions will only apply after a transitional period, from August 2, 2026. However, the bans on practices posing an unacceptable risk, such as emotion recognition in the workplace or social scoring, are already in force.
Although the most stringent regulatory obligations apply to AI manufacturers, companies that only use AI must already comply with legal requirements:
The transparency obligation requires that automated content be labeled as AI-generated, especially in the case of deepfakes or text published to inform the public on matters of public interest (Art. 50 para. 4 AI Act).
Another requirement concerns employee competence: from February 2, 2025, companies must ensure that their employees have sufficient knowledge of the safe use of AI (Art. 4 AI Act). Where necessary, technical and legal training should be provided.
Data protection also remains a key concern: AI tools may only be used in compliance with data protection law, in particular the GDPR.
2. Application of AI in social media marketing
Many companies use AI for content creation and user analysis. When doing so, the rights of data subjects and users must be observed. The following requirements should therefore always be taken into account:
- Labeling of AI-generated content: AI-generated content must not be passed off as the work of a human author.
- Personal rights: Deepfakes or AI-generated images of identifiable persons are not permitted without the consent of the person depicted.
- Liability for incorrect content: AI-generated content must be checked for accuracy, as the user remains fully liable for incorrect or unlawful AI output.
- Data analysis with AI: Scraping (the automated extraction of data from web content) or the use of content that is not freely accessible for analysis purposes can violate data protection and copyright law.
3. Recommendations for action for companies
The introduction and operation of AI systems require particularly careful planning and management. To ensure that your company implements the AI Act in a legally compliant manner, you should take the following steps:
- Inventory: Analyze which AI tools are already in use and which risk category they fall into.
- Ensure transparency: Develop internal guidelines for labeling AI-generated content.
- Train employees: Prepare your team for the new regulations with training courses.
- Clarify responsibility: Designate an employee as the contact person for the use of AI systems.
- Ensure data protection: Check AI tools for data protection compliance and adapt internal data protection guidelines.
- Obtain legal advice: Have your AI strategy reviewed by experts to avoid infringements and the resulting threat of sanctions.
Conclusion
The AI Act brings new obligations, but also opportunities for SMEs: a clearer legal framework that is intended to keep pace with ongoing technical developments. Responsible use of AI can increase efficiency, but only within that legal framework. By taking proactive measures, companies can prepare for the new requirements at an early stage and minimize legal risks.