Background: Publicly available data as fuel for artificial intelligence
The further development of artificial intelligence (AI) depends heavily on access to large volumes of data. Social platforms such as Facebook and Instagram are a particularly valuable data source for AI providers. In the European Economic Area (EEA), however, the General Data Protection Regulation (GDPR) imposes strict rules on the handling of personal data – and of sensitive information in particular. A ruling by the Higher Regional Court (OLG) of Cologne is currently attracting attention: the court has allowed Meta to process publicly shared content from adult users for AI training purposes – even though this may include sensitive data. The prerequisite is that the content in question was deliberately made public.
The core problem: tension between the drive for innovation and data protection
European legislators currently face a dilemma. On the one hand, the goal is to set global standards for ethical and trustworthy AI development; on the other, privacy protection in Europe is particularly strong. The OLG Cologne examined in detail whether companies such as Meta genuinely need publicly accessible user data to develop competitive AI solutions. Its answer: yes. According to the court, advancing AI in many cases requires drawing on as broad and representative a data pool as possible; synthetic or less extensive data sets could not ensure the same quality or diversity in training.
The handling of sensitive data and the role of Art. 9 GDPR
When may special categories of personal data be processed?
A particularly sensitive area concerns the handling of so-called special categories of personal data under Art. 9 GDPR. These include, among other things, information on health, religion, ethnic origin and sexual life. Their processing is prohibited in principle unless a specific exception applies – for example, Art. 9(2)(e) GDPR, which covers data manifestly made public by the data subject. The court has clarified that the prohibition is therefore not absolute: where users of the platforms deliberately make data publicly accessible, this heightened protection may no longer apply. The decisive prerequisite is a clear and deliberate action by the user to make the information available to the general public.
How precise is the GDPR's protection mechanism with regard to public posts?
The Cologne judges expressly emphasise that when users share information in their own profiles or in public posts, the average user is aware that it can be viewed worldwide. If, for example, a user mentions a health detail in a public comment, it can be assumed that they have waived the heightened protection afforded to special categories of data. It remains unclear, however, whether and to what extent this applies to third parties' data mentioned in public posts. The OLG suggests that protection under Art. 9 GDPR depends in the first instance on the will and awareness of the persons concerned – a final clarifying decision by the European Court of Justice is still pending.
Practical implications for companies and users in the EU
Legal balancing act between digital innovation and data protection – what companies need to know
The ruling from Cologne highlights the current balancing act in the EU: on the one hand, promoting technologies with great economic and social potential, and on the other hand, protecting individual fundamental rights and data security.
Companies that develop or use AI solutions must be aware that the processing of personal data – even if publicly accessible – always requires a legal assessment. A blanket reference to the public status of a post is not sufficient. Transparency, options for users to delete their data and object to its processing, and technical safeguards remain mandatory.
What can users do specifically to maintain control over their data?
It remains important for private individuals to be aware of the reach of their own activities on social networks. Anyone who publishes sensitive information generally waives some of their legal protection as soon as the data becomes visible to everyone. Many platforms now offer improved data protection features, including targeted visibility settings, explicit notices when posting, and easy ways to object to certain uses of one's data. Nevertheless, each user remains responsible for the information they disclose in the public sphere.
Conclusion: Between new opportunities and proven data protection – looking ahead
European AI development under the GDPR
The decision of the Higher Regional Court of Cologne brings fresh momentum to the discussion about the supposedly ‘absolute’ protection of sensitive data under the General Data Protection Regulation. It shows that the legal framework does open up ways to reconcile data protection with innovation-friendly data use – provided that those affected act in an informed and responsible manner. Progress for European AI models thus rests on a combination of technical development, user awareness and increasingly nuanced legal interpretation.
Support with questions about data protection and AI compliance
The technological and legal requirements in data protection are constantly evolving – whether in connection with artificial intelligence, social media or new data-driven business models. Companies and those responsible must continuously adapt their processes and their handling of personal data to ensure legal certainty and the trust of their customers. Professional support helps to identify risks at an early stage and implement legally compliant solutions.
Do you need support in designing your data processing to be legally compliant or do you have questions about data protection and AI? Feel free to contact us – we will advise you competently and reliably!