Integration of Artificial Intelligence Solutions into Business Processes and the Increasing Complexity of Data Breach Management
The integration of artificial intelligence solutions into business processes significantly increases the complexity of managing personal data, particularly with regard to the prevention and management of security incidents. The proliferation of regulatory frameworks on inevitably intersecting topics has highlighted the need to coordinate the obligations for reporting and managing cybersecurity incidents, as set out respectively by Regulation (EU) 2024/1689 (the “AI Act”), Regulation (EU) 2016/679 (the “GDPR”) and, for the entities concerned, Directive (EU) 2022/2555, as transposed in Italy by Legislative Decree No. 138/2024 (the “NIS2 legislation”).

1. Reporting of Serious Incidents and the Risk of Data Breaches in the Artificial Intelligence Ecosystem

The AI Act introduces specific notification obligations regarding serious incidents for providers and deployers of high-risk AI systems. Indeed, pursuant to Article 73, providers must report to the competent market surveillance authorities any serious incident that may result in serious harm to individuals’ health, safety, or fundamental rights. Deployers are likewise required to inform the provider without delay where they detect anomalous or risky system behaviour.

This incident-reporting mechanism may intersect with the incident-management obligations laid down in the GDPR and NIS2, requiring companies to put in place integrated management procedures to ensure consistency and timeliness in communications.

The use of artificial intelligence may involve the processing of large volumes of data, often originating from different sources and combined in an automated manner. Such data may include personal information, including special categories of personal data, used for training or operating generative or predictive models. In this scenario, an event qualifying as a data breach may compromise entire datasets, algorithms, and decision-making processes built upon them.

The technical complexity of AI systems also makes it difficult to reconstruct the chain of events leading to the breach, to identify the data controllers or processors involved, and to assess the impact on the rights and freedoms of the data subjects. This compels the controller to carry out, in advance, a more structured risk assessment than that required for traditional information systems.

2. The Controller’s Obligations and the GDPR Perspective

The GDPR requires the controller to adopt appropriate technical and organisational measures, in line with the principles of accountability and privacy by design. In the event of a personal data breach, the controller is subject to the obligation to notify the supervisory authority within 72 hours and, in the most serious cases, to communicate the breach directly to the data subjects.

In the context of artificial intelligence, these obligations take on an expanded scope: the transparency of automated processes, the traceability of decisions, and the documentation of security measures become essential elements for ensuring compliance. Furthermore, the interaction between automated systems and external infrastructures increases the likelihood of systemic security incidents.

3. The NIS2 Directive and the Convergence Between Cybersecurity and Data Protection

The NIS2 legislation has also introduced more stringent obligations concerning governance, risk management, and the notification of network and information security incidents: “essential” and “important” entities falling within the scope of NIS2 are required to implement minimum security measures, including vulnerability management, staff training, the adoption of security policies, and cooperation with the competent authorities.

A data breach involving AI systems may therefore constitute not only a violation of the GDPR, but also a security incident relevant under the AI Act and NIS2. In such circumstances, multiple notification obligations may apply in parallel: under the AI Act, where the event qualifies as a serious incident pursuant to Article 73; under the GDPR, in the cases and in the manner set out in Articles 33 and 34; and under NIS2, in the event of a significant incident pursuant to Article 25 of Legislative Decree No. 138/2024. Each obligation has a different subject matter and timeline, yet all require internal coordination among the corporate actors involved, including the data controller, the IT department, the legal and compliance teams and, where appointed, the Data Protection Officer (DPO) and the Chief Information Security Officer (CISO).

This overlap requires companies to structure unified incident-management processes capable of simultaneously responding to the requirements imposed by the relevant legislation on data protection, cybersecurity, and artificial intelligence.
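The parallel triggers described above can be sketched, in simplified form, as a triage routine that an incident-response runbook might encode. The sketch below is purely illustrative: the incident attributes, function names and indicative deadlines are assumptions for the example, and the legal assessment of each condition must in practice be made case by case by the competent corporate functions.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """Hypothetical, simplified incident record for triage purposes."""
    involves_personal_data: bool        # opens the GDPR Art. 33/34 track
    high_risk_to_data_subjects: bool    # may require communication to data subjects (Art. 34)
    involves_high_risk_ai_system: bool  # opens the AI Act Art. 73 track
    significant_under_nis2: bool        # opens the NIS2 / D.lgs 138/2024 track

def notification_duties(incident: Incident) -> list[str]:
    """Return the notification tracks an incident may open.

    Illustrative only: timelines are indicative summaries of the
    respective regimes, not a substitute for legal analysis.
    """
    duties = []
    if incident.involves_personal_data:
        duties.append("GDPR Art. 33: notify the supervisory authority within 72 hours")
        if incident.high_risk_to_data_subjects:
            duties.append("GDPR Art. 34: communicate the breach to the data subjects")
    if incident.involves_high_risk_ai_system:
        duties.append("AI Act Art. 73: report the serious incident to the market surveillance authority")
    if incident.significant_under_nis2:
        duties.append("NIS2 / D.lgs 138/2024 Art. 25: notify the CSIRT (early warning within 24 hours)")
    return duties
```

A single event can thus open up to four tracks at once, which is why the unified process described above must route one set of facts to several differently scoped notifications rather than treating each regime in isolation.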

4. Towards an Integrated Compliance Model

The evolution of the European regulatory framework - which sees the parallel implementation of NIS2, the AI Act and the GDPR - outlines a context in which these different areas can no longer be interpreted in isolation, since personal data protection, cybersecurity resilience and the safety of AI systems now represent different facets of the same issue.

The preferable solution is for organisations to develop integrated compliance programmes, providing for cooperation among the various functions involved (privacy, IT security and compliance), the definition of shared risk-management procedures, and the adoption of continuous-monitoring systems.

A synergistic approach, grounded in security by design and a corporate culture of prevention, not only reduces the likelihood of incidents but also enhances user trust and the organisation’s reputation.

Conclusions

Managing data breaches in the era of artificial intelligence requires a strategic vision capable of overcoming the separation between regulatory obligations. Integrating GDPR compliance with the requirements of NIS2 and the AI Act means adopting a holistic logic of protecting data and systems, where privacy, security and innovation coexist in balance. Only an organisational model founded on such integration can ensure genuine protection of digital rights and the legal sustainability of new technologies.

Lawyer Simona Lanna 

 
