Artificial intelligence systems can be highly beneficial to both individuals and society as a whole, but businesses that use AI systems must consider the risks to data subjects' privacy rights and freedoms.
FREMONT, CA: The European General Data Protection Regulation (GDPR) applies to the processing of any data that can identify individuals, either alone or in combination with other data. This broad definition raises issues for AI, as AI systems handle ever more personal data and automated decision-making and profiling of individuals using such systems increase.
As a result, all stakeholders must address data protection during AI-assisted projects.
Assuring the Accuracy and Privacy of Personal Data
The accuracy of personal data is critical. Under the GDPR, personal data must be accurate and, where necessary, kept up to date; statistical accuracy is likewise essential to AI systems.
AI systems that draw inferences about people must be statistically accurate enough for their intended purposes to ensure fairness. While not every inference must be correct, the possibility of incorrect inferences, and the resulting impact on any decisions based on them, must be addressed.
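To make the idea of "statistically accurate enough for the intended purpose" concrete, here is a minimal, hypothetical sketch: inferences are compared against verified outcomes, and the system's output is only cleared for use in decisions if it meets a predefined accuracy target. The function names and the 0.9 threshold are illustrative assumptions, not values prescribed by the GDPR.

```python
# Hypothetical sketch: gate an AI system's inferences on a statistical-
# accuracy target agreed for the intended purpose. The 0.9 threshold is
# an illustrative assumption.

def inference_accuracy(predictions, ground_truth):
    """Fraction of inferences that match verified outcomes."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(predictions)

def accurate_enough(predictions, ground_truth, threshold=0.9):
    """True only if accuracy meets the target set for this use case."""
    return inference_accuracy(predictions, ground_truth) >= threshold
```

In practice the threshold would be set per use case, since the acceptable error rate for a product recommendation differs greatly from that for a credit decision.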
Much has been written about the possibility of bias in AI systems, resulting in unjustified discriminatory outcomes for individuals.
The GDPR addresses concerns of unfair discrimination in various ways, including applying the fairness principle and the declared goal of safeguarding individuals' rights and freedoms in connection with the processing of their data. Additionally, technical measures should be implemented to reduce the danger of discrimination in machine learning.
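One example of the technical measures the paragraph alludes to is measuring whether decision rates differ across protected groups (a demographic-parity check). The sketch below is illustrative only; group labels are assumptions, and real systems would use additional fairness metrics and legal review.

```python
# Illustrative anti-discrimination check: compare positive-decision rates
# across protected groups. Group labels here are hypothetical.

from collections import defaultdict

def positive_rates(decisions, groups):
    """Positive-decision rate per group (decisions are 0/1)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += int(d)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(decisions, groups).values()
    return max(rates) - min(rates)
```

A large gap does not by itself prove unlawful discrimination, but flagging it early lets the organisation investigate the model and its training data before individuals are affected.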
Organisations must also ensure that individuals' data protection rights are honoured when AI systems are used. These include the rights to information, access, erasure, rectification, data portability, objection, and restriction of processing.
While the use of personal data in AI systems may complicate compliance with individuals' data protection rights, such rights should be considered at each stage of the development and deployment of AI systems, and requests from data subjects regarding their rights should be addressed following the GDPR's requirements.
Notably, the GDPR grants special rights to individuals whose personal data is processed through solely automated decision-making, including profiling, that has legal or similarly significant effects on them. Individuals must be informed about such processing. They also have rights regarding decisions made about them (e.g. the right to obtain human intervention, to express their point of view, to challenge the decision, and to have the logic behind it explained).
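The routing logic such rights imply can be sketched as follows. This is a hypothetical illustration of one way to flag decisions for human intervention; the field names are assumptions, not part of the regulation.

```python
# Hypothetical sketch: route contested, solely automated decisions with
# significant effects to a human reviewer. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool      # no meaningful human involvement
    significant_effect: bool    # legal or similarly significant effect
    contested: bool = False     # data subject has asked for review

def needs_human_review(d: Decision) -> bool:
    """True when the decision must be escalated to a human."""
    return d.solely_automated and d.significant_effect and d.contested
```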
Security of personal data is equally critical in the context of AI systems. The GDPR requires that personal data be treated securely, protecting it from unauthorised or unlawful processing, accidental loss, destruction, or damage. AI has the potential to amplify previously identified security problems and make them more challenging to handle.
Using AI to process personal data can affect security and introduce new kinds of risk.
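One common safeguard consistent with the GDPR's security requirements is pseudonymisation: replacing direct identifiers with keyed hashes before data enters an AI pipeline. The sketch below is a simplified assumption of how this might look; a production system would add proper key management and rotation.

```python
# Illustrative pseudonymisation: replace a direct identifier with a keyed
# HMAC-SHA256 hash. Key handling here is deliberately simplified.

import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Return a stable, keyed pseudonym for a direct identifier."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()
```

Because the hash is keyed, the same identifier always maps to the same pseudonym within the system, while an attacker without the key cannot reverse or recompute the mapping.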
Organisations that use AI systems must consider the risks to data subjects' rights and freedoms. To achieve GDPR compliance, data protection should be considered from the outset and managed throughout the lifecycle of AI systems.