Alister Pearson, the ICO’s Senior Policy Officer – Technology, has published a blog post on the ICO website introducing a new beta version of its AI and Data Protection Risk Toolkit. He explains how the toolkit can help organisations that use AI to process personal data assure themselves that they are doing so in line with the law, and how organisations can help the ICO shape the final version.
“Understanding how to assess compliance with data protection principles can be challenging in the context of AI. From the exacerbated, and sometimes novel, security risks that come from the use of AI systems, to the potential for discrimination and bias in the data, it is hard for technology specialists and compliance experts to navigate their way to compliant and workable AI systems.”
To help address this challenge, we have decided to publish an AI and Data Protection Risk Toolkit.
This work draws upon the Guidance on AI and Data Protection, as well as our co-badged guidance with The Alan Turing Institute on Explaining Decisions Made With AI. It is also part of our commitment to enable good data protection practice in AI.
The toolkit contains risk statements to help organisations using AI to process personal data understand the risks to individuals’ information rights. It also provides suggestions on best practice organisational and technical measures that can be used to manage or mitigate the risks and demonstrate compliance with data protection law.
The toolkit reflects the auditing framework developed by our internal assurance and investigation teams. This framework gives us a clear methodology for auditing AI applications and ensuring they process personal data in compliance with the law. If your organisation uses AI to process personal data, following this toolkit will give you a high level of assurance that you are complying with data protection legislation.
We are presenting this toolkit as a beta version, following the successful launch of the alpha version in March 2021. We are grateful for the feedback we received on the alpha.
We are now starting the next stage of the toolkit’s development: testing it on live examples of AI systems that process personal data, to see how practical and useful it is for organisations.
We will continue to engage with stakeholders to help us achieve our goal of producing a product that delivers real-world value for people working in the AI space. We plan to release the final version of the toolkit in December 2021.
If you are interested in helping us test the toolkit on a live AI application, or want to provide feedback or suggestions on how to improve the beta version, please email AI@ico.org.uk.