
Final considerations and next steps

As the initial Call for Input into the development of the ICO AI Auditing Framework comes to an end, Simon McDougall, Executive Director for Technology and Innovation, reflects on some of the overarching themes that have emerged in the first phase of our work. 

This update draws our initial Call for Input on developing the ICO AI Auditing Framework to a close. Over the last eight months, we have used this blog to share our early thinking on the key data protection risks that we believe AI could generate or exacerbate. Each update explored a particular risk area, including relevant technical and organisational risk controls, and asked external stakeholders for their feedback and input.

As we mentioned during the summer, we are very pleased and encouraged by the level of engagement and feedback this approach has generated. Over the next few months my team, led by Reuben Binns, our Research Fellow in AI, will use this input to conduct further research and finalise our draft framework and guidance, which we plan to publish for consultation in early January 2020.

In this final blog post before we move to the next phase, I want to take the opportunity to reflect on some of the key governance and accountability themes that cut across all the AI risk areas we have explored so far. These are:

  • the need to build adequate AI governance and risk management capabilities; 
  • understanding data protection risks and setting an appropriate risk appetite; and
  • leveraging Data Protection Impact Assessments (DPIAs) as a roadmap to develop compliant and ethical approaches to AI.

AI governance and risk management capabilities 

If used well, AI has vast potential to make organisations more efficient, effective and innovative. However, as our work demonstrates, AI also raises significant data protection risks for data subjects and compliance challenges for organisations. 

Different technological approaches will either exacerbate or mitigate some of these issues, but many others are much broader than the technology itself. As our blogs on fairness and accuracy make particularly clear, the data protection implications of AI will depend heavily on the specific use case, the population on which a system is deployed, other overlapping regulatory requirements, and social, cultural and political considerations.

These are not issues that can be delegated to data scientists or engineering teams. Boards and organisations’ senior leaders, including Data Protection Officers, will be accountable for understanding and addressing them appropriately and promptly. 

To do so, in addition to their own upskilling, they will need diverse, well-resourced teams to support them in discharging their responsibilities. Organisations’ internal structures, maps of roles and responsibilities, training requirements, policies and incentives will also need to be aligned with their overall AI governance and risk management aims.

It is important that organisations do not underestimate the initial and ongoing investment of resources and effort that will be required. As a regulator, we will expect organisations’ governance and risk management capabilities to be commensurate with their AI data protection risks. This is particularly important now, while AI adoption is still in its early stages and the technology itself, the associated laws and regulations, and governance and risk management best practices are all still developing quickly.

Setting a meaningful risk appetite 

As part of governance and risk management discussions, we often hear from organisations that they have no risk appetite in relation to data protection. In practice, especially, though not only, in the context of AI, this is unlikely to be achievable.

For example, AI systems will often require organisations to decide the appropriate balance between different data protection requirements, for instance between accuracy and fairness. In such cases, adopting a “zero tolerance” approach towards data protection risk would mean that an organisation might be unable to adopt AI altogether. 
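
To make this trade-off concrete, the short Python sketch below computes a model’s overall accuracy alongside a simple demographic parity gap. It is purely illustrative, not ICO guidance: the data, group labels and choice of metrics are all hypothetical.

```python
# Purely illustrative sketch (not ICO guidance): the data, group labels and
# metrics below are hypothetical, chosen only to show how an organisation
# might quantify the balance between accuracy and a fairness measure.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def positive_rate(y_pred, groups, group):
    """Share of individuals in `group` who received a positive prediction."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds)

# Hypothetical outcomes for ten individuals split across two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = abs(positive_rate(y_pred, groups, "A") - positive_rate(y_pred, groups, "B"))
print(f"overall accuracy:       {accuracy(y_true, y_pred):.2f}")  # 0.80
print(f"demographic parity gap: {gap:.2f}")                       # 0.20
```

In this toy example the model is 80% accurate but produces positive outcomes 20 percentage points more often for one group. Pushing that gap to zero would typically cost some accuracy, which is precisely why a “zero tolerance” stance towards data protection risk is rarely workable.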

To manage the data protection risks arising from their AI systems properly, it is important that organisations develop a more mature understanding and articulation of data protection risk. This includes setting appropriate risk appetite and risk assessment frameworks. This is a complex task, which will take time to get right. Ultimately, however, it will give organisations, and the ICO, a fuller and more meaningful view of their risk positions and of the adequacy of their compliance and risk management approaches.

DPIAs as a roadmap to a compliant and ethical approach to AI 

Data Protection Impact Assessments are a key part of the increased focus on accountability and data protection by design introduced by the General Data Protection Regulation (GDPR). Using AI to process personal data is likely to result in a high risk to individuals’ rights and freedoms from a GDPR perspective, and therefore trigger the requirement to undertake a DPIA.

DPIAs should not be seen as a mere box-ticking compliance exercise. They can act as effective roadmaps for organisations to identify and control the data protection risks of AI, and they offer the perfect opportunity to consider and demonstrate accountability for the decisions made in the design or procurement of AI systems.

Among other things, DPIAs will force organisations to demonstrate the necessity and proportionality of any AI-related personal data processing; account for any detriment to data subjects that could follow from any bias or inaccuracy in a system; explain the rationale behind any trade-offs; and describe the relationships and the terms of any contracts with other processors or third party providers. DPIAs can also support organisations in thinking about the broader risks of harm to individuals or ethical implications for society at large. 
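
As a purely hypothetical illustration of how these elements might be recorded in a structured, auditable form, the sketch below captures them as fields on a Python dataclass. The field names and example values are invented for this post, not a prescribed ICO template for DPIAs.

```python
# Hypothetical sketch only: these field names are invented for illustration
# and are not a prescribed ICO template for DPIAs.
from dataclasses import dataclass, field

@dataclass
class AIDPIARecord:
    processing_purpose: str         # what the AI system does with personal data
    necessity_rationale: str        # why the processing is necessary
    proportionality_rationale: str  # why the approach is proportionate
    bias_and_accuracy_risks: list = field(default_factory=list)  # possible detriment to data subjects
    tradeoff_rationale: str = ""    # reasoning behind e.g. accuracy vs fairness decisions
    processor_contracts: list = field(default_factory=list)      # processor and third-party relationships

record = AIDPIARecord(
    processing_purpose="Triaging customer support requests with an AI classifier",
    necessity_rationale="Request volume cannot be handled by manual triage alone",
    proportionality_rationale="Only the minimum attributes needed for triage are processed",
    bias_and_accuracy_risks=["Lower classification accuracy for non-native speakers"],
    tradeoff_rationale="Accepted a small accuracy cost to reduce disparity between language groups",
    processor_contracts=["Data processing agreement with a hypothetical cloud provider"],
)
```

Keeping the assessment in a structured form along these lines makes it easier to revisit trade-off decisions later and to evidence accountability, which is exactly the roadmap role described above.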

One leading global technology firm recently told us that they have made DPIAs compulsory for all their projects processing personal data, even where not required by the GDPR. They believe this has helped them develop a much stronger risk management culture, as well as a common terminology for discussing complex data protection issues across teams in different countries. This is further evidence that there is real benefit for organisations in embedding DPIAs fully within their AI governance and risk management practices, rather than treating them as an isolated compliance deliverable.

Thank you 

To conclude, I once again want to thank all the external stakeholders who read our blog updates over the last eight months, and especially those who took the time to provide us with valuable feedback and insights. I also want to extend my personal thanks to Reuben Binns and Valeria Gallo in particular, who have invested a huge amount of expertise and energy into making this work a success. We look forward to engaging with all of you further once our formal consultation paper is published in January 2020. 

Appendix: List of AI blogs published as part of the Call for Input

  1. Meaningful human reviews in non-solely automated AI systems
  2. Accuracy of AI systems outputs and performance measures
  3. Known security risks exacerbated by AI
  4. Explainability of AI decisions to data subjects
  5. Human biases and discrimination in AI systems
  6. Trade-offs
  7. Privacy attacks on AI models
  8. Data minimisation and privacy-preserving techniques in AI systems
  9. Fully automated decision making AI systems: the right to human intervention and other safeguards
  10. DPIAs
  11. Exercise of rights

On the move 

We will be moving this blog onto the ICO website at the end of October, but we will still be taking feedback via email at AIAuditingFramework@ico.org.uk. If you would like to stay up to date with the work on the AI framework and other work from the ICO, you can sign up to our e-newsletter here.

