
Blog: ICO and The Alan Turing Institute open consultation on first piece of AI guidance

2 December 2019

A blog by Simon McDougall, Executive Director for Technology and Innovation, aimed at data scientists, app developers, business owners, CEOs and data protection practitioners whose organisations are using, or thinking about using, artificial intelligence (AI) to support, or to make, decisions about individuals.

What do we really understand about how decisions are made about us using artificial intelligence (AI)? The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works. And when people don’t understand a technology, it can lead to doubt, uncertainty and mistrust.

ICO research shows that over 50% of people are concerned about machines making complex automated decisions about them. In our co-commissioned citizens’ jury research, the majority of participants said that, in contexts where humans would usually provide an explanation, explanations of AI decisions should be similar to human explanations.

The decisions made using AI need to be properly understood by the people they affect. This is no easy feat: it involves navigating the ethical and legal pitfalls of the decision-making processes built into AI systems.

Today we publish our first piece of draft regulatory guidance on the use of AI. ‘Explaining decisions made with AI’ is co-badged by the ICO and The Alan Turing Institute (The Turing) and is open for consultation until 24 January 2020.

AI is a key area of focus for the ICO. When an independent review and the Government’s AI Sector Deal both called for the ICO and The Turing to create this guidance, we rose to the challenge.

Through the resulting draft guidance, we aim to help organisations explain how AI-related decisions are made to those affected by them.

Our draft guidance sets out four key principles, rooted in the General Data Protection Regulation (GDPR), that organisations must consider when developing AI decision-making systems:

  1. Be transparent: make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.
  2. Be accountable: ensure appropriate oversight of your AI decision systems, and be answerable to others.
  3. Consider context: there is no one-size-fits-all approach to explaining AI-assisted decisions.
  4. Reflect on impacts: ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.

In our interim report, released in June, we said that context was key to the explainability of AI decisions. That remains central to the draft guidance, with some sections aimed at those who need summary positions for their work, and others offering plenty of detail for experts and enthusiasts.

Our draft guidance goes into detail about the different types of explanation, how to extract explanations of the logic a system used to reach a decision, and how to deliver those explanations to the people they are about. It also emphasises the importance of using inherently explainable AI systems.
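To make that idea concrete, here is a minimal, illustrative sketch (not taken from the guidance itself) of what an inherently explainable system can look like: a shallow decision tree whose logic can be extracted as plain rules and translated into an explanation for the person affected. The scenario, feature names and data below are hypothetical, and the code assumes scikit-learn is available.

```python
# Illustrative sketch only: an inherently explainable model (a shallow
# decision tree) trained on hypothetical loan-application data, with its
# decision logic extracted as human-readable rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "years_at_address", "existing_debt"]

# Hypothetical applicants: 1 = approved, 0 = declined.
X = np.array([
    [42_000, 5, 1_000],
    [18_000, 1, 9_000],
    [65_000, 8,   500],
    [23_000, 2, 7_500],
    [51_000, 3, 2_000],
    [30_000, 1, 6_000],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The tree's logic is directly readable, so the rationale behind any
# single decision can be traced and communicated in plain language.
print(export_text(model, feature_names=feature_names))
```

The same principle of extracting and surfacing the decision logic applies, with more effort, to more complex models.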

Real-world applicability is at the centre of our guidance. Feedback is crucial to its success, and we’re keen to hear from those developing, or considering, the use of AI. Whether you’re a data scientist, app developer, business owner, CEO or data protection practitioner, we want to hear your thoughts. You can respond via our online survey.

We will be consulting on this draft guidance until 24 January 2020, and the final version, taking the feedback into account, will be published later in the year. We will keep the guidance under review beyond then to ensure it remains relevant in this fast-developing and complex area. We also continue to work on our related AI project, developing a framework for auditing AI systems, which is also open for consultation.

I’d like to take this opportunity to send my thanks to the teams at the ICO and The Turing for their work on this.

ENDS

Simon McDougall is Executive Director for Technology Policy and Innovation at the ICO where he is developing an approach to addressing new technological and online harms. He is particularly focused on artificial intelligence and data ethics. He is also responsible for the development of a framework for auditing the use of personal data in machine learning algorithms.
