Thursday, December 26, 2024

Generative AI developers, it’s time to tell people how you’re using their information

Stephen Almond is the ICO’s executive director of regulatory risk.

In January we launched our consultation series on data protection in generative AI, receiving over 200 substantive responses from stakeholders. Today we are publishing our outcomes report, which details our policy positions on generative AI and sets out the further work still needed from industry.

We looked at five areas and, as a result, refined our position in two: the lawful basis for using web-scraped data to train generative AI models, and the engineering of individual rights into generative AI models. We also found a serious lack of transparency within the industry, especially around training data, which our consultation responses show is eroding the public's trust in AI.

Generative AI developers, it’s time to tell people how you’re using their information. This could involve providing accessible and specific information that enables people and publishers to understand what personal information has been collected. Without better transparency, it will be hard for people to exercise their information rights and hard for developers to use legitimate interests as their lawful basis.

We have been clear in our view that generative AI offers great potential for the UK and the opportunity must be realised in a responsible way that appropriately considers data protection law. We have engaged openly to develop our positions. We have also been clear that there is no excuse for generative AI developers not to embed data protection by design into products from the start.

Our engagement is ongoing and global: we continue to work with government, other digital regulators and our international counterparts to play our part in safeguarding people and enabling responsible innovation.

I encourage firms looking to innovate responsibly to get advice from us through our Regulatory Sandbox and Innovation Advice service, as well as from other regulators through the DRCF AI & Digital Hub.

Our report provides regulatory clarity and certainty which will enable responsible AI developers to thrive. We will now focus our attention on organisations that are not doing enough.
