
No regulatory wild west: how the ICO applies the law to emerging tech

Hello and thanks for having me here at New Scientist’s emerging tech summit. It’s a great opportunity to take part in such an event – talking about emerging technologies and their effect on all aspects of society.

A bit of audience interaction to begin. Cast your minds back to November 2022. A little programme called ChatGPT was launched.

Now, let’s bring you forward to February 2023. That programme has just hit 100 million users.

Suddenly generative AI was a key buzzword. It was all anyone could talk about – analysts said the rate of growth was unprecedented for a consumer app. In just two months, it hit user numbers that took TikTok nine months to reach, and Instagram more than two years. Copywriters, journalists and people in creative industries despaired as ChatGPT spat out reams of articles, jokes, poetry and essays in response to prompts. People were simultaneously amused by the novelty of it and unnerved by its power. Questions started being asked about where ChatGPT was getting its information from, and about the pros and cons of using personal data to train the model. Our colleagues in Italy, the Garante, banned ChatGPT due to concerns over how it was using people’s personal information.

Of course, none of this was news to us. We’ve been here for some time.

Ever since the ICO’s inception in 1984, data protection law has been the principal form of regulation for new technologies. The same principles apply now as they always have – you need to look after people’s information, be transparent about how you’re using it and ensure it’s accurate.

New tech, old tricks.

To put this in context, we often hear complaints that the world of tech and AI is a regulatory wild west. In the digital world, we hear, there are no apparent constraints on bringing products to market, allowing harms through that wouldn’t exist in the pharmaceutical or civil aviation industries, for example. In those industries, you know there are checks and balances in place to ensure products are safe.

But digital products are no different.

We’re here and we’re enforcing – there’s no regulatory gap between the digital world and the real world. Data protection law is principles-based and it’s technology neutral. So, no matter what the tech is, if it involves personal information, there are protections in place for people.

AI and emerging tech can be a huge force for good. The strides forward we’ve made in terms of healthcare, productivity and transportation have been massive.

But organisations that use these technologies must be clear with their users about how their information will be processed. It’s the only way we can continue to reap the benefits of AI and emerging technologies.

I said at the end of last year that 2024 cannot be the year that people lose trust in AI. I stand by that statement today.

We want to help you provide and maintain that trust. A regulator’s job, albeit never an easy one, is to strike the balance between keeping people’s information safe and encouraging organisations to innovate and explore new products and services. To that end, we’ve made AI and emerging technology a key focus for us this year. That means we’ll be looking at it closely and examining the questions it throws up: how much control are people willing to give up in order to use these technologies? Do they know how much information they’re sharing? Can organisations do more to ensure their users are fully informed?

So, what are we doing as the data protection regulator to help people keep their trust in AI and emerging tech, and to help developers and organisations earn that trust? It’s a classic chicken-and-egg situation: if organisations are trustworthy and transparent, more people will share more of their information, allowing organisations to innovate further with the information they receive.

Our Innovation Advice service gives organisations the opportunity to ask ICO experts how to ensure their product or service is compliant, with a quick turnaround so that data protection isn’t a blocker to innovation. Our Sandbox offers a bespoke way for organisations to stress-test their innovative ideas before taking them to market, with advice on how to mitigate potential risks at the design stage and beyond. Information about both services is available on our website.

For generative AI, we’ve been consulting on various aspects of the technology and how existing data protection law applies to it. The consultation series outlines our initial thoughts and aims to gather stakeholder views to inform our positions on generative AI, providing clarity for developers in a changing technological landscape.

Our most recent chapter, which opened for consultation this week, covers individual rights when training and deploying generative AI models. These models are frequently trained using real-life data, so we’re interested in how people can exercise their rights when it comes to training data. Our previous chapters have focused on the principle of accuracy – both the accuracy of the model’s outputs themselves, and the effect that the accuracy of the training data has on the outputs. Let us know your thoughts on our latest chapter via our website.

These consultations allow us to set out our early thinking on these topics as they develop. I have previously made a commitment that we won’t miss the boat on AI in the way that policymakers and regulators did with social media. This is one way we’re making sure we stay ahead of the curve.

We’re also working closely with other regulators to keep abreast of new developments in how AI models are trained. There are a number of areas of overlap between our remits, and we’re working together to bring clarity to our individual responsibilities.

I’m sure that news of us working with other regulators is music to your ears. A lot of you in the room today will be working with developers of AI and emerging technologies. Some of you may even be developing this technology yourselves. You are leaders in your field, so I want to make it clear: you must be thinking about data protection at every stage of development, and you must make sure that your developers are considering it too.

We call it data protection by design and default. Protection for people’s personal information must be baked in from the very start. It shouldn’t be a tick-box exercise, a time-consuming chore or an afterthought once the work is done.

As the data protection regulator, it’s important that we examine the interplay between AI, emerging tech and data protection. As I said previously, our opinion is that there is no regulatory gap here. We do not consider that AI is unregulated. The same principles – protecting people’s information, being transparent and fair with their data, and not keeping hold of it for longer than necessary – all apply in the same way. And the five AI principles the government announced as part of its AI white paper each map to a data protection principle.

I know it’s sometimes difficult to see the wood for the trees, to see how you can build in protections from the start. That’s where we come in – we have lots of information, guidance, tools and checklists available on our website. Our Tech Horizons report covers the emerging technologies that we believe will be part of our society in the next two to seven years, including quantum computing and neurotech. We don’t have a crystal ball, but these are the technologies we’re seeing emerge right now. We wanted to raise any data protection considerations early, to ensure developers are apprised of any changes they may need to make before rolling out their product or service in the next few years.

We’ve also offered advice and guidance on the emerging area of biometrics. I’m reminded again, as I mentioned at the start, that my job as data protection regulator is never easy. Biometrics is one of those tricky areas, where it takes careful consideration to continue being a champion of innovation and a protector of people’s rights. I need to balance the scales and ensure that both can coexist in harmony.

Of course, biometrics is just one area of emerging tech. We can’t be across everything at all times. It takes a village, and our village comprises the Digital Regulation Cooperation Forum, or DRCF. That’s us, the ICO, along with our counterparts at the Financial Conduct Authority, the Competition and Markets Authority and Ofcom. We work collaboratively, sharing information and workstreams, to ensure a joined-up approach to regulation.

This means that all of you in the room today get a smooth and seamless experience when dealing with issues that cut across any of our remits. For example, we’ve recently launched our AI and digital hub. It’s designed to help and support innovators like yourselves working on AI or digital products by offering free, informal advice on complex regulatory questions. If you’re developing a new fintech app, rather than going to us and the FCA separately, you can come to the hub and get answers to your questions more quickly. Go to the DRCF website to see the requirements and put your application in – we’re looking forward to working with you.

In the spirit of working more closely with you, I want to end by referencing the work we’re doing internally to support our ambition of setting the tone for responsible and innovative data use. I don’t want to stand here and preach to you about being responsible innovators without ensuring that we’re working just as hard to provide you with the steers you need.

We’ve recently published our Enterprise Data Strategy. It sets out how we develop, manage, govern and use the data we hold. Whether it’s providing deep insights into the needs of our customers, identifying and predicting instances of harm, guiding prioritisation and decision making, or underpinning our use of automation and AI, it’s clear that data is a key part of the modern organisational fabric.

This also contributes to our move away from being “the regulator of no”. We want to say, “how to”, not “don’t do”. We’re hoping that our new strategy will provide you with the tools you need to safely and confidently innovate with data within the guardrails of the law.

I’ll close there, as we have time for a Q&A – but the key message I want to end with is that we’re here to help. If you’re developing innovative ideas, if you’re working with emerging technologies, if you’re advising organisations on how they can use AI safely – come to us if you need advice or guidance. Our website has a wealth of information available; you can also call our helpline or sign up to our monthly newsletter. We can help you get this right.

Thank you.
