Featuring:
Enza Iannopollo, Principal Analyst
Show Notes:
The explosion of activity and interest around generative AI has many organizations skipping the “toe in the water” phase and diving right into the deep end. But like any technology that relies on data, genAI comes with privacy risks. Principal Analyst Enza Iannopollo provides the very latest updates on how to manage the use of personal data in the genAI environment.
Iannopollo starts the episode with a reminder that genAI is by no means exempt from existing privacy rules and regulations. Most organizations already have guidelines for handling the personal data of their customers and employees to meet regulatory requirements. So when personal information is used as a prompt in a genAI tool or to train a model, the organization must ensure that the usage complies with every privacy rule and regulation that pertains to that specific type of data.
That sounds simple enough, but with genAI embedded within applications, users might not even know when they are feeding data into a genAI model. Iannopollo says that this is where company culture matters. Organizations must train employees and help them understand which types of data can be used in which use cases, and with which technologies, to remain compliant.
From there, the conversation turns to a timely question: Can an individual withdraw their consent for their data to be used in a genAI system or model? It’s unclear today, but Iannopollo is confident that it will become much clearer once a key issue is addressed: What happens to data that was already used to train a model before consent was withdrawn? Until that scenario is resolved, withdrawing consent isn’t truly feasible.
Later in the episode, Iannopollo discusses some of the emerging regulatory approaches that organizations should be aware of. First, she touches on the approach that the US and the UK are leaning toward: not adopting new AI-specific regulation but instead applying existing regulation to AI use cases. The EU, China, Australia, and some other jurisdictions are leaning more toward new regulation built on top of existing regs, such as the GDPR in Europe.
Throughout the episode, Iannopollo provides real-world examples of organizations that have run into trouble using personal data in their generative AI work, as well as examples of organizations that are doing it well today.
The episode closes with Iannopollo describing what she thinks is the biggest “blind spot” for leaders trying to manage the use of personal data in a genAI landscape, so be sure to stick around for that.