The promise of AI is proving tempting for businesses in every sector, with greater efficiencies, faster decision-making, and more productivity up for grabs. Next-generation apps and tools are flying off the proverbial shelves. When the chatbot tool ChatGPT launched in November 2022, for example, it gained more than 100 million active users in its first two months. That made it the (then) fastest-growing consumer application in history, though it has since been surpassed by Meta’s Twitter rival, Threads.
Yet there are those urging caution. At a recent Bloomberg conference, some of the biggest names in AI warned the technology is already incredibly intrusive and chipping away at hard-won privacy rights. Meredith Whittaker, president of the secure messaging app Signal, said the “Venn diagram of AI concerns and privacy concerns is a circle”. She added: “The majority of the population is the subject of AI … Most of the ways that AI interpolates our life and makes determinations that shape our access to resources and opportunities are made behind the scenes in ways we probably don’t even know.”
Regulators are scrambling to keep up. According to the OECD, there are more than 800 policy initiatives across 69 countries, territories and the EU. The European Artificial Intelligence Act took another step towards becoming law last month but isn’t expected to come into force until 2025 at the very earliest. The EU is also working with the US on a voluntary AI code of conduct, with hopes that other regions will sign up too. There is progress at the local level as well. In New York City, a recently passed law requires organisations using AI in the recruitment process to have the tool audited to show it is free of racist and sexist bias. OpenAI, which developed ChatGPT, is also being sued in California over its data collection practices, as is Google in relation to its Bard chatbot.
Under the GDPR, principles of lawfulness, fairness, transparency and accountability are paramount. But the speed at which AI systems and tools are being adopted means some of these values are being overlooked.
Last month, the UK’s data protection watchdog warned developers against rushing to adopt powerful AI technology without doing proper due diligence on the risks to privacy and data protection. The ICO’s executive director of regulatory risk, Stephen Almond, said the regulator will be “taking action where there is risk of harm to people through poor use of their data”.
“Businesses are right to see the opportunity that generative AI offers,” he added, “but they must not be blind to the privacy risks. Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”
With that in mind, here’s how to keep a privacy-first approach while using AI:
Lawfulness, fairness and transparency
Organisations must have a lawful basis for processing personal information, and individuals must be informed about how and why their personal information is being processed, how long it is kept, and who it is shared with. AI systems and other algorithms should not discriminate on the basis of race, gender, age or other protected characteristics, and should not be used to make decisions on matters that affect people’s livelihoods. Where an AI system has made decisions about people, the organisation must be transparent about the process and about how individuals can exercise their right to challenge automated decisions.
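In practice, bias audits like the one New York City now requires are often operationalised with a disparate-impact check such as the “four-fifths rule”, which compares each group’s selection rate against the most-selected group. The sketch below is purely illustrative, using hypothetical audit numbers; it is not a substitute for a formal, independent audit.

```python
# Illustrative disparate-impact check ("four-fifths rule") for an AI
# screening tool. All group names and counts below are hypothetical.

def selection_rate(selected: int, screened: int) -> float:
    """Fraction of screened candidates in a group that the tool selected."""
    return selected / screened

def impact_ratios(groups: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate.
    By convention, a ratio below 0.8 is a red flag for adverse impact."""
    rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data: group -> (candidates selected, candidates screened)
groups = {"group_a": (48, 100), "group_b": (30, 100)}

ratios = impact_ratios(groups)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group_b’s selection rate (30%) is well below four-fifths of group_a’s (48%), so it would be flagged for further review; a real audit would go much further, testing intersectional groups and statistical significance.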