The start of a new year is often a time for new beginnings, hope, and resolutions to do things a bit differently. In that vein, on Wednesday, 4 January, the UK prime minister was in a reflective mood, setting out his five priorities for the year ahead. Unsurprisingly, privacy did not feature, despite continued uncertainty in the UK over the Data Protection and Digital Information Bill.
But while uncertainty already seems to be the overarching sentiment for 2023, there are a few trends in privacy we’re certain we’ll see this year.
Offenders will be named and shamed
The Information Commissioner, John Edwards, told the BBC in September 2022 that “naming and shaming organisations that fail to comply is a new proactive way for the ICO to work.” This naming and shaming will apply not just to headline-grabbing fines, but also to reprimands issued to organisations that fail to comply with the UK GDPR. Reprimands were previously unpublished, but the ICO’s intention in publishing them is that by “reading about where an organisation failed to comply with data protection laws, we hope that others will understand what went wrong and what they need to do if they find themselves in a similar scenario.”
It will no longer be just organisations that have suffered a massive data breach that find themselves in the spotlight. Expect others that fall short of the UK GDPR more generally to be publicly named by the ICO. Edwards appears to be adopting a tougher approach than we’ve seen from previous Information Commissioners, and we expect more investigations, more pre-emptive fines and more brand damage as he takes action against those ignoring their responsibilities.
The internet may become a safer place for children
In September 2022, the ICO announced a potential £27m fine for TikTok for failing to protect children’s personal information. In the press release, Edwards said his office is currently looking into how more than 50 different online services are conforming with the ICO’s Children’s Code. And it wasn’t just TikTok in hot water over how it processes children’s information last year. Meta was fined €405m by the Irish data protection authority after Instagram allowed teenagers to set up accounts with their email addresses and phone numbers publicly displayed. And the Department for Education was reprimanded after poor due diligence meant the personal information of up to 28 million children was shared with gambling companies. It only escaped a fine in excess of £10m thanks to the ICO’s new approach towards the public sector, which aims to reduce the impact of fines on the public.
We expect the focus on children’s rights online to continue globally – not least because California’s Age-Appropriate Design Code Act, heavily based on the ICO’s Children’s Code, comes into effect in the state on 1 July 2024, and New York plans to introduce something similar. In the UK, the current draft of the Online Safety Bill covers all services likely to be accessed by children and adds further child protection measures, such as more stringent requirements around age verification. Whether or not it passes in its current form is another matter.
A crackdown on biometric snooping
Have you been filmed while at the self-serve checkouts in a supermarket recently? You’re not the only one. Southern Co-operative has installed facial recognition cameras at 35 of its stores in the UK, Amazon is rolling out palm scanners that let shoppers pay in Whole Foods stores in the US, and even Mastercard is trialling ‘pay with your face’ functionality. Meanwhile, Aldi’s ambitions have extended in the past year to opening a checkout-free store, ‘Aldi Shop & Go’, which allows shoppers to simply walk out with their goods. It won’t be the last: Tesco has one too. The privacy group Big Brother Watch has already made an official complaint to the ICO over Southern Co-op’s use of surveillance technology, and we expect regulators to take more interest in this area across the board in 2023.
Europe is edging closer to a ban on facial recognition with its proposed Artificial Intelligence Act, which should limit how far biometric recognition can be used. Meanwhile, in June 2022, the Ryder Review, commissioned by the Ada Lovelace Institute, called for an immediate moratorium on the use of live facial recognition technology in the UK until legislation governing its use has been passed. Italy went a step further in November 2022, placing a moratorium on the use of facial recognition technology.