Think a quick AI face swap is all in good fun? UK law might disagree. In our third installment, we turn to something every business owner loves – compliance paperwork! (Joking aside, this part is crucial.) For businesses in Liverpool, the North West, and North Wales, uploading images to AI tools isn’t just a tech or privacy issue – it’s a legal one. You’ve heard of GDPR; you know it’s serious about personal data. Well, your face is personal data. In fact, it can even be classified as biometric data under GDPR, which brings extra strict rules. Let’s unpack the legal and regulatory considerations when using AI tools that munch on images of real people.
Your Face as Personal (and Biometric) Data
Under the UK GDPR, personal data is any information relating to an identified or identifiable person – and a photograph of your face clearly qualifies. But there’s more: if that image is used for “biometric identification” (such as facial recognition), it becomes Special Category Data (sensitive personal data), which carries even tighter requirements. Even if you’re just uploading selfies to get a cartoon avatar, you are still processing personal data with a third-party service. And if the photo is of someone else (say, your employees or customers in a team pic), you’re processing their personal data.

What does this mean practically? It means you need a lawful basis under GDPR to do so. Are you doing it with the person’s consent? If not, do you have a “legitimate interest”? That’s a grey area for something as trivial as creating fun images. And if the service uses those photos for anything beyond generating your image (for example, training its AI on facial features), that could count as biometric processing. The law hasn’t caught up with selfie apps specifically, but it’s safer to assume GDPR does apply – and that regulators care about how these tools handle data. (Remember how the ICO – the UK’s Information Commissioner’s Office – cracked down on Clearview AI for scraping faces off the web [theguardian.com]? That shows how regulators view misuse of facial data as “unacceptable” [theguardian.com].)
Terms and Conditions: What Did You Agree To?
From a legal perspective, those lengthy terms of service in AI apps are essentially contracts. When you or your employees click “Accept,” you might be agreeing to all sorts of provisions that could conflict with your own data protection policies. As we saw earlier, FaceApp’s terms required users to confirm they had the rights and permissions to upload each photo [capitallaw.co.uk]. So if an employee uploads a group shot from your company event without getting everyone’s consent, your company could be on the hook for that lapse. The app’s terms in effect say “if there’s a problem, it’s the user’s fault, not ours” – an unwitting employee breaches privacy rules, and the company carries the blame.
Moreover, by agreeing to broad licenses (like the perpetual license in FaceApp’s terms), you may be violating principles of data protection such as data minimization and purpose limitation. GDPR says you should only collect and share personal data for specific, legitimate purposes. If an app can use your uploaded images for anything (advertising, machine learning, etc.), is that purpose clear and limited? Unlikely. There’s a real tension between these app agreements and GDPR’s spirit. For UK businesses, the safest assumption is that uploading identifiable photos to a consumer-grade AI tool equals sharing personal data with a third-party processor. You’d ideally need a Data Processing Agreement, adequate safeguards, etc., just as you would when outsourcing any data processing – things most people definitely don’t obtain from a phone app!
International Data Transfers (AKA “Where’s Your Face Going?”)
When you upload an image to an app, do you know where it’s sent? As mentioned, many popular AI services are run by companies outside the UK/EU. For example, Lensa is operated by a U.S. company with servers in the U.S. [abc7.com]. That means your photo is zipping across international borders. Under UK GDPR, transferring personal data abroad (especially to the US or other countries without an adequacy decision) requires safeguards – typically Standard Contractual Clauses and an assessment of the receiving country’s data protection regime. Now, do AI avatar apps offer you a UK GDPR-compliant data transfer agreement upfront? Highly unlikely. They might mention in their privacy policy that by using the service you consent to your data going wherever they operate. But for consent to count in this context, GDPR requires it to be explicit and informed – and most users are not truly informed.
This presents a compliance risk. If a UK business is found to be routinely uploading personal images to an app that ships data to (say) the US or Russia without proper measures, the ICO could view it as an unlawful transfer. Admittedly, the ICO isn’t scanning for small instances like “Bob in Accounting uploaded team selfies to FaceApp,” but if a larger issue came up (say a data breach or a complaint by an employee), this could become a headache. The safe play for businesses is to treat any cloud AI tool as you would any new software vendor – vet it for data protection compliance. If it’s not clear where data goes and how it’s protected, that’s a red flag.
Employee and Company Liability
Let’s talk about employees. We know folks love trying new apps – maybe someone on your marketing team is experimenting with AI-generated headshots for the company “About Us” page. It sounds innovative, but make sure they understand the rules. If an employee uploads a photo of a client or another staff member to an AI tool without permission, your company could be violating privacy rights. Under GDPR, each person in a photo has rights – the right to know where their data goes, the right to object, and so on. An employee having fun with an app could inadvertently engage all of those obligations.
There’s also the question of biometric data: If an AI tool is extracting a facial template or “embedding” from the photo (a numerical representation of your facial features), that could be seen as biometric processing. Biometric data usage under GDPR often requires explicit consent or a strong necessity (like security). Using it for amusement or even minor convenience probably wouldn’t meet the bar. So if an employee uses a face-login feature or face-analyzing AI without clearance, it might put your firm in a legally murky area.
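To make the “embedding” idea concrete, here’s a minimal sketch of what extracting a facial template can look like in code. It uses the open-source face_recognition Python library purely for illustration – consumer apps run their own proprietary models – and “team_photo.jpg” is a hypothetical file name:

```python
# Illustrative sketch only: the open-source face_recognition library stands in
# for whatever proprietary model an app might run on your upload.
import face_recognition

# Load the uploaded photo and compute an embedding for each face detected.
image = face_recognition.load_image_file("team_photo.jpg")  # hypothetical file
embeddings = face_recognition.face_encodings(image)

for i, embedding in enumerate(embeddings):
    # Each embedding is a vector of 128 numbers summarising that person's
    # facial geometry -- the "biometric template" described above.
    print(f"Face {i}: {len(embedding)}-dimensional template")

# Comparing two embeddings answers "is this the same person?" -- biometric
# identification in GDPR terms:
# matches = face_recognition.compare_faces(embeddings, some_other_embedding)
```

The point to take away: once a service holds that vector, it can recognise the same face in other images whether or not it keeps your original photo – which is why templates like this can attract the stricter biometric-data rules.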
In short, educate your team: not every app is OK to use with work-related images. Just as you’d (hopefully) caution against uploading company documents to random websites, extend that caution to images of people. The legal onus is ultimately on the company (the data controller) to ensure compliance.
GDPR, AI and the Road Ahead
Regulators are increasingly aware of AI risks. While there’s no specific “AI avatar law” yet, frameworks like the EU’s AI Act and existing privacy laws are circling closer. The prudent approach for business owners is to err on the side of caution and transparency:
- Document what you’re doing – If you decide to use any AI tool involving personal data, document the purpose and ensure it’s defensible under GDPR (e.g. you have individuals’ consent or a legitimate interest that isn’t overridden by their rights).
- Read the Privacy Policy – Boring, yes, but you might discover that the app stores data longer than you thought or shares it with partners. For instance, if an app’s policy says data “may be used to improve our services”, that implies your images could be retained for AI training. Does that clash with your own privacy notices to staff or users?
- Stay Updated on Guidance – The ICO periodically releases guidance on new tech. Biometric data and AI are hot topics. Ensuring you align with best practices can save you from penalties.
Next up in our final part: we’ll get practical. How can you and your team safely enjoy the benefits of AI tools without stepping on legal landmines or handing hackers an advantage? We’ll cover actionable best practices to protect your business while still innovating.
Hilt Digital Solutions is here to simplify the legal tech jargon for you. We blend leading AI and cloud expertise with deep knowledge of UK data protection. When we advise clients, we cut through the nonsense and focus on what actions to take. If you’re unsure about the legality of an AI tool or need a hand with GDPR compliance in your tech stack, our experts in Liverpool and North Wales are a phone call away. We position ourselves as your trusted cyber and cloud assurance partner – so you can innovate with confidence, knowing the legal bases are covered.