What the ChatGPT Caricature Craze Really Reveals About Privacy and AI Risk
If you’ve been on social media over the past few weeks, I’m pretty sure you’ll have seen those ChatGPT caricatures everywhere.
Sure, it’s a bit of harmless fun, you could argue. “They’re just playful images of friends, colleagues or myself,” you could shoot back, or you might even ask, “it’s done using images I’ve already uploaded to my social media, so why does it matter?” And to a large degree, you’re right. It is. But it’s also... not? Yes, yes, the party’s over, folks, the fun police are here.
The trend, for those who have missed it, is people asking ChatGPT (or other generative AI/LLM tools) to generate caricatures of themselves based on what the AI “knows” about them from their chats. It’s light-hearted, fun, and in some respects feels like a pretty harmless way to engage with technology (if you put concerns about the ethics and environmental impact of AI and its data centres to one side for a moment).
And when we pop the bonnet on this viral trend, a more serious question sits right there: how much of ourselves are we handing over to AI, and what happens to that data once it’s shared? Experts are warning that the caricature craze normalises oversharing with generative models and could unintentionally expose personal or professional details far beyond what most users realise.
In this blog, we’re not just going to look at the trend itself; we’ll dig deeper into why the privacy concerns are real, and what organisations and individuals should think about before jumping on the next AI wave.
What the ChatGPT Caricature Trend Actually Is
The trend is deceptively simple at first glance. Users provide a prompt along the lines of:
“Create a caricature of me and my job based on everything you know about me.”
In many cases, users also upload a photo or include descriptive details about their interests, role or personal life. ChatGPT then generates a text description of the caricature, which is passed to an image generator to produce the picture.
The resulting images often include surprisingly personalised elements: tools of the trade, hobbies, even subtle hints about workplace context drawn from earlier chats or data in a user’s history. In a lot of cases I’ve seen, the image outright includes the person’s company name and logo, and some have had aspects of their day-to-day work or ongoing projects visualised. In a way, I suppose that’s part of the appeal. It feels custom, and instead of a generic cartoon, you get something that looks, in some ways, like “you.”
But the trend doesn’t exist in a vacuum, and it reflects a broader shift in how people interact with AI - and that’s before we even get on to the privacy angle, and how much data people are willing to share without pausing to consider the longer-term implications.
The Privacy Risk People Aren’t Seeing
The core concern isn’t that the caricature tool is inherently malicious. The concern is what users choose to feed it and what that implies about personal data exposure.
Generative AI platforms, including ChatGPT, handle input differently depending on settings and regional privacy defaults. Some models retain context across sessions to improve continuity; others store interactions unless users opt out. Even if a platform says it doesn’t “remember” forever, the act of uploading a photo or entering detailed prompts can mean that identifiable information, professional context, or sensitive personal details have been processed and potentially stored somewhere.
This - and indeed the wider use of AI platforms - should already have raised concerns for you from a working perspective, but if it hasn’t, here’s what we’re mindful of:
1. Data Permanence: AI interactions may feel temporary, but in reality, they may not be. Even if providers offer opt-outs or say they have limited retention policies, you’re still ultimately uploading files into a system you don’t control or own - you only have to look at Facebook’s Cambridge Analytica controversy to see that you should always err on the side of caution. Otherwise, you’re just sticking photos, context, files, spreadsheets and everything else into something to be processed, logged and retained under T&Cs that most people never read fully (for context, OpenAI’s privacy policy alludes to the fact that anything uploaded on the free version of ChatGPT can be shared and used to train future models).
2. Profiling and Aggregation Risk: Whilst a single caricature may not be a nightmare for you, it does let these AI systems build and extract patterns from the details you feed in. When you provide job titles, industry context, interests, location hints and personality traits, you’re effectively constructing a compact profile that becomes more valuable when combined with publicly shared outputs. Small details can - and will - accumulate, as the short sketch after this list illustrates.
3. Workplace Exposure: Sure, it might seem like an easy shortcut to get AI to summarise massive company documents, organise client lists, or process your sensitive data. But that’s actually a glaring business issue. Many people have experimented - or still do - with AI tools on devices that access corporate systems, entering prompts that may include employer names, client types, internal language or operational detail to make the output more “accurate”. Individually, those references may seem minor. Collectively, they can reveal more about a business than intended. The concern isn’t that AI is actively spying on you. It’s that casual experimentation can blur the boundary between personal curiosity and corporate responsibility.
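To make the aggregation point concrete, here’s a deliberately simple, hypothetical Python sketch. The employer, job title and hobbies are invented, and real profiling relies on automated extraction at scale rather than hand-written rules, but it shows how a few throwaway prompt details combine into a recognisable profile.

```python
# Hypothetical illustration: every name, employer and hobby below is invented.
# Real systems extract these signals automatically; the point is how fragments
# from separate, harmless-looking prompts accumulate into one profile.
prompts = [
    "Draw a caricature of me - I'm a finance director based in Leeds",
    "Add my golden retriever and my marathon medals",
    "Put the Acme Ltd logo on my laptop screen",
]

profile = {"personal_details": []}

for text in prompts:
    lowered = text.lower()
    if "finance director" in lowered:
        profile["job_title"] = "Finance Director"
    if "leeds" in lowered:
        profile["location"] = "Leeds"
    if "acme ltd" in lowered:
        profile["employer"] = "Acme Ltd"
    if "golden retriever" in lowered:
        profile["personal_details"].append("owns a golden retriever")
    if "marathon" in lowered:
        profile["personal_details"].append("runs marathons")

print(profile)
# {'personal_details': ['owns a golden retriever', 'runs marathons'],
#  'job_title': 'Finance Director', 'location': 'Leeds', 'employer': 'Acme Ltd'}
```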
Even if an AI tool processes data temporarily, the lack of transparency into retention policies and training pipelines can leave users in the dark about how their inputs are handled.
From Playful Fun to Practical Risk
That might not feel like a pressing concern for a simple caricature, but context and the bigger picture matter.
Consider the kinds of additional details people sometimes include to get “better” results:
Employer name
Job title or team
Office locations or landmarks
Hobbies and routines
Do they add some lovely creative flourishes? Sure. But when combined with that photo of yourself, they can give AI systems (and anyone who sees the output) a surprisingly rich profile that goes beyond what most people would normally put online. And that’s where the risk curve starts to bend.
The more specific the detail, the easier it becomes for an attacker or scraper to piece together:
Identity data
Professional affiliations
Lifestyle cues
Social networks
Patterns that could be used for phishing or impersonation
This isn’t science fiction. If you think that attacks aimed at getting into your ChatGPT, Gemini, Claude or other AI accounts aren’t already ongoing, it’s time to start thinking logically.
The concern isn’t that an AI caricature alone will destroy someone’s life. The concern is that normalising the sharing of layered personal data with opaque AI systems reduces the friction for real risk to emerge. It’s the same impulse as tagging every location in every post, only this time it’s bundled into an AI prompt that may persist in ways you don’t immediately see.
With one successful attack, a misclick, a reused password, or no 2FA in place, someone can jump into your AI account, and you’ve literally left a blueprint for you, your job, your career, a bunch of sensitive files, and so much more, in one handy and easy-to-navigate location.
Why This Matters Beyond the Fun
Most organisations have policies about sharing customer data, intellectual property, or confidential business information. But few have clear guidance on how employees should use public AI tools when creating content that blends personal and professional data - understandably, since it’s all so new. Yet this trend highlights why such guidance matters:
Employees often don’t see the risk
What feels like a light-hearted personal experiment can touch on elements of your digital identity that you’d otherwise keep private. Generating and sharing an AI caricature in a professional context effectively broadcasts a mini-profile that others can reuse, mash up, or scrape. If an attacker targeting your CFO with an in-depth phishing campaign sees them share a caricature featuring their pet dog alongside a handful of their hobbies, do you start to see how that feeds the kind of detail that could open you up to threats?
AI platform policies are not uniform
Some tools offer the option to turn off training and memory features; others store context by default. Users rarely read terms of service, and even when they do, the implications of how data may be used for training or improvement are easy to miss. A blanket rule to implement is that if you’re running a free version of any software, you should be cautious (to say the least) about uploading anything at all related to your work.
Once data is out there, it’s hard to control
Images and descriptions posted online can be downloaded, indexed, cross-referenced and reused. Privacy isn’t just about what a platform does with your data; it’s about what others can do once it’s public. If you’re not ready to face the uncomfortable reality of your once-sensitive or personally identifiable information being out there for anyone in the world to use, exercise caution.
Practical Steps for Responsible AI Use
None of this means you should never engage with AI creativity or that you have to avoid every trend; it means being deliberate about what data you share and understanding the implications. What you choose to do with it, and indeed how you choose to deal with the implications and knock-on effects of AI, are, ultimately, your own business.
That said, here are some straightforward practices we’d recommend:
Avoid uploading real photos unless absolutely necessary
Keep prompts generic and don’t include employer names, project details, or personal identifiers (see the simple scrubbing sketch after this list)
Review privacy settings and data retention options in any AI platform you use
If AI tools allow it, disable memory or training use for your content
Treat public AI prompts the same as social posts: if you wouldn’t post it verbatim, don’t feed it to AI
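As a rough illustration of what “keep prompts generic” can look like in practice, here’s a minimal, hypothetical Python sketch that scrubs obvious identifiers before a prompt ever leaves your machine. The term list, the fictional company and project names, and the scrub_prompt helper are all assumptions for illustration; a real deployment would lean on a proper PII-detection or data-loss-prevention tool rather than a word list.

```python
import re

# Hypothetical example: the terms, names and email address below are invented.
SENSITIVE_TERMS = ["Acme Ltd", "Project Falcon", "Jane Smith"]
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+")

def scrub_prompt(prompt: str) -> str:
    """Replace known employer/project names and email addresses before a
    prompt is sent to a public AI tool."""
    cleaned = prompt
    for term in SENSITIVE_TERMS:
        cleaned = cleaned.replace(term, "[REDACTED]")
    return EMAIL_PATTERN.sub("[EMAIL]", cleaned)

print(scrub_prompt(
    "Create a caricature of me at Acme Ltd working on Project Falcon, "
    "and email it to jane.smith@acme.co.uk"
))
# Create a caricature of me at [REDACTED] working on [REDACTED],
# and email it to [EMAIL]
```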
This mirrors sensible data hygiene practices many organisations already use internally, applied to a new class of tools that are still unfamiliar to many.
Final Thought
For many, the caricature trend is fun, engaging and understandable, and people like seeing playful, personalised content on feeds that so often reward hostile, divisive or harmful content. Others may have seen it as a lot like the ‘action figure’ trend from last year and thought “oh, that looks terrible, no thank you” (and in my own opinion, they’d be 100% correct, but I digress...).
What it should do is serve as a timely reminder that every entertaining interaction with AI carries implicit data choices, and the novelty of the output shouldn’t obscure the reality of the input. Nor should we overlook the privacy concerns just because this is a new and still-growing area.
As AI systems become more central in both business and daily life, the questions we ask about privacy, data handling and how much we are prepared to share will shape not just our digital personas, but the security and resilience of the systems we rely on.
This trend - and indeed this blog itself - isn’t just about caricatures. It’s about how we choose to interact with technology that knows more about us, intentionally or not, and what that means for privacy in an era where data is both currency and risk.