Pakistani Brand Uses AI-Morphed Alia Bhatt Images

A Pakistan-based clothing label, WajAyesha Clothing (Instagram: Wajayesha Official), posted AI-morphed images of actor Alia Bhatt wearing its Pure Sheesha Silk collection. Fans and netizens quickly identified the visuals as digitally altered and challenged the brand for using the actor's likeness without authorisation. The brand responded to comments with dismissive replies, fuelling further backlash. The incident is the latest example of unauthorised synthetic media being used for commercial promotion, raising questions about personality rights, cross-border enforcement, platform moderation, and the reputational risks amateur AI edits pose for brands and platforms.
What happened
A Pakistan-based label, WajAyesha Clothing (Instagram handle appearing as Wajayesha Official), posted images showing Alia Bhatt modelling its Pure Sheesha Silk collection. Social media users and fans identified the images as AI-morphed edits and called out the brand for using the actor's likeness without consent. The brand responded to comments with cheeky replies, including saying "No, she will not" when warned about a lawsuit, and asking followers to make the post go viral so the actor would notice.
Technical details
The edits are consistent with contemporary image-synthesis workflows: face-preservation combined with outfit inpainting and color manipulation. Visual clues reported by observers include inconsistent lighting, edge artifacts around hair and clothing, and compositional oddities that point to automated image synthesis and inpainting rather than a professional photoshoot. From a practitioner perspective, this is an instance of three common tooling capabilities being chained together:
- face-conditioned generation or face-swapping to preserve celebrity facial features
- inpainting/segmentation to replace clothing and textures
- color/style transfer to produce multiple palette variants
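To make the last capability concrete, the sketch below shows the simplest form of palette-variant generation: per-channel statistics matching (Reinhard-style colour transfer, done here in RGB rather than Lab space for brevity). The function name and the plain-NumPy implementation are illustrative assumptions, not the brand's actual tooling:

```python
import numpy as np

def match_palette(source, target):
    """Shift each colour channel of `source` so its mean and spread match
    `target` — a crude stand-in for colour/style transfer.

    Illustrative sketch: per-channel statistics matching in RGB; real
    pipelines typically work in a perceptual space such as Lab.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # Guard against a flat source channel (zero spread).
        scale = t_std / s_std if s_std > 1e-6 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

Chaining this after face-preserving generation and inpainting is how a single edit can be turned into a whole "collection" of palette variants at near-zero cost.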
Context and significance
This episode fits a growing pattern where low-cost generative tools enable small brands and social accounts to fabricate celebrity endorsements. The story echoes higher-profile actions by public figures seeking stronger personality-rights protections; news coverage referenced prior efforts by actors like Amitabh Bachchan and Salman Khan to clamp down on unauthorised uses of their images. For practitioners building detection, moderation, or content-attribution systems, the case underlines two persistent challenges: first, easily accessible synthesis tools lower the bar for misuse; second, cross-border posting complicates takedown and legal remedies.
Risk and operational implications
Even a small brand faces reputational and legal risk from this kind of post. Platforms must balance moderation speed with false-positive risk when assessing synthetic media claims. For teams building detection pipelines, the incident highlights the need for multi-signal approaches that combine:
- pixel-level artifact detection with model-based classifiers
- provenance and metadata checks (signed uploads, watermarking)
- user-reporting workflows tuned for celebrity-rights claims
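One way to realise the first signal (pixel-level artifact detection) is noise-inconsistency analysis: spliced or inpainted regions often carry noise statistics that differ from the rest of the frame. The toy sketch below computes a per-block noise-energy map under that assumption; the function name, block size, and any threshold you would apply on top are illustrative, not a production detector:

```python
import numpy as np

def noise_variance_map(gray, block=8):
    """Estimate local noise energy on non-overlapping blocks of a
    grayscale image (values in [0, 255], 2-D array).

    A 3x3 Laplacian high-pass isolates the noise-like residual; the
    variance of that residual per block gives a coarse map. An unusually
    uneven map is one weak signal of compositing or inpainting.
    """
    g = gray.astype(np.float64)
    # Laplacian residual: 4*centre minus the four axial neighbours.
    res = (4 * g[1:-1, 1:-1] - g[:-2, 1:-1] - g[2:, 1:-1]
           - g[1:-1, :-2] - g[1:-1, 2:])
    h, w = res.shape
    h, w = h - h % block, w - w % block  # trim to whole blocks
    blocks = res[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3))
```

On heavily recompressed social-media images this signal alone is weak, which is why it belongs alongside model-based classifiers, provenance checks, and user reports rather than replacing them.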
What to watch
Whether Alia Bhatt or her representatives issue a takedown or legal notice, how Instagram responds, and whether the brand removes the material or issues an apology. This incident will likely push renewed calls for clearer platform policies, automated provenance tools, and proactive verification for commercial promotional posts.
Scoring Rationale
The incident is a clear example of synthetic-media misuse that matters to practitioners working on detection, moderation, and legal risk, but it does not introduce new technology or a major policy shift. Its importance is practical rather than systemic.