RSL Media launches Human Consent Standard for AI licensing

According to The Verge, nonprofit RSL Media on May 12, 2026 unveiled the Human Consent Standard, a machine-readable licensing framework that lets people set terms for how AI systems may use their likenesses, creative works, characters, and designs. The Verge reports the standard is backed by talent including George Clooney, Tom Hanks, Meryl Streep, Viola Davis, Kristen Stewart, and Steven Soderbergh, and by organizations such as the Creative Artists Agency and the Music Artists Coalition.
What happened
According to The Verge, nonprofit RSL Media announced the Human Consent Standard, a new machine-readable licensing framework that lets people set terms governing how AI systems may use their likenesses, creative works, characters, and designs. The outlet reports the launch is backed by public figures including George Clooney, Tom Hanks, and Meryl Streep, as well as industry groups such as the Creative Artists Agency and the Music Artists Coalition. In an email to The Verge, RSL Media cofounder Eckart Walther said the new standard builds on the existing RSL Standard and can be discovered by AI crawlers via machine-readable web signals; Walther wrote that the Human Consent Standard "applies to the underlying work, identity, character, or mark itself, wherever it appears."
Technical details
Editorial analysis - technical context: Public reporting indicates the Human Consent Standard follows the wider pattern of machine-readable metadata approaches used to communicate permissions to crawlers and automated systems; similar schemes historically rely on structured tags, headers, or registry entries to encode licensing rules. For practitioners, the critical technical question is how broadly and consistently AI developers will implement detection and enforcement of such signals, and whether the standard includes validation, provenance, or dispute-resolution mechanisms to handle conflicting signals.
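To make the "machine-readable web signals" pattern concrete, here is a minimal illustrative sketch of how a crawler might look for a licensing directive in a site's robots.txt body. The directive name `License:` and the overall discovery flow are assumptions for illustration only; the reporting summarized here does not specify the Human Consent Standard's actual signal format.

```python
# Illustrative sketch only: the "License:" directive is a hypothetical
# signal name, not a confirmed part of the Human Consent Standard.

def find_license_signals(robots_txt: str) -> list[str]:
    """Return any License: directive values found in a robots.txt body."""
    signals = []
    for line in robots_txt.splitlines():
        # robots.txt directives are conventionally case-insensitive;
        # partition on the first ":" so URLs in the value stay intact.
        key, _, value = line.partition(":")
        if key.strip().lower() == "license" and value.strip():
            signals.append(value.strip())
    return signals

sample = """User-agent: *
Disallow: /private/
License: https://example.com/license.xml
"""
print(find_license_signals(sample))  # ['https://example.com/license.xml']
```

In practice, a standard like this would also need to define precedence rules when signals conflict (for example, a page-level tag versus a site-level file), which is exactly the validation and dispute-resolution question raised above.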
Context and significance
Industry context: Public coverage frames the Human Consent Standard as part of a broader push by creators and rights organizations to assert control over AI training and synthesis of people's likenesses and creative outputs. For model developers and data engineers, that trend raises operational questions about ingest pipelines, content filtering, and rights management at scale. Legal and contract teams tracking training-data compliance will likely treat machine-readable creator intent as an additional input when auditing datasets.
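The ingest-pipeline implication can be sketched in a few lines: a consent-aware filter that drops records lacking an affirmative training signal. The record fields (`url`, `consent`) and the permitted-value vocabulary are hypothetical; a real pipeline would consult whatever signal format the standard ultimately specifies.

```python
# Hypothetical consent-aware dataset filter. Field names and the
# "ai-training-permitted" value are illustrative assumptions, not
# part of any published Human Consent Standard specification.

def filter_by_consent(records, allowed=frozenset({"ai-training-permitted"})):
    """Keep only records whose consent signal is in the allowed set.

    Records with no recorded signal are excluded by default: a
    conservative policy that treats absence of consent as refusal.
    """
    return [r for r in records if r.get("consent") in allowed]

records = [
    {"url": "https://example.com/a", "consent": "ai-training-permitted"},
    {"url": "https://example.com/b", "consent": "no-ai-use"},
    {"url": "https://example.com/c"},  # no signal recorded
]
print([r["url"] for r in filter_by_consent(records)])
# ['https://example.com/a']
```

The deny-by-default choice here is a policy decision, not a technical requirement; compliance teams would need to decide how to treat unsignaled content at scale.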
What to watch
Editorial analysis: Observers should track:
- the formal specification and machine-readable formats RSL publishes
- adoption by major platforms and model providers
- whether industry indexing and crawling tools update to detect and honor the standard

Reporting to date does not quote any major AI provider committing to implement the Human Consent Standard, and RSL Media has not been quoted making enforcement claims beyond discoverability comments to The Verge.
Scoring Rationale
The launch is a notable development for creator rights and dataset governance because it introduces a machine-readable standard specifically targeting AI use of likenesses. The story is important for practitioners handling training data and compliance, but its practical impact depends on adoption by platforms and model builders.