P&G Uses Computer Vision to Model Human Behavior

Procter & Gamble is applying advanced computer vision, synthetic data generation, and 3D human modeling to understand human behavior at product scale. Oya Aran, Director of R&D Data Science and AI at P&G, will present these practical deployments at OSCCA on May 4, 2026, part of Display Week 2026 in Los Angeles. The work spans large vision-and-language systems, robotics, and production-level pipelines that bridge research and multi-unit productization inside a global consumer goods company. Expect discussion of synthetic-to-real strategies, privacy and governance tradeoffs, and operational challenges of scaling models across brands and categories.
What happened
Procter & Gamble is bringing advanced computer vision into product R&D to model and interpret human behavior at scale, and Oya Aran, Director of R&D Data Science and AI at P&G, will present the program at OSCCA on May 4, 2026 during Display Week 2026. The announcement highlights work in computer vision, synthetic data, 3D human modeling, large vision-language models, and robotics, with an emphasis on moving research into multi-brand production pipelines.
Technical details
P&G's stack centers on three capability areas practitioners care about: synthetic data generation for annotation-sparse tasks, 3D reconstruction and parametric human models for pose and behavior analysis, and integration with large vision-language systems for richer context understanding. Operationally this implies investments in simulation, domain randomization, and synthetic-to-real transfer, plus tooling for dataset versioning and privacy-aware labeling. Key engineering problems include handling domain shift across product categories, automating annotation through synthetic pipelines, and deploying models across diverse inference targets from cloud services to robotics platforms.
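The domain-randomization approach mentioned above can be illustrated with a minimal sketch. This is not P&G's pipeline; it is a generic, hypothetical example of how a synthetic-data generator might sample nuisance parameters (lighting, camera pose, background) per scene so that a downstream renderer produces varied images, which is the core idea behind synthetic-to-real transfer:

```python
import random

def randomize_scene(seed=None):
    """Sample one synthetic-scene configuration via domain randomization.

    All parameter names and ranges here are illustrative assumptions,
    not drawn from any specific production system. Each call draws
    random nuisance parameters so that rendered training images vary
    widely, encouraging models to generalize from synthetic to real data.
    """
    rng = random.Random(seed)
    return {
        # Lighting intensity and color temperature vary widely on purpose.
        "light_intensity": rng.uniform(0.2, 2.0),
        "light_temperature_k": rng.uniform(2500, 7500),
        # Camera jitter around a nominal viewpoint.
        "camera_yaw_deg": rng.uniform(-30, 30),
        "camera_pitch_deg": rng.uniform(-15, 15),
        # Background and texture chosen from broad pools.
        "background": rng.choice(["plain", "cluttered", "textured"]),
        "texture_hue": rng.uniform(0.0, 1.0),
    }

def make_batch(n, seed=0):
    """Generate n scene configs with reproducible per-sample seeds."""
    return [randomize_scene(seed + i) for i in range(n)]

if __name__ == "__main__":
    for cfg in make_batch(3):
        print(cfg)
```

In a real pipeline these configurations would feed a renderer that emits labeled images, with the per-sample seeds providing the dataset versioning hook the paragraph above alludes to.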
Conference lineup
OSCCA pairs this presentation with talks from other practitioners, signaling industry-focused CV discussions rather than pure academic benchmarks. Speakers include:
- Gary Bradski, founder of OpenCV
- Doug Fidaleo, Director of Disney Research Imagineering
- Shawn Frayne, CEO of Looking Glass Factory
- Matt Flagg, Chief Science Officer of Code 19 Racing
- Glenn Jocher, Founder and CEO of Ultralytics (pre-recorded)
Context and significance
This is significant because P&G represents a large, complex enterprise environment where ML models must intersect with supply chains, privacy regimes, and legacy product teams. The work shows how enterprise AI focuses less on frontier model size and more on robustness, synthetic-data workflows, cross-unit productization, and regulatory-compliant data governance. For the research-to-production pipeline, P&G is an exemplar of applying computer vision to behavioral signals that directly inform product design, packaging, and real-world testing.
What to watch
Look for technical takeaways on synthetic-to-real transfer, annotation automation, and governance patterns that enable model reuse across brands. Practitioners should also watch for concrete tooling or open-source releases following the talk that would lower the barrier to similar enterprise deployments.
Scoring Rationale
This is a notable example of enterprise-scale computer vision applied to consumer behavior, relevant to practitioners working on production ML pipelines and synthetic-data workflows. It is not a frontier model release or major funding event, so the impact is significant but not industry-shaking.