Netflix Releases VOID Video Inpainting Model

On April 3, 2026, researchers from Netflix and Sofia University published a preprint and released VOID, a vision-language model that removes objects from video and inpaints physically plausible outcomes. The model is available on Hugging Face, and in human evaluations it was preferred 64.8 percent of the time, versus 18.4 percent for Runway, across synthetic and real-world scenarios. VOID targets film editing and automated video-manipulation workflows.
Scoring Rationale
VOID is a notable research release with runnable code on Hugging Face and strong human-preference results, boosting novelty and actionability. Score moderated because the work is a preprint (not peer reviewed) and coverage here is brief rather than deeply technical.
Sources
- Now even Netflix has its own video AI (theregister.com)