Figure AI Demonstrates Humanoid Robots Making a Bed
Figure AI has published a video and blog post showing two humanoid F.03 robots autonomously tidying a staged bedroom and making a bed in under two minutes. The company says the robots run an onboard Helix-02 Vision-Language-Action policy and coordinate purely via visual observation, with "no shared planner between them, no message passing, no central coordinator," per the blog post. The demo includes locomotion, bimanual dexterity, object reorientation, hanging clothes, and smoothing a comforter, all presented as learned behaviors mapping pixels to actions (Figure AI blog). CEO Brett Adcock posted the video on X, and one company post read, "Honestly, they're better at it than most humans," according to social-media coverage compiled by DI.GG. Business Insider noted the demo in the context of broader humanoid competition, including Tesla's Optimus (Business Insider).
What happened
Figure AI released a video and a detailed blog post showing two humanoid robots performing a fully autonomous bedroom reset, including making a bed, in under two minutes (Figure AI blog; DI.GG). The robots shown are described as F.03 hardware running the onboard policy Helix-02 and performing tasks directly from pixels to motor actions, according to Figure AI's May 8, 2026 post (Figure AI blog). The company states this demo runs a single learned Vision-Language-Action policy on each robot, and that there is "no shared planner between them, no message passing, no central coordinator," with coordination inferred visually from partner motion (Figure AI blog). The published footage shows the robots opening doors, hanging clothing, placing headphones on a stand, closing a book, moving furniture, and jointly lifting and smoothing a comforter to make a bed (Figure AI blog; DI.GG). CEO Brett Adcock posted the video on X and one company post included the line "Honestly, they're better at it than most humans," as recorded in social-media coverage (DI.GG). Business Insider highlighted the demo and placed it alongside competing humanoid efforts including Tesla's Optimus (Business Insider).
Technical details
Per the company's blog, Helix-02 is presented as a single learned model that integrates visual input and motor control to produce coordinated locomotion and dexterous manipulation on humanoid hardware (Figure AI blog). Figure AI enumerated task-level capabilities demonstrated in the video, including whole-body door manipulation, furniture pushing using foot placement and posture, draping garments on narrow fixtures, in-hand object reorientation, and bimanual book handling (Figure AI blog). The company describes the multi-robot coordination as emerging from each agent visually inferring its partner's intent rather than from explicit inter-robot messaging (Figure AI blog). Social posts accompanying the release emphasize that the behavior is "fully autonomous, no teleop" and runs onboard the robots (DI.GG).
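Figure AI has not published Helix-02's architecture, so the following is a purely illustrative sketch of what a Vision-Language-Action policy's interface looks like: a camera frame and a language instruction go in, and a short chunk of continuous joint-space actions comes out. Every class and method name here is hypothetical, and the "model" is a random stand-in, not a trained network.

```python
import numpy as np

class ToyVLAPolicy:
    """Illustrative Vision-Language-Action policy: pixels + instruction -> joint actions.
    All names are hypothetical; Figure AI has not published Helix-02's internals."""

    def __init__(self, action_dim=32, horizon=8, seed=0):
        self.action_dim = action_dim   # e.g. arm, hand, and whole-body joint targets
        self.horizon = horizon         # actions predicted per inference step ("action chunk")
        self.rng = np.random.default_rng(seed)

    def encode(self, image, instruction):
        # Stand-in for a vision-language backbone: hash inputs into a feature vector.
        h = hash((image.tobytes(), instruction)) % (2**32)
        return np.random.default_rng(h).standard_normal(128)

    def act(self, image, instruction):
        # Decode a short sequence of bounded joint-space actions from the features.
        features = self.encode(image, instruction)
        w = self.rng.standard_normal((self.horizon, self.action_dim, 128))
        return np.tanh(w @ features)   # shape: (horizon, action_dim)

policy = ToyVLAPolicy()
frame = np.zeros((224, 224, 3), dtype=np.uint8)   # placeholder camera frame
chunk = policy.act(frame, "make the bed")
print(chunk.shape)   # (8, 32)
```

The "action chunk" pattern (predicting several timesteps per inference call) is a common way to amortize inference cost on real hardware, though whether Helix-02 uses it is not stated in the blog.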
Editorial analysis (technical context): Companies attempting multi-agent humanoid coordination in research settings often rely on explicit communication channels, centralized planners, or staged synchronization. As an industry-pattern observation, learned, perception-driven inference of partner intent can remove the engineering burden of explicit messaging, but it typically raises questions about robustness to occlusion, out-of-distribution scenes, and failure-mode interpretability. For practitioners: integrating locomotion, whole-body balance, and dexterous manipulation into a single policy, as reported here, increases the state and action dimensionality the model must manage, which in turn tends to demand substantial training data, careful sim-to-real transfer, or extensive real-world data collection.
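The blog's claim of coordination with no message passing can be illustrated with a toy scheme: each robot extrapolates its partner's motion from recently observed poses and conditions its own choice on that prediction. This is a hypothetical sketch under simple constant-velocity assumptions, not Figure AI's actual method, which the company describes only as learned visual inference.

```python
import numpy as np

def infer_partner_intent(observed_positions, dt=0.1, lookahead=5):
    """Predict a partner's near-future position by constant-velocity extrapolation
    from visually observed poses -- a stand-in for learned intent inference."""
    positions = np.asarray(observed_positions, dtype=float)
    velocity = (positions[-1] - positions[-2]) / dt      # finite-difference velocity
    return positions[-1] + velocity * dt * lookahead     # extrapolated position

def choose_grasp_corner(predicted_partner_position, bed_corners):
    """Pick the comforter corner farthest from where the partner is headed,
    so two robots implicitly divide the task without any messaging."""
    corners = np.asarray(bed_corners, dtype=float)
    dist_to_partner = np.linalg.norm(corners - predicted_partner_position, axis=1)
    return int(np.argmax(dist_to_partner))

# Partner observed moving along the x-axis toward one side of the bed.
partner_track = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
predicted = infer_partner_intent(partner_track)
corners = [(0.5, 0.0), (0.5, 2.0), (-0.5, 0.0), (-0.5, 2.0)]
corner = choose_grasp_corner(predicted, corners)
print(predicted, corner)   # [0.7 0. ] 3
```

Even this trivial version exhibits the failure modes noted above: if the partner is occluded, the position track stalls and the extrapolation (and therefore the task division) degrades, which is why robustness metrics matter for assessing the real system.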
Context and significance
Industry context
Humanoid robotics has moved from single-skill demonstrations toward longer, multi-step tasks in human environments; this demo advances along two of the field's key vectors, end-to-end visual-motor integration and multi-agent coordination (Business Insider; Figure AI blog). Observers tracking the sector will note that it adds public evidence that end-to-end learned policies are being attempted on full-size humanoids for compound tasks. The demonstration does not, by itself, document real-world reliability, generalization across room layouts, safety guarantees, or operational cost metrics, all of which remain necessary to assess production readiness (industry-pattern observations).
What to watch
For practitioners: monitor whether Figure or independent evaluators publish quantitative metrics beyond curated video. Examples include task success rates over many randomized scenes, failure-mode breakdowns, latency and compute budgets for onboard inference, sample-efficiency numbers, and sim-to-real methodology. Also watch for reproducibility signals such as open datasets, benchmarked comparisons, or third-party deployments. Finally, follow adjacent industry moves: competitor demos (for example, Tesla's Optimus-related work referenced in Business Insider), regulatory attention on humanoid deployments in public spaces, and early pilot integrations in controlled facilities.
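If such metrics are ever published, aggregating them is straightforward. A minimal sketch of computing a success rate and a failure-mode breakdown over randomized trials; the trial data below is fabricated purely for illustration:

```python
from collections import Counter

def summarize_trials(trials):
    """Aggregate per-trial results into a success rate and failure-mode breakdown.
    Each trial is a dict with 'success' (bool) and, on failure, a 'failure_mode'."""
    n = len(trials)
    successes = sum(t["success"] for t in trials)
    modes = Counter(t["failure_mode"] for t in trials if not t["success"])
    return {"n": n, "success_rate": successes / n, "failure_modes": dict(modes)}

# Hypothetical results over 20 randomized bedroom scenes (fabricated numbers).
trials = (
    [{"success": True}] * 17
    + [{"success": False, "failure_mode": "grasp_slip"}] * 2
    + [{"success": False, "failure_mode": "partner_occlusion"}] * 1
)
report = summarize_trials(trials)
print(report)   # success_rate 0.85 over 20 trials
```

A headline success rate alone hides a lot; the per-mode breakdown is what lets practitioners distinguish manipulation failures from perception or coordination failures.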
Scoring Rationale
The demo is a notable step in multi-agent humanoid capability, integrating locomotion, dexterity, and visual coordination, but it remains a curated video without published benchmarks or robustness metrics. That makes it interesting and relevant for practitioners, though not yet industry-shifting.