
Vidar: Embodied Video Diffusion Model for Generalist Manipulation

arXiv: 2507.12898 · Keywords: dual-arm manipulation, video diffusion world model, few-shot transfer, masked inverse dynamics
Method name: Vidar (Video Diffusion for Action Reasoning)
Authors: Yao Feng, Hengkai Tan, Xinyi Mao, Chendong Xiang, Guodong Liu, Shuhe Huang, Hang Su, Jun Zhu
Organization: Tsinghua University; Shengshu Tech
Format: NeurIPS 2025 preprint style

1. Quick overview of the paper

What problem does the paper address? On a new robot platform, dual-arm manipulation requires large amounts of demonstration data bound to that hardware; differences in action spaces, viewpoints, and morphologies across robots make cross-platform transfer difficult. The paper asks: can dynamics knowledge from Internet video and robot data be turned into a transferable prior, so that a previously unseen dual-arm platform can perform multi-task manipulation with only about 20 minutes of demonstrations?
The authors' approach: decompose the policy into two steps. First, a video diffusion model generates a multi-view video rollout of "what should happen next"; then a Masked Inverse Dynamics Model (MIDM) converts the video frames into target-robot actions. The key ingredients are a unified observation space, embodied pre-training on 750K multi-view dual-arm videos, a small amount of full-parameter fine-tuning on the target platform, multi-sample reranking at test time, and action-relevant masks learned without pixel annotations.
Most important results: in the RoboTwin 2.0 multi-task setting, Vidar reaches an average success rate of 60.0% in the low-data clean scenario versus 25.0% for Pi0.5, and 65.8% versus 44.8% in the standard clean scenario. On a real platform, with only about 20 minutes (232 episodes) of human demonstrations, Vidar achieves 68.2%, 66.7%, and 55.6% on seen tasks/backgrounds, unseen tasks, and unseen backgrounds respectively, significantly above UniPi and VPP.
Things to note when reading: the point of the paper is not just "making videos look good" but whether the videos are action-aware enough. Pay special attention to three things: whether the unified observation space really alleviates the embodiment gap; whether MIDM's mask is only effective on the real-platform distribution; and that TTS relies on GPT-4o reranking and a cloud-hosted video model, which brings latency, cost, and a lack of closed-loop control. Another boundary: the method relies heavily on multiple viewpoints that can see the arms and contact regions.
One-sentence version: Vidar treats a large video model as a transferable world prior for robot manipulation, learns "manipulation dynamics in video" in a unified multi-view observation space, and then uses an inverse dynamics model with an implicit mask to map the predicted video into the action space of a new dual-arm robot.
Vidar method overview
Figure 2. Vidar's overall pipeline: a video diffusion prior trained on external and robot videos, fine-tuned with a small number of demonstrations on the target platform; at inference, TTS selects the better rollout and MIDM outputs the actions.

2. Motivation and problem definition

2.1 Why not directly train the VLA policy?

The authors argue that the difficulty of dual-arm manipulation comes from three coupled factors: an action space that grows with the number of joints, the need for precise temporal coordination between the two arms, and contact dynamics plus long-horizon planning that are sensitive to data quality. Although VLA models connect vision, language, and action end to end, their action space is tightly tied to the robot's embodiment, so action labels from one platform are hard to transfer directly to another.

Vidar therefore chooses to bypass action-space heterogeneity: the video diffusion model learns only how the world evolves and does not learn actions directly; actions are decoded only by a lightweight inverse dynamics model on the target platform. This is the paper's basic position: video is the intermediate modality linking Internet data, robot data, and target-platform demonstrations.

2.2 Formalization issues

The original goal is to learn a conditional action policy:

$$\pi: \mathcal{L} \times \mathcal{O} \rightarrow \mathbb{P}(\mathcal{A}), $$

Here $\mathcal{L}$ is the language instruction space, $\mathcal{O}$ the observation space, and $\mathcal{A}$ the action space. The authors decompose the policy into a video generation model $G$ and an inverse dynamics model $I$:

$$\pi = I \circ G, \qquad G: \mathcal{L}\times\mathcal{O}\rightarrow\mathbb{P}(\mathcal{V}), \quad I: \mathcal{V}\rightarrow\mathcal{A}.$$

This means that most cross-task and cross-scene knowledge is carried by $G$, while the action mapping specific to the target robot is carried by $I$. When reading the paper, keep asking: does the video space really retain enough motion information? If certain joints or contact points are out of view, $I$ cannot reliably recover the actions.
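To make the decomposition concrete, here is a minimal sketch of $\pi = I \circ G$ as plain function composition; `video_model`, `inverse_dynamics`, and the `Observation` container are hypothetical stand-ins for illustration, not the paper's actual interfaces.

```python
# Minimal sketch of pi = I ∘ G; `video_model` and `inverse_dynamics`
# are hypothetical stand-ins, not the paper's actual interfaces.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np

@dataclass
class Observation:
    views: List[np.ndarray]   # multi-view RGB frames
    description: str          # robot / camera / task text

def policy(instruction: str, obs: Observation,
           video_model: Callable, inverse_dynamics: Callable) -> np.ndarray:
    """pi(l, o) = I(G(l, o)): generate a video rollout, then decode actions."""
    rollout = video_model(instruction, obs)   # G: (l, o) -> future video frames
    return inverse_dynamics(rollout)          # I: frames -> target-robot actions
```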

2.3 Contribution positioning

4. Detailed explanation of method

4.1 Three-stage training pipeline

Vidar's training uses three tiers of data, from largest to smallest:

  1. Internet-scale video pre-training: directly adopt an existing video-model checkpoint, such as Wan2.2, Vidu 2.0, or HunyuanVideo, to obtain semantic, motion, and physical-interaction priors.
  2. Embodied domain pre-training: continue pre-training on about 750K multi-view dual-arm robot episodes to adapt the video model to robot, camera, and manipulation dynamics.
  3. Target-platform fine-tuning: run full-parameter SFT with a small number of demonstrations on the target robot, while training MIDM to map video frames to actions (a config sketch follows this list).
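As a reading aid, the three tiers can be written down as a schedule. The dictionary structure below is purely illustrative; the data scales and step counts are the ones reported in the paper (step counts shown are the Wan2.2 configuration).

```python
# Hedged sketch of Vidar's three-tier training schedule. Keys and structure
# are assumptions; data scales and step counts follow the paper.
TRAINING_STAGES = [
    {
        "name": "internet_scale_pretraining",
        "data": "reused checkpoint (Wan2.2 / Vidu 2.0 / HunyuanVideo)",
        "trains": None,  # priors are inherited, not retrained here
    },
    {
        "name": "embodied_domain_pretraining",
        "data": "~750K multi-view dual-arm robot episodes",
        "trains": "video diffusion model (full parameters)",
        "steps": 10_000,
    },
    {
        "name": "target_platform_finetuning",
        "data": "~20 min / 232 episodes on the target robot",
        "trains": "video diffusion model (full-parameter SFT) + MIDM",
        "steps": 12_000,
    },
]
```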

4.2 Video generation model: rectified flow

The authors adopt a video diffusion model in rectified-flow form. The model learns a velocity field $v(x_t, t, c)$ that flows from Gaussian noise $x_0$ to the target video $x_1$:

$$\frac{d x_t}{d t}=v(x_t, t, c), \quad t\in[0, 1].$$

During training, the velocity field is regressed onto the constant velocity of the straight path from $x_0$ to $x_1$:

$$L_G=\mathbb{E}_{c, t, x_0, x_1}\left[\left\|(x_1-x_0)-v(tx_1+(1-t)x_0, t, c)\right\|^2\right].$$

The condition $c$ here is not just the task text but the robot, camera, task, and multi-view image context from the unified observation space; this choice is what sets Vidar apart from ordinary text-to-video control methods.
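The objective and sampler are compact enough to sketch directly. Below is a minimal PyTorch version of $L_G$ and an Euler integrator for the learned velocity field; `velocity_net` is a hypothetical stand-in for the video backbone, and the conditioning `cond` is treated as opaque.

```python
# Rectified-flow sketch: L_G regresses the velocity field onto the constant
# velocity (x1 - x0) of the straight noise-to-video path; sampling integrates
# dx/dt = v(x, t, c) from t=0 (noise) to t=1 (video). `velocity_net` is a
# hypothetical stand-in for the actual video diffusion backbone.
import torch

def rectified_flow_loss(velocity_net, x1, cond):
    """L_G = E || (x1 - x0) - v(t*x1 + (1-t)*x0, t, c) ||^2."""
    x0 = torch.randn_like(x1)                      # Gaussian endpoint at t=0
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform t in [0, 1]
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast over video dims
    xt = t_ * x1 + (1 - t_) * x0                   # point on the straight path
    return ((velocity_net(xt, t, cond) - (x1 - x0)) ** 2).mean()

@torch.no_grad()
def sample_video(velocity_net, cond, shape, steps=50, device="cpu"):
    """Explicit Euler integration of the learned ODE from noise to video."""
    x = torch.randn(shape, device=device)
    for i in range(steps):
        t = torch.full((shape[0],), i / steps, device=device)
        x = x + velocity_net(x, t, cond) / steps
    return x
```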

4.3 Unified observation space

The unified observation space is defined as:

$$\mathcal{U}=\{\langle\mathbf{o}, \mathbf{l}\rangle\mid \mathbf{o}=\mathrm{aggregate}(\mathbf{I}^{(1)}, \ldots, \mathbf{I}^{(V)}), \; \mathbf{l}=\mathrm{concatenate}(l_r, l_c, l_t)\}.$$

Here $\mathbf{o}$ is an aggregation of multi-view images, and $l_r, l_c, l_t$ describe the robot platform, camera configuration, and task instruction respectively. For different datasets, the authors supply text descriptions such as "fixed high camera, movable left arm camera, movable right arm camera" as part of the condition.

The advantage of this step is that the model does not need a unified action space: it only needs to cast videos from different robot platforms into the same "multi-view images plus descriptions" format. But it also carries an implicit requirement: the camera descriptions must be stable and the viewpoints must cover the action-relevant regions.
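A toy construction of one unified-observation sample may help; the side-by-side tiling layout and the dict field names are assumptions for illustration, while `aggregate` and `concatenate` follow the definition above.

```python
# Illustrative unified-observation sample. The horizontal tiling layout and
# dict field names are assumptions, not the paper's exact format.
import numpy as np

def aggregate(views):
    """Combine V equally sized RGB views into one conditioning image."""
    return np.concatenate(views, axis=1)  # (H, V*W, 3), tiled side by side

def concatenate(l_r: str, l_c: str, l_t: str) -> str:
    """Join robot, camera, and task descriptions into one condition string."""
    return " ".join([l_r, l_c, l_t])

views = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(3)]
sample = {
    "observation": aggregate(views),
    "language": concatenate(
        "dual-arm robot with parallel grippers.",
        "fixed high camera, movable left arm camera, movable right arm camera.",
        "place the bread on the plate.",
    ),
}
```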

4.4 embodied pre-training and target platform fine-tuning

The pre-training data comes from Agibot-World, RoboMind, and RDT; the real experiments use 746,533 episodes in total, and the simulation experiments additionally add Egodex. The authors filtered out episodes with fewer than three views or shorter than four seconds, and used Agibot's frame-level annotations to cut long episodes into short clips. During pre-training, datasets are sampled in proportion to their size.
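These filtering and sampling rules translate into a few lines; `Episode` and the pooling strategy below are illustrative stand-ins for whatever data pipeline the authors actually use.

```python
# Hedged sketch of the episode filter (>= 3 views, >= 4 s) and proportional
# sampling. Drawing uniformly from the merged pool makes each dataset's
# sampling probability proportional to its post-filter episode count.
import random
from dataclasses import dataclass

@dataclass
class Episode:
    dataset: str       # e.g. "Agibot-World", "RoboMind", "RDT"
    num_views: int
    duration_s: float

def keep(ep: Episode) -> bool:
    return ep.num_views >= 3 and ep.duration_s >= 4.0

def make_sampler(episodes):
    pool = [ep for ep in episodes if keep(ep)]
    return lambda: random.choice(pool)
```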

Data from the target platform is used both to fine-tune the video diffusion model and to train MIDM. The real-platform data is 20 minutes of human demonstrations covering 81 tasks and 232 episodes; in simulation, the low-data RoboTwin setting uses 20 episodes per task and the standard setting uses 50.

4.5 Test-Time Scaling

Sampling from the video diffusion model is stochastic, so a single rollout may be physically inconsistent or mismatched with the task. Vidar therefore generates $K$ candidate videos at inference:

$$\{\tilde{\mathbf{v}}^{(i)}_{1: T}\}_{i=1}^{K}, $$

and then uses an evaluator $q_\eta$ to select the highest-scoring video:

$$\arg\max_i q_\eta(\tilde{\mathbf{v}}^{(i)}_{1: T}).$$

In the real experiments, $K=3$: three videos are generated in parallel and then reranked by GPT-4o based on physical plausibility and consistency with the task text. In simulation, TTS is turned off for reproducibility, i.e., $K=1$. The appendix notes that each pairwise comparison costs about $0.003, and GPT-4o reranking accounts for about 25% of total latency at $K=3$.
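Operationally, TTS is best-of-$K$ selection. The sketch below replaces the GPT-4o pairwise reranker with a generic `evaluator` callable scoring subsampled frames; the frame-subsampling rule and the `(instruction, frames) -> float` scoring interface are assumptions.

```python
# Best-of-K test-time scaling: sample K rollouts, score each with an
# evaluator q_eta, return the argmax. `evaluator` stands in for the paper's
# GPT-4o reranking.
def test_time_scaling(video_model, evaluator, instruction, obs, k=3):
    candidates = [video_model(instruction, obs) for _ in range(k)]
    scores = []
    for frames in candidates:
        step = len(frames) // 6 or 1          # subsample ~6 frames per rollout,
        scores.append(evaluator(instruction, frames[::step]))  # as in the paper
    best = max(range(k), key=lambda i: scores[i])
    return candidates[best]
```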

4.6 Masked Inverse Dynamics Model

MIDM contains two networks: a mask prediction network $U$ and an action regression network $R$. Given a frame $x$, it first predicts a spatial mask, then regresses the action from the masked frame:

$$m=U(x), \qquad \hat a=R(\mathrm{Round}(m)\odot x).$$

The training loss is:

$$L_I=\mathbb{E}_{x, a}\left[l(\hat a-a)+\lambda\|m\|_1\right], $$

Here $l$ is the Huber loss, the term $\lambda\|m\|_1$ encourages mask sparsity, and the Round operation is trained with a straight-through estimator. The intuition: if the model must predict actions from as few pixels as possible, it will tend to keep arms, grippers, tools, and contact points while filtering out background and reflection interference.
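The two networks and the loss fit in a short module. The sketch below uses a tiny conv net in place of the paper's 5-level U-Net and normalizes the $\ell_1$ term by pixel count; both simplifications are assumptions, while the straight-through rounding and the Huber-plus-sparsity loss follow the equations above.

```python
# MIDM sketch: soft mask m = U(x), straight-through Round, masked regression
# a_hat = R(Round(m) * x), loss L_I = Huber(a_hat, a) + lambda * ||m||_1.
# The small conv mask net stands in for the paper's 5-level U-Net.
import torch
import torch.nn as nn
import torchvision

class MIDM(nn.Module):
    def __init__(self, action_dim: int, lam: float = 3e-3):
        super().__init__()
        self.lam = lam
        self.mask_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # soft mask in (0, 1)
        )
        self.regressor = torchvision.models.resnet50(num_classes=action_dim)

    def forward(self, x):
        m = self.mask_net(x)
        m_hard = m.round()                   # binarize; gradient is zero here
        m_st = m_hard + m - m.detach()       # straight-through estimator
        return self.regressor(m_st * x), m   # regress only from kept pixels

    def loss(self, x, a):
        a_hat, m = self(x)
        huber = nn.functional.smooth_l1_loss(a_hat, a)  # Huber action loss
        return huber + self.lam * m.mean()   # ||m||_1, normalized by pixel count
```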

MIDM learned masks
Figure. MIDM's learned mask: even on unseen backgrounds and reflective surfaces, the mask still falls mainly on the robot arms and the key manipulation regions.

4.7 Why not use ready-made segmentation models

The appendix tests RoboEngine segmentation. The authors point out that it often recognizes only one arm in dual-arm scenes, is unstable under wrist viewpoints, and lacks temporal consistency. Vidar therefore does not rely on external segmentation labels; instead, it lets action supervision shape the mask implicitly.

RoboEngine segmentation results
Appendix. RoboEngine segmentation example, shown for comparison: explicit segmentation can miss action-relevant parts in multi-view dual-arm scenes.

5. Experiments and results

5.1 Experimental hypothesis

  1. Vidar achieves a higher success rate with only 20 minutes of target-domain demonstrations.
  2. Vidar generalizes to unseen tasks and unseen backgrounds.
  3. Embodied pre-training on the unified observation space improves video generation quality.
  4. MIDM generalizes better than an ordinary ResNet inverse dynamics model.

5.2 Data and training settings

| Item | Setting |
| --- | --- |
| Pre-training data | Agibot-World, RoboMind, RDT; 746,533 episodes in total for the real experiments; simulation experiments additionally add Egodex. |
| Target real-platform data | 20 minutes of human demonstrations, 81 tasks, 232 episodes; the target platform is not seen in pre-training. |
| Simulation data | RoboTwin 2.0; low-data setting uses 20 episodes per task with cameras adjusted to fully see the arms; standard setting uses 50 episodes per task with the official viewpoint. |
| Video model | Simulation uses Wan2.2; the real main experiment uses Vidu 2.0; the appendix also reproduces Wan2.2 and HunyuanVideo. |
| Training steps | Wan2.2: 10K pre-training + 12K fine-tuning; Vidu 2.0: 10K pre-training + 13K fine-tuning; videos downsampled to 8 fps. |
| MIDM | U-Net mask predictor + ResNet-50 action regressor; $\lambda=3\times10^{-3}$; trained on the fine-tuning dataset only. |

5.3 RoboTwin 2.0 Main Results

The RoboTwin results use a harder multi-task setup: rather than training a separate policy per task, a single policy is evaluated uniformly across 50 tasks with 100 episodes each. Pi0* in the table is the official leaderboard result; because it trains each task independently, the authors note its setting is easier and not fully comparable.

| Method | Low Clean | Low Randomized | Standard Clean | Standard Randomized |
| --- | --- | --- | --- | --- |
| Pi0* | -- | -- | 46.42% | 16.34% |
| Pi0.5 | 25.0% | 9.2% | 44.8% | 14.2% |
| Vidar | 60.0% | 15.7% | 65.8% | 17.5% |

Key reading: Vidar improves most in clean scenes, suggesting that the video prior and multi-view conditioning help task execution substantially; the gain in randomized scenes is smaller, indicating that background and object randomization remain bottlenecks, although Vidar still beats Pi0.5.

5.4 Real robot main results

Real experiments are divided into three categories: seen tasks/backgrounds, unseen tasks, and unseen backgrounds. UniPi and VPP are chosen as baselines because the authors argue that conventional VLA models are hard to adapt effectively with only 20 minutes of data.

| Method | Seen Tasks & Backgrounds | Unseen Tasks | Unseen Backgrounds |
| --- | --- | --- | --- |
| VPP | 4.5% | 13.3% | 0.0% |
| UniPi | 36.4% | 6.7% | 22.2% |
| Vidar | 68.2% | 66.7% | 55.6% |

The strongest evidence here is that unseen tasks still reach 66.7%, indicating that the video model does not merely memorize the demonstrated tasks but can use language and video priors to handle new semantic tasks, such as grabbing the shortest piece of bread or wiping the table with a rag. Unseen backgrounds drop to 55.6%, but remain significantly above UniPi and VPP.

Vidar prediction and execution demos
Figure. On the real robot, the left side shows the predicted video and the right side the corresponding execution, useful for checking whether the video rollout is consistent with the executed actions.

5.5 Video quality gains of embodied pre-training

| Configuration | Subject Consistency | Background Consistency | Imaging Quality |
| --- | --- | --- | --- |
| Vidu 2.0 | 0.565 | 0.800 | 0.345 |
| + Embodied Pre-training | 0.855 | 0.909 | 0.667 |

The VBench metrics show that robot-video pre-training significantly improves subject consistency, background consistency, and imaging quality. For robot control, subject consistency is not a purely aesthetic metric: it reflects whether arm, object, and contact relationships remain stable throughout the rollout.

5.6 MIDM generalization results

| Inverse dynamics model | Training Accuracy | Testing Accuracy | Testing $l_1$ Error |
| --- | --- | --- | --- |
| ResNet | 99.9% | 24.3% | 0.0430 |
| MIDM | 99.9% | 49.0% | 0.0308 |

Both models are near-perfect on the training set, but MIDM's test accuracy is roughly double, suggesting that a plain ResNet is more prone to exploiting background or texture shortcuts, while MIDM's sparse mask regularization pulls attention back to action-relevant regions.

5.7 Ablation experiment

| Configuration | Seen Tasks & Backgrounds | Unseen Tasks | Unseen Backgrounds |
| --- | --- | --- | --- |
| Vidar w/o TTS | 45.5% | 33.3% | 44.4% |
| Vidar w/o MIDM | 59.1% | 26.7% | 22.2% |
| Vidar | 68.2% | 66.7% | 55.6% |

TTS improves unseen tasks dramatically, from 33.3% to 66.7%; MIDM is especially critical for unseen backgrounds, from 22.2% to 55.6%. This matches the paper's narrative: TTS mainly filters out rollouts that mismatch the task or violate physics, and MIDM mainly resists background interference.

5.8 Supplementary results in Appendix

$\lambda$ for MIDM

$\lambda=3\times10^{-3}$ works best: testing accuracy 49.0%, testing $l_1$ error 0.0308. With $\lambda=10^{-1}$ the mask becomes too sparse and test accuracy drops to 7.1%; with $\lambda=10^{-4}$ the constraint is too weak and test accuracy is only 24.4%.

Wan2.2 additional real experiments

On 14 real tasks, Vidar-Wan2.2 averages 69.3% on seen cases versus 34.3% for Pi0.5, and 67.1% on unseen cases versus 12.9% for Pi0.5. This supports the claim that the method is not bound to Vidu 2.0.

HunyuanVideo Additional Experiments

The average success rate over six tasks is 58.3%, including 100% for grabbing an apple and placing it in the steamer, 25% for grabbing a cup handle, and 20% for stacking blocks. This shows that swapping the backbone is feasible, but per-task variance remains large.

Failure case

The appendix contains additional visualizations of success and failure cases. Readers should check whether a failure stems from implausible video prediction, incorrect TTS selection, failed MIDM action decoding, or accumulated open-loop execution error.

Additional demos and failure cases
Appendix. More examples of challenging tasks, including success and failure cases; these are important figures for understanding the method's boundary conditions.

6. Reproducibility key points

6.1 Data processing

6.2 Training resources

| Module | Resources and training |
| --- | --- |
| Vidu 2.0 | 64 NVIDIA Ampere 80GB GPUs, 23,000 iterations, about 64 hours; 10,000 pre-training steps, the rest fine-tuning. |
| Wan2.2 | 5B parameters; pre-training lr $2\times10^{-5}$, fine-tuning lr $2\times10^{-5}$, 200 warm-up steps; 10K pre-training, 12K fine-tuning. |
| HunyuanVideo | 13B parameters; 64 NVIDIA Hopper 80GB GPUs, about 54 hours; 10K pre-training, 2K fine-tuning. |
| MIDM | 92M parameters; 8 NVIDIA Hopper 80GB GPUs, 60,000 iterations, about 5 hours. |

6.3 MIDM hyperparameters

| Hyperparameter | Value |
| --- | --- |
| Mask network | U-Net, 5 levels of down/up-sampling |
| Action network | ResNet-50 |
| Loss | Huber loss + $\lambda\|m\|_1$ |
| Learning rate | $5\times10^{-4}$ |
| Warm-up | 6000 steps |
| AdamW | $\beta=(0.9, 0.999)$, $\epsilon=10^{-8}$, weight decay $10^{-2}$ |
| Mask sparsity | $\lambda=3\times10^{-3}$ |

6.4 Reasoning process

  1. Input the current multi-view images and task text.
  2. The video model generates 60 frames at 8 fps, i.e., 7.5 seconds of future video, in one shot.
  3. In the real experiments, $K=3$ candidates are generated; 5 to 7 frames are extracted from each and given to GPT-4o for reranking by task consistency and physical plausibility.
  4. Once a video is selected, MIDM runs locally and converts the video frames into an action sequence.
  5. Execution is open-loop: no re-planning based on observations is performed after generation (a sketch of the full loop follows this list).
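Putting the five steps together gives the loop below; `robot` and `decode_actions` are assumed interfaces (MIDM as a frames-to-actions callable), and `test_time_scaling` is the sketch from Section 4.5. The final loop is deliberately open-loop, matching step 5.

```python
# End-to-end inference sketch of the five steps above; all components are
# hedged stand-ins from earlier sections, not the paper's code.
def run_episode(robot, video_model, evaluator, decode_actions, instruction):
    obs = robot.observe()                                # 1: multi-view + text
    rollout = test_time_scaling(video_model, evaluator,
                                instruction, obs, k=3)   # 2-3: K=3 + reranking
    actions = decode_actions(rollout)                    # 4: MIDM, run locally
    for a in actions:                                    # 5: open-loop playback,
        robot.execute(a)                                 #    no re-planning
```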
Reproduction risks: Vidu 2.0 is not a fully open-source backbone; TTS depends on GPT-4o; the video model is deployed in the cloud, and a single 60-frame video takes about 25 seconds to generate. These factors affect standalone replication and live deployment.

7. Analysis, Limitations and Boundaries

7.1 The most valuable part of this paper

The most valuable part is the very clean cross-embodiment transfer split: do not force a unified action space; instead, unify the video observation space first, let the large video model learn manipulation dynamics across robots and viewpoints, and then use a small amount of target-platform action data to train an inverse dynamics adapter. This compresses expensive robot action supervision into the last step, which fits the current large-model paradigm of "learn universal priors first, then align cheaply".

The second value point is that MIDM's design is very simple yet captures a real pain point in robot generalization: changes in background, reflection, and viewpoint create overfitting shortcuts for ordinary image-to-action models, while action supervision plus a sparse mask forces the model to focus on arms and contact regions without segmentation labels.

7.2 Why the results hold up

7.3 Main limitations

7.4 Boundary conditions

| Applicable conditions | Conditions that do not apply or require caution |
| --- | --- |
| Multi-view RGB videos are available, with arms, grippers, and contact objects mostly visible. | Key states lie outside the image (joint angles, forces, occluded contact points), so actions cannot be recovered from video. |
| The task tolerates inference latency of seconds or more and can run open-loop for a stretch. | High-speed dynamics, real-time obstacle avoidance, and strict human-robot interaction safety constraints. |
| The target platform can collect a small number of high-quality demonstrations and train MIDM with the same camera layout. | The camera layout changes frequently or the demonstration actions are noisy. |
| Semantically explicit, visually discriminable tasks. | Fine assembly tasks requiring force control, haptics, or latent-state reasoning. |

8. Preparation for group meeting Q&A

Q1: What is the biggest difference between Vidar and UniPi?

UniPi also uses video as an intermediate planning representation, but Vidar adds large-scale embodied video pre-training, the unified multi-view observation space, and MIDM action decoding on the target platform. In the real experiments, the UniPi baseline directly fine-tunes Vidu 2.0 and uses a ResNet IDM, lacking heterogeneous robot-video pre-training and the mask mechanism.

Q2: Why doesn't the video model directly output actions?

The authors want to avoid the embodiment gap caused by action-space heterogeneity. Different robots have different joints, grippers, and control frequencies, so actions are hard to unify directly; video space is more universal and can carry semantics and physical interaction, with a lightweight inverse dynamics model then trained per target platform.

Q3: Why does the MIDM mask need no segmentation labels?

Because supervision comes from the action prediction error. If certain pixels are useful for predicting the action, keeping them lowers the Huber loss; meanwhile, $\ell_1$ regularization pushes the mask to be as small as possible, so the model tends to keep only action-relevant regions. The Round operation is trained with a straight-through estimator.

Q4: What are the functions and costs of TTS?

Its function is to reduce the variance of diffusion sampling and to pick, among multiple candidates, the video that is physically more plausible and better matched to the task. The cost is multiple video generations plus GPT-4o reranking: in the real experiments with $K=3$, reranking accounts for about 25% of total latency and introduces an external model dependency.

Q5: Which points are most likely to be challenged?

First, reproducibility: Vidu 2.0 and GPT-4o reranking are not fully open-source or controllable. Second, open-loop control and high latency. Third, the cameras on the target platform were adjusted so both arms are fully visible; this is critical to the method but may not always hold in real deployments.

Q6: If you continue this line, what is the most natural next step?

Turn Vidar from open-loop video execution into closed-loop receding-horizon control; replace GPT-4o TTS with a locally deployable physical-consistency / task-success evaluator; and integrate touch, force, or proprioception as states beyond video, to address information that is invisible in video.