
VLA-JEPA: Enhancing Vision-Language-Action Model with Latent World Model

Method name: VLA-JEPA

Authors: Jingwen Sun, Wenyao Zhang, Zekun Qi, Shaojie Ren, Zezhi Liu, Hanxin Zhu, Guangzhong Sun, Xin Jin, Zhibo Chen

Organization: University of Science and Technology of China; Zhongguancun Academy; Shanghai Jiao Tong University; Tsinghua University; Eastern Institute of Technology, Ningbo; University of Chinese Academy of Sciences; Nankai University

arXiv: v1 submitted on 2026-02-10, v2 updated on 2026-02-14

Links: arXiv: 2602.10098 | PDF | Project page | official code | Hugging Face

1. Quick overview of the paper

One-sentence summary: VLA-JEPA replaces pixel-reconstruction latent-action pretraining with a JEPA-style latent world model: the VLA predicts future latent states from the current observation and language, while future frames are strictly kept out of the student pathway, so the learned latent actions stay closer to controllable state transitions.

What problem does the paper solve? Internet videos are large-scale but carry no robot action labels; existing latent-action pretraining easily latches onto pixel differences, camera motion, and background changes rather than the state-transition semantics useful for control.

The author's approach: leakage-free state prediction. The future frame passes only through the target encoder to produce supervision targets, while the student sees only the current observation and language; predictions are aligned with the targets in latent space instead of reconstructing pixels.

Most important results: a 97.2 average success rate on LIBERO; best in 5 of 7 perturbation types on LIBERO-Plus with a 79.5 average; SimplerEnv averages of 65.2 on Google Robot and 57.3 on WidowX; on the real robot, both ID and object-layout OOD results beat $\pi_0$ and $\pi_{0.5}$.

Things to note when reading: the core of this paper is not "adding a future-prediction head" but confining future information to the target pathway, which forces the latent action tokens to carry the variables that explain the transition from the current state to the future latent state; meanwhile, the action head uses conditional flow matching to generate continuous action trajectories.

Difficulty rating: ★★★★☆. Requires familiarity with VLA models, latent actions, JEPA/V-JEPA2, world models, causal attention, conditional flow matching, and robot benchmarks such as LIBERO and SimplerEnv.

Keywords: Vision-Language-Action, JEPA, latent world model, latent action, human video pretraining, flow matching, LIBERO, SimplerEnv.

Core contribution list

Figure 1. VLA-JEPA architecture: the target encoder generates latent targets from future frames; the student pathway sees only the current observation and language and predicts the future latent state through latent actions and the predictor.

2. Motivation

2.1 What problem should be solved?

VLA models require large amounts of visual, language, and action data, but action-labeled robot data is expensive to collect and narrow in coverage. Human videos and Internet videos, by comparison, are far larger and contain rich temporal variation. Many recent works therefore try to learn latent actions from action-free videos and then transfer them to robot control.

The paper argues that many latent-action objectives learn "compressed representations of visual change" rather than "how controllable states evolve under interaction." For robots, the value of an action lies not in explaining how each pixel changes but in explaining the task-relevant state transitions of objects, hands, tools, and the robot itself.

2.2 Four failure modes of existing methods

  1. Pixel-level objectives favor appearance: future-pixel prediction or frame-difference compression is easily dominated by high-variance but control-irrelevant factors such as texture, lighting, background, and viewpoint.
  2. Real-world video amplifies nuisance motion: in human videos and in-the-wild footage, camera motion and non-causal background changes may outweigh the state changes caused by interaction.
  3. Information leakage turns the latent action into a shortcut: if the latent-action module sees both the current and future frames during training, it can directly encode the future image instead of learning variables that explain the state transition.
  4. Multi-stage pipelines are fragile: separate stages for representation pretraining, latent-action learning/alignment, and policy learning introduce engineering complexity and inconsistencies between stages.

2.3 The solution ideas of this article

The principle of VLA-JEPA is to predict the future latent state that reflects action-relevant transition structure, while prohibiting future information from leaking into the predictor. JEPA's advantage fits this exactly: it does not reconstruct pixels but aligns representations in latent space, which naturally reduces the dominance of low-level noise in the target.

4. Detailed explanation of method

4.1 Overall framework

VLA-JEPA consists of three main parts: a Qwen3-VL-2B VLM backbone, a V-JEPA2-based latent world model, and a conditional flow-matching action head. The pretraining stage learns latent actions from Something-Something-v2 human videos and Droid robot data; the post-training/fine-tuning stage trains downstream control on LIBERO, SimplerEnv, or real robot data.

The main text states that Qwen3-VL is used as the core VLM with SigLIP-2 as the visual encoder; the appendix specifies the implementation as Qwen3-VL-2B. The world-state targets come from the frozen V-JEPA2 encoder, and the predictor is randomly initialized and trained (Appendix: Implementation Details).

4.2 Model Backbone and latent tokens

To let the VLM output time-aware latent actions and an embodied action, the authors add two types of special tokens to the Qwen3-VL vocabulary: $\langle latent_i \rangle$ and $\langle action \rangle$. Here $\langle latent_0 \rangle$ represents the state transition from $s_0$ to $s_1$, and $\langle action \rangle$ serves as the condition for the action head on robot data.

When latent action tokens are generated, each $\langle latent_i \rangle$ is repeated $K$ times to strengthen the model's attention to latent action tokens; the appendix gives $K = 24/T$, where $T$ is the future video horizon and 24 is the empirically best budget (Appendix: VLA-JEPA Architecture).
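A minimal sketch of this repetition scheme in Python, assuming plain token strings (the released code's tokenizer integration may differ; the function name is ours):

```python
def build_latent_token_sequence(T: int, budget: int = 24) -> list[str]:
    """Repeat each <latent_i> token K = budget / T times.

    T is the future video horizon; with T = 8 and the paper's empirical
    budget of 24, each latent token appears K = 3 times.
    """
    K = budget // T
    tokens: list[str] = []
    for i in range(T):
        tokens.extend([f"<latent_{i}>"] * K)
    return tokens

# T = 8 -> ['<latent_0>', '<latent_0>', '<latent_0>', '<latent_1>', ...]
print(build_latent_token_sequence(8))
```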

4.3 Learn latent world model from human videos

For human videos without action labels, the data is written as $D=\{(O_0, O_1, \dots, O_v, \ell)\}$, where $\ell$ is the language description and $O_v$ is the video frame sequence of the $v$-th view. The world-state encoder first applies the V-JEPA2 encoder $F(\cdot)$ to each view, then concatenates the multi-view representations into a unified world state:

What this formula does: combine the per-view visual states at the same moment into a single world-state latent.

$$s_{t_i}=\Vert_v F(I_{v, t_i})$$

- $F(\cdot)$: single-view video encoder; the paper uses V-JEPA2.
- $I_{v, t_i}$: the image frame of the $v$-th view at time $t_i$.
- $\Vert_v$: concatenation of representations along the view dimension.
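A minimal sketch of this multi-view encoding, assuming a frozen encoder that maps each view's clip to a token grid (the helper name and shapes are illustrative, not the released API):

```python
import torch

@torch.no_grad()  # the V-JEPA2 target encoder is frozen
def world_state(frames_per_view: list[torch.Tensor], encoder) -> torch.Tensor:
    """Encode each view with F(.) and concatenate along the token dimension.

    frames_per_view: V tensors, each (B, C, T_clip, H, W).
    Returns s_t = ||_v F(I_{v,t}) with shape (B, V * N_tokens, D).
    """
    per_view = [encoder(frames) for frames in frames_per_view]  # (B, N_tokens, D) each
    return torch.cat(per_view, dim=1)
```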

The VLM student pathway takes as input only the multi-view images at the initial moment and the language, never future frames. It maps learnable latent tokens into state-transition representations:

$$z_{t_i}=p_\theta^{VLM}\left(\langle latent_i\rangle \mid \{I_{j, t_0}\}_{j=0}^{v}, \ell\right)$$

Here $z_{t_i}$ is the latent representation corresponding to the $i$-th latent action token. The key point is that no future frames appear on the right-hand side; future frames appear only in the supervision targets produced by the target encoder.
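A sketch of how the student-side readout could look, assuming the positions of the latent tokens in the input sequence are known (hypothetical helper; the paper does not publish this exact interface):

```python
import torch

def extract_latents(vlm, input_ids, pixel_values, latent_positions):
    """Run the VLM on the current images, language, and latent tokens only
    (no future frames), then gather the final hidden states at the
    <latent_i> positions to obtain z_{t_0}, ..., z_{t_{T-1}}.

    latent_positions: (B, T) indices of the latent tokens in the sequence.
    """
    out = vlm(input_ids=input_ids, pixel_values=pixel_values,
              output_hidden_states=True)
    h = out.hidden_states[-1]                                  # (B, L, D)
    idx = latent_positions.unsqueeze(-1).expand(-1, -1, h.size(-1))
    return h.gather(1, idx)                                    # (B, T, D)
```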

The world model then uses historical world states and corresponding latent action representations to predict future state chunks:

$$\hat{s}_{t_{1: i+1}} = p_\theta^{WM}(s_{t_{0: i}}, z_{t_{0: i}})$$

The world model uses time-causal attention: bidirectional full attention among latent action tokens and image latent tokens within the same timestep, and strict causality across timesteps, attending only to the present and the past.
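A sketch of that mask, assuming every timestep contributes a fixed block of tokens (256 image tokens plus 3 action tokens per step, per the configuration table in Section 4.6):

```python
import torch

def block_causal_mask(num_steps: int, tokens_per_step: int) -> torch.Tensor:
    """Boolean mask where True means attention is allowed: bidirectional
    within a timestep's block, causal (present and past only) across steps."""
    L = num_steps * tokens_per_step
    step_id = torch.arange(L) // tokens_per_step   # timestep index of each token
    # mask[i, j]: token i may attend to token j iff step(j) <= step(i)
    return step_id.unsqueeze(1) >= step_id.unsqueeze(0)

# e.g. 8 timesteps, 256 image tokens + 3 latent action tokens per step
mask = block_causal_mask(num_steps=8, tokens_per_step=259)
```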

4.4 JEPA Objectives and Information Leakage Control

The paper interprets the training objective as an ELBO on the predictive log-likelihood in semantic space. Since the frozen V-JEPA2 encoder $F(\cdot)$ produces deterministic embeddings, the KL term vanishes in practice and the objective degenerates into a latent-space reconstruction/alignment loss:

$$\mathcal{L}_{WM}=\sum_{k=1}^{T}\mathbb{E}_{s_{t_k}\sim F(\cdot)}(\hat{s}_{t_k}-s_{t_k})$$

The paper's original formula omits the norm. From context, it should be read as aligning the predicted and target world states in latent space; this write-up does not assume any specific distance beyond what is stated.
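A sketch of this objective under an L2 distance, which is our assumption since the paper leaves the norm unspecified:

```python
import torch.nn.functional as F

def world_model_loss(pred_states, target_states):
    """pred_states:   (B, T, N, D) from the trainable predictor.
    target_states:    (B, T, N, D) from the frozen V-JEPA2 target encoder;
    detached so no gradient reaches the target pathway."""
    return F.mse_loss(pred_states, target_states.detach())
```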

This design avoids two common shortcuts: first, the model is never asked to reconstruct pixels, which reduces the dominance of background and camera motion; second, future frames are never fed to the student, so the latent action cannot degenerate into a compressed code of the future image.

4.5 Action prediction on robot data

On action-labeled robot data, VLA-JEPA retains the world-modeling loss and appends an embodied action token. The VLM outputs a global action-conditioning representation:

$$z_a=p_\theta^{VLM}\left(\langle action\rangle\mid \{I_{i, t_0}\}_{i=0}^{v}, \ell, \langle latent_i\rangle\right)$$

$z_a$ serves as the conditioning input to the flow-matching action head, constraining action generation together with the initial observation, language, and latent actions.

The action head uses a DiT-B style Transformer to model the distribution of continuous action trajectories. Given the ground-truth action sequence $a_{0:H}$ and Gaussian noise $\epsilon$, define the linear interpolation:

$$a_t=(1-t)\epsilon+t\, a_{0: H}, \quad t\sim\mathcal{U}(0, 1)$$

$$\mathcal{L}_{FM}=\mathbb{E}_{a_{0: H}, \epsilon, t}\left[\left\|v_\theta(a_t, t\mid z_a)-(a_{0: H}-\epsilon)\right\|_2^2\right]$$

The training objective teaches the model to predict the velocity field that flows from noise to the ground-truth action sequence; at inference, integrating from noise into action space yields $\hat{a}_{0: H}$.
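A minimal sketch of the flow-matching loss and the Euler sampler, assuming a velocity network v_theta(a_t, t, z_a) (interface names are ours; the appendix reports 4 denoising timesteps):

```python
import torch

def flow_matching_loss(v_theta, actions, z_a):
    """actions: (B, H, A) ground-truth action chunk a_{0:H}."""
    eps = torch.randn_like(actions)                          # Gaussian noise
    t = torch.rand(actions.size(0), 1, 1, device=actions.device)
    a_t = (1 - t) * eps + t * actions                        # linear interpolation
    target_v = actions - eps                                 # target velocity field
    return ((v_theta(a_t, t, z_a) - target_v) ** 2).mean()

@torch.no_grad()
def sample_actions(v_theta, z_a, shape, steps=4):
    """Integrate the learned velocity field from noise (t=0) to t=1."""
    a = torch.randn(shape, device=z_a.device)
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full((shape[0], 1, 1), k * dt, device=z_a.device)
        a = a + dt * v_theta(a, t, z_a)
    return a                                                 # ~ a_hat_{0:H}
```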

The overall goal on the robot data is:

$$\mathcal{L}=\mathcal{L}_{FM}+\beta \mathcal{L}_{WM}$$

4.6 Architecture and training hyperparameters

| Latent world model configuration | Value |
| --- | --- |
| Transformer layers | 12 |
| Attention heads | 8 |
| Image token dimension | 2048 |
| Image tokens per time step | 256 |
| Action token dimension | 2048 |
| Action tokens per time step | 3 |
| Number of views | 2 |
| Future video horizon | 8 |

| Action head configuration | Value |
| --- | --- |
| Transformer layers | 16 |
| Attention heads | 12 |
| Token dimension | 1024 |
| State dimension | 8 |
| Action dimension | 7 |
| Future action horizon | 7 |
| Positional encoding | Learnable |
| Denoising timesteps | 4 |
| Training detail | Paper/appendix information |
| --- | --- |
| Image size | VLM input resized to 224x224; world-state encoder video clips resized to 256x256. |
| Action normalization | Joint-space delta positions for joint-position control; delta positions and delta axis-angle for end-effector control; both min-max scaled to [0, 1]; gripper binarized to {0, 1} (see the sketch after this table). |
| Multi-view processing | With fewer than two camera views, the world-state representation is duplicated; with more than two views, two are selected. |
| Training hardware and batch | 8 GPUs in parallel, per-GPU batch size 32, global batch size 256. |
| Learning rate | Cosine schedule with linear warmup; VLM and latent world model peak LR 1e-5; action head peak LR 1e-4. |
| Training steps | SSv2+Droid pretraining for 50K steps; simulation data for a further 30K steps; real data fine-tuning for a further 20K steps. |
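A sketch of the min-max normalization row above; the per-dimension statistics and the gripper-channel handling are our reading of the appendix, not released code:

```python
import numpy as np

def normalize_actions(deltas, a_min, a_max, gripper):
    """deltas:  (H, A) per-step delta joint positions or delta EE pose.
    a_min/a_max: per-dimension min/max over the training set.
    Min-max scales to [0, 1]; the gripper channel is binarized to {0, 1}.
    """
    scaled = (deltas - a_min) / (a_max - a_min + 1e-8)
    scaled = np.clip(scaled, 0.0, 1.0)
    grip = (gripper > 0.5).astype(np.float32)
    return np.concatenate([scaled, grip[:, None]], axis=1)
```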

5. Experiment

5.1 Experimental setup

The paper uses three simulation benchmarks and one real-robot environment: LIBERO evaluates in-distribution manipulation in simulation; SimplerEnv evaluates the real-to-sim gap; LIBERO-Plus evaluates robustness to multi-dimensional perturbations; the real robot is a Franka Research 3 performing desktop pick-and-place. The authors compare against recent VLA baselines, including latent-action VLAs, future-prediction VLAs, and open-source VLAs.

Figure 3. Experiments cover LIBERO, LIBERO-Plus, SimplerEnv, and a real Franka robot.
| Data/stage | Usage |
| --- | --- |
| Something-Something-v2 | 220K human videos; used for action-free latent world-model pretraining. |
| Droid | 76K high-quality demonstration trajectories; used for action-labeled robot pretraining. |
| LIBERO / LIBERO-Plus | Only the ~2K original LIBERO expert demonstrations are used for fine-tuning; the LIBERO-Plus augmented dataset is not used. |
| SimplerEnv | Fractal and BridgeV2 are used for post-training on the two SimplerEnv embodiments, respectively. |
| Real-world | 100 demonstrations covering 3 pick-and-place tasks. |

5.2 LIBERO main results

LIBERO evaluates 50 episodes per task in each suite (500 episodes per suite) and reports the success rate. VLA-JEPA ranks first in 2 of the 4 suites and achieves the highest average success rate.

| Method | Spatial | Object | Goal | LIBERO-10 | Avg |
| --- | --- | --- | --- | --- | --- |
| LAPA | 73.8 | 74.6 | 58.8 | 55.4 | 65.7 |
| UniVLA | 96.5 | 96.8 | 95.6 | 92.0 | 95.2 |
| OpenVLA-OFT | 97.6 | 98.4 | 97.9 | 94.5 | 97.1 |
| $\pi_0$ | 96.8 | 98.8 | 95.8 | 85.2 | 94.2 |
| $\pi_{0.5}$ | 98.8 | 98.2 | 98.0 | 92.4 | 96.9 |
| VLA-JEPA | 96.2 | 99.6 | 97.2 | 95.8 | 97.2 |
| VLA-JEPA w/o human videos | 94.8 | 99.6 | 95.8 | 94.0 | 96.1 |

The authors specifically note that strong baselines such as OpenVLA-OFT and $\pi_{0.5}$ rely on large-scale robot-dataset pretraining, while VLA-JEPA reaches a higher average with less data. Against latent-action / human-video methods such as LAPA, UniVLA, villa-X, and CoT-VLA, VLA-JEPA's results support its diagnosis of the pixel-shortcut and leakage problems.

5.3 SimplerEnv results

SimplerEnv contains two visual-matching settings, for the Google Robot and the WidowX Robot. VLA-JEPA averages 65.2 on the Google Robot, the highest among all listed methods, and 57.3 on the WidowX Robot, matching LAPA.

| Method | Google Pick | Google Move | Google Drawer | Google Place | Google Avg | WidowX Spoon | WidowX Carrot | WidowX Block | WidowX Eggplant | WidowX Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LAPA* | - | - | - | - | - | 70.8 | 45.8 | 54.2 | 58.3 | 57.3 |
| villa-X | 81.7 | 55.4 | 38.4 | 4.2 | 44.9 | 48.3 | 24.2 | 19.2 | 71.7 | 40.8 |
| RoboVLMs | 77.3 | 61.7 | 43.5 | 24.1 | 51.7 | 45.8 | 20.8 | 4.2 | 79.2 | 37.5 |
| $\pi_0$ | 72.7 | 65.3 | 38.3 | - | - | 29.1 | 0 | 16.6 | 62.5 | 40.1 |
| VLA-JEPA | 88.3 | 64.1 | 59.3 | 49.1 | 65.2 | 75.0 | 70.8 | 12.5 | 70.8 | 57.3 |
| VLA-JEPA w/o human videos | 85.3 | 66.7 | 75.5 | 86.1 | 78.4 | 75.0 | 54.2 | 20.8 | 79.2 | 57.3 |
This table also hints at an important boundary: the w/o-human-videos variant achieves a higher Google Robot average in SimplerEnv. The authors explain that in real-to-sim-gap and ID scenarios, high-quality expert demonstrations may matter more than human videos.

5.4 LIBERO-Plus Robustness

LIBERO-Plus subjects the four original LIBERO suites to seven types of perturbations: Camera, Robot, Language, Light, Background, Noise, and Layout. VLA-JEPA ranks first in five categories (Robot, Language, Light, Background, and Layout), with an average score of 79.5.

| Method | Camera | Robot | Language | Light | Background | Noise | Layout | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniVLA | 1.8 | 46.2 | 69.6 | 69.0 | 81.0 | 21.2 | 31.9 | 42.9 |
| OpenVLA-OFT | 56.4 | 31.9 | 79.5 | 88.7 | 93.3 | 75.8 | 74.2 | 69.6 |
| $\pi_0$ | 13.8 | 6.0 | 58.8 | 85.0 | 81.4 | 79.0 | 68.9 | 53.6 |
| $\pi_0$-Fast | 65.1 | 21.6 | 61.0 | 73.2 | 73.2 | 74.4 | 68.8 | 61.6 |
| VLA-JEPA | 63.3 | 67.1 | 85.4 | 95.6 | 93.6 | 66.3 | 85.1 | 79.5 |
| VLA-JEPA w/o human videos | 40.3 | 55.7 | 72.9 | 88.2 | 70.5 | 38.2 | 74.6 | 62.9 |

This table is among the clearest evidence for the value of human videos: the full VLA-JEPA model improves from 62.9 (w/o human videos) to 79.5, with especially large gaps in Background, Noise, Language, and Layout.

5.5 Real Robot Experiment

The real setup uses a Franka Research 3 arm with a Robotiq 2F-85 gripper and three Intel RealSense D435 cameras, two in third-person view and one wrist-mounted. Training expert demonstrations cover placing grapes, apples, mangoes, and oranges from the table into a plate or bowl. Each task is evaluated over 10 independent trials and the average success rate is reported (Appendix: Real-world Experiments Details).

Figure 4. Real robot ID, task OOD, and object-layout OOD results; the paper shows VLA-JEPA is best in ID and layout OOD, and second in task OOD.

Task-level OOD includes putting a banana in a bowl, a peach on a plate, and grapes on the top shelf. The authors observe that in the banana task, $\pi_{0.5}$ and VLA-JEPA succeed about 50% of the time; in the peach task, the peach's irregular shape often causes the robot to violate the safety boundary; in the shelf task, no model successfully places the object on the top shelf, but VLA-JEPA approaches and lifts the object toward the rear of the shelf, while $\pi_0$ and $\pi_{0.5}$ collide directly with the shelf.

In layout OOD, $\pi_0$ and $\pi_{0.5}$ do not reopen the gripper and retry after a failed grasp, whereas VLA-JEPA immediately opens the gripper and tries again. The authors attribute this to the many repeated grasping attempts present in human videos (Appendix: Real-world Experiments Details).

Figure 7. Execution comparison of $\pi_0$, $\pi_{0.5}$, and VLA-JEPA under object-layout OOD.

5.6 Further analysis and ablation

The impact of human video

The paper's analysis here is deliberately restrained: human videos do not necessarily bring significant gains on LIBERO and SimplerEnv, and in some SimplerEnv settings the w/o-human-videos variant is even stronger; but in perturbation-robustness scenarios such as LIBERO-Plus, human videos significantly improve stability. The authors conclude that human videos mainly enhance the robustness/stability of existing skills rather than directly introducing new action-execution capabilities.

Figure 5. LIBERO-Plus success rate under multi-type perturbations as the human-video proportion increases; the authors use this to support the claim that human videos mainly improve robustness.

The impact of Unified pretraining

The authors visualize the attention from latent action tokens to image tokens for LAPA, UniVLA, and VLA-JEPA. The paper's reading: LAPA attends to overly dense visual information and irrelevant desktop objects, possibly due to information leakage; UniVLA alleviates this with textual semantics but still attends to irrelevant semantic background such as stationary pens and tablecloth textures; VLA-JEPA attends more to the robotic arm, hand, and manipulated objects.

Figure 6. Attention weights from latent action tokens to image tokens.

Future video horizon

| $T$ | Spatial | Object | Goal | LIBERO-10 | Avg |
| --- | --- | --- | --- | --- | --- |
| 4 | 95.0 | 99.2 | 95.8 | 89.0 | 94.8 |
| 8 | 94.8 | 99.8 | 95.8 | 94.0 | 96.1 |
| 16 | 92.8 | 98.8 | 98.0 | 92.2 | 95.5 |

The authors intend the latent actions to capture dynamics between adjacent frames, so the number of latent action tokens always equals the number of frames minus one. $T=8$ is best on average; the paper explains that this is close to the predefined action horizon. Too small a $T$ lacks information, while too large a $T$ introduces redundancy.

6. Reproducible auditing

Code and models

Official code and models are available: the GitHub README shows that partial training code, LIBERO/LIBERO-Plus/SimplerEnv evaluation code, and custom-dataset training code have been released, and Hugging Face provides checkpoints. The README also lists dependent resources such as Qwen3-VL-2B, the V-JEPA2 encoder, SSv2, Droid, LIBERO, BridgeV2, and Fractal.

| Reproducibility item | Information given | Status |
| --- | --- | --- |
| Model structure | Qwen3-VL-2B; V-JEPA2 encoder; complete configuration tables for the latent world model and action head. | Sufficient |
| Training hyperparameters | Image size, batch size, GPU count, learning rates, schedule, training steps, action normalization, and multi-view processing are all given in the appendix. | Relatively sufficient |
| Data | SSv2, Droid, LIBERO, BridgeV2, and Fractal have download links in the README; there is no public description of the 100 real-robot demos. | Simulation sufficient; real-robot data limited |
| Evaluation | README gives LIBERO, LIBERO-Plus, and SimplerEnv environment preparation, checkpoint configuration, and eval scripts. | Sufficient |
| Hardware | The paper trains on 8 NVIDIA A100 GPUs; the README describes 4-GPU parallel for LIBERO and 8-GPU parallel for LIBERO-Plus/SimplerEnv, adjustable to the available GPU count. | High cost |
Repository setup skeleton:

```bash
git clone https://github.com/ginwind/VLA-JEPA
conda create -n VLA_JEPA python=3.10 -y
conda activate VLA_JEPA
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
pip install -e .
```

Required checkpoints:

- Qwen3-VL-2B
- V-JEPA2 encoder
- VLA-JEPA checkpoints from Hugging Face

7. Analysis, Limitations and Boundaries

7.1 The most valuable part of this paper

Based on the paper's own evidence, the most valuable point is shifting the latent action's learning objective from "explaining pixel differences" to "explaining the future latent state under leakage-free conditions." This positions human-video pretraining as improving the robustness and temporal stability of existing skills, rather than mistaking human videos for directly executable robot-action supervision. The LIBERO-Plus results and the repeated-grasp behavior observed on the real robot are the most direct support for this point.

7.2 Why the results hold up

The paper does not just report an average: it tests across four settings (LIBERO, SimplerEnv, LIBERO-Plus, and a real robot) and ablates human videos and the future horizon while visualizing attention maps. In particular, the gap between the full model and w/o human videos on LIBERO-Plus, together with the w/o-human-videos variant being better in some SimplerEnv settings, shows that the authors do not overgeneralize human videos into a universal gain but locate the gain at robustness/stability.

7.3 Failure phenomena clearly written in the paper

7.4 Applicable boundaries