LaMP: Learning Vision-Language-Action Policy with 3D Scene Flow as Latent Motion Prior

Xinkai Wang1,5, Chenyi Wang3,5, Yifu Xu2,5, Mingzhe Ye2, Fucheng Zhang4,5,
Jialin Tian2,5, Xinyu Zhan2,5, Lifeng Zhu1, Cewu Lu2,5†, Lixin Yang2,5†
1Southeast University 2Shanghai Jiao Tong University 3Zhejiang University
4Beihang University 5Shanghai Innovation Institute

†Co-advising

Teaser Figure

LaMP achieves the best performance by embedding 3D scene flow as a latent motion prior. Unlike 2D-centric VLAs that struggle with spatial reasoning, LaMP's dual-expert architecture reaches 98.3% on LIBERO, a +9.7% improvement on OOD perturbations, and 79.2% on SimplerEnv-WidowX, bridging the sim-to-real gap through explicit geometric foresight.

Abstract

LaMP is a dual-expert vision-language-action framework that embeds dense 3D scene flow as a latent motion prior for robotic manipulation. Existing VLA models directly generate actions from 2D semantic features, implicitly requiring the policy to learn complex 3D physical interactions—a strategy that struggles under unfamiliar spatial dynamics.

LaMP instead factorizes the problem through two complementary experts: a Motion Expert that generates one-step partially denoised 3D scene flow via conditional flow matching, and an Action Expert whose predictions are aligned with the motion representation through gated cross-attention. Crucially, the Motion Expert's hidden state conditions the Action Expert without requiring full multi-step reconstruction, providing rich geometric guidance at negligible additional cost.

Evaluated on LIBERO, LIBERO-Plus, SimplerEnv-WidowX, and real-world experiments, LaMP achieves the highest average success rate under the same training budget and, on LIBERO-Plus OOD perturbations, outperforms the strongest baseline by an average margin of 9.7%.

Method

Pipeline Architecture

Overview of the LaMP architecture. LaMP consists of two complementary experts: a Motion Expert that generates 3D scene flow predictions using a CogVideoX-style 3D Transformer, and an Action Expert that receives motion-guided VLM features through gated cross-attention to generate continuous action sequences.

Motion Expert

Why 3D scene flow? It provides dense geometric foresight for contact-rich manipulation. The Motion Expert uses a CogVideoX-style 3D Transformer to track K=400 keypoints across T=32 timesteps, trained via conditional flow matching with one-step partial denoising for efficient inference.
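To make this recipe concrete, here is a minimal PyTorch sketch of conditional flow matching with one-step partial denoising. Everything in it is an illustrative assumption rather than the released implementation: the toy `MotionExpert3D` stands in for the CogVideoX-style 3D Transformer (mean-pooled conditioning replaces its full attention over VLM tokens), and the step size and tensor shapes are placeholders.

```python
import torch
import torch.nn as nn

K, T = 400, 32  # keypoints and flow horizon, per the paper

class MotionExpert3D(nn.Module):
    """Toy stand-in for the CogVideoX-style 3D Transformer over keypoint tracks."""
    def __init__(self, dim=512):
        super().__init__()
        self.proj_in = nn.Linear(3, dim)    # lift 3D flow points to tokens
        self.t_embed = nn.Linear(1, dim)    # flow-matching time embedding
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4)
        self.proj_out = nn.Linear(dim, 3)   # velocity field over keypoints

    def forward(self, x_t, t, cond):
        # x_t: (B, T*K, 3) noisy scene flow; t: (B, 1, 1); cond: (B, N, dim)
        tokens = self.proj_in(x_t) + self.t_embed(t) + cond.mean(1, keepdim=True)
        hidden = self.backbone(tokens)      # hidden motion representation
        return self.proj_out(hidden), hidden

def cfm_loss(model, x1, cond):
    """Conditional flow matching: regress the velocity of a linear path."""
    x0 = torch.randn_like(x1)                   # noise endpoint
    t = torch.rand(x1.shape[0], 1, 1)           # random interpolation time
    x_t = (1 - t) * x0 + t * x1                 # linear probability path
    v_pred, _ = model(x_t, t, cond)
    return ((v_pred - (x1 - x0)) ** 2).mean()   # target velocity is x1 - x0

@torch.no_grad()
def one_step_partial_denoise(model, cond, step=0.5):
    """One Euler step from noise; the hidden state is what the policy consumes."""
    b = cond.shape[0]
    x0 = torch.randn(b, T * K, 3)
    v, hidden = model(x0, torch.zeros(b, 1, 1), cond)
    return x0 + step * v, hidden                # partially denoised scene flow
```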

Gated Motion Guidance

How to fuse motion without disrupting VLM semantics? A single-layer gated cross-attention with a learnable scalar gate initialized at zero (constrained to 0-1 by sigmoid) enables stable optimization—motion guidance gradually increases only when geometrically beneficial.
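A minimal sketch of such a gated cross-attention layer, assuming standard PyTorch primitives; the dimensions, normalization placement, and residual form are guesses. One subtlety: a logit of exactly zero gives sigmoid(0) = 0.5, so a variant that must start with the gate fully closed would initialize the logit strongly negative, as flagged in the comment.

```python
import torch
import torch.nn as nn

class GatedMotionGuidance(nn.Module):
    """Single-layer gated cross-attention from VLM tokens to motion features."""
    def __init__(self, dim=512, nhead=8, init_logit=0.0):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, nhead, batch_first=True)
        # Learnable scalar gate, constrained to (0, 1) by a sigmoid. The paper
        # initializes it at zero; note sigmoid(0) = 0.5, so an implementation
        # that must open from a fully closed gate would instead pass a strongly
        # negative init_logit.
        self.gate_logit = nn.Parameter(torch.tensor(init_logit))

    def forward(self, vlm_tokens, motion_hidden):
        # vlm_tokens: (B, N, dim); motion_hidden: (B, M, dim)
        attended, _ = self.attn(query=self.norm(vlm_tokens),
                                key=motion_hidden, value=motion_hidden)
        gate = torch.sigmoid(self.gate_logit)   # scalar in (0, 1)
        # Residual fusion: VLM semantics are untouched when the gate is closed.
        return vlm_tokens + gate * attended
```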

Action Expert

How to transfer to new robots? The Action Expert generates continuous action sequences from motion-guided VLM features. A two-stage training strategy enables embodiment-agnostic transfer—the motion prior transfers to unseen robots with only 10 warm-up demos.
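The two-stage strategy is only named above, so the split below is a hedged sketch: stage one fits the motion prior on scene-flow targets, stage two freezes it and warms up the Action Expert on the target robot's small demo set. It reuses `cfm_loss` and `one_step_partial_denoise` from the Motion Expert sketch; the loaders, batch keys, and the `flow_matching_loss` helper are hypothetical.

```python
import torch

def train_two_stage(motion_expert, action_expert, pretrain_loader, warmup_loader):
    # Stage 1: fit the motion prior. Scene flow lives in the camera frame, so
    # this stage carries no embodiment-specific action labels.
    opt = torch.optim.AdamW(motion_expert.parameters(), lr=1e-4)
    for batch in pretrain_loader:
        loss = cfm_loss(motion_expert, batch["scene_flow"], batch["vlm_feats"])
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: freeze the prior and adapt only the Action Expert on the target
    # robot's small warm-up set (10 demos in the paper).
    motion_expert.requires_grad_(False)
    opt = torch.optim.AdamW(action_expert.parameters(), lr=1e-4)
    for batch in warmup_loader:
        _, hidden = one_step_partial_denoise(motion_expert, batch["vlm_feats"])
        loss = action_expert.flow_matching_loss(batch["actions"], hidden)
        opt.zero_grad(); loss.backward(); opt.step()
```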

Performance

LaMP achieves a state-of-the-art 98.3% average success rate on the LIBERO benchmark, 96.7% on the challenging long-horizon suite, and 79.2% on SimplerEnv-WidowX (22.1% higher than the second-best method). On LIBERO-Plus OOD perturbations, it outperforms the strongest baseline by an average of +9.7%.

Inference Pipeline

During inference, LaMP performs the following steps: (1) The Vision-Language Model encodes the observation and language instruction; (2) The Motion Expert generates one-step partially denoised 3D scene flow, extracting the hidden motion representation; (3) The Gated Motion Guidance module fuses the motion features with VLM features; (4) The Action Expert generates the final action sequence through flow matching. Crucially, full multi-step reconstruction is not required—only the hidden state provides geometric guidance.
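The four steps, written out as a hedged end-to-end sketch that reuses the helpers from the Method sketches; `vlm.encode`, the `action_expert` velocity network, the chunk shape, and the ten Euler steps are all placeholder assumptions around the pipeline described above.

```python
import torch

ACTION_HORIZON, ACTION_DIM = 8, 7   # assumed chunk length and action size

@torch.no_grad()
def lamp_infer(vlm, motion_expert, guidance, action_expert, image, instruction,
               n_steps=10):
    # (1) Encode observation and instruction with the VLM.
    vlm_tokens = vlm.encode(image, instruction)                  # (B, N, dim)
    # (2) One-step partially denoised scene flow; only the hidden state is kept.
    _, motion_hidden = one_step_partial_denoise(motion_expert, vlm_tokens)
    # (3) Fuse motion features into VLM features through the gated module.
    fused = guidance(vlm_tokens, motion_hidden)
    # (4) Integrate the action flow field from noise to an action chunk.
    actions = torch.randn(fused.shape[0], ACTION_HORIZON, ACTION_DIM)
    for i in range(n_steps):                                     # Euler steps
        t = torch.full((fused.shape[0], 1, 1), i / n_steps)
        actions = actions + action_expert(actions, t, fused) / n_steps
    return actions
```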

Experiments

Table 1: Simulation Benchmark Results

Results on the LIBERO and SimplerEnv-WidowX benchmarks. Best results per column in bold; '–' marks results not reported.

| Method | Spatial | Object | Goal | Long | LIBERO Avg | Stack Block | Put Carrot | Put Spoon | Put Eggplant | SimplerEnv Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| *General VLA* | | | | | | | | | | |
| OpenVLA | 84.7 | 88.4 | 79.2 | 53.7 | 76.5 | 0.0 | 0.0 | 4.2 | 12.5 | 4.2 |
| OpenVLA-OFT | 97.6 | 98.4 | 97.9 | 94.5 | 97.1 | – | – | – | – | – |
| π0 | 96.8 | 98.8 | 95.8 | 85.2 | 94.2 | 16.7 | 0.0 | 29.1 | 62.5 | 40.1 |
| π0.5 | 98.8 | 98.2 | **98.0** | 92.4 | 96.9 | 44.7 | 64.7 | 49.3 | 69.7 | 57.1 |
| GR00T N1 | 94.4 | 97.6 | 93.0 | 90.6 | 93.9 | 16.7 | 45.8 | 62.5 | 20.8 | 49.5 |
| *Latent-Action VLA* | | | | | | | | | | |
| UniVLA | 96.5 | 96.8 | 95.6 | 92.0 | 95.2 | 29.2 | 62.5 | **83.3** | **100.0** | 68.7 |
| villa-X | 97.5 | 97.0 | 91.5 | 74.5 | 90.1 | 61.3 | 46.3 | 77.9 | 64.6 | 62.5 |
| *Video-Based VLA* | | | | | | | | | | |
| mimic-video | 94.2 | 96.8 | 90.6 | – | 93.9 | 29.2 | 54.2 | 41.7 | **100.0** | 56.3 |
| WorldVLA | 87.6 | 96.2 | 83.4 | 60.0 | 81.8 | – | – | – | – | – |
| F1 | 98.2 | 97.8 | 95.4 | 91.3 | 95.7 | 50.0 | **70.8** | 50.0 | 66.7 | 72.9 |
| *2D Flow/Trace-Guided VLA* | | | | | | | | | | |
| FlowVLA | 93.2 | 95.0 | 91.6 | 72.6 | 88.1 | 62.5 | 62.5 | 70.8 | **100.0** | 74.0 |
| TraceVLA | 84.6 | 85.2 | 75.1 | 54.1 | 75.8 | 16.6 | 16.6 | 12.5 | 65.0 | 27.7 |
| LaMP | **99.4** | **99.8** | 97.4 | **96.7** | **98.3** | **75.0** | 66.7 | 79.1 | 95.8 | **79.2** |
| LaMP w/o motion | 95.8 | 98.9 | 96.6 | 78.2 | 92.4 | 25.0 | 45.8 | 66.7 | 87.5 | 56.3 |

Table 2: LIBERO-Plus Zero-Shot OOD Evaluation

All models are trained on LIBERO and evaluated zero-shot on seven perturbation dimensions without additional training data.

| Method | Camera | Robot | Language | Light | Background | Noise | Layout | Avg |
|---|---|---|---|---|---|---|---|---|
| UniVLA | 1.8 | 46.2 | 69.6 | 69.0 | 81.0 | 21.2 | 31.9 | 42.9 |
| OpenVLA | 0.8 | 3.5 | 23.0 | 8.1 | 34.8 | 15.2 | 28.5 | 15.6 |
| OpenVLA-OFT | 56.4 | 31.9 | 79.5 | 88.7 | 93.3 | 75.8 | 74.2 | 69.6 |
| π0 | 13.8 | 6.0 | 58.8 | 85.0 | 81.4 | 79.0 | 68.9 | 53.6 |
| π0-Fast | 65.1 | 21.6 | 61.0 | 73.2 | 73.2 | 74.4 | 68.8 | 61.6 |
| WorldVLA | 0.1 | 27.9 | 41.6 | 43.7 | 17.1 | 10.9 | 38.0 | 25.0 |
| LaMP | 64.5 | 69.6 | 88.2 | 95.3 | 97.4 | 76.9 | 73.8 | 79.3 |
| LaMP w/o motion | 46.7 | 56.0 | 82.5 | 95.3 | 95.4 | 69.3 | 71.0 | 71.6 |

Real-World Experiments

Experimental Platform

  • Robot: Flexiv Rizon 4
  • Gripper: Robotiq 2F-85
  • Camera: Intel RealSense D415

Evaluation Tasks

  • Pick-and-Place (Stack Cup)
  • Fold Towel
  • Making Bread

OOD Conditions

  • Novel Layout
  • Novel Object
  • Novel Background

Real-World Experimental Platform

Real-world experimental platform. We use a Flexiv Rizon 4 robot arm with a Robotiq 2F-85 gripper and an Intel RealSense D415 camera for real-world evaluation.

Real-world benchmark: average task progress (%) per method.

| Method | Pick and Place | Deformable | Long-Horizon | Out of Domain |
|---|---|---|---|---|
| π0 | 70 | 40 | 70 | 52.5 |
| 3D FDP | 53 | 40 | 60 | 26.8 |
| LaMP | 80 | 50 | 80 | 62.5 |

LaMP outperforms π0 and 3D FDP across all tasks, with the largest gains on Deformable manipulation (50% vs 40% for both baselines) and OOD conditions (62.5% vs 26.8% for 3D FDP). Notably, 3D FDP collapses under distribution shift (−26.2 points relative to its Pick-and-Place score), while LaMP degrades gracefully (−17.5 points), confirming that camera-frame geometric reasoning is more resilient to visual shifts than pixel-level representations.

Key Achievements

  • 98.3% average success rate on LIBERO (state of the art)
  • +9.7% zero-shot OOD robustness gain on LIBERO-Plus
  • 79.2% on SimplerEnv-WidowX (+22.1% vs second best)
  • 62.5% real-world OOD task progress (+35.7% vs 3D FDP, +10% vs π0)

Citation

If you find our work useful, please consider citing:

@article{wang2026lamp,
  title={LaMP: Learning Vision-Language-Action Policy with 3D Scene Flow as Latent Motion Prior},
  author={Wang, Xinkai and Wang, Chenyi and Xu, Yifu and Ye, Mingzhe and Zhang, Fucheng and Tian, Jialin and Zhan, Xinyu and Zhu, Lifeng and Lu, Cewu and Yang, Lixin},
  journal={arXiv preprint},
  year={2026}
}