ManiVID-3D: Generalizable View-Invariant Reinforcement Learning for Robotic Manipulation via Disentangled 3D Representations

1 The Hong Kong University of Science and Technology (Guangzhou) 2 Tsinghua University 3 The University of Hong Kong 4 The Hong Kong University of Science and Technology *Equal Contribution
Project overview figure

Overview of ManiVID-3D. (A) In the training phase, our method consists of two key components: (a) a pretrained ViewNet aligns arbitrary-viewpoint point clouds collected in simulation to a unified frame without extrinsic calibration; (b) a disentanglement encoder extracts view-invariant features that are used to train manipulation policies with strong cross-view generalization. (B) In the deployment phase, we introduce a multi-stage processing pipeline designed for camera-coordinate point clouds to bridge the sim-to-real domain gap, enabling zero-shot real-world deployment.
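To make the training-phase pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the two components described above. The module names (ViewNet, DisentangleEncoder), layer sizes, pooling choice, and the 6D rotation parameterization are illustrative assumptions and not the released implementation; the sketch only shows the data flow of aligning a camera-frame point cloud and splitting its features into view-invariant and view-dependent parts.

```python
# Illustrative sketch only: module names, layer sizes, and the 6D rotation
# parameterization are assumptions, not the authors' released code.
import torch
import torch.nn as nn


def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor:
    """Convert a 6D rotation representation to a 3x3 rotation matrix."""
    a1, a2 = d6[..., :3], d6[..., 3:]
    b1 = nn.functional.normalize(a1, dim=-1)
    b2 = nn.functional.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack((b1, b2, b3), dim=-2)


class ViewNet(nn.Module):
    """Predicts a rigid transform that maps a camera-frame point cloud to a unified frame."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 9))  # 6D rotation + 3D translation

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (B, N, 3) points observed from an arbitrary viewpoint
        feat = self.point_mlp(pts).max(dim=1).values          # permutation-invariant pooling
        rot6d, trans = self.head(feat).split([6, 3], dim=-1)
        R = rotation_6d_to_matrix(rot6d)                       # (B, 3, 3)
        return pts @ R.transpose(1, 2) + trans.unsqueeze(1)    # aligned points, (B, N, 3)


class DisentangleEncoder(nn.Module):
    """Splits pooled point features into view-invariant and view-dependent parts."""

    def __init__(self, hidden: int = 128, latent: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.invariant_head = nn.Linear(hidden, latent)   # fed to the manipulation policy
        self.view_head = nn.Linear(hidden, latent)        # used only by self-supervised auxiliary losses

    def forward(self, aligned_pts: torch.Tensor):
        feat = self.backbone(aligned_pts).max(dim=1).values
        return self.invariant_head(feat), self.view_head(feat)


if __name__ == "__main__":
    clouds = torch.randn(4, 1024, 3)            # batch of arbitrary-viewpoint point clouds
    aligned = ViewNet()(clouds)                 # align without extrinsic calibration
    z_inv, z_view = DisentangleEncoder()(aligned)
    print(aligned.shape, z_inv.shape, z_view.shape)
```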

Abstract

Deploying visual reinforcement learning (RL) policies in real-world manipulation is often hindered by camera viewpoint changes. A policy trained from a fixed front-facing camera may fail when the camera is shifted, a common situation in real-world settings where sensor placement is difficult to control precisely. Existing methods often rely on precise camera calibration or struggle with large perspective changes. To address these limitations, we propose ManiVID-3D, a novel 3D RL architecture for robotic manipulation that learns view-invariant representations through self-supervised disentangled feature learning. The framework incorporates ViewNet, a lightweight yet effective module that automatically aligns point cloud observations from arbitrary viewpoints into a unified spatial coordinate system without the need for extrinsic calibration. Additionally, we develop an efficient GPU-accelerated batch rendering module capable of processing over 5000 frames per second, enabling large-scale training for 3D visual RL at unprecedented speed. Extensive evaluation across 10 simulated and 5 real-world tasks demonstrates that our approach achieves a 44.7% higher success rate than state-of-the-art methods under viewpoint variations while using 80% fewer parameters. The system's robustness to severe perspective changes and strong sim-to-real performance highlight the effectiveness of learning geometrically consistent representations for scalable robotic manipulation in unstructured environments.
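As a rough illustration of the kind of batched GPU processing such a high-throughput pipeline relies on, the sketch below back-projects a batch of simulated depth frames into camera-frame point clouds in a single vectorized pass. This is not the paper's rendering module; the function name, intrinsics, and image size are placeholder assumptions.

```python
# Minimal sketch of batched depth-to-point-cloud conversion on the GPU.
# Not the paper's renderer; intrinsics and image size are arbitrary placeholders.
import torch


def depth_to_pointcloud(depth: torch.Tensor, fx: float, fy: float, cx: float, cy: float) -> torch.Tensor:
    """Back-project a batch of depth maps (B, H, W) into camera-frame point clouds (B, H*W, 3)."""
    B, H, W = depth.shape
    v, u = torch.meshgrid(
        torch.arange(H, device=depth.device, dtype=depth.dtype),
        torch.arange(W, device=depth.device, dtype=depth.dtype),
        indexing="ij",
    )
    z = depth                              # (B, H, W)
    x = (u - cx) / fx * z                  # the pixel grid broadcasts over the batch dimension
    y = (v - cy) / fy * z
    return torch.stack((x, y, z), dim=-1).reshape(B, -1, 3)


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    depth = torch.rand(256, 64, 64, device=device)            # a large batch of simulated depth frames
    clouds = depth_to_pointcloud(depth, fx=50.0, fy=50.0, cx=32.0, cy=32.0)
    print(clouds.shape)                                       # torch.Size([256, 4096, 3])
```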

Overall Performance

Overall performance figure

Our method achieves robust multi-domain generalization for manipulation tasks, with superior viewpoint adaptation and sim-to-real transferability at significantly lower computational cost. Moreover, ManiVID-3D maintains consistently strong performance across varying degrees of view offset and different reference-viewpoint choices, whereas Maniwhere exhibits clear performance degradation.

Real-world Demonstrations

Real-world Tasks

Pick & Place

Laptop Close

Block Lift

Button Press

Reach

Generalization to Spatial Disturbance

Position Disturbance

Generalization to Camera Views

Various Viewpoints

Shaking View

Generalization to Visual Appearances

Object Color Variation

Instance Variation

Lighting Variation

Cluttered Scene

BibTeX

@misc{li2025manivid3dgeneralizableviewinvariantreinforcement,
      title={ManiVID-3D: Generalizable View-Invariant Reinforcement Learning for Robotic Manipulation via Disentangled 3D Representations}, 
      author={Zheng Li and Pei Qu and Yufei Jia and Shihui Zhou and Haizhou Ge and Jiahang Cao and Jinni Zhou and Guyue Zhou and Jun Ma},
      year={2025},
      eprint={2509.11125},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2509.11125}, 
}