MoVie: Visual Model-Based Policy Adaptation for View Generalization

NeurIPS 2023



Sizhe Yang1,2*,   Yanjie Ze1,3*,   Huazhe Xu4,1,5

1Shanghai Qi Zhi Institute,  2University of Electronic Science and Technology of China,  
3Shanghai Jiao Tong University,  4Tsinghua University,  5Shanghai AI Lab
*Equal contribution

Abstract

Visual Reinforcement Learning (RL) agents trained on limited views face significant challenges in generalizing their learned abilities to unseen views. This inherent difficulty is known as the problem of view generalization. In this work, we systematically categorize this fundamental problem into four distinct and highly challenging scenarios that closely resemble real-world situations. We then propose a straightforward yet effective approach to adapt visual Model-based policies for View generalization (MoVie) at test time, without explicit reward signals and without any modification to the training procedure. Our method delivers substantial gains across all four scenarios, covering 18 tasks from DMControl, xArm, and Adroit, with relative improvements of 33%, 86%, and 152%, respectively. These results highlight the strong potential of our approach for real-world robotics applications.

Method

We present MoVie, a framework that adapts visual model-based policies for view generalization. During training (left), the agent is trained with the latent dynamics loss. At test time (right), we freeze the dynamics model and turn the encoder into a spatial adaptive encoder, which adapts the agent to novel views. We visualize the first-layer feature maps of the image encoders of TD-MPC and MoVie: the feature map from MoVie on the novel view exhibits a closer resemblance to that on the training view.
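The test-time adaptation loop described above can be sketched with linear stand-ins: a frozen "encoder" and frozen latent dynamics model, plus a learnable per-feature scale-and-shift adapter that plays the role of the spatial adaptive encoder (the actual method adapts a convolutional encoder with spatial modules; the matrices, dimensions, and the simulated "novel view" below are all illustrative assumptions, not the authors' implementation). Only the adapter parameters receive gradients of the self-supervised latent-dynamics loss; everything learned at training time stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, latent_dim, act_dim = 8, 4, 2

# Stand-ins for the frozen components obtained at training time.
W = rng.normal(size=(latent_dim, obs_dim)) / np.sqrt(obs_dim)  # encoder body (frozen)
D = rng.normal(size=(latent_dim, latent_dim)) * 0.3            # latent dynamics (frozen)
A = rng.normal(size=(latent_dim, act_dim)) * 0.3               # action head (frozen)
DW = D @ W

# Canonical-view transitions constructed to satisfy the latent dynamics exactly,
# so the adaptation objective has a zero-loss solution.
W_pinv = np.linalg.pinv(W)
def canonical_transition():
    o = rng.normal(size=obs_dim)
    a = rng.normal(size=act_dim)
    o_next = W_pinv @ (DW @ o + A @ a)   # next obs consistent with the dynamics
    return o, a, o_next

# A "novel view": a fixed per-feature rescale and shift of every observation.
true_scale = rng.uniform(0.7, 1.3, size=obs_dim)
true_shift = 0.1 * rng.normal(size=obs_dim)
view = lambda o: (o - true_shift) / true_scale   # the adapter must learn to undo this

transitions = [canonical_transition() for _ in range(64)]

def loss_and_grads(s, b):
    """Mean latent-dynamics loss over the batch, with analytic grads w.r.t. (s, b)."""
    n, total = len(transitions), 0.0
    gs, gb = np.zeros(obs_dim), np.zeros(obs_dim)
    for o_c, a, on_c in transitions:
        o, on = view(o_c), view(on_c)            # what the agent observes at test time
        z, zn = W @ (s * o + b), W @ (s * on + b)
        e = D @ z + A @ a - zn                   # latent prediction error
        total += e @ e
        gs += 2 * ((DW.T @ e) * o - (W.T @ e) * on)
        gb += 2 * (DW.T @ e - W.T @ e)
    return total / n, gs / n, gb / n

# Test-time adaptation: gradient steps on the adapter only; W, D, A stay frozen.
s, b = np.ones(obs_dim), np.zeros(obs_dim)
loss0, _, _ = loss_and_grads(s, b)
for _ in range(2000):
    _, gs, gb = loss_and_grads(s, b)
    s -= 0.02 * gs
    b -= 0.02 * gb
loss1, _, _ = loss_and_grads(s, b)
print(f"latent-dynamics loss before/after adaptation: {loss0:.4f} / {loss1:.4f}")
```

The loss is minimized exactly when the adapter inverts the view change (s = true_scale, b = true_shift), mirroring how the spatial adaptive encoder is driven to map novel-view inputs back toward the training-view feature distribution.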






[Figure: method overview and first-layer feature maps, comparing TD-MPC and MoVie (ours) on the training view and a novel view.]


DMControl

[Video comparisons: MoVie vs. TD-MPC]

xArm

[Video comparisons: MoVie vs. TD-MPC]

Adroit

[Video comparisons: MoVie vs. TD-MPC]