Continual Predictive Learning from Videos

Geng Chen*   Wendong Zhang*   Han Lu   Siyu Gao   Yunbo Wang   Mingsheng Long   Xiaokang Yang


Figure 1: The new problem of continual predictive learning and the general framework of our approach at test time.

Abstract

Predictive learning ideally builds a world model of physical processes in one or more given environments. Typical setups assume that data from all environments can be collected at all times. In practice, however, different prediction tasks may arrive sequentially, so the environments may change persistently throughout training. Can we develop predictive learning algorithms that cope with such realistic, non-stationary physical environments? In this paper, we study a new continual learning problem in the context of video prediction and observe that most existing methods suffer from severe catastrophic forgetting in this setup. To tackle this problem, we propose the continual predictive learning (CPL) approach, which learns a mixture world model via predictive experience replay and performs test-time adaptation with non-parametric task inference. We construct two new benchmarks based on RoboNet and KTH, in which different tasks correspond to different physical robotic environments or human actions. Our approach is shown to effectively mitigate forgetting and to remarkably outperform naïve combinations of prior art in video prediction and continual learning.
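The non-parametric task inference mentioned above can be illustrated with a minimal sketch: at test time, route the observed context frames to whichever task's model reconstructs them with the lowest error. Everything below is hypothetical scaffolding (the `task_models` dictionary of fixed "prototype" frames stands in for learned per-task predictors; the paper's actual inference mechanism may differ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-task world models: each "model" here is
# just a fixed prototype frame; a real model would be a learned predictor.
task_models = {
    "berkeley": rng.normal(0.0, 1.0, size=(8, 8)),
    "stanford": rng.normal(3.0, 1.0, size=(8, 8)),
}

def infer_task(context_frames, models):
    """Non-parametric task inference (sketch): choose the task whose model
    best explains the observed context frames, i.e. lowest mean squared
    reconstruction error. No task label is given at test time."""
    errors = {
        name: float(np.mean((context_frames - proto) ** 2))
        for name, proto in models.items()
    }
    best = min(errors, key=errors.get)
    return best, errors

# A test clip drawn near the "stanford" prototype should be routed there.
clip = task_models["stanford"] + rng.normal(0.0, 0.1, size=(8, 8))
task, errs = infer_task(clip, task_models)
```

The key property is that inference needs no stored task identity, only the per-task models themselves, which is what allows adaptation in a task-agnostic test setting.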

Method


Figure 2: The overall network architecture of the mixture world model and the predictive experience replay training scheme in the proposed CPL method.
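The replay scheme in Figure 2 can be sketched as follows: each training batch for the current task is mixed with clips *generated* by the world model for earlier tasks, so that old dynamics keep receiving gradient signal without storing raw past data. The function names, `replay_ratio`, and string "clips" below are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(0)

def replay_batch(current_clips, generate_replay, batch_size=8, replay_ratio=0.5):
    """Predictive experience replay (sketch): mix real clips from the
    current task with model-generated clips for previous tasks.
    `generate_replay` is a hypothetical callable standing in for a
    rollout from the learned predictor."""
    n_replay = int(batch_size * replay_ratio)
    batch = random.sample(current_clips, batch_size - n_replay)
    batch += [generate_replay() for _ in range(n_replay)]
    random.shuffle(batch)  # interleave real and replayed samples
    return batch

# Toy usage: clips are labelled strings instead of frame tensors.
current = [f"task2_clip{i}" for i in range(10)]
batch = replay_batch(current, lambda: "task1_generated", batch_size=6)
```

Replaying generated rather than stored clips is what keeps the memory footprint constant as the number of tasks grows.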

Result on RoboNet Benchmark


Figure 3: Showcases of action-conditioned video prediction in the first environment of RoboNet (i.e., Berkeley) after training the models in the last environment (i.e., Stanford). We compare our method (CPL-full) with the naïve combinations of existing world models and continual learning algorithms.

Result on KTH Benchmark

Figure 4: Showcases of predicted frames on the first task (i.e., Boxing) after the training period of the last task (i.e., Running). From left to right: inputs, true outputs, SVG, PredRNN, PhyDNet, CPL-base+EWC, PredRNN+LwF, CPL-base, CPL-base (Joint Train), and CPL-full (ours).

Related Publications

[1] PredRNN: Recurrent Neural Networks for Predictive Learning Using Spatiotemporal LSTMs.
      NeurIPS 2017. PDF / Code [PyTorch]
[2] Stochastic Video Generation with a Learned Prior.
      ICML 2018. PDF / Code [PyTorch]