Action-Conditioned 3D Human Motion Synthesis with Transformer VAE

Mathis Petrovich, Michael J. Black, Gül Varol

ICCV 2021

teaser.png

Abstract

We tackle the problem of action-conditioned generation of realistic and diverse human motion sequences. In contrast to methods that complete, or extend, motion sequences, this task does not require an initial pose or sequence. Here we learn an action-aware latent representation for human motions by training a generative variational autoencoder (VAE). By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action. Specifically, we design a Transformer-based architecture, ACTOR, for encoding and decoding a sequence of parametric SMPL human body models estimated from action recognition datasets. We evaluate our approach on the NTU RGB+D, HumanAct12 and UESTC datasets and show improvements over the state of the art. Furthermore, we present two use cases: improving action recognition through adding our synthesized data to training, and motion denoising. Code and models are available on our project page.

Video


Approach

pipeline.png

Encoder: Given a sequence of body poses P_1, \ldots, P_T and an action label a, the encoder outputs distribution parameters on which we define a KL loss \mathcal{L}_{KL}. We use extra learnable tokens per action (\mu_{a}^{token} and \Sigma_{a}^{token}) as a way to obtain \mu and \Sigma from the Transformer encoder. From \mu and \Sigma we sample the latent representation z \in \mathbf{M} (the encoded input).
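As a concrete illustration, below is a minimal PyTorch sketch of such an encoder. Module and variable names are our own and this is not the released ACTOR code; positional encodings and masking for variable-length sequences are omitted for brevity.

import torch
import torch.nn as nn

class ActionConditionedEncoder(nn.Module):
    """Sketch of a Transformer VAE encoder with per-action learnable tokens.

    The mu and sigma tokens for the given action are prepended to the embedded
    pose sequence; the Transformer outputs at those two positions are read out
    as the distribution parameters.
    """

    def __init__(self, num_actions, pose_dim, latent_dim=256, num_layers=4, num_heads=4):
        super().__init__()
        self.pose_embedding = nn.Linear(pose_dim, latent_dim)
        # One learnable mu token and one sigma token per action class
        self.mu_token = nn.Parameter(torch.randn(num_actions, latent_dim))
        self.sigma_token = nn.Parameter(torch.randn(num_actions, latent_dim))
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, poses, action):
        # poses: (batch, T, pose_dim); action: (batch,) integer class labels
        x = self.pose_embedding(poses)
        mu_tok = self.mu_token[action].unsqueeze(1)        # (batch, 1, latent_dim)
        sigma_tok = self.sigma_token[action].unsqueeze(1)  # (batch, 1, latent_dim)
        out = self.encoder(torch.cat([mu_tok, sigma_tok, x], dim=1))
        mu, logvar = out[:, 0], out[:, 1]
        # Reparameterization trick: z ~ N(mu, sigma^2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to the standard normal prior
        kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl_loss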

Decoder: It takes the latent vector z, an action label a, and a duration T as input. The action determines the learnable additive token b_{a}^{token}, and the duration determines the number of positional encodings (PE) fed to the decoder. The decoder outputs the whole sequence \widehat{P}_1, \ldots, \widehat{P}_T, against which the reconstruction loss \mathcal{L}_{P} is computed. In addition, we compute vertices with a differentiable SMPL layer to define a vertex loss \mathcal{L}_{V}.
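A corresponding decoder sketch follows, again with illustrative names and simplified relative to the released code (no masking for variable durations, and the SMPL vertex loss is left out).

import math
import torch
import torch.nn as nn

class ActionConditionedDecoder(nn.Module):
    """Sketch of the Transformer decoder used as a generator.

    The latent z is shifted by a per-action learnable bias token and used as
    the single memory entry of a Transformer decoder; T sinusoidal positional
    encodings act as queries, yielding a T-frame pose sequence.
    """

    def __init__(self, num_actions, pose_dim, latent_dim=256, num_layers=4, num_heads=4, max_len=1000):
        super().__init__()
        self.action_bias = nn.Parameter(torch.randn(num_actions, latent_dim))
        # Precompute sinusoidal positional encodings up to max_len frames
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, latent_dim, 2) * (-math.log(10000.0) / latent_dim))
        pe = torch.zeros(max_len, latent_dim)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerDecoderLayer(d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.pose_head = nn.Linear(latent_dim, pose_dim)

    def forward(self, z, action, duration):
        # z: (batch, latent_dim); action: (batch,); duration: number of frames T
        memory = (z + self.action_bias[action]).unsqueeze(1)           # (batch, 1, latent_dim)
        queries = self.pe[:duration].unsqueeze(0).expand(z.size(0), -1, -1)
        out = self.decoder(queries, memory)
        return self.pose_head(out)                                     # (batch, T, pose_dim)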

Generator: The decoder alone serves as the generator: the encoded input z is simply replaced by a vector randomly sampled from a Gaussian distribution.
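Continuing the decoder sketch above, a hypothetical usage example: sample z from the standard normal prior and decode it into a fixed-length motion. The pose dimensionality shown assumes a 6D rotation per SMPL joint plus a root translation; adapt it to your pose representation.

import torch

# Generate a 60-frame motion for action class 3 (all values illustrative)
decoder = ActionConditionedDecoder(num_actions=12, pose_dim=24 * 6 + 3)
z = torch.randn(1, 256)              # latent vector sampled from N(0, I)
action = torch.tensor([3])
generated_poses = decoder(z, action, duration=60)  # (1, 60, pose_dim)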

Visual results


Generated motions


Please refer to the video for more examples and for other experiments (different durations, interpolation in the latent space, loss ablations).

Results

Comparison to the state of the art

sota.png

Please refer to the paper for more details.

Bibtex

If you find this project useful for your research, please cite:

@inproceedings{petrovich21actor,
    title = {Action-Conditioned 3{D} Human Motion Synthesis with Transformer {VAE}},
    author = {Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
    booktitle = {International Conference on Computer Vision ({ICCV})},
    year = {2021}
}

Acknowledgements

This work was granted access to the HPC resources of IDRIS under the allocation 2021-101535 made by GENCI. The authors would like to thank Mathieu Aubry and David Picard for helpful feedback, and Chuan Guo and Shihao Zou for their help with Action2Motion details.

Copyright Notice

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright.