Boeun-Kim / GL-Transformer

This is the official implementation of Global-local Motion Transformer for Unsupervised Skeleton-based Action Learning (ECCV 2022).
MIT License

GPU #1

Closed — maoyunyao closed this issue 1 year ago

maoyunyao commented 1 year ago

Hello, how many GPUs do you use for pre-training? I tried 4 × RTX 3090, but ran out of memory.

Boeun-Kim commented 1 year ago

Hi. Thanks for visiting. I'm using 3 × 40 GB GPUs for pretraining and linear evaluation. Please reduce the batch size to fit in lower memory.
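As a rough way to pick a smaller batch size, one common heuristic is to scale it proportionally to total GPU memory, since activation memory grows roughly linearly with batch size. This is a sketch under that assumption, not from the repo's code; the reference batch size of 128 is hypothetical (the thread does not state the actual value).

```python
def scaled_batch_size(ref_batch, ref_total_mem_gb, new_total_mem_gb):
    """Scale a reference batch size proportionally to available GPU memory.

    Heuristic only: assumes memory use grows linearly with batch size,
    which ignores fixed model/optimizer state. Always floor to at least 1.
    """
    return max(1, int(ref_batch * new_total_mem_gb / ref_total_mem_gb))

# Maintainer's setup from this thread: 3 x 40 GB GPUs.
ref_mem = 3 * 40   # 120 GB total
# Asker's setup: 4 x RTX 3090 (24 GB each).
new_mem = 4 * 24   # 96 GB total

# Hypothetical reference batch size of 128.
print(scaled_batch_size(128, ref_mem, new_mem))  # -> 102
```

In practice you would start near this estimate and back off further if out-of-memory errors persist, since optimizer state and the model itself occupy a fixed chunk of memory that this linear rule ignores.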