daniellawson9999 / online-decision-transformer

An unofficial implementation for online decision transformer

Increasing GPU memory usage when online finetuning #3

Closed guosyjlu closed 2 years ago

guosyjlu commented 2 years ago

Hi, thanks for your great work on this implementation. When I use this codebase, I find that GPU memory usage increases from ~4000 MiB (offline pretraining) to ~11000 MiB (online finetuning). Do you have any idea what causes this?

For offline pretraining:

```
python experiments.py --env hopper --dataset medium-replay
```

For online finetuning:

```
python experiments.py --env hopper --dataset medium-replay --online_training
```
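To narrow down where the jump happens, one option is to log CUDA memory at a few points in the training loop (e.g. before and after online rollouts are added to the replay buffer). The helper below is a hypothetical diagnostic sketch, not part of this repo; it only assumes PyTorch's standard `torch.cuda.memory_allocated` / `torch.cuda.memory_reserved` API.

```python
def gpu_memory_mib():
    """Return allocated/reserved CUDA memory in MiB, or None if
    torch or a CUDA device is unavailable."""
    try:
        import torch  # assumed available when running the repo's experiments
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    return {
        "allocated": torch.cuda.memory_allocated() / 2**20,
        "reserved": torch.cuda.memory_reserved() / 2**20,
    }

if __name__ == "__main__":
    # Call this before/after offline pretraining and after each online
    # rollout phase to see which step accounts for the ~7000 MiB increase.
    print(gpu_memory_mib())
```

Comparing `reserved` against `allocated` also helps distinguish genuine tensor growth (e.g. a growing trajectory replay buffer kept on GPU, or longer online contexts) from PyTorch's caching allocator simply holding on to freed blocks.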