🐛 Bug

Possible memory leak when resetting environment

To Reproduce

Steps to reproduce the behavior:

Using the latest nle==0.8.1, run the following:

```python
import nle  # registers the NetHack environments with gym
import gym

env = gym.make("NetHackChallenge-v0")
for _ in range(1_000_000_000):
    obsv = env.reset()
```
The memory used by the process keeps increasing for as long as it runs. I tested the same configuration with "CartPole-v0", and its memory usage remains static.
Tested and same behavior on:
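Since "CartPole-v0" stays flat, one quick way to narrow down where the growth comes from is to check whether the number of live Python objects also grows across resets. A minimal stdlib-only sketch (the per-iteration workload here is a hypothetical stand-in for a batch of `env.reset()` calls):

```python
import gc

def live_object_count():
    # Number of objects tracked by the cycle collector after a full collection.
    gc.collect()
    return len(gc.get_objects())

before = live_object_count()
# Stand-in workload; replace with a batch of env.reset() calls to test NLE.
junk = [{"step": i} for i in range(1000)]
del junk
after = live_object_count()
# If this count stays flat across resets while the process RSS keeps growing,
# the leak is most likely in native (C/C++) code rather than Python objects.
print(f"tracked objects: {before} -> {after}")
```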
Expected behavior
I would expect the memory usage of the process to remain static, or at least close to it.
Environment
Collected from Ubuntu 20.04, as this was the cleanest environment.
Collecting environment information...
NLE version: 0.8.1
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
CMake version: version 3.24.0
Python version: 3.8
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti
Nvidia driver version: 471.41
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.17.4
[conda] Could not collect
Additional context
I tried using tracemalloc to find the issue, but none of the memory tracked by the tool increased with longer runtime or more resets.
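That result is consistent with a native-side leak: tracemalloc only sees allocations made through Python's allocator, so memory leaked in NLE's C code would be invisible to it. A stdlib-only sketch for watching the process's peak RSS instead (on Linux, `ru_maxrss` is reported in kilobytes; the per-iteration allocation is a deliberate stand-in for `env.reset()` so the growth is visible):

```python
import resource

def peak_rss_kb():
    # Peak resident set size of this process; kilobytes on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

baseline = peak_rss_kb()
retained = []
for i in range(5):
    # Stand-in for env.reset(): deliberately retains ~10 MB per iteration.
    # Swap in the NLE reset loop to measure the real case.
    retained.append(bytearray(10 * 1024 * 1024))
    print(f"iteration {i}: peak RSS = {peak_rss_kb()} KB")
```

If the peak RSS keeps climbing while tracemalloc reports nothing, that points at the C side of the environment rather than Python object retention.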