DLR-RM / stable-baselines3

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
https://stable-baselines3.readthedocs.io
MIT License

Fix tensorboard video slow numpy->torch conversion #1910

Closed NickLucche closed 5 months ago

NickLucche commented 5 months ago

Hey,

this PR is a small docs fix for the speed issue highlighted in https://github.com/DLR-RM/stable-baselines3/pull/196#issuecomment-714656995, which also shows up as a warning in more recent torch versions: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:275.).

You can also test the speed difference yourself with:

from time import time
import numpy as np
import torch

# 100 identical fake frames, mimicking a stack of recorded video frames
rgb = np.ones((3, 256, 320))
rgbs = [rgb.copy() for _ in range(100)]

# Slow path: torch walks the Python list and converts each array element by element
s = time()
a = torch.ByteTensor(rgbs)
print(time() - s)

# Fast path: build a single numpy array first, then let torch share its storage
s = time()
a = torch.from_numpy(np.asarray(rgbs))
print(time() - s)

I see roughly two orders of magnitude of difference between the two timings here.

Description

I create a single numpy array first and then let torch reuse the same storage via torch.from_numpy (see the sketch below). np.stack could also be suggested if we prefer being more explicit about the stacking.
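To make the change concrete, here is a minimal sketch of the before/after pattern; the screens list and its frame shape are hypothetical stand-ins for the frames collected when logging a video to TensorBoard:

import numpy as np
import torch as th

# Hypothetical list of frames gathered during an evaluation rollout
screens = [np.zeros((3, 64, 64), dtype=np.uint8) for _ in range(100)]

# Before: torch converts the Python list of arrays element by element (slow, emits the UserWarning)
video = th.ByteTensor(screens)

# After: stack into one contiguous numpy array, then let torch reuse that storage without copying
video = th.from_numpy(np.asarray(screens))
# np.stack(screens) is equivalent here and makes the new leading axis explicit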

Motivation and Context

Fixes https://github.com/DLR-RM/stable-baselines3/pull/196#issuecomment-714656995.

Types of changes

Checklist

Note: You can run most of the checks using make commit-checks.

Note: we are using a maximum length of 127 characters per line