Open linminhtoo opened 2 years ago
As @NivekT pointed out, the overhead of the profiler is significant; this PR will potentially resolve the performance regression for `IterDataPipe`: https://github.com/pytorch/pytorch/pull/78674
I see! This sounds like the main reason for the speed difference. I will eagerly wait for the PR to be approved and merged, then will be happy to re-run my speed benchmarks.
Since the PR has landed, do you want to test the nightly releases of PyTorch and TorchData?
You can install them via `pip install --pre torch torchdata -f https://download.pytorch.org/whl/nightly/cpu`
Thanks! I'll try to test this out soon.
Hi! Can you also share the code you are using for benchmarks?
🐛 Describe the bug
I ran a series of speed benchmarks comparing 3 methods of building a datapipe. This is a continuation of the conversation from https://github.com/pytorch/data/issues/454#issuecomment-1141858156

Brief context: I start with a `.csv` containing the unique ID of each data sample. I apply a series of maps & filters. Finally, I generate the tensors needed by my model for training. The last step is expensive in both compute & memory. I'm in the cheminformatics space, where we generate various features for a given molecule or protein. We may, for example, generate Morgan fingerprints of dimension 2048, or a large 3D surface mesh of the protein (which can be thousands of elements per sample). Just to be clear, this is a very common motif in deep learning workflows in this space, so it would be generally applicable to the domain.

I am using
`IterDataPipe` for all the operations until the last step. For the last step, I have tried 3 cases:

1) define my own `IterDataPipe`
2) define my own `MapDataPipe`
3) use my defined `IterDataPipe`, BUT run `.to_map_datapipe()` to convert it to a `MapDataPipe` AFTER generating the tensors.

Training time per epoch (in seconds) on 100k samples, as the average of 5 epochs (includes the forward & backward pass on a fixed model, which is deliberately kept very simple).
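The three cases can be sketched as follows. This is a minimal, framework-agnostic sketch using plain Python classes as stand-ins for torchdata's `IterDataPipe` and `MapDataPipe`; `make_features` is a hypothetical placeholder for the expensive featurization step (e.g. computing Morgan fingerprints).

```python
# Plain-Python stand-ins for the three approaches; real code would
# subclass torchdata's IterDataPipe / MapDataPipe instead.

def make_features(sample_id):
    # hypothetical placeholder: pretend this builds a large tensor per sample
    return [sample_id] * 4

class FeaturizeIter:
    """Approach 1: iterable-style, features generated lazily on iteration."""
    def __init__(self, source):
        self.source = source
    def __iter__(self):
        for sample_id in self.source:
            yield make_features(sample_id)

class FeaturizeMap:
    """Approach 2: map-style, features generated on indexed access."""
    def __init__(self, source):
        self.source = list(source)  # map-style needs random access
    def __len__(self):
        return len(self.source)
    def __getitem__(self, idx):
        return make_features(self.source[idx])

def to_map(iter_dp):
    """Approach 3: materialize the iterable, like .to_map_datapipe().
    This loads EVERY featurized sample into memory up front."""
    return dict(enumerate(iter_dp))
```

The sketch makes the memory trade-off of approach 3 visible: `to_map` pays the full featurization cost for the whole dataset before training starts, whereas approach 1 only ever holds one sample's features at a time.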
I also measured the time to set up the datapipe, `dp = build_datapipe(args)`, going from `.csv` to `torch.Tensor`s, followed by `dl = DataLoader(dp)`. For all 3 methods it is the same, at 18 seconds; MapDataPipe (2) didn't take longer to set up than IterDataPipe (1).

There are obvious benefits to `IterDataPipe` over `MapDataPipe`. The one I'm most concerned about is error handling: if I am somehow unable to generate the feature matrices & tensors for a given data sample, I can simply skip it and not `yield` anything in `__iter__`. With `MapDataPipe`, I am forced to return something, like `None`, which complicates things as I have to handle this `None` later, e.g. in `collate_fn`.
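The error-handling difference can be illustrated with a short sketch. Again this is plain Python rather than the torchdata classes, and `featurize` is a hypothetical stand-in that raises when a sample's features cannot be generated.

```python
def featurize(sample_id):
    # hypothetical: raises ValueError when features can't be generated
    if sample_id.startswith("bad"):
        raise ValueError(f"cannot featurize {sample_id}")
    return [sample_id] * 4

class SkippingIter:
    """Iterable-style: a bad sample is silently skipped -- nothing is yielded."""
    def __init__(self, source):
        self.source = source
    def __iter__(self):
        for sample_id in self.source:
            try:
                yield featurize(sample_id)
            except ValueError:
                continue  # skip it; downstream code never sees the sample

class PaddingMap:
    """Map-style: __getitem__ must return *something* for every index,
    so a bad sample becomes None and must be filtered out later."""
    def __init__(self, source):
        self.source = list(source)
    def __len__(self):
        return len(self.source)
    def __getitem__(self, idx):
        try:
            return featurize(self.source[idx])
        except ValueError:
            return None

def collate_dropping_none(batch):
    # the extra handling the map-style pipeline forces on collate_fn
    return [x for x in batch if x is not None]
```

With the iterable-style class the bad sample simply never appears, while the map-style class leaks a `None` that every downstream consumer has to be aware of.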
Note that 3) is very memory intensive and simply infeasible even for a relatively small dataset (100k samples), since we need to load the whole `IterDataPipe` into memory in `self._load_map()` (see https://github.com/pytorch/data/issues/454).

Versions
As I'm on NixOS, I'm not using pip/conda directly, and it is difficult for me to run the `collect_env.py` script as-is. However, I can still provide the versions printed by `<pkg>.__version__`:

1.11.0+cu113
0.3.0
1.21.5
1.4.2
The benchmarks were run on an RTX 3080 GPU with 32 GB RAM and an 8-core CPU.
Please let me know if this is insufficient.