klauscc / VindLU


Problem with finetuning speed #9

Open insundaycathy opened 1 year ago

insundaycathy commented 1 year ago

Hi, thanks for the great work. But when I tried to finetune the network on my own data, I encountered problems with efficiency.

  1. If I set num_workers in the DataLoader to >0, data loading becomes extremely slow, and the loading time increases with each additional worker.
  2. The time to backpropagate through the graph, i.e. the time to execute scaler.scale(loss).backward(), increases in proportion to the batch size (see the timing sketch below).

I want to ask whether this is normal in finetuning, or whether I have somehow introduced a bug. Also, is there any way to speed things up?
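For reference, here is a minimal sketch of one way to time the data-loading and backward phases separately to confirm where the time goes. The model, data, and optimizer below are self-contained stand-ins, not code from VindLU:

import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model, data, and optimizer so the sketch runs on its own;
# substitute the real VindLU model and dataloader when profiling.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 2).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
dataloader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
    batch_size=8,
)

end = time.time()
for inputs, targets in dataloader:
    data_time = time.time() - end  # time spent waiting on the DataLoader
    inputs, targets = inputs.to(device), targets.to(device)
    loss = criterion(model(inputs), targets)
    t0 = time.time()
    scaler.scale(loss).backward()  # the line the issue asks about
    if device == "cuda":
        torch.cuda.synchronize()   # CUDA ops are async; sync before timing
    backward_time = time.time() - t0
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
    print(f"data: {data_time:.3f}s  backward: {backward_time:.3f}s")
    end = time.time()

If data_time dominates and grows with num_workers, the bottleneck is the loader (problem 1). Backward time growing with batch size (problem 2) is expected on its own, since the backward pass does roughly proportional work per sample.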

klauscc commented 1 year ago

Hi, thanks for your interest. I encountered a similar issue in another project on other servers. It may be caused by the video reader we use, decord, which seems to have problems with PyTorch's multiprocessing in the DataLoader.

My solution is to pass the "spawn" multiprocessing context when creating the DataLoader:

dataloader = DataLoader(multiprocessing_context="spawn",
                        ....)
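For completeness, a self-contained sketch of this fix in context; the dataset below is a hypothetical placeholder, not the VindLU dataset:

import torch
from torch.utils.data import DataLoader, Dataset

class VideoDataset(Dataset):
    """Hypothetical placeholder; a real version would decode frames with decord."""
    def __init__(self, paths):
        self.paths = paths
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, idx):
        # Returning a dummy tensor keeps the sketch runnable without decord.
        return torch.zeros(3, 224, 224)

if __name__ == "__main__":  # required: "spawn" re-imports this module in workers
    dataloader = DataLoader(
        VideoDataset(["a.mp4", "b.mp4"]),
        batch_size=2,
        num_workers=4,
        # "spawn" starts fresh worker processes instead of forking the parent,
        # so workers don't inherit decord's internal state.
        multiprocessing_context="spawn",
    )
    for batch in dataloader:
        print(batch.shape)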