v-iashin / MDVC

PyTorch implementation of Multi-modal Dense Video Captioning (CVPR 2020 Workshops)
https://v-iashin.github.io/mdvc

The utilization rate of GPU is low #19

Closed XIUXIUXIUBIUA closed 3 years ago

XIUXIUXIUBIUA commented 3 years ago

I found that your dataloader is kind of strange. Its batch_size must be 1, and it then uses the idx (a batch) from caption_loader_iter to fetch the video & audio feature vectors. So I cannot set num_workers > 0, and I guess that is the reason for my problem. How did you solve this when you were training?

[screenshot: GPU utilization]

v-iashin commented 3 years ago

Yes. I am glad you asked.

I am not sure the two are related, because I remember my CPU was running 16 threads at 100% load when training the model, even with num_workers = 0 (i.e. the default option).

About the data loader: yes, you are right, it is a bit strange. I came up with this workaround because previously (maybe even still) torchtext and PyTorch used different approaches to data loading, which made it very painful to combine the good text handling from torchtext with vision-style pipelines.

An explanation of how it works is a bit tricky:

The DataLoader that wraps this dataset https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/main.py#L220-L221 cannot pad text on its own (without a collate_fn). One possible solution would be a custom collate_fn in PyTorch, but the torchtext package already had BucketIterator, which did precisely this, and back then I felt implementing it myself would take too much time. Hence, caption_iterator() is defined above in dataset.py: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L134
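For reference, the custom collate_fn alternative would have looked roughly like this (a minimal sketch, not code from this repo; pad_idx and the item layout are assumptions):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def caption_collate_fn(batch, pad_idx=0):
    # batch: list of (tokenized_caption, meta_row_idx) pairs
    captions, indices = zip(*batch)
    captions = [torch.tensor(c, dtype=torch.long) for c in captions]
    # pad every caption in the batch to the length of the longest one
    padded = pad_sequence(captions, batch_first=True, padding_value=pad_idx)
    return padded, torch.tensor(indices)
```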

Besides the iterator (the datasetloader variable) that automatically pads the sentences in a batch to the same length, caption_iterator() also outputs train_vocab, which can be used as a token-to-word mapping. This is also cumbersome to implement on your own because you need to build a vocabulary (ew!).
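If it helps, this is roughly how those pieces fit together in the legacy torchtext API (pre-0.9, later moved to torchtext.legacy); the field names and file format here are illustrative, not the exact ones in dataset.py:

```python
from torchtext.data import Field, TabularDataset, BucketIterator

CAPTION = Field(init_token='<s>', eos_token='</s>', pad_token='<pad>')
IDX = Field(sequential=False, use_vocab=False)  # row index into meta.csv

# each example carries its dataset row index next to the caption
dataset = TabularDataset('meta.csv', format='tsv',
                         fields=[('idx', IDX), ('caption', CAPTION)])
CAPTION.build_vocab(dataset, min_freq=1)  # the train_vocab mentioned above

loader = BucketIterator(dataset, batch_size=28,
                        sort_key=lambda ex: len(ex.caption))
batch = next(iter(loader))
batch.caption  # padded token ids, shape (seq_len, batch_size)
batch.idx      # indices into the meta rows
```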

Anyway, the data items we get from self.caption_loader, defined here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L400 , contain both a batch of padded tokenized captions and a batch of indices into the rows of meta.csv (or its filtered version) and, hence, into self.features_dataset. Check out the slicing here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L446 which uses the same csv file (meta) containing the paths to the precalculated video features and the start and end segment entries.
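Schematically, the indices travel from the caption batch to the feature table like this (a sketch with hypothetical column names, not the exact code):

```python
import pandas as pd

# meta.csv holds, per row, a path to precalculated features plus segment info
meta = pd.read_csv('meta.csv', sep='\t')

# 'indices' is the batch of row indices coming from the caption iterator
rows = meta.iloc[indices]
feature_paths = rows['features_path']       # hypothetical column name
starts, ends = rows['start'], rows['end']   # hypothetical column names
```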

So, to get the video features, I need to iterate through self.caption_loader (obtained from caption_iterator()) to get the indices of the dataset rows (the paths to the precalculated features). However, I cannot index self.caption_loader directly to retrieve these captions and indices – it fails. So I ended up doing it the opposite way: adding the dataset row index as a field in self.caption_loader. See here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L182 and here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L443-L446. Each next() call on the iterator returns a shuffled set of indices and captions (caption_data). These indices are then used in self.features_dataset, which returns video features for each index entry in meta – see here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L308 Finally, the padding of the features is performed in AudioVideoFeaturesDataset, because BucketIterator can digest only 'text' data, while we also need to pad the features.
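Putting it together, the whole flow per step is roughly this (a simplified sketch with hypothetical attribute names, not the actual __getitem__):

```python
def __getitem__(self, dataset_index):
    # dataset_index from the outer DataLoader is effectively ignored:
    # the real batching is driven by the torchtext iterator
    caption_data = next(self.caption_loader_iter)  # shuffled captions + indices
    indices = caption_data.idx                     # rows of meta.csv
    # fetch (and pad) the precalculated audio/video features for those rows
    video_stack, audio_stack = self.features_dataset[indices]
    return caption_data, video_stack, audio_stack
```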

As a result, the batch size is not 1, as it is defined in the PyTorch DataLoader here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/main.py#L220-L224 but rather the one defined in caption_iterator() here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L209-L211
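One common way to make such a batch_size = 1 DataLoader pass the inner batch through unchanged is an identity-style collate_fn (illustrative only; the collate details in main.py may differ):

```python
from torch.utils.data import DataLoader

# each 'item' of the outer loader is already a full inner batch
loader = DataLoader(dataset, batch_size=1,
                    collate_fn=lambda items: items[0])  # unwrap the singleton

for caption_data, video_stack, audio_stack in loader:
    ...  # effective batch size = the one set in caption_iterator()
```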

So, on a high level: I am wrapping a torchtext BucketIterator inside a PyTorch DataLoader with batch_size = 1. This means num_workers won't do much here, because each 'batch' the DataLoader fetches is a single item.

This was the price of having text padding out of the box back then while keeping the PyTorch dataset class.

Q: Have you ensured that the indices that caption_iterator returns with the captions (caption_data) are unique when you do caption_data = next(self.caption_loader_iter)?

A: Yes, and they are shuffled after every epoch in update_iterator(self) (defined here: https://github.com/v-iashin/MDVC/blob/df3b88a8bc10271e9501be41cd77e74d13abf79b/dataset/dataset.py#L452-L456).
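A quick sanity check for the uniqueness within an epoch could look like this (a sketch, assuming the iterator exposes __len__ and an idx field as above):

```python
seen = set()
for _ in range(len(caption_loader)):
    caption_data = next(caption_loader_iter)
    idxs = caption_data.idx.tolist()
    assert seen.isdisjoint(idxs), 'duplicate indices within an epoch'
    seen.update(idxs)
```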

XIUXIUXIUBIUA commented 3 years ago

Oh, thanks a lot. I see what you're trying to do. Really nice work! I only had 2 threads assigned to this task when training the model, so too much time was spent fetching the dataset. I think I should use more threads if I want to improve GPU utilization. With 8 threads assigned to the task, it looks good.

[screenshot: GPU and CPU utilization]

v-iashin commented 3 years ago

I also remember that during training the GPU load was ~50-70%, while during the 1-by-1 prediction it ramped up to 100%. So I would assume it is related to the torchtext parts.

XIUXIUXIUBIUA commented 3 years ago

Yes, with your help, I know how to solve this problem. Thanks a lot!