gsig / PyVideoResearch

A repository of common methods, datasets, and tasks for video research
GNU General Public License v3.0

about self.orig_loss #12

Closed FingerRec closed 5 years ago

FingerRec commented 5 years ago

I found that baseline_exp/async_tf_i3d_charades.py cannot run directly, so I modified line 81 in models/criteria/async_tf_criterion.py as follows:

    idtime = []
    for i in range(len(meta)):
        idtime.append((meta[i]['id'], meta[i]['time']))

I was also confused about line 105 in models/criteria/async_tf_criterion.py:

    loss += self.loss(torch.nn.Sigmoid()(a), target) * self.orig_loss

What does self.orig_loss mean?

gsig commented 5 years ago

Hi!

This baseline definitely needed some updating. I just added fixes in commit ded24bd5f049d4c429c9f0a746b09f2d6af8fd44, and it's running now on 4 GPUs.

self.orig_loss was just a legacy parameter that had been set to 1, so it can safely be removed. It was historically used to adjust for the difference between the original softmax loss and the new sigmoid loss.
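In other words, with orig_loss fixed at 1 that line reduces to the plain multi-label sigmoid loss. A minimal sketch of the simplified form (assuming self.loss is a BCE-style criterion applied to sigmoid outputs, which may differ from the actual criterion class in the repo):

```python
import torch
import torch.nn as nn

# Sketch of what line 105 reduces to once the legacy orig_loss
# factor (== 1) is dropped. Assumes a BCE-style criterion on
# sigmoid outputs; the real criterion in the repo may differ.
criterion = nn.BCELoss()

def multilabel_loss(a, target):
    # a: raw logits of shape (batch, num_classes)
    # target: multi-label ground truth in {0, 1}, same shape
    return criterion(torch.sigmoid(a), target.float())

# Example usage with dummy data
logits = torch.randn(4, 157)            # Charades has 157 action classes
labels = torch.randint(0, 2, (4, 157))
loss = multilabel_loss(logits, labels)
```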

This baseline includes my experiments with simplifying asynchronous temporal fields and extending them to a multi-label sigmoid loss, an I3D base architecture, etc. I hope it helps! Let me know if you have any questions.

FingerRec commented 5 years ago

Thanks for your reply!

This code works very well now, just two small problems. When I use the pretrained model, at the beginning the Prec@5 is often bigger than 100, like below:

Train Epoch: [0][60/2660(2660)] Time 1.629 (2.227) Data 0.032 (0.119) Loss 0.0362 (0.0438) Prec@1 2.051 (47.684) Prec@5 168.718 (135.191)

Another question is

ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).

Maybe I need to lower the memory_size or video_size?

gsig commented 5 years ago

That's just due to how I extended Prec@1 and Prec@5 to work with multi-label ground truth. It's easy to add your own metrics under metrics/ and then include them under --metrics in the config. My extension just counts all the labels that are correct, either in the top 1 or top 5, which is why it can exceed 100. I just use it for analyzing training and over/underfitting, but I use mAP for all proper evaluations.
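As a rough sketch (not the exact code under metrics/), a multi-label top-k precision of this kind looks like the following: every ground-truth label that lands in the top-k predictions is counted, so a clip with several active labels can contribute more than one hit.

```python
import torch

def multilabel_prec_at_k(output, target, k=5):
    """Count ground-truth labels that appear in the top-k predictions,
    normalized by batch size. With multi-label targets this can exceed
    100%, since one clip can contribute several correct labels."""
    # output: (batch, num_classes) scores, target: (batch, num_classes) in {0, 1}
    _, topk_idx = output.topk(k, dim=1)        # indices of the top-k scores
    hits = target.gather(1, topk_idx).sum()    # correct labels inside the top-k
    return 100.0 * hits.item() / output.size(0)

# Example: 2 clips, 6 classes, several active labels per clip
scores = torch.randn(2, 6)
labels = torch.tensor([[1, 1, 0, 1, 0, 0],
                       [0, 1, 1, 0, 1, 1]], dtype=torch.float)
print(multilabel_prec_at_k(scores, labels, k=5))
```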

This error is due to the memory usage of the dataloading threads. The way multiprocessing works in pytorch/python, some of the data has to be duplicated across the workers, and furthermore the images are queued in memory while they wait to be used; the number of queued images is proportional to the number of workers (2x?). The easiest fix is to reduce the number of --workers. You can also try optimizing the dataloader by using torch.Tensors where possible (I believe they aren't duplicated like lists of strings/numpy arrays/etc.).
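For illustration (a sketch, not the repo's actual dataset code), returning torch.Tensors from __getitem__ and keeping the worker count small looks roughly like this:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class TensorBackedDataset(Dataset):
    """Sketch of a worker-friendly dataset: items are returned as
    torch.Tensors, which (per the note above) are handled more cheaply
    across workers than lists of strings/numpy arrays."""
    def __init__(self, num_items, frames=8, size=112):
        self.num_items = num_items
        self.frames, self.size = frames, size

    def __len__(self):
        return self.num_items

    def __getitem__(self, idx):
        # Placeholder for real frame loading/decoding
        clip = torch.zeros(3, self.frames, self.size, self.size)
        label = torch.zeros(157)  # multi-label target (157 Charades classes)
        return clip, label

# Fewer workers is the simplest fix for shm errors
loader = DataLoader(TensorBackedDataset(100), batch_size=4, num_workers=2)
```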

If this error is happening at the start of the val_video phase, you can try changing the number of workers for the val_video phase (datasets/get.py), either by manually setting a number there or by creating a new args parameter for it. This is because each dataloader loads a much larger batch (a whole video) in the val_video phase, and thus requires much more memory to store the queue of images.
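A minimal sketch of that second option, using a hypothetical --val-video-workers argument (the flag name and loader setup here are illustrative, not the actual datasets/get.py code):

```python
import argparse
from torch.utils.data import DataLoader

parser = argparse.ArgumentParser()
parser.add_argument('--workers', type=int, default=8)
# Hypothetical extra flag: fewer workers for the whole-video phase,
# where each item is far larger than a training clip.
parser.add_argument('--val-video-workers', type=int, default=2)
args = parser.parse_args()

def build_loaders(train_set, val_video_set, args):
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True,
                              num_workers=args.workers)
    # Whole-video batches are huge, so keep this queue small
    val_video_loader = DataLoader(val_video_set, batch_size=1,
                                  num_workers=args.val_video_workers)
    return train_loader, val_video_loader
```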

Hope that helps!

FingerRec commented 5 years ago

fixed, thanks a lot