facebookresearch / detr

End-to-End Object Detection with Transformers
Apache License 2.0
13.66k stars 2.47k forks

Training with resnet18 #39

Open AlexFridman opened 4 years ago

AlexFridman commented 4 years ago

❓ How to train DETR with resnet18 backbone?

Describe what you want to do, including:

  1. I'm trying to train on my 2080 Ti with a resnet18 backbone and I'm getting an error.
  2. I started from the default command, but it still fails: python -m torch.distributed.launch --nproc_per_node=1 --use_env main.py --num_queries 2000 --pre_norm --masks --output_dir output --eval --num_workers 4 --enc_layers 2 --dec_layers 2 --dim_feedforward 512 --backbone resnet18 --hidden_dim 128
  3. I'm detecting small, round objects on a simple background. There are 200-2000 objects per image.

Could you please help me run this with resnet18? Any advice on good starting parameters for my task is appreciated!

Traceback:

  File "main.py", line 248, in <module>
    main(args)
  File "main.py", line 186, in main
    data_loader_val, base_ds, device, args.output_dir)
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/Documents/repos/detr/engine.py", line 92, in evaluate
    outputs = model(samples)
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 445, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/Documents/repos/detr/models/segmentation.py", line 57, in forward
    seg_masks = self.mask_head(src_proj, bbox_mask, [features[2].tensors, features[1].tensors, features[0].tensors])
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/Documents/repos/detr/models/segmentation.py", line 110, in forward
    cur_fpn = self.adapter1(fpns[0])
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 349, in forward
    return self._conv_forward(input, self.weight)
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 346, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 1024, 1, 1], expected input[2, 256, 50, 50] to have 1024 channels, but got 256 channels instead
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.7.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/user/.pyenv/versions/3.7.7/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/home/user/.virtualenvs/jupyter/lib/python3.7/site-packages/torch/distributed/launch.py", line 259, in main
    cmd=cmd)
fmassa commented 4 years ago

Hi,

The issue is that the panoptic segmentation model was implemented with only ResNet-50 (and ResNet-101) in mind. If you want to use ResNet18, you'll need to change https://github.com/facebookresearch/detr/blob/b7b62c080d34f76c0069a63afcfb9093213d235c/models/segmentation.py#L33 to use instead

[256, 128, 64]
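For reference, those widths are the channel counts of the backbone's intermediate stages (layer3, layer2, layer1 in torchvision's ResNets), which the mask head's FPN adapter convolutions must match. A minimal sketch of the mapping (the helper name is made up for illustration; it is not part of the DETR codebase):

```python
def mask_head_fpn_dims(backbone_name: str) -> list:
    """Channel widths of the intermediate ResNet stages (layer3, layer2,
    layer1) that the mask head's FPN adapters must match.

    Basic-block ResNets (18/34) use 4x fewer channels per stage than the
    bottleneck variants (50/101), hence the shape mismatch in the traceback.
    Illustrative helper only -- not part of the DETR codebase.
    """
    if backbone_name in ("resnet18", "resnet34"):
        return [256, 128, 64]   # the replacement suggested above
    return [1024, 512, 256]     # the values currently hard-coded in segmentation.py
```

With ResNet-18, adapter1 then expects 256 input channels, which matches the `input[2, 256, 50, 50]` in the RuntimeError above instead of the hard-coded 1024.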

But it seems that you are running the script in evaluation mode; we do not have pre-trained weights for this configuration, so this will give 0 mAP.

Additionally, the panoptic segmentation implementation we have here is very naive and uses a lot of memory, so even with a ResNet18 backbone and fewer encoder/decoder layers you'll need a GPU with more than 16GB of memory to train with a batch size of 1.

Optimizing the memory requirements for training / inference for the panoptic segmentation models is left for future work.

cc @alcinos if I forgot something.

fmassa commented 4 years ago

I think we could make the DETRsegm support ResNet18 by default as well, by checking the num_channels of the backbone that is passed and using it instead of hard-coding it.

PRs are welcome.
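Such a change might look like the following sketch, assuming the backbone wrapper exposes a `num_channels` attribute for its final stage (as DETR's `Backbone` does); the helper name is hypothetical:

```python
def fpn_dims_from_num_channels(num_channels: int) -> list:
    """Derive the mask head's FPN adapter widths from the backbone's final
    channel count instead of hard-coding [1024, 512, 256].

    ResNet stages double their channels at each downsampling step, so the
    three earlier stages carry C/2, C/4 and C/8 channels respectively.
    Hypothetical sketch of the proposed generalization -- not merged code.
    """
    return [num_channels // 2, num_channels // 4, num_channels // 8]
```

ResNet-50 (`num_channels=2048`) recovers the current hard-coded values, while ResNet-18 (`num_channels=512`) yields the `[256, 128, 64]` suggested above.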

fmassa commented 4 years ago

Plus, as an additional comment, if your objects are small and on simple backgrounds, it might be preferable to use different segmentation methods for now, as DETR is currently lagging behind on small objects compared to Faster R-CNN.

Lee4396 commented 4 years ago

> Plus, as an additional comment, if your objects are small and on simple backgrounds, it might be preferable to use different segmentation methods for now, as DETR is currently lagging behind on small objects compared to Faster R-CNN.

Hi, do you think DETR is a good model for cell detection then? I am wondering whether reducing model complexity might help on simpler tasks like microanalysis.

fmassa commented 4 years ago

> Hi, do you think DETR is a good model for cell detection then? I am wondering whether reducing model complexity might help on simpler tasks like microanalysis.

I don't know what cell detection datasets look like, so I'm not sure I can give a good answer to the question. For now, DETR struggles to precisely localize very small (< 32 pixels) objects.