facebookresearch / SlowFast

PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models.
Apache License 2.0

Is it possible to just use a pre-trained model without a GPU? #39

Open ganesh-pivotchain opened 4 years ago

ganesh-pivotchain commented 4 years ago
def build_model(cfg):
    """
    Builds the video model.
    Args:
        cfg (configs): configs that contains the hyper-parameters to build the
        backbone. Details can be seen in slowfast/config/defaults.py.
    """
    assert (
        cfg.MODEL.ARCH in _MODEL_TYPES.keys()
    ), "Model type '{}' not supported".format(cfg.MODEL.ARCH)
    assert (
        cfg.NUM_GPUS <= torch.cuda.device_count()
    ), "Cannot use more GPU devices than available"

    # Construct the model
    model = _MODEL_TYPES[cfg.MODEL.ARCH](cfg)
    # Determine the GPU used by the current process
    cur_device = torch.cuda.current_device()
    # Transfer the model to the current GPU device
    model = model.cuda(device=cur_device)
    # Use multi-process data parallel model in the multi-gpu setting
    if cfg.NUM_GPUS > 1:
        # Make model replica operate on the current device
        model = torch.nn.parallel.DistributedDataParallel(
            module=model, device_ids=[cur_device], output_device=cur_device
        )
    return model

I find this project very intriguing, but the documentation seems very sparse. I wanted to use the pre-trained SlowFast R-50 model, but from the code above it looks like I need a GPU just to load the model?
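For reference, a minimal sketch of how those CUDA calls could be guarded so the model stays on CPU, assuming cfg.NUM_GPUS == 0 is taken to mean CPU-only. This guard is not in the codebase; it is only an illustration:

import torch

# Sketch, not part of PySlowFast: the same builder with the CUDA calls
# guarded, assuming cfg.NUM_GPUS == 0 means "run on CPU".
# _MODEL_TYPES is the model registry from slowfast/models.
def build_model_cpu_friendly(cfg):
    """Build the video model, keeping it on CPU when cfg.NUM_GPUS == 0."""
    assert (
        cfg.MODEL.ARCH in _MODEL_TYPES.keys()
    ), "Model type '{}' not supported".format(cfg.MODEL.ARCH)
    assert (
        cfg.NUM_GPUS <= torch.cuda.device_count()  # holds trivially when NUM_GPUS == 0
    ), "Cannot use more GPU devices than available"

    # Construct the model on CPU first; only move it if GPUs are requested.
    model = _MODEL_TYPES[cfg.MODEL.ARCH](cfg)
    if cfg.NUM_GPUS > 0:
        cur_device = torch.cuda.current_device()
        model = model.cuda(device=cur_device)
        # Use multi-process data parallel only in the multi-GPU setting.
        if cfg.NUM_GPUS > 1:
            model = torch.nn.parallel.DistributedDataParallel(
                module=model, device_ids=[cur_device], output_device=cur_device
            )
    return model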

haooooooqi commented 4 years ago

Thanks for using the codebase!

This codebase is mainly designed for training/inference with state-of-the-art backbones. Training or running inference with relatively heavy backbones normally requires GPUs (otherwise it would take forever). Could you clarify what your use case would be, so I can better support it?

If training and inference on CPU is all you need, I can implement CPU support for your use case.

ganesh-pivotchain commented 4 years ago

> This codebase is mainly designed for training/inference with state-of-the-art backbones. Training or running inference with relatively heavy backbones normally requires GPUs (otherwise it would take forever). Could you clarify what your use case would be, so I can better support it?

I just want to run inference for some basic embedded applications, hence I only need a CPU version for inference.

> If training and inference on CPU is all you need, I can implement CPU support for your use case.

If inference alone is possible on CPU, that would be a great help.

Thanks
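
For CPU inference, a released PyTorch-format checkpoint can be loaded with torch.load and map_location. A sketch only: the filename is hypothetical, and the "model_state" key is an assumption based on the usual PySlowFast checkpoint layout, so verify against the file you actually downloaded:

import torch

# Load a PyTorch-format checkpoint on a CPU-only machine.
# "SLOWFAST_8x8_R50.pyth" is a placeholder for your downloaded file.
checkpoint = torch.load("SLOWFAST_8x8_R50.pyth", map_location="cpu")
state_dict = checkpoint.get("model_state", checkpoint)

# cfg: the usual yacs config with NUM_GPUS set to 0;
# build_model_cpu_friendly is the sketch from earlier in the thread.
model = build_model_cpu_friendly(cfg)
model.load_state_dict(state_dict)
model.eval()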

Abhitheonly1 commented 4 years ago

Traceback (most recent call last):
  File "tools/run_net.py", line 151, in <module>
    main()
  File "tools/run_net.py", line 147, in main
    test(cfg=cfg)
  File "/content/drive/My Drive/SlowFast/tools/test_net.py", line 141, in test
    multi_view_test(test_loader, model, test_meter, cfg)
  File "/content/drive/My Drive/SlowFast/tools/test_net.py", line 51, in multi_view_test
    preds = model(inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/SlowFast/slowfast/models/video_model_builder.py", line 328, in forward
    x = self.s2(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/SlowFast/slowfast/models/resnet_helper.py", line 482, in forward
    x = m(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/SlowFast/slowfast/models/resnet_helper.py", line 317, in forward
    x = x + self.branch2(x)
RuntimeError: CUDA out of memory. Tried to allocate 960.00 MiB (GPU 0; 15.90 GiB total capacity; 14.66 GiB already allocated; 533.81 MiB free; 17.16 MiB cached)

Hello, I am getting this runtime error while testing the pretrained model. Could you please help resolve this? Thank you in advance.
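
One common cause of test-time CUDA OOM is running the forward pass with autograd enabled, so every activation is cached for a backward pass that never happens. If the test loop does not already disable gradients, wrapping the forward call in torch.no_grad() helps; a sketch of the relevant lines inside multi_view_test:

import torch

# Disable autograd during evaluation so activations are freed
# immediately instead of being kept for backward.
with torch.no_grad():
    preds = model(inputs)

Lowering TEST.BATCH_SIZE in the config (or as a command-line override) should also reduce peak memory.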

amrahsmaytas commented 4 years ago

> > This codebase is mainly designed for training/inference with state-of-the-art backbones. Training or running inference with relatively heavy backbones normally requires GPUs (otherwise it would take forever). Could you clarify what your use case would be, so I can better support it?
>
> I just want to run inference for some basic embedded applications, hence I only need a CPU version for inference.
>
> > If training and inference on CPU is all you need, I can implement CPU support for your use case.
>
> If inference alone is possible on CPU, that would be a great help.
>
> Thanks

Did you figure out how to run SlowFast inference on CPU? If yes, please share the code!

@takatosp1 Could you also please look into it?
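
Until official CPU support lands, here is a hedged end-to-end sketch combining the pieces above. The two-pathway input shapes follow the SlowFast 8x8 R50 setting from the paper (8 slow frames, 32 fast frames at 224x224) and are assumptions, as is the builder sketched earlier; the real pipeline produces these tensors from video clips via the data loader:

import torch

# cfg: the usual yacs config with NUM_GPUS set to 0.
model = build_model_cpu_friendly(cfg)
model.eval()

# Dummy two-pathway clip: (batch, channels, frames, height, width).
slow = torch.randn(1, 3, 8, 224, 224)   # slow pathway: 8 frames
fast = torch.randn(1, 3, 32, 224, 224)  # fast pathway: 32 frames

with torch.no_grad():
    preds = model([slow, fast])  # SlowFast's forward takes a list of pathway tensors
print(preds.shape)  # e.g. (1, num_classes)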

ysl208 commented 3 years ago

Any updates on this?

sainivedh commented 3 years ago

@Abhitheonly1 Any leads on the problem? Can SlowFast be used for CPU inference?

Thanks