facebookresearch / SlowFast

PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models.
Apache License 2.0

Demo programme on camera input #34

Open anhminh3105 opened 4 years ago

anhminh3105 commented 4 years ago

Hi,

I find this project very interesting and thanks for open-sourcing it.

I am trying to make a demo programme that loads a model (e.g. SlowFast) and runs inference on input from a USB camera, so that I can visually evaluate its accuracy and performance. Would that be possible? If so, could you briefly elaborate on which modules should be used and how it could be implemented?

Thanks in advance.

lippman1125 commented 4 years ago

Me too

haooooooqi commented 4 years ago

Thanks for your interest in the codebase! I'll prepare a demo for video classification and detection in my spare time. Regarding your question, feel free to use any pretrained model we provide. You might need to test the inference speed and choose the one you prefer. Also feel free to contribute your code to the codebase.

anhminh3105 commented 4 years ago

Thanks for the reply, I'm doing that already. Also, it would be great to have your assistance when I encounter any problems; I will post them here if you don't mind.

It would also be nice if you released the video detection part as well.

Br.

haooooooqi commented 4 years ago

Thanks a lot for the awesome demo code @anhminh3105! Just left some nit comments in the code xD

micuentadecasa commented 4 years ago

Dear all,

I would like to see the demo code, can you share a link with me?

Regards.

anhminh3105 commented 4 years ago

@takatosp1 found it ;p

anhminh3105 commented 4 years ago

@micuentadecasa Hi, you may find it here. Please start off from the Getting Started page and install additional dependencies Pandas and OpenCV as I haven't changed the setup.py to include them.
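Assuming a standard pip setup, something like this should pull in both (opencv-python is the usual package name for OpenCV's Python bindings):

    pip install pandas opencv-python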

Br

micuentadecasa commented 4 years ago

thanks @anhminh3105, I will give it a try.

17702513221 commented 4 years ago

@anhminh3105 When I try it, I get an error:

    python3 tools/run_net.py --predict_source 0 --cfg configs/AVA/c2/SLOWFAST_64x2_R101_50_50.yaml

    Traceback (most recent call last):
      File "tools/run_net.py", line 179, in <module>
        main()
      File "tools/run_net.py", line 116, in main
        cfg = load_config(args)
      File "tools/run_net.py", line 104, in load_config
        cfg.PREDICT.SOURCE = args.predict_source
      File "/home/xs/.local/lib/python3.6/site-packages/yacs/config.py", line 141, in __getattr__
        raise AttributeError(name)
    AttributeError: PREDICT

Can you help me?

anhminh3105 commented 4 years ago

In slowfast/config/defaults.py, make sure you have these lines: https://github.com/anhminh3105/SlowFast/blob/49e0e2de422fd0dfbf11a0d3ee44120d8f07cdea/slowfast/config/defaults.py#L96-L104
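Roughly, those lines add a PREDICT node along the following lines (a simplified sketch inferred from the traceback above; see the link for the exact attributes):

    _C.PREDICT = CfgNode()     # demo/prediction options
    _C.PREDICT.ENABLE = False  # illustrative attribute; check the linked lines
    _C.PREDICT.SOURCE = 0      # camera index or video path, set via --predict_source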

Also make sure your SLOWFAST_64x2_R101_50_50.yaml has run settings similar to my example file: https://github.com/anhminh3105/SlowFast/blob/49e0e2de422fd0dfbf11a0d3ee44120d8f07cdea/configs/Kinetics/demo/SLOWFAST_8x8_R50.yaml#L1-L63

My programme relies on a previous version of this repo, which has since been overwritten by the new code for the AVA dataset, so I recommend cloning this branch and trying it out.

17702513221 commented 4 years ago

When I use https://github.com/anhminh3105/SlowFast/tree/prediction_demo, I get an error:

    python3 tools/run_net.py --predict_source 0 --cfg configs/Kinetics/demo/SLOWFAST_8x8_R50.yaml

    Traceback (most recent call last):
      File "tools/run_net.py", line 14, in <module>
        from test_net import test
      File "/home/xs/SlowFast/tools/test_net.py", line 13, in <module>
        from slowfast.datasets import loader
      File "/home/xs/SlowFast/slowfast/datasets/__init__.py", line 4, in <module>
        from .ava_dataset import Ava  # noqa
      File "/home/xs/SlowFast/slowfast/datasets/ava_dataset.py", line 8, in <module>
        import slowfast.datasets.ava_helper as ava_helper
    AttributeError: module 'slowfast' has no attribute 'datasets'

Maybe the previous version of this repo has an error.

anhminh3105 commented 4 years ago

Based on your traceback, the problem is likely that you cloned the master branch instead of the prediction_demo branch I intended to direct you to. The code in master is just a merge of new code from the official repo and is certainly not working.

To clone the prediction_demo branch please try: git clone -b prediction_demo https://github.com/anhminh3105/SlowFast.git

17702513221 commented 4 years ago

Thank you, it works now.

17702513221 commented 4 years ago

Can you tell me how to get a result like the GIF? Should I use the AVA models?

anhminh3105 commented 4 years ago

That part is not open-sourced at the moment, unfortunately, but I'm working on my own version. Feel free to contribute as well.

haooooooqi commented 4 years ago

@anhminh3105 thanks for working on the cool demo! I am going to work on the action classification and detection demo (and expect to release it in ~2 weeks, since this is not my top priority). If you like, I'd love to keep your commit history in the master track (all commits will be tracked externally from now on). If you want, feel free to commit your code; then I can work on top of it with a further refactor and support for more functions (e.g., multi-GPU inference, bbox prediction and detection). Otherwise I will implement a new demo from scratch.

anhminh3105 commented 4 years ago

@takatosp1 Thanks a lot for your consideration, it's great and I'm more than happy to put in my contributions.

bilel-bj commented 4 years ago

Thanks for your consideration and support. Is there any update on the detection demo (used to generate ava_demo.gif)? Thanks.

anhminh3105 commented 4 years ago

Thanks for your consideration and support. Is there any update on the detection demo (used to generate ava_demo.gif)? Thanks.

It's not quite as real-time as it looks in ava_demo.gif, but I hope it satisfies you. Please find it here and start off from Getting Started. I also hope @takatosp1 will do something to improve it as well :smile:

gtgtgt1117 commented 4 years ago

That part is not open-sourced at the moment, unfortunately, but I'm working on my own version. Feel free to contribute as well.

Hello, the AVA-style demo has not been open-sourced yet, right? Only Kinetics is available at the moment?

dagongji10 commented 4 years ago

Thanks for your consideration and support. Is there any update on the detection demo (used to generate ava_demo.gif)? Thanks.

It's not quite as real-time as it looks in ava_demo.gif, but I hope it satisfies you. Please find it here and start off from Getting Started. I also hope @takatosp1 will do something to improve it as well 😄

Can you describe it in more detail? If it's not real-time, what is the speed of the AVA pretrained model? 10 fps or 20 fps?

anhminh3105 commented 4 years ago

On my laptop with an RTX 2080, each prediction takes about 4-5 s, including collecting the input frames, the forward pass, and the visualisation.

CabbageWust commented 4 years ago

When I use the 'SLOWFAST_8x8_R50.pkl' model, the result dimensions come out wrong. Printing the network structure shows an output dimension of 400:

    .....
    (head): ResNetBasicHead(
      (pathway0_avgpool): AvgPool3d(kernel_size=[8, 7, 7], stride=1, padding=0)
      (pathway1_avgpool): AvgPool3d(kernel_size=[32, 7, 7], stride=1, padding=0)
      (dropout): Dropout(p=0.5, inplace=False)
      (projection): Linear(in_features=2304, out_features=400, bias=True)
      (act): Softmax(dim=4)

But after loading the model weights and running 'preds = model(inputs)', the shape of preds is torch.Size([1, 1600]).

Any guidance would be much appreciated.

harshsp31 commented 4 years ago

Hi @takatosp1 @anhminh3105

Thanks for sharing your great work! I wanted to ask whether it's possible to extract the bounding-box coordinates from the demo script, and what would be the best way to include face recognition models in this codebase?

yangsusanyang commented 4 years ago

@anhminh3105, thank you for the demo code! Are you running your demo inside a Docker container? If so, how do you manage to make cv2.imshow or the camera work? I run into problems whenever I display images or open a camera inside a Docker container.

NaeemKhan333 commented 4 years ago

@anhminh3105 Thanks for your nice work. I have a question: is detection done for the Kinetics dataset, or does it classify the whole image? Please guide me on that, and also on how we can use detection to get a result like "ava_demo.gif". Thank you

NaeemKhan333 commented 4 years ago

I also have a question: is detection done during training for the Kinetics dataset, or does training use the whole image frame of the video? Please guide me about it. Thanks

pjw-cmd commented 4 years ago

When I use the command line

    pjw@star:~/SlowFast-prediction_demo$ python tools/run_net.py --predict_source ~/dataset/ --cfg configs/Kinetics/SLOWFAST_4x16_R50.yaml

I get an error:

    Traceback (most recent call last):
      File "tools/run_net.py", line 178, in <module>
        main()
      File "tools/run_net.py", line 116, in main
        cfg = load_config(args)
      File "tools/run_net.py", line 104, in load_config
        cfg.PREDICT.SOURCE = args.predict_source
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/yacs/config.py", line 141, in __getattr__
        raise AttributeError(name)
    AttributeError: PREDICT

My config is the original configs/Kinetics/SLOWFAST_8x8_R50.yaml, with the following changes:

    TRAIN.ENABLE: False
    TEST.ENABLE: False
    CHECKPOINT_TYPE: caffe2
    CHECKPOINT_FILE_PATH: "/home/pjw/SLOWFAST_4×16.pkl"
    NUM_GPUS: 1 (only if running on a single GPU)
    NUM_SHARDS: 1 (only if running on a single machine)

How can I solve this problem? Thank you!

anhminh3105 commented 4 years ago

Hi @pjw-cmd

What you had there was the old demo app I made; try this new one out, my friend: https://github.com/anhminh3105/SlowFast/tree/master

anhminh3105 commented 4 years ago

@anhminh3105, thank you for the demo code! Are you running your demo inside a Docker container? If so, how do you manage to make cv2.imshow or the camera work? I run into problems whenever I display images or open a camera inside a Docker container.

Hi @yangsusanyang, sorry, I haven't tried this with Docker, but I have heard of this problem before; it has to do with the nature of Docker itself (a container doesn't get access to the host's display or camera device by default).

anhminh3105 commented 4 years ago

Hi @NaeemKhan333. For the Kinetics dataset, SlowFast classifies the scene. For the AVA dataset, SlowFast detects the people in the scene and predicts their actions. For something like ava_demo.gif, you can check out my latest demo: https://github.com/anhminh3105/SlowFast/tree/master

anhminh3105 commented 4 years ago

Hi @harshsp31

SlowFast relies on a person detector internally to predict actions. I would recommend making a pipeline of (1) person detector, (2) face detector, (3) face recognition, with (4) SlowFast also consuming the person boxes from (1); see the sketch below. Still, this doesn't sound like the most optimal way of doing it, and I expect it to be somewhat slow, so you may need to experiment on a multi-GPU setup.
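Something along these lines (a sketch only; every function below is a placeholder stub for whatever detector/recogniser model you plug in, not a real API from this repo):

    # hypothetical pipeline sketch with dummy stand-in stubs
    def detect_persons(frame):
        return [(0, 0, 100, 200)]        # dummy person box (x1, y1, x2, y2)

    def detect_faces(frame, person_boxes):
        return ["face_crop_0"]           # dummy face crops, one per person box

    def recognise_faces(faces):
        return ["person_id_0"]           # dummy identities

    def slowfast_predict(frames, person_boxes):
        return ["standing"]              # dummy action labels, one per person box

    def process_clip(frames):
        mid_frame = frames[len(frames) // 2]
        boxes = detect_persons(mid_frame)          # (1) person detection
        faces = detect_faces(mid_frame, boxes)     # (2) face detection inside person boxes
        ids = recognise_faces(faces)               # (3) face recognition
        actions = slowfast_predict(frames, boxes)  # (4) SlowFast action prediction per box
        return list(zip(boxes, ids, actions))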

anhminh3105 commented 4 years ago

When I use the 'SLOWFAST_8x8_R50.pkl' model, the result dimensions come out wrong. Printing the network structure shows an output dimension of 400:

    .....
    (head): ResNetBasicHead(
      (pathway0_avgpool): AvgPool3d(kernel_size=[8, 7, 7], stride=1, padding=0)
      (pathway1_avgpool): AvgPool3d(kernel_size=[32, 7, 7], stride=1, padding=0)
      (dropout): Dropout(p=0.5, inplace=False)
      (projection): Linear(in_features=2304, out_features=400, bias=True)
      (act): Softmax(dim=4)

But after loading the model weights and running 'preds = model(inputs)', the shape of preds is torch.Size([1, 1600]).

Any guidance would be much appreciated.

Where did you get the config file from? I would recommend keeping the network dimensions as they are, since changing the network would require re-training the model.
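One thing worth checking (just a guess on my part): if the head only averages the pooled temporal/spatial grid into a single prediction in eval mode, then a model left in training mode with a 2x2 spatial grid would flatten to 4 x 400 = 1600 outputs. Make sure inference runs in eval mode:

    import torch

    model.eval()               # guess: in train mode the head may skip the averaging step
    with torch.no_grad():
        preds = model(inputs)  # model/inputs as in your snippet above
    print(preds.shape)         # expected: torch.Size([1, 400])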

zlstl1 commented 4 years ago

Thanks for your nice work. I have a question.

python3 tools/run_net.py --predict_source ./sample_input.mp4

When I run this code, I get the error below:

ASSERT: "false" in file qasciikey.cpp, line 501
Traceback (most recent call last):
  File "tools/run_net.py", line 178, in <module>
    main()
  File "tools/run_net.py", line 172, in main
    daemon=False,
  File "/home/a/Downloads/test/venv/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
    while not spawn_context.join():
  File "/home/a/Downloads/test/venv/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 107, in join
    (error_index, name)
Exception: process 0 terminated with signal SIGABRT

This is my config File - SLOWFAST_4x16_R50.yaml

TRAIN:
  ENABLE: False
  DATASET: kinetics
  BATCH_SIZE: 64
  EVAL_PERIOD: 10
  CHECKPOINT_PERIOD: 1
  CHECKPOINT_TYPE: caffe2
  CHECKPOINT_FILE_PATH: "/home/a/Downloads/WEB/test/SlowFast/checkpoints/SLOWFAST_4x16_R50.pkl"
  AUTO_RESUME: True
DATA:
  NUM_FRAMES: 32
  SAMPLING_RATE: 2
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 256
  INPUT_CHANNEL_NUM: [3, 3]
SLOWFAST:
  ALPHA: 8
  BETA_INV: 8
  FUSION_CONV_CHANNEL_RATIO: 2
  FUSION_KERNEL_SZ: 5
RESNET:
  ZERO_INIT_FINAL_BN: True
  WIDTH_PER_GROUP: 64
  NUM_GROUPS: 1
  DEPTH: 50
  TRANS_FUNC: bottleneck_transform
  STRIDE_1X1: False
  NUM_BLOCK_TEMP_KERNEL: [[3, 3], [4, 4], [6, 6], [3, 3]]
NONLOCAL:
  LOCATION: [[[], []], [[], []], [[], []], [[], []]]
  GROUP: [[1, 1], [1, 1], [1, 1], [1, 1]]
  INSTANTIATION: dot_product
BN:
  USE_PRECISE_STATS: True
  NUM_BATCHES_PRECISE: 200
  MOMENTUM: 0.1
  WEIGHT_DECAY: 0.0
SOLVER:
  BASE_LR: 0.1
  LR_POLICY: cosine
  MAX_EPOCH: 196
  MOMENTUM: 0.9
  WEIGHT_DECAY: 1e-4
  WARMUP_EPOCHS: 34
  WARMUP_START_LR: 0.01
  OPTIMIZING_METHOD: sgd
MODEL:
  NUM_CLASSES: 400
  ARCH: slowfast
  LOSS_FUNC: cross_entropy
  DROPOUT_RATE: 0.5
TEST:
  ENABLE: False
  DATASET: kinetics
  CHECKPOINT_TYPE: caffe2
  CHECKPOINT_FILE_PATH: "/home/a/Downloads/WEB/test/SlowFast/checkpoints/SLOWFAST_4x16_R50.pkl"
  BATCH_SIZE: 64
DATA_LOADER:
  NUM_WORKERS: 8
  PIN_MEMORY: True
NUM_GPUS: 2
NUM_SHARDS: 1
RNG_SEED: 0
OUTPUT_DIR: .

How can I solve this problem? Thank you!

Additionally, I am using torch 1.4.0 and torchvision 0.5.0.

pjw-cmd commented 4 years ago

Hi @pjw-cmd

What you had there was the old demo app I made; try this new one out, my friend: https://github.com/anhminh3105/SlowFast/tree/master

Hi, I used your latest code. When I ran the command line

    pjw@star:~/SlowFast-master$ python tools/run_net.py --cfg demo/Kinetics/SLOWFAST_8x8_R50.yaml

I got an error:

    Traceback (most recent call last):
      File "tools/run_net.py", line 142, in <module>
        main()
      File "tools/run_net.py", line 108, in main
        cfg = load_config(args)
      File "tools/run_net.py", line 84, in load_config
        cfg.merge_from_file(args.cfg_file)
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/fvcore/common/config.py", line 110, in merge_from_file
        self.merge_from_other_cfg(loaded_cfg)
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/fvcore/common/config.py", line 121, in merge_from_other_cfg
        return super().merge_from_other_cfg(cfg_other)
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/yacs/config.py", line 217, in merge_from_other_cfg
        _merge_a_into_b(cfg_other, self, self, [])
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/yacs/config.py", line 473, in _merge_a_into_b
        raise KeyError("Non-existent config key: {}".format(full_key))
    KeyError: 'Non-existent config key: DEMO'

How can I solve this problem? Thank you!

anhminh3105 commented 4 years ago

Does your clone have these lines? https://github.com/anhminh3105/SlowFast/blob/f309ea3f1bf8b438626a34bf67f083e62455b34c/slowfast/config/defaults.py#L433-L450

pjw-cmd commented 4 years ago

Does your clone have these lines? https://github.com/anhminh3105/SlowFast/blob/f309ea3f1bf8b438626a34bf67f083e62455b34c/slowfast/config/defaults.py#L433-L450

My clone has these lines:

    # -----------------------------------------------------------------------------
    # Demo options
    # -----------------------------------------------------------------------------
    _C.DEMO = CfgNode()
    _C.DEMO.ENABLE = False
    _C.DEMO.LABEL_FILE_PATH = ""
    _C.DEMO.DATA_SOURCE = 0
    _C.DEMO.DISPLAY_WIDTH = 0
    _C.DEMO.DISPLAY_HEIGHT = 0
    _C.DEMO.DETECTRON2_OBJECT_DETECTION_MODEL_CFG = ""
    _C.DEMO.DETECTRON2_OBJECT_DETECTION_MODEL_WEIGHTS = ""

anhminh3105 commented 4 years ago

Could you try to run the demo in detection mode with this config file? https://github.com/anhminh3105/SlowFast/tree/master/demo/AVA Does the error still occur?

pjw-cmd commented 4 years ago

Could you try to run the demo in detection mode with this config file? https://github.com/anhminh3105/SlowFast/tree/master/demo/AVA Does the error still occur?

In the MODEL_ZOO I did not find the SLOWFAST_32x2_R101_50_50.pkl file, and I do not have the AVA dataset. I have a question: when we run the demo on videos, do we not need the Kinetics dataset? Do we need to set something like DATA.PATH_TO_DATA_DIR ~/dataset on the command line? Thanks!

anhminh3105 commented 4 years ago

https://dl.fbaipublicfiles.com/pyslowfast/model_zoo/ava/pretrain/SLOWFAST_32x2_R101_50_50_v2.1.pkl here is the link to that model weights file. I might have renamed the model weights somewhere in my workflow, so you may need to change the config file accordingly. Sorry about that.

You will only need the whole dataset if you are going to train the model.
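For example, assuming wget is available:

    wget https://dl.fbaipublicfiles.com/pyslowfast/model_zoo/ava/pretrain/SLOWFAST_32x2_R101_50_50_v2.1.pkl

Then point the CHECKPOINT_FILE_PATH entries in your config at the downloaded file.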

pjw-cmd commented 4 years ago

https://dl.fbaipublicfiles.com/pyslowfast/model_zoo/ava/pretrain/SLOWFAST_32x2_R101_50_50_v2.1.pkl here is the link to that model weights file. I might have renamed the model weights somewhere in my workflow, so you may need to change the config file accordingly. Sorry about that.

You will only need the whole dataset if you are going to train the model, so you do not :)

Thanks, I downloaded the model, but the error still occurs:

    pjw@star:~/SlowFast-master$ python tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml

    Traceback (most recent call last):
      File "tools/run_net.py", line 142, in <module>
        main()
      File "tools/run_net.py", line 108, in main
        cfg = load_config(args)
      File "tools/run_net.py", line 84, in load_config
        cfg.merge_from_file(args.cfg_file)
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/fvcore/common/config.py", line 110, in merge_from_file
        self.merge_from_other_cfg(loaded_cfg)
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/fvcore/common/config.py", line 121, in merge_from_other_cfg
        return super().merge_from_other_cfg(cfg_other)
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/yacs/config.py", line 217, in merge_from_other_cfg
        _merge_a_into_b(cfg_other, self, self, [])
      File "/home/pjw/miniconda3/envs/two/lib/python3.7/site-packages/yacs/config.py", line 473, in _merge_a_into_b
        raise KeyError("Non-existent config key: {}".format(full_key))
    KeyError: 'Non-existent config key: DEMO'

pjw-cmd commented 4 years ago

https://dl.fbaipublicfiles.com/pyslowfast/model_zoo/ava/pretrain/SLOWFAST_32x2_R101_50_50_v2.1.pkl here is the link to that model weights file. I might have renamed the model weights somewhere in my workflow, so you may need to change the config file accordingly. Sorry about that.

You will only need the whole dataset if you are going to train the model.

Excuse me, I looked at the failing code yesterday, and when I printed the cfg to the console it did not have the DEMO properties. But I see that ./slowfast/defaults.py and ./configs/Kinetics/c2/SLOWFAST_8x8_R50.yaml both have the DEMO attributes set. Do you know why this is? Thank you very much for your answer.

anhminh3105 commented 4 years ago

https://dl.fbaipublicfiles.com/pyslowfast/model_zoo/ava/pretrain/SLOWFAST_32x2_R101_50_50_v2.1.pkl here is the link to that model weights file. I might have renamed the model weights somewhere in my workflow, so you may need to change the config file accordingly. Sorry about that. You will only need the whole dataset if you are going to train the model.

Excuse me, I looked at the failing code yesterday, and when I printed the cfg to the console it did not have the DEMO properties. But I see that ./slowfast/defaults.py and ./configs/Kinetics/c2/SLOWFAST_8x8_R50.yaml both have the DEMO attributes set. Do you know why this is? Thank you very much for your answer.

For the demo, I modified slowfast/config/defaults.py by adding the DEMO CfgNode. Therefore, to be able to run the demo, the config files in configs/Kinetics (or AVA)/.yaml need to add the DEMO field and its attributes, just like the example configs I made in demo/Kinetics (or AVA)/.yaml. A minimal example of that section is sketched below.
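For instance (the label file path here is just a placeholder; take the real values from the example configs in demo/):

    DEMO:
      ENABLE: True
      LABEL_FILE_PATH: "path/to/your/label_file"  # placeholder
      DATA_SOURCE: 0  # camera index, or a path to a video file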

zqz979666 commented 4 years ago

Hi there, thanks for the cool demo. I have some questions. If I want to use a camera as input on a remote server, how should I modify the .yaml files to run the demo code? Or can it only run locally? Furthermore, is my understanding correct that I can clone your master branch, modify the Kinetics/ and AVA/ yaml files, then run the code and get the result? (Maybe my questions are a little ignorant, but it would be appreciated if you could spend a little of your precious time answering them. Thanks a lot!)

anhminh3105 commented 4 years ago

Hi there, thanks for the cool demo. I have some questions. If I want to use a camera as input on a remote server, how should I modify the .yaml files to run the demo code? Or can it only run locally? Furthermore, is my understanding correct that I can clone your master branch, modify the Kinetics/ and AVA/ yaml files, then run the code and get the result? (Maybe my questions are a little ignorant, but it would be appreciated if you could spend a little of your precious time answering them. Thanks a lot!)

The camera input is handled by OpenCV, so I think it should be easy enough to make it read input remotely.

Yes it’s made to be as dumb as that sounds :)
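(If it helps: cv2.VideoCapture also accepts a stream URL instead of a local device index, so a network camera feed should work too. The URL below is a placeholder.)

    import cv2

    # a device index (0, 1, ...) opens a local camera; a URL opens a network stream
    cap = cv2.VideoCapture("rtsp://<camera-ip>:554/stream")  # placeholder URL
    success, frame = cap.read()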

zqz979666 commented 4 years ago

Hi there, thanks for the cool demo. I have some questions. If I want to use a camera as input on a remote server, how should I modify the .yaml files to run the demo code? Or can it only run locally? Furthermore, is my understanding correct that I can clone your master branch, modify the Kinetics/ and AVA/ yaml files, then run the code and get the result? (Maybe my questions are a little ignorant, but it would be appreciated if you could spend a little of your precious time answering them. Thanks a lot!)

The camera input is handled by OpenCV, so I think it should be easy enough to make it read input remotely.

Yes it’s made to be as dumb as that sounds :)

Thanks for the reply!

I understand each word of the first sentence, but I still feel a little confused when they are put together :( (still so much to learn). I mean, I connect the webcam to my PC, but the PyTorch environment is on the remote server, so I need to read the local webcam data from the remote server...

    cap = cv2.VideoCapture(1)
    success, img = cap.read()

I use the code above to capture camera frames, and when I tried it on the remote server, it couldn't get my camera data :( So does that mean I have to set up another PyTorch environment on my local computer so I can use the camera as input? (If the dumb question bothers you, you can completely ignore it :P and I will somehow find a solution.)

Anyway, appreciate your answer.And your demo is not dumb at all XD

SantiHM23 commented 4 years ago

Hi!

Thank you for your work and for sharing; the repo is very cool and super useful. I just have a small question about something you commented in the demo.py file, in lines 202 to 220:

            if cfg.DETECTION.ENABLE:
                # This post-processing was intentionally assigned to the CPU, since my
                #   laptop GPU (RTX 2080) runs out of memory; if your GPU is more powerful,
                #   I'd recommend changing this section to let CUDA do the processing.
                preds = preds.cpu().detach().numpy()
                pred_masks = preds > .1
                label_ids = [np.nonzero(pred_mask)[0] for pred_mask in pred_masks]
                pred_labels = [
                    [labels[label_id] for label_id in perbox_label_ids]
                    for perbox_label_ids in label_ids
                ]
                # I'm unsure how detectron2 rescales boxes to the original image size, so I
                #   use the input boxes of slowfast and rescale them back instead; it's safer,
                #   and even if the boxes were not rescaled by cv2_transform.rescale_boxes,
                #   it still works.
                boxes = boxes.cpu().detach().numpy()
                ratio = np.min(
                    [frame_provider.display_height, frame_provider.display_width]
                ) / cfg.DATA.TEST_CROP_SIZE
                boxes = boxes[:, 1:] * ratio

As I have an 11 GB GPU available, I am interested in knowing how that part of the code could be modified so that CUDA does the processing, as you suggest in your comment. I gather you have tried it before, so maybe you could share how to do it, or some hints about it. Something like the sketch below is what I have in mind, but I'm not sure it's right.
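Untested sketch (names follow the snippet above; whether this fits in GPU memory is exactly my question):

    import torch

    # idea: do the thresholding on the GPU and only move the final
    # index lists to the CPU for the label lookup
    pred_masks = preds > 0.1                    # stays on CUDA
    label_ids = [
        torch.nonzero(mask).flatten().tolist()  # per-box class indices
        for mask in pred_masks
    ]
    pred_labels = [[labels[i] for i in ids] for ids in label_ids]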

Thank you for your help!

zqz979666 commented 4 years ago

Hi, I have succeeded in running the SLOWFAST_8x8_R50.yaml demo, and I wonder whether I could modify SLOWFAST_8x8_R50 and set DETECTION.ENABLE to True, so that I can detect and classify simultaneously?

Thanks for helping :)

windspirit95 commented 4 years ago

Hi, I have succeeded in running the SLOWFAST_8x8_R50.yaml demo, and I wonder whether I could modify SLOWFAST_8x8_R50 and set DETECTION.ENABLE to True, so that I can detect and classify simultaneously?

Thanks for helping :)

I believe you also need to modify demo_net.py, changing the multi-label output to a single argmax label, which is what we expect to see when using the Kinetics dataset and SLOWFAST_8x8_R50.yaml. My coding skills are not great, so it is taking me some time to make that change :) Roughly, the change is sketched below.
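A sketch only (preds and labels refer to the variables in the demo snippet quoted earlier in this thread):

    # single-label classification: take the top-scoring Kinetics class
    # instead of thresholding for multiple labels
    top_id = int(preds.argmax(dim=-1))
    pred_label = labels[top_id]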

anhminh3105 commented 4 years ago

Hi, I have succeeded in running the SLOWFAST_8x8_R50.yaml demo, and I wonder whether I could modify SLOWFAST_8x8_R50 and set DETECTION.ENABLE to True, so that I can detect and classify simultaneously?

Thanks for helping :)

The detect-and-classify demo is in Ava/ in my fork.