open-mmlab / mmaction2

OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
https://mmaction2.readthedocs.io
Apache License 2.0

Get different result for the same input in inference #2742

Open JasseurHadded1 opened 8 months ago

JasseurHadded1 commented 8 months ago

Branch

0.x branch (0.x version, such as v0.24.1)

Prerequisite

Environment

sys.platform: linux
Python: 3.8.16 (default, Oct 16 2023, 14:40:53) [GCC 11.4.0]
CUDA available: True
GPU 0,1: NVIDIA GeForce RTX 4090
CUDA_HOME: None
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 1.9.0+cu111
PyTorch compiling details: PyTorch built with:

TorchVision: 0.10.0+cu111
OpenCV: 4.8.0
MMCV: 1.7.1 MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.1
MMAction2: 0.24.1+27c674a

Describe the bug

When I do inference using the "inference_recognizer" function in mmaction/apis/inference.py, I get different results for the same input. I have looked for anything in the pipeline that could introduce randomness but found nothing. Does anyone know the solution?
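A minimal sketch of how one might debug this (not from the report): seed all RNGs and inspect the test pipeline for transforms that can behave randomly at inference time. It assumes the model was built with init_recognizer so that model.cfg holds the loaded config; the helper names seed_everything and report_random_transforms are illustrative, not part of the MMAction2 API.

import random

import numpy as np
import torch


def seed_everything(seed=42):
    # Fix Python, NumPy and PyTorch RNGs
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning / non-deterministic kernels can also change results
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def report_random_transforms(model):
    # Flag test-time transforms that commonly introduce randomness
    suspicious = ('RandomCrop', 'RandomResizedCrop', 'MultiScaleCrop')
    for step in model.cfg.data.test.pipeline:
        name = step.get('type', '')
        if any(key in name for key in suspicious):
            print('possibly random transform:', step)
        if name == 'Flip' and step.get('flip_ratio', 0) > 0:
            print('Flip with nonzero flip_ratio:', step)
        if name == 'SampleFrames' and not step.get('test_mode', False):
            # test_mode=False samples frame offsets randomly
            print('SampleFrames without test_mode=True:', step)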

Reproduces the problem - code sample

Just that:

results, returned_features = inference_recognizer(model,
                                                  video,
                                                  nb_class_predicted=9,
                                                  score_threshold=0)
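For comparison, a quick determinism check (a sketch, not the reporter's script): run the same call twice on the same clip and compare the outputs. The keyword arguments nb_class_predicted and score_threshold above do not appear to be part of the stock 0.24.1 inference_recognizer, so they presumably come from a locally modified inference.py; the sketch below therefore uses only the stock call, and the config/checkpoint paths are placeholders.

from mmaction.apis import init_recognizer, inference_recognizer

# Placeholder paths, not taken from the report
config_file = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
checkpoint_file = 'checkpoints/tsn_r50.pth'
video = 'demo/demo.mp4'

model = init_recognizer(config_file, checkpoint_file, device='cuda:0')
model.eval()  # make sure dropout/batchnorm layers are in eval mode

first = inference_recognizer(model, video)
second = inference_recognizer(model, video)
print(first)
print(second)  # with a deterministic test pipeline these should match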

Reproduces the problem - command or script

No response

Reproduces the problem - error message

No response

Additional information

No response

JasseurHadded1 commented 8 months ago

Any answer?