Closed Raven-D closed 4 years ago
This error is new to me. Does this error always happen, or only occasionally? Does it happen only with this video, or with other videos too?
If possible, please provide more information on how to reproduce this error so that we can locate the problem. Thank you.
It always happens.
GPU info: Tesla P100, Driver Version: 418.67, CUDA Version: 10.1
System: Linux version 4.4.0-31-generic (buildd@lgw01-16) (gcc version 5.3.1 20160413 (Ubuntu 5.3.1-14ubuntu2.1))
Command: CUDA_VISIBLE_DEVICES=0 python demo.py --video-path ../kf001.mp4 --output-path ../kf002.mp4
Versions: torch: 1.2.0, torchvision: 0.4.0, tqdm: 4.19.9, yacs: 0.1.7, tensorboardX: 1.9, numpy: 1.18.0, av: 8.0.2, cython-bbox: 0.1.3, easydict: 1.9, opencv-python: 4.1.1.26, scipy: 1.2.1
It happens with more than one mp4 file.
Info for kf001.mp4: dimensions: 960 x 448, codec: H.264, framerate: 30 frames per second, bitrate: 1004 kbps
kf002.mp4 is output successfully, but I can only see the tracking boxes; no action-type tags appear, even though an exception was raised in Thread-1.
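As an aside, version reports like the one above can be collected programmatically instead of by hand. This is a minimal sketch using only the standard library (requires Python 3.8+ for `importlib.metadata`; the package list is copied from this report):

```python
# Print the installed version of each package from the report above.
# importlib.metadata is in the standard library on Python 3.8+.
from importlib.metadata import PackageNotFoundError, version

packages = [
    "torch", "torchvision", "tqdm", "yacs", "tensorboardX",
    "numpy", "av", "cython-bbox", "easydict", "opencv-python", "scipy",
]

for name in packages:
    try:
        print(f"{name}: {version(name)}")
    except PackageNotFoundError:
        # The package is not installed in this environment.
        print(f"{name}: not installed")
```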
Sorry for the late reply. This never happens on my machine. Maybe this is related to your tqdm version. Would you please check this issue and see if it fixes your problem?
Yes, it works after upgrading tqdm to version 4.32.2. Thank you very much!
But I can't see the action-type tags in the output video when running this command:
CUDA_VISIBLE_DEVICES=0 python demo.py --video-path ../kf001.mp4 --output-path ../kf002.mp4 --cfg-path ../config_files/resnet101_8x8f_denseserial.yalm --weight-path ../data/models/aia_models/resnet101_8x8f_denseserial.pth
Only tracking boxes are drawn on the frames.
I have put [yolov3-app.weights, jde.uncertainty.pt] into the folder "AlphaAction/AlphaAction/data/models/detector_models/".
Did I miss something?
I still can't reproduce the bug you described; everything works fine on my machine.
Would you please provide me with your video?
If not, here are some steps to help us locate the problem.
Please try reinstalling the whole project and redownloading the weights. I also recommend recreating the conda environment following the guide in INSTALL.md and trying again. If the problem persists, please refer to the following steps.
The AVAPredictorWorker class in demo/action_predictor.py is responsible for label prediction. You can insert a print between line 293 and line 294 to check whether it gives correct predictions, like this:

```python
predictions = self.ava_predictor.compute_prediction(center_timestamp//self.interval, video_size)
print(predictions)  # new line
self.output_queue.put((predictions, center_timestamp, ids))
```
The AVAVisualizer class in demo/visualizer.py is responsible for result visualization, and AVAVisualizer.action_dictionary saves the action of every detected person. You can check its state by printing its contents, for example between line 303 and line 304:

```python
self.update_action_dictionary(scores, ids)
print(self.action_dictionary, scores, ids)  # new line
last_visual_mask = self.visual_result(boxes, ids)
```
Hello, I encountered a problem when executing demo.py.
The error log is as follows:
```
Starting video demo, video path: ../kf001.mp4
after Initialise Visualizer @@
multiprocessing.set_start_method @@
torch.multiprocessing.set_sharing_strategy @@
count() @@ count(0)
Loading YOLO model..
yolo self.model_cfg ../detector/yolo/cfg/yolov3-spp.cfg
yolo self.model_weights ../data/models/detector_models/yolov3-spp.weights
self.model.net_info-height, 608
Network successfully loaded
args.gpus [0] <class 'list'>
args.device cuda <class 'torch.device'>
model_weight_url @@ ../data/models/aia_models/resnet101_8x8f_denseserial.pth
Loading tracking model..
after AVAPredictorWorker @@
0it [00:00, ?it/s]Network successfully loaded
644it [00:56, 11.33it/s]Wait for feature preprocess
The input queue is empty. Start working on prediction
100%|██████████| 21673/21673 [00:00<00:00, 145254.70it/s]
End of video loader
 50%|█████     | 10/20 [00:00<00:00, 43.17it/s]
/home/xa/miniconda3/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
  len(cache))
100%|██████████| 20/20 [00:00<00:00, 43.97it/s]
Prediction is done. Wait for writer process to finish...
Exception in thread Thread-1:  | 92/645 [00:02<00:16, 32.77it/s]
Traceback (most recent call last):
  File "/home/xa/miniconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/xa/.local/lib/python3.6/site-packages/tqdm/_monitor.py", line 62, in run
    for instance in self.tqdm_cls._instances:
  File "/home/xa/miniconda3/lib/python3.6/_weakrefset.py", line 60, in __iter__
    for itemref in self.data:
RuntimeError: Set changed size during iteration
100%|██████████| 645/645 [00:18<00:00, 34.42it/s]
write frame closed
load frame closed
Avaworker stopped
```
Would you please tell me how to fix it? Thank you!
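For context on the traceback above: `RuntimeError: Set changed size during iteration` is CPython's generic error for mutating a set while iterating over it. The traceback shows tqdm's monitor thread iterating `tqdm_cls._instances` (a WeakSet) while the main threads create and destroy progress bars; per this thread, upgrading tqdm resolves it. A minimal sketch reproducing the same error class:

```python
# CPython raises RuntimeError when a set changes size during iteration.
s = {1, 2, 3}
try:
    for item in s:
        s.add(item + 10)  # mutate the set mid-iteration
except RuntimeError as exc:
    print(exc)  # -> Set changed size during iteration
```

Libraries avoid this by iterating over a snapshot (e.g. `for item in list(s):`) or by guarding shared sets with a lock, which is essentially what the tqdm fix does.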