realningzheng opened this issue 3 years ago
Hi realningzheng,
I encountered a similar situation when doing evaluation. I also found that the frame numbers inside 'solotest.json' are not consistent with the processed data. Did you solve the problem?
Hi there,
Personally, I think the author left out the part that converts the .jpg files to .pkl. The generated pictures themselves are the localization results, though. Since I am working on separation rather than localization, I didn't try to recover that step, sorry about that.
Best
Hi, sorry for some mess in the test code. The evaluation procedure for stage one, single-source localization, should be:
1. python3 training_stage_one.py --mode test --ckpt_file path/to/ckpt
2. python3 tools.py
And for stage two on MUSIC-synthetic/duet data, the evaluation procedure should be:
1. python3 test_stage_two.py (or test_stage_two_duet.py for duet) --ckpt_file path/to/ckpt
2. python3 eval.py (or eval_duet.py for duet)
Thanks

Hi, for evaluation in stage one, the frame indices in solotest.json are not consistent with the processed data. It seems you adopted a different way to extract frames when doing the annotations? According to the annotations, all videos appear to be at 30 fps, with a frame extracted every 7 frames. I tried changing the frame rate to 30 fps, but that reduces the total length of the video. I also opened a new issue. Can you explain this?
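If that reading is right, that is, annotations indexed at 30 fps with one frame kept out of every 7, the index mapping would look like the sketch below (my own guess at the preprocessing, not the repository's actual code):

```python
def annotated_to_extracted(annotated_idx, step=7):
    """Map a frame index in the original 30 fps video to the index of the
    corresponding extracted frame, assuming every `step`-th frame is kept
    (hypothetical; the repo's preprocessing may differ)."""
    return annotated_idx // step

def extracted_to_seconds(extracted_idx, fps=30, step=7):
    """Timestamp (in seconds) in the original video of an extracted frame."""
    return extracted_idx * step / fps
```

Under these assumptions, annotation frame 70 would map to extracted frame 10, which sits about 2.33 s into the original video.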
I have encountered the same question; have you solved this '30 fps' problem?
Hello,
I found that at the beginning of the evaluation stage, test.py doesn't generate the .pkl file requested in tools.py (line 12). Instead, it saves the localization results in .jpg format (test.py, line 120).
Are there additional steps to convert them into a .pkl file? If so, could you please release that code as well? Thanks a lot.
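For what it's worth, a minimal sketch of bundling the saved .jpg results into one .pkl might look like this (the function name and dict layout are my assumptions; the structure tools.py actually expects is unknown):

```python
import os
import pickle

def jpgs_to_pkl(jpg_dir, out_path):
    """Collect per-frame localization images (saved by test.py as .jpg)
    into a single pickle keyed by filename. Hypothetical helper: tools.py
    may expect a different structure entirely."""
    results = {}
    for name in sorted(os.listdir(jpg_dir)):
        if name.endswith('.jpg'):
            with open(os.path.join(jpg_dir, name), 'rb') as f:
                results[name] = f.read()  # raw JPEG bytes; decode as needed
    with open(out_path, 'wb') as f:
        pickle.dump(results, f)
    return out_path
```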
Best