pcshih opened this issue 5 years ago
Could you share the SumMe dataset (downsampled to 320 frames per video) preprocessed by make_dataset.py? I would like to try the evaluation.
Sorry, the hdf5 file is saved on a server at my school, but I am at home right now...
You can generate it yourself using make_dataset.py.
Is the performance far from what is reported in the paper (SumMe)?
Yes
I cannot run the code. What is the expected input format for the videos, e.g. Air_Force_One.mp4?
You can run

python make_dataset.py --video_dir {arg1} --h5_path {arg2} --vsumm_data {arg3}

Descriptions of the three arguments are here: https://github.com/weirme/Video_Summary_using_FCSN/blob/0895cccbb2a488369b1bfc7d2c087b3050250898/make_dataset.py#L15-L17

In the case of SumMe, video_dir should point to something like SumMe_root/videos.
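For reference, a possible invocation might look like the line below; the directory layout and the h5 file names are placeholders for illustration, not names taken from the repo:

python make_dataset.py --video_dir SumMe_root/videos --h5_path SumMe_root/fcsn_summe.h5 --vsumm_data SumMe_root/summe_vsumm.h5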
I also got bad performance on the SumMe dataset.
How was your performance on SumMe?
Almost as bad as yours :(
Is there any solution for the bad performance on the SumMe dataset? Could someone please advise?
As mentioned in the paper, the training and testing sets should be 80% and 20%. But given https://github.com/weirme/Video_Summary_using_FCSN/blob/96b40851b7805afd1f1fc69f2beb5143d5727b4e/data_loader.py#L25, should it be

train_dataset, test_dataset = torch.utils.data.random_split(dataset, [int(len(dataset)*0.8), int(len(dataset)*0.2)])

? Thank you.
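One caveat with that line (a minimal sketch, not code from the repo): torch.utils.data.random_split requires the lengths to sum exactly to len(dataset), and int(len(dataset)*0.8) + int(len(dataset)*0.2) can fall short whenever the dataset size is not a multiple of 5. A safer variant computes the test size as the remainder; the TensorDataset below is only a toy stand-in for the real feature dataset:

import torch
from torch.utils.data import TensorDataset, random_split

# Toy stand-in for the feature dataset; the real one would be built from the h5 file.
dataset = TensorDataset(torch.randn(26, 1024))

# Make the two lengths always sum to len(dataset) so random_split does not raise.
train_len = int(len(dataset) * 0.8)
test_len = len(dataset) - train_len
train_dataset, test_dataset = random_split(dataset, [train_len, test_len])

print(len(train_dataset), len(test_dataset))  # 20 6 for a 26-sample dataset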