atelili / 2BiVQA

2BiVQA is a no-reference deep learning based video quality assessment metric.
MIT License

Issue #4

Closed by usutdzxych 1 year ago

usutdzxych commented 1 year ago

What causes the model training not to stop?

usutdzxych commented 1 year ago

(screenshot attached)

atelili commented 1 year ago

Hello, you need to specify the arguments first (which folder contains the data, etc.); type `python spatial_train.py -h` for more details.

usutdzxych commented 1 year ago

I have set all the parameters. I don't think it's a problem with the parameter settings.

atelili commented 1 year ago

Can you send me these parameters?

usutdzxych commented 1 year ago

`!python3 spatial_train.py -p 16 -b 16`

atelili commented 1 year ago

Did you extract the features with `extract_features.py`?

usutdzxych commented 1 year ago

Yes, I tried training on 4 of my own videos, but no epoch appeared.

atelili commented 1 year ago

To train 2BiVQA you need to run `End2End_train.py`; `spatial_train.py` is used only to train the spatial pooling module, and it works only with images, not videos.
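A hedged sketch of the workflow implied here: only the script names (`extract_features.py`, `End2End_train.py`, `spatial_train.py`) come from this thread; any concrete flags are assumptions, so check each script's `-h` output for the real arguments.

```shell
# Sketch of the 2BiVQA video-training workflow described in this thread.
# Script names are from the repo; flags are omitted on purpose, since
# they should be taken from `python3 <script>.py -h`.
#
#   1) python3 extract_features.py ...   # extract features from your videos
#   2) python3 End2End_train.py ...      # train the full 2BiVQA model
#
# spatial_train.py only trains the image-based spatial pooling module
# (pretrained on KonIQ-10k) and expects images, not videos.
workflow="extract_features.py -> End2End_train.py"
echo "$workflow"
```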

atelili commented 1 year ago

The spatial pooling module is already trained on the KonIQ-10k database, so you can use it directly with `End2End_train.py`. I will add further arguments and comments to clarify this point. Thanks.

usutdzxych commented 1 year ago

Thank you for your answer. But I had also trained with `End2End_train.py`, and the same thing happens: no epoch appeared.

atelili commented 1 year ago

(screenshot attached) There are no issues with the training.

atelili commented 1 year ago

> yes, I tried 4 videos of myself for training, but no epoch appeared

Here you tried only 4 videos while setting the batch size to 16. With fewer samples than one batch, the data loader yields zero full batches, so no training step runs and no epoch output appears.
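The arithmetic behind this (a sketch; the exact batching behavior depends on how the training script builds its data loader, which is an assumption here):

```shell
# With 4 training videos and batch_size 16, integer division gives the
# number of full batches (training steps) per epoch:
num_videos=4
batch_size=16
steps_per_epoch=$(( num_videos / batch_size ))
echo "$steps_per_epoch"   # prints 0: zero steps, so no epoch output is shown
```

Either lowering the batch size below the number of videos (e.g. `-b 4`) or adding more training videos gives at least one step per epoch.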

atelili commented 1 year ago

I will close this issue; if you still face the same problem, you can contact me at atelili@insa-rennes.fr and we can organize a meeting on Zoom or Google Meet.

usutdzxych commented 1 year ago

After reading your reply, I think it may be a problem with the environment and the data, because I run my data on Google Colab. Can you upload a Colab version, including the running data?

atelili commented 1 year ago

You can find a brief demo here: https://drive.google.com/drive/folders/1O4xajRa71K7fkmdjGvLDuNbX8ATdZ_hP?usp=sharing

usutdzxych commented 1 year ago

Thank you for your work. How long are the videos in your dataset? If possible, could you upload part of a video to the Colab demo and extract its features?

atelili commented 1 year ago

We used 3 datasets; you can find the details in our paper. I have already uploaded the features of 10 videos to the drive.

usutdzxych commented 1 year ago

I have no more questions. Thank you for your help. Good luck to you.

usutdzxych commented 1 year ago

I want to reproduce your paper, but I can't find the dataset you synthesized from the three datasets in the paper, including the videos and MOS. Could you send me your new dataset, including videos and MOS?

atelili commented 1 year ago

You can find all the details in our paper: "We also used these three datasets to create a fourth dataset, which is the union of them after MOS calibration using the Iterative Nested Least Squares Algorithm (INSLA)."