prophesee-ai / openeb

Open source SDK to create applications leveraging event-based vision hardware equipment
https://www.prophesee.ai/metavision-intelligence/

Training settings for reproducing the corner detection model #120

Open yusukeSekikawa opened 3 months ago

yusukeSekikawa commented 3 months ago

I was training the corner detection model and encountered the following issues.

Can you share the settings to reproduce the pre-trained model?

In a similar vein, I have a few questions.

jngt commented 3 months ago

I'm facing the same problem.

lbristiel-psee commented 3 months ago

Hello,

Let me first answer your first point about reproducing the pre-trained model. Could you give us some information on the following points:

Thanks, Laurent for Prophesee Support

yusukeSekikawa commented 3 months ago

Thank you for the reply.

I appreciate your help.

saladair commented 3 months ago

Hi, I am facing a similar issue with the e2v model. I attempted to reproduce the pre-trained model (e2v.ckpt) using MS COCO data with the default settings of train_event_to_video.py, but could not reproduce the evaluation results. We use the same chessboard dataset from http://prophesee.ai/hvga-atis-corner-dataset. Our trained model outputs intensity images, but their quality is worse than the pre-trained model's.

Please help us reproduce the pre-trained model (dataset, options, etc.). I really appreciate any help you can provide.
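To make "worse quality" concrete, this is the kind of comparison we run; a minimal, self-contained sketch (the random frames are placeholders standing in for a reconstruction from the shipped e2v.ckpt and one from our retrained model):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two intensity images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Placeholder frames; in practice these are the intensity images rendered
# by the pre-trained e2v.ckpt and by our retrained model on the same events.
reference = np.random.randint(0, 256, (360, 480), dtype=np.uint8)
candidate = np.random.randint(0, 256, (360, 480), dtype=np.uint8)
print(f"PSNR: {psnr(reference, candidate):.2f} dB")
```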

lbristiel-psee commented 3 months ago

Hello @yusukeSekikawa and @saladair, indeed the training scripts run with their default params do not allow you to reproduce the pre-trained models we share. Our main suggestion is to follow the indications in the papers (same number of epochs, data augmentation, etc.) to get closer to what we did to produce those models. Hope this helps, Laurent for Prophesee Support
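As a toy illustration of the data-augmentation side (not the exact augmentations used for the shipped models, which the papers describe), something along these lines applied to an event-volume tensor:

```python
import torch

def augment_event_volume(volume: torch.Tensor) -> torch.Tensor:
    """Toy augmentation for an event volume shaped (bins, H, W):
    random horizontal and vertical flips of the spatial dimensions."""
    if torch.rand(()) < 0.5:
        volume = torch.flip(volume, dims=[-1])  # mirror along width
    if torch.rand(()) < 0.5:
        volume = torch.flip(volume, dims=[-2])  # mirror along height
    return volume

# Example shape only: a 10-bin volume at 360x480 resolution.
vol = torch.zeros(10, 360, 480)
print(augment_event_volume(vol).shape)
```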

yusukeSekikawa commented 3 months ago

Hello @lbristiel-psee

Thank you for the feedback. I want to reproduce the pre-trained model, NOT the results in the paper. So I would appreciate it if you could share the command used to run "train_corner_detection.py" (and "train_event_to_video.py" for @saladair). I want to know the value of each option, e.g., --lr, --epochs, --precision, that your team used when training the pre-trained model in the SDK. (The paper describes the values of some parameters, but it is difficult to know all the parameters I need to specify for training. So we are trying to reproduce the pre-trained model in the SDK, not the results in the paper.)

If sharing the training script is difficult, it would also be helpful if you could share the major parameters.

If an NDA is needed to share the values of the script's input options, please let me know (sekikawa.yusuke@core.d-itlab.co.jp).

Many thanks.

yusukeSekikawa commented 3 months ago

I found the hyperparameters stored in "corner_detection_10_heatmaps.ckpt".

It looks like the pre-trained model provided in the SDK is based on another checkpoint: "/home/pchiberre/prophesee/data/logs/testing_train/checkpoints/epoch=65-step=131999.ckpt".

Can you share the hyperparameters used to train "epoch=65-step=131999.ckpt"?

```python
import torch

checkpoint = torch.load("corner_detection_10_heatmaps.ckpt")
print(checkpoint["hyper_parameters"])
```

```
{'root_dir': '/home/pchiberre/prophesee/data/logs/testing_train',
 'dataset_path': '/mnt/hdd1/coco/images/',
 'lr': 0.0007,
 'epochs': 100,
 'demo_iter': 10,
 'precision': 16,
 'accumulate_grad_batches': 1,
 'batch_size': 2,
 'demo_every': 1,
 'val_every': 1,
 'save_every': 1,
 'just_test': False,
 'cpu': False,
 'resume': False,
 'checkpoint': '/home/pchiberre/prophesee/data/logs/testing_train/checkpoints/epoch=65-step=131999.ckpt',
 'mask_loss_no_events_yet': False,
 'limit_train_batches': 2000,
 'limit_val_batches': 100,
 'data_device': 'cuda:0',
 'event_volume_depth': 10,
 'cin': 10,
 'cout': 10,
 'height': 360,
 'width': 480,
 'num_tbins': 10,
 'min_frames_per_video': 200,
 'max_frames_per_video': 5000,
 'number_of_heatmaps': 10,
 'num_workers': 2,
 'randomize_noises': True}
```
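As a starting point for others, these stored values can be turned back into a candidate command line; a sketch assuming each hyperparameter maps one-to-one to an argparse flag of the same name (an assumption: the actual CLI of train_corner_detection.py may name or group options differently):

```python
import torch

# Load only the metadata; map_location avoids needing a GPU.
checkpoint = torch.load("corner_detection_10_heatmaps.ckpt", map_location="cpu")
hparams = checkpoint["hyper_parameters"]

# Turn each stored hyperparameter back into a command-line flag,
# assuming a one-to-one argparse mapping (unverified for this script).
args = []
for key, value in hparams.items():
    if isinstance(value, bool):
        if value:
            args.append(f"--{key}")  # store_true-style flag (assumed)
    else:
        args.append(f"--{key} {value}")

print("python train_corner_detection.py \\\n  " + " \\\n  ".join(args))
```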

Thank you in advance.

lbristiel-psee commented 3 months ago

Sorry, but we are not able to share more information at the moment than what we have already published (research papers, training scripts and pre-trained models). We are trying to gather more data on the topic and will share it when available. In the meantime, the main idea is to follow what is specified in the papers (even if it is not the full picture), as those pre-trained models were built while writing those papers, and to fine-tune by adjusting the parameters yourself.
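To illustrate that fine-tuning route, a rough sketch of resuming from the shipped checkpoint; it assumes PyTorch Lightning conventions (which the "hyper_parameters" key and "epoch=…-step=…" naming suggest), and the module import is a placeholder since the actual class lives in the training script:

```python
import pytorch_lightning as pl

# Placeholder import: the real LightningModule is defined in the
# train_corner_detection.py sample, so this module path is hypothetical.
from train_corner_detection import CornerDetectionLightningModel

# Start from the shipped weights; keyword arguments override the
# hyperparameters saved in the checkpoint.
model = CornerDetectionLightningModel.load_from_checkpoint(
    "corner_detection_10_heatmaps.ckpt",
    lr=7e-4,  # adjust whichever parameters you want to tune
)

# max_epochs=100 and precision=16 match the values stored in the checkpoint.
trainer = pl.Trainer(max_epochs=100, precision=16)
trainer.fit(model)  # supply your own dataloaders or a DataModule here
```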

I will keep you updated when I have some news.

Best, Laurent for Prophesee Support

yusukeSekikawa commented 3 months ago

I appreciate your help. We will wait for the updates.