Open · yusukeSekikawa opened 5 months ago
I'm facing the same problem.
Hello,
let me first answer your first point about reproducing the pre-trained model. Could you give us some information about the following points:
Thanks, Laurent for Prophesee Support
Thank you for the reply.
We use MS-COCO as described in the "Long-Lived Accurate Keypoint..." paper. For training, we use the default settings except for the batch size: we use 16 instead of 4 and train for 8 epochs, which I think is equivalent to training for 30 epochs with the default batch size of 4, because limit_train_batches caps the number of batches per epoch.
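For concreteness, here is that equivalence as a quick arithmetic check (a sketch assuming limit_train_batches keeps the value 2000 found in the checkpoint's hyper_parameters, quoted later in this thread):

```python
# Images seen per training run, assuming limit_train_batches caps every
# epoch at 2000 batches regardless of batch size (value from the checkpoint).
limit_train_batches = 2000

ours = 16 * limit_train_batches * 8      # batch size 16, 8 epochs
default = 4 * limit_train_batches * 30   # batch size 4, 30 epochs
print(ours, default)                     # 256000 240000 -> roughly equal
```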
We evaluate with demo_corner_detection.py using its default settings, on the chessboard dataset downloaded from http://prophesee.ai/hvga-atis-corner-dataset. With the same evaluation settings, the pre-trained model works fine.
I appreciate your help.
Hi, I am suffering from a similar issue with the e2v model. I attempted to reproduce the pre-trained model (e2v.ckpt) using MS-COCO data (with the default settings of train_event_to_video.py), but we could not reproduce its results at evaluation. We use the same chessboard dataset from http://prophesee.ai/hvga-atis-corner-dataset. Our trained model outputs intensity images, but their quality is worse than the pre-trained model's.
Please help us reproduce the pre-trained model (dataset, options, etc.). I really appreciate any help you can provide.
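To make "worse quality" concrete, this is roughly how we compare the two reconstructions; the .npy file names are hypothetical placeholders for one grayscale frame dumped from our model and one from e2v.ckpt on the same event slice:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical dumps: one 8-bit grayscale frame from our retrained model and
# one from the pre-trained e2v.ckpt, reconstructed from the same event slice.
ours = np.load("ours_frame.npy")
pretrained = np.load("pretrained_frame.npy")

# data_range=255 assumes 8-bit frames.
print("PSNR:", peak_signal_noise_ratio(pretrained, ours, data_range=255))
print("SSIM:", structural_similarity(pretrained, ours, data_range=255))
```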
Hello @yusukeSekikawa and @saladair, indeed the training scripts used with their default params don't allow reproducing the pre-trained models we share. Our main suggestion is to follow the indications of the papers (same number of epochs, data augmentation, etc.) to get closer to what we did to produce those models. Hope this helps, Laurent for Prophesee Support
Hello @lbristiel-psee
Thank you for the feedback. I want to reproduce the pre-trained model, NOT the results in the paper. So, I would appreciate it if you could share the script used to run "train_corner_detection.py" (and "train_event_to_video.py" for @saladair). I want to know the value of each option, e.g., --lr, --epochs, --precision, which your team used when training the pre-trained model in the SDK. (The paper describes the values of some parameters, but it is difficult to know all the parameters I need to specify for training. So, we are trying to reproduce the pre-trained model in the SDK, not the results in the paper.)
If sharing the training script is difficult, it would also be helpful if you could share the major parameters.
In case we need an NDA to share the values of the input options for the script, please let me know (sekikawa.yusuke@core.d-itlab.co.jp).
Many thanks.
I found the hyperparameters stored in "corner_detection_10_heatmaps.ckpt".
It looks like the pre-trained model provided with the SDK was resumed from another checkpoint: "/home/pchiberre/prophesee/data/logs/testing_train/checkpoints/epoch=65-step=131999.ckpt".
Can you share the hyperparameters used for training "epoch=65-step=131999.ckpt"?
```python
import torch

checkpoint = torch.load("corner_detection_10_heatmaps.ckpt")
print(checkpoint["hyper_parameters"])
```

```
{'root_dir': '/home/pchiberre/prophesee/data/logs/testing_train',
 'dataset_path': '/mnt/hdd1/coco/images/',
 'lr': 0.0007,
 'epochs': 100,
 'demo_iter': 10,
 'precision': 16,
 'accumulate_grad_batches': 1,
 'batch_size': 2,
 'demo_every': 1,
 'val_every': 1,
 'save_every': 1,
 'just_test': False,
 'cpu': False,
 'resume': False,
 'checkpoint': '/home/pchiberre/prophesee/data/logs/testing_train/checkpoints/epoch=65-step=131999.ckpt',
 'mask_loss_no_events_yet': False,
 'limit_train_batches': 2000,
 'limit_val_batches': 100,
 'data_device': 'cuda:0',
 'event_volume_depth': 10,
 'cin': 10,
 'cout': 10,
 'height': 360,
 'width': 480,
 'num_tbins': 10,
 'min_frames_per_video': 200,
 'max_frames_per_video': 5000,
 'number_of_heatmaps': 10,
 'num_workers': 2,
 'randomize_noises': True}
```
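In case it helps others, here is a small sketch that turns that stored dict back into a command line for retraining from the same configuration. It assumes each key corresponds to a train_corner_detection.py flag of the same name, with booleans as store_true-style switches; I have not verified that every such flag actually exists.

```python
import torch

# Sketch: reconstruct a plausible training command line from the
# hyper_parameters dict stored in the checkpoint.
# Assumption (unverified): each key maps to an argparse flag of the same name.
ckpt = torch.load("corner_detection_10_heatmaps.ckpt", map_location="cpu")
hp = ckpt["hyper_parameters"]

flags = []
for key, value in hp.items():
    if isinstance(value, bool):
        if value:                      # booleans as store_true switches
            flags.append(f"--{key}")
    else:
        flags.append(f"--{key} {value}")

print("python train_corner_detection.py " + " ".join(flags))
```

Even if the flag names differ in your SDK version, the stored dict at least gives the values to match.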
Thank you in advance.
Sorry, but we are not able to share more information at the moment than what we have already published (research papers, training scripts, and pre-trained models). We are trying to gather more data on the topic and will share it when available, but in the meantime, the main idea is to follow what is specified in the papers (even if it is not the full picture), as those pre-trained models were built when writing those papers, and to fine-tune by adjusting the parameters yourself.
I will keep you updated when I have some news.
Best, Laurent for Prophesee Support
I appreciate your help. We will wait for the updates.
I was training the corner detection model and encountered the following issues. Can you share the settings you used to produce the pre-trained model? In a similar vein, I have a few questions.