Fanghuachen / AEDNet

Source code for AEDNet paper from ACMMM 2022

question about SNR #6

Closed wzz-z closed 4 weeks ago

wzz-z commented 1 month ago

You mentioned in your paper that M and N in the SNR calculation refer to the number of real events and noise events, respectively. May I ask how the calculation of M and N is implemented in the code? Is it done by comparing the agreement between the output data and the reference data labels?

Fanghuachen commented 1 week ago

M and N are calculated with the help of the labels. Our AEDNet is an element-based (per-event) algorithm, so each result directly corresponds to its label without having to separately extract indices from the event stream.
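As a hedged sketch of this label-based counting (the function and variable names below are illustrative, not from the AEDNet codebase): since the algorithm makes a per-event keep/discard decision, M and N can be tallied by comparing each decision against its ground-truth label.

```python
import numpy as np

# Illustrative sketch: count real events (M) and noise events (N) among the
# events the denoiser keeps, using per-event ground-truth labels.
# `pred` is the per-event decision (1 = keep as real, 0 = discard);
# `label` is the ground truth (1 = real event, 0 = noise).
def count_m_n(pred, label):
    pred = np.asarray(pred, dtype=bool)
    label = np.asarray(label, dtype=bool)
    M = int(np.sum(pred & label))    # real events retained
    N = int(np.sum(pred & ~label))   # noise events retained
    return M, N

M, N = count_m_n([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
# M = 2 (real events kept), N = 1 (noise event kept)
```

Because the algorithm is element-based, `pred[i]` and `label[i]` refer to the same event by construction, which is why no index extraction from the event stream is needed.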

wzz-z commented 1 week ago

The problem has been solved; thank you very much for your reply. I have carefully read your paper and successfully reproduced your code. However, when I try to use my own dataset, the denoising accuracy reaches about 80% and then converges and stops increasing. Is this related to the parameters patches_per_shape and points_per_patch, and what are your suggestions for tuning them? In addition, I see that you used the EDnCNN model for comparison experiments in your paper, and the publicly available EDnCNN dataset contains data acquired with other kinds of sensors. How can the code be reproduced with only event data?

Fanghuachen commented 1 week ago

For the first question, which event camera did you use to record your dataset? Is it a DAVIS346? Different cameras have different spatial resolutions, so first you must change the --x_frame and --y_frame parameters (which you have probably already done). --x_lim and --y_lim may also need to change to match the new spatial resolution. In addition, you can adjust --points_per_patch, which controls the number of neighbor events, according to the event density of your dataset, and you can change the t_lim parameter in the load_train_shape function in dataprocess.py (sorry for not exposing it in the arguments). These two parameters correspond to each other, so adjust them together.

As for --patches_per_shape, it only relates to the length of each event stream (the number of events in each data file of your dataset). It can hurt training performance if it is too small while the event stream contains many events, because the model may never see the whole data. However, I do not think it is the main reason for the low accuracy. You should pay more attention to --x_frame, --y_frame, --x_lim, --y_lim, --points_per_patch and t_lim. This is only speculation, since I have not seen your dataset; I hope it helps. If you have other questions, feel free to contact us.
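To summarize how these resolution-dependent parameters might fit together, here is a hedged sketch (the flag names come from the reply above, but every default value below is an illustrative placeholder for a DAVIS346, not the repository's actual defaults):

```python
import argparse

# Illustrative sketch only. Flag names follow the maintainer's reply; the
# 346 x 260 values are the DAVIS346 sensor resolution, while the remaining
# defaults are placeholder guesses to be tuned, not AEDNet's real defaults.
parser = argparse.ArgumentParser(description="AEDNet resolution-dependent args (sketch)")
parser.add_argument("--x_frame", type=int, default=346,
                    help="sensor width in pixels (346 for DAVIS346)")
parser.add_argument("--y_frame", type=int, default=260,
                    help="sensor height in pixels (260 for DAVIS346)")
parser.add_argument("--x_lim", type=int, default=20,
                    help="spatial neighborhood extent in x; scale with resolution")
parser.add_argument("--y_lim", type=int, default=20,
                    help="spatial neighborhood extent in y; scale with resolution")
parser.add_argument("--points_per_patch", type=int, default=512,
                    help="neighbor events per patch; scale with event density (pairs with t_lim)")
parser.add_argument("--patches_per_shape", type=int, default=1000,
                    help="patches sampled per event stream; increase for long streams")

args = parser.parse_args([])  # take the defaults for this hypothetical setup
```

The key point of the advice above is that the spatial parameters (--x_frame, --y_frame, --x_lim, --y_lim) must track the camera's resolution, while --points_per_patch and t_lim form a density-dependent pair that should be tuned together.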

For the second question, EDnCNN itself does not need APS or IMU data. The public code trains EDnCNN on the DVSNOISE20 dataset with EPM labels, and it is generating the EPM labels that requires APS and IMU data. When you train EDnCNN on our DVSCLEAN, you can use our provided labels instead of the EPM labels, so APS and IMU data are not needed. You can revise the public code or re-implement EDnCNN; the framework is quite simple, so code replication is not too hard to achieve.
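The label substitution can be sketched as follows (a minimal illustration, not the actual EDnCNN training code): both EPM labels (per-event probabilities in [0, 1]) and DVSCLEAN's provided binary labels (0 = noise, 1 = real) can feed the same binary cross-entropy objective, which is why no APS/IMU pipeline is needed.

```python
import numpy as np

# Minimal sketch: per-event binary cross-entropy that accepts either
# probabilistic EPM labels (values in [0, 1]) or binary DVSCLEAN labels
# ({0, 1}). `probs` stands in for the network's sigmoid outputs.
def bce_loss(probs, labels, eps=1e-7):
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    labels = np.asarray(labels, dtype=float)
    return float(-np.mean(labels * np.log(probs)
                          + (1 - labels) * np.log(1 - probs)))

# Binary DVSCLEAN-style labels plug in directly, no EPM generation required:
loss = bce_loss([0.9, 0.2, 0.8], [1, 0, 1])
```

Since only the label source changes, the rest of the EDnCNN training loop can stay as-is when moving from DVSNOISE20/EPM to DVSCLEAN's provided labels.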