AICPS / hydrafusion

Model code for our paper titled "HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception"
MIT License

Train and Test script files are missing #3

Closed jayamohan-cb closed 1 year ago

jayamohan-cb commented 2 years ago

Train and test script files are missing from the repository. Can someone upload the train and test scripts for the HydraFusion model?

malawada commented 1 year ago

This repository only contains our model architecture in PyTorch. We don't provide any code for the dataset, training, or evaluation, but you should be able to use this model architecture with your own data and training code for multi-modal object-detection tasks, as the model is derived from Faster R-CNN.
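
For context, a rough training-loop sketch (not from this repo) could look like the following. It assumes a torchvision Faster R-CNN-style convention in which the model, in training mode, takes the sensor inputs plus per-image target dicts and returns a dict of losses; the batch keys and the positional forward arguments are placeholders, so check the model code in this repo for the actual signature (it does expose radar_y / cam_y target arguments).

```python
# Rough sketch only. Assumes a Faster R-CNN-style convention where the
# training-mode forward returns a dict of losses. Batch layout and the
# forward arguments below are placeholders to adapt to the real signature.
import torch

def train_one_epoch(model, loader, optimizer, device):
    model.train()
    for batch in loader:
        cam_x = batch['camera'].to(device)    # placeholder batch keys
        radar_x = batch['radar'].to(device)
        cam_y = [{k: v.to(device) for k, v in t.items()} for t in batch['cam_y']]
        radar_y = [{k: v.to(device) for k, v in t.items()} for t in batch['radar_y']]

        loss_dict = model(cam_x, radar_x, cam_y=cam_y, radar_y=radar_y)
        loss = sum(loss_dict.values())

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```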

jayamohan-cb commented 1 year ago

Hi,

I was trying to train the model with the RADIATE dataset, but I could not figure out which arguments from the RADIATE data should be passed to the HydraFusion forward function, as I could not find the corresponding fields in the dataset. Can you provide details on the radar_y and cam_y parameters in the forward function?

malawada commented 1 year ago

We used the RADIATE SDK function get_from_timestamp to get the sensor data for each input modality along with the corresponding annotations: https://github.com/marcelsheeny/radiate_sdk/blob/master/radiate.py#L187

We used annotations['radar_cartesian'] as radar_y and annotations['camera_right_rect'] as cam_y. However, since Faster R-CNN is a 2D bounding-box predictor, we flattened the pseudo-3D camera annotations for cam_y into 2D boxes.
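
For reference, a minimal sketch of how such targets might be assembled (this is not code from the paper): it assumes get_from_timestamp returns a dict with an 'annotations' key, that each annotated object carries a 'class_name' and a 'bbox' with an [x, y, width, height] 'position', and that the SDK exposes a Sequence class; those field names, the class mapping, and the paths below are assumptions to verify against radiate_sdk.

```python
# Sketch only: build Faster R-CNN-style 2D targets from RADIATE annotations.
# The 'bbox'/'position'/'class_name' fields, Sequence constructor, and class
# mapping are assumptions to check against radiate_sdk; box rotation is ignored.
import torch
import radiate  # https://github.com/marcelsheeny/radiate_sdk

CLASS_TO_ID = {'car': 1, 'van': 2, 'truck': 3, 'bus': 4, 'pedestrian': 5}  # example mapping

def to_2d_targets(objects):
    """Flatten RADIATE annotations into axis-aligned [x1, y1, x2, y2] boxes plus labels."""
    boxes, labels = [], []
    for obj in objects:
        x, y, w, h = obj['bbox']['position']
        boxes.append([x, y, x + w, y + h])
        labels.append(CLASS_TO_ID[obj['class_name']])
    return {'boxes': torch.tensor(boxes, dtype=torch.float32),
            'labels': torch.tensor(labels, dtype=torch.int64)}

seq = radiate.Sequence('path/to/a/radiate/sequence')  # hypothetical path
t = 1.0  # placeholder; use a real timestamp from the sequence
sample = seq.get_from_timestamp(t)
radar_y = to_2d_targets(sample['annotations']['radar_cartesian'])
cam_y = to_2d_targets(sample['annotations']['camera_right_rect'])
```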

jayamohan-cb commented 1 year ago

Thanks, I'll check it out.
