jxbbb / ADAPT

This repository is an official implementation of ADAPT: Action-aware Driving Caption Transformer, accepted by ICRA 2023.
MIT License

About Quick Demo #21

Open MoMo569377793 opened 2 weeks ago

MoMo569377793 commented 2 weeks ago

Thank you for your excellent work!

However, I want to confirm: the Quick Demo does not include control signal output, right (because the multitask parameter is not set)? And was the model.bin checkpoint you provided also trained only for the driving caption output?

If so, is there a script that outputs the control signal predictions?

Or, what modifications and settings do I need to make so that the model outputs both the driving caption and the control signals from an input video?

MoMo569377793 commented 2 weeks ago

I tried the following steps:

1. Add multitask and signal_types in the script file inference.sh; for signal_types, use the course and speed corresponding to the pre-trained model below.
2. Use the Basic Model for training, i.e. BDDX_multitask.sh, to get a new pre-trained model instead of the default one.
3. Add an initial value for car_info in the input (I gave a random value; I don't know if that's OK).

This way the final output contains the predicted values for the control signals (course and speed). Am I doing this right?
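For reference, a minimal sketch of what the modified inference.sh invocation from step 1 might look like. The entry-point path, checkpoint path, demo video path, and the exact flag names (`--multitask`, `--signal_types`) are assumptions here and should be copied from the repo's own BDDX_multitask.sh and Quick Demo script rather than taken from this sketch:

```shell
# Hypothetical modification of inference.sh (flag names are assumptions;
# check BDDX_multitask.sh for the exact argument names used by this repo).
python src/tasks/run_caption_VidSwinBert_inference.py \
    --resume_checkpoint ./checkpoints/model.bin \
    --test_video_fname ./demo/example_video.mp4 \
    --multitask true \
    --signal_types course speed \
    --do_test
```

Note that if the provided model.bin was trained without the multitask head, enabling these flags at inference time alone would not produce meaningful control signals; the checkpoint itself must come from a multitask training run such as the one in step 2.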