PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"
Apache License 2.0
Thanks for your work. I have a question. Since the prompt in the model's input already describes the content represented by the vectors, why is it necessary to align the vectors with the LLM during pre-training? Are the vectors meant to help the model understand the driving scenario beyond what the prompt describes? Are the labels in the pre-training process the prompts generated by lanGen? And what is the purpose of the 100k question-answer pairs in pre-training? #24
Thanks for the question. The short answer is that we wanted to prove that we can align the vectors with LLMs, so that a similar method can then be used to align images with LLMs, as described in the Lingo work: https://arxiv.org/abs/2312.14115
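To make the alignment idea more concrete, here is a minimal sketch of what this kind of vector-modality pre-training step can look like. The module and function names are hypothetical and not the repo's actual code; it assumes a HuggingFace-style causal LM that accepts `inputs_embeds` and `labels`, with the object-level vectors projected into the LLM's embedding space and the language description (e.g. a lanGen-style caption) serving as the supervision target:

```python
import torch
import torch.nn as nn


class VectorAdapter(nn.Module):
    """Hypothetical adapter: projects object-level vectors into the
    LLM's token-embedding space so they can be fed alongside text tokens."""

    def __init__(self, vector_dim: int, llm_embed_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vector_dim, llm_embed_dim),
            nn.GELU(),
            nn.Linear(llm_embed_dim, llm_embed_dim),
        )

    def forward(self, vectors: torch.Tensor) -> torch.Tensor:
        # vectors: (batch, num_objects, vector_dim)
        return self.proj(vectors)  # (batch, num_objects, llm_embed_dim)


def alignment_loss(llm, adapter, vectors, text_ids):
    """One illustrative pre-training step: the LLM sees the projected vector
    tokens followed by the text (prompt + description), and is trained with a
    standard next-token loss computed on the text positions only."""
    vec_embeds = adapter(vectors)                              # vector "tokens"
    txt_embeds = llm.get_input_embeddings()(text_ids)          # text tokens
    inputs_embeds = torch.cat([vec_embeds, txt_embeds], dim=1)

    # Mask out the vector positions from the loss (-100 is the usual ignore index).
    ignore = torch.full(vec_embeds.shape[:2], -100,
                        dtype=torch.long, device=text_ids.device)
    labels = torch.cat([ignore, text_ids], dim=1)

    return llm(inputs_embeds=inputs_embeds, labels=labels).loss
```

In this sketch only the adapter (and optionally LoRA-style weights in the LLM) would be updated, so the pre-training stage mainly teaches the model to ground the vector tokens in language rather than to learn new linguistic ability.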
Could you explain what the model's input and the corresponding labels are during the pre-training stage to help me better understand this process?