This repository contains the official implementation of GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models, published in the journal Communications in Transportation Research.
🔥 Essential Science Indicators Highly Cited Paper: ranked in the top 1% of most-cited papers in the field.
Welcome to the official repository for GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models. This project introduces a novel approach that uses GPT-4 to equip autonomous vehicle (AV) systems with a human-centric multimodal grounding model. The Context-Aware Visual Grounding (CAVG) model combines textual, visual, and contextual understanding for improved intent prediction in complex driving scenarios.
Hybrid Strategy for Contextual Analysis
A pioneering hybrid approach for advanced image-text context analysis tailored to autonomous vehicle command grounding.
Cross-Modal Attention Mechanism
A unique cross-modal attention mechanism for deriving nuanced human-AV interactions from multimodal inputs.
Large Language Model Integration
Leverages GPT-4 for effective embedding and interpretation of emotional nuances in human commands.
Robustness in Diverse Scenarios
Demonstrates exceptional performance across challenging traffic environments, validated extensively on the Talk2Car dataset.
Navigating complex commands in a visual context is a core challenge for autonomous vehicles (AVs). Our CAVG model employs an advanced encoder-decoder framework to address this challenge. Integrating five specialized encoders (Text, Image, Context, Cross-Modal, and Multimodal), CAVG leverages GPT-4's capabilities to capture human intent and emotional undertones. The architecture includes multi-head cross-modal attention and a Region-Specific Dynamic (RSD) layer for enhanced context interpretation, making it resilient across diverse and challenging real-world traffic scenarios. Evaluations on the Talk2Car dataset show that CAVG outperforms existing models in accuracy and efficiency, excelling with limited training data and demonstrating its potential for practical AV applications.
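The released code implements these components in PyTorch. As a rough illustration of the cross-modal attention idea (text-command embeddings attending over candidate image-region features), a minimal sketch follows; the class and argument names are illustrative and do not mirror the repository's actual modules:

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative multi-head cross-modal attention:
    text-command embeddings attend over image-region features."""

    def __init__(self, embed_dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, text_emb: torch.Tensor, region_emb: torch.Tensor) -> torch.Tensor:
        # text_emb:   (batch, num_tokens, embed_dim), e.g. LLM-derived command embeddings
        # region_emb: (batch, num_regions, embed_dim), e.g. features of candidate image regions
        attended, _ = self.attn(query=text_emb, key=region_emb, value=region_emb)
        return self.norm(text_emb + attended)  # residual connection + layer norm

# Toy usage: a 16-token command attending over 32 candidate regions
fused = CrossModalAttention()(torch.randn(1, 16, 768), torch.randn(1, 32, 768))
print(fused.shape)  # torch.Size([1, 16, 768])
```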
Model Architecture
Create Conda Environment
conda create --name CAVG python=3.7
conda activate CAVG
Install PyTorch with CUDA 11.7
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
Install Additional Requirements
pip install -r requirements.txt
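After installation, a quick sanity check with standard PyTorch calls confirms that the CUDA 11.7 build sees a GPU (nothing here is specific to this repository):

```python
import torch

print(torch.__version__)          # expected: 1.13.1
print(torch.version.cuda)         # expected: 11.7
print(torch.cuda.is_available())  # should print True on a CUDA-capable machine
```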
Experiments are conducted using the Talk2Car dataset. If you use this dataset, please cite the original paper:
Thierry Deruyttere, Simon Vandenhende, Dusan Grujicic, Luc Van Gool, and Marie-Francine Moens.
Talk2Car: Taking Control of Your Self-Driving Car. EMNLP 2019.
Activate Environment and Install gdown
conda activate CAVG
pip install gdown
Download Talk2Car Images
gdown --id 1bhcdej7IFj5GqfvXGrHGPk2Knxe77pek
Organize Images
unzip imgs.zip && mv imgs/ ./data/images
rm imgs.zip
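If the archive unpacked as intended, the frames should now live under data/images. The short check below (a generic sketch, assuming the images are stored as .jpg files) verifies that the directory is populated:

```python
from pathlib import Path

img_dir = Path("data/images")           # target directory from the steps above
images = sorted(img_dir.glob("*.jpg"))  # assumes JPEG frames
print(f"Found {len(images)} images in {img_dir}")
assert images, "No images found - check that imgs.zip was extracted into data/images"
```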
To start training the CAVG model with the Talk2Car dataset, run:
bash talk2car/script/train.sh
To evaluate the model's performance, execute:
bash talk2car/script/test.sh
During the prediction phase on the Talk2Car dataset, bounding boxes are generated to assess the model's spatial query understanding. To begin predictions, run:
bash talk2car/script/prediction.sh
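The predicted bounding boxes are scored with the IoU-based AP50 metric described in the results section below. A minimal sketch of that computation, assuming boxes in [x1, y1, x2, y2] pixel coordinates (this is not the repository's evaluation script):

```python
def iou(box_a, box_b):
    """Intersection over Union for two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def ap50(predictions, ground_truths):
    """Fraction of predictions whose IoU with the ground truth exceeds 0.5."""
    hits = sum(iou(p, g) > 0.5 for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

print(ap50([[10, 10, 50, 50]], [[12, 8, 48, 52]]))  # one well-aligned box -> 1.0
```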
Performance Comparison
Ground truth bounding boxes are in blue, while CAVG output boxes are in red. Commands associated with each scenario are displayed for context.
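A minimal matplotlib sketch that reproduces this color convention for a single image; the file path and box coordinates are placeholders, not outputs of the repository:

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image

def draw_boxes(image_path, gt_box, pred_box):
    """Draw ground-truth (blue) and predicted (red) [x1, y1, x2, y2] boxes."""
    fig, ax = plt.subplots()
    ax.imshow(Image.open(image_path))
    for box, color in ((gt_box, "blue"), (pred_box, "red")):
        w, h = box[2] - box[0], box[3] - box[1]
        ax.add_patch(patches.Rectangle((box[0], box[1]), w, h,
                                       edgecolor=color, fill=False, linewidth=2))
    plt.show()

# Placeholder path and boxes - replace with real prediction output
draw_boxes("data/images/example.jpg",
           gt_box=[100, 120, 300, 260], pred_box=[110, 115, 305, 255])
```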
Challenging Scenes
Examples from scenes with limited visibility, ambiguous commands, and multiple agents.
Models on Talk2Car are evaluated with the Intersection over Union (IoU) between predicted and ground-truth bounding boxes; a prediction counts as correct when its IoU exceeds 0.5, reported as AP50. We welcome pull requests with new results!
Model | AP50 (IoU > 0.5) | Code |
---|---|---|
STACK-NMN | 33.71 | |
SCRC | 38.7 | |
OSM | 35.31 | |
Bi-Directional retr. | 44.1 | |
MAC | 50.51 | |
MSRR | 60.04 | |
VL-BERT (Base) | 63.1 | Code |
AttnGrounder | 63.3 | Code |
ASSMR | 66.0 | |
CMSVG | 68.6 | Code |
ViLBERT (Base) | 68.9 | Code |
CMRT | 69.1 | |
Sentence-BERT+FCOS3D | 70.1 | |
Stacked VLBert | 71.0 | |
FA | 73.51 | |
CAVG (Ours) | 74.55 | Code |
You can find the full Talk2Car leaderboard here.
If you find our work useful, please consider citing:
@article{LIAO2024100116,
title = {GPT-4 enhanced multimodal grounding for autonomous driving: Leveraging cross-modal attention with large language models},
journal = {Communications in Transportation Research},
volume = {4},
pages = {100116},
year = {2024},
issn = {2772-4247},
doi = {https://doi.org/10.1016/j.commtr.2023.100116},
url = {https://www.sciencedirect.com/science/article/pii/S2772424723000276},
author = {Haicheng Liao and Huanming Shen and Zhenning Li and Chengyue Wang and Guofa Li and Yiming Bie and Chengzhong Xu},
keywords = {Autonomous driving, Visual grounding, Cross-modal attention, Large language models, Human-machine interaction}
}
GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models has been published in the journal Communications in Transportation Research. Thank you for exploring CAVG! Your support and feedback are highly appreciated.