https://github.com/NVlabs/OmniDrive/assets/74858581/f64987a0-b890-416d-90c1-e0daaeb542d6
We present OmniDrive, a holistic Drive LLM-Agent framework for end-to-end autonomous driving. Our main contributions are novel solutions on both the model side (OmniDrive-Agent) and the benchmark side (OmniDrive-nuScenes). The former features a novel 3D multimodal LLM design that uses sparse queries to lift and compress visual representations into 3D. The latter comprises comprehensive VQA tasks for reasoning and planning, including scene description, traffic regulation, 3D grounding, counterfactual reasoning, decision making, and planning.
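To give a rough feel for the sparse-query idea, below is a minimal PyTorch sketch (not the released implementation): a small set of learnable queries cross-attends to flattened multi-view image features and compresses them into a fixed number of visual tokens for the LLM. The class and parameter names (`SparseQueryLifter`, `num_queries`, `llm_dim`) are illustrative placeholders, and details such as the 3D position encoding used by OmniDrive-Agent are omitted for brevity.

```python
# Minimal sketch of sparse-query lifting (illustrative only, not the OmniDrive code):
# learnable queries cross-attend to multi-view image features and are projected
# into the LLM embedding space as a compact set of visual tokens.
import torch
import torch.nn as nn

class SparseQueryLifter(nn.Module):
    def __init__(self, num_queries=256, dim=256, num_heads=8, llm_dim=4096):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.to_llm = nn.Linear(dim, llm_dim)  # project tokens into the LLM embedding space

    def forward(self, img_feats):
        # img_feats: (B, N_views * H * W, dim) flattened multi-view camera features
        b = img_feats.shape[0]
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        attn_out, _ = self.cross_attn(q, img_feats, img_feats)  # queries attend to image features
        q = self.norm1(q + attn_out)
        q = self.norm2(q + self.ffn(q))
        return self.to_llm(q)  # (B, num_queries, llm_dim) visual tokens for the LLM
```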
[2024/07/18] OmniDrive-nuScenes model release.
[2024/05/02] OmniDrive-nuScenes dataset release.
[2024/05/02] ArXiv technical report release.

Please follow our documentation step by step. If you like our work, please recommend it to your colleagues and friends.
Joint End-to-end Planning and Reasoning
Interactive Conversation with Ego Vehicle
Counterfactual Reasoning of Planning Behaviors
If this work is helpful for your research, please consider citing:
@article{wang2024omnidrive,
  title={{OmniDrive}: A Holistic LLM-Agent Framework for Autonomous Driving with 3D Perception, Reasoning and Planning},
  author={Shihao Wang and Zhiding Yu and Xiaohui Jiang and Shiyi Lan and Min Shi and Nadine Chang and Jan Kautz and Ying Li and Jose M. Alvarez},
  journal={arXiv:2405.01533},
  year={2024}
}