LLVM-AD Workshop @ WACV 2024

The 1st WACV Workshop on Large Language and Vision Models for Autonomous Driving (LLVM-AD) brings together academia and industry professionals to explore the application of large language and vision models to autonomous driving. This half-day in-person event will feature regular and demo paper presentations as well as invited talks from leading researchers in academia and industry. Additionally, LLVM-AD will release two open-source real-world traffic language understanding datasets to catalyze practical advancements, and will host two challenges based on these datasets to assess how well language and computer vision models address autonomous driving tasks.

Note on the Benchmark: The workshop challenges will be maintained in the long term. Even after the workshop concludes, we will continue to welcome submissions of new results on the datasets and will update the benchmark accordingly.


Workshop Recording


Important Dates

  • Paper Submission Deadline: October 26th, 2023 (extended from October 23rd, 2023)
  • Author Notification: November 13th, 2023
  • Camera-ready Papers Deadline: November 19th, 2023

Invited Speakers

  • Dr. Zhen Li, Assistant Professor, CUHKSZ
  • Dr. Oleg Sinavski, Principal Applied Scientist, Wayve
  • Dr. Yu Huang, CEO and Chief Scientist, roboraction.ai

Organizers

  • Chao Zheng (Tencent)
  • Kun Tang (Tencent)
  • Zhipeng Cao (Tencent)
  • Xu Cao (PediaMed AI & UIUC)
  • Yunsheng Ma (Purdue University)
  • Can Cui (Purdue University)
  • Wenqian Ye (PediaMed AI & UVA)
  • Ziran Wang (Purdue University)
  • Shawn Mei (Tencent)
  • Tong Zhou (Tencent)

Accepted Papers

Summary of the 1st WACV Workshop on Large Language and Vision Models for Autonomous Driving (LLVM-AD): [arXiv, GitHub]

🎉 We congratulate the authors of the following papers on their acceptance to LLVM-AD 2024!


Challenge Organization Committee

  • Chao Zheng (Tencent)
  • Kun Tang (Tencent)
  • Zhipeng Cao (Tencent)
  • Tong Zhou (Tencent)
  • Erlong Li (Tencent)
  • Ao Liu (Tencent)
  • Shengtao Zou (Tencent)
  • Xinrui Yan (Tencent)
  • Shawn Mei (Tencent)
  • Yunsheng Ma (Purdue University)
  • Can Cui (Purdue University)
  • Ziran Wang (Purdue University)
  • Yang Zhou (New York University)
  • Kaizhao Liang (SambaNova Systems)
  • Wenqian Ye (PediaMed AI & University of Virginia)
  • Xu Cao (PediaMed AI & University of Illinois Urbana-Champaign)

Program Committee

  • Erlong Li (Tencent)
  • Ao Liu (Tencent)
  • Shengtao Zou (Tencent)
  • Xinrui Yan (Tencent)
  • Yang Zhou (New York University)
  • Kaizhao Liang (SambaNova Systems)
  • Tianren Gao (SambaNova Systems)
  • Kuei-Da Liao (SambaNova Systems)
  • Shan Bao (University of Michigan)
  • Xuhui Kang (University of Virginia)
  • Sean Sung-Wook Lee (University of Virginia)
  • Amr Abdelraouf (Toyota Motor North America)
  • Jianguo Cao (PediaMed AI)
  • Jintai Chen (University of Illinois Urbana-Champaign)

Citation

If the workshop and the survey are helpful to your research, please consider citing our work:

@misc{cui2023survey,
      title={A Survey on Multimodal Large Language Models for Autonomous Driving}, 
      author={Can Cui and Yunsheng Ma and Xu Cao and Wenqian Ye and Yang Zhou and Kaizhao Liang and Jintai Chen and Juanwu Lu and Zichong Yang and Kuei-Da Liao and Tianren Gao and Erlong Li and Kun Tang and Zhipeng Cao and Tong Zhou and Ao Liu and Xinrui Yan and Shuqi Mei and Jianguo Cao and Ziran Wang and Chao Zheng},
      year={2023},
      eprint={2311.12320},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}