[🌟ECCV2024 Poster🌟]EventBind: Learning a Unified Representation to Bind Them All for Event-based Open-world Understanding (Project Page)
This repository contains the official PyTorch implementation of the paper "EventBind: Learning a Unified Representation to Bind Them All for Event-based Open-world Understanding".
Citation
If you find this paper useful, please consider starring 🌟 this repo and citing 📑 our paper:
@article{zhou2023clip,
title={E-CLIP: Towards Label-efficient Event-based Open-world Understanding by CLIP},
author={Zhou, Jiazhou and Zheng, Xu and Lyu, Yuanhuiyi and Wang, Lin},
journal={arXiv e-prints},
pages={arXiv--2308},
year={2023}
}
Quick Start
- Refer to install.md for step-by-step guidance on how to install the packages.
- Download the ViT-B-32, ViT-B-16, and ViT-L-14 pretrained CLIP backbones from this repository; see the loading sketch after this list.
- Download the datasets and their corresponding model checkpoints from the Dataset and Checkpoints sections below, respectively.
Note that the train-val splits for the N-Caltech101/Caltech101 and N-MNIST/MNIST datasets are provided in the Dataloader folder to ensure the fairness of future comparisons, while for N-Imagenet/Imagenet we follow the dataset's official train-val split.
- Adjust the settings in dataset_name.yaml in the Configs folder; the fields that need to be changed are marked with TODO notes.
- Finally, train and evaluate EventBind with the following command:
python ./EventBind/train_dp_dataset_name.py
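For reference, here is a minimal sketch of loading one of the pretrained CLIP backbones with the official `clip` package; the `CLIP_DIR` path is an assumption, so point it at wherever you stored the downloaded backbone:

```python
import clip
import torch

# Assumed local directory holding the downloaded CLIP backbones (adjust to your setup).
CLIP_DIR = "./pretrained_clip"

device = "cuda" if torch.cuda.is_available() else "cpu"
# Loads the ViT-B/32 backbone and its preprocessing transform;
# swap in "ViT-B/16" or "ViT-L/14" as needed.
model, preprocess = clip.load("ViT-B/32", device=device, download_root=CLIP_DIR)
```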
Checkpoints
| Datasets | Access to Model checkpoints |
|:------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| N-Caltech101 | [ViT-B-32](https://huggingface.co/garlandchou/EventBind/resolve/main/N-Caltech101-ViT-B-32.bin?download=true), [ViT-B-16](https://huggingface.co/garlandchou/EventBind/resolve/main/N-Caltech101-ViT-B-16.bin?download=true), [ViT-L-14](https://huggingface.co/garlandchou/EventBind/resolve/main/N-Caltech101-ViT-L-14.bin?download=true) |
| N-MNIST | [ViT-B-32](https://huggingface.co/garlandchou/EventBind/resolve/main/MINIST%20ViT-B-32.bin?download=true), [ViT-B-16](https://huggingface.co/garlandchou/EventBind/resolve/main/MINIST%20ViT-B-16.bin?download=true), [ViT-L-14](https://huggingface.co/garlandchou/EventBind/resolve/main/MINIST%20ViT-L-14.bin?download=true) |
| N-Imagenet | [ViT-B-32](https://huggingface.co/garlandchou/EventBind/resolve/main/N-Imagenet%20ViT-B-32.bin?download=true), [ViT-B-16](https://huggingface.co/garlandchou/EventBind/resolve/main/N-Imagenet%20ViT-B-16.bin?download=true), [ViT-L-14](https://huggingface.co/garlandchou/EventBind/resolve/main/N-Imagenet%20ViT-L-14.bin?download=true) |
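The checkpoints above can also be fetched programmatically from the Hugging Face Hub. Below is a minimal sketch using `huggingface_hub` and `torch`; the filename is taken from the N-Caltech101 ViT-B-32 link above, and how the state dict plugs into the EventBind model is left to the repo's own training/evaluation scripts:

```python
import torch
from huggingface_hub import hf_hub_download

# Download the N-Caltech101 ViT-B-32 checkpoint (repo id and filename from the table above).
ckpt_path = hf_hub_download(
    repo_id="garlandchou/EventBind",
    filename="N-Caltech101-ViT-B-32.bin",
)

# Inspect the checkpoint; applying it to the model follows the repo's own scripts.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(type(state_dict))
```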
Dataset
Please refer to the .txt files in the Dataloader folder for the dataset structure.
| Event Datasets | Access to Download | Corresponding Image Datasets | Access to Download |
|:--------------:|:------------------------------------------------------------------------------------------------:|:----------------------------:|:------------------------------------------------------------------------------:|
| N-Caltech101 | [Download](https://drive.google.com/drive/folders/1sY91hL_iHnmfRXSTc058bfZ0GQcEC6St) | Caltech101 | [Download](https://data.caltech.edu/records/mzrjq-6wc02) |
| N-Imagenet | [Download](https://docs.google.com/document/d/1x0Vqe_5tVAJtYLYSZLwN6oNMExyUjIh-a30oLOKV2rE/edit) | Imagenet | [Download](https://image-net.org/download.php) |
| N-MNIST | [Download](https://www.garrickorchard.com/datasets/n-mnist) | MNIST | [Download](https://link.zhihu.com/?target=http%3A//yann.lecun.com/exdb/mnist/) |
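Since the N-Caltech101 and N-MNIST train-val splits ship as .txt files in the Dataloader folder, a quick sanity check after downloading is to confirm that every sample listed in a split file exists on disk. The sketch below assumes a hypothetical split file name and a one-relative-path-per-line format; adjust both to match the actual files in Dataloader:

```python
from pathlib import Path

# Hypothetical paths: point these at the actual split file and dataset root on your machine.
SPLIT_FILE = Path("./Dataloader/N-Caltech101_train_split.txt")
DATA_ROOT = Path("/path/to/N-Caltech101")

# Collect every sample listed in the split that cannot be found under the dataset root.
missing = []
for line in SPLIT_FILE.read_text().splitlines():
    rel_path = line.strip()
    if rel_path and not (DATA_ROOT / rel_path).exists():
        missing.append(rel_path)

print(f"{len(missing)} samples listed in the split are missing from {DATA_ROOT}")
```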
Dependencies
Please refer to install.md for step-by-step guidance on how to install the packages.
Acknowledgement
We thank the authors of CLIP and CoOp for open-sourcing their wonderful works.
License
This repository is released under the MIT License.
Contact
If you have any questions about this project, please open an issue in this repository.