If you use this code for a paper, please cite:

```
@article{gao2021container,
  title={Container: Context Aggregation Network},
  author={Gao, Peng and Lu, Jiasen and Li, Hongsheng and Mottaghi, Roozbeh and Kembhavi, Aniruddha},
  journal={arXiv preprint arXiv:2106.01401},
  year={2021}
}
```
We provide a baseline Container-Light model pretrained on ImageNet 2012.

| name | acc@1 | acc@5 | #params | url |
| --- | --- | --- | --- | --- |
| Container-Light | 82.3 | 96.2 | 21M | model |
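To use the checkpoint outside of `main.py`, something like the sketch below should work. It is a minimal sketch under two assumptions: the checkpoint stores weights under a `'model'` key (as DeiT-style checkpoints do), and importing the repository's model definitions registers `container_v1_light` with timm.

```python
# Minimal sketch for loading the pretrained Container-Light checkpoint.
# Assumptions: the repo's model module registers 'container_v1_light' with
# timm (DeiT-style), and the checkpoint stores weights under a 'model' key.
import torch
import timm
import models  # hypothetical import: the repo module defining container_v1_light

model = timm.create_model('container_v1_light', pretrained=False)
checkpoint = torch.load('checkpoint.pth', map_location='cpu')
model.load_state_dict(checkpoint['model'])
model.eval()  # ready for inference on 224x224 ImageNet-normalized inputs
```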
First, clone the repository locally:
```
git clone https://github.com/allenai/container.git
```
Create a new conda environment:
```
conda create -n container python=3.7
conda activate container
cd container
```
Install PyTorch 1.7.0+, torchvision 0.8.1+, and timm (pytorch-image-models) 0.3.2:

```
conda install -c pytorch pytorch torchvision
pip install timm==0.3.2
```
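To confirm the environment matches the versions above, a quick check:

```python
# Quick sanity check of the installed versions.
import torch, torchvision, timm

print(torch.__version__)        # expect 1.7.0 or newer
print(torchvision.__version__)  # expect 0.8.1 or newer
print(timm.__version__)         # expect 0.3.2
```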
Download and extract ImageNet train and val images from http://image-net.org/.
The directory structure follows the standard layout for torchvision's `datasets.ImageFolder`: training images are expected under the `train/` folder and validation images under the `val/` folder:
```
/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
```
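For reference, this is how `datasets.ImageFolder` consumes the layout above: each class subfolder becomes a label index. A minimal sketch (the transform here is illustrative, not the repo's exact pipeline):

```python
# Minimal sketch: ImageFolder maps each class subfolder to a label index.
# The transform is illustrative only, not the repo's exact eval pipeline.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
val_set = datasets.ImageFolder('/path/to/imagenet/val', transform=transform)
print(len(val_set.classes))  # 1000 classes for ImageNet-2012
img, label = val_set[0]      # label is the index of the class subfolder
```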
To evaluate the pre-trained Container-Light on the ImageNet val set with a single GPU, run:

```
python main.py --eval --resume checkpoint.pth --model container_v1_light --data-path /path/to/imagenet
```
This should give:
* Acc@1 82.26
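Here acc@1 and acc@5 are the standard top-k accuracies. The self-contained sketch below shows the computation; it mirrors timm's accuracy utility but is not the repo's exact code:

```python
# Self-contained sketch of top-1 / top-5 accuracy (in percent).
import torch

def topk_accuracy(output, target, topk=(1, 5)):
    _, pred = output.topk(max(topk), dim=1)   # indices of the top-k scores
    correct = pred.eq(target.view(-1, 1))     # compare against true labels
    return [correct[:, :k].any(dim=1).float().mean().item() * 100 for k in topk]

logits = torch.randn(8, 1000)                 # dummy batch of class scores
labels = torch.randint(0, 1000, (8,))
print(topk_accuracy(logits, labels))          # [acc@1, acc@5]
```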
To train Container-Light on ImageNet on a single node with 8 GPUs for 300 epochs, run:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --model container_v1_light --batch-size 128 --data-path /path/to/imagenet --output_dir /path/to/save
```

Here `--batch-size` is the per-GPU batch size, so the effective batch size across 8 GPUs is 1024.
Code will be released separately.
Container V2, with much better performance, will be released soon. Stay tuned.

ImageNet pretrained model for Container V2: Container V2
This repository is released under the Apache 2.0 license as found in the LICENSE file.
The Container codebase is heavily inspired by DeiT.