reading from datapath datasets/imagenet_full
Number of the class = 1000
/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torchvision/transforms/transforms.py:257: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
warnings.warn(
Transform =
Resize(size=256, interpolation=bicubic)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
reading from datapath datasets/imagenet_full
Number of the class = 1000
Sampler_train = <torch.utils.data.distributed.DistributedSampler object at 0x7c13ca1da730>
log writter dir: None
/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
number of params: 2979700
LR = 0.00600000
Batch size = 512
Update frequent = 2
Number of training examples = 1281167
Number of training training per epoch = 2502
Use Cosine LR scheduler
Set warmup steps = 50040
Set warmup steps = 0
Max WD = 0.0500000, Min WD = 0.0500000
criterion = LabelSmoothingCrossEntropy()
Auto resume checkpoint:
Total Trainable Params: 2.98 M
Start training for 300 epochs
[ WARN:0@4.054] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n03899768/n03899768_43379.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.054] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n01924916/n01924916_14328.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.054] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n03662601/n03662601_28649.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.054] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02130308/n02130308_4198.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.055] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02342885/n02342885_3182.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.055] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n04141327/n04141327_7288.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.055] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02870880/n02870880_11036.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.055] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02009229/n02009229_6480.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.055] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n03259280/n03259280_2258.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.056] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n03854065/n03854065_8381.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.056] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n03759954/n03759954_1093.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.056] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02825657/n02825657_10468.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.056] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n09193705/n09193705_166.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.056] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02676566/n02676566_5416.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.057] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02259212/n02259212_3700.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.057] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n04548362/n04548362_37597.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.057] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n04417672/n04417672_9395.JPEG'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "main.py", line 530, in <module>
[ WARN:0@4.057] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02493793/n02493793_4192.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.057] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n02104029/n02104029_546.JPEG'): can't open/read file: check file path/integrity
[ WARN:0@4.057] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n03670208/n03670208_8088.JPEG'): can't open/read file: check file path/integrity
main(args)
File "main.py", line 440, in main
[ WARN:0@4.058] global loadsave.cpp:241 findDecoder imread('datasets/imagenet_full/train/n04589890/n04589890_7745.JPEG'): can't open/read file: check file path/integrity
train_stats = train_one_epoch(
File "/home/yubai03/yubai03/aJialin_Tang/CAS-ViT/classification/engine.py", line 25, in train_one_epoch
for data_iter_step, (samples, targets) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
File "/home/yubai03/yubai03/aJialin_Tang/CAS-ViT/classification/utils.py", line 140, in log_every
for obj in iterable:
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/yubai03/yubai03/aJialin_Tang/CAS-ViT/classification/data/samplers.py", line 330, in __getitem__
img = self.load_img(imgpath)
File "/home/yubai03/yubai03/aJialin_Tang/CAS-ViT/classification/data/samplers.py", line 349, in load_img
img = Image.fromarray(img)
File "/home/yubai03/anaconda3/envs/casvit/lib/python3.8/site-packages/PIL/Image.py", line 3266, in fromarray
arr = obj.__array_interface__
AttributeError: 'NoneType' object has no attribute '__array_interface__'
When I ran `python main.py`, training crashed with:
AttributeError: 'NoneType' object has no attribute '__array_interface__'
The OpenCV warnings just before the traceback show that `cv2.imread` failed to open many training images, so `load_img` ends up passing `None` to `Image.fromarray`.
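A root cause worth noting: `cv2.imread` does not raise on a missing or corrupt file, it silently returns `None`, and `Image.fromarray(None)` is what finally crashes. A minimal defensive wrapper could fail fast with a readable message instead. This is only a sketch, not the actual body of `load_img` in `classification/data/samplers.py`:

```python
import os

def load_img(imgpath):
    """Load an image defensively.

    cv2.imread() returns None instead of raising when the file is
    missing or undecodable, which later crashes Image.fromarray().
    Raise here so the offending path shows up in the traceback.
    """
    if not os.path.isfile(imgpath):
        raise FileNotFoundError(f"image not found: {imgpath}")
    import cv2  # imported lazily; assumed available in the training env
    img = cv2.imread(imgpath)
    if img is None:
        raise OSError(f"cv2 could not decode: {imgpath}")
    return img
```

With this guard, a bad sample crashes with the exact file path instead of an opaque `__array_interface__` error deep inside PIL.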
Full command and arguments:
(casvit) yubai03@yubai03:~/yubai03/aJialin_Tang/CAS-ViT/classification$ python main.py
Not using distributed mode
Namespace(aa='rand-m9-mstd0.5-inc1', auto_resume=True, batch_size=256, classifier_dropout=0.0, clip_grad=None, color_jitter=0.4, crop_pct=None, cutmix=0.0, cutmix_minmax=None, data_path='datasets/imagenet_full', data_set='IMNET', device='cuda', disable_eval=False, dist_eval=True, dist_on_itp=False, dist_url='env://', distributed=False, drop_path=0.1, enable_wandb=False, epochs=300, eval=False, eval_data_path=None, find_unused_params=False, finetune='', imagenet_default_mean_and_std=True, input_size=224, layer_decay=1.0, layer_scale_init_value=1e-06, local_rank=-1, log_dir=None, lr=0.006, max_crop_size_h=320, max_crop_size_w=320, min_crop_size_h=160, min_crop_size_w=160, min_lr=1e-06, mixup=0.0, mixup_mode='batch', mixup_prob=0.0, mixup_switch_prob=0.5, model='rcvit_xs', model_ema=False, model_ema_decay=0.9995, model_ema_eval=False, model_ema_force_cpu=False, momentum=0.9, multi_scale_sampler=False, nb_classes=1000, num_workers=10, opt='adamw', opt_betas=None, opt_eps=1e-08, output_dir='', pin_mem=True, project='edgenext', recount=1, remode='pixel', reprob=0.0, resplit=False, resume='', save_ckpt=True, save_ckpt_freq=1, save_ckpt_num=3, seed=43, smoothing=0.1, start_epoch=0, three_aug=False, train_interpolation='bicubic', update_freq=2, use_amp=True, usi_eval=False, wandb_ckpt=False, warmup_epochs=20, warmup_start_lr=0, warmup_steps=-1, weight_decay=0.05, weight_decay_end=None, world_size=1)
Transform =
RandomResizedCropAndInterpolation(size=(224, 224), scale=(0.08, 1.0), ratio=(0.75, 1.3333), interpolation=bicubic)
RandomHorizontalFlip(p=0.5)
RandAugment(n=2, ops=
AugmentOp(name=AutoContrast, p=0.5, m=9, mstd=0.5)
AugmentOp(name=Equalize, p=0.5, m=9, mstd=0.5)
AugmentOp(name=Invert, p=0.5, m=9, mstd=0.5)
AugmentOp(name=Rotate, p=0.5, m=9, mstd=0.5)
AugmentOp(name=PosterizeIncreasing, p=0.5, m=9, mstd=0.5)
AugmentOp(name=SolarizeIncreasing, p=0.5, m=9, mstd=0.5)
AugmentOp(name=SolarizeAdd, p=0.5, m=9, mstd=0.5)
AugmentOp(name=ColorIncreasing, p=0.5, m=9, mstd=0.5)
AugmentOp(name=ContrastIncreasing, p=0.5, m=9, mstd=0.5)
AugmentOp(name=BrightnessIncreasing, p=0.5, m=9, mstd=0.5)
AugmentOp(name=SharpnessIncreasing, p=0.5, m=9, mstd=0.5)
AugmentOp(name=ShearX, p=0.5, m=9, mstd=0.5)
AugmentOp(name=ShearY, p=0.5, m=9, mstd=0.5)
AugmentOp(name=TranslateXRel, p=0.5, m=9, mstd=0.5)
AugmentOp(name=TranslateYRel, p=0.5, m=9, mstd=0.5))
ToTensor()
Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))