hkzhang-git / HiNAS


NotImplementedError: Input Error: Only 3D, 4D and 5D input Tensors supported (got 4D) for the modes: nearest | linear | bilinear | trilinear (got bicubic) #2

Open ye-zero opened 2 years ago

ye-zero commented 2 years ago

I have searched, but the following error still appears. I hope you can help answer:

2021-12-04 15:50:36,768 one_stage_nas INFO: Namespace(config_file='/data/yy/hinas/configs/sr/DIV2K_2c3n/03_x2_train_CR.yaml', device='3', opts=[])
2021-12-04 15:50:36,779 one_stage_nas INFO: Loaded configuration file /data/yy/hinas/configs/sr/DIV2K_2c3n/03_x2_train_CR.yaml
2021-12-04 15:50:36,780 one_stage_nas INFO:
DATASET:
  DATA_ROOT: /data/yy/hinas/data
  DATA_ROOT: /data/data2/zhk218/data/nas_data
  DATA_NAME: DIV2K_800
  DATA_NAME: Set14
  CROP_SIZE: 64
  TASK: "sr"
  LOAD_ALL: False
SEARCH:
  TIE_CELL: False
INPUT:
  CROP_SIZE_TRAIN: 64
SOLVER:
  TRAIN:
    MAX_ITER: 600000
    MAX_ITER: 100
  CHECKPOINT_PERIOD: 10
  CHECKPOINT_PERIOD: 1000
  VALIDATE_PERIOD: 10
  LOSS: ['l1', 'log_ssim']
  LOSS_WEIGHT: [1.0, 0.6]
DATALOADER:
  NUM_WORKERS: 4
  BATCH_SIZE_TRAIN: 16
  BATCH_SIZE_TEST: 16
  S_FACTOR: 2
  R_CROP: 4
  DATA_LIST_DIR: ../preprocess/dataset_json
MODEL:
  FILTER_MULTIPLIER: 16
  META_ARCHITECTURE: Sr_compnet
  META_ARCHITECTURE: Sr_supernet
  META_MODE: Width
  NUM_STRIDES: 3
  NUM_LAYERS: 2
  NUM_BLOCKS: 3
  IN_CHANNEL: 3
  PRIMITIVES: "NO_DEF_L"
  ACTIVATION_F: "Leaky"
  USE_ASPP: True
  USE_RES: True
OUTPUT_DIR: output

2021-12-04 15:50:36,782 one_stage_nas INFO: Running with config:
DATALOADER:
  BATCH_SIZE_TEST: 16
  BATCH_SIZE_TRAIN: 16
  DATA_AUG: 1
  DATA_LIST_DIR: ../preprocess/dataset_json
  NUM_WORKERS: 4
  R_CROP: 4
  SIGMA: []
  S_FACTOR: 2
DATASET:
  CROP_SIZE: 64
  DATA_NAME: Set14
  DATA_ROOT: /data/yy/hinas/data
  LOAD_ALL: False
  TASK: sr
  TEST_DATASETS: []
  TO_GRAY: False
  TRAIN_DATASETS: []
  TRAIN_DATASETS_WEIGHT: []
INPUT:
  CROP_SIZE_TRAIN: 64
  MAX_SIZE_TEST: 1024
  MAX_SIZE_TRAIN: 1024
  MIN_SIZE_TEST: -1
  MIN_SIZE_TRAIN: -1
MODEL:
  ACTIVATION_F: Leaky
  AFFINE: True
  ASPP_RATES: (2, 4, 6)
  FILTER_MULTIPLIER: 16
  IN_CHANNEL: 3
  META_ARCHITECTURE: Sr_compnet
  META_MODE: Width
  NUM_BLOCKS: 3
  NUM_LAYERS: 2
  NUM_STRIDES: 3
  PRIMITIVES: NO_DEF_L
  RES: add
  USE_ASPP: True
  USE_RES: True
  WEIGHT:
  WS_FACTORS: [1, 1.5, 2]
OUTPUT_DIR: output
RESULT_DIR: .
SEARCH:
  ARCH_START_EPOCH: 20
  PORTION: 0.5
  R_SEED: 0
  SEARCH_ON: False
  TIE_CELL: False
  VAL_PORTION: 0.02
SOLVER:
  BIAS_LR_FACTOR: 2
  CHECKPOINT_PERIOD: 10
  LOSS: ['l1', 'log_ssim']
  LOSS_WEIGHT: [1.0, 0.6]
  MAX_EPOCH: 30
  MOMENTUM: 0.9
  SCHEDULER: poly
  SEARCH:
    LR_A: 0.001
    LR_END: 0.001
    LR_START: 0.025
    MOMENTUM: 0.9
    T_MAX: 10
    WD_A: 0.001
    WEIGHT_DECAY: 0.0003
  TRAIN:
    INIT_LR: 0.05
    MAX_ITER: 100
    POWER: 0.9
  VAL_PORTION: 0.01
  VALIDATE_PERIOD: 10
  WEIGHT_DECAY: 4e-05
  WEIGHT_DECAY_BIAS: 0

Loading genotype from output/sr/Set14/Outline-2c3n_TC-False_ASPP-True_Res-True_Prim-NO_DEF_L/search/models/model_best.geno
2021-12-04 15:50:58,502 one_stage_nas.utils.checkpoint INFO: No checkpoint found. Initializing model from scratch
2021-12-04 15:50:58,549 one_stage_nas.trainer INFO: Model Params: 0.26M
2021-12-04 15:50:58,556 one_stage_nas.trainer INFO: Start training
/home/yy/miniconda/conda/envs/hinas/lib/python3.7/site-packages/torch/nn/functional.py:2423: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
Traceback (most recent call last):
  File "train.py", line 134, in <module>
    main()
  File "train.py", line 130, in main
    train(cfg, output_dir)
  File "train.py", line 68, in train
    cfg
  File "../one_stage_nas/engine/trainer.py", line 87, in do_train
    pred, loss_dict = model(images, targets)
  File "/home/yy/miniconda/conda/envs/hinas/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yy/miniconda/conda/envs/hinas/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 141, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/yy/miniconda/conda/envs/hinas/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "../one_stage_nas/modeling/sr_compnet.py", line 214, in forward
    pred = F.interpolate(images, size=pred.size()[-2:], mode='bicubic') + pred
  File "/home/yy/miniconda/conda/envs/hinas/lib/python3.7/site-packages/torch/nn/functional.py", line 2459, in interpolate
    " (got {})".format(input.dim(), mode))
NotImplementedError: Input Error: Only 3D, 4D and 5D input Tensors supported (got 4D) for the modes: nearest | linear | bilinear | trilinear (got bicubic)

Also, the model_best.geno file is only 354 bytes. Does this have anything to do with "No checkpoint found. Initializing model from scratch"?
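The failing call in `sr_compnet.py` can be reproduced in isolation. Below is a minimal sketch (the tensor shapes are illustrative, and the `bilinear` fallback is a workaround suggestion, not the repository's code): on torch 1.0.0, `mode='bicubic'` raises exactly the `NotImplementedError` above, while `mode='bilinear'` works as a drop-in substitute.

```python
import torch
import torch.nn.functional as F

# Illustrative NCHW batch standing in for `images`; shapes are made up.
images = torch.randn(2, 3, 32, 32)

# This is the kind of call that fails on torch 1.0.0, where 'bicubic'
# did not exist yet. On torch >= 1.1.0 it succeeds; 'bilinear' is a
# reasonable fallback on older versions.
try:
    up = F.interpolate(images, size=(64, 64), mode='bicubic',
                       align_corners=False)
except NotImplementedError:
    up = F.interpolate(images, size=(64, 64), mode='bilinear',
                       align_corners=False)

print(tuple(up.shape))  # (2, 3, 64, 64)
```

Passing `align_corners=False` explicitly also silences the `nn.Upsample` UserWarning that appears in the log above.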

hkzhang-git commented 2 years ago

Is the PyTorch version you adopted consistent with the version listed in requirements.txt? If so, I will check the code later.

ye-zero commented 2 years ago

> Is the PyTorch version you adopted consistent with the version listed in requirements.txt? If so, I will check the code later.

Yes, PyTorch = 1.0.0.

ye-zero commented 2 years ago

> Is the PyTorch version you adopted consistent with the version listed in requirements.txt? If so, I will check the code later.


When I train from the genotype with use_res=True, the error still exists.


Do you have this problem?

JuliaWasala commented 1 year ago

Hi, I have the same input error with torch = 1.0.0, training an x2 SR model.

JuliaWasala commented 1 year ago

It may be some other package version error. Could you provide a full .yml file so I can reproduce your conda environment?

JuliaWasala commented 1 year ago

Hi, I was able to solve the issue by upgrading to PyTorch 1.1.0. I went back and checked the PyTorch documentation of the interpolate method, and realised that the bicubic option was only introduced in 1.1.0, so upgrading PyTorch solved the issue. I used the installation instructions for CUDA 10 from the PyTorch docs; afterwards I needed to remove numpy and re-install the version from requirements.txt again.
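To summarize the root cause: `mode='bicubic'` for `F.interpolate` only exists from PyTorch 1.1.0 onwards, so torch 1.0.0 (the version in requirements.txt) cannot run the repo's bicubic upsampling line. A small hypothetical helper (the function name is my own, not part of the repo) that gates on the installed version:

```python
def supports_bicubic(torch_version: str) -> bool:
    """Return True if this torch version has mode='bicubic' in F.interpolate.

    Bicubic interpolation was added in PyTorch 1.1.0. Version strings such
    as '1.0.0' or '1.1.0.post2' are handled by comparing the first two
    numeric fields only.
    """
    major, minor = (int(part) for part in torch_version.split('.')[:2])
    return (major, minor) >= (1, 1)

print(supports_bicubic('1.0.0'))  # False
print(supports_bicubic('1.1.0'))  # True
```

In practice one would call it as `supports_bicubic(torch.__version__)` and fall back to `mode='bilinear'` when it returns False.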