open-mmlab / mmengine

OpenMMLab Foundational Library for Training Deep Learning Models
https://mmengine.readthedocs.io/
Apache License 2.0

[Bug] Model can't forward with dictionary as output #1603

Closed viktorho closed 3 weeks ago

viktorho commented 3 weeks ago

Prerequisite

Environment

sys.platform: linux
Python: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 12.0, V12.0.140
GCC: n/a
PyTorch: 1.13.0+cu117
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.7
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.5
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF
TorchVision: 0.14.0+cu117
OpenCV: 4.10.0
MMEngine: 0.10.5

Reproduces the problem - code sample

My model's forward is implemented like this, but it ends in the error below. I understand the error implies the output must be a list, but the base class's forward() signature also allows returning a dict. Is this caused by some other setup, or am I doing something wrong?

    def forward(self, batch_inputs: Tensor,
                batch_data_samples: Optional[list] = None,
                mode: str = 'tensor') -> Union[Dict[str, Tensor], list]:
        # Pick pooled or last-hidden-state features from the backbone.
        output_type = 'last_hidden_state' if not self.with_pooler else 'pooler_output'
        img_features = self.model(batch_inputs)[output_type]
        # Returning a dict here is what triggers the error below.
        return {'train_features': img_features, 'cache_keys': batch_data_samples}

Reproduces the problem - command or script

*

Reproduces the problem - error message

File "/home/victor-ho/CV/yolo_w_adap/YOLO-World/tools/temp.py", line 68, in <module>
    main()
  File "/home/victor-ho/CV/yolo_w_adap/YOLO-World/tools/temp.py", line 45, in main
    runner.test()
  File "/home/victor-ho/miniconda3/envs/yolo-w/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1823, in test
    metrics = self.test_loop.run()  # type: ignore
  File "/home/victor-ho/miniconda3/envs/yolo-w/lib/python3.9/site-packages/mmengine/runner/loops.py", line 463, in run
    self.run_iter(idx, data_batch)
  File "/home/victor-ho/miniconda3/envs/yolo-w/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/victor-ho/miniconda3/envs/yolo-w/lib/python3.9/site-packages/mmengine/runner/loops.py", line 489, in run_iter
    outputs, self.test_loss = _update_losses(outputs, self.test_loss)
  File "/home/victor-ho/miniconda3/envs/yolo-w/lib/python3.9/site-packages/mmengine/runner/loops.py", line 535, in _update_losses
    if isinstance(outputs[-1],
KeyError: -1
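
For context, the failing check in mmengine/runner/loops.py (line 535 in the traceback) indexes outputs[-1], which assumes the model's test-time output is list-like; a string-keyed dict has no key -1, hence the KeyError. A minimal standalone illustration (plain Python, not mmengine code):

    # A list supports negative indexing...
    outputs_as_list = ['pred_0', 'pred_1']
    print(outputs_as_list[-1])    # prints 'pred_1'

    # ...but a str-keyed dict does not, so outputs[-1] fails.
    outputs_as_dict = {'train_features': 1, 'cache_keys': 2}
    outputs_as_dict[-1]           # raises KeyError: -1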

Additional information

No response

viktorho commented 3 weeks ago

One more thing: the task I'm running is testing (runner.test()). I'm kind of new to this framework, and I really appreciate any help.

fanqiNO1 commented 3 weeks ago

This is because PR #1503 does not handle the case where the output is a dict. Maybe you can try mmengine==0.10.4 first, via pip install mmengine==0.10.4.

I hope this helps.
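
If downgrading is inconvenient, another possible workaround is to return a list from forward in 'predict' mode, since that is what the test loop can index. This is only a sketch, assuming mmengine 0.10.5, that batch_data_samples is provided at test time, and that each sample's features can be packed into a BaseDataElement; the field names train_features / cache_keys simply mirror the snippet above.

    from mmengine.structures import BaseDataElement

    def forward(self, batch_inputs, batch_data_samples=None, mode='tensor'):
        output_type = 'last_hidden_state' if not self.with_pooler else 'pooler_output'
        img_features = self.model(batch_inputs)[output_type]
        if mode == 'predict':
            # The test loop indexes outputs[-1], so hand it a list:
            # one BaseDataElement per sample in the batch.
            return [
                BaseDataElement(train_features=feat, cache_keys=sample)
                for feat, sample in zip(img_features, batch_data_samples)
            ]
        # Keep the original dict for the other modes.
        return {'train_features': img_features, 'cache_keys': batch_data_samples}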

viktorho commented 3 weeks ago

Thanks for the fast response, I will try it.