formidiable opened this issue 2 years ago
There is an explanation of my suspicion about the encoder-decoder and the use of Batch Normalization in the link above. ^
I also got this error when training without BatchNorm1d, or when training with LayerNorm.
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
I searched a lot but couldn't find a proper way to fix this.
I believe Batch Normalization causes the problem. Do you have any advice about it?
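For context, this RuntimeError is typically raised when `torch.cat` (schema `aten::cat`) is called with an empty list of tensors, e.g. when a list of batch outputs unexpectedly ends up empty before concatenation. A minimal sketch that reproduces the error and guards against it (the `safe_cat` helper is hypothetical, not part of the original code):

```python
import torch

# Reproduce the error: torch.cat on an empty list raises the
# "no tensor arguments" RuntimeError from the issue above.
try:
    torch.cat([])
except RuntimeError as e:
    print("reproduced:", type(e).__name__)

# Hypothetical defensive guard: only concatenate when the list
# is non-empty, otherwise return an empty tensor.
def safe_cat(tensors, dim=0):
    if len(tensors) == 0:
        return torch.empty(0)
    return torch.cat(tensors, dim=dim)

print(safe_cat([]).shape)  # torch.Size([0])
print(safe_cat([torch.ones(2, 3), torch.ones(1, 3)]).shape)  # torch.Size([3, 3])
```

If the error appears only in some configurations (e.g. with or without BatchNorm1d), it may be worth checking whether a data-dependent code path ever produces an empty list before a `torch.cat` call.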