Official PyTorch implementation of LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding (ACL 2022)
MIT License
LiLT can not make inference with the Half (float16) dtype on CPU #43
Hi,
I wanted to run inference with LiLT with the model parameters cast to half precision (`float16`) on CPU (I did try it on GPU and it worked). As I'm using Transformers from Hugging Face, I ran the following code:
It worked, but when I ran the model for inference with the following code, it failed:
Error message:
It looks like the `float32` dtype is hard-coded in the LiLT code. How can I solve this issue? Thanks.
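For reference, a minimal sketch of the usual workaround, assuming the goal is simply to run inference on CPU: cast the half-precision model back to `float32` before calling it, since some CPU kernels (and modules whose dtype is fixed in the modeling code) only support full precision. The model below is a hypothetical stand-in, not the code from this issue; the real model would be loaded with Transformers' `LiltModel.from_pretrained(...)`.

```python
import torch

# Hypothetical stand-in for a LiLT-style model (Linear + LayerNorm);
# the real model would come from transformers.LiltModel.from_pretrained(...).
model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.LayerNorm(8),
)

model.half()   # float16 weights: fine for GPU inference
model.float()  # cast back to float32 before running on CPU
model.eval()

x = torch.randn(1, 8)  # CPU tensors default to float32
with torch.no_grad():
    out = model(x)

print(out.dtype)  # torch.float32
```

An alternative, if memory savings on CPU are the goal, is `torch.autocast(device_type="cpu", dtype=torch.bfloat16)`, since CPU autocast supports `bfloat16` rather than `float16`.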