Open AmineDjeghri opened 1 year ago
When running the following code without `model.to(device)`, it works. But when moving the model to the device (a GPU), I get the following error:

RuntimeError: Input type (c10::Half) and bias type (float) should be the same
Here is the code:

```python
import torch
from PIL import Image
import matplotlib.pyplot as plt
from donut import DonutModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def demo_process(input_img):
    global pretrained_model, task_prompt, task_name
    # input_img = Image.fromarray(input_img)
    output = pretrained_model.inference(image=input_img, prompt=task_prompt)["predictions"][0]
    return output

task_prompt = "<s_cord-v2>"
image = Image.open("data/sample_image_cord_test_receipt_00004.png")
plt.imshow(image)
plt.show()

pretrained_model = DonutModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
pretrained_model.to(device)
pretrained_model.eval()
```
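For context, the error itself comes from a plain PyTorch dtype mismatch: the input tensor is float16 (`c10::Half`) while the model's parameters are float32, which convolution layers reject. Below is a minimal sketch that reproduces the mismatch with a bare `nn.Conv2d` (not the donut API) and resolves it by putting the input and the model in the same dtype. On a GPU, the equivalent fix is usually to keep both sides half precision, e.g. calling `.half()` on whichever side is still float32 before running inference.

```python
import torch
import torch.nn as nn

# A float32 model receiving a float16 input reproduces the error.
model = nn.Conv2d(3, 8, kernel_size=3)   # weights and bias are float32
x = torch.randn(1, 3, 32, 32).half()     # input is float16 (c10::Half)

try:
    model(x)  # dtype mismatch between input and weights/bias
except RuntimeError as e:
    print("mismatch:", e)

# Fix: make the dtypes agree. Casting the input to float32 works
# everywhere; on GPU you would typically call model.half() instead
# so both sides stay in half precision.
out = model(x.float())
print(out.dtype)  # torch.float32
```

The key point is only that `input.dtype` must match the parameter dtype; which side you convert depends on whether you want half-precision inference.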
same +1