TT2TER opened this issue 2 months ago
Hi, just remove this dependency.
I also faced similar issues with deepspeed and iotop. Python: 3.9
Hi, those are internal packages, so you can just remove those dependencies. What is the issue with deepspeed?
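For reference, a minimal sketch of stripping the internal packages out before running pip install -r requirements.txt (this assumes the dependencies are pinned in a requirements.txt at the repo root, and that byted-cruise is one of the internal names to drop):

# Minimal sketch: drop internal packages from requirements.txt before installing.
# Assumption: the file uses standard "name==version" pins.
internal_pkgs = {"byted-cruise"}  # extend with any other internal package names

with open("requirements.txt") as f:
    lines = f.readlines()

# Keep only the lines whose package name is not in the internal set
kept = [line for line in lines if line.split("==")[0].strip() not in internal_pkgs]

with open("requirements.txt", "w") as f:
    f.writelines(kept)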
Hi, I solved the problem. I removed some packages and could run the infer CLI command. Right now I'm trying to decouple the inference into a notebook. The problem I'm facing is when I try to run the following portion:
input_ids_llava = torch.cat([
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|mmu|>']).to(device),
    input_ids_system,
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|soi|>']).to(device),
    # place your img embedding here
    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|eoi|>']).to(device),
    input_ids,
], dim=1).long()
I get a RuntimeError:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Any idea on this?
Looks like the tensors input_ids_system and input_ids are not on the same device. You can manually move them to the GPU.
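For example, a minimal sketch of moving both tensors to the same device before the torch.cat call (the placeholder tensors here are only an assumption standing in for the tokenizer outputs in your notebook):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder tensors standing in for the tokenizer outputs, which usually live on the CPU
input_ids_system = torch.randint(0, 100, (1, 28))
input_ids = torch.randint(0, 100, (1, 16))

# Move them to the same device as the special-token tensors before concatenating
input_ids_system = input_ids_system.to(device)
input_ids = input_ids.to(device)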
I didn't encounter such a problem.
I solved the problem by changing input_ids_system and input_ids with the following code:
input_ids_system = input_ids_system.clone().detach().to(device)
input_ids = input_ids.clone().detach().to(device)
Thanks
ERROR: Could not find a version that satisfies the requirement byted-cruise==0.7.3 (from versions: none)
ERROR: No matching distribution found for byted-cruise==0.7.3
This problem happens when using Python 3.10 and 3.12.