Doragd / Chinese-Chatbot-PyTorch-Implementation

:four_leaf_clover: Another Chinese chatbot implemented in PyTorch, which is the sub-module of intelligent work order processing robot. 👩‍🔧
Apache License 2.0

Chat crashes #13

Open Chenjm08 opened 1 year ago

Chenjm08 commented 1 year ago

Running the following command causes a crash. How do I fix it?

python3 main.py chat
Doragd > 你在 干嘛

Crash output:

Traceback (most recent call last):
  File "main.py", line 38, in <module>
    fire.Fire()
  File "/home/chenjm/.local/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/chenjm/.local/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/chenjm/.local/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "main.py", line 28, in chat
    output_words = train_eval.output_answer(input_sentence, searcher, sos, eos, unknown, opt, word2ix, ix2word)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/train_eval.py", line 291, in output_answer
    tokens = generate(input_seq, searcher, sos, eos, opt)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/train_eval.py", line 202, in generate
    tokens, scores = searcher(sos, eos, input_batch, input_lengths, opt.max_generate_length, opt.device)
  File "/home/chenjm/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/utils/greedysearch.py", line 17, in forward
    encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
  File "/home/chenjm/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/model.py", line 51, in forward
    packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
  File "/home/chenjm/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py", line 262, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
666github100 commented 1 year ago

Either switch to a different torch version, or change lengths to lengths.cpu().
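
For reference, a minimal sketch of the second option; the surrounding names (embedded, input_lengths) are taken from the traceback above, not from reading the file:

# model.py, inside the encoder's forward (line 51 per the traceback).
# On recent torch versions, pack_padded_sequence requires `lengths` to be a
# 1D CPU int64 tensor, so move only the lengths to the CPU; `embedded` can
# stay on the GPU.
packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths.cpu())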

anfogy commented 1 year ago

Either switch to a different torch version, or change lengths to lengths.cpu().

I'm on the latest torch version; after changing it to lengths.to(opt.device), another runtime error is triggered:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
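
This second error usually points at a lookup whose index tensor and weights sit on different devices; only the lengths need to move to the CPU, while input_seq has to stay on the model's device. A standalone check of both constraints (hypothetical sizes, not the project's model):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

device = 'cuda' if torch.cuda.is_available() else 'cpu'
embedding = nn.Embedding(100, 32).to(device)

# Token ids must be on the same device as the embedding weights, otherwise the
# "Expected all tensors to be on the same device ... index_select" error appears.
input_seq = torch.randint(0, 100, (10, 4)).to(device)   # (seq_len, batch)

# The lengths passed to pack_padded_sequence must be a 1D CPU int64 tensor.
lengths = torch.tensor([10, 8, 7, 5])

embedded = embedding(input_seq)                          # (seq_len, batch, 32) on `device`
packed = pack_padded_sequence(embedded, lengths.cpu())   # works; lengths.to(device) would not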
Whylickspittle commented 1 year ago

Do you have any solution to this problem?

Whylickspittle commented 1 year ago

Do you have any solution to this problem?

model.py, line 51:
change packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
to packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths.cpu())

anfogy commented 1 year ago

Do you have any solution to this problem?

model.py, line 51: change packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) to packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths.cpu())

Woo, I haven't checked it yet, but thank you!

pinst7 commented 10 months ago

After running main.py, it doesn't enter chat mode; instead it prints the following:

os: <module 'os' from '<path>'>
preprocess: <function preprocess at 0x000001C8EE6B57B8>
train_eval: <module 'train_eval' from '<path>'>
fire: <module 'fire' from '<path>'>
QA_test: <module 'QA_data.QA_test' from '<path>'>
Config: <class 'config.Config'>
chat: <function chat at 0x000001C8EDFA2EA0>

How do I fix this?
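
That output looks like python-fire displaying the top-level names of main.py because no command was passed on the command line, so nothing actually gets called. A minimal sketch of the dispatch (the chat body below is a placeholder, not the project's code):

import fire

def chat():
    ...  # placeholder; the real chat loop lives in the project's main.py / train_eval.py

if __name__ == '__main__':
    # With no command, fire.Fire() only displays the module's top-level names
    # (os, preprocess, train_eval, fire, QA_test, Config, chat).
    fire.Fire()

Passing the command name makes fire call chat(), as in the original report:

python3 main.py chat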