YenRaven / annoy_ltm

annoy long term memory experiment for oobabooga/text-generation-webui

Pressing "generate" twice causes error #4

Closed · soctib closed this 1 year ago

soctib commented 1 year ago

Normally you can press "generate" a second time without entering any new input. With annoy_ltm loaded, generation gets stuck and prints the following error:

  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\routes.py", line 395, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\blocks.py", line 1193, in process_api
    result = await self.call_function(
  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\blocks.py", line 930, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\oobabooga\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\oobabooga\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\oobabooga\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\utils.py", line 491, in async_iteration
    return next(iterator)
  File "D:\oobabooga\text-generation-webui\modules\chat.py", line 319, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
  File "D:\oobabooga\text-generation-webui\modules\chat.py", line 313, in generate_chat_reply
    for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
  File "D:\oobabooga\text-generation-webui\modules\chat.py", line 226, in chatbot_wrapper
    prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
  File "D:\oobabooga\text-generation-webui\modules\extensions.py", line 193, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "D:\oobabooga\text-generation-webui\modules\extensions.py", line 80, in _apply_custom_generate_chat_prompt
    return extension.custom_generate_chat_prompt(text, state, **kwargs)
  File "D:\oobabooga\text-generation-webui\extensions\annoy_ltm\script.py", line 671, in custom_generate_chat_prompt
    return generator.custom_generate_chat_prompt(user_input, state, **kwargs)
  File "D:\oobabooga\text-generation-webui\extensions\annoy_ltm\script.py", line 615, in custom_generate_chat_prompt
    related_memories = retrieve_related_memories(
  File "D:\oobabooga\text-generation-webui\extensions\annoy_ltm\script.py", line 252, in retrieve_related_memories
    input_embedding = generate_embeddings(rem_user_and_time(input_str))
  File "D:\oobabooga\text-generation-webui\extensions\annoy_ltm\script.py", line 159, in generate_embeddings
    input_embeds = shared.model.model.embed_tokens(input_ids)
  File "D:\oobabooga\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\oobabooga\installer_files\env\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\oobabooga\installer_files\env\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "D:\oobabooga\installer_files\env\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
YenRaven commented 1 year ago

Thanks for the report! This should be fixed in the main branch.
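
For context, the crash happens because torch.nn.functional.embedding only accepts integer indices (Long or Int), and pressing "generate" with no new input let a float tensor reach embed_tokens. Below is a minimal sketch of the kind of guard that addresses both symptoms; generate_embeddings_safe, model, and tokenizer are illustrative names, not the actual patch in main:

    import torch

    def generate_embeddings_safe(model, tokenizer, input_str):
        # Hypothetical helper mirroring annoy_ltm's generate_embeddings.
        # Pressing "generate" twice sends an empty string; skip the
        # embedding lookup entirely instead of crashing on it.
        if not input_str.strip():
            return None

        # Tokenize to integer token IDs (a Hugging Face tokenizer returns
        # torch.long tensors when return_tensors="pt" is used).
        input_ids = tokenizer(input_str, return_tensors="pt").input_ids

        # F.embedding requires Long/Int indices; casting defensively
        # prevents the "got torch.cuda.FloatTensor" RuntimeError above.
        input_ids = input_ids.to(dtype=torch.long, device=model.device)

        with torch.no_grad():
            return model.model.embed_tokens(input_ids)

The early return on empty input matters as much as the dtype cast: without it, an empty second "generate" press would still produce a zero-length lookup even once the indices have the right type.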