Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

llamafile as LLM server for Mantella mod and Skyrim, is working nice but there is a little problem. #415

Open amonpaike opened 1 month ago

amonpaike commented 1 month ago

The Mantella mod makes it possible to talk to Skyrim NPCs, revolutionizing the way of playing this RPG and making it a unique experience. Officially, the mod author relies on the koboldcpp LLM server. Unfortunately, koboldcpp with CUDA crashes on my PC because my processor doesn't support AVX2, while the other "blas" backends are too slow. So as an alternative I use llamafile, which works nicely, is very light, and performs very well on my 3060 with 12 GB. The only problem is that every time I start a conversation, in order for the LLM to generate the response, I have to briefly "alt+tab" to "exit and re-enter the game" so that llamafile generates the response and triggers the loop with the voice speech. This also works for multiple comments, but after asking the NPC a new question, I have to "alt+tab" again to trigger the LLM server. I was wondering what could cause this and whether there is a way to overcome the problem.

mofosyne commented 1 month ago

Rewritten for clarity, please confirm if correct.

Bug Report

Issue: When using llamafile with the Mantella mod in Skyrim, I have to briefly "alt+tab" (exit and re-enter the game) to trigger the LLM server response. This step is necessary each time I start a conversation or ask a new question to an NPC, which disrupts the gameplay experience.

Steps to Reproduce:

  1. Start Skyrim with the Mantella mod and llamafile as the LLM server.
  2. Initiate a conversation with an NPC.
  3. To trigger the LLM response and voice speech loop, "alt+tab" out of the game and then back in.
  4. The response is generated, and the conversation continues.
  5. Ask a new question to the NPC.
  6. Repeat "alt+tab" to trigger the next LLM response.

Expected Behavior: The LLM server should generate responses seamlessly during gameplay without requiring "alt+tab" actions.
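One way to narrow this down is to probe the llamafile server directly while the game has focus, bypassing Mantella entirely. Below is a minimal sketch assuming llamafile's OpenAI-compatible endpoint on the default port 8080 (the `model` value and `max_tokens` are illustrative placeholders, not values from this thread):

```python
import json
import urllib.request

# llamafile's default OpenAI-compatible chat endpoint (assumed default port).
API_URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_chat_request(prompt, model="local", max_tokens=64):
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def probe(prompt, timeout=30):
    """POST a prompt to the server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

If `probe("Say hello.")` returns promptly while Skyrim is in the foreground, the server is responding fine and the stall is more likely in the mod/client; if it also hangs until you alt+tab, the problem is on the server side.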


Background

The Mantella mod introduces the ability to talk to NPCs in Skyrim, revolutionizing the RPG experience. The mod author uses koboldcpp as the LLM server, but it crashes on my PC due to lack of AVX2 support, and other backends such as "blas" are too slow.

As an alternative, I am using llamafile, which is efficient and performs well on my NVIDIA 3060 GPU with 12GB VRAM. However, the need to "alt+tab" to trigger responses is the primary issue I need to resolve.

amonpaike commented 1 month ago

@mofosyne thank you very much. I'm not very good at writing bug reports; next time I'll try to do my best.

jart commented 1 month ago

This is Windows correct? llamafile is a CLI application. How would the state of the Window manager impact its operation?

mofosyne commented 1 month ago

Is it possible that it's a bug in the mod? Maybe give the mod writer a poke and link this issue to them and see if they reply.

amonpaike commented 1 month ago

This is Windows correct? llamafile is a CLI application. How would the state of the Window manager impact its operation?

Yes, it's Windows. I run llamafile as a CLI application: .\llamafile.exe -m C:\Users\noki\gguf\Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf -ngl 9999

The Mantella LLM is from here, but that is irrelevant; it happens with any LLM model.

Is it possible that it's a bug in the mod? Maybe give the mod writer a poke and link this issue to them and see if they reply.

To be honest, I don't even know whether it's a llamafile issue. I also reported it to the author of the mod when I wrote the issue here.

amonpaike commented 1 month ago

In case someone wants to try it, here is a quick tutorial (it is not exhaustive; refer to the official tutorials for the correct step-by-step instructions).

You need:

1) Skyrim installed with the various mods required to make Mantella work well (read the Mantella tutorial).

2) The Mantella spell (the mod for Skyrim) and the Mantella software (which interconnects everything), downloaded from here.

3) xVASynth (the voices for the NPCs), downloaded from here.

4) The config.ini in the Mantella software modified to point at the llamafile LLM server:
llm_api = http://127.0.0.1:8080/v1

Then play the game (go to the in-game mod configuration for Mantella spell customizations and shortcuts).
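For step 4, the edit can look like this in context (a minimal sketch: the section header and comment are assumptions, since the exact layout of Mantella's config.ini may vary between versions; only the llm_api line comes from this thread):

```ini
[LLM]
; Point Mantella at the local llamafile server's
; OpenAI-compatible endpoint (llamafile's default port is 8080)
llm_api = http://127.0.0.1:8080/v1
```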

https://www.youtube.com/watch?v=FLmbd48r2Wo