getnamo / Llama-Unreal

Llama.cpp plugin for Unreal Engine 5
MIT License

Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded #7

Open aleistor222 opened 7 months ago

aleistor222 commented 7 months ago

I'm having issues implementing the plugin. I came from using the original UELlama plugin, which worked, but I had issues with packaging the project. I'm fairly new to this and a bit confused.

I'm running the Oculus source build of UE5.3 on Windows, and when I try to follow the instructions to build using CMake I get an error that CMakeLists.txt doesn't exist. So I build through the Visual Studio IDE instead, but when I try to run UE I get:

getnamo commented 7 months ago

If you're using this fork, I recommend starting with the plugin releases found here: https://github.com/getnamo/Llama-Unreal/releases/tag/v0.3.0. They contain the correct compiled DLLs, which should make the plugin drag-and-drop for Blueprint-only use. Use the .7z link.

If you're trying to build your own DLLs for each platform, those build commands are meant to be run from within the cloned llama.cpp root directory, not the plugin's.

aleistor222 commented 7 months ago

So I tried that release, followed the instructions, and just put it in the Plugins folder, but I still have the same issue.

Is there anything else I could be missing? Is it because my project is being built in the Visual Studio IDE? I have tried building and rebuilding the project, and have tried a different PC with a different project; all give the same error.

getnamo commented 7 months ago

If you're building from source (Oculus 5.3), you may need to recompile the project with the plugin in it so it generates new binaries to match your custom engine. The release is meant for the canonical 5.3 engine release.

getnamo commented 7 months ago

NB: if you're using the CUDA branch, you may be missing the CUDA 12.2 runtimes. CPU-only should work, though.

oivio commented 7 months ago

I am having the same exact error.

What I did: I downloaded https://github.com/getnamo/Llama-Unreal/releases/tag/v0.3.0, added it to a clean UE5.3 project's Plugins folder, and as soon as I run the project this error pops up:

Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.

From LOG:

[2024.02.03-00.23.06:881][ 0]LogWindows: Failed to load 'H:/Unreal Projects/Personal/CodeProject53/Plugins/Llama-Unreal/Binaries/Win64/UnrealEditor-UELlama.dll' (GetLastError=1114)
[2024.02.03-00.23.06:881][ 0]LogPluginManager: Error: Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.
[2024.02.03-00.23.08:994][ 0]Message dialog closed, result: Ok, title: Message, text: Plugin 'UELlama' failed to load because module 'UELlama' could not be loaded. There may be an operating system error or the module may not be properly set up.

My specs

Windows 10 and GTX 4080
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

chris-boyce commented 7 months ago

Same issue here; I tried the same steps. Let me know if there is a fix. For now I'm going to try to implement some of the changes myself, but it would be great if this could get resolved.

getnamo commented 7 months ago

Does the windows CPU only build work for anyone?

chris-boyce commented 7 months ago

Does the windows CPU only build work for anyone?

Yeah, I tried it; it's an issue loading the module, by the looks of it. At the moment I'm in the process of extracting the code from the module to see if it builds then.

getnamo commented 7 months ago

It's a static build, so it's possible it has a hidden DLL dependency that happens to be satisfied on my system. I need to do a DLL build for CPU and CUDA and try again.

chris-boyce commented 7 months ago

What version of CUDA are you using? I have a feeling I'm not on the right version.

chris-boyce commented 7 months ago

What version of CUDA are you using? I have a feeling I'm not on the right version.

I've gotta go to work now :) Night shift. I will try the CPU one tomorrow ASAP. The best error I got so far is below; TL;DR, I don't think I'm getting a .lib file somewhere:

C:\Users\skyog\Documents\GitHub\ToTheMoonReLlama\Binaries\Win64\UnrealEditor-ToTheMoon.patch_1.lib and object C:\Users\skyog\Documents\GitHub\ToTheMoonReLlama\Binaries\Win64\UnrealEditor-ToTheMoon.patch_1.exp
llama.lib(ggml.obj) : error LNK2019: unresolved external symbol __imp_strdup referenced in function gguf_add_tensor
C:\Users\skyog\Documents\GitHub\ToTheMoonReLlama\Binaries\Win64\UnrealEditor-ToTheMoon.patch_1.exe : fatal error LNK1120: 1 unresolved externals.

chris-boyce commented 7 months ago

Update: Got CPU working with a little workaround :). I took the code out of the module and into the main source files. @oivio @aleistor222, would you like the version I've modified? Going to do the same with the GPU one, most likely tomorrow evening; I'll let you know if I can get it to work.

getnamo commented 7 months ago

Doing a bit of a refactor before I make another build. I will test that one on other PCs to debug what's failing on startup.

chris-boyce commented 7 months ago

Did a little bit of poking around. I found that the module loading issue is coming from common.cpp and identified some of the functions that are causing it. Linked a video of me showing all the functions that cause the issues.

https://www.youtube.com/watch?v=v1Mr1am2Zp8

These are the ones I found immediately, but there could be more. I haven't a clue why it's happening, but I hope it helps in some way:

llama_tokenize
llama_token_to_piece
llama_detokenize_spm
llama_detokenize_bpe
llama_should_add_bos_token

getnamo commented 7 months ago

Try https://github.com/getnamo/Llama-Unreal/releases/tag/v0.4.0 with the CPU-only build to see if it works out of the box. NB: this includes the refactor, so if you built Blueprints off the old API you'll need to re-wire those.

Will have to address the precise CUDA build dependencies at a later date.

chris-boyce commented 7 months ago

v0.4.0 has the same error message when launching

oivio commented 7 months ago

Yeah, same for me. I can confirm, with this log:

[2024.02.07-16.10.49:730][  0]LogWindows: Failed to load 'H:/Unreal Projects/Llama/Plugins/Llama-Unreal/Binaries/Win64/UnrealEditor-LlamaCore.dll' (GetLastError=1114)
[2024.02.07-16.10.49:730][  0]LogPluginManager: Error: Plugin 'Llama' failed to load because module 'LlamaCore' could not be loaded.  There may be an operating system error or the module may not be properly set up.
[2024.02.07-16.11.27:950][  0]Message dialog closed, result: Ok, title: Message, text: Plugin 'Llama' failed to load because module 'LlamaCore' could not be loaded.  There may be an operating system error or the module may not be properly set up.

chris-boyce commented 7 months ago

I'm just in the process of using GFlags to see which DLL isn't getting loaded.

getnamo commented 7 months ago

Just to confirm: v0.4 is failing with cuda = false in build.cs for you guys?

chris-boyce commented 7 months ago

Yeah, CUDA is false. Again, I think it links back to the changes that moved the functions into the common folder, but it also seems to me like it could be missing a Windows DLL file. I'll find some time to boot it up on a second PC and check; using the GFlags tool for VS to see which DLL isn't getting loaded is my current plan.

chris-boyce commented 7 months ago

OK, from my investigating it isn't a Windows DLL. The call stack says it is loading the correct module, "LlamaCore.dll", which matches the logs. I had a feeling it was from when you refactored the name to LlamaCore, since in v0.2 that hadn't been done yet and that's the version I can get running. I'm currently looking into how the plugin is set up.

Update: v0.2 also has the issue. I got it working by extracting the code from the plugin, so it does go back to the first version you released.

getnamo commented 6 months ago

Apparently building llama.cpp yourself locally can resolve this. That hints at a static lib config issue, maybe?

getnamo commented 6 months ago

See https://github.com/getnamo/Llama-Unreal/pull/10 for an alternative path. Thanks to @ellllie-42's PR, you can now specify a LLAMA_PATH and use your standard CUDA_PATH if you want a dev-friendly custom build environment. If those fail, or if you have local CUDA libs, it will default to local paths as before.

This doesn't solve the build portability problem yet.

jm18499 commented 6 months ago

I built llama.cpp using the same settings as the CUDA and CPU builds in the README and tested it on my system; it's working with CUDA, and I can use GPU offloading by itself. What do I need to copy to the plugin for it to work? I tried copying cudart.lib, cublas.lib, and cuda.lib to the cuda folder, editing the build.cs to enable it, and replacing ggml_static.lib and llama.lib with the ones from my build/release folder, but I am still getting the same error. I tried rebuilding the solution and deleting the plugin's binaries, but that didn't work either. I was also looking at a Unity version (LLMUnity); they use llamafile (https://github.com/Mozilla-Ocho/llamafile). Would that help this plugin too?

SlySeanDaBomb commented 3 months ago

I'm also having the same issue.

ellllie-42 commented 3 months ago

Did you use the tempfix associated with this issue?

SlySeanDaBomb commented 3 months ago

Did you use the tempfix associated with this issue?

I didn't know there was one. Where/what is it?

ellllie-42 commented 3 months ago

https://github.com/getnamo/Llama-Unreal/pull/10
