ngxson / wllama

WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
https://huggingface.co/spaces/ngxson/wllama
MIT License

Error: Module is already initialized #123

Open flatsiedatsie opened 1 month ago

flatsiedatsie commented 1 month ago

I'm trying to keep the Wllama instance alive (not setting it to null) when it's not needed, or when I want to load another model.

However, I'm running into the error above if a model is already loaded.

I've tried unloading the existing model first, but that doesn't seem to cut it.

if (typeof window.llama_cpp_app.loadModelFromUrl === 'function') {

    // If a model is already loaded, try to unload it before loading the new one
    if (typeof window.llama_cpp_app.isModelLoaded === 'function') {
        let a_model_is_loaded = await window.llama_cpp_app.isModelLoaded();
        console.warn("WLLAMA: need to unload a model first?: ", a_model_is_loaded);
        if (a_model_is_loaded && typeof window.llama_cpp_app.unloadModel === 'function') {
            console.log("wllama: unloading old loaded model first");
            await window.llama_cpp_app.unloadModel();
        }
    }

    // Still throws "Module is already initialized" here
    await window.llama_cpp_app.loadModelFromUrl(model_url, model_settings);
}
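
If unloadModel() alone doesn't clear the initialized module, one possible workaround is to tear the old instance down completely and construct a fresh Wllama before loading the next model. The sketch below assumes wllama's exit() method frees the wasm module as documented; the CONFIG_PATHS shape follows the wllama README, but the exact paths and the swapModel helper name are illustrative, not part of the original post:

import { Wllama } from '@wllama/wllama';

// Paths to the wasm binaries; keys/values follow the wllama README
// but may differ between versions - adjust to your setup.
const CONFIG_PATHS = {
    'single-thread/wllama.wasm': './esm/single-thread/wllama.wasm',
    'multi-thread/wllama.wasm': './esm/multi-thread/wllama.wasm',
};

async function swapModel(model_url, model_settings) {
    // Fully dispose of the old instance instead of reusing it, since reusing
    // the already-initialized module is what triggers the error.
    if (window.llama_cpp_app) {
        await window.llama_cpp_app.exit();
    }
    // A fresh instance re-initializes the wasm module on the next load.
    window.llama_cpp_app = new Wllama(CONFIG_PATHS);
    await window.llama_cpp_app.loadModelFromUrl(model_url, model_settings);
}

The trade-off is that exit() discards all module state, so anything cached by the old instance (tokenizer state, KV cache) is lost and the wasm module is re-initialized from scratch on the next load.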