nomic-ai / gpt4all-chat

gpt4all-j chat

Add llmodel_setMlock function #212

Closed · kuvaus closed this 1 year ago

kuvaus commented 1 year ago

Adds an llmodel_setMlock function to llmodel_c.h:

/**
 * Sets mlock to force the system to keep the model in RAM.
 * @param model A pointer to the llmodel_model instance.
 * @param use_mlock true to keep the model pinned in RAM, false otherwise.
 */
void llmodel_setMlock(llmodel_model model, bool use_mlock);

Setting use_mlock=true noticeably speeds things up, at least on my machine, so I find it useful. Currently the use_mlock parameter is hidden behind the private pointer d_ptr->params.use_mlock in llamamodel.cpp. This function lets the main program change use_mlock directly.
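For background, use_mlock in llama.cpp ultimately pins the loaded model in physical RAM via POSIX mlock(2), so cold weight pages are never swapped out and re-read from disk. A minimal sketch of that mechanism (the helper names lock_in_ram/unlock_ram are illustrative, not part of llmodel_c.h):

```cpp
// Sketch of what use_mlock does under the hood: pin a memory range in
// physical RAM with POSIX mlock(2) so the pager cannot evict it.
// Helper names are illustrative, not part of the llmodel C API.
#include <sys/mman.h>
#include <cstddef>

// Try to pin [addr, addr+len) in RAM. Returns false if the system
// refuses, e.g. because RLIMIT_MEMLOCK is too low for this process.
static bool lock_in_ram(void *addr, std::size_t len) {
    return mlock(addr, len) == 0;
}

// Undo the pin; the pages become swappable again.
static bool unlock_ram(void *addr, std::size_t len) {
    return munlock(addr, len) == 0;
}
```

Without the pin, the OS may evict cold weight pages under memory pressure, which shows up as stalls on the next prompt while they are re-read from disk; that would account for the speedup observed here.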

Another way would be to add getters and setters for the d_ptr, but I think it's private for a reason.
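For comparison, that rejected alternative would look roughly like the following. This is a simplified sketch: LLamaPrivate and the field layout stand in for the real pimpl types in llamamodel.cpp and are not the actual definitions.

```cpp
// Hedged sketch of the getter/setter alternative: keep d_ptr private but
// expose the single flag through accessors. Names are simplified stand-ins
// for the real llamamodel.cpp types.
#include <memory>

struct LLamaPrivate {               // stand-in for the real pimpl struct
    struct { bool use_mlock = false; } params;
};

class LLamaModel {
public:
    LLamaModel() : d_ptr(new LLamaPrivate) {}
    // Setter flips the flag without handing out the private pointer.
    void setMlock(bool on) { d_ptr->params.use_mlock = on; }
    bool mlockEnabled() const { return d_ptr->params.use_mlock; }
private:
    std::unique_ptr<LLamaPrivate> d_ptr;
};
```

A dedicated C-API function, as in this PR, keeps the pimpl fully opaque, whereas per-field accessors would grow with every tunable parameter.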

The use_mlock feature currently exists only in the llamamodel params struct, so there is no corresponding implementation for GPTJ in gptj.cpp.

manyoso commented 1 year ago

I wonder if what we need to do here is detect if mlock is available and just set it if it is? Can you look into this?
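One possible shape for that check, sketched here as an assumption rather than what the repo ended up doing: the POSIX feature-test macro _POSIX_MEMLOCK_RANGE only says the mlock API exists, so a trial lock on one page is also needed to confirm the process is actually allowed to use it.

```cpp
// Sketch of runtime mlock detection. The feature-test macro proves the
// API compiles; a trial mlock on a single page proves this process may
// use it (RLIMIT_MEMLOCK permitting). Function name is hypothetical.
#include <unistd.h>
#include <cstdlib>
#if defined(_POSIX_MEMLOCK_RANGE)
#include <sys/mman.h>
#endif

static bool mlock_usable() {
#if defined(_POSIX_MEMLOCK_RANGE)
    long page = sysconf(_SC_PAGESIZE);
    if (page <= 0)
        return false;
    // Page-aligned allocation: POSIX allows mlock to require alignment.
    void *p = nullptr;
    if (posix_memalign(&p, static_cast<std::size_t>(page),
                       static_cast<std::size_t>(page)) != 0)
        return false;
    bool ok = mlock(p, static_cast<std::size_t>(page)) == 0;
    if (ok)
        munlock(p, static_cast<std::size_t>(page));
    free(p);
    return ok;
#else
    return false;   // platform has no mlock at all
#endif
}
```

With a probe like this, the loader could default use_mlock on wherever the trial succeeds and silently fall back to normal paging elsewhere, instead of exposing the knob to every caller.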