Atome-FE / llama-node

Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; runs locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
https://llama-node.vercel.app/
Apache License 2.0

update: upgrade llm to 0.2.0-dev #86

Closed · fardjad closed this 1 year ago

fardjad commented 1 year ago

This PR updates the `llm` dependency to 0.2.0-dev (pinned at https://github.com/rustformers/llm/tree/a5b9365a57c23dffa543a1c07416add160fd0b0a). It also adds MPT to the list of supported model types.
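
For context, a minimal sketch of what adding an MPT entry to a model-type enum might look like on the Rust side. The `ModelType` name, its variants, and the string identifiers here are illustrative assumptions, not the actual llama-node API; the real binding maps such a value onto the loaders exposed by rustformers/llm.

```rust
use std::str::FromStr;

/// Illustrative model-type enum (not the actual llama-node type).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ModelType {
    Llama,
    GptJ,
    GptNeoX,
    Bloom,
    Gpt2,
    /// Newly added alongside the llm 0.2.0-dev upgrade.
    Mpt,
}

impl FromStr for ModelType {
    type Err = String;

    /// Parse the string identifier a JS-facing config might use.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "llama" => Ok(Self::Llama),
            "gptj" => Ok(Self::GptJ),
            "gptneox" => Ok(Self::GptNeoX),
            "bloom" => Ok(Self::Bloom),
            "gpt2" => Ok(Self::Gpt2),
            "mpt" => Ok(Self::Mpt),
            other => Err(format!("unknown model type: {other}")),
        }
    }
}

fn main() {
    let parsed: ModelType = "mpt".parse().expect("mpt should be a known model type");
    println!("parsed model type: {parsed:?}");
}
```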

I must admit that I'm very new to this project (and to Rust in general). Please feel free to apply your own changes on top of this branch.

It looks like convert_pth_to_ggml has been removed from llm, and I'm not sure what to do here:

https://github.com/Atome-FE/llama-node/pull/86/files#diff-734e579940800965103740299b6c251cccc5e2c08e6d172b582feb6babd1a66fR73

hlhr202 commented 1 year ago

@fardjad Hi, could you have a look and fix the Clippy error? Thanks!

fardjad commented 1 year ago

Investigating the Clippy error now.

Sure! Clippy should be happy now.
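
For reference, Clippy can be run locally with `cargo clippy --all-targets -- -D warnings`, which fails the build on any lint. A minimal sketch of the kind of mechanical rewrite it typically asks for follows; it is illustrative only, since the thread does not show which lints were actually flagged on this branch.

```rust
// Illustrative only: code Clippy commonly flags (`clippy::ptr_arg` and
// `clippy::needless_return`) next to its idiomatic form.

// Before: takes `&String` and uses an explicit `return`.
fn model_label_verbose(name: &String) -> String {
    return format!("model: {}", name);
}

// After: takes `&str` and returns the expression directly.
fn model_label_idiomatic(name: &str) -> String {
    format!("model: {name}")
}

fn main() {
    let name = String::from("mpt");
    assert_eq!(model_label_verbose(&name), model_label_idiomatic(&name));
    println!("{}", model_label_idiomatic(&name));
}
```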