Closed rd4cake closed 4 days ago
@jacoblee93 We were wondering if you could take a look at what we have so far for our implementation; it would be greatly appreciated.
CC @nigel-daniels
Just one merge to main should be fine! Each time you do it, CI has to run again.
[Work in Progress]
Draft PR to address the following issue: https://github.com/langchain-ai/langchainjs/issues/6994. Using an asynchronous function to load models from node-llama-cpp breaks adherence to the default way of instantiating components (chat models, embeddings, LLMs, etc.).
Here is an example:
Before, with node-llama-cpp v2:
After, with node-llama-cpp v3:
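For context, here is a minimal sketch of the instantiation-pattern change being discussed. The class and method names below are illustrative stand-ins, not the actual langchainjs or node-llama-cpp API: v2 allowed everything to happen in a synchronous constructor, while v3's async model loading (e.g. an awaited loader like node-llama-cpp's `getLlama()`) forces a static async factory instead.

```typescript
// v2-style: model handle can be created synchronously in the constructor.
class LlamaCppV2Style {
  modelPath: string;
  constructor(fields: { modelPath: string }) {
    this.modelPath = fields.modelPath;
  }
}

// v3-style: the model handle comes from an async loader, so construction is
// split into a private constructor plus a static async initialize() factory.
class LlamaCppV3Style {
  modelPath: string;
  private constructor(fields: { modelPath: string }) {
    this.modelPath = fields.modelPath;
  }
  static async initialize(fields: { modelPath: string }): Promise<LlamaCppV3Style> {
    await Promise.resolve(); // stand-in for awaiting the async model load
    return new LlamaCppV3Style(fields);
  }
}

async function main() {
  const v2 = new LlamaCppV2Style({ modelPath: "model.gguf" }); // sync, v2 style
  const v3 = await LlamaCppV3Style.initialize({ modelPath: "model.gguf" }); // async, v3 style
  console.log(v2.modelPath === v3.modelPath); // true
}
main();
```

The practical consequence is that callers can no longer write `new LlamaCpp(...)` directly; they must `await` a factory, which is the deviation from the default component-instantiation pattern the issue describes.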
Question for the maintainers: our implementation changes the way the LlamaCpp model is instantiated. Is this fine moving forward?