-
https://brandolosaria.medium.com/setting-up-metaais-code-llama-34b-instruct-model-fc009aa937f6
https://github.com/go-skynet/LocalAI
-
### Feature request
How can I use a local LLM, for example Llama-3-70B-Instruct, to evaluate prediction quality?
### Motivation
How can I use a local LLM to evaluate prediction quality? For …
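One way the request above could be approached is an LLM-as-judge loop against a locally served model. The sketch below is a minimal, hypothetical example assuming an Ollama server on its default port (`localhost:11434`) and a locally pulled model tag; the endpoint shape is Ollama's `/api/generate`, and the prompt wording is illustrative, not a prescribed rubric.

```python
import json
import urllib.request

# Assumptions: Ollama is running locally on its default port, and the
# model tag below has been pulled; adjust both to your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3:70b-instruct"

def build_judge_prompt(question: str, prediction: str, reference: str) -> str:
    """Build a grading prompt asking the judge model for a 1-5 score."""
    return (
        "You are grading a model's answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {prediction}\n"
        "Reply with a single integer from 1 (poor) to 5 (excellent)."
    )

def judge(question: str, prediction: str, reference: str) -> str:
    """Send the grading prompt to the local server and return the raw reply."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": build_judge_prompt(question, prediction, reference),
        "stream": False,  # ask for a single JSON response, not a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a server running, `judge("What is 2+2?", "4", "4")` would return the judge's score as text; parsing and averaging scores over a dataset is left to the caller.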
-
Hi,
Could you please add an option for code autocompletion, similar to the recently added GitHub Copilot support, but based on a local Ollama LLM?
Currently VS Code and JetBrains have such an option with Continue a…
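The requested feature could be sketched as a small client that sends the code before the cursor to a local Ollama server and trims the reply at a stop sequence. Everything here is an assumption for illustration: the default Ollama port, the model tag, and the choice of stop sequences.

```python
import json
import urllib.request

# Assumptions: a local Ollama server on the default port and a pulled
# code model; both are placeholders, not fixed requirements.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "codellama:7b-code"

def clip_completion(text: str, stop=("\n\n", "```")) -> str:
    """Keep only the completion text up to the first stop sequence."""
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

def complete(prefix: str) -> str:
    """Ask the local model to continue the code in `prefix`."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": prefix,
        "stream": False,  # single JSON reply keeps the sketch simple
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return clip_completion(json.loads(resp.read())["response"])
```

An editor integration would call `complete()` with the buffer up to the cursor and insert the returned suffix; streaming and fill-in-the-middle prompting are refinements beyond this sketch.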
-
This is great. It would be perfect if it also worked with a local LLM such as Phind, and not only with the OpenAI API.
IVIJL, updated 1 month ago
-
System Info
GPU: NVIDIA RTX 4090
TensorRT-LLM 0.13
root@docker-desktop:/llm/tensorrt-llm-0.13.0/examples/chatglm# python3 convert_checkpoint.py --chatglm_version glm4 --model_dir "/llm/other/mode…
-
### System Info
CPU x86_64
GPU NVIDIA L20
TensorRT branch: v0.13.0
CUDA: NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.5
### Who can help?
@kaiyux @byshiue
### Information…
-
- see also https://github.com/ObrienlabsDev/blog/issues/47
- see https://github.com/ObrienlabsDev/rag/issues/4
-
Hello,
First of all, thank you for your work on this library. I am using it to integrate a local LLM and I have encountered some strange behavior.
I would like to know if it is necessary to manu…
-
Description:
As a developer working on my project, one of the main challenges I've encountered is the limitation of relying on external language models, especially when I reach usage limits or encounte…
-
Ollama, for example