-
Blog
https://medium.com/@obrienlabs/running-the-70b-llama-2-llm-locally-on-metal-via-llama-cpp-on-mac-studio-m2-ultra-32b3179e9cbe
https://www.linkedin.com/posts/michaelobrien-developer_running-70b-…
-
I trained a Llama2-3B model using OpenRLHF and it trained fine. But when I switched to the 7B version of the model, I had to move to multiple nodes and encountered this error. After contacting the sup…
-
### Version
1.2.1
### Describe the bug
Using Cody with a user-selected 'unstable-openai' model, I entered the URL and key for my local vLLM or local Ollama server running my model, both with…
-
### Motivation
I only see support for AWQ; there is no discussion of GPTQ.
### Related resources
_No response_
### Additional context
_No response_
-
Hi, thank you for sharing the code of your interesting research.
I have a question about how to adapt the Bayesian optimization method for the human-eval task. It seems like it only has a test set…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
### What happened + What you expected to happen
- We are trying a Ray cluster across GPUs (A10s) to use vLLM
- Head node is of type A10*4 Bare Metal VM with Oracle Linux 8
- Node 1 is of A10*2 Bare me…
-
I've been playing around with Ollama, running it locally on my Linux machine. Running `codellama` and using the Continue VS Code extension, I'm able to generate code with it.
https://ollama.com/download
ht…
-
Hello! I'm opening this issue for the problem "DPO loss remains 0.6931 from the first step and the rewards stuck at 0.0". The problem was originally raised in #1311, but now I can't find a solution for this a…
virt9 updated
2 months ago
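A loss stuck at exactly 0.6931 is a telling symptom: 0.6931 ≈ ln 2 = -log σ(0), which is the sigmoid DPO loss when the policy and reference log-probability ratios are identical (e.g. the policy never diverges from the reference, or gradients are not flowing). A minimal sketch of the standard DPO loss for one preference pair, with illustrative log-probabilities and a hypothetical `beta=0.1`, shows this:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sigmoid DPO loss for a single (chosen, rejected) pair."""
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    margin = beta * (pi_logratio - ref_logratio)
    # loss = -log(sigmoid(margin))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy identical to reference: margin is 0, so the loss is
# -log(0.5) = ln 2 ≈ 0.6931 and both implicit rewards
# (beta * log-ratio differences) are exactly 0.
print(round(dpo_loss(-12.0, -15.0, -12.0, -15.0), 4))  # 0.6931

# Once the policy actually moves, the loss drops below ln 2.
print(dpo_loss(-11.0, -16.0, -12.0, -15.0) < math.log(2))  # True
```

So a constant 0.6931 with zero rewards usually means the policy and reference outputs coincide on every batch, pointing at a wiring issue (frozen weights, wrong model passed as reference, or a detached graph) rather than at the loss itself.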
-
### 🐛 Describe the bug
I pretrained llama2-70b using the code in examples/language/llama2. Running benchmark.py directly via gemini.sh succeeds, but I want to do continued pretraining from the trained model. The training arguments are identical to those given in gemini.sh; I only modified the following code to load the existing model:
with init_ctx:
# model = L…