-
Hi, may I ask which model the llama3-70b Ins entry in the LiveCodeBench ranking corresponds to: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct or https://huggingface.co/codellama/CodeLlama-7b-Ins…
-
### What happened + What you expected to happen
- We are trying a Ray cluster across GPU nodes (A10s) to use vLLM
- The head node is an A10*4 bare-metal VM with Oracle Linux 8
- Node 1 is an A10*2 bare me…
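For a multi-node setup like the one described, a minimal sketch of joining the nodes into one Ray cluster before launching vLLM (the head-node IP and port are placeholders):

```shell
# On the head node (A10*4); IP and port are hypothetical.
ray start --head --port=6379

# On each worker node (A10*2), pointing at the head node's address:
ray start --address=10.0.0.1:6379

# Verify that all GPUs across both nodes are visible before starting vLLM:
ray status
```

vLLM can then be launched from the head node with `distributed_executor_backend="ray"`; note that `tensor_parallel_size` must evenly divide the model's attention-head count, so it may not simply equal the total GPU count.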
-
### 🐛 Describe the bug
I am using the code in examples/language/llama2 to pretrain llama2-70b. Running benchmark.py directly via gemini.sh succeeds, but I want to do incremental pretraining starting from the already-trained model. The training arguments are identical to those given in gemini.sh; I only modified the following code to load the existing model:
with init_ctx:
# model = L…
-
Let's try to rethink our analysis methods using Hugging Face transformers
-
I'm getting the following error when trying to use Ollama running on another machine:
```
2024-07-25 09:25:14,882 [ 612290] SEVERE - #com.devoxx.genie.service.PromptExecutionService - Error occurred…
```
-
**Describe the bug**
I'm seeing this error in my ollama server.log with every auto-complete request from twinny. Auto complete does appear to be working and giving valid completion suggestions, but i…
-
### How are you running AnythingLLM?
AnythingLLM desktop app
### What happened?
I sent a message in a new workspace after configuring to use the only installed model: codellama 7b from provider Any…
-
### Before submitting your bug report
- [ ] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
Wondering if this is a config issue or something else? I.e., are any of the additional model files that are downloaded alongside the 38 GB main file corrupted in any way?
Ollama is running via WSL on Windows.
…
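When Ollama runs inside WSL, its HTTP API is normally still reachable from Windows via localhost forwarding. A quick sanity check is to query `/api/tags`, which lists the installed models; a minimal sketch, assuming the default port and that `OLLAMA_HOST` has not been changed:

```python
import json
import urllib.request

# Ollama's HTTP API listens on port 11434 by default; with WSL2,
# localhost forwarding usually makes it reachable from Windows too.
OLLAMA_BASE = "http://localhost:11434"  # adjust if OLLAMA_HOST is set

def list_models_request() -> urllib.request.Request:
    """Build (but do not send) a request for /api/tags, which lists installed models."""
    return urllib.request.Request(f"{OLLAMA_BASE}/api/tags")

req = list_models_request()
# with urllib.request.urlopen(req) as r:          # uncomment on a running server
#     print(json.loads(r.read())["models"])
```

If the model list comes back but completions misbehave, the issue is more likely the model blobs than connectivity.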
-
https://blog.perplexity.ai/blog/introducing-pplx-api
Perplexity is optimized for Q&A and live web research, so perhaps it's a better backend for the ask command.
I use their consumer facing product …
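The pplx-api announced in that post exposes an OpenAI-compatible chat-completions endpoint, so wiring it up as an alternate backend is mostly a matter of swapping the URL and key. A minimal sketch that builds the request without sending it; the model id is an assumption and should be checked against the current docs, and the API key is a placeholder:

```python
import json
import urllib.request

def build_pplx_request(api_key: str, question: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request to the pplx-api."""
    payload = {
        "model": "pplx-7b-online",  # assumed model id; verify against current docs
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        "https://api.perplexity.ai/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_pplx_request("PPLX_API_KEY", "What changed in vLLM 0.4?")
# urllib.request.urlopen(req)  # uncomment with a real API key
```

Because the request shape matches OpenAI's, an existing OpenAI-client code path can usually be pointed at this endpoint with only the base URL and key changed.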