-
### System Info
```json
{
  "version": 1,
  "context_length": 32000,
  "model_name": "InternVL2-Llama3-76B-AWQ",
  "model_lang": [
    "en",
    "zh"
  ],
  "model_ability"…
```
-
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
After #16877, any prompt to ollama with llama3.1:latest crashes the ollama runner (ROCm). This does…
-
**What causes the issue:**
Run 01, specifying any non-OpenAI server host and API key.
**Expected:**
Be able to connect to other services like Groq, Anthropic, OpenRouter, etc., as they seem to be working …
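Providers such as Groq and OpenRouter expose OpenAI-compatible chat-completions endpoints, so in principle a client only needs a configurable base URL and API key. A minimal stdlib sketch of assembling such a request (the host, key, and model names below are illustrative, not taken from this report, and no network call is made):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble an OpenAI-compatible chat-completions request.

    Returns (url, headers, body) ready to hand to any HTTP client,
    so the same code targets OpenAI, Groq, OpenRouter, etc.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Illustrative values only:
url, headers, body = build_chat_request(
    "https://openrouter.ai/api/v1", "sk-example", "llama-3-70b", "hi")
print(url)  # https://openrouter.ai/api/v1/chat/completions
```

Only the base URL and key change between providers; the payload shape stays the same.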
-
### System Info
I am running on an A100 with 40 GB of GPU memory.
### Who can help?
@SunMarc and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scri…
-
![image](https://github.com/BasedHardware/OpenGlass/assets/55334914/ee3d2512-1cd0-4421-9e2d-bde1d781a0d3)
-
ERROR: ValueError: Target modules llm\..*layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj|o_proj) not found in the base model. Please check the target modules and try again.
thrown at the line PeftModel…
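This error means the `target_modules` regex never matched any module name in the loaded base model; the pattern anchors names under an `llm.` prefix, and if the model registers its layers under a different prefix, nothing matches. A minimal stdlib sketch with illustrative module names (in practice the real names come from iterating `model.named_modules()`):

```python
import re

# Hypothetical module names for illustration; a real check would
# print names from model.named_modules() on the loaded base model.
module_names = [
    "language_model.model.layers.0.self_attn.q_proj",
    "language_model.model.layers.0.self_attn.k_proj",
]

# The pattern from the error requires the "llm." prefix, which does
# not match names prefixed "language_model." here.
pattern = re.compile(r"llm\..*layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj|o_proj)")
matches = [n for n in module_names if pattern.fullmatch(n)]
print(matches)  # [] -> PEFT raises "Target modules ... not found"

# Relaxing the prefix makes the same modules match:
fixed = re.compile(r".*layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj|o_proj)")
print([n for n in module_names if fixed.fullmatch(n)])
```

Inspecting the actual prefix and adjusting the regex (or passing a plain list of suffixes like `["q_proj", "k_proj", "v_proj", "o_proj"]`) is the usual fix.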
-
# Trending repositories for C#
1. [**microsoft / fluentui-blazor**](https://github.com/microsoft/fluentui-blazor)
__Microsoft Fluent UI Blazor components library. For use with .NE…
-
I'm trying to serve InternVL2 (Llama3 76B) using lmdeploy, following the example [here](https://internvl.readthedocs.io/en/latest/internvl2.0/deployment.html#serving-with-openai-compatible-server), on 4 A100-8…
-
### Describe the bug
When using ollama with the model llama3:70b, the generated Python code is returned with a stray "`" (backtick), which prevents the code from being executed.
### Reproduce
1. Run the interpreter with this command: `interpreter --m…`
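A common workaround for this class of bug is to sanitize the model's output before executing it, stripping markdown fences and stray backticks that some models wrap around code. A hedged sketch of such a helper (illustrative, not part of the interpreter's own codebase):

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove markdown code fences and stray backticks that some
    models wrap around generated code, which would otherwise break
    execution when the text is run verbatim."""
    text = text.strip()
    # Drop an opening fence with an optional language tag.
    text = re.sub(r"^```[a-zA-Z0-9_+-]*\n?", "", text)
    # Drop a trailing closing fence.
    text = re.sub(r"\n?```$", "", text)
    # Finally remove any leftover single backticks at the edges.
    return text.strip("`").strip()

print(strip_code_fences("```python\nprint('hi')\n```"))  # print('hi')
```

Applied just before execution, this makes the runner tolerant of models that emit fenced output.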
-
Currently, providing a model is a required argument:
```
python vision.py
usage: vision.py [-h] -m MODEL [-b BACKEND] [-f FORMAT] [-d DEVICE] [--device-map DEVICE_MAP]
                 [--max-memo…
```
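With `argparse`, making the model optional is a matter of replacing `required=True` with a `default`. A minimal sketch of the relaxed CLI (the flag names mirror the usage output above, but the default value is purely illustrative):

```python
import argparse

# Hypothetical sketch: give -m/--model a default instead of
# required=True, so vision.py can run with no arguments.
parser = argparse.ArgumentParser(prog="vision.py")
parser.add_argument("-m", "--model", default="auto",  # was required=True
                    help="model id; falls back to 'auto' when omitted")
parser.add_argument("-b", "--backend", default=None)

args = parser.parse_args([])  # simulate running with no -m supplied
print(args.model)  # auto
```

Explicit `-m some/model` still overrides the default, so existing invocations keep working.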