-
Version affected: current version v0.7.1 (main)
I initially assumed the issue was with my system (outdated NVIDIA drivers, CUDA, etc.), but after trying on 4 separate machines running different mixes …
-
GPU memory usage continues to increase after each round while fine-tuning an LLM with an adapter. The GPU memory increment after each round is approximately the same. I speculate it's because th…
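The telltale sign described above is a near-constant per-round increment. A minimal sketch of how one might confirm that pattern from memory samples (a CPU-side analogy; with PyTorch you would sample `torch.cuda.memory_allocated()` after each round instead — `detect_steady_growth` and the sample numbers are hypothetical, not from the original report):

```python
def detect_steady_growth(samples, tolerance=0.1):
    """Return True if the memory samples grow by a near-constant increment,
    which suggests a per-round leak rather than normal fluctuation."""
    if len(samples) < 3:
        return False
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    if any(d <= 0 for d in deltas):
        return False  # memory plateaued or shrank at some point: not a steady leak
    mean = sum(deltas) / len(deltas)
    return all(abs(d - mean) <= tolerance * mean for d in deltas)

# Hypothetical readings (MiB) taken after each fine-tuning round.
leaking = [1000, 1150, 1302, 1449, 1601]   # roughly +150 MiB every round
stable  = [1000, 1150, 1150, 1151, 1150]   # warms up, then flat

print(detect_steady_growth(leaking))  # → True
print(detect_steady_growth(stable))   # → False
```

If the increment is constant, the usual suspects are tensors (losses, activations, optimizer state) kept alive across rounds rather than fragmentation.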
-
### Description
I get an error: `Method not found: 'Double Microsoft.KernelMemory.AI'`
### Reproduction Steps
Repeating the example, except replacing the document with text:
https://github.com/SciSha…
-
On the Firefly board:
The default operating mode of the CPU is `interactive`, with a frequency of 408000. The default operating mode of the NPU is `rknpu_ondemand`, with a frequency of 1000000000. The defaul…
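The two raw numbers above are in different units: cpufreq sysfs values are conventionally kHz, while the rknpu value looks like plain Hz. A small sketch to render both as human-readable frequencies, under that unit assumption (`human_freq` is a hypothetical helper, not part of any board SDK):

```python
def human_freq(value, unit_hz):
    """Convert a raw frequency reading to a human-readable string.

    value    -- the number as reported by the driver
    unit_hz  -- how many Hz one unit of `value` represents
    """
    hz = value * unit_hz
    for factor, suffix in ((1e9, "GHz"), (1e6, "MHz"), (1e3, "kHz")):
        if hz >= factor:
            return f"{hz / factor:g} {suffix}"
    return f"{hz:g} Hz"

# cpufreq sysfs reports kHz; the rknpu driver value appears to be plain Hz.
print(human_freq(408000, 1_000))  # CPU → 408 MHz
print(human_freq(1000000000, 1))  # NPU → 1 GHz
```

So the defaults reported above correspond to a 408 MHz CPU clock and a 1 GHz NPU clock.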
-
### What is the issue?
```
~$ nvidia-smi
Fri May 24 09:41:47 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04 …
```
-
### System Info
- GPU Name: T4 X2
- System Ram: 30GB
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Reproducti…
-
### System Info
gpu:
```
nvidia-smi
Mon Apr 22 17:00:40 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08 …
```
-
Whether using `ollama run qwen2:7b` or curl mode, both return the error `llama runner process has terminated exit status 1`.
![微信图片_20240716184524](https://github.com/user-attachments/assets/ba6ee0cb-6e30-4cfd-ba5c-0c4fd5a7446a)
![微信图…
-
```
❯ magic-cli config list
Field: llm
Value: "openai"
Description: The LLM to use for generating responses. Supported values: "ollama", "openai"
Field: ollama.base_url
Value: "http://localhos…
```
-
### System Info
CPU Architecture: x86_64
CPU/Host memory size: 1024Gi (1.0Ti)
GPU properties:
GPU name: NVIDIA GeForce RTX 4090
GPU mem size: 24Gb…