-
### System Info
It is using the versions downloaded by `pip install` during the llama stack build.
I have an NVIDIA GPU.
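To pin down exactly which versions the build pulled in, a quick check along these lines can help (the package names below are assumptions; adjust to whatever the build installs):
```python
# Print the installed versions pip pulled in during the build.
# Package names here are placeholders/assumptions.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("llama-stack", "torch", "transformers"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```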
### Information
- [X] The official example scripts
- [ ] My own modified…
-
I have been following the proposal, and after doing some research I wanted to confirm that the status information I have found is accurate.
This is what I found so far after searching on some of the engine/t…
-
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference with a Llama 3 model. I don't know how to integrate it with vLLM…
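For a first pass, a minimal offline-inference sketch with vLLM's Python API looks like the following (assuming `vllm` is installed and you have access to a Llama 3 checkpoint; the model name below is an assumption, substitute your own path or Hugging Face ID):
```python
# Minimal vLLM offline-inference sketch.
# Model name is an assumption; replace with your checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(["Explain what vLLM does in one sentence."], sampling)
for out in outputs:
    print(out.outputs[0].text)
```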
-
### Description
On our app detail page, we need to be able to edit app-specific metadata.
Something should already be there, but the change needs to be made where we pull in the fields that…
-
### Prerequisites
- [X] I have carried out troubleshooting steps and I believe I have found a bug.
- [X] I have searched for similar bugs in both open and closed issues and cannot find a duplicate.
…
-
Symptom:
```
$ python3 runner.py
...
DEBUG | All tests have been successfully completed.
Traceback (most recent call last):
  File "runner.py", line 158, in <module>
    main()
  File "runner.py…
```
-
I want to display the user role of the queried user through Dynamic Field https://prnt.sc/NPkiGhgLoNJ3 or Dynamic Tags https://prnt.sc/wbU0uvGJAGM1. Currently, I have achieved this with a custom solut…
-
See #45.
-
### What is the issue?
After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage to load.
```
ollama run llama3.2
Error: llama runner process has terminated: c…
```
-
When Redis has no memory left, these logs are printed.
```
2021/12/23 15:23:38.567529 juicefs[25228] <ERROR>: error: EXECABORT Transaction discarded because of previous errors.
github.com/juicedata/jui…
```
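As a quick sanity check that Redis really is at its memory limit, something like this sketch with the standard redis-py client compares used memory against the configured cap (host and port are assumptions; adjust for your deployment):
```python
# Diagnostic: compare Redis used memory against its configured limit.
# Connection details are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)
info = r.info("memory")
print("used_memory:", info["used_memory_human"])
print("maxmemory:  ", info["maxmemory_human"])
# Once writes start failing with OOM, commands queued in a MULTI/EXEC
# transaction error at queue time, and EXEC aborts with EXECABORT,
# matching the juicefs log above.
```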