-
The run log is as follows:
~/llm/demo/server_demo$ sudo python flask_server.py --target_platform rk3588 --rkllm_model_path ../../model/minicpm.rkllm
=========init....===========
rkllm-runtime version: 1.0.1, rknpu d…
-
The addon already has a way to configure API keys; in this case, we should add an `API key` and an `endpoint` that can be queried using the OpenAI v1 API, as well as the `model` to be used for comment pro…
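For reference, an OpenAI-v1-compatible endpoint is usually queried by POSTing to its `/chat/completions` route with a bearer token. A minimal sketch of the request such a configuration would produce; the key, endpoint URL, and model name below are placeholders, not values from the addon:

```python
import json
from urllib.request import Request

API_KEY = "sk-placeholder"             # hypothetical API key
BASE_URL = "http://localhost:8000/v1"  # hypothetical endpoint
MODEL = "my-local-model"               # hypothetical model name


def build_chat_request(prompt: str) -> Request:
    """Build (but do not send) a POST against the OpenAI-v1 chat route."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("Summarize this comment.")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Any OpenAI SDK or HTTP client that accepts a custom base URL can issue the same request; the three values above are exactly what the addon would need to expose as settings.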
-
### Title
openai.BadRequestError: Error code: 400 - {'error': {'code': 'invalid_type', 'param': 'messages[0].content', 'message': "Invalid type for 'messages[0].content': expected one of a string or …
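The 400 above says `messages[0].content` must be either a string or an array of content parts. A minimal illustration of the two accepted shapes, assuming the standard Chat Completions format; the validator function is our own sketch, not SDK code:

```python
# Check whether a value matches the shapes Chat Completions accepts for
# a message's "content" field: a plain string, or a list of typed parts.
def content_is_valid(content) -> bool:
    if isinstance(content, str):
        return True
    if isinstance(content, list):
        # Each part must be an object with a "type" field,
        # e.g. {"type": "text", "text": "..."}.
        return all(isinstance(p, dict) and "type" in p for p in content)
    return False


print(content_is_valid("hello"))                           # True  (plain string)
print(content_is_valid([{"type": "text", "text": "hi"}]))  # True  (parts array)
print(content_is_valid({"text": "hi"}))                    # False (bare object -> 400)
```

The error in the title is the third case: a bare object was passed where the API expected one of the first two shapes.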
-
### Version
v1.10.0
### Describe the bug
Platform: MacOS
**Steps to reproduce:**
1. Have Cody plugin installed in your VS Code editor
2. Login to Cody with Enterprise user
3. Start a new ch…
-
Update: I used to run ollama on this Chromebook when tinyllama came out, and it ran great.
### What is the issue?
![image](https://github.com/ollama/ollama/assets/13264408/e37d1a70-8d92-4281-88…
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC …
-
We are trying to fine-tune ChatGLM-6B using LoRA on an Arc A770 with 1 card and with 2 cards, using the following commands.
1 card:
```
python ./alpaca_lora_finetuning.py \
--base_model "/home/intel/models/chat…
-
### Error Description
I am encountering the error, `Native API returns: -30 (PI_ERROR_INVALID_VALUE)`, when trying to run llama.cpp with the latest IPEX-LLM, following the official quickstart guide o…
-
### Feature request
Please make it more obvious to the user that compilation and export of models must be done on Neuron hardware.
Please put this statement "An important restriction is that L…
-
### Summary
Hello all!
**Problem**
AFAICS, the current implementation does not have OpenAI Function Calling support. This would be a fantastic, powerful, and much needed feature. Almost any serio…
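For context, OpenAI-style function calling carries a `tools` array alongside `messages`, which is the shape such support would need to accept. A minimal sketch of the request body; the model name and the tool schema are illustrative only:

```python
import json


def build_tools_payload(user_msg: str) -> dict:
    """Build an OpenAI-style chat request that advertises one callable tool."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }


payload = build_tools_payload("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

A compliant server would respond with a `tool_calls` entry naming the function and its JSON arguments, which the client then executes and feeds back as a `role: "tool"` message.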