-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
When using Ollama, an error is raised saying there is no llama_index.core.llms.function_calling module.
Code:
from llama_index.llms.o…
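A common cause of this import error is a version mismatch between llama-index-core and the separately packaged Ollama integration; a hedged fix (package names assumed from the standard llama-index split-package layout) is to upgrade both together:

```shell
# Upgrade core and the Ollama integration in lockstep; mismatched
# versions are the usual cause of missing llama_index.core.* modules.
pip install -U llama-index-core llama-index-llms-ollama
```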
-
### System Info
- NVIDIA A100 80G * 2
- Libraries
- TensorRT-LLM: 0.11.0.dev2024052800
- Driver Version: 525.105.17
- CUDA Version: 12.4
### Who can help?
@byshiue @schetlur-nv
##…
-
### What happened?
I encountered an issue while loading a custom model in llama.cpp after converting it from PyTorch to GGUF format. Although the model was able to run inference successfully in PyTor…
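For reference, a typical conversion-and-smoke-test path looks like the sketch below (assuming the stock llama.cpp tooling; the script name and flags are from recent llama.cpp, and the model path is a placeholder):

```shell
# Convert a Hugging Face / PyTorch checkpoint to GGUF, then try loading it;
# an explicit --outtype often surfaces dtype issues that only appear at load time.
python convert_hf_to_gguf.py /path/to/hf-model --outfile model.gguf --outtype f16
./llama-cli -m model.gguf -p "hello" -n 16
```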
-
Hugging Face Hub login successful
Used the gemma2-27b LLM for testing:
cargo run --release -- -m "google/gemma-2-27b-it" -c
Finished release [optimized] target(s) in 0.03s
Running `target/re…
-
### **Is your feature request related to a problem? Please describe.**
PyRIT currently lacks built-in support for easily using and comparing multiple LLM providers. This makes it challenging for user…
-
### Background / context
- I was originally following the install instructions for Mac at https://simonwillison.net/2023/Aug/1/llama-2-mac/ - yeah, I should have spotted that this was an older post....bu…
-
## Server test code is as follows:
```python
llm_end_point_url = "http://172.16.21.155:8000/v1/"
model = ChatOpenAI(model="glm4v-9b", base_url=llm_end_point_url, api_key="api_key")
### embedding ###
embe…
-
### Feature request
Does any documentation exist, or would it be possible to add documentation, on how to use the TensorRT-LLM backend? #2458 makes mention that the TRT-LLM backend exists, and I can …
-
- [ ] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
"ValueError: a cannot be empty unless no samples are taken"
…
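This exact message comes from NumPy's sampler when it is asked to draw from an empty sequence; a plausible minimal reproduction (assuming the ragas error bubbles up from such a call, e.g. sampling over an empty evaluation dataset or an empty list of generated statements):

```python
import numpy as np

# Minimal reproduction: drawing a sample from an empty sequence
# raises this ValueError inside NumPy's Generator.choice.
rng = np.random.default_rng(0)
try:
    rng.choice([], size=1)
    msg = ""
except ValueError as e:
    msg = str(e)
print(msg)
```

If this is the source, the fix is usually upstream of ragas: make sure the dataset passed to evaluation is non-empty before sampling.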