### What is the issue?
Currently working on a project where we are integrating with LLMs, using Ollama with the phi3:mini model in a container as a local testing environment. The project was initia…
-
### Description
I'm testing out the experimental "provider registry" and I've found that if the model ID contains a `:`, this package removes everything before the LAST `:`, which sends the wrong mod…
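A minimal sketch of the suspected behavior (function names are hypothetical, not the package's actual code): splitting on the *last* `:` strips the provider prefix *and* swallows the Ollama-style tag, whereas splitting on the *first* `:` keeps the full model name intact.

```python
def split_provider_buggy(model_id: str) -> str:
    # Suspected behavior: everything before the LAST ':' is discarded,
    # so "ollama:phi3:mini" collapses to just "mini".
    return model_id.rsplit(":", 1)[-1]

def split_provider_fixed(model_id: str) -> tuple[str, str]:
    # Splitting on the FIRST ':' separates the provider prefix from the
    # model name while keeping the tag: ("ollama", "phi3:mini").
    provider, _, model = model_id.partition(":")
    return provider, model

print(split_provider_buggy("ollama:phi3:mini"))   # -> mini
print(split_provider_fixed("ollama:phi3:mini"))   # -> ('ollama', 'phi3:mini')
```

With tag-style model IDs, only the first-`:` split preserves the name that Ollama actually expects.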
-
```
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.llms import Ollama
from crewai_tools import BaseTool
# Initialize the Ollama LLM with the specific model
…
```
-
`python llama.cpp/convert-hf-to-gguf.py --outtype f16 --outfile /content/Phi-3-small-128k-instruct.f16.gguf /content/Phi-3-small-128k-instruct`
```
INFO:hf-to-gguf:Loading model: Phi-3-small-128k-…
```
-
Is it possible to create a MemGPT feature and make it available to all agents, rather than having a separate agent like the one discussed in #530?
-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
Description:
When loading FP8 quantized models with merged linear modules (e.g., Phi…
-
Hi David, I guess I only need to replace config.xml under the run/ directory and execute job.sh to run BioCro regional on ROGER. But these files and folders are generated by RStudio from the VM; if we wou…
-
### What is the issue?
Ollama is failing to run on the GPU; instead it uses the CPU. If I force it with `HSA_OVERRIDE_GFX_VERSION=9.0.0`, then I get `Error: llama runner process has terminated: signal: abo…
-
### What happened?
Qwen2-72B-Instruct Q4_K_M generates output with random tokens (numbers, special symbols, random chunks of words from different languages, etc.).
This has been tested on:
1) Tesla P…
-
### What is the issue?
Regardless of what's in the Modelfile, it seems phi3 doesn't take in the SYSTEM prompt at all. I've looked around and can't find anyone else discussing this. Assuming this is a…
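For reference, this is the kind of minimal Modelfile being described (the base model and prompt text here are illustrative); in principle, a model built from it should answer according to the SYSTEM line:

```
FROM phi3
# SYSTEM bakes a system prompt into the derived model;
# the report is that phi3 appears to ignore it entirely.
SYSTEM "You are a pirate. Answer every question in pirate speak."
```

Building with `ollama create` from such a file and then chatting with the new model is the usual way to check whether the SYSTEM prompt is being honored.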