-
There have been a few reports that grammar sampling can significantly degrade performance.
It would be nice to profile and optimize the implementation - there should be room for improvement…
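For anyone who wants to reproduce the slowdown before profiling, a rough, hypothetical benchmark along these lines (via llama-cpp-python; the model path, grammar, and prompt are placeholders, not from the original report) gives a tok/s comparison with and without a grammar:

```python
# Rough, hypothetical benchmark sketch (not part of the original report).
# Assumes llama-cpp-python is installed; model path, grammar, and prompt are placeholders.
import time
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="./model.gguf", n_ctx=2048, verbose=False)

# A permissive character-level grammar, just to exercise the grammar sampler
# on every generated token.
grammar = LlamaGrammar.from_string(r"root ::= [a-zA-Z0-9 .,\n]+")

def tokens_per_second(grammar=None, n_tokens=256):
    # Note: includes prompt-processing time, so this is only a rough comparison.
    start = time.perf_counter()
    llm("Write a short paragraph about llamas.",
        max_tokens=n_tokens, temperature=0.8, grammar=grammar)
    return n_tokens / (time.perf_counter() - start)

print(f"no grammar  : {tokens_per_second():.1f} tok/s")
print(f"with grammar: {tokens_per_second(grammar=grammar):.1f} tok/s")
```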
-
I prepared the CSV, but in the next step I get this:
```
❯ python -m finetuning --dataset "custom_dataset" --custom_dataset.file "scripts/custom_dataset.py" --whatsapp_username "Jorge"
Tracebac…
-
Hi, thanks for maintaining this awesome project.
I slightly modified `examples/scripts/reward_modeling.py` and found that the tracked training loss and accuracy look very strange.
Here is my modified scrip…
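(For reference only - this is not the modified script, which is cut off above. The standard pairwise loss used in TRL-style reward modeling is -log sigmoid(r_chosen - r_rejected), so at initialization it is expected to sit near ln 2 ≈ 0.693 with roughly 50% accuracy; a minimal sketch, using stand-in reward values:)

```python
# Reference-only illustration, not the author's script: the pairwise loss
# reported by TRL-style reward modeling. With an untrained model the chosen
# and rejected rewards are nearly indistinguishable, so the loss should start
# around ln(2) ~= 0.693 and the accuracy around 0.5.
import torch
import torch.nn.functional as F

rewards_chosen = 0.1 * torch.randn(32)    # stand-in scalar rewards for "chosen" responses
rewards_rejected = 0.1 * torch.randn(32)  # stand-in scalar rewards for "rejected" responses

loss = -F.logsigmoid(rewards_chosen - rewards_rejected).mean()
accuracy = (rewards_chosen > rewards_rejected).float().mean()

print(f"loss={loss:.3f} (expect ~0.693 at init), accuracy={accuracy:.2f} (expect ~0.5)")
```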
-
### Your current environment
```text
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/rpc/client.p…
-
### System Info
- CPU architecture: x86_64
- CPU/Host memory size: 126G
- GPU properties
  - GPU name: L4
  - GPU memory size: 24GB
- Libraries
  - TensorRT-LLM branch or tag (e.g., main, v0.…
-
### Description
I defined my LLMs as follows:
```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from langchain_ollama import ChatOllama
…
```
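(The definition above is cut off. Purely as a point of reference, not the original code, a minimal sketch of wiring a `ChatOllama` instance into a CrewAI agent might look like the following; the model name, base URL, and all agent/task fields are assumptions.)

```python
# Hypothetical sketch only - model name, URL, and agent/task fields are placeholders.
from crewai import Agent, Crew, Process, Task
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="llama3.1",                   # any locally pulled Ollama model
    base_url="http://localhost:11434",  # default Ollama endpoint
    temperature=0,
)

researcher = Agent(
    role="Researcher",
    goal="Answer questions concisely",
    backstory="A focused research assistant.",
    llm=llm,                            # pass the ChatOllama instance to the agent
    verbose=True,
)

task = Task(
    description="Summarize what CrewAI is in two sentences.",
    expected_output="A two-sentence summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)
print(crew.kickoff())
```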
-
I haven't been able to reproduce this in LLMEval from mlx-swift-examples for some reason, but when I run the following line in my app using mlx-swift 0.18.0:
```swift
MLXRandom.seed(UInt64(Date.ti…
-
![image](https://github.com/MartialBE/one-api/assets/95951386/9f9271ac-4476-4b2b-9764-c7b2e6c7fbc4)
These models were added via "Fetch supported models automatically", and it seems their prices have not been configured yet.
---
There are models without configured prices; please configure their prices promptly: command-light-nightly, mistral-small-…
-
I can run examples/llms/providers/watsonx.ts successfully. When I copy the code into examples/llms/agents/bee.ts and replace OllamaChatLLM I get the following error:
"Missing either space_id or pro…
-
Hello,
I am currently working on fine-tuning the CuMo model following the instructions in the "Getting Started" section of the repository. After downloading the necessary datasets and JSON files, t…