-
Run command:
python run.py --config_path=./configs/rtdetr_hgnetv2_x_qat_dis.yaml --save_dir='./output/' --devices='cpu'
Config file:
Global:
reader_config: configs/rtdetr_reader.yml
include_nms: False
…
-
### System Info
I'm using AWS SageMaker to implement a token classification model using Phi-3.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modifi…
-
MWE:
```julia
using Enzyme, QuadGK
function polyintegral(coeffs, config)
    f(x) = evalpoly(x, coeffs)
    return first(quadgk(f, -1.0, 1.0; config...))
end
coeffs = (1.0,)
config = (; …
-
I was reading the `node-llama-cpp` docs, and they mention that the `ipull` package can be useful for improved model download speeds:
- https://withcatai.github.io/node-llama-cpp/guide/#getting-a-mo…
-
Brief description:
This feature exports data from a project.
User spec:
https://github.com/CCTC-team/redcap_cypress/blob/redcap_val/user_requirement_specification/core/21_export_data.spec
…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
I want to calculate the LLM's token usage when running the assistant. How can I do that? Is there a callback mechanism like LangChain's?
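The excerpt doesn't include an answer, but the callback pattern the question alludes to can be sketched in plain Python. Everything below (`TokenUsageCallback`, `fake_tokenize`) is a hypothetical illustration of the pattern, not a real API of any library:

```python
# Hypothetical sketch of a LangChain-style callback that tallies token usage.
# The class and helper names here are illustrative, not an actual framework API.

def fake_tokenize(text: str) -> list[str]:
    # Stand-in tokenizer: splits on whitespace. Real LLMs use subword
    # tokenizers, so counts from this helper are only approximate.
    return text.split()

class TokenUsageCallback:
    """Accumulates prompt and completion token counts across calls."""

    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def on_llm_start(self, prompt: str) -> None:
        # Called before the model runs: count the prompt side.
        self.prompt_tokens += len(fake_tokenize(prompt))

    def on_llm_end(self, completion: str) -> None:
        # Called after the model returns: count the completion side.
        self.completion_tokens += len(fake_tokenize(completion))

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

# Usage: fire the callback around each (prompt, completion) pair of a run.
cb = TokenUsageCallback()
cb.on_llm_start("What is the capital of France?")
cb.on_llm_end("The capital of France is Paris.")
print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
```

In a real integration you would hook these two methods into whatever run/step events the assistant framework exposes, or read the provider's reported usage directly instead of re-tokenizing.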
-
**Describe the bug**
I am trying to fine-tune the `phi-2` model on a custom dataset using the `ilab model train` command. The command downloads the model successfully from Hugging Face; however, it later fails wi…
-
Running this script:
```python
import mlx.core as mx
from mlx_vlm import load, generate
import os
from pathlib import Path
# model_path = "mlx-community/llava-1.5-7b-4bit"
#model_path = "…
-
### Describe the Bug
File "/data/mlops/Open-Assistant/inference/server/oasst_inference_server/plugins/vectors_db/loaders/data_loader.py", line 383, in path_to_doc1
res = file_to_doc(file, …