-
run sh: `python /root/autodl-tmp/Code/swift-main/swift/cli/infer.py --model_type glm4v-9b-chat --ckpt_dir ./output/glm4v-9b-chat/v2-20240716-140854/checkpoint-2400/ --load_dataset_config true --show_d…
-
Where can I download the model file manually? Automatic downloads always time out. Thank you!
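One workaround when automatic downloads time out is to fetch each file directly from the Hugging Face "resolve" endpoint with a resumable downloader (`wget -c`, `aria2c`). Below is a minimal sketch that builds such URLs; the repo id `openbmb/MiniCPM-Llama3-V-2_5` and the example filename are assumptions inferred from the `web_demo_2.5.py` log, not confirmed by the issue.

```python
# Hedged sketch: build direct "resolve" URLs for files in a Hugging Face repo,
# so they can be downloaded manually with a resumable tool instead of the
# automatic (timeout-prone) download path.

def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the direct download URL for a single file in an HF repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

if __name__ == "__main__":
    # Repo id and filename below are hypothetical examples.
    print(hf_file_url("openbmb/MiniCPM-Llama3-V-2_5", "config.json"))
```

Point the local code at the downloaded directory afterwards (e.g. pass the local path where the script expects a model name).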
```log
(MiniCPMV) PS C:\work\github\MiniCPM-V> python web_demo_2.5.py --device cuda
C:\Users\Ad…
-
Can an already-quantized model like https://huggingface.co/01-ai/Yi-34B-Chat-4bits be compiled directly in mlc_llm?
I tried passing a --quant option such as q0f16 or q4f16 directly, but it reports some lay…
-
**Description**
I am trying to run the Triton server with CPU-only models. The server launches perfectly when serving only ONNX models, but the moment I include a Python backend model it hang…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu …
-
Implementing the Référentiel National des Bâtiments (RNB) requires generating an identifier for each building.
The various discussions on this subject, notably via the working group…
-
Same errors on 3 different Linux distros.
I have installed from source:
pushd intel-extension-for-transformers/
pip install -r requirements.txt
python setup.py install
Then I started trying the exa…
-
**Hardware**:
CPU: Xeon® E5-2630 v2, but limited to 16 GB of RAM, as that is what the vast.ai instance provides.
GPU: 4x A40 --> Total of 180GB
**OS**
Linux
**python**
3.10
**cuda**
12.2
**packa…
-
### System Info
- transformers: 4.40.2
- platform: Ubuntu (compute cluster)
- python version: 3.12.2
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official exa…
-
# YOLOv8 Pose Models
![image](https://user-images.githubusercontent.com/26833433/239691398-d62692dc-713e-4207-9908-2f6710050e5c.jpg)
Pose estimation is a task that involves identifying the locatio…
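As a concrete illustration of the keypoints these models predict: YOLOv8 pose models are trained on the COCO keypoints dataset, which defines 17 keypoints per person. The sketch below assumes the standard COCO keypoint ordering; the helper function is illustrative, not part of the ultralytics API.

```python
# Hedged sketch: the 17 COCO keypoints predicted by YOLOv8 pose models,
# in the standard COCO ordering (assumed here).
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def keypoint_name(index: int) -> str:
    """Map a keypoint index from the model's output tensor to its COCO name."""
    return COCO_KEYPOINTS[index]
```

With the `ultralytics` package, `YOLO("yolov8n-pose.pt")` loads a pose model whose per-person keypoint predictions follow this index order.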