-
## 🐛 Bug
## To Reproduce
Using the model [Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct),
I ran into some bugs and need your help.
For the phi3-v problem, w…
-
### Feature request
I encountered a `KeyError` while loading the phi3-v vision model with Hugging Face Optimum. The error message states:
```
KeyError: 'phi3-v model type is not supported yet in Nor…
```
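A minimal sketch of the kind of load that can raise this error, assuming the model is exported through Optimum's ONNX Runtime integration (the exact call used in this report is not shown in the excerpt):

```python
# Hypothetical reproduction sketch: the model id matches the phi3-v checkpoint
# referenced above; loading it via the ONNX Runtime wrapper is an assumption.
from optimum.onnxruntime import ORTModelForCausalLM

model = ORTModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-vision-128k-instruct",
    export=True,             # export the checkpoint to ONNX on the fly
    trust_remote_code=True,  # phi3-v ships custom modeling code
)
```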
-
_Context: the issue was first flagged by a DFO user, but the product team ended up creating the ticket_
As noted [in this comment](https://github.com/cds-snc/gcds-components/issues/497#issuecomment-2312932496)…
-
Hi, while reproducing the training process, the following error is reported:
File "/mnt/bn/kinetics-lp-maliva/playground_projects/RLAIF-V_bak/./muffin/train/train_llava15.py", line 279, in init_model …
-
### Issue Description
After an update a few months ago, I have no longer been able to use v-prediction models successfully. I am using diffusers, IPEX, and Linux.
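For context, a minimal sketch of what loading a v-prediction checkpoint with diffusers typically looks like; the model id, scheduler, and device handling here are illustrative assumptions, not the exact setup from this report:

```python
from diffusers import StableDiffusionPipeline, DDIMScheduler

# Minimal sketch (assumed setup): load a checkpoint and make the scheduler's
# prediction_type explicitly v_prediction. The model id is just a well-known
# v-prediction example; device placement (e.g. pipe.to("xpu") under IPEX)
# is omitted.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1"
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction"
)
```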
The current recommendation for u…
-
### Technical Group
Golden Model SIG
### ratification-pkg
Technical Debt
### Technical Liaison
Bill McSpadden
### Task Category
SAIL model
### Task Sub Category
- [ ] gcc
- [ ] binutils
- [ ]…
-
When fine-tuning Llama-3___2-3B-Instruct with QLoRA on two 32 GB V100s, an error is raised at the final model-saving step.
The SLURM script is:
```bash
#!/bin/bash
#SBATCH --job-name=openrlhf
#SBATCH --partition=gpu_v100
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH…
```
-
### Start Date
_No response_
### Implementation PR
To run the quantized model as described at https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf, the parameters that need to be specified include:
./llama-minicpmv-cli -m ../MiniCPM-V-2_6/model/ggml-mode…
-
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Execution order of model validators in the case of a `wrap` validator from a base class and an `after` validator from a sub…
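A minimal sketch of the setup being described, assuming a `wrap` model validator on the base class and an `after` model validator on the subclass (class and field names are illustrative, not from the report):

```python
from pydantic import BaseModel, model_validator


class Base(BaseModel):
    x: int = 0

    # `wrap` validator defined on the base class: runs around inner validation.
    @model_validator(mode="wrap")
    @classmethod
    def base_wrap(cls, data, handler):
        print("base wrap: before inner validation")
        instance = handler(data)
        print("base wrap: after inner validation")
        return instance


class Sub(Base):
    # `after` validator defined on the subclass: runs on the validated model.
    @model_validator(mode="after")
    def sub_after(self):
        print("sub after")
        return self


# Instantiating Sub prints the relative ordering of the two validators.
Sub(x=1)
```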
-
### Validations
- [ ] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](https://githu…