-
Building on the amazing work by @mzbac and @nkasmanoff in https://github.com/ml-explore/mlx-examples/pull/461, I'd really love an example of how LLaVA 1.6 (aka llava next) can be fine-tuned with a LoR…
-
### Describe the issue
Issue/Error:
Loading 1.5 models works fine, but loading 1.6 models yields the error below. Note that the 1.6 models do load (despite the error) and inference works. However, tr…
-
![Untitled](https://github.com/Fanghua-Yu/SUPIR/assets/168951584/a82bc49d-1ca1-4f69-b8b5-e1d4aa4f5035)
[Code.txt](https://github.com/Fanghua-Yu/SUPIR/files/15211845/Code.txt)
BasicTransformerBlock i…
-
```
INFO:mteb.cli:Running with parameters: Namespace(model='laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K', task_types=None, categories=None, tasks=['BLINKIT2IRetrieval'], languages=None, device=None, ou…
-
Dear author, I have some questions about your demo. Something goes wrong during inference.
Here is the log:
[nltk_data] Error loading punkt:
[nltk_data] Error loading averaged_perceptron_tagger:
…
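The two `[nltk_data]` lines usually mean NLTK could not download its data packages at runtime (for example, no network access or a blocked proxy). As a minimal sketch, not part of the original demo, pre-fetching the resources the log complains about normally clears these errors:

```python
# Hedged sketch: pre-download the NLTK data the demo needs so the
# "[nltk_data] Error loading ..." messages do not appear at inference time.
import nltk

for resource in ("punkt", "averaged_perceptron_tagger"):
    nltk.download(resource)  # cached under ~/nltk_data after the first run
```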
-
To save GPU memory, I want to load the multilingual model in 4-bit mode; the code is as follows.
```python
import torch
from transformers import AutoTokenizer
from mplug_owl.modeling_mplug_owl impo…
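# --- Hedged sketch, not from the original issue ---
# A common way to load a transformers-based model in 4-bit is to pass a
# BitsAndBytesConfig to from_pretrained(). The class and checkpoint names
# below are assumptions/placeholders; substitute the actual class from the
# truncated mplug_owl import above and the real checkpoint path.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store the weights in 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

model = MplugOwlForConditionalGeneration.from_pretrained(  # assumed class name
    "path/to/multilingual-mplug-owl-checkpoint",           # placeholder path
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("path/to/multilingual-mplug-owl-checkpoint")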
-
### 0.5b response is normal but 7b is wrong
For the same image, the only part of the code I changed is:
pretrained = "/home/shihongyu/MMLM_models/lmms-lab/llava-onevision-qwen2-7b-ov"
model_name = "llava_qwen"
device = "…
-
https://github.com/training-transformers-together/hf-website-how-to-join
Demo page (updated on push): https://training-transformers-together.github.io/
- [x] intro and motivation text
- [x] liv…