-
I want to train a linear probe on a depth dataset on my own (using the same structure as the official depther head, freezing the DINOv2 model, and training only the depther head).
So, following the official not…
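Since the official code is not quoted here, below is only a rough sketch of the intended setup (frozen DINOv2 backbone, trainable head); the `LinearDepthHead` class and the per-patch L1 loss are illustrative assumptions, not the official depther implementation.
```python
import torch
import torch.nn as nn

# Official DINOv2 hub entry point; ViT-S/14 has an embedding dim of 384.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")

# Freeze the backbone so only the depth head receives gradients.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

class LinearDepthHead(nn.Module):
    """Minimal linear probe (assumption): one depth value per patch token."""
    def __init__(self, embed_dim: int = 384):
        super().__init__()
        self.proj = nn.Linear(embed_dim, 1)

    def forward(self, tokens):          # tokens: (B, N, C) patch features
        return self.proj(tokens)        # (B, N, 1)

head = LinearDepthHead()
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)  # head params only

def training_step(images, depth_targets):
    # images: (B, 3, H, W) with H, W divisible by the 14-pixel patch size.
    with torch.no_grad():               # backbone stays frozen
        feats = backbone.forward_features(images)["x_norm_patchtokens"]
    pred = head(feats).squeeze(-1)
    # Per-patch targets are assumed here for brevity; a real depther head
    # would upsample predictions to pixel resolution before the loss.
    loss = nn.functional.l1_loss(pred, depth_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```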
-
https://github.com/amusi/ECCV2024-Papers-with-Code
Addendum:
ECCV2024 quick report: https://hirokatsukataoka.net/temp/presen/241004ECCV2024Report_finalized.pdf
-
Pose a question about one of the following articles:
[“The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings.”](https://journals.sagepub.com/doi/full/10.1177/00031224198…
-
### 📚 The doc issue
Unclear.
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [X] Make sure you already searched for relevant issues, and asked the cha…
-
# Drunken Walk Generator and Lattice for Deep GNN Training
## Overview
This page documents the development of a generator function and a lattice structure designed to encompass a broad range of inte…
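The page is cut off above, so the following is only a speculative illustration of the idea in the title: a uniform random ("drunken") walk generator over a 2D lattice, which could serve as a source of node sequences for deep GNN training. The function name and parameters are assumptions, not the page's actual code.
```python
import random
from typing import Iterator, Optional, Tuple

def drunken_walk(width: int, height: int, start: Tuple[int, int],
                 steps: int, seed: Optional[int] = None) -> Iterator[Tuple[int, int]]:
    """Yield nodes of a uniform random walk on a width x height 2D lattice."""
    rng = random.Random(seed)
    x, y = start
    yield (x, y)
    for _ in range(steps):
        # Candidate moves, restricted to neighbours that stay on the lattice.
        moves = [(dx, dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < width and 0 <= y + dy < height]
        dx, dy = rng.choice(moves)
        x, y = x + dx, y + dy
        yield (x, y)

# Example: a 20-step walk on an 8x8 lattice, e.g. when generating synthetic
# node sequences as training data for a deep GNN.
walk = list(drunken_walk(8, 8, start=(4, 4), steps=20, seed=0))
```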
-
I am trying to use mini_deit_tiny_patch16_224 for fine-tuning on another subtask that has a different sequence length of 18 (number of patches) with dimension 192.
When I run the code
for blk in self.bl…
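The snippet is cut off before the block loop, so here is only a hedged sketch of the situation described: pushing a (batch, 18, 192) token sequence through pretrained transformer blocks. mini-DeiT itself is not in timm, so timm's `deit_tiny_patch16_224` (same 192-dim embedding) stands in, and the position-embedding slice is illustrative only; the usual failure with a new sequence length is a shape mismatch at that addition.
```python
import timm
import torch

# Stand-in model (assumption): same block/embedding layout as mini-DeiT tiny.
model = timm.create_model("deit_tiny_patch16_224", pretrained=False)

B, N, C = 2, 18, 192              # new subtask: 18 patches, embed dim 192
x = torch.randn(B, N, C)

# The pretrained pos_embed covers 1 cls token + 196 patches; with a different
# sequence length it must be sliced or interpolated before the block loop,
# otherwise `x + pos_embed` fails with a shape mismatch.
pos_embed = model.pos_embed[:, 1:1 + N, :]   # naive slice, illustration only
x = x + pos_embed

for blk in model.blocks:          # the loop referenced in the question
    x = blk(x)
x = model.norm(x)
print(x.shape)                    # torch.Size([2, 18, 192])
```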
-
I’m giving up. The files are writable and readable, but the error still appears. Nothing seems to fix it.
---------------------------------------------------------------------------
PermissionEr…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
```
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-5.15.0-1071-azure-x86_64-with-glibc2.31
- P…
-
## 🚀 Feature request
Use Argument linking to link `init_args` to `dict_kwargs`
### Motivation
I am trying to link `data.input_width` to `model.dict_kwargs.input_width` because transformer models …
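For context, a minimal sketch of how `link_arguments` is used today in a `LightningCLI` subclass, with the requested `dict_kwargs` target shown commented out; the exact argument paths depend on whether subclass mode is enabled and are assumptions here.
```python
from lightning.pytorch.cli import LightningCLI

class MyCLI(LightningCLI):
    def add_arguments_to_parser(self, parser):
        # Supported today: link a data argument to a regular model init arg.
        parser.link_arguments("data.input_width", "model.init_args.input_width")
        # Requested behaviour: allow dict_kwargs as a link target as well.
        # parser.link_arguments("data.input_width", "model.dict_kwargs.input_width")

# MyCLI(MyModel, MyDataModule) would then be instantiated as usual.
```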
-
### Describe the issue
Issue: When starting a worker with the 34B version of the 1.6 model, the worker crashes on the first inference. I've verified that the mistral-7b version does work and I can run …