-
The suggested Python version is 3.8, but after executing the following
```
conda create -n lmdrive python=3.8
conda activate lmdrive
cd vision_encoder
python setup.py develop
```
…
-
Hi! I'm trying to train **tf_efficientdet_d3** with your code on custom COCO-like data. I'm training on Google Colab.
```
!python3 train.py '/content/drive/MyDrive/dataset' --model tf_efficientdet_d3 …
```
-
Hi there @wallish77, and a Happy New Year to you.
I updated ComfyUI and installed Matt3os ipAdapter_plus for FaceID/InsightFace.
Ever since then I have been getting red nodes for your Image Save with promp…
-
Jake had code to do this for Sam. Heidi is willing to translate it to metacat.
-
**Problem**
When I try to run CIFAR datasets with the Swin-Tiny backbone, it returns the error:
"AssertionError: Input image height (32) doesn't match model (224)."
**Reproducibility**
I'm using the…
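Not part of the original report, but as a hedged illustration of the mismatch: Swin-Tiny presets typically expect fixed 224×224 inputs, so 32×32 CIFAR images need upsampling first. Below is a minimal sketch using plain NumPy nearest-neighbor resizing as a stand-in for a real pipeline transform (e.g. torchvision's `transforms.Resize`); the function name and approach are illustrative, not the repo's actual fix.

```python
import numpy as np

def resize_nearest(img, size=224):
    """Nearest-neighbor upsample of an HxWxC image array (a simple stand-in
    for a proper resize transform such as torchvision's transforms.Resize)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

cifar_img = np.zeros((32, 32, 3), dtype=np.uint8)  # stand-in CIFAR image
print(resize_nearest(cifar_img).shape)             # (224, 224, 3)
```

With the images upsampled to 224×224 before batching, the backbone's input-size assertion should no longer fire.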
-
I installed torch, torchvision, transformers, and so on with the given versions. I don't know why this problem appears; can you help me solve it? Thanks.
-
We recently added model cards for KerasNLP; they are supported for any preset with a `model_card` field in the metadata.
They show up in the description field -> https://keras.io/api/keras_nlp/…
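A hypothetical illustration of such a preset metadata entry; only the `model_card` key is confirmed above, and every other key and value here is an invented placeholder:

```python
# Hypothetical preset metadata sketch: only the `model_card` field is
# described in the text above; the rest is an illustrative placeholder.
preset_metadata = {
    "description": "Example preset description",  # placeholder field
    "model_card": "…",  # presets carrying this field get a model card shown
}
print("model_card" in preset_metadata)  # True
```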
-
I am preparing to change the backbone to an EfficientNet, but how do I edit the functions LRBranch and HRBranch?
-
### Feature request
The support is [already present in huggingface/transformers](https://github.com/huggingface/transformers/pull/27662).
But when I try to export the LLaVA model to Neuron format, i…
-
Hello, may I ask what the input to the Transformer decoder is? And how does it differ from the encoder's input?
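Not from the original thread, but a minimal sketch of the usual answer, assuming a standard Transformer trained with teacher forcing: the encoder consumes the full source sequence at once, while the decoder's input is the target sequence shifted right (prefixed with a begin-of-sequence token) and the decoder also attends to the encoder's output via cross-attention. The token strings below are made up for illustration.

```python
# Illustrative only: encoder vs. decoder inputs in a standard Transformer
# trained with teacher forcing (all tokens here are invented examples).
encoder_input = ["je", "suis", "étudiant"]   # full source sentence, one pass

target = ["i", "am", "a", "student"]
BOS, EOS = "<bos>", "<eos>"

decoder_input = [BOS] + target               # target shifted right by one
decoder_target = target + [EOS]              # what the decoder must predict

print(decoder_input)   # ['<bos>', 'i', 'am', 'a', 'student']
print(decoder_target)  # ['i', 'am', 'a', 'student', '<eos>']
```

At inference time there is no full target available, so the decoder input is instead built token by token from the model's own previous predictions.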