-
Hi, thanks for your solid work! I want to know how to calculate the R-GAE maps, especially the query-to-patch map. Could you please share some of the key code?
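For context, a query-to-patch map in its most generic form is just the attention weights from one query token over the image patches, reshaped to the patch grid. Below is a minimal sketch of that idea using plain scaled dot-product attention — not the paper's actual R-GAE computation; the embedding size and the 14×14 patch grid are illustrative assumptions:

```python
import numpy as np

def query_to_patch_map(q, K, grid_hw):
    """Generic single-head attention heatmap from one query over patches.

    q: (d,) query embedding; K: (num_patches, d) patch key embeddings.
    Returns an (H, W) map that sums to 1 over the patch grid.
    """
    scores = K @ q / np.sqrt(q.shape[-1])   # scaled dot-product scores
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()
    return weights.reshape(grid_hw)         # reshape to the patch grid

# Illustrative shapes: 196 patches (14x14) with 64-dim embeddings.
rng = np.random.default_rng(0)
heatmap = query_to_patch_map(rng.normal(size=64),
                             rng.normal(size=(196, 64)),
                             (14, 14))
```

The resulting heatmap can then be upsampled to the image resolution and overlaid as a visualization.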
-
Hey all. I'm running into issues when trying to load a LoRA created in OneTrainer for PixArt Sigma. No matter what options I try/train the LoRA with, I always get a load of warning messages in the ComfyUI…
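Warnings like these usually come from a key-naming mismatch between the trainer's LoRA format and what the loader expects. A minimal sketch of remapping key prefixes — the prefixes here are hypothetical placeholders, not the actual OneTrainer or ComfyUI naming:

```python
def remap_lora_keys(state_dict, src_prefix, dst_prefix):
    """Rename LoRA tensor keys from one naming convention to another.

    Keys that don't start with src_prefix are passed through unchanged.
    """
    return {
        (dst_prefix + k[len(src_prefix):]) if k.startswith(src_prefix) else k: v
        for k, v in state_dict.items()
    }

# Hypothetical key names purely for illustration.
sd = {"lora_unet_blocks_0.lora_down.weight": 1, "other": 2}
remapped = remap_lora_keys(sd, "lora_unet_", "diffusion_model.")
```

Comparing the key lists of a LoRA that loads cleanly against one that warns is usually the quickest way to find the prefixes that actually differ.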
-
For code like this:
```py
class ConvNeXtBlock(nn.Module):
    def __init__(
        self,
        dim: int,
        intermediate_dim: int,
        kernel: int,
        dilation: int,
        …
```
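For reference, the truncated block above is typically completed along the lines of the standard ConvNeXt design: a depthwise convolution, LayerNorm, then a pointwise MLP with GELU. A sketch under that assumption — the real implementation may differ:

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Sketch of a 1D ConvNeXt-style block (standard design assumed)."""
    def __init__(self, dim: int, intermediate_dim: int,
                 kernel: int, dilation: int):
        super().__init__()
        pad = (kernel // 2) * dilation  # "same" padding for odd kernels
        self.dwconv = nn.Conv1d(dim, dim, kernel, padding=pad,
                                dilation=dilation, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pw1 = nn.Linear(dim, intermediate_dim)
        self.act = nn.GELU()
        self.pw2 = nn.Linear(intermediate_dim, dim)

    def forward(self, x):                # x: (batch, dim, time)
        residual = x
        x = self.dwconv(x)
        x = x.transpose(1, 2)            # (batch, time, dim) for norm/linear
        x = self.pw2(self.act(self.pw1(self.norm(x))))
        return residual + x.transpose(1, 2)

y = ConvNeXtBlock(8, 32, 3, 1)(torch.randn(2, 8, 16))
```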
-
How can the model be evaluated on GLUE tasks? The tasks are text-only, but the paper says "Similar to PLM, when prefix image is none, this task will degenerate into "text-to-image generation" task, f…
-
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.9.19
- Huggingface_hub version: 0.24.5
- Safetensors version: 0…
-
Dear author,
Thank you so much for providing the pretrained weights. I tried to load the pretrained weights (i.e. bat_1_1_0_e6_loss_0_aug_1 and bat_valid_1_1_0_e6_loss_0_aug_1) to test images and me…
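In case it helps, a generic pattern for diagnosing checkpoint loading is `load_state_dict(strict=False)`, which reports missing and unexpected keys instead of failing. A sketch with a stand-in model and checkpoint — the real architecture and the layout of the bat_* checkpoints may differ:

```python
import torch
import torch.nn as nn

# Stand-in model and checkpoint purely for illustration.
model = nn.Linear(4, 2)
torch.save({"state_dict": model.state_dict()}, "ckpt.pt")

ckpt = torch.load("ckpt.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)   # unwrap if nested under a key
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing:", missing)             # keys the model expects but ckpt lacks
print("unexpected:", unexpected)       # keys the ckpt has but the model lacks
```

Non-empty `missing`/`unexpected` lists usually mean the checkpoint was saved under a different wrapper (e.g. a `module.` prefix from DataParallel) or a different model configuration.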
-
Hi!
Thanks for sharing your work! I would like to know whether the adversarial results reported for LLaVA-1.5 use the CLIP encoder ViT-L/14 or ViT-L/14@336. Does the adversarial evaluat…
-
I tried to instantiate a BERT model with the following code:
```rust
use candle_core::DType;
use candle_lora::LoraConfig;
use candle_lora_transformers::bert::{BertModel, Config};
use candle_nn::{…
-
```
File "/export/scratch/ra63nev/lab/discretediffusion/OmniTokenizer/omnitokenizer.py", line 108, in __init__
    spatial_depth=args.spatial_depth, temporal_depth=args.temporal_depth, causal_in_temporal…
```
-
So I want to change the Keras bidirectional LSTM layer below into a Transformer encoder:
`lstmLayer = keras.layers.Bidirectional( keras.layers.CuDNNLSTM(args.rnnSize, return_sequences = True, recurrent_i…