-
Hi,
I'd like to compile `projects/pt1/examples/torchscript_stablehlo_backend_tinybert.py` with torch-mlir, so I made the following modification:
```
--- a/projects/pt1/examples/torchscript_stablehlo_backend…
```
-
Thank you for your awesome project!
However, I am confused by the comments you wrote about extracting features from the last 4 blocks of the depth_anything_v2 architecture.
https://github.com/heyoeyo/muggle…
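For context, here is a minimal, framework-free sketch of the general "collect the outputs of the last N blocks" pattern; the block class and names below are illustrative stand-ins, not the actual depth_anything_v2 code (which would use forward hooks or an `intermediate_layer_idx` list on real transformer blocks):

```python
class DummyBlock:
    """Stand-in for a transformer block: adds 1 to its input."""
    def __call__(self, x):
        return x + 1

def forward_with_intermediates(blocks, x, take_last=4):
    """Run x through all blocks, collecting the outputs of the last `take_last`."""
    features = []
    for i, block in enumerate(blocks):
        x = block(x)
        if i >= len(blocks) - take_last:
            features.append(x)
    return x, features

blocks = [DummyBlock() for _ in range(12)]
out, feats = forward_with_intermediates(blocks, 0)
print(out)    # 12
print(feats)  # [9, 10, 11, 12] — outputs of blocks 8..11
```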
-
### Model description
"Attention Is All You Need" is a landmark 2017 research paper authored by eight scientists working at Google; it expanded on the 2014 attention mechanisms proposed by Bah…
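For reference, the core operation the paper introduces is scaled dot-product attention over query, key, and value matrices $Q$, $K$, $V$:

```
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

where $d_k$ is the key dimension; the $\sqrt{d_k}$ scaling keeps the softmax logits in a well-conditioned range.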
-
I replaced the linear layer in the prediction head with a traditional transformer, but the results were not satisfactory (on the ETTh1 dataset). The performance is far below the metrics of the transformer+…
-
### Feature request
It seems there is no config for DeBERTa v1/v2/v3 as a decoder (while there are such configs for BERT, RoBERTa, and similar models). This is needed in order to perform TSDAE unsupervised…
-
When I load the existing pretrained model, the following error is reported:
```
RuntimeError: Error(s) in loading state_dict for FairModel4CIKM:
Missing key(s) in state_dict: "i_embeddings.weight", "p…
```
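A missing-key error like this usually means the checkpoint's parameter names don't match the model's. A minimal, framework-free sketch of how to diagnose the mismatch (the key names below are illustrative; in PyTorch you would compare `model.state_dict().keys()` against the loaded checkpoint, or pass `strict=False` to `load_state_dict` to see both lists at once):

```python
# Illustrative key sets; real ones come from model.state_dict() and torch.load().
model_keys = {"i_embeddings.weight", "p_embeddings.weight", "fc.weight", "fc.bias"}
ckpt_keys = {"fc.weight", "fc.bias"}

missing = sorted(model_keys - ckpt_keys)     # in the model, absent from the checkpoint
unexpected = sorted(ckpt_keys - model_keys)  # in the checkpoint, unknown to the model

print("Missing key(s):", missing)
print("Unexpected key(s):", unexpected)
```

If the missing keys are genuinely new parameters (e.g. you added embeddings the original training run didn't have), loading with `strict=False` and initializing the rest is the usual workaround.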
-
Hey, this is my first post.
I wanted to ask about how one implements prompt weighting within the architecture.
This is the base generation code, which works:
```
image = ip_model.generate(
…
```
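For context, one common way prompt weighting is implemented is to scale the text-embedding vectors of the weighted tokens before they reach the diffusion model, then rescale the sequence back toward its original magnitude. A minimal pure-Python sketch of that idea (the shapes and the mean-rescaling step are assumptions for illustration, not IP-Adapter's actual code):

```python
def apply_prompt_weights(token_embeds, weights):
    """Scale each token's embedding by its weight, then rescale the whole
    sequence so its overall mean matches the unweighted embeddings
    (a common trick to keep the result in-distribution)."""
    flat = [v for row in token_embeds for v in row]
    original_mean = sum(flat) / len(flat)
    weighted = [[v * w for v in row] for row, w in zip(token_embeds, weights)]
    wflat = [v for row in weighted for v in row]
    scale = original_mean / (sum(wflat) / len(wflat))
    return [[v * scale for v in row] for row in weighted]

# 3 tokens, 4-dim embeddings; emphasize the second token with weight 1.5.
embeds = [[1.0, 1.0, 1.0, 1.0] for _ in range(3)]
out = apply_prompt_weights(embeds, [1.0, 1.5, 1.0])
# The overall mean is preserved (1.0), while token 1 ends up larger than token 0.
```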
-
I guess it's an error on my side, but when I try to run the Python code given on the Hugging Face page of the model (https://huggingface.co/vikhyatk/moondream2), it gives me this error:
```
Traceba…
```
-
I am preparing to reproduce the ChineseClip paper, initializing the image encoder with CLIP-VIT-B/16. I downloaded the checkpoint from https://huggingface.co/openai/clip-vit-base-patch16/tree/main, but when loading the model parameters I found that the image encoder weights would not load. Printing the names, I found that the corresponding parameters start with vision_model.encoder.layers.…
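One common cause: that checkpoint stores the image-encoder weights under a `vision_model.` prefix, so a model expecting unprefixed names will silently skip them. A minimal sketch of stripping such a prefix before loading (the key names are illustrative; in PyTorch you would apply this to the dict from `torch.load()` or `state_dict()` before calling `load_state_dict`):

```python
def strip_prefix(state_dict, prefix):
    """Return a copy of state_dict with `prefix` removed from matching keys,
    leaving non-matching keys untouched."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Illustrative keys mimicking the HF CLIP checkpoint layout.
ckpt = {
    "vision_model.encoder.layers.0.self_attn.q_proj.weight": 1,
    "text_model.embeddings.token_embedding.weight": 2,
}
remapped = strip_prefix(ckpt, "vision_model.")
print(sorted(remapped))
```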