-
This issue tracks the new design required for flash-attention in the bottom-up optimization pipeline.
## Status
Most of the optimization passes have been finished and checked in to llvm-targ…
-
wget https://storage.googleapis.com/bottom-up-attention/trainval.zip
This link cannot be downloaded, and it won't open in a browser either.
-
### Anything you want to discuss about vllm.
Within vllm/attention/ops/triton_flash_attention.py, we don't need the dropout, philox_, etc. machinery.
We should consider cleaning it up for code simplicity.
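As illustration only, here is a minimal plain-PyTorch sketch (not the actual Triton kernel) of what the inference-only path computes once the dropout mask and philox RNG state are stripped: softmax(QK^T / sqrt(d)) V, with no RNG inputs at all.
```python
import math
import torch

def attention_no_dropout(q, k, v):
    # q, k, v: [batch, heads, seq_len, head_dim]; no dropout_p, no philox seed/offset.
    scale = 1.0 / math.sqrt(q.size(-1))
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    probs = torch.softmax(scores, dim=-1)
    return torch.matmul(probs, v)
```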
#…
-
### The model to consider.
https://huggingface.co/tencent/Tencent-Hunyuan-Large
Tencent released a 389B MoE with only 52B activated parameters, which beats Llama 3.1 405B.
There are three chec…
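For reference, a minimal sketch of loading the checkpoint with Hugging Face transformers; the trust_remote_code flag and device_map setting are assumptions for a model with custom code at this scale, not a tested recipe.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Tencent-Hunyuan-Large"  # repo linked above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# A 389B-parameter MoE will not fit on one GPU; device_map="auto" (via
# accelerate) shards it across available devices. Illustrative only.
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)
```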
-
Hi! Can you provide the download links for "the bottom-up attention visual features of VisDial v1.0"?
I cannot find these features in data/img_feats1.0/, but they are necessary for running vdbert/…
-
Hello!
I'm working on a master's thesis about bottom-up pose estimation on high-resolution images. Your paper seems to address both of these topics successfully, yet I am unable to find a configuration…
-
**Environment:** Windows 11, Firefox 106.0.5 (64-bit)
**Reproducible:** always
**Preconditions**
Mock-up
https://www.figma.com/design/KmaTQl8I31fnA3leNd9CoX/UBS-(%D0%A3%D0%91%D0%A1)?node-id=1…
-
Hello everyone,
so I am trying to extract features from images for my project and keep getting this error again and again.
I have successfully installed detectron2 and get this error when trying to…
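For comparison, here is a minimal feature-extraction setup that should work on a standard detectron2 install; the model-zoo config and the image path are illustrative placeholders, not taken from the original report.
```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Build a config from a stock model-zoo entry and fetch its pretrained weights.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"  # switch to "cuda" on a GPU build

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # "input.jpg" is a placeholder
print(outputs["instances"].pred_boxes)
```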
-
### Motivation.
Take a look at the current Llama forward-computation logic:
```python
from torch import nn

class LlamaMLP(nn.Module):
    def forward(self, x):
        gate_up, _ = self.gate_up_proj(x)  # merged gate/up projection
        x = self.act_fn(gate_up)           # e.g. SiluAndMul
        x, _ = self.down_proj(x)
        return x
```
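For context, act_fn here is typically vLLM's SiluAndMul, which splits the merged gate/up output in half along the last dimension and computes silu(gate) * up; a minimal standalone sketch of the equivalent computation:
```python
import torch
import torch.nn.functional as F

def silu_and_mul(gate_up: torch.Tensor) -> torch.Tensor:
    # gate_up: [..., 2 * intermediate_size], output of the merged projection.
    gate, up = gate_up.chunk(2, dim=-1)
    return F.silu(gate) * up
```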
-
I used to run this pipeline fine, but after a few recent updates, coming back to this exact workflow I found new issues. Can anyone help? Thanks.
# ComfyUI Error Report
## Err…