-
I am working on applying Quantization-Aware Training (QAT) with various parameters to optimize my model. During this process, I ran into an issue when attempting to use certain configuration parameter…
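For context, a minimal sketch of eager-mode QAT in PyTorch, assuming a toy `MyModel`; the qconfig backend string is one example of the kind of configuration parameter involved here. This is illustrative, not the exact setup from the issue:
```python
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub,
                                   get_default_qat_qconfig,
                                   prepare_qat, convert)

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors get quantized
        self.conv = nn.Conv2d(3, 16, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # marks where tensors get dequantized

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = MyModel().train()
model.qconfig = get_default_qat_qconfig('fbgemm')  # backend is one tunable parameter
prepare_qat(model, inplace=True)   # insert fake-quant observers
# ... run the usual training loop here ...
model.eval()
quantized = convert(model)         # swap in real int8 modules
```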
-
Rather than requiring `attn_flash` to be specified manually, why not let the code figure out adaptively in the `forward()` method whether it can be used?
As far as I can tell, you can use it when there is no fancy relativ…
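For what it's worth, here is a minimal sketch of the adaptive dispatch I have in mind, using PyTorch's `F.scaled_dot_product_attention` as the flash path; `rel_pos_bias` is a hypothetical stand-in for whatever relative positional term the model may carry, not this repo's actual API:
```python
import torch
import torch.nn.functional as F

def attention(q, k, v, rel_pos_bias=None):
    # No relative positional bias: the fused/flash kernel applies directly.
    if rel_pos_bias is None:
        return F.scaled_dot_product_attention(q, k, v)
    # Otherwise fall back to explicit attention so the bias can be added.
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale + rel_pos_bias
    return attn.softmax(dim=-1) @ v
```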
-
I ran into an error when running demo.py:
```
(fcclip) ga@test-4U-GPU-Server:~/code/fc-clip$ python demo/demo.py --input 000741.jpg 000860.jpg --opts MODEL.WEIGHTS fcclip_cocopan.pth
[07/15 15:09:43 …
```
-
Hi
I am currently having the following use case:
```python
import torch
import torch.nn.functional as F
from functorch import vmap

x = torch.randn(2, 10)
w = torch.randn(2, 5, 10)
b = torch.randn(2, 5)
print(vmap(F.linear, in_dims=(0, 0, …
```
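For reference, assuming the truncated call completes as `in_dims=(0, 0, 0)`, vmap applies `F.linear` once per batch element; the equivalent explicit computation would be:
```python
# For each i: out[i] = x[i] @ w[i].T + b[i]  -> shape (2, 5)
out = torch.einsum('bi,boi->bo', x, w) + b
```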
-
Due to https://github.com/facebookresearch/xformers/issues/286, we cannot currently fuse the bias/gelu/activation into a single kernel using triton. This means we just use a standard [MLP](https://g…
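For clarity, the unfused fallback amounts to separate linear/GELU/linear kernels rather than one fused triton kernel; a rough sketch (names illustrative, not xformers' actual class):
```python
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)  # bias add runs as its own op
        self.act = nn.GELU()                   # activation runs as its own kernel
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))
```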
-
When I ran the example code in README.md, I encountered a strange problem.
```python
import scripts.control_utils as cu
import torch
from PIL import Image
path_to_config = 'configs/inference/sdxl/sdx…
-
I noticed that CLIP is already present in the Hailo Model Zoo, which suggests that conversion is possible. [link](https://github.com/hailo-ai/hailo_model_zoo/blob/833ae6175c06dbd6c3fc8faeb23659c9efaa2…
-
1. When I use validation to evaluate the provided trained model, the result is just a single tensor. What does this data mean?
![image](https://user-images.githubusercontent.com/69671793/233565397-43f25cfa-5351-4fa8-9070-1ed41e138f8c.png)
2. In the real-data part of the code, validation_realWo…
-
```python
# Standard ResNet stem: a 7x7 conv with stride 2 (halves H and W),
# followed by batch norm and an in-place ReLU.
self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2,
                       padding=3, bias=False)
self.bn1 = norm_layer(self.inplanes)
self.relu = nn.ReLU(inplace=True) …
```
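As a quick sanity check of the fragment above (assuming `self.inplanes` is 64 and `norm_layer` is `nn.BatchNorm2d`, as in the standard torchvision ResNet), the stride-2 stem halves the spatial resolution:
```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
bn1 = nn.BatchNorm2d(64)
relu = nn.ReLU(inplace=True)

x = torch.randn(1, 3, 224, 224)
print(relu(bn1(conv1(x))).shape)  # torch.Size([1, 64, 112, 112])
```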
-
Hello @935963004,
I would like to start by saying thank you for your work; I think it is fundamental and necessary work in EEG decoding. Thank you for that!
So, I am trying to understand and run…