-
# Describe the feature
**Motivation**
The motivation behind this feature request is to integrate the recently proposed Dynamic Snake Convolution (DSCNet) model for tubular structure segmentation i…
-
Cases like "it picks up a log factor but brute force still passes" are excluded.
This list does not track category changes on Library Checker.
https://judge.yosupo.jp/
# New
- [ ] Matrix Product (Mod 2) (#364)
- [ ] Multiplication of Hex Big Integers
- [ ] Intersection of F_2 vec…
-
## In one sentence
A method that reduces a CNN's compute cost by sharing kernels across layers. Attention weights are computed from the input, the kernels are blended using those weights (mixing ratios), and the convolution is applied with the blended kernel. Plugging it into an existing architecture (MobileNet) improved both speed and accuracy.
### Paper link
https://arxiv.org/abs/1912.03458
#…
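As a sketch of the mechanism summarized above (attention from the input, kernels blended by those mixing ratios, then one convolution), assuming a PyTorch-style module; the class name, attention head, and pooling details are my own illustration, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Illustrative attention-over-kernels convolution (names are assumptions)."""
    def __init__(self, in_ch, out_ch, kernel_size, num_kernels=4):
        super().__init__()
        # K parallel kernels that get blended per input sample
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        # Tiny attention head: global average pool -> linear -> softmax over K
        self.attn = nn.Linear(in_ch, num_kernels)

    def forward(self, x):
        b, c, h, w = x.shape
        # Mixing ratios computed from the globally pooled input: (B, K)
        ratios = F.softmax(self.attn(x.mean(dim=(2, 3))), dim=1)
        # Blend the K kernels per sample: (B, out_ch, in_ch, kh, kw)
        mixed = torch.einsum('bk,koihw->boihw', ratios, self.weight)
        # One grouped convolution applies each sample's blended kernel
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       mixed.reshape(-1, c, *mixed.shape[-2:]),
                       groups=b, padding=mixed.shape[-1] // 2)
        return out.reshape(b, -1, h, w)
```

The grouped-convolution reshape is one common way to apply a different kernel to each batch element in a single call.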
-
def forward(self, x):
    # Compute FFT to get amplitude and phase
    fft_x = torch.fft.fft2(x)
    amp = torch.abs(fft_x)    # amplitude (magnitude), not the real part
    pha = torch.angle(fft_x)  # phase, not the imaginary part
    # Apply Dynami…
-
convolve_dynamic~ applies non-segmented convolution.
As a result, the signal is delayed by the duration of the applied impulse response, and processing power is consumed only in bursts.
This is a complicated t…
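The latency and burstiness can be illustrated with block-wise overlap-add FFT convolution: no output for a block can be emitted until the whole block has arrived, and all the FFT work happens in one burst per block. This is a generic sketch of that behaviour, not convolve_dynamic~'s actual implementation:

```python
import numpy as np

def blockwise_fft_convolve(signal, ir, block):
    """Overlap-add convolution, processed one block at a time.

    In a real-time stream, each block's output is only available once the
    full block has arrived (a delay of `block` samples), and the FFT work
    happens in a single burst per block rather than spread per-sample.
    """
    n = block + len(ir) - 1              # linear-convolution length per block
    fft_ir = np.fft.rfft(ir, n)          # impulse response spectrum, reused
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        seg = np.fft.irfft(np.fft.rfft(chunk, n) * fft_ir, n)
        out[start:start + n] += seg[:len(out) - start]   # overlap-add the tail
    return out
```

The result matches plain convolution; only the scheduling of the work differs.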
-
The current spectral-resolution convolution method assumes a single convolution kernel that does not vary with wavelength. This assumes that the spectral resolution is the…
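By contrast, a wavelength-dependent kernel could be applied per output pixel. This is a hypothetical sketch (the function names and the Gaussian kernel shape are assumptions for illustration, not an existing API):

```python
import numpy as np

def varying_resolution_convolve(wave, flux, sigma_of_wave):
    """Convolve a spectrum with a Gaussian whose width varies with wavelength.

    A fixed-kernel convolution uses one kernel everywhere; here each output
    pixel gets its own normalized Gaussian of width sigma_of_wave(wavelength).
    """
    out = np.empty_like(flux)
    for i, w in enumerate(wave):
        sigma = sigma_of_wave(w)
        kern = np.exp(-0.5 * ((wave - w) / sigma) ** 2)
        kern /= kern.sum()               # normalize so flux is conserved
        out[i] = np.dot(kern, flux)
    return out
```

Because each kernel is normalized, a flat spectrum passes through unchanged, which is a quick sanity check for this kind of scheme.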
-
Context Modulated Dynamic Networks for Actor and Action Video Segmentation with Language Queries, AAAI 2020
-
1. TexturePose: Supervising Human Mesh Estimation with Texture Consistency (2019)
Texture map (texel): a corresponding UV map un-warps the template surface onto an image, A, which is the texture map
co…
-
## Abstract
- self-attention is strong, but its effect on long-range dependency is in question
- propose `lightweight convolution` and `dynamic convolution`, a convolution as a function of timestep …
-
## In one sentence
Self-Attention is powerful, but it requires 1:N (self vs. everything else) computation, which is costly. This work shows that simply predicting attention weights per timestep achieves comparable performance. The convolution is basically depthwise, with the kernel weights normalized attention-style (softmax); here those weights are computed dynamically.
![image](https://user-images.gi…
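A minimal sketch of the per-timestep dynamic convolution described above (depthwise windows, kernel weights predicted from the current input and softmax-normalized over the kernel width), assuming PyTorch; the head layout and the linear predictor are simplified relative to the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    """Sketch: per-timestep depthwise kernel, softmax over kernel width."""
    def __init__(self, dim, kernel_size=3, heads=4):
        super().__init__()
        self.k, self.h = kernel_size, heads
        # Predict one kernel per head per timestep from the current input
        self.to_kernel = nn.Linear(dim, heads * kernel_size)

    def forward(self, x):                                # x: (B, T, C)
        b, t, c = x.shape
        # Kernel weights normalized like attention, over the window width
        kern = self.to_kernel(x).view(b, t, self.h, self.k)
        kern = F.softmax(kern, dim=-1)                   # (B, T, H, K)
        # Gather local windows of the input: (B, T, C, K)
        pad = self.k // 2
        xp = F.pad(x.transpose(1, 2), (pad, pad))        # (B, C, T + 2*pad)
        win = xp.unfold(2, self.k, 1).transpose(1, 2)    # (B, T, C, K)
        # Each head's kernel is shared across C // heads channels
        win = win.view(b, t, self.h, c // self.h, self.k)
        out = (win * kern.unsqueeze(3)).sum(-1)          # (B, T, H, C/H)
        return out.reshape(b, t, c)
```

Unlike self-attention's 1:N interaction, each output here depends only on a fixed-width local window, so cost is linear in sequence length.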