fkodom / fft-conv-pytorch
Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch. Much faster than direct convolutions for large kernel sizes.
MIT License · 478 stars · 58 forks
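The speedup comes from the convolution theorem: convolution in the signal domain is pointwise multiplication in the frequency domain, so cost drops from O(n·k) per output channel to roughly O(n log n), which wins for large kernels. A minimal NumPy sketch of the idea (an illustration only, not this library's implementation, which also handles batching, channels, groups, and n-dimensional inputs):

```python
import numpy as np

def direct_conv1d(signal, kernel):
    # "Valid" cross-correlation, as deep-learning convolutions are defined.
    n, k = len(signal), len(kernel)
    return np.array([np.dot(signal[i : i + k], kernel) for i in range(n - k + 1)])

def fft_conv1d(signal, kernel):
    # Convolution theorem: multiply the spectra, then invert.
    # The kernel is flipped so true convolution matches cross-correlation.
    n, k = len(signal), len(kernel)
    size = n + k - 1  # pad so circular convolution equals linear convolution
    spectrum = np.fft.rfft(signal, size) * np.fft.rfft(kernel[::-1], size)
    full = np.fft.irfft(spectrum, size)
    return full[k - 1 : n]  # keep only the "valid" region
```

Both functions produce the same output up to floating-point error; the FFT path is the one that scales well as the kernel grows.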
Issues
#25  Speed of depth-wise convolution (opened by lim1011, 3 months ago, 0 comments)
#24  How to achieve overlap and add/save (opened by jelly114514, 5 months ago, 0 comments)
#23  License (closed, hmaarrfk, 1 year ago, 8 comments)
#22  Using fft-conv hurts convergence (closed, OasisArtisan, 1 year ago, 2 comments)
#21  Complex value support? (opened by StephenHogg, 2 years ago, 0 comments)
#20  CUDA out of memory with complex_matmul (opened by aminaab96, 2 years ago, 5 comments)
#19  FFTConvTranspose (opened by tolusophy, 2 years ago, 0 comments)
#18  Add padding=same, support half-precision input (closed, papkov, 1 year ago, 3 comments)
#17  adaptively moves offset to the right device so that gpu can be used (closed, alexhagen, 2 years ago, 3 comments)
#16  Bug Fix and Torch Compatibility (closed, fkodom, 2 years ago, 0 comments)
#15  Add Benchmark Plots (closed, fkodom, 2 years ago, 0 comments)
#14  Cleanup Unit Tests (closed, fkodom, 2 years ago, 0 comments)
#13  Perform dilation with Kronecker product (closed, aretor, 2 years ago, 2 comments)
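The Kronecker-product trick named in #13's title is a standard way to emulate dilation: insert zeros between consecutive kernel taps, then run an ordinary convolution with the widened kernel. A hedged one-dimensional sketch of the idea (the helper name is illustrative, not from the repo):

```python
import numpy as np

def dilate_kernel(kernel, dilation):
    # Insert (dilation - 1) zeros between consecutive taps by taking the
    # Kronecker product with a unit impulse, then trim the trailing zeros.
    if dilation == 1:
        return kernel.copy()
    impulse = np.zeros(dilation)
    impulse[0] = 1.0
    return np.kron(kernel, impulse)[: -(dilation - 1)]
```

For example, a kernel `[1, 2, 3]` with `dilation=2` becomes `[1, 0, 2, 0, 3]`.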
#12  Add dilation parameter and tests (closed, aretor, 2 years ago, 7 comments)
#11  feat: remove einsum for efficiency (closed, yoyolicoris, 2 years ago, 1 comment)
#10  Made it as python package (closed, yoyolicoris, 3 years ago, 1 comment)
#9   in_channels must be divisible by groups (opened by yoyolicoris, 3 years ago, 0 comments)
#8   Frequency domain sub-sampling for strided convolution (closed, yoyolicoris, 2 years ago, 4 comments)
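Issue #8's title points at the aliasing identity that makes strided convolution expressible in the frequency domain: subsampling a signal by a factor s folds its spectrum into s overlapping copies. A small NumPy sketch verifying the s = 2 case (an illustration of the identity, not the repo's code):

```python
import numpy as np

def spectrum_of_downsampled(x):
    # DFT of x[::2] equals the average of the two halves of the DFT of x:
    # the classic aliasing/folding identity for stride-2 subsampling.
    X = np.fft.fft(x)
    half = len(x) // 2
    return 0.5 * (X[:half] + X[half:])
```

Because of this identity, a strided FFT convolution can fold the product spectrum instead of computing the full output and throwing most of it away.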
#7   add plots for better benchmark visualisation (closed, antonfrancois, 2 years ago, 1 comment)
#6   bug (closed, williamlzw, 3 years ago, 4 comments)
#5   Autograd for complex matrix multiplication in Pytorch ? (closed, RobinhoodKi, 3 years ago, 3 comments)
#4   Propagation of error becomes large very fast (closed, dwromero, 3 years ago, 1 comment)
#3   Depth-wise separable convolution? (closed, vaesl, 3 years ago, 11 comments)
#2   Stride (closed, fshamsafar, 3 years ago, 1 comment)
#1   can't work on GPU? (closed, libonwpu, 3 years ago, 1 comment)