-
Hello, why was forward_conv_layer changed?
The original version:
```c
if (l.size == 1) {
    b = im;
} else {
    im2col_cpu(im, l.c/l.groups, l.h, l.w, l.size, l.stride, l.pad, b);
}
```
In y…
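For context on that branch: a 1×1 kernel (with unit stride and no padding) can use the image buffer directly, while larger kernels are unrolled by im2col so the convolution becomes a single matrix multiply. Below is a minimal single-channel NumPy sketch of the idea; it is a simplified stand-in for illustration, not darknet's actual `im2col_cpu`:

```python
import numpy as np

def im2col(im, size, stride, pad):
    """Unroll each size x size patch of a 2D image (H, W) into a
    column, so conv reduces to one matrix multiply per group."""
    im = np.pad(im, pad)
    H, W = im.shape
    out_h = (H - size) // stride + 1
    out_w = (W - size) // stride + 1
    cols = np.empty((size * size, out_h * out_w))
    idx = 0
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            cols[:, idx] = im[i:i + size, j:j + size].ravel()
            idx += 1
    return cols

cols = im2col(np.arange(16.0).reshape(4, 4), size=2, stride=2, pad=0)
print(cols[:, 0])  # first 2x2 patch as a column: [0. 1. 4. 5.]
```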
-
### Description
As discussed on Zulip here: https://quarkusio.zulipchat.com/#narrow/stream/187030-users/topic/RabbitMQ.20Protobuff.20Serialisation.20unknownFields
edit: link to reproducer as zulip …
-
Whenever I try to run
`from mmdet.models import build_detector` I get an error saying
`ImportError: cannot import name 'deform_conv_cuda'`.
What can I do to avoid this error?
-
When I try to load the pruned model, it reports an error:
```
RuntimeError: Error(s) in loading state_dict for MobileNetV3:
Missing key(s) in state_dict: "features.1.conv.4.bias", "features.2.conv.0.bias", …
```
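One way to see exactly which keys diverge is `load_state_dict(..., strict=False)`, which collects the mismatches instead of raising. Note this is only a diagnostic, not a fix: a pruned model usually needs its pruned architecture rebuilt before the state dict can load cleanly. A sketch with a toy model standing in for MobileNetV3:

```python
import torch
from torch import nn

# Toy stand-in for the pruned model: a module whose architecture
# no longer matches the saved state_dict exactly.
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))
sd = model.state_dict()
del sd["0.bias"]                       # simulate a missing key
sd["extra.weight"] = torch.zeros(1)    # simulate an unexpected key

# strict=False loads what matches and reports the rest
result = model.load_state_dict(sd, strict=False)
print(result.missing_keys)     # ['0.bias']
print(result.unexpected_keys)  # ['extra.weight']
```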
-
Right now our conv ops in torch are a bit slow, because torch has non-standard semantics which force us to convert back and forth:
- NCHW layout
- different kernel layout
- different padding conv…
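The first two mismatches in the list can be illustrated with plain array transposes (shapes here are made up for illustration; torch uses NCHW activations and OIHW kernels, while many backends expect NHWC activations and HWIO kernels):

```python
import numpy as np

# NCHW (torch's default activation layout) vs NHWC:
x_nchw = np.zeros((8, 3, 32, 32))        # N, C, H, W
x_nhwc = x_nchw.transpose(0, 2, 3, 1)    # N, H, W, C

# Kernel layouts differ too: torch stores (out_C, in_C, kH, kW),
# while e.g. TF-style kernels are (kH, kW, in_C, out_C):
w_oihw = np.zeros((16, 3, 3, 3))
w_hwio = w_oihw.transpose(2, 3, 1, 0)

print(x_nhwc.shape)  # (8, 32, 32, 3)
print(w_hwio.shape)  # (3, 3, 3, 16)
```

Each such transpose is a full copy of the tensor, which is exactly the back-and-forth conversion cost described above.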
-
Hi,
I came across your post below:
http://madebyoll.in/posts/cnn_acapella_extraction/
I am wondering how you came up with the neural network below:
mashup = Input(shape=(None, None, 1), …
-
Hi,
I want to run the Depth-Anything ViT-S model on an Nvidia Orin NX board. I migrated some code from TensorRT 8.6 to TensorRT 10.0.
When I run ./build/trt-depth-anything --onnx depth_anything_vits14.on…
-
Float16 CUDA `conv` seems to be broken for 5D tensors, but not 3D or 4D tensors. FluxML/Flux.jl#2184
(Using Julia 1.8.3 on an A100 GPU.)
```julia
julia> conv(rand(Float16, 16, 16, 1, 1) |> gpu, …
-
```python
# depth-wise conv
x = torch.einsum('nctv,cvw->nctw', (x, dw_gcn_weight))
# point-wise conv
x = torch.einsum('nctw,cd->ndtw', (x, self.pw_gcn_weight))
```
Can these be replaced by a conv with groups?
…
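For the point-wise case at least, yes: `'nctw,cd->ndtw'` is exactly a 1×1 `Conv2d` (and the depth-wise einsum likewise maps to a grouped conv with `groups=C`, though the kernel reshape there is fiddlier). A small check with made-up shapes, where `pw` stands in for `self.pw_gcn_weight`:

```python
import torch

N, C, D, T, W = 2, 3, 5, 4, 6
x = torch.randn(N, C, T, W)
pw = torch.randn(C, D)  # stand-in for self.pw_gcn_weight

# point-wise einsum from the snippet
y_einsum = torch.einsum('nctw,cd->ndtw', x, pw)

# the same channel mixing as a 1x1 convolution;
# Conv2d weight layout is (out_channels, in_channels, kH, kW)
conv = torch.nn.Conv2d(C, D, kernel_size=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(pw.t().reshape(D, C, 1, 1))

print(torch.allclose(y_einsum, conv(x), atol=1e-5))  # True
```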
-
I have read the paper; it is very interesting work. I was thinking of using it with Conv layers, but the Conv layer is not implemented.
I have searched other GitHub repositories, but there is no PyT…