-
We are contemplating exposing an argument that should sparsify the entire pair-score edgelist (#1026).
It would be very nice if we had some principled way to do this automatically for users. @fjsj …
-
Hello, may I ask whether the code for the paper Gradient Sparsification for Communication-Efficient Distributed Optimization is also in util.py?
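For context, the method in that paper sparsifies a stochastic gradient by keeping each coordinate with some probability and rescaling the kept coordinates so the result stays unbiased. A minimal sketch of that idea (function name, probability rule, and `budget` parameter are illustrative assumptions, not the repository's actual util.py code):

```python
import random

def sparsify_gradient(grad, budget, rng=random.Random(0)):
    """Unbiased probabilistic gradient sparsification (sketch).

    Keep coordinate i with probability p_i (here proportional to |g_i|,
    capped at 1, scaled so roughly `budget` coordinates survive) and
    rescale kept entries by 1/p_i so the expectation equals `grad`.
    """
    total = sum(abs(g) for g in grad)
    if total == 0:
        return [0.0] * len(grad)
    probs = [min(1.0, budget * abs(g) / total) for g in grad]
    return [g / p if p > 0 and rng.random() < p else 0.0
            for g, p in zip(grad, probs)]
```

With a large enough budget every probability saturates at 1 and the gradient passes through unchanged, e.g. `sparsify_gradient([1.0, 1.0], 2)` returns `[1.0, 1.0]`.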
-
If we want to allow solvers that sparsify, then we must also be able to reduce the basis size accordingly. This is currently being implemented in ACE.jl, but we may wish to backport it to ACE1.…
-
Starts with the full graph and deletes random edges from the drivable graph if the new enclosed area does not exceed a limit.
The connected components of the deleted blocks form LTNs.
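The procedure described above can be sketched as follows. This is a minimal illustration with stdlib Python only; the `constraint` callback stands in for the enclosed-area check, and all names are hypothetical:

```python
import random

def sparsify(edges, constraint, rng=random.Random(0)):
    """Delete random edges while `constraint(kept_edges)` still holds.

    edges: list of (u, v) pairs for the full drivable graph.
    constraint: callable standing in for the enclosed-area limit check.
    Returns (kept_edges, deleted_edges).
    """
    kept = list(edges)
    deleted = []
    for e in rng.sample(edges, len(edges)):   # random deletion order
        trial = [x for x in kept if x != e]
        if constraint(trial):                 # deletion stays within the limit
            kept = trial
            deleted.append(e)
    return kept, deleted

def components(edges):
    """Connected components of the graph induced by `edges` (union-find).

    Applied to the deleted edges, these components would form the LTNs.
    """
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path compression
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for u, v in edges:
        comps.setdefault(find(u), set()).update((u, v))
    return list(comps.values())
```

For example, with `constraint = lambda ks: len(ks) >= 3` (a toy stand-in for the area limit), sparsifying a 4-cycle deletes exactly one edge.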
-
Hello,
I am trying to fine-tune a model with sparsification. Specifically, it corresponds to the AlexNet architecture trained on ImageNet data, given here: https://github.com/cvjena/cnn-models . I am t…
-
Hi, it's quite a solid and promising work, but I have some questions.
(1) In the paper, you perform an average pooling with kernel size 2 × 2 after the sixth block for the structural downsampling. But…
-
Hi,
I see in https://github.com/tidsp/caffe-jacinto/blob/caffe-0.16/src/caffe/net.cpp#L2078 that the sparsification process excludes thin layers and depthwise separable layers. I understand …
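The exclusion rule described above can be sketched roughly as follows. This is a hypothetical illustration, not caffe-jacinto's actual C++ code; the channel threshold, field names, and the depthwise test (`groups == in_channels`) are assumptions:

```python
def should_sparsify(layer, min_channels=64):
    """Decide whether to sparsify a conv layer (sketch).

    layer: dict with "out_channels", "in_channels", "groups"
    (hypothetical fields; the threshold of 64 is also an assumption).
    """
    if layer["out_channels"] < min_channels:
        return False                               # thin layer: skip
    if layer["groups"] == layer["in_channels"]:
        return False                               # depthwise separable: skip
    return True
```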
-
From a conversation with @cortner ... we'd like to ensure that highly sparsified bases (e.g., from ARD regression) are still evaluated with optimal speed. Not immediately clear which evaluator would b…
-
Firstly, thank you for updating rnnoise. Now I can convert the model to rnnoise_data.c using dump_rnnoise_weights.py; I want to know how to convert the model to rnnoise_data_little.c.
-
Consider the following MLIR program:
a.mlir:
```
module {
func.func @tensor_i32(%arg0: tensor<?xi32>) -> i32 {
%idx0 = index.constant 0
%0 = tensor.extract %arg0[%idx0] : tensor<?xi32>
return %…