Efficient-Scalable-Machine-Learning / DeepRewire

Easily create and optimize PyTorch networks as in the Deep Rewiring paper (https://igi-web.tugraz.at/PDF/241.pdf). Install using `pip install deep_rewire`.
MIT License

deep #2

Closed phj123456789 closed 1 month ago

phj123456789 commented 1 month ago

Hello, I have two questions for you. 1. Can the sparsity be set manually? If so, how exactly should it be set? 2. I see there is only example code using the soft-DEEPR algorithm, not the DEEPR algorithm. If I want to use DEEPR, how should the nc parameter be set? Thank you very much for your answer.

LuggiStruggi commented 1 month ago

Hi!

I've added an example for using the DEEPR optimizer. The `nc` parameter represents the number of connections and can be calculated for a specific connectivity, such as 0.1, using the formula: `nc = 0.1 * sum(p.numel() for p in sparse_params)`. Currently, DEEPR supports only a global constraint on the number of connections; I'll add an implementation for local (matrix-wise) constraints.
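A minimal sketch of that calculation, using a hypothetical two-layer network as a stand-in for whatever parameters you make sparse (the model and the `connectivity` value here are illustrative, not from the library):

```python
import torch.nn as nn

# Hypothetical network; sparse_params stands in for the parameters
# you hand to the DEEPR optimizer.
model = nn.Sequential(nn.Linear(784, 100), nn.Linear(100, 10))
sparse_params = list(model.parameters())

# Total number of potential connections across all sparse parameters.
total = sum(p.numel() for p in sparse_params)

# nc for a target global connectivity of 0.1 (10% of weights active).
connectivity = 0.1
nc = int(connectivity * total)
print(total, nc)  # 79510 7951
```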

I suspect there might still be an issue with my DEEPR implementation as I haven't achieved great results yet. If you identify any problems, please let me know. I also recommend reviewing the original implementation by the paper's authors for further insights.

Thanks!

phj123456789 commented 1 month ago

I see, thank you very much for your answer.

phj123456789 commented 1 month ago

Hello, sorry to bother you again. I have one more question: how do you evaluate whether the algorithm performs well? Should I compare it against a version that does not use DEEPR? And if I don't want to use DEEPR, how do I switch it off; do I just replace the optimizer with SGD?

LuggiStruggi commented 1 month ago

To compare it fairly to another algorithm, I would say the best way is to use the same network and train it on the same task with multiple different seeds, once with the DEEPR algorithm and once with the algorithm you want to compare it to. Then compare the average performance and the standard deviation. Comparing sparsity directly is hard, since SGD normally doesn't produce exact zero values, but you could count the parameters whose magnitude is below a certain threshold. If you are interested in the trade-off between active connections and performance, you can run a set of experiments with different `nc` values and plot connectivity vs. performance as a scatter plot. Hope I understood your question correctly.
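The threshold-counting idea above can be sketched as follows; the model and threshold are illustrative placeholders, since a dense SGD-trained network rarely contains exact zeros:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 5)  # stand-in for an SGD-trained dense network

# SGD rarely produces exact zeros, so estimate "sparsity" by counting
# parameters whose magnitude falls below a small threshold.
threshold = 1e-3
total = sum(p.numel() for p in model.parameters())
near_zero = sum((p.abs() < threshold).sum().item()
                for p in model.parameters())
print(f"{near_zero}/{total} parameters below {threshold}")
```

This gives a rough analogue of DEEPR's active-connection count for a dense baseline, so the two can be placed on the same connectivity-vs-performance plot.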

phj123456789 commented 1 month ago

Okay, I understand now. Thank you for answering my questions.