Closed tigereatsheep closed 4 months ago
Hello, Li. After a round of parameter tuning, the congestion dropped from 30% to 10%. The remaining problem is that the final overlap (15% at best) will not go down any further; from observation, some local regions are too crowded. Two questions about the code: 1. Where do I modify the progressive filtering applied to the FFT that the paper mentions? I'd like to boost the final high-frequency component. 2. How can I weight the WA gradients coming from large nets?
One more small issue: when I turn off `--use_precond`, it crashes:

```
Traceback (most recent call last):
  File "/home/tigereatsheep/workspace/Xplace/main.py", line 104, in <module>
    main()
  File "/home/tigereatsheep/workspace/Xplace/main.py", line 100, in main
    run_placement_main(args, logger)
  File "/home/tigereatsheep/workspace/Xplace/src/run_placement.py", line 41, in run_placement_main
    run_placement_single(args, logger)
  File "/home/tigereatsheep/workspace/Xplace/src/run_placement.py", line 10, in run_placement_single
    res = run_placement_main_nesterov(args, logger)
  File "/home/tigereatsheep/workspace/Xplace/src/run_placement_nesterov.py", line 109, in run_placement_main_nesterov
    init_lr = estimate_initial_learning_rate(obj_and_grad_fn, trunc_node_pos_fn, mov_node_pos, args.lr)
  File "/home/tigereatsheep/workspace/Xplace/src/initializer.py", line 147, in estimate_initial_learning_rate
    x_k_1 = (constraint_fn(x_k - lr * g_k)).clone().detach().requires_grad_(True)
TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
```

The error marker points at `lr * g_k`.
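For context on the failure mode: in PyTorch a tensor's `.grad` stays `None` until a backward pass populates it, and multiplying a Python float by `None` raises exactly this `TypeError`. A minimal, framework-free sketch (the `g_k` name mirrors the traceback; the guard at the end is a hypothetical workaround, not the repo's fix):

```python
lr = 0.01
g_k = None  # stands in for a gradient that was never computed
            # (e.g. x_k.grad before any backward pass has run)

try:
    step = lr * g_k
except TypeError as e:
    # unsupported operand type(s) for *: 'float' and 'NoneType'
    print(e)

# hypothetical guard: substitute a zero gradient when none exists
step = lr * (g_k if g_k is not None else 0.0)
print(step)  # 0.0
```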
Enabling `--use_precond` would be better. This parameter is highly correlated with the solution quality.
All right. I'm worried that in some special cases this may disrupt the balance between HPWL and overflow.
1. In our default `main` branch, the NN-assisted gradient is not enabled. If you wish to modify the high-frequency component, you will need to switch to the `neural` branch. But the `neural` branch does not support routability optimization. As you want to optimize the routability, I suggest using the `main` branch.
2. Currently, we do not have an API to adjust the net weight for increasing the weight of a high-pin net. However, you can manually make changes by (1) setting a large [--ignore_net_degree](https://github.com/cuhk-eda/Xplace/blob/main/main.py#L32) and (2) adding your custom net weight in the [pin grad](https://github.com/cuhk-eda/Xplace/blob/main/cpp_to_py/wa_wirelength_hpwl_cuda/wa_wirelength_hpwl_cuda_kernel.cu#L269-L270). You can try to modify the code in [folder](https://github.com/cuhk-eda/Xplace/tree/main/cpp_to_py/wa_wirelength_hpwl_cuda) to pass the net weight parameter to the cuda function. Basically, the `net_weight` is a `float` tensor and its tensor size and indexing can follow the `net_mask`. Please feel free to contact me if you have any questions about modifying the code.
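As a rough sketch of the Python side of such a change (illustrative only; `net_degree`, `make_net_weight`, `boost`, and `degree_threshold` are hypothetical names, with only `net_mask` taken from the description above), a `net_weight` tensor that boosts high-pin nets could be built like this before being passed down to the CUDA function:

```python
import torch

def make_net_weight(net_degree, net_mask, boost=4.0, degree_threshold=32):
    """Build a float net_weight tensor shaped and indexed like net_mask.

    Nets with at least `degree_threshold` pins get weight `boost` so their
    WA gradient is not drowned out; masked-out nets keep weight 0.
    """
    net_weight = torch.ones_like(net_mask, dtype=torch.float32)
    net_weight[net_degree >= degree_threshold] = boost
    return net_weight * net_mask.float()

# toy usage: four nets, the third has 40 pins, the fourth is masked out
net_degree = torch.tensor([3, 5, 40, 7])
net_mask = torch.tensor([1, 1, 1, 0], dtype=torch.bool)
print(make_net_weight(net_degree, net_mask))  # tensor([1., 1., 4., 0.])
```

The threshold and boost values here are placeholders; in practice they would be tuned against the congestion map.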
Hello, Li. I've implemented large-net weighting in `wa_wirelength_hpwl_cuda_kernel.cu`. The congestion dropped from 10% to 1%. I think the WA function pays more attention to small nets (and less to large ones) because of the log-sum denominator. If this can be written up clearly, it might make a nice short paper.
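That intuition can be checked on the smooth-max (log-sum-exp) part of the WA objective: each pin's gradient is a softmax weight, and the softmax weights of one net sum to 1, so the per-pin pull shrinks roughly as 1/degree. A self-contained sketch (illustrative math only, not the kernel code):

```python
import math

def lse_grad(xs, gamma=1.0):
    """Per-pin gradient of gamma * log(sum_j exp(x_j / gamma)):
    exactly softmax(x / gamma), which sums to 1 over the net's pins."""
    exps = [math.exp(x / gamma) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Pins at identical positions: the fixed unit of "pull" is split evenly,
# so each pin of a 40-pin net feels a tenth of a 4-pin net's force.
print(lse_grad([0.0] * 4)[0])   # 0.25
print(lse_grad([0.0] * 40)[0])  # 0.025
```

Multiplying a large net's contribution by a weight proportional to its degree would compensate for exactly this dilution.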
Glad to hear that.
Thank you so much for your help! I'll attach a figure and close the issue.
See https://github.com/cuhk-eda/ripple/issues/12. Hello Li, reopening the issue here. I downloaded four papers from your group's work today; I'll need some time to read them before getting back to debugging Xplace. Thanks a lot!