[CVPR 2022] RestoreFormer: High-Quality Blind Face Restoration from Undegraded Key-Value Pairs
HQ_Dictionary model and RestoreFormer model use different number of attention heads #20
Open
siegelaaron94 opened 1 year ago
RestoreFormer sets the number of attention heads to 8 here: https://github.com/wzhouxiff/RestoreFormer/blob/f31d8e9a200a6200368d6fc27596d20499e0e991/configs/RestoreFormer.yaml#L30, but the HQ_Dictionary model leaves it at the default, i.e. 1. Is this a bug, or was it done on purpose?
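For context on what the setting changes: with multi-head attention the embedding dimension is split evenly across heads, so the output shape is identical whether 1 or 8 heads are used; only the granularity of the attention differs (one attention map over the full feature vector vs. eight independent maps over 64-dim subspaces of a 512-dim feature). A minimal NumPy sketch of this, not taken from the RestoreFormer codebase, with Q = K = V for brevity:

```python
import numpy as np

def multi_head_self_attention(x, num_heads):
    """Toy multi-head self-attention (Q = K = V = x, no projections)."""
    seq_len, embed_dim = x.shape
    assert embed_dim % num_heads == 0, "embed_dim must be divisible by num_heads"
    head_dim = embed_dim // num_heads
    # split channels into heads: (num_heads, seq_len, head_dim)
    h = x.reshape(seq_len, num_heads, head_dim).transpose(1, 0, 2)
    # scaled dot-product attention, computed independently per head
    scores = h @ h.transpose(0, 2, 1) / np.sqrt(head_dim)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ h
    # merge heads back: (seq_len, embed_dim)
    return out.transpose(1, 0, 2).reshape(seq_len, embed_dim)

x = np.random.randn(16, 512).astype(np.float32)
print(multi_head_self_attention(x, 1).shape)  # (16, 512)
print(multi_head_self_attention(x, 8).shape)  # (16, 512)
```

Because the output shape is unchanged, a head-count mismatch between the two configs would not cause a loading error; it would silently change how the HQ_Dictionary model's attention behaves relative to RestoreFormer's.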