Closed usamec closed 6 months ago
My fault. I was initially using [1, 125, 256, 1], and even that was too big. But KAN does seem to converge faster than MLP (or maybe it's a learning-rate problem). I'll run some tests later and fix it soon.
Even with a properly tuned MLP, ChebyKAN seems to converge faster. https://colab.research.google.com/drive/1dco5uoDXSF7c6B2WDGhabT7dnsAJiOzi
Your MLP baseline in the function interpolation example is too big and undertrained.
This is a properly tuned MLP: https://colab.research.google.com/drive/1wJFhSeTF9xTikN_ranR2xebf9HEaHo5Y
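For context, here is a minimal sketch of the kind of small MLP baseline being discussed for 1-D function interpolation. This is illustrative only: the target function, hidden width (64), and learning rate are my own assumptions, not taken from either Colab notebook, and it uses plain NumPy rather than the notebooks' framework.

```python
import numpy as np

# Hypothetical 1-D interpolation target (not from the linked notebooks).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 256).reshape(-1, 1)
y = np.sin(np.pi * x)

# Small MLP: 1 -> 64 -> 1 with tanh. Width and lr are illustrative guesses;
# the point is that a compact, well-tuned net suffices for a fair baseline.
w1 = rng.normal(0, 0.5, (1, 64)); b1 = np.zeros(64)
w2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ w1 + b1)   # hidden activations
    return h, h @ w2 + b2      # hidden layer, prediction

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)  # MSE before training

# Full-batch gradient descent with manual backprop.
for _ in range(2000):
    h, pred = forward(x)
    grad_pred = 2 * (pred - y) / len(x)      # dMSE/dpred
    gw2 = h.T @ grad_pred; gb2 = grad_pred.sum(0)
    grad_h = grad_pred @ w2.T * (1 - h ** 2)  # tanh' = 1 - tanh^2
    gw1 = x.T @ grad_h; gb1 = grad_h.sum(0)
    w2 -= lr * gw2; b2 -= lr * gb2
    w1 -= lr * gw1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - y) ** 2)  # MSE after training
print(f"initial MSE: {loss0:.4f}, final MSE: {loss:.6f}")
```

With a tiny network like this, the interesting comparison is epochs-to-a-given-MSE under a tuned learning rate for both models, rather than final loss of a large undertrained net.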