Closed basnijholt closed 5 years ago
_originally posted by Jorn Hoofwijk (@Jorn) at 2018-10-29T11:05:51.877Z on GitLab_
Now, say we optimize this LearnerND (e.g. by switching to C++/Cython); then this (MR + gitlab:#80) may suffice to make the LearnerND as good as the Learner2D, and we could then have another look at gitlab:#83.
_originally posted by Jorn Hoofwijk (@Jorn) at 2018-10-29T13:12:27.234Z on GitLab_
And it works for R^N -> R^M now :) I tested it out in 3D and it works great :D Although I'm not going to pretend that it's fast, it should need fewer points than the old loss, which is great and should make it faster for sufficiently slow functions :)
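The thread doesn't spell out the loss itself, but one common way a simplex-based loss can handle vector outputs (R^N -> R^M) is to embed each simplex in the combined input–output space R^(N+M) and use its volume: a simplex over which the function varies strongly gets a large volume and is refined first. The sketch below is purely illustrative and uses only NumPy; the function name `embedded_simplex_loss` and the exact formula are my assumptions, not the code from this MR.

```python
import math
import numpy as np

def embedded_simplex_loss(points, values):
    """Illustrative loss for one simplex: volume of the simplex whose
    vertices are (point, value) pairs embedded in R^(N+M).

    points: (k+1, N) array, vertices of a k-simplex in input space.
    values: (k+1,) or (k+1, M) array, function values at those vertices.
    """
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    if values.ndim == 1:                      # allow scalar-valued functions
        values = values[:, np.newaxis]
    embedded = np.hstack([points, values])    # vertices in R^(N+M)
    edges = embedded[1:] - embedded[0]        # k edge vectors
    gram = edges @ edges.T                    # k x k Gram matrix
    k = len(edges)
    # Volume of a k-simplex from the Gram determinant.
    return math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(k)

# A flat function contributes only the geometric area of the triangle,
# while a varying one inflates the embedded volume and attracts points.
flat = embedded_simplex_loss([(0, 0), (1, 0), (0, 1)], [0.0, 0.0, 0.0])
bent = embedded_simplex_loss([(0, 0), (1, 0), (0, 1)], [0.0, 1.0, 0.0])
```

Here `flat` equals the plain triangle area (0.5), and `bent > flat`, so the subdivision would prefer the simplex where the function changes.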
_originally posted by Bas Nijholt (@basnijholt) at 2018-10-30T13:58:05.003Z on GitLab_
Cool!
Would it be possible to implement this in a similar way to what we did in gitlab:!131?
_originally posted by Bas Nijholt (@basnijholt) at 2018-11-05T10:44:09.534Z on GitLab_
@Jorn I introduced this method but I am not sure about the name. Could you fix it if you know a better description?
(original issue on GitLab)
opened by Jorn Hoofwijk (@Jorn) at 2018-10-26T19:36:45.884Z
Closes gitlab:#120
TODO: add support for output in $R^N$

TODO: rewrite the code to be more readable, I will do this next week
As you can see in the plot, it is getting hard to distinguish the LearnerND from the Learner2D :D