Open PoliteApps opened 2 years ago
An sklearn-style API would also be valuable for hyperparameter optimization and other research. For example, to run the model against the benchmark datasets and code from "Why do tree-based models still outperform deep learning on tabular data?" (Léo Grinsztajn et al., 2022), I would currently need to implement the wrapper myself.
Were you able to figure out how to use this on custom datasets?
I just gave up after reading this: https://medium.com/@tunguz/trouble-with-hopular-6649f22fa2d3
I was able to run it on a custom dataset. You need to create a class that conforms to the dataset API here (https://github.com/ml-jku/hopular/blob/main/hopular/auxiliary/data.py), and then you can run it following the instructions in the README.
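For anyone stuck on the same step, the general pattern looks something like the sketch below. Note that the names here (`BaseDataset`, `features`, `targets`, the split attributes) are assumptions for illustration, not the repository's actual API; match them against the real classes in hopular/auxiliary/data.py.

```python
# Hypothetical sketch of wrapping a custom tabular dataset so it can be
# consumed the way the bundled datasets are. Class and attribute names are
# ASSUMPTIONS -- check hopular/auxiliary/data.py for the real interface.
import random


class BaseDataset:
    """Stand-in for the repository's dataset base class (assumed name)."""

    def __init__(self, features, targets):
        self.features = features
        self.targets = targets

    def __len__(self):
        return len(self.features)


class MyCustomDataset(BaseDataset):
    """Wraps an in-memory feature matrix and target list.

    In practice you would load your CSV here and declare which columns are
    categorical vs. numerical, mirroring the bundled dataset classes.
    """

    def __init__(self, X, y, test_fraction=0.2):
        super().__init__(X, y)
        # Simple deterministic train/test split by index.
        n_test = int(len(X) * test_fraction)
        self.train_indices = list(range(len(X) - n_test))
        self.test_indices = list(range(len(X) - n_test, len(X)))


# Tiny usage example with random data.
rng = random.Random(0)
X = [[rng.random() for _ in range(5)] for _ in range(100)]
y = [rng.randint(0, 1) for _ in range(100)]
ds = MyCustomDataset(X, y)
print(len(ds), len(ds.train_indices), len(ds.test_indices))  # 100 80 20
```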
I ran some tests comparing the model to XGBoost and CatBoost on the same datasets as the paper, tuned as described there, but reached much better results than the ones reported.
I also ran the model on a bigger dataset (58K samples) and was amazed at how much memory it uses. I believe this is a very interesting idea, but not yet a usable model.
Still a great paper and repo, though.
I looked into the repository and did not find an easy way to use your code on my own datasets. What I would expect is an interface similar to the estimator API in scikit-learn. This would make it much easier for other researchers and students to use this new network. Is there any script, patch, or material available to help use the repository this way?
E.g. hopular.fit(x,y)
Thanks!
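To make the request concrete, here is a minimal sketch of the kind of wrapper being asked for. `HopularClassifier` and everything inside it are NOT part of the repository; the "model" is a trivial majority-class placeholder standing in for the actual network, purely to illustrate the `fit`/`predict` interface shape that scikit-learn estimators follow.

```python
# Hypothetical sklearn-style wrapper. A real implementation would build the
# dataset object, configure the network, and train it inside fit(); here a
# majority-class placeholder stands in for the model.
from collections import Counter


class HopularClassifier:
    def fit(self, X, y):
        # Placeholder "training": memorize the most frequent class label.
        self._majority = Counter(y).most_common(1)[0][0]
        return self  # sklearn convention: fit returns self

    def predict(self, X):
        # Predict the memorized majority class for every row.
        return [self._majority for _ in X]

    def score(self, X, y):
        # Plain accuracy, as sklearn classifiers do by default.
        preds = self.predict(X)
        return sum(p == t for p, t in zip(preds, y)) / len(y)


# Usage mirrors any sklearn estimator: model.fit(X, y), then model.predict(X).
X = [[0.0], [1.0], [2.0], [3.0]]
y = [1, 1, 1, 0]
model = HopularClassifier().fit(X, y)
print(model.predict([[5.0]]))  # [1]
print(model.score(X, y))       # 0.75
```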