Great code! Could you help me obtain the datasets mentioned in the paper, so I can try your code? For reference, the paper says:
Datasets. We use all regression datasets from three recent works: 3 large tabular datasets from [43], a chemical dataset from [39], and 4 standard (UCI) datasets from [56]. For classification, we use 8 multi-class datasets from [18] and 2 large multi-class datasets from [28], on which we report accuracy, and 2 binary chemical datasets from [39], where we follow their scheme and report AUC. Sizes, splits and other details for all the datasets are given in Appendix B.
[43] Sergei Popov, Stanislav Morozov, and Artem Babenko. Neural oblivious decision ensembles for deep learning on tabular data. In International Conference on Learning Representations (ICLR), 2020.
[39] Guang-He Lee and Tommi S. Jaakkola. Locally constant networks. In International Conference on Learning Representations (ICLR), 2020.
[56] Arman Zharmagambetov and Miguel Á. Carreira-Perpiñán. Smaller, more accurate regression forests using tree alternating optimization. In Proceedings of the 37th International Conference on Machine Learning, pages 11398–11408, 2020.
[18] Miguel Á. Carreira-Perpiñán and Pooya Tavallali. Alternating optimization of decision trees, with application to learning sparse oblique trees. In Advances in Neural Information Processing Systems, pages 1211–1221, 2018.
[28] Raphaël Féraud, Robin Allesiardo, Tanguy Urvoy, and Fabrice Clérot. Random forest for the contextual bandit problem. In Arthur Gretton and Christian C. Robert, editors, Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 93–101, Cádiz, Spain, 09–11 May 2016. PMLR.
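While waiting for pointers to the exact datasets, here is a minimal sketch of how one might load a stand-in regression dataset with scikit-learn just to smoke-test the code. California housing is purely an assumption for illustration; the paper's actual datasets and splits are the ones listed in its Appendix B:

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

# California housing stands in here for one of the standard UCI-style
# regression datasets; substitute the actual datasets and splits from
# the paper's Appendix B once they are available.
X, y = fetch_california_housing(return_X_y=True)

# Placeholder 80/20 split; the paper's exact splits may differ.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
print(X_train.shape, X_test.shape)
```

Any script shaped like this should make it easy to swap in the real datasets once their names, sources, and splits are confirmed.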