-
A problem occurred when I was training the Baseline Model:
```
ValueError                        Traceback (most recent call last)
in ()
----> 1 net, opt = build_model()
      2
      3 net_l…
```
-
```
I found something odd with the Splay benchmark. It creates lots and lots of
objects with properties, `array` and `string`, that are never accessed/modified
and whose `array` value is also fixed. I th…
-
Prediction of quantiles for a few thousand new records (3000 rows, 3 quantiles, 41 predictors) using a `RandomForestQuantileRegressor` (e.g. `n_estimators=50, min_samples_split=10, min_samples_leaf=10…
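`RandomForestQuantileRegressor` comes from a quantile-forest add-on package rather than scikit-learn itself, so as a rough, self-contained stand-in the same shape of problem can be sketched with plain scikit-learn by pooling per-tree predictions and taking quantiles over them. This is an approximation, not the exact QRF algorithm (which uses the leaf-level target distribution); the data here is synthetic and only mirrors the quoted dimensions (3000 rows, 3 quantiles, 41 predictors) and hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 41))
y_train = X_train[:, 0] + rng.normal(size=2000)
X_new = rng.normal(size=(3000, 41))           # "a few thousand new records"

# Plain random forest; hyperparameters mirror the ones quoted above.
rf = RandomForestRegressor(n_estimators=50, min_samples_split=10,
                           min_samples_leaf=10, random_state=0)
rf.fit(X_train, y_train)

# Approximate quantiles by pooling the per-tree predictions for each row.
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])  # (50, 3000)
quantiles = np.quantile(per_tree, [0.1, 0.5, 0.9], axis=0).T     # (3000, 3)
```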
-
I have a bunch of benchmarks which are done across differently sized datasets (n=10, 100, 1000, ...) to measure how runtime scales across packages. This can actually produce useful information when d…
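A minimal sketch of that setup (function, sizes, and workload are illustrative, not the actual benchmark suite): time one operation at each size, then fit a log-log slope to get a rough empirical scaling exponent.

```python
import time
import numpy as np

def bench(fn, arg, repeats=5):
    # Best-of-repeats wall time, to damp scheduler noise.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(arg)
        times.append(time.perf_counter() - start)
    return min(times)

# Differently sized inputs, as in the n=10, 100, 1000, ... scheme.
sizes = [10, 100, 1000, 10000]
results = {n: bench(sorted, list(range(n, 0, -1))) for n in sizes}

# Slope of log(time) vs. log(n) approximates the scaling exponent.
slope = float(np.polyfit(np.log(sizes),
                         np.log(list(results.values())), 1)[0])
```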
-
Hi, I have been working on a surrogate model (Hyperboost) based on gradient boosting. This seems to outperform SMAC's random forest in most cases, while the training and querying of the surrogate mode…
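Hyperboost's internals aren't shown here; purely as a generic illustration of a gradient-boosting surrogate (the objective, data, and loop below are my own toy setup, not SMAC's or Hyperboost's API), one can fit the booster on observed (configuration, loss) pairs and query it cheaply to rank candidates:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def objective(cfg):
    # Toy objective standing in for an expensive training run.
    return (cfg[0] - 0.3) ** 2 + (cfg[1] - 0.7) ** 2

# Observed (configuration, loss) history, e.g. from earlier trials.
X_hist = rng.uniform(size=(40, 2))
y_hist = np.array([objective(c) for c in X_hist])

# Gradient-boosting surrogate fit on the history.
surrogate = GradientBoostingRegressor(random_state=0).fit(X_hist, y_hist)

# Querying the surrogate is cheap: score many candidates, keep the best.
candidates = rng.uniform(size=(1000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
```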
-
Enhancement suggestion: Adding optimal leaf ordering as an option for `clustermap`
Background:
Many users are unaware that the leaf ordering in hierarchical clustering without any explicit leaf or…
votti updated 3 years ago
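SciPy already ships the needed primitive, `scipy.cluster.hierarchy.optimal_leaf_ordering`, so the enhancement would mostly be plumbing; since `clustermap` accepts precomputed linkages via `row_linkage`/`col_linkage`, that is a likely integration point. A minimal sketch of just the reordering step (random data, method choice illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, optimal_leaf_ordering, leaves_list
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
data = rng.normal(size=(20, 5))

d = pdist(data)                       # condensed pairwise distances
Z = linkage(d, method="average")      # plain hierarchical clustering
Z_olo = optimal_leaf_ordering(Z, d)   # reorder leaves (Bar-Joseph et al.)

order = leaves_list(Z_olo)            # leaf order to feed back into plotting
```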
-
This issue makes part of #20 more concrete.
Recurrent Neural Networks have become an effective neural network architecture that we would like to implement in Leaf as well. The operations could probably …
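Leaf itself is Rust, but the per-step compute is small; a language-agnostic sketch of one vanilla-RNN step (NumPy here purely to show the operations a backend needs — two matmuls, a bias add, a pointwise tanh; all names are illustrative, not Leaf's API):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # h_t = tanh(x_t @ W_xh + h_prev @ W_hh + b_h): the full per-step compute.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
batch, n_in, n_hidden = 4, 3, 8
W_xh = rng.normal(size=(n_in, n_hidden))
W_hh = rng.normal(size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)

# Unroll over a short sequence, carrying the hidden state forward.
h = np.zeros((batch, n_hidden))
for x_t in rng.normal(size=(5, batch, n_in)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```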
-
Just from reading over the code, it seems like updating the values within the data-source could break searches. Am I missing something?
The obvious solution for doing "updates" would be t…
-
Hello!
I've really enjoyed reading through these benchmarks, and drawing conclusions about how code is structured. The multi-crate build times in particular really help you see how many crates are …