-
```
[LightGBM] [Fatal] Socket send error, code: 104
distributed.worker - WARNING - Compute Failed
```
Full logs:
```
2021-03-15T22:41:00.2549100Z ============================= test session st…
```
-
This might be relevant for training lots of models (100s, 1000s...) on smaller data; when running them in parallel, one model per CPU core would probably be the most efficient if the data is small and all th…
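A hypothetical sketch of that one-model-per-core setup (the toy least-squares "model" and all names here are mine, not from any library): many small, fully independent fits dispatched across processes, rather than one distributed fit.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fit_one(seed):
    """Toy stand-in for one small model: a least-squares fit on tiny data."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

if __name__ == "__main__":
    # One worker per CPU core by default; each fit is independent,
    # so there is no cross-worker communication at all.
    with ProcessPoolExecutor() as pool:
        coefs = list(pool.map(fit_one, range(8)))
    print(len(coefs))  # 8
```

Because the fits share nothing, this avoids the network coordination (and socket errors) that distributed training of a single model involves.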
-
**Describe the bug**
In skpro.regression.residual.ResidualDouble, for the argument residual_trafo="squared", we should be taking the sqrt of the scale estimate before feeding it into the scale parame…
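A minimal numpy illustration of the point (the mean here is a stand-in for the scale regressor's prediction, not skpro internals): with `residual_trafo="squared"` the fitted quantity estimates E[r²], so its square root, not the raw value, is on the same unit as the residuals.

```python
import numpy as np

residuals = np.array([1.0, -2.0, 3.0, -4.0])
squared = residuals ** 2         # targets the scale regressor is fit on
predicted = squared.mean()       # stand-in for the regressor's prediction, ~E[r^2]
scale = np.sqrt(predicted)       # sqrt before use as a scale parameter
print(round(scale, 4))           # 2.7386
```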
-
The `min_child_weight` parameter (default value 1.0) has different effects depending on the scaling of the objective function. I noticed this when developing a new objective function that had a small Hessian and…
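A small illustration of the interaction (the Hessian values are made up): `min_child_weight` thresholds the sum of Hessians in a candidate leaf, so multiplying a custom objective by a constant rescales the Hessian and changes which splits pass the threshold, even though the loss's optimum is unchanged.

```python
import numpy as np

# Made-up per-sample Hessians from a custom objective with a small Hessian.
hess = np.full(50, 0.01)
min_child_weight = 1.0  # the default mentioned above

# The check compares min_child_weight against the Hessian sum in a leaf.
print(hess.sum() >= min_child_weight)          # False: 0.5 < 1.0, split rejected
print((100 * hess).sum() >= min_child_weight)  # True: same loss shape, rescaled
```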
-
Hi, @fabsig thank you for your work, this sounds like an exciting method.
IIRC, you currently only support 0/1 binary outcomes with a logistic link (ctrl+F searching for 'logit'):
https://github.…
-
### Describe the bug
I am trying to implement federated learning using Flower from the link: https://github.com/adap/flower/tree/main/examples/xgboost-quickstart
However, I am using a CatBoost model …
-
Hi mmlspark team,
Given I have a LightGBM model trained in Python with a dataset that contains categorical features and missing values. LightGBM deals with both under the hood, which is neat.
…
-
https://github.com/Microsoft/LightGBM/wiki/Features
Pros: the leaf-wise algorithm can reduce loss more than the level-wise algorithm.
In each iteration on the master, the todo queue is sorted by delta loss and then se…
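A toy sketch of the queue-driven, best-first idea (the gain numbers and function names are made up for illustration): at each step the leaf with the largest loss reduction is split, unlike level-wise growth, which splits every leaf of the current depth.

```python
import heapq

def grow_leaf_wise(root_gain, child_gains, max_leaves):
    """Return the order in which leaves are split under best-first growth.

    child_gains maps a leaf id to the (made-up) gains of the two leaves
    its split creates; leaves absent from the map cannot be split further.
    """
    heap = [(-root_gain, 0)]       # max-priority queue via negated gains
    leaves, next_id, order = 1, 1, []
    while heap and leaves < max_leaves:
        _, leaf = heapq.heappop(heap)   # leaf with the largest delta loss
        order.append(leaf)
        for g in child_gains.get(leaf, []):
            heapq.heappush(heap, (-g, next_id))
            next_id += 1
        leaves += 1                # splitting one leaf adds one net leaf
    return order

# Root (gain 10) yields leaves 1 (gain 7) and 2 (gain 2); leaf 1 yields
# leaves 3 (gain 1) and 4 (gain 3). Best-first splits 1 before 2, then 4.
print(grow_leaf_wise(10.0, {0: [7.0, 2.0], 1: [1.0, 3.0]}, 4))  # [0, 1, 4]
```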
-
Is there a UI or log in which to see the number of iterations and the loss changes when running a model in mmlspark LightGBM?
AB#1984522
-
I noticed in the introduction that
> torch::deploy (MultiPy for non-PyTorch use cases) is a C++ library that enables you to run eager mode PyTorch models in production without any modifications to …