-
`PenalizedMixin` subclasses use the inherited `summary`, which contains no information about the penalization besides the model name.
When reading the summary in a notebook it would be usef…
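One way this could look is a `summary` override in the mixin that appends the penalization details to the inherited text. This is a hypothetical, self-contained sketch: `BaseModel`, `pen_weight`, and `penalty_name` are stand-ins for illustration, not the statsmodels implementation.

```python
class BaseModel:
    """Stand-in for the base estimator whose summary() the mixin inherits."""
    def summary(self):
        return "Model: GLM\ncoef: ..."

class PenalizedMixin(BaseModel):
    """Illustrative mixin: append penalization info to the inherited summary."""
    def __init__(self, pen_weight=1.0, penalty_name="SCAD"):
        self.pen_weight = pen_weight      # hypothetical attribute names
        self.penalty_name = penalty_name

    def summary(self):
        # Reuse the inherited summary text, then tack on the penalty details
        # so they are visible when the summary is read in a notebook.
        base = super().summary()
        return base + f"\nPenalty: {self.penalty_name} (pen_weight={self.pen_weight})"

text = PenalizedMixin(pen_weight=0.5).summary()
print(text)
```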
-
## Objective
Sometimes when nodes experience issues they can return an ID field that is inconsistent with the requested ID field. QoS should filter these responses and trigger a retry when this beh…
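The filter-and-retry behavior could be sketched like this; `query_with_retry` and the response shape are hypothetical, chosen only to show the idea of dropping responses whose echoed ID does not match the requested one.

```python
def query_with_retry(fetch, requested_id, max_retries=3):
    """Call fetch(requested_id); drop responses whose 'id' mismatches and retry."""
    for _ in range(max_retries):
        response = fetch(requested_id)
        if response.get("id") == requested_id:
            return response
        # Inconsistent ID field: discard this response and try again.
    raise RuntimeError(f"no consistent response for id={requested_id!r}")

# usage: a node that returns an inconsistent ID once, then recovers
calls = []
def flaky_fetch(req_id):
    calls.append(req_id)
    return {"id": "wrong"} if len(calls) == 1 else {"id": req_id, "value": 42}

result = query_with_retry(flaky_fetch, "abc")
print(result)
```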
-
Hey all,
According to the issues I have read in this repository, `batch_size > 1` only works with the local_penalization evaluator, not with sequential or any other evaluator.
I have set up…
-
Should we allow using `tell` like
```python
tell(solutions, objective_values, constraints_values=constraints_values)
```
Implementation thoughts: the above call would probably instantiate a `A…
-
Hi,
currently a batch solution can contain infeasible points in the case of batch generation with local penalization (LP). The constraints are checked when the acquisition function is evaluated, but the…
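One possible mitigation, sketched here with illustrative names: after the batch is assembled, re-check the constraints on the final points and keep only the feasible ones (again assuming the `g(x) <= 0` convention).

```python
import numpy as np

def filter_feasible(batch, constraint_fns):
    """Keep only points x for which every constraint satisfies g(x) <= 0."""
    keep = [x for x in batch if all(g(x) <= 0 for g in constraint_fns)]
    return np.array(keep)

# usage: one constraint requiring points to stay inside the unit box
batch = np.array([[0.2, 0.3], [1.5, 0.1], [0.4, 0.9]])
constraints = [lambda x: np.max(np.abs(x)) - 1.0]
feasible = filter_feasible(batch, constraints)
print(feasible)
```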
-
Hello,
I'm interested in using the MultipointExpectedImprovement together with a CandidatePointCalculator to suggest candidates in batch mode.
I found a very nice example of this batched can…
-
#2435 PR with development discussion
**update**
#5370 latest version of GAM PR
#5296 previous PR with comments on most of the changes to get it to work correctly
## TODO:
- [ ] k-fold cro…
-
`GLM.fit_regularized` uses `**kwargs` for some penalization keywords but doesn't document their defaults.
e.g. the default for `L1_wt` is not visible:
```
defaults = {"maxiter": 50, "L1_wt": 1,…
-
Hey guys,
I would like to add a concern regarding the reward function.
After some analysis, I think it can be easily exploited by controllers that do not walk. Basically, the positive reward …
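To make the exploit concrete, here is a hedged sketch (not the repository's actual reward): if the reward is dominated by forward displacement, with at most a small per-step cost, a controller that stands still accumulates zero or negative return instead of exploiting a survival bonus. All names and constants here are illustrative.

```python
def reward(prev_x, curr_x, ctrl_cost=0.001):
    """Illustrative reward: forward progress minus a small per-step cost.

    A controller that does not move earns -ctrl_cost each step, so
    standing still cannot accumulate positive reward.
    """
    progress = curr_x - prev_x  # displacement along the walking axis
    return progress - ctrl_cost

# standing still for 100 steps vs. walking 0.01 units per step
standing = sum(reward(0.0, 0.0) for _ in range(100))
walking = sum(reward(i * 0.01, (i + 1) * 0.01) for i in range(100))
print(standing, walking)
```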
-
**Is your feature request related to a problem? Please describe.**
A good idea would be the ability to award users points in the case of "proof of work" challenges or to penalize them if they have be…