-
If the tokenizer prepends `_` as a start-of-word (SOW) token, single-token evals will fail.
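A minimal sketch of the failure mode, using a toy SentencePiece-style tokenizer (the `toy_tokenize` function and the `▁` marker below are illustrative, not the actual tokenizer in question):

```python
# Toy tokenizer: prepends "▁" (U+2581) to mark a word boundary at the
# start of the input, the way SentencePiece-style tokenizers often do.
def toy_tokenize(text):
    words = text.split()
    return ["▁" + words[0]] + words[1:]

# A single-token eval that compares the model's answer token against a
# bare label string never matches, because the tokenizer emitted
# "▁cat" rather than "cat".
label = "cat"
tokens = toy_tokenize("cat")
print(tokens[0] == label)              # False: "▁cat" != "cat"
print(tokens[0].lstrip("▁") == label)  # True once the SOW marker is stripped
```

Stripping (or accounting for) the SOW marker before comparison is one way such an eval can be made robust.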
Reported by @anton-l
-
`@benchmarkset` is nowhere in the documentation and can only be found in the [API Reference](https://juliaci.github.io/BenchmarkTools.jl/stable/reference/#BenchmarkTools.@benchmarkset-Tuple{Any,%2…
-
### Describe the feature or improvement you're requesting
I think I found a way to improve GPT-4's understanding of abstract logic and analogies. I ran a lot of different tests for logical reasoni…
-
I have been working with `HyperTuning` from `recbole.trainer`.
In the provided documentation, I cannot find what the parameters `early_stop` and `max_evals` are, or how they relate to the pa…
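In hyperopt-style tuners these two parameters usually play distinct roles: `max_evals` caps the total number of trials, while `early_stop` is a patience counter that aborts the search after that many consecutive trials without improvement. Whether recbole follows exactly these semantics is an assumption; the loop below is only an illustrative sketch, not recbole code:

```python
def tune(objective, candidates, max_evals=100, early_stop=10):
    """Illustrative search loop: stops after max_evals trials total,
    or after early_stop consecutive trials with no improvement."""
    best_score, best_params = float("inf"), None
    stale = 0  # trials since the last improvement
    for n_evals, params in enumerate(candidates, start=1):
        if n_evals > max_evals:
            break  # hard budget on total trials
        score = objective(params)
        if score < best_score:
            best_score, best_params = score, params
            stale = 0
        else:
            stale += 1
            if stale >= early_stop:  # patience exhausted
                break
    return best_params, best_score

# Toy objective: minimize (x - 3)^2 over integer candidates.
params, score = tune(lambda x: (x - 3) ** 2, range(20),
                     max_evals=15, early_stop=5)
print(params, score)  # 3 0
```

Under these (assumed) semantics, the search above halts after five non-improving trials even though the `max_evals` budget of 15 is not yet spent.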
-
It is `"predictive_accuracy"` in
```
evals
```
-
There is probably a straightforward method to do this, but I'm looking through the documentation and not finding the answer: how would I go about limiting the maximum number of iterations or fitn…
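Since the snippet does not name the library, one generic pattern (independent of any particular framework) is to wrap the fitness function in a counter that enforces the evaluation budget; everything below is an illustrative sketch:

```python
class BudgetExhausted(Exception):
    """Raised once the fitness-evaluation budget is spent."""

def with_eval_budget(fitness, max_evals):
    """Wrap a fitness function so it raises after max_evals calls."""
    state = {"calls": 0}
    def wrapped(individual):
        if state["calls"] >= max_evals:
            raise BudgetExhausted
        state["calls"] += 1
        return fitness(individual)
    return wrapped, state

# Toy use: an optimizer loop that halts when the budget runs out.
fitness, state = with_eval_budget(lambda x: -(x - 5) ** 2, max_evals=8)
best = None
try:
    for x in range(100):
        score = fitness(x)
        if best is None or score > best[1]:
            best = (x, score)
except BudgetExhausted:
    pass
print(best, state["calls"])  # (5, 0) 8
```

The same wrapper works with any optimizer that calls the fitness function directly, because the cap is enforced at the point of evaluation rather than inside the optimizer's own loop.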
-
### Because
the goal is to improve the automation of future evaluation tasks (as opposed to updating current evaluations to become automatic), brainstorming what components could be used to incr…
-
I love this project, and very much appreciate the effort that went into it. Kudos!
I'm not sure whether this is intended for actual use or just a simple teaching exercise, but I thought I'd mentio…
-
### What happened + What you expected to happen
1.) The Bug:
I'm attempting to use Ray to scale up a parameter sweep which involves solving an eigenvalue problem over a large parameter space. Th…
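As a hedged illustration of the pattern (not the reporter's actual code, and using the standard library instead of Ray so it stays self-contained): each point in the parameter space is solved independently and the results gathered, which is exactly the fan-out/gather shape Ray's `@ray.remote` tasks parallelize.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def eigvals_2x2_sym(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]],
    from the closed-form quadratic solution."""
    mean = (a + c) / 2.0
    radius = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return (mean - radius, mean + radius)

def solve_point(t):
    # One sweep point: here the matrix depends on the parameter t
    # (an arbitrary toy family, not the reporter's problem).
    return eigvals_2x2_sym(t, 1.0, -t)

if __name__ == "__main__":
    params = [i / 10.0 for i in range(5)]
    # With Ray this map would be solve_point.remote(t) plus ray.get(...);
    # a process pool shows the same structure.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(solve_point, params))
    print(results[0])  # (-1.0, 1.0) for t = 0
```

For large sweeps the per-task cost must outweigh serialization overhead; with Ray that usually means batching many parameter points into each remote task rather than submitting one task per point.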
-
### Prerequisites
Please put an X between the brackets as you perform the following steps:
* [X] Check that your issue is not already filed:
https://github.com/leanprover/lean4/issues
* …