-
### Description
Add support for ONNX subgraphs within the Burn framework, particularly for handling conditional inference computations. This feature aims to enable Burn to fully…
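For context, an ONNX `If` node carries two subgraphs (`then_branch` and `else_branch`) and executes exactly one of them based on a boolean input at inference time. A minimal, framework-agnostic sketch of that dispatch follows; the dict-based graph representation is purely illustrative and is not Burn's (or ONNX Runtime's) actual API:

```python
# Illustrative sketch of ONNX "If" semantics: exactly one of two
# subgraphs runs per inference, chosen by a boolean condition.
# The dict-based graph format here is invented for illustration.

def run_subgraph(subgraph, inputs):
    """Evaluate a toy subgraph: a list of (op, input_names, output_name) steps."""
    env = dict(inputs)
    ops = {
        "Add": lambda a, b: a + b,
        "Mul": lambda a, b: a * b,
    }
    for op, in_names, out_name in subgraph["nodes"]:
        env[out_name] = ops[op](*(env[n] for n in in_names))
    return env[subgraph["output"]]

def run_if_node(cond, then_branch, else_branch, inputs):
    """ONNX If semantics: run exactly one branch based on `cond`."""
    branch = then_branch if cond else else_branch
    return run_subgraph(branch, inputs)

then_g = {"nodes": [("Add", ("x", "y"), "z")], "output": "z"}
else_g = {"nodes": [("Mul", ("x", "y"), "z")], "output": "z"}

print(run_if_node(True, then_g, else_g, {"x": 2.0, "y": 3.0}))   # 5.0
print(run_if_node(False, then_g, else_g, {"x": 2.0, "y": 3.0}))  # 6.0
```

The key point for a framework implementing this is that both branch subgraphs must be importable and compilable, but only the selected one executes per call.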
-
### Search before asking
- [X] I have searched the [issues](https://github.com/ray-project/kuberay/issues) and found no similar feature requirement.
/cc Bytedancer @Basasuya @Yicheng-Lu-llll
…
-
- [x] (Michele Andrea) jump optimization with crocoddyl for comparison
- [x] (Riccardo) RL with torques; give a low reward based on target distance even if TD is not achieved, to encourage moving toward tar…
-
Chinese (translated):
- [ ] De-emphasize the concept of model_type; support relying only on automatically detected model_type (config.json).
- [ ] Have the template module and the dataset module embrace the messages dataset format.
- [ ] Remove the concept of generation-template; use a use_generate_template parameter to control fetching the template needed for the base model, in order to support …
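The first item (auto-detecting model_type from config.json) could be sketched roughly as below; the function name and error handling are hypothetical, not the project's actual API, and the `model_type` field follows the Hugging Face config.json convention:

```python
# Hypothetical sketch of auto-detecting model_type from a checkpoint's
# config.json. Function name and behavior are illustrative only.
import json
from pathlib import Path

def detect_model_type(checkpoint_dir: str) -> str:
    config = json.loads(Path(checkpoint_dir, "config.json").read_text())
    # Hugging Face-style configs record the architecture family here.
    model_type = config.get("model_type")
    if model_type is None:
        raise ValueError("config.json has no 'model_type' field")
    return model_type

# Usage with a temporary fake checkpoint directory:
import tempfile
with tempfile.TemporaryDirectory() as d:
    Path(d, "config.json").write_text(json.dumps({"model_type": "qwen2"}))
    print(detect_model_type(d))  # qwen2
```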
-
### Presentation of the new feature
Logits processors in outlines.processors support nearly every inference engine, offering a "write once, run anywhere" implementation of business logic.
Curren…
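The "write once, run anywhere" idea can be illustrated with a minimal processor: a single callable that rewrites next-token logits, reusable by any engine that exposes a per-step logits hook. This is a sketch of the pattern only, not outlines.processors' real class hierarchy:

```python
# Illustrative logits processor: masks every token not in an allow-list
# by setting its logit to -inf. Any engine that calls
# processor(input_ids, logits) each decoding step can reuse it.
# This sketches the pattern, not outlines.processors' actual API.
import math

class AllowListLogitsProcessor:
    def __init__(self, allowed_token_ids):
        self.allowed = set(allowed_token_ids)

    def __call__(self, input_ids, logits):
        # logits: list[float] over the vocabulary for the next token.
        return [
            logit if tok_id in self.allowed else -math.inf
            for tok_id, logit in enumerate(logits)
        ]

proc = AllowListLogitsProcessor({1, 3})
print(proc([0], [0.5, 1.0, 2.0, 0.1]))  # [-inf, 1.0, -inf, 0.1]
```

Because the processor only depends on the `(input_ids, logits) -> logits` contract, the same business logic plugs into any backend that honors it.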
-
I've been fine-tuning my transcription model for some time now.
The workflow is as follows: I take the data -> split it manually into training and validation sets -> train my model -> run it on …
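The manual split step in a workflow like this is usually worth replacing with a seeded, reproducible one, so every training run sees the same partition. A sketch (the ratio and seed are illustrative choices):

```python
# Sketch of a reproducible train/validation split to replace a manual
# one. The ratio and seed values are illustrative defaults.
import random

def split_dataset(samples, val_ratio=0.2, seed=42):
    rng = random.Random(seed)          # fixed seed -> identical split every run
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    n_val = int(len(samples) * val_ratio)
    val_idx = set(indices[:n_val])
    train = [s for i, s in enumerate(samples) if i not in val_idx]
    val = [s for i, s in enumerate(samples) if i in val_idx]
    return train, val

train, val = split_dataset(list(range(100)))
print(len(train), len(val))  # 80 20
```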
-
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I tried running the same…
-
(I have run into this 3 or 4 times over the last few months for different applications. I might still have some experiments from years ago.)
For QMLE, estimating equations, or GMM under mis- or incompletely speci…
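For reference, under misspecification the usual inverse-Hessian variance is invalid, and the standard remedy is the sandwich (robust) estimator V = A⁻¹ B A⁻¹ with A the average negative Hessian and B the average outer product of scores. A scalar-parameter sketch, assuming per-observation score and Hessian contributions are available:

```python
# Scalar-parameter sandwich (robust) variance sketch:
#   A_hat = -(1/n) * sum(hessian_i),  B_hat = (1/n) * sum(score_i ** 2)
#   Var(theta_hat) ~ B_hat / A_hat**2 / n
# Example: QMLE of a mean under a (possibly wrong) unit-variance
# Gaussian likelihood, where score_i = x_i - theta and hessian_i = -1.

def sandwich_variance(scores, hessians):
    n = len(scores)
    a_hat = -sum(hessians) / n
    b_hat = sum(s * s for s in scores) / n
    return b_hat / (a_hat * a_hat) / n

xs = [1.0, 2.0, 4.0, 5.0]
theta = sum(xs) / len(xs)              # 3.0, the QMLE of the mean
scores = [x - theta for x in xs]
hessians = [-1.0 for _ in xs]
print(sandwich_variance(scores, hessians))  # 0.625
```

When the model is correctly specified, A = B (the information equality) and the sandwich collapses to the usual inverse-information variance.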
-
In PR https://github.com/epinowcast/epidist/pull/69 we added a vignette fitting a simple model with four inference techniques. This includes Pathfinder, which I also presented about [here](https://ath…
-
Hi,
Nice code, and thanks for open-sourcing it. I noticed that the default value for ``inference_mode`` is False while ``return_sampled_latent`` is True, and I was wondering if it is switched to deterministic so…
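For reference, in VAE-style models "deterministic" usually means returning the latent mean μ instead of sampling μ + σ·ε. A hypothetical sketch of how such flags typically interact; the names mirror the question, not this repo's verified code:

```python
# Hypothetical sketch of common flag semantics: with sampling disabled
# the encoder returns the latent mean; with sampling enabled it returns
# mu + sigma * eps. Names mirror the question, not this repo's code.
import random

def encode(x, inference_mode=False, return_sampled_latent=True, rng=None):
    mu = sum(x) / len(x)               # toy "encoder": mean of the input
    sigma = 0.1
    if inference_mode or not return_sampled_latent:
        return mu                      # deterministic latent
    rng = rng or random.Random(0)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps            # sampled latent

x = [1.0, 2.0, 3.0]
print(encode(x, inference_mode=True))  # 2.0 (deterministic)
print(encode(x))                       # sampled value near 2.0
```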