-
Hello,
I have been looking for a Julia package to perform amortized Bayesian inference on simulation-based models. In many areas of science, there are models with unknown/intractable likelihood fu…
-
### Feature request
We would like to incorporate UGround for identifying regions of interest:
https://x.com/ysu_nlp/status/1844186560901808328
Inference code here: https://github.com/boyugou/ll…
-
Your paper states that you trained on 8 V100s with a batch size of 16. Does this mean that each GPU had a batch of 2? I am trying to get a gauge of how much VRAM this model uses for someone with less …
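For what it's worth, under plain data parallelism the per-GPU batch is just the global batch divided by the number of GPUs, so 16 across 8 V100s would indeed mean 2 samples per device (unless the paper used gradient accumulation, which this sketch does not assume):

```python
# Sketch of the data-parallel batch arithmetic implied by the question.
global_batch = 16   # batch size reported in the paper
num_gpus = 8        # number of V100s
per_gpu_batch = global_batch // num_gpus
print(per_gpu_batch)  # → 2
```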
-
## Description
While reading and testing the LightGBM source code with a binary classifier, I observed that GPU training performance is notably lower than th…
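One thing worth checking before comparing devices: LightGBM's documentation recommends different histogram settings for GPU training. A minimal, illustrative parameter sketch (the values below are generic suggestions, not taken from the report above):

```python
# Hypothetical parameter sets for a CPU vs GPU LightGBM comparison.
# A GPU-enabled LightGBM build is required for device_type="gpu".
cpu_params = {
    "objective": "binary",
    "device_type": "cpu",
    "max_bin": 255,            # CPU default histogram resolution
}
gpu_params = {
    "objective": "binary",
    "device_type": "gpu",
    "gpu_platform_id": 0,      # OpenCL platform to use
    "gpu_device_id": 0,        # device within that platform
    "max_bin": 63,             # smaller bin count is recommended for GPU speed
    "gpu_use_dp": False,       # single precision is faster on most GPUs
}
```

Comparing with the CPU defaults left in place (e.g. `max_bin=255`) can by itself make the GPU look slower than it needs to be.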
-
The [Pathfinder algorithm](https://arxiv.org/abs/2108.03782) is an approximate inference algorithm supported by Stan.
Pathfinder can be used as an inference algorithm in its own right, but the pri…
-
### Your question
## Environment
- ComfyUI: Latest version (installed via official website instructions)
- GPU: NVIDIA RTX 4080 (12GB VRAM)
- RAM: 32GB
- CUDA: 12.1
- PyTorch: 2.4.0 (issue occ…
-
Great job! I'm now able to achieve promising results. However, the code requires approximately 46GB of GPU memory when running inference on 50 frames. Could you share a script for processing long vide…
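In case it helps while waiting for an official script: a common way to bound peak GPU memory on long videos is to run inference over overlapping fixed-size chunks and stitch the outputs. A minimal sketch, where `run_inference` is a hypothetical stand-in for this model's per-clip call:

```python
# Sketch: chunked inference over a long frame sequence to cap peak memory.
# `run_inference` is a placeholder for the actual model call, not this repo's API.
def chunked_inference(frames, run_inference, chunk_size=10, overlap=2):
    """Process `frames` in overlapping windows and concatenate the results,
    dropping each window's overlapping prefix after the first chunk."""
    results = []
    step = chunk_size - overlap
    for start in range(0, len(frames), step):
        chunk = frames[start:start + chunk_size]
        out = run_inference(chunk)
        results.extend(out if start == 0 else out[overlap:])
        if start + chunk_size >= len(frames):
            break
    return results

# Example with an identity "model": 50 frames in windows of 10 with overlap 2.
processed = chunked_inference(list(range(50)), run_inference=lambda c: c)
print(len(processed))  # → 50
```

The overlap exists only so temporally-dependent models see some context at each window boundary; for a model without temporal state it can be set to 0.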
-
Hello,
I recently discovered that the MNIST inference benchmark performance for [Tsetlin.jl](https://github.com/BooBSD/Tsetlin.jl) degrades by approximately 15-16% on Julia 1.11 RC3 compared to ver…
-
Congratulations, and thanks for your work. Great job!
I would like to inquire about the possibility of optimizing the inference speed and GPU memory usage of the model. Based on my testing, inference on tw…
-
**Description:**
I am experiencing a significant performance drop when running the mobilenet-ssd-v2 model with a detectnet ROS node compared to standalone execution. The FPS drops by approximately tw…