-
### Describe the issue
Inference results are abnormal when using YOLOv7 models with the TensorRT EP.
We have confirmed that the results are normal when using the CPU and CUDA EPs.
The issue wa…
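For context, a minimal sketch of how the same input can be fed through the TensorRT, CUDA, and CPU execution providers in ONNX Runtime to compare their outputs (the model path and input shape are illustrative placeholders, not our exact setup):

```python
import numpy as np
import onnxruntime as ort

model_path = "yolov7.onnx"                      # placeholder path
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = {}
for providers in (
    ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
    ["CPUExecutionProvider"],
):
    sess = ort.InferenceSession(model_path, providers=providers)
    input_name = sess.get_inputs()[0].name
    outputs[providers[0]] = sess.run(None, {input_name: dummy})[0]

# As reported above, CUDA and CPU agree while TensorRT diverges.
for name, out in outputs.items():
    print(name, np.abs(out - outputs["CPUExecutionProvider"]).max())
```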
-
1. Clean up the current implementation.
2. Better algorithms for creating Junction Trees.
3. Incorporate the Factor Graph BP into the main algorithm (see the sum-product sketch below).
Ref #1740
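As a rough illustration of item 3, here is a minimal sum-product pass on a three-variable chain factor graph. The potentials are random placeholders and the code is only a hypothetical outline, not the project's implementation:

```python
import numpy as np

# Three variables x1 - x2 - x3 with unary potentials phi_i and pairwise
# potentials psi12, psi23; each variable takes K states.
rng = np.random.default_rng(0)
K = 2
phi = [rng.random(K) for _ in range(3)]
psi12 = rng.random((K, K))
psi23 = rng.random((K, K))

# Sum-product messages into x2 from both neighbours.
m1_to_2 = psi12.T @ phi[0]   # sum_{x1} phi1(x1) * psi12(x1, x2)
m3_to_2 = psi23 @ phi[2]     # sum_{x3} psi23(x2, x3) * phi3(x3)

# Marginal of x2 = unary potential times incoming messages, normalized.
p2 = phi[1] * m1_to_2 * m3_to_2
p2 /= p2.sum()

# Sanity check against brute-force enumeration of the joint.
joint = np.einsum('a,b,c,ab,bc->abc', phi[0], phi[1], phi[2], psi12, psi23)
assert np.allclose(p2, joint.sum(axis=(0, 2)) / joint.sum())
print("marginal of x2:", p2)
```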
-
Implement mapping to and from the Dataset download JSON.
**Requirements**
* default dataset download JSON mapping
* custom dataset metadata download JSON mapping
* import from Dataset download JSO…
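As a rough illustration of the requested round-trip, a minimal sketch using a purely hypothetical schema (the field names below are placeholders, since the actual download JSON format isn't shown in this excerpt):

```python
from dataclasses import dataclass, field, asdict
from typing import Any
import json

@dataclass
class DatasetDownload:
    # Hypothetical fields; the real schema would drive the mapping.
    name: str
    url: str
    metadata: dict[str, Any] = field(default_factory=dict)

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "DatasetDownload":
        data = json.loads(payload)
        return cls(name=data["name"], url=data["url"], metadata=data.get("metadata", {}))

# Round-trip check
ds = DatasetDownload(name="example", url="https://example.org/data.zip", metadata={"license": "CC-BY"})
assert DatasetDownload.from_json(ds.to_json()) == ds
```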
-
# 🐞 bug report
### Affected Version(s)
Observed in 4.9.1
### To Reproduce
Steps to reproduce the behavior:
1. Go to 3D printing and import a model (observed with both STL and OBJ models)
2. …
-
### Search before asking
- [X] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
Add support for Ga…
-
I'm really impressed with the speed and usability of NEBULA, great work! My data set has barcoded cells that are unique to each classified group; as such, I'd like to be able to incorporate clonal bar…
-
### What happened + What you expected to happen
Unable to use in-sample prediction, which is blocking me from completing my next tasks.
```
test.py 75
Y_hat_insample = nf.predict_insample(step_size=12)
core.py 1213 predict_insample
fcsts[:, col_idx : (col…
-
Hi, is there a way to change the upscaling scale factor? It's at x4 by default, but I'm interested in x6, x8, x2, etc.
I am also training, so I'd be interested in training TecoGAN models that are of …
-
## ❓ Questions and Help
How do you run models that are offloaded to the CPU? I'm trying to work with `enable_sequential_cpu_offload` or `enable_model_cpu_offload`, when running `torch_xla.sy…
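For context, a minimal sketch of how these offload helpers are normally wired up in a diffusers pipeline on a CUDA machine (the model id is illustrative); how they behave under torch_xla is exactly the open question here:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",            # illustrative model id
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()                   # or pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("out.png")
```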
-
I fine-tuned the GLiNER small v2.1 model and created an ONNX version of the same model using the convert_to_onnx.ipynb example code.
When I compared the inference time of both models, the ONNX version took 50…
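For reference, a minimal, hypothetical timing harness of the kind used for this comparison (texts, labels, and the model path are placeholders; the ONNX variant would be timed the same way once loaded through the convert_to_onnx.ipynb setup, whose loader arguments aren't shown here):

```python
import time
from gliner import GLiNER

def avg_latency(predict_fn, texts, labels, warmup=3, runs=10):
    """Rough wall-clock average per prediction: warm up first, then time repeated runs."""
    for _ in range(warmup):
        predict_fn(texts[0], labels)
    start = time.perf_counter()
    for _ in range(runs):
        for text in texts:
            predict_fn(text, labels)
    return (time.perf_counter() - start) / (runs * len(texts))

# Placeholder data and model path.
texts = ["Barack Obama visited Paris in 2015."]
labels = ["person", "location", "date"]
pt_model = GLiNER.from_pretrained("path/to/finetuned-gliner-small-v2.1")
print("pytorch s/prediction:", avg_latency(pt_model.predict_entities, texts, labels))
```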