-
- **Is your feature request related to a problem? Please describe:**
The current facial expression recommendation system uses MobileNet, and there is a need to evaluate a custom CNN model built from s…
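For comparison against MobileNet, a small from-scratch CNN could look like the sketch below. The input size (48×48 grayscale) and the 7 output classes are assumptions, not details from this request; adjust both to the actual dataset.

```python
import torch
import torch.nn as nn

class SmallExpressionCNN(nn.Module):
    """Minimal from-scratch CNN; 48x48 grayscale in, 7 classes out
    (both sizes are assumptions -- change them to match the dataset)."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Linear(128 * 6 * 6, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallExpressionCNN()
logits = model(torch.randn(2, 1, 48, 48))  # batch of 2 dummy images
print(logits.shape)  # torch.Size([2, 7])
```

Evaluating this head-to-head with MobileNet on the same split would show whether the custom model is worth the extra maintenance.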
-
The original 23.8 GB flux1-dev model runs at around the same speed as the 6.8 GB Q4_0 quant, which should fit completely into my 12 GB of VRAM.
This is my workflow:
[workflow.json](https://github.co…
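As a sanity check, the two file sizes are consistent with the same weights in two formats. Assuming roughly 12B parameters for flux1-dev (an assumption, but in line with its published size), fp16 storage and GGUF Q4_0's block layout (blocks of 32 weights: 16 bytes of 4-bit values plus a 2-byte fp16 scale) give:

```python
# Back-of-the-envelope size check. Assumption: ~12e9 parameters.
params = 12e9

fp16_bytes = params * 2          # fp16/bf16: 2 bytes per weight
q4_0_bytes = params / 32 * 18    # Q4_0: 18 bytes per block of 32 weights

print(f"fp16 : {fp16_bytes / 1e9:.1f} GB")  # ~24.0 GB, close to the 23.8 GB file
print(f"Q4_0 : {q4_0_bytes / 1e9:.2f} GB")  # ~6.75 GB, close to the 6.8 GB quant
```

So the quant really should fit in 12 GB; if it still runs at full-model speed, the bottleneck is likely dequantization overhead or unintended offloading rather than weight size.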
-
Hi, first of all, thank you for sharing the code and resources with the community! I’ve been experimenting with the four pretrained models provided in the repository to extract depth maps. While testi…
-
Executive summary:
The model is implemented on n150 with the new conv-api.
Dimension: 1, 10, 2047, 255
Utilization:
| Op | Utilization |
| --- | --- |
| Conv1 | 3% |
| Conv2 | 16% |
| Conv3 | 14% |
| Conv4 | 8% |
Conv…
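Utilization figures like these are typically achieved MACs/s divided by device peak. A sketch of the MAC count for one conv layer is below; the layer shape (3×3, 32 output channels), peak rate, and runtime are all illustrative placeholders, not measurements from this report.

```python
def conv2d_macs(n, c_in, h, w, c_out, k, stride=1, pad=0):
    """Multiply-accumulates for a 2-D convolution over an NCHW input."""
    h_out = (h + 2 * pad - k) // stride + 1
    w_out = (w + 2 * pad - k) // stride + 1
    return n * c_out * h_out * w_out * c_in * k * k

# Hypothetical Conv1 over the 1x10x2047x255 input above: 3x3, stride 1, pad 1.
macs = conv2d_macs(n=1, c_in=10, h=2047, w=255, c_out=32, k=3, pad=1)

# Utilization = achieved MACs/s over peak MACs/s (both numbers illustrative).
peak_macs_per_s = 10e12
runtime_s = 0.005
print(f"utilization: {macs / runtime_s / peak_macs_per_s:.1%}")
```

With real per-op runtimes and the device's actual peak, the same formula reproduces the percentages in the table.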
-
Executive summary:
The model is implemented on n150 with the new conv-api.
Dimension: 1, 3, 128, 128
**Utilization:**

| Op | Utilization |
| --- | --- |
| Conv1 | 0% |
Conv2 --…
-
### Describe the issue
I am trying to load XGBoost ONNX models with onnxruntime on a Windows machine.
The model file is 52 MB, yet it consumes 1378.9 MB of RAM when loaded. The time to load …
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
Thank you for your contributions to the NLP field. I would like to know more about the 1.4.2 model performance, such as the meaning of "Anaphors in 20%" and "Accuracy (%)", as well as how to align the format …
-
## Background
We are curious to know whether ontology score correlates with performance on downstream tasks.
We could evaluate performance on downstream tasks ourselves, but as a first approximation,…
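Once both scores exist per model, the check itself is a one-liner correlation. A stdlib-only sketch with made-up numbers (the data below is purely illustrative):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up numbers: one ontology score and one downstream accuracy per model.
ontology_scores = [0.62, 0.71, 0.55, 0.80, 0.68]
downstream_acc  = [0.58, 0.66, 0.52, 0.77, 0.63]
print(f"Pearson r = {pearson_r(ontology_scores, downstream_acc):.3f}")
```

A rank correlation (Spearman) may be more appropriate if the score scales are not comparable across models.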
-
When running test.py on a Celeb_iid image such as 000990.jpg, the result is very poor.
faca_hq works correctly.
![WechatIMG15](https://github.com/yinzhicun/MetaF2N/assets/60536525/c7b418de-71b1…