-
### OpenVINO Version
2023.3.0
### Operating System
Ubuntu 20.04 (LTS)
### Device used for inference
None
### OpenVINO installation
PyPI
### Programming Language
Python
##…
-
# accelerator
[Modeling Deep Learning Accelerator Enabled GPUs](https://deepai.org/publication/modeling-deep-learning-accelerator-enabled-gpus)
-
I have recently been quantizing a DETR model. On the simulator, FP16 inference gives a normal mAP on the COCO dataset, but INT8 inference gives an mAP of 0. Does anyone know why? This bug has been tormenting me for two weeks.
Although the mAP is 0, the inferred bboxes and logits are not all zero.
My understanding is that INT8 and FP16 inference share the same pre- and post-processing code, so the pre/post-processing cannot be the problem. The only place a problem could hide is the rknn config, but the only things that can be changed there are mean and std, …
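An mAP of 0 with non-zero outputs often points at a quantization-scale mismatch: the calibration data was normalized differently from the data fed at inference time, so the chosen int8 scale no longer matches the real activation range. The sketch below is a hedged, self-contained illustration in plain NumPy (it does not use the rknn toolkit API) of how a scale calibrated for the wrong range destroys the signal:

```python
import numpy as np

def quantize_int8(x, scale):
    # Symmetric int8 quantization: q = clip(round(x / scale), -128, 127)
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# Stand-in for an activation tensor (e.g. DETR logits); values are illustrative.
x = rng.normal(0.0, 1.0, size=1000).astype(np.float32)

# Scale calibrated on data with the same normalization as inference.
good_scale = np.abs(x).max() / 127.0
# Scale calibrated on data normalized differently (100x off), as can happen
# when the config's mean/std disagree with the preprocessing actually applied.
bad_scale = good_scale * 100.0

err_good = np.abs(dequantize(quantize_int8(x, good_scale), good_scale) - x).mean()
err_bad = np.abs(dequantize(quantize_int8(x, bad_scale), bad_scale) - x).mean()

# With the mismatched scale, most values round to 0 and the tensor collapses.
print(f"mean abs error, matched scale: {err_good:.4f}")
print(f"mean abs error, mismatched scale: {err_bad:.4f}")
```

If something like this is happening, the bboxes/logits can still be non-zero yet so distorted that no prediction matches a ground-truth box, which is consistent with mAP = 0.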
-
When training the MeshAutoencoder, I compared **ResidualLFQ** and **ResidualVQ**.
ResidualLFQ is your default option, and it reconstructs a reasonable structure.
However, when I use ResidualVQ (w…
-
I have two models, both MobileNetV1 for classification.
The first model was downloaded from Google: https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_qu…
-
In #4 it was suggested that we use `ndarray` and a parallelization library to speed up calculations. It makes sense to use an external crate for matrix features instead of rolling our own. I'm not su…
-
I merged a Mistral 8x7B model with the LoRA adapter and saved the .pt with `torch.save(model.state_dict(), 'path_to_model.pt')`.
However, when I use vLLM to run inference on the new merged model, I fai…
-
## Why
The Machine Learning reading group follows the latest techniques and papers, with the goal of raising the level of what engineers can "solve with technology".
prev. https://github.com/wantedly/machine-learning-round-table/issues/240
## What
If you have something you would like to talk about, leave a comment here…
-
Where can I find a list of all the operators currently supported by RK?
I am trying to port efficientVit-sam (an encoder-decoder architecture) to the RKNN platform. The officially trained torch model can be exported to an ONNX model; now I want to convert the ONNX model to an RKNN model, which raises questions such as whether all operators are supported. Below is the code for converting the encoder:
```
from __future__ import absolute_import, print…
-
It would be nice to carry around the WCS information from the original FITS files. This would allow you to reconstruct the WCS if/when you create individual images. This will be useful if/when we want…
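As a minimal sketch of what carrying the WCS would buy us, here is the simple linear case using only the CRPIX/CRVAL/CDELT keywords (the header values are hypothetical, and a real implementation would use `astropy.wcs.WCS`, which also handles rotation matrices and sky projections):

```python
def pixel_to_world(px, py, header):
    """Linear pixel->world transform from basic FITS WCS keywords.

    Ignores PC/CD rotation matrices and non-linear projections; FITS pixel
    coordinates are 1-indexed, with CRPIX giving the reference pixel.
    """
    ra = header["CRVAL1"] + (px - header["CRPIX1"]) * header["CDELT1"]
    dec = header["CRVAL2"] + (py - header["CRPIX2"]) * header["CDELT2"]
    return ra, dec

# Hypothetical header values for illustration only.
hdr = {"CRPIX1": 512.0, "CRVAL1": 150.0, "CDELT1": -0.0003,
       "CRPIX2": 512.0, "CRVAL2": 2.2, "CDELT2": 0.0003}

# The reference pixel maps back to the reference sky coordinate (CRVAL).
ra, dec = pixel_to_world(512.0, 512.0, hdr)
```

If each original header's WCS keywords travel with the data, the individual cut-out images could get a correctly shifted CRPIX rather than losing the sky coordinates entirely.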