-
Command line:
./converter_lite --fmk=TFLITE --modelFile=0918_mv2_0.35_FP32.tflite --outputFile=/home/vinbert/data/mindspore-lite-1.3.0-linux-x64/tools/converter/converter/mv2_int8.ms
--configFile=/home/vinbe…
-
So I'm creating a ComfyUI wrapper for Lumina-mGPT, and I've got it generating output similar to the Gradio demo, almost identically. The Gradio demo produces a single RGB (colour) image, but my ComfyUI wrapp…
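A common source of this kind of mismatch is the tensor layout ComfyUI expects: an IMAGE is a float tensor of shape [B, H, W, C] with values in [0, 1]. The sketch below (using NumPy to illustrate the layout; a real node would return a torch tensor) is a minimal, assumed conversion helper, not the wrapper's actual code:

```python
import numpy as np

def to_comfy_image(arr: np.ndarray) -> np.ndarray:
    """Convert an image array to ComfyUI's IMAGE layout:
    float32, shape [B, H, W, C], values in [0, 1].
    Accepts HxW (grayscale), HxWx1, HxWx3, or CxHxW inputs.
    """
    a = arr.astype(np.float32)
    if a.max() > 1.0:                      # assume 0-255 input range
        a = a / 255.0
    if a.ndim == 2:                        # H x W -> H x W x 1
        a = a[..., None]
    if a.shape[0] in (1, 3) and a.shape[-1] not in (1, 3):
        a = np.transpose(a, (1, 2, 0))     # C x H x W -> H x W x C
    if a.shape[-1] == 1:                   # replicate grayscale to 3 channels
        a = np.repeat(a, 3, axis=-1)
    return a[None, ...]                    # add batch dimension
```

If the wrapper accidentally returns a channel-first or single-channel array, ComfyUI's preview will misinterpret it, which can explain a colour image degrading into something else.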
-
After successfully quantizing ResNet18 and exporting ONNX models in two different modes, `int8` and `fp8`, I am trying to convert these ONNX models to TensorRT, but with no luck so far. It returns Error No sup…
-
**Describe the bug**
I used Docker to run the onnxruntime transformers optimizer and encountered this error, but I can run it successfully on my local Ubuntu machine. Could you give any suggestions?
![image](htt…
-
This project focuses on image cartoonification using OpenCV and image-processing techniques. The process involves reading an input image, identifying its edges, applying a median blur for smoothness, an…
-
Hi,
I followed the Windows installation guide and tried both the latest Python 3.12 and Python 3.9.12 (per the guide's recommendation that Python be 3.9).
When I attempt to run the v2 example from…
-
When rapidly ingesting with quantization turned on, the full-precision vectors appear to be loaded into the cache, so the cluster uses significantly more memory than one would expect.
## Current Behavior
…
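A back-of-envelope estimate shows why caching the full-precision vectors dwarfs the quantized footprint. The corpus size and dimensionality below are illustrative assumptions, not figures from the report:

```python
# Illustrative numbers: 1M vectors at 768 dimensions.
num_vectors = 1_000_000
dims = 768

full_bytes = num_vectors * dims * 4   # float32: 4 bytes per dimension
int8_bytes = num_vectors * dims * 1   # scalar-quantized: 1 byte per dimension

print(f"full precision: {full_bytes / 2**30:.2f} GiB")
print(f"int8 quantized: {int8_bytes / 2**30:.2f} GiB")
# If the cache holds full vectors *in addition to* the quantized ones,
# resident memory is ~5x what the quantized estimate alone predicts.
```

This matches the symptom: with quantization on, one sizes the cache for the 1-byte representation, so pulling 4-byte vectors into it blows well past the expected budget.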
-
Hello Dusty
Here's the thing: we have recently been working on Jetson Orin projects. When I decided to copy my environment to another Orin, I wondered whether we can load a local model such as VILA or L…
-
### **Initial action plans**
Copying these things from the wav2vec2 repo for safe housekeeping.
* An immediate quantization step could be to convert the fine-tuned model using the TFLite APIs. [Post-trainin…
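The TFLite conversion mentioned above can be sketched as post-training dynamic-range quantization. The tiny Keras model below stands in for the fine-tuned wav2vec2 model just to keep the example self-contained; in practice one would convert the exported SavedModel instead:

```python
import tensorflow as tf

# Stand-in model; a real run would use
#   tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT enables dynamic-range quantization (weights to int8,
# activations kept float), the simplest post-training option.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

print(f"quantized model size: {len(tflite_model)} bytes")
```

Full-integer quantization would additionally require a representative dataset for activation calibration.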
-
I downloaded the 1B model from Hugging Face and encountered an error while running it. The following is the configuration process; I am puzzled as to why it needs to bind to the address [::ffff:0.0…
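The `::ffff:` prefix usually just means an IPv4-mapped IPv6 address: a dual-stack server socket bound to `::` reports IPv4 addresses in this form, so it is still listening on the ordinary IPv4 address. The standard-library `ipaddress` module can unwrap it:

```python
import ipaddress

# "::ffff:a.b.c.d" is an IPv4-mapped IPv6 address. A server bound to the
# IPv6 wildcard "::" with dual-stack enabled shows IPv4 endpoints this way.
addr = ipaddress.ip_address("::ffff:0.0.0.0")
print(addr.ipv4_mapped)  # the plain IPv4 address embedded in it
```

So a log line mentioning `[::ffff:0.0.0.0]` is typically harmless; it does not by itself indicate a misconfiguration.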