Closed solomonmanuelraj closed 6 months ago
Hi, the tool doesn't support this for now.
I think you can try the ONNX quantization tool, which supports both static and dynamic quantization.
Okay. Thanks for your update.
Hi Team,
I'd like to do 8-bit quantization of OWL-ViT and export it in ONNX format. Are vision foundation models supported by your tool?
thanks