-
Hi Team,
Great project! We're looking into using it for a bunch of our predictors. Unfortunately, it takes a really long time to train (multiple hours, compared to 5-10 minutes for LightGBM).
A…
-
All our current quantizers rely on a very heavy cache to produce acceptable quality:
https://github.com/SixLabors/ImageSharp/blob/b06cb32b7114961fd5473f7645d38f8fee04ec64/src/ImageSharp/Proc…
-
# Overview
v0.7 brings many major features. The community worked together to refactor the internal code base toward a unified IR structure, with a unified IRModule, type system, and pass infr…
-
## 🐛 Bug
There is a linking error when linking `libtorch_cuda.dylib`.
## To Reproduce
Steps to reproduce the behavior:
1. git clone https://github.com/pytorch/pytorch
1. cd pytorch
1. gi…
-
**System information**
- OS Platform and Distribution: Linux Ubuntu 18.04.3
- TensorFlow installed from source: pip3 install --upgrade tensorflow==1.15
- TensorFlow version: 1.15
**Provide th…
-
How can I get the compressed model and find the compression ratio, which is an important concern in deep compression?
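One simple way to estimate a compression ratio (a sketch, not this project's API) is to compare the serialized sizes of the original and compressed checkpoints on disk. The file names and the synthetic "weight blob" below are hypothetical stand-ins for real model files:

```python
import gzip
import os
import struct
import tempfile

def compression_ratio(original_path, compressed_path):
    """Size ratio original/compressed; > 1 means the compressed model is smaller."""
    return os.path.getsize(original_path) / os.path.getsize(compressed_path)

# Demo with a synthetic weight blob: mostly zeros, as a pruned model would be.
weights = struct.pack("1024f", *([0.0] * 1000 + [1.5] * 24))

tmp = tempfile.mkdtemp()
orig = os.path.join(tmp, "model.bin")      # hypothetical uncompressed checkpoint
comp = os.path.join(tmp, "model.bin.gz")   # hypothetical compressed checkpoint
with open(orig, "wb") as f:
    f.write(weights)
with gzip.open(comp, "wb") as f:
    f.write(weights)

ratio = compression_ratio(orig, comp)
print(f"compression ratio: {ratio:.1f}x")
```

For a real deep-compression pipeline you would point the same size comparison at the exported model files rather than a gzipped blob.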
-
Hello,
I am applying the **FastGRNN** model on the **Google-12 dataset** and I am getting the same accuracy as mentioned in the paper for floating-point values **with sparsity and low-rank matrices**. I have al…
-
Hello, how does the quantized model (int8) compare with the original model (fp32) in terms of inference speed? Thank you!
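One way to answer this empirically is a small timing harness run against both models. The sketch below assumes you can call each model as a plain function; `fp32_infer` and `int8_infer` are hypothetical stand-ins, to be replaced with the real inference calls:

```python
import statistics
import time

def bench(fn, *args, warmup=3, runs=20):
    """Median wall-clock latency of fn(*args) in seconds, after a warmup."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Hypothetical stand-ins for the two models; swap in the real forward passes.
def fp32_infer(x):
    return sum(v * 0.5 for v in x)

def int8_infer(x):
    return sum(x) >> 1

x = list(range(10_000))
speedup = bench(fp32_infer, x) / bench(int8_infer, x)
print(f"int8 speedup over fp32: {speedup:.2f}x")
```

The median over several runs is used rather than a single measurement, since inference latency is noisy; any real speedup also depends on whether the backend has fast int8 kernels for the target hardware.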
-
Hello,
First I wanted to say: kudos for creating this library; I'm really excited to try it out on different models!
I saw in the readme:
> QKeras extends QNN by providing a richer set of lay…
-
Hi,
Let's start a discussion here about the roadmap towards 0.10 and 1.0. We are looking for:
- New features that are useful to your research
- Improvements and patches to existing features
If…