-
We aim to implement a system that leverages distillation and quantization to create a "child" neural network by combining parameters from two "parent" neural networks. The child network should inherit…
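As a rough illustration of the stated goal, here is a minimal PyTorch sketch assuming the two parents share an architecture: the child is initialized by averaging the parents' parameters, trained with a distillation loss against the parents' averaged soft targets, and then dynamically quantized. All names (`ParentMLP`, `make_child`, `distill_step`) and hyperparameters are placeholders, not the project's actual design.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative parent architecture (assumed identical for both parents).
class ParentMLP(nn.Module):
    def __init__(self, d_in=32, d_hidden=64, d_out=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_out)
        )

    def forward(self, x):
        return self.net(x)

def make_child(parent_a, parent_b, alpha=0.5):
    """Initialize the child by interpolating the two parents' parameters."""
    child = copy.deepcopy(parent_a)
    sd_a, sd_b = parent_a.state_dict(), parent_b.state_dict()
    child.load_state_dict({k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a})
    return child

def distill_step(child, parents, x, optimizer, T=2.0):
    """One distillation step: the child mimics the parents' averaged soft targets."""
    with torch.no_grad():
        teacher_logits = torch.stack([p(x) for p in parents]).mean(dim=0)
    loss = F.kl_div(
        F.log_softmax(child(x) / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

parent_a, parent_b = ParentMLP(), ParentMLP()
child = make_child(parent_a, parent_b)
optimizer = torch.optim.Adam(child.parameters(), lr=1e-3)
for _ in range(10):
    distill_step(child, [parent_a, parent_b], torch.randn(16, 32), optimizer)

# Post-training dynamic quantization of the distilled child.
child_int8 = torch.quantization.quantize_dynamic(child, {nn.Linear}, dtype=torch.qint8)
```

Parameter averaging is only one possible merge; a different per-layer combination rule would slot into `make_child` the same way without changing the distillation loop.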
-
Hello! Thank you for publishing Dhara.
I have been playing with it and wanted to share some performance numbers. I am using a NAND Flash chip on an embedded system. When I use my raw nand driver, I…
-
Perform hyperparameter tuning and benchmark the performance:
- We use grid or random search to perform the tuning, as sketched below. (Note that gradient descent is more efficient but suboptimal; you should ask the inst…
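A minimal sketch of the random-search variant using scikit-learn; the estimator (`RandomForestClassifier`), the parameter ranges, and the accuracy scoring are placeholders rather than the actual benchmark setup. `GridSearchCV` is the exhaustive counterpart with the same interface.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Placeholder data and search space; swap in the task and ranges being benchmarked.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,            # number of sampled configurations
    cv=5,                 # 5-fold cross-validation as the benchmark metric
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```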
-
### Feature request
This request aims to introduce functionality to delete specific adapter layers integrated with PEFT (Parameter-Efficient Fine-Tuning) within the Hugging Face Transformers librar…
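A hypothetical usage sketch of the requested behavior, assuming the existing `add_adapter`/`set_adapter` PEFT integration in Transformers; the `delete_adapter` call is the proposed functionality (method name assumed here), and the model and adapter names are only illustrative.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Attach two LoRA adapters through the Transformers PEFT integration.
model.add_adapter(LoraConfig(r=8), adapter_name="adapter_a")
model.add_adapter(LoraConfig(r=8), adapter_name="adapter_b")
model.set_adapter("adapter_a")

# Proposed functionality from this feature request (hypothetical method name):
# remove one specific adapter's layers/weights without touching the others.
model.delete_adapter("adapter_b")
```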
-
Hello,
I tried to run a fast tuning of GEMM with float16:
```python
from bitblas.base.roller.policy import TensorCorePolicy, DefaultPolicy
from bitblas.base.arch import CUDA
from bitblas.base.uti…
```
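For reference, a minimal float16 GEMM sketch through BitBLAS's higher-level `MatmulConfig`/`Matmul` interface; the shapes are arbitrary, and the `hardware_aware_finetune(topk=...)` call is my assumption about the fast-tuning entry point, so adjust it to whatever your BitBLAS version exposes.

```python
import bitblas
import torch

# Assumed high-level interface: a float16 x float16 -> float16 GEMM.
config = bitblas.MatmulConfig(
    M=1024, N=1024, K=1024,   # arbitrary shapes for illustration
    A_dtype="float16",
    W_dtype="float16",
    accum_dtype="float16",
    out_dtype="float16",
    layout="nt",
)
matmul = bitblas.Matmul(config=config)

# Fast / hardware-aware tuning entry point (assumption; version-dependent).
matmul.hardware_aware_finetune(topk=20)

a = torch.rand((1024, 1024), dtype=torch.float16).cuda()
b = torch.rand((1024, 1024), dtype=torch.float16).cuda()
c = matmul(a, b)
```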
-
I apologize in advance if this isn't appropriate content for a GitHub issue.
I'm interested in DoqueDB's performance. To understand this better, I'd first like to grasp the key characteristics o…
-
As somebody who has to tune xcaches for optimal performance, I would very much appreciate it if every xrootd release came with a set of curves showing throughput rate vs. block size for both xrdcp a…
-
Hi author, do you have any recommended configuration for performance? In the same environment, the download speed easily reaches 20 MB/s. But with kcptun-libev, no matter how the parameters are tune…
-
We want to be able to ship a library of default tuning specs with IREE, so that users can get good performance out of the box on known key operations. This is applied after dispatch formation and real…
-
I noticed that the method in the paper relies heavily on hyperparameter tuning. However, since the target domain lacks labels, tuning ultimately relies on validation set performance for optimal result…