-
Hi, thanks for the library! There seems to be another interesting paper: https://arxiv.org/abs/2407.05000. I wonder whether Unsloth will support it?
-
This figure seems a little too obvious to include, but I'm happy to be vetoed.
Also, while it may be true that "it is possible that the true but unknown population mean has the low-rank property but the e…
-
Hello,
1. The current implementation of matrix multiplication uses the BRGEMM algorithm. Is there any implementation of a "low-rank approximation" approach to matrix multiplication in oneDNN? Is there a…
-
https://github.com/r-three/phatgoose/blob/2c841d70724192401598ed9d2c94d088c9cd3377/src/models/custom_modules/lora/lora_linear.py#L8
This probably will be improved by using this one instead:
```
…
```
-
Hello, everyone!
Can I use low-rank approximation in Darknet, and how would I modify the forward-layer code to implement the following transformation?
![image](https://user-images.githubuse…
-
We have the code below in `analysis.py` for low rank approximation of the beta matrix. A couple points:
- This looks like it would violate our WT gauge condition, and it might take a little more thin…
-
Thanks for your great work!
In the paper, after the KV cache is quantized, a low-rank matrix is used to approximate the quantization error. I'd really like to know whether this process needs training. S…
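One training-free way to realize this idea (a sketch under my own assumptions, not necessarily the paper's procedure, and with a deliberately crude illustrative quantizer): quantize the matrix, then take a truncated SVD of the residual as the low-rank correction.

```python
import numpy as np

def quantize_int4(x):
    """Crude symmetric 4-bit quantizer with a per-tensor scale (illustrative only)."""
    scale = np.abs(x).max() / 7.0
    return np.round(x / scale).clip(-8, 7) * scale

def lowrank_error_correction(K, rank):
    """Training-free correction: quantize K, then approximate the residual
    K - Q(K) with a truncated SVD of the given rank. No gradient steps needed."""
    Kq = quantize_int4(K)
    U, s, Vt = np.linalg.svd(K - Kq, full_matrices=False)
    L = U[:, :rank] * s[:rank]          # left factor, shape (m, rank)
    R = Vt[:rank, :]                    # right factor, shape (rank, n)
    return Kq, L, R                     # reconstruct as Kq + L @ R

rng = np.random.default_rng(1)
K = rng.standard_normal((64, 128))
Kq, L, R = lowrank_error_correction(K, rank=16)
before = np.linalg.norm(K - Kq)
after = np.linalg.norm(K - (Kq + L @ R))
print(after < before)  # True: the SVD correction strictly reduces the error
```

Whether the paper instead calibrates or learns the factors is exactly the open question here; the SVD route above is the simplest training-free baseline.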
-
Here is an example:
```R
library(ashr)
dat
-
[Local collaborative autoencoders](https://sci-hub.ru/https://dl.acm.org/doi/abs/10.1145/3437963.3441808)
[Local latent space models for top-n recommendation](https://sci-hub.ru/https://dl.acm.org/do…
-
Explore the effectiveness of variations on the technique covered in Halko et al. 2011, Section 5.4:
https://arxiv.org/pdf/0909.4061
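As a starting point for those experiments, here is the paper's generic two-stage randomized SVD (this is the basic proto-algorithm, not the specific Section 5.4 variant; parameter names are my own):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Basic randomized SVD in the spirit of Halko et al. 2011.

    Stage A: sketch the range of A with a Gaussian test matrix plus a few
    power iterations. Stage B: exact SVD of the small projected matrix.
    """
    rng = np.random.default_rng(seed)
    # Stage A: randomized range finder with oversampling.
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))
    for _ in range(n_iter):
        # Power iterations sharpen the spectrum before orthogonalizing.
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Stage B: SVD of the small matrix B = Q^T A, then lift back.
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 30)) @ rng.standard_normal((30, 400))
U, s, Vt = randomized_svd(A, k=30)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)  # tiny: A is exactly rank 30
```

Natural variations to compare against this baseline include the oversampling amount, the number of power iterations, and structured (e.g. subsampled FFT) test matrices in place of the Gaussian one.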