-
Our attention module is essentially standard multi-head attention, just with arbitrarily many bias tensors. It’s approximately as follows:
```
a = torch.matmul(q, k)
for b in biases:
    a += …
```
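A runnable sketch of this pattern might look like the following. Everything here (the function name, the head-free shapes, the transpose of `k`, and the `a = a + b` accumulation) is an assumption filled in for illustration, not the module's actual code:

```python
import torch
import torch.nn.functional as F

def attention_with_biases(q, k, v, biases=()):
    """Scaled dot-product attention accepting arbitrarily many additive
    bias tensors. A hypothetical sketch: head splitting and masking from
    the real module are omitted."""
    d = q.shape[-1]
    # [*, n_q, d] x [*, d, n_k] -> [*, n_q, n_k] attention logits
    # (the excerpt's torch.matmul(q, k) suggests k may already be
    # transposed in the real module; we transpose explicitly here)
    a = torch.matmul(q, k.transpose(-1, -2)) / d ** 0.5
    for b in biases:
        a = a + b  # each bias broadcasts against the logits
    w = F.softmax(a, dim=-1)
    return torch.matmul(w, v)
```

Because the biases are only required to broadcast against the `[*, n_q, n_k]` logits, any number of pair biases, masks-as-biases, or per-head offsets can be mixed freely.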
-
It would be useful for torch.distributed.send and .recv to be able to send arbitrary objects. I have two requests:
1. One version of send and recv that does not copy to tensor, but instead returns …
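The usual workaround today is to serialize the object and ship the bytes in a `uint8` tensor. The framing half of that is pure Python and can be sketched without a process group; the helper names below are hypothetical, not part of `torch.distributed`:

```python
import pickle
import struct

def pack_object(obj):
    """Serialize an arbitrary picklable object into a length-prefixed
    byte frame -- the payload one would copy into a uint8 tensor for
    torch.distributed.send (a sketch of the workaround, not a built-in API)."""
    payload = pickle.dumps(obj)
    return struct.pack(">Q", len(payload)) + payload

def unpack_object(frame):
    """Inverse of pack_object: read the 8-byte big-endian length prefix,
    then unpickle exactly that many payload bytes."""
    (n,) = struct.unpack(">Q", frame[:8])
    return pickle.loads(frame[8:8 + n])
```

The length prefix matters because the receiver must allocate a tensor of the right size before calling `recv`; in practice that means one small `send` for the length followed by one for the payload.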
-
Perhaps as 256, 128, 64, and 32? Right now PNGs have arbitrary sizes and some are huge.
-
I have received a hint that the Ultimate 64 with firmware 3.11 crashes with exFAT, so I ran the following test:
1. With drive sizes of 1 GB, 2 GB, 4 GB, ..., 128 GB (always exact power of t…
-
**Update on this**, we did a more detailed design review and came up with a better solution, [Idea 3](#idea3) below. Benchmarks confirm this is just as fast as using the native `Nat` and `Int` functio…
-
- [FFT Convolution](https://www.dspguide.com/ch18/2.htm)
- [Very Efficient Training of Convolutional Neural Networks using Fast Fourier Transform and Overlap-and-Add](https://arxiv.org/pdf/1601.06815…
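The core trick from the first link can be sketched in a few lines of NumPy (assumed here as a neutral illustration; the linked paper applies the same identity per tile with overlap-and-add): zero-pad both signals to the full output length, multiply their spectra, and inverse-transform.

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT convolution theorem.

    Pads both inputs to len(x) + len(h) - 1 so the circular
    convolution computed by the FFT equals the linear one.
    """
    n = len(x) + len(h) - 1
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)
```

For long signals and short kernels, overlap-and-add splits `x` into blocks, convolves each block this way, and sums the overlapping tails.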
-
Is there any interest in supporting font variations in the `MATH` table?
At a high level, I think there are two aspects of font variations regarding the `MATH` table:
1. `MATH` in variable fonts: say, a fo…
-
Our kernel generator can only handle FFT vector sizes that contain 2, 3, and 5 as prime factors. This enhancement is to add support for generating kernels that can handle arbitrary vector sizes that do …
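The supported-size constraint is easy to state precisely: a length is handled only if it is "5-smooth", i.e. it factors entirely into 2s, 3s, and 5s. A small checker (the function name is mine, not part of the generator) makes the boundary concrete:

```python
def is_supported_size(n):
    """True if n factors entirely into 2, 3 and 5 (a 5-smooth size),
    i.e. a size the current kernel generator can handle."""
    if n < 1:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1
```

Sizes containing any other prime factor (7, 11, 13, …) are exactly the ones this enhancement would add, typically via Bluestein's or Rader's algorithm.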
-
![Internal Exception: io.netty.handler.codec.DecoderException: java.lang.IllegalArgumentException: Payload may not be larger than 32767 bytes](https://github.com/user-attachments/assets/96c2a61d-6e94-…
-
I'm transforming images to run them through an Nx.Serving. One of the transformation steps is a resize. I was originally using the resize function in this module. When we first put it under heavy load…