-
I found that v2.6.3's `flash_attn_varlen_func` runs faster than v2.7.0.post2's `flash_attn_varlen_func` on an H100.
Code:
```
import torch
from hopper.flash_attn_interface import flash_attn_func, flash…
```
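For a self-contained comparison, here is a minimal timing sketch of my own (not the truncated script above); it assumes flash-attn's public varlen API, `flash_attn_varlen_func(q, k, v, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, ...)`, and times it with CUDA events. Run it once under v2.6.3 and once under v2.7.0.post2:
```python
# Illustrative benchmark sketch (my own, not the reporter's script).
import torch
from flash_attn import flash_attn_varlen_func

def bench_ms(fn, iters=100, warmup=10):
    # CUDA-event timing: warm up first so one-time setup is excluded.
    for _ in range(warmup):
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

# Toy varlen batch: two sequences (1024 and 2048 tokens), 16 heads, head dim 128.
lens = torch.tensor([1024, 2048], device="cuda")
cu_seqlens = torch.nn.functional.pad(lens.cumsum(0), (1, 0)).to(torch.int32)
total = int(lens.sum())
q = torch.randn(total, 16, 128, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

ms = bench_ms(lambda: flash_attn_varlen_func(
    q, k, v, cu_seqlens, cu_seqlens, 2048, 2048, causal=True))
print(f"{ms:.3f} ms/iter")  # compare this number across flash-attn versions
```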
-
@tdhock I want to include this in the section on Comparing bench::press and atime::atime. What do you think, and how best can I go about it?
```{r}
library(atime)
subject.size.vec
```
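One way to frame the comparison, sketched from my own assumptions rather than the truncated chunk above, is to time the same expressions over the same grid of sizes with both tools:
```r
library(atime)
library(bench)

# atime::atime varies N and records time/memory per named expression.
atime.result <- atime::atime(
  N = 10^seq(1, 6),
  setup = {
    x <- rnorm(N)
  },
  mean = mean(x),
  sum = sum(x)
)
plot(atime.result)

# bench::press runs bench::mark over the same grid of N.
press.result <- bench::press(
  N = 10^seq(1, 6),
  {
    x <- rnorm(N)
    bench::mark(mean(x), sum(x), check = FALSE)
  }
)
```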
-
## Expected Behavior
After using `mmseqs easy-linclust` clustering, the retained sequences are non-redundant.
## Current Behavior
After using `mmseqs easy-linclust` clustering, the retained seque…
-
```r
library(dplyr)
library(tidyr)
library(plotly)
# Parameters
contract_size
```
-
**Goals**
Jeff suggests making a quick edit on Spacely to concatenate the send commands into one line and parse the receive afterwards (a rough sketch follows the checklist):
- [x] Benchmark read/write before the change
- [x]…
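A minimal sketch of the batching idea, with a hypothetical socket-based `run_batched` helper rather than Spacely's actual API:
```python
import socket

def run_batched(sock: socket.socket, commands: list[str], sep: str = ";") -> list[str]:
    """Send all commands as one concatenated line, then split the reply.

    Hypothetical illustration: one write ("cmd1;cmd2;...\n") replaces N
    separate round-trips, and the combined response is parsed afterwards,
    assuming the device returns one reply per command with the same separator.
    """
    sock.sendall((sep.join(commands) + "\n").encode())
    reply = sock.recv(65536).decode().strip()  # sketch: assumes one recv suffices
    return reply.split(sep)
```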
-
First, we set up the data:
```r
data(chagas2012)
serodata
```
-
- How do we generate the synthetic data?
- How do we solve it with Bayes?
- How do we solve it with _likelihood_?
## References
- ⬅️ #59
-
There is a train.py script within the encodec directory; however, attempting to run `python srcs.encodec.train.py --help` or `python encodec.train.py --help` produces circular import errors …
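For what it's worth, errors like this commonly appear when a script that lives inside a package is launched as a bare file rather than as a module; a hypothetical sketch of that failure mode (the module and function names are mine, not the repo's):
```python
# srcs/encodec/train.py -- hypothetical layout mirroring the paths above.
# Absolute imports through the package resolve when the script is launched
# as a module from the repo root (`python -m srcs.encodec.train --help`),
# but can fail or go circular when the file is executed directly, because
# the interpreter then puts the script's own directory (not the repo root)
# on sys.path and may import train.py a second time under another name.
from srcs.encodec.model import build_model  # hypothetical sibling module

if __name__ == "__main__":
    build_model()
```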
-
To reproduce, you can throw together some toy data where the maximum delay is an even number of weeks (e.g. 14 days):
```
# Say we have 6 weeks of data with a max delay of 2 weeks
ref_dates
```
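A sketch of what that toy data could look like (my own construction; the column names are illustrative, not from the original snippet):
```r
# Hypothetical toy data: 6 weeks of daily reference dates with a maximum
# reporting delay of 14 days (an even number of weeks).
ref_dates <- seq(as.Date("2024-01-01"), by = "day", length.out = 6 * 7)
max_delay <- 14
toy <- data.frame(
  reference_date = rep(ref_dates, each = max_delay + 1),
  delay          = rep(0:max_delay, times = length(ref_dates))
)
toy$report_date <- toy$reference_date + toy$delay
toy$count <- rpois(nrow(toy), lambda = 5)
head(toy)
```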
-
Repost from the [PyTorch forum](https://discuss.pytorch.org/t/flex-attention-gaps-in-profiler/211917/1)
I have recently been playing with FlexAttention, trying to replace some of my custom Triton …
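Since the truncated post concerns gaps in profiler traces, here is a minimal repro sketch of my own, using PyTorch's `torch.nn.attention.flex_attention` under `torch.compile` and the standard profiler:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Compile once; FlexAttention is designed to be used under torch.compile.
flex_attention = torch.compile(flex_attention)

def causal(score, b, h, q_idx, kv_idx):
    # score_mod: mask out attention to future positions.
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Warm-up run so compilation does not pollute the trace.
flex_attention(q, k, v, score_mod=causal)
torch.cuda.synchronize()

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA],
) as prof:
    flex_attention(q, k, v, score_mod=causal)
    torch.cuda.synchronize()
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```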