-
### 🐛 Describe the bug
Passing the invalid value `True` as the `dim` argument of [Softmax()](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html) produces the error message shown below:
```python
imp…
```
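The traceback above is cut off; for reference, `dim` must be an integer axis index selecting which dimension is normalized. A NumPy sketch of the behavior (the `softmax` helper name and the explicit bool check are illustrative, not PyTorch's code — note that `True` is an `int` subclass in Python, which is why it can slip through naive type checks):

```python
import numpy as np

def softmax(x, dim):
    # dim must be a plain int axis index, as in torch.nn.Softmax(dim=...).
    # bool is a subclass of int, so reject it explicitly.
    if not isinstance(dim, int) or isinstance(dim, bool):
        raise TypeError(f"dim must be an int, got {type(dim).__name__}")
    shifted = x - x.max(axis=dim, keepdims=True)  # stabilize before exp
    e = np.exp(shifted)
    return e / e.sum(axis=dim, keepdims=True)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
cols = softmax(x, dim=0)  # each column sums to 1
rows = softmax(x, dim=1)  # each row sums to 1
```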
-
In mean+stddev, softmax, and layernorm, one ReduceOp builds on top of its parent ReduceOp.
tinygrad is making progress towards fusing these into a single kernel.
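As background on the dependency, the second reduction (stddev) consumes the output of the first (mean), which is what currently forces separate kernels. A NumPy sketch (helper name is illustrative):

```python
import numpy as np

def mean_stddev(x, axis=-1):
    # Reduction 1: mean along the axis.
    mean = x.mean(axis=axis, keepdims=True)
    # Reduction 2 depends on reduction 1: variance of (x - mean).
    var = ((x - mean) ** 2).mean(axis=axis, keepdims=True)
    return mean, np.sqrt(var)

x = np.array([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
mean, std = mean_stddev(x)
```

Fusing these into one kernel means computing both reductions in a single pass over `x` instead of materializing `mean` between them.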
### Milestones
1. mean+stddev fusion…
-
Hi
In the official code (https://github.com/nwojke/deep_sort), the author focuses on generative learning using a `cosine_softmax` loss:
![image](https://user-images.githubusercontent.com/99321359/206056647-3614…
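For reference, a minimal sketch of a cosine-softmax classifier head as commonly formulated (L2-normalized features and class weights, scaled cosine similarities as logits); the helper name and the scale value are illustrative, not taken from the deep_sort repository:

```python
import numpy as np

def cosine_softmax_logits(features, weights, scale=10.0):
    # Logits are scaled cosine similarities between L2-normalized
    # feature vectors (rows) and L2-normalized class weights (columns).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    return scale * (f @ w)  # shape: (batch, num_classes)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))  # 4 samples, 8-dim embeddings
w = rng.normal(size=(8, 3))      # 3 identity classes
logits = cosine_softmax_logits(feats, w)
```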
-
Error while running `nngen.from_onnx()`.
Download the ONNX file:
https://github.com/onnx/models/blob/master/vision/classification/alexnet/model/bvlcalexnet-3.onnx
and run the following Python code:
```python
…
```
-
`triton.__version__` is 3.0.0 for me.
The tutorial code `02-fused-softmax.py` from https://triton-lang.org/main/getting-started/tutorials/02-fused-softmax.html fails to compile a kernel during the…
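For context while the kernel fails to compile, the row-wise softmax that the tutorial fuses into a single Triton kernel can be sketched as a NumPy reference (illustrative, not the tutorial's code); the naive version makes several passes over the data, which is exactly what the fused kernel avoids:

```python
import numpy as np

def naive_row_softmax(x):
    # The reference computation the Triton tutorial fuses into one kernel:
    m = x.max(axis=1, keepdims=True)       # pass 1: row max
    num = np.exp(x - m)                    # pass 2: exponentiate
    den = num.sum(axis=1, keepdims=True)   # pass 3: row sum
    return num / den                       # pass 4: normalize

x = np.random.default_rng(1).normal(size=(8, 16))
y = naive_row_softmax(x)
```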
-
Hello,
Thank you for maintaining this repository and the effort you've put into it. While working with the model, I encountered an issue related to the softmax function in the `_coordinate_selectio…
-
Hi @tridao,
I'd like to use the value of [softmax_lse](https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/flash_attn_interface.py#L1117) in my model and back-propagate gradient thro…
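As background (assuming the usual meaning of this value in flash-attention), `softmax_lse` is the per-row log-sum-exp of the attention scores, and the softmax probabilities can be recovered from it as `exp(scores - lse)`. A NumPy sketch with illustrative names:

```python
import numpy as np

def softmax_via_lse(scores):
    # lse = log(sum(exp(scores))) per row, computed stably.
    m = scores.max(axis=-1, keepdims=True)
    lse = m + np.log(np.exp(scores - m).sum(axis=-1, keepdims=True))
    # Softmax falls out as exp(scores - lse).
    return np.exp(scores - lse), lse

scores = np.array([[0.0, 1.0, 2.0]])
p, lse = softmax_via_lse(scores)
```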
-
There is no need to apply `logits = self.softmax(logits)` in the models: the output is already logits, and `CrossEntropyLoss` is used afterwards, which applies softmax internally. Although it does not affect the results, this step is indeed redundant.
-
pytest -svv tests/TTIR/test_mnist.py::test_softmax
fails with error:
`Always | FATAL | keepdim=False is not supported`
The `keep_dim` flag for the reduction op is always set to `false`. It might r…
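For context on why the flag matters for softmax-style ops, `keepdims` controls whether the reduced axis is retained (with size 1) so the result broadcasts back against the input, which softmax's normalization relies on. A NumPy sketch:

```python
import numpy as np

x = np.ones((2, 3))
# keepdims=True retains the reduced axis with size 1, so the result
# broadcasts cleanly against x; keepdims=False drops the axis.
s_keep = x.sum(axis=1, keepdims=True)  # shape (2, 1)
s_drop = x.sum(axis=1)                 # shape (2,)
normalized = x / s_keep                # broadcast works: (2,3) / (2,1)
```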
-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
This issue is introduced by the `block_softmax` kernel (part of `flat_pa`, see #169).
For some …