-
### 🐛 Describe the bug
When executing the [pippy_bert.py](https://github.com/pytorch/PiPPy/blob/main/examples/huggingface/pippy_bert.py) example with the CPU gloo backend:
```
torchrun --nproc-per-node…
```
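For context, gloo is the CPU-capable collective backend in `torch.distributed`. Below is a minimal sketch, assuming a plain `torchrun` launch, of how a CPU/gloo process group is typically initialized; it is illustrative only and is not the actual setup inside `pippy_bert.py`.
```python
# Minimal sketch of CPU/gloo process-group setup under torchrun.
# Illustrative only -- not the code from pippy_bert.py.
import torch.distributed as dist


def main():
    # torchrun exports RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT,
    # so the default env:// rendezvous picks them up automatically.
    dist.init_process_group(backend="gloo")
    print(f"rank {dist.get_rank()}/{dist.get_world_size()} initialized with gloo")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```
Such a script would be launched with, e.g., `torchrun --nproc-per-node 2 script.py`.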
-
### Describe the bug
Exception in thread "main" uk.ac.manchester.tornado.api.exceptions.TornadoInternalError: org.graalvm.compiler.debug.GraalError: should not reach here: node is not LIRLowerable: …
-
| Field | Value |
| --- | --- |
| Bugzilla Link | [48794](https://llvm.org/bz48794) |
| Version | unspecified |
| OS | Linux |
| Attachments | [Source file to reproduce the bug - compile with -DTILED_COPY]…
-
```
mateusz@debian:~/model-zoo/vision/diffusion_mnist$ julia
[Julia REPL startup banner; Documentation: https://docs.julialang.org]
…
```
-
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
GIT_VERSION:v2.14.0-rc1-21-g4dacf3f368e VERSION:2.14.0
### Custom code
…
-
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12…
```
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
```
-
Right now you can pass an integer argument to specify 1D dynamic shared memory. Supporting a tuple argument for 2D+ dynamic shared memory allocation would be a great addition.
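For reference, the sketch below illustrates the current 1D pattern, using Numba's CUDA API purely as a stand-in (an assumption; the issue does not name the target library here): the dynamic shared-memory size is passed as a single integer byte count in the launch configuration, and a 2D tile has to be emulated with manual row-major indexing on the flat buffer.
```python
# Illustrative sketch only; Numba CUDA is used as a stand-in API here.
import numpy as np
from numba import cuda, float32

TILE = 16


@cuda.jit
def tile_sum(out):
    # Shape 0 declares a *dynamic* shared array; its actual size comes from
    # the integer byte count passed in the launch configuration, and it is 1D.
    smem = cuda.shared.array(0, dtype=float32)
    tx, ty = cuda.threadIdx.x, cuda.threadIdx.y
    # Emulate a 2D tile smem[ty, tx] with row-major index arithmetic.
    smem[ty * TILE + tx] = tx + ty
    cuda.syncthreads()
    if tx == 0 and ty == 0:
        total = 0.0
        for i in range(TILE * TILE):
            total += smem[i]
        out[0] = total


out = cuda.device_array(1, dtype=np.float32)
nbytes = TILE * TILE * np.dtype(np.float32).itemsize   # the single integer argument
tile_sum[(1,), (TILE, TILE), 0, nbytes](out)            # [grid, block, stream, dyn smem bytes]
```
Accepting a tuple such as `(TILE, TILE)` in place of the flat size would let the allocation be declared and indexed as 2D directly, removing the manual offset arithmetic.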
-
### Your current environment
Tested with both v0.5.3.post1 and v0.5.4.
```text
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTor…
```
-
### Description
I updated my NVIDIA driver:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, releas…
```