-
### System Info
```Shell
- `Accelerate` version: 0.33.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- `accelerate` bash location: /opt/conda/bin/accelerate
- Python version: 3.10.14
- Nu…
```
-
Can we add something related to TPUs to the example?
There was a FAQ about creating custom ops for TPUs: https://cloud.google.com/tpu/docs/faq
-
root@1cf5ef2c1064:~/cvi/cvi_alios_open_licheervnano# ./host-tools/Xuantie-900-gcc-elf-newlib-x86_64-V2.6.1/bin/riscv64-unknown-elf-addr2line -e solutions/smart_doorbell/yoc.elf -a -f 0x80050858 0x800…
-
Hi, first of all, thanks for this amazing work 👍.
Is it possible to run the `train_network.py` script on a TPU?
I actually tried, but it's not working. I even removed `.to("cuda")`; now I'm not se…
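Not verified against this repo, but a common pattern for making a CUDA-only PyTorch script TPU-capable is to centralize device selection instead of hard-coding `.to("cuda")`. `select_device` below is a hypothetical helper; the `torch_xla` import is only expected to succeed on a TPU setup:

```python
def select_device():
    """Hypothetical helper: prefer a TPU (XLA) device when torch_xla is
    installed, else CUDA, else CPU. The training script would then call
    model.to(select_device()) instead of hard-coding .to("cuda")."""
    try:
        # torch_xla is normally only importable on a TPU VM / Colab TPU runtime
        import torch_xla.core.xla_model as xm
        return xm.xla_device()
    except ImportError:
        pass
    try:
        import torch
        return torch.device("cuda" if torch.cuda.is_available() else "cpu")
    except ImportError:
        # Plain-string fallback when torch itself is absent
        return "cpu"
```

The point of the sketch is that every tensor/model placement goes through one function, so porting to TPU becomes a one-line change rather than a hunt for every `.to("cuda")`.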
-
I'm looking to build an automatic differentiation library for TPUs without using high-level front-ends like TensorFlow/JAX/PyTorch-XLA, but I'm finding information about lower-level TPU usage is pract…
-
I get this error when trying to predict on a TFRecord dataset.
Error Message:
```
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError …
```
-
Hi, I wonder if it is now possible to use TPUs?
Thanks
-
## 🐛 Usability Bug
Consider the following scenario:
Helper Code:
```python
def is_tpu_available():
    devices = xm.xla_real_devices()
    if len(devices) > 0:
        return 'TPU' in devices[0]
…
```
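An alternative sketch that avoids calling into `torch_xla` at all is to probe the environment variables that TPU runtimes commonly export. The variable names here are an assumption and differ across Colab, GCE VMs, and runtime versions, so this is a heuristic, not an authoritative check:

```python
import os

def is_tpu_available_env():
    # Heuristic only: TPU runtimes typically export one of these variables.
    # The exact names are an assumption and vary by platform/runtime version.
    tpu_env_vars = ("TPU_NAME", "XRT_TPU_CONFIG", "COLAB_TPU_ADDR")
    return any(var in os.environ for var in tpu_env_vars)
```

Because it never touches the XLA client, this check is cheap and safe to run on machines where `torch_xla` is not installed at all.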
-
Hello,
If we plan to use TPUs instead of GPUs, is it possible with the current config or shall we use a different configuration?
Thanks
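A hedged sketch, assuming a recent 🤗 Accelerate: TPU support is chosen at configuration time, so re-running the interactive setup and selecting TPU is usually all that changes; `train.py` below is a stand-in for your actual script:

```shell
# Re-run the interactive setup and pick "TPU" when asked for the compute
# environment; this rewrites the saved default_config.yaml.
accelerate config

# Launch the same training script unchanged; Accelerate handles the
# TPU-specific process spawning.
accelerate launch train.py
```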