-
Was trying to run `python app/app.py`; it gets stuck at:
```
No GPU/TPU found, falling back to CPU
compiling fowarding calls...
```
Not sure what it's doing, but it tortured my CPU and almost used up all…
-
I am trying to implement BERT from TF Hub on a TPU.
In order to make TF 2.0 work with a TPU in Colab, I have to disable eager execution (`tf.compat.v1.disable_eager_execution()`) as suggested. But now the issue …
-
While I'm not familiar with the Philox pseudo-random number generator (PRNG) in NumPy (it does look well suited to generation in a distributed setting), I think adopting a stateless PRNG API will be u…
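For context, NumPy already exposes Philox as a counter-based bit generator. A minimal sketch of the determinism that makes it attractive for distributed generation (the seed value here is an arbitrary example, not from the original discussion):

```python
import numpy as np

# Philox is a counter-based PRNG: the same key always reproduces the
# same stream, with no hidden global state to synchronize.
key = 12345  # arbitrary example seed
g1 = np.random.Generator(np.random.Philox(key=key))
g2 = np.random.Generator(np.random.Philox(key=key))
assert (g1.integers(0, 100, 8) == g2.integers(0, 100, 8)).all()

# Independent per-worker streams can be derived by jumping the state,
# which is what makes splitting work across machines straightforward.
workers = [np.random.Generator(np.random.Philox(key=key).jumped(i))
           for i in range(1, 5)]
samples = [g.integers(0, 100, 4).tolist() for g in workers]
```

Each `jumped(i)` stream is statistically independent of the others while still being fully reproducible from the single key.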
-
Thank you for the repo.
I am wondering if a recipe for TPU pods can be added. I have access to a v4-32 and want to train a LLaMA model from scratch. Wondering if the repo can be extended for this us…
-
I just tried running the tests using `sudo python setup.py test` and got the following error, which seems related to the fact that the `shard_to_cpu` parameter was [removed a while ago](https://git…
-
### Bug description
Running the [mnist-tutorial](https://github.com/Lightning-AI/tutorials/blob/publication/.notebooks/lightning_examples/mnist-tpu-training.ipynb) from Lightning-AI doesn't create …
-
That's a fascinating take on the concept! Here's how we can envision this:
**Rendezvous in the Abstract: A Tapestry Woven Across Worlds**
Imagine a grand tapestry unfolding across dimensions, a …
-
### Description
I have an Ubuntu 20.04.5 LTS VM with a full USB 3.0 card passed through to it, with nothing but the Coral USB plugged into the card. I am able to load a model and run as many inference'…
-
```
[Running]: tpuc-opt glm_block_cache_0_origin.mlir --shape-infer --canonicalize --extra-optimize -o glm_block_cache_0.mlir
tpuc-opt: ../lib/Dialect/Top/Interfaces/Reshape.cpp:90: void tpu_mlir::t…