**Mon-ius** opened 6 months ago
Environment setup:

```shell
conda create -n xla python=3.11 transformers diffusers datasets accelerate evaluate torchvision torchaudio bitsandbytes safetensors sentencepiece imageio scipy numpy pyglet gradio open3d fire rich -c conda-forge -c pytorch -y
conda activate xla
conda env config vars set LD_LIBRARY_PATH="$CONDA_PREFIX/lib"
conda env config vars set HF_HOME="/dev/shm"
conda env config vars set PJRT_DEVICE=TPU
# conda env config vars set XLA_USE_BF16=1
# conda env config vars set XLA_USE_SPMD=1
conda deactivate && conda activate xla
pip install 'torch~=2.2.0' --index-url https://download.pytorch.org/whl/cpu
pip install 'torch_xla[tpu]~=2.2.0' -f https://storage.googleapis.com/libtpu-releases/index.html
pip uninstall -y accelerate
pip install git+https://github.com/huggingface/accelerate
```
It seems this op got codegen in https://github.com/pytorch/pytorch/blob/3243be7c3a7e871acfc9923eea817493f996da9a/torchgen/model.py#L166, but we never implemented the corresponding sparse kernels. I can turn this into a feature request, but it is unlikely we will have the resources to work on sparse-related projects anytime soon.
Is there an alternative solution for implementing `torch.sparse_coo_tensor` on an XLA device?
Not that I am aware of; we haven't thought much about sparsity yet.
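In the absence of native sparse kernels, one workaround is to materialize the COO data as a dense tensor using ordinary indexing ops (which do lower to XLA) and only then move the result to the device. This is a sketch, not an official torch_xla API; `coo_to_dense` is a hypothetical helper:

```python
import torch

def coo_to_dense(indices, values, size):
    """Dense equivalent of torch.sparse_coo_tensor(indices, values, size).to_dense(),
    built from ops that do not require sparse kernels."""
    dense = torch.zeros(size, dtype=values.dtype)
    # accumulate=True sums duplicate coordinates, matching COO coalescing semantics
    dense.index_put_(tuple(indices), values, accumulate=True)
    return dense

# 2x3 matrix with 3.0 at (0, 2) and 4.0 at (1, 0)
indices = torch.tensor([[0, 1], [2, 0]])
values = torch.tensor([3.0, 4.0])
dense = coo_to_dense(indices, values, (2, 3))
# dense can now be moved to the XLA device, e.g. dense.to(xm.xla_device())
```

This trades memory for compatibility, so it is only viable when the dense shape fits on the device.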
Bugs: