butchland/fastai_xla_extensions
A Python package to allow fastai to run on TPUs using Pytorch-XLA
https://butchland.github.io/fastai_xla_extensions
Apache License 2.0 · 36 stars · 7 forks
Issues (newest first)
#49 ImportError: cannot import name 'inf' from 'torch' (pankaj-kvhld, opened 1 year ago, 0 comments)
#48 NameError: name 'xmp' is not defined (pankaj-kvhld, opened 1 year ago, 0 comments)
#47 Bump nokogiri from 1.13.6 to 1.14.3 in /docs (dependabot[bot], opened 1 year ago, 0 comments)
#46 Bump activesupport from 6.0.3.2 to 6.0.6.1 in /docs (dependabot[bot], opened 1 year ago, 0 comments)
#45 Bump nokogiri from 1.13.6 to 1.13.9 in /docs (dependabot[bot], closed 1 year ago, 1 comment)
#44 Please be compatible with the latest version of fastai, thank you! (alleniver, opened 2 years ago, 0 comments)
#43 Bump tzinfo from 1.2.7 to 1.2.10 in /docs (dependabot[bot], closed 2 years ago, 0 comments)
#42 Bump nokogiri from 1.13.4 to 1.13.6 in /docs (dependabot[bot], closed 2 years ago, 0 comments)
#41 Bump nokogiri from 1.12.5 to 1.13.4 in /docs (dependabot[bot], closed 2 years ago, 0 comments)
#40 Bump nokogiri from 1.12.5 to 1.13.3 in /docs (dependabot[bot], closed 2 years ago, 1 comment)
#39 Bump nokogiri from 1.11.5 to 1.12.5 in /docs (dependabot[bot], closed 2 years ago, 0 comments)
#38 Bump addressable from 2.7.0 to 2.8.0 in /docs (dependabot[bot], closed 3 years ago, 0 comments)
#37 Is this repo still working? Can't run the colab notebooks provided in the docs (coldfir3, closed 3 years ago, 1 comment)
#36 Bump nokogiri from 1.10.10 to 1.11.5 in /docs (dependabot[bot], closed 3 years ago, 0 comments)
#35 xla_fit_one_cycle fails with an error when using fastai version 2.3.1 (butchland, opened 3 years ago, 0 comments)
#34 Bump rexml from 3.2.4 to 3.2.5 in /docs (dependabot[bot], closed 3 years ago, 0 comments)
#33 xla_lr_find locks xm.xla_device in main process (butchland, closed 3 years ago, 0 comments)
#28 Creating a model with concat_pool=True creates a model that can't run on multiple TPU cores (butchland, opened 3 years ago, 0 comments)
#27 Using fastai data loaders on multiple TPU cores is slow with large image sizes (butchland, opened 3 years ago, 0 comments)
#26 Add `utils.print_aten_ops` (tyoc213, closed 3 years ago, 1 comment)
#25 fastai_xla_extensions on Kaggle errors out (butchland, closed 3 years ago, 7 comments)
#22 export and load_learner (for inference) are not working (butchland, closed 3 years ago, 1 comment)
#18 Fixing some issues found while testing XLA locally (tyoc213, closed 3 years ago, 1 comment)
#17 Proxy to get attr (tyoc213, closed 3 years ago, 2 comments)
#16 Run on multiple TPUs (tyoc213, closed 4 years ago, 1 comment)
#15 default_device no longer sets to overridden default_device method (butchland, closed 3 years ago, 1 comment)
#14 Setting pretrained=True causes the cnn_learner model to not train (butchland, closed 3 years ago, 2 comments)
#13 Let user acquire device (tyoc213, closed 4 years ago, 1 comment)
#12 Replace fastai2 with fastai in everything except explore_nbs (tyoc213, closed 4 years ago, 1 comment)
#11 Run batch transforms on CPU if dataloader is using a TPU (butchland, closed 3 years ago, 1 comment)
#10 Tracing (tyoc213, closed 4 years ago, 0 comments)
#9 Xla run debug (tyoc213, closed 4 years ago, 0 comments)
#8 Sending RNN test (tyoc213, closed 4 years ago, 2 comments)
#7 Add patch to LMLanguageLearner to fix the load_ignore_keys state_dict error caused by TPU use (butchland, opened 4 years ago, 0 comments)
#6 Language model trainer is very slow or does not finish fit_one_cycle for 1 epoch (butchland, opened 4 years ago, 2 comments)
#5 Batch transforms for vision are slow (butchland, opened 4 years ago, 4 comments)
#4 learner.lr_find returns wrong result (butchland, closed 4 years ago, 2 comments)
#3 It seems that using the value of the line in lr_find gives good results on training? (tyoc213, closed 4 years ago, 0 comments)
#2 TODO (tyoc213, closed 4 years ago, 0 comments)
#1 Trying to pinpoint why we can't train a model (tyoc213, closed 4 years ago, 3 comments)
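Two of the open issues (#49's `ImportError: cannot import name 'inf' from 'torch'` and #48's `NameError: name 'xmp' is not defined`) are import-time breakage rather than training bugs. A minimal, hedged compatibility sketch follows; it assumes only that the `torch.inf` alias is missing on older PyTorch releases and that `xmp` is the conventional alias for `torch_xla.distributed.xla_multiprocessing` (neither library need be installed for the sketch to run):

```python
import math

# Issue #49: `from torch import inf` fails on PyTorch releases that
# predate the torch.inf alias (and also if torch is not installed at all).
# math.inf is the same IEEE-754 infinity in either case.
try:
    from torch import inf
except ImportError:
    inf = math.inf

# Issue #48: a NameError on `xmp` usually means this conventional
# torch_xla import never ran, e.g. because torch_xla is not installed.
try:
    import torch_xla.distributed.xla_multiprocessing as xmp
except ImportError:
    xmp = None  # no TPU multiprocessing available on this machine

# Sanity checks that hold whether or not torch / torch_xla are present.
assert inf == math.inf
assert xmp is None or hasattr(xmp, "spawn")
```

This does not fix version skew with fastai itself (issue #44); it only makes the failure modes of the two import errors explicit.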