The NLP example currently uses GloVe word vectors from Stanford's repository, but these are:
- Sometimes slow to download on our typical instance type (~6 min 30 s), because the combined zip of the 50/100/200/300D vectors is downloaded and the unused files are discarded; there doesn't seem to be a separate download for just the 100D vectors the model uses (see the sketch after this list).
- Only offered pre-trained in English, which makes the exercise less transferable for ASEAN customers.
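For context, the download pattern being complained about looks roughly like the sketch below (Python here for illustration). The URL and the member filename are assumptions based on the publicly listed Stanford GloVe downloads, not copied from the example's code, which may differ in detail.

```python
import urllib.request
import zipfile

# Assumed archive URL and member name; the example's actual code may differ.
GLOVE_ZIP_URL = "https://nlp.stanford.edu/data/glove.6B.zip"   # combined 50/100/200/300D archive
WANTED_MEMBER = "glove.6B.100d.txt"                            # the only file the model uses

def fetch_glove_100d(dest_dir: str = ".") -> str:
    """Download the combined GloVe archive and keep only the 100D vectors."""
    # Slow step: the whole multi-size archive is transferred, not just the 100D file.
    local_zip, _ = urllib.request.urlretrieve(GLOVE_ZIP_URL)
    with zipfile.ZipFile(local_zip) as zf:
        # Only the 100D file is extracted; the other sizes are simply discarded.
        zf.extract(WANTED_MEMBER, path=dest_dir)
    return f"{dest_dir}/{WANTED_MEMBER}"
```

The cost is almost entirely in transferring the combined archive, not in the extraction step, which is why the wait can't be shortened without a different source for the vectors.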
We could instead consider: