Did anyone use the tokens generated from Tokens.txt file and use it to load tokenizer?
I was able to load the word2vec model using the code snippet shown in the link. But when it comes to initializing the tokenizer, I am struggling a bit. My approach is:
`embedding_layer_text` is the list of all the code data. But I am wondering whether we can directly load the tokens generated by code2vec, following the approach mentioned in the link?