paulbricman / dual-obsidian-client

A skilled virtual assistant for Obsidian.
https://paulbricman.com/thoughtware/dual
Mozilla Public License 2.0

feat: add gpt-neo model handling #42

Open onlurking opened 3 years ago

onlurking commented 3 years ago

This PR enables GPT-Neo model loading using the same transformers API from Hugging Face.

Related: #40
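
For reference, a minimal sketch of the kind of loading path this targets, using the public EleutherAI checkpoint name as a stand-in for whatever local path the plugin actually passes in (GPT-Neo ships with a GPT-2-style tokenizer):

```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

# "EleutherAI/gpt-neo-1.3B" is only a placeholder checkpoint name;
# GPT-Neo reuses the GPT-2 byte-pair tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

prompt = "Obsidian is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```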

onlurking commented 3 years ago

Google Colab notebook with GPT-Neo model support:

https://colab.research.google.com/drive/1xqEZeZY3aYl4w859Ej4sCsX-2LxBGU1l?usp=sharing

paulbricman commented 3 years ago

Thanks for the PR! As mentioned on Discord (https://discord.com/channels/817119487999606794/825717174257319974/833584636533407745), I think using the AutoModel class from transformers would make the implementation somewhat simpler, as you can simply give it the local path to the model and it can figure out what's in there. What do you think? I'm not sure about the tokenizer, though I think GPT-2 and GPT-Neo use similar tokenizers?
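
A rough sketch of the Auto* approach suggested above, assuming a hypothetical local model directory; AutoModelForCausalLM is used here rather than the bare AutoModel so the language-modelling head needed for generation is included:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# "./models/gpt-neo-1.3B" is a placeholder path. The Auto* classes read
# config.json inside the directory and instantiate the matching classes,
# so the same code works for GPT-2 and GPT-Neo checkpoints alike.
model_path = "./models/gpt-neo-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
```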

onlurking commented 3 years ago

Hi @paulbricman!

I've followed the transformers docs and both models use the same GPT2Tokenizer class, but it's totally possible to replace the model-specific code with AutoTokenizer, AutoConfig and AutoModel instead.

After work, I'll take a look at this.
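
If the plugin ever needs to confirm which architecture a local checkpoint contains, AutoConfig offers one way to check; the path below is again a placeholder:

```python
from transformers import AutoConfig

# AutoConfig only reads config.json, so this is cheap to call.
config = AutoConfig.from_pretrained("./models/gpt-neo-1.3B")
print(config.model_type)  # "gpt_neo" for GPT-Neo, "gpt2" for GPT-2
```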

onlurking commented 3 years ago

@paulbricman done!