neurokitti / AIRIS-VtuberAI


not fully sure what's happening #2

Open sbk-djarman opened 1 month ago

sbk-djarman commented 1 month ago

After getting to the option for which version to run, I'm getting errors like:

"C:\Users\dylan\OneDrive\Desktop\alinity\AIRIS-VtuberAI.venv\lib\site-packages\transformers\models\auto\configuration_auto.py", line 691, in getitem raise KeyError(key) KeyError: 'mllama' During handling of the above exception, another exception occurred: File "c:\Users\dylan\OneDrive\Desktop\alinity\AIRIS-VtuberAI\OpenVoice\main.py", line 22, in main main_chat_youtube_non_legacy(CHAT_MODEL, CHAT_VTUBER_NAME,BANNED_WORDS,USE_DEFAULT_PROFANITY_LIST,HUGGING_FACE_TOKEN) File "c:\Users\dylan\OneDrive\Desktop\alinity\AIRIS-VtuberAI\OpenVoice\startup_scripts.py", line 63, in main_chat_youtube_non_legacy chat_instance = neo_chat_engine(model,Vtuber_name,mem_length=5,token=HF_token,path_to_system_messages = "OpenVoice\system_message.txt",device_map="cuda",load_in_4bit=True,) File "c:\Users\dylan\OneDrive\Desktop\alinity\AIRIS-VtuberAI\OpenVoice\chat_API.py", line 20, in init self.model = AutoModelForCausalLM.from_pretrained( File "C:\Users\dylan\OneDrive\Desktop\alinity\AIRIS-VtuberAI.venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 524, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "C:\Users\dylan\OneDrive\Desktop\alinity\AIRIS-VtuberAI.venv\lib\site-packages\transformers\models\auto\configuration_auto.py", line 991, in from_pretrained raise ValueError( ValueError: The checkpoint you are trying to load has model type mllama but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date

sbk-djarman commented 1 month ago

I think I tried using the wrong Hugging Face model; I'm waiting for access to see if that fixes it.

neurokitti commented 1 month ago

Yeah, that looks like you are trying to load a model the Hugging Face pipeline doesn't support. Which one were you using? You could also try updating your version of Transformers if you are using a newer model.
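For reference, a quick way to check which Transformers release is installed before upgrading (a minimal sketch; the exact version you need depends on the model, and the mllama architecture from the error above is only recognized by newer releases):

# Minimal sketch: print the installed Transformers version so you can tell
# whether it predates support for the model architecture you're loading.
# If it's old, upgrade with: pip install --upgrade transformers
import transformers

print(transformers.__version__)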

sbk-djarman commented 1 month ago

Yes, I did manage to get it! It is now downloading shards, but my internet is slow so I'll have to wait, haha. The only thing is I don't see a model in VTube Studio, but I'll mess with that after the shards are done.

sbk-djarman commented 1 month ago

Also, as the shards are downloading, it won't stop saying "trial".

sbk-djarman commented 1 month ago

Microsoft Phi-3 4k would be better because it's a smaller download, right?

neurokitti commented 1 month ago

Yes, Phi-3 4k instruct is the best small model I've found that works. You could also try the new Meta Llama 3.2 1B or 3B, since those are small too, but I haven't had a chance to test them.
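For reference, a rough sketch of loading one of those smaller models the same way the repo's chat engine does (the model IDs below are assumptions for illustration; the meta-llama repos are gated, so they need a Hugging Face token with access, and 4-bit loading needs bitsandbytes installed):

# Rough sketch: load a small chat model in 4-bit, mirroring the
# AutoModelForCausalLM.from_pretrained call shown in the traceback above.
# The model ID is an assumption; swap in whichever repo you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"  # or "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="cuda",   # same device_map the repo passes in chat_API.py
    load_in_4bit=True,   # 4-bit quantization to keep VRAM usage down
    # token="hf_...",    # only needed for gated repos like meta-llama
)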

sbk-djarman commented 1 month ago

Got everything partially working and am running interview mode, but the main.py file doesn't have an interview mode to comment or uncomment. It doesn't really seem like it's responding to me... In the video, when you talk, the terminal shows it's recording audio, but it doesn't do that for me either.

neurokitti commented 1 month ago

Are you in the Discord? I can take a more in-depth look there.