Closed EmiyaKatuz closed 1 year ago
Following BV13t4y1V7DV, once cmake is installed successfully, running pip install -r requirements.txt directly should avoid the environment-conflict problem. Alternatively, just use the packaged Japanese cleaner.
Thanks, the environment conflict is resolved. However, after switching to the packaged Japanese cleaner and extracting the cleaners archive to replace the contents of the text folder, I still get the following error. What is the cause?
Traceback (most recent call last):
File "D:...\vits_with_chatgpt-gpt3\main.py", line 6, in <module>
After extraction, Japanese cleaners should be a cleaners folder containing char.bin, JapaneseCleaner.dll, matrix.bin, sys.dic, and unk.dic. Check where that folder currently sits, and try placing it directly at the path shown in your error: "D:\Program Files (x86)\IDE\apps\PyCharm-P\ch-0\231.8109.197\jbr\bin\cleaners"
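To verify the placement without guessing, a quick sanity-check sketch; `missing_files` and `EXPECTED` are hypothetical names for this example, and the file list is the one given above:

```python
from pathlib import Path

# Files the packaged Japanese cleaner is expected to ship with (see above)
EXPECTED = ["char.bin", "JapaneseCleaner.dll", "matrix.bin", "sys.dic", "unk.dic"]

def missing_files(folder):
    """Return the expected cleaner files that are absent from `folder`."""
    root = Path(folder)
    return [name for name in EXPECTED if not (root / name).is_file()]

# Point this at wherever you extracted the archive, e.g.:
# print(missing_files(r"D:\Program Files (x86)\IDE\apps\PyCharm-P\ch-0\231.8109.197\jbr\bin\cleaners"))
```

An empty list means the folder is complete; otherwise it names exactly what is missing.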
Thank you very much; the project backend now runs.
However, when used together with the frontend it only returns empty answers and reports the following error. What could the reason be?
ERROR:local_chat:Exception on /chat [GET]
Traceback (most recent call last):
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\flask\app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\flask\app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\flask\app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\flask\app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\local_chat.py", line 103, in text_api
response, new_history = model.chat(tokenizer, message, history)
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\19110/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1255, in chat
outputs = self.generate(**inputs, **gen_kwargs)
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\transformers\generation\utils.py", line 1437, in generate
return self.sample(
File "D:\EmiyaKatuz\CS_Homework\vits_with_chatgpt-gpt3\venv\lib\site-packages\transformers\generation\utils.py", line 2440, in sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "C:\Users\19110/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1091, in prepare_inputs_for_generation
mask_positions = [seq.index(mask_token) for seq in seqs]
File "C:\Users\19110/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1091, in <listcomp>
mask_positions = [seq.index(mask_token) for seq in seqs]
ValueError: 150001 is not in list
Try entering the URL 127.0.0.1:8080/chat?Text=测试 directly.
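The same check can be scripted; a minimal sketch using only the standard library, assuming the backend is running locally on port 8080 and takes the Text query parameter shown above:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Non-ASCII text must be percent-encoded in the query string; a browser
# does this automatically when you type 测试 in the address bar.
url = "http://127.0.0.1:8080/chat?" + urlencode({"Text": "测试"})
print(url)  # http://127.0.0.1:8080/chat?Text=%E6%B5%8B%E8%AF%95

# With the backend running, fetch the reply (a 500 raises urllib.error.HTTPError):
# with urlopen(url) as resp:
#     print(resp.status, resp.read().decode("utf-8"))
```

Catching the HTTPError and printing its body can show the server-side message that a browser hides behind the bare 500 page.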
It shows 500 Internal Server Error. Is something on the server side misconfigured?
You need to look at the server-side error output; a successful startup alone is not enough. Does it work when you use chatgpt?
I looked into it a bit. Error 150001 is a gmask error caused by a tokenizer-rule update: updating the model weights, tokenizer_config.json, ice_text.model, and tokenization_chatglm.py from the repository to the latest versions removes the image token from the embedding and avoids the error. After that, change logger.warning_once to logger.warning in configuration_chatglm.py to get it running; otherwise it raises AttributeError: 'Logger' object has no attribute 'warning_once'. BTW, GLM seems very strict about the versions of its key libraries: in my tests, protobuf had to be 3.20.0 and transformers had to be 4.26.1 for replies and speech to be generated correctly. I am not sure whether this is specific to my setup; if not, it might be worth adjusting the version constraints in requirements.
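If editing configuration_chatglm.py by hand is inconvenient, the same warning_once incompatibility can be worked around from the launcher script instead; a sketch, assuming it runs before the remote ChatGLM code is imported (note this alias drops warning_once's deduplication and just warns every time):

```python
import logging

# Plain logging.Logger has no warning_once (newer transformers adds it to its
# own logger class); alias it to warning so ChatGLM's remote code can call it.
if not hasattr(logging.Logger, "warning_once"):
    logging.Logger.warning_once = logging.Logger.warning
```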
Thank you very much for your continued answers to my questions.
Traceback (most recent call last):
File "D:...\vits_with_chatgpt-gpt3\main.py", line 6, in <module>
from text import text_to_sequence
File "D:...\vits_with_chatgpt-gpt3\text\__init__.py", line 2, in <module>
from text import cleaners
File "D:...\vits_with_chatgpt-gpt3\text\cleaners.py", line 3, in <module>
from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
File "D:...\vits_with_chatgpt-gpt3\text\japanese.py", line 3, in <module>
import pyopenjtalk
File "D:...\vits_with_chatgpt-gpt3\venv\lib\site-packages\pyopenjtalk\__init__.py", line 20, in <module>
from .htsengine import HTSEngine
File "pyopenjtalk/htsengine.pyx", line 1, in init pyopenjtalk.htsengine
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
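This final ValueError is a numpy C-ABI mismatch: the prebuilt pyopenjtalk extension was compiled against a different numpy than the one now installed, so the usual remedy is to make the pair match (for example, reinstalling numpy at the version pinned in requirements.txt, or rebuilding pyopenjtalk against the installed numpy with pip install --force-reinstall --no-cache-dir pyopenjtalk). A small helper to see what is currently installed; `installed_versions` is a hypothetical name for this sketch:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_versions(*packages):
    """Map each distribution name to its installed version (None if absent)."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found

# Compare these against the versions pinned in requirements.txt:
print(installed_versions("numpy", "pyopenjtalk"))
```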