X-D-Lab / Sunsimiao

🌿孙思邈中文医疗大模型(Sunsimiao):提供安全、可靠、普惠的中文医疗大模型
Apache License 2.0

Message='BaiChuanTokenizer' object has no attribute 'sp_model' #9

Open toniedeng opened 5 months ago

toniedeng commented 5 months ago

    Message='BaiChuanTokenizer' object has no attribute 'sp_model'
    Source=C:\Users\Administrator.cache\huggingface\modules\transformers_modules\Sunsimiao\tokenization_baichuan.py
    StackTrace:
      File "C:\Users\Administrator.cache\huggingface\modules\transformers_modules\Sunsimiao\tokenization_baichuan.py", line 104, in vocab_size
        return self.sp_model.get_piece_size()
      File "C:\Users\Administrator.cache\huggingface\modules\transformers_modules\Sunsimiao\tokenization_baichuan.py", line 108, in get_vocab (Current frame)
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
      File "C:\Users\Administrator.cache\huggingface\modules\transformers_modules\Sunsimiao\tokenization_baichuan.py", line 74, in __init__
        super().__init__(
      File "C:\Users\Administrator.cache\modelscope\modelscope_modules\Sunsimiao\ms_wrapper.py", line 41, in __init__
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
      File "C:\Users\Administrator.cache\modelscope\modelscope_modules\Sunsimiao\ms_wrapper.py", line 20, in __init__
        model = SunsimiaoTextGeneration(model) if isinstance(model, str) else model
      File "C:\Users\Administrator\source\repos\Sunsimiao\scripts\inference_ms.py", line 4, in <module>
        pipe = pipeline(task=Tasks.text_generation,

How can I deal with this?

toniedeng commented 5 months ago

Fixed by downgrading to transformers==4.33.1.
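Newer transformers releases changed the base tokenizer's `__init__` so that it touches the vocabulary during construction, which is why only the 4.33.x line works with this remote tokenizer code. A small sketch of a version check (a hypothetical helper, not part of Sunsimiao or transformers) that encodes the "at or below 4.33" rule from this thread:

```python
# Hypothetical helper: returns True if a transformers version string
# (e.g. "4.33.1") is at or below the 4.33.x line reported to work here.
def transformers_version_ok(version: str, max_minor: tuple = (4, 33)) -> bool:
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) <= max_minor


if __name__ == "__main__":
    print(transformers_version_ok("4.33.1"))  # → True
    print(transformers_version_ok("4.34.0"))  # → False
```

You could call this against `transformers.__version__` at startup to fail fast with a clear message instead of the opaque `AttributeError`.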

jingnant commented 5 months ago

Try downgrading to transformers==4.33.3, or edit tokenization_baichuan.py so that the super().__init__() call runs last:

        # Assign the SentencePiece model and related attributes first...
        self.vocab_file = vocab_file
        self.add_bos_token = add_bos_token
        self.add_eos_token = add_eos_token
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        # ...then call the parent constructor, which in newer transformers
        # versions may invoke get_vocab()/vocab_size and therefore needs
        # self.sp_model to already exist.
        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            add_bos_token=add_bos_token,
            add_eos_token=add_eos_token,
            sp_model_kwargs=self.sp_model_kwargs,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )
        # Original placement of these assignments (after super().__init__()),
        # now moved above:
        # self.vocab_file = vocab_file
        # self.add_bos_token = add_bos_token
        # self.add_eos_token = add_eos_token
        # self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        # self.sp_model.Load(vocab_file)
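The reordering works because of Python's initialization order: the base tokenizer's `__init__` calls `get_vocab()`, which reads `self.sp_model`, so `sp_model` must be assigned before `super().__init__()` runs. A minimal sketch of the same failure and fix, using hypothetical stand-in classes rather than the real transformers API:

```python
class BaseTokenizer:
    """Stand-in for the newer transformers base class, whose __init__
    touches the vocabulary during construction."""

    def __init__(self):
        self.vocab = self.get_vocab()

    def get_vocab(self):
        raise NotImplementedError


class BrokenTokenizer(BaseTokenizer):
    """super().__init__() runs first, so get_vocab() sees no sp_model yet."""

    def __init__(self):
        super().__init__()  # calls get_vocab() -> AttributeError
        self.sp_model = ["<unk>", "<s>", "</s>"]

    def get_vocab(self):
        return {tok: i for i, tok in enumerate(self.sp_model)}


class FixedTokenizer(BaseTokenizer):
    """Assign sp_model first, then call super().__init__(), as in the patch."""

    def __init__(self):
        self.sp_model = ["<unk>", "<s>", "</s>"]
        super().__init__()  # get_vocab() can now succeed

    def get_vocab(self):
        return {tok: i for i, tok in enumerate(self.sp_model)}


try:
    BrokenTokenizer()
except AttributeError as exc:
    print("broken:", exc)

print("fixed:", FixedTokenizer().vocab)
```

This mirrors the traceback above: the `AttributeError` fires inside the base constructor, before the subclass body ever reaches its own attribute assignments.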