OpenMOSS / MOSS

An open-source tool-augmented conversational language model from Fudan University
https://txsun1997.github.io/blogs/moss.html
Apache License 2.0

NameError: name 'autotune' is not defined #129

Open nlp-enthusiast opened 1 year ago

nlp-enthusiast commented 1 year ago

The problem is still there even after installing autotune-0.0.3 with pip.

cnsky2016 commented 1 year ago

On my side I solved it by putting the custom_autotune.py file from the models directory into ~/.cache/huggingface/modules/transformers_modules/local/.
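
A minimal sketch of that workaround in Python; the source path (a local MOSS checkout) and the cache path are assumptions to adjust for your setup:

```python
import os
import shutil

# Assumed location of the file in a local clone of the MOSS repo.
src = "MOSS/models/custom_autotune.py"
# Cache directory that transformers populates when loading with trust_remote_code.
dst = os.path.expanduser(
    "~/.cache/huggingface/modules/transformers_modules/local/")

os.makedirs(dst, exist_ok=True)  # create the directory if it is missing
shutil.copy(src, dst)            # make `import custom_autotune` resolvable
```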

nlp-enthusiast commented 1 year ago

> On my side I solved it by putting the custom_autotune.py file from the models directory into ~/.cache/huggingface/modules/transformers_modules/local/.

That fixed the autotune problem, but now the following error appears:

    File "/home/.cache/huggingface/modules/transformers_modules/local/custom_autotune.py", line 93, in run
      self.cache[key] = builtins.min(timings, key=timings.get)
    TypeError: '<' not supported between instances of 'tuple' and 'float'
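
For context, the error happens because min() compares the dict's values through the key function, and those values mix tuples (successful benchmarks) with bare float('inf') (failed ones). A tiny repro with made-up values:

```python
import builtins

timings = {
    "cfg_a": float("inf"),        # failed benchmark: bare float
    "cfg_b": (0.12, 0.11, 0.13),  # successful benchmark: tuple of times
}

# Comparing a tuple against a float raises:
# TypeError: '<' not supported between instances of 'tuple' and 'float'
builtins.min(timings, key=timings.get)
```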

nlp-enthusiast commented 1 year ago

> On my side I solved it by putting the custom_autotune.py file from the models directory into ~/.cache/huggingface/modules/transformers_modules/local/.
>
> That fixed the autotune problem, but now the following error appears: TypeError: '<' not supported between instances of 'tuple' and 'float'

Solved it! Some of the values in timings are inf; removing the items that carry inf lets it run normally.
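
A sketch of that removal, assuming the timings dict from custom_autotune.py's run() and that a failed benchmark shows up as a bare float (inf) rather than a tuple:

```python
# Keep only configs whose benchmark succeeded (tuple of times),
# dropping the float('inf') entries that break the comparison in min().
timings = {config: t for config, t in timings.items()
           if not isinstance(t, float)}
self.cache[key] = builtins.min(timings, key=timings.get)
```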

ColorfulDick commented 1 year ago

How exactly do I need to change the code to get it running? I've hit this problem too.

nlp-enthusiast commented 1 year ago

> How exactly do I need to change the code to get it running? I've hit this problem too.

Change the failing code to:

    temp = {}
    for config in pruned_configs:
        # skip configs whose benchmark failed (a bare float means inf)
        if isinstance(self._bench(*args, config=config, **kwargs), float):
            continue
        temp[config] = self._bench(*args, config=config, **kwargs)
    bench_end = time.time()
    self.bench_time = bench_end - bench_start
    self.cache[key] = builtins.min(temp, key=temp.get)
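
(Note that this version benchmarks each config twice, once for the type check and once for storage; calling self._bench a single time and keeping the result in a local variable would behave the same and halve the tuning time.)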

xiami2019 commented 1 year ago

You can also try clearing your local Hugging Face cache and re-downloading from the Hugging Face Hub. The model files have been updated, and with the latest files, running the quantized model by following the steps in the README should work.
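
A minimal sketch of that, assuming the int4 checkpoint fnlp/moss-moon-003-sft-int4 (substitute whichever MOSS checkpoint you actually use):

```python
import os
import shutil
from huggingface_hub import snapshot_download

# Remove the cached remote-code modules so transformers re-fetches them.
shutil.rmtree(
    os.path.expanduser("~/.cache/huggingface/modules/transformers_modules"),
    ignore_errors=True)

# Re-download the updated model files from the Hub.
snapshot_download("fnlp/moss-moon-003-sft-int4")
```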

sun1092469590 commented 1 year ago

On my machine there is no local folder under ~/.cache/huggingface/modules/transformers_modules. Am I supposed to create local myself?

linonetwo commented 1 year ago

https://github.com/linonetwo/MOSS-DockerFile

I've fixed all of these problems in this Dockerfile; related notes: https://onetwo.ren/wiki/#调研GPU上运行的语言模型

66li commented 1 year ago

> How exactly do I need to change the code to get it running? I've hit this problem too.
>
> Change the failing code to: temp = {} for config in pruned_configs: …

I can't make sense of the indentation there; could you paste the actual source? Thanks a lot.

nlp-enthusiast commented 1 year ago

> I can't make sense of the indentation there; could you paste the actual source? Thanks a lot.

(screenshot of the modified custom_autotune.py)

zls130921 commented 1 year ago

Yes, I solved it this way too.

nezhazheng commented 1 year ago

> Change the failing code to: temp = {} for config in pruned_configs: …
>
> (screenshot of the modified custom_autotune.py)

@txsun1997 Could the official team fix this? A lot of people are running into it. Thanks!