guzy0324 / meta_learned_review_helpfulness_prediction


About lr_find #2

Closed APPLE-XMT closed 2 years ago

APPLE-XMT commented 2 years ago

Sorry to bother you again. While setting the hyperparameters I noticed that the lr_find stage needs a parameter named -lfa. What form of value should I give this parameter?

guzy0324 commented 2 years ago

A JSON string; the default is "{}".
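For context, a JSON-string command-line argument like this is typically decoded into keyword arguments with `json.loads`. A minimal sketch of that pattern; the `-lfa`/`--lr_find_args` option name comes from this thread, but the argparse wiring here is an assumption, not the repo's actual code:

```python
import argparse
import json

# Hypothetical wiring: accept the lr_find arguments as a JSON string.
# argparse applies the type converter to string defaults too, so the
# default "{}" becomes an empty dict.
parser = argparse.ArgumentParser()
parser.add_argument("-lfa", "--lr_find_args", type=json.loads, default="{}")

args = parser.parse_args(["-lfa", '{"num_training": 1000}'])
print(args.lr_find_args)  # {'num_training': 1000}
```

The decoded dict can then be splatted into the tuner call, e.g. `lr_find(model, **args.lr_find_args)`.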

guzy0324 commented 2 years ago
    elif args.stage == "lr_find":
        if args.redo or not exists(f"{LOGS}/lr_find_result.pkl"):
            seed(args.seed)
            model = application_scenario_module[args.application_scenario](optimizer_args={"lr": 0.001}, **args.__dict__)
            # num_sanity_val_steps defaults to 2; the sanity-check batches may be
            # all positive, which makes the auroc computation raise an error
            trainer = Trainer(gpus=1, default_root_dir=str(LOGS), num_sanity_val_steps=0)
            tuner = Tuner(trainer)
            lr_finder = tuner.lr_find(model, **args.lr_find_args)
            fig = lr_finder.plot(suggest=True)
            try:
                # https://stackoverflow.com/questions/7290370/store-and-reload-matplotlib-pyplot-object
                with open(f"{LOGS}/lr_find_result.pkl", "wb") as f:
                    dump(fig, f)
            except Exception:
                exc_remove(f"{LOGS}/lr_find_result.pkl")
        else:
            with open(f"{LOGS}/lr_find_result.pkl", "rb") as f:
                fig = load(f)
        show()
APPLE-XMT commented 2 years ago

Is this where I put the parameters that the lr_find() function needs, such as the number of training steps?

guzy0324 commented 2 years ago

Yes; see the docs: https://pytorch-lightning.readthedocs.io/en/1.5.10/api/pytorch_lightning.tuner.tuning.Tuner.html?highlight=tuner#pytorch_lightning.tuner.tuning.Tuner.lr_find

guzy0324 commented 2 years ago

For example, "{'num_training': 1000}".
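A side note on quoting: strict JSON requires double-quoted keys, so if the string is decoded with Python's `json` module, the single-quoted form above would need `ast.literal_eval` instead (which of the two the repo actually uses is an assumption I am not making here; this sketch just shows which parser accepts which form):

```python
import ast
import json

s_json = '{"num_training": 1000}'  # strict JSON: double quotes
s_py = "{'num_training': 1000}"    # Python-literal style: single quotes

print(json.loads(s_json))          # accepted by json.loads
print(ast.literal_eval(s_py))      # accepted by ast.literal_eval
# json.loads(s_py) would raise json.JSONDecodeError
```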

guzy0324 commented 2 years ago

The resulting plot is cached in the LOGS folder; on later runs without --redo, only the cached result is shown and the tuning is not rerun.

APPLE-XMT commented 2 years ago

For example, "{'num_training': 1000}".

Got it, thank you very much! Sorry for the trouble.

APPLE-XMT commented 2 years ago

The resulting plot is cached in the LOGS folder; on later runs without --redo, only the cached result is shown and the tuning is not rerun.

I passed -r 1 when running, but the program errored.

guzy0324 commented 2 years ago

Just -r by itself.
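The error with `-r 1` is consistent with `--redo` being a boolean flag that takes no value, e.g. argparse's `action="store_true"`. A minimal sketch of that pattern (the repo's exact option wiring is an assumption):

```python
import argparse

parser = argparse.ArgumentParser()
# A store_true flag is present-or-absent; it accepts no value.
parser.add_argument("-r", "--redo", action="store_true")

print(parser.parse_args(["-r"]).redo)  # True
print(parser.parse_args([]).redo)      # False
# parser.parse_args(["-r", "1"]) would fail: "unrecognized arguments: 1"
```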

APPLE-XMT commented 2 years ago

Just -r by itself.

OK, thanks!