facebookresearch / mmf

A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
https://mmf.sh/

pytorch-lightning@9b011606f in requirements.txt is 404 #1229

Open Serendipity-zx opened 2 years ago

Serendipity-zx commented 2 years ago

I am installing mmf on a V100 remote server. The network connection on the server is poor, so `git clone` frequently disconnects. As a workaround, I downloaded the pytorch-lightning package offline from https://github.com/PyTorchLightning/pytorch-lightning, uploaded it to the server, and installed it from the .zip with `pip install`. But after installing pytorch-lightning, running `pytest` reports errors like the ones below, which seem to be caused by a mismatch with the installed pytorch-lightning version. The pinned link https://github.com/PyTorchLightning/pytorch-lightning@9b011606f in requirements.txt returns a 404, so I cannot open it. How can I correctly install the pytorch-lightning version that matches mmf? Please help me, thank you!
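For reference, the commit pinned in requirements.txt can usually be fetched without `git clone` by downloading GitHub's source archive for that exact commit. This is a sketch, assuming GitHub's standard `/archive/<ref>.zip` endpoint; the local filename is illustrative:

```shell
# Build the archive URL for the exact commit pinned in requirements.txt,
# so the downloaded zip matches what the git pin would have checked out.
commit=9b011606f
url="https://github.com/PyTorchLightning/pytorch-lightning/archive/${commit}.zip"
echo "$url"

# On a machine with a working connection:
#   wget -O "pytorch-lightning-${commit}.zip" "$url"
# Then upload the zip to the server and install it there:
#   pip install "pytorch-lightning-${commit}.zip"
```

The key point is that the zip must be built from the pinned commit, not from the repository's default branch, or the installed version will not match what mmf expects.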

Serendipity-zx commented 2 years ago

____ TestLightningTrainerLRSchedule.test_lr_schedule_compared_to_mmf_is_same ____

self =

def test_lr_schedule_compared_to_mmf_is_same(self):
    config = get_config_with_defaults(
        {"training": {"max_updates": 8, "max_epochs": None, "lr_scheduler": True}}
    )

    mmf_trainer = get_mmf_trainer(config=config)
    mmf_trainer.lr_scheduler_callback = LRSchedulerCallback(config, mmf_trainer)
    mmf_trainer.callbacks.append(mmf_trainer.lr_scheduler_callback)
    mmf_trainer.on_update_end = mmf_trainer.lr_scheduler_callback.on_update_end
    mmf_trainer.evaluation_loop = MagicMock(return_value=(None, None))
    mmf_trainer.training_loop()

    with patch("mmf.trainers.lightning_trainer.get_mmf_env", return_value=""):
        config = self._get_config(max_steps=8, lr_scheduler=True)
>       trainer = get_lightning_trainer(config=config)

tests/trainers/lightning/test_lr_schedule.py:45:


tests/trainers/test_utils.py:121: in get_lightning_trainer
    prepare_lightning_trainer(trainer)
tests/trainers/test_utils.py:131: in prepare_lightning_trainer
    trainer._load_trainer()
mmf/trainers/lightning_trainer.py:61: in _load_trainer
    **lightning_params_dict,


self = <pytorch_lightning.trainer.trainer.Trainer object at 0x7fa5a82eff50>
args = ()
kwargs = {'accumulate_grad_batches': 1, 'benchmark': False, 'callbacks': [], 'default_root_dir': '', ...}
cls = <class 'pytorch_lightning.trainer.trainer.Trainer'>
env_variables = {}

@wraps(fn)
def insert_env_defaults(self: Any, *args: Any, **kwargs: Any) -> Any:
    cls = self.__class__  # get the class
    if args:  # in case any args passed move them to kwargs
        # parse only the argument names
        cls_arg_names = [arg[0] for arg in get_init_arguments_and_types(cls)]
        # convert args to kwargs
        kwargs.update(dict(zip(cls_arg_names, args)))
    env_variables = vars(parse_env_variables(cls))
    # update the kwargs by env variables
    kwargs = dict(list(env_variables.items()) + list(kwargs.items()))

    # all args were already moved to kwargs
>   return fn(self, **kwargs)

E TypeError: __init__() got an unexpected keyword argument 'progress_bar_refresh_rate'

/opt/conda/envs/mmf/lib/python3.7/site-packages/pytorch_lightning/utilities/argparse.py:339: TypeError

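The TypeError suggests the installed pytorch-lightning is newer than the pin: `progress_bar_refresh_rate` was deprecated and later removed from `Trainer.__init__`, so the `lightning_params_dict` that mmf passes no longer matches the constructor's signature. One quick way to check whether the installed Trainer still accepts a given keyword (a sketch; `accepts_kwarg` is a helper introduced here, not part of mmf or pytorch-lightning):

```python
import inspect


def accepts_kwarg(cls, name: str) -> bool:
    """Return True if cls.__init__ explicitly takes `name`, or takes **kwargs."""
    params = inspect.signature(cls.__init__).parameters
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )


# Against the installed package (assuming pytorch_lightning imports cleanly):
#   from pytorch_lightning import Trainer
#   print(accepts_kwarg(Trainer, "progress_bar_refresh_rate"))
# A False result confirms the installed version is too new for mmf's pin.
```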