JiahaoYao opened this issue 2 years ago
../../../../home/codespace/.conda/envs/ci/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py:5
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py:5: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
tensorboard.__version__
../../../../home/codespace/.conda/envs/ci/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py:6
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
) < LooseVersion("1.15"):
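The first two entries come from `torch.utils.tensorboard` comparing versions with distutils' `LooseVersion`, so the fix has to land inside torch rather than ray_lightning. For reference, a minimal sketch of the `packaging.version` replacement the warning points to (the check below just mirrors torch's guard and is illustrative, not copied from the repo):

```python
# Sketch of the packaging.version replacement the DeprecationWarning asks for;
# the version guard is illustrative, not taken from torch or ray_lightning.
from packaging import version
import tensorboard

if version.parse(tensorboard.__version__) < version.parse("1.15"):
    raise ImportError("TensorBoard >= 1.15 is required")
```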
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_checkpoint
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/torch/distributed/_sharded_tensor/__init__.py:10: DeprecationWarning: torch.distributed._sharded_tensor will be deprecated, use torch.distributed._shard.sharded_tensor instead
DeprecationWarning
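This one is raised from inside torch itself, so the test suite can only filter it rather than fix it. A hedged sketch of one way to do that with pytest's `filterwarnings` marker (the test body is elided; the filter could equally live in the pytest config):

```python
# Sketch only: silence the deprecation that torch emits internally, since
# ray_lightning cannot change torch's own import. Any of the affected tests
# could carry this marker instead of the one shown here.
import pytest

@pytest.mark.filterwarnings(
    "ignore:torch.distributed._sharded_tensor will be deprecated:DeprecationWarning"
)
def test_ddp_sharded_plugin_checkpoint():
    ...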
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_checkpoint
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_finetune
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_resume_from_checkpoint
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_test
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_resume_from_checkpoint_downsize
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/pytorch_lightning/loops/utilities.py:94: PossibleUserWarning: `max_epochs` was not set. Setting it to 1000 epochs. To train without an epoch limit, set `max_epochs=-1`.
category=PossibleUserWarning,
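A minimal sketch of the explicit setting this warning asks for (the value is a placeholder, not what the tests actually use):

```python
# Sketch: set max_epochs explicitly so Lightning does not fall back to 1000;
# pass max_epochs=-1 to deliberately train without an epoch limit.
from pytorch_lightning import Trainer

trainer = Trainer(max_epochs=1)  # placeholder value
```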
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_finetune
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:245: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
category=PossibleUserWarning,
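Both this warning and the matching `val_dataloader 0` one further down have the same remedy: give the `DataLoader` a few worker processes. A hedged sketch (dataset names are placeholders; 4 is just the CPU count the warning suggests):

```python
# Sketch: spawn worker processes for data loading so it is less of a bottleneck.
# `train_dataset` / `val_dataset` stand in for whatever the tests build.
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=32, num_workers=4)
val_loader = DataLoader(val_dataset, batch_size=32, num_workers=4)
```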
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_finetune
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1937: PossibleUserWarning: The number of training batches (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
category=PossibleUserWarning,
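With a single training batch per epoch, the default logging interval of 50 steps never fires; a minimal sketch of the lower value the warning suggests:

```python
# Sketch: log every step, since the test epochs contain only one batch.
from pytorch_lightning import Trainer

trainer = Trainer(log_every_n_steps=1)
```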
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_finetune
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:245: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
category=PossibleUserWarning,
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_resume_from_checkpoint
ray_lightning/tests/test_ddp_sharded.py::test_ddp_sharded_plugin_resume_from_checkpoint_downsize
/home/codespace/.conda/envs/ci/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py:52: LightningDeprecationWarning: Setting `Trainer(resume_from_checkpoint=)` is deprecated in v1.5 and will be removed in v1.7. Please pass `Trainer.fit(ckpt_path=)` directly instead.
"Setting `Trainer(resume_from_checkpoint=)` is deprecated in v1.5 and"
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html