-
## 🚀 Feature
Expose dataloaders to the `LightningModule`'s `setup` method.
### Motivation
This would allow for a truly dynamic setup, meaning that some layers' sizes can be set up correctly than…
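A minimal sketch of what this could enable, assuming a hypothetical `setup` signature that also receives the train dataloader (today the hook only receives `stage`):

```python
import pytorch_lightning as pl
from torch import nn
from torch.utils.data import DataLoader


class DynamicModel(pl.LightningModule):
    # Hypothetical signature: receiving the dataloader here is exactly
    # the feature being requested, not the current API.
    def setup(self, stage: str, train_dataloader: DataLoader = None):
        if stage == "fit" and train_dataloader is not None:
            # Peek at a single batch to infer the input width, then
            # size the first layer accordingly.
            features, _ = next(iter(train_dataloader))
            self.encoder = nn.Linear(features.shape[-1], 128)
```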
-
JIRA-it .... closest thing we got to a "reset cluster" without shutting the cluster down.
Cliff
On 3/11/2015 10:00 AM, Tom Kraljevic wrote:
The right thing to do here is to push the logic performe…
-
## Proposed refactor
We've dropped support for Python 3.6, which means we can use the new annotations format.
### Motivation
Shiny new things are nice.
### Pitch
Adopt [PEP 585](https://…
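Concretely, the migration would look like this (illustrative function; the future import is PEP 563, which postpones annotation evaluation so the builtin generics parse on Python 3.7+ even though PEP 585 only ships at runtime in 3.9):

```python
from __future__ import annotations  # must be the first statement in the module

# Before (Python 3.6 compatible):
#   from typing import Dict, List, Optional
#   def count(names: List[str], table: Optional[Dict[str, int]]) -> List[int]: ...

# After: PEP 585 builtin generics plus a PEP 604 union. With the future
# import above, annotations stay as unevaluated strings at runtime.
def count(names: list[str], table: dict[str, int] | None) -> list[int]:
    return [table.get(name, 0) if table else 0 for name in names]
```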
-
## 🚀 Feature
### Motivation
This is a request from a user on [Slack](https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1661841050909629).
In their use case, they need to transform th…
-
## 🚀 Feature
Integrate https://github.com/pytorch/torchsnapshot
### Motivation
The library is designed with composition in mind and is very modular.
The distributed training benchmarks look ve…
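For reference, a snapshot round trip with torchsnapshot looks roughly like this (adapted from the project README; the exact API may have evolved):

```python
import torch
from torchsnapshot import Snapshot

model = torch.nn.Linear(8, 2)
optim = torch.optim.SGD(model.parameters(), lr=0.1)

# Everything to persist goes into one flat app_state dict; torchsnapshot
# handles (sharded) tensors, optimizers, and custom stateful objects.
app_state = {"model": model, "optim": optim}

# Take a snapshot (works in both single-process and distributed runs).
snapshot = Snapshot.take(path="/tmp/my_snapshot", app_state=app_state)

# Restore in place into the same app_state.
snapshot.restore(app_state=app_state)
```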
-
## Proposed refactor
This was proposed by @tchaton on a live stream. `prepare_data_per_node` can be identified by dumping a file on rank 0 and checking whether all other ranks can see that file or…
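A sketch of that detection, assuming an already-initialized `torch.distributed` process group on a CPU-capable backend such as gloo (the helper name is made up for illustration):

```python
import os

import torch
import torch.distributed as dist


def nodes_share_filesystem(marker: str = ".pl_shared_fs_check") -> bool:
    # Rank 0 drops a marker file where `prepare_data` would write.
    if dist.get_rank() == 0:
        open(marker, "w").close()
    dist.barrier()  # ensure the file exists before anyone looks for it

    # The filesystem counts as shared only if *every* rank sees the file.
    visible = torch.tensor(int(os.path.exists(marker)))
    dist.all_reduce(visible, op=dist.ReduceOp.MIN)

    dist.barrier()
    if dist.get_rank() == 0:
        os.remove(marker)
    return bool(visible.item())
```

If this returns True, data preparation only needs to run once globally; otherwise every node has to run it, which is what `prepare_data_per_node` toggles today.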
-
## Proposed refactoring or deprecation
Instead of disabling shuffling / replacing `RandomSampler` with `SequentialSampler` in the train dataloader, replace the train dataset with a fixed subset of it…
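A minimal sketch of the alternative, with an illustrative helper name:

```python
from torch.utils.data import DataLoader, Subset


def overfit_dataloader(dataset, num_samples: int, **loader_kwargs) -> DataLoader:
    # Pin down *which* samples are used instead of *how* they are ordered.
    fixed = Subset(dataset, range(min(num_samples, len(dataset))))
    # Shuffling is now harmless: the sample set is constant across epochs,
    # only the in-epoch order changes, so batch statistics stay realistic.
    return DataLoader(fixed, shuffle=True, **loader_kwargs)
```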
-
The [R-Hero experiment](http://arxiv.org/pdf/2012.06824) has uncovered a catastrophic forgetting problem in patch generation.
We will study how to overcome this problem.
(spinoff…
-
To make this more usable for a wider range of people/experiments, it would be nice to
* Put as much logic as possible into small library functions that are then simply used by the command-line tools
* Ha…