CUNY-CL / yoyodyne

Small-vocabulary sequence-to-sequence generation with optional feature conditioning
Apache License 2.0

Torch and Lightning 2.0.0 #60

Open kylebgorman opened 1 year ago

kylebgorman commented 1 year ago

The library does not work with Lightning >= 2.0.0 (and one suspects that Torch itself is also an issue). The first issue I encounter when running yoyodyne-train with no arguments is related to a change in how Lightning handles command-line arguments; I suspect there are at least a few more.

So that the library is not broken at head---which I consider unacceptable---I have pinned as follows:

pytorch-lightning>=1.7.0,<2.0.0
torch>=1.11.0,<2.0.0

What we need to do is migrate to 2.0.0, fixing Lightning (and Torch, if any) bugs until things work, and then re-pin these two dependencies to >=2.0.0. I have initially assigned myself, but I would welcome help.
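
Once that is done, the re-pin would presumably look something like:

pytorch-lightning>=2.0.0
torch>=2.0.0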

kylebgorman commented 1 year ago

@Adamits I have assigned this to you since you wanna take a stab.

There is a new interface for the CLI stuff (probably an improvement over the somewhat ad hoc one they gave us):

https://lightning.ai/docs/pytorch/stable/cli/lightning_cli.html https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_expert.html

Looks like an improvement, but some work to make things conformant.

kylebgorman commented 1 year ago

I have read these docs and have the basic lay of the land now. The migration should go in two steps:

  1. First, we should implement subclasses of LightningDataModule (this is available in Lightning pre-2.0 too). The train_dataloader method should return the training dataloader (naturally), and similarly with val_dataloader and test_dataloader. (Even though we don't need access to train, dev, and test in any one process, we can probably set it up so we can use the same module for training and inference.) Most of the logic of get_datasets and get_loaders would be moved into this too, so this is mostly just cleanup. (A rough sketch follows this list.)
  2. Then, we have to move arg parsing to LightningCLI and migrate Torch and Lightning to >=2.0. This is a bit magical (it introspects the dataset and models to find their flags), but our house is mostly in order, so it might just work. At this point there are probably one or two low-level Torch issues we'll hit as well, and we can handle those as they arise...
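
For concreteness, here is a rough sketch of what step 1 might look like. The class name, constructor arguments, and the _make_dataset placeholder are all made up for illustration; the real module would absorb the existing get_datasets/get_loaders logic.

```python
from torch.utils.data import DataLoader, Dataset

import pytorch_lightning as pl


class WordlistDataModule(pl.LightningDataModule):
    """Hypothetical data module wrapping the existing dataset/loader logic."""

    def __init__(self, train_path: str, val_path: str, test_path=None, batch_size: int = 32):
        super().__init__()
        self.train_path = train_path
        self.val_path = val_path
        self.test_path = test_path
        self.batch_size = batch_size

    def setup(self, stage=None):
        # In the real module this would reuse the get_datasets logic;
        # _make_dataset is just a placeholder for it here.
        self.train_set = self._make_dataset(self.train_path)
        self.val_set = self._make_dataset(self.val_path)
        if self.test_path is not None:
            self.test_set = self._make_dataset(self.test_path)

    def _make_dataset(self, path: str) -> Dataset:
        raise NotImplementedError  # stand-in for get_datasets

    def train_dataloader(self) -> DataLoader:
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self) -> DataLoader:
        return DataLoader(self.val_set, batch_size=self.batch_size)

    def test_dataloader(self) -> DataLoader:
        return DataLoader(self.test_set, batch_size=self.batch_size)
```

The same instance can then be handed to trainer.fit and trainer.predict, which is what makes sharing it between training and inference straightforward.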
Adamits commented 1 year ago

Sounds good, I think I understand. Though I feel a bit annoyed that we have to use LightningCLI instead of standard Python libraries... I will try to prioritize it this week.

kylebgorman commented 1 year ago

> Sounds good, I think I understand. Though I feel a bit annoyed that we have to use LightningCLI instead of standard Python libraries... I will try to prioritize it this week.

I'm working on the data module part of things in the next few days---haven't thought about the model side yet.

FWIW, LightningCLI is basically a tool that introspects the model and data module to populate the argparse parser, which should help us enormously with the kind of bug we saw where label smoothing wasn't being passed around...
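
For concreteness, a toy sketch of that introspection (DemoModel and its arguments are invented for illustration and are not yoyodyne code): every __init__ argument of the model (and of the data module, if one is passed) automatically becomes a --model.* or --data.* flag, alongside the usual --trainer.* flags, so a parameter like label_smoothing can't silently fall out of the plumbing.

```python
import pytorch_lightning as pl
import torch
from pytorch_lightning.cli import LightningCLI


class DemoModel(pl.LightningModule):
    """Toy model; only here to show how LightningCLI exposes hyperparameters."""

    def __init__(self, hidden_size: int = 128, label_smoothing: float = 0.0):
        super().__init__()
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(hidden_size, hidden_size)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(
            self.layer(x), y, label_smoothing=self.hparams.label_smoothing
        )

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


if __name__ == "__main__":
    # Exposes flags introspected from DemoModel.__init__, e.g.:
    #   python demo_cli.py fit --model.label_smoothing 0.1 --trainer.max_epochs 5
    LightningCLI(DemoModel)
```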

kylebgorman commented 5 months ago

@Adamits believes it is possible to migrate to Torch 2 while staying on the 1.x branch of PTL. Being able to complete this migration in two steps (first to Torch 2, then later to PTL 2, focusing on the CLI changes) would make this much easier.

kylebgorman commented 5 months ago

Draft of the migration to PyTorch 2 is in #173. Testing now---focusing on the transformer, since we can hopefully use the causal mask flag there.
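
(Presumably "the causal mask flag" refers to the is_causal argument that PyTorch 2 adds to its attention APIs; a minimal illustration, with made-up shapes:)

```python
import torch
import torch.nn.functional as F

# PyTorch 2's fused attention: is_causal=True applies the causal
# (lower-triangular) mask internally, so no mask tensor has to be built.
batch, heads, seq, dim = 2, 4, 10, 16
q = torch.randn(batch, heads, seq, dim)
k = torch.randn(batch, heads, seq, dim)
v = torch.randn(batch, heads, seq, dim)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# The nn.Transformer layers expose similar is_causal flags in PyTorch 2,
# which is presumably what would be used in the transformer decoder here.
```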

kylebgorman commented 1 month ago

I have been using LightningCLI in another project and I have to say, it's a total vibe shift and it's way better than having big Bash scripts (without any nice syntax support), so I think we should prioritize this now.