Lightning-AI / litgpt

Pretrain, finetune, deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit, LoRA, and more.
https://lightning.ai
Apache License 2.0

Add package cli scripts #996

Closed: carmocca closed this 3 months ago

carmocca commented 3 months ago

litgpt pretrain ... litgpt finetune lora ... litgpt chat etc.

carmocca commented 3 months ago

Along the lines of https://github.com/Lightning-AI/lit-gpt/issues/982#issuecomment-1980219442, we need to decide what the CLI call is for every script and how the scripts are structured in directories.

carmocca commented 3 months ago

For instance, the top post says litgpt finetune lora. I have a PoC in #1026 but it doesn't support subcommands inside subcommands (lora inside finetune).

It's not clear to me right now whether we will have a single finetune.py script with an --algorithm lora|full|adapter flag, or separate scripts where we customize the CLI to support subcommands inside subcommands. Another option is to expose litgpt finetune_lora, litgpt finetune_full ...

Similar questions apply to generation, especially if we need to choose between base|tp|sequentially (related: #1016).
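
For illustration only, a minimal jsonargparse sketch of the first option, a single finetune entry point with an --algorithm flag; the flag name and choices come from the paragraph above, not from actual litgpt code:

from jsonargparse import ArgumentParser

parser = ArgumentParser()
subcommands = parser.add_subcommands()

# one `finetune` subcommand; the algorithm is just a flag, no nested subcommands needed
finetune = ArgumentParser()
finetune.add_argument("--algorithm", choices=["lora", "full", "adapter"], default="lora")
subcommands.add_subcommand("finetune", finetune)

# invoked as: litgpt finetune --algorithm lora
args = parser.parse_args()
print(args)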

rasbt commented 3 months ago

I have a slight preference for litgpt finetune_lora, litgpt finetune_full, so that we are not chaining multiple subcommands and it stays more similar to litgpt pretrain and litgpt chat.

But one concern, hypothetically speaking: suppose we were to add RLHF with PPO & REINFORCE, and DPO, later. How would that look?

litgpt dpo_lora and litgpt dpo_full? Or litgpt finetune_dpo_lora and litgpt finetune_dpo_full? That gets very convoluted.

Or the following is also super complicated:

I will have to think about it more ...

lantiga commented 3 months ago

I think lora, dpo, etc. need to be different flags (--method lora and so on) on the finetune subcommand.

carmocca commented 3 months ago

There's a jsonargparse bug blocking me from implementing it the way I thought: https://github.com/omni-us/jsonargparse/issues/467. If we cannot find a fix or a workaround in time I would have to try click or something else.
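
For reference, click supports nested subcommands natively through groups; a minimal sketch of what litgpt finetune lora could look like with it (the --lr option is illustrative, not an actual litgpt flag):

import click

@click.group()
def cli():
    """litgpt entry point (sketch)."""

@cli.group()
def finetune():
    """Finetune a model."""

@finetune.command()
@click.option("--lr", type=float, default=1e-4)  # illustrative option
def lora(lr):
    """Finetune with LoRA."""
    click.echo(f"finetune lora with lr={lr}")

if __name__ == "__main__":
    cli()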

carmocca commented 3 months ago

Mauricio replied, and it seems like --method lora is not really feasible because you would need to do argument parsing in two stages: once to get the selected method, and once to add the arguments of the finetune/lora.py script.
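
Roughly, the two-stage parsing that a --method flag would force looks like this (a plain-argparse sketch with made-up option names, just to show the shape of the problem):

import argparse

# stage 1: parse only --method to learn which script's arguments should be exposed
stage1 = argparse.ArgumentParser(add_help=False)
stage1.add_argument("--method", choices=["lora", "full", "adapter"], default="lora")
known, remaining = stage1.parse_known_args()

# stage 2: build the real parser with the chosen method's options and re-parse
stage2 = argparse.ArgumentParser()
if known.method == "lora":
    stage2.add_argument("--lora_r", type=int, default=8)  # hypothetical lora-only option
stage2.add_argument("--lr", type=float, default=1e-4)  # hypothetical shared option
args = stage2.parse_args(remaining)
print(known.method, args)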

Instead, I will do subcommands inside subcommands, which will require rolling our own custom parser. The gist is:

from jsonargparse import ArgumentParser

# parser for `litgpt finetune lora`
finetune_lora = ArgumentParser()
finetune_lora.add_argument("--lr")

# parser for `litgpt finetune`; its own subcommands are attached below
finetune = ArgumentParser()

# parser for `litgpt generate`
generate = ArgumentParser()
generate.add_argument("--num_tokens")

# top-level parser with the first level of subcommands
parser = ArgumentParser()
subcommands = parser.add_subcommands()
subcommands.add_subcommand("finetune", finetune)
subcommands.add_subcommand("generate", generate)

# second level of subcommands, nested under `finetune`
finetune_subcommands = finetune.add_subcommands()
finetune_subcommands.add_subcommand("lora", finetune_lora)

args = parser.parse_args()
print(args)
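
With this layout, an invocation like python cli.py finetune lora --lr 0.1 should resolve to the nested lora parser, with each level contributing its own sub-namespace in the printed args (assuming nested subcommands parse the same way single-level ones do).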

mauvilsa commented 3 months ago

I forgot to mention in my initial response that it recently became possible to use jsonargparse.CLI with custom named subcommands. Look at the class Raffle example in the docs. It has some limitations, but it might fit the use case.
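
For context, that docs example is roughly the following (paraphrased; see the docs for the exact version): a dict maps custom subcommand names to callables, and CLI builds the subcommands from it.

from jsonargparse import CLI

class Raffle:
    def __init__(self, prize: int):
        self.prize = prize

    def __call__(self, name: str) -> str:
        return f"{name} won {self.prize}€!"

components = {
    "weekday": {
        "tier1": Raffle(prize=100),
        "tier2": Raffle(prize=50),
    },
    "weekend": {
        "tier1": Raffle(prize=300),
        "tier2": Raffle(prize=75),
    },
}

if __name__ == "__main__":
    # e.g. `python raffle.py weekday tier1 Lucky` would print something like "Lucky won 100€!"
    print(CLI(components))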

carmocca commented 3 months ago

Very interesting @mauvilsa. How do you set help messages in the subcommands? This is what I rolled out and what I would want to replace: https://github.com/Lightning-AI/litgpt/blob/wip/litgpt/__main__.py#L43-L127

mauvilsa commented 3 months ago

How do you set help messages in the subcommands?

Currently not possible. But I have thought that this could be added like:

components = {
    "weekday": {
        "_help": "Help for weekday",
        "tier1": Raffle(prize=100),
        "tier2": Raffle(prize=50),
    },
    "weekend": {
        "_help": "Help for weekend",
        "tier1": Raffle(prize=300),
        "tier2": Raffle(prize=75),
    },
}

If it is this simple, then implementing it wouldn't be difficult.

mauvilsa commented 2 months ago

The _help option in jsonargparse.CLI is now implemented in https://github.com/omni-us/jsonargparse/pull/485.