bigscience-workshop / Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

[Tensorboard] Log text prediction in evaluation #163

Open thomasw21 opened 3 years ago

thomasw21 commented 3 years ago

A very useful tool for understanding model performance beyond the loss: actually showing what the predictions are.

It'd be very useful to be able to "see" the output of the model during evaluation in text format. These should be logged to TensorBoard. TensorBoard's text plugin renders markdown, so we could put the predictions in bold.

Maybe we can print out only the first batch, as that should already give a good number of examples.
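A minimal sketch of what the logging helper could look like. The function below just formats a prompt/prediction pair as markdown with the prediction in bold; the commented-out `writer.add_text` call shows a hypothetical integration point in the evaluation loop (the tag name and variables there are illustrative, not from the codebase):

```python
def format_prediction_markdown(prompt: str, prediction: str) -> str:
    """Render a prompt/prediction pair as markdown, with the model's
    continuation in bold so it stands out in TensorBoard's text tab."""
    # TensorBoard's text plugin renders markdown, so ** gives bold.
    return f"{prompt} **{prediction}**"

# Hypothetical use inside the eval loop, with a torch SummaryWriter:
#   writer.add_text(f"eval/sample_{i}",
#                   format_prediction_markdown(prompt, prediction),
#                   global_step)

if __name__ == "__main__":
    print(format_prediction_markdown("The capital of France is", "Paris."))
```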

thomasw21 commented 3 years ago

@TevenLeScao also suggested that we make inference work in Meg-DS. Very simple greedy search. The motivation is that teacher forcing won't tell us much about the model (it's very similar to validation loss), whereas greedy search will show what the model actually infers.

Personally I don't agree with the statement that teacher forcing won't tell us much, but I do agree that running actual inference in Meg-DS will probably allow us to notice bugs very quickly.
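For reference, the "very simple greedy search" mentioned above can be sketched framework-agnostically: at each step, take the argmax of the model's next-token logits and append it. The `logits_fn` and the toy model below are stand-ins, not the Meg-DS API:

```python
from typing import Callable, List

def greedy_decode(
    logits_fn: Callable[[List[int]], List[float]],
    prompt: List[int],
    max_new_tokens: int,
    eos_id: int,
) -> List[int]:
    """Greedy search: repeatedly append the argmax token from the
    model's next-token distribution, stopping at EOS or the length cap."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = logits_fn(tokens)          # next-token scores over the vocab
        next_id = max(range(len(logits)), key=logits.__getitem__)
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# Toy "model" over a 4-token vocab: prefers token (context length % 4).
toy = lambda ctx: [1.0 if i == len(ctx) % 4 else 0.0 for i in range(4)]
print(greedy_decode(toy, [0], max_new_tokens=5, eos_id=3))  # [0, 1, 2, 3]
```

Unlike teacher forcing, each step here conditions on the model's own previous outputs, which is exactly what makes decoding bugs visible.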

KMFODA commented 2 years ago

Hey @thomasw21. Is this still needed? If so I'd love to take it on.

thomasw21 commented 2 years ago

Hey! We have finished training BLOOM, so the tensorboard integration might not be required anymore. However, I think having a generation engine in Meg-DS would be greatly appreciated, as we currently rely on our transformers-converted checkpoint to generate.

KMFODA commented 2 years ago

I see. I'd like to help with that then. Where would be the best place for that generation engine?

mayank31398 commented 2 years ago

@KMFODA @thomasw21, https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/328 already added the ability to benchmark the system, an interactive CLI, and a generation server. Testing a few things. Will try to get this merged this week.

thomasw21 commented 2 years ago

> Already added the ability to benchmark the system, an interactive CLI, and a generation server.

IMO this issue is different: we want an inference mechanism in Meg-DS itself, without having to convert to transformers. The context is that we were training in Meg-DS and had no way to "test" the model until we built the transformers skeleton, converted the weights, and then leveraged transformers' inference mechanisms.

> Where would be the best place for having that generation engine?

I'm not sure what you're asking; probably in this repo?

KMFODA commented 2 years ago

> Where would be the best place for having that generation engine? I'm not sure what you're asking; probably in this repo?

Sorry, I'm new to this repo. I meant to ask: where in the repo itself should this generation engine live?

mayank31398 commented 2 years ago

Hmm, @thomasw21, the PR I referred to above uses both HF accelerate and DS-inference, depending on what we want to infer with. But it does require the transformers version of BLOOM.

mayank31398 commented 2 years ago

@KMFODA currently I am planning to create a standalone library. For now, I am adding it to this repo itself.

thomasw21 commented 2 years ago

> Sorry, I'm new to this repo. I meant to ask: where in the repo itself should this generation engine live?

I mean, you could probably create a megatron/inference folder.

mayank31398 commented 2 years ago

@thomasw21, I am not sure how this differs from the PR I pointed to above. Can you explain?

thomasw21 commented 2 years ago

If you don't have the transformers skeleton (i.e. the modeling code), how would one be able to use transformers or DS-inference?

mayank31398 commented 2 years ago

Oh, I think I understand the issue now. Maybe something like loading from the universal checkpoints and running inference?

thomasw21 commented 2 years ago

@mayank31398 yup! That is essentially what this issue is about.