thomasw21 opened 3 years ago
@TevenLeScao also suggested that we make inference work in Meg-DS. Very simple greedy search. The motivation is that teacher forcing won't tell us much about the model (it's very similar to validation loss), whereas greedy search will show how the model actually infers.
Personally I don't agree with the statement that teacher forcing won't tell us much, but I do agree that running actual inference in Meg-DS will probably allow us to notice bugs very quickly.
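The greedy search mentioned above can be sketched in a few lines: at each step, take the argmax over the last position's logits and append it to the sequence. This is a minimal illustration only; the `model` callable's interface and the function name are assumptions, not Meg-DS's actual API.

```python
import torch

def greedy_generate(model, input_ids, max_new_tokens=32, eos_token_id=None):
    """Minimal greedy search (illustrative, not Meg-DS's real API).

    `model` is assumed to be a callable returning logits of shape
    [batch, seq_len, vocab_size] for a [batch, seq_len] tensor of token ids.
    """
    model.eval()
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(input_ids)                       # [batch, seq, vocab]
            # Greedy step: pick the highest-probability next token.
            next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
            input_ids = torch.cat([input_ids, next_token], dim=-1)
            if eos_token_id is not None and (next_token == eos_token_id).all():
                break
    return input_ids
```

No KV caching or sampling here; the point is just to decode something readable from a checkpoint without leaving Meg-DS.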
Hey @thomasw21. Is this still needed? If so I'd love to take it on.
Hey! We have finished training BLOOM, so the tensorboard integration might not be required anymore. However, I think having a generation engine in Meg-DS would be greatly appreciated, as we currently rely on our transformers-converted checkpoint to generate.
I see, I'd like to help with that then. Where would be the best place for having that generate engine?
@KMFODA @thomasw21, https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/328 already added the ability to benchmark the system, an interactive CLI, and a generation server. Testing a few things. Will try to get this merged by this week.
> Already added the ability to benchmark system, interactive cli, and a generation server.

IMO this issue is different: we want an inference mechanism in Meg-DS itself, without having to convert to transformers. The context was that we were training in Meg-DS and had no way to "test" the model until we built the transformers skeleton, converted the weights, and then leveraged transformers' inference mechanisms.
> Where would be the best place for having that generate engine?

I'm not sure what you're asking, probably in this repo?
Sorry. I'm new to this repo. I meant to ask where in the repo itself should this generate engine live?
Hmm, @thomasw21, so the PR I referred to above uses both HF accelerate and DS-inference libraries, depending on what we want to infer with. But it does require the transformers version of BLOOM.
@KMFODA currently, I am planning to create a standalone library. For now, I am adding to this repo itself.
> Sorry. I'm new to this repo. I meant to ask where in the repo itself should this generate engine live?

I mean you can probably create a megatron/inference folder.
@thomasw21, I am not sure how this differs from the PR I pointed to above ^^. Can you explain?
If you don't have the transformers skeleton (i.e. modeling), how would one be able to use transformers or DS-inference?
oh, I think I understand the issue now. Maybe something like loading from the universal checkpoints and running inference etc?
@mayank31398 yup! Essentially this is what this issue is about.
A very useful tool for understanding model performance beyond the loss: actually show what the predictions are.
It'd be very useful to be able to "see" the output of the model during evaluation in text format. These should be logged to TensorBoard, whose text plugin renders markdown, so the prediction could be shown in bold. Maybe we only print out the first batch, as that should already give a good number of examples.
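A sketch of what that logging could look like, assuming PyTorch's `SummaryWriter` is available. The helper names and the `eval/predictions` tag are made up for illustration; only `add_text`'s markdown rendering is the actual TensorBoard behavior being relied on.

```python
def format_predictions(prompts, completions):
    """Render prompt + model continuation as markdown, with the model's
    prediction in bold (TensorBoard's text plugin renders markdown)."""
    return "\n\n".join(
        f"{prompt} **{completion}**"
        for prompt, completion in zip(prompts, completions)
    )

def log_predictions(log_dir, step, prompts, completions):
    """Write one batch of decoded outputs to TensorBoard.

    Illustrative helper, not part of Meg-DS. Imported lazily so the
    formatting above works without tensorboard installed.
    """
    from torch.utils.tensorboard import SummaryWriter
    writer = SummaryWriter(log_dir)
    writer.add_text("eval/predictions", format_predictions(prompts, completions), step)
    writer.close()
```

Calling `log_predictions` once per evaluation step on just the first batch keeps the TensorBoard text tab readable while still surfacing obvious generation bugs.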