graviraja / MLOps-Basics

A question on week_0 #23

Closed · chloamme closed this issue 3 years ago

chloamme commented 3 years ago

Hello Raviraj, your great work has been really helpful to me.

I installed all the packages using requirements.txt and trained a model without any issue. But I have a problem when I run inference on sentences.

In my case, every sentence gets almost the same score. Could you check it out? Thanks!

[screenshots of the inference output attached]

ravirajag commented 3 years ago

Hi @chloamme, can you paste the training script as well?

chloamme commented 3 years ago

This is the training script.

$ git clone https://github.com/graviraja/MLOps-Basics.git
$ cd MLOps-Basics/week_0_project_setup/
$ pip install -r requirements.txt 
$ python train.py 
$ ls -al ./models/epoch\=1-step\=535.ckpt 

After I got a checkpoint, I edited the inference.py file to use the name of the ckpt file I obtained, and I added a few more example sentences to run inference on the model.

# inference.py

if __name__ == "__main__":
    sentence = "The boy is sitting on a bench"        # grammatical sentence
    predictor = ColaPredictor("./models/epoch=1-step=535.ckpt")
    print(sentence, "\n\t", predictor.predict(sentence))
    sentence = "The boy are sitting on a benches"     # deliberately ungrammatical
    print(sentence, "\n\t", predictor.predict(sentence))
    sentence = "just for test....."                   # arbitrary text
    print(sentence, "\n\t", predictor.predict(sentence))
    sentence = "asdfasdfasdf"                         # nonsense input
    print(sentence, "\n\t", predictor.predict(sentence))
$ python inference.py 
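
For context, here is a rough, self-contained sketch of what a predictor like ColaPredictor typically does: load the Lightning checkpoint, tokenize the sentence, and softmax the two logits into the label/score pairs printed above. This is an approximation, not the repo's exact inference.py; the tokenizer checkpoint and the ColaModel forward signature are assumptions.

# predictor_sketch.py -- approximate sketch, not the repo's exact ColaPredictor
import torch
from transformers import AutoTokenizer  # tokenizer checkpoint below is an assumption
from model import ColaModel             # the LightningModule defined in week_0

class SimplePredictor:
    def __init__(self, checkpoint_path, model_name="google/bert_uncased_L-2_H-128_A-2"):
        # load_from_checkpoint restores the weights saved by the Lightning Trainer
        self.model = ColaModel.load_from_checkpoint(checkpoint_path)
        self.model.eval()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.labels = ["unacceptable", "acceptable"]

    @torch.no_grad()
    def predict(self, sentence):
        enc = self.tokenizer(sentence, return_tensors="pt", truncation=True)
        logits = self.model(enc["input_ids"], enc["attention_mask"])  # shape (1, 2)
        probs = torch.softmax(logits[0], dim=0).tolist()              # two scores summing to 1
        return [{"label": l, "score": s} for l, s in zip(self.labels, probs)]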

And my environment is: [screenshot attached]

TensorBoard captures: [screenshots attached]

ravirajag commented 3 years ago

The model is training; as you can see, the loss is decreasing. Since the goal is to explore MLOps, not model training, I have done only a basic setup. For the model to perform better, either try a different model (I used the smallest one to run experiments faster) or tune the hyper-parameters.
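
For example, a hypothetical one-line change in train.py could swap in a larger pretrained encoder. This assumes ColaModel accepts a model_name constructor argument (the actual argument name in the repo may differ):

# train.py (excerpt) -- hypothetical sketch, the argument name is an assumption
from model import ColaModel

# The course deliberately uses a tiny BERT so experiments run fast; a larger encoder
# such as "bert-base-uncased" usually separates acceptable/unacceptable sentences better,
# at the cost of slower training.
cola_model = ColaModel(model_name="bert-base-uncased")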

chloamme commented 3 years ago

I thought maybe the inference was going wrong, since very different inputs were giving almost the same scores. These are sentences from the training data; one is labeled acceptable and the other unacceptable. [screenshot attached]

But they also got similar scores. I expected these samples to get distinct scores, so I was confused.

The critics laughed the play off the stage. 
         [{'label': 'unacceptable', 'score': 0.31048455834388733}, {'label': 'acceptable', 'score': 0.6895154714584351}]
There were killed three men by the assassin. 
         [{'label': 'unacceptable', 'score': 0.3104795813560486}, {'label': 'acceptable', 'score': 0.6895204186439514}]
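
One way to read these numbers: the two scores are a softmax over two logits and always sum to 1, so getting an almost identical 0.31/0.69 pair for very different sentences means the logits barely change with the input, i.e. the classifier is producing a near-constant output. A tiny illustration with made-up logit values:

import torch

# Hypothetical logits from a collapsed classifier: nearly identical for any input sentence.
logits = torch.tensor([-0.40, 0.40])
print(torch.softmax(logits, dim=0))  # ~tensor([0.3100, 0.6900]) -- matches the pair above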

I'll tune the hyper-parameters and try again! Thanks!

chloamme commented 3 years ago

I changed only the lr value (1e-7), and it works well!! Thank you very much! 😄
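
For anyone hitting the same behaviour, the fix above amounts to lowering the learning rate when the model is constructed. Assuming ColaModel exposes an lr constructor argument (the argument name is an assumption), it is a one-line edit in train.py:

# train.py (excerpt) -- hypothetical sketch, the argument name is an assumption
from model import ColaModel

# A much smaller learning rate keeps the tiny BERT from collapsing to a near-constant prediction.
cola_model = ColaModel(lr=1e-7)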