chiphuyen / stanford-tensorflow-tutorials

This repository contains code examples for Stanford's course TensorFlow for Deep Learning Research.
http://cs20.stanford.edu
MIT License

Bot responses degrading with iteration #145

Open · GrayHat12 opened this issue 5 years ago

GrayHat12 commented 5 years ago

The more I train the bot, the worse the responses get.

Check it out:

```
HUMAN ++++ Hi
BOT ++++ dennings dennings dennings dennings dennings shed grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer
HUMAN ++++ Hey
BOT ++++ dennings dennings dennings dennings dennings dennings grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer
HUMAN ++++ What's your name?
BOT ++++ grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer grocer
=============================================
HUMAN ++++ Hi
BOT ++++ ?
HUMAN ++++ What is your name
BOT ++++ me .
HUMAN ++++ WHere are you from
BOT ++++ of the .
HUMAN ++++ Let's go
BOT ++++ you ' re going to do you do ?
=============================================
HUMAN ++++ Hi
BOT ++++
HUMAN ++++ Hey
BOT ++++ .
HUMAN ++++ Hello
BOT ++++ .
HUMAN ++++ What are you doing?
BOT ++++ .
HUMAN ++++ What is your name
BOT ++++ .
HUMAN ++++ Speak
BOT ++++ .
HUMAN ++++ Tell me your name
BOT ++++ .
HUMAN ++++ Who am I
BOT ++++ .
HUMAN ++++ You
BOT ++++
HUMAN ++++ I
BOT ++++
HUMAN ++++ Am
BOT ++++
HUMAN ++++ WHats
BOT ++++
HUMAN ++++ Should i go
BOT ++++ .
=============================================
HUMAN ++++ Hi
BOT ++++ ?
HUMAN ++++ Hello
BOT ++++ ?
HUMAN ++++ What is your name
BOT ++++ .
HUMAN ++++ Where are you from
BOT ++++ .
HUMAN ++++ Let's Go
BOT ++++ . . .
HUMAN ++++ Let us go
BOT ++++ .
=============================================
<8000 iterations>
HUMAN ++++ Hi
BOT ++++ .
HUMAN ++++ Hey
BOT ++++ .
HUMAN ++++ Hi
BOT ++++ .
HUMAN ++++ Let us go
BOT ++++ .
HUMAN ++++ Let me in
BOT ++++ .
HUMAN ++++ Tell me his name again
BOT ++++ .
HUMAN ++++ .
BOT ++++
HUMAN ++++ No dots plz
BOT ++++ .
HUMAN ++++ no dot
BOT ++++ .
HUMAN ++++ stop
BOT ++++ .
HUMAN ++++ idiot
BOT ++++ .
HUMAN ++++ clr
BOT ++++ .
HUMAN ++++ Gufdc
BOT ++++ .
HUMAN ++++ hoy
BOT ++++ .
=============================================
<9000 iterations>
HUMAN ++++ You always been this selfish?
BOT ++++ .
HUMAN ++++ Hey dude... What have you been up to ?
BOT ++++ .
HUMAN ++++ Hello
BOT ++++ .
HUMAN ++++ Hi
BOT ++++ .
HUMAN ++++ gyfjgjxxjygkyfdyfykuhkudtufbkudgchjgkudt
BOT ++++ .
HUMAN ++++ drgfth
BOT ++++ .
HUMAN ++++ gfthfthgyjdesfgbdc
BOT ++++ .
HUMAN ++++ ggd rg gtfh
BOT ++++ .
HUMAN ++++ yjygvhrh ygmjvhgsdg
BOT ++++ .
HUMAN ++++ fth fhgyjvgrdgd h
BOT ++++ .
HUMAN ++++ fthfth
BOT ++++ .
HUMAN ++++ ft hyjujmfgrdsg
BOT ++++ .
HUMAN ++++ f
BOT ++++ .
```

The attached image shows the current training progress:

![Annotation 2019-05-20 153224](https://user-images.githubusercontent.com/37294843/58013584-82785f80-7b14-11e9-9f4f-f9e3460b85e3.png)

The config.py is as follows:

```python
DATA_PATH = 'cornell movie-dialogs corpus'
CONVO_FILE = 'movie_conversations.txt'
LINE_FILE = 'movie_lines.txt'
OUTPUT_FILE = 'output_convo.txt'
PROCESSED_PATH = 'processed'
CPT_PATH = 'checkpoints'

THRESHOLD = 2

PAD_ID = 0
UNK_ID = 1
START_ID = 2
EOS_ID = 3

TESTSET_SIZE = 25000

BUCKETS = [(19, 19), (28, 28), (33, 33), (40, 43), (50, 53), (60, 63)]
#BUCKETS = [(16, 19)]

CONTRACTIONS = [("i ' m ", "i 'm "), ("' d ", "'d "), ("' s ", "'s "),
                ("don ' t ", "do n't "), ("didn ' t ", "did n't "),
                ("doesn ' t ", "does n't "), ("can ' t ", "ca n't "),
                ("shouldn ' t ", "should n't "), ("wouldn ' t ", "would n't "),
                ("' ve ", "'ve "), ("' re ", "'re "), ("in ' ", "in' ")]

NUM_LAYERS = 3
HIDDEN_SIZE = 256
BATCH_SIZE = 64

LR = 0.5
MAX_GRAD_NORM = 5.0

NUM_SAMPLES = 512
ENC_VOCAB = 24360
DEC_VOCAB = 24686
```
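For context on the `BUCKETS` setting above: each (question, answer) pair is assigned to the smallest bucket whose encoder/decoder limits fit it, and both sequences are padded up to that bucket's sizes before batching. Below is a minimal, self-contained sketch of that idea in plain Python; the helper names (`pick_bucket`, `pad_to_bucket`) are made up for illustration and are not the repo's actual functions.

```python
# Minimal sketch of bucket-based padding (hypothetical helpers, not the repo's code).
BUCKETS = [(19, 19), (28, 28), (33, 33), (40, 43), (50, 53), (60, 63)]
PAD_ID = 0

def pick_bucket(enc_len, dec_len, buckets=BUCKETS):
    """Return the index of the smallest bucket that fits both lengths, else None."""
    for bucket_id, (enc_max, dec_max) in enumerate(buckets):
        if enc_len <= enc_max and dec_len <= dec_max:
            return bucket_id
    return None  # pair is longer than every bucket; such pairs are typically dropped

def pad_to_bucket(enc_ids, dec_ids, buckets=BUCKETS):
    """Pad an (encoder, decoder) token-id pair up to its bucket's sizes."""
    bucket_id = pick_bucket(len(enc_ids), len(dec_ids), buckets)
    if bucket_id is None:
        return None
    enc_max, dec_max = buckets[bucket_id]
    enc_padded = enc_ids + [PAD_ID] * (enc_max - len(enc_ids))
    dec_padded = dec_ids + [PAD_ID] * (dec_max - len(dec_ids))
    return bucket_id, enc_padded, dec_padded

# Example: a 5-token question and a 3-token reply both fit bucket 0, i.e. (19, 19).
print(pad_to_bucket([4, 8, 15, 16, 23], [42, 3, 7])[0])  # -> 0
```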
MartinAbilev commented 5 years ago

As I see it, you need to train more. The loss needs to go below 1.0, somewhere around 0.2 to 0.03.
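For a rough sense of what those numbers mean, assuming the logged loss is an average per-token cross-entropy in nats, perplexity is just `exp(loss)`; a quick sketch:

```python
import math

# Assuming the logged loss is average per-token cross-entropy in nats,
# perplexity = exp(loss) is roughly "how many tokens the model is still
# hesitating between" at each step.
for loss in (4.0, 1.0, 0.2, 0.03):
    print(f"loss={loss:.2f}  perplexity={math.exp(loss):.2f}")
# loss=4.00  perplexity=54.60
# loss=1.00  perplexity=2.72
# loss=0.20  perplexity=1.22
# loss=0.03  perplexity=1.03
```

Note that a training loss that low mostly shows the model fits the training pairs; replies to unseen inputs can still be generic, so it is worth watching a held-out loss as well.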

NazaninSal commented 5 years ago

Hi, how long does it take for training to get the loss below 1.0, say down to 0.03?