sherjilozair / char-rnn-tensorflow

Multi-layer Recurrent Neural Networks (LSTM, RNN) for character-level language models in Python using TensorFlow
MIT License

How do you open/view the final result? #138

Closed effdotsh closed 5 years ago

hugovk commented 5 years ago

It outputs to the command line.

For example:

$ # optionally prep a smaller sample for quicker test runs
$ mkdir data/teenytinyshakespeare
$ head -100 data/tinyshakespeare/input.txt > data/teenytinyshakespeare/input.txt
$ # train (here on the full tinyshakespeare set)
$ python train.py --data_dir data/tinyshakespeare --num_epochs 1
loading preprocessed files
WARNING:tensorflow:From /Users/hugo/github/char-rnn-tensorflow/model.py:30: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /Users/hugo/github/char-rnn-tensorflow/model.py:36: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /Users/hugo/github/char-rnn-tensorflow/model.py:57: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-05-14 16:43:12.930540: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
0/446 (epoch 0), train_loss = 4.162, time/batch = 0.529
model saved to save/model.ckpt
1/446 (epoch 0), train_loss = 4.132, time/batch = 0.147
2/446 (epoch 0), train_loss = 4.035, time/batch = 0.165
3/446 (epoch 0), train_loss = 3.808, time/batch = 0.170
[... batches 4-441 elided; train_loss falls steadily from ~3.55 to ~1.93, time/batch ~0.15-0.3s ...]
442/446 (epoch 0), train_loss = 1.928, time/batch = 0.519
443/446 (epoch 0), train_loss = 1.911, time/batch = 0.350
444/446 (epoch 0), train_loss = 1.940, time/batch = 0.271
445/446 (epoch 0), train_loss = 1.965, time/batch = 0.289
model saved to save/model.ckpt
$ # sample, look at the end, after the warnings
$ python sample.py
[... same deprecation warnings as during training elided ...]
2019-05-14 16:54:14.394704: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
 ount,
Yit, Riond ontorly by ecor; to horsaes the umter,
Worce! af at whlatcith, stoct? in to all wert cousing.
Ediy laes, son troose, how, Wlising, a praces old;
Ansers of stainter weoble, shang it man-endy.

LARTIES:
Thre's leare of hene tree' wistens?
Sare are sist, ins sicher, ry and in you us that the the wish.

aMOPNOLO:
A mutherfer: wear ho and that tills beefsten,
Thene say, shase all ray tath sgraky his Yame?

RYUREK:
Why, harrens, Gode alath kand! whe letprong, mard thit; ther, Villoud;
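If you would rather view the result in a file than scroll the terminal, you can redirect stdout; the sample text goes to stdout while the TensorFlow warnings go to stderr, so the two can be separated. (A sketch; the filename here is just an example.)

```shell
# save the generated sample to a file, discarding the TF warnings
# (stdout holds the sample text, stderr holds the warnings)
python sample.py > sample_output.txt 2>/dev/null

# then open it with any viewer or editor
less sample_output.txt
```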
effdotsh commented 5 years ago

OK, thank you. Also, how do you change the number of output lines?

hugovk commented 5 years ago

You can choose how many characters are output with -n:

$ python sample.py --help
usage: sample.py [-h] [--save_dir SAVE_DIR] [-n N] [--prime PRIME]
                 [--sample SAMPLE]

optional arguments:
  -h, --help           show this help message and exit
  --save_dir SAVE_DIR  model directory to store checkpointed models (default:
                       save)
  -n N                 number of characters to sample (default: 500)
  --prime PRIME        prime text (default: )
  --sample SAMPLE      0 to use max at each timestep, 1 to sample at each
                       timestep, 2 to sample on spaces (default: 1)
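Putting the flags from the help text together, a hypothetical invocation that samples 1000 characters, seeds the network with a prime string, and uses greedy decoding might look like:

```shell
# 1000 characters, seeded with "ROMEO:", greedy (argmax) decoding
python sample.py --save_dir save -n 1000 --prime "ROMEO:" --sample 0
```

Note that `--sample 0` picks the most likely character at each timestep, which tends to give more repetitive but safer-looking text than the default stochastic sampling (`--sample 1`).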