doas3140 / PyStack

Python implementation of Deepstack
79 stars 34 forks

Cannot run `python train_nn.py --street 4 --approximate root_nodes` #1

Open YUE-hash opened 5 years ago

YUE-hash commented 5 years ago

After running `python generate_data.py --street 4 --approximate root_nodes` and `python convert_npy_to_tfrecords.py --street 4 --approximate root_nodes`, it seems I cannot run `python train_nn.py --street 4 --approximate root_nodes`, because of `ValueError: Empty training data`.

chiangqiqi commented 5 years ago

This issue happens because you do not have enough data in `data/TrainSamples/river/root_nodes_tfrecords/`. Each command generates one file, but during training the validation ratio is 0.1, which means you should generate at least 10 samples to avoid this empty-data problem.
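To see why at least 10 files are needed: with a 0.1 validation fraction, splitting a small list of TFRecord files leaves one side empty after integer rounding. A minimal sketch of such a split (illustrative only, not PyStack's actual code):

```python
# Illustrative 0.1 train/validation split over a list of TFRecord files.
# With fewer than 10 files, int(len(files) * 0.1) rounds down to 0,
# so one side of the split ends up empty and training fails.
def split_files(files, validation_size=0.1):
    n_val = int(len(files) * validation_size)
    return files[n_val:], files[:n_val]  # (train, validation)

train, val = split_files(["root_nodes_0.tfrecords"])
print(len(train), len(val))  # 1 0 -> empty validation set
```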

154461013 commented 5 years ago

@chiangqiqi Do I need to run `python generate_data.py --street 4 --approximate root_nodes` and `python convert_npy_to_tfrecords.py --street 4 --approximate root_nodes` ten times? When I run `python train_nn.py --street 4 --approximate root_nodes` I get:

    Traceback (most recent call last):
      File "train_nn.py", line 38, in <module>
        main()
      File "train_nn.py", line 35, in main
        T.train(num_epochs=arguments.num_epochs, batch_size=arguments.batch_size, validation_size=0.1, start_epoch=starting_idx)
      File "E:\PyStack-texas-holdem\src\NnTraining\train.py", line 68, in train
        callbacks = self.callbacks, initial_epoch = start_epoch )
      File "D:\Anaconda3.5\lib\site-packages\tensorflow\python\keras\engine\training.py", line 880, in fit
        validation_steps=validation_steps)
      File "D:\Anaconda3.5\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 346, in model_iteration
        aggregator.finalize()
      File "D:\Anaconda3.5\lib\site-packages\tensorflow\python\keras\engine\training_utils.py", line 108, in finalize
        self.results[0] /= self.num_samples_or_steps
    IndexError: list index out of range

doas3140 commented 5 years ago

The issue arises because you do not have enough files in `data/TrainSamples/river/root_nodes_tfrecords/`. You have to create more TFRecord files. Just replace the line `self.tfrecords_batch_size = 1024*10` with, for example, `self.tfrecords_batch_size = 64` in the `src/Settings/arguments.py` file, so that it creates enough files, and then run `train_nn.py`.

154461013 commented 5 years ago

Generated solved situations: river: 2 mil., other rounds: 0.5 mil. How many TFRecord files correspond to the 2 million samples?

doas3140 commented 5 years ago

You will also need to change `self.batch_size = 1024` to something smaller, like `self.batch_size = 64`, in `src/Settings/arguments.py` to avoid the Empty training data error.
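The batch-size problem is similar: Keras-style training loops derive the number of steps per epoch from an integer division, so a batch size larger than the number of available samples yields zero steps and effectively empty data. A hedged sketch of the failure mode (illustrative, not the actual PyStack or Keras code):

```python
# Illustrative: steps per epoch via integer division, as Keras-style
# training loops compute it. If batch_size exceeds the sample count,
# zero batches run and the result aggregator has nothing to finalize
# (cf. the IndexError in training_utils.finalize in this thread).
def steps_per_epoch(num_samples, batch_size):
    return num_samples // batch_size

print(steps_per_epoch(500, 1024))  # 0 -> no batches processed
print(steps_per_epoch(500, 64))    # 7 -> training proceeds
```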

doas3140 commented 5 years ago

@154461013 This means I solved 2 million situations in the river round. So if `self.tfrecords_batch_size` is `1024*10`, there is a total of 2,000,000/(1024*10) ≈ 196 TFRecord files in the `data/TrainSamples/river/root_nodes_tfrecords/` directory.
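The file-count arithmetic above can be checked directly; I use ceiling division on the assumption that a final partial file still gets written (the exact writer behavior is an assumption):

```python
import math

samples = 2_000_000       # solved river situations, per this thread
per_file = 1024 * 10      # default self.tfrecords_batch_size
print(math.ceil(samples / per_file))  # 196
```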

bluedevils23 commented 4 years ago

How long will it take to generate 2 mil. solved situations with a normal ML setup?