maxpumperla / deep_learning_and_the_game_of_go

Code and other material for the book "Deep Learning and the Game of Go"
https://www.manning.com/books/deep-learning-and-the-game-of-go

ValueError on end_to_end.py Chapter 8.2.1 #38

Open Tuxius opened 5 years ago

Tuxius commented 5 years ago

I am getting an error running end_to_end.py from GitHub:

KGS-2003-19-7582-.tar.gz 7582
KGS-2002-19-3646-.tar.gz 3646
KGS-2001-19-2298-.tar.gz 2298
total num games: 179689
Drawn 100 samples:
Traceback (most recent call last):
  File "end_to_end.py", line 20, in <module>
    X, y = processor.load_go_data(num_samples=100)
  File "/smb/deep_learning_and_the_game_of_go-master/code/dlgo/data/parallel_processor.py", line 51, in load_go_data
    features_and_labels = self.consolidate_games(data_type, data)
  File "/smb/deep_learning_and_the_game_of_go-master/code/dlgo/data/parallel_processor.py", line 142, in consolidate_games
    features = np.concatenate(feature_list, axis=0)
ValueError: all the input array dimensions except for the concatenation axis must match exactly

I am using Python 3.6.8, TensorFlow 1.13.1 and Keras 2.2.4. I assume the code needs an update due to changes in one of these versions?

lichun306 commented 4 years ago

Any progress or a fix on this issue? I also got a similar ValueError; more details are as follows:

ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 7 and the array at index 1 has size 1

I am using Python 3.6.8, tensorflow 2.0.0, and tensorflow.keras 2.2.4-tf on Linux Ubuntu 18.04.

lichun306 commented 4 years ago

I found the reason why the ValueError above happened. We ran train_generator.py first, which generated feature and label files in the 'data' directory; those files were produced with the oneplane encoder and the small network. When we then ran end_to_end.py, it generated new feature files using the sevenplane encoder and the large network. The two kinds of feature files have different shapes, and mixing them confuses end_to_end.py and produces the ValueError above. The workaround is to delete the old feature and label files before running end_to_end.py.
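
For anyone hitting the same error, a cleanup like the sketch below can clear the stale cache before rerunning end_to_end.py. This is only a sketch: the `data` directory and the `*features*.npy` / `*labels*.npy` glob patterns are assumptions based on how the processor names its cached arrays, so check the actual file names in your own data directory first.

```python
import glob
import os

# Directory where the data processor caches its .npy feature/label arrays.
# Adjust if your data directory lives somewhere else (assumed path).
DATA_DIR = 'data'

# Cached arrays left over from train_generator.py (oneplane encoder) have a
# different shape than the sevenplane features end_to_end.py builds, which is
# what makes np.concatenate fail. Deleting the cached .npy files forces the
# processor to regenerate them with the current encoder.
for pattern in ('*features*.npy', '*labels*.npy'):  # assumed naming patterns
    for path in glob.glob(os.path.join(DATA_DIR, pattern)):
        print('removing stale cache file:', path)
        os.remove(path)
```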