batzner / indrnn

TensorFlow implementation of Independently Recurrent Neural Networks
https://arxiv.org/abs/1803.04831
Apache License 2.0

Performance issues in the program #25

Status: Closed (closed by DLPerf 3 years ago)

DLPerf commented 3 years ago

Hello, I found a performance issue in the definition of get_training_set in batzner/indrnn/blob/master/examples/sequential_mnist.py: dataset.map is called without num_parallel_calls. I think adding this argument would increase the efficiency of your program.

The same issue also exists in the two dataset = dataset.map(preprocess_data) calls.
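For illustration, here is a minimal sketch of the suggested change (not code from this repo; the preprocessing function and dummy data below are placeholders, and the AUTOTUNE constant is assumed to live under tf.data.experimental as in the TF 1.x-era API the example targets):

```python
import tensorflow as tf

# Dummy tensors standing in for the MNIST data used in sequential_mnist.py.
images = tf.random.uniform([64, 784])
labels = tf.random.uniform([64], maxval=10, dtype=tf.int32)

def preprocess_data(image, label):
    # Hypothetical preprocessing step; the real one is defined in the example script.
    return tf.cast(image, tf.float32) / 255.0, label

dataset = tf.data.Dataset.from_tensor_slices((images, labels))

# Before: dataset.map(preprocess_data) processes one element at a time.
# After: num_parallel_calls lets tf.data run the map function on several
# elements concurrently; AUTOTUNE picks the degree of parallelism at runtime.
dataset = dataset.map(preprocess_data,
                      num_parallel_calls=tf.data.experimental.AUTOTUNE)
```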

Here is the TensorFlow documentation supporting this suggestion.

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.

batzner commented 3 years ago

Thank you for taking a close look at the example! I think for the sake of simplicity, it is okay to leave the data loading as it is.

batzner commented 3 years ago

From your recent activity on GitHub, it seems like the performance issues you report in this and other repos are generated (semi-)automatically. If that is the case, it would be helpful to include this information in the descriptions of the respective issues. From a code maintainer's perspective, it makes a difference whether a code-analysis program detected a suboptimal order of, say, shuffle().batch().repeat(), or whether an actual user encountered an issue with the code.
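For reference, a small sketch (not code from this repo) of the kind of ordering difference mentioned above; the exact recommendation depends on the tf.data version and use case:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(1000)

# Suboptimal: batching first means shuffle() only reorders whole batches,
# so the 32 examples inside each batch always stay together.
coarse = dataset.batch(32).shuffle(buffer_size=10).repeat()

# Usual recommendation: shuffle individual examples, then batch, then repeat.
fine = dataset.shuffle(buffer_size=1000).batch(32).repeat()
```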