Open DLPerf opened 3 years ago
Hello, I'm looking forward to your reply~
I would like to contribute in solving this issue. Can you please assign this issue to me? @DLPerf @lamberta @mihaimaruseac @jaeyounkim @martinwicke
We don't generally assign issues to contributors. Instead, please send a PR when ready.
I resolved the performance issue by changing the order of operations in the dataset pipeline, calling .batch(BATCH_SIZE) before .map(scale). This modification improves efficiency by batching the data before applying the map function, reducing the number of individual function calls.
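To illustrate why this reordering helps, here is a minimal sketch in plain Python (not `tf.data`; the function names and data are hypothetical). Mapping after batching means the mapped function runs once per batch instead of once per example, so the per-call overhead is paid far fewer times:

```python
# Hypothetical sketch of the map/batch ordering trade-off.
# With tf.data, .map(scale).batch(N) invokes scale once per example,
# while .batch(N).map(scale) invokes it once per batch.
BATCH_SIZE = 4
data = list(range(12))

calls = {"per_example": 0, "per_batch": 0}

def scale_one(x):
    calls["per_example"] += 1
    return x / 255.0

def scale_batch(batch):
    calls["per_batch"] += 1
    return [x / 255.0 for x in batch]

def batched(seq, n):
    return [seq[i:i + n] for i in range(0, len(seq), n)]

# map() then batch(): scale_one runs once per element (12 calls)
a = batched([scale_one(x) for x in data], BATCH_SIZE)

# batch() then map(): scale_batch runs once per batch (3 calls)
b = [scale_batch(chunk) for chunk in batched(data, BATCH_SIZE)]

assert a == b  # same result, fewer function calls
print(calls)
```

The results are identical; only the number of user-function invocations changes, which is where `tf.data` pipelines typically recover the overhead.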
Hello! I've found a performance issue in /tests/testdata/keras_tuner_cifar_example.py: `batch()` should be called before `map()`, which could make your program more efficient. Here is the TensorFlow document to support it. A detailed description is listed below:

- `.batch(BATCH_SIZE)` (here) should be called before `train_dataset.map(scale)` (here).
- `.batch(BATCH_SIZE)` (here) should be called before `.map(scale)` (here).

Besides, you need to check whether the function called in `map()` (e.g., `scale` called in `.map(scale)`) is affected, to make the changed code work properly. For example, if `scale` needs data with shape (x, y, z) as its input before the fix, it would require data with shape (batch_size, x, y, z) afterwards.

Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.
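The shape caveat above is the part that most often breaks after the reorder. A minimal sketch in plain Python (hypothetical function names; real TF ops like `x / 255.0` are usually shape-agnostic, but shape-sensitive map functions are not):

```python
# Hypothetical sketch: a map function written for a single example must be
# adapted once it is moved after batch(), because it then receives an
# extra leading batch dimension.

def scale_example(example):
    # Expects one example: a flat list of pixel values.
    return [p / 255.0 for p in example]

def scale_batched(batch):
    # After batch(), the input is a list of examples: apply per example.
    return [scale_example(ex) for ex in batch]

example = [0, 51, 255]
batch = [example, example]  # shape (batch_size, ...) after batch()

print(scale_example(example))   # scales one example
print(scale_batched(batch))     # scales every example in the batch
# Passing the batch to scale_example directly would raise a TypeError,
# since each element is now a list, not a number.
```

In a real `tf.data` pipeline the same check applies: verify that `scale` either broadcasts over the new leading dimension or is rewritten to iterate over it.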