Closed Freakwill closed 6 years ago
From a quick glance through the code, it seems like mostly stylistic changes. Where are you getting the 20% perf improvement from?
"Stylistic changes" may improve the perf. It's true for any python code. The old code is not efficient, in some way. For example, I chose list comprehension instead of for-loop, np.sum instead of reduce function, deleted some "repetitive operations".
Finally, I did an experiment.
I benchmarked master (f59e5efdcc6d183024014644ac16c5ddb343d4df) and this PR (29b9479d471c9dde5867a3fd7a36c092f7cee02e) on my machine (i7 5930k / 1080 Ti, Tensorflow 1.9.0, CUDA 9, CuDNN 7), using the command:
time python neural_style.py \
--content examples/1-content.jpg \
--styles examples/1-style.jpg \
--output test.png \
--iterations 1000
The results are:
master: 61.60s user 19.14s system 147% cpu 54.573 total
PR: 62.22s user 19.11s system 147% cpu 55.045 total
There doesn't seem to be a statistically significant difference.
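One way to check whether such a rewrite matters on its own, outside the TensorFlow graph that dominates total runtime, is a micro-benchmark. This is an assumed setup, not the repository's benchmark; the array shapes and repeat count are arbitrary.

```python
import timeit
from functools import reduce

import numpy as np

# Illustrative workload: a small list of medium-sized arrays.
arrays = [np.random.rand(256, 256) for _ in range(8)]

# Time the reduce-based fold against the single np.sum call.
t_reduce = timeit.timeit(lambda: reduce(np.add, arrays), number=1000)
t_npsum = timeit.timeit(lambda: np.sum(arrays, axis=0), number=1000)

print(f"reduce: {t_reduce:.3f}s  np.sum: {t_npsum:.3f}s")
```

Even if one form wins here, the difference is per-call Python overhead; in a run where GPU optimization iterations dominate, it would be hard to see in wall-clock time.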
I see. That is interesting, though.
The PR was meant to improve the code and its performance, making it at least 20% faster than the original. I may have changed the style of the code, but PEP 8 is the standard.