aisk opened this pull request 3 years ago
@aisk thanks for the PR! do you have any numbers to show the amount of memory it reduces though?
Asking because our CI runs a benchmark comparison between the PR branch and the master branch at the end of each build, like this. And this PR's builds don't show an obvious difference as far as I can see:
```
benchmark                             old allocs  new allocs  delta
BenchmarkBasicMath/add-2              41          41          +0.00%
BenchmarkBasicMath/subtract-2         41          41          +0.00%
BenchmarkBasicMath/multiply-2         41          41          +0.00%
BenchmarkBasicMath/divide-2           41          41          +0.00%
BenchmarkConcurrency/concurrency-2    68552       68500       -0.08%
BenchmarkContextSwitch/fib-2          72309       72309       +0.00%
BenchmarkContextSwitch/quicksort-2    47935       47935       +0.00%

benchmark                             old bytes   new bytes   delta
BenchmarkBasicMath/add-2              1736        1736        +0.00%
BenchmarkBasicMath/subtract-2         1736        1736        +0.00%
BenchmarkBasicMath/multiply-2         1736        1736        +0.00%
BenchmarkBasicMath/divide-2           1736        1736        +0.00%
BenchmarkConcurrency/concurrency-2    3412003     3402804     -0.27%
BenchmarkContextSwitch/fib-2          2997256     2997264     +0.00%
BenchmarkContextSwitch/quicksort-2    2184213     2184209     -0.00%
```
I'm not sure if our default benchmarking cases are the right ones for this change though. so perhaps you can show some statistics as well? 😄
I didn't run any benchmarks because I thought this optimization was obviously beneficial, but to my surprise, when I did test it the memory reduction was minimal. Maybe Go already does some string interning?
@aisk I think one reason could be that the tokenizer is only called once per program run, so we may need a large codebase to see the effect. While I do appreciate your attempt to optimize Goby's performance, I think such optimization isn't necessary at this stage, especially when it could increase the complexity of the code 🙂
(also sorry for the late reply, I've been quite busy working on multiple projects recently 🙏)