goby-lang / goby

Goby - Yet another programming language written in Go

Using uint8 as token type to reduce memory usage #872

Open aisk opened 3 years ago
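(The PR description isn't captured above. As a rough sketch of the kind of change the title describes — the names below are illustrative assumptions, not taken from the actual diff to compiler/token/token.go:)

```go
package token

// Before this change, the token type would have been a string alias,
// so every Token carried a 16-byte string header just for its kind:
//
//	type Type string
//	const Ident Type = "IDENT"
//
// Switching to a uint8-backed enum shrinks that field to one byte.
type Type uint8

const (
	Ident Type = iota // identifiers such as variable names
	Int               // integer literals
	Plus              // the "+" operator
	// ...one constant per remaining token kind
)

// Token pairs the compact type tag with the literal text and its position.
type Token struct {
	Type    Type
	Literal string
	Line    int
}
```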

codecov[bot] commented 3 years ago

Codecov Report

Merging #872 (a15a35c) into master (557bbca) will decrease coverage by 0.01%. The diff coverage is 50.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #872      +/-   ##
==========================================
- Coverage   80.90%   80.89%   -0.02%     
==========================================
  Files          54       54              
  Lines        6788     6789       +1     
==========================================
  Hits         5492     5492              
- Misses       1069     1070       +1     
  Partials      227      227              
Impacted Files Coverage Δ
compiler/token/token.go 91.66% <0.00%> (-8.34%) ↓
compiler/parser/flow_control_parsing.go 82.22% <33.33%> (ø)
compiler/parser/expression_parsing.go 68.52% <100.00%> (ø)
native/ripper/ripper.go 70.37% <100.00%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 557bbca...a15a35c.

st0012 commented 3 years ago

@aisk Thanks for the PR! Do you have any numbers showing how much memory it saves, though?

Asking because our CI runs a benchmark comparison between the PR branch and the master branch at the end of each build, like this. And this PR's builds don't show an obvious difference as far as I can see:

benchmark                              old allocs     new allocs     delta
BenchmarkBasicMath/add-2               41             41             +0.00%
BenchmarkBasicMath/subtract-2          41             41             +0.00%
BenchmarkBasicMath/multiply-2          41             41             +0.00%
BenchmarkBasicMath/divide-2            41             41             +0.00%
BenchmarkConcurrency/concurrency-2     68552          68500          -0.08%
BenchmarkContextSwitch/fib-2           72309          72309          +0.00%
BenchmarkContextSwitch/quicksort-2     47935          47935          +0.00%

benchmark                              old bytes     new bytes     delta
BenchmarkBasicMath/add-2               1736          1736          +0.00%
BenchmarkBasicMath/subtract-2          1736          1736          +0.00%
BenchmarkBasicMath/multiply-2          1736          1736          +0.00%
BenchmarkBasicMath/divide-2            1736          1736          +0.00%
BenchmarkConcurrency/concurrency-2     3412003       3402804       -0.27%
BenchmarkContextSwitch/fib-2           2997256       2997264       +0.00%
BenchmarkContextSwitch/quicksort-2     2184213       2184209       -0.00%

I'm not sure our default benchmark cases are the right ones for this change, though. So perhaps you can show some statistics as well? 😄
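(One way to produce such statistics could be a micro-benchmark aimed at the tokenizer itself rather than the whole VM, so the token struct's size shows up directly in the -benchmem numbers. A rough sketch — the lexer.New, NextToken, and token.EOF names are assumptions based on the package layout visible in the diff, not confirmed API:)

```go
package lexer_test

import (
	"testing"

	"github.com/goby-lang/goby/compiler/lexer"
	"github.com/goby-lang/goby/compiler/token"
)

// BenchmarkTokenize lexes a non-trivial snippet on every iteration, so
// changes to the Token struct's size show up in the bytes/op column
// instead of being drowned out by parser and VM work.
func BenchmarkTokenize(b *testing.B) {
	source := `
	def fib(n)
	  if n < 2
	    n
	  else
	    fib(n - 1) + fib(n - 2)
	  end
	end
	fib(10)
	`
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		l := lexer.New(source)
		// Drain the token stream until EOF.
		for t := l.NextToken(); t.Type != token.EOF; t = l.NextToken() {
		}
	}
}
```

Running this with `go test -bench=Tokenize -benchmem` on both branches and comparing the B/op column (or feeding both outputs to benchstat) would isolate the token-size effect.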

aisk commented 3 years ago

I didn't run any benchmarks at first because I thought the optimization would be an obvious win, but to my surprise, when I tested it the memory reduction was minimal. I think maybe Go has some string interning going on?
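(For what it's worth: Go doesn't intern strings at runtime in the general sense, but string literals are compiled into the binary's read-only data, so assigning a constant token type copies only a two-word header and never heap-allocates. A small illustration — the type names here are hypothetical:)

```go
package main

import (
	"fmt"
	"testing"
	"unsafe"
)

// Hypothetical tag types for comparison: a string-backed token type
// versus a uint8-backed one.
type StringType string
type ByteType uint8

var sink StringType // package-level sink so the compiler keeps the store

func main() {
	// A string value is a two-word header (pointer + length): 16 bytes
	// on 64-bit platforms. A uint8 is a single byte.
	fmt.Println(unsafe.Sizeof(StringType("IDENT"))) // 16
	fmt.Println(unsafe.Sizeof(ByteType(0)))         // 1

	// Assigning a string literal copies only the header; the bytes
	// themselves live in the binary, so no heap allocation occurs.
	allocs := testing.AllocsPerRun(1000, func() {
		sink = "IDENT"
	})
	fmt.Println(allocs) // 0
}
```

So the saving per token is roughly the struct-size difference (a 16-byte header vs. a 1-byte tag, before padding) rather than avoided allocations, which is consistent with the flat allocation counts in the benchmarks above.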

st0012 commented 3 years ago

@aisk I think one reason could be that the tokenizer is only called once when running a program, so we'd only see the effect on a large codebase. While I do appreciate your attempt to optimize Goby's performance, I think at this stage such an optimization isn't necessary, especially when it could increase the complexity of the code 🙂

(also sorry for the late reply, I've been quite busy working on multiple projects recently 🙏)