anthony0727 closed this pull request 2 years ago
Merging #728 (70dd040) into development (3a5e439) will increase coverage by 0.06%. The diff coverage is n/a.
```diff
@@             Coverage Diff              @@
##           development     #728     +/- ##
===============================================
+ Coverage        88.67%   88.73%   +0.06%
===============================================
  Files              131      131
  Lines             7936     7936
===============================================
+ Hits              7037     7042       +5
+ Misses             899      894       -5
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| compiler_gym/service/connection.py | 77.59% <0.00%> (-1.01%) | :arrow_down: |
| ...ompiler_gym/service/client_service_compiler_env.py | 90.89% <0.00%> (+0.41%) | :arrow_up: |
| ...loop_tool/service/loop_tool_compilation_session.py | 90.54% <0.00%> (+0.67%) | :arrow_up: |
| compiler_gym/envs/llvm/datasets/cbench.py | 80.57% <0.00%> (+1.07%) | :arrow_up: |
| compiler_gym/views/observation.py | 100.00% <0.00%> (+2.70%) | :arrow_up: |
| compiler_gym/views/observation_space_spec.py | 85.71% <0.00%> (+2.85%) | :arrow_up: |
Hi @anthony0727, fantastic! Great to see a GNN-backed RL implementation. A couple of things:

Your results.csv file is missing columns. The easiest way to make sure you populate all of the columns in the right order is to use compiler_gym.CompilerEnvStateWriter to create your CSV file.

Also, be sure to add an entry for your submission to the main README.md!
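For reference, here is a minimal stdlib-only sketch of the kind of CSV the writer produces. The column order shown (benchmark, reward, walltime, commandline) is an assumption for illustration, as are the helper name and sample values; in practice `compiler_gym.CompilerEnvStateWriter` populates the columns for you.

```python
import csv
import io

# Assumed leaderboard column order (hypothetical illustration only;
# CompilerEnvStateWriter emits the correct columns automatically).
COLUMNS = ["benchmark", "reward", "walltime", "commandline"]

def write_states(f, states):
    """Write a list of state dicts to f, header first, in a fixed column order."""
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    for state in states:
        writer.writerow(state)

buf = io.StringIO()
write_states(buf, [{
    "benchmark": "benchmark://cbench-v1/crc32",
    "reward": 1.0,
    "walltime": 2.5,
    "commandline": "opt -mem2reg input.bc -o output.bc",
}])
print(buf.getvalue())
```

Using a single fixed `fieldnames` list is what guarantees every row has every column in the same order, which is the failure mode Chris is pointing at.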
BTW, the vocab file that I think you are looking for can be found here: https://zenodo.org/record/4247595
Cheers, Chris
Hey @ChrisCummins, I've resolved the two bullets you mentioned!
Again, ghostscript was excluded from results.csv due to a resource shortage.
BTW, can the ranking in the main README.md be updated with new experiments? (Though ranking isn't everything, I observed some better results, but they were lost with the server issue.)
Thanks! Anthony
Hi Anthony,
Thanks for the fixes 🙂
> BTW, can the ranking in the main README.md be updated with new experiments? (Though ranking isn't everything, I observed some better results, but they were lost with the server issue.)
Yes, just send us a PR when you have the new results to update your position on the leaderboard.
One small comment about changing the walltime reported for greedy search. Other than that, LGTM!
Cheers, Chris
This adds an entrypoint for a model learnt from ProGraML observations.
The ProGraML graphs are encoded with a graph neural network, and tasks are run with as much parallelism as possible to remove temporal correlation (much like the replay buffer in Deep Q-Networks). The model is optimized with the PPOv2 loss.
The datasets used for training are ['cbench-v1', 'mibench-v1', 'blas-v0', 'npb-v0'], with limited node/edge counts (due to our shortage of resources).
We anticipate better results with additional experiments (these have been halted by our internal server maintenance, and will resume when it is done).
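As a rough illustration of the graph-encoding step (not the submission's actual code), one mean-aggregation message-passing layer over a graph's node features could look like the sketch below. The function name, dimensions, and random weights are all made up for the example; a real ProGraML encoder would use learned weights and typed edges.

```python
import numpy as np

def message_passing_layer(node_feats, edges, weight):
    """One GNN layer: mean-aggregate each node's incoming neighbor
    features, then apply a linear transform and a ReLU.

    node_feats: (num_nodes, dim) array of node embeddings
    edges: list of (src, dst) index pairs
    weight: (dim, dim) weight matrix (random here, learned in practice)
    """
    num_nodes, _ = node_feats.shape
    agg = np.zeros_like(node_feats)
    counts = np.zeros(num_nodes)
    for src, dst in edges:
        agg[dst] += node_feats[src]
        counts[dst] += 1
    counts = np.maximum(counts, 1)        # avoid divide-by-zero on isolated nodes
    agg = agg / counts[:, None]           # mean over incoming neighbors
    return np.maximum(agg @ weight, 0.0)  # linear + ReLU

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))           # 4 nodes, 8-dim features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a simple directed cycle
out = message_passing_layer(feats, edges, rng.normal(size=(8, 8)))
print(out.shape)  # (4, 8)
```

Stacking a few such layers and pooling the node embeddings yields a fixed-size graph representation that a PPO policy/value head can consume.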