facebookresearch / CompilerGym

Reinforcement learning environments for compiler and program optimization tasks
https://compilergym.ai/
MIT License

How to make a runnable benchmark #797

Closed — WanrongGao closed this issue 1 year ago

WanrongGao commented 1 year ago

I want to make a runnable benchmark. I create one with benchmark = env.make_benchmark_from_command_line(["gcc", "-DNDEBUG", "test.c", "-o", "test"]), and an executable file test is indeed generated in the directory. But I want to use a runtime reward, so I wrap the environment with env = compiler_gym.wrappers.RuntimePointEstimateReward(env). When I reset, the program reports the error "BenchmarkInitError: Benchmark is not runnable". I guess it's because there's no run_cmd. Is that the reason? How can I generate a runnable benchmark?
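
Here is roughly the full sequence, simplified (the llvm-v0 environment is an assumption for illustration; my real setup may differ):

import compiler_gym
from compiler_gym.wrappers import RuntimePointEstimateReward

env = compiler_gym.make("llvm-v0")  # llvm-v0 assumed for illustration

# Compile test.c into a benchmark; this produces the executable "test".
benchmark = env.make_benchmark_from_command_line(
    ["gcc", "-DNDEBUG", "test.c", "-o", "test"]
)

# Wrap the environment so the reward is based on measured runtime.
env = RuntimePointEstimateReward(env)

# Fails with: BenchmarkInitError: Benchmark is not runnable
env.reset(benchmark=benchmark)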

ChrisCummins commented 1 year ago

Hi @WanrongGao,

that's a good question. The process of creating a runnable benchmark isn't documented yet (note to self: we should add examples), but here's how to make a simple one. Hopefully you can adapt this to your needs:

from compiler_gym.service.proto import BenchmarkDynamicConfig, Command
from compiler_gym.envs.llvm.llvm_benchmark import get_system_library_flags

benchmark = env.make_benchmark_from_command_line(["gcc", "-DNDEBUG", "test.c", "-o", "test"])
# Tell CompilerGym how to compile and run the binary:
benchmark.proto.dynamic_config.MergeFrom(
    BenchmarkDynamicConfig(
        # How to build the benchmark: $CC and $IN expand to the system
        # compiler and the benchmark's input source.
        build_cmd=Command(
            argument=["$CC", "$IN"] + get_system_library_flags(),
            outfile=["a.out"],
            timeout_seconds=60,
        ),
        # How to run the compiled binary when measuring runtime.
        run_cmd=Command(
            argument=["./a.out"],
            timeout_seconds=300,
        ),
    )
)

# now you can use this benchmark in your environment:
env.reset(benchmark=benchmark)
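
With the dynamic config attached, you should then be able to apply the RuntimePointEstimateReward wrapper from your original snippet on top of this; roughly (untested sketch):

from compiler_gym.wrappers import RuntimePointEstimateReward

# Reward becomes the change in measured runtime after each action.
env = RuntimePointEstimateReward(env)
env.reset(benchmark=benchmark)

# Take a random action; the reward is derived from the binary's runtime.
observation, reward, done, info = env.step(env.action_space.sample())
print(reward)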

Closing this as "resolved", but feel free to post follow-up questions :)

Cheers, Chris