Optimization detection over compiled binaries
MIT License

Optimization Detector

This is the companion code for the paper

Identifying Compiler and Optimization Level in Binary Code from Multiple Architectures

D. Pizzolotto, K. Inoue

The code in this repository is used to train and evaluate a deep learning network capable of recognizing the optimization level and compiler used in a compiled binary.

With our dataset we tested:

This repository contains only the code, pre-trained models can be found at the following link

Pre-requisites

Python 3.6+ is required to run the code. Additional dependencies are listed in the requirements.txt file and may be installed with pip. A GPU supporting CUDA is strongly suggested, which implies installing the CUDA drivers and cuDNN libraries.

Dataset

The manually generated dataset can be found at the following link. Alternatively, one can follow the instructions in the dataset generation section to generate a gcc-only dataset automatically, for any architecture for which gcc, g++ and binutils are available in the Ubuntu Packages repository.

This software expects a list of binary files as its dataset and can use two types of analysis:

An additional file can be used to replicate our evaluation. This file should not be run blindly; it is provided only to give an idea of our overall training approach, and using it on a different system may require some changes.

Usage

The usage of this software consists of the following four parts:

In the following subsections we explain the basic usage. Additional flags can be retrieved by running the program with the -h or --help option.

Generation

We prepared an automated script capable of generating the dataset using any gcc cross compiler available in the Ubuntu Packages repository. In this study we used this script to prepare the riscv64, sparc64, powerpc, mips and armhf architectures. If you retrieved our dataset from Zenodo, just extract everything and jump to the next section.

Given that compilation results may vary greatly depending on the host environment, using Docker to generate the dataset is mandatory.

First create the image using:

$ docker build -t <image_name> .

Then execute the generation script in a container created from the image:

$ docker run -it <image_name> python3 generate_dataset.py -t "riscv64-linux-gnu" /build 

In this command the -t parameter specifies which architectures will be built, and expects a machine-operating-system tag. This is the same tag that can be found in the toolchains available in the Ubuntu Packages repository. To build more than one architecture, one can use : to separate them, for example "riscv64-linux-gnu:arm-linux-gnueabihf". This will build the flags -O0, -O1, -O2, -O3 and -Os for each architecture.
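How the tag list expands into build jobs can be sketched in Python. The splitting on : and the flag list come from this README; the generation script's actual internals may differ:

```python
from itertools import product

# Architecture tags as passed to -t, separated by ':'
tags = "riscv64-linux-gnu:arm-linux-gnueabihf".split(":")

# Optimization flags built for each architecture
flags = ["-O0", "-O1", "-O2", "-O3", "-Os"]

# Every architecture-flag combination the script will build
combinations = list(product(tags, flags))
print(len(combinations))  # 2 architectures x 5 flags = 10 builds
```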

Note: building requires at least 150 GB of free disk space (even though the final result will be less than 1 GB) and at least 10 GB of system RAM. Expect the build to take a couple of hours for each architecture-flag combination.

As soon as the build is finished, one can use the following command to copy out the results.

$ docker cp <container_id>:/build/riscv64-gcc-o0.tar.xz <target_directory>

where riscv64 and o0 should be replaced with the actual architecture and optimization level.

At this point, the dataset should be extracted with

$ tar xf <archive> -C <target>

in order to be used by the next step (ironically also called extraction, even though it is a different kind of extraction).

Extraction

This step is used to extract only executable data from the binary.

The following command should be used:

$ python3 optimization-detector.py extract <input_files> <output_dir>

where

- <input_files> is the list of compiled binaries to analyze
- <output_dir> is the directory where the extracted executable data will be written

For preprocessing the following command should be used:

$ python3 optimization-detector.py preprocess -c <class ID> <input_folder> [<input_folder> ...] <model_dir> 

where

- -c <class ID> is the numerical label assigned to all samples coming from the given folders
- <input_folder> is one or more folders containing data produced by the extract step
- <model_dir> is the folder where the preprocessed dataset will be stored

Note that this command should be run multiple times, every time with a different class and the same model dir, for example like this:

$ python3 optimization-detector.py preprocess --incomplete -c 0 gcc-o0/ clang-o0/ model_dir/
$ python3 optimization-detector.py preprocess --incomplete -c 1 gcc-o1/ clang-o1/ model_dir/ 
$ python3 optimization-detector.py preprocess --incomplete -c 2 gcc-o2/ clang-o2/ model_dir/
$ python3 optimization-detector.py preprocess -c 3 gcc-o3/ clang-o3/ model_dir/  

The --incomplete flag saves time by skipping shuffling and duplicate elimination in the intermediate steps, but it is not strictly necessary.
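Conceptually, preprocessing pairs fixed-size samples with their class ID, then removes duplicates and shuffles the result on the final pass. A minimal sketch of that idea; the chunk size and helper names here are assumptions, not the tool's actual code:

```python
import random

CHUNK_SIZE = 2048  # assumed sample length; the tool's actual value may differ

def make_samples(raw: bytes, class_id: int):
    """Split raw executable data into fixed-size (chunk, label) samples."""
    return [
        (raw[i:i + CHUNK_SIZE], class_id)
        for i in range(0, len(raw) - CHUNK_SIZE + 1, CHUNK_SIZE)
    ]

def finalize(samples):
    """Deduplicate and shuffle, as the final (non --incomplete) pass does."""
    unique = list(dict.fromkeys(samples))  # drop duplicates, keep order deterministic
    random.shuffle(unique)
    return unique

# Two identical inputs produce duplicate samples, which collapse to one
data = make_samples(b"\x90" * 4096, class_id=0) + make_samples(b"\x90" * 4096, class_id=0)
print(len(finalize(data)))  # 1
```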

Finally, the following command can be used to check the number of samples that will be used for training, validation and testing:

$ python3 optimization-detector.py summary <model_dir>

Training

Training can be run with the following command after preprocessing:

$ python3 optimization-detector.py train -n <network_type> <model_dir>

where <network_type> is one of lstm or cnn and <model_dir> is the folder containing the result of the preprocess operation.

An extra folder, logs/, containing TensorBoard data, will be generated inside <model_dir>.

Evaluation

The evaluation in the paper has been run with the following command:

$ python3 optimization-detector.py evaluate -m <model> -o output.csv <dataset_dir>

where:

- -m <model> is the trained model produced by the train step
- -o output.csv is the file where the results will be written
- <dataset_dir> is the folder containing the preprocessed dataset

This will test the classification multiple times, each time increasing the input vector length. To test a specific length, and obtain the confusion matrix, add the --confusion <value> flag.
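The length sweep can be pictured as truncating each test sample before classification. The classifier, samples and length values below are purely illustrative stand-ins, not the tool's code:

```python
def evaluate_at_length(samples, labels, classify, length):
    """Accuracy when each sample is truncated to `length` bytes."""
    correct = sum(classify(s[:length]) == y for s, y in zip(samples, labels))
    return correct / len(samples)

# Stub classifier: predicts class 1 if the truncated sample contains a 0xC3 byte
classify = lambda chunk: 1 if b"\xc3" in chunk else 0

samples = [b"\x00" * 10 + b"\xc3", b"\x00" * 11]
labels = [1, 0]
for length in (4, 11):  # a short prefix misses the marker byte, lowering accuracy
    print(length, evaluate_at_length(samples, labels, classify, length))
```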

Inference

The single-file inference has been run using the following command:

$ python3 optimization-detector.py infer -m <model> -o output.csv <path-to-file>

This command divides the file into chunks of 2048 bytes each and runs inference on each one. The result of each chunk's inference is then written to the file output.csv. If the -o output.csv part is omitted, the average is reported on stdout.
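The chunking and averaging described above can be sketched as follows. Only the 2048-byte chunk size comes from this README; infer_chunk is a hypothetical stand-in for the model's per-chunk prediction, and the handling of a trailing partial chunk may differ in the real tool:

```python
CHUNK_SIZE = 2048

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Divide the file contents into fixed-size chunks, dropping a short tail."""
    return [data[i:i + size] for i in range(0, len(data) - size + 1, size)]

def average_prediction(data: bytes, infer_chunk):
    """Run per-chunk inference and average the scores, as reported on stdout."""
    scores = [infer_chunk(c) for c in split_into_chunks(data)]
    return sum(scores) / len(scores)

# Toy stand-in model: "score" is the fraction of zero bytes in the chunk
toy_model = lambda chunk: chunk.count(0) / len(chunk)
data = bytes(range(256)) * 16  # 4096 bytes -> exactly two chunks
print(average_prediction(data, toy_model))
```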

Pre-Trained Models

Pre-trained models for every architecture in our dataset can be downloaded from the following link.

Note that LSTM models always provide better accuracy (4.5% better on average), while CNN models provide faster inference (2x-4x faster).

Authors

Davide Pizzolotto <davidepi@ist.osaka-u.ac.jp>

Katsuro Inoue <inoue@ist.osaka-u.ac.jp>