dneprDroid / tfsecured

Small library for encryption/decryption of TensorFlow proto models (*.pb)
MIT License

performance is slow #2

Closed samhodge closed 5 years ago

samhodge commented 5 years ago

I have read elsewhere that AES throughput should be around 200 MB/s, but I am seeing about 5 minutes for a 164 MB model.

What should be the execution time?

samhodge commented 5 years ago

I compiled with -O3 and everything is a lot zippier. So embarrassed.

jxmelody commented 5 years ago

Hi @samhodge @dneprDroid, sorry to bother you.
I am having the same problem. Performance is still very slow: about 1.5 minutes for a 92 MB model, and compiling with -O3 did not help. Is there any possibility that I missed something?

jxmelody commented 5 years ago

I used OpenSSL and everything is faster: about 0.5 s for 92 MB.

dneprDroid commented 5 years ago

Added a version with OpenSSL - check out the branch feature/OpenSSL

samhodge commented 5 years ago

@dneprDroid

I tried your branch and things are indeed faster: 0.8 s compared with 11 s with -O3 optimisation (on macOS 10.13 at least).

The problem is that if I drop in identical code I get an error.

Here is the code I am using:

        cur_graph.last = graph_file_name;
        auto start = std::chrono::high_resolution_clock::now();
        std::cerr << "Start Decrypt" << std::endl;
        auto load_graph_status = tfsecured::GraphDefDecryptAES(graph_file_name,  // path to *.pb file (frozen graph)
                                                                &graph_def,
                                                                key);            // your key

        auto end = std::chrono::high_resolution_clock::now();
        std::cerr << "Start End decrypt" << std::endl;
        auto microseconds = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
        std::cerr << "Time Elapsed: " << microseconds.count() << " µs\n";
        if (!load_graph_status.ok()) {
            std::cerr << "What is the issue ? " << load_graph_status.error_message().c_str() << std::endl;
            return tensorflow::errors::NotFound("Failed to load compute graph at '", graph_file_name, "'");
        }
        else {
            graph_map[graph_file_name] = graph_def;
        }

And here is the output I get back:

Start Decrypt
Start End decrypt
Time Elapsed: 879331 µs
What is the issue ? [OpenSSL] EVP_DecryptFinal_ex Error

So it looks like it is having issues with the data not being aligned to a block boundary, something to do with padding. I will paste some related googling I did on the subject a few days ago.
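
For context, the usual OpenSSL EVP decrypt sequence looks roughly like the sketch below (a minimal sketch assuming AES-256-CBC; the exact cipher and key/IV handling in the feature/OpenSSL branch may differ). EVP_DecryptFinal_ex is the step that checks and strips the PKCS#7 padding on the last block, so it errors out if the ciphertext was written without padding or if the key/IV do not line up:

#include <openssl/evp.h>
#include <vector>

// Minimal sketch: decrypt `in` with AES-256-CBC, PKCS#7 padding enabled (the default).
// Returns false if any EVP call fails, including EVP_DecryptFinal_ex rejecting the last block.
bool decryptAes256Cbc(const std::vector<unsigned char> &in,
                      const unsigned char *key, const unsigned char *iv,
                      std::vector<unsigned char> &out) {
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (!ctx) return false;

    bool ok = EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, key, iv) == 1;

    out.resize(in.size() + EVP_MAX_BLOCK_LENGTH);
    int len = 0, total = 0;
    if (ok) {
        ok = EVP_DecryptUpdate(ctx, out.data(), &len, in.data(), (int)in.size()) == 1;
        total = len;
    }
    if (ok) {
        // Verifies the PKCS#7 padding of the final block and strips it;
        // this is the call that produces "EVP_DecryptFinal_ex Error".
        ok = EVP_DecryptFinal_ex(ctx, out.data() + total, &len) == 1;
        total += len;
    }
    out.resize(ok ? (size_t)total : 0);
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}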

samhodge commented 5 years ago

Here is one link I found useful:

https://stackoverflow.com/questions/5665698/evp-decryptfinal-ex-error-on-openssl

Or do I need to re-encrypt my model with updated Python code that takes padding into consideration?
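
For reference, PKCS#7 padding just appends N copies of the byte value N to bring the plaintext up to the 16-byte AES block boundary, and EVP_EncryptFinal_ex does this automatically when padding is left enabled. If the encryption script wrote raw blocks without it, something along these lines would be the missing step (an illustrative sketch, not tfsecured's actual code):

#include <vector>

// Illustrative PKCS#7 padding: append padLen bytes, each equal to padLen,
// so the total length becomes a multiple of the 16-byte AES block size.
// (OpenSSL's EVP_EncryptFinal_ex does this for you when padding is enabled.)
void pkcs7Pad(std::vector<unsigned char> &plaintext, size_t blockSize = 16) {
    size_t padLen = blockSize - (plaintext.size() % blockSize);  // always 1..blockSize, never 0
    plaintext.insert(plaintext.end(), padLen, (unsigned char)padLen);
}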

sam

samhodge commented 5 years ago

I can get the function to return without error by inserting

EVP_CIPHER_CTX_set_padding(ctx, 0);

the line after

CHECK_AES_STATUS(status, ctx, "[OpenSSL] EVP_DecryptInit_ex Error");

but then the binary proto cannot be read by TensorFlow,

i.e.

TF_LoadGraph ERROR: Failed to load compute graph at '...

from

        if (!graph->ParseFromArray(bytes.data(), (int)bytes.size())){
#ifdef DEBUG
            std::cout << "Invalid data: "
                      << std::string(bytes.begin(), bytes.end())
                      << std::endl;
            std::cout << "----------\nend invalid data block" << std::endl;;
#endif
            return errors::DataLoss("Can't parse ", modelPath, " as binary proto");
        }

Using the DEBUG output, the data looks OK-ish: you can sort of see the graph labels, but I am not sure how it is supposed to end.
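
If the data really was PKCS#7-padded at encryption time, the way it is "supposed to end" is with the padding bytes that EVP_DecryptFinal_ex would normally strip; with EVP_CIPHER_CTX_set_padding(ctx, 0) they survive, and protobuf trips over the trailing garbage. A hedged sketch of dropping them by hand before ParseFromArray (assuming `bytes` holds the decrypted buffer, as in the snippet above):

// Sketch: with padding disabled, the PKCS#7 pad bytes remain at the end of the
// decrypted buffer. The last byte gives the pad length; strip it before parsing.
if (!bytes.empty()) {
    size_t padLen = static_cast<unsigned char>(bytes.back());
    if (padLen >= 1 && padLen <= 16 && padLen <= bytes.size()) {
        bytes.resize(bytes.size() - padLen);
    }
}
if (!graph->ParseFromArray(bytes.data(), (int)bytes.size())) {
    return errors::DataLoss("Can't parse ", modelPath, " as binary proto");
}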

sam

samhodge commented 5 years ago

So here is the output from the debug block:

_output_shapes"
 :
?????????
        ?????????*

squeeze_dims
*
T0  
i
SemanticPredictionIdentity  Squeeze_1*
T0  *4
_output_shapes"
 :
?????????
        ?????????----------
end invalid data block