Open — HenryHu2000 opened this issue 1 year ago
Hi @HenryHu2000, the TEEC_InvokeCommand(forward) failed 0xffff3024 origin 0x3 error is typically caused by the secure memory limit: out-of-memory occurs when a layer's weight matrix is created during the forward pass. But the layer in your test is quite small and does not seem large enough to trigger this problem.
The cfg files you mentioned are not in tz_datasets/cfg, but in server_side_sgx/cfg. You may try running again with those cfg files in place.
Hi @mofanv, thanks for your reply. Yes, I used the cfg files in server_side_sgx/cfg but was still getting these errors. Without these cfg files, fl_tee_layerwise.sh doesn't run.
Hi @HenryHu2000, the TEEC_InvokeCommand(forward) failed 0xffff3024 origin 0x3 error is typically caused by the secure memory limit: out-of-memory occurs when a layer's weight matrix is created during the forward pass. But the layer in your test is quite small and does not seem large enough to trigger this problem.
I followed exactly the same configuration as in the paper. I tried the following 3 configurations with fl_tee_layerwise.sh, but none of them worked:

However, other scripts such as fl_tee_standard_noss.sh and fl_tee_standard_ss.sh do run correctly. It seems that changing the flag -ss 2 to -ss 1 also avoids the error, but I guess that defeats the intended purpose.
Hello @mofanv, I attempted to run fl_tee_layerwise.sh on a HiKey 960, the same board used in the original paper, PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments. However, I'm getting

TEEC_InvokeCommand(forward) failed 0xffff3024 origin 0x3

when running fl_tee_layerwise.sh, the same error as in mofanv/darknetz#14 and mofanv/darknetz#29. Other scripts like fl_tee_standard_noss.sh and fl_tee_standard_ss.sh run correctly.

Since the tz_datasets/cfg folder does not contain the greedy-cnn-aux.cfg, greedy-cnn-layer1.cfg, greedy-cnn-layer2.cfg, greedy-cnn-layer3.cfg and mnist_greedy-cnn.cfg files required by fl_tee_layerwise.sh, I manually copied them over from PPFL/server_side_sgx/cfg.

Error log:

I checked mofanv/darknetz#14 and mofanv/darknetz#29 and attempted to increase
TA_STACK_SIZE and TA_DATA_SIZE in ta/include/user_ta_header_defines.h. I have the following values, but I am still getting the error. I cannot increase them further, because that would cause a

TEEC_Opensession failed with code 0xffff000c origin 0x3

error, as in mofanv/darknetz#32.

I isolated the command
darknetp classifier train -pp_start_f 0 -pp_end 4 -ss 2 "cfg/mnist.dataset" "cfg/mnist_greedy-cnn.cfg" "/root/models/mnist/mnist_greedy-cnn_global.weights"
that failed and tried to run it manually on the client. -pp_start_f 0 -pp_end 4 fails, but -pp_start_f 0 -pp_end 3 runs. It seems that layer 4 is the one that cannot fit into TEE memory.

Do you know what the original configuration used in PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments was? Thank you!
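For anyone reproducing this: the two limits I tuned are ordinary OP-TEE TA property macros. A sketch of the relevant part of ta/include/user_ta_header_defines.h follows; the values shown are illustrative placeholders, not the ones from my build:

```c
/* ta/include/user_ta_header_defines.h (excerpt; values are illustrative). */

/* TA_STACK_SIZE bounds the trusted application's stack.
 * TA_DATA_SIZE bounds its heap, which is where the TA's layer
 * buffers are allocated inside the TEE. Raising them too far makes
 * session setup fail on boards with limited secure memory. */
#define TA_STACK_SIZE (1 * 1024 * 1024)   /* 1 MiB stack */
#define TA_DATA_SIZE  (10 * 1024 * 1024)  /* 10 MiB heap */
```

Raising TA_DATA_SIZE only helps up to the board's secure-memory carve-out, which would explain why larger values trip the TEEC_Opensession error instead.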