pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

Android app - Error - Attempted to resize a static tensor to a new shape at dimension 0 #1350

Open adonnini opened 9 months ago

adonnini commented 9 months ago

My Android application fails with an "Attempted to resize a static tensor to a new shape at dimension 0" error. Please find the full logcat below.

The shape of the input datasets for my model is not static: specifically, the number of steps in any one sequence varies.

Here is the code I use to define the input dataset for the model in the Android application:

    float[] flat = flatten(tmpData);
    final long[] shapeArrDataPytorchFlattened = new long[]{tmpData.length, 4, 1};
    arrDataPytorch = Tensor.fromBlob(flat, shapeArrDataPytorchFlattened);

where 4 is the number of features and tmpData.length is the number of rows in the input dataset (which has n rows and 4 columns).
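The flatten helper referenced above is not shown; a minimal sketch of what it is assumed to do (row-major flattening of a float[n][4] array so the result lines up with the {tmpData.length, 4, 1} shape) would be something like this. The method signature and float[][] input type are assumptions, not taken from the app:

    // Hypothetical reconstruction of the flatten() helper referenced above:
    // row-major flattening of a float[n][4] array into a single float[n * 4],
    // matching the {tmpData.length, 4, 1} shape passed to Tensor.fromBlob.
    private static float[] flatten(float[][] data) {
        int rows = data.length;
        int cols = data[0].length;   // 4 features per step
        float[] flat = new float[rows * cols];
        for (int i = 0; i < rows; i++) {
            System.arraycopy(data[i], 0, flat, i * cols, cols);
        }
        return flat;
    }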

Here is the code I use to run inference:

    try {
        Log.i(TAG, " - neuralNetworkloadAndRunPytorch - Abut to run inference --- ");
        outputTensor = mModule.forward(from(arrDataPytorch)).toTensor();
    } catch (Exception e) {
        Log.i(TAG, " - neuralNetworkloadAndRunPytorch - Inference FAILED --- ");
        throw new RuntimeException(e);
    }
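The error below reports old_size: 27 at dimension 0, which suggests the exported .pte captured a fixed number of steps. One possible workaround, purely as a sketch (the constant 27 is taken from that error message; zero-padding, the helper name, and whether the model tolerates padded rows are assumptions), is to pad or truncate every sequence to that fixed length before building the tensor:

    // Workaround sketch, not a fix: force every sequence to the fixed step
    // count the exported program appears to expect. EXPECTED_STEPS = 27 is
    // taken from the "old_size: 27" in the error below; zero-padding is an
    // assumption about what the model tolerates.
    private static final int EXPECTED_STEPS = 27;
    private static final int NUM_FEATURES = 4;

    private static float[][] padOrTruncate(float[][] data) {
        float[][] out = new float[EXPECTED_STEPS][NUM_FEATURES];
        int copyRows = Math.min(data.length, EXPECTED_STEPS);
        for (int i = 0; i < copyRows; i++) {
            System.arraycopy(data[i], 0, out[i], 0, NUM_FEATURES);
        }
        // Rows beyond data.length remain zero-filled (padding).
        return out;
    }

The tensor would then be built from flatten(padOrTruncate(tmpData)) with shape {EXPECTED_STEPS, 4, 1}.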

For comparison, when I run inference on the same model converted with TorchScript and run with PyTorch Mobile, I produce the input dataset as follows:

    final long[] shapeArrDataPytorchFlattened = new long[]{1, flat.length};   // USED FOR PYTORCH MOBILE
    arrDataPytorch = Tensor.fromBlob(flat, shapeArrDataPytorchFlattened);

and run inference as follows:

    mModule = LiteModuleLoader.load(moduleFileAbsoluteFilePath);
    outputTensor = mModule.forward(IValue.from(arrDataPytorch)).toTensor();

This works, producing reasonable results.
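Note that the two paths hand the runtimes very differently shaped tensors ({tmpData.length, 4, 1} for ExecuTorch versus {1, flat.length} for PyTorch Mobile). A small diagnostic, assuming this Tensor class exposes shape() the way the PyTorch Mobile one does, would make the mismatch visible in logcat before the native assert fires:

    // Diagnostic sketch: log the shape handed to forward() so a mismatch such
    // as "expected 27 at dim 0, got 12716" shows up before the native assert.
    // Assumes this Tensor class exposes shape(), as the PyTorch Mobile Tensor does.
    Log.i(TAG, " - neuralNetworkloadAndRunPytorch - input shape: "
            + java.util.Arrays.toString(arrDataPytorch.shape()));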

I would appreciate any thoughts as to what is causing the problem, and how I might go about fixing it.

Thanks

LOGCAT

12-05 16:48:49.983: I/NeuralNetworkService(16887):  - NeuralNetworkServiceRunnable - neuralNetworkInputPreparationRunning - 1 - 0
12-05 16:48:49.983: I/NeuralNetworkService(16887):  - NeuralNetworkServiceRunnable - neuralNetworkLoadAndRunRunning - 0 - 0
12-05 16:48:49.983: I/NeuralNetworkService(16887):  - NeuralNetworkServiceRunnable - About to run neuralNetworkloadAndRun --- 
12-05 16:48:49.983: I/NeuralNetworkService(16887):  - neuralNetworkloadAndRunPytorch - Running - 
12-05 16:48:49.983: I/NeuralNetworkService(16887):  - neuralNetworkloadAndRunPytorch - locationInformationDir - /data/user/0/com.android.contextq/files/locationInformation/
12-05 16:48:49.983: I/NeuralNetworkService(16887):  - neuralNetworkloadAndRunPytorch - savedNetworkArchiveLength - 120669888
12-05 16:48:49.983: I/NeuralNetworkService(16887):  - neuralNetworkloadAndRunPytorch - Abut to load module --- 
12-05 16:48:50.067: I/ETLOG(16887): Model file /data/user/0/com.android.contextq/files/locationInformation/tfmodel_exnnpack.pte is loaded.
12-05 16:48:50.067: I/ETLOG(16887): Setting up planned buffer 0, size 23366800.
12-05 16:48:50.077: W/libc(16887): Access denied finding property "ro.hardware.chipname"
12-05 16:48:50.078: W/adbd(13666): timeout expired while flushing socket, closing
12-05 16:48:50.080: D/XNNPACK(16887): allocated 6144 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.080: D/XNNPACK(16887): created workspace of size 774176
12-05 16:48:50.081: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.085: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.088: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.092: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.092: D/XNNPACK(16887): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
12-05 16:48:50.092: D/XNNPACK(16887): created workspace of size 42368
12-05 16:48:50.092: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.096: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.097: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.113: D/XNNPACK(16887): allocated 4196352 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.127: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.130: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.130: D/XNNPACK(16887): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
12-05 16:48:50.130: D/XNNPACK(16887): created workspace of size 42368
12-05 16:48:50.130: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.132: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.146: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.150: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.150: D/XNNPACK(16887): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
12-05 16:48:50.150: D/XNNPACK(16887): created workspace of size 42368
12-05 16:48:50.150: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.152: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.166: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.170: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.170: D/XNNPACK(16887): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
12-05 16:48:50.170: D/XNNPACK(16887): created workspace of size 42368
12-05 16:48:50.170: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.172: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.186: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.190: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.190: D/XNNPACK(16887): reusing tensor id #4 memory for tensor id #3 Node #2 Softmax
12-05 16:48:50.190: D/XNNPACK(16887): created workspace of size 42368
12-05 16:48:50.190: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.192: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.206: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.209: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.213: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.217: D/XNNPACK(16887): allocated 8192 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.217: D/XNNPACK(16887): created workspace of size 1327136
12-05 16:48:50.217: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.221: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.224: D/XNNPACK(16887): reusing tensor id #8 memory for tensor id #5 Node #2 Softmax
12-05 16:48:50.224: D/XNNPACK(16887): created workspace of size 42368
12-05 16:48:50.225: D/XNNPACK(16887): created workspace of size 663584
12-05 16:48:50.225: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.229: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.232: I/XNNPACK(16887): fuse Clamp Node #2 into upstream Node #1
12-05 16:48:50.234: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.249: D/XNNPACK(16887): allocated 4196352 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.263: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.269: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.273: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.276: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.277: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.281: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.284: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.286: D/StNfcHal(979): (#0C838) Rx 60 07 01 e2 
12-05 16:48:50.286: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.301: D/XNNPACK(16887): allocated 4196352 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.315: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.319: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.322: D/XNNPACK(16887): created workspace of size 663584
12-05 16:48:50.323: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.327: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.331: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.334: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.338: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.338: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.342: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.344: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.357: D/XNNPACK(16887): created workspace of size 663584
12-05 16:48:50.358: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.361: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.362: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.365: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.367: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.381: D/XNNPACK(16887): created workspace of size 663584
12-05 16:48:50.381: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.385: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.385: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.389: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.390: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.404: D/XNNPACK(16887): created workspace of size 663584
12-05 16:48:50.405: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.408: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.409: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.412: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.414: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.427: D/XNNPACK(16887): created workspace of size 663584
12-05 16:48:50.428: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.431: D/XNNPACK(16887): created workspace of size 387104
12-05 16:48:50.432: D/XNNPACK(16887): allocated 1050624 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.435: I/XNNPACK(16887): fuse Clamp Node #1 into upstream Node #0
12-05 16:48:50.437: D/XNNPACK(16887): allocated 4202496 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.450: D/XNNPACK(16887): allocated 16416 bytes for packed weights in Fully Connected (NC, F32) operator
12-05 16:48:50.467: I/NeuralNetworkService(16887):  - neuralNetworkloadAndRunPytorch - Abut to run inference --- 
12-05 16:48:50.467: I/ETLOG(16887): Attempted to resize a static tensor to a new shape at dimension 0 old_size: 27 new_size: 12716
12-05 16:48:50.467: I/ETLOG(16887): Error setting input 0: 0x10
12-05 16:48:50.467: I/ETLOG(16887): In function forward(), assert failed: set_input_status == Error::Ok
12-05 16:48:50.467: A/libc(16887): Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 16905 (Thread-2), pid 16887 (lNetworkService)
12-05 16:48:50.635: I/crash_dump64(17226): obtaining output fd from tombstoned, type: kDebuggerdTombstoneProto
tarun292 commented 1 month ago

Yep, I can confirm that fixes the issue, and I can repro the actual export issue. Will let you know once I have more insights into what the issue is.

adonnini commented 1 month ago

Thanks! I appreciate it

adonnini commented 4 days ago

@tarun292, sorry to bother you again. It's been over a month since we last connected. Will you be able to take another look at this issue, and if so, when? I am still waiting for its resolution in order to proceed with some of my work. Thanks