areabow opened 1 year ago
Hey, I am facing a similar issue. Did you get any resolution to it?
@areabow @SaketNer You can allocate the memory for your tensor arena in PSRAM. This should work:
if (tensor_arena == NULL) {
  // Allocate the tensor arena in external PSRAM instead of internal SRAM
  tensor_arena = (uint8_t *) ps_malloc(kTensorArenaSize);
  if (tensor_arena == NULL) {
    // ps_malloc returns NULL if PSRAM is disabled or too small
    Serial.println("Failed to allocate tensor arena in PSRAM");
  }
}
Also check out this GitHub repo where I used PSRAM for my tensor_arena: https://github.com/Navodplayer1/ESP32_PSRAM_Person_Detection
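If it helps, here is a fuller self-contained sketch of the same idea for the Arduino-ESP32 core. The 300 KB arena size is just a placeholder you should tune to your model, and psramFound() only returns true when PSRAM is enabled in the board settings:

#include <Arduino.h>

// Placeholder size; pick what your model actually needs.
constexpr size_t kTensorArenaSize = 300 * 1024;
static uint8_t *tensor_arena = nullptr;

void setup() {
  Serial.begin(115200);

  // psramFound() reports whether the core detected and initialized PSRAM.
  if (!psramFound()) {
    Serial.println("No PSRAM detected; enable it in the board settings");
    return;
  }

  if (tensor_arena == NULL) {
    // ps_malloc() always allocates from external PSRAM.
    tensor_arena = (uint8_t *) ps_malloc(kTensorArenaSize);
  }

  if (tensor_arena == NULL) {
    Serial.println("ps_malloc failed; is the arena larger than free PSRAM?");
    return;
  }

  Serial.printf("Arena allocated in PSRAM: %u bytes\n", (unsigned) kTensorArenaSize);
}

void loop() {}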
Good day. I have the following model that I am using for inference on an ESP32. It is a minimal, forward-only definition in TensorFlow of an early time-series classification model that I trained in PyTorch; as such, the TensorFlow model is suitable for inference only. I was able to transfer the weights from PyTorch to TensorFlow and verified on my validation set that the predictions are correct. I can also convert the model to TFLite without issue, and I confirmed the converted model with the Python inference API.
I have implemented a minimal example to run inference on the ESP32, but I am having trouble with the tensor allocation step. When I attempt to allocate the tensors, the IDF monitor reports the following:
Guru Meditation Error: Core 0 panic'ed (LoadProhibited). Exception was unhandled.
With backtrace:
The list of ops in my model, exported with TensorFlow's visualize.py, is: ADD, CAST, CONCATENATION, EQUAL, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, GREATER, LESS, LOGICAL_AND, LOGISTIC, MUL, PACK, RESHAPE, SELECT_V2, SHAPE, SPLIT, SQUEEZE, STRIDED_SLICE, TANH, TRANSPOSE, UNPACK, WHILE.
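For reference, my setup follows the usual tflite-micro pattern; the sketch below is a trimmed reconstruction rather than my exact code. The model symbol g_model_data and the arena size are placeholders, some tflite-micro versions also take an ErrorReporter in the MicroInterpreter constructor, and the SELECT_V2 and WHILE kernels need a reasonably recent library version:

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: the TFLite flatbuffer exported as a C array.
extern const unsigned char g_model_data[];

constexpr size_t kTensorArenaSize = 300 * 1024;  // placeholder size
static uint8_t *tensor_arena = nullptr;  // must point at a real buffer before SetupModel() runs

bool SetupModel() {
  const tflite::Model *model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    return false;  // flatbuffer schema mismatch
  }

  // One Add...() call per op that actually appears in the model.
  static tflite::MicroMutableOpResolver<24> resolver;
  resolver.AddAdd();
  resolver.AddCast();
  resolver.AddConcatenation();
  resolver.AddEqual();
  resolver.AddExpandDims();
  resolver.AddFill();
  resolver.AddFullyConnected();
  resolver.AddGather();
  resolver.AddGreater();
  resolver.AddLess();
  resolver.AddLogicalAnd();
  resolver.AddLogistic();
  resolver.AddMul();
  resolver.AddPack();
  resolver.AddReshape();
  resolver.AddSelectV2();
  resolver.AddShape();
  resolver.AddSplit();
  resolver.AddSqueeze();
  resolver.AddStridedSlice();
  resolver.AddTanh();
  resolver.AddTranspose();
  resolver.AddUnpack();
  resolver.AddWhile();

  // The interpreter plans memory for all tensors inside the arena;
  // this is the call that dies with LoadProhibited on my board.
  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return false;  // arena too small, or an op failed to prepare
  }
  return true;
}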
I have also tried increasing the main task stack size, with no luck there either. Any assistance is appreciated.
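For completeness, one way I tried to rule out stack size was running the setup in a dedicated FreeRTOS task with a generous stack, roughly like this (again a sketch, not my exact code; the 32 KB stack depth is arbitrary and SetupModel() refers to the placeholder above):

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

bool SetupModel();  // from the sketch above

static void inference_task(void *arg) {
  // Model setup and AllocateTensors() run here instead of in app_main(),
  // so the stack size below applies to them.
  SetupModel();
  vTaskDelete(NULL);  // one-shot task for this test
}

extern "C" void app_main(void) {
  // 32 KB is an arbitrary stack depth (in bytes on ESP-IDF); tune as needed.
  xTaskCreate(inference_task, "inference", 32 * 1024, NULL, 5, NULL);
}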