philj0st closed this issue 2 years ago
You have 32 output channels, so 0x00000000ffffffff looks correct.
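For reference, the mask for N consecutive output channels starting at processor 0 can be computed like this (a plain-Python sketch of the bitmap math, not part of the tooling):

```python
# Each of the 64 CNN processors handles one channel; output_processors
# is a 64-bit bitmap. For 32 output channels starting at processor 0,
# set the low 32 bits:
num_channels = 32
mask = (1 << num_channels) - 1
print(f"0x{mask:016x}")   # -> 0x00000000ffffffff
```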
The "izer" run finished without errors, correct? The failure message is from the linker stage. You can go into main.c and simply remove check_output() as well as the included sampledata.h and sampleoutput.h. We typically recommend getting the KAT working (which is difficult in your case since both input and output data are so large), but you did not substantially modify the model, so it may not be needed anyway. We will take a look at why "--no-kat" didn't work the way it should.
The "-demo" examples are customized from what the synthesis tool generated. Since you removed "USE_SAMPLEDATA", it tries to allocate the full input buffer in RAM. For the large data you're trying to use, that won't work.
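A rough back-of-the-envelope check shows why (memory sizes as I recall them from the MAX78000 datasheet; the usable amounts are lower in practice once stack, heap, and other buffers are counted):

```python
# One folded 8-bit input sample: channels x height x width
input_bytes = 48 * 88 * 88
system_sram = 128 * 1024    # MAX78000 system SRAM (datasheet figure)
cnn_data_mem = 512 * 1024   # dedicated CNN data memory (datasheet figure)

print(input_bytes)                   # 371712
print(input_bytes > system_sram)     # True  -> full buffer won't fit in RAM
print(input_bytes <= cnn_data_mem)   # True  -> but it fits the CNN data memory
```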
We are working on a demo for a full sized 352x352 UNet but it isn't ready yet. You could perhaps display only every nth input pixel (while you still feed all of the data to the CNN). I've tagged @maximreza and @aniktash who may be able to help.
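The "every nth pixel" idea is just a stride slice; in host-side numpy terms it looks like this (the demo firmware itself is C, and the shapes here are only illustrative):

```python
import numpy as np

frame = np.zeros((3, 352, 352), dtype=np.uint8)   # full-resolution RGB input
n = 4
preview = frame[:, ::n, ::n]                      # keep every nth pixel per axis
print(preview.shape)                              # (3, 88, 88) -- small enough for a TFT
```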
The UNET-demo example is trained on 3x80x80 images (CamVid_s80_c3). For full-sized 352x352 input (CamVid_s352_c3), the demo is not published yet, but it can be picked up from here (you need to clone the entire SDK and check out the UNet-Highres branch): https://github.com/aniktash/MAX78000_SDK-1/tree/UNet-Highres/Examples/MAX78000/CNN/UNet-highres-demo-UART. It captures images either through UART or the camera, runs the inference, and sends the result back over UART. Unlike the lower-resolution UNET-demo, which generates the segmentation result and displays it on the TFT, UNet-highres-demo-UART uses a script on the PC to receive the inference result, unfold the output (from 88x88x16x4 for four classes to 352x352x4), and create the segmentation image. If an end-to-end demo on the EV kit is needed, you may want to consider UNET-demo for now.
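In numpy terms, the unfold is essentially a per-class depth-to-space: each class has 16 channels that encode a 4x4 block of subpixels. The exact channel-to-subpixel ordering below is an assumption for illustration; the script in the UNet-Highres branch is authoritative:

```python
import numpy as np

classes, h, w, s = 4, 88, 88, 4   # 4 classes, 88x88 folded grid, 4x4 subpixel blocks
folded = np.random.randint(-128, 128, size=(classes * s * s, h, w), dtype=np.int8)

x = folded.reshape(classes, s, s, h, w)   # (class, by, bx, y, x) -- assumed order
x = x.transpose(0, 3, 1, 4, 2)            # (class, y, by, x, bx)
unfolded = x.reshape(classes, h * s, w * s)
print(unfolded.shape)                     # (4, 352, 352)
```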
Thank you both for the detailed answers! This is very good to hear! I'm excited to look more into the UNet-Highres demo. For the time being, I went back to training and used the ai85unetmedium model on a Coco_s80 dataset, inspired by CamVid_s80. When inspecting UNet-demo/log.txt I see that the demo was synthesized with
--config-file ../ai8x-training/unet_visualize/unet_v5.yaml
I cannot seem to find this unet_visualize folder in the git history, nor any unet_v5.yaml in the history of ai8x-synthesis/networks/. Could you maybe provide me with the network description for ai85unetmedium?
You are right. It seems the model, checkpoint, and yaml that were used for the UNet-demo with 3x80x80 resolution were not pushed. @MaximGorkem, @ermanok could you please help?
Any news on this, @MaximGorkem @ermanok? I have a deadline coming up; it would be great to present the UNet-demo with the model I trained.
The model names have changed in the training repo: unet_v5 is really ai85unetmedium, and what you did is certainly the right thing to get a model for 80x80 images. The yaml file is not uploaded to the repo, but I think this file will help you synthesize your model:
camvid-unet-medium.zip
Hi, I trained your `ai85unetlarge` on MS COCO with only 2 classes (Background, Person). When trying to synthesize the trained model I ran into some problems.

First I had to modify `aisegment-unet-large-fakept.yaml` because the last layer ended up smaller (screenshots: expected vs. actual). That's why I changed the last layer to `output_processors: 0x00000000ffffffff`. Is that correct?

I'd like to use your UNet-Demo with my trained network. After some investigation I found out that you used some (I assume older) version `unet_v5` and an `AISegment_352_reduced` dataset in order to fit the sample data into SRAM. Is there a possibility to synthesize without all the KAT and sample data? I just want to use UNet-demo with the camera and don't care about the KAT for now. I tried `--no-kat`, `--synthesize-input`, `--synthesize-words`, and `--max-verify-length`, but when inspecting `izer/sampledata.py` I saw that I cannot enter these steps after `if shape[0] < 1 or shape[0] > 4:` anyway, as my input shape is `(48, 88, 88)`.
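For reference, the quoted guard boils down to this (a minimal restatement; the real function in `izer/sampledata.py` does much more):

```python
def can_synthesize_input(shape):
    """Mirror of the quoted check: the input-synthesis shortcuts
    only apply to inputs with 1..4 channels."""
    return 1 <= shape[0] <= 4

print(can_synthesize_input((3, 80, 80)))    # True  (e.g. an RGB camera input)
print(can_synthesize_input((48, 88, 88)))   # False (folded high-res input)
```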
The build error is the following (build log screenshot), but it's pretty clear the sample data won't fit: the numpy pickle I generated with `train.py` is 2.9 MB, which makes total sense given the 48x88x88x8bit dimensions. Edit: never mind, I was slightly confused because the file size is 2.9 MB, but when I load the pickle the array has size 371712, which does indeed fit into data memory.

I commented out `//#define USE_SAMPLEDATA // shows the sample data`.
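To double-check the numbers (my guess is that the pickle stores the sample as float64 before quantization, which would explain the on-disk size):

```python
elements = 48 * 88 * 88          # folded input: channels x height x width
print(elements)                  # 371712 values in the sample
print(elements * 8 / 1e6)        # ~2.97 MB if stored as float64 (my guess at
                                 # why the pickle is ~2.9 MB on disk)
print(elements / 1024)           # 363.0 KiB once quantized to 8 bits
```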