terryky / tflite_gles_app

GPU accelerated deep learning inference applications for RaspberryPi / JetsonNano / Linux PC using TensorflowLite GPUDelegate / TensorRT
MIT License
488 stars · 130 forks

Running gles headless #13

Open satyajitghana opened 3 years ago

satyajitghana commented 3 years ago

Hey Terry, Thanks for this awesome work !

One query: how do I run these examples headless, i.e. without any display attached to the nano?

terryky commented 3 years ago

I think it's up to your VNC environment whether the OpenGLES commands are correctly transferred to the other PC or not, isn't it?

satyajitghana commented 3 years ago

> I think it's up to your VNC environment whether the OpenGLES commands are correctly transferred to the other PC or not, isn't it?

No, I mean I want to run it via plain old SSH into the machine, with no VNC environment at all.

Will unset DISPLAY help? Or maybe export DISPLAY=:1 or something?

terryky commented 3 years ago

how about export DISPLAY=:0

satyajitghana commented 3 years ago

nope, doesn't really work on the nano

i've tried:

if [ -z "$DISPLAY" ]
then
       startx > /dev/null 2>&1 &
       export DISPLAY=:1.0
fi

For the above, the tflite models are loaded, but inference doesn't work: there is no output, and sometimes I get the error described in https://github.com/terryky/tflite_gles_app/issues/11

if [ -z "$DISPLAY" ]
        then
        DISPLAY=:0.0 glxinfo > /dev/null 2>&1
        if [ $? -ne 0 ]
        then
                echo "starting new x server at :0.0"
                startx > /dev/null 2>&1 &
        else
                echo "connecting to existing x server at :0.0"
        fi
        export DISPLAY=:0.0
else
        echo "already configured for x server at $DISPLAY"
fi

I've tried this as well: it works over SSH, but it needs a display connected to the nano. In that case I get the outputs on my SSH terminal while the window is created on the display connected to the nano, which is not really what I want.

Would it be possible to run the examples without any display connected to the nano? Like just SSH into the nano and run the GLES app.

Some references
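As an aside, the glxinfo probe in the scripts above can be done without any GL tooling: an X server running at :n creates a unix socket named Xn under /tmp/.X11-unix, so checking for that file answers the same "is a server up?" question. A small sketch, assuming the conventional Linux socket directory:

```python
import os

def x_server_running(display_num: int, socket_dir: str = "/tmp/.X11-unix") -> bool:
    """Return True if an X server appears to be listening on :display_num.

    X servers create a unix socket named "X<n>" in socket_dir; checking
    for it avoids shelling out to glxinfo just to probe for a server.
    """
    return os.path.exists(os.path.join(socket_dir, f"X{display_num}"))

if __name__ == "__main__":
    for n in (0, 1):
        state = "running" if x_server_running(n) else "not found"
        print(f":{n} -> {state}")
```

This only tells you a server exists, not that GLX/EGL contexts can be created on it, so it is a cheaper first check rather than a full replacement for glxinfo.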

terryky commented 3 years ago

I've added a new config named "headless". Could you try below ?

(ssh to jetson with X forwarding)
$ ssh -X jetson@192.168.1.1

(On jetson)
$ export DISPLAY=:10.0
$ cd ~/work/tflite_gles_app/gl2blazeface
$ make clean
$ make TARGET_ENV=headless
$ ./gl2blazeface
satyajitghana commented 3 years ago

> I've added a new config named "headless". Could you try below ?
>
> (ssh to jetson with X forwarding)
> $ ssh -X jetson@192.168.1.1
>
> (On jetson)
> $ export DISPLAY=:10.0
> $ cd ~/work/tflite_gles_app/gl2blazeface
> $ make clean
> $ make TARGET_ENV=headless
> $ ./gl2blazeface

Thanks, I will give it a try!

satyajitghana commented 3 years ago

Hi, an update on this,

I tried out the steps below:

(ssh to jetson with X forwarding)
$ ssh -X jetson@192.168.1.1

(On jetson)
$ export DISPLAY=:10.0
$ cd ~/work/tflite_gles_app/gl2blazeface
$ make clean
$ make TARGET_ENV=headless
$ ./gl2blazeface

and I added printf("%d\n", detection->num) and ran ./gl2blazeface -x to run on the example image. The model is loaded and it's running, but I get the number of detected faces as 0. Something to note: I don't have the X-forwarded session rendered at my end (this might be an issue on my side).

Is the GL window rendering required, either directly on the nano or as an X-forwarded session? I'd prefer not to use an X session forwarded over SSH, since the connection is slow.

I added printf for all the debug strings, so the tflite inference time is printed fine, but there are some issues, below:

terryky commented 3 years ago

Hmm. I don't understand why gl2blazeface doesn't work although gl2classification works well; they are almost the same implementation. Don't you have any OpenGL error logs?

> I'd prefer not to use an X session forwarded over SSH, since the connection is slow.

Yep, I agree with that. However, do you mean you don't want to see any OpenGL window? (You want to get the inference result as console text instead of an OpenGL visualization?)

satyajitghana commented 3 years ago

Yes, I only want the inference results on the console, not in a window! I'll modify the examples so that the debug strings and other info are pushed to the console.

No, I don't have any OpenGL error logs, just that the number of faces detected is always 0.

I think since the window is not showing up, maybe just a black screen is being sent to the model? I'm just guessing.
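One way to test that guess is to dump whatever buffer actually reaches the interpreter and check whether it is effectively a solid color. A minimal sketch of the check itself (how and where the frame is dumped is up to you; the variance threshold is an arbitrary assumption):

```python
import numpy as np

def looks_blank(frame: np.ndarray, tol: float = 1e-3) -> bool:
    """Heuristic: a frame whose pixel variance is ~0 is a solid color,
    e.g. the all-black input you'd get from an unrendered window."""
    return float(np.var(frame.astype(np.float32))) < tol

if __name__ == "__main__":
    black = np.zeros((128, 128, 3), dtype=np.uint8)
    noisy = np.random.default_rng(0).integers(0, 256, (128, 128, 3), dtype=np.uint8)
    print(looks_blank(black))   # a black frame is flagged as blank
    print(looks_blank(noisy))   # a real image is not
```

If the dumped input turns out blank, that would point at the GL preprocessing (and thus the missing window) rather than at the model itself.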

terryky commented 3 years ago

In the current implementation, OpenGL is required in order to preprocess the input images (resize, crop, affine transform, color conversion (YUYV->RGB)), and unfortunately the current implementation requires showing a window in order to use OpenGL.

terryky commented 3 years ago

FYI, this is a screen capture of gl2blazeface forwarded from a Jetson nano to a Windows PC. It works well. I used VcXsrv as the X server.

satyajitghana commented 3 years ago

hmm okay, thanks a lot !

Techieali commented 3 years ago

Hey @satyajitghana, @terryky, I am also trying to run trt_posnet in headless mode. Is it possible to do that?