Open Gold-TW opened 3 years ago
MonochromaticBMPGeneratorWithText.zip @tayler-king can you try to transform this to jxe? this JAR should be java 8 compatible. i don't know how to achieve it using the software you posted, as the documentation only describes the other way around :-/
ok, still working on a plain bash solution; as for java, i have no idea how to compile a .jxe container
https://gist.github.com/OneB1t/7b37e9ac769cec3589623a8cd11ec017
this is what i got now. it is not perfect, but eventually i will get there. if there is any bash master, feel free to help :-)
EDIT: i think it should be possible to run plain .jar files too, as there is DSItracer.jar and an lsd_d.sh file which somehow controls it. maybe that could also be a good idea, as java is much faster than bash
found on some random forum that it should be possible to run a .jar like this: j9.exe -classpath \JxeTest.jar -jcl:ppro11 com.test.Test
i will give it a try next time i'm in the car
You can definitely run just jar files, the jxe is just optimised for fast startup on boot. I haven't found any tool to convert jar 2 jxe compatible with this platform though.
If you want to build jar files of correct version for the old j9 take a look at the scripts in my patching framework https://gitlab.com/alelec/mib2-lsd-patching
by running this im able to show different picture on main screen and dashboard
export LD_LIBRARY_PATH=/eso/lib:/armle/lib:/root/lib-target
export IPL_CONFIG_DIR=/etc/eso/production
dmdt sc 4 -9
dmdt sb 0
loadandshowimage /mnt/app/eso/media/default.png
Looks great! I never did get an image sent to dash without taking over main screen too.
Also if you find a command line C program you'd like to use to make / convert images I've got a compiler setup that can build many Linux utilities from pkgsrc.
I've also got a copy of micropython compiled for mib if you want to script up things in python.
I'm happy with java if it goes well; if not, then micropython can also be nice for such data manipulation
Also is there a way to run any qnx in VMware/ virtual machine?
I did have a qnx VM at one point but it wasn't very helpful because it's x86, not arm like the Tegra so mib binaries don't run. It did not have java / j9 so couldn't test any of that.
I have Nvidia Jetson kits.. but I don't have QNX VMs similar to MIB to test things.
Let's focus on java for now it is good
If I cannot run it standalone I can probably patch LSD.jxe to contain my code and execute it somehow (maybe forever in loop)
@andrewleech Can you maybe help me with that micropython? I'm struggling with the correct format of that jar file, as it requires a java 6 jar file... In bash it is too slow :(
If you want to try java some more, use this copy: https://gitlab.com/alelec/mib2-lsd-patching/-/blob/main/ibm-java-ws-sdk-pxi3260sr4ifx.zip?ref_type=heads
Unzip it, then set the JAVA_HOME environment variable to that folder.
${JAVA_HOME}/bin/javac -source 1.2 -target 1.2 code.java
sorry, maybe i'm brain-damaged, but why is this happening? (JAVA_HOME was set) is there something else i need to do in order to run it?
Ah, that IBM java is for running on desktop Linux, not on MMX. I never found a javac that could be run on the unit.
Ok, so that is also out of the question.. It looks like i will need to get that bash script which generates the BMP file working... Still, i have some new knowledge which can be interesting
When you run dmdt dm 0 it will show all currently running "tables" on the unit. I believe that you can switch to any of them; the only problem is that it must be the same resolution as the VC is able to receive.
So close but yet so far.. 😄
Yep, you can switch the dash to the other "screens" or tables. I was able to switch it to the one that shows AA/CP, no worries... however it comes up black where the AA/CP content shows, because the tegra uses hardware video decoding to go straight to the monitor output.
root@mmx:/mnt/app/root> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/mnt/app/armle/usr/lib:/eso/lib:/mnt/app/root/lib-target:/eso/production
root@mmx:/mnt/app/root> /eso/bin/apps/dmdt gs
displaymanager reports the following system information:
number of displayables: 7
number of displays: 2
display 0:
name: display0
terminal: main
size: 1280 x 640
context id: 1
16 (DISPLAYABLE_HMI)
19 (DISPLAYABLE_MAPVIEWER)
display 1:
name: <error>
terminal: <error>
size: 0 x 0
context id: 70
33 (DISPLAYABLE_KOMBI_MAP_VIEW)
root@mmx:/fs/sdb0> /eso/bin/apps/dmdt gc
displaymanager knows 32 contexts:
ID: flags:
-----------------------------------
-8 | 1 | NONE
-----------------------------------
-123 (--)
-1 | 1 | NONE
-----------------------------------
-666 (--)
-2 | 1 | NONE
-----------------------------------
17 (DISPLAYABLE_REAR_VIEW_CAM)
-3 | 7 | PERSISTENT | REDRAW
-----------------------------------
51 (DISPLAYABLE_STREETVIEW)
18 (DISPLAYABLE_BROWSER)
33 (DISPLAYABLE_KOMBI_MAP_VIEW)
23 (DISPLAYABLE_MAP_JUNCTION_VIEW)
22 (DISPLAYABLE_MAP_3D_INTERSECTION_VIEW)
19 (DISPLAYABLE_MAPVIEWER)
16 (DISPLAYABLE_HMI)
-4 | 1 | PERSISTENT
-----------------------------------
47 (DISPLAYABLE_FBAS_1)
-5 | 1 | PERSISTENT
-----------------------------------
48 (DISPLAYABLE_FBAS_2)
-6 | 1 | PERSISTENT
-----------------------------------
49 (DISPLAYABLE_FBAS_3)
-10 | 1 | PERSISTENT
-----------------------------------
-125 (--)
-7 | 1 | PERSISTENT
-----------------------------------
-124 (--)
-9 | 1 | PERSISTENT | REDRAW | RELAYOUT
-----------------------------------
-2 (--)
0 | 1 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
1 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
19 (DISPLAYABLE_MAPVIEWER)
2 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
27 (DISPLAYABLE_AMI)
3 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
17 (DISPLAYABLE_REAR_VIEW_CAM)
5 | 3 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
50 (DISPLAYABLE_MAP_IN_MAP)
19 (DISPLAYABLE_MAPVIEWER)
6 | 3 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
22 (DISPLAYABLE_MAP_3D_INTERSECTION_VIEW)
19 (DISPLAYABLE_MAPVIEWER)
7 | 3 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
23 (DISPLAYABLE_MAP_JUNCTION_VIEW)
19 (DISPLAYABLE_MAPVIEWER)
8 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
43 (DISPLAYABLE_DIGITAL_VIDEOPLAYER_1)
9 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
18 (DISPLAYABLE_BROWSER)
17 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
56 (DISPLAYABLE_MIRRORLINK)
18 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
51 (DISPLAYABLE_STREETVIEW)
19 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
39 (DISPLAYABLE_GOOGLE_EARTH)
20 | 3 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
50 (DISPLAYABLE_MAP_IN_MAP)
39 (DISPLAYABLE_GOOGLE_EARTH)
21 | 3 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
22 (DISPLAYABLE_MAP_3D_INTERSECTION_VIEW)
39 (DISPLAYABLE_GOOGLE_EARTH)
22 | 3 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
23 (DISPLAYABLE_MAP_JUNCTION_VIEW)
39 (DISPLAYABLE_GOOGLE_EARTH)
12 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
26 (DISPLAYABLE_TV_TUNER)
13 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
31 (DISPLAYABLE_TV_VIDEOTEXT)
14 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
29 (DISPLAYABLE_TV_AUX1)
70 | 1 | NONE
-----------------------------------
33 (DISPLAYABLE_KOMBI_MAP_VIEW)
71 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
59 (DISPLAYABLE_EXTERNAL_SMARTPHONE)
26 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
23 (DISPLAYABLE_MAP_JUNCTION_VIEW)
27 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
22 (DISPLAYABLE_MAP_3D_INTERSECTION_VIEW)
root@mmx:/eso/bin/apps> /eso/bin/apps/dmdt gs
displaymanager reports the following system information:
number of displayables: 8
number of displays: 2
display 0:
name: display0
terminal: main
size: 1280 x 640
context id: 71
16 (DISPLAYABLE_HMI)
59 (DISPLAYABLE_EXTERNAL_SMARTPHONE)
display 1:
name: <error>
terminal: <error>
size: 0 x 0
context id: 70
33 (DISPLAYABLE_KOMBI_MAP_VIEW)
root@mmx:/eso/bin/apps> /eso/bin/apps/dmdt sc 1 71
root@mmx:/eso/bin/apps> /eso/bin/apps/dmdt gs
displaymanager reports the following system information:
number of displayables: 8
number of displays: 2
display 0:
name: display0
terminal: main
size: 1280 x 640
context id: 71
16 (DISPLAYABLE_HMI)
59 (DISPLAYABLE_EXTERNAL_SMARTPHONE)
display 1:
name: <error>
terminal: <error>
size: 0 x 0
context id: 70
33 (DISPLAYABLE_KOMBI_MAP_VIEW)
root@mmx:/eso/bin/apps> /eso/bin/apps/dmdt sc 4 71
root@mmx:/eso/bin/apps> /eso/bin/apps/dmdt gs
displaymanager reports the following system information:
number of displayables: 8
number of displays: 2
display 0:
name: display0
terminal: main
size: 1280 x 640
context id: 71
16 (DISPLAYABLE_HMI)
59 (DISPLAYABLE_EXTERNAL_SMARTPHONE)
display 1:
name: <error>
terminal: <error>
size: 0 x 0
context id: 71
16 (DISPLAYABLE_HMI)
59 (DISPLAYABLE_EXTERNAL_SMARTPHONE)
can you point me to where "dmdt sc 4 -9" came from? when i go through those tables i'm actually unable to tell why this specific command switches the VC to show what is on the infotainment, and for the same reason i don't know why "dmdt sb 0" behaves as shown before..
https://github.com/jilleb/mib2-toolbox/assets/320479/54403543-3612-4622-852d-35202b4bc31e
also, any idea if there is some other scripting language i can use to render the bmp file? or am i stuck with bash? if i can create a renderer, i think we can easily show any data from the MIB unit
That was a dump of some old notes, I'll have to look at them tomorrow to try to remember.
Definitely not stuck with bash, I do have micropython and have been looking for suitable libraries to write text out to bitmap. (It's different to regular python). I should be able to test / share something tomorrow on that front too.
Oh, yeah, you can install real python and other tools like png
from https://pkgsrc.mibsolution.one/
ok, and to install them does it mean just download and unpack to the sdcard (and add it to PATH to be able to run it from everywhere), or is there some other necromancy required? 😄 if there is working python i think we can quite easily step up the game and also try to steal the image from the tegra 😸
EDIT: those packages are awesome that will help quite a lot
python ready
so there is a new python prototype which should be fast enough to write and show a picture. for now each picture takes 500ms to become visible on the VC; i will try to mess with it to see if this is good or not https://gist.github.com/OneB1t/9a0154ebb27e976e1478e6a904ff1628
It renders 1280x640 monochrome BMP text in pure python without any dependencies. I will try to test it tomorrow in the car to see the real behavior
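The dependency-free approach described above comes down to emitting the two fixed BMP headers, a two-entry palette, and bottom-up bit-packed rows. A minimal sketch of such a writer (the function name and top-down row input are my own choices, not taken from the gist):

```python
import struct

def write_mono_bmp(path, pixels, width, height):
    """Write a 1-bit monochrome BMP. pixels is a list of top-down rows,
    each row a list of 0/1 ints (1 = white via the palette below)."""
    row_bytes = (width + 7) // 8          # bits packed 8 per byte
    pad = (-row_bytes) % 4                # each row padded to 4 bytes
    stride = row_bytes + pad
    data_size = stride * height
    offset = 14 + 40 + 8                  # file header + DIB header + 2-entry palette
    with open(path, "wb") as f:
        # BITMAPFILEHEADER: magic, file size, two reserved shorts, pixel offset
        f.write(struct.pack("<2sIHHI", b"BM", offset + data_size, 0, 0, offset))
        # BITMAPINFOHEADER: 1 plane, 1 bpp, no compression, 2 palette colors
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 1, 0,
                            data_size, 2835, 2835, 2, 0))
        f.write(struct.pack("<II", 0x00000000, 0x00FFFFFF))  # black, white
        for row in reversed(pixels):      # BMP stores rows bottom-up
            packed = bytearray(stride)
            for x, bit in enumerate(row):
                if bit:
                    packed[x >> 3] |= 0x80 >> (x & 7)
            f.write(packed)
```

At 1280x640 the row is exactly 160 bytes (already a multiple of 4), so the whole file is 62 + 102400 bytes, which is why pure-python generation can stay in the tens of milliseconds.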
Nice work with the python script! If that works out well, eventually we can turn it into a micropython one which has a benefit of being much easier to install - but for now full python is perfect for experimenting!
can you point me where "dmdt sc 4 -9" came from?
The graphics system is made up of "contexts" which are like separate screen pages. Each context can have multiple pieces of source image (did) overlayed on each other.
root@mmx:/fs/sdb0> /eso/bin/apps/dmdt gc
displaymanager knows 32 contexts:
ID: flags:
-2 | 1 | NONE
-----------------------------------
17 (DISPLAYABLE_REAR_VIEW_CAM)
-3 | 7 | PERSISTENT | REDRAW
-----------------------------------
51 (DISPLAYABLE_STREETVIEW)
18 (DISPLAYABLE_BROWSER)
33 (DISPLAYABLE_KOMBI_MAP_VIEW)
23 (DISPLAYABLE_MAP_JUNCTION_VIEW)
22 (DISPLAYABLE_MAP_3D_INTERSECTION_VIEW)
19 (DISPLAYABLE_MAPVIEWER)
16 (DISPLAYABLE_HMI)
-9 | 1 | PERSISTENT | REDRAW | RELAYOUT
-----------------------------------
-2 (--)
1 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
19 (DISPLAYABLE_MAPVIEWER)
3 | 2 | NONE
-----------------------------------
16 (DISPLAYABLE_HMI)
17 (DISPLAYABLE_REAR_VIEW_CAM)
I've snipped most of the ones out of the list above, just left some examples. So starting from the bottom:
dmdt sc is the "switch context" command, which takes a screen number and a context number. So dmdt sc 4 -9 means switch screen 4 (the dash) to context -9.
I can only presume loadandshowimage is hardcoded to load the image into context -9 ... or into display id -2, which context -9 appears to contain.
I don't really know how / what the dmdt sb command does, other than the official description: "Switch to the buffered context for a certain display".
If it works to keep the loaded image off the main screen though, it sounds perfect :-)
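For reference, the manual sequence above is easy to drive from the python prototype via subprocess. Everything here is an assumption-laden sketch: the binary paths, environment values, and screen/context IDs are copied from this thread and may differ between units, and the function names are hypothetical:

```python
import os
import subprocess

# Values taken from earlier messages in this thread -- treat as assumptions.
DMDT = "/eso/bin/apps/dmdt"
ENV = dict(os.environ,
           LD_LIBRARY_PATH="/eso/lib:/armle/lib:/root/lib-target",
           IPL_CONFIG_DIR="/etc/eso/production")

def dash_image_commands(image_path):
    """The manual sequence: point screen 4 (the dash) at context -9,
    swap display 0 to its buffered context, then load the image."""
    return [
        [DMDT, "sc", "4", "-9"],
        [DMDT, "sb", "0"],
        ["loadandshowimage", image_path],
    ]

def show_on_dash(image_path):
    # Run the three steps in order, failing loudly if any step errors out.
    for cmd in dash_image_commands(image_path):
        subprocess.run(cmd, env=ENV, check=True)
```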
But how is the VC screen 4 and not visible under displays? I can get the contexts and overlays, but this is just weird 😄. Also, the MIB is able to record video of AA/CP under the videosink option in gal. I will try to dig into gal.bin to see if there is some way to do just a screenshot.
I think it's screen 4 because it's not a "real" monitor, eg not hdmi / dvi screen etc. It's a custom comms set up to be that screen.
I'm pretty sure gal doesn't process the video in any way; the h264 video stream is the raw data coming from the phone, which is then decoded in the tegra gpu direct to the (real) display. That's how AA/CP works: they just send streaming video over the usb/wifi link to the unit to be displayed on screen.
do you know where one can "touch" the data for such a stream? (some file/uri/memory location) with python and some libs, i believe it would be possible to do a periodic screenshot from the h.264 stream and show it
maybe this process?
/sbin/devg-nvcapture -w 1280 -h 640 -o v -v 2 -p 15 -b 4 -t 250
In /etc/eso/production/gal.json you can enable a video dump, and it starts creating files in /tmp like AAPDumpVideoSink_1970-01-01_12-12-25.h264 which can be converted to mp4 and viewed in vlc etc.
It's certainly possible to hunt through a h264 raw file and extract frames in python (I've done it on a work project in the past), however the libraries typically used to do this are large C extensions, preferably ones with their own hardware decoding capabilities. I'm not sure how well the tegra would do trying to keep up with a pure software decoder, even if one could be compiled for it. I tried compiling x264 for it at one point but iirc that had a bunch of assembly functions in it that weren't ported to that arm chip. There would also be a lag / delay between the latest frames flushed to file before the python could read them in.
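The hunting-for-frames part can at least be prototyped in pure python: an Annex-B .h264 dump is just NAL units separated by 00 00 01 / 00 00 00 01 start codes, so finding frame boundaries needs no decoder (actually decoding the pixels is where the heavy C libraries come in). A rough sketch, not production-grade since rstrip could in principle eat legitimate trailing zero bytes of a payload:

```python
import re

def iter_nal_units(data):
    """Split an Annex-B H.264 byte stream on its start codes and yield
    (nal_type, payload) pairs. nal_type 5 is an IDR slice (a full key
    frame); 7 and 8 are the SPS/PPS parameter sets."""
    starts = [m.end() for m in re.finditer(b"\x00\x00\x01", data)]
    for i, start in enumerate(starts):
        end = starts[i + 1] - 3 if i + 1 < len(starts) else len(data)
        # drop the extra leading zero that 4-byte start codes leave behind
        nal = data[start:end].rstrip(b"\x00")
        if nal:
            yield nal[0] & 0x1F, nal
```

Scanning a dump for nal_type 5 would locate the key frames that a screenshot extractor would need to feed to a real decoder.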
Hmmm, I wonder what devg-nvcapture does
Run nvcapture daemon.
-p internal thread priority (default: %d)
-b capture buffer count, 4 - 32 (default: %d)
-t capture timeout in msec (default: %d)
-w display width (default: %d)
-h display height (default: %d)
-o use overlay, v for VIP, c for CSI, b for both, n for NONE (default: n)
-m file name to inject metadata
-u unadulterated CSI (no crop/zoom can be done)
-d disable force of CSI restart/reset on configure (default is to enable)
-v verbose level, 0 - 3 (default: %d)
it is running a 1280x640 "capture". that cannot be a coincidence; there is nothing else requiring this resolution in the MIB system 😄
i believe what is going on is that MIB2 is rendering some "table" to the main display which cannot be seen (it is drawn off-screen), and this nvcapture is actually creating the h.264 stream for MOST to be rendered from that location/table
unfortunately it does not work with the HW-decoded AA stream
EDIT:
this is my .profile to make winscp work and to be able to run dmdt without the full path
cat << "EOF" >> ~/.profile
export PATH=/bin:/proc/boot:/sbin:/usr/bin:/usr/sbin:/mnt/app/armle/bin:/mnt/app/armle/sbin:/mnt/app/armle/usr/bin:/mnt/app/armle/usr/sbin:/fs/sda0/Toolbox/scripts:/mnt/app/media/gracenote/bin:/mnt/app/pkg/bin:/mnt/app/pkg/sbin:/mnt/app/pkg/usr/bin:/mnt/app/pkg/usr/sbin:/mnt/app/root:/eso/bin/apps/
export LIBIMG_CFGFILE=/etc/imgprocessing.cfg
export LD_LIBRARY_PATH=/mnt/app/root/lib-target:/eso/lib:/mnt/app/usr/lib:/mnt/app/armle/lib:/mnt/app/armle/lib/dll:/mnt/app/armle/usr/lib:/mnt/app/pkg/lib:/mnt/app/pkg/usr/lib
export IPL_CONFIG_DIR=/etc/eso/production
PS1='${USER}@mmx:${PWD}> '
export PS1
mount -uw /mnt/app
mount -uw /mnt/system
export TERM='xterm-256color'
export PACKAGESITE=http://pkgsrc.mibsolution.one/
EOF
. ~/.profile
cd /
tar xvf /fs/sda0/mib_pkgsrc.tgz
# For reference, the above package includes the pkgsrc bootstrap install tools and the following requirements:
#pkg_add http://pkgsrc.mibsolution.one/packages.armle/pkgtools/pkg_install-info-4.5nb3.tgz
#pkg_add http://pkgsrc.mibsolution.one/packages.armle/All/gettext-lib-0.18.3.tgz
#pkg_add http://pkgsrc.mibsolution.one/packages.armle/textproc/gsed-4.2.2nb4.tgz
Great work. @andrewleech , do you know how to send data to the interface that shows the media info or the dashboard data that comes from the MIB? I believe we should be able to send more data, dsi.carkomb.DCAdditionalInfo shows the following data:
acceleration
averageConsumption
averageSpeed
batteryBoost
batteryCoolant
batteryLevel
batteryStateOfCharge
batteryTemperature
boostCoolant
boostLevel
boostPressure
chargingTimeLeft
combustorConsumption
compass
consumptionData
coolant
coolantTemperature
currentConsumption
date
deceleration
destinationArrivalTime
destinationTripTime
digitalSpeed
distance
drivingProfile
drivingTime
electricalConsumption
electricRange
energyFlow
engineData
fuelRange
gMeter
gpsHeight
hybrid
hybridBattery
intermediateArrivalTime
intermediateTripTime
lateralAcceleration
longTermData
oilPressure
oilTemperature
performance
phoneInfo
powermeter
powermeterAndTachometer
predictiveEfficiencyAssistant
routeGuidance
secondarySpeed
shiftUpIndication
shortTermData
slope
station
steeringAngle
tachometer
time
trafficSignDetection
tyrePressureMonitor
vehicleVoltage
wildcard
zeroEmission
routeGuidance should be the type of data we can send the navigation instructions to
Working automatic switching from python. this is slow but it works :-) now if we can get data into python somehow, it is possible to draw it automatically and show it in the navigation window of the VC https://gist.github.com/OneB1t/740653f17c6a37c27e8c6c82e7195ff1
https://github.com/jilleb/mib2-toolbox/assets/320479/2958fa44-49ca-444d-ab61-c85860bd8086
https://github.com/jilleb/mib2-toolbox/assets/320479/7b2a84cd-0c81-4c57-a893-f28de68b4926
https://github.com/jilleb/mib2-toolbox/assets/320479/d625360f-629a-4b98-9ecd-61b4ec68585a
Program log:
Time taken to generate BMP: 0.065000 seconds
Terminated
Time taken to generate BMP: 0.065000 seconds
Terminated
Time taken to generate BMP: 0.063000 seconds
Terminated
Time taken to generate BMP: 0.064000 seconds
Terminated
Time taken to generate BMP: 0.063000 seconds
Terminated
Time taken to generate BMP: 0.066000 seconds
Terminated
Time taken to generate BMP: 0.077000 seconds
Terminated
Time taken to generate BMP: 0.066000 seconds
Terminated
Time taken to generate BMP: 0.062000 seconds
Terminated
Time taken to generate BMP: 0.065000 seconds
Terminated
Time taken to generate BMP: 0.064000 seconds
Terminated
Time taken to generate BMP: 0.064000 seconds
Terminated
Time taken to generate BMP: 0.065000 seconds
Draw speed is decent, but "loadandshowimage" is not really up to the task (the MIB is able to load the picture really fast, but unfortunately it takes some time to get executed, as it is creating a new context every time).
but i think we can either directly call some lib.so functions from python to skip context creation (maybe do it just once, then swap images around)
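Until the context-creation cost is avoided, the brute-force loop behind the videos above can be sketched like this. The spawn argument is injected so the loop can be exercised off-target; the helper name, period, and output path are my own hypothetical choices:

```python
import subprocess
import time

def dash_refresh_loop(render, spawn=subprocess.Popen,
                      out="/tmp/vc.bmp", period=0.5, max_frames=None):
    """Render a fresh frame, terminate the previous loadandshowimage
    instance, and launch a new one. Each launch recreates a display
    context, which is where the per-frame latency comes from."""
    proc, shown = None, 0
    while max_frames is None or shown < max_frames:
        render(out)                       # user-supplied frame renderer
        if proc is not None:
            proc.terminate()              # drop the previous instance
        proc = spawn(["loadandshowimage", out])
        shown += 1
        time.sleep(period)
    return shown
```

Calling lib.so directly (or keeping one context alive and swapping textures) would remove the per-launch cost this loop pays.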
to show android auto turn-by-turn i believe we can try to use this https://developers.google.com/maps/documentation/navigation/android-sdk/tbt-feed
Interesting... I'd assumed any TBT data would have to come from gal, if it was collecting it at all, maybe sending it out over DSI or something... but yeah, looking at that page briefly, perhaps it's something that can be patched into java / lsd to add the calls to register for the additional data.
do you know how to send data to the interface that shows the media info or the dashboard data that comes from the MIB
I'm pretty sure I did go hunting at one point; I was thinking it should be easier to hijack that data stream in java and send our own / additional data. I think once I found the loadandshow application I stopped looking for existing data streams.
i think we can either directly call some lib.so functions from python to skip context creation
This sounded like a good idea to me too... for first pass I checked out the binary from my backups:
$ strings ./20210802_13.48_mib/eso/bin/apps/loadandshowimage
/usr/lib/ldqnx.so.2
glAttachShader
glBindAttribLocation
glGetShaderiv
glTexParameterf
glClear
glDeleteProgram
glUseProgram
glClearColor
glDeleteShader
glEnableVertexAttribArray
glGetProgramInfoLog
glBindTexture
glGetProgramiv
glCreateShader
glCreateProgram
glTexImage2D
glDrawArrays
glVertexAttribPointer
glLinkProgram
glGetShaderInfoLog
glShaderSource
glViewport
glGenTextures
glCompileShader
libEGL.so
eglInitialize
eglDestroyContext
eglCreateContext
eglMakeCurrent
eglSwapBuffers
eglDestroySurface
eglGetDisplay
eglTerminate
eglCreateWindowSurface
libnvrm.so
libnvrm_graphics.so
libnvos.so
libnvwsi.so
libnvcwm.so
libnvddk_2d_v2.so
libnvdc.so
libimg.so.1
img_write_file
img_lib_attach
img_load_file
libecpp-ne.so.4
libc.so.3
Error compiling shader:
!linked
Error linking program:
attribute vec4 vPosition;
attribute vec2 vTex;
varying vec2 tex;
void main()
{
tex = vTex;
gl_Position = vPosition;
}
precision mediump float;
uniform sampler2D s0;
varying vec2 tex;
void main()
{
vec4 clr = texture2D(s0, tex);
gl_FragColor = vec4 ( clr.rgb, 1.0 );
}
Error destroying native window
kdCreateWindow failed
kdSetWindowPropertycv (caption)
KD failed to set window size
kdSetWindowPropertyiv KD_WINDOWPROPERTY_SIZE failed
KD failed to realize window
kdRealizeWindow failed
EGL failed to obtain display
EGL failed to initialize display
EGL failed to obtain matching configuration
Error creating native window
EGL failed to create window surface
EGL failed to create context
EGL failed to make context/surface current
showscreen
ARGH -- unable to attach to "showscreen"...exiting now
/tmp/img.conf
[img_codec_png.so]
mime=image/png
ext=png
[img_codec_bmp.so]
mime=image/bmp
ext=bmp
[img_codec_gif.so]
mime=image/gif
ext=gif
[img_codec_jpg.so]
mime=image/jpg:image/jpeg
ext=jpg:jpeg
[img_codec_tga.so]
mime=image/tga
ext=tga
There is no description available for this file.
NAME=loadandshowimage
DESCRIPTION="no description available"
DATE=2016-03-30CEST-15:03:59
I've trimmed that list above down a bit.... but I think that should give the idea that just replacing this app is not going to be so straightforward.... it looks like it's full-on setting up nvidia graphics hardware shaders / surfaces to render the image onto. This is interesting, in that it looks like it must be a case of using the hardware graphics renderer to draw onto a context that's then sent to the dash... maybe it's technically possible for the AA/CP decoded stream to also be drawn onto a context that's sent to the screen in which case ghidra and pulling this app apart might be able to show how.... though that feels like it'll still be a lot of work. I'd definitely need a virtual cockpit screen on the bench to realistically get that going, I don't have much chance for dev time in the car these days.
Another thought, I just went searching for any other apps that include the call glTexImage2D
and a few interesting ones popped up
grep: ./20210802_13.48_mib/eso/bin/apps/browser: binary file matches
grep: ./20210802_13.48_mib/eso/bin/apps/coverflow: binary file matches
grep: ./20210802_13.48_mib/eso/bin/apps/displaymanager: binary file matches
grep: ./20210802_13.48_mib/eso/bin/apps/loadandshowimage: binary file matches
grep: ./20210802_13.48_mib/eso/bin/apps/media: binary file matches
grep: ./20210802_13.48_mib/eso/bin/apps/mirrorlink.real: binary file matches
grep: ./20210802_13.48_mib/eso/bin/apps/showimage: binary file matches
grep: ./20210802_13.48_mib/ifs/lsd.jxe: binary file matches
I'm a little lost with this DSI. Is it socket based? Can I use tcpdump/wireshark to see the traffic? I decompiled this dsitracer.jar but it is a very confusing application... Anyway, I think we have 4 main goals now:
Yes, this is definitely an "on bench" task. I'm also quite a newbie when it comes to working with ghidra or calling those compiled libs from a new app. Still, there are just a few functions inside loadandshow.
As loadandshow runs some OpenGL shader code, I believe it is actually running like a little 3d engine, just with a texture loaded from the image and drawn over it (and I think there could be a way to replace this image and therefore draw new data instantly)
I'm a little lost with this DSI. Is it socket based? Can I use tcpdump/wireshark to see the traffic?
Maybe??? tcpdump.zip
I decompiled this dsitracer.jar but it is very confusing application.
It's already loaded by the main MIB application I believe, enabled with /eso/hmi/engdefs/scripts/activateDSITracer.sh (also available in the green screen menu). That script just sets DSITRACER_ACTIVATED=yes in /eso/hmi/lsd/lsd.sh. Or do you already have that, hence the telnet port open you mentioned a couple of days ago?
As loadandshow is running some OpenGL shader code I believe it is actually running like little 3d engine just with texture loaded from image and drawn over (and I think there could be way to replace this image and therefore draw new data instantly)
Yeah, this stuff is definitely possible. it'd be much better to start loadandshow and then add an extra socket / hook in the middle to be able to pipe new images into it :-D. There's the other application, showimage, that I wonder about: can it do just part of the process faster, perhaps? I haven't seen or looked at it before at all, I don't think.
No, I abandoned that path :-) chasing a different tail. It just looked to me that inside this dsitracer you can enable a ctrl plugin and then communicate with it over telnet. I kind of expected that somebody had already messed with that before 😄
What about running the browser and letting it show some texture? Maybe that can also be a fast and viable option, if it is shown on the virtual cockpit the same way as loadandshow
I will ask chatgpt tomorrow if it can do a modification like that and create a binary patch. it could be quite simple for AI 😄
like this ghidra decompile:
uint FUN_001018f8(undefined4 param_1)
{
  int iVar1;
  undefined4 uVar2;
  undefined4 uVar3;
  undefined4 uVar4;
  uint uVar5;
  void *__ptr;
  undefined auStack536 [276];
  undefined auStack260 [208];
  undefined auStack52 [12];
  size_t local_28;
  size_t local_24;
  FUN_001021fc(0);
  FUN_001021a8(auStack52);
  iVar1 = FUN_001022ac(auStack52,param_1);
  if (iVar1 == 0) {
    printf("error loading %s\n",param_1);
  }
  glGenTextures(1,&DAT_00103ef8);
  glBindTexture(0xde1,DAT_00103ef8);
  uVar2 = FUN_001021bc(auStack52);
  uVar3 = FUN_001021c4(auStack52);
  uVar4 = FUN_001021cc(auStack52);
  glTexImage2D(0xde1,0,0x1907,uVar2,uVar3,0,0x1907,0x1401,uVar4);
  glTexParameterf(0xde1,0x2802,0x47012f00);
  glTexParameterf(0xde1,0x2803,0x47012f00);
  glTexParameterf(0xde1,0x2800,0x46180400);
  glTexParameterf(0xde1,0x2801,0x46180400);
  FUN_001021d4();
  memcpy(auStack260,
         "attribute vec4 vPosition; \nattribute vec2 vTex; \nvarying vec2 tex;\nvoid main() \n{ \n\t\ttex = vTex;\n gl_Position = vPosition; \n} \n"
         ,0xd0);
  memcpy(auStack536,
         "precision mediump float;\nuniform sampler2D s0;\nvarying vec2 tex;\nvoid main() \n{ \n\tvec4 clr = texture2D(s0, tex);\n gl_FragColor = vec4 ( clr.rgb, 1.0 );\n} \n"
         ,0x114);
  uVar2 = FUN_00101820(0x8b31,auStack260);
  uVar3 = FUN_00101820(0x8b30,auStack536);
  uVar5 = glCreateProgram();
  if (uVar5 == 0) {
    printf("programObject == 0\n");
  }
  else {
    glAttachShader(uVar5,uVar2);
    glAttachShader(uVar5,uVar3);
    operator.new[](uVar5);
    operator.new[](uVar5);
    glLinkProgram(uVar5);
    glGetProgramiv(uVar5,0x8b82,&local_24);
    if (local_24 == 0) {
      printf("!linked\n");
      local_28 = local_24;
      glGetProgramiv(uVar5,0x8b84);
      if (1 < (int)local_28) {
        __ptr = malloc(local_28);
        glGetProgramInfoLog(uVar5,local_28,0,__ptr);
        fprintf((FILE *)&_Stderr,"Error linking program:\n%s\n",__ptr);
        free(__ptr);
      }
      glDeleteProgram(uVar5);
      uVar5 = 0;
    }
    else {
      DAT_00103eec = uVar5;
      glClearColor(0,0,0,0);
      uVar5 = 1;
    }
  }
  FUN_0010228c(auStack52);
  return uVar5;
}
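A side note on the odd glTexParameterf arguments: the decompiler shows the raw IEEE-754 bit patterns of the float parameters instead of the values. Decoding them recovers ordinary GL enums, which can be checked with a few lines of Python:

```python
import struct

def bits_to_float(bits):
    """Reinterpret a 32-bit pattern as an IEEE-754 float (as the decompiler shows it)."""
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(int(bits_to_float(0x47012F00)))  # prints: 33071  (0x812F == GL_CLAMP_TO_EDGE)
print(int(bits_to_float(0x46180400)))  # prints: 9729   (0x2601 == GL_LINEAR)
```

So the calls set 0x2802/0x2803 (GL_TEXTURE_WRAP_S/T) to GL_CLAMP_TO_EDGE and 0x2800/0x2801 (GL_TEXTURE_MAG_FILTER/GL_TEXTURE_MIN_FILTER) to GL_LINEAR, which is exactly the standard "display one full-screen texture" setup.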
after the AI intervention:
uint LoadTextureAndShader(undefined4 filename) {
  int loadResult;
  undefined4 vertexShaderID;
  undefined4 fragmentShaderID;
  undefined4 textureWidth;
  undefined4 textureHeight;
  undefined4 textureData;
  undefined4 programID;
  uint success;
  void *logInfo;
  undefined vertexShaderSource[208];
  undefined fragmentShaderSource[276];
  undefined stackData[12];
  size_t logInfoSize;
  size_t logInfoLength;
  InitializeLibrary();
  GetShaderSource(stackData);
  loadResult = LoadFileAndCompileShader(stackData, filename);
  if (loadResult == 0) {
    printf("Error loading image from file: %s\n", filename);
    return 0;
  }
  glGenTextures(1, &textureID);
  glBindTexture(GL_TEXTURE_2D, textureID);
  // Upload the decoded image and set texture parameters
  textureWidth = GetTextureWidth(stackData);
  textureHeight = GetTextureHeight(stackData);
  textureData = GetTextureData(stackData);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textureWidth, textureHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, textureData);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  CleanUpLibrary();
  // Load shader source code
  memcpy(vertexShaderSource, /* vertex shader source */, 0xD0);
  memcpy(fragmentShaderSource, /* fragment shader source */, 0x114);
  // Compile shaders
  vertexShaderID = CompileShader(GL_VERTEX_SHADER, vertexShaderSource);
  fragmentShaderID = CompileShader(GL_FRAGMENT_SHADER, fragmentShaderSource);
  // Create shader program
  programID = glCreateProgram();
  if (programID == 0) {
    printf("Failed to create shader program\n");
    return 0;
  }
  // Attach shaders to the program and link it
  glAttachShader(programID, vertexShaderID);
  glAttachShader(programID, fragmentShaderID);
  glLinkProgram(programID);
  glGetProgramiv(programID, GL_LINK_STATUS, &success);
  if (!success) {
    glGetProgramiv(programID, GL_INFO_LOG_LENGTH, &logInfoSize);
    if (logInfoSize > 1) {
      logInfo = malloc(logInfoSize);
      glGetProgramInfoLog(programID, logInfoSize, &logInfoLength, logInfo);
      fprintf(stderr, "Error linking shader program:\n%s\n", logInfo);
      free(logInfo);
    }
    glDeleteProgram(programID);
    return 0;
  }
  shaderProgramID = programID;
  glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
  return 1;
}
the showscreen part became interesting, as this is the code inside loadandshow:
int RunApplication(int argc, char *argv[]) {
  int result = 0;
  uint shaderInitializationResult;
  int nameAttachResult;
  int nameOpenResult;
  struct timespec sleepTime;
  undefined shaderParams[16];
  // Check the value of argc
  if (argc < 3) {
    // Initialize the EGL context
    result = InitializeEGLContext();
    // Check if EGL initialization failed or shader loading failed
    if (result != 0 || (shaderInitializationResult = LoadAndInitializeTextureShader(argv[1])) == 0) {
      result = 1;
    } else {
      // Loop and draw the scene multiple times
      result = 5;
      do {
        result = result - 1;
        drawScene();
      } while (result != -1);
      // Attach to a named resource "showscreen"
      nameAttachResult = name_attach(0, "showscreen", 0);
      if (nameAttachResult == 0) {
        printf("ERROR: Unable to attach to \"showscreen\"...exiting now\n");
      } else {
        // Receive a message from the named resource
        MsgReceive(nameAttachResult + 4, shaderParams, 0x10, 0);
      }
      // Delete the shader program and clean up EGL and NVIDIA
      DeleteShaderProgram();
      CleanupEGLAndNV();
      result = 0;
    }
  } else {
    // Sleep for 2 seconds using nanosleep
    sleepTime.tv_nsec = 0;
    sleepTime.tv_sec = 2;
    nanosleep(&sleepTime, (struct timespec *)0);
    // Open a named resource "showscreen"
    nameOpenResult = name_open("showscreen", 0);
    if (nameOpenResult == 0) {
      printf("ERROR: Unable to open \"showscreen\"...exiting now\n");
    } else {
      // Send a pulse to the named resource
      MsgSendPulse(nameOpenResult, 0xffffffff, 1, 0xd5);
      // Close the named resource
      name_close(nameOpenResult);
      result = 0;
    }
  }
  return result;
}
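For anyone unfamiliar with the QNX side of this: name_attach()/MsgReceive() make the first instance a tiny server, and a later invocation just looks the name up with name_open() and wakes it with MsgSendPulse(). Those APIs only exist on QNX, so here is a rough off-target sketch of the same "first instance serves, second instance signals" control flow using a plain localhost socket (the port number and the one-byte "pulse" are made up for the illustration):

```python
# Sketch only: stands in for QNX name_attach()/MsgSendPulse() with a TCP socket.
import socket
import threading
import time

PORT = 47123  # arbitrary stand-in for the registered name "showscreen"

def serve_showscreen(result):
    """First instance: 'attach' the name and block until a pulse arrives."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))    # ~ name_attach(0, "showscreen", 0)
    srv.listen(1)
    conn, _ = srv.accept()
    result["pulse"] = conn.recv(1)   # ~ MsgReceive(): wait for the wake-up
    conn.close()
    srv.close()

def send_pulse():
    """Second instance: look up the name, send one pulse byte, and exit."""
    with socket.create_connection(("127.0.0.1", PORT)) as cli:  # ~ name_open()
        cli.sendall(b"\xd5")         # ~ MsgSendPulse(..., 0xd5)

result = {}
t = threading.Thread(target=serve_showscreen, args=(result,))
t.start()
time.sleep(0.2)  # crude: give the 'server' time to bind first
send_pulse()
t.join()
print(result["pulse"])  # prints: b'\xd5'
```

This matches what the decompile shows: run with fewer than 3 arguments it displays the image and waits; run again with more arguments it sends the 0xd5 pulse that makes the first instance shut down.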
also, this part can probably be used to draw in a while loop:
// Loop and draw the scene multiple times
result = 5;
do {
  result = result - 1;
  drawScene();
} while (result != -1);
i modified loadandshowimage from this
to this
if my idea is correct (and this is really hard for me, I'm not used to doing assembly 😄), it should run LoadAndInitializeTextureShader forever, and as loadImg is part of that function it should basically load new images in a while(true) loop (we may need to add some sleep call later).
Later, when I have some free time, I will try to check it in the car. Until then, can somebody check if my assembly can work? loadandshowimage-patched.zip
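The control flow the patch is aiming for can be sketched in a few lines of Python (illustration only: load_image and draw_scene are hypothetical stand-ins for the binary's loadImg/drawScene, and max_frames exists purely so the sketch terminates instead of looping forever):

```python
import time

def show_forever(path, load_image, draw_scene, max_frames=None):
    """Reload the image and redraw in a while(true) loop, with a small sleep."""
    frames = 0
    while True:
        texture = load_image(path)   # re-read the PNG so updated pixels appear
        draw_scene(texture)
        frames += 1
        time.sleep(0.05)             # the "sleep" mentioned above, so the
                                     # GPU and flash aren't hammered
        if max_frames is not None and frames >= max_frames:
            return frames            # test-only escape hatch

# Stub example: three simulated frames against dummy load/draw functions
drawn = []
n = show_forever("default.png", lambda p: p, drawn.append, max_frames=3)
print(n, len(drawn))  # prints: 3 3
```

The point is that the image reload has to be inside the loop; redrawing the same cached texture forever would never show new data.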
Oh neat, I forgot how impressive ghidra is with modifications, that it basically lets you just modify it in C syntax. Looks like a great idea, hope it works out!
More like you just do random things in the assembly window and see how it plays out as decompiled code 🤣
did anyone manage to run "browser"? maybe that can be used to render some data 👿 anyway, today I loaded my modification and it crashed spectacularly, so back to the drawing board 🤦
also, when running dmdt dm 0 on while Android Auto is active, it is clearly visible that AA/CP is not part of the display manager windows; it is drawn over them...
EDIT: hmm, what about running an OpenGL window from Python? As windows are created by both Java and compiled binaries, it should be possible to open a window from Python too, right?
EDIT2: somebody experimented with renderer settings inside GAL.JSON (Google Automotive Link) "renderer":{
"displayableID":59,
I haven't tried that yet. I did try to find the channel where data is shared with the clocks of the dash, like media information... if I can send a custom string there, we might not even need to generate graphics on the fly to display text.
EDIT2: somebody experimented with renderer settings inside GAL.JSON (Google Automotive Link) "renderer":{
I played a lot with it, but never with any successful outcome.
maybe we can make use of the BAP protocol. I let ChatGPT create a client for it:
import socket

# BAP protocol constants
BAP_PORT = 5000      # The BAP protocol default port
BUFFER_SIZE = 1024   # Size of the receive buffer

# BAP protocol commands
BAP_COMMAND_GET = "GET"  # Example command to request data
BAP_COMMAND_SET = "SET"  # Example command to set data

# BAP protocol data
BAP_DATA_TEMPERATURE = "TEMP"  # Example data identifier for temperature

def send_bap_command(command, data):
    # Establish a connection to the BAP server
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        server_address = ('localhost', BAP_PORT)  # Update with the correct server address
        sock.connect(server_address)
        # Construct the BAP message
        message = f"{command}:{data}\n"
        # Send the BAP message to the server
        sock.sendall(message.encode())
        # Receive the response from the server
        response = sock.recv(BUFFER_SIZE).decode().strip()
        # Print the response
        print(f"Response: {response}")

# Example usage
send_bap_command(BAP_COMMAND_GET, BAP_DATA_TEMPERATURE)
send_bap_command(BAP_COMMAND_SET, "25.5")  # Set the temperature to 25.5 degrees
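To be clear about what that snippet can and cannot do: the COMMAND:DATA line format and TCP port 5000 are ChatGPT's invention, not the real BAP (which, as far as I know, is a binary protocol on the vehicle bus), so at best this exercises the socket plumbing. A throwaway local echo server for exactly that purpose (port number is arbitrary):

```python
import socket
import threading
import time

def serve_one(seen, port=50007):
    """Accept one connection, record the request, echo it back with an OK prefix."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    msg = conn.recv(1024).decode().strip()   # e.g. "GET:TEMP"
    seen.append(msg)
    conn.sendall(f"OK:{msg}\n".encode())
    conn.close()
    srv.close()

def send_bap(command, data, port=50007):
    """Minimal copy of the client logic above, returning the response."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(f"{command}:{data}\n".encode())
        return sock.recv(1024).decode().strip()

seen = []
t = threading.Thread(target=serve_one, args=(seen,))
t.start()
time.sleep(0.2)  # let the server bind before the client connects
reply = send_bap("GET", "TEMP")
t.join()
print(reply)  # prints: OK:GET:TEMP
```

Until somebody confirms where (or whether) a BAP endpoint is actually reachable on the unit, this is the only way to test the client.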
Yes, this can be used if you want to show some data, but I wanted to test with rendering to be able to also send CarPlay/Android Auto later. Having a blank file as a start gave us the possibility to also render graphs and all kinds of graphic elements.
for AA/CP: I know that "gal" is responsible for taking the h.264 stream and decoding it to show on screen. There is also a videosink option for gal which will save it as an h.264 file. If we can somehow make it save a screenshot of that stream, then loadandshowimage can be used to show AA on the VC.
I still struggle to find out where this AA/CP stream comes from: whether it is just some memory location or there is a file/URI where the stream is received.
i reached another dead end... I wanted to write Python bindings for the libEGL/libGLESv2 libraries to spawn a custom window, but my Python decided that it does not like "import ctypes". Any idea how to fix that?
I guess it's related to https://bugs.python.org/issue11048
https://user-images.githubusercontent.com/320479/273448530-9b56190f-3d79-4d46-a6ee-3950cac2674a.png
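For reference, once import ctypes works, binding libEGL/libGLESv2 would follow the usual ctypes recipe. A sketch using libm as a stand-in, since the EGL library name/path on the MIB target is an untested assumption (shown only in a comment):

```python
import ctypes
import ctypes.util

# Load a shared library by name. On the head unit this would be something like
# ctypes.CDLL("libEGL.so") from /eso/lib or /armle/lib (assumption, untested).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature so ctypes converts arguments and return values
# correctly; the same restype/argtypes step would be needed for
# eglGetDisplay(), eglCreateWindowSurface() and friends.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

value = libm.cos(0.0)
print(value)  # prints: 1.0
```

The pattern itself is trivial; the blocker on the unit is that the ctypes extension module fails to import at all, which the linked CPython bug suggests is a libffi build issue rather than anything in user code.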
ok, that can maybe help. I'll try to remount the system as RW (do you know which parts must be RW?)
also, i was kind of able to run browser, but it says "Killed" about 2 seconds after running "browser --standalone". I believe that could be a way to make rendering easy, because before it is killed it shows a new window on the "dmdt dm 0 on" screen (the debug screen for display 0).
https://github.com/jilleb/mib2-toolbox/assets/320479/44fd8d29-8958-47ea-88cd-58e6d011d088
EDIT: hmm, as it says "Killed" and there is no such string inside the browser binary, I believe it is externally killed by some watchdog running on the HMI. So I will maybe try renaming the executable first to see if this autokill still triggers.
I wonder if this idea can be implemented: a virtual cockpit using CarPlay output to show Google Maps or Apple Maps?
Or can Google Earth be installed on MIB2 High?
Thanks for the suggestion.