Open Gold-TW opened 3 years ago
I think I did get browser running / showing at one point.... maybe from green screen or something. Don't quite remember for sure though. "Killed" is often caused by out of memory too, might have to keep an eye on free ram use. I'm also not sure how / if it's possible to launch an app directly onto a different context so it only shows on the dash, or whether it would need to be loaded onto main screen first then switched.
Shame about ctypes, that's annoying. I think I got the ffi interface working in micropython, which kind of lets you do the same thing, check the example here: https://github.com/micropython/micropython/blob/master/examples/unix/ffi_example.py
Here's a "desktop linux" and "mib (qnx)" copy of micropython for testing:
linux_x64_ff20a220.zip
qnx_mib_ff20a220.zip
> I think I did get browser running / showing at one point.... maybe from green screen or something. Don't quite remember for sure though. "Killed" is often caused by out of memory too, might have to keep an eye on free ram use. I'm also not sure how / if it's possible to launch an app directly onto a different context so it only shows on the dash, or whether it would need to be loaded onto main screen first then switched.
I think it wants to start a new context (you can see a new window in my experiments). It looks promising, as the same applies to loadandshowimage, which I'm able to show only on the VC with the "dmdt sb 0" trick. But it immediately closes... if it is memory related, is there anything I can do? Killing most of the running apps simply reboots the unit. Regarding micropython, I will try and see.
Anyway, let's summarize my findings:
- "dmdt sc 4 -9" switches the VC to show the same context as the main display (I think it is a bug or default behavior, because no window is selected)
- "dmdt sc 4 70" switches the VC to show DISPLAYABLE_KOMBI_MAP_VIEW (puts it back to NAV)
- "dmdt sb 0" is able to rotate the buffer of a window (kind of alt+tab)
- the VC is able to show an 800x480 window but it looks like it cannot show 1280x640 windows
- for an unknown reason MOST is actually display 4, not display 1 as "dmdt gs" shows
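For experimenting with these commands from a script, they can be wrapped in a tiny helper. This is only a sketch: it assumes dmdt is on PATH on the unit, and the display/context IDs are the ones observed in this thread, which may differ per unit/firmware.

```python
import subprocess

def dmdt_cmd(*args):
    """Build the argv for a dmdt subcommand, e.g. dmdt_cmd("sc", 4, 70)."""
    return ["dmdt", *[str(a) for a in args]]

def dmdt(*args):
    """Run a dmdt subcommand on the unit and return its stdout as text."""
    return subprocess.run(dmdt_cmd(*args), capture_output=True, text=True).stdout

# Examples based on the findings above (IDs as observed on this unit):
# dmdt("sc", 4, 70)  -> point the VC (display 4) at DISPLAYABLE_KOMBI_MAP_VIEW
# dmdt("sb", 0)      -> rotate the buffer of window 0 (kind of alt+tab)
# dmdt("gd")         -> list the known displayables
```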
These are the available windows inside the system:
root@mmx:/mnt/app/root> dmdt gd
displaymanager knows 5 displayables:
ID:  type:    size:        #buffer:  dsi-name (guessed using ID & dsi 2.11.27):
-------------------------------------------------------------------------------
16   window   1280 x 640   2         DISPLAYABLE_HMI
19   window   1280 x 640   2         DISPLAYABLE_MAPVIEWER
33   window    800 x 480   2         DISPLAYABLE_KOMBI_MAP_VIEW
22   window    285 x 276   2         DISPLAYABLE_MAP_3D_INTERSECTION_VIEW
23   window    285 x 276   2         DISPLAYABLE_MAP_JUNCTION_VIEW
By default the configuration looks like this:
root@mmx:/mnt/app/root> dmdt gs
displaymanager reports the following system information:
number of displayables: 5
number of displays: 2
display 0:
    name: display0
    terminal: main
    size: 1280 x 640
    context id: 0
    16 (DISPLAYABLE_HMI)
display 1:
    name: <error>
    terminal: <error>
    size: 0 x 0
    context id: 70
    33 (DISPLAYABLE_KOMBI_MAP_VIEW)
So by running "dmdt dc 99 18" and then "dmdt sc 4 99" it should be possible to point MOST to show "18 (DISPLAYABLE_BROWSER)".
Now the main goal for me is to be able to run the browser somehow. An alternative approach would be to rewrite "loadandshowimage" using micropython + ffi to be able to create/manipulate OpenGL windows.
With the h.264 AA stream I'm lost now, as the GAL binary is huge and there is some weird mechanism of taking packets from the USB driver, running them through a decoder and showing them inside "DISPLAYABLE_EXTERNAL_SMARTPHONE". The attack vector (videosink) for that is probably too complicated for me to change into periodic screenshots :(
You can run the browser stand alone on the unit. Have developer mode enabled and go into the test mode menu. In there you can run an instance of the browser.
Also the browser is used to render the menu of online services. Maybe you can get some information out of those libraries on how to use it.
@OneB1t , it's possible to change the Android Auto context in gal.json to something else. I've tried it once, without luck, but maybe there's something there.
> You can run the browser stand alone on the unit. Have developer mode enabled and go into the test mode menu. In there you can run an instance of the browser.
> Also the browser is used to render the menu of online services. Maybe you can get some information out of those libraries on how to use it.
The settings for the browser are here (for my unit at least): \app\eso\browser\configs\VW_EU_G13
As it looks like this browser is able to run javascript, we can make it reload an image from the /tmp folder and render the data that way. I will test what can be done with the browser later, but I have high hopes :)
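A minimal sketch of the kind of page that idea implies - plain HTML/JS written out by a small python helper. The /tmp/render.bmp path, the 1-second interval, and the browser actually loading local file paths are all assumptions here:

```python
# Sketch: generate a page the unit's browser could load to repeatedly
# re-fetch an image rendered into /tmp. The cache-busting query string
# (Date.now()) forces a fresh fetch each interval.
PAGE = """<!DOCTYPE html>
<html><body style="margin:0;background:#000">
<img id="frame" src="/tmp/render.bmp" style="width:100%">
<script>
  setInterval(function () {
    document.getElementById('frame').src = '/tmp/render.bmp?' + Date.now();
  }, 1000);
</script>
</body></html>
"""

def write_page(path="/tmp/render.html"):
    """Write the auto-refreshing viewer page and return its path."""
    with open(path, "w") as f:
        f.write(PAGE)
    return path
```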
@jilleb I think what could be interesting is to switch the renderer, as some renderers may draw correctly into OpenGL windows, and then the result could be used in the VC:
# preferred renderer (defaults to lowest available if preferred is not available)
# 0 - default
# 1 - nvss
# 2 - nvmedia
# 3 - qc
# 4 - cinemo
"type":0,
To be able to run the browser, this needs to be activated :)
Control unit: 5F Information Control Unit
Adaptations:
Menu entries for mobile online services:
Not activated -> activated
I decided to focus on the parts I have working, so there is a new repository with development of the python script: https://github.com/OneB1t/VcMOSTRenderMqb
I expect to get the data to render from the exlap channel. It should be quite a simple integration over websocket; I think it is possible to write a websocket client in python without any libraries (I did it before in LUA for a different project).
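A library-free websocket client in python mostly boils down to the RFC 6455 handshake plus frame (un)masking. Below is a sketch of exactly those pieces; the host/port and everything EXLAP-specific (like the login exchange mentioned later in this thread) are left out:

```python
import base64, hashlib, os, struct

# RFC 6455 magic GUID used to compute the Sec-WebSocket-Accept value.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def handshake_request(host, port, path="/"):
    """Build the HTTP upgrade request; returns (request_bytes, key)."""
    key = base64.b64encode(os.urandom(16)).decode()
    req = ("GET %s HTTP/1.1\r\nHost: %s:%d\r\n"
           "Upgrade: websocket\r\nConnection: Upgrade\r\n"
           "Sec-WebSocket-Key: %s\r\nSec-WebSocket-Version: 13\r\n\r\n"
           % (path, host, port, key))
    return req.encode(), key

def expected_accept(key):
    """Sec-WebSocket-Accept the server must echo back for our key."""
    return base64.b64encode(
        hashlib.sha1((key + WS_GUID).encode()).digest()).decode()

def encode_text_frame(payload):
    """Encode a client->server text frame (clients must mask payloads)."""
    data = payload.encode()
    mask = os.urandom(4)
    head = b"\x81"  # FIN + text opcode
    n = len(data)
    if n < 126:
        head += bytes([0x80 | n])
    elif n < 65536:
        head += bytes([0x80 | 126]) + struct.pack(">H", n)
    else:
        head += bytes([0x80 | 127]) + struct.pack(">Q", n)
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(data))
    return head + mask + masked

def decode_frame(buf):
    """Decode one unmasked server->client frame: (opcode, payload)."""
    opcode = buf[0] & 0x0F
    n = buf[1] & 0x7F
    off = 2
    if n == 126:
        n = struct.unpack(">H", buf[2:4])[0]; off = 4
    elif n == 127:
        n = struct.unpack(">Q", buf[2:10])[0]; off = 10
    return opcode, buf[off:off + n]
```

After sending the handshake over a plain socket and checking the accept value, frames can be exchanged with the two codec functions.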
New info: the valid resolution for the MOST stream is 800x480. There is no accessible framebuffer on this device, so a direct screenshot is not possible :( Still, it is possible to use "dumper" to dump memory data for some processes. The exlap websocket running on 25010 is somehow weird; I'm unable to connect to it via Postman or my dummy websocket client implementation, but when this eventually works it can finally show some useful data.
Exlap session needs to be opened first, by logging in. I made a client for it in Python a few years ago, I'll have to look in my archives
I'm now thinking: what about parsing some log file of GAL and trying to get data from there? (I'm not sure if AA is sending anything other than the video stream, but since some cars also have an integration for a second display, maybe this type of data is also available here - just not used, but still logged.)
PS: the integrated nav is so bad compared to google maps/waze etc...
Also, can someone help me with cross-compiling C code for this QNX system? It looks like these guys somehow managed it: https://github.com/h4570/qnx-audi or https://github.com/ManuelRodM/QNX_hello_triangle
> Exlap session needs to be opened first, by logging in. I made a client for it in Python a few years ago, I'll have to look in my archives
@jilleb Did you find that software? :)
A simple question about this project:
Is it possible to change the map style and UI in VC without duplicating the main screen? Or load the Audi Map style with the Audi GPS indication UI instead of the VW UI?
I think changing the map style would be a good step to improve the user experience with the built-in GPS.
Or replace the UI in the VC with the Waze / Google Maps UI... just an idea.
Sorry if this bothers you.
> Exlap session needs to be opened first, by logging in. I made a client for it in Python a few years ago, I'll have to look in my archives
> @jilleb Did you find that software? :)
I found a bunch of scraps :) https://www.dropbox.com/scl/fo/y109ggwew68pu5rdtb8dr/h?rlkey=qtmwd1ifh9xdnsewpqb53u7ag&dl=0
> Is it possible to change the map style and UI in VC without duplicating the main screen? Or load the Audi Map style with the Audi GPS indication UI instead of the VW UI?
> I think changing the map style would be a good step to improve the user experience with the built-in GPS.
Yes, changing map styles is possible, but it's a lot of work.
I have very little time for messing with the VC, but I found this: https://github.com/edub0/EXLAP/blob/main/samples/example%20exlap%20cmds.xml
I also found the QNX SDK 6.5.0, but for now I'm unable to cross-compile to the armle target using qcc :( Any ideas how to manage that?
I've got a cross-compile environment set up (it took years to get it working properly) so let me know if you've got C code you'd like to compile. Because it's still based on proprietary qnx sdk 6.5.0 I can't share it publicly :-( (I hate closed source crap like that)
I wanted to compile this: https://github.com/ManuelRodM/QNX_hello_triangle and later modify it to periodically load and show my .bmp file (or directly render with C code, but python is now much faster for prototyping than C :))
I've seen that the cross-compile actually uses GCC 4.2.2, but in my case I'm completely missing a gcc for the armle target.
> i wanted to compile this https://github.com/ManuelRodM/QNX_hello_triangle
I tried to compile this but the qnx 6.5.0 environment I've got was missing some of the headers & libraries it needs. I've got <EGL/egl.h> but not <screen/screen.h> or <GLES3/gl3.h>. I'm not sure if they're just from newer qnx platforms.
I think it's a great idea though, so I started hunting.
Found this official opengl text example for qnx 6.6.0: it uses the same screen.h
https://www.qnx.com/developers/docs/6.6.0.update/index.html#com.qnx.doc.screen/topic/manual/cscreen_rendering_text_sample.html
There's 6.5.0 docs for opengl here https://www.qnx.com/developers/docs/6.5.0SP1.update/index.html#./com.qnx.doc.gf_dev_guide/3d.html#window_surfaces It doesn't have examples described like the 6.6.0 docs, but maybe there's something compatible that can be found.
Hah, take a look at this! https://github.com/h4570/qnx-audi
We can probably use GLES2 instead of GLES3 (and change the version in the vertex_shader and fragment_shader code to 200) and then modify the required functions (the basic ones could have the same signatures, so maybe it's not that complicated). This tutorial on how to render a triangle should be basically the same on all platforms (I did it in the past using JOGL for java and I think it was like 20-30 lines of code to create a simple triangle).
screen.h is used in the QNX 6.5 SP1 documentation: http://www.qnx.com/developers/docs/6.5.0SP1.update/index.html#com.qnx.doc.screen/topic/screen_8h_1Screen_Usage_Flag_Types.html
so maybe it needs SP1?
> Hah, take a look at this! https://github.com/h4570/qnx-audi
Yep, I've seen this. I even contacted Sandro, who is the guy behind this game :) And he responded with the following knowledge:
- My solution works on the MMI 3G+, which has a custom QNX 6.3.2 (with extra OpenGL support, which was introduced in QNX 6.5.0(!)).
I think that this will not work on newer Audis, because they have another version of QNX.
From what I remember, I downloaded QNX SDKs from torrent sites (6.3.2) and some others.
With the SDK there is a shipped compiler ("qcc" from what I remember) for architectures like SHLE, x86, etc. Audi has the SHLE arch, so you can cross-compile using qcc.
You can read my Makefile to create your build command (with qcc): https://github.com/h4570/qnx-audi/blob/master/src/Makefile
There was a problem with running my SHLE binaries on the Audi. From what I remember I used the QNX 6.3.2 libs as a base and also copied some compiled C libs from SDK versions 6.3.0(?) and 6.5.0 and pasted them to the SD card.
I tried to debug my program on a VM with QNX Neutrino (x86), but OpenGL didn't work on it. So I built an old-school 2003 PC with a graphics card that was supported by QNX 6.3.2.
The whole journey is described in a video linked in the README.MD (unfortunately in Polish).
Good luck!
So my code is now able to show AA sensor data.
Now I'm looking for the next-turn data from GAL. Any info regarding that?
Great work! I am searching for where to get the gal log. I suspect it is written to SD.
> now looking for next turn data from GAL. Any info regarding that?
https://gitlab.com/alelec/mib2-lsd-patching/-/blob/f81fed59749c5ced3831d2f0fc476190e3324ae2/patched/de/vw/mib/asl/internal/androidauto/target/AndroidAutoTarget.java has no-op callbacks for a bunch of android auto metadata, including next turn information. I never managed to successfully patch that particular file myself (my attempt would just break android auto completely heh), but maybe you or someone else would have better luck. If that worked, you could write out logs / dump data to be read by your renderer. Those methods are called by a file in the generated directory, which I assume is some kind of interop between gal and lsd. It would be cool if we were able to just use that information for the native VC navigation UI elements, but using a custom renderer like yours probably gives you more flexibility to get exactly what you want, not just what is there already.
hmmm, that sounds very interesting!
de/esolutions/fw/dsi/androidauto2/DSIAndroidAuto2Dispatcher.java has updateNavigationNextTurnDistance, updateNavigationNextTurnEvent, updateCoverArtUrl, updatePlaybackState, updatePlayposition, updateNowPlayingData, updateTelephonyState.
I have no idea where to start with this.. but maybe @OneB1t has an idea?
Modify those methods to write a file with the navigation data inside /tmp/ :-) then patch it the same way as for navignore.
In java you can override a loaded class file with your own code.
The little problem is that this is some special java 6 code, so making a correct patch is quite tricky (but @andrewleech should know how to achieve it).
That mib2-lsd-patching project has the java compiling process fully automated, so it's quite easy to make lsd changes.
So then there would be 2 ways to enable media and nav guidance on the cluster:
- add code to the java, to enable the DSI communication to the cluster
- add code to the java to write the data to a local file, which can then be picked up by something else that sends it to the cluster.
There is also the possibility to somehow access this DSI message queue and read it directly from there. A client for that is already decompiled in lsd. This approach could be interesting for other messages too.
Also, rather than overriding those functions to write the data to a file, the new functions could directly exec() the shell/python scripts @OneB1t has written.
Yes, I can split the main loop to just show /tmp/render.bmp forever and call the BMP generation directly from java as a second script every time it gets DSI data.
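That split could look roughly like this on the python side - a loop that only re-displays the file while something else regenerates it. The loadandshowimage invocation and the 3-second interval are assumptions taken from this thread:

```python
import subprocess, time

IMAGE = "/tmp/render.bmp"  # regenerated externally (e.g. by patched java callbacks)

def show_cmd(path):
    """Build the argv to re-display an image (assumed loadandshowimage usage)."""
    return ["loadandshowimage", path]

def display_loop(interval=3.0, once=False):
    """Re-show the image forever; 'once' exists only to make testing easy."""
    while True:
        subprocess.run(show_cmd(IMAGE))
        if once:
            break
        time.sleep(interval)
```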
I managed to get some more knowledge on how android auto video is rendered on the screen and where it should be possible to take a picture:
gal (NvSSVideoDecode, NvSSVideoDecode) -> libnvss_video.so -> libnmsdk.so / libnvmedia.so
Now libnmsdk is interesting, as there is source code for it; it contains methods like "NvMediaVideoSurfaceCreate": https://github.com/NakhyunKim/carProject/blob/master/include/nvmedia.h
I think it should be possible to modify libnmsdk.so to take a screenshot.
For the next-turn-from-gal stuff:
I managed to get somewhere - the lsd-patching was kind of a dead end for me - but I did manage to get some callbacks working after checking on https://github.com/jilleb/mib2-toolbox/issues/159#issuecomment-1838984224
I ended up checking on de/esolutions/fw/comm/dsi/androidauto2/impl/DSIAndroidAuto2ReplyService.java (which calls the AndroidAuto2Dispatcher mentioned in that comment) and found that I only ever get the following callbacks run in lsd:
* 1 audioAvailable
* 3 audioFocusRequestNotification
* 13 navFocusRequestNotification
* 23 updateCallState
* 24 updateCoverArtUrl
* 30 updateTelephonyState
* 31 videoAvailable
Interestingly I do get updateCoverArtUrl, but none of the other media or navigation metadata (btw gal outputs cover art to /tmp/gal_albumArt_<incrementing-number>.png).
I did spend more time trying to just dump raw bytes from the comm modules by patching de/esolutions/fw/comm/agent/service/Method.java with this in its invoke method:
IDeserializer clone = (IDeserializer) this.deserializer.clone();
byte[] bytes = new byte[clone.bytesLeft()];
clone.getRawBytes(bytes);
System.out.println("raw comm bytes " + Arrays.toString(bytes));
and tried to analyze them / convert them to strings etc. I did manage to see the same /tmp/gal_albumArtX stuff I mentioned above, but never managed to find anything relevant to the other media or navigation metadata, so I gave up on this.
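For scanning raw byte dumps like that, a strings(1)-style helper is handy. This is a generic sketch, not tied to the actual comm serialization format:

```python
def extract_strings(data, min_len=4):
    """Return runs of printable ASCII of at least min_len chars from raw bytes."""
    out, run = [], []
    for b in data:
        if 32 <= b < 127:          # printable ASCII range
            run.append(chr(b))
        else:
            if len(run) >= min_len:
                out.append("".join(run))
            run = []
    if len(run) >= min_len:        # flush a trailing run
        out.append("".join(run))
    return out
```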
Buuuut - I did find a way to start writing some tracing info to the sdcard - on the unit there is /scripts/activateSDCardEsotrace.sh:
#!/bin/sh
/bin/mount -uw /mnt/system
/bin/touch /etc/mcd.writable
if [ -d /fs/sda0 ] ; then
    /bin/mount -uw /fs/sda0
    if [ ! -d /fs/sda0/esotrace_SD/ ] ; then
        /bin/mkdir /fs/sda0/esotrace_SD
    fi
    /bin/touch /fs/sda0/esotrace_SD/esotrace.sdcard
    /bin/sync
    echo "Done, tracing to SD1 should start in a few seconds."
else
    echo "No SD card found in SD1."
fi
and after that I started to get this:
[DSIAndroidAuto2Impl] onJob_updatePlaybackState : playbackInfo=[status=PAUSED, shuffleMode=OFF, repeatMode=OFF, playbackApp='Spotify'], valid=1
[DSIAndroidAuto2Impl] onJob_updateNowPlayingData : title='FAST LAND', artist='Moderat', album='MORE D4TA', duration=219000, valid=1
This one I actually was able to receive in LSD
[DSIAndroidAuto2Impl] onJob_updateCoverArtUrl : url='/tmp/gal_albumArt_1.png'
[DSIAndroidAuto2Impl] onJob_updateNavigationNextTurnEvent : road='Cicha', turnSide=UNSPECIFIED, event=DEPART, turnAngle=0, turnNumber=0, valid=1
[DSIAndroidAuto2Impl] onJob_updateNavigationNextTurnDistance : distanceMeters=0, timeSeconds=0, valid=1
It's not a "raw text" format (see the screenshot with ~non-character stuff), but the raw strings are easily readable and easy to find / grep / scan.
and the sd card esotrace_SD directory looks like this:
[4.0K] .
├── [4.0K] 001_20231224_08-34-28
│   ├── [10.0M] log_0000.esotrace
│   ├── [10.0M] log_0001.esotrace
│   └── [1.0M] log_0002.esotrace
├── [4.0K] 002_20231224_08-47-50
│   └── [5.9M] log_0000.esotrace
├── [3.6K] Protocol.txt
├── [4.3M] ems_tables.zip
└── [ 0] esotrace.sdcard
Those logs do get big pretty quickly (gal is just one of many things that will be in there - might also be useful for some other stuff to explore :) ) and it does create new ones when the current one gets to 10MB.
There is a tracing.json file, a sibling of the gal.json that is patched here: https://github.com/jilleb/mib2-toolbox/blob/13745524b53c89bcf8a49f11c587a3275a49bf65/Toolbox/scripts/patch_gal.sh#L4
I think you could adjust the levels of things being output there for things you don't care about / disable everything other than GAL maybe? Alternatively, if outputting that much data isn't really impacting the system overall, maybe just be content with pruning those logs manually (maybe they are pruned automatically, but I only just managed to get anything at all today, so who knows).
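If the logs do need manual pruning, a sketch like this could keep only the newest session directories; the NNN_date naming shown above sorts chronologically, and keep_count is an arbitrary choice:

```python
import os, shutil

def prune_sessions(root, keep_count=3):
    """Delete all but the newest keep_count session dirs under esotrace_SD.

    Relies on the observed NNN_YYYYMMDD_... naming sorting chronologically.
    Returns the list of removed directory names.
    """
    sessions = sorted(d for d in os.listdir(root)
                      if os.path.isdir(os.path.join(root, d)))
    doomed = sessions[:-keep_count] if keep_count else sessions
    for d in doomed:
        shutil.rmtree(os.path.join(root, d))
    return doomed
```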
Well done :) This should be enough to write navigation data into the VC - it's just a matter of writing a log parser and adding icon rendering into the loop to show it as well.
About the data size: I think what we want to do is write it into the /tmp folder so the internal storage is not damaged; an external SD card is also fine, and a modern 64GB+ card will be able to sustain many years of logging.
I'm also still looking for a way to save a screenshot directly from libnvmedia.so so we can have the full map in there :-)
Did you have any thoughts on how exactly you will run your custom VC dashboard renderer on a day-to-day basis? I didn't play with the dmdt / loadandshowimage stuff, so I don't exactly know how it works - does it replace just the "map" view in the VC, or does it take over the entire center part of the VC regardless of the "view" you are on?
I think it's really cool stuff and I definitely would really like to play with it, but I'm also not willing to lose the regular functionality of the VC, so I wonder how that works. Ideally for me there would be some user interaction to run/kill it.
I guess this applies both to the dashboard you shared before and your libnvmedia.so explorations - for everyday usability I would imagine you wouldn't want to lose access to normal VC functionality?
On my part - in my tinkering I did manage to use lsd patching to intercept some key presses on the steering wheel, and managed to remap the "voice control" short press to toggle between driving modes (tho my very hacky patches did break the DCC slider in the Individual mode settings UI a bit :D - but this is rather an issue with the way I patched in the ability to select a driving profile, rather than the key intercepts heh).
I think I can intercept all keys except the ACC-related ones - so media volume/prev/next and all on the right side of the steering wheel can be intercepted. The problem I have is that I do use the remaining buttons - maybe the phone one I could live without - or maybe I'll try to see if I can do a long press on the ones I care about to run/kill things.
In any case - for intercepting, I patched de/vw/mib/asl/internal/system/keypanel/lock/KeyLockServiceImpl.java, the public void processKeyEvent(int n, int n2, int n3) method - n2 is the keyCode (50 was the voice control one I did), n3 is the key status: 1 is short press, 0 is short release, 3 is long press and 100 is long release (at least that's my understanding; it might not be correct, but those are also the only statuses I saw when capturing events for the voice control button). So I only handled n2 == 50 with n3 == 0 or 1 and bailed early before doing the regular stuff, letting everything else go to the regular implementation - that way a long press on it still triggers the Android Voice Assistant, which I do use and didn't want to lose :) Here's the list of keycodes I did capture there:
keymap
43 - vol down
42 - vol up
46 - next track
47 - prev track
39 - down
38 - up
40 - ok
36 - right
37 - left
50 - voice control
53 - phone
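For prototyping intercept logic outside of java land, the captured keycodes and statuses above can be kept as a lookup table (the status meanings are the tentative ones described in this comment):

```python
# Keycodes captured in KeyLockServiceImpl.processKeyEvent (n2 values).
KEYCODES = {
    43: "vol down", 42: "vol up",
    46: "next track", 47: "prev track",
    39: "down", 38: "up", 40: "ok",
    36: "right", 37: "left",
    50: "voice control", 53: "phone",
}

# Observed n3 statuses (tentative interpretation).
KEYSTATUS = {1: "short press", 0: "short release",
             3: "long press", 100: "long release"}

def describe_key_event(n2, n3):
    """Human-readable description of a processKeyEvent call."""
    return "%s: %s" % (KEYCODES.get(n2, "key %d" % n2),
                       KEYSTATUS.get(n3, "status %d" % n3))
```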
Then there is also the option of trying to take over text data that is regularly displayed in the VC (for text data only, I guess - it wouldn't really make sense for video mirroring stuff). I did manage to set custom text in the Audio section of it. If you have the lsd patching stuff working, you can check de/vw/mib/bap/mqbab2/audiosd/functions/CurrentStationInfo.java (that was easy to find when grepping for "Android Auto", which is displayed on the Audio tab in the VC), and stuff like this in the setStationInfoForMirrorLink method did end up rendering some initially set texts:
if (!AndroidAutoTitle.equals("")) {
    currentStationInfo_Status.primaryInformation.setContent(AndroidAutoTitle);
    currentStationInfo_Status.pi_Type = 72;
    currentStationInfo_Status.pi_Id = 0;
}
if (!AndroidAutoArtist.equals("")) {
    currentStationInfo_Status.secondaryInformation.setContent(AndroidAutoArtist);
    currentStationInfo_Status.si_Type = 73;
} else {
    currentStationInfo_Status.secondaryInformation.setNullString();
    currentStationInfo_Status.si_Type = 73;
}
if (!AndroidAutoAlbum.equals("")) {
    currentStationInfo_Status.tertiaryInformation.setContent(AndroidAutoAlbum);
    currentStationInfo_Status.ti_Type = 74;
} else {
    currentStationInfo_Status.tertiaryInformation.setNullString();
    currentStationInfo_Status.ti_Type = 74;
}
AndroidAutoTitle etc. are just static strings I added to the class and initialized with some "Test Title" stuff to see if it would work - and it does - I now have 3 rows of "test" texts there. I will try to get some kind of parser going and try to forward that information there (but also - that method I patched is just a callback, so I dunno about force-refreshing that data and getting the callback to run when I want it to run).
The Audio stuff is probably the least useful to me, but it's the easiest to find and interact with - I'm much more interested in feeding next-turn info to the regular VC display, but it's harder to find the relevant BAP stuff (?). I'm not sure what BAP is exactly, other than seeing it a lot in related code.
It can be done in a way that detects if android auto is running (for example from the AA sensor data or from the log file), otherwise it switches back to the normal map.
It also only replaces the "MAP" view, so other VC functionality is still there.
I think in its final form it will run the main script forever in a loop and automatically switch based on what is available inside the log files :)
The traces/logs I mentioned also have strings about connecting to android auto:
I didn't look for one that signals disconnect, as I was rushing today before holiday stuff :D but I would assume there are ones for that as well; might check that in the evening / tomorrow.
So if the dmdt stuff only replaces the map and not other views in the center of the VC - that could be fully automated - awesome! :)
Yes, it can be fully automatic.
The sad thing is that the refresh rate (until there is a better way) using this approach is like 3 seconds per frame, but for next-turn navigation that is OK.
Disconnect will also be there, as you have access to this.
This is good news! So grabbing the turn by turn instructions is near, as well as Artist/Album Art
Yes, I pushed an initial proof of parsing the last line containing "onJob_updateNavigationNextTurnEvent".
For now you need to point it to a specific file, but later on I can also include some automatic search for the latest log file :-)
The new version is now also able to search recursively for the latest .esotrace file in a specified location. I'm not sure how fast/effective this approach will be on the MIB platform, but there is always room for optimizations.
So in theory, if pointed to the proper location, the VCRenderAADash.py script should work now.
> im not sure how fast/effective this approach will be on MIB platform but there is always a room for optimizations
If this is a worry about the speed of reading / parsing / dumping - as mentioned, there is a tracing.json conf file (https://gist.github.com/grajen3/e7cbb6a69c10a8b45c763eb669ac64b7 is mine) where I see levels being set - possibly set everything to off except for gal / GAL (it seems like there is an entry for the gal binary and also a GAL channel?). This configuration file is actually where I found the esotrace_SD string that I looked around for, which led me to the script that enables dumping traces (tho I'm not sure if the file is actually used, as I didn't attempt to edit it yet).
BTW, there is videoovermost in there - I seriously have no idea what those acronyms are, but your repo has MOST in it - so it might be something to look around for :)
This is used for playback of DVD / terrestrial TV on some platforms.
Heh, my attempts to do the parsing in LSD itself (to forward the data to the VC using abstractions already available in LSD) have so far not been successful - after building and testing the parser locally, I got to a frustrating error on the headunit :(
Caused by: java.lang.NoClassDefFoundError: java.util.regex.Pattern
at java.lang.J9VMInternals.initializeImpl(Native Method)
at java.lang.J9VMInternals.initialize(Unknown Source)
@andrewleech I'm not really a Java person, so all of my tinkering is trial and error - in the jdk in lsd patching I see there is a src.zip and it does contain the class that j9vm is missing - do you think it's worth trying to include that (and its dependencies)? Or is that a can of worms and I would be better off spending my time changing my parser to just not use regex :D
Alternatively I might borrow the python parser from @OneB1t just to parse and save the already-parsed info in some meta files, and just read those in LSD.
If there's a pure-java regex library it'd probably be safe to include, but that jdk is for x86 so I wouldn't copy files directly from it.
That being said it's surely possible to find the desired data in java without needing to parse with regex? Unfortunately I don't foresee having any time to set my bench unit up again and test stuff myself.
> That being said it's surely possible to find the desired data in java without needing to parse with regex
For sure, I just need to adjust my parser; I was wondering if there is a quick way to fix my problem instead of rewriting parts of it.
> Unfortunately I don't foresee having any time to set my bench unit up again and test stuff myself.
Yeah, I get that - while I have some time every now and then, it's not unlimited so that's also why I looked for "quick solution" :D
But it's all good, thanks for your reply ;)
The quick solution is to switch to python :-D that + chatgpt allows you to prototype in the car.
Well, I still need to do at least some stuff in java land, as at least there I did have some success with putting custom text in the ~native VW VC renderer, so I'd rather stay in java land (the underlying mechanisms could possibly be figured out, but there is so much abstraction and indirection that it's quite easy for me to get lost just trying to follow the code).
I did get rid of the regex now; I do have some issues with reading lines hah (it chokes on encoding, which doesn't happen locally), but I'm slowly getting further, so I will continue on that course.
That aside - during my explorations in LSD I checked what exactly is available from the java ~std stuff (to not be bitten again by things like the missing regex classes) - in the "decompiled" java code there is java/io and java/util, so I can at least look up whether what I use is actually available.
But more share-worthy (I think) is that I stumbled on this: https://gist.github.com/grajen3/0eed14c367e76408d4821388f5095566#file-viewcompositorimpl-java-L98 - if I read it correctly, it creates a window with a GL context on display 16. Looking back at the comments, it seemed like you looked into building your own native renderer against qnx and using display 19 - it might be possible to do that in javaland? This could be a potential avenue to switch away from the loadandshowimage stuff and achieve a more reasonable framerate (+ maybe rendering things with GL might be ~easier than generating images in python? JOGL is used for the opengl parts). I put playing with this on the backburner for now, but someone might want to play with it, so I'm sharing it.
The other file in the above gist dumps "displaycontent" from displays 16 and 19 - so maybe it's worth trying this to dump an image from android auto? (tho I suspect it will be a bust, similar to what Andrew mentioned before - probably just a blank image would come out, due to the way the h264 video stream is actually shown on screen)
Generating images in java/python is very easy as long as you have some working libraries.
Also, creating/modifying a renderer in pure java is definitely possible. This code you found can be really helpful.
Have you considered comparing the audi TT MHI firmware with ours here? On the Audi TT everything is shown in the cluster because there is no other screen. It should be transferred via MOST to the screen there. Getting the TT firmware shouldn't be a problem.
> On Audi TT everything is shown in the cluster because there is no other screen.
My understanding is that the Audi cluster screen is essentially the same as the main screen on other makes; it's connected with a normal monitor connection, e.g. DVI or similar, not the MOST connection we're dealing with. As it's connected as a real monitor on the Tegra, it can be rendered to directly.
I wonder if this idea can be implemented:
Virtual cockpit using CarPlay output of google maps or apple maps.
Or can we install GOOGLE EARTH on MIB2 high?
Thanks for the suggestion.