jilleb / mib2-toolbox

The ultimate MIB2-HIGH toolbox.
MIT License

[REQUEST] Virtual Cockpit use CarPlay google map or apple map #159

Open Gold-TW opened 3 years ago

Gold-TW commented 3 years ago

I wonder if this idea can be implemented?

Virtual Cockpit showing the CarPlay output of Google Maps or Apple Maps.

Or could Google Earth be installed on the MIB2 High?

Thanks for the suggestion.

jilleb commented 8 months ago

Awesome work!

grajen3 commented 8 months ago

Didn't do much lately - been quite happy with things as-is for my usage since the last update (granted, I've only done some short local trips). The only minor thing I did was a small patch to round to the nearest 50 meters above 300 m (when I shared the update I was rounding to 25 meters, but that's not how Google Maps rounds things).

Small update for imperial units (miles) etc. Managed to confirm my assumptions in https://github.com/grajen3/mib2-lsd-patching/blob/d57796ff243dfdfdf32dacad468b1eaf76b638e3/patched/de/vw/mib/bap/mqbab2/navsd/functions/DistanceToNextManeuver.java#L153-L165 (both ways of figuring out whether the headunit's distance unit is configured to miles or kilometers) and the unit values for miles/yards etc. (screenshots attached). The last one is fun - it is the 1/4-mile unit, but the value was 20, so given the x10 encoding it was actually 2.0 x 1/4 mile, which ended up rendering as 1/2 mile :)

If someone who is used to those units could describe how "distance to next maneuver" is typically displayed with them, I could do at least an initial boilerplate to support it that would later be easier for someone to fine-tune if needed.

Without being used to them, my assumption would be something like this (basically trying to fit the metric/meters behavior to the imperial units):

feet and quarter-mile don't seem applicable at all - but maybe I'm wrong; as I said, I never really used imperial units (other than knowing my height in feet+inches or converting kilometers to miles)

A potential quirk to solve would be if gal had a different trace structure when imperial/miles are configured (it reports distanceMeters for me), but I found no proof in Ghidra that this can happen - so part of the above work would just be converting meters to miles/yards.
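To make that concrete, the conversion/rounding I'd start from looks roughly like this (sketch only - the breakpoints and rounding steps are my assumptions about typical behavior, not confirmed Google Maps rounding):

    // Sketch: distanceMeters (from the gal trace) -> imperial display string.
    // Breakpoints and rounding steps are assumptions, not confirmed behavior.
    public static String formatImperial(int distanceMeters) {
        double feet = distanceMeters * 3.28084;
        if (feet < 1000.0) {
            // short range: round to the nearest 50 ft
            // (mirroring the 50 m rounding used for metric above 300 m)
            long ft = Math.round(feet / 50.0) * 50;
            return ft + " ft";
        }
        double miles = distanceMeters / 1609.344;
        if (miles < 10.0) {
            // mid range: one decimal, which also covers the 1/4-mile style
            // values (e.g. the x10-encoded 20 -> 2.0 x 1/4 mi -> 0.5 mi)
            return (Math.round(miles * 10.0) / 10.0) + " mi";
        }
        // long range: whole miles
        return Math.round(miles) + " mi";
    }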

chefranov commented 7 months ago

Fake video? https://www.instagram.com/reel/C2QMHwHtFko/

Tumero97 commented 7 months ago

Fake video? https://www.instagram.com/reel/C2QMHwHtFko/

No, it's real... but not an OEM Virtual Cockpit. This is an AliExpress unit.

sagits commented 7 months ago

Hey, not directly related to this library, but I have found 2 interfaces that add an HDMI input to the Audi Virtual Cockpit. Do you think they could help us? https://navtv.com/products/NTV-KIT789/audi-vc.html https://audiovisualworld.co.uk/audi-virtual-cockpit-hdmi-multimedia-camera-interface-sku151232.html

If one of those 2 works, then it should just be a matter of adding an HDMI output to the head unit and connecting it to the interface attached to the Virtual Cockpit. For the head unit HDMI output I have found 2 options: a CarPlay / Android Auto box with HDMI output, or a USB-to-HDMI converter (for the latter to work we would need to change the head unit to an Android head unit like this one).

Do you guys think this will work? I'm afraid of buying it because I live in Brazil and won't be able to return it if it doesn't fit my Audi A4 B9 or somehow doesn't work.

OneB1t commented 6 months ago

Anybody who is able to compile for QNX - could this work?

https://www.qnx.com/developers/docs/6.5.0SP1.update/#com.qnx.doc.screen/topic/manual/tscreen_display_screenshot.html

grajen3 commented 6 months ago

How about doing unholy amounts of indirection:

https://github.com/jilleb/mib2-toolbox/assets/152773682/c909e385-91c1-4606-b2f0-cd1e9c6d1d35

and then continue polling for updates and if waypoints change - reload:

https://github.com/jilleb/mib2-toolbox/assets/152773682/18b4c545-ad4a-457b-b80b-b7286a8b7b84

pretty happy with that and will do a write-up on my findings plus some code sharing a bit later (maybe the weekend) - but this also gave me the ETA and distance to destination (also from the shared Google trip/location data)

one note: Google Maps roads are not on exactly the same coordinates as the VAG nav roads - if I zoom in enough I can see that my route is shifted a bit. Did some quick googling on that; it appears to be quite normal between different map providers, and it won't be a constant offset I could adjust for

for now, the biggest issue I have with this is the following

grajen3 commented 6 months ago

Ok, the above, while it somewhat works and is somewhat usable, is really just off - sometimes even without zooming in close, the offset between Google's GPS polyline and the VAG map is quite annoying. I did try snapping Google's polyline to OpenStreetMap / Mapbox, and while that was actually quite a bit better, the snapping still has its quirks (like trying to depart the highway to a nearby gas station and then getting on the highway again - LOL), so I will scratch the above idea (though I'll probably still use it until I have something better and worth sharing).

Instead I will explore recreating the actual Google Maps instrument cluster view on the phone using the Google Maps SDKs - early checks suggest this might work as long as the phone and car are on the same network, as I did manage to render a Google map to a bitmap on the phone, spin up a server, and download the image from the phone to my PC:

the left image is the Android Auto emulator view for the instrument cluster - it doesn't have the bunch of UI controls that the main screen has (which is why I'd say mirroring the main screen to the instrument cluster would just not feel great, even if we were able to do that)

given that I already have turn-by-turn in my cluster I will probably skip trying to recreate that, but I will probably play with rendering some data over the map (like oil temp, which I can already get with exlap on the phone, etc.). The silly red line in the image was just a check that rendering over the Google map works, to make sure overlaying stuff is possible

but some tricky parts here will be:

andrewleech commented 6 months ago

That's some impressively amazing progress @grajen3! Yeah, unfortunately I'm not surprised that the GPS waypoints don't quite line up, but it's a really cool idea. If you can inject navigation data into the VAG navigation, did you try just putting in the destination from the shared map and letting VAG navigate independently? Much of the time it'd pick the same / a close route?

I like the idea of independently rendering a map and sending it over, though yeah, latency might be a pain. I used to hack on / work on the Navdy HUD and its internal GPS was a bit laggy, which often resulted in missed turns because I was past the corner before the screen updated. Deal with that later though...

Your map render suggestion gave me an idea: https://xdaforums.com/t/aawireless-general-discussion-and-support.4247677/post-89380837

Also, has anyone looked into the debug headunit server on the phone? Might be useful notes here: https://togithub.com/opencardev/crankshaft/issues/451

grajen3 commented 6 months ago

If you can inject navigation data into the VAG navigation, did you try just putting in the destination from the shared map and letting VAG navigate independently? Much of the time it'd pick the same / a close route?

Not automatically / programmatically, but I did that a couple of times manually, and these are some scenarios it failed on:

So these are just all the same reasons why I prefer to use other navs (Google Maps in this case) in the first place :) If I were fine with using the native navigation I would just use that and get all the nice things it has, but those are a bit of a deal breaker, and what I'm looking for is to get those nice-to-haves of the native navigation back (though I won't be too sad if I can't get them, or if they won't be perfect)

While drawing Google's polyline over the VAG map isn't ideal, at least it's somewhat consistent with Google Maps' route guidance. And given that the waypoint-mode routing took quite a few more steps than that, I would imagine just setting the destination would be simpler to achieve, if the annoyances I had don't apply to you (or you are ok with them).

Your map render suggestion gave me an idea: https://xdaforums.com/t/aawireless-general-discussion-and-support.4247677/post-89380837

Oooh, that's probably the best place to do this (hah, especially since I already have one of those AAWireless dongles :D) - if they could do it (possibly they can't for legal reasons; not sure what terms apply, but I assume they have to play nicely with Google) and if they were willing to. Btw, if they could, I feel like support for this would let them turn users' phones into the possibly usable thing Navdy was promising to be, so it's a potential monetization source for them via some extra hardware/gadgets (I didn't know about that Navdy product, but it looks pretty awesome, at least as a concept - I guess the execution didn't live up to expectations). It would seem to me that with something like https://hudway.co/glass (or even some film on the windshield) and the phone just displaying that second-screen map (mirrored/inverted), this might just work - but then again, that's something Google itself could have done, hah.

Also, has anyone looked into debug headunit server on phone? Might be useful notes here: https://togithub.com/opencardev/crankshaft/issues/451

Not looked into that specifically, but I was checking what kind of metadata is (or should be) available - via the data navigation apps can provide to the Android system / Android Auto, which is done through https://developer.android.com/reference/androidx/car/app/navigation/NavigationManager#updateTrip(androidx.car.app.navigation.model.Trip) - if we could get access to that, it would give us destination / ETA / lane guidance / probably more accurate maneuver descriptors than what's in the traces / our version of gal.
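For reference, on the app side that metadata is published roughly like this (sketch against the androidx.car.app API - this is the producer side, i.e. what a navigation app hands to the AA host; the values are placeholders):

    import java.util.TimeZone;

    import androidx.car.app.CarContext;
    import androidx.car.app.model.DateTimeWithZone;
    import androidx.car.app.navigation.NavigationManager;
    import androidx.car.app.navigation.NavigationManagerCallback;
    import androidx.car.app.navigation.model.Destination;
    import androidx.car.app.navigation.model.Distance;
    import androidx.car.app.navigation.model.Step;
    import androidx.car.app.navigation.model.TravelEstimate;
    import androidx.car.app.navigation.model.Trip;

    public final class TripPublisher {
        // called from a Screen/Session that has a CarContext
        static void publish(CarContext carContext) {
            NavigationManager nav = carContext.getCarService(NavigationManager.class);
            nav.setNavigationManagerCallback(new NavigationManagerCallback() {
                @Override public void onStopNavigation() { /* host asked us to stop */ }
            });
            nav.navigationStarted(); // required before updateTrip()

            TravelEstimate eta = new TravelEstimate.Builder(
                    Distance.create(12.3, Distance.UNIT_KILOMETERS),
                    DateTimeWithZone.create(System.currentTimeMillis() + 15 * 60000L,
                            TimeZone.getDefault()))
                    .build();

            Trip trip = new Trip.Builder()
                    .addDestination(new Destination.Builder().setName("Home").build(), eta)
                    .addStep(new Step.Builder("Turn right onto Main St").build(), eta)
                    .build();
            nav.updateTrip(trip); // destination / ETA / maneuver land at the host
        }
    }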

If we are thinking about capturing the video streams here - that's a potentially interesting idea, but I would imagine the phone would not allow creating 2 separate sessions (one for the actual headunit and one for whatever would try to capture things), so it would require something in the middle to pass things through while extracting what we need (this is why I feel AAWireless would be potentially awesome for that, as it might already handle stuff like this). But I have zero context on the actual Android Auto protocol, and that seems way too complex for the amount of time I can spend on stuff like this :D It's also a bit outside what I know - at least with LSD patching it's a C-like language and the code is quite readable (granted, decompiled output isn't always straightforward, with constants being inlined etc.). My attempts at decompiling the Google Maps APKs, or using Ghidra on binaries extracted from cars, while still generally something one can follow, are so mangled it's quite... not fun - you get stuck a lot and often don't see any progress in a tinkering session.


That said - for some code sharing on how I did waypoint mode (this assumes you have a GPX file somewhere in the filesystem): https://github.com/grajen3/mib2-lsd-patching/blob/4e302ec72bf55e141eb843824b22b7d4c21a972a/lsd_java/de/vw/mib/asl/internal/navigation/waypointmode/StateDefault.java is where the magic happens. As I have very messy debug code, I will just paste some snippets here:

I just added this to the class, for starting a route import from GPX and being able to stop route guidance:

   public static boolean AndroidAutoImporting = false;
   public static boolean AndroidAutoRunning = false;

   public static boolean AndroidAutoImportAndStart(String route) {
       // instance is set in the constructor / init and is just a static field on this class
       if (instance == null) {
           return false;
       }

       AndroidAutoImporting = true;

       if (AndroidAutoRunning) {
           // guidance already running: send the "stop route guidance" event first
           // (target/event IDs found by trial and error - see the note below)
           EventGeneric ev2 = new EventGeneric();
           ev2.setSenderRouterId(0);
           ev2.setSenderTargetId(10044);
           ev2.setSenderEventId(0);
           ev2.setReceiverRouterId(0);
           ev2.setReceiverTargetId(1330034);
           ev2.setReceiverEventId(1073742342);
           ServiceManager.eventMain.getEventDispatcher().send(ev2);
       } else {
           instance.target.changeExecutionMode(1);
           instance.target.getModelNotifier().setCurrentWPMModeDrive();
           instance.target.getInternalAPINotifier().setCurrentWPMModeDrive();
           ASLNavigationUtilFactory.getNavigationUtilApi().getNavigationDp().setWpmMode(1);
       }

       // clear out any previously imported tracks first (waypoint mode seems
       // to cap out around 3000 points total across all imported GPX files)
       int count = instance.target.getDataPool().getTrackList().getItemCount();
       for (int i = count - 1; i >= 0; i--) {
           EventGeneric ev = new EventGeneric();
           ev.setInt(0, i);
           instance.caseDeleteTrackFromWPMTourList(ev);
           try {
               Thread.sleep(500); // crude: give each delete event time to be processed
           } catch (Exception e) {
           }
       }

       // kick off the actual GPX import; the patched handler below starts
       // guidance once the import result comes back
       ResourceLocator resourceLocator = new ResourceLocator(route);
       instance.target.getModelNotifier().setImportStateImporting();
       instance.target.getDsiNotifier().importTour(resourceLocator);
       return true;
   }

   public static void stopAndroidGuidance() {
       postponedRoute = null;
       if (AndroidAutoRunning) {
           EventGeneric ev2 = new EventGeneric();
           ev2.setSenderRouterId(0);
           ev2.setSenderTargetId(10044);
           ev2.setSenderEventId(0);
           ev2.setReceiverRouterId(0);
           ev2.setReceiverTargetId(1330034);
           ev2.setReceiverEventId(1073742342);
           ServiceManager.eventMain.getEventDispatcher().send(ev2);
           AndroidAutoRunning = false;
       }
   }

and then, before returning here https://github.com/grajen3/mib2-lsd-patching/blob/4e302ec72bf55e141eb843824b22b7d4c21a972a/lsd_java/de/vw/mib/asl/internal/navigation/waypointmode/StateDefault.java#L514, I had this to automatically start the route once the import finished:

   if (AndroidAutoImporting) {
       System.out.println("misiex  - imp 1");
       AndroidAutoImporting = false;

       if (n == 0) {
           this.target.changeExecutionMode(1);
           // import succeeded - start guidance on the just-imported tour
           if (navSegmentIDArray.length > 0) {
               System.out.println("misiex  - imp 2");
               int trailID = this.target.getDataPool().getTrackList().getItemCount() - 1;
               System.out.println("misiex  - trail ID " + trailID);
               if (trailID >= 0) {
                   AndroidAutoRunning = true;
                   EventGeneric ev = new EventGeneric();
                   ev.setInt(0, trailID);
                   this.caseLoadDetailsOfTour(ev);

                   // "start route guidance" event (IDs again from trial and error)
                   EventGeneric ev2 = new EventGeneric();
                   ev2.setSenderRouterId(0);
                   ev2.setSenderTargetId(10044);
                   ev2.setSenderEventId(0);
                   ev2.setReceiverRouterId(0);
                   ev2.setReceiverTargetId(1330034);
                   ev2.setReceiverEventId(1074841914);
                   ServiceManager.eventMain.getEventDispatcher().send(ev2);
               } else {
                   System.out.println("misiex  - dsiNavigationTrImportTrailsResult e1");
               }
           } else {
               System.out.println("misiex  - dsiNavigationTrImportTrailsResult e2");
           }
       }
   } else {
       System.out.println("misiex  - dsiNavigationTrImportTrailsResult e3");
   }
   return null;

those magic "EventId" values were figured out through some trial and error, looking at the various StateX.java files in the navigation dirs

but also beware that patching these sometimes causes problems - in the case of this particular file, https://github.com/grajen3/mib2-lsd-patching/blob/4e302ec72bf55e141eb843824b22b7d4c21a972a/lsd_java/de/vw/mib/asl/internal/navigation/waypointmode/StateDefault.java#L447-L451 was causing a lot of errors to be logged, so I just commented it out because I didn't need what it was doing anyway (in other cases, however, something would completely break - e.g. the navigation system completely fails to start).

For the Google Maps / location-sharing trace I reused learnings from https://github.com/costastf/locationsharinglib (which is basically a reverse engineering of the RPC calls that web Google Maps makes - you can inspect them yourself in the browser's dev tools if you have people sharing their location, or in our case a trip, with you). Because I really don't like writing too much Java, especially for handling like that, I have a serverless function that massages Google's endpoint data into something more reasonable to handle in Java land, like below. Notice the 2500-point limit I put in - VAG seems to allow max 3000 points total across all imported GPX files in waypoint mode, which is also why I clear any previously imported routes in the Java code. The trick here is to get hold of the cookies needed, which is somewhat described in the locationsharinglib repository. This serverless function assumes the account you use has only 1 person sharing a trip with it:

    const response = await fetch(
      "https://www.google.com/maps/rpc/locationsharing/read?authuser=2&hl=en&gl=us&&pb=!1m7!8m6!1m3!1i14!2i8413!3i5385!2i6!3x4095!2m3!1e0!2sm!3i407105169!3m7!2sen!5e1105!12m4!1e68!2m2!1sset!2sRoadmap!4e1!5m4!1e4!8m2!1e0!1e1!6m9!1e12!2i2!26m1!4b1!30m1!1f1.3953487873077393!39b1!44e1!50e0!23i4111425",
      {
        headers: {
          accept: "application/json",
          cookie: cookies,
        },
      }
    );
    // the body starts with a ")]}'"-style guard prefix, so grab the JSON after the first quote
    const responseJson = JSON.parse((await response.text()).split(`'`)[1]);

    const destination = responseJson[0][0][8][1][0][1][0];
    const etaSeconds = responseJson[0][0][8][1][4][0][1][0];
    const etaMinutes = Math.round(etaSeconds / 60);

    const distance =
      responseJson[0][0][8][1][4][responseJson[0][0][8][1][4].length - 1][0][0];

    // the route polyline is delta-encoded: stuff[0] holds lat deltas, stuff[1] lng deltas
    const stuff = responseJson[0][0][8][1][3];
    const points: Array<[number, number]> = [];

    let lat = 0;
    let lng = 0;

    let lastPoint;
    for (let i = 0; i < stuff[0].length; i++) {
      // accumulate the deltas, then scale down to degrees
      lat += stuff[0][i];
      lng += stuff[1][i];

      const point: [number, number] = [lat / 10000000, lng / 10000000];
      // keep at most 2500 points (the first 2499 plus the final one) to stay
      // safely under VAG's ~3000-point waypoint-mode limit
      if (points.length + 1 < 2500) {
        points.push(point);
      } else {
        lastPoint = point;
      }
    }

    if (lastPoint) {
      points.push(lastPoint);
    }
    return {
      statusCode: 200,
      body: JSON.stringify(
        {
          active: true,
          destination,
          eta: {
            hours: Math.floor(etaMinutes / 60),
            minutes: etaMinutes % 60,
          },
          distance,
          points,
        },
        null,
        2
      ),
      headers: {
        "Content-Type": "application/json",
      },
    };

and in Java land I would start polling this serverless function and writing out a GPX file, which I then kept updating. There's more that I did - like some handling to not update the route as I progressed along the trip, while still being able to react if there was a detour (so checking the points LSD previously loaded against the tail of the newly fetched points, accounting for points that had to be dropped to fit the waypoint limit, etc.) - and I also had additional road snapping in the serverless function. But all of this is really messy and not that usable (IMO), so I will skip sharing that part; in case anyone wants to play with programmatically starting VAG's route guidance, or with getting hold of Google Maps shared-trip waypoints, the snippets above are, I think, the most relevant ones to start from.
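For completeness, the shape of that polling loop (sketch - the endpoint URL and paths are placeholders, and the JSON parsing / route diffing are stubbed out):

    import java.io.BufferedReader;
    import java.io.FileWriter;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public final class TripPoller implements Runnable {
        private static final String ENDPOINT = "http://example.invalid/trip"; // placeholder
        private static final String GPX_PATH = "/tmp/androidauto.gpx";        // placeholder

        public void run() {
            while (true) {
                try {
                    double[][] points = parsePoints(fetch(ENDPOINT)); // JSON parsing elided
                    if (routeChanged(points)) { // diff against the already-loaded route, elided
                        writeGpx(points);
                        StateDefault.AndroidAutoImportAndStart(GPX_PATH);
                    }
                } catch (Exception e) {
                    // network hiccups while driving are expected - just retry
                }
                try { Thread.sleep(10000); } catch (InterruptedException e) { return; }
            }
        }

        private static String fetch(String url) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(5000);
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            in.close();
            return sb.toString();
        }

        private static void writeGpx(double[][] points) throws Exception {
            FileWriter w = new FileWriter(GPX_PATH);
            w.write("<?xml version=\"1.0\"?>\n<gpx version=\"1.1\"><trk><trkseg>\n");
            for (int i = 0; i < points.length; i++) {
                w.write("<trkpt lat=\"" + points[i][0] + "\" lon=\"" + points[i][1] + "\"/>\n");
            }
            w.write("</trkseg></trk></gpx>\n");
            w.close();
        }

        private static double[][] parsePoints(String json) { return new double[0][]; } // stub
        private static boolean routeChanged(double[][] points) { return true; }         // stub
    }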

TheSpiritedMongol commented 6 months ago

I just found this on Chinese social media. Somehow they did it on an Audi A3. I asked him, but he is not answering me. https://github.com/jilleb/mib2-toolbox/assets/125920080/43d52543-35fe-4e0b-8016-2676beb433b4

chefranov commented 6 months ago

I just found this on Chinese social media. Somehow they did it on an Audi A3. I asked him, but he is not answering me. https://github.com/jilleb/mib2-toolbox/assets/125920080/43d52543-35fe-4e0b-8016-2676beb433b4

it looks like the VC from an Audi TT

chefranov commented 6 months ago

Maybe we need to start investigating from the Audi TT Virtual Cockpit FW? The TT cockpit should be similar to the A3's, and it supports CarPlay

TheSpiritedMongol commented 6 months ago

Maybe we need to start investigating from the Audi TT Virtual Cockpit FW? The TT cockpit should be similar to the A3's, and it supports CarPlay

Yeah, don't they have the same hardware?

Just read that after the official iOS 17.4 update, many cars support CarPlay on the VC.

DAP56 commented 6 months ago

Maybe we need to start investigating from the Audi TT Virtual Cockpit FW? The TT cockpit should be similar to the A3's, and it supports CarPlay

Yeah, don't they have the same hardware?

Just read that after the official iOS 17.4 update, many cars support CarPlay on the VC.

I have a Volvo XC90 rental vehicle while my Audi is in the body shop. iOS 17.4 allows the dual screen only during navigation (at least on this vehicle); at all other times it uses the Android system map from the head unit. The CarPlay Apple Maps view during navigation on the cluster is a far-zoomed-out overview only - not really that helpful.

https://github.com/jilleb/mib2-toolbox/assets/87341721/71a822f7-d47c-417d-8e69-90d115d002f7

grajen3 commented 6 months ago

The CarPlay Apple Maps view during navigation on the cluster is a far-zoomed-out overview only - not really that helpful.

This is what 17.4 was actually supposed to change - see https://9to5mac.com/2024/03/06/ios-17-4-carplay/

Starting with iOS 17.4, cars with instrument cluster displays can also toggle the main display between a street-level view and the sky-level view. Toggling modes simply swaps which view appears on which screen.

Effectively, this just means that turn-by-turn directions can appear behind the steering wheel now rather than the overview mode.

A second maps screen in the instrument cluster has been a thing for quite a while (granted, not many brands support it - Volvo/Polestar with Android Automotive and BMW with iDrive 8 are the ones I know of) - and that goes for both Google Maps on Android Auto and Apple Maps on CarPlay (somewhat separate but similar implementations, so some cars might support both or just one; as I'm not in the market for a new car I'm not that up to date :D - the only relevance for me is whether I could hack this into my current car, hah)

OneB1t commented 6 months ago

Take a screenshot of Android Auto on the phone (there is an option inside the Android Auto development menu), expose it as an HTTP endpoint, download it over the MIB's WiFi, and render it to MOST. The main issue is how to take the screenshot...

TheSpiritedMongol commented 6 months ago

I just found this on Chinese social media. Somehow they did it on an Audi A3. I asked him, but he is not answering me. https://github.com/jilleb/mib2-toolbox/assets/125920080/43d52543-35fe-4e0b-8016-2676beb433b4

it looks like the VC from an Audi TT

I just got in contact with him. He told me it's very complicated: he built two VC hardware units (one from an A3 and one from a TT) together. As a result, the screen in the middle is not working anymore, just like on the TT - so CarPlay is only on the VC. He said that with the original A3 VC it's impossible to run CarPlay on the VC 🥲

OneB1t commented 6 months ago

I just found this on Chinese social media. Somehow they did it on an Audi A3. I asked him, but he is not answering me. https://github.com/jilleb/mib2-toolbox/assets/125920080/43d52543-35fe-4e0b-8016-2676beb433b4

it looks like the VC from an Audi TT

I just got in contact with him. He told me it's very complicated: he built two VC hardware units (one from an A3 and one from a TT) together. As a result, the screen in the middle is not working anymore, just like on the TT - so CarPlay is only on the VC. He said that with the original A3 VC it's impossible to run CarPlay on the VC 🥲

Running it directly on the VC is nearly impossible, but sending it over MOST from the MIB unit is possible.

OneB1t commented 6 months ago

Anyway, I found something really interesting: I'm able to use dmminimal to create a new window which renders an OpenGL triangle

as the code for dmminimal is really small, I believe it will be possible to modify it (or rewrite it in Python) in order to make rendering much faster

commands:

dmminimal 89     (generate a new OpenGL window with id 89)
dmdt dc 120 89   (create context number 120 with my window 89 inside it)
dmdt sc 4 120    (switch the VC to context 120 with my window)


also, when AA is running, we can see that stream 59 is visible under the "dmdt gd" command

eventually it can look something like this: https://github.com/jilleb/mib2-toolbox/assets/320479/4a1b0735-ce35-4533-83c1-5f916d90f02a

grajen3 commented 6 months ago

dmminimal seems like a worse target to modify than loadandshowimage, because the latter already has the image-loading part - dmminimal is simpler, but if we want to render images from files, you'd need to add the image-loading parts back

OneB1t commented 6 months ago

I think dmminimal is better as it is rendering in a while(true) loop

I will try to repair "import ctypes" again...

grajen3 commented 6 months ago

if you can get this working in Python that would be great, but just looking at what Ghidra decompiles for either of those 2 - it's quite standard EGL / OpenGL ES code plus some KD windowing system? Seems like https://www.qnx.com/developers/docs/6.4.1/composition_manager/dev_guide/externalapi.html describes that "stack"? Image loading seems separate from it, if you want to have that

grajen3 commented 6 months ago

I think dmminimal is better as it is rendering in a while(true) loop

I never messed with trying to modify binaries, but wouldn't changing the control flow (a goto / jump or whatever) be simpler than trying to link against currently unused libs? That was my reasoning for loadandshowimage probably being the simpler target to modify, rather than dmminimal

OneB1t commented 6 months ago

Also, it looks to me that the main reason why it is not possible to switch display 4 (MOST) to DISPLAYABLE_EXTERNAL_SMARTPHONE is the resolution - it looks like I'm only able to switch to an 800x480 window and nothing else

I started testing with MicroPython, as it can call C library functions, but I'm struggling to create a development environment without a car to test on :-/

OneB1t commented 6 months ago

VCRenderAADash is now able to render all maneuvers from "onJob_updateNavigationNextTurnEvent". It is also showing speed from the AA sensor data and distance from the "onJob_updateNavigationNextTurnDistance" event.

The next goal is to make rendering faster, mainly so the speed data is accurate

grajen3 commented 6 months ago

https://www.youtube.com/watch?v=T4xch2IFbio is a concept of recreating Google Maps in a background service / off-screen on the phone (you can see that my phone is locked - the bottom-left corner is a mirror of my phone's display; even unlocked it wouldn't show the same map as the top-left corner, which is what would be displayed in the VC)

The top left just continuously requests a new version of the image from the phone, which is what I imagine the head unit would have to do (if we manage to get the VC rendering at a decent framerate).

This was simulating actual driving with an Android fake-GPS simulator, just so it's not stationary.
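The phone-side serving part is roughly this (sketch - NanoHTTPD is used here just as an example embedded server, not necessarily what was actually used; the frame is assumed to arrive as an android.graphics.Bitmap from the off-screen map snapshot):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;

    import android.graphics.Bitmap;
    import fi.iki.elonen.NanoHTTPD;

    // Serves the most recent map frame; the head unit / VC renderer polls it.
    public class FrameServer extends NanoHTTPD {
        private volatile byte[] latestJpeg = new byte[0];

        public FrameServer(int port) { super(port); }

        // call whenever the off-screen map produces a new snapshot
        public void publish(Bitmap frame) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            frame.compress(Bitmap.CompressFormat.JPEG, 80, out); // JPEG keeps frames small
            latestJpeg = out.toByteArray();
        }

        @Override
        public Response serve(IHTTPSession session) {
            byte[] jpeg = latestJpeg;
            return newFixedLengthResponse(Response.Status.OK, "image/jpeg",
                    new ByteArrayInputStream(jpeg), jpeg.length);
        }
    }

    // usage: new FrameServer(8080).start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);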

OneB1t commented 6 months ago

that looks nice :-) but I can see that there is no next-turn info :-/ requesting a new image from an http endpoint is not an issue - that can be done in a few lines of code

grajen3 commented 6 months ago

that looks nice :-) but I can see that there is no next-turn info :-/

Not yet - but this can be added using https://github.com/3v1n0/GMapsParser directly (or by implementing that myself) - basically just read the Google Maps notification that shows up, extract the information, and then draw it over the Google Maps bitmap/snapshot.

This is something I will explore for sure, and I'll do a silly test now without much "styling", just to make sure I can extract that information in Android Auto mode.
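The extraction itself would be a notification listener along these lines (sketch of the approach GMapsParser takes; only the generic title/text fields are shown here - the maneuver icon and the exact extras layout are the part GMapsParser handles):

    import android.app.Notification;
    import android.os.Bundle;
    import android.service.notification.NotificationListenerService;
    import android.service.notification.StatusBarNotification;

    // needs notification access granted by the user in system settings
    public class MapsNavListener extends NotificationListenerService {
        @Override
        public void onNotificationPosted(StatusBarNotification sbn) {
            if (!"com.google.android.apps.maps".equals(sbn.getPackageName())) return;
            Bundle extras = sbn.getNotification().extras;
            // during guidance the title/text carry distance + maneuver info,
            // e.g. "500 m - Turn right onto Main St"
            CharSequence title = extras.getCharSequence(Notification.EXTRA_TITLE);
            CharSequence text = extras.getCharSequence(Notification.EXTRA_TEXT);
            // hand these to the renderer to draw over the next map snapshot
        }
    }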

But I wanted to get to the point of seeing how fast I could realistically get the phone to generate "images" and serve them on some endpoint for the VC to consume - and while it's not super smooth like the actual Android Auto emulator, if we could get the VC to similar "performance" (both latency and framerate), it might actually be "usable".

There are more things to replicate later as well - zooming in when close to a maneuver, zooming in/out otherwise based on speed, etc. - but none of that makes sense if we can't get the VC to render fast enough to be usable.

requesting a new image from an http endpoint is not an issue - that can be done in a few lines of code

Oh yeah, I know - I've been doing that already in LSD (though that was for shared-trip data, indirectly from Google's internal API, and not for pulling an image from the phone; but as long as the phone is on the same network, or there is some tunnelling, that's the simple part :) )
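(the few lines in question, for reference - sketch, with the phone's address on the car's network hardcoded as a placeholder:)

    byte[] fetchFrame() throws Exception {
        java.net.URL url = new java.net.URL("http://192.168.1.50:8080/frame"); // placeholder
        java.net.HttpURLConnection conn = (java.net.HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(1000); // keep timeouts tight, this runs per frame
        conn.setReadTimeout(1000);
        java.io.InputStream in = conn.getInputStream();
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
        in.close();
        return out.toByteArray();
    }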

OneB1t commented 6 months ago

I do not like how overengineered this would be :-D there must be a better way to steal the Android Auto rendered image, as the development options for AA allow me to take a screenshot/video

grajen3 commented 6 months ago

I do not like how overengineered this would be :-D there must be a better way to steal the Android Auto rendered image, as the development options for AA allow me to take a screenshot/video

If you find a way, let me know. I tried decompiling Android Auto (com.google.android.projection.gearhead) and Google Maps (com.google.android.apps.maps), but the result is almost unreadable (it in no way resembles the pretty readable decompiled LSD). From what I saw, the screenshot part couldn't be invoked with Android Intents, only via the UI. There is an option to save the video stream, but other apps do not have access to that directory for security reasons (also it wasn't clear to me whether the video was actually flushed immediately, as on the most recent versions I didn't see the video being saved to the phone's FS at all).

But even then - it would be the main Android Auto screen, which is not designed to be shown in a cluster: it has extra UI intended for user interaction, and it would only really work if you have maps ~full screen on the infotainment display (that's not the worst limitation, but it would still be pretty meh to have the VC mirror the main screen while you change your playlist, or while a passenger checks something else, and lose sight of the actual next-turn info because the main screen shows something else at that moment).

Ideally I would want to capture the second screen designed for the cluster instead of recreating one, but that is way more complex than what I'm doing (and would be even more overengineered, as I would need to simulate an Android Auto host and reimplement a bunch of what it does to launch the "car service" - that requires waaaay too much Android (Auto) internals knowledge).

Your map render suggestion gave me an idea: https://xdaforums.com/t/aawireless-general-discussion-and-support.4247677/post-89380837

This would be perfect, but from what I can see there was no reply to that question / feature request, so I wouldn't hold my breath for it.

Least "overengineered" solution would be to "just" update gal to recent AA protocol and just tell phone that it supports 2 screens - but then we would only get additional h264 stream that we have no way to decode and can't forward to VC as from what I followed on this topic is not something that is supported (otherwise we could just forward gal's video dumped by enabling some configuration settting in gal.json to achieve "mirroring" on VC)

All in all - I do not think this overall thing is feasible as something that people will be able to just "enable" (at least not without additional hardware; there is also the option of having some device like a Raspberry Pi between the phone and head unit that would advertise support for a second screen, receive the streams for both the main and cluster screens, and then pass the main one through to the car while extracting the latest frames from the second for the VC).

I'm pretty sure this whole thing will be a "tinkering-only" kind of deal. Even if I manage to get something usable for myself, I use a bunch of extra APIs (some of which need API keys or extra credentials), so it's not something that can be packaged nicely (I'm not interested in paying for those services so other people can use what I did), and it would require extensive configuration to make it work (setting up services, providing keys/credentials) and would still be pretty brittle - so I wouldn't really advise others who are unable to, or just can't waste time on it, to even try to use it.

OneB1t commented 6 months ago

the best approach would be to somehow grab the process memory and get the image that way, but I have no idea where to start with that approach (maybe write some MicroPython code to call the system functions that let us grab process memory)

I also want to poke around the "take screenshot" functionality in dmdt - maybe there is some idea in there for how to grab not just a window but also a stream

grajen3 commented 6 months ago

I doubt the decoded frames are in any process memory - https://ubuntuforums.org/showthread.php?t=1948237 describes a similar effect, and for a similar reason we see black frames when trying to take screenshots while Android Auto is showing: the Tegra decodes the video bitstream and outputs it directly to the screen, so frames don't go back to CPU/system RAM. The Tegra might support a method of getting them back (or doing on-demand decoding separately), but if screenshots come back black now, you'd need to figure out how to get the frames from the Tegra, not from process memory.

OneB1t commented 6 months ago

yes, I know, but you can modify the renderer in gal.json and maybe some option will store the image in process memory before rendering

about a Tegra "framebuffer" there is not a single word on the internet

grajen3 commented 6 months ago

If there is an option like that, it would end up decoding on the CPU instead of offloading to the Tegra, and I would be very curious about the resulting framerate :) There's a reason video decoding is done on GPUs and not CPUs.

jilleb commented 6 months ago

If we could get the Android auto display context to be displayed in place of the cockpit navi...

OneB1t commented 6 months ago

I believe the main reason it cannot be switched is the wrong resolution, as only 800x480 windows can be assigned to display 4 (VC).

jilleb commented 6 months ago

I am digging into configurationmanager.res, which has some definitions for the display configs inside... Maybe this can help 😁

OneB1t commented 6 months ago

What's weird is that the display manager doesn't even see this MOST display.

It will be interesting to see if we can switch to the rear camera, but I do not have one unfortunately.

jilleb commented 6 months ago

I've switched the AA context with the rear view cam once. That worked 😁 it would switch to the cam when I pressed the appconnect button on my hmi.

OneB1t commented 6 months ago

But is it possible to switch the camera to the VC via dmdt?

andrewleech commented 6 months ago

I'm pretty sure the AA context can't be switched onto the VC for the same reason we can't screenshot it: the Tegra GPU decodes it directly onto the hardware screen output (connected to the main unit display). The VC, on the other hand, is a "fake screen", in that the QNX context is essentially read out the same way a screenshot utility works and streamed out over MOST. The Tegra decoder won't be able to see this.

Maybe there would be a way to force the Tegra decoder to write to an OpenGL surface instead, however - considering there was that OpenGL frame sent to the VC a few comments back, maybe that would be a possible solution. But still, it would just be mirroring the main screen at best.

OneB1t commented 6 months ago

Mirroring will have a huge problem with finding the correct place to cut the data: the main screen is 1280x720 and the MOST video stream should be 800x480, but in reality only about 560x480 in the middle is visible (even less with the bigger dials, and cut further when navigation is active, as there is a bar with the time of arrival)


so the red square is the area which can be visible if we want it pixel-perfect, without rescaling
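So pixel-perfect mirroring boils down to a center crop like this (sketch using the numbers above; the usable area shrinks further with bigger dials or the arrival bar):

    int srcW = 1280, srcH = 720; // main screen
    int visW = 560,  visH = 480; // usable middle of the 800x480 MOST stream
    int cropX = (srcW - visW) / 2; // 360 px dropped from each side
    int cropY = (srcH - visH) / 2; // 120 px dropped top and bottom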

grajen3 commented 6 months ago

Yeah, this is exactly why I don't think mirroring would be that great. Maybe with the "split screen" view it would be more usable as something to render on the VC (though the aspect ratio will be weird and you might get black bars at the sides?), but then it forces you to use that mode by default - so whenever you interact with the main screen you'd need to remember to go back to the split-screen view. Pretty poor UX :/

I think I am giving up on trying to get ~Android Auto / Google Maps actually displayed in the VC, at least for now, and will check if I can make waypoint mode better (I've actually been using it a bit and it wasn't that far off for the most part). Do any of you know the actual map provider for the MIB2 High VAG navigation? I've read they use HERE, but it's not clear whether that's true. I will look into the road-snapping feature it offers for testing, and try to figure out how to extract the Google Maps navigation polyline without having to use "share trip" for it.

thomasakarlsen commented 6 months ago

I think I am giving up on trying to get ~Android Auto / Google Maps actually displayed in the VC, at least for now, and will check if I can make waypoint mode better (I've actually been using it a bit and it wasn't that far off for the most part). Do any of you know the actual map provider for the MIB2 High VAG navigation? I've read they use HERE, but it's not clear whether that's true. I will look into the road-snapping feature it offers for testing, and try to figure out how to extract the Google Maps navigation polyline without having to use "share trip" for it.

Had it confirmed by both my dealership and importer that their provider is HERE. Did some complaining about speed limits a couple months ago.

olli991 commented 6 months ago

The backend for traffic information is actually TomTom. But you can use HERE as a provider as well with a bit of adjustment.

grajen3 commented 6 months ago

Yeah, I don't care about traffic information itself - I'm looking to make what I showed in https://github.com/jilleb/mib2-toolbox/issues/159#issuecomment-1970068441 better. Since then I already added snapping to roads via Mapbox, which is closer to VAG's road coordinates than Google Maps, but still not the same. Since I asked, I got a playground running (which I previously had with Google Maps and Mapbox; I just added HERE maps) and it shows where Mapbox is not exactly matching VAG. In the screenshot: top-left is Google Maps, bottom-left is Mapbox, top-right is HERE maps.

The red line is the exact route polyline from Google Maps (the shared-trip stuff); the lavender line is that polyline "snapped to roads" via the Mapbox APIs.

You can see that on HERE maps the lavender line goes to the edge of the road instead of staying in the middle - this is also what I saw when driving the car on that bit of my usual route.

So I will look into figuring out HERE's road snapping (it does have one) and see how that performs. Even ignoring those slight mismatches, Mapbox had more serious problems that I saw in the playground (which I didn't necessarily see while actually driving, but I also haven't driven far recently) - like below, where road snapping, at least with Mapbox (and at least with the options I used), would sometimes figure out "weird detours". In this case it's just silly, but in a more road-dense area this inaccuracy might really put you in weird scenarios if you relied on it too much.

All in all, the above just shows how map providers can differ from each other in subtle (or sometimes less subtle) ways in their road/object coordinates.

grajen3 commented 6 months ago

HERE maps road snapping seems better (assuming it will match the built-in nav) - will see how it works in practice next time I have a ride (green is HERE's road snapping).

OneB1t commented 6 months ago

[CMOSTEncoderInterface::reconfigureEncoder] invalid displayable with did:-2 - has anyone found where this comes from?

OneB1t commented 6 months ago


roundabouts don't have a "turn number" - they use a "turn angle" instead, so the icon must be dynamically adjusted for it :/ should I integrate icon rotation, or just create 18 icons in 20-degree steps? 😄
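If you go the 18-icons route, the lookup is just a quantization (sketch; the icon naming is made up):

    // map a roundabout turn angle to one of 18 pre-rendered icons (20 degree steps)
    static int roundaboutIconIndex(int turnAngleDegrees) {
        int a = ((turnAngleDegrees % 360) + 360) % 360; // normalize to 0..359
        return Math.round(a / 20f) % 18;                // -> e.g. roundabout_<index>.png
    }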