Open nodiaque opened 11 months ago
@nodiaque, I just updated my fork where the polling status is now published to an MQTT topic (which you specify), along with command response status, where the topic is determined by the command name (README updated with the schema). Please try the "latest" docker image and let me know what you think. I will publish the changes in a versioned release after I get some feedback. Thanks.
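If it helps while testing, here is a minimal subscriber sketch for watching both kinds of status messages. The topic names below are placeholders (the real schema is in the README), and it assumes paho-mqtt 1.x:

```python
# Minimal listener for the new status topics. Topic names are
# placeholders -- substitute the ones you configured per the README.
# Assumes paho-mqtt 1.x callback signatures.
import json
import paho.mqtt.client as mqtt

POLL_STATUS_TOPIC = "homeassistant/YOUR_VIN/polling_status"  # hypothetical
CMD_STATUS_TOPICS = "homeassistant/YOUR_VIN/command/#"       # hypothetical

def on_connect(client, userdata, flags, rc):
    # Subscribe to the polling status topic and all command response topics.
    client.subscribe([(POLL_STATUS_TOPIC, 0), (CMD_STATUS_TOPICS, 0)])

def on_message(client, userdata, msg):
    # Print each status payload as it arrives.
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.loop_forever()
```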
Oh nice, I'll be happy to try that. How do I switch to your fork?
The instructions are in my README. I will submit a PR here as well once I get feedback from folks and finalize the functionality.
Just found out I was already on your fork. Starting the testing; I'll report soon.
Ok, I just saw that I need to add a variable for the polling status. My idea was to use the current availability status channel that is under homeassistant/vin/available. Just change this one to false and Home Assistant (or any other home automation platform using this, like openHAB) will automatically report it as offline. You could also add a message channel at the same place to provide more information on why it's online/offline. Aside from that, I do see a JSON answer at the path provided saying { "ok" : { "message" : "Data polled successfully" } }
I'll check once it fails to poll.
Someone can correct me if I'm wrong here. From my understanding of the code, setting "homeassistant/vin/available" to false would cause the Lovelace dashboard elements to go to "Unavailable", so I didn't want to do that, which is why I elected to use a new topic altogether.
I'm not on HA, so I can't speak to that. For me, this puts the things (in openHAB we call them Things) offline. I can still read the last value they had, but since it's offline, it tells me the last polling failed. Even having it "unavailable" in HA would, for me, say "well, OnStar doesn't work right now".
Here's what I mean.
Part of my Lovelace dashboard, even while I have 504 errors:
Here's what happens when I set the availability topic to "false":
I don't want my entire dashboard to look like this just because of the 504 errors.
You can parse the JSON per your post above to do exactly what you need with regard to determining whether things are working or not.
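As a rough illustration (not part of the project), something like this could watch the status payload and republish a derived flag to its own topic, leaving homeassistant/vin/available untouched. The topic names and broker address are assumptions, and it assumes paho-mqtt 1.x:

```python
# Sketch: derive an "is OnStar polling working" flag from the status
# payload without touching homeassistant/<vin>/available, so Lovelace
# entities stay intact. Topic names are hypothetical.
import json
import paho.mqtt.client as mqtt

STATUS_TOPIC = "homeassistant/YOUR_VIN/polling_status"  # hypothetical
DERIVED_TOPIC = "onstar/derived/poll_ok"                # hypothetical

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    # A payload like {"ok": {"message": "Data polled successfully"}}
    # means the last poll worked; anything else (e.g. a 504 error
    # response) is treated as a failure.
    poll_ok = "ok" in payload
    client.publish(DERIVED_TOPIC, "true" if poll_ok else "false", retain=True)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe(STATUS_TOPIC)
client.loop_forever()
```

The derived topic can then back an openHAB Thing or HA binary sensor without affecting the existing availability behavior.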
@nodiaque, the latest update I made should give you what you were looking for without breaking existing functionality, as I noted above.
Please re-pull my "latest" image which was just published and try again. Thanks.
Yeah, I just saw the lastpollsuccessful one; I'll incorporate it somewhere.
Is it normal that this container doesn't stop properly? At least in Unraid, when I request a stop, it waits the full 60s timeout and then kills it.
I don't have any issues with this container running in a HAOS VM on VMware.
@BigThunderSR You could wrap your car info in a conditional card and hide/replace it when the entities go unavailable.
This might all be moot depending on all the changes that @garyd9 is making, so let's see where all that lands. 🙂
Hello,
As we know, OnStar tends to be very hit and miss. After a while, you request diagnostics and get nothing back. It would be nice if the availability topic changed to false when a poll didn't return an update. Also, having the last successful update time in a topic would be good.
Thank you
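For reference, one consumer-side way to approximate this request, independent of the fork's built-in topics, is a simple MQTT watchdog that marks things offline when no successful-poll message arrives within a window. This is only a sketch; the broker address, topic names, and window length are assumptions, and it assumes paho-mqtt 1.x:

```python
# Watchdog sketch: if no successful-poll message arrives within
# MAX_AGE seconds, publish "offline" to a derived availability topic.
# Topic names are hypothetical.
import json
import threading
import paho.mqtt.client as mqtt

STATUS_TOPIC = "homeassistant/YOUR_VIN/polling_status"  # hypothetical
DERIVED_TOPIC = "onstar/derived/availability"           # hypothetical
MAX_AGE = 45 * 60  # seconds; set a bit longer than the polling interval

client = mqtt.Client()
timer = None

def mark_offline():
    client.publish(DERIVED_TOPIC, "offline", retain=True)

def on_message(c, userdata, msg):
    global timer
    if "ok" in json.loads(msg.payload):
        # Successful poll: report online and restart the countdown.
        c.publish(DERIVED_TOPIC, "online", retain=True)
        if timer:
            timer.cancel()
        timer = threading.Timer(MAX_AGE, mark_offline)
        timer.start()

client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe(STATUS_TOPIC)
client.loop_forever()
```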