briancmpbll / home_assistant_custom_envoy


All sensors unavailable for one minute, same time each day. #126

Open · Steve2017 opened this issue 11 months ago

Steve2017 commented 11 months ago

I mentioned this in another issue and it has been suggested it should be separated.

The issue is that all sensors created by the integration become unavailable for a short period - usually a minute - at the same time each day. In my case, it is 23:00 local time (13:00 GMT). On some days I also experience a dropout at approximately 06:35, also for a very short period.

I am using v0.2.10 of Enphase Envoy Envoy_2_Core_Custom_Test, which I gather is being folded into this repo. (It happened with earlier versions too.) This is on an Envoy S Metered with software version D7.6.175 (f79c8d), running HA 2023.8.xx (it was happening in July too).

Although the times might vary a little, I am not the only one reporting this behaviour: https://github.com/briancmpbll/home_assistant_custom_envoy/issues/122#issuecomment-1668339027 https://github.com/briancmpbll/home_assistant_custom_envoy/issues/122#issuecomment-1667567962 https://github.com/briancmpbll/home_assistant_custom_envoy/issues/122#issuecomment-1667803418

My dropouts are similar to the one in the first link above. (Magnified view: production/consumption dropout graph.) All sensors, including the micro-inverters, show the same behaviour.

This is the log from an occurrence a few days ago, showing the system attempts a GET at 23:00:28 and times out about 30 seconds later. A new attempt then works:

> 2023-08-01 22:59:28.218 DEBUG (MainThread) [custom_components.enphase_envoy] Finished fetching envoy Envoy 1221xxxxx data in 1.092 seconds (success: True)
> 2023-08-01 23:00:28.127 DEBUG (MainThread) [custom_components.enphase_envoy.envoy_reader] Checking Token value: ******REDACTED TOKEN DATA*****
> 2023-08-01 23:00:28.128 DEBUG (MainThread) [custom_components.enphase_envoy.envoy_reader] Token is populated: ******REDACTED TOKEN DATA*****
> 2023-08-01 23:00:28.128 DEBUG (MainThread) [custom_components.enphase_envoy.envoy_reader] Token expires at: 2024-07-30 14:21:48
> 2023-08-01 23:00:28.128 DEBUG (MainThread) [custom_components.enphase_envoy.envoy_reader] HTTP GET Attempt #1: https://192.168.1.144/production.json?details=1: Header:{'Authorization': 'Bearer ******REDACTED TOKEN DATA*****'}
> 2023-08-01 23:00:58.130 ERROR (MainThread) [custom_components.enphase_envoy] Timeout fetching envoy Envoy 1221xxxxxx data
> 2023-08-01 23:00:58.135 DEBUG (MainThread) [custom_components.enphase_envoy] Finished fetching envoy Envoy 1221xxxxxxx data in 30.008 seconds (success: False)
> 2023-08-01 23:01:58.126 DEBUG (MainThread) [custom_components.enphase_envoy.envoy_reader] Checking Token value: 

From those production and consumption sensors in the graph, I use a template sensor to calculate excess production (i.e. power available for export). That too drops out (with a slight lag) for a short period.

The issue for me is not so much the original sensor values and the template sensor, because they come back after a minute. The issue is that the template sensor is the source for a Riemann Sum sensor calculating export energy. It also goes offline, but does not return until solar production resumes at about 06:40. (Bear in mind this is at 11 pm, so zero solar production.)

Downstream from there I have a number of child sensors to give me data on daily energy exports, export earnings, net energy costs, etc. All of those become unavailable after the Riemann Sum sensor is unavailable for one minute. They return when solar production resumes.

The dropouts at 06:35 are happening prior to solar production resuming, so I see nothing different there.

I have a similar set-up for measuring grid import power, just with the sensors reversed in the formula. It experiences a similar template sensor and Riemann Sum dropout, but the Riemann Sum sensor comes back on; I assume because the template sensor is returning new and changing values.

I have tried a number of suggested solutions for the downstream issue, such as removing state_class and device_class entries from the Template Sensor YAML or using the "available" function in the sensor YAML. None of those things worked.

To try to isolate something, I set up the REST/RESTful sensor integration for the Envoy. It works without dropout. It uses the token, not the Enphase username-and-password method, to gain access to the Envoy.

catsmanac commented 11 months ago

I am using v0.2.10 of Enphase Envoy Envoy_2_Core_Custom_Test, which I gather is being folded into this repo.

Let me explain:

Envoy_2_Core_Custom_Test was my attempt to propose @briancmpbll's custom integration, with the recent additions by @posixx, as the successor of the current HA Core integration, which doesn't support the token mechanism. In that process I had to apply some corrections to the code to pass the pretty tight HA Core tests on code layout and components used. As a result, some minor modifications to the code and to the communication were applied. These may play a role here, I'm not sure.

I ended the proposal effort with the announcement that the 2023.09 HA release will (most probably) contain an updated Enphase Envoy integration based on a different Enphase communication library than used before, and that it will support the new firmware (but drop support for very old firmware with only HTML pages). I plan to offer the functional changes from the proposal effort to @briancmpbll's version as well, if deemed beneficial, and to archive/make read-only that Envoy_2_Core_Custom_Test in the future, as I see no benefit in having a second one available.

So your issues are probably present in the current version of this custom integration as well, but to be fair we should verify that using @briancmpbll's code and not the one that was on its way to become the new HA Core and is now in a dead-end street.

Having said that, the new HA Core Enphase Envoy integration using that new communication library may do better; that remains to be seen.

catsmanac commented 11 months ago

Timeout fetching envoy Envoy 1221xxxxxx data

This triggered a memory: I've seen this before in another custom integration, by @vincentwolsink, using installer accounts. They increased the timeout for getting data from the Envoy to 60 seconds.

You can try this by going into the /config/custom_components/enphase_envoy folder and changing envoy_reader.py. Look for '_async_fetch_with_retry' and then, a couple of lines down, for 'timeout=30', and change it to timeout=60 or so. Then cycle HA.
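For illustration only (a standalone sketch, not the actual envoy_reader.py code, which wraps this in its retry loop): the only thing that changes is the timeout value handed to httpx.

    # Hypothetical sketch of the suggested edit; the real file differs,
    # the point is simply the larger timeout passed to httpx.
    import httpx

    async def fetch_production(url: str, token: str) -> httpx.Response:
        headers = {"Authorization": f"Bearer {token}"}
        # The Envoy uses a self-signed certificate, hence verify=False.
        async with httpx.AsyncClient(verify=False) as client:
            return await client.get(url, headers=headers, timeout=60)  # was timeout=30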

catsmanac commented 11 months ago

@Steve2017, as for the Riemann integration you mentioned, mine does the same at a late-evening drop. It has to wait for a new timestamp/value before it picks up again when solar starts producing. It's still working OK, but it shows gaps because of it. That's how the integration works; the root issue is that no new timestamp/value arrives after the drop in communication.

Potential solution: a larger timeout, or a retry on timeout!?

Steve2017 commented 11 months ago

So your issues are probably present in the current version of this custom integration as well, but to be fair we should verify that using @briancmpbll's code and not the one that was on its way to become the new HA Core and is now in a dead-end street.

Would that be the v 0.1.3 version here? If so, I think I have tested that and still had the issue.... but happy to try again.

I'll try the timeout change. There are four references to timeout=30. Do I change all four or just the first? The first one comes at line 249 - 16 lines after a reference to the 2nd mention of '_async_fetch_with_retry'.
Given it comes not long after:

            _LOGGER.debug(
                "HTTP GET Attempt #%s: %s: Header:%s ",
                attempt + 1,

and GET attempt is where the error appears in the log????

Steve2017 commented 11 months ago

That's how the integration works; the root issue is that no new timestamp/value arrives after the drop in communication.

That's where I was hoping the use of 'available' might work, especially if it replaced the earlier 0 kW data with 1 W for that one attempt. It didn't help.

Restarting the Riemann Sum sensor might help, because an HA restart puts the Riemann Sum sensor back to its pre-dropout state.

catsmanac commented 11 months ago

@Steve2017 it's the one below

 async def _async_fetch_with_retry(self, url, **kwargs):

which is where the debug log for the attempt indeed is

catsmanac commented 11 months ago

That's where I was hoping the use of 'available' might work, especially if it replaced the earlier 0 kW data with 1 W for that one attempt. It didn't help.

Seen this one?

Steve2017 commented 11 months ago

Seen this one?

No, but that is the same issue and I have seen other attempts to fix it. While the drop-out causes the initial problem, the ongoing problem seems to lie with the Riemann Sum, and as Tom_I indicates there were attempts to fix it... unsuccessfully.

Tom has made some other suggestions, but none worked for me.

I might have to wait until the integration is back in the official version, then try raising an issue in the core.

I'm not sure how to implement the last suggestion in that thread, but no-one is saying it works.

catsmanac commented 11 months ago

Maybe something like

- platform: integration
  source:  {{ sensor.envoy_122302045041_inverter_122220009431 | float(default=0) }} 
  name: envoy_122302045041_inverter_122220009431_productie
  unique_id: sensor.envoy_122302045041_inverter_122220009431_productie
  unit_prefix: k
  round: 3
  method: left

Steve2017 commented 11 months ago

I get "invalid key: " when I try to use this format {{ sensor.envoy_122302045041_inverter_122220009431 | float(default=0) }} (using my sensors of course)

The message is: Error loading /config/configuration.yaml: invalid key: "{'sensor.solar_export_power3 | float(default=0)': None}"

catsmanac commented 11 months ago

You can try such an expression in Developer Tools / Template. It could be that some {} are wrongly positioned. That is not my strong suit, for sure.

catsmanac commented 11 months ago

I tried this in my templates test section

{{ states('sensor.envoy_122302045041_inverter_122220009431') | float(default=0) }}

and it yielded the value

catsmanac commented 11 months ago

And I found this calculation to make kWh from Wh

  - name: "envoy productie 7 dag teller kWh"
    unique_id: envoy_productie_7_dag_teller_kWh
    unit_of_measurement: "kWh"
    icon: mdi:solar-panel
    state: >
      {{ states('sensor.envoy_122302045041_last_seven_days_energy_production') | float(0) / 1000}}
    availability: "{{ states('sensor.envoy_122302045041_last_seven_days_energy_production') | is_number }}"

so run your input signal through this and then integrate?

Steve2017 commented 11 months ago

I also get a value in my test template section, but when I use that format as the source for the Riemann Sum sensor, it throws its hands up.

I'll wait until after 11pm (10 minutes away) to see if the longer timeout works.

Given the 'last_seven_days_energy_production' sensor also drops out, I am guessing the Riemann Sum will still have issues there, but it is worth a try.

Steve2017 commented 11 months ago

The longer timeout did not make any difference. It still drops out. I forgot to turn on debug, which is not helpful.

Why does the integration use the ID and password method to obtain a token? Why not just enter a token directly in the configuration/setup and skip those steps?

aperezva commented 11 months ago

Hi, as far as I can see, HA will announce a new release of the official Enphase integration?

I have the same disconnection problems, but it doesn't make sense to dig deeper if we will get a new integration soon.

Could you confirm?

catsmanac commented 11 months ago

Hi, as far as I can see, HA will announce a new release of the official Enphase integration?

Yes, though I'm not sure which version it will be. All the PRs for Enphase changes in core are here (added to the development version) and here (pending).

This is the most important one: 'Refactor enphase_envoy to use pyenphase library'

catsmanac commented 11 months ago

Why does the integration use the ID and password method to obtain a token? Why not just enter a token directly in the configuration/setup and skip those steps?

To allow for an automatic token refresh if it's expired.

catsmanac commented 11 months ago

The longer timeout did not make any difference. It still drops out.

Too bad. The only option I can think of is adding a retry on timeout. Considering the next collection cycle 1 minute later works (right?), just retrying may work as well.

Steve2017 commented 11 months ago

To allow for an automatic token refresh if it's expired.

Too bad there is not an option to just use the token because it has a 12-month expiry. I understand the reasoning behind regular token updates/checks, but the Enphase website is very slow and non-responsive at times.

I wonder if that might be related to the issue.

Having a choice between ID/password and token would be nice. There is already a routine to check whether the token is still valid. Even better would be having the Riemann Sum fixed by the HA dev team.

Steve2017 commented 11 months ago

Too bad. The only option I can think of is adding a retry on timeout. Considering the next collection cycle 1 minute later works (right?), just retrying may work as well.

envoy_reader.py looks to already have retries, at line 233 in its code???

async def _async_fetch_with_retry(self, url, **kwargs):
        """Retry 3 times to fetch the url if there is a transport error."""
        for attempt in range(3):
            header = " <Blank Authorization Header> "
            if self._authorization_header:
                header = " <Authorization header with Token hidden> "

My debug log shows failure after one attempt, where the code indicates retries occur after a 401, so maybe I am not even getting to a 401.

catsmanac commented 11 months ago

I wonder if that might be related to the issue.

I doubt it. The Enphase website is only contacted when HA starts. Checking whether the token is expired is done by using the expiry date embedded in the token and comparing it to the current time. Only if the current time is beyond the token expiration time does it reach out to the Enphase website to get a new token.

The 2.10.0 dev-2-core one we tested for some time even stores the token locally so does not need to contact Enphase website when HA starts, but that's not when your issue happens and the issue happened with that one as well.
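Just to illustrate that check (a rough sketch, not the integration's actual code): the expiry is read from the 'exp' claim embedded in the token itself, so no call to the Enphase website is needed for it.

    # Hypothetical local token-expiry check, assuming a standard JWT layout.
    import base64
    import json
    import time

    def token_is_expired(token: str) -> bool:
        payload_b64 = token.split(".")[1]
        # JWT payloads are base64url-encoded without padding; restore it first.
        payload_b64 += "=" * (-len(payload_b64) % 4)
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return time.time() >= payload["exp"]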

catsmanac commented 11 months ago

envoy_reader.py looks to already have retries, at line 233 in its code???

Yes, but that's only for the 401, to trigger the cookies/token check. You are getting a timeout, which takes it to the bottom of the loop, where a test is present for httpx.TransportError. That should include timeouts as well, but that doesn't seem to function.

            except httpx.TransportError:
                if attempt == 2:
                    raise

You can try removing httpx.TransportError from that line (leave the colon); that should let it retry 3 times on any error, and see if that captures the timeout.

            except:
                if attempt == 2:
                    raise
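Side note (an assumption on my side, not something from the existing code): in async code a bare except also swallows task cancellation, so a slightly safer way to run the same experiment is to catch Exception instead:

            # Catching Exception keeps asyncio.CancelledError propagating while
            # still retrying on any ordinary error, timeouts included.
            except Exception:
                if attempt == 2:
                    raise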
Steve2017 commented 11 months ago

You can try removing httpx.TransportError from that line

Still no joy.

It might be time to concentrate on how to make the Riemann Sum sensor become available again. As a test I set one up through the Helper GUI (so it has a unique ID) and it let me switch it back from unavailable to available.

Thanks for your patience. I think mine has run out.

There is still something fundamentally wrong with those sensors. When solar production was zero, I created a GUI one to measure solar energy export and instead of returning 0 kWh, it returned 'unknown'. From programming-language logic this might make sense, but from real-world logic we knew that solar power = 0... therefore solar energy export = 0. It was not "unknown"!

catsmanac commented 11 months ago

Understood @Steve2017, though I feel I'm not done with it yet. There are indeed 2 separate issues: the Riemann one and the communication one.

As for your attempt with the changed error handling, do you happen to have a log around that can help me figure out how to really implement a retry on timeout?

As for the returned unknown, I'm not sure I fully comprehend what you did.

Steve2017 commented 11 months ago

As for your attempt with the changed error handling, do you happen to have a log around that can help me figure out how to really implement a retry on timeout?

No - it was late at night and I forgot to turn on debug again. I'll try tonight.

As for the returned unknown, I'm not sure I fully comprehend what you did.

Just to make sure there wasn't something different in the way Riemann Sum sensors were processed in the Helper GUI, I created a new energy sensor that duplicated the ones I had made in YAML in config.yaml. (I think the only real difference is the GUI assigns a unique ID.)

At the time I created the new GUI sensor there was no solar production, so solar_exportpower was 0.0 kW. This meant the energy sensor should have been showing 0.0 kWh exported. I gather a Riemann Sum sensor will normally show 'unknown' until the first data reading is received; then, instead of 'unknown', it will show 0.12 kWh (or whatever the calculation returns).

Instead, it showed 'unknown' up until the Envoy integration sensors went offline for a minute. When the Envoy source sensors came back online, it changed to 'unavailable', just like the YAML versions.

I can only assume the power sensor had been sending "no data" because there had been no new readings for several hours - and so the Riemann Sum sensor recorded that as unknown.

BTW: No dropout early this morning - only the regular 11pm event last night.

catsmanac commented 11 months ago

I'll try tonight.

I fully understand you've kinda had it with this, so no need to stay up late just to get it.

I can only assume the power sensor had been sending "no data"

Thanks for explaining, I think I know what is happening:

  1. The Riemann sensor needs 2 values in a series with different timestamps to be able to integrate; until then it probably shows 'unknown'.
  2. The Envoy sends the last reported value and time. When production ends, that value and time no longer change. During that time the Riemann sensor is waiting for the next value/time.
  3. When the outage occurs, the Riemann sensor can't use that data point as the next one and probably signals unavailable.
  4. When communication is restored, HA inserts the 0 with the current timestamp, I guess, as the Envoy's timestamp will still be 7 pm.
  5. The Envoy is still sending a zero value for 7 pm; there is no newer data, as its timestamp is not newer than the inserted communication-restored point.
  6. Then in the morning, when production resumes and the first new value and time come in, the Riemann sensor has its awaited second value and can integrate again.

Hope it makes sense.
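To illustrate points 1 and 2 with a toy example (this is not Home Assistant's actual Riemann implementation, just the idea): a left Riemann sum only adds area when a sample with a newer timestamp arrives, so a source that stops updating simply freezes the total until the next morning.

    # Toy left-Riemann integrator over (timestamp_seconds, watts) samples.
    def left_riemann_kwh(samples):
        total_wh = 0.0
        for (t0, w0), (t1, _w1) in zip(samples, samples[1:]):
            if t1 <= t0:  # no newer timestamp -> nothing to integrate
                continue
            total_wh += w0 * (t1 - t0) / 3600.0
        return total_wh / 1000.0

    # One hour at 1000 W, then a repeated timestamp adds nothing:
    # left_riemann_kwh([(0, 1000), (3600, 1000), (3600, 0)]) == 1.0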

Steve2017 commented 11 months ago

5. The Envoy is still sending a zero value

I had been hoping that using the available function to return 1.0 or 2.0 when it drops out would help, if it then reverted to zero on the next report. That was not the case.

I'll turn debug on now. It'll be a large file, but at least it'll be on.

Steve2017 commented 11 months ago

You can set your watch by it. All is good, then the GET starts at 23:00:01 with 3 attempts. It times out 30 seconds later. It tries again at 23:01:31.425. All good.

20230810 Debug Log Enphase Dev2.txt

The Riemann Sum sensor for solar export and its child sensors are now unavailable.

catsmanac commented 11 months ago

Thanks for providing this @Steve2017, I realize it was late (or already early) over there.

2023-08-10 23:00:01.426 HTTP GET Attempt #1: https://192.168.1.144/production.json?details=1
2023-08-10 23:00:31.428 HTTP GET Attempt #2: https://192.168.1.144/production.json?details=1: 
2023-08-10 23:00:31.432 HTTP GET Attempt #3: https://192.168.1.144/production.json?details=1: 
2023-08-10 23:00:31.436 ERROR  Timeout fetching envoy Envoy  data
2023-08-10 23:00:31.439 Finished fetching envoy Envoy data in 30.014 seconds (success: False)

The error catching worked, as it looped 3 times. But as you can see from the timing, attempts 2 and 3 returned right away with a time-out. I think we need to take some additional action there to start something like a real new connection. Need to give that a bit of thought.

Since it happens like clockwork I would think it's something going on in the Envoy. But it has an ugly effect downstream. Although I don't think the resulting numbers are wrong, seeing all those gaps makes one think there is some other problem.
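A rough sketch of the 'start a real new connection' idea (an assumption about the approach, not the code that later went into the beta): throw away the httpx client on failure and build a fresh one for the next attempt, so attempts 2 and 3 don't reuse a possibly broken connection.

    # Hypothetical retry helper that uses a brand-new client per attempt.
    import httpx

    async def fetch_with_fresh_client(url, headers, attempts=3):
        for attempt in range(attempts):
            client = httpx.AsyncClient(verify=False)
            try:
                return await client.get(url, headers=headers, timeout=30)
            except httpx.TransportError:
                if attempt == attempts - 1:
                    raise
            finally:
                await client.aclose()  # drop the connection before retrying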

Steve2017 commented 11 months ago

@catsmanac - Yes - the 2nd and 3rd attempts don’t really look like attempts at all, with milliseconds between them. It made me wonder whether a very short timeout might be better, but I doubt it.

I was leaning towards the Envoy being the problem, except the REST sensors now running in parallel do not show any dropout. Unless the REST integration handles short dropouts in different ways, it’s hard to conclude that the Envoy is definitely the issue.

I'm not ruling out the Envoy being the cause. The fact that there are also other users having similar issues clouds the matter further; however, it is well above my skill level.

catsmanac commented 11 months ago

It's unclear, for sure. One can raise arguments for either potential cause, but without further details it will be at best inconclusive.

I'll add some code to that section to see if we can identify it better, or deal with it better. I'm afraid it will be more trial and error, but for now I'll make them warnings in the log file so you don't have to switch debug on/off.

As you know, the Envoy_2_Core_Custom_Test effort has ended and I proposed all the changes there for inclusion in @briancmpbll's PR list here. The test version with these new PRs can be installed with HACS by adding a custom repository pointing to here, just as with Envoy_2_Core_Custom_Test. I'll add the test code there in a beta release, so you need to enable beta releases in HACS. Still working on the code though. Later, when all PRs are included, you can switch back to @briancmpbll's version again.

Maybe, when the new HA Core integration gets released in some 2023.9.x version, it will solve the issue.

Steve2017 commented 11 months ago

I've just installed that DEV-Test version and so far it looks good.

I've also gone back and re-read the suggested documentation on the REST/RESTful solution, and it avoids using Riemann Sum sensors. Instead, it uses Utility Meters to draw on the energy data already being put out by the Envoy, rather than converting the power data into an energy reading via a Riemann Sum.

It might not fix the drop-out issue, but it does neatly skirt around it. If there is a dropout, there is no Riemann Sum to stall the process.

The energy import and export data come from https://envoy.local/ivp/meters/readings:

Lifetime Energy Import = value_json[1].actEnergyDlvd
Lifetime Energy Export = value_json[1].actEnergyRcvd
Net Power = value_json[1].activePower

The Envoy is calculating these, because they show in my Envoy web page, but the Enphase integration doesn't offer them. Perhaps another modification?
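For reference, a minimal sketch of pulling those same fields directly (my own illustration, not part of the integration; the [1] index follows the mapping above and is an assumption about which CT channel is the grid meter):

    import httpx

    async def read_grid_meter(host: str, token: str) -> dict:
        headers = {"Authorization": f"Bearer {token}"}
        async with httpx.AsyncClient(verify=False) as client:
            resp = await client.get(
                f"https://{host}/ivp/meters/readings", headers=headers, timeout=30
            )
            resp.raise_for_status()
            grid = resp.json()[1]  # assumed grid CT channel, per the mapping above
        return {
            "lifetime_import": grid["actEnergyDlvd"],
            "lifetime_export": grid["actEnergyRcvd"],
            "net_power": grid["activePower"],
        }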

catsmanac commented 11 months ago

For my energy dashboard I'm also using lifetime energy production. I use Riemann only for the individual inverters, to see how they are doing, and there a small drop-out does not really have an impact.

As for /ivp/meters/readings, aren't those same numbers in /production as well? These come from the connected CT meters, I guess, and I think they get reported in production and consumption? I guess it depends on where the CT meters are connected.

I assume you have an Envoy-S Metered (sorry, you probably mentioned it) with CTs installed and configured. I would need to see what the output of those pages is. My plain vanilla Envoy doesn't return anything from there.

Steve2017 commented 11 months ago

Yes, I have the Envoy S Metered. In short, /readings gives a faster response:

One new piece of information that is currently floating around is the Enphase Technical Brief.

The first thing I did was compare the performance of how long it takes to obtain information from the old endpoints https://envoy.local/production.json (2500 ms, or 2.5 seconds) vs some of the new API endpoints https://envoy.local/ivp/meters/readings (64 ms).

That is from here: https://community.home-assistant.io/t/enphase-envoy-d7-firmware-with-jwt-a-different-approach/594082#:~:text=1%20new%20piece%20of%20information%20that%20is

2.5 seconds is not long, but it does make me wonder about the drop-outs.

catsmanac commented 11 months ago

Just published beta version v0.0.13+12-beta-01. Do a re-download and enable 'show beta versions'.

(screenshot)

When an error occurs it will close the connection and retry. Hopefully that acts as if it were the next collection cycle. If an error occurs, a warning will be written to the log, even if debug is not on. It will try 3 times.

 Error in fetch_with_retry, try closing connection: <error description>

I tried it here by pulling the network cable from the Envoy and it retried 3 times.

Steve2017 commented 11 months ago

Installed - no issues

catsmanac commented 11 months ago

With the REST interface you use, are you only collecting the /ivp/meters/readings page or also the production.json page?

Steve2017 commented 11 months ago

I am only using the readings data.

I know there is a difference in the sense that the production.json page does offer some other calculations, like the seven_day_energy production and consumption, but HA does that through history anyway. The other couple of differences are dealt with through template sensors and two utility meters.

The readings page offers import and export data that avoids the Riemann Sum issue.

catsmanac commented 11 months ago

Thanks, the problem may lie in the production.json page then. I.e. if you were to use REST to access the production.json page at 23:00, maybe it would fail too.

I'm looking at adding it, but can't test it. So it may be a bumpy ride getting it added.

I've asked for an enhancement to add this to the new Enphase core integration that is coming, but that most probably will not make it into the first release they are planning for early September.

Steve2017 commented 11 months ago

if you were to use REST to access the production.json page at 23:00, maybe it would fail too.

That is a thought. The Envoy might have a restart programmed each day to clear out the cache and glitches, much like you might do with a PC.

Again, it wouldn't be an issue if not for the Riemann Sum problem.

catsmanac commented 11 months ago

@Steve2017, I assume the latest code changes did not resolve the issue by retrying the data collection?

Steve2017 commented 11 months ago

@catsmanac Yes - v0.0.13+12-beta-01 made a difference. The graph below shows before and after. The Envoy still drops out, but the Riemann Sum sensors came back as soon as the Envoy did.

(Graph: after update to v0.0.13+12-beta-01.)

The Utility Meter sensor "Solar Export Daily", which is a child of this sensor, also came back and reset at midnight without issue. (Graph: Utility Meter after update to v0.0.13+12-beta-01.)

Steve2017 commented 11 months ago

collecting the /ivp/meters/readings page or also the production.json page?

@catsmanac I've now duplicated all the original integration sensors with REST sensors to allow comparison. While there are some small differences in the data, that's probably down to my scan rates (integration = 60 seconds, REST = 15 seconds) and the difference between Utility Meter and Riemann Sum calculations.

The thing I did notice is the meters/readings page gives two native sensors not in the integration: (Lifetime) Solar Export Energy and (Lifetime) Grid Import Energy.

The integration requires those to be calculated from power sensors, using Riemann Sum, so my totals are only from when I started calculating them using HA about two years ago. The REST sensors appear to be from when the Envoy first started recording 6 or 7 years ago, well before I started using HA. (This is interesting because my Envoy was replaced a couple of years ago, so I'm assuming Enphase uploaded the old data to the new Envoy)

The difference is:

Where the integration natively provides Lifetime Energy Consumption data, the values are the same, because the REST calculations are based on the Envoy's lifetime energy data (Import + Production - Export).
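(A made-up worked example of that identity; the numbers are purely illustrative, not from my system.)

    # Consumption derived from the Envoy's lifetime counters:
    # Consumption = Import + Production - Export
    lifetime_import_kwh = 4200.0
    lifetime_production_kwh = 18500.0
    lifetime_export_kwh = 11300.0
    lifetime_consumption_kwh = (
        lifetime_import_kwh + lifetime_production_kwh - lifetime_export_kwh
    )  # 11400.0 kWh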

catsmanac commented 11 months ago

Yes - v0.0.13+12-beta-01 made a difference. The graph below shows before and after. The Envoy still drops out, but the Riemann Sum sensors came back as soon as the Envoy did.

So there is still a failure/unavailable at 23:00, with a gap in the source data? I had hoped that it would bring in data on the 2nd or 3rd try and not show a gap at all.

catsmanac commented 11 months ago

The thing I did notice is the meters/readings page gives two native sensors not in the integration: (Lifetime) Solar Export Energy and (Lifetime) Grid Import Energy.

@cddu33 is currently adding the meters/readings information

Steve2017 commented 11 months ago

currently adding the meters/readings information

Is the plan to use both together? If so, will the different response times be an issue?

And yes the Envoy still drops out. I don’t know if it is for a shorter period.

catsmanac commented 11 months ago

Is the plan to use both together? If so, will the different response times be an issue?

All Envoy pages get collected sequentially, and each page takes the time it needs, with the 30-second timeout. So this is just one more page collected.

catsmanac commented 11 months ago

And yes the Envoy still drops out. I don’t know if it is for a shorter period.

Ah, so the code change didn't do much then. It's probably your change to the Riemann logic that solved it? If so, I won't forward the code change to production.

cddu33 commented 11 months ago

Hi, me too at 23h. The device is not reachable at 11 p.m. every day at my house, for 5 minutes.