HASwitchPlate / openHASP-custom-component

Home Assistant custom component for openHASP
https://www.openhasp.com
MIT License

Enhance CC `push_image` service with an optional argument (`group_topic`) #84

Open htvekov opened 2 years ago

htvekov commented 2 years ago

openHASP 0.6.3 - HA supervised core-2021.10.5

Configuration


- id: hasp_sonos_image
  alias: hasp_sonos_image
  # restart mode used to minimize plate mqtt queue issues if skipping rapidly through a playlist
  mode: restart
  trigger:
    - platform: state
      entity_id: media_player.kokken
      attribute: entity_picture
  # condition added as media player picture entity sometimes changes to None briefly in between image changes. 
  condition:
    - condition: template
      value_template: "{{ state_attr('media_player.kokken', 'entity_picture') != None }}"
  action:
  - delay: '00:00:01' # Adding this delay has almost removed all plate crashes
  - service: openhasp.push_image
    data:
      obj: p2b40
      image: >-
        http://192.xxx.xxx.xxx:8123{{state_attr('media_player.kokken', 'entity_picture')}}
      width: 250
      height: 250
    target:
      entity_id:
        - openhasp.wt32_01
    # - openhasp.wt32_02 #Removed again. Pushing to two plates resulted in numerous crashes.

Describe the bug

EDIT: 191221 Added additional combined HA CC debug log / plate serial log entry.


I'm experiencing issues with random plate crashes when pushing media player images to the plate(s). The crashes seem to be completely random and unrelated to any other MQTT messages processed by the plate at the same time.

My test scenario is a repeated, non-stop playlist, which pushes new images at roughly 3-5 minute intervals, 24/7. Prior to the image push, two separate MQTT messages with artist/title are sent to the plate. The last message is typically sent some 100-300 ms before the image is pushed. To ensure that the plate's MQTT queue doesn't choke on messages to be processed prior to the image, I'll keep the automation delay at a minimum of 1 second, as I have now.

The automation has been refined along the way during my testing.

This test scenario has now been running for some sixteen hours with only one crash/reboot, some five hours ago. So almost stable, but not completely stable yet.

It seems like the issue lies in file access/availability on the HA side, though? Check my first combined log entry (HA CC debug log and plate serial log). Here I tried to force a crash by rapidly (at 2-3 second intervals) flicking back and forth in a playlist, hardly giving HA's entities/file writes any time to 'settle'. On two occasions the file apparently can't be opened by the CC and, according to the log, no conversion is done. But the CC still pushes an image to the plate?? The second time that happens, the plate crashes and reboots.
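
Just to illustrate what I would expect instead - a minimal sketch, not the actual CC code (the two callables are placeholders for whatever the CC does internally around image_to_rgb565, which is the name seen in the debug log below): the .src command should only be published after a successful fetch/conversion.

from typing import Callable, Optional

def push_image_guarded(
    fetch_and_convert: Callable[[str], Optional[str]],    # placeholder: returns a serve URL, or None if the source image could not be opened
    publish: Callable[[str, str], None],                   # placeholder: publish(topic, payload) over MQTT
    image_url: str,
    command_topic: str = "hasp/plates/command/p2b40.src",  # topic copied from the log, purely for illustration
) -> None:
    serve_url = fetch_and_convert(image_url)
    if serve_url is None:
        # This is the "Failed to open ..." case in the log: skip the publish instead of
        # sending the plate a serve URL that answers with a 404 (or crashes the plate).
        return
    publish(command_topic, serve_url)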

Debug log

First log entry is a combined HA CC debug log / plate serial log

Two extra 'plate only' serial logs at the end. They seem to be completely identical.


Combined HA CC debug log with plate serial log

2021-12-19 00:24:05 ERROR (SyncWorker_11) [custom_components.openhasp.image] Failed to open http://192.XXX.XXX.XXX:8123/api/media_player_proxy/media_player.kokken?token=2829b0fd83402e23858cf296bbc59aee7782786e30669e87cdfb6d063b82fd6f&cache=b0fdf6b115485cf0
2021-12-19 00:24:05 DEBUG (MainThread) [custom_components.openhasp] Push hasp/plates/command/p2b40.src with http://192.XXX.XXX.XXX:8123/api/openhasp/serve/d3787797bbb2123c253fd80da8de726c
2021-12-19 00:24:05 ERROR (MainThread) [custom_components.openhasp.image] Unknown image_id d3787797bbb2123c253fd80da8de726c
2021-12-19 00:24:05 ERROR (MainThread) [custom_components.openhasp.image] Unknown image_id d3787797bbb2123c253fd80da8de726c
2021-12-19 00:24:06 DEBUG (MainThread) [custom_components.openhasp] p2b41.max - Template("{{ state_attr('media_player.kokken','media_duration') | int }}") changed, updating with: 192
2021-12-19 00:24:06 DEBUG (MainThread) [custom_components.openhasp] p2b41.max - Template("{{ state_attr('media_player.kokken','media_duration') | int }}") changed, updating with: 192
2021-12-19 00:24:06 DEBUG (MainThread) [custom_components.openhasp] p2b41.val - Template("{{ state_attr('media_player.kokken','media_position') | int }}") changed, updating with: 13
2021-12-19 00:24:06 DEBUG (MainThread) [custom_components.openhasp] p2b41.val - Template("{{ state_attr('media_player.kokken','media_position') | int }}") changed, updating with: 13
2021-12-19 00:24:07 ERROR (SyncWorker_13) [custom_components.openhasp.image] Failed to open http://192.XXX.XXX.XXX:8123/api/media_player_proxy/media_player.kokken?token=2829b0fd83402e23858cf296bbc59aee7782786e30669e87cdfb6d063b82fd6f&cache=a6afe20ac4bc6ae8
2021-12-19 00:24:07 DEBUG (MainThread) [custom_components.openhasp] Push hasp/plates/command/p2b40.src with http://192.XXX.XXX.XXX:8123/api/openhasp/serve/99a25d250cafb7590ce8e58350b0766a

*** PLATE CRASHES HERE ***

2021-12-19 00:24:07 DEBUG (MainThread) [custom_components.openhasp] p2b41.max - Template("{{ state_attr('media_player.kokken','media_duration') | int }}") changed, updating with: 157
2021-12-19 00:24:07 DEBUG (MainThread) [custom_components.openhasp] p2b41.max - Template("{{ state_attr('media_player.kokken','media_duration') | int }}") changed, updating with: 157
2021-12-19 00:24:09 DEBUG (MainThread) [custom_components.openhasp] p2b41.max - Template("{{ state_attr('media_player.kokken','media_duration') | int }}") changed, updating with: 164
2021-12-19 00:24:09 DEBUG (MainThread) [custom_components.openhasp] p2b41.val - Template("{{ state_attr('media_player.kokken','media_position') | int }}") changed, updating with: 16
2021-12-19 00:24:09 DEBUG (MainThread) [custom_components.openhasp] p2b41.max - Template("{{ state_attr('media_player.kokken','media_duration') | int }}") changed, updating with: 164
2021-12-19 00:24:09 DEBUG (MainThread) [custom_components.openhasp] p2b41.val - Template("{{ state_attr('media_player.kokken','media_position') | int }}") changed, updating with: 16
2021-12-19 00:24:09 DEBUG (SyncWorker_11) [custom_components.openhasp.image] image_to_rgb565 out_image: /tmp/tmpkgfvllw9
2021-12-19 00:24:09 DEBUG (MainThread) [custom_components.openhasp] Push hasp/plates/command/p2b40.src with http://192.XXX.XXX.XXX:8123/api/openhasp/serve/efd946f49b70fe1d327f0e7955575988
2021-12-19 00:24:11 DEBUG (SyncWorker_9) [custom_components.openhasp.image] image_to_rgb565 out_image: /tmp/tmpc1updf6q
2021-12-19 00:24:11 DEBUG (MainThread) [custom_components.openhasp] Push hasp/plates/command/p2b40.src with http://192.XXX.XXX.XXX:8123/api/openhasp/serve/5b143b2263b321b58a2b6c944f282a3e
2021-12-19 00:24:16 DEBUG (MainThread) [custom_components.openhasp] Received LWT = online
2021-12-19 00:24:16 WARNING (MainThread) [custom_components.openhasp] Refreshing wt32_02

00:24:04.938 -> 0KPrompt > [18:24:05.382] [103664/105872  2] [27564/28256  3] MQTT RCV: hasp/plates/command/p2b40.src = http://192.XXX.XXX.XXX:8123/api/openhasp/serve/d3787797bbb2123c253fd80da8de726c
00:24:05.125 -> 0KPrompt > [18:24:05.431] [100500/101692  1] [27564/28272  3] ATTR: HTTP result 404
00:24:05.171 -> 0KPrompt > [18:24:06.282] [103664/105664  1] [27672/28272  3] MQTT RCV: hasp/wt32_01/command/p2b41.max = 192
00:24:06.015 -> 0KPrompt > [18:24:06.466] [103664/105664  1] [27608/28272  3] MQTT RCV: hasp/plates/command/p2b14.txt = White Christmas - Spotify Singles - Holiday, Recorded at Air St
00:24:06.250 -> 0KPrompt > [18:24:06.648] [103664/105664  1] [28004/28248  1] MQTT RCV: hasp/wt32_01/command/p2b41.val = 13
00:24:06.390 -> 0KPrompt > [18:24:06.708] [103664/105664  1] [27428/28156  3] MQTT RCV: hasp/plates/command/p2b15.txt = George Ezra
00:24:06.485 -> 0KPrompt > [18:24:07.688] [103664/105664  1] [27508/28244  3] MQTT RCV: hasp/plates/command/p2b40.src = http://192.XXX.XXX.XXX:8123/api/openhasp/serve/99a25d250cafb7590ce8e58350b0766a
00:24:07.423 -> 0KPrompt > CORRUPT HEAP: Bad head at 0x3f80701c. Expected 0xabba1234 got 0x3fbffff4
00:24:07.470 -> abort() was called at PC 0x4008e221 on core 1
00:24:07.470 -> 
00:24:07.470 -> ELF file SHA256: 0000000000000000
00:24:07.470 -> 
00:24:07.470 -> Backtrace: 0x40093cd8:0x3ffb1760 0x40093f51:0x3ffb1780 0x4008e221:0x3ffb17a0 0x4008e34d:0x3ffb17d0 0x4015715f:0x3ffb17f0 0x40152a45:0x3ffb1ab0 0x401529cd:0x3ffb1b00 0x40098751:0x3ffb1b30 0x40086c52:0x3ffb1b50 0x4008e119:0x3ffb1b70 0x4000bec7:0x3ffb1b90 0x400d62d2:0x3ffb1bb0 0x400d9a59:0x3ffb1bd0 0x400de7a9:0x3ffb1e00 0x400db20d:0x3ffb1e20 0x400dc2d1:0x3ffb1ef0 0x400e6a83:0x3ffb1f10 0x401cbe9b:0x3ffb1f30 0x400f3da5:0x3ffb1f50 0x400817e0:0x3ffb1f90 0x401373e3:0x3ffb1fb0 0x40094f56:0x3ffb1fd0
00:24:07.517 -> 
00:24:07.517 -> Rebooting...

Two extra 'plate only' serial logs

10:13:52.779 -> 0KPrompt > [04:13:53.798] [104708/106308  1] [27780/28324  2] MQTT RCV: hasp/plates/command/p2b14.txt = Feliz Navidad
10:13:53.248 -> 0KPrompt > [04:13:53.813] [104708/106308  1] [27780/28324  2] HASP: txt is obsolete, use text instead
10:13:53.248 -> 0KPrompt > [04:13:53.867] [104708/106516  1] [27688/28328  3] MQTT RCV: hasp/plates/command/p2b15.txt = José Feliciano
10:13:53.294 -> 0KPrompt > [04:13:53.882] [104708/106516  1] [27688/28328  3] HASP: txt is obsolete, use text instead
10:13:53.340 -> 0KPrompt > [04:13:57.333] [104708/106516  1] [28072/28332  1] MQTT RCV: hasp/wt32_01/command/p2b40.src = http://192.xxx.xxx.xxx:8123/api/openhasp/serve/9a32464e4592396a48b1076f6e139ed7
10:13:56.806 -> 0KPrompt > 9CCORRUPT HEAP: Bad head at 0x3f80001c. Expected 0xabba1234 got 0x3fbffff4
10:13:56.806 -> abort() was called at PC 0x4008e221 on core 1
10:13:56.806 -> 
10:13:56.806 -> ELF file SHA256: 0000000000000000
10:13:56.806 -> 
10:13:56.806 -> Backtrace: 0x40093cd8:0x3ffb1760 0x40093f51:0x3ffb1780 0x4008e221:0x3ffb17a0 0x4008e34d:0x3ffb17d0 0x4015715f:0x3ffb17f0 0x40152a45:0x3ffb1ab0 0x401529cd:0x3ffb1b00 0x40098751:0x3ffb1b30 0x40086c52:0x3ffb1b50 0x4008e119:0x3ffb1b70 0x4000bec7:0x3ffb1b90 0x400d62d2:0x3ffb1bb0 0x400d9a59:0x3ffb1bd0 0x400de7a9:0x3ffb1e00 0x400db20d:0x3ffb1e20 0x400dc2d1:0x3ffb1ef0 0x400e6a83:0x3ffb1f10 0x401cbe9b:0x3ffb1f30 0x400f3da5:0x3ffb1f50 0x400817e0:0x3ffb1f90 0x401373e3:0x3ffb1fb0 0x40094f56:0x3ffb1fd0
10:13:56.852 -> 
10:13:56.852 -> Rebooting...

07:53:56.593 -> 0KPrompt > [01:53:58.116] [103476/107708  3] [27712/28292  3] MQTT RCV: hasp/plates/command/p2b15.txt = Meghan Trainor
07:53:56.825 -> 0KPrompt > [01:53:58.131] [103476/107708  3] [27712/28292  3] HASP: txt is obsolete, use text instead
07:53:56.871 -> 0KPrompt > [01:53:59.386] [103476/107708  3] [27712/28292  3] MQTT RCV: hasp/plates/command/p2b40.src = http://192.XXX.XXX.XXX:8123/api/openhasp/serve/d396ad860387af4d1fa954135680a1ad
07:53:58.096 -> 0KPrompt > CCORRUPT HEAP: Bad head at 0x3f80001c. Expected 0xabba1234 got 0x3fbffff4
07:53:58.143 -> abort() was called at PC 0x4008e221 on core 1
07:53:58.143 -> 
07:53:58.143 -> ELF file SHA256: 0000000000000000
07:53:58.143 -> 
07:53:58.143 -> Backtrace: 0x40093cd8:0x3ffb1760 0x40093f51:0x3ffb1780 0x4008e221:0x3ffb17a0 0x4008e34d:0x3ffb17d0 0x4015715f:0x3ffb17f0 0x40152a45:0x3ffb1ab0 0x401529cd:0x3ffb1b00 0x40098751:0x3ffb1b30 0x40086c52:0x3ffb1b50 0x4008e119:0x3ffb1b70 0x4000bec7:0x3ffb1b90 0x400d62d2:0x3ffb1bb0 0x400d9a59:0x3ffb1bd0 0x400de7a9:0x3ffb1e00 0x400db20d:0x3ffb1e20 0x400dc2d1:0x3ffb1ef0 0x400e6a83:0x3ffb1f10 0x401cbe9b:0x3ffb1f30 0x400f3da5:0x3ffb1f50 0x400817e0:0x3ffb1f90 0x401373e3:0x3ffb1fb0 0x40094f56:0x3ffb1fd0
07:53:58.189 -> 
07:53:58.189 -> Rebooting...
htvekov commented 2 years ago

Since updating to v0.7.0 a few days ago, I haven't experienced a single plate crash. https://github.com/HASwitchPlate/openHASP/commit/ffaaafbfa36490b31a087f148d20e29f3fc05b74

I still have my v0.6.3.0 plate running in parallel next to the 0.7.0 one. That plate still experienced the occasional crash now and then when images were pushed. So I guess this issue should be closed, as it seems to be solved with this commit.

htvekov commented 2 years ago

I've been on the latest 0.6.3-dev version on both my test plates for quite some time now. No plate crashes since the latest code change in the dev version. But now and then the HA side really can't keep up with multiple images (in this case only two) being requested. The second plate (wt32_02) misses its image from time to time as HA can't serve it.

My current service call in HA:

- service: openhasp.push_image
  data:
    obj: p2b40
    image: >-
      http://xxx.xxx.x.xx:8123{{state_attr(state_attr('group.sonos_all', 'entity_id')[0], 'entity_picture')}}
    width: 200
    height: 200
  target:
    entity_id:
      - openhasp.wt32_01
      - openhasp.wt32_02

This gives the following error in the HA log at random intervals (4 times in the logs since midnight, i.e. over 16 hours):

2022-02-13 15:32:44 ERROR (MainThread) [aiohttp.server] Unhandled exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 514, in start
    resp, reset = await task
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 460, in _handle_request
    reset = await self.finish_response(request, resp, start_time)
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 613, in finish_response
    await prepare_meth(request)
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_fileresponse.py", line 279, in prepare
    fobj = await loop.run_in_executor(None, filepath.open, "rb")
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/pathlib.py", line 1252, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/usr/local/lib/python3.9/pathlib.py", line 1120, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpsiyegxci'

For now, I'll revert back to patching the CC so I only issue one MQTT group topic message and only state one target entity_id.

htvekov commented 1 year ago

Update on this topic: plate crashes no longer happen with the latest stable openHASP dev releases (introducing semaphores plus some tweaks fixed this issue). But if I push media player images to three entity_ids instead of one, the images appear several seconds later than when using just one entity_id.

Would it be possible to enhance the CC with an optional argument (group_topic) in the data field, in order to specify either the group topic path or the specified entity's topic path? I'm not sure whether the CC/HA will actually allow a syntax where entity_id is completely omitted.

I was thinking something like this:

Config #1: if entity_id is mandatory as target in the CC -> use the MQTT group name from the stated entity_id

service: openhasp.push_image
data:
  image: http://192.xxx.x.xx:8123/local/Berit.png
  obj: p2b40
  fitscreen: 1
  group_topic: 1
  width: 320
  height: 320
target:
  entity_id: openhasp.wt32_01_plus

Or config #2: if entity_id is not mandatory as target in the CC -> use the stated MQTT group_topic path

service: openhasp.push_image
data:
  image: http://192.xxx.x.xx:8123/local/Berit.png
  obj: p2b40
  fitscreen: 1
  group_topic: hasp/plates
  width: 320
  height: 320
target:
  entity_id:
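
Just to make the idea concrete, a rough sketch of how the topic selection could work on the CC side (not the actual component code - the helper name is made up, and the topic strings simply mirror the ones visible in the logs above):

def src_command_topic(obj: str, plate_base: str, group_base: str, use_group_topic: bool) -> str:
    # Pick the MQTT command topic for the <obj>.src message: either the plate's own
    # node topic or the shared group topic, depending on the optional argument.
    base = group_base if use_group_topic else plate_base
    return f"{base}/command/{obj}.src"

print(src_command_topic("p2b40", "hasp/wt32_01", "hasp/plates", use_group_topic=False))
# -> hasp/wt32_01/command/p2b40.src
print(src_command_topic("p2b40", "hasp/wt32_01", "hasp/plates", use_group_topic=True))
# -> hasp/plates/command/p2b40.src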
htvekov commented 1 year ago

Hi @dgomes

Any chance of getting some kind of optional group_topic implemented in the data section? I'm currently pushing album art images simultaneously to six different openHASP plates, with three different resolutions.

I had to step back from my previous config, where I iterated through all the connected openHASP devices and called push_image for every single device with varying image sizes. That method is not viable, as the last device gets the album art update some 10-15 seconds after the first one. It's also somewhat of a waste of bandwidth to issue the same image six times in sequence.

So now I'm back to sending only one image (the highest resolution needed), and the lower resolution devices use the zoom argument to shrink and fit the image. This works perfectly, and all devices get the album art update simultaneously.
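
As an illustration of the zoom arithmetic (assuming openHASP img objects use LVGL-style zoom, where 256 corresponds to 100% - please check against the docs): a device whose album art object is 200x200 would display a single 320x320 image pushed to the group with roughly this zoom value:

def lvgl_zoom(pushed_px: int, displayed_px: int, full_scale: int = 256) -> int:
    # 256 = 100% in LVGL; scale down proportionally to fit the smaller object
    return round(full_scale * displayed_px / pushed_px)

print(lvgl_zoom(320, 200))  # -> 160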

But in order to do this, I had to hack the CC's __init__.py and push all images to the MQTT group topic (hardcoded). A more configurable solution would be really nice, and it would also give me the possibility to actually push images to single devices only :-)

dgomes commented 1 year ago

Will need to investigate HA groups... unfortunately the options proposed (although they work) are not HA best practices.