Open htvekov opened 2 years ago
Since updating to v0.7.0 a few days ago, I haven't experienced a single plate crash. https://github.com/HASwitchPlate/openHASP/commit/ffaaafbfa36490b31a087f148d20e29f3fc05b74
I still have my v0.6.3.0 plate running in parallel next to the 0.7.0 one, and it still experiences the occasional crash now and then when images are pushed. So I guess this issue should be closed, as it seems to be solved with this commit.
I have been on the latest 0.6.3-dev version on both my test plates for quite some time now. No plate crashes since the latest code change in the dev version. But now and then, the HA side really can't keep up with multiple images (in this case only two) being requested. The second plate (wt_02) misses its image from time to time because HA can't serve it.
My current service call in HA:
- service: openhasp.push_image
  data:
    obj: p2b40
    image: >-
      http://xxx.xxx.x.xx:8123{{state_attr(state_attr('group.sonos_all', 'entity_id')[0], 'entity_picture')}}
    width: 200
    height: 200
  target:
    entity_id:
      - openhasp.wt32_01
      - openhasp.wt32_02
This gives the following error in the HA log at random intervals (four times in the logs since midnight, i.e. over 16 hours):
2022-02-13 15:32:44 ERROR (MainThread) [aiohttp.server] Unhandled exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 514, in start
    resp, reset = await task
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 460, in _handle_request
    reset = await self.finish_response(request, resp, start_time)
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 613, in finish_response
    await prepare_meth(request)
  File "/usr/local/lib/python3.9/site-packages/aiohttp/web_fileresponse.py", line 279, in prepare
    fobj = await loop.run_in_executor(None, filepath.open, "rb")
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/pathlib.py", line 1252, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/usr/local/lib/python3.9/pathlib.py", line 1120, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpsiyegxci'
For now I'll revert to patching CC so that I only issue one mqtt group topic message and state only one target entity_id.
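For reference, a minimal sketch of that reverted call (same object and image template as above, just a single target; the group topic routing itself happens inside my patched CC, not in this call):

- service: openhasp.push_image
  data:
    obj: p2b40
    image: >-
      http://xxx.xxx.x.xx:8123{{state_attr(state_attr('group.sonos_all', 'entity_id')[0], 'entity_picture')}}
    width: 200
    height: 200
  target:
    entity_id: openhasp.wt32_01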
Update on this topic: plate crashes no longer happen with the latest stable openHASP dev releases (introducing semaphores and some tweaks fixed this issue). But if I push media player images to three entity_ids instead of one, the images appear several seconds later than when using just one entity_id.
Would it be possible to enhance CC with an optional group_topic argument in the data field, in order to be able to specify either the group topic path or the specified entity's topic path?
I'm not sure whether CC/HA will actually allow a syntax where entity_id is completely omitted?
I was thinking something like this:
Config #1: if entity_id is mandatory as target in CC -> use the mqtt Group Name from the stated entity_id

service: openhasp.push_image
data:
  image: http://192.xxx.x.xx:8123/local/Berit.png
  obj: p2b40
  fitscreen: 1
  group_topic: 1
  width: 320
  height: 320
target:
  entity_id: openhasp.wt32_01_plus
Or config #2: if entity_id is not mandatory as target in CC -> use the stated mqtt group_topic path

service: openhasp.push_image
data:
  image: http://192.xxx.x.xx:8123/local/Berit.png
  obj: p2b40
  fitscreen: 1
  group_topic: hasp/plates
  width: 320
  height: 320
target:
  entity_id:
Hi @dgomes
Any chance of getting some kind of optional group_topic argument implemented in the data section?
I'm currently pushing album art images simultaneously to six different openHASP plates, with three different resolutions.
I had to step back from my previous config, where I iterated through all the connected openHASP devices and called push_image for every single device with varying image sizes. That method is not viable, as the last device gets the album art image update some 10-15 seconds after the first device. It is also somewhat a waste of bandwidth to issue the same image six times in sequence.
So now I'm back to sending only one image (the highest resolution needed), and the lower resolution devices use the zoom argument to shrink and fit the image. This works perfectly and all devices get the album art image update simultaneously.
But in order to do this, I had to hack the CC __init__.py and push all images to the mqtt group topic (hardcoded). A more configurable solution would be really nice and would also give me the possibility to push images to single devices only :-)
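As a rough illustration of this setup (object id, node names and the zoom value are placeholders; I'm assuming the default hasp/<node>/command topic layout and that the img object accepts LVGL's zoom property, where 256 = 100%):

# One high resolution image pushed via a single service call; with my patched CC
# this ends up on the mqtt group topic, so all plates receive the same image.
- service: openhasp.push_image
  data:
    obj: p2b40
    image: http://192.xxx.x.xx:8123/local/Berit.png
    width: 320
    height: 320
  target:
    entity_id: openhasp.wt32_01_plus

# Smaller plates shrink that image locally via the img object's zoom property
# (one-time setup; 128 would be roughly 50%).
- service: mqtt.publish
  data:
    topic: hasp/wt32_02/command/p2b40.zoom
    payload: "128"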
Will need to investigate HA groups... unfortunately the options proposed (although they work) are not HA best practice
openHASP 0.6.3 - HA supervised core-2021.10.5
Configuration
Describe the bug
EDIT: 191221 Added additional combined HA CC debug log / plate serial log entry.
HW:
I'm experiencing random plate crashes when pushing media player images to one or more plates. The crashes seem to be completely random and unrelated to any other mqtt messages processed by the plate at the same time.
My test scenario is a repeated, non-stop playlist, which pushes new images at roughly 3-5 minute intervals 24/7. Prior to the image push, two separate mqtt messages are sent to the plate with artist/title. The last message is typically sent some 100-300 ms before the image is pushed. To ensure that the plate's mqtt queue doesn't choke on messages to be processed prior to the image, I keep the automation delay at a minimum of 1 second, as I have now.
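Roughly, the automation sequence looks like the sketch below. Object ids, topics and the media player entity are illustrative placeholders rather than my exact config; it just shows the two text messages, the 1 second delay and the image push:

- alias: Push track info and album art to plate
  trigger:
    - platform: state
      entity_id: media_player.sonos_living   # hypothetical media player
      attribute: media_title
  action:
    - service: mqtt.publish
      data:
        topic: hasp/wt32_01/command/p2b41.text   # artist label (illustrative object id)
        payload: "{{ state_attr('media_player.sonos_living', 'media_artist') }}"
    - service: mqtt.publish
      data:
        topic: hasp/wt32_01/command/p2b42.text   # title label (illustrative object id)
        payload: "{{ state_attr('media_player.sonos_living', 'media_title') }}"
    - delay: "00:00:01"   # let the plate's mqtt queue settle before the image arrives
    - service: openhasp.push_image
      data:
        obj: p2b40
        image: >-
          http://xxx.xxx.x.xx:8123{{ state_attr('media_player.sonos_living', 'entity_picture') }}
        width: 200
        height: 200
      target:
        entity_id: openhasp.wt32_01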
The automation has been refined along the way during my test process.
Pushing images to two plates (as two entities in the automation) without my added automation delay gave numerous random plate crashes. Without the added delay, and with two plates in the automation, skipping rapidly through the playlist will force a plate crash very quickly (often within 10-30 seconds).
With only one plate in the automation and no delay, the plate will still crash, but at longer intervals.
With one plate in the automation and the 1 second delay, the plate will crash at long intervals (typically many hours apart).
The latest, currently running test version uses the automation 'as is' in this issue report, just with the twist that I patched the CC __init__.py with a fixed path (hasp/plates/command/) to update both plates at once with mqtt group commands. This test scenario has now been running for some sixteen hours with only one crash/reboot, some five hours ago. So almost stable, but not completely stable yet.
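In practice that just means a single message published to the group command topic is picked up by both plates at once, along these lines (object id and payload are illustrative only):

# One publish to the hardcoded group topic reaches both wt32_01 and wt32_02
- service: mqtt.publish
  data:
    topic: hasp/plates/command/p2b41.text
    payload: "{{ state_attr(state_attr('group.sonos_all', 'entity_id')[0], 'media_artist') }}"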
It seems like the issue lies in file access/availability on the HA side, though? Check my first combined log entry (HA CC debug log and plate serial log). Here I tried to force a crash by rapidly (2-3 second intervals) flicking back and forth in a playlist, hardly giving HA's entities/file writes any time to 'settle'. On two occasions the file apparently can't be opened by CC and no conversion is done according to the log, but CC still pushes an image to the plate?? The second time that happens, the plate crashes and reboots.
Debug log
The first log entry is a combined HA CC debug log / plate serial log.
Two extra 'plate only' serial logs are at the end. They seem to be completely identical.