Closed gilbrit closed 6 years ago
Just to add to this: while ffmpeg is running I get a lot of errors in the logs like this:
`12:39:47 ERROR (MainThread) [homeassistant.core] Timer got out of sync. Resetting`
The CPU is not maxed out on the NAS, so I think this is not CPU-bound but network-bound, possibly interfering with the main event loop in some way?
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates.
Please make sure to update to the latest Home Assistant version and check if that solves the issue. Let us know if that works for you by adding a comment :+1:
This issue will be auto-closed because there hasn't been any activity for a few months. Feel free to open a new one if you still experience this problem 👍
Make sure you are running the latest version of Home Assistant before reporting an issue.
You should only file an issue if you found a bug. Feature and enhancement requests should go in the Feature Requests section of our community forum:
Home Assistant release (`hass --version`): 0.63.0
Python release (`python3 --version`): Python 3.6.4
Component/platform: Canary
Description of problem: Firstly, the new features of the canary component are just what I was looking for, as I have two Canary cams in my home. However...
My broadband is not incredibly fast (8MB), and I run hass on a Synology NAS with a Celeron CPU. As a result it takes 10-15 seconds for ffmpeg to grab an image from the live stream. This causes load problems, because viewing my hass homepage runs multiple concurrent ffmpeg processes which:
a) time out
b) seem to overlap, i.e. new ones are fired up before the old ones have timed out
The net result is that lots of ffmpeg processes accumulate, the whole system bogs down, and I start to get errors in the logs about the timer being out of sync. Then, generally, hass crashes :)
Expected:
It would be good to be able to configure the timeout. It would also be ideal to ensure new snapshots aren't taken while the previous ones are still loading.
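The requested behavior could be sketched roughly as follows. This is a minimal illustration, not Home Assistant's actual camera code: `SnapshotGuard` and `async_get_image` are hypothetical names, and the idea is simply to combine a configurable timeout with a lock so that a new ffmpeg grab is never started while the previous one is still running; callers that arrive mid-grab get the last cached frame instead.

```python
import asyncio

class SnapshotGuard:
    """Hypothetical sketch: serialize snapshot grabs so concurrent page
    loads never spawn overlapping ffmpeg processes."""

    def __init__(self, timeout: float = 15.0):
        self._lock = asyncio.Lock()
        self._timeout = timeout      # configurable, per the feature request
        self._last_image = None      # cached frame from the previous grab

    async def async_get_image(self, grab_coro_factory):
        # If a grab is already in flight, return the cached frame
        # instead of starting another ffmpeg process.
        if self._lock.locked():
            return self._last_image
        async with self._lock:
            try:
                self._last_image = await asyncio.wait_for(
                    grab_coro_factory(), timeout=self._timeout
                )
            except asyncio.TimeoutError:
                # Keep the stale frame rather than let processes pile up.
                pass
            return self._last_image
```

On slow links this trades freshness for stability: a page refresh during a long grab shows the previous frame, but only one ffmpeg process ever runs at a time.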
Problem-relevant `configuration.yaml` entries and steps to reproduce:
Traceback (if applicable):
Additional info:
FYI, I am using the Docker version on a Synology 918+ NAS.