Timeout issue?
I get this quite often in the log. No debug message.
/Mike
Still having this in a17.
Every time or only sometimes? If it's rare, it might be alright. The binding should also run more stably now. Please provide some more info / logs if possible. Thank you.
A couple of times each hour.
Sometimes this error appears right before it:
```
2018-01-06 20:15:54.583 [ERROR] [n.internal.thread.SynoApiThreadEvent] - could not get event org.eclipse.smarthome.core.thing.internal.ThingImpl@664a3917: {}
java.lang.NullPointerException: null
	at org.openhab.binding.synologysurveillancestation.internal.webapi.SynoWebApiHandler.getEventResponse(SynoWebApiHandler.java:461) [258:org.openhab.binding.synologysurveillancestation:2.2.0.a17]
	at org.openhab.binding.synologysurveillancestation.internal.thread.SynoApiThreadEvent.refresh(SynoApiThreadEvent.java:51) [258:org.openhab.binding.synologysurveillancestation:2.2.0.a17]
	at org.openhab.binding.synologysurveillancestation.internal.thread.SynoApiThread$1.run(SynoApiThread.java:53) [258:org.openhab.binding.synologysurveillancestation:2.2.0.a17]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
	at java.lang.Thread.run(Thread.java:748) [?:?]
```
A couple of times per hour is not that bad if your refresh rate is every few seconds... Do you see any other negative effects of this error? My first goal is not to prevent each and every error, but to make the whole thing stable and crash-safe. Some fixes are already implemented in the coming version; some exceptions are simply there and will someday be replaced with a meaningful error message.
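To illustrate the direction (a sketch of mine, not the binding's actual code): the NullPointerException above could be turned into a meaningful message with a simple guard. All names below are made up.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: guard against a missing API response instead of
// letting a NullPointerException escape the refresh thread.
public class EventRefreshSketch {
    private static final Logger logger = LoggerFactory.getLogger(EventRefreshSketch.class);

    /** Stand-in for the real web API handler. */
    interface SynoApi {
        String getEventResponse(String cameraId);
    }

    static void refresh(SynoApi api, String cameraId) {
        String response = api.getEventResponse(cameraId);
        if (response == null) {
            // Meaningful error instead of a raw NullPointerException:
            logger.error("Thread Event: no response from DiskStation for camera {}", cameraId);
            return; // skip this refresh cycle; try again on the next one
        }
        // ... process the event response ...
    }
}
```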
Better logging and an even longer timeout are implemented in a20. Please check it and review / close this issue if your problem was solved, or at least became rare and non-crashing.
This is how it looks now; I will try a20.
```
2018-01-07 13:12:59.890 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:13:50.568 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:14:10.131 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:24:40.053 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:25:51.595 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:28:31.179 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:28:31.181 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:30:07.776 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:30:20.736 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:32:07.478 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:32:09.586 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:32:19.586 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:32:19.870 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:32:20.049 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:35:10.155 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:38:00.089 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:39:59.866 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:40:59.584 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:40:59.860 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
2018-01-07 13:41:01.955 [ERROR] [nternal.thread.SynoApiThreadSnapshot] - Snapshot: Handler gone offline
```
With a20:

```
2018-01-08 21:10:25.631 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:11:14.856 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:12:15.739 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:12:15.742 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:13:24.509 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:14:25.109 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:14:25.111 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:15:25.401 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-08 21:16:05.996 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
```
As promised, the error message is now meaningful :)
But frankly, I don't think I can do much more. My setup works even with a 300 ms timeout, and I wouldn't go above 4000 ms, to prevent openHAB from marking the thread dead (at about 5000 ms). The timeout can have several causes, or a combination of them.
Another thing I wonder about is your log timestamps. There are pairs of error messages only milliseconds apart:
```
2018-01-08 21:12:15.739 [ERROR]
2018-01-08 21:12:15.742 [ERROR]

2018-01-08 21:14:25.109 [ERROR]
2018-01-08 21:14:25.111 [ERROR]
```
The interval should not be that short. Could you please check in the Karaf console whether the binding is installed only once, and perhaps restart your openHAB?
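For reference (my suggestion, not from the thread): a duplicate installation would show up as two entries in the bundle list, so running `bundle:list | grep -i synology` in the openHAB Karaf console should return exactly one line for this binding.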
After #21 has been resolved, could you please check on this issue again?
Still coming, but not so often:

```
2018-01-15 15:04:19.029 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-15 15:06:19.194 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
```
/Mike
More errors. Could you add the camera ID to the error message?

```
2018-01-15 20:11:13.438 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-15 20:13:13.452 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-15 20:18:53.193 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-15 20:19:13.189 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-15 20:19:23.209 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
2018-01-15 20:25:53.191 [ERROR] [tation.internal.thread.SynoApiThread] - Thread Snapshot: Connection timeout (4000 ms)
```
I've added the camera's DeviceId to the log output. It also seems I can increase the connection timeout further without openHAB interrupting (tested with the thread put to sleep for over 10 seconds). Both changes will be included in the next release. If you wish, I can compile a working snapshot for you.
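For illustration (not the binding's actual code), here is a minimal sketch of how a per-request timeout can be set with Jetty's HttpClient, which openHAB bindings commonly use; the URL and class name are placeholders:

```java
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.start();
        try {
            // Per-request timeout: send() fails with a TimeoutException once
            // the limit is exceeded, instead of blocking the thread forever.
            ContentResponse response = client
                    .newRequest("http://diskstation:5000/webapi/entry.cgi") // placeholder URL
                    .timeout(10, TimeUnit.SECONDS)
                    .send();
            System.out.println("HTTP " + response.getStatus());
        } finally {
            client.stop();
        }
    }
}
```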
Still, such long timeouts should have a cause. Could it be a performance issue on your DiskStation, or on the device openHAB is running on? Have you checked their load? Did you perhaps run a heavy-load operation at that time?
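As a quick check (again my suggestion, not from the thread): running `uptime` or `top` over SSH on both the DiskStation and the openHAB host around the time of the errors would show whether CPU load or I/O wait spikes coincide with the timeouts.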
Workload is OK on the DiskStation; I have no issues with the Surveillance Station GUI. The Raspberry Pi and the DiskStation are on the same gigabit switch.
This should be fixed in the meantime. Please reopen if the issue persists.
https://community.openhab.org/t/binding-request-synology-surveillance-station/8200/60