Closed. jwater7 closed this 5 years ago.
@pavoni @toggledbits - Let me know what you think of this implementation. It uses a threading.Timer as a fallback, but it corrects the issue of the VeraLock sending subscribers that it is unlocked right after it has just been commanded to lock.
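For anyone following along, the `threading.Timer` fallback mentioned above can be sketched roughly like this. This is a minimal illustration of the idea (holding the commanded state for a grace period so subscribers don't see the stale "unlocked" value), not the actual pyvera implementation; the class, method names, and hold window are all illustrative.

```python
import threading

class LockStateGuard:
    """Sketch: hold a commanded lock state for a grace period."""

    def __init__(self, hold_seconds=10):
        self._hold_seconds = hold_seconds
        self._pending_state = None
        self._timer = None
        self._lock = threading.Lock()

    def command_sent(self, target_state):
        """Record the commanded state and (re)start the fallback timer."""
        with self._lock:
            self._pending_state = target_state
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._hold_seconds, self._expire)
            self._timer.start()

    def _expire(self):
        # Timer fired: stop overriding and trust the device again.
        with self._lock:
            self._pending_state = None

    def effective_state(self, reported_state):
        """Prefer the commanded state while the timer is still running."""
        with self._lock:
            if self._pending_state is not None:
                return self._pending_state
            return reported_state
```

The point of the timer is only to bound how long the override lasts if the device never confirms the new state.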
I use a VeraPlus for testing (and also have a VeraEdge) and updated the firmware on it only a few days ago. I totally agree that this is a workaround for a Vera bug, but it doesn't look like they have fixed it yet. Tracing, I get this:

    ... u'comment': u'family_door_deadbolt', u'state': u'1'
    u'comment': u'family_door_deadbolt: Please wait! Polling node', u'state': u'1'
    u'comment': u'family_door_deadbolt: SUCCESS! Successfully polled node', u'state': u'1'

I would expect "state" to change to something other than '1'.
I don't have a good way of testing this with HA yet, I use HA in a docker container on my system. Do you have any instructions how to do this without installing all sorts of stuff on your system?
I like the concept of integrating this into subscriptions too. Do you have some pseudocode that could explain what you're thinking?
Thanks for looking at this.
@jwater7 The key is to get the trace from here (which you can do either in hassio or in your test rig). Details of how to do this in HA are here: https://community.home-assistant.io/t/vera-lights-not-reporting-status-correctly/16342/21
This is the key trace:- https://github.com/pavoni/pyvera/blob/master/pyvera/subscribe.py#L71 And then track the state values matching https://github.com/pavoni/pyvera/blob/master/pyvera/subscribe.py#L13
For most devices I saw a selection of all kinds of states (waiting to start, requeuing...) but ending with a STATE_JOB_DONE. If you get that then you can ignore the various intermediate states and wait for the done. At the intermediate states the state of the device is still 'pre-action' - which is why I optimistically set the new state in the Vera cache, and then wait for the 'done' state to reset it.
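For reference, the job states being discussed can be sketched like this. The values follow Vera's standard job-status codes as used in pyvera/subscribe.py (double-check them against the linked file); `is_final` is a hypothetical helper for illustration, not part of the pyvera API.

```python
# Vera job-status codes (per Vera's standard job states; verify
# against pyvera/subscribe.py linked above).
STATE_NO_JOB = -1
STATE_JOB_WAITING_TO_START = 0
STATE_JOB_IN_PROGRESS = 1
STATE_JOB_ERROR = 2
STATE_JOB_ABORTED = 3
STATE_JOB_DONE = 4
STATE_JOB_WAITING_FOR_CALLBACK = 5
STATE_JOB_REQUEUE = 6
STATE_JOB_PENDING_DATA = 7

# States after which the device's reported value can be trusted.
FINAL_STATES = {STATE_NO_JOB, STATE_JOB_DONE, STATE_JOB_ERROR, STATE_JOB_ABORTED}

def is_final(state):
    """True once the job has finished, one way or another."""
    return state in FINAL_STATES
```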
@pavoni, thanks for those instructions. I was able to confirm that even the last message Vera returns in the sequence for the lock event still contains the "1" state, which maps to STATE_JOB_IN_PROGRESS. So I can confirm that this is still a bug in Vera. See below for an example of the last message in the sequence:
home-assistant-pyvera-test | 2018-05-25 16:58:28 DEBUG (Vera Poll Thread) [pyvera.subscribe] Event: family_door_deadbolt, state 1, <...snip...>, "locked": "1", "state": "1", "comment": "family_door_deadbolt: SUCCESS! Successfully polled node"}
So, until they fix this, unfortunately I think it will have to be patched in pyvera.
One approach might be to search for the `SUCCESS` string in the comment - and force a change of status if that's the case. I had to do something similar here:
https://github.com/pavoni/pyvera/blob/master/pyvera/subscribe.py#L70
https://github.com/pavoni/pyvera/blob/master/pyvera/subscribe.py#L75
In that case Vera was saying STATE_NO_JOB, which is normally a state you can trust as a final outcome - but there was a comment about sending.
If the `SUCCESS` comment is reliable you can adjust the `VeraLock` logic to something like:
    if device.__class__.__name__ == 'VeraLock':
        # VeraLocks don't complete - so force state
        success = comment.find('SUCCESS!') >= 0
        if success and state == STATE_JOB_IN_PROGRESS:
            state = STATE_JOB_DONE
If this works then we'd be able to trust the subscription state, remove the extra thread, and not have lock switches in HA bounce.
@pavoni - I really like that approach. I modified this branch with the proposed changes and removed the timer thread. I also added the start of some unit tests.
However, I'm still experiencing the lock switch widget in home assistant flipping back and then updating again after it locks. I'll try to get to the bottom of that, but if you have some good tips on where to look please let me know.
I also still need to figure out how to test a non-successful locking process and what happens if we don't get a "SUCCESS!" - do you have any ideas what might happen?
On the ‘bouncing’ switch, check you’re ‘optimistically’ setting the state in the local cache during the lock/unlock command. I think HA reads the state after the operation, so if you don’t set it HA will bounce until it gets the final state.
If that doesn’t work then we just need to do lots of tracing.
Not sure about a final failing command; there will be lots of retry commands from Vera (which you can see in the subscription debug). I would hope there will be a final idle / no job state from which we will pick up the final state. If you want to be very sure you could leave the HA poll refresh on - that will cause HA to poll the device regularly to get the current state. Not needed for a switch - but belt and braces might be best for a lock!
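The "optimistic" cache update described above can be sketched minimally like this. The class and method names are illustrative only, not the real pyvera `VeraLock`:

```python
class SketchVeraLock:
    """Sketch: set the expected state locally when the command is sent,
    so a read-after-write sees the target state instead of a stale one."""

    def __init__(self):
        self._cached_locked = "0"

    def _send_command(self, action):
        pass  # placeholder for the HTTP call to the Vera controller

    def lock(self):
        self._send_command("SetTarget:1")
        # Optimistically update the cache; the subscription's final
        # 'done' event will confirm or correct it later.
        self._cached_locked = "1"

    def is_locked(self):
        return self._cached_locked == "1"
```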
I think that might actually be what's causing the problem... HA is passing True for the refresh option of is_locked(refresh):
    def update(self):
        """Update state by the Vera device callback."""
        self._state = (STATE_LOCKED if self.vera_device.is_locked(True)
                       else STATE_UNLOCKED)
If HA does an update (not sure how often or when this happens), then it will do an sdata request, get the old state instead of the "optimistic" state, and overwrite the cache anyway even after we set it optimistically.
Does that sound like what could be happening?
If that's the case, I'm not sure how to fix this without implementing some kind of timeout and transitioning-to state.
I added some logging to confirm my suspicion and I was able to capture the bounce from the forced refresh...
2018-05-31 20:59:27 DEBUG (SyncWorker_18) [pyvera] ** Setting lock state cache to 1
2018-05-31 20:59:27 DEBUG (SyncWorker_19) [pyvera] ** is_locked force refreshing, previous locked=1
2018-05-31 20:59:27 DEBUG (SyncWorker_19) [pyvera] ** is_locked after refresh locked=0
--------- after 10 seconds -----------
2018-05-31 20:59:37 DEBUG (SyncWorker_10) [pyvera] ** is_locked force refreshing, previous locked=0
2018-05-31 20:59:37 DEBUG (SyncWorker_10) [pyvera] ** is_locked after refresh locked=1
I'll keep digging and see if I can think of a way to debounce this - suggestions? I think removing the forced update in the HA is_locked() call could fix the issue, but I assume it was put there as a failsafe for other issues.
@pavoni - Ok, I think I've come up with a solution that doesn't involve threaded timers, take a look and let me know if you'd like any additional changes. Thanks
I would be tempted to remove the `True` in the `getValues` in HA, remove the timers, and just have this work the same way as other devices (now you've sorted the lock completion issues).
Assuming that this seems robust in your environment.
The method that I'm using in the latest changes of this branch doesn't use timers, just a timestamp, so I think this would be appropriate with or without the HA change to remove the forced update. It's been pretty robust here, but it would be nice to get another set of eyes on it. I'd merge as-is unless you feel strongly about it, in which case I could revert the last commit and we could just change HA and test. Let me know, thanks
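The timestamp approach described here could look roughly like the following. This is a sketch of the idea (ignore forced refreshes for a short window after a command and keep returning the commanded state); the names and the window length are illustrative, not the branch's actual code.

```python
import time

class DebouncedLock:
    """Sketch: debounce forced refreshes with a timestamp, no timers."""

    PENDING_WINDOW = 10  # seconds to distrust a refreshed state

    def __init__(self):
        self._commanded_state = None
        self._command_time = 0.0

    def record_command(self, locked):
        """Note the commanded state and when it was sent."""
        self._commanded_state = locked
        self._command_time = time.monotonic()

    def resolve(self, refreshed_state):
        """Return the state a forced refresh should report."""
        pending = (time.monotonic() - self._command_time) < self.PENDING_WINDOW
        if pending and self._commanded_state is not None:
            return self._commanded_state
        return refreshed_state
```

Unlike the `threading.Timer` approach, nothing has to fire later: each read simply compares the current time against the last command's timestamp.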
Let me know what you’d like to do with this; it’s probably good to go as-is for the merge if you’re good with the approach.
Sorry for the slow response.
If we can get away without the timer / debounce code, that would be my preference.
As long as it works correctly (without bouncing) with a revised version of the HA lock code that doesn't force the update here: https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/lock/vera.py#L52
And thanks for writing our first ever tests!
@jwater7 Sure this has been frustrating - but I think there is some great work here - so would like to finish off and merge if you're still willing.
Sorry I didn’t get back to you earlier, it came at a busy time for me and kind of fell off my radar. I haven’t had a chance yet to test out reverting my last changes and modifying home assistant. I might be able to get to that this weekend, also feel free to make any changes as well or branch what I’ve done if I’m taking too long :-) Thanks for keeping up on this and the reminder
I've been running the changes locally for about a week with no issues. It's still not perfect, but it's better than it was.
Sorry to hassle you - are you still keen to merge this?
No hassle, yes - let's merge. Need anything from me on this?
Looks like the changes are working well, can you do the merge?
I just put in another PR to add alerts (for things like PIN code use, pad lockouts, etc), which I am hoping to plumb through to HA once it lands. Is this repo still active?
It’s been a bit stale lately, but we should be able to get this stuff merged in if we coordinate with @pavoni
Now released in 0.3.1.
Modify VeraLock to correct the in-progress sdata problem, so that it waits to report lock status until un/locking has completed; also provide another example of sending a command to lock all VeraLocks.
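The "lock all VeraLocks" example mentioned could look roughly like this. It is a sketch using the same class-name check discussed earlier in the thread, not necessarily the code in the PR; `lock_all` is a hypothetical helper name.

```python
def lock_all(devices):
    """Send a lock command to every VeraLock in the device list.

    Returns the list of devices that were commanded, e.g. for logging.
    Typically `devices` would come from a pyvera controller, roughly:
        controller = pyvera.VeraController("http://192.168.1.161:3480/")
        lock_all(controller.get_devices())
    """
    locked = []
    for device in devices:
        # Same class-name test used in the subscription workaround above.
        if device.__class__.__name__ == 'VeraLock':
            device.lock()
            locked.append(device)
    return locked
```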