Hello,
Not possible with the integration in its current state. All the IDS server sends back is "SUCCESS" or "FAILURE". I don't know how the app does it, since I don't see any traffic flowing other than the arm command and the FAILURE feedback.
I have an idea of how I can implement something in the API. I'll play around with it when I have some time. No promises though.
I've added a pre-release 1.7.4b1.
I've added a sensor, sensor.[site]_arm_failure_cause, which tries to determine the cause of the arm failure.
Please take a look and see whether this works for what you want. If there are multiple zones causing the issue, all of them are listed in the JSON feedback. I've added a sample Jinja template in the readme (see below) which you can use as a base to extract the info.
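For example, something along these lines could pull the violated zones out of that JSON (just a rough sketch: sensor.huis_arm_failure_cause is a placeholder entity name, and the failurecause/zones key names may differ between versions):

{# Rough sketch: list the zones blocking arming from the sensor's JSON state. #}
{# sensor.huis_arm_failure_cause is a placeholder; key names may differ.      #}
{% set cause = states('sensor.huis_arm_failure_cause') | from_json %}
{% if cause.zones is defined %}
Arming failed ({{ cause.failurecause }}): {{ cause.zones.values() | join(', ') }}
{% else %}
No violated zones reported.
{% endif %}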
I've also added a new save method for persistent IDs for FCM feedback which hopefully works on all installation types.
Take a look at the Changelog for more info.
Correction: it's 1.8.0b1. I messed up the semantic versioning.
Thanks. Will take a look.
An alternative might be to create violated sensors for PIRs, beams and doors which list the respective sensors' states, e.g. binary_sensor.ids_x64_doors_violated that has a list of all doors in its attributes? Thoughts? I tried making one from templates, but it doesn't seem to list them in the attributes. Maybe I'm doing something wrong, or maybe the template is too complicated. The state works, but the attributes, which are the important part, don't.
Works in Dev Tools, but not as a templated sensor.
- binary_sensor:
    - name: "IDS x64 Doors Violated"
      unique_id: "ids_x64_doors_violated"
      device_class: door
      state: >
        {%- set sensors = states.switch
          | sort(attribute='entity_id')
          | selectattr('entity_id', 'contains', 'switch.ids_x64')
          | selectattr('entity_id', 'contains', 'door')
          | selectattr('attributes.violated', 'eq', True)
          | list
        -%}
        {% if sensors | list | length != 0 %}on{% else %}off{% endif %}
      attributes:
        violated: >-
          {%- set sensors = states.switch
            | sort(attribute='entity_id')
            | selectattr('entity_id', 'contains', 'switch.ids_x64')
            | selectattr('entity_id', 'contains', 'door')
            | selectattr('attributes.violated', 'eq', True)
            | list
          -%}
          {%- for sensor in sensors | list %}
          - {{ sensor.entity_id }}
          {%- endfor %}
    - name: "IDS x64 Beams Violated"
      unique_id: "ids_x64_beams_violated"
      device_class: motion
      state: >
        {%- set sensors = states.switch
          | sort(attribute='entity_id')
          | selectattr('entity_id', 'contains', 'switch.ids_x64')
          | selectattr('entity_id', 'contains', 'beam')
          | selectattr('attributes.violated', 'eq', True)
          | list
        -%}
        {% if sensors | length != 0 %}on{% else %}off{% endif %}
      attributes:
        violated: >
          {%- set sensors = states.switch
            | sort(attribute='entity_id')
            | selectattr('entity_id', 'contains', 'switch.ids_x64')
            | selectattr('entity_id', 'contains', 'beam')
            | selectattr('attributes.violated', 'eq', True)
            | list
          -%}
          [{%- for sensor in sensors | sort(attribute='entity_id') %}
          {{ sensor.entity_id }},
          {%- endfor %}]
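For comparison, a stripped-down variant of the doors sensor that keeps the attribute as a single expression (so it should come back as a native list rather than a multi-line string) might look like this; it's only a sketch, with illustrative name/unique_id, and assumes a recent Home Assistant where the contains test is available:

- binary_sensor:
    - name: "IDS x64 Doors Violated (native list)"
      unique_id: "ids_x64_doors_violated_native"
      device_class: door
      state: >
        {# on/off based on whether any ids_x64 door switch reports violated #}
        {{ states.switch
           | selectattr('entity_id', 'contains', 'switch.ids_x64')
           | selectattr('entity_id', 'contains', 'door')
           | selectattr('attributes.violated', 'eq', True)
           | list | count > 0 }}
      attributes:
        violated: >
          {# A single expression, so the result should stay a native list #}
          {# of entity_ids instead of being flattened into a string.      #}
          {{ states.switch
             | selectattr('entity_id', 'contains', 'switch.ids_x64')
             | selectattr('entity_id', 'contains', 'door')
             | selectattr('attributes.violated', 'eq', True)
             | map(attribute='entity_id') | sort | list }}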
Your idea for the template would work if the information were live, but it isn't (it could be 30 seconds out of date). I didn't check the template code itself; since you're relying on potentially outdated data, it's kind of moot.
This is kind of what this new sensor does, except that it's kept updated: prior to arming, it does a "pre-arm check".
Play around a bit with violated zones while trying to arm, then check what the sensor returns. Let me know if you need some other info or clarity and I'll take a look (if available / possible); I can add additional info to the dict.
I'll add better info to the readme once the syntax and info are final.
Thanks, will take a look
2024-10-06 08:13:52.754 ERROR (Thread-2 (fcm_notification_thread)) [root] Uncaught thread exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/site-packages/pyhyypapihawkmod/client.py", line 242, in fcm_notification_thread
    self.fcm_listener.runner(callback=callback,
  File "/usr/local/lib/python3.12/site-packages/pyhyypapihawkmod/push_receiver.py", line 722, in runner
    self.__listen(credentials=credentials, on_notify_callback=on_notification, ids_callback=ids_callback, received_persistent_ids=self.received_persistent_ids)
  File "/usr/local/lib/python3.12/site-packages/pyhyypapihawkmod/push_receiver.py", line 636, in __listen
    self.fcm_registration.fcm_subscribe(credentials=credentials)
  File "/usr/local/lib/python3.12/site-packages/pyhyypapihawkmod/push_receiver.py", line 240, in fcm_subscribe
    firebase_installation_auth = credentials["auth_info"]["googleAuthToken"]
                                 ~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'auth_info'
Strange, that code/function is unaffected by this update. The auth info is still obtained and saved in the same way.
Did you remove and re-add the integration in HASS? (Not just upgrade)
I've been using this version for a few days internally. Can't see any issues. Where / when did the error occur?
I didn't uninstall and reinstall; I upgraded from 1.7.x to 1.8.x. HA has been a bit unstable since updating to 2024.10.0, so maybe it was related. After fixing some templates and getting everything stable, I haven't seen the FCM error again, so 🤷♂️
Thanks for the feedback.
When you get to testing sensor.[site]_arm_failure_cause, please let me know if it works how you expect it to work.
My internal test just sends a push notification with the "problem" zones when arm fails.
I've had some issues for the last few weeks where I just can't arm my alarm through HA - I was hoping this would find the issue. It works fine on the panel so I'm not sure if it's integration related.
If I have a violated zone it gives back {"failurecause": "VIOLATED ZONES", "zones": {"1": "FRONT DR"}, "timestamp": 1728201775.0968704}, so I know that is working. When I try to arm when it should be able to arm (no violations) I get {"failurecause": 0, "timestamp": 1728330676.017343}.
Any ideas? I'm guessing not. I've tried removing and re-adding the integration.
@e1ace I've opened a new issue for this: #32. This thread is for this feature, so I don't want to derail it. We can continue the discussion in #32.
@wernerhp @e1ace, I've managed to figure out how to get the failure cause from IDS via push messages.
I've released 1.8.0b10 into the test branch. As a precaution, please remove and re-add the integration; I can't be sure that what I've changed won't break something if you don't. I did a lot of testing and changes, so it's just easier if you re-add.
Previously, sensor.[site]_arm_failure_cause was updated using a custom "pre_arm" check I made, so I could only assume what was wrong based on the information I had. The current version uses push information from the IDS servers.
sensor.[site]_arm_failure_cause has been reworked, and the output is now a bit different. It should now tell you exactly why you can't arm; this is info directly from IDS.
Template example to get info:
{% set sensor_name = 'sensor.huis_arm_failure_cause' %}
{% set notification = states(sensor_name)|from_json %}
{{ notification.title }}
{{ notification.body }}
{{ notification.timestamp }}
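If the end goal is the speaker announcement from the original request, an automation along these lines could consume that sensor (again only a sketch: the entity IDs and the tts.google_translate_say service are placeholders for whatever exists in your setup):

# Sketch only: announce the arm-failure cause when the sensor updates.
# sensor.huis_arm_failure_cause, media_player.kitchen and
# tts.google_translate_say are placeholders; adjust to your own setup.
- alias: "Announce arm failure cause"
  trigger:
    - platform: state
      entity_id: sensor.huis_arm_failure_cause
  condition:
    - condition: template
      value_template: "{{ trigger.to_state.state not in ['unknown', 'unavailable', ''] }}"
  action:
    - variables:
        notification: "{{ trigger.to_state.state | from_json }}"
    - service: tts.google_translate_say
      data:
        entity_id: media_player.kitchen
        message: "{{ notification.title }}. {{ notification.body }}"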
This may not work if you're using ADT. I could see different message encoding if I changed some client package information. But I don't have an ADT system to test, so for now it's just "not working for ADT".
I'm not at home, but I've done some basic testing and I can't make sense of it. I'm going to need to actually write down each step and what's happening, starting from a fresh install. Essentially, I found that I could bypass zones when I didn't configure a code. Once I set a code, it stops allowing me to change the zones. I'm sure the code is correct since I use it on my panels.
Part of me feels like defaulting my panel and just reconfiguring everything with IDSwift.
That's quite odd. IDS allows you to "save" the PIN on the server side, i.e. you don't need to enter the code when performing actions via that specific account/user. Even if this is active, using the code is still acceptable and shouldn't give you errors.
I don't think it has anything to do with the integration as such. If you do figure something out, let me know. If it's something I can fix I'll give it a shot.
If you want to try a clean HASS, there are VM images available for Home Assistant. That way you can test a completely clean installation without impacting your production server.
Hopefully you have a fairly recent backup of your panel on IDSwift. Just thinking about that makes me sad. It doesn't take too long, just a schlep.
This feature has been added in 1.8.2+
Any idea if it's possible to respond to an error or warning when arming while a zone is violated?
I want to set up an automation that announces on speakers that the alarm was not armed and which zones are violated.