tsensei781 opened 8 months ago
Same issue here with HA 2024.2 and the following signature (different manufacturer):
```json
{
  "node_descriptor": "NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.FullFunctionDevice|MainsPowered|RxOnWhenIdle|AllocateAddress: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)",
  "endpoints": {
    "1": {
      "profile_id": "0x0104",
      "device_type": "0x0202",
      "input_clusters": [
        "0x0000",
        "0x0004",
        "0x0005",
        "0x0102"
      ],
      "output_clusters": [
        "0x000a",
        "0x0019"
      ]
    },
    "242": {
      "profile_id": "0xa1e0",
      "device_type": "0x0061",
      "input_clusters": [],
      "output_clusters": [
        "0x0021"
      ]
    }
  },
  "manufacturer": "_TZ3210_ol1uhvza",
  "model": "TS130F",
  "class": "zhaquirks.tuya.ts130f.TuyaTS130FTOGP"
}
```
Same here :-)
```json
{
  "node_descriptor": "NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.FullFunctionDevice|MainsPowered|RxOnWhenIdle|AllocateAddress: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)",
  "endpoints": {
    "1": {
      "profile_id": "0x0104",
      "device_type": "0x0202",
      "input_clusters": [
        "0x0000",
        "0x0004",
        "0x0005",
        "0x0102"
      ],
      "output_clusters": [
        "0x000a",
        "0x0019"
      ]
    },
    "242": {
      "profile_id": "0xa1e0",
      "device_type": "0x0061",
      "input_clusters": [],
      "output_clusters": [
        "0x0021"
      ]
    }
  },
  "manufacturer": "_TZ3210_dwytrmda",
  "model": "TS130F",
  "class": "zhaquirks.tuya.ts130f.TuyaTS130FTOGP"
}
```
Any workaround known?
2024.2.1
I need to correct myself: on first power-on of device No. 1, pairing started instantly and I paired the device in HA. Device No. 2 did not want to pair and I had to start pairing manually. After pairing, the state, percentage, etc. were working.
I did a force pair with device No. 1, and now this device is also showing the state and working as expected.
What do you mean by "force pair"? Making it enter pairing mode manually?
yes
Some update:
after a few minutes, HA is losing the values:
The only workaround I found was to use several automation scripts to force the position attribute for a given range. But that's not very straightforward and needs to be re-done for every cover ...
This is also something I am considering now. The value that can be read from the device is 0. HA gets the position from the device when it changes, but after some time HA seems to poll the device again and gets the 0. So as I understand it, the value either needs to be written back to the device, or HA should not update from the device at all, which could be the better solution.
You are right, the value needs to be written back to the device through "Set zigbee cluster attribute".
Is this possible through a YAML change for that device type? Any hint?
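A minimal sketch of such a one-off call in YAML, reusing the cluster and attribute IDs from the automation shared further down (the IEEE address and the value are placeholders):

```yaml
service: zha.set_zigbee_cluster_attribute
data:
  ieee: "00:00:00:00:00:00:00:00"  # placeholder, take it from the device info page
  endpoint_id: 1
  cluster_id: 258      # 0x0102, window covering cluster (TuyaCoveringCluster)
  cluster_type: in
  attribute: 8         # 0x0008, current_position_lift_percentage
  value: 50            # the position to write back
  manufacturer: "-1"   # do not override the manufacturer code
```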
I have a similar issue. The status always goes back to "open" after a few minutes:
At 21:24 I did a full close action; after a few minutes it went back to open. So I can't open the cover in the morning because HA thinks it's already open.
When I press the button to do a full close, I still get an "open" report:
I can use the physical switches without any issues. The cover opens or closes no matter the status. The issue only affects HA.
My workaround is to call "close" for 0.5 s before "open" in my automations. However, it's not so simple with the HA buttons in the dashboard.
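In automation YAML, that workaround could look roughly like this (the entity id is a placeholder):

```yaml
# Sketch of the "close briefly, then open" workaround described above
- service: cover.close_cover
  target:
    entity_id: cover.my_cover   # placeholder entity id
- delay:
    milliseconds: 500
- service: cover.open_cover
  target:
    entity_id: cover.my_cover
```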
As mentioned above, an automation solves the issue: the value is written back to the device. This is necessary for every affected device. It has been working for a while now.
Same here in v2024.6.2 with the same device as @ssorgatem: _TZ3210_ol1uhvza
Something curious happens to me: I have 4 identical devices, bought on the same date and with the same firmware version. Two of them behave well and the other two do not.
Honestly, I don't understand it. I read somewhere that it could be related to whether the device is connected directly to the Zigbee coordinator or through a router, but I don't know how to check that.
My solution was to implement the following automation for each malfunctioning cover:
```yaml
alias: Persiana cuarto (corregir posición)
description: ""
trigger:
  - platform: state
    entity_id:
      - cover.persiana_cuarto
condition: []
action:
  - variables:
      position: "{{ state_attr('cover.persiana_cuarto', 'current_position') }}"
  - delay:
      seconds: 5
  - repeat:
      while:
        - condition: template
          value_template: >-
            {{ state_attr('cover.persiana_cuarto', 'current_position') != position }}
      sequence:
        - variables:
            position: "{{ state_attr('cover.persiana_cuarto', 'current_position') }}"
        - delay:
            seconds: 5
  - service: zha.set_zigbee_cluster_attribute
    data:
      # Get the IEEE address from the device info page in HA
      ieee: "00:0d:6f:00:05:7d:2d:34"
      # TuyaCoveringCluster (Endpoint id: 1, Id: 0x0102, Type: in)
      endpoint_id: 1
      cluster_id: 258
      cluster_type: in
      # current_position_lift_percentage (id: 0x0008)
      attribute: 8
      # set value from the variable captured above
      value: "{{ position }}"
      # do not override the manufacturer code
      manufacturer: "-1"
mode: restart
```
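For anyone adapting it, the way I read this automation: every state change of the cover (re)starts it thanks to mode: restart, the repeat/while loop polls current_position every 5 seconds until it stops changing, and only then is the settled value written back to attribute 0x0008 of cluster 0x0102, so a later read from the device returns the real position instead of the stale value.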
Thanks alorle for providing your script. Much more efficient than mine.
Bug description
With the device identified as below: "manufacturer": "_TZ3210_dwytrmda", "model": "TS130F", "class": "zhaquirks.tuya.ts130f.TuyaTS130FTOGP"
It seems that the current_position_lift_percentage attribute (id 0x0008) is not updated correctly, leading to unexpected behavior within HA. In my case this attribute is always set to 100, whatever the position of the cover is.
Steps to reproduce
Expected behavior
In step 3 above, we should expect to always read the same position within HA.
Screenshots/Video
[Paste/upload your media here]
Device signature
```json
{
  "node_descriptor": "NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.FullFunctionDevice|MainsPowered|RxOnWhenIdle|AllocateAddress: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)",
  "endpoints": {
    "1": { "profile_id": "0x0104", "device_type": "0x0202", "input_clusters": ["0x0000", "0x0004", "0x0005", "0x0102"], "output_clusters": ["0x000a", "0x0019"] },
    "242": { "profile_id": "0xa1e0", "device_type": "0x0061", "input_clusters": [], "output_clusters": ["0x0021"] }
  },
  "manufacturer": "_TZ3210_dwytrmda",
  "model": "TS130F",
  "class": "zhaquirks.tuya.ts130f.TuyaTS130FTOGP"
}
```
Diagnostic information
```json
{ "home_assistant": { "installation_type": "Home Assistant Container", "version": "2023.8.4", "dev": false, "hassio": false, "virtualenv": false, "python_version": "3.11.4", "docker": true, "arch": "x86_64", "timezone": "Europe/Paris", "os_name": "Linux", "os_version": "6.2.0-35-generic", "run_as_root": true }, "custom_components": { "pfsense": { "version": "0.1.0", "requirements": [ "mac-vendor-lookup>=0.1.11" ] }, "nest_protect": { "version": "0.4.0b1", "requirements": [] }, "cover_time_based_synced": { "version": "2.0.0", "requirements": [] }, "hacs": { "version": "1.33.0", "requirements": [ "aiogithubapi>=22.10.1" ] } }, "integration_manifest": { "domain": "zha", "name": "Zigbee Home Automation", "after_dependencies": [ "onboarding", "usb" ], "codeowners": [ "@dmulcahey", "@adminiuga", "@puddly" ], "config_flow": true, "dependencies": [ "file_upload" ], "documentation": "https://www.home-assistant.io/integrations/zha", "iot_class": "local_polling", "loggers": [ "aiosqlite", "bellows", "crccheck", "pure_pcapy3", "zhaquirks", "zigpy", "zigpy_deconz", "zigpy_xbee", "zigpy_zigate", "zigpy_znp" ], "requirements": [ "bellows==0.35.9", "pyserial==3.5", "pyserial-asyncio==0.6", "zha-quirks==0.0.102", "zigpy-deconz==0.21.0", "zigpy==0.56.4", "zigpy-xbee==0.18.1", "zigpy-zigate==0.11.0", "zigpy-znp==0.11.4" ], "usb": [ { "vid": "10C4", "pid": "EA60", "description": "*2652*", "known_devices": [ "slae.sh cc2652rb stick" ] }, { "vid": "1A86", "pid": "55D4", "description": "*sonoff*plus*", "known_devices": [ "sonoff zigbee dongle plus v2" ] }, { "vid": "10C4", "pid": "EA60", "description": "*sonoff*plus*", "known_devices": [ "sonoff zigbee dongle plus" ] }, { "vid": "10C4", "pid": "EA60", "description": "*tubeszb*", "known_devices": [ "TubesZB Coordinator" ] }, { "vid": "1A86", "pid": "7523", "description": "*tubeszb*", "known_devices": [ "TubesZB Coordinator" ] }, { "vid": "1A86", "pid": "7523", "description": "*zigstar*", "known_devices": [ "ZigStar Coordinators" ] }, { "vid": "1CF1", "pid": "0030", "description": "*conbee*", "known_devices": [ "Conbee II" ] }, { "vid": "10C4", "pid": "8A2A", "description": "*zigbee*", "known_devices": [ "Nortek HUSBZB-1" ] }, { "vid": "0403", "pid": "6015", "description": "*zigate*", "known_devices": [ "ZiGate+" ] }, { "vid": "10C4", "pid": "EA60", "description": "*zigate*", "known_devices": [ "ZiGate" ] }, { "vid": "10C4", "pid": "8B34", "description": "*bv 2010/10*", "known_devices": [ "Bitron Video AV2010/10" ] } ], "zeroconf": [ { "type": "_esphomelib._tcp.local.", "name": "tube*" }, { "type": "_zigate-zigbee-gateway._tcp.local.", "name": "*zigate*" }, { "type": "_zigstar_gw._tcp.local.", "name": "*zigstar*" }, { "type": "_slzb-06._tcp.local.", "name": "slzb-06*" } ], "is_built_in": true }, "data": { "ieee": "**REDACTED**", "nwk": 26601, "manufacturer": "_TZ3210_dwytrmda", "model": "TS130F", "name": "_TZ3210_dwytrmda TS130F", "quirk_applied": true, "quirk_class": "zhaquirks.tuya.ts130f.TuyaTS130FTOGP", "manufacturer_code": 4417, "power_source": "Mains", "lqi": 33, "rssi": null, "last_seen": "2023-10-22T11:29:41", "available": true, "device_type": "Router", "signature": { "node_descriptor": "NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.FullFunctionDevice|MainsPowered|RxOnWhenIdle|AllocateAddress: 142>, manufacturer_code=4417, maximum_buffer_size=66, maximum_incoming_transfer_size=66, server_mask=10752, maximum_outgoing_transfer_size=66, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)", "endpoints": { "1": { "profile_id": "0x0104", "device_type": "0x0202", "input_clusters": ["0x0000", "0x0004", "0x0005", "0x0102"], "output_clusters": ["0x000a", "0x0019"] }, "242": { "profile_id": "0xa1e0", "device_type": "0x0061", "input_clusters": [], "output_clusters": ["0x0021"] } }, "manufacturer": "_TZ3210_dwytrmda", "model": "TS130F" } } }
```
Logs
```python
[Paste the logs here]
```
Additional information
No response