home-assistant / core

:house_with_garden: Open source home automation that puts local control and privacy first.
https://www.home-assistant.io
Apache License 2.0

ZHA component default logging level is DEBUG #123863

Closed hcordobest closed 1 month ago

hcordobest commented 1 month ago

The problem

I noticed a big increase in my log file size, and it seems the ZHA component's logging level is being set to DEBUG. I checked the defaults at https://github.com/home-assistant/core/blame/b7bbc938d348b2ae82766f434d508ca9f75b1522/homeassistant/components/zha/helpers.py#L190 and changed that level to WARNING on my instance; now my logs are behaving normally again.
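For context, the mechanics of the change described above can be sketched with Python's standard `logging` module: a logger set to DEBUG emits everything, and raising its level to WARNING suppresses the DEBUG records. The `bellows.ash` name below simply mirrors one of the noisy logger namespaces from the log excerpt; this is a standalone illustration, not the actual helpers.py code.

```python
import logging

# Illustrative logger name, mirroring one of the noisy namespaces in the report.
log = logging.getLogger("bellows.ash")

log.setLevel(logging.DEBUG)
assert log.isEnabledFor(logging.DEBUG)  # DEBUG records would be emitted

log.setLevel(logging.WARNING)
assert not log.isEnabledFor(logging.DEBUG)  # DEBUG records are now dropped
assert log.isEnabledFor(logging.WARNING)    # WARNING and above still pass
```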

What version of Home Assistant Core has the issue?

core-2024.8.1

What was the last working version of Home Assistant Core?

No response

What type of installation are you running?

Home Assistant Core

Integration causing the issue

ZHA

Link to integration documentation on our website

https://www.home-assistant.io/integrations/zha/

Diagnostics information

No response

Example YAML snippet

No response

Anything in the logs that might be useful for us?

This is what the log file looks like with the original ZHA component code unchanged, and data values replaced with XXXX (just in case):

2024-08-13 00:29:54.673 DEBUG (bellows.thread_0) [bellows.ash] Sending data  XXXXXX
2024-08-13 00:29:54.673 DEBUG (bellows.thread_0) [bellows.ash] Semaphore is locked, waiting
2024-08-13 00:29:54.685 DEBUG (bellows.thread_0) [bellows.ash] Received data XXXXXX
2024-08-13 00:29:54.685 DEBUG (bellows.thread_0) [bellows.ash] Received frame DataFrame(frm_num=2, re_tx=0, ack_num=5, ezsp_frame=b'XXXXXXX')
2024-08-13 00:29:54.687 DEBUG (bellows.thread_0) [bellows.ash] Sending frame AckFrame(res=0, ncp_ready=0, ack_num=3) + FLAG
2024-08-13 00:29:54.687 DEBUG (bellows.thread_0) [bellows.ash] Sending data  83401b7e
2024-08-13 00:29:54.687 DEBUG (bellows.thread_0) [bellows.ash] Sending frame DataFrame(frm_num=5, re_tx=False, ack_num=3, ezsp_frame=b'\XXXXXX') + FLAG
2024-08-13 00:29:54.687 DEBUG (bellows.thread_0) [bellows.ash] Sending data  XXXXXX
2024-08-13 00:29:54.687 DEBUG (MainThread) [bellows.ezsp.protocol] Received command sendBroadcast: {'status': <EmberStatus.SUCCESS: 0>, 'sequence': 8}
2024-08-13 00:29:54.688 DEBUG (bellows.thread_0) [bellows.ash] Received data ce6749d27e
2024-08-13 00:29:54.688 DEBUG (bellows.thread_0) [bellows.ash] Received frame DataFrame(frm_num=3, re_tx=0, ack_num=5, ezsp_frame=b'\XXXXXXXXX')
2024-08-13 00:29:54.688 DEBUG (bellows.thread_0) [bellows.ash] Sending frame AckFrame(res=0, ncp_ready=0, ack_num=4) + FLAG
2024-08-13 00:29:54.688 DEBUG (bellows.thread_0) [bellows.ash] Sending data  8430fc7e
2024-08-13 00:29:54.689 DEBUG (MainThread) [bellows.ezsp.protocol] Received command messageSentHandler: {'type': <EmberOutgoingMessageType.OUTGOING_DIRECT: 0>, 'indexOrDestination': 15716, 'apsFrame': EmberApsFrame(profileId=260, clusterId=61184, sourceEndpoint=1, destinationEndpoint=1, options=<EmberApsOption.APS_OPTION_RETRY|APS_OPTION_ENABLE_ROUTE_DISCOVERY: 320>, groupId=0, sequence=7), 'messageTag': 30, 'status': <EmberStatus.SUCCESS: 0>, 'messageContents': b''}
2024-08-13 00:29:54.689 DEBUG (MainThread) [bellows.zigbee.application] Received messageSentHandler frame with [<EmberOutgoingMessageType.OUTGOING_DIRECT: 0>, 15716, EmberApsFrame(profileId=260, clusterId=61184, sourceEndpoint=1, destinationEndpoint=1, options=<EmberApsOption.APS_OPTION_RETRY|APS_OPTION_ENABLE_ROUTE_DISCOVERY: 320>, groupId=0, sequence=7), 30, <EmberStatus.SUCCESS: 0>, b'']
2024-08-13 00:29:54.689 DEBUG (MainThread) [bellows.ezsp.protocol] Sending command  permitJoining: (254,) {}
2024-08-13 00:29:54.690 DEBUG (bellows.thread_0) [bellows.ash] Semaphore is locked, waiting
2024-08-13 00:29:54.691 DEBUG (bellows.thread_0) [bellows.ash] Received data XXXXXX
2024-08-13 00:29:54.691 DEBUG (bellows.thread_0) [bellows.ash] Received frame DataFrame(frm_num=4, re_tx=0, ack_num=5, ezsp_frame=b'\x83\x94\x01\x19\x00\x9c')
2024-08-13 00:29:54.691 DEBUG (bellows.thread_0) [bellows.ash] Sending frame AckFrame(res=0, ncp_ready=0, ack_num=5) + FLAG
2024-08-13 00:29:54.692 DEBUG (bellows.thread_0) [bellows.ash] Sending data  XXXXX

Additional information

No response

home-assistant[bot] commented 1 month ago

Hey there @dmulcahey, @adminiuga, @puddly, @thejulianjes, mind taking a look at this issue as it has been labeled with an integration (zha) you are listed as a code owner for? Thanks!

Code owner commands

Code owners of `zha` can trigger bot actions by commenting:

- `@home-assistant close` Closes the issue.
- `@home-assistant rename Awesome new title` Renames the issue.
- `@home-assistant reopen` Reopens the issue.
- `@home-assistant unassign zha` Removes the current integration label and assignees on the issue; add the integration domain after the command.
- `@home-assistant add-label needs-more-information` Adds a label (needs-more-information, problem in dependency, problem in custom component) to the issue.
- `@home-assistant remove-label needs-more-information` Removes a label (needs-more-information, problem in dependency, problem in custom component) from the issue.

(message by CodeOwnersMention)


zha documentation zha source (message by IssueLinks)

mib1185 commented 1 month ago

I checked the defaults at https://github.com/home-assistant/core/blame/b7bbc938d348b2ae82766f434d508ca9f75b1522/homeassistant/components/zha/helpers.py#L190

This is not the default; it is the collection of debug logger configs that is applied when debug logging is enabled. So I think you have enabled debug logging for ZHA in your config (YAML or UI)?

hcordobest commented 1 month ago

I thought so at first, but there is no zha log level configuration set in either the YAML or the UI; the only change I made was that code.

Nevertheless, what you say makes a lot of sense, so I have reverted the code change and restarted, and it still looks fine. But when the server showed this behaviour it had only been running for about two days, and I had not set the log level in either place.

Let's close the issue for now, as it is not reproducible, but I'll keep an eye out in case it appears again unexpectedly.

Thanks!