Closed: gravelfreeman closed this issue 9 months ago
I'm having the same issue, even with the server on and reachable (both Home Assistant and TrueNAS are running on top of a Proxmox hypervisor).
Started happening when I updated both the component and TrueNAS to 23.10.
Same for me since I upgraded to Cobia RC1: the log fills at an insane speed even when the server is ON.
Same for me after upgrading to Cobia RC1. I got ~4000 log messages in 2 minutes. I only noticed because my server's fans were running at full speed and CPU usage was at 100%.
Same issue here since upgrading to cobia
Same here, woke up to a crashed server and a 40GB log file
Seeing same issue after upgrade to Cobia. Seems something changed in the API calls?
Hey, same issue here - 40GB of logs.
It would be nice if the logging code were reviewed. This is very annoying.
I did some research: the TrueNAS API has been adjusted and the "old" endpoints are no longer supported, which is why they now return error 500. This should be fixed by this pull request, https://github.com/truenas/middleware/pull/12379, so that these endpoints return the relevant values again instead of error 500. It doesn't look like that code will make it into TS-23.10.1, and I couldn't find out when it will finally end up in a release. What shouldn't happen, though, is that on an error 500 the API is queried again, with the function that calls the API executing recursively: https://github.com/tomaae/homeassistant-truenas/blob/1918a8da0378f2c64b2d81498cc96d1fdc35f4ac/custom_components/truenas/coordinator.py#L326C13-L350C39
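The recursive-retry problem described above can be illustrated with a minimal sketch (function names are hypothetical, not taken from the integration's actual coordinator code): an unbounded retry on error 500 loops forever, while a capped retry bounds both the number of API calls and the log volume.

```python
# Minimal sketch of the retry problem; names are hypothetical,
# not the integration's actual implementation.

def query_api_unbounded(fetch):
    """Retry on HTTP 500 with no cap: if fetch() always fails,
    this recurses (and would log a warning) forever."""
    status, data = fetch()
    if status == 500:
        return query_api_unbounded(fetch)  # no limit -> log flood
    return data

def query_api_bounded(fetch, max_retries=3):
    """Retry on HTTP 500 at most max_retries times, then give up."""
    for attempt in range(max_retries):
        status, data = fetch()
        if status != 500:
            return data
        # a real implementation would log one warning per attempt here
    return None  # stop instead of retrying forever
```

With a cap, a server that only ever answers 500 (as after the Cobia API change) would produce a handful of warnings per update cycle instead of thousands per minute.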
Server filled up. Rolled back. Server logs filled up again.
Rolled back again, disabled the TrueNAS plugin, and have had no issues since.
+1, server doesn't have to be off
This is critical since the "turn off" function is now missing. I'm using this to turn off my server when there's a power outage. Please someone fix this!
This should already be fixed in master; you can give it a test. This happens when you have an invalid certificate with a recent HA update. The Cobia issue is something else; Cobia is not yet supported, see #102
I have it too. For me, turning ON my TrueNAS server kicks off endless HAOS logs, to the point that the HA device reaches 100% disk usage within hours.
Upgrading to Cobia seems to have triggered it.
When is cobia support expected?
I had to reinstall HAOS from scratch twice today, after postponing my troubleshooting for a few days, trying to get this under control.
It seems I could not successfully delete the integration using the GUI. My logs kept getting hammered by the ghost of the integration that survived the normal delete: hundreds of thousands of log entries, or 8.4 GB in an hour.
I manually deleted the entire truenas folder in custom_components just now, and it looks like that finally stopped the logs.
For me, it did not matter whether my NAS was on or off. I did upgrade to Cobia about a week ago, though I believe I upgraded TrueNAS SCALE earlier than I had originally wanted because of the HAOS issues.
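The manual clean-up described above (deleting the truenas folder from custom_components when the GUI removal fails) can be sketched as a small helper. The /config path default and the function name are assumptions for illustration, and Home Assistant must be restarted afterwards for the removal to take effect:

```python
import shutil
from pathlib import Path

def remove_truenas_integration(config_dir="/config"):
    """Delete custom_components/truenas under the HA config directory.

    Hypothetical helper mirroring the manual fix; restart Home
    Assistant afterwards so the removal takes effect.
    """
    integration_dir = Path(config_dir) / "custom_components" / "truenas"
    if integration_dir.is_dir():
        shutil.rmtree(integration_dir)  # remove all integration files
        return True
    return False  # nothing to remove
```

On HAOS the config directory is typically mounted at /config; adjust the path for other install types.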
I have a pretty similar problem: after upgrading TrueNAS to the new version, the integration fails and fills the disk with logs.
2023-11-23 08:59:08.257 WARNING (SyncWorker_6) [custom_components.truenas.truenas_controller] TrueNAS truenas.roberts fetching following graphs failed, check your NAS: ['load', 'cputemp', 'cpu', 'arcsize', 'arcratio', 'memory', 'interface', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory']
2023-11-23 08:59:08.287 WARNING (SyncWorker_6) [custom_components.truenas.truenas_api] TrueNAS truenas.roberts unable to fetch data "reporting/get_data" (500)
2023-11-23 08:59:08.318 WARNING (SyncWorker_6) [custom_components.truenas.truenas_api] TrueNAS truenas.roberts unable to fetch data "reporting/get_data" (500)
2023-11-23 08:59:08.357 WARNING (SyncWorker_6) [custom_components.truenas.truenas_api] TrueNAS truenas.roberts unable to fetch data "reporting/get_data" (500)
2023-11-23 08:59:08.397 WARNING (SyncWorker_6) [custom_components.truenas.truenas_api] TrueNAS truenas.roberts unable to fetch data "reporting/get_data" (500)
...and an ever-increasing list of sensors/graphs (the same ones are repeated again and again, and the list gets longer every time):
2023-11-23 08:59:11.125 WARNING (SyncWorker_6) [custom_components.truenas.truenas_controller] TrueNAS truenas.roberts fetching following graphs failed, check your NAS: ['load', 'cputemp', 'cpu', 'arcsize', 'arcratio', 'memory', 'interface', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory', 'cputemp', 'arcsize', 'memory']
Same problem updating to Cobia. Had to delete the TrueNAS integration through HACS and restart Home Assistant.
This is a temporary fix that involves editing the configuration.yaml file. You can do this using a text editor like File Editor or VSCode.
Open your configuration.yaml file and look for the logger section. If it's not present, you'll need to add it.
Add custom_components.truenas: critical under the logs section of logger. This will set the logging level for the TrueNAS custom component to critical, which should prevent the log from growing unnecessarily large.
Here's an example; this is in my configuration.yaml:
logger:
  default: warning
  logs:
    homeassistant.components.automation: error
    custom_components.truenas: critical
This configuration sets the default log level to warning while setting specific log levels for the automation components and the TrueNAS custom component.
Remember to back up your configuration.yaml file before making any changes. Hope this helps.
This issue is not about Cobia, but about the log being filled when TrueNAS is off. It should be fixed in master.
When is the new release expected?
Actually, my Home Assistant install is running on my TrueNAS server and I had this issue.
Describe the issue
My server is currently turned off, and the extension is filling my logs up to 55 GB, leaving me with 0 GB of disk space.
How to reproduce the issue
Steps to reproduce the behavior:
Expected behavior
It shouldn't log the same messages infinitely.
Software versions