Closed wellsy57 closed 4 years ago
Never seen this before. What does your config look like for fronius_inverter?
Yeah, I thought it strange. I presently have this, which was giving no errors or warnings up until 0.107.7.
Current config:
- platform: fronius_inverter
  ip_address: 192.168.1.211
  power_units: W
  units: kWh
  monitored_conditions:
    - ac_power
    - day_energy
    - year_energy
    - total_energy
    - ac_current
    - ac_voltage
    - ac_frequency
    - dc_current
    - dc_voltage
- platform: fronius_inverter
  ip_address: 192.168.1.211
  powerflow: True
  smartmeter: True
  # smartmeter_device_id: 240.453036
  power_units: W
  units: kWh
- platform: integration
  source: sensor.fronius_grid_usage
  name: fronius_grid_usage_integration
  unit_prefix: k
  round: 2
- platform: integration
  source: sensor.fronius_house_load
  name: fronius_house_load_integration
  unit_prefix: k
  round: 2
Oh, you have multiple instances configured against the same inverter. That explains why you are getting those errors. You should have only one instance.
Change your configuration to this instead. And either skip the monitored conditions completely (like I did) or use the ones you had but also add those from powerflow and smartmeter that you want to monitor.
- platform: fronius_inverter
  ip_address: 192.168.1.211
  power_units: W
  units: kWh
  powerflow: True
  smartmeter: True
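If you do want to keep monitored_conditions on the single instance, a sketch of what that could look like is below. The inverter condition names are taken from the earlier config; the grid_usage and house_load keys are my assumption, inferred from the sensor.fronius_grid_usage and sensor.fronius_house_load entity IDs used by the integration sensors above, so check the component's readme for the exact powerflow/smartmeter condition names:

```yaml
# Sketch only: single fronius_inverter instance with explicit
# monitored_conditions. grid_usage/house_load key names are assumed,
# not confirmed against the component's readme.
- platform: fronius_inverter
  ip_address: 192.168.1.211
  power_units: W
  units: kWh
  powerflow: True
  smartmeter: True
  monitored_conditions:
    - ac_power
    - day_energy
    - year_energy
    - total_energy
    - grid_usage    # assumed powerflow condition name
    - house_load    # assumed powerflow condition name
```

Remember that any sensor not listed here will not be created, so listing too few conditions is what leaves previously known sensors showing as unavailable.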
Ok, thanks @nilrog, that has done the trick. I remember when I set this up, when my smartmeter was installed (after the earlier setup of the inverter), that it was not a trouble-free process. The way I settled on was what worked at the time, but obviously that was not the way I should have configured it, right?
Even now, until I removed all the monitored conditions, there were a load of sensors showing as unavailable, but after correcting that all is good! Perhaps tweaking the documentation may help others?
All I'm left with now is this in my log:
Log Details (WARNING) Logger: homeassistant.loader Source: loader.py:311 First occurred: 8:42:10 AM (1 occurrences) Last logged: 8:42:10 AM
You are using a custom integration for fronius_inverter which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
Is there any chance this custom component (which IMHO is far superior to the core fronius integration) can be promoted to core?
Cheers for your help!
Glad to hear you got it resolved :)
Even now until I removed all the monitored conditions there were a load of sensors unavailable but after correcting that all good! Perhaps tweaking the documentation may help others?
Yes, that's how it works...and is one of the examples in the readme. If you specify monitored conditions the sensors you list there are the only ones that will be created.
Is there any chance this custom component (which IMHO is far superior to the core fronius integration) can be promoted to core? I have my fingers crossed for that.
That is not for me to decide...I have only contributed to this one.
Hey...just re-read this "Yes, that's how it works...and is one of the examples in the readme. If you specify monitored conditions the sensors you list there are the only ones that will be created."
Well, I can confirm that those sensors were in fact present in the developer-tools/state area, just shown as unavailable. Is that how it works normally?
If they are shown as unavailable then they have, at some point, been known to HA. But they are no longer active so they are set to unavailable.
If you look at the config you posted here, the second of the two instances you had running was configured to create all sensors, since it had no monitored_conditions set. If you removed that instance and kept the other, with monitored_conditions still set, then you would have a bunch of unavailable sensors, since the remaining instance created only a limited set of sensors.
That's not a behaviour I have ever observed before. If I remove a sensor config and restart, it's normally gone. I guess it's to do with there having been duplicates in the past, though.
Closing this now thanks for your help with this. Cheers!
There is an issue with the integration which does not stop it working at all, but it continues to raise the same errors and warnings, and has for a while now, across all releases I have tested since 0.107.7.
I am now testing on 0.110.1, and these are the errors and warnings which persist:
I am using this environment:
Cheers for building such a fantastic integration!