safepay / sensor.fronius

A Fronius Sensor for Home Assistant
MIT License
80 stars 32 forks

Entity id already exists #33

Closed wellsy57 closed 4 years ago

wellsy57 commented 4 years ago

There is an issue with the integration that does not stop it from working, but it has kept raising the same errors and warnings across all of the releases I have tested since 0.107.7.

I am now testing in 0.110.1 and these are the errors and warnings which persist:

Logger: homeassistant.components.sensor
Source: helpers/entity_platform.py:436
Integration: Sensor (documentation, issues)
First occurred: 10:58:15 PM (9 occurrences)
Last logged: 10:58:15 PM

Entity id already exists - ignoring: sensor.fronius_ac_current. Platform fronius_inverter does not generate unique IDs
Entity id already exists - ignoring: sensor.fronius_ac_voltage. Platform fronius_inverter does not generate unique IDs
Entity id already exists - ignoring: sensor.fronius_ac_frequency. Platform fronius_inverter does not generate unique IDs
Entity id already exists - ignoring: sensor.fronius_dc_current. Platform fronius_inverter does not generate unique IDs
Entity id already exists - ignoring: sensor.fronius_dc_voltage. Platform fronius_inverter does not generate unique IDs

I am using this environment:


arch | x86_64
-- | --
dev | false
docker | true
hassio | false
installation_type | Home Assistant Core on Docker
os_name | Linux
os_version | 4.2.8
python_version | 3.7.7
timezone | Australia/Brisbane
version | 0.110.1
virtualenv | false

Cheers for building such a fantastic integration!

nilrog commented 4 years ago

Never seen this before. What does your config look like for fronius_inverter?

wellsy57 commented 4 years ago

Yeah, I thought it strange. I presently have the config below, which gave no errors or warnings up until 0.107.7.

Current config:

  - platform: fronius_inverter
    ip_address: 192.168.1.211
    power_units: W
    units: kWh    
    monitored_conditions:
      - ac_power
      - day_energy
      - year_energy
      - total_energy
      - ac_current
      - ac_voltage
      - ac_frequency
      - dc_current
      - dc_voltage

  - platform: fronius_inverter
    ip_address: 192.168.1.211
    powerflow: True
    smartmeter: True
#    smartmeter_device_id: 240.453036
    power_units: W
    units: kWh

  - platform: integration
    source: sensor.fronius_grid_usage
    name: fronius_grid_usage_integration
    unit_prefix: k
    round: 2

  - platform: integration
    source: sensor.fronius_house_load
    name: fronius_house_load_integration
    unit_prefix: k
    round: 2    
nilrog commented 4 years ago

Oh, you have multiple instances configured against the same inverter. That explains why you are getting those errors. You should have only one instance.

Change your configuration to this instead. And either skip the monitored conditions completely (like I did) or use the ones you had but also add those from powerflow and smartmeter that you want to monitor.

  - platform: fronius_inverter
    ip_address: 192.168.1.211
    power_units: W
    units: kWh    
    powerflow: True
    smartmeter: True
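If you prefer to keep an explicit monitored_conditions list rather than letting the integration create every sensor, a single merged instance might look like the sketch below. Note the grid_usage and house_load condition names are assumptions inferred from the sensor.fronius_grid_usage and sensor.fronius_house_load entities used earlier in this thread; check the readme for the exact keys your powerflow/smartmeter setup exposes.

```yaml
# Single fronius_inverter instance; one instance per inverter avoids
# the "Entity id already exists" warnings.
- platform: fronius_inverter
  ip_address: 192.168.1.211
  power_units: W
  units: kWh
  powerflow: True
  smartmeter: True
  monitored_conditions:
    # Inverter sensors from the original config
    - ac_power
    - day_energy
    - year_energy
    - total_energy
    - ac_current
    - ac_voltage
    - ac_frequency
    - dc_current
    - dc_voltage
    # Powerflow/smartmeter sensors (names assumed from the
    # sensor.fronius_* entities referenced in this thread)
    - grid_usage
    - house_load
```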
wellsy57 commented 4 years ago

Ok, thanks @nilrog, that has done the trick. I remember that when I set this up after my smartmeter was installed (the inverter had been set up earlier), it was not a trouble-free process. The setup I settled on was what worked at the time, but obviously that was not the way I should have configured it, right?

Even then, until I removed all the monitored conditions, a load of sensors were unavailable, but after correcting that all is good! Perhaps tweaking the documentation may help others?

All I'm left with now is this in my log:

Log Details (WARNING)
Logger: homeassistant.loader
Source: loader.py:311
First occurred: 8:42:10 AM (1 occurrences)
Last logged: 8:42:10 AM

You are using a custom integration for fronius_inverter which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.

Is there any chance this custom component (which IMHO is far superior to the core fronius integration) can be promoted to core?

Cheers for your help!

nilrog commented 4 years ago

Glad to hear you got it resolved :)

Even then, until I removed all the monitored conditions, a load of sensors were unavailable, but after correcting that all is good! Perhaps tweaking the documentation may help others?

Yes, that's how it works, and it is one of the examples in the readme. If you specify monitored conditions, the sensors you list there are the only ones that will be created.

wellsy57 commented 4 years ago

Is there any chance this custom component (which IMHO is far superior to the core fronius integration) can be promoted to core? I have my fingers crossed for that.

nilrog commented 4 years ago

That is not for me to decide...I have only contributed to this one.

wellsy57 commented 4 years ago

Hey...just re-read this "Yes, that's how it works...and is one of the examples in the readme. If you specify monitored conditions the sensors you list there are the only ones that will be created."

Well, I can confirm that those sensors were in fact still present in the developer-tools/state area, just shown as unavailable. Is that how it normally works?

nilrog commented 4 years ago

If they are shown as unavailable then they have, at some point, been known to HA. But they are no longer active so they are set to unavailable.

If you look at the config you posted here, the second of the two instances you had running was configured to create all sensors, since it had no monitored_conditions set. If you removed that instance and kept the other, with monitored_conditions still set, you would end up with a bunch of unavailable sensors, since the remaining instance creates only a limited set.

wellsy57 commented 4 years ago

That's not a behaviour I have ever observed before. If I remove a sensor config and restart, it's normally gone. I guess it's to do with there having been duplicates in the past, though.

Closing this now. Thanks for your help with this. Cheers!