BJReplay / ha-solcast-solar

Solcast Integration for Home Assistant
Apache License 2.0
191 stars 34 forks

Ability to set multiple hard limits #186

Closed ChirpyTurnip closed 1 month ago

ChirpyTurnip commented 1 month ago

The integration allows only one instance to be installed, so for those of us with two independent installations we must load both Solcast API keys into the one integration. It allows this by using a comma as a separator; likewise, the API call limit can be set per API key using commas (e.g. 50,50 or 50,10).
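
The comma-separated convention described above could be paired up along these lines (an illustrative sketch only; the function name and validation are assumptions, not the integration's actual code):

```python
def parse_keys_and_limits(api_keys: str, api_limits: str) -> dict:
    """Pair each comma-separated API key with its call limit.
    A single limit value is assumed to apply to every key."""
    keys = [k.strip() for k in api_keys.split(",")]
    limits = [int(v.strip()) for v in api_limits.split(",")]
    if len(limits) == 1:
        limits = limits * len(keys)  # e.g. "50" with two keys -> 50,50
    if len(limits) != len(keys):
        raise ValueError("limit count must be 1 or match the key count")
    return dict(zip(keys, limits))
```

For example, keys "keyA,keyB" with limits "50,10" would yield {'keyA': 50, 'keyB': 10}.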

There is now the ability to set a hard limit, which is excellent, but I need to be able to set a hard limit per API. There is a significant difference between 10000 and 5000,5000 - they both add up to 10,000, but if installation 1 faces East / North, and installation 2 faces West and North, and both are oversized, then as the sun moves one inverter is at capacity while the other has spare capacity - a limit of 10000 allows the forecast to 'borrow' capacity from an inverter that isn't actually connected to the array with excess generation.

The result is that the forecast is always too high. Limiting each installation independently allows you to clip the forecast correctly... :-)

autoSteve commented 1 month ago

At face value this is not an unreasonable feature request, @ChirpyTurnip, and a clear feature gap. So it will be considered.

Implementing it will need to move config flow field validation outside of Voluptuous, and allow the entry of two comma separated limits as a string. It would also require a config schema version bump, re-writing the set hard limit service call, plus migration of existing hard limit setting to multi-hard limit. Plus translations. More Urdu 🤦🏻.

Currently the forecasts are held in memory as four sites, with the API key associated with each site obviously known and tracked. However the flow to calculate an overall forecast for each period is dumb as a bag of hammers, simply adding up all sites and then applying the hard limit. This flow would also need to be touched to add up by API key, limit, then combine.
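
The add-up-by-API-key, limit, then combine flow described above might look something like this for a single half-hour period (a hedged sketch with assumed data shapes, not the integration's code):

```python
from collections import defaultdict

def combined_forecast(site_forecasts, site_api_key, hard_limits):
    """site_forecasts: {site_id: kW}, site_api_key: {site_id: api_key},
    hard_limits: {api_key: kW}. Sum each key's sites, clip each key's
    total to its own hard limit, then combine the clipped totals."""
    per_key = defaultdict(float)
    for site, kw in site_forecasts.items():
        per_key[site_api_key[site]] += kw
    return sum(min(total, hard_limits[key]) for key, total in per_key.items())
```

With an East-facing system forecasting 6 kW and a West-facing system forecasting 2 kW, per-key limits of 5,5 give min(6, 5) + min(2, 5) = 7 kW, whereas a single 10 kW limit would let the idle inverter's headroom "borrow" up to 8 kW.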

Forecasted site breakdown attributes will not be effectively limited, but they aren't now anyway.

Non-trivial, and a lot of code touched.

If I were to do this I would not be able to test it well, and that makes me nervous. I would also approach it as an alpha over at autoSteve/ha-solcast-solar. What would your availability and responsiveness be like for testing, and would you be happy to switch to an autoSteve alpha repo, where things may go horribly wrong at times? 😉😬

Also, in which time zone are you? (And you don't read Urdu, do you by any chance?)

ChirpyTurnip commented 1 month ago

Hi! I am in Pacific/Auckland (GMT +12 give or take). No Urdu sorry.

Happy to test as well, so that is no problem. I do not know if it is easier to allow two parallel Solcast integration installations (one API, one API limit, one inverter limit each) or to continue with the current approach. There could be a use case for both....?

autoSteve commented 1 month ago

Perfect. You're from the future. (Do point out to me just before I am about to make a mistake. And the lotto numbers would be handy, too.)

ChirpyTurnip commented 1 month ago

If I had the lotto numbers or stock picks I'm sure there would be a queue for that....but it would equally be worthless as winnings split between millions is basically zero. :-)

autoSteve commented 1 month ago

Righto.

Theory of operation. Check me on this.

Using a proportion of per-API key total is "an" approach. Whether it matches reality I cannot say. But it's a forecast. No reality will be harmed... What I do know is that the sum of all site breakdowns will now match the forecasted overall as presented in the energy dashboard. That sum did not before where a hard limit was in play.

I have a working proof of concept that does the calcs and applies the hard limit (just one limit value for all API keys at the moment; multi is coming). It is poorly tested in the real world, but calculates the per-API key proportions perfectly.

So this code will not only solve for different hard limits per API key, but also fix other known per-site issues that relate to the hard limit (which was dumb as a bag of hammers).

On the to-do list.

If you're interested, here is where the per-site per-API-key limits are calculated. Needs a trivial tweak for multi-API key.

"""
Build per-site hard limit.
The API key hard limit for each site is calculated as proportion of the site contribution for the account. 
"""
sites_hard_limit = defaultdict(dict)
api_key_sites = defaultdict(dict)
for s in self.sites:
    api_key_sites[s['api_key']][s['resource_id']] = {
        'earliest_period': data['siteinfo'][s['resource_id']]['forecasts'][0]['period_start'],
        'last_period': data['siteinfo'][s['resource_id']]['forecasts'][-1]['period_start']
    }
for api_key, sites in api_key_sites.items():
    siteinfo = {site: {forecast['period_start']: forecast for forecast in data['siteinfo'][site]['forecasts']} for site in sites}
    earliest = dt.now(self._tz)
    latest = None
    for site, limits in sites.items():
        if limits['earliest_period'] < earliest:
            earliest = limits['earliest_period']
        latest = limits['last_period']
    periods = [earliest + timedelta(minutes=30 * x) for x in range(int((latest - earliest).total_seconds() / 1800))]
    for pv_estimate in ['pv_estimate', 'pv_estimate10', 'pv_estimate90']:
        sites_hard_limit[api_key][pv_estimate] = {}
    for period in periods:
        for pv_estimate in ['pv_estimate', 'pv_estimate10', 'pv_estimate90']:
            estimate = {site: siteinfo[site].get(period, {}).get(pv_estimate) for site in sites}
            total_estimate = sum(estimate[site] for site in sites if estimate[site] is not None)
            if estimate is not None and total_estimate is not None:
                if total_estimate == 0:
                    continue
                sites_hard_limit[api_key][pv_estimate][period] = {site: estimate[site] / total_estimate * self.hard_limit for site in sites}
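
One way the proportional shares computed above might then be applied when building the per-site breakdown (a sketch under assumed data shapes; clip each site value to its share of the key's hard limit):

```python
def clip_site(estimate_kw, site, api_key, pv_estimate, period, sites_hard_limit):
    """Clip one site's forecast value to its proportional share of the
    API key's hard limit. Returns the original value when no share
    exists for this period (e.g. a zero-generation interval)."""
    share = sites_hard_limit.get(api_key, {}).get(pv_estimate, {}).get(period, {}).get(site)
    if share is None:
        return estimate_kw
    return min(estimate_kw, share)
```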
ChirpyTurnip commented 1 month ago

Wow! Fast progress! I have been pondering this and didn't know you were already making a start! :-)

If I understand the code above correctly then the approach might need a slight adjustment...but I think it will be a simplification.

However, realistically this should not be proportional - it should simply be clipping the forecast at the limit specified by the user. This gives a forecast like this:

'2024-10-20T19:30:00+13:00': 0
'2024-10-20T19:15:00+13:00': 36
'2024-10-20T19:00:00+13:00': 97
'2024-10-20T18:45:00+13:00': 151
'2024-10-20T18:30:00+13:00': 282
'2024-10-20T18:15:00+13:00': 456
'2024-10-20T18:00:00+13:00': 683
'2024-10-20T17:45:00+13:00': 959
'2024-10-20T17:30:00+13:00': 1239
'2024-10-20T17:15:00+13:00': 1543
'2024-10-20T17:00:00+13:00': 1858
'2024-10-20T16:45:00+13:00': 2175
'2024-10-20T16:30:00+13:00': 2486
'2024-10-20T16:15:00+13:00': 2803
'2024-10-20T16:00:00+13:00': 3107
'2024-10-20T15:45:00+13:00': 3404
'2024-10-20T15:30:00+13:00': 3691
'2024-10-20T15:15:00+13:00': 3970
'2024-10-20T15:00:00+13:00': 4237
'2024-10-20T14:45:00+13:00': 4484
'2024-10-20T14:30:00+13:00': 4711
'2024-10-20T14:15:00+13:00': 4928
'2024-10-20T14:00:00+13:00': 5000
'2024-10-20T13:45:00+13:00': 5000
'2024-10-20T13:30:00+13:00': 5000
'2024-10-20T13:15:00+13:00': 5000
'2024-10-20T13:00:00+13:00': 5000
'2024-10-20T12:45:00+13:00': 5000
'2024-10-20T12:30:00+13:00': 5000
'2024-10-20T12:15:00+13:00': 5000
'2024-10-20T12:00:00+13:00': 5000
'2024-10-20T11:45:00+13:00': 5000
'2024-10-20T11:30:00+13:00': 5000
'2024-10-20T11:15:00+13:00': 5000
'2024-10-20T11:00:00+13:00': 5000
'2024-10-20T10:45:00+13:00': 5000
'2024-10-20T10:30:00+13:00': 5000
'2024-10-20T10:15:00+13:00': 5000
'2024-10-20T10:00:00+13:00': 5000
'2024-10-20T09:45:00+13:00': 4853
'2024-10-20T09:30:00+13:00': 4640
'2024-10-20T09:15:00+13:00': 4391
'2024-10-20T09:00:00+13:00': 4134
'2024-10-20T08:45:00+13:00': 3851
'2024-10-20T08:30:00+13:00': 3553
'2024-10-20T08:15:00+13:00': 3243
'2024-10-20T08:00:00+13:00': 2929
'2024-10-20T07:45:00+13:00': 2605
'2024-10-20T07:30:00+13:00': 2228
'2024-10-20T07:15:00+13:00': 1809
'2024-10-20T07:00:00+13:00': 1345
'2024-10-20T06:45:00+13:00': 751
'2024-10-20T06:30:00+13:00': 192
'2024-10-20T06:15:00+13:00': 0

Initially I thought there might have been a potential use case for something more like your approach - that is if API1= Strings A&B, and API2 = String C [and maybe D] where these are connected to a single inverter. However the same logic from above pretty much applies exactly the same way EXCEPT that we want a SINGLE limit applied to the total output.

This means that the integration will need to work like this:

  1. If TWO limits are specified (e.g. 5000,7500) these will be used to clip the forecasts of API1 and API2. The outputs will be combined, and the result is the output.
  2. If ONE limit is specified (e.g. 8000) then forecasts from API1 and API2 will be combined and the total output will be clipped.
  3. If NO limit is specified than the output is not clipped.
  4. Damping is applied as it currently is (no changes).
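
The clipping cases in the list above can be sketched roughly as follows (assumed names; per_key_totals holds one period's forecast total per API key, in the same order as the configured limits):

```python
def apply_hard_limits(per_key_totals: dict, limits: list) -> float:
    """Combine one period's per-API-key forecast totals under the
    one-limit / two-limit / no-limit rules."""
    if not limits:  # 3. no limit specified: output is not clipped
        return sum(per_key_totals.values())
    if len(limits) == 1:  # 2. one limit: combine first, clip the total
        return min(sum(per_key_totals.values()), limits[0])
    # 1. one limit per API key: clip each key's total, then combine
    return sum(min(total, limit)
               for total, limit in zip(per_key_totals.values(), limits))
```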

Does that make sense?

CP. :-)

[re-posted - email reply looked rubbish].

autoSteve commented 1 month ago

this should not be proportional

It is proportional in applying the limit to each site contribution to the whole. This only affects the per-site breakdowns. The total is the total, presently limited per-API key. I don't see an issue, except...

Single limit for two API keys

This is do-able, but requires an additional logic choice: Is there one limit specified, or two? If one, then calculate differently.

Currently the code will assume the same limit for both API keys. This would suit your use case of a 5kW limit per API key. I take it you have two 5kW inverters.

For the case of one limit across all sites, regardless of which API key is providing them, then a proportion of contribution for all sites from all keys would be needed so that each site breakdown can be effectively clipped. The total would be a non-issue. This means more code, which is no biggie, and makes total sense to do.

autoSteve commented 1 month ago

Spoilers.

The only bug I have spotted so far is in the diagnostics. (The 'Unavailable' hard limit set should not be there... This is because when switching to per-API key limit sensors that sensor is no longer provided by the integration. When a single hard limit is set there is only one hard limit sensor, not multiple with a snip of the last six digits of the API key. Not sure if that can be fixed, and may require the user to kill off the unavailable entity.)

image image image image

And this is the resulting squished forecast. Forecast interval peaks without hard limit extended up to 10kW before limit set.

These are two sites. One Sydney Opera House, and the other @BJReplay's joint. Two API keys, single site per key.

image
BJReplay commented 1 month ago

So I was about to comment that I think you have a * 2 error in there

Screenshot_20241020_182853_Home Assistant.jpg

when I noticed that I also have the Opera House unavailable.

Must figure out how to clean it up!

autoSteve commented 1 month ago

2x? That might probably be, @BJReplay... Checking. Thanks.

BJReplay commented 1 month ago

21.8kWh actual so far, 0.2kWh remaining, 24.05 50%, 21.40 10% forecasts.

autoSteve commented 1 month ago

Well @BJReplay, tell your doctor that your refreshed peepers after surgery are working very nicely... Fixed.

I appear to be reading low, but that is because I am not smashing your API key with any updates. (None at all, really.) So the forecast data is old.

image
BJReplay commented 1 month ago

Smash away.

Eyes are OK (good, really) but I am awaiting follow-up surgery in three weeks to get rid of scar tissue (not unexpected) and then a wait for a week to get new specs prescription. About six weeks away from being able to read comfortably with new specs.

autoSteve commented 1 month ago

Interesting. I did do one forecast update, nice and clean-like. I do not concur with your diag @BJReplay...

There is a hard limit set at 5kW. Did you pull over 5kW forecast today for estimate 50?

image

2024-10-20 18:36:41.502 INFO (MainThread) [custom_components.solcast_solar] Inverter hard limit value is set to limit maximum forecast values
2024-10-20 18:36:41.567 ERROR (MainThread) [homeassistant.components.sensor] Platform esphome does not generate unique IDs. ID 08:3A:F2:AC:ED:AC-sensor-store_temperature already exists - ignoring sensor.store_temperature
2024-10-20 18:44:17.872 INFO (MainThread) [custom_components.solcast_solar] Service call: update_forecasts
2024-10-20 18:44:17.873 DEBUG (MainThread) [custom_components.solcast_solar.coordinator] Checking for stale usage cache
2024-10-20 18:44:17.873 INFO (MainThread) [custom_components.solcast_solar.solcastapi] Getting forecast update for site 69d4-4611-63a4-5baa
2024-10-20 18:44:17.873 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Polling API for site 69d4-4611-63a4-5baa lastday 2024-10-27 numhours 174
2024-10-20 18:44:17.874 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Fetch data url: https://api.solcast.com.au/rooftop_sites/69d4-4611-63a4-5baa/forecasts
2024-10-20 18:44:17.874 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Fetching forecast
2024-10-20 18:44:18.650 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Fetch successful
2024-10-20 18:44:18.651 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] API returned data, API counter incremented from 0 to 1
2024-10-20 18:44:18.651 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Writing API usage cache file: /config/solcast-usage-******sxdNS5.json
2024-10-20 18:44:18.682 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] HTTP session returned data type <class 'dict'>
2024-10-20 18:44:18.682 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] HTTP session status 200/Success
2024-10-20 18:44:18.688 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] 349 records returned
2024-10-20 18:44:18.807 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Forecasts dictionary length 911
2024-10-20 18:44:18.808 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Undampened forecasts dictionary length 911
2024-10-20 18:44:18.808 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] HTTP data call processing took 0.115 seconds
2024-10-20 18:44:18.811 INFO (MainThread) [custom_components.solcast_solar.solcastapi] Getting forecast update for site a546-98c5-53c5-d101
2024-10-20 18:44:18.812 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Polling API for site a546-98c5-53c5-d101 lastday 2024-10-27 numhours 174
2024-10-20 18:44:18.815 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Fetch data url: https://api.solcast.com.au/rooftop_sites/a546-98c5-53c5-d101/forecasts
2024-10-20 18:44:18.815 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Fetching forecast
2024-10-20 18:44:19.339 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Fetch successful
2024-10-20 18:44:19.340 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] API returned data, API counter incremented from 0 to 1
2024-10-20 18:44:19.340 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Writing API usage cache file: /config/solcast-usage-******bvvHxW.json
2024-10-20 18:44:19.365 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] HTTP session returned data type <class 'dict'>
2024-10-20 18:44:19.365 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] HTTP session status 200/Success
2024-10-20 18:44:19.365 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] 349 records returned
2024-10-20 18:44:19.450 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Forecasts dictionary length 767
2024-10-20 18:44:19.450 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Undampened forecasts dictionary length 767
2024-10-20 18:44:19.450 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] HTTP data call processing took 0.085 seconds
2024-10-20 18:44:19.451 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Hard limit for API key ******sxdNS5: 5.0
2024-10-20 18:44:19.453 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Earliest period: 2024-10-08 13:30:00 UTC, latest period: 2024-10-27 12:30:00 UTC
2024-10-20 18:44:19.478 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Hard limit for API key ******bvvHxW: 5.0
2024-10-20 18:44:19.480 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Earliest period: 2024-10-11 13:30:00 UTC, latest period: 2024-10-27 12:30:00 UTC
2024-10-20 18:44:19.685 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Hard limit for API key ******sxdNS5: 5.0
2024-10-20 18:44:19.687 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Earliest period: 2024-10-08 13:30:00 UTC, latest period: 2024-10-27 12:30:00 UTC
2024-10-20 18:44:19.709 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Hard limit for API key ******bvvHxW: 5.0
2024-10-20 18:44:19.711 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Earliest period: 2024-10-11 13:30:00 UTC, latest period: 2024-10-27 12:30:00 UTC
2024-10-20 18:44:19.868 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Forecast data from 2024-10-20 to 2024-10-27 contains all 48 intervals
2024-10-20 18:44:19.868 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Calculating splines
2024-10-20 18:44:19.976 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Build forecast processing took 0.526 seconds
2024-10-20 18:44:20.049 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Saved dampened forecast cache
2024-10-20 18:44:20.092 DEBUG (MainThread) [custom_components.solcast_solar.solcastapi] Saved undampened forecast cache
2024-10-20 18:44:20.092 INFO (MainThread) [custom_components.solcast_solar.solcastapi] Forecast update completed successfully
BJReplay commented 1 month ago

P.S. @autoSteve to test a hard limit on my system, looking at the actual output, 3kW would appear to be enough to clip, but if I hide output, it's actually closer to 2kW

Screenshot_20241020_184911_Home Assistant.jpg

Screenshot_20241020_184954_Home Assistant.jpg

autoSteve commented 1 month ago

So which is correct, I ask, my dear @BJReplay?

20.88 or 24.05?

Was it correct before, or is it correct now? There be more digging to do...

With no hard limit set I get 20.88 for BLAH... The Opera House shoots up to 65-odd from 48.41.

autoSteve commented 1 month ago

No more digging. Forecasts in the past are not updated, are they? No case to answer...

Moving back to coding for an overall every-site-regardless-of-API-key hard limit...

BJReplay commented 1 month ago

Screenshot_20241020_185840_Home Assistant.jpg

Screenshot_20241020_185907_Home Assistant.jpg

Just polled again but did not screen shot, but definitely 24.05, @autoSteve

autoSteve commented 1 month ago

Spoiler: Single hard limit, 5kW, applied to all sites, two API keys.

Great success.

image

And configure back to 5kW for each API key.

image
BJReplay commented 1 month ago

Here is what happened between the earliest and cumulative latest update of today (from my legacy control system, which still pushes data and streams updates to Power BI)

Screenshot_20241020_204941_Power BI.jpg

Screenshot_20241020_204958_Power BI.jpg

autoSteve commented 1 month ago

A first alpha release for you, @ChirpyTurnip. 🎉🎂🥳🎁 Includes Urdu, and readme at https://github.com/autoSteve/ha-solcast-solar/tree/Hard-limit-by-API-key, which doesn't change much.

https://github.com/autoSteve/ha-solcast-solar/releases/tag/v4.2.3.1-alpha

If you configure autoSteve/ha-solcast-solar as a custom repo. in HACS, then remove @BJReplay's repo, getting updated info for the integration will reveal v4.2.3.1-alpha for Solcast PV Forecast as a download option. Or it should. (It is not set as a pre-release. 🤭) (edit: I have just tested this. It works...)

Initial testing reveals smiles.

There is still a test.json that gets created in /config, containing the proportioned hard limit for sites. It may be of use in testing, so I left it as updating for now.

No forecast update is required to see the impact of adjusting hard limit, so go for it. Results should be near instant after setting options.

All of this is reversible, because hard limit values are not saved, but do take a backup of /config/solcast* before updating, yeah?

autoSteve commented 1 month ago

~Already spotted an issue.~

~When using a single hard limit for all API keys the site tally for the day is halved. On it.~

autoSteve commented 1 month ago

Nope. I'm seeing things. This makes sense for the sites data. Ugh. Play on.

autoSteve commented 1 month ago

Images have not yet been updated in the readme to adjust for hard limit as kW. They will be. Posting here FYI and for me to remember.

autoSteve commented 1 month ago

Using "v4.2.3.1-alpha" did not seem to go well. "v4.2.3.1" for the win @ChirpyTurnip.

autoSteve commented 1 month ago

This is quite hysterical with an all-sites hard limit of 5kW set for my Frankenstein's monster set-up of Sydney+Melbourne. I could not share this. Think inkblot tests. "What do you see?" My head: "Umm, I see a franger tip, filled with air... In a box." (sorry, children...)

Must remove hard limit.

image
ChirpyTurnip commented 1 month ago

Hello!

Keen to test it! How do I get this into my system? It doesn't show as a pre-release (4.2.3 is the newest I see), and just manually copying the updated code over the top to the running version doesn't work as the integration will not start once I've done that.

I've already removed the limits from the Solcast side, so I'm ready to go!

ChirpyTurnip commented 1 month ago

OK....my bad.....it looks like it is now running. Can confirm that the diagnostic now shows hard limits of 5kW per inverter... the forecast for the next few days isn't good enough to max out the forecast so will need to wait on my end for actual results. :-)

As an aside, it would be nice if it retained data - every restart causes another API call....if you have 50 that's not the end of the world, but if you only had 10 that's not a lot if you are doing some development and HA is constantly being restarted.

autoSteve commented 1 month ago

As an aside, it would be nice if it retained data - every restart causes another API call

Every restart it gets sites detail. This is not a metered call.

Or does it perform a forecast fetch for you???

Curious...

autoSteve commented 1 month ago

https://github.com/BJReplay/ha-solcast-solar/discussions/38#discussioncomment-9815828

ChirpyTurnip commented 1 month ago

I could be wrong, but it is lunchtime and I'm at 48 out of 50 API calls used. So either it is about to reset (UTC midnight) or I have used a lot of API calls already. The only thing I could chalk that up to was all the restarts this morning.

It is currently set to use the new 'automatic' sunrise to sunset method. :-)

BJReplay commented 1 month ago

So either it is about to reset (UTC midnight)

How's it looking now?

If you have your settings set to use all 50 calls, then that's not unexpected. Mine is set to 20, and it reliably uses 20 just before 11am AEST (midnight UTC).

autoSteve commented 1 month ago

The only thing I could chalk that up to was all the restarts this morning

Can't happen.

The only thing that will bump the API counter is a forecast fetch (or load of past actuals on startup when solcast.json does not exist or is corrupted, which is logged and will use two calls per site: one for the past, and another for a forecast update). It is all logged at INFO level.

Enable greater-than-INFO logging, or better, debug logging, because alpha.

Restarting will clear the log and only save n-1, so triage after the fact with lots of restarts is impossible.

Record the current use count (be aware that there is an instrumentation issue, so examine it by opening the sensor and looking at recent history). Then restart. Observe the usage count. If it increases then we have a bug. It should not increase.

autoSteve commented 1 month ago

(And if it does increase then attach the [debug] logs here.)

autoSteve commented 1 month ago

(And note that setting debug logging in the UI gets cleared on restart, so enable it in configuration.yaml thusly: https://github.com/BJReplay/ha-solcast-solar/discussions/38#discussioncomment-9792389)

ChirpyTurnip commented 1 month ago

So either it is about to reset (UTC midnight)

How's it looking now?

If you have your settings set to use all 50 calls, then that's not unexpected. Mine is set to 20, and it reliably uses 20 just before 11am AEST (midnight UTC).

14 out of 50. It must be resetting at UTC....so that means it will use half the calls today, the other half tomorrow morning....the rest at about 1pm.

ChirpyTurnip commented 1 month ago

Other than that it is looking pretty good. It will be at least a week (based on current forecast) before we have clear skies again - so can't see the flattened forecast for now...but based on your charts above it should work.

BJReplay commented 1 month ago

It must be resetting at UTC

That's right - that's when Solcast resets its counter, so that's when the integration resets its internal counter.

autoSteve commented 1 month ago

so that's when the integration resets its internal counter

With a twist.

Very occasionally the timer that is set to fire at midnight UTC does not fire. As a belt-and-braces measure, if the next fetch determines that the last reset timestamp is stale then usage will get reset then, and the timer will be restarted.

Happened to me a couple of days ago, and I've no idea what causes it.

Another event that could prevent a usage reset on time is restarting HA just prior to midnight UTC. (Must add a 'stale' check on integration start... There is one, but it serves a different purpose, looking for a couple of days of inactivity or a new install and if so, reset usage and initiate a force fetch.)

Bottom line is that usage probably resets at midnight UTC, but may in fact reset later.
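
The belt-and-braces staleness check described here could be sketched like so (illustrative only; the function name and signature are assumptions):

```python
from datetime import datetime, timezone

def usage_reset_is_stale(last_reset, now=None):
    """The counter should have been reset at the most recent midnight
    UTC; if the recorded reset is older than that, the midnight timer
    was missed and usage should be reset immediately."""
    now = now or datetime.now(timezone.utc)
    midnight_utc = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return last_reset < midnight_utc
```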

ChirpyTurnip commented 1 month ago

I have an OpenMeteo-based template solar forecast sensor - I can't actually use it as a forecast: while I can hack up and combine the forecast data, I can't figure out how to make it 'valid' for use. However it is showing that for a period today the forecast is clipped to 5000 for my North/East array..... The Solcast sensors don't show that level of detail as the forecast is the combined output of all the installed plant. Is there a way to show it, or would my only option be to drop one installation and show only the output from one of the installations / APIs?

autoSteve commented 1 month ago

The detailed forecast can be broken down as a sensor attribute by site. Enable in options. One could then display individual site data as an Apex chart based on the example in the readme.

image
ChirpyTurnip commented 1 month ago

Sooner or later everything always ends up in a rat hole ;-)

OK. I have installed APEX charts, and used the default example from the readme. It generates a graph, but it has issues.

  1. The sensor from my NE inverter outputs power as W, but it looks like the graph expects kW. This will require a template sensor to convert the input data to something the graph can use. So this value just returns N/A - but putting the sensor into Developer Tools gives me a value that matches the inverter output...so the entity is valid.
  2. It's unclear to me how to take the solcast sensor (whichever one) and isolate the data/forecast from one API - I don't want to see it at a string level, I want to see it at an inverter level (where the clipping to a max of 5000W should happen).

Help?

BJReplay commented 1 month ago

Help?

  1. There's some notes in the readme on dealing with W vs kW: The chart assumes that your Solar PV sensors are in kW, but if some are in W, add the line transform: "return x / 1000;" under the entity id to convert the sensor value to kW.
  2. Sorry, not sure, I've only got a single array / site.
ChirpyTurnip commented 1 month ago

Ah yes....should have seen that :-)

I have updated it, but the card continues to show N/A for the value. I've tried linking it to the active power and the apparent power sensors, and I've tried to reduce the average to 1m....it stays on N/A and refuses to generate an output.

For my detailed forecast I'm a bit stymied - it seems I can either get 'per array' so each API generates two forecasts (e.g. string 1 = 2kWh, String 2 = 3.1kWh) or it outputs a grand total for both APIs (e.g. 8.6kWh). The hard part for me is to pull the forecast for just one API (the sum of the two strings) and then to graph that.

BJReplay commented 1 month ago

I have updated it, but the card continues to show N/A for the value.

The chart can be a bit tricky - I had trouble setting it up, but you have to get each of the items exactly correct. I found having two windows open - one where you could see each of the names of the sensors / items in one window, and the dashboard where you were editing the chart YAML in another really helped. I eventually fixed all the typos / differences from the sample, and it came good.

ChirpyTurnip commented 1 month ago

I created template sensors from my inverter sensors and now the apex card will read it just fine. I don't know why it didn't like the raw data - must be something in the background like unit of measurement or something. Anyway, it is going now. I can now create one apex chart per system, but the challenge for me that remains is that each chart must be fed by the forecast of just one API.

That way I can see the forecast for System 1 and track the production from System 1, and then the same for System 2. Any pointers on how I can pull this from the sensors?

If I look at the sensors for 'today' there are ~1500 lines of attribute data, being the detailed forecast (for everything), and then the breakdowns (e.g. detailedForecast-1111-aaaa-2222-bbbb). Each of the breakdowns represents a string as at Solcast I have modeled it as String 1 and String 2 - so I need to combine two of these to make a 'system' (or the output from a single API).
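
Combining two per-site breakdowns into one per-system series could be sketched like this (hypothetical helper, assuming each breakdown is a list of {'period_start': ..., 'pv_estimate': ...} entries like the attribute data described above):

```python
from collections import defaultdict

def combine_breakdowns(*site_forecasts):
    """Merge per-site detailedForecast lists into one per-system series,
    summing values that share the same period_start."""
    combined = defaultdict(float)
    for forecast in site_forecasts:
        for entry in forecast:
            combined[entry['period_start']] += entry['pv_estimate']
    return [{'period_start': p, 'pv_estimate': kw}
            for p, kw in sorted(combined.items())]
```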

I'm not good enough at yaml to combine this in a template sensor (because I can't just combine the data between two sensors, I would need to extract just the bits I want, then combine that together). The Solcast integration also doesn't output this as sensors - but that would actually be a cool addition (IMHO):

That data could then be consumed directly.....and then of course added to any number of graphs to allow tracking per string or per system....which is actually pretty useful, because you can then easily visualise the data at a more detailed level without needing to be a yaml expert as the data is just available for use. That in turn means users can:

  1. Easily see what part of the forecast needs to be tuned - i.e. find strings that consistently over/under forecast for any given string that is facing a given direction;
  2. Find problems with the system - it used to track really well, and still does, except for that one string that seems to be 25% lower than expected....maybe it's covered in a giant bird poop?

It would seem to me that having the integration generate some additional sensors (rather than additional data inside an existing sensor) would make the information more accessible. Especially for knuckle-draggers like me. :-)

ChirpyTurnip commented 1 month ago

Here is my current chart:

2024-10-23_07-58-01

It's tracking, but against the total forecast, not against the forecast for that one inverter.....

BJReplay commented 1 month ago

@autoSteve may well disagree because he likes difficult projects.

The point of Solcast (not this specific client for their API) is to tell you how much your system is going to generate today in particular.

It is not - in the hobbyist format - a system / forecast tuning tool. They offer that service: it is (rightly so, IMHO) a part of their paid subscription, for professional users who have a real need for very accurate near-term forecasts.

It is not a system monitoring tool: those are usually available from manufacturers of inverters / systems / third parties (again rightly so, IMHO, as part of a paid subscription) where it matters for large installations.

Now, to graphing: what is the purpose of this integration? I believe that it is to provide near term information about forecast solar at the system level as accurately as reasonably possible.

Graphing is nice to have.

System tuning is nice to have.

But adding features that support what should be rare activities - inspection and cleaning - at the sub-API level, just because a user wants 10 updates a day instead of 5, adds to the support and maintenance burden and seems to be somewhat missing the point.

Do you want site or array level data to help you tune your parameters or check every now and again? Log into the Solcast Toolkit website (twice, if you have two API keys).

Do you want very complex pretty graphs?

Learn some yaml: it will come in handy.

autoSteve commented 1 month ago

Actually hard agree, @BJReplay.

What's the set-up, @ChirpyTurnip? I thought you had two API keys, and one site per account? Sounds like that is not the case.

You have breakdowns for all the sites in attributes, so should be able to build it. I've not tried to build it, so don't know how much of a challenge it is.

There is a HACS thing called 'Lovelace Card templater' that might be of use. Not sure.

What I do with card templater is wrap apex charts in it, which enables me to use jinja2 to calculate things for the chart. Below are the first few lines of a chart with a varying X axis that is compact during daylight hours but progressively expands in the early morning or late at night. (https://github.com/BJReplay/ha-solcast-solar/discussions/94#discussioncomment-10364553)

Maybe some jinja2 is the answer?

type: custom:card-templater
card:
  type: custom:apexcharts-card
  graph_span_template: |-
    {% set sunrise = as_datetime(states('sensor.sun_next_rising'))  %}
    {% set sunset = as_datetime(states('sensor.sun_next_setting'))  %}
    {% if sunrise != none and sunset != none %}
      {% set compressed = (
...
ChirpyTurnip commented 1 month ago

Maybe we are at cross purposes?

I'm not asking for anything you don't already have - just suggesting a different way of outputting the data the integration already produces.

In my case I have two APIs (ergo two sites) covering two inverters, each with two strings modeled on the Solcast side.

If we ignore the 0/10/90 forecasting and keep it simple by assuming there is only a single forecast, then for the 'today' forecast the integration currently gives me the daily total, and in the attributes a breakdown per hour. The data can just be consumed, graphed, added to the energy dashboard - easy peasy. No hard work needed.

If I enable the detailed breakdown option the 'today' forecast still has the daily total, but the attribute data now provides:

To use this for anything useful I will need to break this apart, then extract and optionally combine the data into one or more other sensors.

For example, the energy dashboard provides an effortless way of showing the forecast and actual production. But this is for the total. If I wanted to create some apex charts for each of the inverters (i.e. two graphs to track two forecasts and two generation outputs) I have a problem.

On the inverter side I can get the 'per-inverter' generation, I can even get the 'per-string' generation... but on the Solcast side I need to rip open the 'today' forecast, pull out the hourly breakdowns for API1 - String1 and API1 - String2, then combine these to get an 'API total' forecast.

My suggestion was more along the lines of: when I enable the breakdown option, why not create additional sensors rather than adding to the attribute data? That way the data I would otherwise need to extract is simply available in a sensor and ready for immediate use (hence the list of proposed sensors in the previous post).

Jinja and yaml can solve many problems, but they aren't super simple. I really wanted to quickly graph the output of the new forecast with the limits applied, so that I could see how the clipping was working when the forecast generation exceeds what the inverter can actually produce. I wanted to see a nice bell curve with a flat top... but this is not simple. The main stumbling block is getting the data I need out of the sensor attributes (there's ~1500 lines of data there - that's quite a lot).
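As a sanity check on what that clipped curve should look like, here is a minimal sketch - plain Python, made-up numbers, and an assumed 5000 W inverter limit per the 5000,5000 example earlier in the thread:

```python
# Assumed inverter AC limit (per the 5000,5000 example in this thread).
INVERTER_LIMIT_W = 5000

# Made-up half-hourly forecast for one over-sized array, in watts.
forecast_w = [0, 800, 2400, 4600, 6200, 6800, 6400, 4900, 2600, 900, 0]

# Clip each interval at the inverter limit: the midday peak flattens,
# giving the "bell curve with a flat top" described above.
clipped_w = [min(w, INVERTER_LIMIT_W) for w in forecast_w]

print(clipped_w)  # [0, 800, 2400, 4600, 5000, 5000, 5000, 4900, 2600, 900, 0]
```

The clipping itself is trivial; the hard part, as noted, is extracting the per-API series from the attributes in the first place.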

It just seemed to me that increasing the number of Solcast sensor outputs (i.e. adding/exposing more sensors) would be easier for us mere mortals than making the sensor attribute outputs bigger. The same could also be said for the 0 / 10 / 90 forecasts - these could also potentially be separate sensors.

It's not that I'm dumb, but I do have a full time job, and wife and kids, and my ability to spend a lot of time on home automation is constrained, especially since I'm looking after two completely independent installations (I do it for a family member's house too, who also has solar - though luckily just one inverter).

I like to make it really good (and I have some pretty complicated stuff in there) but I just don't have the bandwidth to build and maintain complex jinja.

I've previously tried to combine the data from a meteo forecast:

- sensor:
  - unique_id: "forecast_solar_today_ne"
    name: "Forecast Energy Production Today - NE Array"
    state: >
      {{ (states('sensor.energy_production_today_sa1') | float(default=0) + states('sensor.energy_production_today_sa3') | float(default=0)) | round(2) }}
    unit_of_measurement: "kWh"
    device_class: energy
    attributes:
      watts: >-
        {# Sum per-interval watts from both arrays, capped at the 5 kW inverter limit. #}
        {# The 'or {}' guards against the attribute being unset at startup. #}
        {% set sensor1 = state_attr('sensor.energy_production_today_sa1', 'watts') or {} %}
        {% set sensor2 = state_attr('sensor.energy_production_today_sa3', 'watts') or {} %}
        {% set ns = namespace(output={}) %}
        {% for time in sensor1.keys() %}
          {% set value1 = sensor1[time] | float(0) %}
          {% set value2 = sensor2.get(time, 0) | float(0) %}
          {% set sum_value = value1 + value2 %}
          {% set capped_value = sum_value if sum_value <= 5000 else 5000 %}
          {% set ns.output = dict({time: capped_value}, **ns.output) %}
        {% endfor %}
        {{ ns.output }}
      wh_period: >-
        {# Same approach for per-period energy. #}
        {% set sensor1 = state_attr('sensor.energy_production_today_sa1', 'wh_period') or {} %}
        {% set sensor2 = state_attr('sensor.energy_production_today_sa3', 'wh_period') or {} %}
        {% set ns = namespace(output={}) %}
        {% for time in sensor1.keys() %}
          {% set value1 = sensor1[time] | float(0) %}
          {% set value2 = sensor2.get(time, 0) | float(0) %}
          {% set sum_value = value1 + value2 %}
          {% set capped_value = sum_value if sum_value <= 5000 else 5000 %}
          {% set ns.output = dict({time: capped_value}, **ns.output) %}
        {% endfor %}
        {{ ns.output }}

But the output of this couldn't be selected in the energy dashboard as a forecast (I don't know why), and the source data was just two sensors in their entirety - there was no need to 'extract' a subset first. I could probably do that, into some interim sensors, before building the output, but that just feels like a lot of work. And even if I did manage it, there are many users who simply couldn't, or would spend ages looking for code to do it... thereby solving the same problems again and again....
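One side note on the wh_period template above, offered tentatively since the bucket size is an assumption: if wh_period holds energy per half-hour bucket, capping it at 5000 (the watt limit) is too generous - an inverter limited to 5000 W can only deliver 2500 Wh in half an hour. The arithmetic, with illustrative numbers:

```python
# Assuming half-hour wh_period buckets and a 5000 W inverter limit, the
# per-bucket energy cap should be limit * duration, not the raw watt
# figure. All numbers here are made up for illustration.
INVERTER_LIMIT_W = 5000
PERIOD_HOURS = 0.5  # assumed bucket size

cap_wh = INVERTER_LIMIT_W * PERIOD_HOURS  # 2500.0 Wh per half hour

wh_period = {"08:00": 1800, "12:00": 3100, "12:30": 2900}
capped = {t: min(v, cap_wh) for t, v in wh_period.items()}
print(capped)  # {'08:00': 1800, '12:00': 2500.0, '12:30': 2500.0}
```

If the buckets are hourly instead, the cap matches the watt figure and the template above is fine as written.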

Ooooof! That was a bit of a brain dump sorry. Jinja and yaml I suck at....but man can I waffle!