Closed andreas-bulling closed 8 months ago
Please share your configuration so that I can provide a solution. You have a problem retrieving data from HA; this is typically caused by a failed configuration.
This?!
logging_level: INFO
costfun: cost
sensor_power_photovoltaics: sensor.solar_total_yield_current
sensor_power_load_no_var_loads: sensor.ac_loads
set_total_pv_sell: false
set_nocharge_from_grid: false
set_nodischarge_to_grid: true
maximum_power_from_grid: 9000
number_of_deferrable_loads: 1
list_nominal_power_of_deferrable_loads:
- nominal_power_of_deferrable_loads: 2000
list_operating_hours_of_each_deferrable_load:
- operating_hours_of_each_deferrable_load: 10
list_start_timesteps_of_each_deferrable_load:
- start_timesteps_of_each_deferrable_load: 0
list_end_timesteps_of_each_deferrable_load:
- end_timesteps_of_each_deferrable_load: 0
list_peak_hours_periods_start_hours:
- peak_hours_periods_start_hours: "05:00"
list_peak_hours_periods_end_hours:
- peak_hours_periods_end_hours: "23:00"
list_treat_deferrable_load_as_semi_cont:
- treat_deferrable_load_as_semi_cont: true
list_set_deferrable_load_single_constant:
- set_deferrable_load_single_constant: false
load_peak_hours_cost: 0.3
load_offpeak_hours_cost: 0.2
photovoltaic_production_sell_price: 0.065
list_pv_module_model:
- pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
list_pv_inverter_model:
- pv_inverter_model: Fronius_International_GmbH__Fronius_Symo_12_5_3_480__480V_
list_surface_tilt:
- surface_tilt: 30
list_surface_azimuth:
- surface_azimuth: 160
list_modules_per_string:
- modules_per_string: X
list_strings_per_inverter:
- strings_per_inverter: X
set_use_battery: true
battery_nominal_energy_capacity: X
hass_url: X
long_lived_token: X
optimization_time_step: 30
historic_days_to_retrieve: 2
method_ts_round: nearest
lp_solver: COIN_CMD
lp_solver_path: /usr/bin/cbc
set_battery_dynamic: false
battery_dynamic_max: 0.9
battery_dynamic_min: -0.9
load_forecast_method: naive
battery_discharge_power_max: 6144
battery_charge_power_max: 6144
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_minimum_state_of_charge: 0.1
battery_maximum_state_of_charge: 1
battery_target_state_of_charge: 0.8
Yes.
Change the value of these two parameters:
hass_url: empty
long_lived_token: empty
I have removed them before posting here - they contain valid/correct values.
UPDATE: I just created a new token to make sure. Same error message.
What I find strange: I don't have a sensor called sensor.ac_loads_positive. It is also not configured in EMHASS. Why does it try to read that?
Please just follow the solution I gave in my previous post. No need to create a new token. You need to write the word "empty" in the url and token parameters. That's all.
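For reference, a sketch of what the relevant part of the add-on configuration would then look like (all other options stay as they are):

```yaml
# When EMHASS runs as a Home Assistant add-on, the supervisor provides
# the connection, so both values are literally the word "empty":
hass_url: empty
long_lived_token: empty
```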
Oh dear, that was an easy fix. Thanks a lot!
Hm, next error:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emhass (no readiness notification)
s6-rc: info: service legacy-services successfully started
2024-02-08 08:26:51,637 - web_server - INFO - Launching the emhass webserver at: http://0.0.0.0:5000
2024-02-08 08:26:51,637 - web_server - INFO - Home Assistant data fetch will be performed using url: http://supervisor/core/api
2024-02-08 08:26:51,637 - web_server - INFO - The data path is: /share
2024-02-08 08:26:51,638 - web_server - INFO - Using core emhass version: 0.7.6
waitress INFO Serving on http://0.0.0.0:5000
2024-02-08 08:27:22,792 - web_server - INFO - EMHASS server online, serving index.html...
2024-02-08 08:27:24,823 - web_server - INFO - Setting up needed data
2024-02-08 08:27:24,834 - web_server - INFO - Retrieving weather forecast data using method = solcast
2024-02-08 08:27:26,300 - web_server - INFO - Retrieving data from hass for load forecast using method = mlforecaster
2024-02-08 08:27:26,301 - web_server - INFO - Retrieve hass get data method initiated...
2024-02-08 08:27:36,229 - web_server - ERROR - The ML forecaster file was not found, please run a model fit method before this predict method
2024-02-08 08:27:36,230 - web_server - ERROR - Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 50, in action_call
input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 91, in set_input_data_dict
P_load_forecast = fcst.get_load_forecast(method=optim_conf['load_forecast_method'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/emhass/forecast.py", line 619, in get_load_forecast
forecast_out = mlf.predict(data_last_window)
^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'predict'
2024-02-08 08:27:42,767 - web_server - INFO - EMHASS server online, serving index.html...
2024-02-08 08:27:46,767 - web_server - INFO - Setting up needed data
2024-02-08 08:27:46,770 - web_server - INFO - Retrieve hass get data method initiated...
2024-02-08 08:27:46,777 - web_server - ERROR - The retrieved JSON is empty, check that correct day or variable names are passed
2024-02-08 08:27:46,777 - web_server - ERROR - Either the names of the passed variables are not correct or days_to_retrieve is larger than the recorded history of your sensor (check your recorder settings)
2024-02-08 08:27:46,777 - web_server - ERROR - Exception on /action/forecast-model-fit [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/flask/app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 50, in action_call
input_data_dict = set_input_data_dict(config_path, str(data_path), costfun,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/emhass/command_line.py", line 146, in set_input_data_dict
rh.get_data(days_list, var_list)
File "/usr/local/lib/python3.11/dist-packages/emhass/retrieve_hass.py", line 150, in get_data
self.df_final = pd.concat([self.df_final, df_day], axis=0)
^^^^^^
UnboundLocalError: cannot access local variable 'df_day' where it is not associated with a value
Hi, sometimes there is a bunch of unreadable Python error traceback, but we should focus on the log errors from EMHASS. The messages are quite explicit here:
2024-02-08 08:27:26,300 - web_server - INFO - Retrieving data from hass for load forecast using method = mlforecaster
2024-02-08 08:27:26,301 - web_server - INFO - Retrieve hass get data method initiated...
2024-02-08 08:27:36,229 - web_server - ERROR - The ML forecaster file was not found, please run a model fit method before this predict method
...please run a model fit method before this predict method...
If you want to use the machine learning model then you have to train it first to create the trained model object. Only then can you use the predict method.
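As a sketch of the fit-before-predict order: assuming EMHASS is reachable on port 5000 (as in the log above), both actions can be wired up as Home Assistant shell_command entries. The two endpoint names are taken from the log output; everything else here is an assumption about your setup.

```yaml
# Hypothetical shell_command entries; run forecast_model_fit once
# (and then periodically) so the trained model file exists before
# dayahead_optim calls the predict method.
shell_command:
  forecast_model_fit: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/forecast-model-fit"
  dayahead_optim: "curl -i -H \"Content-Type: application/json\" -X POST -d '{}' http://localhost:5000/action/dayahead-optim"
```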
I have tried that - that's the second half of the log output, starting from
2024-02-08 08:27:46,777 - web_server - ERROR - Exception on /action/forecast-model-fit [POST]
OK, then you just don't have enough history data. By default the system tries to retrieve 9 days of data, but if your HA recorder is not configured to keep that much then we are not able to fetch it. Change your recorder configuration to retain more data, wait for the database to fill, and then retry. Or you can change how many days to retrieve; here is the documentation: https://emhass.readthedocs.io/en/latest/mlforecaster.html#a-basic-model-fit
But fewer days is not a good idea if you want meaningful forecast results with this method.
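A sketch of raising the recorder retention in Home Assistant's configuration.yaml; the value 14 is an assumption chosen to cover the 9 days the fit wants to retrieve, with some margin:

```yaml
# Keep at least the 9 days of sensor history the ML fit retrieves.
# Adjust to your storage budget; the database will grow accordingly.
recorder:
  purge_keep_days: 14
```

After changing this, the database still needs to fill up for the configured number of days before the fit can succeed.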
Thanks, I have done that and will see how it goes in a few days. Strangely enough, the option was not modified by me, i.e. it was at the default setting. If 9 days are required but the default retention in HA is shorter, this should probably be added to the documentation...
One thing I find confusing: the default for historic_days_to_retrieve is two days. Or are these two different parameters?
The link in my last post explains the ML forecaster. The documentation states that the default number of days to retrieve is 9, and it clearly says that this method should be provided with as much data as possible and that you need to configure your HA recorder accordingly. It is in the docs.
One thing I find confusing: The default for historic_days_to_retrieve is two days. Or are these two different parameters?
Two different parameters. The one in the configuration pane is for optimization based on past history data. The one defaulting to 9 days is for training a machine learning model for load power forecasting.
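If you have more history available, the number of days used for the fit can be passed at runtime according to the mlforecaster documentation linked above. A sketch; the days_to_retrieve runtime parameter name is assumed from that documentation, not shown in this thread:

```yaml
# Hypothetical: override the 9-day default for the model fit only,
# without touching historic_days_to_retrieve in the main config.
shell_command:
  forecast_model_fit: "curl -i -H \"Content-Type: application/json\" -X POST -d '{\"days_to_retrieve\": 14}' http://localhost:5000/action/forecast-model-fit"
```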
Problem never got solved for me. I gave up.
I see the following error message in the log after clicking "Optimise" in the web interface: