Hi, this is more of a problem with the emhass add-on than with this emhass core module. The logs in the add-on were wrong; I've just fixed that. There is a mismatch between the length of the passed list, your optimization_time_step, and the default forecast length, which is 1 day. In other words, the passed list needs to have a 1-day length.
Does that 1-day length mean 48 entries, one for each half-hour block?
Also, is the forecast list meant to be the next forty-eight 30-minute blocks, or the forty-eight 30-minute blocks starting from midnight? I would assume the latter, which is how my buy/sell prices are available. However, my PV forecast has today's forecast (from midnight) and tomorrow's forecast (from midnight) as attributes, so I may need to do some sorting there.
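For example, this is roughly the kind of sorting I have in mind, sketched in Python (the two input lists stand in for the today/tomorrow attributes of my PV forecast sensor, so the names are placeholders):

from datetime import datetime

def pv_forecast_from_now(today_half_hourly, tomorrow_half_hourly, now=None):
    # Both inputs are midnight-based lists of 48 half-hourly values.
    # Return the next 48 values starting from the current half-hour block.
    now = now or datetime.now()
    current_block = now.hour * 2 + (1 if now.minute >= 30 else 0)
    combined = list(today_half_hourly) + list(tomorrow_half_hourly)
    return combined[current_block:current_block + 48]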
If your optimization_time_step is 30 min you should provide a list with 48 items, if optimization_time_step is 60 min you should provide a list with 24 items, and so on. All the forecasts passed to emhass are assumed to start from now, meaning the moment when you call the optimization routine (and not midnight), up to 24 h in the future. So it is possible that you need to edit your forecast lists using templates.
In the case of pv_power_forecast, the internal forecaster from emhass is pretty accurate, and the conversion from irradiance to power in watts uses accurate PVLib models.
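To be explicit about the expected length (just a sketch, not how emhass computes it internally): the number of items is simply the forecast horizon divided by the optimization time step.

def expected_list_length(optimization_time_step_minutes, horizon_hours=24):
    # Number of items emhass expects in each forecast list passed to the API.
    return horizon_hours * 60 // optimization_time_step_minutes

print(expected_list_length(30))  # 48 items for a 30 min time step
print(expected_list_length(60))  # 24 items for a 60 min time step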
I am passing 48 items, but it still reports an error:
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"prod_price_forecast":[0.33, 0.28, 0.23, 0.16, 0.15, 0.13, 0.13, 0.13, 0.13, 0.13, 0.15, 0.14, 0.15, 0.14, 0.13, 0.13, 0.13, 0.13, 0.1, 0.11, 0.13, 0.15, 0.15, 0.13, 0.21, 0.3, 0.37, 0.37, 0.28, 0.22, 0.34, 0.29, 0.15, 0.15, 0.15, 0.15, 0.16, 0.22, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Mon, 25 Apr 2022 07:41:13 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45
EMHASS >> Action dayahead-optim executed...
mark@odroid:~$
From the logs:
172.30.32.1 - - [25/Apr/2022 17:40:17] "POST /action/publish-data HTTP/1.1" 201 -
[2022-04-25 17:40:58,680] ERROR in app_server: ERROR: The passed data is either not a list or the length is not correct, length should be 48
[2022-04-25 17:40:58,683] ERROR in app_server: Passed type is <class 'list'> and length is 48
[2022-04-25 17:41:12,805] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 17:41:12,838] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
172.30.32.1 - - [25/Apr/2022 17:41:13] "POST /action/dayahead-optim HTTP/1.1" 201 -
You need to update to >> v0.1.32
Still fails to load forecast with 0.1.32 :-(
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
* Serving Flask app 'app_server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.30.33.4:5000 (Press CTRL+C to quit)
[2022-04-25 19:28:00,255] ERROR in app_server: ERROR: The passed data is either not a list or the length is not correct, length should be 48
[2022-04-25 19:28:00,258] ERROR in app_server: Passed type is <class 'list'> and length is 48
[2022-04-25 19:28:15,617] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 19:28:15,651] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
172.30.32.1 - - [25/Apr/2022 19:28:16] "POST /action/dayahead-optim HTTP/1.1" 201 -
192.168.86.50 - - [25/Apr/2022 19:28:43] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [25/Apr/2022 19:28:43] "GET /static/style.css HTTP/1.1" 200 -
172.30.32.1 - - [25/Apr/2022 19:30:16] "POST /action/publish-data HTTP/1.1" 201 -
Again, a very silly error! Fixed >> v0.1.33. The images will take at least 15 minutes to build.
OK, we have passed the initial check, but now get a ValueError: Shape of passed values...
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
* Serving Flask app 'app_server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.30.33.4:5000 (Press CTRL+C to quit)
192.168.86.50 - - [25/Apr/2022 21:28:30] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [25/Apr/2022 21:28:30] "GET /static/style.css HTTP/1.1" 304 -
[2022-04-25 21:28:53,707] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 21:28:53,739] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
192.168.86.50 - - [25/Apr/2022 21:29:16] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [25/Apr/2022 21:29:17] "GET /static/style.css HTTP/1.1" 304 -
192.168.86.50 - - [25/Apr/2022 21:29:48] "POST /action/dayahead-optim HTTP/1.1" 201 -
[2022-04-25 21:30:00,408] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-25 21:30:00,435] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
192.168.86.50 - - [25/Apr/2022 21:30:02] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [25/Apr/2022 21:30:02] "GET /static/style.css HTTP/1.1" 304 -
172.30.32.1 - - [25/Apr/2022 21:30:16] "POST /action/publish-data HTTP/1.1" 201 -
192.168.86.50 - - [25/Apr/2022 21:30:56] "POST /action/dayahead-optim HTTP/1.1" 201 -
[2022-04-25 21:31:12,223] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/usr/src/app_server.py", line 196, in action_call
opt_res = dayahead_forecast_optim(input_data_dict, app.logger)
File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 125, in dayahead_forecast_optim
df_input_data_dayahead = input_data_dict['fcst'].get_prod_price_forecast(
File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 539, in get_prod_price_forecast
forecast_out = self.get_forecast_out_from_csv(df_final,
File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 356, in get_forecast_out_from_csv
forecast_out = pd.DataFrame(
File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 694, in __init__
mgr = ndarray_to_mgr(
File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 351, in ndarray_to_mgr
_check_values_indices_shape_match(values, index, columns)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 422, in _check_values_indices_shape_match
raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (5, 1), indices imply (1, 1)
172.30.32.1 - - [25/Apr/2022 21:31:12] "POST /action/dayahead-optim HTTP/1.1" 500 -
When POSTing this data set:
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"prod_price_forecast":[0.11, 0.1, 0.1, 0.13, 0.13, 0.13, 0.13, 0.13, 0.13, 0.1, 0.1, 0.1, 0.1, 0.11, 0.13, 0.15, 0.14, 0.16, 0.3, 0.37, 0.37, 0.37, 0.3, 0.21, 0.15, 0.15, 0.37, 0.37, 0.21, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.38, 0.34, 0.3, 0.21, 0.29, 0.29, 0.21, 0.3]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 500 INTERNAL SERVER ERROR
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Mon, 25 Apr 2022 11:31:12 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 290
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
mark@odroid:~$
What are your configuration params, from the configuration pane?
And what are you using as forecasts for pv_power_forecast and load_power_forecast?
The forecast DataFrames for prod_price_forecast and load_cost_forecast will use the same timestamp indexes as pv_power_forecast and load_power_forecast, so they should all match.
If you have multiple forecasts to pass as lists, then pass them all together in the same dictionary when using the curl command.
For example:
curl -i -H "Content-Type: application/json" -X POST -d '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],"load_cost_forecast":[0.25, 0.26, 0.39, 0.42, 0.46, 0.42, 0.35, 0.31, 0.27, 0.24, 0.24, 0.23, 0.24, 0.23, 0.26, 0.26, 0.26, 0.25, 0.24, 0.24, 0.24, 0.24, 0.22, 0.21, 0.25, 0.26, 0.26, 0.25, 0.34, 0.5, 0.51, 0.51, 0.5, 0.42, 0.51, 0.5, 0.28, 0.27, 0.26, 0.26, 0.27, 0.34, 0.5, 0.51, 0.51, 0.51, 0.51, 0.51],"prod_price_forecast":[0.11, 0.1, 0.1, 0.13, 0.13, 0.13, 0.13, 0.13, 0.13, 0.1, 0.1, 0.1, 0.1, 0.11, 0.13, 0.15, 0.14, 0.16, 0.3, 0.37, 0.37, 0.37, 0.3, 0.21, 0.15, 0.15, 0.37, 0.37, 0.21, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.38, 0.34, 0.3, 0.21, 0.29, 0.29, 0.21, 0.3]}' http://localhost:5000/action/dayahead-optim
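If it is easier to script than curl, the same request can be sketched with Python's requests library (the lists are shortened here for readability, but each one must still contain the full 48 values):

import requests

payload = {
    "pv_power_forecast":   [0, 0, 0, 70, 141.22, 246.18],         # ...48 items in total
    "load_cost_forecast":  [0.25, 0.26, 0.39, 0.42, 0.46, 0.42],  # ...48 items in total
    "prod_price_forecast": [0.11, 0.1, 0.1, 0.13, 0.13, 0.13],    # ...48 items in total
}
resp = requests.post("http://localhost:5000/action/dayahead-optim", json=payload)
print(resp.status_code, resp.text)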
Not much changed from the defaults. One deferrable load, which is my 5000 W heat pump:
web_ui_url: 0.0.0.0
hass_url: empty
long_lived_token: empty
costfun: profit
optimization_time_step: 30
historic_days_to_retrieve: 2
sensor_power_photovoltaics: sensor.apf_generation_entity
sensor_power_load_no_var_loads: sensor.power_load_no_var_loads
number_of_deferrable_loads: 1
nominal_power_of_deferrable_loads: '5000'
operating_hours_of_each_deferrable_load: '5'
peak_hours_periods_start_hours: '16:00'
peak_hours_periods_end_hours: '08:00'
load_peak_hours_cost: 0.3
load_offpeak_hours_cost: 0.19
photovoltaic_production_sell_price: 0.065
maximum_power_from_grid: 30000
pv_module_model: CSUN_Eurasia_Energy_Systems_Industry_and_Trade_CSUN295_60M
pv_inverter_model: Fronius_International_GmbH__Fronius_Primo_5_0_1_208_240__240V_
surface_tilt: 30
surface_azimuth: 205
modules_per_string: 16
strings_per_inverter: 4
set_use_battery: false
battery_discharge_power_max: 1000
battery_charge_power_max: 1000
battery_discharge_efficiency: 0.95
battery_charge_efficiency: 0.95
battery_nominal_energy_capacity: 5000
battery_minimum_state_of_charge: 0.3
battery_maximum_state_of_charge: 0.9
battery_target_state_of_charge: 0.6
Ok, I just modified my previous comment. Try the curl command with all the available forecasts at once.
Unsuccessful unfortunately.
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"pv_power_forecast":[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93, 1164.33, 1046.68, 1559.1, 2091.26, 1556.76, 1166.73, 1516.63, 1391.13, 1720.13, 820.75, 804.41, 251.63, 79.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],"load_cost_forecast":[0.25, 0.26, 0.39, 0.42, 0.46, 0.42, 0.35, 0.31, 0.27, 0.24, 0.24, 0.23, 0.24, 0.23, 0.26, 0.26, 0.26, 0.25, 0.24, 0.24, 0.24, 0.24, 0.22, 0.21, 0.25, 0.26, 0.26, 0.25, 0.34, 0.5, 0.51, 0.51, 0.5, 0.42, 0.51, 0.5, 0.28, 0.27, 0.26, 0.26, 0.27, 0.34, 0.5, 0.51, 0.51, 0.51, 0.51, 0.51],"prod_price_forecast":[0.11, 0.1, 0.1, 0.13, 0.13, 0.13, 0.13, 0.13, 0.13, 0.1, 0.1, 0.1, 0.1, 0.11, 0.13, 0.15, 0.14, 0.16, 0.3, 0.37, 0.37, 0.37, 0.3, 0.21, 0.15, 0.15, 0.37, 0.37, 0.21, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.37, 0.38, 0.34, 0.3, 0.21, 0.29, 0.29, 0.21, 0.3]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 500 INTERNAL SERVER ERROR
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Mon, 25 Apr 2022 20:48:55 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 290
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
mark@odroid:~$
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
* Serving Flask app 'app_server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.30.33.4:5000 (Press CTRL+C to quit)
192.168.86.50 - - [26/Apr/2022 06:46:05] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [26/Apr/2022 06:46:06] "GET /static/style.css HTTP/1.1" 200 -
[2022-04-26 06:46:34,048] WARNING in optimization: Failed LP solve with PULP_CBC_CMD solver, falling back to default Pulp
[2022-04-26 06:46:34,085] WARNING in optimization: Failed LP solve with default Pulp solver, falling back to GLPK_CMD
192.168.86.50 - - [26/Apr/2022 06:46:34] "POST /action/dayahead-optim HTTP/1.1" 201 -
192.168.86.50 - - [26/Apr/2022 06:47:02] "GET / HTTP/1.1" 200 -
192.168.86.50 - - [26/Apr/2022 06:47:02] "GET /static/style.css HTTP/1.1" 304 -
192.168.86.50 - - [26/Apr/2022 06:47:52] "POST /action/publish-data HTTP/1.1" 201 -
[2022-04-26 06:48:55,464] ERROR in app: Exception on /action/dayahead-optim [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/usr/src/app_server.py", line 196, in action_call
opt_res = dayahead_forecast_optim(input_data_dict, app.logger)
File "/usr/local/lib/python3.9/dist-packages/emhass/command_line.py", line 122, in dayahead_forecast_optim
df_input_data_dayahead = input_data_dict['fcst'].get_load_cost_forecast(
File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 492, in get_load_cost_forecast
forecast_out = self.get_forecast_out_from_csv(df_final,
File "/usr/local/lib/python3.9/dist-packages/emhass/forecast.py", line 356, in get_forecast_out_from_csv
forecast_out = pd.DataFrame(
File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 694, in __init__
mgr = ndarray_to_mgr(
File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 351, in ndarray_to_mgr
_check_values_indices_shape_match(values, index, columns)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 422, in _check_values_indices_shape_match
raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (34, 1), indices imply (1, 1)
172.30.32.1 - - [26/Apr/2022 06:48:55] "POST /action/dayahead-optim HTTP/1.1" 500 -
Ok, looking into where the problem is. I'm able to reproduce the error.
Ok, found the error and fixed it. The error was indeed in this emhass core module: wrong handling of the indexes of the list of values. Building the add-on images right now >> v0.1.36
Thanks. It looks like upgrading to 0.1.36 needs the module 'six' to be installed:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Traceback (most recent call last):
File "/usr/src/app_server.py", line 11, in <module>
import pandas as pd
File "/usr/local/lib/python3.9/dist-packages/pandas/__init__.py", line 22, in <module>
from pandas.compat import is_numpy_dev as _is_numpy_dev
File "/usr/local/lib/python3.9/dist-packages/pandas/compat/__init__.py", line 15, in <module>
from pandas.compat.numpy import (
File "/usr/local/lib/python3.9/dist-packages/pandas/compat/numpy/__init__.py", line 4, in <module>
from pandas.util.version import Version
File "/usr/local/lib/python3.9/dist-packages/pandas/util/__init__.py", line 1, in <module>
from pandas.util._decorators import ( # noqa:F401
File "/usr/local/lib/python3.9/dist-packages/pandas/util/_decorators.py", line 14, in <module>
from pandas._libs.properties import cache_readonly # noqa:F401
File "/usr/local/lib/python3.9/dist-packages/pandas/_libs/__init__.py", line 13, in <module>
from pandas._libs.interval import Interval
File "pandas/_libs/interval.pyx", line 1, in init pandas._libs.interval
File "pandas/_libs/hashtable.pyx", line 1, in init pandas._libs.hashtable
File "pandas/_libs/missing.pyx", line 1, in init pandas._libs.missing
File "/usr/local/lib/python3.9/dist-packages/pandas/_libs/tslibs/__init__.py", line 30, in <module>
from pandas._libs.tslibs.conversion import (
File "pandas/_libs/tslibs/conversion.pyx", line 1, in init pandas._libs.tslibs.conversion
File "pandas/_libs/tslibs/timezones.pyx", line 14, in init pandas._libs.tslibs.timezones
File "/usr/local/lib/python3.9/dist-packages/dateutil/tz/__init__.py", line 2, in <module>
from .tz import *
File "/usr/local/lib/python3.9/dist-packages/dateutil/tz/tz.py", line 19, in <module>
import six
ModuleNotFoundError: No module named 'six'
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
Very strange. I'll look at what's going on, but I don't completely control the building process as I'm using the official hass builder.
Ok, I just added the missing module in a new update. I don't understand how that could have been missing, as there was no problem with pandas in previous versions. Anyway, adding the missing module solved the issue on my dev env. The new images are being built.
Thanks, I can now load the pricing into the add-on, but the timings are a little off.
If you have a look at the input, I have a major price spike for one hour at 17:30 in the second window: prod price $13.88 and load cost $15.36/kWh (yes, those are not typos), but in the web UI they are scheduled to occur after midnight (00:30).
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"prod_price_forecast":[0.33, 13.88, 1.08, 0.34, 0.33, 0.32, 0.29, 0.22, 0.28, 0.2, 0.3, 0.21, 0.22, 0.3, 0.32, 0.32, 0.18, 0.21, 0.16, 0.15, 0.15, 0.14, 0.15, 0.16, 0.18, 0.18, 0.23, 0.33, 0.32, 0.29, 0.3, 0.2, 0.29, 0.2, 0.19, 0.13, 0.13, 0.12, 0.11, 0.12, 0.12, 0.14, 0.18, 0.21, 0.22, 0.32, 0.34, 0.58]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Wed, 27 Apr 2022 06:48:16 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45
EMHASS >> Action dayahead-optim executed...
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast":[0.46, 15.36, 1.29, 0.47, 0.45, 0.45, 0.42, 0.34, 0.4, 0.32, 0.42, 0.32, 0.34, 0.42, 0.45, 0.45, 0.29, 0.33, 0.27, 0.27, 0.26, 0.25, 0.26, 0.27, 0.3, 0.29, 0.34, 0.46, 0.45, 0.42, 0.43, 0.32, 0.42, 0.31, 0.3, 0.24, 0.24, 0.23, 0.22, 0.22, 0.23, 0.25, 0.29, 0.33, 0.34, 0.44, 0.47, 0.73]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Wed, 27 Apr 2022 06:48:50 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45
EMHASS >> Action dayahead-optim executed...
mark@odroid:~$
Hi, so the timings are off by exactly how much? Remember to send all the curl data at once.
Looks like it has shifted the data by 7 hrs and wrapped the later data points into the earlier time slots.
I will adjust my scripts to inject them all at the same time.
Ok, and does the first data point in the list that you send correspond to the time slot in which you are sending the optimization curl command?
Looks like it is just loading the list in from midnight: it loads the first list item into 00:00, the second item into 00:30, ..., and the forty-eighth item into 23:30.
mark@odroid:~$ curl -i -H "Content-Type: application/json" -X POST -d '{"load_cost_forecast":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]}' http://localhost:5000/action/dayahead-optim
HTTP/1.1 201 CREATED
Server: Werkzeug/2.1.1 Python/3.9.2
Date: Wed, 27 Apr 2022 09:50:10 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 45
EMHASS >> Action dayahead-optim executed...
Have a look at the unit_load_cost column:
The data points I am sending are all forecasts for the next twenty-four hours, so the first element is the forecast for the next 30-minute block.
E.g. the time now is 20:22, and the forecast of $0.40 is for the next time period, which starts at 20:30:
{{(state_attr('sensor.amber_general_forecast', 'forecasts') |map(attribute='per_kwh')|list)}}
[0.4, 0.44, 0.32, 0.45, 0.44, 0.45, 0.45, 0.45, 0.44, 0.33, 0.32, 0.32, 0.27, 0.27, 0.26, 0.28, 0.29, 0.33, 0.38, 0.45, 0.46, 0.45, 0.42, 0.39, 0.32, 0.35, 0.33, 0.33, 0.31, 0.28, 0.26, 0.24, 0.24, 0.28, 0.32, 0.43, 0.45, 0.46, 0.47, 0.51, 0.73, 0.74, 17.56, 1.36, 0.74, 0.47, 0.51, 0.47]
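For reference, this is roughly how I pull those values out of Home Assistant and push them to the add-on so the first element stays aligned with the next half-hour block (a sketch only; the Home Assistant URL and token are placeholders for my setup):

import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder for my Home Assistant instance
HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"          # placeholder long-lived access token
EMHASS_URL = "http://localhost:5000/action/dayahead-optim"

# The Amber forecast sensor exposes the upcoming half-hourly entries in its
# 'forecasts' attribute, each with a 'per_kwh' price.
state = requests.get(
    f"{HA_URL}/api/states/sensor.amber_general_forecast",
    headers={"Authorization": f"Bearer {HA_TOKEN}"},
).json()
load_cost = [f["per_kwh"] for f in state["attributes"]["forecasts"]]

# First element corresponds to the next 30-minute block; send exactly 48 items.
requests.post(EMHASS_URL, json={"load_cost_forecast": load_cost[:48]})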
Ok thank you, I will take a look and fix this.
I feel like we are getting close. I can now see my injected variable buy/sell costs as well as my PV forecasts in the system, albeit offset to midnight. I guess if I run the injection once a day at 23:35 it will be correct, but of course the forecasts change every thirty minutes, so it will get stale.
Hi, yes we're getting close! Just fixed this issue >> v0.1.38. The images are being built.
Success with 0.1.38, maybe with a 30-minute offset from what I was expecting. Thanks for getting us to this point!
I have now got relevant forecasts into the system and it is coming up with an optimal solution; I'm currently running it optimised for profit.
I just need to work through the results to figure out why it isn't scheduling my deferrable loads during my solar production, as I have excess solar and my sell price for excess solar is less than the buy price from the grid during the windows where the deferrable loads have been scheduled.
Again, thank you for testing.
You can check the cost function evaluation of the optimization results using the results table. But it may simply be that, summed up, your profit is just better when selling all that extra PV to the grid and scheduling those deferrable loads where the load cost is lower. I can see that your loads have been scheduled where your load cost is < 0.26 $/kWh, so that makes sense to me.
You can "force" your deferrable loads to match your PV production by simply using self-consumption as your cost function. When you do this you will be able to check the summed-up profit cost from the results table and compare it to your current results. It may be counter-intuitive, but it's the optimization result.
When trying to pass forecast data, all looks good from the command line. However, it doesn't get updated in the model, and the logs complain that the length needs to be 48 but then confirm that the length is 48.