Attention: Patch coverage is 33.33333%, with 2 lines in your changes missing coverage. Please review.
Project coverage is 87.54%. Comparing base (468772d) to head (1a332c3). Report is 27 commits behind head on master.
| Files | Patch % | Lines |
|---|---|---|
| src/emhass/optimization.py | 33.33% | 2 Missing :warning: |
The current Docker has been set up to use the web_server requirements in standalone mode only. Is this right? Never actually thought about it before now.
I think moving the web_server requirements to be used in all modes makes more sense? Or am I thinking in reverse, and the web_server packages are only needed for the addon? @davidusb-geek
If those three packages are required for all modes, could both requirements.txt and requirements_webserver.txt be merged like on EMHASS-add-on?
Yes probably the best solution.
Still testing. Just takes a while with pip.
Unfortunately I don't believe I can finish testing this until tomorrow. Sorry about that.
This is just a test. Will continue this further possibly tomorrow. (if build works, feel free to test and pull). Otherwise test when I'm next available.
Requirement change: numpy<=1.26.0 to numpy<=1.26.2
No don't worry no rush. It seems that armv7 is failing again. A problem with the SciPy version 1.11.3?
At some point in the past I was able to solve a lot of problems for arm arch by using this type of command to install packages:

```dockerfile
RUN pip install --index-url=https://www.piwheels.org/simple --no-cache-dir -r requirements.txt
```
@GeoDerp could you help us understand where the persistent files are stored now? Are they now inside the running Docker container for the add-on? Before, they were saved in the share folder on HA.
> could you help us understand where the persistent files are stored now
The persistent data for an add-on is in the /data folder. So DATA_PATH: '/data/'
https://github.com/davidusb-geek/emhass-add-on/blob/07366be0a4044dff043587b441e3d798e77d245a/emhass/config.yml#L31C2-L31C22
And the options.json is also there automatically. Hopefully that helps.
The rest is being stored in /app/.
Do note that I believe /data is private to the add-on only. The benefit is that uninstalling/updating/reinstalling the add-on should manage the files correctly (would need to check to confirm). The downside is that you may not be able to access these files remotely.
You may wish to set this line back to /share if this isn't your desired result.
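As a runnable sketch of the storage layout described above (the paths and the options.json name follow the add-on convention in config.yml; the sample option key is hypothetical, and everything is simulated under a temp root so it runs outside Home Assistant):

```shell
# Simulate the add-on layout: persistent state under /data (including the
# Supervisor-generated options.json), application code under /app.
# The temp root stands in for the container filesystem.
root=$(mktemp -d)
mkdir -p "$root/data" "$root/app"

# The Supervisor would write the user's add-on options here automatically.
printf '{"hass_url": "empty"}' > "$root/data/options.json"

# EMHASS reads its persistent files relative to DATA_PATH:
DATA_PATH="$root/data/"
options=$(cat "${DATA_PATH}options.json")
echo "$options"
```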
No, this seems nice. At some point we were having issues with data persistence after reboot and add-on update, so we implemented this manual data saving in the share folder. But if your implementation works fine, which seems to be the case so far, that's good for me. At the time it was just my lack of skills with Docker that made me use the manual approach. I like this implementation more. We can still access the files if needed by going inside the container.
@davidusb-geek, this is still in testing. Just presenting progress. One of the issues with numpy & scipy is their requirement on BLAS/OpenBLAS. For some reason (32-bit) armhf doesn't have that package, so we are building it manually for that architecture.
There is an option to disable the use of BLAS for both. However, I'm unsure if EMHASS uses this package, so I'm assuming it does.
I have also bumped the versions of these 2 packages for testing purposes. I assume they could be bumped back down and still work. Will need to test functionality either way (if the build works).
> At some point in the past I was able to solve a lot of problems for arm arch by using this type of command to install packages:
> `RUN pip install --index-url=https://www.piwheels.org/simple --no-cache-dir -r requirements.txt`

This may still be the way to go. When testing I found this method to be slow (if not failing to pull), so I removed it for testing.
There is also an argument about how many users are still using 32-bit versions on their Raspberry Pi. A lot of these errors could be simply fixed by removing armhf support (which I believe is the 32-bit hard-float ABI on Debian).
It's my understanding this will stop support for Pi 1, 2, 3 (32-bit), so I understand if this architecture wants to be kept.
The argument will be about how well EMHASS runs on a Pi Zero, 1, 2, 3, and whether it's worth keeping support for the time being. Happy to work around whatever. (This PR tries to keep armhf.)
Yes, I understand the argument. If there is no simple solution we will have no other choice than to drop support for armhf. But let's try to keep at least armv7 alive ;-)
@davidusb-geek, I realized why I was so confused: the armhf image of home-assistant/armhf-base-debian was set to use packages from armel. I don't know why, but the latest commits added a line to check for armhf and change the package architecture to armhf.
Currently testing this and a version with `--index-url=https://www.piwheels.org/simple` to see if there is any difference.
In the last commit, I found that setting `--index-url=` overrides PyPI, making the non-arm architectures struggle. Adding an if statement to only use https://www.piwheels.org/simple on armhf and armv7 fixes that.
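The conditional index selection can be sketched in plain shell (an illustration of the approach, not the exact line committed in this PR; the `TARGETARCH` values follow the build args used elsewhere in the thread, and the final command is echoed rather than executed so the sketch is safe to run anywhere):

```shell
# piwheels only ships prebuilt wheels for 32-bit ARM, so use it for
# armhf/armv7 and fall back to the default PyPI index everywhere else.
TARGETARCH="${TARGETARCH:-amd64}"

case "$TARGETARCH" in
  armhf|armv7) PIP_ARGS="--index-url=https://www.piwheels.org/simple" ;;
  *)           PIP_ARGS="" ;;
esac

echo "pip3 install $PIP_ARGS --no-cache-dir -r requirements.txt"
```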
It looks like all architectures work now. Both arm builds take only 10 minutes now too (I think that's thanks to the new numpy/scipy versions).
Great! Nice find on that solution.
Just doing final testing now and I'll do the last commit. (I'll do a PR for EMHASS-Add-on also.)
Ok no problem, we will merge when you say it's ready
I have realized that armhf at the moment is really only running armhf packages with armel. I can imagine it would affect performance (if not worse). However, changing the image completely to armhf requires many steps (practically uninstalling and reinstalling all packages in the image). I'm thinking this may be something that could be looked at in later PRs.
Tested on a Raspberry Pi 3 running armhf. The build seems to be fine, but running produces the following error:
```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.11/dist-packages/emhass/web_server.py", line 12, in <module>
    import pandas as pd
  File "/usr/local/lib/python3.11/dist-packages/pandas/__init__.py", line 16, in <module>
    raise ImportError(
ImportError: Unable to import required dependencies:
numpy: Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.
```
Will look at this later. If you have any ideas, let me know. So far the only thing I have tried is moving the work directory temporarily before pip install:

```dockerfile
WORKDIR /usr/lib/python3/dist-packages/
RUN [[ "${TARGETARCH}" == "armhf" || "${TARGETARCH}" == "armv7" ]] && pip3 install --index-url=https://www.piwheels.org/simple --no-cache-dir --break-system-packages -r /app/requirements.txt || pip3 install --no-cache-dir --break-system-packages -r /app/requirements.txt
WORKDIR /app/
```
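If the failure really is working-directory related (a guess based on the ImportError above), the class of bug is easy to demonstrate with a stand-in package: a directory on the current path shadowing an installed module makes Python import the wrong copy, which is exactly why moving WORKDIR away helps. This uses a throwaway `json` shadow rather than numpy so it runs anywhere:

```shell
# A local directory named like an importable package shadows the real one
# when Python is launched from that directory (the cwd leads sys.path).
tmp=$(mktemp -d)
mkdir "$tmp/json"            # shadow the stdlib 'json' package name
: > "$tmp/json/__init__.py"  # empty package, like a bare source tree

# Launched from inside the shadowing directory: the empty copy wins.
shadowed=$(cd "$tmp" && python3 -c "import json; print(hasattr(json, 'loads'))")

# Launched from elsewhere (the WORKDIR /app/ fix): the real module wins.
normal=$(cd / && python3 -c "import json; print(hasattr(json, 'loads'))")

echo "shadowed=$shadowed normal=$normal"
```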
Will also finish up the EMHASS-Add-on Docker changes later. This may already be working; will test to confirm: https://github.com/GeoDerp/emhass-add-on
FYI, still testing on: https://github.com/GeoDerp/emhass
I may want to try and use armhf-base-raspbian for the armhf build. This may, however, require a more complex Dockerfile or a dedicated Dockerfile for armhf. We will see.
Edit: added raspbian; seems to work well so far with native armhf support.
armhf seems to run without any faults. Will do a more thorough test tomorrow.
(Green: tested and succeeded)
(Red: untested / tested and failed)

- armhf
  - Standalone (inc. all lpsolver, with patch for pulp cbc)
  - Add-On
- amd64
  - Standalone
  - Add-On
- aarch64
  - Standalone
  - Add-On
  - inc. weather_forecast_method and lpsolver (separately)
- arm/v7 (did a quicker test)
  - Standalone (inc. weather_forecast_method)
  - Add-On (inc. lpsolver)

Note: sometimes optimizations work even with a broken PV sensor passed.
Note: there were 2 times where `var_load` couldn't be received from HA.
I could not replicate either of these issues easily, so I'm assuming it's on my side. Probably good to check.
- Perfect Optimization
- Advanced ML Tasks
- Publish Optimization Results
- ML forecast model fit
- ML forecast model predict
- ML forecast model tune

(Not MPC.) I may not test arm/v7, we will see. When these are all ticked, feel free to merge this PR. Then, after the EMHASS pip version bump, we can see the build tests of EMHASS-Add-on#216 and merge if successful. This may take a few days.
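For reference, the tested actions above map onto EMHASS's `/action/<name>` REST endpoints; here is a hedged sketch of iterating over them (the endpoint names are my reading of the EMHASS API, and the command is echoed rather than executed since no server is assumed here):

```shell
# Hypothetical smoke loop over the action endpoints corresponding to the
# checklist above (MPC's naive-mpc-optim is deliberately left out).
HOST="http://localhost:5000"
for action in perfect-optim dayahead-optim publish-data \
              forecast-model-fit forecast-model-predict forecast-model-tune; do
  echo "curl -i -H 'Content-Type:application/json' -X POST ${HOST}/action/${action}"
done
```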
I'll see if I can test the other two tomorrow. @davidusb-geek, if you have a good MPC example that could work for testing, send it my way and I'll use it to test MPC also.
Something like this?

```shell
curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93], "prediction_horizon":10, "soc_init":0.5,"soc_final":0.6}' http://localhost:5000/action/naive-mpc-optim
```

Let me know if there is anything else you would like me to test with the above.
> I'll see if I can test the other two tomorrow. @davidusb-geek, if you have a good MPC example that could work for testing, send it my way and I'll use it to test MPC also.
>
> Something like this?
> `curl -i -H 'Content-Type:application/json' -X POST -d '{"pv_power_forecast":[0, 70, 141.22, 246.18, 513.5, 753.27, 1049.89, 1797.93, 1697.3, 3078.93], "prediction_horizon":10, "soc_init":0.5,"soc_final":0.6}' http://192.168.3.159:5000/action/naive-mpc-optim`
This example test is good enough for me for MPC.
Was unable to work on this today. Will try and get it done by the time you usually merge on the weekend.
Yes exactly, I will be able to take a more in depth look at all this this weekend. Thanks for these efforts!
If all goes well, the test will be done tomorrow. Otherwise the day after (Aus time). 👍
@davidusb-geek, I'm pretty happy with these changes. I'll see if I can test the other two by the time you test yourself.
So the color-code table is updated?
I'm looking into virtualization of aarch64 and arm/v7 now.
@davidusb-geek, feel free to start testing this.
@davidusb-geek, will see how the test builds react to the change.